Maxing out Multicore

From smartphones to data centers, multicore processors are becoming the norm. The extra processing power is good news for developers, but there’s a catch: The real-world performance gains are often held back by old ways of thinking about coding, a caveat summed up in the oft-overlooked Amdahl’s Law.

More than two years ago, University of Wisconsin Professor Mark Hill gave a presentation about how Amdahl’s Law affects multicore performance. He recently spoke with Intelligence in Software about why parallelism is key to unlocking multicore’s benefits and why the computing world could use a new Moore’s Law.

Q: In your presentation, you mentioned a conversation you had in a bar with IBM researcher Thomas Puzak. He said that everybody knows Amdahl’s Law but quickly forgets it. Why?

Mark Hill: You learn the math of this, but often when it comes to real life, you forget how harsh Amdahl’s Law really is. If you’re 99 percent parallel and 1 percent serial, and you have 256 cores, how much faster do you think you can go? You’d think you get a speed-up of maybe 250 out of 256. (A speed-up of 250 means that one is computing at 250 times the rate of one core.)

But the answer is a speed-up of 72. That 1 percent has already cost you that much. That’s the kind of thing I mean. People’s intuition often is more optimistic than if they did a calculation with Amdahl’s Law.
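For readers who want to check that figure, here is a minimal sketch of the calculation in Python (the helper name amdahl is ours for illustration, not from Hill’s talk):

```python
def amdahl(parallel_fraction: float, cores: int) -> float:
    """Amdahl's Law: overall speedup when only part of a program parallelizes."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

# Hill's example: 99 percent parallel, 1 percent serial, 256 cores.
print(round(amdahl(0.99, 256), 1))  # 72.1 -- not the ~250 intuition suggests
```

The serial 1 percent contributes a fixed 0.01 to the denominator, which dominates once the parallel term shrinks below it.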

Q: Hence your point about why there’s a growing need for dramatic increases in parallelism.

M.H.: Correct. It also ties in with the fact that I don’t think you’re going to take old software and get dramatic parallelism, because it’s very hard to get that sequential component down.

Let’s say you get the sequential component down to 35 percent. Then your speed-up is limited to three — at most three times faster than a single core, no matter how many cores you add. Nice, but hardly what you want. So in my opinion, dramatic gains in parallelism are going to have to happen due to new software that’s written for this purpose.
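That cap follows from the limiting case of the same formula: with unlimited cores, speedup can never exceed one over the serial fraction. A quick illustrative check:

```python
# Limiting case of Amdahl's Law: with a 35 percent serial fraction,
# even infinitely many cores cap the speedup at 1 / serial_fraction.
serial_fraction = 0.35
print(round(1.0 / serial_fraction, 2))  # 2.86 -- "at most three times faster"
```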

Q: For developers, what are the challenges to writing parallel-centric software, for lack of a better term? Is it mainly a change in mindset?

M.H.: It’s a pretty huge hurdle. There have been people writing parallel software in niche domains such as supercomputers, but most developers don’t have experience with it. If you think of software as a numerical recipe — you do this, you do that — parallel computing is like a bunch of numerical recipes operating at the same time. That can be conceptually a lot more difficult. It can be easier if you have a large dataset and say, “Let’s do approximately the same thing on each element of this large dataset.” That’s not so mind-blowing.
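That element-wise pattern is the gentle on-ramp Hill describes. As a hedged sketch (the dataset and the score function are invented for illustration), a data-parallel map in Python might look like this:

```python
from multiprocessing import Pool

def score(record: int) -> int:
    """Stand-in for the per-element work; any pure function of one record fits."""
    return record * record

if __name__ == "__main__":
    dataset = list(range(1_000_000))
    with Pool() as pool:                    # defaults to one worker per core
        results = pool.map(score, dataset)  # same "recipe" run on every element
    print(results[:5])  # [0, 1, 4, 9, 16]
```

Because each element is handled independently, the recipes never have to coordinate, which is exactly why this case is, in Hill’s words, not so mind-blowing.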

Part of the problem is that the literature can be a little biased, because you can more easily publish results that show things working fantastically than working poorly. If you read these papers, you might think things are working pretty well, but people select the things they want to publish as opposed to the problems that need to be solved.

Q: You’ve talked about the need for a new Moore’s Law, where parallelism doubles every two years.

M.H.: It used to be that you could design a piece of software, and if it ran like a pig, you could say, “Well, it’s not a problem because processors are going to get twice as fast in two years.” Going forward, that’s going to be true only if you get parallelism that keeps increasing. That’s not going to happen by luck. You’re going to have to plan for it.

Q: Maybe there’s an analogy with oil: When gas prices skyrocket, as they did in the 1970s and again today, automakers start looking for ways to wring every mile they can out of a gallon. In the computing world, enterprises want data centers that aren’t electricity hogs, and smartphones that can last an entire workday before they need a charge.

M.H.: Electricity is increasingly becoming the limiting factor in machines. In this new era, there’s going to be a lot more pressure for code to be more efficient. People say, “That doesn’t matter anymore because computers are so fast.” Yeah, but if my code is twice as efficient as yours, that’s going to matter, because a battery that lasts twice as long is a very good thing.

To download the slides from Hill’s presentation, visit CS.Wisc.edu.


by Tim Kridel