CSC 456 Spring 2012/ch4b

From PG_Wiki


Gustafson's Law

In 1985, IBM scientist Alan Karp issued a challenge to anyone who could produce a speedup of over 200 times.[1] "Karp's Challenge", as it became known, highlighted the limitations of Amdahl's Law. Prevailing speedups at the time were less than tenfold,[2] and were achieved on applications with little real-world value. C. Gordon Bell raised the stakes, offering a $1000 award for the same challenge, to be given annually, but only if the winning speedup was at least twice that of the previous award. He initially expected the first winner to achieve a speedup close to ten, and that it would be difficult to advance beyond that.

John Gustafson won the 1988 Gordon Bell Prize by demonstrating a 1000x speedup on a parallel program.[3] He noticed a limitation in Amdahl's Law: it assumes a constant serial fraction of the problem, regardless of problem size. Gustafson realized that when the problem size is scaled up in proportion to the number of processors, the non-parallelizable fraction of the work decreases (i.e., big machines run big problems, bigger problems mean a smaller fraction of serial code, and a smaller serial fraction leaves more room for processors to parallelize). Gustafson referred to this as scaled speedup, and it provided the basis of what became known as "Gustafson's Law".

Derivation from Amdahl's Law

Amdahl's law assumes that the problem size is constant. A different expression, Gustafson's law, can be derived if the total time used to execute the workload on the parallel machine is kept constant instead; the derivation is given in [4].
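The derivation in [4] can be summarized as follows (notation assumed here, not taken from this page: s and p are the serial and parallel time fractions measured on the parallel machine, with s + p = 1, and N is the number of processors):

```latex
% Fix the execution time on the N-processor machine at one unit,
% split into a serial fraction s and a parallel fraction p:
T_{\text{parallel}} = s + p = 1

% Running the same scaled workload on a single processor would
% serialize the parallel part, multiplying it by N:
T_{\text{serial}} = s + N p

% Scaled speedup is the ratio of the two, giving Gustafson's law:
S(N) = \frac{s + N p}{s + p} = s + N(1 - s) = N - s\,(N - 1)
```

Note that as N grows, S(N) grows linearly with slope (1 - s), rather than saturating at 1/s as under Amdahl's fixed-size assumption.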

Superlinear Speedup

If a problem were 100% parallelizable, then under ideal circumstances one would expect the speedup for a 4-processor system running the same problem to be 4. However, there are cases where such a system might achieve a speedup of, say, 4.3, or 5. This seems counterintuitive, and is a controversial topic known as superlinear speedup. The controversy stems from differing interpretations of what constitutes speedup, making this an ideological disagreement.[5]

Superlinear speedup can most easily be attained by taking advantage of the combined cache size of all the processors. If the combined cache size exceeds the problem's total working set, the entire problem can reside in cache and execute much more quickly, even though the same amount of work is done.[6]
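A small back-of-the-envelope sketch of the cache effect; the working-set and cache sizes below are illustrative assumptions, not measurements from any real machine:

```python
# Illustrative sketch: when the combined cache of N processors covers the
# working set, each processor's share of the data fits in its own cache.
# All sizes are hypothetical example values, in megabytes.

def share_fits_in_cache(working_set_mb, cache_per_cpu_mb, n_cpus):
    """True if an even partition of the working set fits in one CPU's cache."""
    return working_set_mb / n_cpus <= cache_per_cpu_mb

working_set = 8.0    # assumed total working set: 8 MB
cache_per_cpu = 2.5  # assumed per-processor cache: 2.5 MB

# One processor: the 8 MB working set overflows a 2.5 MB cache,
# so the serial run pays main-memory latency.
print(share_fits_in_cache(working_set, cache_per_cpu, 1))  # False

# Four processors: each 2 MB share fits in cache, so the whole problem
# runs out of cache, which can push the speedup above 4.
print(share_fits_in_cache(working_set, cache_per_cpu, 4))  # True
```

The speedup beyond 4 comes from the memory hierarchy, not from the processors doing less arithmetic.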

Another explanation for superlinear speedup is that the parallel execution of the problem does less total work than a uniprocessor execution. This can be achieved by clever use of algorithms that reduce the problem size, resulting in less total work.
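A minimal sketch of this effect, using a hypothetical search workload (the data and target position below are assumptions chosen to illustrate the point): a plain left-to-right serial scan is compared with two workers scanning one half each in lock step. When the target sits near the front of the second half, the two-worker version stops after far fewer element inspections in total, i.e. it does less total work than the serial scan.

```python
def serial_search(data, target):
    """Return the number of elements a plain linear scan inspects."""
    for i, x in enumerate(data):
        if x == target:
            return i + 1
    return len(data)

def two_worker_search(data, target):
    """Simulate two workers scanning one half each in lock step; count
    the total inspections by both workers until one finds the target."""
    mid = len(data) // 2
    halves = [data[:mid], data[mid:]]
    inspected = 0
    for step in range(max(len(h) for h in halves)):
        for half in halves:
            if step < len(half):
                inspected += 1
                if half[step] == target:
                    return inspected
    return inspected

data = list(range(100))
target = 52  # near the front of the second half

print(serial_search(data, target))      # 53 inspections
print(two_worker_search(data, target))  # 6 inspections
```

Here the second worker reaches the target almost immediately, so total work drops from 53 inspections to 6; dividing the serial time by the parallel time can then exceed the processor count.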

Lack of a Serial Equivalent

However, it is not possible to serialize the parallel algorithm used to achieve superlinear speedup and thereby obtain a better serial algorithm. While a single processor could in principle have a cache large enough to hold a problem's working set, the cache-based route to superlinear speedup relies on parallel execution, which a single processor cannot perform. Moreover, the monetary cost of such a cache would make this implementation practically infeasible.

Additionally, a serial algorithm could be constructed that reduces the total problem size, but it would still be much slower than its parallel counterpart. For example, if that serial algorithm were parallelized, both versions would do the same (reduced) amount of work, but the parallel version would still finish much faster, achieving the superlinear speedup.

