CSC/ECE 506 Spring 2012/4b rs
The limits to speedup
Introduction
In parallel computing, speedup refers to how much faster a parallel algorithm runs than the corresponding sequential algorithm. According to Amdahl's law, the speedup of a program using multiple processors is limited by the time needed for the sequential fraction of the program. Amdahl's law, however, assumes a fixed problem size: it asks how quickly a given problem can be solved, rather than how large a problem (e.g., how accurate an approximation) can be solved in a fixed, "reasonable" amount of time. To address this shortcoming, John L. Gustafson and his colleague Edwin H. Barsis described Gustafson's law, which provides a counterpoint to Amdahl's law by allowing the problem size to grow with the number of processors.
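For reference, the two laws can be stated compactly. The notation here is chosen for illustration: s denotes the serial (non-parallelizable) fraction of the work and N the number of processors; the later sections of this article derive Gustafson's law in detail.

\[
\text{Speedup}_{\text{Amdahl}}(N) \;=\; \frac{1}{\,s + \dfrac{1-s}{N}\,} \;\le\; \frac{1}{s},
\qquad
\text{Speedup}_{\text{Gustafson}}(N) \;=\; s + (1-s)\,N \;=\; N - s\,(N-1).
\]

Amdahl's bound of 1/s applies to a fixed problem size, so even with unlimited processors the speedup cannot exceed the reciprocal of the serial fraction. Gustafson's formula instead assumes the parallel portion of the work grows with N, so the achievable (scaled) speedup grows nearly linearly with the number of processors.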