CSC/ECE 506 Spring 2012/10a vm
Almost all processors today use prefetching to speed up execution. Prefetching shortens the time a processor spends in a wait state by predicting which cache block will be accessed next, so that on a cache miss the old block in the cache can be replaced with the prefetched block immediately, reducing the processor's idle time. Prefetching performs best when it follows program order, but it does not have to: a processor running a complex [http://en.wikipedia.org/wiki/Branch_prediction branch prediction] algorithm must anticipate the outcome of a branch and fetch the corresponding set of instructions for execution. Things become more complex on graphics processing units (GPUs): there, prefetching can take advantage of [http://en.wikipedia.org/wiki/Coherence_(physics)#Spatial_coherence spatial coherence], and the prefetched data are not a set of instructions but texture elements that can be mapped to a polygon <ref>[http://en.wikipedia.org/wiki/Instruction_prefetch]</ref>.
== References ==
<references />

Revision as of 00:31, 3 April 2012

Landing page for "Prefetching and consistency models."
