CSC/ECE 506 Spring 2012/10a vm

Landing page for "Prefetching and consistency models."

Prefetching

Almost all processors today use prefetching as a means to speed up execution. Prefetching is primarily used to shorten the time a processor spends waiting on memory: by predicting which cache block will be accessed next and fetching it ahead of time, the old block in the cache can already have been replaced with the prefetched block by the time the access occurs, so what would have been a cache miss is serviced immediately and the processor's idle time is reduced. Prefetching performs best when it follows program order, but it does not have to: a processor running a complex branch prediction algorithm must anticipate the outcome of the branch and fetch the right set of instructions for execution. Things get more involved on graphics processing units (GPUs), where prefetching takes advantage of spatial coherence and the prefetched data are not instructions but texture elements that can be mapped onto a polygon.<ref>Instruction prefetch wiki article.</ref>
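To make the cache-block prediction described above concrete, here is a minimal sketch of software prefetching in C, assuming a GCC or Clang toolchain that provides the __builtin_prefetch intrinsic. The function name sum_with_prefetch and the AHEAD distance are illustrative choices, not anything prescribed by this article. While the loop processes element i, it asks the hardware to begin loading the cache line that element i + AHEAD will need, so that the later access is less likely to stall on a miss.

<syntaxhighlight lang="c">
#include <stddef.h>

/* Illustrative prefetch distance, in array elements (an assumption;
 * the right value depends on memory latency and work per iteration). */
#define AHEAD 16

/* Sum an array while prefetching the element AHEAD iterations ahead,
 * so its cache line is (ideally) resident by the time we read it. */
long sum_with_prefetch(const long *a, size_t n) {
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + AHEAD < n) {
            /* Arguments: address, rw = 0 (read), locality = 3
             * (keep the line in all cache levels). */
            __builtin_prefetch(&a[i + AHEAD], 0, 3);
        }
        sum += a[i];
    }
    return sum;
}
</syntaxhighlight>

The prefetch distance has to be tuned: too small and the data still arrives late, too large and the prefetched line may be evicted before it is used.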

References

<references />