CSC 456 Spring 2012/10a AJ

Prefetching and Consistency Models

Intro

While memory consistency models ensure that instructions are executed in the correct order, they can also hinder efficiency. Because a consistency model dictates the order of execution, prefetching helps delayed operations complete more quickly once their turn comes by bringing the necessary data into the cache before it is needed.

One solution explored in the 1990s was prefetching, a hardware optimization technique in which the processor automatically prefetches ownership for any write operations that are delayed due to the program-order requirement (for example, by issuing prefetch-exclusive requests for writes delayed in the write buffer), thus partially overlapping the service of the delayed writes with the operations preceding them in program order. The technique is only applicable to cache-based systems that use an invalidation-based protocol, and it is well suited to statically scheduled processors.
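The hardware mechanism described above is transparent to software, but the same idea of requesting write ownership of a cache line ahead of the store can be sketched with a software analogue. The fragment below, a minimal sketch assuming a GCC/Clang toolchain, uses the compiler intrinsic __builtin_prefetch with its "prefetch for write" hint, which on an invalidation-based protocol roughly corresponds to a prefetch-exclusive request; the prefetch distance and locality hint are illustrative values, not tuned constants.

#include <stddef.h>

void scale_array(double *a, size_t n, double factor)
{
    for (size_t i = 0; i < n; i++) {
        /* Request write ownership of a line a few iterations ahead so the
         * store does not stall on an invalidation when its turn comes in
         * program order.  Distance (8) and locality hint (3) are only
         * illustrative. */
        if (i + 8 < n)
            __builtin_prefetch(&a[i + 8], 1 /* prefetch for write */, 3);
        a[i] *= factor;
    }
}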

Methods

Fixed vs. Adaptive Sequential Prefetching

Fixed sequential prefetching refers to prefetching that occurs at a constant rate over time. Adaptive sequential prefetching, on the other hand, changes the prefetching rate over time: the rate is increased or decreased based on the count of successful prefetches, and therefore depends on the workload and application (a start-up process, for example, will have a high rate of cold misses). While both methods improve efficiency, adaptive sequential prefetching is the more effective of the two, but also the more costly. A sketch of the adaptation logic follows.
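The following is a minimal sketch of the control logic an adaptive sequential prefetcher might use, assuming a hypothetical cache model: on each demand miss it issues `degree` sequential prefetches, and at the end of each evaluation window it raises or lowers the degree according to how many of the issued prefetches were actually used. The structure names, thresholds, and window size are illustrative assumptions, not taken from any specific design; a fixed sequential prefetcher is the special case where `degree` never changes.

#include <stdio.h>

#define MIN_DEGREE 0
#define MAX_DEGREE 8
#define WINDOW     64          /* prefetches issued per evaluation window */

struct adaptive_prefetcher {
    int degree;                /* current prefetch degree (0 = off)        */
    int issued;                /* prefetches issued in this window         */
    int useful;                /* issued prefetches later hit by a demand  */
};

/* Called when a demand access hits a block that was brought in by a prefetch. */
static void on_useful_prefetch(struct adaptive_prefetcher *p)
{
    p->useful++;
}

/* Called on a demand miss: issue `degree` sequential prefetches and, at the
 * end of each window, adapt the degree to the observed success rate. */
static void on_demand_miss(struct adaptive_prefetcher *p, unsigned long block)
{
    for (int d = 1; d <= p->degree; d++) {
        printf("prefetch block %lu\n", block + d);  /* stand-in for a real request */
        p->issued++;
    }

    if (p->issued >= WINDOW) {
        if (p->useful * 2 > p->issued && p->degree < MAX_DEGREE)
            p->degree++;        /* more than half were useful: prefetch more */
        else if (p->useful * 4 < p->issued && p->degree > MIN_DEGREE)
            p->degree--;        /* fewer than a quarter were useful: back off */
        p->issued = 0;
        p->useful = 0;
    }
}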

Where they stand now

Why?

References