Chapter 6: Joshua Mohundro, Patrick Wong

Sectored Cache

History

One of the first commercially available computers to use a cache, the IBM 360/85, used a sectored cache. The primary reason was that, at the time of the IBM 360/85, a sectored cache was easier to build than a non-sectored design. However, sectored caches proved to be much less efficient than non-sectored designs and thus largely disappeared.

How they work

A sectored cache is broken up into sectors (hence the name), each of which has an address tag associated with it. Each sector is further broken down into subsectors, each of which has its own "valid" bit, allowing some subsectors to remain empty while others are full.

Figure: diagram of a sectored cache, taken from Jeffrey B. Rothman.<ref>http://www.eecs.berkeley.edu/Pubs/TechRpts/1999/CSD-99-1034.pdf</ref>
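As a rough illustration of this organization, the Python sketch below models a single sector entry and the way a byte address splits into a tag, a set index, a subsector index, and a byte offset. The geometry (128 direct-mapped sets, 4 subsectors of 64 bytes each) is an invented example for clarity, not a description of any particular machine.

<pre>
# Minimal sketch of a sectored-cache entry (illustrative geometry, not a real machine):
# one address tag covers the whole sector, while each subsector keeps its own valid bit.

SUBSECTORS_PER_SECTOR = 4
SUBSECTOR_BYTES = 64
NUM_SETS = 128          # direct-mapped for simplicity

class Sector:
    def __init__(self):
        self.tag = None                               # one tag per sector
        self.valid = [False] * SUBSECTORS_PER_SECTOR  # one valid bit per subsector

def split_address(addr):
    """Break a byte address into (tag, set index, subsector index, byte offset)."""
    offset = addr % SUBSECTOR_BYTES
    subsector = (addr // SUBSECTOR_BYTES) % SUBSECTORS_PER_SECTOR
    set_index = (addr // (SUBSECTOR_BYTES * SUBSECTORS_PER_SECTOR)) % NUM_SETS
    tag = addr // (SUBSECTOR_BYTES * SUBSECTORS_PER_SECTOR * NUM_SETS)
    return tag, set_index, subsector, offset
</pre>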

When there is a miss to a sector, a resident sector is evicted, the address tag is set to the missed sector's address, and a single subsector is fetched. When a subsector is missing but the sector containing it is present, only that subsector needs to be fetched.

As mentioned in the history section, sectored caches were all but abandoned because of their inferiority to other designs. The main flaw was that a sector was often evicted before all of its subsectors had been loaded, so at any given time much of the cache went unused.

Sector caches do, however, have one important advantage. In a normal (non-sectored) cache, the only way to reach a very large capacity with a relatively small number of tag bits is to make the cache blocks (lines) very large; the problem is that every miss then requires a large block to be fetched in its entirety. With a sector cache, it is possible to fetch only a portion of a block (or sector), so both the time to handle a miss and the bus traffic can be significantly reduced. Thus, although a sector cache is likely to have a higher miss ratio than a normal cache, when timing is considered it may turn out to have better overall performance.
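Continuing the sketch above (same invented geometry), the lookup below distinguishes the three cases just described: a sector miss evicts the resident sector, claims the tag, and fetches only the requested subsector; a subsector miss within a present sector fetches only that subsector; everything else is a hit.

<pre>
# Illustrative sectored-cache lookup, reusing Sector, split_address, and the
# constants from the sketch above; one sector per set (direct-mapped).

cache = [Sector() for _ in range(NUM_SETS)]

def access(addr):
    """Return 'sector miss', 'subsector miss', or 'hit' for a byte address."""
    tag, set_index, subsector, _ = split_address(addr)
    sector = cache[set_index]

    if sector.tag != tag:
        # Sector miss: evict the resident sector, set the tag,
        # and fetch only the one subsector that was requested.
        sector.tag = tag
        sector.valid = [False] * SUBSECTORS_PER_SECTOR
        sector.valid[subsector] = True
        return "sector miss"

    if not sector.valid[subsector]:
        # Sector present, subsector absent: fetch just this subsector.
        sector.valid[subsector] = True
        return "subsector miss"

    return "hit"
</pre>

Note that the model stores only one tag per sector, which is where the tag-storage savings over an equally large small-block cache come from, and that a sector evicted early loses all of its subsectors at once, which is the under-utilization problem described above.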

Victim Cache

A victim cache, in architectures that include one, stores lines that have just been evicted from another level of cache. It is usually highly associative and has very few entries, but it handles one of the pathological cases for direct-mapped caches: an access pattern that alternates between addresses that conflict for the same cache line. In effect, it extends the associativity available to would-be conflict misses by the number of entries in the victim cache, at very low cost.
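As a rough sketch of the mechanism (again with invented sizes, not any particular processor), the Python below pairs a direct-mapped cache with a small fully associative victim cache. Two lines that alternately conflict in the same set miss only on their first accesses; afterwards each access is satisfied by swapping the line back in from the victim cache.

<pre>
# Illustrative direct-mapped cache backed by a tiny fully associative victim cache.
from collections import deque

L1_SETS = 256        # direct-mapped main cache (invented size)
VICTIM_ENTRIES = 4   # victim cache: very few entries, fully associative

main_cache = [None] * L1_SETS                # each set holds one line tag (or None)
victim_cache = deque(maxlen=VICTIM_ENTRIES)  # FIFO of recently evicted (set, tag) pairs

def access(line_addr):
    """Return 'hit', 'victim hit', or 'miss' for a cache-line address."""
    set_index = line_addr % L1_SETS
    tag = line_addr // L1_SETS

    if main_cache[set_index] == tag:
        return "hit"

    evicted = main_cache[set_index]
    if (set_index, tag) in victim_cache:
        # Conflict victim found: swap it back instead of paying a full miss.
        victim_cache.remove((set_index, tag))
        if evicted is not None:
            victim_cache.append((set_index, evicted))
        main_cache[set_index] = tag
        return "victim hit"

    # True miss: fetch from the next level; the evicted line becomes a victim.
    if evicted is not None:
        victim_cache.append((set_index, evicted))
    main_cache[set_index] = tag
    return "miss"

# Alternating accesses that would thrash a plain direct-mapped cache:
# lines 0 and 256 both map to set 0, yet only the first two accesses miss.
print([access(line) for line in [0, 256, 0, 256, 0, 256]])
</pre>

A real victim cache holds the full line data as well as the tag; the sketch tracks only tags to keep the bookkeeping visible.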

x86 architectures that implement a victim cache include the Transmeta Efficeon and the AMD K7, K8, and K10.

AMD has traditionally implemented an exclusive cache hierarchy, a design in which data is not duplicated across cache levels. A victim cache is therefore a natural outgrowth of an exclusive cache implementation.

In the K7, the L2 cache sat on a very slow external bus, so the victim cache acted as a buffer between lines evicted from the L1 cache and the slow L2 cache.

The K10's "victim cache" deserves closer inspection, as at 2-6 MB it is an order of magnitude larger than most victim cache implementations. It is more of a buffer for efficient implementation of AMD's exclusive cache hierarchy; it may be that AMD decided the L3 cache was fast enough to act as a victim cache.

Topics for further investigation: victim caches in non-x86 architectures, and the actual implications of victim caches for inclusive versus exclusive cache hierarchies.

Notes

{{Reflist}}

References

* Jeffrey B. Rothman, UC Berkeley EECS Technical Report CSD-99-1034. http://www.eecs.berkeley.edu/Pubs/TechRpts/1999/CSD-99-1034.pdf