Chapter 6: Allison Hamann, Chris Barile

Victim Cache

Victim caches were first proposed by Norman P. Jouppi in 1990. Victim caching places a small, fully-associative cache between a direct-mapped L1 cache and the next level of memory. The victim cache gives lines evicted from the L1 cache a “second chance” by holding them, which decreases the overall conflict miss rate (Jouppi).

Direct-mapped caches especially benefit from victim caching because of their high conflict miss rates. A victim cache lets a design keep the speed advantage of a direct-mapped cache while lowering its miss rate, in some cases below the miss rate of a set-associative cache (Jouppi).

Handling Misses

The proposed victim cache is fully associative and lies between the L1 cache and the next level of memory. While Jouppi proposed a victim cache with 1 to 5 entries, Naz et al. proposed that victim caches should hold 4 to 16 cache lines.[1][2] Regardless of size, when a miss occurs in the L1 cache, the victim cache is scanned for the requested line. If the line misses in both the L1 and the victim cache, it is fetched from the next level, and the line evicted from the L1 cache is placed in the victim cache. If the line misses in the L1 cache but hits in the victim cache, the two lines are swapped between the two caches. Because recently evicted lines are often re-referenced soon afterward (temporal locality), this scheme eliminates the majority of conflict misses.
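
To make the policy concrete, the following is a minimal C sketch of the miss-handling sequence, assuming a direct-mapped L1 with 64 sets, a 4-entry fully-associative victim cache with FIFO replacement, and block-granularity addresses. The sizes, names, and FIFO policy are illustrative assumptions, not details from Jouppi's proposal.

    #include <stdbool.h>
    #include <stdint.h>

    #define L1_SETS     64        /* assumed L1 size (in sets) */
    #define VICTIM_WAYS  4        /* assumed victim-cache entries */

    typedef struct {
        bool     valid;
        uint32_t tag;
    } Line;

    static Line l1[L1_SETS];
    static Line victim[VICTIM_WAYS];
    static int  victim_next = 0;  /* FIFO replacement pointer (assumed policy) */

    /* addr is a block (line) address; returns true on a hit in either cache. */
    bool access_cache(uint32_t addr)
    {
        uint32_t set = addr % L1_SETS;
        uint32_t tag = addr / L1_SETS;

        /* 1. L1 hit: nothing else to do. */
        if (l1[set].valid && l1[set].tag == tag)
            return true;

        /* 2. L1 miss: scan every entry of the fully-associative victim cache. */
        for (int w = 0; w < VICTIM_WAYS; w++) {
            if (victim[w].valid && victim[w].tag == tag) {
                /* Victim hit: swap the two lines between the caches. */
                Line tmp  = l1[set];
                l1[set]   = victim[w];
                victim[w] = tmp;
                return true;
            }
        }

        /* 3. Miss in both: fetch from the next level, and give the line
           evicted from L1 a second chance in the victim cache. */
        if (l1[set].valid) {
            victim[victim_next] = l1[set];
            victim_next = (victim_next + 1) % VICTIM_WAYS;
        }
        l1[set].valid = true;
        l1[set].tag   = tag;
        return false;
    }

The swap on a victim hit is what exploits temporal locality: the line returns to the L1 cache, while the newly evicted line waits in the victim cache in case it too is needed again soon.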

Sector Cache

A. Organization

1) Sectors
2) Subsectors
3) Validity Bit

B. Load Procedure

1) Sector miss
2) Subsector miss

C. Advantages

In a sector cache, the cache is divided into sectors, each of which corresponds to a logical sector on the main storage device. A sector is not loaded into the cache all at once, but in smaller pieces known as subsectors, which are similar to the cache lines of a direct-mapped cache.
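
One way to picture this organization is as a frame holding one tag for the whole sector and one validity bit per subsector. Continuing in C, the sketch below uses assumed sizes (8 subsectors of 64 bytes each); both numbers are illustrative.

    #include <stdbool.h>
    #include <stdint.h>

    #define SUBSECTORS_PER_SECTOR  8    /* assumed */
    #define SUBSECTOR_BYTES       64    /* assumed */

    typedef struct {
        bool     assigned;                               /* frame holds a sector */
        uint32_t sector_tag;                             /* which storage sector it maps */
        bool     subsector_valid[SUBSECTORS_PER_SECTOR]; /* one validity bit per subsector */
        uint8_t  data[SUBSECTORS_PER_SECTOR][SUBSECTOR_BYTES];
    } Sector;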

When a process requests data from a disk sector that is not in the cache, a cache sector is assigned to the sector on the main storage device that holds the requested data. Only the portion of that sector containing the request, known as a subsector, is then loaded into the cache, and the subsector's validity bit is set to indicate that it has been loaded from main storage. When data from other subsectors are requested, the system loads those subsectors into the cache sector and sets their validity bits. The sector is not removed from the cache until its space is needed by another program.
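
Continuing the sketch above, both miss cases can be expressed as a single lookup routine: a sector miss reassigns the frame and clears every validity bit, while a subsector miss fetches only the missing subsector. The direct-mapped placement of sectors and the fetch_from_storage helper are hypothetical simplifications, not part of the design described here.

    #define NUM_SECTORS 128              /* assumed number of cache frames */

    static Sector cache[NUM_SECTORS];

    /* Hypothetical stand-in for a real read from the storage device. */
    static void fetch_from_storage(uint32_t sector_tag, int sub, uint8_t *dst)
    {
        (void)sector_tag; (void)sub; (void)dst;
    }

    /* Returns the requested subsector's data, loading it on a miss. */
    uint8_t *lookup(uint32_t sector_tag, int sub)
    {
        Sector *s = &cache[sector_tag % NUM_SECTORS];

        if (!s->assigned || s->sector_tag != sector_tag) {
            /* Sector miss: assign the frame to the new sector and
               clear every subsector validity bit. */
            s->assigned   = true;
            s->sector_tag = sector_tag;
            for (int i = 0; i < SUBSECTORS_PER_SECTOR; i++)
                s->subsector_valid[i] = false;
        }

        if (!s->subsector_valid[sub]) {
            /* Subsector miss: fetch only the requested subsector
               and set its validity bit. */
            fetch_from_storage(sector_tag, sub, s->data[sub]);
            s->subsector_valid[sub] = true;
        }

        return s->data[sub];
    }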

One reason for this approach is that programs are generally organized in contiguous blocks on disk. Another is that data is looked up first by sector and then by subsector, so a match can be found much more quickly; because only the smaller set of sector tags must be compared simultaneously, the tag-comparison hardware is also less expensive.

References

1. Norman P. Jouppi, "Improving direct-mapped cache performance by the addition of a small fully-associative cache and prefetch buffers," Digital Equipment Corp., Palo Alto, CA. http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=134547 (accessed 30 January 2012).
2. Naz et al., "Improving data cache performance with integrated use of split caches, victim cache and stream buffers," MEDEA '04: Proceedings of the 2004 workshop on Memory performance. http://dl.acm.org/citation.cfm?id=1101876 (accessed 30 January 2012).