CSC 456 Spring 2012/ch7 AA

Cache Coherence

Problem

[Figure: Shared-memory system with a dedicated cache for each processor]

A cache is considered coherent if its read operations always return the most recently written values at the same address. In a single-processor (single-core) system, maintaining cache coherence is straightforward, but in a multiprocessor system it is not. Data can be present in any processor's cache, and a protocol needs to ensure that the data is the same in every cache. If the protocol cannot ensure that all the caches agree, it must flag a cache line to indicate that the line is not up to date.

In the figure shown here, there is a four-processor shared-memory system in which each processor has its own cache. Suppose processor P1 reads memory location M1 and stores it in its local cache. Then processor P2 also reads M1 and stores it in its own local cache. Now, if P1 changes the value of M1, there will be two copies of the same data residing in different caches, but the one in P1's cache will be different. The problem arises when P2 operates on M1 and uses the stale value that was stored in its cache. Solutions to this problem exist and are described in the next section.

Solutions

Cache coherence solutions are mainly classified as software-based or hardware-based. Software-based solutions can be implemented with compiler-based or run-time system support, and some of them can also use hardware assistance. Hardware-based solutions can likewise be implemented in many different ways. For example, one approach has each cache controller listen (snoop) for traffic on the bus that would invalidate its cache line. Alternatively, the solution can use directories that track which caches most recently accessed a particular cache block. Another variation is write-through versus write-back: the former writes to main memory whenever the cache is written, while the latter waits to update main memory until the cache block is evicted.
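
To make the write-through versus write-back distinction concrete, the sketch below shows how a store might be handled under each policy. It is a minimal C illustration written for this page, not taken from any real cache controller; the types and helper names (cache_line_t, memory_write) are invented for the example.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint32_t tag;          /* identifies the memory block held by this line */
        uint8_t  data[64];     /* cached copy of the block */
        bool     dirty;        /* meaningful only for the write-back policy */
    } cache_line_t;

    /* Stub standing in for the path to main memory. */
    static void memory_write(uint32_t tag, const uint8_t *data) {
        (void)tag; (void)data;
    }

    /* Write-through: every store is forwarded to main memory immediately. */
    static void store_write_through(cache_line_t *line, int offset, uint8_t value) {
        line->data[offset] = value;
        memory_write(line->tag, line->data);
    }

    /* Write-back: the store only marks the line dirty; main memory is updated
       later, when the line is evicted. */
    static void store_write_back(cache_line_t *line, int offset, uint8_t value) {
        line->data[offset] = value;
        line->dirty = true;
    }

    static void evict_write_back(cache_line_t *line) {
        if (line->dirty) {
            memory_write(line->tag, line->data);
            line->dirty = false;
        }
    }

Note that neither policy by itself says anything about other caches; propagating changes to them is the job of the coherence protocol discussed below.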

The main concern with software-based solutions is that perfect information is needed at all times when memory aliasing and explicit parallelism are involved. The focus is therefore more on improving hardware-based solutions, making them more common. Studies have shown that snoop-based cache coherence schemes are more sensitive to the write policy than to the specific coherence protocol, and that write-back schemes are more efficient despite the increased hardware complexity involved in cache coherence support.

Hardware-based cache-coherence protocols, though more competitive in performance than basic architectures with no hardware support, incur a significant power cost as coherence traffic grows. Thus, as power constraints become tighter and the degree of multiprocessing increases, the viability of hardware-based solutions becomes doubtful.

One may think that the cache write policy can provide cache coherence, but it cannot. The write policy only controls how a change to a value in a cache is propagated to a lower-level cache or to main memory; it is not responsible for propagating changes to other caches.

Protocols

Whenever a processor changes a word in its cache, all other caches holding a copy of that word must be notified. To keep their own copies consistent with the freshly written cache, those caches can be either updated or invalidated. In the update method, if variable 'x' is modified by core 1, core 1 has to send the updated value of 'x' onto the inter-core bus. Each cache listens to the inter-core bus, and if a cache sees a variable on the bus of which it holds a copy, it reads the updated value. This ensures that all caches have the most up-to-date value of the variable.

In the case of invalidation, an invalidation message is sent onto the inter-core bus when a variable is changed. The other caches read this invalidation signal; if their core later attempts to access that variable, the access results in a cache miss and the variable is read from main memory.

The update method results in a significant amount of traffic on the inter-core bus, because the update signal is sent onto the bus every time the variable is written. The invalidation method only requires that an invalidation signal be sent the first time a variable is altered; this is why invalidation is the preferred method.

Over the years, the following protocols were proposed to improve cache coherence performance:

  • MSI

MSI stands for Modified, Shared, and Invalid, the three states a cache line can be in. The Modified state means that a variable in the cache has been modified and therefore has a different value than that found in main memory; the cache is responsible for writing the variable back to main memory. The Shared state means that the variable exists in at least one cache and is not modified; a cache can evict the variable without writing it back to main memory. The Invalid state means that the value of the variable has been modified by another cache and is invalid; the cache must read a new value from main memory (or from another cache). A rough sketch of these state transitions is given after the protocol descriptions below.

  • MESI

MESI stands for Modified, Exclusive, Shared, and Invalid. The Modified and Invalid states are the same in this protocol as in MSI. The protocol introduces a new state, the Exclusive state, which means that the variable is present only in this cache and its value matches the value in main memory. The Shared state now indicates that the variable is contained in more than one cache.

  • MOSI

The MOSI protocol is identical to the MSI protocol except that it adds an Owned state. The Owned state means that the processor "owns" the variable and will provide the current value to other caches when requested (or at least will decide whether to provide it when asked). This is useful because another cache does not have to read the value from main memory; it receives it from the owning cache much faster.

  • MOESI

The MOESI protocol is a combination of the MESI and MOSI protocols.
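
As a rough illustration of how these protocols behave, the sketch below encodes the MSI state transitions for a single cache line, plus the extra Exclusive case that MESI adds. It is a simplified model written for this page, not an implementation of any particular processor's controller; the event and type names are invented.

    /* MSI states for one cache line. */
    typedef enum { INVALID, SHARED, MODIFIED } msi_state_t;

    /* Next state when the local processor reads or writes the line. */
    static msi_state_t msi_on_cpu_access(msi_state_t state, int is_write) {
        switch (state) {
        case INVALID:
            /* Miss: fetch the line and announce the access on the bus
               (a read-exclusive request for a write, a plain read otherwise). */
            return is_write ? MODIFIED : SHARED;
        case SHARED:
            /* A write must first invalidate the other copies via the bus. */
            return is_write ? MODIFIED : SHARED;
        case MODIFIED:
            return MODIFIED;            /* hit; no bus traffic needed */
        }
        return state;
    }

    /* Next state when a request from another cache is snooped on the bus. */
    static msi_state_t msi_on_bus_request(msi_state_t state, int other_is_write) {
        if (state == INVALID)
            return INVALID;
        if (other_is_write)
            return INVALID;             /* another cache wants exclusive access */
        /* Another cache only reads: a MODIFIED line is written back (flushed)
           and demoted to SHARED; a SHARED line stays SHARED. */
        return SHARED;
    }

    /* The extra case MESI adds: a read miss fills the line as Exclusive when
       no other cache holds it, and a later write to an Exclusive line upgrades
       to Modified silently, with no invalidation broadcast. */
    typedef enum { M, E, S, I } mesi_state_t;

    static mesi_state_t mesi_on_read_miss(int other_copies_exist) {
        return other_copies_exist ? S : E;
    }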

Memory Consistency

Memory consistency deals with the ordering of all memory operations - loads and stores - to different memory locations. Consistency issues can arise even in systems without caches. The code below, from Solihin [1], shows an example of possible inconsistency:


                      P0                                          P1
                S1 : datum = 5                            S3 : while(!datumIsReady) {}
                S2 : datumIsReady = 1                     S4 : print datum


In this example, P0 sets the values of datum and datumIsReady. Setting datumIsReady to 1 signals P1 that datum is now ready. P1 spins in the while loop waiting for this flag to become 1 and then prints datum. In this example and in similar cases, it is important that the compiler understand what the programmer intended so that program order is preserved. Within a uniprocessor, this problem can be solved by declaring which variables must be synchronized; when using C, for instance, this can be accomplished by declaring variables that may be susceptible to inconsistency as volatile.
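
As a sketch of that uniprocessor fix, the C fragment below marks both variables volatile, which keeps the compiler from caching them in registers or reordering the volatile accesses with respect to each other. (On a multiprocessor, volatile alone does not constrain the hardware's ordering; see the consistency models below.)

    #include <stdio.h>

    volatile int datum;
    volatile int datumIsReady;

    void producer(void) {            /* runs as P0 */
        datum = 5;                   /* S1 */
        datumIsReady = 1;            /* S2: signal that datum is ready */
    }

    void consumer(void) {            /* runs as P1 */
        while (!datumIsReady)        /* S3: spin until the flag is set */
            ;
        printf("%d\n", datum);       /* S4 */
    }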

Memory Semantics in Uniprocessor Systems

Uniprocessor languages use simple sequential semantics for memory operations, which allow the programmer to assume that all memory operations will occur one at a time in the sequential order specified by the program. Thus, the programmer can expect a read to return the value of the most recent write to the location according to sequential program order. It is sufficient to only maintain uniprocessor data and control dependences. The compiler and hardware can freely reorder operations to different locations if the uniprocessor data and control dependences are respected. This enables compiler optimizations such as register allocation, code motion, and loop transformations, and hardware optimizations, such as pipelining, multiple issue, write buffer bypassing and forwarding, and lockup-free caches, all of which lead to overlapping and reordering of memory operations. [2]
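
The short example below (written for this page, not taken from [2]) shows the kind of freedom such semantics allow: the two stores touch different locations and share no data or control dependence, so the compiler or hardware may complete them in either order without changing the result visible to this single program.

    int a, b;

    void independent_stores(void) {
        a = 1;    /* may complete after the store to b ...            */
        b = 2;    /* ... because no uniprocessor dependence links them */
    }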

Memory Semantics in Multiprocessor Systems

Unfortunately, memory consistency is not as straightforward on multiprocessors. With regard to the example above, instead of each process running on a separate thread, each process now runs on a separate processor. With multiple processors, more problems arise with respect to order of execution. While one processor might execute instructions in program order, other processors may not observe those executions in the same order, due to delays in communication. How processors handle this problem depends on the chosen consistency model.

Some examples of consistency models are:

  • atomic consistency (or linearizability)
  • causal consistency
  • delta consistency
  • entry consistency
  • eventual consistency
  • fork consistency
  • one-copy serializability
  • PRAM consistency (or FIFO consistency)
  • release consistency
  • sequential consistency
  • vector-field consistency
  • weak consistency
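
As a hedged illustration of how a programmer can state the required ordering explicitly rather than rely on a particular model, the sketch below rewrites the producer/consumer example with C11 atomics; the release/acquire pairing is roughly in the spirit of the release consistency model listed above.

    #include <stdatomic.h>
    #include <stdio.h>

    int datum;                        /* ordinary data */
    atomic_int datumIsReady;          /* synchronization flag */

    void producer(void) {             /* P0 */
        datum = 5;
        /* Release: all earlier writes become visible before the flag does. */
        atomic_store_explicit(&datumIsReady, 1, memory_order_release);
    }

    void consumer(void) {             /* P1 */
        /* Acquire: no later read may be reordered before the flag read. */
        while (!atomic_load_explicit(&datumIsReady, memory_order_acquire))
            ;
        printf("%d\n", datum);        /* prints 5 */
    }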

Memory Coherence and Shared Virtual Memory

The memory coherence problem in a shared virtual memory system differs from the one in multicache systems. In a multicache multiprocessor, the processors share a physical memory through their private caches. The relatively small size of a cache and the fast bus connection to the shared memory enable a sophisticated coherence protocol for the multicache hardware, such that the delay caused by conflicting writes to a memory location is small. [3]

In contrast, in a shared virtual memory system on a loosely coupled multiprocessor which has no physical shared memory (with a nontrivial communication cost between processors), conflicts are not likely to be solved with negligible delay. These conflicts resemble a “page fault” in a traditional virtual memory system. Thus, there are two design choices that greatly influence the implementation of a shared virtual memory: the granularity of the memory units (i.e., the “page size”) and the strategy for maintaining coherence.

Memory coherence strategies are classified based on how they deal with page synchronization and page ownership. The algorithms for memory coherence depend on the page fault handlers, their servers, and the data structures used, so the page table becomes an important part of these protocols.

Centralized Manager Algorithms

One way to obtain mutually exclusive access to data is to use a centralized algorithm. The first version of this algorithm is very similar to a monitor and makes use of an info table, which has an entry for every page and three fields for each of those pages. The owner field keeps track of the processor that had the latest write access to the page. The copy set field holds a list of every processor with a copy of the page, which enables broadcast-free invalidation operations. Finally, the lock field is used when synchronizing requests for the page. Each processor in the algorithm also keeps track of page accessibility in its own page table with an access and a lock field.
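
The sketch below writes out the tables just described as C structures. The type and field names (info_entry_t, ptable_entry_t, MAX_PROCS) are chosen for this page and are not taken from the original algorithm's presentation.

    #include <stdbool.h>

    #define MAX_PROCS 64

    typedef enum { ACCESS_NONE, ACCESS_READ, ACCESS_WRITE } access_t;

    /* One entry per page in the manager's info table. */
    typedef struct {
        int  owner;                  /* processor with the latest write access   */
        bool copy_set[MAX_PROCS];    /* processors holding a copy, used for
                                        broadcast-free invalidation              */
        bool lock;                   /* serializes requests for this page        */
    } info_entry_t;

    /* One entry per page in each processor's own page table. */
    typedef struct {
        access_t access;             /* this processor's current access right    */
        bool     lock;               /* set while a fault on the page is handled */
    } ptable_entry_t;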

By setting the lock in both of the tables, the algorithm can synchronize page faults whenever there are multiple processes waiting for a page, or whenever the page is in the process of being accessed. In conjunction with this, confirmation messages are sent to let the managing processor know when it can give page access to someone else. So, while read-page faults on the managing processor only need a message to and a message from the owner, read-page faults on a non-managing processor need an additional message to the manager and a confirmation message. For write-page faults, the message sequence is the same, except for the additional cost of an invalidation.

The second version of the algorithm improves on the first by eliminating the need for confirmation messages. To accomplish this, the responsibility of synchronizing page ownership is moved from the manager to the individual owners. This also means that the lock field is removed from the info table, and the copy set field is transferred to each of the page tables. Ultimately, this removes the cost of one send and one receive from the algorithm.

Distributed Manager Algorithm

In order to fix the possible bottleneck that centralized manager algorithms may create, Distributed Manager Algorithms assign tasks to various individual processors. There are several methods of assigning these tasks. In a Fixed Distributed Manager Algorithm, there is a predetermined set of pages for which each processor is responsible. In a Broadcast Distributed Manager Algorithm, page faults result in broadcast requests to the other processors; due to the number of requests, this algorithm does not scale well. In the Dynamic Distributed Manager Algorithm, instead of broadcasting requests to find page owners, each processor holds a page table that stores the owner of each page. Since page owners are not static, each entry in a processor's page table is only a "hint," or a starting point for looking for the true owner. A sketch of this hint-chasing idea is given below.
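
The sketch below illustrates how a faulting processor might follow its probable-owner hint, with each processor along the way either answering as the owner or supplying its own hint. The helper functions, and the local stubs standing in for inter-processor messages, are invented for this illustration.

    #define NUM_PAGES 1024

    static int prob_owner[NUM_PAGES];    /* this processor's per-page hint      */
    static int true_owner[NUM_PAGES];    /* stand-in for the real ownership
                                            state, normally held by the other
                                            processors                          */

    /* Stub for "ask processor `proc` about `page`": it replies with itself if
       it is the owner, otherwise with its own hint about the owner. */
    static int ask_about_owner(int proc, int page) {
        return (true_owner[page] == proc) ? proc : true_owner[page];
    }

    /* Stub for sending the actual page request once the owner is found. */
    static void request_page_from(int proc, int page) {
        (void)proc; (void)page;
    }

    static void locate_and_request(int page) {
        int p = prob_owner[page];
        int next;
        /* Follow the chain of hints; each hop gets closer to the true owner. */
        while ((next = ask_about_owner(p, page)) != p)
            p = next;
        request_page_from(p, page);
        prob_owner[page] = p;            /* cache the true owner as the new hint */
    }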

References

[1] Yan Solihin. "Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems." Solihin Publishing & Consulting LLC, 2009.

[2] Sarita V. Adve and Kourosh Gharachorloo. "Shared Memory Consistency Models: A Tutorial." Digital Western Research Laboratory, 250 University Avenue, Palo Alto. <http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf>

[3] Kai Li and Paul Hudak. "Memory Coherence in Shared Virtual Memory Systems." ACM Transactions on Computer Systems (TOCS), Volume 7, Issue 4, Nov. 1989. <http://dl.acm.org.prox.lib.ncsu.edu/citation.cfm?id=75104.75105&coll=DL&dl=GUIDE&CFID=251927&CFTOKEN=71004880>