CSC/ECE 506 Spring 2013/10b ps

Before our discussion can begin, we must understand what a Memory Consistency Model is. The Memory Consistency Model specifies how memory behaves when multiple processors perform reads and writes to the same location. Because this concept has such a vast scope and plays such a critical role in the system, memory consistency impacts system design, programming languages, and the underlying hardware.


This is critically important because we want our systems to be as efficient as possible. To achieve this goal, the system performs various optimizations that overlap operations and allow them to complete out of order. To add to this challenge, the compiler can also perform several optimizations to increase execution efficiency. It is intuitive to think that instructions are executed sequentially, but in reality execution only needs to be sequential where operations access the same memory location. As multi-processor systems and parallel computing became more of a necessity in the technology industry, the need for correct and highly efficient memory consistency models arose. Memory consistency allows techniques that optimize execution, but it also puts a contract in place between the programmer and the system to ensure correctness. In short, allowing tasks to execute in parallel is great for improving performance, but it also introduces new challenges, because these tasks can now complete in a different order than the one in which they were initially conceived [[http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf 10], [http://www2.engr.arizona.edu/~ece569a/Readings/ppt/memory.ppt 8]].


=How Memory Consistency Used to be Done=


It is intuitive to expect operations executing on a multi-processor system to behave the same as instructions executing on a single-processor system. This interleaving model is called Sequential Consistency and is the classical approach by which consistency can be achieved. It was formally defined by Lamport as, “The result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program.” What this means is that all memory accesses are immediately visible to all processors. [[http://www.cl.cam.ac.uk/~pes20/ppc-supplemental/test7.pdf 11]]


[[File:SC_diagram.PNG|center|frame||Conceptual Model for Sequential Consistency Model]]


This type of system was seen in the 1980s and was present in systems like the 386-based Compaq SystemPro. The key way Sequential Consistency was able to achieve overlap was to put writes in a write buffer so that computation could be overlapped with the next read. Memory operations are issued in program order by the processors. Because operations are executed one at a time, they appear to be atomic with respect to each other. [[http://www.cl.cam.ac.uk/~pes20/ppc-supplemental/test7.pdf 11]]


The advantage of a model like this is that it is easy to understand from a programmer's perspective. Unfortunately, the rules put in place by this model are very restrictive and can negatively impact performance. Sequential Consistency disallows the reordering of shared memory operations between processors. The drawback of this strict model is high latency with very little benefit over true serial execution. This is because reads tend to be closely interleaved with writes, yielding very little utilization of the write buffer. [[http://www.cl.cam.ac.uk/~pes20/ppc-supplemental/test7.pdf 11]]


[[File:SeqC.jpg|center|frame||Conceptual Model for Sequential Consistency Model]]
=Relaxed Memory Consistency Models=




Relaxed models can be categorized based on how they relax the program order constraint. Below are three general categories we will be examining during this discussion [[http://www.cs.cornell.edu/Courses/cs717/2001fa/lectures/sarita.ppt 13]]:


1. '''Write to Read relaxation:''' allows a write followed by a read to execute out of program order. Examples of models that provide this relaxation include the IBM-370 and the Sun SPARC V8 Total Store Ordering (TSO).

2. '''Write to Write relaxation:''' allows two writes to execute out of program order. An example of a model that provides this relaxation is the Sun SPARC V8 Partial Store Ordering (PSO).

3. '''Read to Read and Read to Write relaxation:''' allows reads to execute out of program order with respect to their following reads and writes. Examples of models that provide this relaxation include the DEC Alpha (Alpha), the Sun SPARC V9 Relaxed Memory Order (RMO), and the IBM PowerPC (PowerPC).


Apart from the program order relaxation, some of these models also relax the write atomicity requirement. Some examples of this type of consistency requirement relaxation are [[http://www.cs.cornell.edu/Courses/cs717/2001fa/lectures/sarita.ppt 13]]:


4. '''Read others’ write early:''' allows a read to return the value of another processor’s write before the write is made visible to all other processors, in cache-based systems.

'''''IBM-370'''''


The IBM-370 model relaxes the write-to-read program order in certain cases but maintains the atomicity requirements of Sequential Consistency. It enforces the program order constraint from a write to a following read by means of safety net serialization instructions that may be placed between the two operations. This relaxed consistency model is the most stringent, second only to Sequential Consistency.
It should be noted that the Write to Read relaxed consistency model implemented by the IBM-370 still requires program order between a write and a read in the following three cases [[http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf 10]]:


1. A write followed by a read to the same location [depicted by dashed line in between W and R in the figure].

2. A write followed by a read when either operation is generated by a serialization instruction.

3. A write followed by a read with a serialization instruction placed in program order between the two operations.




The conceptual system shown in the above figure is similar to that used for representing Sequential Consistency. The main difference is the presence of a buffer between each processor and the memory. Since we assume that each processor issues its operations in program order, we use the buffer to model the fact that the operations are not necessarily issued in the same order to memory. The cancelled reply path from the buffer to the processor implies that a read is not allowed to return the value of a write to the same location from the buffer.[[http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf 10]]


The IBM-370 model has two types of serialization instructions: special instructions that generate memory operations (e.g., compare-and-swap) and special non-memory instructions (e.g., a special branch). [[http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf 10]]


   
   

'''''SPARC V8 Total Store Ordering (TSO)'''''
[[File:TSO1.jpg|center|frame||Conceptual Model for SPARC V8 Total Store Ordering]]
   
   
For TSO, a safety net for write atomicity is required only for a write that is followed by a read to the same location on the same processor. The atomicity can be achieved by ensuring program order from the write to the read using read-modify-writes [[http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf 10]].




[[File:Code 1.jpg|center|frame||Example of how IBM-370 and SPARC V8 TSO differ]]
   
   
First consider the program segment in (a). Under the SC or IBM-370 model, the outcome (u,v,w,x)=(1,1,0,0) is disallowed. However, this outcome is possible under TSO because reads are allowed to bypass all previous writes, even if they are to the same location; therefore the sequence (b1,b2,c1,c2,a1,a2) is a valid total order for TSO. Of course, the value consistency requirement still requires b1 and b2 to return the values of a1 and a2, respectively, even though the reads occur earlier in the sequence than the writes. This maintains the intuition that a read observes all the writes issued from the same processor as the read. Now consider the program segment in (b). In this case, the outcome (u,v,w,x)=(1,2,0,0) is not allowed under Sequential Consistency or IBM-370, but is possible under TSO. [[http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf 10]]
   
   






For Write to Write relaxation we allow [[http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf 10]]:

1. Two writes to execute out of program order.


2. A write followed by a read to execute out of program order.


This relaxation enables a number of hardware optimizations, including write merging in a write buffer and overlapping multiple write misses, thus servicing the writes at a much faster rate. However, this relaxation is still not sufficiently flexible to be useful to a compiler. The SPARC V8 Partial Store Ordering model (PSO) is an example of a system that uses Write to Write relaxation. [[http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf 10]]




'''''SPARC V8 Partial Store Ordering (PSO)'''''


The Partial Store Ordering (PSO) model is similar to the SPARC V8 TSO model. The figure below shows an almost identical conceptual system, with only a slight difference in the program order: this relaxation allows writes to be overlapped as long as they are not to the same location, represented by the dashed line between the W's in the program order figure. A safety net for program order is provided through a fence instruction, the store barrier (STBAR), which may be placed between writes to enforce program order. [[http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf 10]]


[[File:PSO.jpg|center|frame||Conceptual Model of SPARC V8 PSO]]
[[File:Code2.jpg|center|frame||Example illustrating the need for STBAR under SPARC V8 PSO]]


Under the relaxed Write to Read models, the only possible value for (u,v) is (1,1). Because PSO allows writes to different locations to complete out of program order, it also allows the outcomes (0,0), (0,1), and (1,0) for (u,v). In this example, the STBAR instruction needs to be placed immediately before c1 on P1 in order to disallow all outcomes except (1,1). [[http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf 10]]








For Read to Read and Read to Write relaxation, we shed the restrictions on program order between all operations, including a read followed by a read or a write to a different location. This flexibility makes it possible to hide the latency of read operations by implementing true non-blocking reads in the context of either statically (in-order) or dynamically (out-of-order) scheduled processors, supported by techniques such as non-blocking (lockup-free) caches and speculative execution. The compiler also has full flexibility to reorder operations. [[http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf 10]]


The DEC Alpha, Relaxed Memory Order (RMO) on the SPARC V9, and PowerPC are examples of systems using this model. Except for Alpha, all the other models allow reordering of two reads to the same location.[[http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf 10]]


All of the models in this group, with the exception of the PowerPC, allow a processor to read its own write early. All these architectures provide explicit fence instructions to ensure program order. It should be noted, however, that frequent use of fence instructions can incur significant overhead due to the increase in the number of instructions and the extra delay that may be associated with executing them. [[http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf 10]]




'''''DEC Alpha'''''


The DEC Alpha allows reads and writes to different locations to complete out of program order unless there is a fence instruction between them. Memory operations to the same location, including reads, are required to complete in program order. A safety net for program order is provided through two fence instructions: the memory barrier (MB) and the write memory barrier (WMB). The MB instruction can be used to maintain program order between any memory operations, while the WMB instruction provides this guarantee only among writes. The Alpha model does not require a safety net for write atomicity. [[http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf 10]]


[[File:Alpha.jpg|center|frame||Conceptual Model of the DEC Alpha]]


   
   
The conceptual system shown above is the same as that of the IBM-370, which requires a read to return the value of the last write to the same location. But since the program order from a write to a read is relaxed, optimizations such as read forwarding can safely be exploited. [[http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf 10]]





'''''SPARC V9 Relaxed Memory Order (RMO)'''''


The SPARC V9 architecture uses RMO, which is an extension of the TSO and PSO models used in SPARC V8. In addition to the relaxations of PSO, this model relaxes the Read to Read and Read to Write program order. However, a read followed by a write to the same location, or two writes to the same location, are not relaxed. This is shown in the program order figure below.
RMO provides four types of fences (F1 through F4) that allow program order to be selectively maintained between any two types of operations. A single fence instruction, MEMBAR, can specify a combination of the above fence types by setting the appropriate bits in a four-bit opcode. No safety net for write atomicity is required. [[http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf 10]]




'''''IBM PowerPC'''''


The PowerPC does constrain the program order in some situations, such as [[http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf 10]]:


1. Between sub-operations. That is, each operation may consist of multiple sub-operations, and all sub-operations of the first operation must complete before any sub-operation of the second. This is represented by the double lines between operations in the figure below.
3. Among conflicting operations.
   
   
The IBM PowerPC model exposes multiple-copy semantics of memory to the programmer, as shown in the conceptual system above. The safety net fence instruction is called SYNC, which is similar to the MB fence instruction of the Alpha. However, a SYNC between two reads still allows them to occur out of program order. The PowerPC also allows a write to be seen early by another processor’s read. Hence, a read-modify-write operation may be needed to enforce program order between two reads to the same location, as well as to make writes appear atomic. [[http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf 10]]


The table below summarizes the program order constraints under each model we have described so far, together with other commercial systems that also implement these three Relaxed Consistency models (these additional examples are not discussed above) [[http://pic.dhe.ibm.com/infocenter/lnxinfo/v3r0m0/topic/liaaw/ordering.2006.03.13a.pdf 2], [http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf 10], [http://www.cl.cam.ac.uk/~pes20/ppc-supplemental/test7.pdf 11]].


{| border="1" cellpadding = "5" style="margin: 1em auto 1em auto;"
|}

1. The hardware threads can each perform reads and writes out of order, or even speculatively.  <br />
2. The memory system does not guarantee that writes will become visible to all other hardware threads at the same point.  <br />
<br />
To conceptualize the architecture, think of each thread as effectively having its own copy of memory. A write by one thread may propagate to the other threads in any order, and these propagations may be interleaved arbitrarily. This allows for great hardware execution optimization, but it also puts the responsibility on the programmer’s shoulders to insert barriers if they want to constrain this behavior. [[http://www.cl.cam.ac.uk/~pes20/ppc-supplemental/test7.pdf 11]]


[[File:ARM_Star.png‎|center|frame||Conceptual ARM architecture]]
   
   
Given these properties, a greater focus is put on programmer-observable behavior: the behavior a multithreaded program exhibits as it executes, judged by analyzing its results. This is not a question of performance but of the correctness of the results. The looseness of the ARM architecture must be taken into account by programmers, who must realize that their multithreaded application may exhibit different observable behavior depending on the conditions it is run under. [[http://www.cl.cam.ac.uk/~pes20/ppc-supplemental/test7.pdf 11]] <br />


[[File:ARM_tree.png‎|center|frame||ARM Thread level execution]]
[[File:ARM_tree.png‎|center|frame||ARM Thread level execution]]
Line 325: Line 325:
[http://www.intel.com/content/www/us/en/architecture-and-technology/64-ia-32-architectures-software-developer-vol-3a-part-1-manual.html 1. http://www.intel.com/content/www/us/en/architecture-and-technology/64-ia-32-architectures-software-developer-vol-3a-part-1-manual.html] <br />
[http://www.intel.com/content/www/us/en/architecture-and-technology/64-ia-32-architectures-software-developer-vol-3a-part-1-manual.html 1. http://www.intel.com/content/www/us/en/architecture-and-technology/64-ia-32-architectures-software-developer-vol-3a-part-1-manual.html] <br />
[http://pic.dhe.ibm.com/infocenter/lnxinfo/v3r0m0/topic/liaaw/ordering.2006.03.13a.pdf 2. http://pic.dhe.ibm.com/infocenter/lnxinfo/v3r0m0/topic/liaaw/ordering.2006.03.13a.pdf] <br />
[http://pic.dhe.ibm.com/infocenter/lnxinfo/v3r0m0/topic/liaaw/ordering.2006.03.13a.pdf 2. http://pic.dhe.ibm.com/infocenter/lnxinfo/v3r0m0/topic/liaaw/ordering.2006.03.13a.pdf] <br />
[http://dl.acm.org/citation.cfm?id=1941553.1941594 3. http://dl.acm.org/citation.cfm?id=1941553.1941594] <br />
[http://home.deib.polimi.it/speziale/tech/memory_consistency/mc.html 3. http://home.deib.polimi.it/speziale/tech/memory_consistency/mc.html] <br />
[http://home.deib.polimi.it/speziale/tech/memory_consistency/mc.html 4. http://home.deib.polimi.it/speziale/tech/memory_consistency/mc.html] <br />
[http://people.ee.duke.edu/~sorin/papers/asplos10_consistency.pdf 4. http://people.ee.duke.edu/~sorin/papers/asplos10_consistency.pdf] <br />
[http://homes.cs.washington.edu/~djg/papers/asf_micro2010.pdf 5. http://homes.cs.washington.edu/~djg/papers/asf_micro2010.pdf] <br />
[http://www.rdrop.com/users/paulmck/scalability/paper/whymb.2010.06.07c.pdf 5. http://www.rdrop.com/users/paulmck/scalability/paper/whymb.2010.06.07c.pdf] <br />
[http://people.ee.duke.edu/~sorin/papers/asplos10_consistency.pdf 6. http://people.ee.duke.edu/~sorin/papers/asplos10_consistency.pdf] <br />
[http://en.wikipedia.org/wiki/Memory_ordering 6. http://en.wikipedia.org/wiki/Memory_ordering] <br />
[http://www.rdrop.com/users/paulmck/scalability/paper/whymb.2010.06.07c.pdf 7. http://www.rdrop.com/users/paulmck/scalability/paper/whymb.2010.06.07c.pdf] <br />
[http://www.montefiore.ulg.ac.be/~pw/cours/psfiles/struct-cours12-e.pdf 7. http://www.montefiore.ulg.ac.be/~pw/cours/psfiles/struct-cours12-e.pdf] <br />
[http://en.wikipedia.org/wiki/Memory_ordering 8. http://en.wikipedia.org/wiki/Memory_ordering] <br />
[http://www2.engr.arizona.edu/~ece569a/Readings/ppt/memory.ppt 8. http://www2.engr.arizona.edu/~ece569a/Readings/ppt/memory.ppt] <br />
[http://www.montefiore.ulg.ac.be/~pw/cours/psfiles/struct-cours12-e.pdf 9. http://www.montefiore.ulg.ac.be/~pw/cours/psfiles/struct-cours12-e.pdf] <br />
[http://os.inf.tu-dresden.de/Studium/DOS/SS2009/04-Coherency.pdf 9. http://os.inf.tu-dresden.de/Studium/DOS/SS2009/04-Coherency.pdf] <br />
[http://www2.engr.arizona.edu/~ece569a/Readings/ppt/memory.ppt 10. http://www2.engr.arizona.edu/~ece569a/Readings/ppt/memory.ppt] <br />
[http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf 10. http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf] <br />
[http://ipdps.cc.gatech.edu/2000/fmppta/18000989.pdf 11. http://ipdps.cc.gatech.edu/2000/fmppta/18000989.pdf] <br />
[http://www.cl.cam.ac.uk/~pes20/ppc-supplemental/test7.pdf 11. http://www.cl.cam.ac.uk/~pes20/ppc-supplemental/test7.pdf] <br />
[http://preshing.com/20120930/weak-vs-strong-memory-models 12. http://preshing.com/20120930/weak-vs-strong-memory-models] <br />
[http://www.cl.cam.ac.uk/~pes20/ppc-supplemental/pldi105-sarkar.pdf 12. http://www.cl.cam.ac.uk/~pes20/ppc-supplemental/pldi105-sarkar.pdf] <br />
[http://csg.csail.mit.edu/pubs/memos/Memo-493/memo-493.pdf 13. http://csg.csail.mit.edu/pubs/memos/Memo-493/memo-493.pdf] <br />
[http://www.cs.cornell.edu/Courses/cs717/2001fa/lectures/sarita.ppt 13. http://www.cs.cornell.edu/Courses/cs717/2001fa/lectures/sarita.ppt]
[http://classes.soe.ucsc.edu/cmpe221/Spring05/papers/29multi.pdf 14. http://classes.soe.ucsc.edu/cmpe221/Spring05/papers/29multi.pdf] <br />
[http://os.inf.tu-dresden.de/Studium/DOS/SS2009/04-Coherency.pdf 15. http://os.inf.tu-dresden.de/Studium/DOS/SS2009/04-Coherency.pdf] <br />
[http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf 16. http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf] <br />
[http://www.cl.cam.ac.uk/~pes20/ppc-supplemental/test7.pdf 17. http://www.cl.cam.ac.uk/~pes20/ppc-supplemental/test7.pdf] <br />
[http://www.cl.cam.ac.uk/~pes20/ppc-supplemental/pldi105-sarkar.pdf 18. http://www.cl.cam.ac.uk/~pes20/ppc-supplemental/pldi105-sarkar.pdf] <br />
[http://www.cs.utah.edu/formal_verification/publications/conferences/pdf/charme03.pdf 19. http://www.cs.utah.edu/formal_verification/publications/conferences/pdf/charme03.pdf] <br />
[http://www.cl.cam.ac.uk/~pes20/weakmemory/x86tso-paper.tphols.pdf 20. http://www.cl.cam.ac.uk/~pes20/weakmemory/x86tso-paper.tphols.pdf]
[http://www.cs.cornell.edu/Courses/cs717/2001fa/lectures/sarita.ppt 21. http://www.cs.cornell.edu/Courses/cs717/2001fa/lectures/sarita.ppt]

Latest revision as of 23:25, 10 April 2013


Introduction

In this wiki we perform a survey of memory consistency models in modern microprocessors. We begin with a brief synopsis of what memory consistency is and how it can be achieved. We then discuss some legacy systems that used early consistency models, and how those models evolved into more sophisticated versions in recent years. With that foundation in place, we move on to the more advanced architectures and implementations typically seen today.

What is Memory Consistency

In order for our discussion to begin we must understand what a Memory Consistency Model is. A Memory Consistency Model specifies how memory behaves when multiple processors perform reads and writes to the same locations. Because this concept plays such a critical role across the entire system, memory consistency impacts system design, programming languages, and the underlying hardware.

This is critically important because we want our systems to be as efficient as possible. To achieve this goal, the system performs various optimizations that overlap operations and allow them to complete out of order. To add to this challenge, the compiler can also perform several optimizations to increase execution efficiency. It is intuitive to think that instructions are executed sequentially, but in reality execution only needs to appear sequential where operations access the same memory location. As multi-processor systems and parallel computing became a necessity in the technology industry, so did correct and highly efficient memory consistency models. Memory consistency permits techniques that optimize execution while also putting a contract in place between the programmer and the system to ensure correctness. In short, allowing tasks to execute in parallel is great for improving performance, but it also introduces new challenges because these tasks can now complete in a different order than the one in which they were written [10, 8].

How Memory Consistency Used to be Done

It is intuitive to expect operations executing on a multi-processor system to behave the same as instructions executing on a single-processor system. This interleaving model is called Sequential Consistency, and it is the classical approach to achieving consistency. It was formally defined by Lamport as, “The result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program.” What this means is that all memory accesses are immediately visible to all processors.[11]

Conceptual Model for Sequential Consistency Model

This type of system was seen in the 1980s and present in systems like the 386-based Compaq SystemPro. The key way Sequential Consistency was able to achieve overlap was to place writes in a write buffer so that computation could be overlapped with the next read. Memory operations are issued in program order by the processors, and because operations are executed one at a time they appear atomic with respect to each other.[11]

The advantage of a model like this is that it is easy to understand from a programmer's perspective. Unfortunately, the rules put in place by this model are very restrictive and can negatively impact performance. Sequential Consistency disallows the reordering of shared memory operations between processors. This strict model therefore incurs high latency while offering little benefit over true serial execution, because reads tend to be closely interleaved with writes, yielding very little utilization of the write buffer.[11]


Memory Consistency Today

While maintaining Sequential Consistency guarantees correctness, it is very limiting in terms of performance: only one instruction may be executed at a time, and any instruction requiring memory access must execute atomically. As a result, widely implemented computer architectures try to reduce the requirements imposed by the Sequential Consistency model while still maintaining program correctness. These Relaxed Consistency models allow the processor to adjust the order of memory accesses in order to execute instructions more efficiently.


Relaxed models can be categorized based on how they relax the program order constraint. Below are three general categories we will be examining during this discussion[13]:

1. Write to Read relaxation: allow a write followed by a read to execute out of program order. Examples of models that provide this relaxation include the IBM-370 and the Sun SPARC V8 Total Store Ordering (TSO).

2. Write to Write relaxation: allows two writes to execute out of program order. An example of a model that provides this relaxation is the Sun SPARC V8 Partial Store Ordering (PSO).

3. Read to Read, Write: allows reads to execute out of program order with respect to their following reads and writes. Examples of models that provide this relaxation include the DEC Alpha (Alpha), the Sun SPARC V9 Relaxed Memory Order (RMO), and the IBM PowerPC (PowerPC).

Apart from the program order relaxation, some of these models also relax the write atomicity requirement. Some examples of this type of consistency requirement relaxation are[13]:

4. Read others’ write early: allow a read to return the value of another processor’s write before the write is made visible to all other processors for cache-based systems.

5. Read own write early: a processor is allowed to read the value of its own previous write before the write is made visible to other processors.

Implementations

Below we look at some Relaxed Consistency models that were implemented in commercial systems. We divide our discussion of commercialized systems into two major categories: the more traditional computing architectures found in servers, large clusters, and personal computers; and the newly emerging mobile devices, which are becoming more sophisticated each day.

Note that a single architecture may employ multiple Relaxed Consistency models; we are simply selecting some examples that illustrate how each memory consistency model works.

Commercial Computers

Write to Read relaxation:

In Write to Read relaxation, we optimize program order by allowing a read to be reordered with respect to previous writes from the same processor, while sequential consistency is maintained through the enforcement of the remaining program order constraints. Though relaxing the program order from a write to a following read can improve performance, this reordering alone is not sufficient for compiler optimizations. Most compiler optimizations require the full flexibility of reordering any two operations in a program, not just a write followed by a read. The IBM-370 and the SPARC V8 TSO use Write to Read relaxation, with the primary difference between them being when they allow a read to return the value of a write.


IBM-370

The IBM-370 model relaxes certain write-to-read program orders but maintains the atomicity requirements of Sequential Consistency. It enforces the program order constraint from a write to a following read by using safety net serialization instructions that may be placed between the two operations. This is the most stringent of the relaxed consistency models, second only to Sequential Consistency itself. It should be noted that the Write to Read relaxed consistency model implemented by the IBM-370 still requires program order between a write and a read in the following three cases[10]:

1. A write followed by a read to the same location [depicted by dashed line in between W and R in the figure].

2. If either the write or the read are generated by a serialization instruction [WS , RS].

3. If there is a non-memory serialization instruction such as fence instruction [F] between the write and read.


Conceptual Model for IBM 370



The conceptual system shown in the above figure is similar to that used for representing Sequential Consistency. The main difference is the presence of a buffer between each processor and the memory. Since we assume that each processor issues its operations in program order, we use the buffer to model the fact that the operations are not necessarily issued in the same order to memory. The cancelled reply path from the buffer to the processor implies that a read is not allowed to return the value of a write to the same location from the buffer.[10]

The IBM-370 model has two types of serialization instructions: special instructions that generate memory operations (e.g., compare-and-swap) and special non-memory instructions (e.g., a special branch).[10]


SPARC V8 Total Store Ordering (TSO)

The total store ordering (TSO) model is one of the models proposed for the SPARC V8 architecture. TSO always allows a write followed by a read to complete out of program order; i.e., a read may bypass previous pending writes. If the read is to the same location as a pending write, it returns that write's value from the buffer; otherwise it fetches the value from memory. All other program orders are maintained. This is shown in the conceptual system below by the reply path from the buffer to a read, which is no longer blocked.

Conceptual Model for SPARC V8 Total Store Ordering

For TSO, a safety net for write atomicity is required only for a write that is followed by a read to the same location in the same processor. The atomicity can be achieved by ensuring program order from the write to the read using read-modify-writes[10].


Difference between IBM-370 and SPARC V8 TSO:

Let us consider the program segment below to demonstrate the difference between TSO and IBM 370 models:

Example of how IBM-370 and SPARC V8 TSO differ

First consider the program segment in (a). Under the SC or IBM-370 model, the outcome (u,v,w,x)=(1,1,0,0) is disallowed. However, this outcome is possible under TSO because reads are allowed to bypass all previous writes, even if they are to the same location; therefore the sequence (b1,b2,c1,c2,a1,a2) is a valid total order for TSO. Of course, the value requirement still requires b1 and b2 to return the values of a1 and a2, respectively, even though the reads occur earlier in the sequence than the writes. This maintains the intuition that a read observes all the writes issued from the same processor as the read. Now consider the program segment in (b). In this case, the outcome (u,v,w,x)=(1,2,0,0) is not allowed under Sequential Consistency or IBM-370, but is possible under TSO.[10]


Write to Write relaxation:

For Write to Write relaxation we allow[10]:

1. Two writes to execute out of program order.

2. A write followed by a read to execute out of program order.

This relaxation enables a number of hardware optimizations, including write merging in a write buffer and overlapping multiple write misses, thus servicing the writes at a much faster rate. However, this relaxation is still not sufficiently flexible to be useful to a compiler. The SPARC V8 Partial Store Ordering model (PSO) is an example of a system that uses Write to Write relaxation.[10]


SPARC V8 Partial Store Ordering (PSO)

The Partial Store Ordering (PSO) model is similar to the TSO model for SPARC V8. The figure below shows a mostly identical conceptual system, with only a slight difference in the program order: writes may be overlapped if they are not to the same location. This is represented by the dashed line between the W's in the program order figure. A safety net for program order is provided through a fence instruction, called the store barrier or STBAR, which may be used to enforce the program order between writes.[10]

Conceptual Model of SPARC V8 PSO


Let us consider an example to demonstrate the working of PSO.

Example demonstrating SPARC V8 PSO

With the relaxed Write to Read models, the only possible value for (u,v) is (1,1). PSO, since it allows writes to different locations to complete out of program order, also allows the outcomes (0,0), (0,1), and (1,0) for (u,v). In this example, an STBAR instruction needs to be placed immediately before c1 on P1 in order to disallow all outcomes except (1,1).[10]


Read to Read and Read to Write relaxation:

For Read to Read and Read to Write relaxation we shed the restrictions on program order between all operations, including read to read and a read followed by a write to a different location. This flexibility provides the possibility of hiding the latency of read operations by implementing true non-blocking reads in the context of either static (in-order) or dynamic (out-of-order) scheduling processors, supported by techniques such as non-blocking (lockup-free) caches and speculative execution. The compiler also has full flexibility to reorder operations.[10]

The DEC Alpha, Relaxed Memory Order (RMO) on the SPARC V9, and PowerPC are examples of systems using this model. Except for Alpha, all the other models allow reordering of two reads to the same location.[10]

All of the models in this group allow a processor to read its own write early; the PowerPC additionally allows a read to return the value of another processor's write early. All these architectures provide explicit fence instructions to ensure program order; however, it should be noted that frequent use of fence instructions can incur significant overhead due to the increase in the number of instructions and the extra delay that may be associated with executing them.[10]


DEC Alpha

The DEC Alpha allows reads and writes to different locations to complete out of program order unless there is a fence instruction between them. Memory operations to the same location, including reads, are required to complete in program order. A safety net for program order is provided through two fence instructions, the memory barrier (MB) and the write memory barrier (WMB). The MB instruction can be used to maintain program order between any memory operations, while the WMB instruction provides this guarantee only among write operations. The Alpha model does not require a safety net for write atomicity.[10]

Conceptual Model of DEC Alpha


The conceptual system shown above is the same as IBM 370 which requires a read to return the value of the last write operation to the same location. But since we can relax the program order from a write to a read, we can safely exploit optimizations such as read forwarding.[10]


SPARC V9 Relaxed Memory Order (RMO)

The SPARC V9 architecture uses RMO, which is an extension of the TSO and PSO models used in SPARC V8. In addition to the Write to Read and Write to Write relaxations of PSO, this model also relaxes the Read to Read and Read to Write program orders. However, the order from a read to a write to the same location, and the order between two writes to the same location, are not relaxed. This is shown in the figure for program order below. RMO provides four types of fences [F1 through F4] that allow program order to be selectively maintained between any two types of operations. A single fence instruction, MEMBAR, can specify a combination of the above fence types by setting the appropriate bits in a four-bit opcode. No safety net for write atomicity is required.[10]


IBM PowerPC

The PowerPC does constrain the program order in some situations such as[10]:

1. Between sub-operations. That is, each operation may consist of multiple sub-operations, and all sub-operations of the first operation must complete before any sub-operation of the second operation. This is represented by the double lines between operations in the figure below.

2. Writes to the same location.

3. Among conflicting operations.

The IBM PowerPC model exposes multiple-copy semantics of memory to the programmer as shown in the conceptual system above. The safety net fence instruction is called SYNC which is similar to the MB fence instruction of the Alpha systems. However, SYNC between two reads allows the instructions to occur out of program order. Also the PowerPC allows a write to be seen early by another processor’s read. Hence a read-modify-write operation may be needed to enforce program order between two reads to the same location as well as to make writes appear atomic. [10]

The table below summarizes the program order constraints relaxed by each model we have described so far, together with other commercial systems that also implement these three Relaxed Consistency models (these additional examples are not discussed above)[2, 10, 11].

Relaxed Consistency Method
Commercial System or Architecture    W -> R    W -> W    R -> R and/or R -> W
IBM-370                              Yes
SPARC V8 TSO                         Yes
SPARC V8 PSO                         Yes       Yes
DEC Alpha                            Yes       Yes       Yes
SPARC V9 RMO                         Yes       Yes       Yes
IBM PowerPC                          Yes       Yes       Yes
AlphaServer 8200/8400                Yes       Yes       Yes
Cray T3D                             Yes       Yes       Yes
SPARCcenter 1000/2000                Yes
Sequent Balance                      Yes
AMD64                                Yes       Yes
IA64                                 Yes       Yes       Yes
PA-RISC                              Yes       Yes       Yes
x86                                  Yes       Yes

Mobile Devices

Processors designed to be fast, low power, low heat, and capable of supporting many applications are becoming very popular in today's technology space. From tablets to smartphones, many of these devices run on the ARM architecture and, to a lesser extent, the IBM POWER architecture. These multiprocessors utilize highly relaxed memory models, which permits a wide range of processor optimizations: better performance, better power usage, and simpler hardware. The pillars upon which the ARM architecture is built to achieve as much hardware optimization as possible are:

1. Reads and writes can be done out of order. This includes speculative reads and writes that are part of branches not yet reached. Many other relaxed memory models only allow local reordering under specific conditions, but in ARM any local reordering is allowed unless constraints are put in place by the programmer.
2. The memory system does not guarantee that writes will become visible to all other hardware threads at the same point.

In order to conceptualize the architecture, think of each thread as effectively having its own copy of memory. A write by one thread may propagate to the other threads in any order, and these propagations may be interleaved arbitrarily. This allows for great hardware execution optimization but also puts responsibility on the programmer's shoulders to insert barriers if they want to constrain this behavior. [11]

Conceptual ARM architecture

Given these properties, a greater focus is put on the programmer-observable behavior: the behavior a multithreaded program exhibits when it is executed and its results are analyzed. This is not related to performance but instead to the correctness of the results. The looseness of the ARM architecture must be taken into account by programmers, who must realize that their multithreaded application may exhibit different observable behavior depending on the conditions it is run under. [11]

ARM Thread level execution

Quiz

1. Which of these is a reason that modern multiprocessor systems moved away from Sequential Consistency?

A. Sequential Consistency does not guarantee correctness.

B. Maintaining Sequential Consistency tends to have a negative impact on performance.

C. Memory Consistency in general is not needed if Cache Coherence is implemented, so there was never a need for Sequential Consistency.


2. Which of the following discussed Relaxed Consistency models allow enough freedom for the compiler to move around memory accesses?

A. Write to Read

B. Write to Write

C. Read to Read and Read to Write


3. Which of the following architectures is considered to have the most flexible memory consistency policy?

A. DEC Alpha

B. x86

C. IBM-370


4. True or false, all the discussed architectures have some sort of safety net to allow programmers to specify to the compiler which instructions must be kept in order.


5. True or false, mobile devices today still use Sequential Consistency as the processing requirements are not as great as more traditional computing systems.


6. Which of these major architectures is described in the context of mobile devices?

A. Cray T3D

B. ARM

C. IA64


7. True or false, while many examples of architectures that relax Sequential Consistency exist, none have been commercialized.


8. True or false, by relaxing Sequential Consistency, the programs executing on the architecture no longer have a predictable output (become non-deterministic).


9. True or false, IBM-370 will reorder writes to the same location.


10. While the DEC Alpha has a very relaxed memory consistency model, it does not allow one of the following reordering of instructions, which one?

A. Two reads to the same location.

B. A read followed by a write to the same location.

C. Any two writes.


Answers: 1. B, 2. C, 3. A, 4. True, 5. False, 6. B, 7. False, 8. False, 9. False , 10. A

References

1. http://www.intel.com/content/www/us/en/architecture-and-technology/64-ia-32-architectures-software-developer-vol-3a-part-1-manual.html
2. http://pic.dhe.ibm.com/infocenter/lnxinfo/v3r0m0/topic/liaaw/ordering.2006.03.13a.pdf
3. http://home.deib.polimi.it/speziale/tech/memory_consistency/mc.html
4. http://people.ee.duke.edu/~sorin/papers/asplos10_consistency.pdf
5. http://www.rdrop.com/users/paulmck/scalability/paper/whymb.2010.06.07c.pdf
6. http://en.wikipedia.org/wiki/Memory_ordering
7. http://www.montefiore.ulg.ac.be/~pw/cours/psfiles/struct-cours12-e.pdf
8. http://www2.engr.arizona.edu/~ece569a/Readings/ppt/memory.ppt
9. http://os.inf.tu-dresden.de/Studium/DOS/SS2009/04-Coherency.pdf
10. http://infolab.stanford.edu/pub/cstr/reports/csl/tr/95/685/CSL-TR-95-685.pdf
11. http://www.cl.cam.ac.uk/~pes20/ppc-supplemental/test7.pdf
12. http://www.cl.cam.ac.uk/~pes20/ppc-supplemental/pldi105-sarkar.pdf
13. http://www.cs.cornell.edu/Courses/cs717/2001fa/lectures/sarita.ppt