CSC/ECE 506 Spring 2012/10b sr

Use of consistency models in current multiprocessors

The memory consistency model of a shared memory system determines the order in which memory operations appear to execute to the programmer. This article describes how consistency is used in multiprocessors today, examines the details of the consistency models in common use, and discusses the impact of these models on multiprocessor performance. It closes by considering how the consistency models behave as multiprocessors grow larger.

== Introduction ==

Many modern computer systems and most multicore chips support shared memory in hardware. In a shared memory system, each of the processor cores may read and write to a single shared address space. These designs seek various goodness properties, such as high performance, low power, and low cost. Of course, it is not valuable to provide these goodness properties without first providing correctness. Correct shared memory seems intuitive at a hand-wave level, but there are subtle issues in even defining what it means for a shared memory system to be correct, as well as many subtle corner cases in designing a correct shared memory implementation. Moreover, these subtleties must be mastered in hardware implementations, where bug fixes are expensive.
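
As a minimal illustration of this single shared address space, the sketch below (not part of the original article) has one thread store a value that a second thread later loads from the same location. The variable and function names are invented for the example, and C++11 atomics are used only so that the program is well defined.

<pre>
#include <atomic>
#include <iostream>
#include <thread>

// One location in the single address space that both threads share.
std::atomic<int> shared_value{0};

void writer() {
    // Store into shared memory; release ordering makes the write
    // visible to an acquire load in another thread.
    shared_value.store(42, std::memory_order_release);
}

void reader() {
    int v;
    // Spin until the writer's store becomes visible on this core.
    while ((v = shared_value.load(std::memory_order_acquire)) == 0) { }
    std::cout << "reader observed " << v << '\n';
}

int main() {
    std::thread t1(writer);
    std::thread t2(reader);
    t1.join();
    t2.join();
    return 0;
}
</pre>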

It is the job of consistency to define shared memory correctness. Consistency definitions provide rules about loads and stores (or memory reads and writes) and how they act upon memory. Ideally, consistency definitions would be simple and easy to understand. However, defining what it means for shared memory to behave correctly is more subtle than defining the correct behavior of, for example, a single-threaded processor core. The correctness criterion for a single processor core partitions behavior between one correct result and many incorrect alternatives. This is because the processor’s architecture mandates that the execution of a thread transforms a given input state into a single well-defined output state, even on an out-of-order core. Shared memory consistency models, however, concern the loads and stores of multiple threads and usually allow many correct executions while disallowing many incorrect ones. The possibility of multiple correct executions is due to the ISA allowing multiple threads to execute concurrently, often with many possible legal interleavings of instructions from different threads. The multitude of correct executions complicates the erstwhile simple challenge of determining whether an execution is correct. Nevertheless, consistency must be mastered to implement shared memory and, in some cases, to write correct programs that use it.
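
The classic store-buffering litmus test makes this concrete. The sketch below is an illustrative example, not taken from the article: each thread first stores to one shared variable and then loads the other. Interleaving the four memory operations under sequential consistency yields the outcomes (r1, r2) = (0, 1), (1, 0), or (1, 1); a model weaker than sequential consistency (requested here with relaxed atomics) additionally permits (0, 0), so several different results are all "correct."

<pre>
#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> x{0}, y{0};  // shared locations
int r1 = 0, r2 = 0;           // each thread's observation

void thread1() {
    x.store(1, std::memory_order_relaxed);   // store x ...
    r1 = y.load(std::memory_order_relaxed);  // ... then load y
}

void thread2() {
    y.store(1, std::memory_order_relaxed);   // store y ...
    r2 = x.load(std::memory_order_relaxed);  // ... then load x
}

int main() {
    std::thread t1(thread1);
    std::thread t2(thread2);
    t1.join();
    t2.join();
    // (0,1), (1,0), and (1,1) are sequentially consistent outcomes;
    // (0,0) can appear only under a weaker memory model.
    std::printf("r1=%d r2=%d\n", r1, r2);
    return 0;
}
</pre>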

== Consistency in current-day multiprocessors ==

== Consistency models used ==

=== Address translation aware memory consistency ===

=== Causal consistency ===

=== Delta consistency ===

=== Entry consistency ===

=== Eventual consistency ===

=== Fork consistency ===

=== Linearizability ===

Also known as strict or atomic consistency.

=== One-copy serializability ===

=== PRAM consistency ===

Also known as FIFO consistency.

=== Release consistency ===

=== Sequential consistency ===

=== Serializability ===

=== Vector-field consistency ===

=== Weak consistency ===

=== Strong consistency ===

== Practical performance impact ==

== Conclusion ==