CSC/ECE 506 Fall 2007/wiki1 10 mt





== Dataflow & Systolic Architectures ==

The dataflow and systolic models are two of the many possible parallel computer architectures. Unlike the shared address space, message passing, and data parallel models, the dataflow and systolic architectures were not widely adopted as the basis of parallel programming systems, although they received a considerable amount of analysis from both private industry and academia.

=== Dataflow ===

Dataflow architecture stands in opposition to the von Neumann, or control flow, architecture, which consists of a memory, an I/O subsystem, an arithmetic unit, and a control unit. A single shared memory holds both program instructions and data, with a data bus and an address bus connecting the memory and the processing unit. Because instructions and data must be fetched in sequential order, a bottleneck can occur that limits the throughput between the CPU and the memory.

The dataflow model of architecture, in contrast, is a distributed model with no single point of control: an instruction executes only when its required data are available. Dataflow programs are typically represented as graphs in which each node is an operation, to be executed when its operands become available, together with the addresses of the subsequent nodes in the graph that need the results of the operation.
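
The firing rule can be made concrete with a small sketch. The following C program is an illustrative toy, not the design of any actual dataflow machine: the three-node graph for (a + b) + (c * d), the send() helper, and the sample values are all invented for illustration. Each node stores operands as they arrive and fires as soon as both are present, forwarding its result to the destination node and port recorded in the graph.

<pre>
/* Toy sketch of static dataflow firing: each node holds slots for its two
 * operands and fires as soon as both arrive, forwarding the result to the
 * node and port listed as its destination. */
#include <stdio.h>
#include <stdbool.h>

#define NNODES 3

typedef struct {
    char   op;                 /* '+' or '*' */
    double operand[2];
    bool   present[2];
    int    dest_node, dest_port;  /* dest_node < 0 means the result is final output */
} Node;

static Node graph[NNODES] = {
    /* node 0: a + b  -> node 2, port 0 */
    { '+', {0, 0}, {false, false}, 2, 0 },
    /* node 1: c * d  -> node 2, port 1 */
    { '*', {0, 0}, {false, false}, 2, 1 },
    /* node 2: (a + b) + (c * d) -> final output */
    { '+', {0, 0}, {false, false}, -1, 0 },
};

/* Deliver a value to a node's input port; fire the node once both operands are present. */
static void send(int node, int port, double value)
{
    if (node < 0) {                              /* reached a final output arc */
        printf("result = %g\n", value);
        return;
    }
    Node *n = &graph[node];
    n->operand[port] = value;
    n->present[port] = true;
    if (n->present[0] && n->present[1]) {
        double r = (n->op == '+') ? n->operand[0] + n->operand[1]
                                  : n->operand[0] * n->operand[1];
        n->present[0] = n->present[1] = false;   /* consume the input tokens */
        send(n->dest_node, n->dest_port, r);     /* the result becomes a new token */
    }
}

int main(void)
{
    /* Tokens may arrive in any order; execution is driven purely by data availability. */
    send(0, 0, 1.0);   /* a */
    send(1, 1, 4.0);   /* d */
    send(0, 1, 2.0);   /* b */
    send(1, 0, 3.0);   /* c */
    return 0;          /* prints: result = 15 */
}
</pre>

The order in which the tokens are delivered in main() does not matter; execution is driven entirely by data availability, which is the essential difference from control flow.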

The dataflow model also divides into static and dynamic dataflow. The static dataflow model is characterized by the use of memory addresses to specify the data-dependent destination nodes. The dynamic model uses content-addressable memory, which searches for tokens carrying specific tags, so that each subprogram or subgraph can execute in parallel as a separate instance. In the dynamic dataflow model, programs execute by passing tokens that contain both data and a tag; a node fires when incoming tokens with identical tags are present.
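
A second toy sketch illustrates tagged-token matching in the dynamic model. The Token fields, the 16-entry waiting store, and the linear-scan search below are assumptions made for illustration; a real machine would use far more efficient hardware, and the explicit token store approach mentioned at the end of this article was one response to exactly this cost. Two tokens fire a node only when their tags match, so tokens belonging to different loop iterations can be in flight at the same time without interfering.

<pre>
/* Toy sketch of tagged-token matching in dynamic dataflow (illustrative only):
 * a token carries a tag identifying its loop iteration or activation, and two
 * tokens fire an operation only when their tags match. */
#include <stdio.h>

#define STORE 16

typedef struct {
    int    tag;     /* activation / iteration identifier */
    int    port;    /* 0 = left operand, 1 = right operand */
    double value;
    int    valid;
} Token;

static Token waiting[STORE];   /* stand-in for the content-addressable matching store */

/* Present a token to a two-input "add" node: fire if a partner with the same
 * tag and the opposite port is already waiting, otherwise store the token. */
static void match_and_fire(Token t)
{
    for (int i = 0; i < STORE; i++) {
        if (waiting[i].valid && waiting[i].tag == t.tag && waiting[i].port != t.port) {
            printf("tag %d fired: %g\n", t.tag, waiting[i].value + t.value);
            waiting[i].valid = 0;              /* both tokens are consumed */
            return;
        }
    }
    for (int i = 0; i < STORE; i++) {          /* no partner yet: wait in the store */
        if (!waiting[i].valid) {
            waiting[i] = t;
            waiting[i].valid = 1;
            return;
        }
    }
    /* A real machine must handle a full matching store; this toy simply drops the token. */
}

int main(void)
{
    /* Tokens from two different iterations arrive interleaved; tags keep them apart. */
    match_and_fire((Token){ .tag = 0, .port = 0, .value = 1.0 });
    match_and_fire((Token){ .tag = 1, .port = 0, .value = 10.0 });
    match_and_fire((Token){ .tag = 1, .port = 1, .value = 20.0 });  /* fires tag 1: 30 */
    match_and_fire((Token){ .tag = 0, .port = 1, .value = 2.0 });   /* fires tag 0: 3  */
    return 0;
}
</pre>

The linear scan over the waiting store stands in for the content-addressable memory described above; the cost of that storage is one of the problems discussed in the final section.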

=== Systolic ===

'''Systolic architecture''' sought to replace the uniprocessor by stringing together a system of processing elements in arrays, known as '''systolic arrays'''. The motivation came from the bottleneck that can occur between a central processing unit (CPU) and main memory: a uniprocessor must sit and wait for a result to return from main memory before it can use it or request another data item. In a systolic architecture, data moves through the system on regular, timed "heartbeats" (the term systolic refers to the systolic contraction of the heart [http://en.wikipedia.org/wiki/Systole_%28medicine%29]), and work is done in between heartbeats. Each processor produces a new data item after each heartbeat; those items either continue their journey toward completion or are returned to main memory. The ability to place highly specialized computation under simple, regular, and highly localized communication patterns is the key to the systolic architecture.

The reasoning behind this architecture centres on the bottleneck between a CPU and its main memory. Having issued a memory request, a processor has to wait a short time for the memory system to deliver the data item. Once it has a piece of data, a uniprocessor performs one calculation with it and then either returns the result to main memory or requests another data item. A systolic array, however, uses that data item to perform a calculation at every processor in the chain before returning it to main memory. Because the memory access penalty is not paid for every instruction, systolic arrays can be much faster than uniprocessors.

On every beat of a global system clock, each processor passes its result to the next processor in the chain and receives another data item from the previous processor in the chain. Each processor produces a result every clock cycle, so complex multi-cycle instructions are not implemented. Systolic arrays are said to be 'lock-stepped', or synchronous. There is no master controller, as found with array processors, so control is effectively distributed across the network.

The periodic pumping of data around the systolic array is the feature from which systolic arrays get their name. A systole is the name given to a contraction of the heart. When the heart contracts, blood moves along the veins and arteries, coming to rest at the end of the contraction. During the brief pause between beats the blood does its work, distributing oxygen and nutrients; while that work is being done, the network of veins and arteries is still. This is followed by another contraction, and the whole cycle starts again. Note that when the network is active no 'work' is being done, and vice versa.

Systolic architectures are designed by applying linear mapping techniques to regular dependence graphs (DGs):
* Regular dependence graph: the presence of an edge in a certain direction at any node in the DG implies the presence of an edge in the same direction at all nodes in the DG.
* The DG corresponds to a space representation: no time instance is assigned to any computation, so t = 0.
* A systolic architecture has a space-time representation in which each node is mapped to a certain processing element (PE) and is scheduled at a particular time instance.
* The systolic design methodology maps an N-dimensional DG to a lower-dimensional systolic architecture; typically, the mapping of an N-dimensional DG to an (N-1)-dimensional systolic array is considered.
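
The lock-stepped, heartbeat-style movement of data can be illustrated with a toy simulation. The sketch below is my own illustration rather than a published design; the array size, matrix values, and input schedule are assumptions. It models a linear systolic array computing the matrix-vector product y = Ax: each processing element permanently holds one element of x, matrix elements are streamed in skewed by one beat per column, and partial sums are pumped from left to right, one PE per beat. In terms of the design methodology above, a two-dimensional dependence graph (row index by column index) is mapped onto a one-dimensional array, with the column index becoming the PE number and the row index, plus the skew, becoming the time step.

<pre>
/* Toy lock-step simulation of a linear systolic array computing y = A*x.
 * Each PE j holds x[j]; matrix elements are streamed into PE j skewed by one
 * beat per column, and partial sums are pumped left to right, one PE per beat,
 * like blood on each "heartbeat". */
#include <stdio.h>

#define N 3   /* number of processing elements (and matrix dimension) */

int main(void)
{
    double A[N][N] = { {1, 2, 3},
                       {4, 5, 6},
                       {7, 8, 9} };
    double x[N] = { 1, 1, 1 };   /* each PE j permanently holds x[j] */
    double y[N] = { 0 };

    double pipe[N] = { 0 };      /* partial sum currently held in each PE */

    /* Row i enters PE 0 at beat i and leaves PE N-1 at beat i + N - 1. */
    for (int beat = 0; beat < 2 * N - 1; beat++) {
        /* On each beat every PE passes its partial sum to its right neighbour.
           Processing PEs right-to-left makes each update use last beat's values. */
        for (int j = N - 1; j >= 0; j--) {
            int i = beat - j;                    /* which row's element reaches PE j now */
            double acc_in = (j == 0) ? 0.0 : pipe[j - 1];
            if (i >= 0 && i < N)
                pipe[j] = acc_in + A[i][j] * x[j];
            else
                pipe[j] = acc_in;                /* PE idle: just forward the sum */
        }
        int done = beat - (N - 1);               /* row whose sum exits the array this beat */
        if (done >= 0 && done < N)
            y[done] = pipe[N - 1];
    }

    for (int i = 0; i < N; i++)
        printf("y[%d] = %g\n", i, y[i]);         /* prints 6, 15, 24 */
    return 0;
}
</pre>

Each partial sum visits every processing element exactly once before leaving the array, so the memory access penalty is paid only at the ends of the chain, as described above.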

== New Developments in Dataflow and Systolic Architectures ==

Since the 1990s, little advancement has been made in the field of dataflow architecture. Dataflow was largely abandoned due to several problems:

# The dynamic dataflow model requires some sort of associative memory to store the tokens waiting to be matched. Unfortunately, even in moderate-size programs the memory required for this storage tends to be large, and therefore not very cost efficient.
# Dataflow programs typically made use of multiple threads, since parallel functions and loops were frequently used in the programming. If there was not enough of a workload for multiple threads, single-threaded execution of a program provided poor performance.
# The dataflow model failed to take advantage of locality, such as the use of local registers and caches. Since all of the information for the tokens (data and tags) moves through the network, it is difficult to transfer that information efficiently over a large parallel system.

Regardless of the problems that the dataflow model of machine design encountered, out-of-order execution, which is a form of restricted dataflow, is today one of the most successful models of microprocessor design. AMD and Intel both implement architectures in which, after being decoded into simpler, RISC-like operations, instructions are placed in a central pool where they are allowed to execute in whatever order best matches the resources currently available.
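
The connection to restricted dataflow can be seen in a toy scheduler. The sketch below is a drastic simplification invented for illustration: the three micro-ops, the latencies, and the ready-bit scheme are assumptions, and renaming, reservation stations, and in-order retirement are all omitted. Decoded operations sit in a pool and issue as soon as their source registers are ready, so a later, independent operation can complete before an earlier one that is stalled.

<pre>
/* Toy sketch of the "central pool" idea behind out-of-order execution
 * (a simplification, not any vendor's actual microarchitecture): micro-ops
 * wait in a pool and issue as soon as their source registers are ready,
 * so a later, independent op can run ahead of an earlier, stalled one. */
#include <stdio.h>
#include <stdbool.h>

enum { NREGS = 8 };

typedef struct {
    const char *text;
    int  dst, src1, src2;  /* register numbers; src2 < 0 means a "load" with 3-cycle latency */
    int  cycles_left;      /* remaining latency once issued */
    bool issued, done;
} MicroOp;

int main(void)
{
    int  reg[NREGS]   = { 0 };
    bool ready[NREGS] = { false };
    reg[1] = 4; ready[1] = true;                 /* r1 holds an initial value */

    MicroOp pool[] = {                           /* program order */
        { "op0: r2 = load",    2, 1, -1, 3, false, false },
        { "op1: r3 = r1 + r1", 3, 1,  1, 1, false, false },
        { "op2: r4 = r2 + r3", 4, 2,  3, 1, false, false },
    };
    int n = sizeof pool / sizeof pool[0], finished = 0;

    for (int cycle = 0; finished < n; cycle++) {
        for (int i = 0; i < n; i++) {
            MicroOp *op = &pool[i];
            if (op->done) continue;
            bool srcs_ready = ready[op->src1] && (op->src2 < 0 || ready[op->src2]);
            if (!op->issued && srcs_ready) {     /* issue when data, not program order, allows */
                op->issued = true;
                printf("cycle %d: issue  %s\n", cycle, op->text);
            }
            if (op->issued && --op->cycles_left == 0) {
                reg[op->dst] = (op->src2 < 0) ? 100 /* pretend value loaded from memory */
                                              : reg[op->src1] + reg[op->src2];
                ready[op->dst] = true;
                op->done = true;
                finished++;
                printf("cycle %d: finish %s -> %d\n", cycle, op->text, reg[op->dst]);
            }
        }
    }
    return 0;
}
</pre>

In the sample run, op1 finishes before the earlier load op0, which is exactly the data-driven ordering that dataflow machines made explicit.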

=== Explicit token store approach (Monsoon) ===

=== Enhancing dataflow with control flow ===