CSC/ECE 506 Fall 2007/wiki1 10 aj: Difference between revisions


Revision as of 17:50, 4 September 2007

(Section 1.2.6)

New developments in dataflow and systolic architectures, if any.

Or if not, why are these styles not evolving with time?

Developments in Dataflow Architecture

(Reference: Based on ideas presented in the Wikipedia article at: http://en.wikipedia.org/wiki/Dataflow_architecture )

Dataflow architectures (from a hardware standpoint) were an important research topic in the 1970s and early 1980s. Interest in the field has subsided in recent years due to the inability to resolve certain inherent problems in the Dataflow architectural model. To understand this, the Dataflow concept is summarized below.


Summary of the Dataflow Idea

In its essence, a Dataflow architecture executes instructions based on whether the input arguments to the instructions are available. There is no program counter as in a Von Neumann computer. Tags were used to indicate the dependencies between instructions. When the tags contained simple memory addresses, the design was a static Dataflow machine. However, static designs could not support multiple simultaneous instances of a routine.
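The firing rule above can be illustrated with a minimal sketch (the instruction format and names here are invented for illustration, not taken from any real machine): an instruction executes as soon as all of its inputs have arrived, with no program counter dictating order.

```python
import operator

# Hypothetical instruction format: (name, op, input_names, output_name)
program = [
    ("add1", operator.add, ("a", "b"), "t1"),
    ("mul1", operator.mul, ("t1", "c"), "t2"),
    ("add2", operator.add, ("a", "c"), "t3"),
]

def run(program, inputs):
    values = dict(inputs)      # operands that have "arrived" so far
    pending = list(program)
    while pending:
        # Fire every instruction whose inputs are all available.
        ready = [ins for ins in pending
                 if all(src in values for src in ins[2])]
        if not ready:
            raise RuntimeError("deadlock: unsatisfiable dependency")
        for ins in ready:
            name, op, srcs, dst = ins
            values[dst] = op(*(values[s] for s in srcs))
            pending.remove(ins)
    return values

result = run(program, {"a": 1, "b": 2, "c": 4})
# add1 and add2 fire in the first round (both depend only on inputs);
# mul1 fires in the second round, once t1 exists.
```

Note that add2 fires alongside add1 even though it appears last in the program text; availability of operands, not position, determines execution order.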

Dynamic Dataflow machines used Content-Addressable Memory (CAM) (i.e., the tags were stored in memory) to solve this problem. Programs were loaded into the CAM. When the tagged operands of an instruction became available, the CAM would send the instruction to an execution unit. After execution, the output data and its tags were sent back to the CAM (as a data token). The CAM would then execute the next instruction whose dependencies had been satisfied.
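A toy model of that token-matching step may help (this is the author's simplification, not a real CAM design): each operand token carries a tag identifying its routine instance, and an instruction fires only when all operands bearing the same tag have arrived. Distinct tags keep separate instances of the same routine from interfering.

```python
from collections import defaultdict

# Matching store: (instruction id, tag) -> {operand port: value}
waiting = defaultdict(dict)

def arrive(instr_id, tag, port, value, arity, execute):
    """Deliver a data token; fire the instruction once all tagged operands match."""
    slot = waiting[(instr_id, tag)]
    slot[port] = value
    if len(slot) == arity:                # all operands with this tag present
        operands = [slot[p] for p in sorted(slot)]
        del waiting[(instr_id, tag)]      # free the matching-store entry
        return execute(*operands)         # result would travel on as a token
    return None

add = lambda a, b: a + b
# Two independent instances of the same "add" routine, distinguished by tag:
arrive("add", tag=0, port=0, value=1, arity=2, execute=add)    # waits
arrive("add", tag=1, port=0, value=10, arity=2, execute=add)   # waits
r0 = arrive("add", tag=0, port=1, value=2, arity=2, execute=add)
r1 = arrive("add", tag=1, port=1, value=20, arity=2, execute=add)
```

Because the two instances carry different tags, their partial operands never match each other, which is exactly the problem static (address-only) tagging could not solve.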

Since the CAM could identify instructions whose tags were not dependent on any unexecuted instruction, parallelization was possible. There were, however, major problems:

1) broadcasting data tokens efficiently in a massively parallel system.

2) dispatching instruction tokens efficiently in a massively parallel system.

3) a real program has a huge number of dependencies, and building a CAM large enough for them proved difficult.

Due to these issues, new developments in this area were largely stagnant.


New Developments in the Dataflow Architectural Model

(Referenced from: http://en.wikipedia.org/wiki/Out-of-order_execution) 1) A subset of the Dataflow model, out-of-order execution, is widely used in present-day architectures. It uses the conventional Von Neumann architecture to run execution windows in the usual order. Within a window, however, instructions are run according to the Dataflow paradigm, thus enabling parallelization and efficient utilization of CPU cycles.
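A simplified simulation can show the effect (this is an assumed, didactic model, not any real pipeline: instruction format, latencies, and the issue policy are all invented here). Instructions enter a fixed-size window in program order, but within the window any instruction whose source registers are ready issues immediately, so a later independent instruction can complete before an earlier long-latency one.

```python
import operator

def run_window(instrs, regs, window_size=4):
    """instrs: list of (dest, op, srcs, latency). Returns completion order."""
    window, in_flight, done_order = [], [], []
    i, cycle = 0, 0
    while i < len(instrs) or window or in_flight:
        cycle += 1
        # Retire operations whose latency has elapsed.
        for entry in list(in_flight):
            dest, op, vals, finish = entry
            if cycle >= finish:
                regs[dest] = op(*vals)
                done_order.append(dest)
                in_flight.remove(entry)
        # Fill the window in program order (the Von Neumann part).
        while i < len(instrs) and len(window) < window_size:
            window.append(instrs[i])
            i += 1
        # Issue every instruction whose sources are ready (the Dataflow part).
        for ins in list(window):
            dest, op, srcs, lat = ins
            if all(s in regs for s in srcs):
                vals = tuple(regs[s] for s in srcs)
                in_flight.append((dest, op, vals, cycle + lat))
                window.remove(ins)
    return done_order

order = run_window(
    [("r1", operator.add, ("r0", "r0"), 5),   # long-latency producer
     ("r2", operator.mul, ("r1", "r1"), 1),   # stalls waiting for r1
     ("r3", operator.add, ("r0", "r0"), 1)],  # independent of r1 and r2
    regs={"r0": 2},
)
# r3 completes before r1 and r2 despite coming later in program order
```

Here r3 retires first because r2's wait on r1 does not block it, which is the CPU-cycle utilization the paragraph above describes.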


(Referenced from: http://wavescalar.cs.washington.edu/) 2) The WaveScalar Instruction Set Architecture and Execution Model, currently being developed at the University of Washington, attempts to build an architectural model that can work with current imperative languages.


(Referenced from: http://portal.acm.org/citation.cfm?id=1032450 & http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?tp=&arnumber=1332593&isnumber=29428)

3) Carlström and Bodén propose a packet instruction set computer (PISC) architecture, which would employ the dataflow model for network processors. They detailed a 40 Gb/s network processor using Dataflow architecture.


(Referenced from: http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel2/3036/8629/00379757.pdf?isnumber=&arnumber=379757) 4) Dataflow architecture is also being evaluated for DSP processors, as outlined in the referenced paper by Lee et al. Their proposed design was a static Dataflow processor with 9 parallel processors for a multi-standard video codec.