CSC/ECE 506 Fall 2007/wiki1 6 r8e: Difference between revisions
Revision as of 23:06, 10 September 2007
Communication Architecture
Parallel computers require two kinds of architecture. The first is the computer architecture of the individual processors being interconnected: each processor has an instruction set architecture and a microarchitecture, the low-level organization that implements it. The second is a communication architecture: a way for the individual processors to communicate, synchronize, and work together to do useful things. The communication architecture is closely tied to the parallel computer's hardware, because the hardware must be able to accommodate the communication operations.
To see how parallel computers have become widespread, this site provides a chronology of microprocessors.
Parallel Programming Models
The communication architecture dictates which communication operations the user software may perform. Parallel programming models lay out the framework for how those communication operations take place. In the early days of parallel computing, a single programming model was used per system: the hardware was built to support only the communication operations of that specific model. As parallel computing became more widespread, however, the programming models converged; many models have been integrated and used together. Many parallel programming models exist, the most common of which are described below.
Shared Address Space
Shared address space programming is best understood by analogy to a bulletin board. Anyone can post a message on the board, and anyone can read what others have written. The key to the shared address space model is that all of the vital information is posted in shared locations that every processor can access with ordinary memory operations such as loads and stores.
OpenMP - shared-memory parallel programming.
Message Passing
Message passing machines convey information in a way similar to that of a phone call or a letter. There are very specific events that trigger the movement of information from a unique sender to a unique receiver. The sending process sends the data along with information that directs the message to the correct receiving process.
A widespread interface for message passing machines is the Message Passing Interface, or MPI. More information on MPI can be found at the MPI forum.
Data Parallel
Data parallel programming is by far the most regimented of the three parallel programming models. In this model, the same operation is carried out in parallel on different elements of a data set. Once all of the element-wise operations are finished, information is exchanged globally; in other words, after the local work is done, a global reorganization of the data is performed.
Intel's MMX processors are a good example of data parallel processing.
These lectures discuss data parallelism in F90 and HPF (Fortran).
Convergence
As parallel computer architecture has matured, it has become apparent that the parallel programming models were all created to solve the same sorts of problems. For this reason the division between the programming models has blurred: the hardware of recent parallel computers can support a variety of programming models, because the hardware primitives have become essentially the same across parallel architectures. A good example of this convergence is the way today's message-passing machines closely resemble nonuniform memory access (NUMA) machines, a type of machine that uses the shared address space model.
A 2003 comparison of parallel computing problems solved with message passing and shared address space models can be viewed here.
This paper, which was written in 2005, discusses the need to integrate message passing into shared address space models.
Layers of Abstraction
Links to Trends in Parallel Computing
Parallelism and Computing
Intel - Trends in Distributed Computing
Programming Trends in High Performance Computing