CSC/ECE 506 Fall 2007/wiki1 6 r8e

Latest revision as of 00:24, 11 September 2007

== Communication Architecture ==

Parallel computers require two types of architecture.  The first is the computer architecture of the individual processors that are interconnected: each processor has an instruction set architecture and a microarchitecture, the low-level organization that makes up the processor's design.  The second is the communication architecture, the means by which the individual processors communicate, synchronize, and work together to do useful work.  The communication architecture is closely linked to the hardware of the parallel computer, because the hardware must be able to support the communication operations.
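As a minimal sketch of the synchronization that a communication architecture must support, the following uses Python threads as stand-ins for processors: each one computes a partial result, then waits at a barrier so that no one reads the combined result before everyone has finished.  All names here are illustrative, not taken from any real parallel runtime.

```python
# Illustrative only: threads stand in for processors; a barrier provides
# the synchronization that a communication architecture must supply.
import threading

NUM_WORKERS = 4
partials = [0] * NUM_WORKERS           # one slot per "processor"
totals = [0] * NUM_WORKERS
barrier = threading.Barrier(NUM_WORKERS)

def worker(rank):
    partials[rank] = (rank + 1) ** 2   # each processor does its local work
    barrier.wait()                     # synchronize: all must finish first
    totals[rank] = sum(partials)       # now safe to read everyone's result

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_WORKERS)]
for t in threads: t.start()
for t in threads: t.join()
print(totals[0])                       # prints 30  (1 + 4 + 9 + 16)
```

Without the barrier, a fast thread could sum the `partials` list before slower threads had written their entries, which is exactly the kind of hazard the communication architecture exists to prevent.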

To see how parallel computers have become widespread, this site provides a chronology of microprocessors.

== Parallel Programming Models ==

The communication architecture dictates which communication operations the user software is allowed to perform.  Parallel programming models lay out the framework for the way in which those communication operations take place.  In the early days of parallel computing, a single programming model was used per system: the hardware was constructed to support only the communication operations that the specific programming model used.  However, as parallel computing became more widespread, the programming models converged; many models have become integrated and are used together.  Many parallel programming models exist, the most common of which are described below.

=== Shared Address Space ===

Shared address space programming is best understood by analogy with a bulletin board: anyone can post a message on the board, and anyone can read what others have written.  The key to the shared address space model is that all of the vital information is posted in shared locations that every process can access with ordinary memory operations such as loads and stores.

[http://www.sas.com/grid SAS grid computing.]

[http://www.openmp.org/drupal/ OpenMP - shared-memory parallel programming.]
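The bulletin-board analogy above can be sketched in a few lines: several threads "post" to one shared structure with ordinary stores, and anyone can later read it with ordinary loads.  The board, author names, and messages below are all made up for illustration.

```python
# Sketch of the shared address space model: one shared "bulletin board"
# that every thread can write to and read from with ordinary memory
# operations. A lock guards concurrent posts. Names are illustrative.
import threading

board = {}                     # the shared bulletin board
lock = threading.Lock()

def post(author, message):
    with lock:                 # an ordinary store into shared memory
        board[author] = message

threads = [threading.Thread(target=post, args=(f"t{i}", i * 10)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()

# Any thread (here, the main one) can read what the others wrote.
print(sorted(board.items()))   # prints [('t0', 0), ('t1', 10), ('t2', 20)]
```

Note that nothing is explicitly "sent" anywhere; communication happens implicitly through loads and stores to shared locations, which is the defining trait of this model.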

=== Message Passing ===

Message passing machines convey information much as a phone call or a letter does: specific events trigger the movement of information from a unique sender to a unique receiver.  The sending process sends the data along with information that directs the message to the correct receiving process.

A widespread interface for message passing machines is the Message Passing Interface, or MPI.  More information on MPI can be found at the MPI forum.
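The sender/receiver pattern above can be sketched with a thread pair that shares nothing except an explicit channel; here a `queue.Queue` stands in for an MPI-style send/receive pair, and the tag and payload are made up for illustration.

```python
# Sketch of the message passing model: a unique sender explicitly sends a
# message to a unique receiver; nothing is shared except the channel.
# queue.Queue stands in for an MPI-style send/recv; names are illustrative.
import threading
import queue

channel = queue.Queue()        # the only link between sender and receiver
received = []

def sender():
    channel.put(("greeting", [1, 2, 3]))   # explicit send: tag + payload

def receiver():
    tag, payload = channel.get()           # explicit, blocking receive
    received.append((tag, payload))

t_recv = threading.Thread(target=receiver)
t_send = threading.Thread(target=sender)
t_recv.start(); t_send.start()
t_send.join(); t_recv.join()
print(received)                # prints [('greeting', [1, 2, 3])]
```

Unlike the shared address space sketch, the receiver here cannot see the sender's data until the explicit send/receive event completes, which is what distinguishes the two models.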

=== Data Parallel ===

Data parallel programming is by far the most regimented of the three parallel programming models.  In this style of programming, the same operation is carried out on different elements of a data set by different processing elements.  Once all operations are finished, information is exchanged among all of them: after the local work is done, a global rearrangement of the data is performed.

Intel's [http://www.intel.com/design/archives/Processors/mmx/ MMX] processors are a good example of data parallel processing.

These [http://www.vcpc.univie.ac.at/activities/tutorials/HPF/lectures/html/jhm.2.html lectures] discuss data parallelism in Fortran 90 (F90) and High Performance Fortran (HPF).
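The data parallel pattern described above — the same operation applied across the elements of a data set, followed by a global combine — can be sketched as follows; the worker count, slice sizes, and data are all illustrative.

```python
# Sketch of the data parallel model: every worker applies the SAME
# operation to its own slice of the data; a global combine runs only
# after all workers have finished. Worker count and data are illustrative.
import threading

data = list(range(8))
results = [0] * len(data)

def apply_square(start, stop):
    for i in range(start, stop):       # same operation, different elements
        results[i] = data[i] * data[i]

workers = [threading.Thread(target=apply_square, args=(i * 2, i * 2 + 2))
           for i in range(4)]
for w in workers: w.start()
for w in workers: w.join()

total = sum(results)                   # global combine after the parallel phase
print(results, total)                  # squares of 0..7, total 140
```

This is the same structure that MMX-style hardware applies at the instruction level: one operation, many data elements, and a well-defined point where the partial results are brought back together.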

== Convergence ==

As parallel computer architecture has matured, it has become apparent that the parallel programming models were all created to solve the same sorts of problems.  The division between the programming models has therefore become blurred, since the hardware of recent parallel computers can support a variety of programming models; the underlying hardware primitives have become essentially the same across parallel architectures.  A good example of this convergence is how the message-passing machines of today closely resemble nonuniform memory access (NUMA) machines, a type of machine that uses the shared address space model.

A 2003 comparison of parallel computing problems solved with message passing and shared address space models can be viewed [http://portal.acm.org/citation.cfm?id=763446 here].

This [http://www.cs.princeton.edu/picasso/seminarsS04/MPI_Day1.pdf paper], written in 2005, discusses the need to integrate message passing into shared address space models.

This very good 1999 [http://www.cs.berkeley.edu/~culler/cs258-s99/slides/lec02/index.htm presentation] covers the convergence of parallel architectures.

== Layers of Abstraction ==

There are many layers of abstraction between the application and the hardware.  The user program is written using a certain programming model, which specifies the way information is communicated among different pieces of the program.  The compiler and/or libraries provide the layer of abstraction between the programming model and the available hardware primitives.  Figure 1.13 on pg. 27 of "Parallel Computer Architecture: A Hardware/Software Approach" displays these layers of abstraction well.  However, with the convergence of programming models and the use of data parallelism in both message passing and shared address space programming, it seems the layer of abstraction containing the programming models should be revised, if not removed entirely.

== Links to Trends in Parallel Computing ==

[http://www-unix.mcs.anl.gov/dbpp/text/node7.html Parallelism and Computing]<br>
[http://www.intel.com/cd/ids/developer/asmo-na/eng/segments/hpc/95223.htm?page=3 Intel - Trends in Distributed Computing]<br>
[http://css.psu.edu/news/nlsp98/progtrends.html Programming Trends in High Performance Computing]<br>
[http://www.cs.cmu.edu/~scandal/research-groups.html Supercomputing and Parallel Computing Research Groups]

== References ==

Culler, David and Singh, Jaswinder.  ''Parallel Computer Architecture: A Hardware/Software Approach''.  ISBN 1-55860-343-3