CSC/ECE 506 Fall 2007
Formatting Resources
Formatting Help Guide from MetaWiki
Peer-reviewed Assignment 1
Important Dates
- 08/31/2007 Peer-reviewed 1 Selection
- 09/05/2007 Peer-reviewed 1 Submission
- 09/07/2007 Peer-reviewed 1 First feedback
- 09/10/2007 Peer-reviewed 1 Resubmission
- 09/12/2007 Peer-reviewed 1 Final review
- 09/14/2007 Peer-reviewed 1 Review of review
Topics
- Sections 1.1 and 1.1.2
- Update performance trends in multiprocessors.
- Section 1.1.1, first half: Scientific/engineering application trends
- What characterizes present-day applications?
- How much memory, processor time, etc.?
- How high is the speedup?
- Section 1.1.1, second half: Commercial application trends
- What characterizes present-day applications?
- How much memory, processor time, etc.?
- How high is the speedup?
Commercial application trends - This summary gives an overview of commercial applications of parallel computing architecture. It also highlights who is doing parallel computing and what they are using it for.
- Section 1.1.3: Architectural trends
- How have architectures changed in the past 10 years?
- Update Figs. 1.8 and 1.9 with new points, for 2000, 2002, 2004, 2006, and 2007.
Architectural Trends - This summary gives a detailed overview of architectural trends. It highlights the concepts of VLIW (Very Long Instruction Word), multithreading, multi-core CPUs, and speculative execution, and updates Figs. 1.8 and 1.9 with new data points for 2000, 2002, 2004, 2006, and 2007.
- Section 1.1.4: Supercomputers
- Compare current supercomputers with those of 10 yrs. ago.
- Update Figures 1.10 to 1.12 with new data points. For 1.12, consult top500.org.
Supercomputers - This summary gives a detailed description of what a supercomputer is, traces the evolution of supercomputer architecture and performance, and explores the main metric (the LINPACK benchmark suite) for evaluating the effectiveness of supercomputers. It also takes a look at the most dominant supercomputers of the last 10 years.
Supercomputers - This summary gives a detailed description of what a supercomputer is and explores the main metric (the LINPACK benchmark suite) for evaluating the effectiveness of supercomputers. It also shows the current trend in the industry by examining the types of systems used in the 500 fastest computer systems in the world, and explores the concept of cluster computing.
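The LINPACK benchmark mentioned in both summaries rates a machine by how quickly it solves a dense n-by-n linear system, converting the elapsed time into a FLOP rate using the nominal operation count 2/3·n³ + 2·n². The sketch below illustrates that bookkeeping with a naive Gaussian elimination; it is only an illustration, not the optimized HPL code used for the real rankings, and the matrix contents and problem size are arbitrary assumptions.

 /* Minimal sketch of how a LINPACK-style GFLOP/s figure is derived:
  * time the solution of a dense n x n system Ax = b and divide the
  * nominal operation count (2/3*n^3 + 2*n^2) by the elapsed time.
  * The naive elimination below stands in for the real, heavily
  * optimized solver. */
 #include <stdio.h>
 #include <time.h>
 
 #define N 512
 
 static double a[N][N], b[N];
 
 int main(void) {
     /* Fill A and b with arbitrary, diagonally dominant values. */
     for (int i = 0; i < N; i++) {
         b[i] = 1.0;
         for (int j = 0; j < N; j++)
             a[i][j] = (i == j) ? N : 1.0;
     }
 
     clock_t start = clock();
 
     /* Forward elimination (no pivoting, for brevity). */
     for (int k = 0; k < N - 1; k++)
         for (int i = k + 1; i < N; i++) {
             double m = a[i][k] / a[k][k];
             for (int j = k; j < N; j++)
                 a[i][j] -= m * a[k][j];
             b[i] -= m * b[k];
         }
     /* Back substitution; the solution overwrites b. */
     for (int i = N - 1; i >= 0; i--) {
         double s = b[i];
         for (int j = i + 1; j < N; j++)
             s -= a[i][j] * b[j];
         b[i] = s / a[i][i];
     }
 
     double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
     double flops = (2.0 / 3.0) * (double)N * N * N + 2.0 * (double)N * N;
     printf("n = %d, time = %.3f s, ~%.2f GFLOP/s\n", N, secs, flops / secs / 1e9);
     return 0;
 }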
- Sections 1.2.1 and 1.2.4: Communication architecture
- Trends in last 10 years.
- How has data parallelism found its way into shared-memory and message-passing machines? An early example would be MMX.
- Would you change the number of layers in Fig. 1.13?
- Section 1.2.2: Shared address space
- Any changes in the organization of address spaces in the last 10 years?
- Are the interconnection structures different in new computers now than they were 10 years ago?
- What is the size and capacity of current SMPs?
- How have supercomputers evolved since the Cray T3E?
Shared address space - This summary highlights recent design trends in the shared address space model, the evolution of interconnect technology, and current high-end SMPs, and explores how supercomputers have evolved since the Cray T3E.
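To recall what the shared address space model looks like to the programmer, the sketch below uses POSIX threads: all threads read and write the same arrays through ordinary loads and stores, with no explicit messages. The thread count, array size, and work split are illustrative assumptions rather than anything taken from the summary above.

 /* Minimal shared-address-space sketch: threads communicate simply by
  * reading and writing the same memory; no explicit messages are sent.
  * Thread count and array size are arbitrary choices for illustration.
  * Compile with -pthread. */
 #include <pthread.h>
 #include <stdio.h>
 
 #define NTHREADS 4
 #define N 1000000
 
 static double data[N];             /* shared by all threads */
 static double partial[NTHREADS];
 
 static void *sum_part(void *arg) {
     long id = (long)arg;
     long lo = id * (N / NTHREADS), hi = lo + N / NTHREADS;
     double s = 0.0;
     for (long i = lo; i < hi; i++)
         s += data[i];
     partial[id] = s;               /* result written into shared memory */
     return NULL;
 }
 
 int main(void) {
     pthread_t t[NTHREADS];
     for (long i = 0; i < N; i++) data[i] = 1.0;
 
     for (long id = 0; id < NTHREADS; id++)
         pthread_create(&t[id], NULL, sum_part, (void *)id);
 
     double total = 0.0;
     for (long id = 0; id < NTHREADS; id++) {
         pthread_join(t[id], NULL);
         total += partial[id];
     }
     printf("sum = %f\n", total);
     return 0;
 }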
- Section 1.2.3: Message passing
- Are blade servers an extension of message passing?
- How have blade architectures evolved over the past 10 years?
Message Passing - This summary highlights the typical structure of message-passing machines and the advantages of using message passing, and gives a detailed introduction to what message passing is.
Blade Server Architecture - This summary highlights the general blade server architecture.
Message passing - This summary explores the LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) algorithm, starting from the sequential algorithm. It also explores the concepts of Decomposition and Assignment, Orchestration, and Mapping.
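For contrast with the shared address space sketch above, the following minimal example uses the standard MPI send/receive calls that message-passing codes such as LAMMPS build on; the payload, tag, and rank roles are arbitrary illustrative choices.

 /* Minimal message-passing sketch: rank 0 sends an integer to rank 1,
  * which receives it with an explicit matching receive. Communication
  * happens only through these send/receive pairs, never through shared
  * memory. Compile with mpicc and run with mpirun -np 2. */
 #include <mpi.h>
 #include <stdio.h>
 
 int main(int argc, char **argv) {
     int rank, value;
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
 
     if (rank == 0) {
         value = 42;                                   /* arbitrary payload */
         MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
     } else if (rank == 1) {
         MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
         printf("rank 1 received %d from rank 0\n", value);
     }
 
     MPI_Finalize();
     return 0;
 }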
- Section 1.2.5: Trends in vector processing and array processing.
- New machines have recently been announced. Why will this be an important architectural dimension in the coming years?
Trends in vector processing and array processing - This summary highlights cache sizes in multicore architectures.
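To make the data-parallel execution style behind vector and array processing concrete, the sketch below uses x86 SSE intrinsics, successors of the MMX extensions cited earlier, to add four floats per instruction. The array names and sizes are illustrative assumptions.

 /* Minimal SIMD sketch: add two float arrays four elements at a time
  * with SSE intrinsics, the line of short-vector extensions that began
  * with MMX. The array length is a multiple of 4 for simplicity. */
 #include <xmmintrin.h>
 #include <stdio.h>
 
 #define N 16
 
 int main(void) {
     float a[N], b[N], c[N];
     for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }
 
     for (int i = 0; i < N; i += 4) {
         __m128 va = _mm_loadu_ps(&a[i]);   /* load 4 floats */
         __m128 vb = _mm_loadu_ps(&b[i]);
         __m128 vc = _mm_add_ps(va, vb);    /* 4 adds in one instruction */
         _mm_storeu_ps(&c[i], vc);
     }
 
     for (int i = 0; i < N; i++)
         printf("%.1f ", c[i]);
     printf("\n");
     return 0;
 }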
- Section 1.2.6
- New developments in dataflow and systolic architectures, if any.
- Or if not, why are these styles not evolving with time?
Dataflow and Systolic Architectures - This summary gives a detailed description of new developments in dataflow and systolic architectures. It also explores why systolic architecture has not truly evolved with time to the extent of other architectures.
Dataflow and Systolic Architectures - This summary gives a detailed description of new developments in dataflow and systolic architectures, looks at the current state of both, and explores several papers that propose different applications for systolic architectures.
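Since both summaries discuss proposed applications of systolic arrays, a small software simulation may help make the idea concrete: every cell performs a multiply-accumulate each step and passes its operands to its neighbors in lockstep. This is only an illustrative sketch of the classic systolic matrix-multiply arrangement; the toy matrices and array size are assumptions.

 /* Toy simulation of a 2-D systolic array computing C = A * B.
  * Each cell (i,j) holds an accumulator and two pass-through registers;
  * rows of A enter from the left and columns of B from the top, each
  * skewed by one step per row/column. N is a toy size. */
 #include <stdio.h>
 
 #define N 3
 #define STEPS (3 * N - 2)   /* time for all data to pass through */
 
 int main(void) {
     double A[N][N] = {{1,2,3},{4,5,6},{7,8,9}};
     double B[N][N] = {{9,8,7},{6,5,4},{3,2,1}};
     double a_reg[N][N] = {0}, b_reg[N][N] = {0}, C[N][N] = {0};
     double a_next[N][N], b_next[N][N];
 
     for (int t = 0; t < STEPS; t++) {
         for (int i = 0; i < N; i++)
             for (int j = 0; j < N; j++) {
                 /* value arriving from the left (or the skewed A input) */
                 double a = (j == 0)
                     ? ((t - i >= 0 && t - i < N) ? A[i][t - i] : 0.0)
                     : a_reg[i][j - 1];
                 /* value arriving from above (or the skewed B input) */
                 double b = (i == 0)
                     ? ((t - j >= 0 && t - j < N) ? B[t - j][j] : 0.0)
                     : b_reg[i - 1][j];
                 C[i][j] += a * b;      /* multiply-accumulate in the cell */
                 a_next[i][j] = a;      /* pass the A value to the right */
                 b_next[i][j] = b;      /* pass the B value downward */
             }
         /* all cells update their registers simultaneously */
         for (int i = 0; i < N; i++)
             for (int j = 0; j < N; j++) {
                 a_reg[i][j] = a_next[i][j];
                 b_reg[i][j] = b_next[i][j];
             }
     }
 
     for (int i = 0; i < N; i++) {
         for (int j = 0; j < N; j++) printf("%6.1f ", C[i][j]);
         printf("\n");
     }
     return 0;
 }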
- Sections 1.3.1 and 1.3.2: Communication and programming model
- How have reordering strategies evolved to accommodate larger multicomputers?
- Have new kinds of synchronization operations been developed?
- I doubt that other topics covered in these sections have changed much, but do check.
- Sections 1.3.3 and 1.3.4: Most changes here are probably related to performance metrics.
- Cite other models for measuring artifacts such as data-transfer time, overhead, occupancy, and communication cost. Focus on the models that are most useful in practice.
Performance metrics - This summary gives a detailed description of communication and replication. It also looks at the artifacts involved in measuring performance: overhead and occupancy, communication cost, and scalability.
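As a rough illustration of the kind of cost model these sections discuss, the sketch below adds up per-message overhead, occupancy, network delay, and transfer time, then scales by message frequency minus the portion overlapped with computation. The exact formula layout and all parameter values are illustrative assumptions, not figures taken from the summary.

 /* Rough sketch of a layered communication cost model: per-message time
  * is the sum of software overhead, assist occupancy, network delay, and
  * transfer time, and total cost scales with message frequency minus
  * whatever is overlapped with computation. All numbers are made up. */
 #include <stdio.h>
 
 /* time per message, in microseconds */
 static double comm_time(double overhead_us, double occupancy_us,
                         double network_delay_us,
                         double bytes, double bandwidth_bytes_per_us) {
     return overhead_us + occupancy_us + network_delay_us
          + bytes / bandwidth_bytes_per_us;
 }
 
 int main(void) {
     double per_msg = comm_time(5.0, 2.0, 1.0, 4096.0, 1000.0); /* 4 KB msg */
     double freq = 10000.0;     /* messages per process per run */
     double overlap_us = 3.0;   /* portion hidden behind computation */
     double cost_us = freq * (per_msg - overlap_us);
     printf("per-message time = %.2f us, total comm cost = %.2f ms\n",
            per_msg, cost_us / 1000.0);
     return 0;
 }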