CSC/ECE 506 Fall 2007


Formatting Resources

Formatting Help Guide from MetaWiki


Peer-reviewed Assignment 1

Important Dates

  • 08/31/2007 Peer-reviewed 1 Selection
  • 09/05/2007 Peer-reviewed 1 Submission
  • 09/07/2007 Peer-reviewed 1 First feedback
  • 09/10/2007 Peer-reviewed 1 Resubmission
  • 09/12/2007 Peer-reviewed 1 Final review
  • 09/14/2007 Peer-reviewed 1 Review of review

Topics

  • Sections 1.1 and 1.1.2
    • Update performance trends in multiprocessors.

Performance trends in multiprocessors - This summary discusses the future of Moore's Law and the price versus performance of multiprocessor architectures. It concludes by considering how the relationship between microprocessor development and Moore's Law may change in the future.


  • Section 1.1.1, first half: Scientific/engineering application trends
    • What characterizes present-day applications?
    • How much memory, processor time, etc.?
    • How high is the speedup?

Scientific/engineering application trends - This summary defines the TPC-C benchmark from the Transaction Processing Performance Council (TPC), lists the Top 10 systems according to TPC-C benchmark performance, lists the Top 10 according to TPC-C performance per unit price, and graphs the throughput versus the number of processors for each vendor. It also highlights processor and memory speeds, commercial computers, and the concept of speedup.
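
As a quick, concrete illustration of the speedup concept raised above, the sketch below computes ideal speedup and the speedup predicted by Amdahl's Law for a hypothetical workload; the serial fraction and processor counts are made-up numbers, not figures from the summary.

  #include <stdio.h>

  /* Amdahl's Law: speedup on p processors when a fraction f of the work
     is inherently serial.  All numbers here are illustrative only. */
  static double amdahl_speedup(double f, int p)
  {
      return 1.0 / (f + (1.0 - f) / p);
  }

  int main(void)
  {
      const double f = 0.05;                     /* assume 5% serial work */
      const int procs[] = {2, 4, 8, 16, 32, 64};

      for (unsigned i = 0; i < sizeof procs / sizeof procs[0]; i++)
          printf("p = %2d  ideal speedup = %2d  Amdahl speedup = %5.2f\n",
                 procs[i], procs[i], amdahl_speedup(f, procs[i]));
      return 0;
  }

Even with only 5% serial work, the predicted speedup at 64 processors is roughly 15, which is why the later sections pay so much attention to communication and serialization overheads.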


  • Section 1.1.1, second half: Commercial application trends
    • What characterizes present-day applications?
    • How much memory, processor time, etc.?
    • How high is the speedup?

Commercial application trends - This summary gives an overview of commercial applications of parallel computing architecture. It also highlights who is doing parallel computing and what they are using it for.


  • Section 1.1.3: Architectural trends
    • How have architectures changed in the past 10 years?
    • Update Figs. 1.8 and 1.9 with new points, for 2000, 2002, 2004, 2006, and 2007.

Architectural Trends - Summary 1 - This summary gives a detailed observation of architectural trends, highlighting VLIW (very long instruction word) processors, multithreading, multi-core CPUs, and speculative execution. It also updates Figs. 1.8 and 1.9 with new points for 2000, 2002, 2004, 2006, and 2007.

Architectural Trends - Summary 2 - This summary gives a general overview of architectural trends. It also highlights "My dual quad-core with quad-SLI", the use of silicon/carbon, and buses and memory.


  • Section 1.1.4: Supercomputers
    • Compare current supercomputers with those of 10 yrs. ago.
    • Update Figures 1.10 to 1.12 with new data points. For 1.12, consult top500.org.

Supercomputers - Summary 1 - This summary details what a supercomputer is, the evolution of supercomputer architecture and performance, and explores the metric (LINPACK Benchmark Suite) most commonly used for evaluating the effectiveness of supercomputers. It also takes a look at the most dominant supercomputers of the last 10 years.

Supercomputers - Summary 2 - This summary details what a supercomputer is and explores the main metric (LINPACK Benchmark Suite) for evaluating the effectiveness of supercomputers. It also illustrates current trends in the industry by exploring the types of systems used in the 500 fastest computer systems in the world. It explores the concept of cluster computing.
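
Since both summaries center on the LINPACK benchmark, here is a minimal sketch of how a LINPACK-style GFLOP/s figure is derived: the operation count commonly quoted for solving a dense n-by-n system (roughly 2/3·n^3 + 2·n^2) divided by the solve time. The problem size and time below are hypothetical.

  #include <stdio.h>

  /* LINPACK-style rate: floating-point operations for solving a dense
     n x n linear system (approx. 2/3*n^3 + 2*n^2, the count usually
     quoted for such runs) divided by the measured solve time.
     Both inputs below are made-up numbers. */
  int main(void)
  {
      const double n = 100000.0;        /* hypothetical problem size */
      const double seconds = 3600.0;    /* hypothetical solve time   */

      double flops = (2.0 / 3.0) * n * n * n + 2.0 * n * n;
      printf("Sustained rate: %.1f GFLOP/s\n", flops / seconds / 1e9);
      return 0;
  }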


  • Sections 1.2.1 and 1.2.4: Communication architecture
    • Trends in last 10 years.
    • How has data parallelism found its way into shared-memory and message-passing machines? An early example would be MMX.
    • Would you change the number of layers in Fig. 1.13?

Message Passing - This summary highlights the typical structure of message-passing machines and the advantages of using message passing, and gives a detailed introduction to what message passing is.
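
To make the message-passing style concrete, below is a minimal two-process MPI sketch (it assumes an MPI implementation such as MPICH or Open MPI is installed); it shows only the explicit send/receive communication the summary describes.

  #include <mpi.h>
  #include <stdio.h>

  /* Minimal message passing: rank 0 sends one integer to rank 1.
     Communication is explicit; the ranks share no memory. */
  int main(int argc, char **argv)
  {
      int rank, value;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      if (rank == 0) {
          value = 42;
          MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
      } else if (rank == 1) {
          MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          printf("rank 1 received %d\n", value);
      }

      MPI_Finalize();
      return 0;
  }

A program like this would typically be built with mpicc and launched with mpirun -np 2.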


  • Section 1.2.2: Shared address space
    • Any changes in the organization of address spaces in the last 10 years?
    • Are the interconnection structures different in new computers now than they were 10 years ago?
    • What is the size and capacity of current SMPs?
    • How have supercomputers evolved since the Cray T3E?

Shared address space - This summary highlights recent design trends in shared address spaces, the evolution of interconnect technology, and current high-end SMPs, and explores how supercomputers have evolved since the Cray T3E.
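
For contrast with the message-passing sketch above, here is a minimal shared-address-space example using POSIX threads: the threads communicate through ordinary loads and stores to a shared variable, with a mutex providing synchronization. It is a generic illustration, not modeled on any particular SMP from the summary.

  #include <pthread.h>
  #include <stdio.h>

  #define NTHREADS 4

  /* All threads update the same counter through the shared address space;
     the mutex supplies the synchronization that shared-memory programs need. */
  static long counter = 0;
  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

  static void *worker(void *arg)
  {
      (void)arg;
      for (int i = 0; i < 100000; i++) {
          pthread_mutex_lock(&lock);
          counter++;
          pthread_mutex_unlock(&lock);
      }
      return NULL;
  }

  int main(void)
  {
      pthread_t tid[NTHREADS];

      for (int i = 0; i < NTHREADS; i++)
          pthread_create(&tid[i], NULL, worker, NULL);
      for (int i = 0; i < NTHREADS; i++)
          pthread_join(tid[i], NULL);

      printf("counter = %ld (expected %d)\n", counter, NTHREADS * 100000);
      return 0;
  }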


  • Section 1.2.3: Message passing
    • Are blade servers an extension of message passing?
    • How have blade architectures evolved over the past 10 years?

General Blade Server Architecture - This summary highlights the general blade-server architecture. It also gives a detailed figure that identifies the different components within a general blade-server architecture.

Blade Servers - This summary introduces the general blade-server and highlights the advantages of blade servers. It also explores its evolution, its architecture, blade enclosures, and if blade servers are an extension of message passing.

Evolution of Blade Servers - This summary focuses on the evolution from standalone conventional servers to the blade servers that have become popular today.


  • Section 1.2.5: Trends in vector processing and array processing.
    • New machines have recently been announced. Why will this be an important architectural dimension in the coming years?

Trends in vector processing and array processing - Summary 1 - This summary highlights cache sizes in multicore architectures.

Trends in vector processing and array processing - Summary 2 - This summary highlights current, past, and emerging trends in vector processing and array processing. It also discusses the advantages and pitfalls of vector processing.
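
As a small taste of the data-parallel style these summaries discuss, the sketch below adds two float arrays four elements at a time using x86 SSE intrinsics (it assumes an SSE-capable compiler and CPU); vector machines and wider SIMD extensions apply the same one-instruction-many-elements idea.

  #include <xmmintrin.h>   /* x86 SSE intrinsics */
  #include <stdio.h>

  #define N 16             /* assumed to be a multiple of 4 for simplicity */

  int main(void)
  {
      float a[N], b[N], c[N];

      for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

      /* Each vector instruction operates on four floats at once. */
      for (int i = 0; i < N; i += 4) {
          __m128 va = _mm_loadu_ps(&a[i]);
          __m128 vb = _mm_loadu_ps(&b[i]);
          _mm_storeu_ps(&c[i], _mm_add_ps(va, vb));
      }

      for (int i = 0; i < N; i++)
          printf("%g ", c[i]);
      printf("\n");
      return 0;
  }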


  • Section 1.2.6
    • New developments in dataflow and systolic architectures, if any.
    • Or if not, why are these styles not evolving with time?

Dataflow and Systolic Architectures - Summary 1 - This summary gives a detailed description of the new developments in dataflow and systolic architectures. It also explores why systolic architectures have not evolved with time to the extent that other architectures have.

Dataflow and Systolic Architectures - Summary 2 - This summary gives a detailed description of the new developments in dataflow and systolic architectures. It also looks at the current state of both dataflow and systolic architectures and surveys several papers that propose different applications for systolic architectures.
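
For readers new to the systolic idea, the following sketch simulates a small output-stationary systolic array performing matrix multiplication in software: operands march one processing element per cycle while partial sums stay in place. It is purely illustrative and not taken from any design discussed in the summaries.

  #include <stdio.h>

  #define N 3   /* N x N systolic array multiplying two N x N matrices */

  int main(void)
  {
      int A[N][N] = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
      int B[N][N] = {{9, 8, 7}, {6, 5, 4}, {3, 2, 1}};
      int C[N][N] = {{0}};                          /* results stay in place */
      int areg[N][N] = {{0}}, breg[N][N] = {{0}};   /* operands held by PEs  */
      int ain[N][N], bin[N][N];

      /* Rows of A enter from the west and columns of B from the north,
         each skewed by one cycle per row/column; 3N-2 cycles fill and
         drain the array. */
      for (int t = 0; t < 3 * N - 2; t++) {
          for (int i = 0; i < N; i++)
              for (int j = 0; j < N; j++) {
                  int ka = t - i, kb = t - j;
                  ain[i][j] = (j == 0) ? ((ka >= 0 && ka < N) ? A[i][ka] : 0)
                                       : areg[i][j - 1];
                  bin[i][j] = (i == 0) ? ((kb >= 0 && kb < N) ? B[kb][j] : 0)
                                       : breg[i - 1][j];
              }
          for (int i = 0; i < N; i++)
              for (int j = 0; j < N; j++) {
                  C[i][j] += ain[i][j] * bin[i][j];  /* multiply-accumulate    */
                  areg[i][j] = ain[i][j];            /* pass operand eastward  */
                  breg[i][j] = bin[i][j];            /* pass operand southward */
              }
      }

      for (int i = 0; i < N; i++) {
          for (int j = 0; j < N; j++)
              printf("%4d", C[i][j]);
          printf("\n");
      }
      return 0;
  }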


  • Sections 1.3.1 and 1.3.2: Communication and programming model
    • How have reordering strategies evolved to accommodate larger multicomputers?
    • Have new kinds of synchronization operations been developed?
    • I doubt that other topics covered in these sections have changed much, but do check.

Communication and programming models - Summary 1 - This summary gives brief overviews of the SSCI and SCI protocols and discusses why additional states are needed.

Communication and programming models - Summary 2 - This summary gives a detailed description of directory-based cache coherence. It also explores Simple Scalable Coherent Interface (SSCI) and the Scalable Coherent Interface (SCI).

Communication and programming models - Summary 3 - This summary gives a detailed description of true sharing and false sharing. It discusses the problem with false sharing, strategies to combat false sharing, and diminishing true-sharing misses.
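
To illustrate the false-sharing problem that summary discusses, the sketch below pads each thread's counter to its own cache line; the 64-byte line size is an assumption about the target machine, and removing the pad field puts all counters back on one line, recreating the coherence ping-ponging described above.

  #include <pthread.h>
  #include <stdio.h>

  #define NTHREADS 4
  #define ITERS    1000000
  #define LINE     64        /* assumed cache-line size in bytes */

  /* The counters are logically independent, so padding each one to its own
     (assumed) cache line avoids false sharing; without the pad, writes by
     different threads would bounce a single line around the machine. */
  struct padded { volatile long value; char pad[LINE - sizeof(long)]; };

  static struct padded counters[NTHREADS];

  static void *worker(void *arg)
  {
      long id = (long)arg;
      for (long i = 0; i < ITERS; i++)
          counters[id].value++;        /* each thread writes its own line */
      return NULL;
  }

  int main(void)
  {
      pthread_t tid[NTHREADS];

      for (long i = 0; i < NTHREADS; i++)
          pthread_create(&tid[i], NULL, worker, (void *)i);
      for (long i = 0; i < NTHREADS; i++)
          pthread_join(tid[i], NULL);

      for (int i = 0; i < NTHREADS; i++)
          printf("counter %d = %ld\n", i, counters[i].value);
      return 0;
  }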


  • Sections 1.3.3 and 1.3.4: Most changes here are probably related to performance metrics.
    • Cite other models for measuring artifacts such as data-transfer time, overhead, occupancy, and communication cost. Focus on the models that are most useful in practice.

Performance metrics - This summary gives a detailed description of communication and replication. It also looks at the artifacts of measuring performance, overhead and occupancy, communication cost, and scalability.
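
As a worked example of the linear communication-cost model these sections build on (time per message as the sum of overhead, occupancy, network delay, and transfer time, with total cost scaled by message frequency minus overlap), the sketch below plugs in hypothetical numbers; every constant is made up for illustration.

  #include <stdio.h>

  /* A simple linear model in the spirit of Sections 1.3.3-1.3.4:
       time per message = overhead + occupancy + network delay + size/bandwidth
       total cost       = messages * (time per message - overlap)
     All parameter values below are hypothetical. */
  int main(void)
  {
      double overhead_us  = 5.0;      /* processor overhead per message */
      double occupancy_us = 2.0;      /* assist/NIC occupancy           */
      double delay_us     = 1.5;      /* network transit delay          */
      double bytes        = 4096.0;   /* message size                   */
      double bw_bytes_us  = 1000.0;   /* ~1 GB/s, in bytes per us       */
      double overlap_us   = 3.0;      /* portion hidden by computation  */
      double messages     = 1.0e5;    /* messages sent by one processor */

      double per_msg = overhead_us + occupancy_us + delay_us + bytes / bw_bytes_us;
      double cost_us = messages * (per_msg - overlap_us);

      printf("time per message   = %.2f us\n", per_msg);
      printf("communication cost = %.2f s\n", cost_us / 1e6);
      return 0;
  }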

Peer-reviewed Assignment 2

Important Dates

  • 09/17/2007 Peer-reviewed 2 Selection
  • 09/24/2007 Peer-reviewed 2 Submission
  • 09/26/2007 Peer-reviewed 2 First feedback
  • 09/28/2007 Peer-reviewed 2 Resubmission
  • 10/03/2007 Peer-reviewed 2 Final review
  • 10/05/2007 Peer-reviewed 2 Review of review

Topics

  • Sections 1.1 and 1.1.2
    • Update performance trends in multiprocessors.

Performance trends in multiprocessors - This summary discusses the future of Moore's Law and the price versus performance of multiprocessor architectures. It concludes by considering how the relationship between microprocessor development and Moore's Law may change in the future.


  • Animations
    • Do an animation of how consistency can be violated given a particular code sequence from the textbook (which sequence will be named on the signup sheet).
    • Show how multilevel inclusion in caches interacts with cache coherence. Specifically, take a code sequence that results in level-2 cache misses in at least two processors, and show what information is transferred to which cache levels, assuming that the L1 cache is direct mapped and the L2 cache is two-way associative.
  • Wiki page
    • Pick another parallel application, not covered in the text, and less than 7 years old, and describe the various steps in parallelizing it (decomposition, assignment, orchestration, and mapping). You may use an example from the peer-reviewed literature, or a Web page. You do not have to go into great detail, but you should describe enough about these four stages to make the algorithm interesting.
    • Create a table of caches used in current multicore architectures, including such parameters as number of levels, line size, size and associativity of each level, latency of each level, whether each level is shared, and coherence protocol used. Compare this with two or three recent single-core designs.
    • MSIMD architectures have garnered quite a bit of attention recently. Read a few papers on these architectures and write a survey of applications for which they would be suitable. If possible, talk about the steps in parallelizing these applications (decomposition, assignment, orchestration, and mapping).
    • On p. 300 of the text, cache-to-cache sharing is introduced. If a cache has an up-to-date copy of a block, should it supply it, or should it wait for memory to do it? What do current multiprocessors do? In current machines, is cache-to-cache sharing faster or slower than waiting for memory to respond?
  • Sections 2.2, 2.2.1 and 2.2.2
    • Special Topic: Parallelizing an application

LAMMPS and a Flowchart of Molecular Dynamics Sequential code - This summary picks a parallel application not covered in the text and less than 7 years old, and describes the various steps in parallelizing it (decomposition, assignment, orchestration, and mapping). It explores the LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) algorithm and its sequential form, and covers Decomposition & Assignment, Orchestration, and Mapping for the LAMMPS programming model.

MapReduce - This summary explores the MapReduce programming model and covers Decomposition & Assignment, Orchestration, and Mapping for it.

Shuffled Complex Evolution Metropolis (SCEM-UA) - This summary explores the Shuffled Complex Evolution Metropolis (SCEM-UA) algorithm and covers Decomposition & Assignment, Orchestration, and Mapping for parallelizing it.
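
To make the four parallelization steps these write-ups walk through concrete, the generic pthreads sketch below parallelizes a toy array sum and labels where decomposition, assignment, orchestration, and mapping appear; it is not the LAMMPS, MapReduce, or SCEM-UA code itself.

  #include <pthread.h>
  #include <stdio.h>

  #define N        1000000
  #define NTHREADS 4

  /* Toy illustration of the four steps for summing an array:
       decomposition: split the N elements into NTHREADS contiguous chunks
       assignment:    chunk t is given to thread t
       orchestration: partial sums are combined under a mutex
       mapping:       left to the OS scheduler (threads -> processors)   */
  static double data[N];
  static double total = 0.0;
  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

  static void *sum_chunk(void *arg)
  {
      long t = (long)arg;
      long lo = t * (N / NTHREADS);
      long hi = (t == NTHREADS - 1) ? N : lo + N / NTHREADS;
      double partial = 0.0;

      for (long i = lo; i < hi; i++)
          partial += data[i];

      pthread_mutex_lock(&lock);      /* orchestration: synchronized merge */
      total += partial;
      pthread_mutex_unlock(&lock);
      return NULL;
  }

  int main(void)
  {
      pthread_t tid[NTHREADS];

      for (long i = 0; i < N; i++)
          data[i] = 1.0;
      for (long t = 0; t < NTHREADS; t++)
          pthread_create(&tid[t], NULL, sum_chunk, (void *)t);
      for (long t = 0; t < NTHREADS; t++)
          pthread_join(tid[t], NULL);

      printf("sum = %.0f (expected %d)\n", total, N);
      return 0;
  }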


  • Sections 2.2, 2.2.1 and 2.2.2
    • Special Topic: Cache sizes in multicore architectures

Cache sizes in multicore architectures - Summary 1 - This summary creates a table of caches used in current multicore architectures, including such parameters as number of levels, line size, size and associativity of each level, latency of each level, whether each level is shared, and the coherence protocol used. It also compares current multicore architectures with two or three recent single-core designs.

Cache sizes in multicore architectures - Summary 2 - This summary presents a similar table of cache parameters for current multicore architectures and likewise compares them with two or three recent single-core designs.
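
As a practical companion to such a table, cache geometry can be queried at run time on Linux/glibc systems with sysconf, as sketched below; the _SC_LEVEL* names are a glibc extension and may be unavailable (returning 0 or -1) elsewhere.

  #include <unistd.h>
  #include <stdio.h>

  /* Query cache parameters on a Linux/glibc system (sizes and line sizes
     in bytes, associativity in ways; 0 or -1 means not reported). */
  int main(void)
  {
      printf("L1 data cache size : %ld\n", sysconf(_SC_LEVEL1_DCACHE_SIZE));
      printf("L1 line size       : %ld\n", sysconf(_SC_LEVEL1_DCACHE_LINESIZE));
      printf("L1 associativity   : %ld\n", sysconf(_SC_LEVEL1_DCACHE_ASSOC));
      printf("L2 cache size      : %ld\n", sysconf(_SC_LEVEL2_CACHE_SIZE));
      printf("L2 associativity   : %ld\n", sysconf(_SC_LEVEL2_CACHE_ASSOC));
      printf("L3 cache size      : %ld\n", sysconf(_SC_LEVEL3_CACHE_SIZE));
      return 0;
  }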


Peer-reviewed Assignment 3

Important Dates

  • 10/12/2007 Peer-reviewed 3 Selection
  • 10/17/2007 Peer-reviewed 3 Submission
  • 10/19/2007 Peer-reviewed 3 First feedback
  • 10/22/2007 Peer-reviewed 3 Resubmission
  • 10/24/2007 Peer-reviewed 3 Final review
  • 10/26/2007 Peer-reviewed 3 Review of review


Topics

  • Sections 1.1 and 1.1.2
    • Update performance trends in multiprocessors.

Performance trends in multiprocessors - This summary discusses the future of Moore's Law and the price versus performance of multiprocessor architectures. It concludes by considering how the relationship between microprocessor development and Moore's Law may change in the future.


Peer-reviewed Assignment 4

Important Dates

  • 11/23/2007 Peer-reviewed 4 Selection
  • 11/28/2007 Peer-reviewed 4 Submission
  • 11/30/2007 Peer-reviewed 4 First feedback
  • 12/03/2007 Peer-reviewed 4 Resubmission
  • 12/05/2007 Peer-reviewed 4 Final review
  • 12/07/2007 Peer-reviewed 4 Review of review


Topics

  • Sections 1.1 and 1.1.2
    • Update performance trends in multiprocessors.

Performance trends in multiprocessors - This summary discusses the future of Moore's Law and the price versus performance of multiprocessor architectures. It concludes by considering how the relationship between microprocessor development and Moore's Law may change in the future.