Comparisons Between Supercomputers

== Introduction ==

Supercomputers are specialized computers that are generally very expensive, are not available for general-purpose use, and are used for computations that require large amounts of numerical processing. They are used in scientific, military, and graphics applications, and for other number- or data-intensive computations <ref>http://dictionary.reference.com/browse/supercomputer Definition of supercomputer</ref>, <ref>http://www.webopedia.com/TERM/S/supercomputer.html Definition of supercomputer</ref>.

Supercomputers are generally compared quantitatively using floating-point operations per second, or FLOPS. As computing power increases, higher levels of FLOPS are expressed with standard metric prefixes: for example, KiloFLOPS for thousands of FLOPS and MegaFLOPS for millions of FLOPS <ref>http://kevindoran.blogspot.com/2011/04/comparing-performance-of-supercomputers.html Doran, Kevin (April 2011) Comparing the performance of supercomputers</ref>. Often only the first letter of the prefix is written with FLOPS; for example, GigaFLOPS, or billions of FLOPS, is abbreviated GFLOPS <ref>http://top500.org/faq/what_gflop_s Definition of GFLOPS</ref>.
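To make the prefixes concrete, the short Python sketch below (an illustration added for this article, not part of any benchmark suite) converts a raw FLOPS figure into the largest fitting prefixed unit; the example value mirrors the GridRepublic.org figure quoted later in this article.

<pre>
# Illustrative only: convert a raw FLOPS figure into the prefixed units
# (KFLOPS, MFLOPS, GFLOPS, TFLOPS) used when comparing supercomputers.
PREFIXES = [("TFLOPS", 1e12), ("GFLOPS", 1e9), ("MFLOPS", 1e6), ("KFLOPS", 1e3)]

def format_flops(flops):
    """Return a human-readable string using the largest prefix that fits."""
    for name, scale in PREFIXES:
        if flops >= scale:
            return f"{flops / scale:.3f} {name}"
    return f"{flops:.0f} FLOPS"

# 10,979,114 GFLOPS expressed in TFLOPS (cf. the GridRepublic.org figure below)
print(format_flops(10979114 * 1e9))   # -> 10979.114 TFLOPS
</pre>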

A software package called LINPACK is the standard approach to benchmarking supercomputers: it times the solution of a dense system of linear equations using Gaussian elimination <ref>http://www.top500.org/project/linpack LINPACK defined</ref>. LINPACK is not limited to supercomputers, however; it can also be used to benchmark a typical desktop computer <ref>http://www.xtremesystems.org/forums/showthread.php?197835-IntelBurnTest-The-new-stress-testing-program Intel Benchmark Software</ref>.
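The official LINPACK/HPL code is not reproduced here, but the Python/NumPy sketch below illustrates the general idea under the simplifying assumptions of this example: time the solution of a dense random linear system and estimate performance from the roughly (2/3)n&sup3; + 2n&sup2; floating-point operations that Gaussian elimination performs. The matrix size and the use of NumPy are choices made for this sketch, not part of the actual benchmark.

<pre>
# A minimal LINPACK-style benchmark sketch (NOT the official LINPACK/HPL code).
# It times the solution of a dense n x n system Ax = b and estimates FLOPS
# from the ~(2/3)n^3 + 2n^2 operations required by Gaussian elimination.
import time
import numpy as np

def linpack_like_benchmark(n=2000, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    start = time.perf_counter()
    x = np.linalg.solve(A, b)              # LU factorization + triangular solves
    elapsed = time.perf_counter() - start

    flop_count = (2.0 / 3.0) * n**3 + 2.0 * n**2
    gflops = flop_count / elapsed / 1e9
    residual = np.linalg.norm(A @ x - b)   # sanity check on the solution
    return gflops, residual

if __name__ == "__main__":
    gflops, residual = linpack_like_benchmark()
    print(f"~{gflops:.2f} GFLOPS (residual {residual:.2e})")
</pre>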

== Finding Supercomputer Comparison Data ==

Starting in 1993, TOP500.org began collecting performance data on computers, and it updates its list every six months <ref>http://top500.org/faq/what_top500 What is the TOP500</ref>. It is an excellent online source that collects benchmark data submitted by computer users and readily provides performance statistics by Vendor, Application, Architecture, and nine other categories <ref name="t500stats">http://i.top500.org/stats TOP500 Stats</ref>.

== Comparison of Supercomputers by Architecture ==

Today's traditional supercomputers are built around three types of parallel processing architecture: Cluster, Massively Parallel Processing (MPP), and Constellation <ref name="t500stats" />. A non-traditional, or disruptive, approach to supercomputing is Grid Computing <ref>http://searchdatacenter.techtarget.com/definition/grid-computing</ref>.

The following graphic generated at TOP500.org shows the distribution of supercomputers by architecture:

Image from i.TOP500.org/stats <ref name="t500stats" />

== Cluster ==

A Cluster is a group of connected computers that appears as a single system to the outside world and provides load balancing and resource sharing <ref>http://searchdatacenter.techtarget.com/definition/cluster-computing Definition of Cluster Computing</ref>. Invented by Digital Equipment Corporation in the 1980s, clusters now make up the largest share of the supercomputers available today <ref>http://books.google.com/books?id=Hd_JlxD7x3oC&pg=PA90&lpg=PA90&dq=what+is+a+constellation+in+parallel+computing?&source=bl&ots=Rf9nxSqOgL&sig=-xleas5wXvNpvkgYYxguvP1tSLA&hl=en&sa=X&ei=aDcnT-XRNqHX0QHymbjrAg&ved=0CGMQ6AEwBw#v=onepage&q=what%20is%20a%20constellation%20in%20parallel%20computing%3F&f=false Applied Parallel Computing</ref>, <ref name="t500stats" />.
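As a conceptual sketch only (the node names and the round-robin policy below are invented for illustration and do not correspond to any real cluster software), the following Python snippet shows the basic idea of a front end that makes several independent machines look like one system and balances jobs across them:

<pre>
# Conceptual sketch of the cluster idea: independent machines ("nodes")
# behind a single front end that balances work across them.
import itertools

class ClusterFrontEnd:
    """Presents several nodes to callers as if they were one system."""

    def __init__(self, nodes):
        self._nodes = nodes
        self._round_robin = itertools.cycle(nodes)

    def submit(self, job):
        node = next(self._round_robin)   # simple round-robin load balancing
        return f"job '{job}' scheduled on {node}"

cluster = ClusterFrontEnd(["node01", "node02", "node03", "node04"])
for job in ["fluid-sim", "render", "matrix-solve"]:
    print(cluster.submit(job))
</pre>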

TOP500.org data as of November 2011 shows that cluster computing makes up the largest subset of supercomputers, at eighty-two percent (82%). The following chart shows the growth of cluster supercomputer systems, with the oldest data on the right:

Image from i.TOP500.org/stats <ref name="t500stats" />

The total processing power of the cluster supercomputers on the TOP500 list is reported as 50,192.82 TFLOPS<ref name="t500stats" />.

== Massively Parallel Processing (MPP) ==

Massively Parallel Processing (MPP) supercomputers are made up of hundreds of computing nodes that process data in a coordinated fashion <ref name="ttmppdef">http://whatis.techtarget.com/definition/0,,sid9_gci214085,00.html</ref>. Each node generally has its own memory and operating system, and a node may contain multiple processors and/or multiple cores <ref name="ttmppdef" />.
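MPI (the Message Passing Interface) is the programming model most often used on MPP machines; the sketch below uses the mpi4py binding, which is an assumption of this example rather than something discussed above, to show nodes with private memory cooperating purely by exchanging messages:

<pre>
# Sketch of the message-passing style used on MPP systems: each process
# (think "node") has its own private memory and cooperates only by
# exchanging messages.  Uses the mpi4py MPI binding; run with e.g.
# "mpiexec -n 4 python mpp_sketch.py".  The toy partial-sum workload is
# chosen purely for illustration.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()       # this node's id
size = comm.Get_size()       # total number of nodes

# Each node computes a partial result from its own slice of the problem.
local_sum = sum(range(rank * 1_000_000, (rank + 1) * 1_000_000))

# The partial results are combined in a coordinated fashion on node 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"combined result from {size} nodes: {total}")
</pre>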

TOP500.org statistics for the MPP architecture show that, as of November 2011, MPP makes up approximately 17.8% of all supercomputers reported. A graph of the growth and subsequent decline of the MPP architecture, from data displayed at TOP500.org/stats, is shown below:

[[File:2011nov-top500-mpp-count.png|Image from i.TOP500.org/stats]] <ref name="t500stats" />

The total processing power of the MPP supercomputers on the TOP500 list is 23,823.97 TFLOPS<ref name="t500stats" />.

== Constellation ==

A Constellation is a cluster of supercomputers <ref>http://www.mimuw.edu.pl/~mbiskup/presentations/Parallel%20Computing.pdf</ref>. TOP500.org shows only one constellation supercomputer as of November 2011:

Image from i.TOP500.org/stats <ref name="t500stats" />

This author's speculation about the decline of constellations rests on several factors. Computers with multiple processors and/or multiple cores have been getting faster and less expensive; combine enough of these inexpensive machines into a very large cluster and you get computing power that rivals a constellation. In addition, as more and more computers include symmetric multiprocessing (SMP), the concepts of constellation and cluster are converging.

== Grid Computing ==

Grid Computing is defined as applying many networked computers to a single problem simultaneously<ref>http://searchdatacenter.techtarget.com/definition/grid-computing</ref>. It is also defined as a network of computers used by a single company or organization to solve a problem<ref>http://boinc.berkeley.edu/trac/wiki/DesktopGrid</ref>. Yet another variant, as implemented by GridRepublic.org, creates a supercomputing grid from volunteer computers across the globe<ref>http://www.gridrepublic.org/index.php?page=about</ref>. All of these definitions have one thing in common: parallel processing is applied to a problem that can be broken up into many independent pieces.
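The sketch below is a toy illustration of that common idea, with volunteer machines simulated by threads; a real grid, such as a BOINC-based project, distributes its work units over the Internet and adds scheduling, validation, and credit mechanisms not shown here:

<pre>
# Conceptual sketch of the grid/volunteer-computing idea: a coordinator
# splits one large problem into independent work units, hands them to
# whichever "volunteer" machines ask for work, and merges the results.
# Volunteers are simulated with threads purely for illustration.
import queue
import threading

work_units = queue.Queue()
for chunk_start in range(0, 10_000_000, 1_000_000):   # split the problem
    work_units.put((chunk_start, chunk_start + 1_000_000))

results = []
results_lock = threading.Lock()

def volunteer():
    while True:
        try:
            lo, hi = work_units.get_nowait()           # request a work unit
        except queue.Empty:
            return
        partial = sum(range(lo, hi))                   # the "science" for this unit
        with results_lock:
            results.append(partial)                    # report the result back

threads = [threading.Thread(target=volunteer) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"merged result from {len(results)} work units: {sum(results)}")
</pre>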

The following graph, generated from data at [http://www.gridrepublic.org GridRepublic.org], shows the average processing power of this supercomputer created by volunteers from around the world:

Image from GridRepublic.org<ref name="grstats">http://www.gridrepublic.org/index.php?page=stats Image from GridRepublic.Org</ref>

The GridRepublic.org statistics cover 55 running applications with a combined average of 10,979,114 GFLOPS, or 10,979.114 TFLOPS<ref name="grstats" />.

== Conclusion/Summary ==

As of November 2011, TOP500 data shows that clusters dominate supercomputing at roughly 82% of reported systems, MPP systems account for most of the remainder at about 17.8%, and constellations have nearly disappeared, while grid computing offers a disruptive alternative that assembles comparable processing power from many networked or volunteer computers.

== References ==


<references />