CSC/ECE 506 Spring 2012/1a ry
== Introduction<ref>http://en.wikipedia.org/wiki/Supercomputer</ref>==
A [http://en.wikipedia.org/wiki/Supercomputer supercomputer] is generally considered to be at the “cutting edge” of processing capacity (number crunching) and computational speed at the time it is built, but with the pace of development, yesterday's supercomputers have become today's regular servers. 
A state-of-the-art supercomputer is an extremely powerful computer capable of manipulating massive amounts of data in a relatively short amount of time.
Supercomputers are very expensive and are deployed for specialized scientific and engineering applications that must handle very large databases or do a great amount of computation -- among them are meteorology, animated graphics, fluid dynamic calculations, nuclear energy research, weapons simulation and petroleum exploration.
== Supercomputer Evolution <ref>http://www.bukisa.com/articles/13059_supercomputer-evolution</ref>==

[[Image:kcomp2.jpg|thumb|right|250px|A 'K computer' rack. Each computer rack is equipped with about 100 CPUs]]


The United States government has played a key role in the development and use of supercomputers. During World War II, the US Army paid for the construction of the [http://en.wikipedia.org/wiki/ENIAC Electronic Numerical Integrator And Computer (ENIAC)] in order to speed the calculation of artillery tables.  In the 30 years after World War II, the US government used high-performance computers to design nuclear weapons, break codes, and perform other security-related applications.


The most powerful supercomputers introduced in the 1960s were designed primarily by Seymour Cray at [http://en.wikipedia.org/wiki/Control_Data_Corporation Control Data Corporation] (CDC).  They led the market into the 1970s until Cray left to form his own company, [http://en.wikipedia.org/wiki/Cray_Research Cray Research].


With [http://en.wikipedia.org/wiki/Moore%27s_law Moore’s Law] still holding after more than thirty years, the rate at which mass-market technologies overtake today’s cutting-edge systems continues to accelerate.  The effects of this are manifest in the abrupt about-face we have witnessed in the underlying philosophy of building supercomputers.


During the 1970s and all the way through the mid-1980s, supercomputers were built using specialized custom vector processors working in parallel. Typically, this meant anywhere from four to sixteen CPUs. The next phase of supercomputer evolution saw the introduction of massively parallel processing and a drift away from vector-only microprocessors. However, the processors used in this generation of supercomputers were still primarily highly specialized, purpose-specific, custom-designed and fabricated units.
So we now find that instead of using specialized custom-built processors in their design, supercomputers are based on "off the shelf" server-class multicore microprocessors, such as the IBM PowerPC, Intel Itanium, or AMD x86-64. The modern supercomputer is firmly based around massively parallel processing by clustering very large numbers of commodity processors combined with a custom interconnect.


Currently, the [http://en.wikipedia.org/wiki/K_computer K computer] is the world's fastest supercomputer at 10.51 petaFLOPS. K was built by the Japanese computer firm Fujitsu and is installed at the RIKEN Advanced Institute for Computational Science in Kobe. It consists of 88,000 SPARC64 VIIIfx CPUs and spans 864 server racks. In November 2011, its power consumption was reported to be 12659.89 kW<ref>http://www.top500.org/list/2011/11/100</ref>. K's performance is equivalent to one million linked desktop computers, which is more than its five closest competitors combined. When it first topped the TOP500 list in June 2011, it consisted of 672 cabinets stuffed with circuit boards, and its creators planned to expand the machine further in the following months. It uses enough energy to power nearly 10,000 homes and costs $10 million (£6.2 million) annually to run<ref>http://www.telegraph.co.uk/technology/news/8586655/Japanese-supercomputer-K-is-worlds-fastest.html</ref>.
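As a rough sanity check on these figures (an illustrative back-of-the-envelope estimate only, not an official specification), the reported LINPACK performance can be divided by the reported power draw and by the approximate core count; the eight-cores-per-CPU figure is taken from the SPARC64 VIIIfx description in the Top 10 list later in this article.

<pre>
# Rough, illustrative arithmetic based on the figures quoted above
# (10.51 petaFLOPS, 12659.89 kW, ~88,000 eight-core SPARC64 VIIIfx CPUs).
rmax_flops = 10.51e15          # reported LINPACK performance, in FLOPS
power_watts = 12659.89e3       # reported power consumption, in watts
cpus = 88000                   # SPARC64 VIIIfx processors
cores_per_cpu = 8              # per the SPARC64 VIIIfx description below

cores = cpus * cores_per_cpu
print(f"Energy efficiency: {rmax_flops / power_watts / 1e6:.0f} MFLOPS/W")
print(f"Per-core rate:     {rmax_flops / cores / 1e9:.1f} GFLOPS/core")
</pre>

This works out to roughly 830 MFLOPS per watt and about 15 GFLOPS per core, in line with the 2.0 GHz SPARC64 VIIIfx parts described in the Top 10 list below.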


Some of the companies that build supercomputers are Silicon Graphics, Intel, IBM, Cray, Orion, and Aspen Systems.
Here is a list of the [http://www.datacenterknowledge.com/the-top-10-supercomputers-illustrated-nov-2011/ top 10 supercomputers] as of November 2011.


=== First Supercomputer (ENIAC) ===
[[Image:Eniac.jpg|thumb|right|400px|ENIAC - The World's first supercomputer]]


The [http://en.wikipedia.org/wiki/ENIAC Electronic Numerical Integrator And Computer (ENIAC)] was completed in 1946 and took the world by storm. It was built to solve very complex problems that would otherwise take months or years to work out. ENIAC paved the way for the computers many of us use today, but it was built with a single purpose: to solve scientific problems for the entire nation. The military was the first to use it, benefiting the country's defenses. Even today, most new supercomputer technology is designed for the military first and is then adapted for civilian uses.


The system was used to compute firing tables for the White Sands Missile Range from 1949 until it was replaced in 1957. This allowed the military to synchronize missile launches should it be deemed necessary. It was one of the important technological milestones in United States military history.
So what made ENIAC run? Programming it took a great deal of manpower and hours of setup. Operators used plugboards and wires to program the desired commands into the colossal machine, and entered numbers by turning banks of dials until they showed the correct values, much as one does on a combination lock.


=== Cray History <ref>http://www.cray.com/Assets/PDF/about/CrayTimeline.pdf</ref>===


[[Image:cray1.jpg|thumb|right|300px|Cray 1 supercomputer installed at Lawrence Livermore National Laboratory (LLNL), California, USA.]]
[[Image:cray-t3e.gif|thumb|right|300px|The Cray-T3E-1200E supercomputer]]


[http://en.wikipedia.org/wiki/Cray Cray Inc.] has a history that extends back to 1972, when the legendary [http://en.wikipedia.org/wiki/Seymour_Cray Seymour Cray], the "father of supercomputing," founded Cray Research. R&D and manufacturing were based in his hometown of Chippewa Falls, Wisconsin, and business headquarters were in Minneapolis, Minnesota.


The first Cray-1 system was installed at Los Alamos National Laboratory in 1976 for $8.8 million. It boasted a world-record speed of 160 million floating-point operations per second (160 megaflops) and an 8 megabyte (1 million word) main memory. To increase the speed of the system, the Cray-1 had a unique "C" shape that allowed integrated circuits to be placed closer together, so that no wire in the system was more than four feet long. To handle the intense heat generated by the computer, Cray developed an innovative refrigeration system using Freon.
Always a visionary, Seymour Cray had been exploring the use of gallium arsenide in creating a semiconductor faster than silicon. However, the costs and complexities of this material made it difficult for the company to support both the Cray-3 and the Cray C90 development efforts. In 1989, Cray Research spun off the Cray-3 project into a separate company, Cray Computer Corporation, headed by Seymour Cray and based in Colorado Springs, Colorado. Tragically, Seymour Cray died in September 1996 at the age of 71.


The 1990s brought a number of transformations to Cray Research. The company continued its leadership in providing the most powerful supercomputers for production applications. The Cray C90 featured a new central processor that delivered a performance of 1 gigaflop; using 16 of these processors and 256 million words of central memory, the system offered a total peak performance of 16 gigaflops. The company also produced its first "mini-supercomputer," the Cray XMS system, followed by the Cray Y-MP EL series and the subsequent Cray J90. In 1993, it offered the first [http://en.wikipedia.org/w/index.php?title=Massively_parallel_computing&redirect=no massively parallel processing] (MPP) system, the Cray T3D supercomputer, and quickly captured MPP market leadership from early MPP companies such as Thinking Machines and MasPar. The Cray T3D proved to be exceptionally robust, reliable, sharable, and easy to administer compared with competing MPP systems.


Its successor, the Cray T3E supercomputer, became the world's best-selling MPP system. The Cray T3E-1200E system was the first supercomputer to sustain one teraflop (1 trillion calculations per second) on a real-world application. In November 1998, a joint scientific team from Oak Ridge National Laboratory, the National Energy Research Scientific Computing Center (NERSC), Pittsburgh Supercomputing Center and the University of Bristol (UK) ran a magnetism application at a sustained speed of 1.02 teraflops. In another technological landmark, the Cray T90 became the world's first wireless supercomputer when it was released in 1994. The Cray J90 series, released during the same period, became the world's most popular supercomputer, with over 400 systems sold.
[http://www.cray.com/Assets/PDF/about/CrayTimeline.pdf Historical Timeline of Cray].


=== Supercomputer History in Japan ===


In the beginning there were only a few Cray-1s installed in Japan, and until 1983 no Japanese company produced supercomputers. The first models were announced in 1983. Naturally there had been prototypes earlier, like the [http://en.wikipedia.org/wiki/Fujitsu Fujitsu] F230-75 APU produced in two copies in 1978, but Fujitsu's VP-200 and Hitachi's S-810 were the first officially announced versions. NEC announced its SX series slightly later.


The decade that followed brought surprises. About three generations of machines were produced by each of the domestic manufacturers. In those ten years about 300 supercomputer systems were shipped and installed in Japan, and a whole infrastructure of supercomputing was established. All major universities, as well as many of the large industrial companies and research centers, acquired supercomputers.
In 1992 NEC announced the SX-3R with a number of improvements over the first version. The clock was further reduced to 2.5 ns, so that the peak performance increased to 6.4 Gflop/s per processor.


==== Fujitsu's VP series <ref>http://www.netlib.org/benchmark/top500/reports/report94/Japan/node5.html</ref>====


In 1977 [http://en.wikipedia.org/wiki/Fujitsu Fujitsu] produced the first supercomputer prototype, called the F230-75 APU, which was a pipelined vector processor added to a scalar processor. This attached processor was installed at the Japan Atomic Energy Research Institute (JAERI) and the National Aerospace Laboratory (NAL).


In 1983 the company came out with the VP-200 and VP-100 systems. In 1986 the VP-400 was released with twice as many pipelines as the VP-200, and during mid-1987 the whole family became the E-series with the addition of an extra (multiply-add) pipelined floating-point unit that increased the performance potential by 50%. With the flexible range of systems in this generation (VP-30E to VP-400E), good marketing, and a broad range of applications, Fujitsu became the largest domestic supplier, with over 80 systems installed, many of which appear in the TOP500.
Previous machines were heavily criticized for their lack of memory throughput. The VP-400 series had only one load/store path to memory, which peaked at 4.57 GB/s. This was improved in the VP-2000 series by doubling the paths, so that each pipeline set can do two load/store operations per cycle, giving a total transfer rate of 20 GB/s. Fujitsu later used the label VPX-2x0 for the VP-2x00 systems adapted to its Unix system. Keio University now runs such a system.


==== The VPP-500 series ====


In 1993 Fujitsu surprised the world by announcing a Vector Parallel Processor (VPP) series designed to reach hundreds of Gflop/s. At the core of the system is a combined Ga-As/Bi-CMOS processor based largely on the original design of the VP-200. The gate delay of the Ga-As chips was as low as 60 ps, using the most advanced hardware technology available. The resulting cycle time was 9.5 ns. The processor has four independent pipelines, each capable of executing two multiply-add instructions in parallel, resulting in a peak speed of 1.7 Gflop/s per processor. Each processor board is equipped with 256 megabytes of central memory.
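The quoted peak rate follows directly from the cycle time and pipeline count above. The short sketch below is only a sanity check of that arithmetic, and it assumes the usual convention that one multiply-add counts as two floating-point operations.

<pre>
# Sanity check of the VPP-500 peak figure quoted above.
cycle_time_s = 9.5e-9      # 9.5 ns cycle time
pipelines = 4              # independent pipelines per processor
madds_per_pipe = 2         # multiply-adds issued per pipeline per cycle
flops_per_madd = 2         # convention: one multiply-add = 2 floating-point ops

clock_hz = 1.0 / cycle_time_s
peak_flops = clock_hz * pipelines * madds_per_pipe * flops_per_madd
print(f"Peak per processor: {peak_flops / 1e9:.2f} Gflop/s")   # about 1.68 Gflop/s
</pre>

This rounds to the 1.7 Gflop/s per processor figure quoted above.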
An early version of this system, called the Numerical Wind Tunnel, was developed together with NAL. This early version of the VPP-500 (with 140 processors) was at the time the fastest supercomputer in the world and stood at the top of the TOP500 with a value twice that of the TMC CM-5/1024 installed at Los Alamos.


==== Hitachi's Supercomputers ====


[http://en.wikipedia.org/wiki/Hitachi Hitachi] has been producing supercomputers since 1983 but differs from other manufacturers by not exporting them. For this reason, their supercomputers are less well known in the West. After having gone through two generations of supercomputers, the S-810 series started in 1983 and the S-820 series in 1988, Hitachi leapfrogged NEC in 1992 by announcing the most powerful vector supercomputer yet. The top S-820 model consisted of one processor operating at 4 ns and contained four vector pipelines and two independent floating-point units. This corresponded to a peak performance of 2 Gflop/s. Hitachi put great emphasis on a fast memory, although this meant limiting its size to a maximum of 512 MB. The memory bandwidth of 2 words per pipe per vector cycle, giving a peak rate of 16 GB/s, was a respectable achievement, but it was not enough to keep all functional units busy.


The S-3800, announced in the early 1990s, is comparable to NEC's SX-3R in its features. It has up to four scalar processors, each with its own vector processing unit. These vector units have in turn up to four independent pipelines and two floating-point units that can each perform a multiply/add operation per cycle. With a cycle time of 2.0 ns, the whole system achieves a peak performance level of 32 Gflop/s.
For more details, see these TOP500 [http://www.top500.org/static/lists/2009/11/top500_statistics.pdf statistics].


=== IBM History ===
[[Image:ibm704.jpg|thumb|right|300px|IBM 704 at Lawrence Livermore National Laboratory (LLNL), California, USA (October 1956).]]


In the early 1950s, [http://en.wikipedia.org/wiki/IBM IBM] built its first scientific computer, the IBM 701. The IBM 704 and other high-end systems appeared in the 1950s and 1960s, but by today's standards these early machines were little more than oversized calculators. After going through a rough patch, IBM re-emerged as a leader in supercomputing research and development in the mid-1990s, creating several systems for the U.S. government's Accelerated Strategic Computing Initiative (ASCI). These computers boast approximately 100 times as much computational power as supercomputers of just ten years ago.  


Sequoia is a petascale Blue Gene/Q supercomputer being constructed by IBM for the National Nuclear Security Administration as part of the [http://en.wikipedia.org/wiki/Advanced_Simulation_and_Computing_Program Advanced Simulation and Computing Program] (ASC). It is scheduled to be delivered to the Lawrence Livermore National Laboratory in 2011 and fully deployed in 2012.


Sequoia was revealed in February 2009; the targeted performance of 20 petaflops was more than the combined performance of the top 500 supercomputers in the world and about 20 times faster than Roadrunner, the reigning champion of the time. It will be twice as fast as the current record-holding K computer and also twice as fast as the intended future performance of Pleiades.


IBM has also built a smaller prototype called "Dawn," capable of 500 teraflops, using the [http://en.wikipedia.org/wiki/Blue_gene Blue Gene/P design], to evaluate the Sequoia design. This system was delivered in April 2009 and entered the Top500 list in 9th place in June 2009.


Supercomputer speeds are advancing rapidly as manufacturers latch on to new techniques and cheaper prices for computer chips. The first machine to break the teraflop barrier - a trillion calculations per second - was only built in 1996. A few years before Sequoia was announced, a $59m machine from Sun Microsystems, called Constellation, attempted to take the crown of world's fastest with an operating speed of 421 teraflops; [http://en.wikipedia.org/wiki/IBM_Sequoia Sequoia] is designed to achieve nearly 50 times that computing power.


== Current Top Supercomputers<ref>http://www.top500.org</ref>==
=== Comparison of Top Supercomputer Vendors in the World (November 2011)===
{|class="wikitable"
!Vendor
!Rpeak (GFlops)
!Processor cores
|-
|IBM
|31888720.48
|3317036
|-
|Hewlett-Packard
|16410722.22
|1509694
|-
|Cray Inc.
|13558554.6
|1457068
|-
|SGI
|3764607.92
|336104
|-
|Bull SA
|4146261.12
|321284
|-
|Appro International
|3122119.2
|219648
|-
|Dell
|1492525.8
|136722
|-
|Oracle
|1965064.96
|183040
|-
|Hitachi
|548899.8
|32032
|-
|Fujitsu
|11707788
|743176
|}
==== Legend ====
* '''Vendor''' – The manufacturer of the platform and hardware.
* '''Rmax''' – The highest performance measured using the [http://en.wikipedia.org/wiki/LINPACK LINPACK] benchmark suite. This is the number used to rank the computers, and it is typically quoted in quadrillions of floating-point operations per second, i.e. petaflops (Pflops).
* '''Rpeak''' – The theoretical peak performance of the system (listed in Gflops in the table above).
* '''Processor cores''' – The number of active [http://en.wikipedia.org/wiki/Multi-core processor cores] used when running LINPACK.
 
===Top 10 supercomputers of today<ref>http://www.junauza.com/2011/07/top-10-fastest-linux-based.html</ref>===
Below are the Top 10 supercomputers in the world (as of June 2011). An effort has been made to compare the architectural features of these supercomputers.
[[Image:K-computer-Fastest-Linux-Supercomputers 1.jpg|thumb|right|200px|World's fastest supercomputer: the K computer]]
 
'''1. K computer:'''
 
*The K computer is currently the world's fastest supercomputer. It was developed by Fujitsu at the RIKEN Advanced Institute for Computational Science campus in Kobe, Japan.
*According to the LINPACK benchmark, the K computer delivered a performance of 8.16 petaflops, toppling Tianhe-1A from the number one spot.
*This supercomputer uses 68,544 2.0 GHz 8-core SPARC64 VIIIfx processors packed into 672 cabinets, for a total of 548,352 cores. In layman's terms, the K computer's performance is almost equivalent to that of 1 million desktop computers.
*The file system used here is an optimized parallel file system based on Lustre, called the Fujitsu Exabyte File System.
*One disadvantage of this high performer is that it consumes about 9.8 MW of power, enough to light 10,000 houses. Compared with its closest competitor, the Tianhe-1A, the K computer is miles ahead, and it is unlikely to lose its number one spot any time soon.
[[Image:Tianhe-IA-Fastest-Linux-Supercomputers 2.jpg|thumb|right|200px|Tianhe-IA]]
 
'''2. Tianhe-1A:'''
 
*Tianhe-1A is an upgraded model of the Tianhe-1 that was developed by the Chinese National University of Defense Technology in Changsha, Hunan. Tianhe-1 stands for “Milky Way number 1” in Chinese.
*Until June 2011, Tianhe-1A was the world's fastest supercomputer, before being overtaken by Japan's K computer.
*This 88 million dollar beast consists of 112 computer cabinets, 12 storage cabinets, 6 communication cabinets and 8 I/O cabinets. Each cabinet has 4 frames, each frame having eight blades and a 16-port switching board. The system has 3584 such blades containing 7168 GPUs and 14,336 CPUs.
*This Chinese marvel has delivered a performance of about 2.5 petaflops and is used to carry out computations for petroleum exploration and aircraft design.
*Tianhe-1A is, however, an open-access computer, which means that it provides services to other countries as well.
*Maintaining this supercomputer costs about 20 million USD a year.
[[Image:Jaguar-Cray-Fastest-Linux-Supercomputers 3.jpg|thumb|right|200px|Jaguar Cray]]
 
 
'''3. Jaguar Cray:'''
 
*Running on the Cray Linux Environment, Jaguar is currently the world's third fastest supercomputer. It has achieved a performance of about 1.75 petaflops and was once the world's fastest supercomputer before being overtaken by the Chinese Tianhe-1A in 2010.
*The current model, the Cray XT5, is an upgraded version of the popular Cray XT4. Jaguar has around 224,256 x86-based AMD Opteron processor cores with 16 GB of memory for each node.
*The file system used here is an external [http://en.wikipedia.org/wiki/Lustre_(file_system) Lustre] file system, which is basically a massively parallel distributed file system used for cluster computing. The file system is capable of storing over 10 petabytes of data and has a read/write benchmark of 240 GB/s.
*This machine costs a whopping 104 million USD and can be found at the Oak Ridge National Laboratory in Tennessee.
[[Image:4.jpg|thumb|right|200px|Nebulae]]
 
 
'''4. Nebulae:'''
 
*Nebulae is a research supercomputer located in Shenzhen, Guangdong, China.
*It has a theoretical peak performance of around 2.9 petaflops.
*Nebulae is the 4th most powerful supercomputer in the world and the second most powerful in China.
[[Image:Tsubame-2.0-Fastest-Linux-Supercomputers 5.jpg|thumb|right|200px|TSUBAME 2.0]]
 
 
'''5. TSUBAME 2.0:'''
 
*TSUBAME 2.0 is the successor of TSUBAME 1.0, which previously was the fastest supercomputer in Japan.
*TSUBAME stands for Tokyo Tech Supercomputer Ubiquitously Accessible Mass storage Environment. Tsubame is also the Japanese word for a swallow, which forms an integral part of the system's logo.
*The Japanese marvel has a theoretical peak performance of a whopping 2.4 petaflops, making it the 5th fastest supercomputer in the world. It has an aggregate memory bandwidth of 720 terabytes per second.
[[Image:Cielo-Cray-XE6-Fastest-Linux-Supercomputers 6.jpg|thumb|right|200px|Cielo Cray XE6]]
 
 
'''6. Cielo Cray XE6:'''
 
*This mean machine, unveiled in May 2010, is the sixth fastest supercomputer in the world.
*It is powered by 8-core AMD x86-64 Opteron processors. Cielo is located at Los Alamos National Laboratory in New Mexico, USA and is mainly used for research purposes.
[[Image:Pleiades-SGI-Altix-Fastest-Linux-Supercomputers 7.jpg|thumb|right|200px|Pleiades SGI Altix]]
 
 
'''7. Pleiades SGI Altix:'''
 
*Pleiades is a supercomputer used by NASA to conduct modeling and simulation for its missions, and it is the world's 7th fastest supercomputer.
*Its sustained performance averages around 1.09 petaflops, with a peak of 1.315 petaflops, and it is loaded with 185 TB of memory and 111,104 cores.
*The beast runs on SUSE Linux and has about 6.9 PB of storage space spread across 12 DataDirect Networks (DDN) RAIDs.
[[Image:Cray-XE6-Fastest-Linux-Supercomputers 8.jpg|thumb|right|200px|Cray XE]]
 
 
'''8. Cray XE6:'''
 
*Housed in DOE's National Energy Research Scientific Computing Center (NERSC), California, Cray XE6 is currently the world's 8th fastest supercomputer.
*It has achieved a peak performance of 1.5 petaflops and runs on Cray Linux Environment version 3. Specs include 1,536 cores per cabinet, with 8- or 12-core 64-bit AMD Opteron 6100 Series processors.
*XE6 also comes with a Hardware Supervisory System (HSS) that integrates hardware and software components to provide system monitoring, fault identification and recovery.
[[Image:Tera-100-Fastest-Linux-Supercomputers 9.jpg|thumb|right|200px |Tera 100]]
 
 
'''9. Tera 100:'''
 
*Built by the French company Bull SA, Tera 100 is Europe's fastest supercomputer.
*It runs on Red Hat Enterprise Linux and delivers a sustained 1 petaflops, peaking at 1.25 petaflops.
*It is one of the most efficient supercomputers in the world, running at an efficiency of 83.7%.
*Going back to the specs, Tera 100 comes with 20 petabytes of storage, 300 TB of memory and the processing power of 140,000 Intel Xeon processor cores.
*This supercomputer includes specially designed water-cooled doors, which cut electrical consumption to half when compared with traditional air-cooled ones.
[[Image:Roadrunner 10.jpg|thumb|right|200px|IBM Roadrunner]]
 
 
'''10. IBM Roadrunner:'''
 
*The world's tenth fastest supercomputer, IBM Roadrunner, was built by IBM at the Los Alamos National Laboratory in New Mexico, USA.
*It costs around 125 million USD and is the fourth most energy efficient supercomputer in the world.
*A computer's performance is generally measured in FLOPS, which stands for floating-point operations per second. IBM's Roadrunner has a speed of about 1 petaflops (10<sup>15</sup> FLOPS), with a top speed of 1.456 petaflops, which it reached in November 2008.
*It uses Red Hat Enterprise Linux along with Fedora as its operating system and occupies almost 6000 sq. ft. of real estate.
*Roadrunner's main use is to predict whether the United States' aging arsenal of nuclear weapons is safe and reliable. It is also used in other fields such as the financial, aerospace and automotive industries.
*The unique thing about Roadrunner is its use of two different processing architectures at the same time, more commonly known as hybrid design.
*This design pairs AMD's Opteron processors with IBM's own PowerXCell 8i. In case your dual-core computer's speed was never good enough for you, the IBM Roadrunner boasts a whopping 122,400 cores.
 
 
Benchmarking: The benchmarks – that is, the figures quoted in petaflops – are carried out using [http://www.top500.org/project/linpack LINPACK]. LINPACK is basically a collection of FORTRAN subroutines that analyzes and solves linear equations and linear least-squares problems. The computer runs a program that solves a dense system of linear equations, and the floating-point rate of execution is measured. It is currently the most widely accepted way to compare how fast supercomputers perform on dense linear algebra, which makes it the benchmarking standard in the world of supercomputers.
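The idea behind the measurement can be illustrated in a few lines of Python with NumPy. This is only a toy sketch of the methodology, not the official HPL benchmark: solve a dense system A·x = b, count roughly 2/3·n<sup>3</sup> floating-point operations for the factorization, and divide by the wall-clock time.

<pre>
import time
import numpy as np

# Toy LINPACK-style measurement: solve a dense n x n system and estimate the
# floating-point rate. The real HPL benchmark is far more elaborate (blocked,
# distributed, carefully tuned), but the principle is the same.
n = 2000
rng = np.random.default_rng(0)
A = rng.random((n, n))
b = rng.random(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)                  # LU factorization + triangular solves
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2    # standard operation count for an LU solve
print(f"residual : {np.linalg.norm(A @ x - b):.2e}")
print(f"rate     : {flops / elapsed / 1e9:.2f} GFLOP/s")
</pre>

The same recipe, scaled up to enormous matrices and run across every node of a machine, is what produces the Rmax figures reported in the TOP500 list.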


== Supercomputer Design ==
Supercomputers are used, among many other applications, to model climate change, including global warming.


=== Supercomputer Architecture <ref>http://www.top500.org</ref>===
[[Image:Architecture Share1.png|thumb|right|300px]]
[[Image:Architecture Share2.png|thumb|right|300px]]
Over the years, supercomputer architecture has changed considerably; various architectures were developed and abandoned as computer technology progressed.
In the early '90s, single processors were still common in the supercomputer arena. However, two other architectures played more important roles. One was Massively Parallel Processing (MPP), a computer system with many independent arithmetic units or entire microprocessors that run in parallel. The other was Symmetric Multiprocessing (SMP), a good representative of the earliest styles of multiprocessor machine architectures. These two architectures met two of the supercomputer's key needs: parallelism and high performance.
With the rise of cluster computing, the supercomputer world was transformed. In 2009, cluster computing accounted for 83.4% of the architectures in the Top 500. A cluster computer is a group of linked computers, working together closely so that in many respects they form a single computer. Compared to a single computer, clusters are deployed to improve performance and/or availability, while being more cost-effective than single computers of comparable speed or availability. Cluster computers offer a high-performance computing alternative to SMP and massively parallel computing systems. Using redundancy, cluster architectures also aim to provide reliability.
From the analysis above, we can see that supercomputers are closely tied to technological change and actively driven by it.
=== Supercomputer Hierarchical Architecture ===
[[Image:arch1.jpg|thumb|right|300px]]
[[Image:arch2.jpg|thumb|right|300px]]


The supercomputer of today is built on a hierarchical design in which a number of clustered computers are joined by ultra-high-speed optical network interconnects.
1. Supercomputer – a cluster of interconnected multi-core microprocessor computers.
2. Cluster members – each cluster member is a computer composed of a number of Multiple Instruction, Multiple Data ([http://en.wikipedia.org/wiki/MIMD MIMD]) multi-core microprocessors and runs its own instance of an operating system.
3. Multi-core microprocessors – each multi-core microprocessor has multiple processing cores, of which the application software is largely oblivious, that share tasks using [http://en.wikipedia.org/wiki/Symmetric_multiprocessing Symmetric Multiprocessing] (SMP) and [http://en.wikipedia.org/wiki/Non-Uniform_Memory_Access Non-Uniform Memory Access] (NUMA).
4. Multi-core microprocessor core – each core of these multi-core microprocessors is in itself a complete Single Instruction, Multiple Data ([http://en.wikipedia.org/wiki/SIMD SIMD]) processor capable of running a number of instructions simultaneously and many SIMD instructions per nanosecond.


*[http://en.wikipedia.org/wiki/SISD SISD] machines: These are conventional systems that contain one CPU and can therefore accommodate one instruction stream that is executed serially. Nowadays many large mainframes may have more than one CPU, but each of these executes instruction streams that are unrelated. Therefore, such systems still should be regarded as SISD machines acting on different data spaces. Examples of SISD machines are workstations like those of DEC, Hewlett-Packard and Sun Microsystems.  


*[http://en.wikipedia.org/wiki/SIMD SIMD] machines: Such systems often have a large number of processing units, ranging from 1,024 to 16,384, that all may execute the same instruction on different data in lock-step. So, a single instruction manipulates many data items in parallel. Examples of SIMD machines are the CPP DAP Gamma II and the Quadrics Apemille.


Another subclass of the SIMD systems is the vector processors. Vector processors act on arrays of similar data rather than on single data items, using specially structured CPUs. When data can be manipulated by these vector units, results can be delivered at a rate of one, two or three per clock cycle. So, vector processors execute on their data in an almost parallel way, but only when executing in vector mode. In this case they are several times faster than when executing in conventional scalar mode. For practical purposes vector processors are mostly regarded as SIMD machines. An example of such a system is the NEC SX-6i.
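The contrast between scalar execution and SIMD/vector-style execution can be sketched in a few lines of Python with NumPy, whose whole-array operations are dispatched to compiled, vectorized kernels. This is only an analogy for the lock-step hardware described above, not a model of any particular machine.

<pre>
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)
b = np.arange(1_000_000, dtype=np.float64)

# Scalar, SISD-style execution: one multiply per loop iteration.
c_scalar = np.empty_like(a)
for i in range(len(a)):
    c_scalar[i] = a[i] * b[i]

# SIMD/vector-style execution: one "instruction" applied to whole arrays;
# NumPy hands the loop to vectorized code that processes many elements at once.
c_vector = a * b

assert np.allclose(c_scalar, c_vector)
</pre>

On most machines the whole-array form runs orders of magnitude faster, for essentially the reason given above: the per-element instruction overhead disappears once the operation is executed in vector mode.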


*[http://en.wikipedia.org/wiki/MISD MISD] machines: Theoretically, in this type of machine multiple instructions act on a single stream of data. As yet no practical machine in this class has been constructed, nor are such systems easy to conceive.  


* [http://en.wikipedia.org/wiki/MIMD MIMD] machines: These machines execute several instruction streams in parallel on different data. The difference from the multi-processor SISD machines is that the instructions and data are related, because they represent different parts of the same task to be executed. So, MIMD systems may run many sub-tasks in parallel in order to shorten the time-to-solution for the main task to be executed. There is a large variety of MIMD systems, and especially in this class the Flynn taxonomy proves to be not fully adequate for the classification of systems. Systems that behave very differently, like a four-processor NEC SX-6 and a thousand-processor SGI/Cray T3E, both fall into this class. We will now make another important distinction between classes of systems.


a) Shared memory systems: [http://en.wikipedia.org/wiki/Shared_memory Shared memory] systems have multiple CPUs, all of which share the same address space. This means that the knowledge of where data is stored is of no concern to the user, as there is only one memory, accessed by all CPUs on an equal basis. Shared memory systems can be either SIMD or MIMD. Single-CPU vector processors can be regarded as an example of the former, while the multi-CPU models of these machines are examples of the latter. We will sometimes use the abbreviations SM-SIMD and SM-MIMD for the two subclasses.


b) Distributed memory systems: In this case each CPU has its own associated memory. The CPUs are connected by some network and may exchange data between their respective memories when required. In contrast to shared memory machines, the user must be aware of the location of the data in the local memories and will have to move or distribute these data explicitly when needed. Again, distributed memory systems may be either SIMD or MIMD. The first class of SIMD systems mentioned, which operate in lock step, all have distributed memories associated with the processors. [http://en.wikipedia.org/wiki/Distributed_memory Distributed-memory] MIMD systems exhibit a large variety in the topology of their connecting network. The details of this topology are largely hidden from the user, which is quite helpful with respect to portability of applications. For the distributed-memory systems we will sometimes use DM-SIMD and DM-MIMD to indicate the two subclasses.
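The programming-model difference between these two classes can be sketched with the Python standard library. This is a deliberately simplified illustration, not how real supercomputer codes are written: threads within one process see a single address space (the shared-memory case), while separate processes have private memories and exchange data only through explicit messages, much as nodes of a distributed-memory machine do over the interconnect.

<pre>
import threading
import multiprocessing as mp

def fill(data, lo, hi):
    # Shared-memory style: threads write directly into the same list,
    # because they share one address space.
    for i in range(lo, hi):
        data[i] = i * i

def worker(inbox, outbox):
    # Distributed-memory style: this process has its own memory; data arrives
    # and leaves only through explicit messages (queues stand in for the network).
    chunk = inbox.get()
    outbox.put([x * x for x in chunk])

if __name__ == "__main__":
    # Shared memory: two threads, one array, no explicit communication.
    data = [0] * 4
    t1 = threading.Thread(target=fill, args=(data, 0, 2))
    t2 = threading.Thread(target=fill, args=(data, 2, 4))
    t1.start(); t2.start(); t1.join(); t2.join()
    print("shared-memory result:     ", data)

    # Distributed memory: a separate process, explicit send and receive.
    inbox, outbox = mp.Queue(), mp.Queue()
    p = mp.Process(target=worker, args=(inbox, outbox))
    p.start()
    inbox.put([0, 1, 2, 3])
    print("distributed-memory result:", outbox.get())
    p.join()
</pre>

On real machines the shared-memory style corresponds to threading (e.g. OpenMP) within a node, and the message-passing style to MPI between nodes; modern clusters typically combine both.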


====Why have vector machines declined so fast in popularity?<ref>http://jes.ece.wisc.edu/papers/ics98.espasa.pdf</ref>====


Since the early nineties, supercomputers based on the vector paradigm have lost their dominance of the supercomputing market. In June 1993, 310 of the top 500 computers were parallel-vector machines. All the machines on the list at that time totaled a peak computing power of 1.8 teraflops, and the 310 vector systems represented roughly 43% of that computing power. Four and a half years later, in November 1997, the same list reported that only 108 PVPs remained among the top 500 systems. Moreover, the total peak power of all systems listed had skyrocketed to 24.2 teraflops, yet the vector machines now accounted for only 17% of this power.


The main reason for the decline of vector machines is cost. Why are vector supercomputers so much more expensive than MPPs or SMPs? There are several related reasons:


* Probably the most important reason is that scalar-parallel systems use commodity parts. With commodity parts, design and non-recurring manufacturing costs can be spread over a much larger number of chips. If a vector machine sells only a few dozen copies, then design costs can easily be the dominant overall cost.
 
* The most expensive part of a computer (whether a PC, workstation, or supercomputer) is usually the memory system. Vector processors provide high-performance memory systems that sustain very large bandwidths between main memory and the vector registers. To achieve this bandwidth, vector processors rely on high-performance, highly interleaved memory systems. Moreover, for a high-performance machine, latency also plays an important role. Therefore, vector supercomputers use the fastest memory technology available.
 
* Another problem is how one packages a processor with such high bandwidths. Consider a 20 GB/s memory system and a typical [http://en.wikipedia.org/wiki/Cmos CMOS] package that allows its pins to operate at 133 MHz. A back-of-the-envelope calculation (see the sketch after this list) indicates that about 1200 pins would be needed to sustain a peak of 20 GB/s. Such pin counts are difficult to implement. In the past, vector manufacturers have employed multi-chip designs, which tend to be substantially more expensive than single-chip solutions.
 
* Another factor that keeps vector costs up is the base technology used in these machines. Until very recently, most vector designs were based on [http://en.wikipedia.org/wiki/Emitter-coupled_logic ECL]. While this choice was adequate in the 1976-1991 time frame, vector vendors apparently failed to realize the potential of CMOS implementations, nor were they willing to shift from gate arrays to custom design in order to exploit the capabilities of CMOS. In the last 8 years, CMOS chips have outperformed ECL in numbers of transistors, speed, and reliability. Recently, most vector vendors have introduced CMOS-based vector machines.
 
* Also important is the fact that users often have difficulty achieving peak performance on vector supercomputers. Despite high performance processors and high bandwidth memory systems, even programs that are highly vectorized fall short of theoretical peak performance.
 
* Finally, it is important to note that there have been relatively few architectural innovations since the CRAY-1. The top of the line CRAY T90 still has only 8 vector registers and has a relatively slow scalar microarchitecture when compared to current [http://en.wikipedia.org/wiki/Superscalar superscalar] microprocessors. Meanwhile, superscalar microprocessors have adopted many architectural features to increase performance while still retaining low cost.
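The pin-count figure quoted above can be reproduced with a few lines of arithmetic. The sketch below (plain C; the 20 GB/s bandwidth and 133 MHz per-pin rate are the numbers from the packaging bullet, and one bit per pin per cycle is the simplifying assumption) just converts the bandwidth to a bit rate and divides by the per-pin rate.

<pre>
/* pins.c -- back-of-the-envelope pin count for a 20 GB/s memory interface. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double bandwidth_bytes = 20e9;      /* required memory bandwidth: 20 GB/s          */
    double pin_rate_hz     = 133e6;     /* each pin transfers 1 bit per 133 MHz cycle  */

    double bits_per_second = bandwidth_bytes * 8.0;            /* 160 Gbit/s           */
    double pins            = ceil(bits_per_second / pin_rate_hz);

    printf("pins needed: %.0f\n", pins);   /* prints 1204, i.e. roughly 1200 pins      */
    return 0;
}
</pre>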
 
Japan's Fujitsu announced its decision to shift to scalar processors in 2009. [http://www.rit.edu/news/?v=47077 Read article].
 
=== Supercomputer Operating Systems <ref>http://www.top500.org</ref>===


[[Image:Operating System Share Over Time1.png|thumb|right|300px]]
[[Image:Operating System Share Over Time2.png|thumb|right|300px]]
Supercomputers run a variety of operating systems; which one depends largely on the vendor. Until the early-to-mid-1980s, supercomputers usually sacrificed instruction-set compatibility and code portability for performance (processing and memory access speed). For the most part, supercomputers at this time (unlike high-end mainframes) had vastly different operating systems. The Cray-1 alone had at least six different proprietary OSs, largely unknown to the general computing community. In a similar manner, there existed different and incompatible vectorizing and parallelizing compilers for Fortran. This trend would have continued with the ETA-10 were it not for the initial instruction-set compatibility between the Cray-1 and the Cray X-MP, and the adoption of operating systems such as Cray's Unicos, or Linux.
 
From the Top500 statistics, before the 21st century almost all the OSs fell into the Unix family, while after 2000 more and more Linux variants were adopted for supercomputers. In the November 2009 list, 446 of the top 500 supercomputers were running some distribution of Linux. Listing the OS for each of the top 20 supercomputers makes the result even more striking: 18 of the top 20 supercomputers in the world run some form of Linux, and all of the top 10 do. Looking at the list, it becomes clear that prominent supercomputer vendors such as Cray, IBM and SGI have wholeheartedly embraced Linux. In a few cases Linux coexists with a lightweight kernel running on the compute nodes (the part of the supercomputer that performs the actual calculations), but often even these lightweight kernels are based on Linux. Cray, for example, has a modified version of Linux it calls CNL (Compute Node Linux). The individual operating systems are discussed below.
 
====IBM AIX====
 
AIX (Advanced Interactive eXecutive) is the name given to a series of proprietary operating systems sold by IBM for several of its computer system platforms, based on UNIX System V with 4.3BSD-compatible command and programming interface extensions. AIX runs on 32-bit or 64-bit IBM POWER or PowerPC CPUs (depending on version) and can address up to 32 terabytes (TB) of random access memory. The JFS2 file system—first introduced by IBM as part of AIX—allows computer files and partitions over 4 petabytes in size.


====Linux Family====

Linux refers to the family of Unix-like operating systems based on the Linux kernel, originally written in 1991 by Linus Torvalds. Its development is one of the most prominent examples of free and open-source software collaboration: the source code can be used, freely modified, and redistributed, both commercially and non-commercially, by anyone under licenses such as the GNU General Public License. Linux can be installed on a wide variety of computer hardware, ranging from embedded devices such as mobile phones to mainframes and supercomputers.

SuSE Linux Enterprise Server family: SLES was developed from SUSE Linux and was first released on 31 October 2000 as a version for IBM S/390 mainframe machines. In December 2000 the first enterprise client (Telia) was made public, and in April 2001 the first SLES for x86 was released. SLES 9 was released in August 2004, SLES 10 in July 2006, and SLES 11 on 24 March 2009. All of them are supported by the major hardware vendors -- IBM, HP, Sun Microsystems, Dell, SGI, Lenovo, and Fujitsu Siemens Computers.

Redhat Enterprise/CentOS: Red Hat Enterprise Linux (RHEL) is a Linux distribution produced by Red Hat and targeted toward the commercial market, including mainframes. CentOS is a community-supported, free and open-source operating system based on Red Hat Enterprise Linux. Both are adopted on some vendors' supercomputing platforms.
====UNICOS====


UNICOS is the name of a range of Unix-like operating system variants developed by Cray for its supercomputers. UNICOS is the successor of the Cray Operating System (COS); it provides network clustering and source-code compatibility layers for some other Unixes. UNICOS was originally introduced in 1985 with the Cray-2 system and later ported to other Cray models. The original UNICOS was based on UNIX System V Release 2, with numerous BSD features (e.g., networking and file-system enhancements) added to it. UNICOS began as CX-OS, a prototype that ran on a Cray X-MP in 1984 to demonstrate the feasibility of using Unix on a supercomputer before Cray-2 hardware was available; the operating-system revamp was part of a larger movement inside Cray Research to modernize its corporate software, including rewriting its most important Fortran compiler in a higher-level language (Pascal) with more modern optimizations and vectorizations. Bell Labs ran very early versions of UNICOS, where Unix pioneers including Dennis Ritchie ported parts of their Eighth Edition Unix (including the stream I/O system) to it. UNICOS dominated supercomputing in 1993, when 188 of the top 500 supercomputers were running it -- not least because Cray was then the largest supercomputer vendor, supplying about 40% of the systems on the list. As more companies entered the market, UNICOS's share dropped along with Cray's share of the hardware market. After 2000, Cray began to run Linux and even Windows HPC on its machines, and UNICOS gradually disappeared from the supercomputer scene.


====Solaris====


Solaris appeared on the list when Sun Microsystems began to ship supercomputers to the market. Technically, Solaris is a very capable operating system, in some respects more secure and efficient than many Linux distributions and other Unix systems, but it is disappearing from the list as Sun Microsystems leaves the supercomputer market.


====Windows HPC 2008====


Windows HPC Server 2008, released by Microsoft in September 2008, is the successor product to Windows Compute Cluster Server 2003. Windows HPC Server 2008 is designed for high-end applications that require high performance computing clusters. This version of the server is claimed to efficiently scale to thousands of cores. It includes features unique to HPC workloads: a new high-speed NetworkDirect RDMA, highly efficient and scalable cluster management tools, a service-oriented architecture (SOA) job scheduler, and cluster interoperability through standards such as the High Performance Computing Basic Profile (HPCBP) specification produced by the Open Grid Forum (OGF).


====Operating Systems Trends -- Why Linux?====

The table below shows the operating system share among the Top500 supercomputers.

{| class="wikitable"
! Operating System !! Count !! Share % !! Rmax Sum (GF) !! Rpeak Sum (GF) !! Processor Sum
|-
| Linux || 391 || 78.20 % || 17276889 || 27799325 || 2610524
|-
| Super-UX || 1 || 0.20 % || 122400 || 131072 || 1280
|-
| AIX || 22 || 4.40 % || 1335219 || 1734201 || 105520
|-
| Cell OS || 1 || 0.20 % || 35480 || 38836 || 3650
|-
| SuSE Linux Enterprise Server 9 || 5 || 1.00 % || 279807 || 424052 || 62576
|-
| UNICOS/Linux || 1 || 0.20 % || 33929 || 40622 || 7812
|-
| CNK/SLES 9 || 20 || 4.00 % || 3167814 || 3837118 || 1187840
|-
| SUSE Linux || 1 || 0.20 % || 274800 || 308283 || 26304
|-
| Redhat Linux || 4 || 0.80 % || 361590 || 446020 || 48800
|-
| RedHat Enterprise 4 || 3 || 0.60 % || 109580 || 151341 || 14736
|-
| UNICOS/SUSE Linux || 1 || 0.20 % || 35200 || 42598 || 8192
|-
| SUSE Linux Enterprise Server 10 || 4 || 0.80 % || 157080 || 192640 || 20952
|-
| SLES10 + SGI ProPack 5 || 16 || 3.20 % || 1689872 || 1968718 || 172288
|-
| UNICOS/lc || 1 || 0.20 % || 174083 || 208435 || 22656
|-
| CNL || 12 || 2.40 % || 1320958 || 1690969 || 185281
|-
| Windows HPC 2008 || 5 || 1.00 % || 412590 || 509350 || 59072
|-
| RedHat Enterprise 5 || 2 || 0.40 % || 129120 || 139795 || 11928
|-
| CentOS || 8 || 1.60 % || 921980 || 1134498 || 100112
|-
| Open Solaris || 2 || 0.40 % || 139110 || 152247 || 15104
|-
! Totals !! 500 !! 100 % !! 27977501.79 !! 40950122.01 !! 4664627
|}
AIX was IBM's own Unix operating system for its POWER systems, but IBM has been a strong proponent of Linux for years now. When IBM started its Blue Gene series of supercomputers back in 2002, it chose Linux as the operating system. The following quote from Bill Pulleyblank of IBM Research nicely sums up why IBM and many other vendors have chosen Linux: ''"We chose Linux because it's open and because we believed it could be extended to run a computer the size of Blue Gene. We saw considerable advantage in using an operating system supported by the open-source community, so that we can get their input and feedback".''


In short, it looks like Linux has conquered the supercomputer market almost completely. Linux outguns popular Unix operating systems like AIX and Solaris because those systems contain features that make them attractive for commercial users but add a lot of system overhead that ends up limiting overall performance. One example: a "virtualization" feature in AIX lets many applications share the same processor but hammers performance. Linus Torvalds says that Linux has caught on in part because, while typical Unix versions run on only one or two hardware architectures, Linux runs on more than 20 different hardware architectures, including machines based on Intel microprocessors as well as RISC-based computers from IBM and HP. Linux is easy to get, has no licensing costs, has all the infrastructure in place, and runs on pretty much every relevant piece of hardware out there.
 
===Supercomputer Interconnects<ref>http://compnetworking.about.com/library/weekly/aa051902d.htm</ref>===
[[Image:Interconnect1.png|thumb|right|300px]]
[[Image:Interconnect2.png|thumb|right|300px]]
 
In order for a large number of processors to work together, supercomputers utilize specialized network interfaces. These interconnects support high bandwidth and very low latency communication.
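A simple way to see why both bandwidth and latency matter is the classic latency-bandwidth cost model, in which the time to deliver a message of n bytes is roughly the startup latency plus n divided by the bandwidth. The sketch below is a minimal illustration of that model; the latency and bandwidth figures are made-up but plausible values for a commodity Gigabit Ethernet cluster versus a low-latency HPC interconnect, not measurements of any particular machine.

<pre>
/* msgtime.c -- illustrative latency/bandwidth (startup + n/bandwidth) message-time model. */
#include <stdio.h>

/* Estimated time (seconds) to send n bytes over a link with the given
   startup latency (seconds) and bandwidth (bytes/second).              */
static double msg_time(double n_bytes, double latency_s, double bandwidth_Bps)
{
    return latency_s + n_bytes / bandwidth_Bps;
}

int main(void)
{
    double sizes[] = { 8.0, 1024.0, 1048576.0 };          /* 8 B, 1 KiB, 1 MiB        */
    /* Assumed, illustrative numbers -- not vendor specifications:                     */
    double gige_lat = 50e-6,  gige_bw = 125e6;            /* ~50 us, ~1 Gbit/s         */
    double hpc_lat  = 2e-6,   hpc_bw  = 2.5e9;            /* ~2 us,  ~20 Gbit/s        */

    for (int i = 0; i < 3; i++) {
        printf("%8.0f bytes: commodity %.1f us, HPC interconnect %.1f us\n",
               sizes[i],
               msg_time(sizes[i], gige_lat, gige_bw) * 1e6,
               msg_time(sizes[i], hpc_lat,  hpc_bw)  * 1e6);
    }
    return 0;
}
</pre>

For tiny messages the startup latency dominates, while for large messages the bandwidth term does, which is why supercomputer interconnects are engineered aggressively on both fronts.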
 
Interconnects join nodes inside the supercomputer together. A node is a communication endpoint running one instance of the operating system. Nodes utilize one or several processors and different types of nodes can exist within the system. Compute nodes, for example, execute the processes and threads required for raw computation. I/O nodes handle the reading and writing of data to disks within the system. Service nodes and network nodes provide the user interface into the system and also network interfaces to the outside world. Special-purpose nodes improve overall performance by segregating the system workload with hardware and system software configured to best handle that workload.
 
Supercomputer nodes fit together into a network topology. Modern supercomputers have utilized several different specialized network topologies including hypercube, two-dimensional and three-dimensional mesh, and torus. Supercomputer network topologies can be either static (fixed) or dynamic (through the use of switches).
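As a concrete illustration of one such topology, the sketch below (plain C, with an assumed 8x8x8 node grid chosen purely for illustration) computes the six neighbors of a node in a 3-D torus, where each dimension wraps around; this wraparound is what distinguishes a torus from a plain 3-D mesh.

<pre>
/* torus3d.c -- neighbors of a node in a 3-D torus with wraparound links. */
#include <stdio.h>

#define DIM 8                      /* assumed 8 x 8 x 8 machine, for illustration */

/* wrap a coordinate into the range [0, DIM) */
static int wrap(int c) { return (c % DIM + DIM) % DIM; }

static void print_neighbors(int x, int y, int z)
{
    /* offsets along +/- x, y, z: the six links of a 3-D torus node */
    int d[6][3] = { {1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1} };

    printf("neighbors of (%d,%d,%d):\n", x, y, z);
    for (int i = 0; i < 6; i++)
        printf("  (%d,%d,%d)\n", wrap(x + d[i][0]), wrap(y + d[i][1]), wrap(z + d[i][2]));
}

int main(void)
{
    print_neighbors(0, 0, 7);      /* an edge node: the wraparound link appears as 7 -> 0 */
    return 0;
}
</pre>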
 
One of the most critical elements of supercomputer networking is routing. Supercomputers that utilize message passing require routing to ensure the individual pieces of a message are routed from source to destination through the topology without creating hotspots (bottlenecks). Advanced routing techniques like wormhole and virtual cut-through routing are employed by today's ASCI supercomputers.
 
Supercomputers utilize various network protocols. Application data communications generally take place at the physical and data link layers. I/O and communications with external networks utilize technologies like HIPPI, FDDI, and ATM as well as Ethernet.
 
Supercomputer interconnects involve large quantities of network cabling. These cables can be very difficult to install as they often must fit within small spaces. Supercomputers do not utilize wireless networking internally as the bandwidth and latency properties of wireless are not suitable for high-performance communications.
 
====Infiniband as an emerging interconnect technology<ref>http://www.networkworld.com/news/2009/111909-infiniband-top-500-supercomputers.html</ref>====
 
[[Image:infiniband.jpg|thumb|right|300px|Growth of Infiniband high speed clustering interconnects]]
 
InfiniBand-based clusters are charging up the Top 500 supercomputer list with 182 systems, including 63 of the top 100 and five of the top 10 now based on the high-speed interconnect. Gigabit Ethernet still dominates the list of the world's 500 fastest supercomputers, with 258 machines.
 
InfiniBand, an interconnect for servers, storage and networking, "has really found its place in high-performance computing and we're starting to see that transcend into the enterprise," says Brian Sparks, director of marketing for Mellanox Technologies and a member of IBTA. "It's the only growing standard interconnect on the list. When you look at all the really large-node clusters out there, the ones reaching peak performance, the majority are InfiniBand."
 
InfiniBand's presence on the Top 500 list, the latest version of which was announced this week, has grown 28% since November. Just four years ago, only 3% of the Top 500 supercomputers used InfiniBand. The technology's growth has come mainly at the expense of Myrinet, an interconnect designed by Myricom that used to hold a substantial portion of the Top 500.
 
Sparks says InfiniBand offers performance, latency and scalability advantages for applications requiring high I/O throughput, while attributing Gigabit Ethernet's dominance of the Top 500 list to its low cost. He also acknowledged that InfiniBand lags far behind Gigabit Ethernet in the enterprise market, largely because IT pros think deploying InfiniBand is too difficult. But IBTA officials are trying to educate IT about InfiniBand to help the technology gain wider acceptance, and they believe challenges posed by emerging technologies like virtualization may convince enterprises of InfiniBand's advantages.
 
=== Supercomputer Programming Models<ref>http://books.google.com/books?id=tDxNyGSXg5IC&pg=PA4&lpg=PA4&dq=evolution+of+supercomputers&source=bl&ots=I1NZtZyCTD&sig=Ma2fHyp336BSp4Yv2ERmfrpeo4&hl=en&ei=IAReS4WbM8eUtgf2u8GnAg&sa=X&oi=book_result&ct=result&resnum=5&ved=0CB4Q6AEwBA#v=onepage&q=evolution%20of%20supercomputers&f=false</ref> ===
The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. The base languages of supercomputer code are, in general, Fortran or C, using special libraries to share data between nodes. Environments such as PVM and MPI are now used for loosely coupled clusters, and OpenMP for tightly coordinated shared-memory machines. Significant effort is required to optimize a problem for the interconnect characteristics of the machine it runs on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes.
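As a companion to the MPI sketch shown earlier under distributed-memory systems, the following minimal C/OpenMP sketch illustrates the shared-memory style mentioned above: a single directive parallelizes the loop and all threads read and write the same arrays directly, with no explicit data movement. The array size is an arbitrary illustrative choice, not taken from any particular machine.

<pre>
/* axpy_omp.c -- shared-memory parallel loop with OpenMP.
   Compile (typically): gcc -fopenmp axpy_omp.c -o axpy_omp   */
#include <omp.h>
#include <stdio.h>

#define N 1000000                   /* illustrative problem size */

static double x[N], y[N];

int main(void)
{
    double a = 2.0;

    /* All threads see the same x and y: no explicit messages are needed. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        x[i] = i;
        y[i] = a * x[i] + 1.0;
    }

    printf("OpenMP max threads: %d, y[N-1] = %.1f\n",
           omp_get_max_threads(), y[N - 1]);
    return 0;
}
</pre>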




1) '''Fortran''' (a name derived from The IBM Mathematical Formula Translating System) encompasses a lineage of versions, each of which evolved by adding extensions to the language while usually retaining compatibility with previous versions. Successive versions have added support for processing of character-based data (FORTRAN 77), array programming, modular programming and object-based programming (Fortran 90/95), and object-oriented and generic programming (Fortran 2003).
In late 1953, John W. Backus submitted a proposal to his superiors at IBM to develop a more practical alternative to assembly language for programming their IBM 704 mainframe computer. A draft specification for The IBM Mathematical Formula Translating System was completed by mid-1954. The first manual for FORTRAN appeared in October 1956, with the first FORTRAN compiler delivered in April 1957. This was an optimizing compiler, since customers were reluctant to use a high-level programming language unless its compiler could generate code whose performance was comparable to that of hand-coded assembly language.
It reduced the number of programming statements necessary to operate a machine by a factor of 20 and quickly became popular once it was accepted. The language was popular among scientists for writing numerically intensive programs, which encouraged compiler writers to produce compilers that could generate faster and more efficient code. The inclusion of a complex-number data type in the language made Fortran especially suited to technical applications such as electrical engineering.
By 1960, different versions of FORTRAN were available for the IBM 709, 650, 1620, and 7090 computers. The increasing popularity of FORTRAN spurred competing computer manufacturers to provide FORTRAN compilers for their machines, so that by 1963 over 40 FORTRAN compilers existed. For these reasons, FORTRAN is considered to be the first widely used programming language supported across a variety of computer architectures. The development of FORTRAN paralleled the early evolution of compiler technology; indeed, many advances in the theory and design of compilers were specifically motivated by the need to generate efficient code for FORTRAN programs.


2) '''C-Language''' is a general-purpose computer programming language developed in 1972 by Dennis Ritchie at the Bell Telephone Laboratories for use with the Unix operating system. It is also widely used for developing portable application software. C is one of the most popular programming languages, and there are few computer architectures for which a C compiler does not exist. C has greatly influenced many other popular programming languages, most notably C++, which originally began as an extension to C.


Version 3.0, released in May 2008, is the current version of the API specifications. The major new feature in 3.0 is the concept of tasks and the task construct. More information is available in the [http://www.openmp.org/mp-documents/spec30.pdf OpenMP 3.0 specification].
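To illustrate the task construct introduced in OpenMP 3.0, here is a minimal, hedged C sketch: a naive recursive Fibonacci in which each recursive call may become a task. It is meant only to show the #pragma omp task / taskwait syntax, not an efficient way to compute Fibonacci numbers.

<pre>
/* fib_tasks.c -- OpenMP 3.0 task construct (illustrative only).
   Compile (typically): gcc -fopenmp fib_tasks.c -o fib_tasks   */
#include <omp.h>
#include <stdio.h>

static long fib(int n)
{
    long a, b;
    if (n < 2) return n;

    #pragma omp task shared(a)      /* child task computes fib(n-1)            */
    a = fib(n - 1);

    #pragma omp task shared(b)      /* sibling task computes fib(n-2)          */
    b = fib(n - 2);

    #pragma omp taskwait            /* wait for both children before combining */
    return a + b;
}

int main(void)
{
    long result;
    #pragma omp parallel
    {
        #pragma omp single          /* one thread creates the initial task tree */
        result = fib(20);
    }
    printf("fib(20) = %ld\n", result);   /* prints 6765 */
    return 0;
}
</pre>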
== Cooling Supercomputers<ref>http://www.scientificcomputing.com/news-hpc-Water-cooling-system-enables-supercomputers-to-heat-buildings-070609.aspx</ref> ==
=== Hot Topic – the Problem of Cooling Supercomputers ===
The continued exponential growth in the performance of Leadership Class computers (supercomputers) has been predicated on the ability to perform more computing in less space. Two key components have been 1) the reduction of component size, leading to more powerful chips, and 2) the ability to increase the number of processors, leading to more powerful systems. There is strong pressure to keep the physical size of the system compact to keep communication latency manageable, and the result has been a steady increase in power density. The ability to remove the waste heat as quickly and efficiently as possible is becoming a limiting factor in the capability of future machines.
Convection cooling with air is currently the preferred method of heat removal in most data centers. Air handlers force large volumes of cooled air under a raised floor (the deeper the floor, the lower the wind resistance) and up through perforated tiles in front of or under computer racks, where fans within the rack's servers or blade cages distribute it across the heat-radiating electronics, perhaps with the help of heat sinks or heat pipes. This system easily accommodates racks drawing 4-7 kW; for comparison, in 2001 the average U.S. household drew 1.2 kW, so think about cooling half a dozen homes crammed into about 8 square feet. A BlueGene/L rack uses 9 kW. The Energy Smart Data Center's (ESDC's) NW-ICE compute rack uses 12 kW. Petascale system racks may require 60 kW to satisfy the communication latency demands that limit a system's physical size. Additional ducting can be used to keep warm and cold air from mixing in the data center, but air cooling alone is reaching its limits.
Chilled water was used by previous generations of bipolar-transistor-based mainframes, and the Cray-2 immersed the entire system in Fluorinert in the 1980s. Water has a much higher heat capacity than air, and even than Fluorinert, but it is also a conductor, so it cannot come into direct contact with the electronics, which makes transferring the heat into the water more complicated. Blowing hot air through a water-cooled heat exchanger mounted on or near the rack is one common way of improving the ability to cool a rack, but it is limited by the low heat capacity of air and requires energy to move enough air.
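To put numbers on why air struggles, here is a small back-of-the-envelope sketch in C using the standard relation Q = m_dot * c_p * delta_T. The 12 kW rack figure comes from the NW-ICE example above; the air properties and the assumed 12 degree (Celsius) temperature rise are textbook values chosen for illustration, not measurements of any specific data center.

<pre>
/* rack_airflow.c -- rough airflow needed to remove a rack's heat with air alone. */
#include <stdio.h>

int main(void)
{
    double heat_w      = 12000.0;   /* NW-ICE style rack: 12 kW of waste heat        */
    double cp_air      = 1005.0;    /* specific heat of air, J/(kg*K)                */
    double rho_air     = 1.2;       /* density of air, kg/m^3 (near sea level)       */
    double delta_t_k   = 12.0;      /* assumed inlet-to-outlet temperature rise, K   */

    double mass_flow   = heat_w / (cp_air * delta_t_k);     /* kg/s of air           */
    double volume_flow = mass_flow / rho_air;                /* m^3/s                 */
    double cfm         = volume_flow * 2118.88;              /* cubic feet per minute */

    printf("mass flow  : %.2f kg/s\n", mass_flow);                 /* ~1.0 kg/s       */
    printf("volume flow: %.2f m^3/s (~%.0f CFM)\n", volume_flow, cfm); /* ~0.83, ~1760 */
    return 0;
}
</pre>

Roughly 1,700-1,800 CFM of chilled air for a single 12 kW rack illustrates why 60 kW petascale racks push designers toward liquid and two-phase cooling.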
More efficient and effective cooling is only one part of developing a truly energy-smart data center. Not generating heat in the first place is another component, which includes moving some heat sources, such as power supplies, away from the compute components, or using more efficient power-conversion mechanisms, since the power taken off the grid is high-voltage alternating current (AC) while the components use low-voltage direct current (DC). Power-aware components that can reduce their power requirements or turn off entirely when not needed are another element.
This photo story is a peek into one of the world's great supercomputer labs, housed inside the US Oak Ridge National Laboratory, a leading research institution and the site of the reactor in which plutonium for the first atomic bombs was refined during World War II.
Pictured here is one row of the lab's Cray X1E, the largest vector supercomputer in the world, rated at 18 teraflops of processing power. The computer is liquid-cooled, and piping was installed into the floor for that purpose.
[[Image:Comp3.jpg]]
=== Cooling ESDC's NW-ICE ===
[[Image:Comp6.jpg|thumb|right|400px|The Cray X1E is so power-intensive that it requires liquid cooling from 16-inch pipes installed in the floor underneath the supercomputer]]
Fluorinert not only has a high dielectric strength (in excess of 35,000 volts across a 0.1-inch gap), it has other desirable properties as well. 3M Fluorinert liquids are a family of clear, colorless, odorless perfluorinated fluids with a viscosity similar to that of water. These non-flammable liquids are thermally and chemically stable and compatible with most sensitive materials, including metals, plastics, and elastomers. Fluorinert liquids are completely fluorinated, containing no chlorine or hydrogen atoms; the strength of the carbon-fluorine bond contributes to their extreme stability and inertness. Fluorinert liquids are available with boiling points ranging from 30°C to 215°C.
NW-ICE is cooled with a combination of air and two-phase liquid (Fluorinert) cooling, in this case SprayCool. Closed SprayCool modules 1) replace the normal heat sinks on each of the processor chips, 2) cool them with a fine mist of Fluorinert that evaporates as it hits the hot thermal-conduction layer on top of the chip package, and 3) return the heated Fluorinert to the heat exchanger in the bottom of the rack. The heat exchanger, also called a thermal server, transfers the heat to facility chilled water. The rest of the electronics in the rack, including memory, is then easily cooled with air. The high heat-transfer rate of two-phase cooling allows the use of much warmer water than conventional air-water heat exchangers, allowing direct connection to efficient external cooling towers. Two-phase liquid cooling is thermodynamically more efficient than convection cooling with air, so less energy is needed to remove the waste heat while a higher heat load can be handled.
=== Alternative Cooling Approaches<ref>http://nextbigfuture.com/2008/09/cray-has-supercomputer-cooling.html Cray's cooling technology</ref> ===
[[Image:Comp10.jpg|thumb|right|300px|Cray has unveiled a petascale-era cooling technology it says is more than 10 times as efficient as same-size water coils. They call it ECOphlex technology.
The cabinet infrastructure can use either Cray’s high-efficiency vertical air cooling or our new phase change cooling technology that converts an inert refrigerant, R134a, from a liquid to a gas. The other flexibility is that the liquid-cooled systems can use various chilled or unchilled datacenter water temperatures to pull heat from the R134a subsystem and to adapt to changing datacenter conditions]]
Spray Cooling is, of course, just one approach to solving data center cooling problems. A plethora of cooling technologies and products exist. Technologies of interest use air, liquid, and/or solid-state cooling principles:
Evolutionary progress is being made with conventional air-cooling techniques, which are known for their reliability. Current investigation focuses on novel heat sinks and fan technologies that aim to improve contact surface, conductivity, and heat-transfer parameters; efficiency and noise generation are also of great concern with air cooling. Improvements have been made in the design of piezoelectric infrasonic fans, which exhibit low power consumption and have a lightweight and inexpensive construction. One of the most effective air-cooling options is air jet impingement, for which the design and manufacturing of nozzles and manifolds is relatively simple.
The same benefits apply to liquid impingement technologies. In addition, liquid cooling offers higher heat-transfer coefficients as a tradeoff for higher design and operating complexity. One of the most interesting liquid-cooling technologies is the microchannel heat sink used in conjunction with micropumps, because the channels can be manufactured in the micrometer range with the same process technologies used for electronic devices; microchannel heat sinks are effective at supporting large heat fluxes. Liquid-metal cooling, long used in cooling reactors, is starting to become an interesting alternative for high-power-density microdevices. Large heat-transfer coefficients are achieved by circulating the liquid with hydroelectric or hydromagnetic pumps; the pumping circuit is reliable because no moving parts, except for the liquid itself, are involved in the cooling process. Heat-transfer efficiency is also increased by high conductivity, and the low heat capacity of metals leads to less stringent requirements for heat exchangers. Heat extraction with liquids can be increased by several orders of magnitude by exploiting phase changes: heat pipes and thermosyphons exploit the high latent heat of vaporization to remove large quantities of heat from the evaporator section, with the circuit closed by capillary action in the case of heat pipes or by gravity in the case of thermosyphons. These devices are therefore very efficient but are limited in their temperature range and heat-flux capabilities. Thermoelectric coolers (TECs), which use the Peltier-Seebeck effect, do not have the highest efficiency but can provide localized spot cooling, an important capability in modern processor design; research in this area focuses on improving materials and distributing control of TEC arrays so that efficiency over the whole chip improves.
=== Water-cooling System Enables Supercomputers to Heat Buildings<ref>http://www.scientificcomputing.com/news-hpc-Water-cooling-system-enables-supercomputers-to-heat-buildings-070609.aspx</ref> ===
In an effort to achieve energy-aware computing, the Swiss Federal Institute of Technology Zurich (ETH) and IBM have announced plans to build a first-of-a-kind water-cooled supercomputer that will directly repurpose excess heat for the university's buildings. The system, dubbed Aquasar, is expected to decrease the carbon footprint of the system by up to 85% and is estimated to save up to 30 tons of CO2 per year compared to a similar system using today's cooling technologies.
More information on this technique is available here: [http://www.scientificcomputing.com/news-hpc-Water-cooling-system-enables-supercomputers-to-heat-buildings-070609.aspx water-cooled supercomputer].


== Supercomputing Applications ==
UC-Irvine Supercomputer Project Aims to Predict Earth's Environmental Future - In February, the university announced the debut of the Virtual Climate Time Machine -- a computing system designed by IBM to help Irvine scientists predict earth's meteorological and environmental future.


A £900 million scheme to produce a computer system that could predict the next financial crisis has been backed by leading scientists. The [http://www.dailymail.co.uk/sciencetech/article-2069775/Get-ready-supercomputer-predict-future-EU-prepares-900m-funding.html Living Earth Simulator Project] aims to 'simulate everything' on the planet, using anything from tweets to government statistics to map out social trends and predict the next economic crisis. Professor Dirk Helbing has billed the machine, a supercomputer intended to predict the future (even the next recession), as [http://www.dailymail.co.uk/sciencetech/article-2069775/Get-ready-supercomputer-predict-future-EU-prepares-900m-funding.html 'the nervous system for the entire planet'].


=== SGI ALTIX - COLUMBIA SUPERCOMPUTER ===
The Columbia supercluster makes it possible for NASA to achieve breakthroughs in science and engineering for the agency's missions and Vision for Space Exploration. Columbia's highly advanced architecture is also being made available to a broader national science and engineering community.
The Columbia system facts are listed here: [http://www.nas.nasa.gov/Resources/Systems/columbia.html Columbia].


== Supercomputers of the Future ==


== External links ==
1.[http://expertiza.csc.ncsu.edu/wiki/index.php/1.1 Previous wiki]


2.[http://www.netlib.org/benchmark/top500/reports/report93/section2_12_3.html Japan History]


3.[http://www.top500.org Top500-The supercomputer website]


4.[http://www.bukisa.com/articles/13059_supercomputer-evolution Evolution of supercomputers]


5.[http://www.rit.edu/news/?v=47077 Supercomputers to "see" black holes]


6.[http://www.universetoday.com/2006/10/31/supercomputer-simulates-stellar-evolution/ Supercomputer simulates stellar evolution]


7.[http://www.encyclopedia.com/topic/supercomputer.aspx Encyclopedia on supercomputer]


8.[http://news.cnet.com/2300-1_3-5757343-2.html?tag=mncol Image source1]


9.[http://www.silicon.com/technology/hardware/2008/07/14/photos-inside-a-supercomputer-lab-39259269/3/ Image source2]


10.[http://www.scientificcomputing.com/news-hpc-Water-cooling-system-enables-supercomputers-to-heat-buildings-070609.aspx Water-cooling System Enables Supercomputers to Heat Buildings]


11.[http://nextbigfuture.com/2008/09/cray-has-supercomputer-cooling.html Cray's cooling technology]


12.[http://www.nas.nasa.gov/Resources/Systems/columbia.html Columbia system facts]


13.[http://chronicle.com/article/UC-Irvine-Supercomputer/29940/ UC-Irvine Supercomputer Project Aims to Predict Earth's Environmental Future]


14.[http://en.wikipedia.org/wiki/Supercomputer Wikipedia]
16.[http://www.hpcwire.com/offthewire/Georgia-Tech-Uses-Supercomputing-for-Better-Insight-into-Genomic-Evolution-70290117.html Genomic Evolution]


17.[http://books.google.com/books?id=wx4kNh8ArH8C&pg=PA3&lpg=PA3&dq=evolution+of+supercomputers&source=bl&ots=7DVWaEYsZ4&sig=WKRWRuqtM-UfPoBWdka5ZWTgng&hl=en&ei=xAleSTmDpqutgfcj_2jAg&sa=X&oi=book_result&ct=result&resnum=1&ved=0CAoQ6AEwADgK#v=onepage&q=evolution%20of%20supercomputers&f=false The future of supercomputing: an interim report, by the National Research Council (U.S.) Committee on the Future of Supercomputing]


18.[http://www.calit2.net/newsroom/article.php?id=572 Cosmic evolution]
19.[http://www.thefreelibrary.com/Firm+Builds+SuperComputers+for+One-Fifth+the+Price.%28Brief+Article%29-a076702427 SuperComputers for One-Fifth the Price]


20.[http://royal.pingdom.com/2009/06/11/10-of-the-coolest-and-most-powerful-supercomputers-of-all-time/ Top 10 supercomputers]
 


=='''References'''==
<references/>


With Moore’s Law still holding after more than thirty years, the rate at which future mass-market technologies overtake today’s cutting-edge super-duper wonders continues to accelerate. The effects of this are manifest in the abrupt about-face we have witnessed in the underlying philosophy of building supercomputers.

During the 1970s and all the way through the mid-1980s supercomputers were built using specialized custom vector processors working in parallel. Typically, this meant anywhere between four to sixteen CPUs. The next phase of the supercomputer evolution saw the introduction of massive parallel processing and a drift away from vector-only microprocessors. However, the processors used in the construction of this generation of supercomputers were still primarily highly specialized purpose-specific custom designed and fabricated units.

That is no longer true. No longer is silicon fabricated into the incredibly expensive highly specialized purpose-specific customized microprocessor units to serve as the heart and mind of supercomputers. Advances in mainstream technologies and economies of scale now dictate that “off-the-shelf” multicore server-class CPUs are assembled into great conglomerates, combined with mind-boggling quantities of storage (RAM and HDD), and interconnected using light-speed transports.

So we now find that instead of using specialized custom-built processors in their design, supercomputers are based on "off the shelf" server-class multicore microprocessors, such as the IBM PowerPC, Intel Itanium, or AMD x86-64. The modern supercomputer is firmly based around massively parallel processing by clustering very large numbers of commodity processors combined with a custom interconnect.

Currently, the K computer is the world's fastest supercomputer at 10.51 petaFLOPS. K is built by the Japanese computer firm Fujitsu, based in Kobe's Riken Advanced Institute for Computational Science. It consists of 88,000 SPARC64 VIIIfx CPUs, and spans 864 server racks. In November 2011, the power consumption was reported to be 12659.89 kW<ref>http://www.top500.org/list/2011/11/100</ref>. K's performance is equivalent to one million linked desktop computers, which is more than its five closest competitors combined. It consists of 672 cabinets stuffed with circuit-boards, and its creators plan to increase that to 800 in the coming months. It uses enough energy to power nearly 10,000 homes and costs $10 million (£6.2 million) annually to run<ref>http://www.telegraph.co.uk/technology/news/8586655/Japanese-supercomputer-K-is-worlds-fastest.html</ref>.

Some of the companies that build supercomputers are Silicon Graphics, Intel, IBM, Cray, Orion, and Aspen Systems.

Here is a list of the top 10 supercomputers as of November 2011.

First Supercomputer (ENIAC)

ENIAC - The World's first supercomputer

The Electronic Numerical Integrator And Computer (ENIAC), completed in 1946, took the world by storm. It was built to solve very complex problems that would otherwise take months or years to work out by hand, and it was built with a single purpose: to solve scientific problems for the entire nation. The military were the first to use it, benefiting the country's defenses. Even today, most new supercomputer technology is designed for the military first, and then is redesigned for civilian uses.

This system was used to compute firing tables for the White Sands missile range from 1949 until it was replaced in 1957. This gave the military the ability to compute missile trajectories quickly should it be deemed necessary, and it was one of the important technological milestones in United States military history.

ENIAC was a huge machine that used nearly eighteen thousand vacuum tubes and occupied about eighteen hundred square feet of floor space. It weighed nearly thirty tons, making it one of the largest machines of its time. It was considered the greatest scientific invention up to that point because it took only 2 hours of computation time to do what normally took a team of one hundred engineers a year. That made it almost a miracle in some people's eyes, and people got excited about this emerging technology. ENIAC could perform five thousand additions per second. Though that seemed very fast, by today's standards it is extremely slow: most computers today do millions of additions per second.

So what made ENIAC run? Programming it took a lot of manpower and hours to set up. The operators used plugboards, switches, and cables to program the desired commands into the colossal machine. They also had to input numbers by turning dials until they matched the correct values, much as one does on a combination lock.

Cray History <ref>http://www.cray.com/Assets/PDF/about/CrayTimeline.pdf</ref>

Cray 1 supercomputer installed at Lawrence Livermore National Laboratory (LLNL), California, USA.
The Cray-T3E-1200E supercomputer

Cray Inc. has a history that extends back to 1972, when the legendary Seymour Cray, the "father of supercomputing," founded Cray Research. R&D and manufacturing were based in his hometown of Chippewa Falls, Wisconsin, and business headquarters were in Minneapolis, Minnesota.

The first Cray-1 system was installed at Los Alamos National Laboratory in 1976 for $8.8 million. It boasted a world-record speed of 160 million floating-point operations per second (160 megaflops) and an 8 megabyte (1 million word) main memory. To increase the speed of the system, the Cray-1 had a unique "C" shape that allowed the integrated circuits to be closer together; no wire in the system was more than four feet long. To handle the intense heat generated by the computer, Cray developed an innovative refrigeration system using Freon.

In order to concentrate his efforts on design, Cray left the CEO position in 1980 and became an independent contractor. While he worked on the follow-on to the Cray-1, another group within the company developed the first multiprocessor supercomputer, the Cray X-MP, which was introduced in 1982. The Cray-2 system made its debut in 1985, providing a tenfold increase in performance over the Cray-1. In 1988, the Cray Y-MP was introduced, the world's first supercomputer to sustain over 1 gigaflop on many applications. Multiple 333-megaflop processors powered the system to speeds of up to 2.3 gigaflops.

Always a visionary, Seymour Cray had been exploring the use of gallium arsenide in creating a semiconductor faster than silicon. However, the costs and complexities of this material made it difficult for the company to support both the Cray-3 and the Cray C90 development efforts. In 1989, Cray Research spun off the Cray-3 project into a separate company, Cray Computer Corporation, headed by Seymour Cray and based in Colorado Springs, Colorado. Tragically, Seymour Cray died in September 1996 at the age of 71.

The 1990s brought a number of transformations to Cray Research. The company continued its leadership in providing the most powerful supercomputers for production applications. The Cray C90 featured a new central processor that delivered 1 gigaflop of performance; using 16 of these processors and 256 million words of central memory, the system offered unprecedented total performance. The company also produced its first "mini-supercomputer," the Cray XMS system, followed by the Cray Y-MP EL series and the subsequent Cray J90. In 1993, it offered the first massively parallel processing (MPP) system, the Cray T3D supercomputer, and quickly captured MPP market leadership from early MPP companies such as Thinking Machines and MasPar. The Cray T3D proved to be exceptionally robust, reliable, sharable and easy to administer compared with competing MPP systems.

Its successor, the Cray T3E supercomputer, has been the world's best-selling MPP system. The Cray T3E-1200E system was the first supercomputer to sustain one teraflop (1 trillion calculations per second) on a real-world application: in November 1998, a joint scientific team from Oak Ridge National Laboratory, the National Energy Research Scientific Computing Center (NERSC), Pittsburgh Supercomputing Center and the University of Bristol (UK) ran a magnetism application at a sustained speed of 1.02 teraflops. In another technological landmark, the Cray T90 became the world's first "wireless" supercomputer when it was released in 1994. The Cray J90 series, released during the same year, became the world's most popular supercomputer, with over 400 systems sold.

Cray Research merged with SGI (Silicon Graphics, Inc.) in February 1996. In August 1999, SGI created a separate Cray Research business unit to focus exclusively on the unique requirements of high-end supercomputing customers. Assets of this business unit were sold to Tera Computer Company in March 2000; the combined company was renamed Cray Inc. and its ticker symbol was changed to CRAY. Tera had begun software development for the Multithreaded Architecture (MTA) systems in 1988, with hardware design commencing in 1991. The Cray MTA-2 system provides scalable shared memory, in which every processor has equal access to every memory location; this greatly simplifies programming because it eliminates concerns about the layout of memory. The company received its first order for the MTA from the San Diego Supercomputer Center. The multiprocessor system was accepted by the center in 1998, and has since been upgraded to eight processors.

A historical timeline of Cray's supercomputers is available in the reference cited for this section.

Supercomputer History in Japan<ref>http://www.versionone.com/Agile101/Methodologies.asp </ref>

In the beginning there were only a few Cray-1s installed in Japan, and until 1983 no Japanese company produced supercomputers. The first models were announced in 1983. Naturally there had been prototypes earlier, like the Fujitsu F230-75 APU produced in two copies in 1978, but Fujitsu's VP-200 and Hitachi's S-810 were the first officially announced versions. NEC announced its SX series slightly later.

The following decade was rather surprising: each of the domestic manufacturers produced about three generations of machines. During those ten years about 300 supercomputer systems were shipped and installed in Japan, and a whole infrastructure of supercomputing was established. All major universities and many of the large industrial companies and research centers have supercomputers.

In 1984 NEC announced the SX-1 and SX-2 and started delivery in 1985. The first two SX-2 systems were domestic deliveries to Osaka University and the Institute for Computational Fluid Dynamics (ICFD). The SX-2 had multiple pipelines, with one set of add and one set of multiply floating-point units each. It had a cycle time of 6 nanoseconds, so each pipelined floating-point unit could peak at 167 Mflop/s. With four pipelines per unit and two floating-point units, the peak performance was about 1.3 Gflop/s. Due to limited memory bandwidth and other issues, the performance in benchmark tests was less than half the peak value. The SX-1 had a slightly longer cycle time of 7 ns and only half the number of pipelines; its maximum execution rate was 570 Mflop/s.
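The peak figures quoted here follow directly from the cycle time and the number of pipelines and floating-point units. The short sketch below is an illustrative back-of-the-envelope calculation in C, using only the SX-2 numbers already given above, not additional vendor data.

 #include <stdio.h>
 
 /* Back-of-the-envelope peak performance for a vector processor:
  * peak = (1 / cycle_time) * pipelines_per_unit * floating_point_units
  * The numbers below are the SX-2 figures quoted in the text. */
 int main(void)
 {
     double cycle_time_ns = 6.0;               /* SX-2 cycle time              */
     double clock_ghz = 1.0 / cycle_time_ns;   /* ~0.167 GHz, i.e. ~167 Mflop/s
                                                  per pipelined unit           */
     int pipelines_per_unit = 4;
     int fp_units = 2;                         /* one add and one multiply set */
 
     double peak_gflops = clock_ghz * pipelines_per_unit * fp_units;
     printf("SX-2 peak: %.2f Gflop/s\n", peak_gflops);  /* about 1.33 Gflop/s  */
     return 0;
 }

The same arithmetic can be applied to the other vector machines described in this section.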

At the end of 1987, NEC improved its supercomputer family with the introduction of A-series which gave improvements to the memory and I/O bandwidth. The top model, the SX-2A, had the same theoretical peak performance as the SX-2. Several low-range models were also announced but today none of them qualify for the TOP500.

In 1989 NEC announced a rather aggressive new model, the SX-3, with several important changes. The vector cycle time was brought down to 2.9 ns and the number of pipelines was doubled, but most significantly NEC added multiprocessing capability to the new series. It contained four independent arithmetic processors, each with a scalar and a vector processing unit, and peak performance increased by more than an order of magnitude, from 1.33 Gflop/s on the SX-2A to 22 Gflop/s. The combination of these features made the SX-3 the most powerful vector processor in the world. The total memory bandwidth was subdivided into two halves, which in turn featured two vector load and one vector store paths per pipeline set, as well as one scalar load and one scalar store path. This gave a total memory bandwidth to the vector units of about 66 GB/s. Like its predecessors, the SX-3 could not offer the memory bandwidth needed to sustain peak performance unless most operands were contained in the vector registers.

In 1992 NEC announced the SX-3R with a couple of improvements over the first version. The clock was further reduced to 2.5 ns, so that the peak performance increased to 6.4 Gflop/s per processor.

Fujitsu's VP series <ref>http://www.netlib.org/benchmark/top500/reports/report94/Japan/node5.html</ref>

In 1977 Fujitsu produced the first Japanese supercomputer prototype, called the F230-75 APU, which was a pipelined vector processor added to a scalar processor. This attached processor was installed at the Japan Atomic Energy Research Institute (JAERI) and the National Aerospace Laboratory (NAL).

In 1983 the company came out with the VP-200 and VP-100 systems. In 1986 the VP-400 was released with twice as many pipelines as the VP-200, and during mid-1987 the whole family became the E-series with the addition of an extra (multiply-add) pipelined floating-point unit that increased the performance potential by 50%. With the flexible range of systems in this generation (VP-30E to VP-400E), good marketing and a broad range of applications, Fujitsu became the largest domestic supplier, with over 80 systems installed, many of which are named in the TOP500.

Available since 1990, the VP-2000 family can offer a peak performance of 5 Gflop/s thanks to a vector cycle time of 3.2 ns. The family was initially announced with four vector performance levels (models 2100, 2200, 2400, and 2600), where each level could have either one or two scalar processors, but the VP-2400/40 doubled this limit, offering a peak vector performance similar to the VP-2600. Most of these models are now represented in the Japanese TOP500.

Previous machines were heavily criticized for their lack of memory throughput. The VP-400 series had only one load/store path to memory, which peaked at 4.57 GB/s. This was improved in the VP-2000 series by doubling the paths, so that each pipeline set can do two load/store operations per cycle, giving a total transfer rate of 20 GB/s. Fujitsu recently decided to use the label VPX-2x0 for the VP-2x00 systems adapted to their Unix system. Keio University now runs such a system.

The VPP-500 series

In 1993 Fujitsu surprised the world by announcing a Vector Parallel Processor (VPP) series designed to reach hundreds of Gflop/s. At the core of the system is a combined GaAs/BiCMOS processor, based largely on the original design of the VP-200. The processor chip's gate delay was made as low as 60 ps in the GaAs chips by using the most advanced hardware technology available. The resulting cycle time was 9.5 ns. The processor has four independent pipelines, each capable of executing two multiply-add instructions in parallel, resulting in a peak speed of 1.7 Gflop/s per processor. Each processor board is equipped with 256 megabytes of central memory.

The most amazing part of the VPP-500 is the capability to interconnect up to 222 processors via a crossbar network with two independent (read/write) connections, each operating at 400 MB/s. The total memory is addressed via virtual shared memory primitives. The system is meant to be front-ended by a VP-2x00 system that handles input/output, permanent file storage, and job queue logistics.

An early version of this system, called the Numerical Wind Tunnel, was developed together with NAL. This early version of the VPP-500 (with 140 processors) was the fastest supercomputer in the world at the time, standing out at the top of the TOP500 with a value twice that of the TMC CM-5/1024 installed at Los Alamos.

Hitachi's Supercomputers

Hitachi has been producing supercomputers since 1983, but differs from other manufacturers by not exporting them. For this reason, their supercomputers are less well known in the West. After having gone through two generations of supercomputers, the S-810 series started in 1983 and the S-820 series in 1988, Hitachi leapfrogged NEC in 1992 by announcing the most powerful vector supercomputer yet. The top S-820 model consisted of one processor operating at a 4 ns cycle time with four vector pipelines, each with two independent floating-point units. This corresponded to a peak performance of 2 Gflop/s. Hitachi put great emphasis on a fast memory, although this meant limiting its size to a maximum of 512 MB. The memory bandwidth of 2 words per pipe per vector cycle, giving a peak rate of 16 GB/s, was a respectable achievement, but it was not enough to keep all functional units busy.

The S-3800, announced in the early 1990s, is comparable to NEC's SX-3R in its features. It has up to four scalar processors, each with its own vector processing unit. These vector units have in turn up to four independent pipelines and two floating-point units that can each perform a multiply-add operation per cycle. With a cycle time of 2.0 ns, the whole system achieves a peak performance of 32 Gflop/s.

The S-3600 systems can be seen as the design of the S-820 recast in more modern technology. The system consists of a single scalar processor with an attached vector processor. The four models in the range correspond to a successive reduction of the number of pipelines and floating-point units installed. Lists of the top 500 supercomputers and their statistics are available at the TOP500 site.

IBM History

IBM 704 at Lawrence Livermore National Laboratory (LLNL), California, USA (October 1956).

In the early 1950s, IBM built their first scientific computer, the IBM 701. The IBM 704 and other high-end systems appeared in the 1950s and 1960s, but by today's standards, these early machines were little more than oversized calculators. After going through a rough patch, IBM re-emerged as a leader in supercomputing research and development in the mid-1990s, creating several systems for the U.S. Government's Accelerated Strategic Computing Initiative (ASCI). These computers boast approximately 100 times as much computational power as supercomputers of just ten years ago.

Sequoia is a petascale Blue Gene/Q supercomputer being constructed by IBM for the National Nuclear Security Administration as part of the Advanced Simulation and Computing Program (ASC). It is scheduled to be delivered to the Lawrence Livermore National Laboratory in 2011 and fully deployed in 2012.

Sequoia was revealed in February 2009; the targeted performance of 20 petaflops was more than the combined performance of the top 500 supercomputers in the world and about 20 times faster than Roadrunner, the reigning champion of the time. It will be twice as fast as the current record-holding K computer and also twice as fast as the intended future performance of Pleiades.

IBM has also built a smaller prototype called "Dawn," capable of 500 teraflops, using the Blue Gene/P design, to evaluate the Sequoia design. This system was delivered in April 2009 and entered the Top500 list at 9th place in June 2009.

Supercomputer speeds are advancing rapidly as manufacturers latch on to new techniques and cheaper prices for computer chips. The first machine to break the teraflop barrier - a trillion calculations per second - was only built in 1996. A few years ago a $59m machine from Sun Microsystems, called Constellation, attempted to take the crown of world's fastest with operating speeds of 421 teraflops; Sequoia's 20-petaflop target represents nearly 50 times that computing power.

Current Top Supercomputers<ref>http://www.top500.org</ref>

Comparison of Top Supercomputer Vendors In The World (November 2011)

Vendor System Count System Share(%) Rmax(GFlops) Rpeak(GFlops) Processor cores
IBM 223 44.6 20234409.46 31888720.48 3317036
Hewlett-Packard 141 28.2 9673402.4 16410722.22 1509694
Cray Inc. 27 5.4 10614483 13558554.6 1457068
SGI 17 3.4 2974418 3764607.92 336104
Bull SA 15 3 3287252 4146261.12 321284
Appro International 13 2.6 2371260 3122119.2 219648
Dell 11 2.2 1160900 1492525.8 136722
Oracle 10 2 1648199.82 1965064.96 183040
Hitachi 5 1 404551 548899.8 32032
Fujitsu 4 0.8 10909940 11707788 743176

Legend

  • Vendor – The manufacturer of the platform and hardware.
  • Rmax – The highest performance measured using the LINPACK benchmark suite; this is the number used to rank the computers. In the table above it is reported in gigaflops (GFlops).
  • Rpeak – The theoretical peak performance of the system, also reported in GFlops.
  • Processor cores – The number of active processor cores used.
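To make the relationship between these columns concrete, the sketch below is a small illustrative C program that uses only the figures already shown in the IBM row of the table to derive the LINPACK efficiency (Rmax/Rpeak) and the average per-core performance.

 #include <stdio.h>
 
 /* Derive LINPACK efficiency and per-core performance from the
  * November 2011 vendor table above (IBM row). */
 int main(void)
 {
     double rmax_gflops  = 20234409.46;   /* sum of Rmax over IBM systems  */
     double rpeak_gflops = 31888720.48;   /* sum of Rpeak over IBM systems */
     double cores        = 3317036.0;     /* total processor cores         */
 
     printf("Efficiency: %.1f %%\n", 100.0 * rmax_gflops / rpeak_gflops); /* ~63.5 %       */
     printf("Per-core Rmax: %.2f Gflop/s\n", rmax_gflops / cores);        /* ~6.1 Gflop/s  */
     return 0;
 }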

Top 10 supercomputers of today<ref>http://www.junauza.com/2011/07/top-10-fastest-linux-based.html</ref>

Below are the top 10 supercomputers in the world (as of June 2011). An effort has been made to compare the architectural features of these supercomputers.

World's fastest supercomputer: K computer

1. K computer:

  • The K computer is currently the world's fastest supercomputer. It was developed by Fujitsu at the RIKEN Advanced Institute for Computational Science campus in Kobe, Japan.
  • As per the LINPACK benchmarking standards, the K computer delivered a mind-blowing 8.16 petaflops, toppling Tianhe-1A off its number one spot.
  • This supercomputer uses 68,544 2.0 GHz 8-core SPARC64 VIIIfx processors packed in 672 cabinets, for a total of 548,352 cores. In layman's terms, the K computer's performance is almost equivalent to that of 1 million desktop computers.
  • The file system used here is an optimized parallel file system based on Lustre, called the Fujitsu Exabyte File System.
  • One disadvantage of this high performer is that it consumes about 9.8 MW of power, roughly enough to light 10,000 houses. Compared with its closest competitor, the Tianhe-1A, the K computer is miles ahead, and it is highly unlikely to lose its number 1 spot any time soon.
Tianhe-1A

2. Tianhe-1A:

  • Tianhe-1A is an upgraded model of the Tianhe-1 that was developed by China's National University of Defense Technology in Changsha, Hunan. Tianhe-1 stands for “Milky Way number 1” in Chinese.
  • Until June 2011, Tianhe-1A was the world's fastest supercomputer, before being overtaken by Japan's K computer.
  • This 88-million-dollar beast consists of 112 computer cabinets, 12 storage cabinets, 6 communication cabinets and 8 I/O cabinets. Each cabinet has 4 frames, each frame having eight blades and a 16-port switching board. The system has 3,584 such blades containing 7,168 GPUs and 14,336 CPUs.
  • This Chinese marvel has delivered a peak performance of about 2.5 petaflops and is used for computations in petroleum exploration and aircraft design.
  • The best part about Tianhe-1A, however, is that it is an open-access computer, which means that it provides services to other countries too.
  • Maintaining this supercomputer costs about 20 million USD a year.
Jaguar Cray


3. Jaguar Cray:

  • Running on the Cray Linux Environment, Jaguar is currently the world's third fastest supercomputer. It has achieved a peak performance of about 1.75 petaflops and was once the world's fastest supercomputer, before being overtaken by the Chinese Tianhe-1A in 2010-11.
  • The current model, the Cray XT5, is an upgraded version of the popular Cray XT4. Jaguar has around 224,256 x86-based AMD Opteron processor cores, with 16 GB of memory for each node.
  • The file system used here is an external Lustre file system, a massively parallel distributed file system used for cluster computing. The file system is capable of storing over 10 petabytes of data and has a read/write benchmark of 240 GB/s.
  • This supercomputer costs a whopping 104 million USD and can be found at the Oak Ridge National Laboratory in Tennessee.
Nebulae


4. Nebulae:

  • Nebulae is a research supercomputer located in Shenzhen, Guangdong, China.
  • It has a theoretical peak performance of around 2.9 petaflops.
  • Nebulae is the 4th most powerful supercomputer in the world and the second most powerful in China.
TSUBAME 2.0


5. TSUBAME 2.0:

  • TSUBAME 2.0 is the successor of TSUBAME 1.0, which was previously the fastest supercomputer in Japan.
  • TSUBAME stands for Tokyo Tech Supercomputer Ubiquitously Accessible Mass storage Environment. Tsubame is also the Japanese word for a swallow, which forms an integral part of the system's logo.
  • The Japanese marvel has a theoretical peak performance of a whopping 2.4 petaflops, making it the 5th fastest supercomputer in the world. It has an aggregated memory bandwidth of 720 terabytes per second.
Cielo Cray XE6


6. Cielo Cray XE6:

  • This machine, unveiled in May 2010, is the sixth fastest supercomputer in the world.
  • It is powered by 8-core AMD x86-64 Opteron processors. Cielo is located at Los Alamos National Laboratory in New Mexico, USA, and is mainly used for research purposes.
Pleiades SGI Altix


7. Pleiades SGI Altix:

  • Pleiades is a supercomputer used by NASA to conduct modeling and simulation for its missions. Pleiades is the world's 7th fastest supercomputer.
  • Its performance averages around 1.09 petaflops, with a peak of 1.315 petaflops. It is loaded with 185 TB of memory and 111,104 cores.
  • The machine runs on SUSE Linux and has about 6.9 PB of storage space with 12 DataDirect Networks (DDN) RAIDs.
Cray XE


8. Cray XE6:

  • Housed at DOE's National Energy Research Scientific Computing Center (NERSC) in California, the Cray XE6 is currently the world's 8th fastest supercomputer.
  • It has achieved a peak performance of 1.5 petaflops and runs on Cray Linux Environment version 3. Specs include 1,536 cores per cabinet, with 8- or 12-core 64-bit AMD Opteron 6100 Series processors.
  • The XE6 also comes with a Hardware Supervisory System (HSS) that integrates hardware and software components to provide system monitoring, fault identification and recovery.
Tera 100


9. Tera 100:

  • Built by the French company Bull SA, Tera 100 is Europe's fastest supercomputer.
  • It runs on Red Hat Enterprise Linux and averages 1 petaflops, peaking at 1.25 petaflops.
  • It is one of the most efficient supercomputers in the world, running at an efficiency of 83.7%.
  • As for the specs, Tera 100 comes with 20 petabytes of storage, 300 TB of memory and the processing power of 140,000 Intel Xeon processor cores.
  • This supercomputer includes specially designed water-cooled doors, which cut electrical consumption in half compared with traditional air-cooled ones.
IBM Roadrunner


10. IBM Roadrunner:

  • The world's tenth fastest supercomputer, IBM Roadrunner was built by IBM at the Los Alamos National Laboratory in New Mexico, USA.
  • It cost around 125 million USD and is the fourth most energy-efficient supercomputer in the world.
  • A computer's performance is generally measured in FLOPS, which stands for floating-point operations per second. IBM's Roadrunner has a sustained speed of about 1 petaflops (10^15 FLOPS), with a top speed of 1.456 petaflops which it reached in November 2008.
  • It uses Red Hat Enterprise Linux along with Fedora as its operating system and occupies almost 6,000 sq. ft. of floor space.
  • Roadrunner's main use is to predict whether the USA's aging arsenal of nuclear weapons is safe and reliable. It is also used in other fields such as the financial, aerospace and automotive industries.
  • The unique thing about Roadrunner is its use of two different processing architectures at the same time, more commonly known as a hybrid design.
  • This design pairs AMD's Opteron with IBM's own PowerXCell 8i. In case your dual-core computer's speed was never good enough for you, the IBM Roadrunner boasts a whopping 122,400 cores.


Benchmarking: The figures quoted above, in petaflops, are measured using LINPACK. LINPACK is a collection of Fortran subroutines that analyze and solve linear equations and linear least-squares problems. The computer runs a program that solves a dense system of linear equations, and the floating-point rate of execution is measured. It is currently the most widely accepted way to measure how fast a computer works, making it the benchmarking standard of the supercomputer world.
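The sketch below is a heavily simplified illustration of the LINPACK idea in C, not the actual LINPACK or HPL code: it solves a small dense system by Gaussian elimination, counts the conventional 2/3*n^3 + 2*n^2 floating-point operations, and divides by the elapsed time to estimate a flop rate.

 #include <stdio.h>
 #include <time.h>
 
 /* Illustrative LINPACK-style measurement (not the real benchmark code):
  * solve a dense system Ax = b by Gaussian elimination and report an
  * approximate flop rate using the standard 2/3*n^3 + 2*n^2 count. */
 #define N 512
 
 static double a[N][N], b[N];
 
 int main(void)
 {
     /* fill A (diagonally dominant, so no pivoting is needed) and b */
     for (int i = 0; i < N; i++) {
         b[i] = 1.0;
         for (int j = 0; j < N; j++)
             a[i][j] = (i == j) ? N : 1.0 / (1.0 + i + j);
     }
 
     clock_t start = clock();
 
     /* forward elimination */
     for (int k = 0; k < N - 1; k++)
         for (int i = k + 1; i < N; i++) {
             double m = a[i][k] / a[k][k];
             for (int j = k; j < N; j++)
                 a[i][j] -= m * a[k][j];
             b[i] -= m * b[k];
         }
 
     /* back substitution */
     for (int i = N - 1; i >= 0; i--) {
         double s = b[i];
         for (int j = i + 1; j < N; j++)
             s -= a[i][j] * b[j];
         b[i] = s / a[i][i];
     }
 
     double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;
     double flops = (2.0 / 3.0) * N * N * N + 2.0 * N * N;
     printf("n = %d, time = %.3f s, rate = %.2f Mflop/s\n",
            N, seconds, flops / seconds / 1e6);
     return 0;
 }

The real benchmark uses much larger matrices and highly tuned, parallel linear-algebra kernels, but the measurement principle is the same.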

Supercomputer Design

There are two approaches to the design of supercomputers. One, called massively parallel processing (MPP), is to chain together thousands of commercially available microprocessors using parallel processing techniques. A variant of this, called a Beowulf cluster or cluster computing, employs large numbers of personal computers interconnected by a local area network and running programs written for parallel processing. The other approach, called vector processing, is to develop specialized hardware to solve complex calculations. This technique was employed in the Earth Simulator, a Japanese supercomputer introduced in 2002 that uses 640 nodes composed of 5,104 specialized processors to execute 35.6 trillion mathematical operations per second. It is used to analyze earthquakes, weather patterns and climate change, including global warming.

Supercomputer Architecture <ref>http://www.top500.org</ref>

Over the years, supercomputer architecture has changed considerably. Various architectures were developed and abandoned as computer technology progressed. In the early '90s, single processors were still common in the supercomputer arena, but two other architectures played more important roles. One was Massively Parallel Processing (MPP), a computer system with many independent arithmetic units or entire microprocessors that run in parallel. The other was Symmetric Multiprocessing (SMP), a good representative of the earliest styles of multiprocessor machine architectures. These two architectures met two of the supercomputer's key needs: parallelism and high performance.

As time passed, more processing units became available. In the early 2000s, constellation computing was widely used, and MPP reached its peak share. With the rise of cluster computing, the supercomputer world was transformed: in 2009, cluster computing accounted for 83.4% of the architectures in the Top 500. A cluster computer is a group of linked computers working together so closely that in many respects they form a single computer. Compared to a single computer, clusters are deployed to improve performance and/or availability while being more cost-effective than single computers of comparable speed or availability. Cluster computers offer a high-performance computing alternative to SMP and massively parallel computing systems. Using redundancy, cluster architectures also aim to provide reliability. From this analysis, we can see that supercomputer design closely tracks, and is actively motivated by, technological change.

The supercomputer of today is built on a hierarchical design in which a number of clustered computers are joined by ultra-high-speed optical network interconnections:

1. Supercomputer – A cluster of interconnected multi-core microprocessor computers.
2. Cluster members – Each cluster member is a computer composed of a number of Multiple Instruction, Multiple Data (MIMD) multi-core microprocessors and runs its own instance of an operating system.
3. Multi-core microprocessors – Each multi-core microprocessor has multiple processing cores, largely transparent to the application software, that share tasks using Symmetric Multiprocessing (SMP) and Non-Uniform Memory Access (NUMA).
4. Microprocessor cores – Each core is in itself a complete Single Instruction, Multiple Data (SIMD) capable microprocessor able to run a number of instructions simultaneously and execute many SIMD instructions per nanosecond.

  • SISD machines: These are conventional systems that contain one CPU and hence can accommodate only one instruction stream, executed serially. Nowadays many large mainframes may have more than one CPU, but each of these executes instruction streams that are unrelated. Therefore, such systems should still be regarded as SISD machines acting on different data spaces. Examples of SISD machines are workstations like those of DEC, Hewlett-Packard and Sun Microsystems.
  • SIMD machines: Such systems often have a large number of processing units, ranging from 1,024 to 16,384, that all may execute the same instruction on different data in lock-step; a single instruction manipulates many data items in parallel. Examples of SIMD machines are the CPP DAP Gamma II and the Quadrics Apemille.

Another subclass of the SIMD systems is the vector processors. Vector processors act on arrays of similar data rather than on single data items, using specially structured CPUs. When data can be manipulated by these vector units, results can be delivered at a rate of one, two or three per clock cycle. So vector processors execute on their data in an almost parallel way, but only when executing in vector mode; in that case they are several times faster than when executing in conventional scalar mode. For practical purposes vector processors are mostly regarded as SIMD machines. An example of such a system is the NEC SX-6i.

  • MISD machines: In theory, in this type of machine multiple instructions act on a single stream of data. As yet no practical machine in this class has been constructed, nor are such systems easy to conceive.
  • MIMD machines: These machines execute several instruction streams in parallel on different data. The difference from the multi-processor SISD machines is that the instructions and data are related because they represent different parts of the same task to be executed. So MIMD systems may run many sub-tasks in parallel in order to shorten the time-to-solution for the main task. There is a large variety of MIMD systems, and especially in this class the Flynn taxonomy proves not to be fully adequate for classification: systems that behave very differently, like a four-processor NEC SX-6 and a thousand-processor SGI/Cray T3E, both fall into this class. We therefore make another important distinction between classes of systems below.

a) Shared memory systems: Shared memory systems have multiple CPUs, all of which share the same address space. This means that the knowledge of where data is stored is of no concern to the user, as there is only one memory accessed by all CPUs on an equal basis. Shared memory systems can be either SIMD or MIMD. Single-CPU vector processors can be regarded as an example of the former, while the multi-CPU models of these machines are examples of the latter. We will sometimes use the abbreviations SM-SIMD and SM-MIMD for the two subclasses.

b) Distributed memory systems: In this case each CPU has its own associated memory. The CPUs are connected by some network and may exchange data between their respective memories when required. In contrast to shared memory machines, the user must be aware of the location of the data in the local memories and will have to move or distribute these data explicitly when needed (a minimal example of this programming style follows below). Again, distributed memory systems may be either SIMD or MIMD. The first class of SIMD systems mentioned, which operate in lock step, all have distributed memories associated with the processors. Distributed-memory MIMD systems exhibit a large variety in the topology of their connecting network. The details of this topology are largely hidden from the user, which is quite helpful with respect to portability of applications. For the distributed-memory systems we will sometimes use DM-SIMD and DM-MIMD to indicate the two subclasses.
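As an illustration of the distributed-memory style, the following minimal MPI sketch in C (a hypothetical example, not tied to any particular machine described here) shows the explicit data movement the user is responsible for: rank 0 owns a value in its private memory and must send it as a message before rank 1 can use it.

 #include <mpi.h>
 #include <stdio.h>
 
 /* Minimal DM-MIMD sketch: each process has its own address space, so
  * data must be moved explicitly with messages. */
 int main(int argc, char **argv)
 {
     int rank, size;
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     MPI_Comm_size(MPI_COMM_WORLD, &size);
 
     if (size >= 2) {
         double payload = 3.14;
         if (rank == 0) {
             /* rank 0 ships its local value to rank 1 */
             MPI_Send(&payload, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
         } else if (rank == 1) {
             /* rank 1 cannot see rank 0's memory; it must receive a copy */
             MPI_Recv(&payload, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                      MPI_STATUS_IGNORE);
             printf("rank 1 received %f from rank 0\n", payload);
         }
     }
 
     MPI_Finalize();
     return 0;
 }

On a shared-memory (SM-MIMD) machine the second processor could simply read the same address; here the copy through the network is what the programmer has to manage.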

Why have vector machines declined so fast in popularity?<ref>http://jes.ece.wisc.edu/papers/ics98.espasa.pdf</ref>

Since the early nineties, supercomputers based on the vector paradigm have lost their dominance of the supercomputing market. In June 1993, 310 of the top 500 computers were parallel-vector machines. All the machines on the list at that time totaled a peak computing power of 1.8 teraflops, and the 310 vector systems represented roughly 43% of that computing power. Four and a half years later, in November 1997, the same list reported that only 108 PVPs remained in the top 500. Moreover, the total peak power of all systems listed had skyrocketed to 24.2 teraflops, with the vector machines accounting for only 17% of it.

The main reason for the decline of vector machines is the cost. Why are vector supercomputers so much more expensive than MPPs or SMPs? There are several related reasons.

  • Probably the most important reason is that scalar-parallel systems use commodity parts. With commodity parts, design and non-recurring manufacturing costs can be spread over a much larger number of chips. If a vector machine only sells a few dozen copies, design costs can easily be the dominant overall cost.
  • The most expensive part of a computer (whether a PC, workstation, or supercomputer) is usually the memory system. Vector processors provide high-performance memory systems that sustain very large bandwidths between main memory and the vector registers. To achieve this bandwidth, vector processors rely on high-performance, highly interleaved memory systems. Moreover, for a high-performance machine, latency also plays an important role. Therefore, vector supercomputers use the fastest memory technology available.
  • Another problem is how one packages a processor with such high bandwidths. Consider a 20 GB/s memory system and a typical CMOS package that allows its pins to operate at 133 MHz. A back-of-the-envelope calculation (see the sketch after this list) indicates that about 1,200 pins would be needed to sustain a peak of 20 GB/s. Such pin counts are difficult to implement. In the past, vector manufacturers have employed multi-chip designs, which tend to be substantially more expensive than single-chip solutions.
  • Another factor that keeps vector costs up is the base technology used in these machines. Until very recently, most vector designs were based on ECL. While this choice was adequate in the 1976-1991 time frame, vector vendors apparently failed to realize the potential of CMOS implementations, nor were they willing to shift from gate-array to custom design in order to exploit the capabilities of CMOS. In the eight years that followed, CMOS chips overtook ECL in numbers of transistors, speed, and reliability. Recently, most vector vendors have introduced CMOS-based vector machines.
  • Also important is the fact that users often have difficulty achieving peak performance on vector supercomputers. Despite high performance processors and high bandwidth memory systems, even programs that are highly vectorized fall short of theoretical peak performance.
  • Finally, it is important to note that there have been relatively few architectural innovations since the CRAY-1. The top of the line CRAY T90 still has only 8 vector registers and has a relatively slow scalar microarchitecture when compared to current superscalar microprocessors. Meanwhile, superscalar microprocessors have adopted many architectural features to increase performance while still retaining low cost.
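As a quick check of the packaging argument in the list above, the following back-of-the-envelope sketch in C uses only the figures quoted in that item: a 20 GB/s memory system and pins toggling at 133 MHz, with the simplifying assumption of one bit transferred per pin per cycle.

 #include <stdio.h>
 
 /* Back-of-the-envelope pin count for a 20 GB/s memory interface whose
  * pins toggle at 133 MHz (one bit per pin per cycle assumed). */
 int main(void)
 {
     double bandwidth_bytes = 20e9;    /* 20 GB/s target            */
     double pin_rate_hz     = 133e6;   /* per-pin transfer rate     */
 
     double pins = 8.0 * bandwidth_bytes / pin_rate_hz;
     printf("data pins needed: ~%.0f\n", pins);   /* roughly 1,200 pins */
     return 0;
 }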

Japan's Fujitsu announced its decision to shift to scalar processors in 2009.

Supercomputer Operating Systems <ref>http://www.top500.org</ref>

Supercomputers use a variety of operating systems; the operating system of a specific supercomputer depends on its vendor. Until the early-to-mid-1980s, supercomputers usually sacrificed instruction-set compatibility and code portability for performance (processing and memory access speed). For the most part, supercomputers at this time (unlike high-end mainframes) had vastly different operating systems. The Cray-1 alone had at least six different proprietary OSs largely unknown to the general computing community. In a similar manner, there existed different and incompatible vectorizing and parallelizing compilers for Fortran. This trend would have continued with the ETA-10 were it not for the initial instruction-set compatibility between the Cray-1 and the Cray X-MP, and the adoption of operating systems such as Cray's Unicos, and later Linux.

From the Top 500 statistics, before the 21st century almost all of the OSs fell into the Unix family, while after 2000 more and more Linux versions were adopted for supercomputers. In the November 2009 list, 446 of the top 500 supercomputers were running some distribution of Linux. Listing the OS for each of the top 20 supercomputers makes the result even more striking: 18 of the top 20 supercomputers in the world run some form of Linux, and all of the top 10 do. Looking at the list, it becomes clear that prominent supercomputer vendors such as Cray, IBM and SGI have wholeheartedly embraced Linux. In a few cases Linux coexists with a lightweight kernel running on the compute nodes (the part of the supercomputer that performs the actual calculations), but often even these lightweight kernels are based on Linux. Cray, for example, has a modified version of Linux called CNL (Compute Node Linux).

IBM AIX

AIX (Advanced Interactive eXecutive) is the name given to a series of proprietary operating systems sold by IBM for several of its computer system platforms, based on UNIX System V with 4.3BSD-compatible command and programming interface extensions. AIX runs on 32-bit or 64-bit IBM POWER or PowerPC CPUs (depending on version) and can address up to 32 terabytes (TB) of random access memory. The JFS2 file system—first introduced by IBM as part of AIX—allows computer files and partitions over 4 petabytes in size.

Linux Family

SuSE Linux Enterprise Server Family

SLES has been developed based on SUSE Linux. It was first released on 31 October 2000 as a version for IBM S/390 mainframe machines. In December 2000, the first enterprise client (Telia) was made public. In April 2001, the first SLES for x86 was released. SLES version 9 was released in August 2004, SUSE Linux Enterprise Server 10 in July 2006, and SUSE Linux Enterprise Server 11 on March 24, 2009. All of them are supported by the major hardware vendors: IBM, HP, Sun Microsystems, Dell, SGI, Lenovo, and Fujitsu Siemens Computers.

Redhat Enterprise/CentOS

Red Hat Enterprise Linux and CentOS are adopted on some vendors' platforms. Red Hat Enterprise Linux (RHEL) is a Linux distribution produced by Red Hat and targeted toward the commercial market, including mainframes. CentOS is a community-supported, free and open-source operating system based on Red Hat Enterprise Linux.

UNICOS

UNICOS is the name of a range of Unix-like operating system variants developed by Cray for its supercomputers. UNICOS is the successor of the Cray Operating System (COS). It provides network clustering and source-code compatibility layers for some other Unixes. UNICOS was originally introduced in 1985 with the Cray-2 system and later ported to other Cray models. The original UNICOS was based on UNIX System V Release 2 and had numerous BSD features (e.g., networking and file system enhancements) added to it. UNICOS dominated supercomputing in 1993, in the sense that 188 of the top 500 supercomputers were running it. Of course, one of the reasons is that Cray was the largest supercomputer vendor at the time (40% of the supercomputers on the list were from Cray). As more and more companies entered the market, UNICOS's share dropped along with Cray's share of the hardware market. After 2000, Cray began to run Linux and even Windows HPC on its machines, and UNICOS gradually faded out of the supercomputer arena.

Solaris

Solaris appeared when Sun Microsystems began to ship its supercomputers to the market. Technically, Solaris is one of the most powerful operating systems, sometimes more secure and efficient than Linux distributions and other Unix systems. But Solaris is disappearing as Sun Microsystems leaves the market.

Windows HPC 2008

Windows HPC Server 2008, released by Microsoft in September 2008, is the successor product to Windows Compute Cluster Server 2003. Windows HPC Server 2008 is designed for high-end applications that require high performance computing clusters. This version of the server is claimed to efficiently scale to thousands of cores. It includes features unique to HPC workloads: a new high-speed NetworkDirect RDMA, highly efficient and scalable cluster management tools, a service-oriented architecture (SOA) job scheduler, and cluster interoperability through standards such as the High Performance Computing Basic Profile (HPCBP) specification produced by the Open Grid Forum (OGF).

Operating Systems Trends -- Why Linux?

AIX was the operating system for IBM's own mainframes, but IBM has been a strong proponent of Linux for years. When IBM started its Blue Gene series of supercomputers back in 2002, it chose Linux as the operating system. The following quote from Bill Pulleyblank of IBM Research nicely sums up why IBM and many other vendors have chosen Linux: "We chose Linux because it’s open and because we believed it could be extended to run a computer the size of Blue Gene. We saw considerable advantage in using an operating system supported by the open-source community, so that we can get their input and feedback".

In short, it looks like Linux has conquered the supercomputer market almost completely. Linux outguns popular Unix operating systems like AIX and Solaris because those systems contain features that make them great for commercial users but add a lot of system overhead that ends up limiting overall performance. One example: a "virtualization" feature in AIX lets many applications share the same processor, but it hammers performance. Linus Torvalds says that Linux has caught on in part because, while typical Unix versions run on only one or two hardware architectures, Linux runs on more than 20 different hardware architectures, including machines based on Intel microprocessors as well as RISC-based computers from IBM and HP. Linux is easy to get, has no licensing costs, has all the infrastructure in place, and runs on pretty much every relevant piece of hardware out there.

Supercomputer Interconnects<ref>http://compnetworking.about.com/library/weekly/aa051902d.htm</ref>

In order for a large number of processors to work together, supercomputers utilize specialized network interfaces. These interconnects support high bandwidth and very low latency communication.

Interconnects join nodes inside the supercomputer together. A node is a communication endpoint running one instance of the operating system. Nodes utilize one or several processors and different types of nodes can exist within the system. Compute nodes, for example, execute the processes and threads required for raw computation. I/O nodes handle the reading and writing of data to disks within the system. Service nodes and network nodes provide the user interface into the system and also network interfaces to the outside world. Special-purpose nodes improve overall performance by segregating the system workload with hardware and system software configured to best handle that workload.

Supercomputer nodes fit together into a network topology. Modern supercomputers have utilized several different specialized network topologies including hypercube, two-dimensional and three-dimensional mesh, and torus. Supercomputer network topologies can be either static (fixed) or dynamic (through the use of switches).

One of the most critical elements of supercomputer networking is routing. Supercomputers that utilize message passing require routing to ensure the individual pieces of a message are routed from source to destination through the topology without creating hotspots (bottlenecks). Advanced routing techniques like wormhole and virtual cut-through routing are employed by today's ASCI supercomputers.

Supercomputers utilize various network protocols. Application data communications generally take place at the physical and data link layers. I/O and communications with external networks utilize technologies like HIPPI, FDDI, and ATM as well as Ethernet.

Supercomputer interconnects involve large quantities of network cabling. These cables can be very difficult to install as they often must fit within small spaces. Supercomputers do not utilize wireless networking internally as the bandwidth and latency properties of wireless are not suitable for high-performance communications.

Infiniband as an emerging interconnect technology<ref>http://www.networkworld.com/news/2009/111909-infiniband-top-500-supercomputers.html</ref>

Growth of Infiniband high speed clustering interconnects

InfiniBand-based clusters are charging up the Top 500 supercomputer list with 182 systems, including 63 of the top 100 and five of the top 10 now based on the high-speed interconnect. Gigabit Ethernet still dominates the list of the world's 500 fastest supercomputers, with 258 machines.

InfiniBand, an interconnect for servers, storage and networking, "has really found its place in high-performance computing and we're starting to see that transcend into the enterprise," says Brian Sparks, director of marketing for Mellanox Technologies and a member of IBTA. "It's the only growing standard interconnect on the list. When you look at all the really large-node clusters out there, the ones reaching peak performance, the majority are InfiniBand."

InfiniBand's presence on the Top 500 list, the latest version of which was announced this week, has grown 28% since November. Just four years ago, only 3% of the Top 500 supercomputers used InfiniBand. The technology's growth has come mainly at the expense of Myrinet, an interconnect designed by Myricom that used to hold a substantial portion of the Top 500.

Sparks says InfiniBand offers performance, latency and scalability advantages for applications requiring high I/O throughput, while attributing Gigabit Ethernet's dominance of the Top 500 list to its low cost. He also acknowledged that InfiniBand lags far behind Gigabit Ethernet in the enterprise market, largely because IT pros think deploying InfiniBand is too difficult. But IBTA officials are trying to educate IT about InfiniBand to help the technology gain wider acceptance, and they believe challenges posed by emerging technologies like virtualization may convince enterprises of InfiniBand's advantages.

Supercomputer Programming Models<ref>http://books.google.com/books?id=tDxNyGSXg5IC&pg=PA4&lpg=PA4&dq=evolution+of+supercomputers&source=bl&ots=I1NZtZyCTD&sig=Ma2fHyp336BSp4Yv2ERmfrpeo4&hl=en&ei=IAReS4WbM8eUtgf2u8GnAg&sa=X&oi=book_result&ct=result&resnum=5&ved=0CB4Q6AEwBA#v=onepage&q=evolution%20of%20supercomputers&f=false</ref>

The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. The base language of supercomputer code is, in general, Fortran or C, using special libraries to share data between nodes. Now environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared memory machines are used. Significant effort is required to optimize a problem for the interconnect characteristics of the machine it is run on. The aim is to prevent any of the CPUs from wasting time waiting on data from other nodes.

Now we will discuss briefly regarding the programming languages mentioned above.

1) Fortran, previously written as FORTRAN, is a general-purpose, procedural, imperative programming language that is especially suited to numeric and scientific computing. It was originally developed by IBM in the 1950s for scientific and engineering applications, became dominant in this area of programming early on, and has been in use for over half a century in computationally intensive areas such as numerical weather prediction, finite element analysis, computational fluid dynamics (CFD), computational physics, and computational chemistry. It is one of the most popular and highly preferred languages in the area of high-performance computing and is the language used for programs that benchmark and rank the world's fastest supercomputers.

Fortran, a blend derived from The IBM Mathematical Formula Translating System, encompasses a lineage of versions, each of which evolved to add extensions to the language while usually retaining compatibility with previous versions. Successive versions have added support for processing of character-based data (FORTRAN 77), array programming, modular programming and object-based programming (Fortran 90/95), and object-oriented and generic programming (Fortran 2003).

2) C is a general-purpose computer programming language developed in 1972 by Dennis Ritchie at Bell Telephone Laboratories for use with the Unix operating system. It is also widely used for developing portable application software. C is one of the most popular programming languages, and there are few computer architectures for which a C compiler does not exist. C has greatly influenced many other popular programming languages, most notably C++, which originally began as an extension to C.

It was designed to be compiled using a relatively straightforward compiler, to provide low-level access to memory, to provide language constructs that map efficiently to machine instructions, and to require minimal run-time support. The language was also designed to encourage machine-independent programming, and it has been used on a very wide range of platforms, from embedded microcontrollers to supercomputers.

3) The Parallel Virtual Machine (PVM) is a software tool used for parallel networking of computers. It is designed to allow a network of heterogeneous Unix and/or Windows machines to be used as a single distributed parallel processor. Thus large and complex computational problems can be solved more cost effectively by using the combined memory and power of many computers. The software is very portable and has been compiled on everything from laptops to Crays.

PVM enables users to exploit their existing computer hardware to solve complex problems at low cost, and it has also been used as an educational tool to teach parallel programming. It was developed by the University of Tennessee, Oak Ridge National Laboratory and Emory University. The first version was written at ORNL in 1989; after being rewritten by the University of Tennessee, version 2 was released in March 1991. Version 3 was released in March 1993 and supported fault tolerance and better portability. User programs written in C, C++, or Fortran can access PVM through the provided library routines.

4) OpenMP (Open Multi-Processing) is an application programming interface (API) that supports multi-platform shared-memory multiprocessing programming in C, C++ and Fortran on many architectures, including Unix and Microsoft Windows platforms. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.

It was developed by a group of major computer hardware and software vendors. OpenMP is a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications for platforms ranging from the desktop to the supercomputer. An application built with the hybrid model of parallel programming can run on a computer cluster using both OpenMP and the Message Passing Interface (MPI), or more transparently through the use of OpenMP extensions for non-shared-memory systems.
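As a flavor of this directive-based style, the following minimal OpenMP sketch in C (an illustrative example, not taken from any particular supercomputer code) parallelizes a simple reduction loop across the threads of a shared-memory node.

 #include <stdio.h>
 
 /* Minimal OpenMP sketch: the directive below splits the loop iterations
  * across the threads of a shared-memory node; the reduction clause
  * combines the per-thread partial sums. */
 int main(void)
 {
     const int n = 1000000;
     double sum = 0.0;
 
     #pragma omp parallel for reduction(+:sum)
     for (int i = 0; i < n; i++)
         sum += 1.0 / (i + 1.0);
 
     printf("harmonic sum with %d terms: %f\n", n, sum);
     return 0;
 }

Compiled with an OpenMP-aware compiler (for example with a flag such as -fopenmp), the same source runs serially or in parallel depending on the environment, which is part of OpenMP's appeal for incremental parallelization.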

The OpenMP Architecture Review Board (ARB) published its first API specification, OpenMP for Fortran 1.0, in October 1997. In October of the following year it released the C/C++ standard. Version 2.0 of the Fortran specification appeared in 2000, with version 2.0 of the C/C++ specification released in 2002. Version 2.5, released in 2005, is a combined C/C++/Fortran specification.

Version 3.0, released in May 2008, is the current version of the API specification. Among the new features included in 3.0 is the concept of tasks and the task construct. More information regarding OpenMP can be found in the OpenMP 3.0 specifications.

Cooling Supercomputers<ref>http://www.scientificcomputing.com/news-hpc-Water-cooling-system-enables-supercomputers-to-heat-buildings-070609.aspx</ref>

Hot Topic – the Problem of Cooling Supercomputers

The continued exponential growth in the performance of Leadership Class computers (supercomputers) has been predicated on the ability to perform more computing in less space. Two key components have been 1) the reduction of component size, leading to more powerful chips, and 2) the ability to increase the number of processors, leading to more powerful systems. There is strong pressure to keep the physical size of the system compact to keep communication latency manageable. There has been an increase in power density. The ability to remove the waste heat as quickly and efficiently as possible is becoming a limiting factor in the capability of future machines.

Convection cooling with air is currently the preferred method of heat removal in most data centers. Air handlers force large volumes of cooled air under a raised floor (the deeper the floor, the lower the wind resistance) and up through perforated tiles in front of or under computer racks, where fans within the rack's servers or blade cages distribute it across the heat-radiating electronics, perhaps with the help of heat sinks or heat pipes. This system easily accommodates racks drawing 4-7 kW. In 2001 the average U.S. household drew 1.2 kW, so think about cooling half a dozen homes crammed into about 8 square feet. A BlueGene/L rack uses 9 kW. The Energy Smart Data Center's (ESDC's) NW-ICE compute rack uses 12 kW. Petascale system racks may require 60 kW to satisfy the communication latency demands that limit a system's physical size. Additional ducting can be used to keep warm and cold air from mixing in the data center, but air cooling alone is reaching its limits.

Chilled water was used by previous generations of bipolar-transistor-based mainframes, and the Cray-2 immersed the entire system in Fluorinert in the 1980s. Water has a much higher heat capacity than air, and even than Fluorinert, but it is also a conductor, so it cannot come into direct contact with the electronics; this makes transferring the heat to the water more difficult. Blowing hot air through a water-cooled heat exchanger mounted on or near the rack is one common way of improving the ability to cool a rack, but it is limited by the low heat capacity of air and requires energy to move enough air.

More efficient and effective cooling is only one part of developing a truly energy-smart data center. Not generating heat in the first place is another component, which includes moving some heat sources, such as power supplies, away from the compute components, or using more efficient power conversion mechanisms, since the power taken off the grid is high-voltage alternating current (AC) while the components use low-voltage direct current (DC). Power-aware components that can reduce their power requirements or turn off entirely when not needed are another element.

One example is Oak Ridge National Laboratory, a leading research institution and the site of the reactor in which plutonium for the first atomic bombs was refined during World War II. Its Cray X1E, the largest vector supercomputer in the world and rated at 18 teraflops of processing power, is so power-intensive that it requires liquid cooling from 16-inch pipes installed in the floor underneath the machine.

=== Cooling ESDC's NW-ICE ===


Fluorinert not only has a high dielectric strength, in excess of 35,000 volts across a 0.1-inch gap, but also has other desirable properties. 3M Fluorinert liquids are a family of clear, colorless, odorless perfluorinated fluids with a viscosity similar to that of water. These non-flammable liquids are thermally and chemically stable and compatible with most sensitive materials, including metals, plastics, and elastomers. Fluorinert liquids are completely fluorinated, containing no chlorine or hydrogen atoms; the strength of the carbon-fluorine bond contributes to their extreme stability and inertness. Fluorinert liquids are available with boiling points ranging from 30°C to 215°C.

NW-ICE is cooled with a combination of air and two-phase liquid (Fluorinert) cooling, in this case SprayCool. Closed SprayCool modules 1) replace the normal heat sinks on each of the processor chips, 2) cool them with a fine mist of Fluorinert that evaporates as it hits the hot thermal conduction layer on top of the chip package, and 3) return the heated Fluorinert to the heat exchanger in the bottom of the rack. The heat exchanger, also called a thermal server, transfers the heat to facility chilled water. The rest of the electronics in the rack, including memory, is then easily cooled with air. The high heat-transfer rate of two-phase cooling allows the use of much warmer water than conventional air-water heat exchangers, allowing direct connection to efficient external cooling towers. Two-phase liquid cooling is thermodynamically more efficient than convection cooling with air, so less energy is needed to remove the waste heat while at the same time handling a higher heat load.
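The thermodynamic advantage of two-phase cooling can be sketched with the same kind of back-of-the-envelope estimate (illustrative only; the latent-heat figure is an assumed, order-of-magnitude value for a Fluorinert-class fluid). A coolant that evaporates absorbs its latent heat of vaporization <math>h_{fg}</math>, so the mass flow needed to carry a heat load <math>\dot{Q}</math> is

<math>\dot{m} = \frac{\dot{Q}}{h_{fg}}</math>

With <math>h_{fg} \approx 90</math> kJ/kg, a 12 kW rack needs only about 0.13 kg/s of circulating fluid, and because the evaporation occurs at an essentially constant temperature right at the chip package, the heat-transfer rates are far higher than air convection can achieve.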

=== Alternative Cooling Approaches<ref>http://nextbigfuture.com/2008/09/cray-has-supercomputer-cooling.html Cray's cooling technology</ref> ===

Cray has unveiled a petascale-era cooling technology, called ECOphlex, that it says is more than 10 times as efficient as same-size water coils. The cabinet infrastructure can use either Cray's high-efficiency vertical air cooling or the new phase-change cooling technology, which converts an inert refrigerant, R134a, from a liquid to a gas. A further flexibility is that the liquid-cooled systems can use chilled or unchilled datacenter water at various temperatures to pull heat from the R134a subsystem and to adapt to changing datacenter conditions.

Spray cooling is, of course, just one approach to solving data center cooling problems; a plethora of cooling technologies and products exist. Technologies of interest use air, liquid, and/or solid-state cooling principles. Evolutionary progress is being made with conventional air cooling techniques, which are known for their reliability. Current investigation focuses on novel heat sink and fan technologies with the aim of improving contact surface, conductivity, and heat-transfer parameters; efficiency and noise generation are also of great concern with air cooling. Improvements have been made in the design of piezoelectric infrasonic fans, which exhibit low power consumption and have a lightweight and inexpensive construction. One of the most effective air cooling options is air jet impingement, for which the design and manufacturing of nozzles and manifolds is relatively simple.

The same benefits that apply to air jet impingement are exhibited by liquid impingement technologies. In addition, liquid cooling offers higher heat-transfer coefficients as a tradeoff for higher design and operational complexity. One of the most interesting liquid cooling technologies is the microchannel heat sink used in conjunction with micropumps, because the channels can be manufactured in the micrometer range with the same process technologies used for electronic devices; microchannel heat sinks are effective at supporting large heat fluxes. Liquid metal cooling, used in cooling reactors, is starting to become an interesting alternative for high-power-density micro devices. Large heat-transfer coefficients are achieved by circulating the liquid with hydroelectric or hydromagnetic pumps, and the pumping circuit is reliable because no moving parts, except for the liquid itself, are involved in the cooling process. Heat-transfer efficiency is also increased by high conductivity, while the low heat capacity of metals leads to less stringent requirements for heat exchangers.

Heat extraction with liquids can be increased by several orders of magnitude by exploiting phase changes. Heat pipes and thermosyphons exploit the high latent heat of vaporization to remove large quantities of heat from the evaporator section; the circuits are closed by capillary action in the case of heat pipes or by gravity in the case of thermosyphons. These devices are therefore very efficient but are limited in their temperature range and heat flux capabilities. Thermoelectric coolers (TECs), which use the Peltier-Seebeck effect, do not have the greatest efficiency but can provide localized spot cooling, an important capability in modern processor design. Research in this area focuses on improving materials and distributing control of TEC arrays so that the efficiency over the whole chip improves.

=== Water-cooling System Enables Supercomputers to Heat Buildings<ref>http://www.scientificcomputing.com/news-hpc-Water-cooling-system-enables-supercomputers-to-heat-buildings-070609.aspx</ref> ===

In an effort to achieve energy-aware computing, the Swiss Federal Institute of Technology Zurich (ETH Zurich) and IBM have announced plans to build a first-of-a-kind water-cooled supercomputer that will directly repurpose excess heat for the university's buildings. The system, dubbed Aquasar, is expected to decrease the carbon footprint of the system by up to 85% and is estimated to save up to 30 tons of CO2 per year compared with a similar system using today's cooling technologies. More information on this technique is given in the reference for this section.


== Supercomputing Applications ==

Image caption: a close-up of the Everest display wall, showing data on the spread of volcanic ash around the globe after eruptions; the display system is capable of producing brilliant colors on a large scale.

The primary tasks that supercomputers are used for are solidly focused on number crunching: enormous, calculation-intensive jobs involving massive datasets whose resolution would, for all intents and purposes, take longer than the service lifetime of general-purpose computers (even in large numbers) or the life expectancy of the average human today. The types of tasks that supercomputers are built to tackle include:
* Physics - quantum mechanics, thermodynamics, cosmology, astrophysics
* Meteorology - weather forecasting, climate research, global warming research, storm warnings
* Molecular Modeling - computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals
* Physical Simulations - aerodynamics, fluid dynamics, wind tunnels
* Engineering Design - structural simulations, bridges, dams, buildings, earthquake tolerance
* Nuclear Research - nuclear fusion research, simulation of the detonation of nuclear weapons, particle physics
* Cryptography and Cryptanalysis - code and cipher breaking, encryption
* Earth Sciences - geology, geophysics, volcanic behavior
* Training Simulators - advanced astronaut training and simulation, civil aviation training
* Space Research - mission planning, vehicle design, propulsion systems, mission proposals, and feasibility studies and simulations
The main users of these supercomputers include universities, military agencies, NASA, scientific research laboratories, and major corporations.

Some examples of supercomputing applications (see the external links section for sources):
* RIT scientists have used supercomputers to "see" black holes.
* Supercomputers have been used to simulate stellar evolution.
* Georgia Tech researchers have used supercomputers to gain better insight into genomic evolution.
* The largest-ever simulation of cosmic evolution was calculated at the San Diego Supercomputer Center.
* A UC-Irvine supercomputer project aims to predict Earth's environmental future. In February, the university announced the debut of the Virtual Climate Time Machine, a computing system designed by IBM to help Irvine scientists predict Earth's meteorological and environmental future.
* A £900 million scheme to produce a computer system that could predict the next financial crisis has been backed by leading scientists. The Living Earth Simulator Project aims to "simulate everything" on the planet, using anything from tweets to government statistics to map out social trends and predict the next economic crisis.
* The Columbia supercluster makes it possible for NASA to achieve breakthroughs in science and engineering for the agency's missions and Vision for Space Exploration. Columbia's highly advanced architecture is also being made available to a broader national science and engineering community. System facts for Columbia are linked in the external links below.

== Supercomputers of the Future ==

Research centers are constantly delving into new applications like data mining to explore additional uses of supercomputing. Data mining is a class of applications that looks for hidden patterns in a group of data, allowing scientists to discover previously unknown relationships among the data. For instance, the Protein Data Bank at the San Diego Supercomputer Center is a collection of scientific data that provides scientists around the world with a greater understanding of biological systems. Over the years, the Protein Data Bank has developed into a web-based international repository for three-dimensional molecular structure data that contains detailed information on the atomic structure of complex molecules. The three-dimensional structures of proteins and other molecules contained in the Protein Data Bank, together with supercomputer analysis of the data, provide researchers with new insights into the causes, effects, and treatment of many diseases.

Other modern supercomputing applications involve the advancement of brain research. Researchers are beginning to use supercomputers to gain a better understanding of the relationship between the structure, function, and operation of the brain. Specifically, neuroscientists use supercomputers to look at the dynamic and physiological structures of the brain. Scientists are also working toward the development of three-dimensional simulation programs that will allow them to conduct research on areas such as memory processing and cognitive recognition.

In addition to new applications, the future of supercomputing includes the assembly of the next generation of computational research infrastructure and the introduction of new supercomputing architectures. Parallel supercomputers have many processors, distributed and shared memory, and many communication components, and we have yet to explore all of the ways in which they can be assembled. Supercomputing applications and capabilities will continue to develop as institutions around the world share their discoveries and researchers become more proficient at parallel processing.

== External links ==

1. Previous wiki
2. Japan History
3. Top500 - The supercomputer website
4. Evolution of supercomputers
5. Supercomputers to "see" black holes
6. Supercomputer simulates stellar evolution
7. Encyclopedia on supercomputer
8. Image source 1
9. Image source 2
10. Water-cooling System Enables Supercomputers to Heat Buildings
11. Cray's cooling technology
12. Columbia system facts
13. UC-Irvine Supercomputer Project Aims to Predict Earth's Environmental Future
14. Wikipedia
15. Parallel Programming in C with MPI and OpenMP, by Michael Jay Quinn
16. Genomic Evolution
17. The Future of Supercomputing: An Interim Report, by the National Research Council (U.S.) Committee on the Future of Supercomputing
18. Cosmic evolution
19. Supercomputers for One-Fifth the Price
20. Top 10 supercomputers

== References ==

<references/>