
From Expertiza_Wiki

Revision as of 05:36, 26 January 2010

Supercomputer Evolution

The United States government has played a key role in the development and use of supercomputers. During World War II, the US Army paid for the construction of ENIAC in order to speed the calculation of artillery firing tables. In the thirty years after the war, the US government used high-performance computers to design nuclear weapons, break codes, and perform other security-related applications.

A supercomputer is generally considered to be at the frontline, the "cutting edge," in processing capacity (number crunching) and computational speed at the time it is built; but as with all modern technologies, today's wonder supercomputer quickly becomes tomorrow's standard off-the-shelf computer. A supercomputer is a state-of-the-art, extremely powerful machine capable of manipulating massive amounts of data in a relatively short time. Supercomputers are very expensive and are employed for specialized scientific and engineering applications that must handle very large databases or do a great amount of computation, among them meteorology, animated graphics, fluid-dynamics calculations, nuclear energy research and weapons simulation, and petroleum exploration.

Supercomputers were introduced in the 1960s and were designed primarily by Seymour Cray at Control Data Corporation (CDC), which led the market into the 1970s until Cray left to form his own company, Cray Research. With Moore's Law still holding true after more than thirty years, the rate at which mass-market technologies overtake the cutting edge continues to accelerate. The effects are manifest in the about-face we have witnessed in the underlying philosophy of supercomputer design: from the 1970s through the mid-1980s, supercomputers were built using specialized custom vector processors working in parallel, typically anywhere from four to sixteen CPUs.
The next phase of supercomputer evolution saw the introduction of massively parallel processing and a drift away from vector-only microprocessors. However, the processors used in this generation of supercomputers were still primarily highly specialized, purpose-specific, custom-designed and custom-fabricated units.

No longer is silicon fabricated into the incredibly expensive, highly specialized, purpose-specific custom microprocessors that were once the heart and mind of the supercomputers of the past. Advances in mainstream technologies and economies of scale now dictate the order of the day: off-the-shelf multi-core server-class CPUs assembled into great conglomerates, combined with staggering quantities of storage (RAM and disk) and interconnected by high-speed optical transports.

So we now find that instead of using specialized custom-built processors, the supercomputers of today and tomorrow are based on "off-the-shelf" server-class multi-core microprocessors, such as the IBM PowerPC, Intel Itanium, or AMD x86-64. The modern supercomputer is firmly based on massively parallel processing: clustering very large numbers of commodity processors combined with a custom interconnect.

Companies that build supercomputers include Silicon Graphics, Intel, IBM, Cray, Orion, and Aspen Systems.

First Supercomputer (ENIAC)

ENIAC was completed in 1946, and it took the world by storm. It was built to solve very complex problems that would otherwise take months or years, and it is one reason so many of us use computers today; ENIAC itself, however, was built with a single purpose: scientific computation for the nation. The military were its first users, which benefited the nation in a huge way. Even today, much new technology is designed for the military first and then redesigned for the public.

The system was used to compute firing tables for the White Sands Missile Range from 1949 until it was retired in 1955. This allowed the military to prepare missile launches in advance, should a launch be deemed necessary. It was an important milestone in United States military history, at least on a technological level.

ENIAC was a huge machine: it used nearly 18,000 vacuum tubes and occupied about 1,800 square feet of floor space. It weighed nearly thirty tons, making it one of the largest machines of its time. It was considered the greatest scientific invention to that point because it could do in two hours of computation what would otherwise take a team of one hundred engineers a year. That made it almost a miracle in some people's eyes, and people got excited about the emerging technology. ENIAC could perform five thousand additions per second, which seemed very fast then but is extremely slow by today's standards: most computers today do millions of additions per second or more. That is a huge difference when one looks into it.

So what made ENIAC run? Programming it took a great deal of manpower and hours to set up. Operators used plugboards, plugs, and wires to program the desired commands into the colossal machine. They also had to input numbers by turning scores of dials until they matched the correct values, much as one does on a combination lock.

Cray History

Cray Inc. has a rich history that extends back to 1972, when the legendary Seymour Cray, the "father of supercomputing," founded Cray Research. R&D and manufacturing were based in his hometown of Chippewa Falls, Wisconsin and business headquarters were in Minneapolis, Minnesota.

The first Cray-1 system was installed at Los Alamos National Laboratory in 1976 for $8.8 million. It boasted a world-record speed of 160 million floating-point operations per second (160 megaflops) and an 8 megabyte (1 million word) main memory. The Cray-1's architecture reflected its designer's penchant for bridging technical hurdles with revolutionary ideas. In order to increase the speed of this system, the Cray-1 had a unique "C" shape which enabled integrated circuits to be closer together. No wire in the system was more than four feet long. To handle the intense heat generated by the computer, Cray developed an innovative refrigeration system using Freon.

In order to concentrate his efforts on design, Cray left the CEO position in 1980 and became an independent contractor. As he worked on the follow-on to the Cray-1, another group within the company developed the first multiprocessor supercomputer, the Cray X-MP, which was introduced in 1982. The Cray-2 system appeared in 1985, providing a tenfold increase in performance over the Cray-1. In 1988, Cray Research introduced the Cray Y-MP, the world's first supercomputer to sustain over 1 gigaflop on many applications. Multiple 333 MFLOPS processors powered the system to a record sustained speed of 2.3 gigaflops.

Always a visionary, Seymour Cray had been exploring the use of gallium arsenide to create a semiconductor faster than silicon. However, the costs and complexities of this material made it difficult for the company to support both the Cray-3 and the Cray C90 development efforts. In 1989, Cray Research spun off the Cray-3 project into a separate company, Cray Computer Corporation, headed by Seymour Cray and based in Colorado Springs, Colorado. Tragically, Seymour Cray died of injuries suffered in an auto accident in September 1996 at the age of 71.

The 1990s brought a number of transforming events to Cray Research. The company continued its leadership in providing the most powerful supercomputers for production applications. The Cray C90 featured a new central processor with industry-leading sustained performance of 1 gigaflop. Using 16 of these powerful processors and 256 million words of central memory, the system boasted unrivaled total performance. The company also produced its first "mini-supercomputer," the Cray XMS system, followed by the Cray Y-MP EL series and the subsequent Cray J90. In 1993, Cray Research offered its first massively parallel processing (MPP) system, the Cray T3D supercomputer, and quickly captured MPP market leadership from early MPP companies such as Thinking Machines and MasPar.
The Cray T3D proved to be exceptionally robust, reliable, sharable and easy to administer compared with competing MPP systems. Since its debut in 1995, the successor Cray T3E supercomputer has been the world's best-selling MPP system. The Cray T3E-1200E system was the first supercomputer to sustain one teraflop (1 trillion calculations per second) on a real-world application: in November 1998, a joint scientific team from Oak Ridge National Laboratory, the National Energy Research Scientific Computing Center (NERSC), Pittsburgh Supercomputing Center and the University of Bristol (UK) ran a magnetism application at a sustained speed of 1.02 teraflops. In another technological landmark, the Cray T90 became the world's first wireless supercomputer when it was unveiled in 1994. Also introduced that year, the Cray J90 series has since become the world's most popular supercomputer, with over 400 systems sold.

Cray Research merged with SGI (Silicon Graphics, Inc.) in February 1996. In August 1999, SGI created a separate Cray Research business unit to focus exclusively on the unique requirements of high-end supercomputing customers. Assets of this business unit were sold to Tera Computer Company in March 2000.

Tera Computer Company was founded in 1987 in Washington, DC, and moved to Seattle, Washington, in 1988. Tera began software development for the Multithreaded Architecture (MTA) systems that year, and hardware design commenced in 1991. The Cray MTA-2 system provides scalable shared memory, in which every processor has equal access to every memory location, greatly simplifying programming because it eliminates concerns about the layout of memory. The company completed its initial public offering in 1995 (TERA on the NASDAQ stock exchange), and soon after received its first order for the MTA from the San Diego Supercomputer Center. The multiprocessor system was accepted by the center in 1998, and has since been upgraded to eight processors.
Upon the merger with the Cray Research division of SGI in 2000, the company was renamed Cray Inc. and the ticker symbol was changed to CRAY. The link below shows the historical timeline of Cray in the field of supercomputers: http://www.cray.com/Assets/PDF/about/CrayTimeline.pdf

Supercomputer History in Japan

In the beginning there were only a couple of Cray-1s installed in Japan, and until 1983 there were no Japanese-produced supercomputers. The first models were announced in 1983. Naturally there had been prototypes earlier (like the Fujitsu F230-75 APU, produced in two copies in 1978), but Fujitsu's VP-200 and Hitachi's S-810 were the first officially announced versions. NEC announced its SX series slightly later.

The last decade has completely changed the scene. About three generations of machines have been produced by each of the domestic manufacturers, and model improvements have also been offered during the life-span of those machines. During the last ten years about 300 supercomputer systems have been shipped and installed in Japan, and a whole infrastructure of supercomputing has been established. All major universities have supercomputers, as do many of the large industrial companies and research centres.

In 1984 NEC announced the SX-1 and SX-2 and started delivery in 1985. The first two SX-2 systems were domestic deliveries, to Osaka University and the Institute for Computational Fluid Dynamics (ICFD). The SX-2 had multiple pipelines with one set of add and multiply floating-point units each. With a cycle time of 6 nanoseconds, each pipelined floating-point unit could peak at 167 Mflop/s. With four pipelines per unit and two floating-point units, the peak performance was about 1.3 Gflop/s. Due to limited memory bandwidth and other issues, sustained performance in benchmark tests was typically less than half the peak value. The SX-1 had a slightly longer cycle time (7 ns) than the SX-2 and only half the number of pipelines; its maximum execution rate was 570 Mflop/s.

At the end of 1987, NEC improved its supercomputer family with the A-series, which brought improvements to memory and I/O bandwidth. The top model, the SX-2A, had the same theoretical peak performance as the SX-2.
Several low-range models were also announced, but today none of these systems qualifies for the TOP500. In 1989 NEC announced a rather aggressive new model, the SX-3, with several important changes. The vector cycle time was brought down to 2.9 ns and the number of pipelines was doubled, but most significantly NEC added multiprocessing capability to the series. The new top of the range featured four independent arithmetic processors (each with a scalar and a vector processing unit), and NEC pushed peak performance up by more than an order of magnitude, to an impressive 22 Gflop/s (from 1.33 Gflop/s on the SX-2A). The combination of these features put the SX-3 at the top of the list of the most powerful vector processors in the world.

The total memory bandwidth was subdivided into two halves, each of which featured two vector-load and one vector-store path per pipeline set, as well as one scalar-load and one scalar-store path. This gave a total memory bandwidth to the vector units of about 66 GB/s. Like its predecessors, the SX-3 was therefore unable to offer the memory bandwidth needed to sustain peak performance unless most operands were contained in the vector registers.

In 1992 NEC announced the SX-3R, with a couple of improvements over the first version. The clock was further reduced to 2.5 ns, so that peak performance increased to 6.4 Gflop/s per processor.
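The peak figures quoted above all follow from the same simple product: results per pipeline per cycle, times pipelines, times floating-point units, divided by the cycle time. A minimal sketch in Python (the helper name is ours, not NEC's):

```python
def peak_flops(cycle_time_s, pipelines, fp_units, results_per_cycle=1):
    """Theoretical peak = floating-point results issued per cycle / cycle time."""
    return pipelines * fp_units * results_per_cycle / cycle_time_s

# NEC SX-2: 6 ns cycle, four pipelines per unit, two floating-point units
print(f"SX-2: {peak_flops(6e-9, 4, 2) / 1e9:.2f} Gflop/s")   # ~1.33
# NEC SX-1: 7 ns cycle and half the pipelines
print(f"SX-1: {peak_flops(7e-9, 2, 2) / 1e6:.0f} Mflop/s")   # ~571
```

Both results match the figures in the text. Sustained performance is lower, as noted above, chiefly because memory bandwidth cannot keep the pipelines fed.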

Fujitsu's VP series

In 1977 Fujitsu produced the first Japanese supercomputer prototype, the F230-75 APU, a pipelined vector processor attached to a scalar processor. This attached processor was installed at the Japan Atomic Energy Research Institute (JAERI) and the National Aerospace Laboratory (NAL). In 1983 the company came out with the VP-200 and VP-100 systems, which later spun off the low-end VP-50 and VP-30. In 1986 came the VP-400 (with twice as many pipelines as the VP-200), and as of mid-1987 the whole family became the E-series with the addition of an extra (multiply-add) pipelined floating-point unit that boosted the performance potential by 50%. Thanks to the flexible range of systems in this generation (VP-30E to VP-400E), and other factors such as good marketing and a broad range of applications, Fujitsu became the largest domestic supplier, with over 80 systems installed, many of them well below the cut-off limit of the TOP500.

Available since 1990, the VP-2000 family can offer a peak performance of 5 Gflop/s thanks to a vector cycle time of 3.2 ns. The family was initially announced with four vector performance levels (models 2100, 2200, 2400, and 2600), where each level could have either one or two scalar processors; the VP-2400/40 later doubled this limit, offering a peak vector performance similar to the VP-2600's. Most of these models are now represented in the Japanese TOP500.

Previous machines had been heavily criticised for their lack of memory throughput. The VP-400 series had only one load/store path to memory, which peaked at 4.57 GB/s. This was improved in the VP-2000 series by doubling the paths, so that each pipeline set can do two load/store operations per cycle, giving a total transfer rate of 20 GB/s. Fujitsu recently decided to use the label VPX-2x0 for the VP-2x00 systems adapted to its Unix system. Keio Daigaku (Keio University) now runs such a system.

The VPP-500 series

In 1993 Fujitsu surprised the world by announcing a Vector Parallel Processor (VPP) series designed to reach well into the range of hundreds of Gflop/s. At the core of the system is a combined GaAs/BiCMOS processor based largely on the original design of the VP-200. By using the most advanced hardware technology available, the processor chips achieve a gate delay as low as 60 ps in the GaAs parts; the resulting cycle time is 9.5 ns. The processor has four independent pipelines, each capable of executing two multiply-add instructions in parallel, resulting in a peak speed of 1.7 Gflop/s per processor. Each processor board is equipped with 256 megabytes of central memory.

The most striking part of the VPP-500 is the capability to interconnect up to 222 processors via a crossbar network with two independent (read/write) connections, each operating at 400 MB/s. The total memory can be addressed via virtual shared-memory primitives. The system is meant to be front-ended by a VP-2x00 system that handles input/output, the permanent file store, and job-queue logistics. An early version of this system, called the Numerical Wind Tunnel, was developed together with NAL. This early version of the VPP-500 (with 140 processors) is today the fastest supercomputer in the world and stands out at the top of the TOP500 with an Rmax value twice that of the TMC CM-5/1024 installed at Los Alamos.
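The per-processor figure is easy to check: counting each multiply-add as two floating-point operations, four pipelines issuing two multiply-adds per 9.5 ns cycle give roughly the quoted 1.7 Gflop/s, and a full 222-processor machine lands well into the hundreds of Gflop/s. A quick back-of-the-envelope check in Python:

```python
CYCLE = 9.5e-9        # seconds per cycle
PIPELINES = 4         # independent pipelines per processor
MADDS_PER_PIPE = 2    # multiply-add instructions issued in parallel
FLOPS_PER_MADD = 2    # one multiply plus one add

per_cpu = PIPELINES * MADDS_PER_PIPE * FLOPS_PER_MADD / CYCLE
print(f"per processor: {per_cpu / 1e9:.2f} Gflop/s")        # ~1.68
print(f"222 processors: {222 * per_cpu / 1e9:.0f} Gflop/s") # ~374
```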

Hitachi's Supercomputers

Hitachi has been producing supercomputers since 1983 but differs from the other two manufacturers by not exporting them. For this reason, its supercomputers are less well known in the West than those made by NEC and Fujitsu. After two generations of supercomputers, the S-810 series started in 1983 and the S-820 series in 1988, Hitachi leapfrogged NEC in 1992 by announcing the most powerful vector supercomputer yet. The top S-820 model had a single processor with a 4 ns cycle time, four vector pipelines, and two independent floating-point units, corresponding to a peak performance of 2 Gflop/s. Already in these systems Hitachi put great emphasis on fast memory, although this meant limiting its size to a maximum of 512 MB. The memory bandwidth (two words per pipe per vector cycle, giving a peak rate of 16 GB/s) was a respectable achievement, but it was not enough to keep all functional units busy.

The S-3800, announced in 1992, is comparable to NEC's SX-3R in its features. It has up to four scalar processors, each with its own vector processing unit. These vector units in turn have up to four independent pipelines and two floating-point units that can each perform a multiply-add operation per cycle. With a cycle time of 2.0 ns, the whole system achieves a peak performance of 32 Gflop/s.

The S-3600 systems (there are four of them in the TOP500) can be seen as the design of the S-820 recast in more modern technology. The system consists of a single scalar processor with an attached vector processor. The four models in the range correspond to a successive reduction in the number of pipelines and floating-point units installed.

The list of the top 500 supercomputers: http://www.top500.org/list/2009/11/100
Statistics for the top 500 supercomputers (PDF): http://www.top500.org/static/lists/2009/11/top500_statistics.pdf
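The 16 GB/s memory-bandwidth figure for the S-820 follows directly from the numbers given, assuming the usual 8-byte (64-bit) floating-point word:

```python
WORD_BYTES = 8   # 64-bit floating-point word (assumption)

def mem_bandwidth(words_per_pipe_per_cycle, pipelines, cycle_time_s):
    """Peak memory bandwidth in bytes per second."""
    return words_per_pipe_per_cycle * pipelines * WORD_BYTES / cycle_time_s

# S-820: two words per pipe per 4 ns vector cycle, four pipelines
print(f"{mem_bandwidth(2, 4, 4e-9) / 1e9:.0f} GB/s")  # 16 GB/s
```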

IBM unveils world's fastest supercomputer

Sequoia, built for the US Department of Energy, is almost 20 times more powerful than the previous record holder. Supercomputer manufacturers are in a race to see who can build the most powerful machine, regularly trading places with each other as they develop faster and more impressive systems. But IBM today smashed the existing record as it unveiled the world's fastest supercomputer, a machine that is almost 20 times more powerful than the previous record holder. The new system, dubbed Sequoia, will be able to achieve speeds of up to 20 petaflops (20 quadrillion calculations per second), the equivalent of more than 2 million laptops. Sequoia will consist of around 1.6 million processors, giving it the ability to perform an order of magnitude faster than the 1.1-petaflop IBM Roadrunner, which is currently recognised as the world's most powerful.

It is being built by IBM for the US Department of Energy and should be installed at the Lawrence Livermore National Laboratory in California by 2012. LLNL is one of the world's leading laboratories dedicated to national security, where teams of scientists work on projects linked to nuclear energy, environmental protection and economic issues. Sequoia will be used to simulate nuclear tests and explosions, alongside a smaller machine, known as Dawn, which is currently being built. "Both systems will be used for the ongoing safety and reliability of our nation's nuclear stockpile," IBM spokesman Ron Favali said. "Sequoia is the big one."

Supercomputer speeds are advancing rapidly as manufacturers latch on to new techniques and cheaper prices for computer chips. The first machine to break the teraflop barrier (a trillion calculations per second) was built only in 1996. Two years ago a $59m machine from Sun Microsystems, called Constellation, attempted to take the crown of world's fastest with an operating speed of 421 teraflops, or 421 trillion calculations per second. Just two years later, Sequoia could achieve nearly 50 times that computing power. Costs remain high, but the latest generation of supercomputers is more powerful and less expensive than at any point in history. "We were just talking about teraflops, and the fact we just broke the petaflop barrier is pretty amazing," said Favali. "The next speed is 'exaflop': 10 to the 18th power." The announcement came less than a week after IBM said it was laying off almost 3,000 employees worldwide.

Supercomputer Design

Sixteen racks of IBM's Blue Gene/L supercomputer can perform 70.7 trillion calculations per second, making it the fastest machine known at the time. Another of the lab's supercomputers, IBM's Blue Gene, is currently rated the 74th most powerful computer in the world, at 28 teraflops.

There are two approaches to the design of supercomputers. One, called massively parallel processing (MPP), is to chain together thousands of commercially available microprocessors using parallel-processing techniques. A variant of this, called a Beowulf cluster or cluster computing, employs large numbers of personal computers interconnected by a local area network and running programs written for parallel processing. The other approach, called vector processing, is to develop specialized hardware to solve complex calculations. This technique was employed in the Earth Simulator, a Japanese supercomputer introduced in 2002 that uses 640 nodes composed of 5,120 specialized processors to execute 35.6 trillion mathematical operations per second; it is used to analyze earthquake and weather patterns and climate change, including global warming.

Currently the fastest supercomputer is the Blue Gene/L, completed at Lawrence Livermore National Laboratory in 2005 and upgraded in 2007; it uses 212,992 processors to execute potentially as many as 596 trillion mathematical operations per second. The computer is used for nuclear-weapons safety and reliability analyses. A prototype of Blue Gene/L demonstrated in 2003 was air-cooled, as opposed to many high-performance machines that use water and refrigeration, and used no more power than the average home. In 2003 scientists at Virginia Tech assembled a relatively low-cost supercomputer using 1,100 dual-processor Apple Macintoshes; it was ranked at the time as the third fastest machine in the world.
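The MPP idea, splitting one large computation into chunks that many processors work on independently before combining the results, can be sketched on a single machine with Python's multiprocessing module. Each worker process stands in for a cluster node; the problem and the chunking scheme here are purely illustrative:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Each 'node' sums the squares in its own slice of the problem."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    N, WORKERS = 1_000_000, 4
    step = N // WORKERS
    chunks = [(w * step, (w + 1) * step) for w in range(WORKERS)]
    with Pool(WORKERS) as pool:
        # Scatter the chunks to the workers, then reduce the partial results.
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```

A real MPP system or Beowulf cluster does the same scatter/compute/reduce dance across separate machines, typically with a message-passing library such as MPI, with the interconnect carrying the partial results.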

Supercomputer Hierarchical Architecture

The supercomputer of today is built on a hierarchical design in which a number of clustered computers are joined by ultra-high-speed network (switching fabric) optical interconnections.

1. Supercomputer – a cluster of interconnected multiple multi-core microprocessor computers.
2. Cluster Members – each cluster member is a computer composed of a number of Multiple Instruction, Multiple Data (MIMD) multi-core microprocessors and runs its own instance of an operating system.
3. Multi-Core Microprocessors – each of these multi-core microprocessors has multiple processing cores, of which the application software is oblivious, that share tasks using Symmetric Multiprocessing (SMP) and Non-Uniform Memory Access (NUMA).
4. Multi-Core Microprocessor Core – each core is in itself a complete Single Instruction, Multiple Data (SIMD) microprocessor capable of running a number of instructions simultaneously and many SIMD instructions per nanosecond.
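Multiplying out the four levels of this hierarchy gives the machine's aggregate peak throughput. A toy calculation (all numbers illustrative, not taken from the text):

```python
def cluster_peak_flops(members, cpus_per_member, cores_per_cpu,
                       core_peak_flops):
    """Aggregate peak = product over the hierarchy's levels."""
    return members * cpus_per_member * cores_per_cpu * core_peak_flops

# e.g. 1,024 cluster members, 2 multi-core CPUs each, 4 SIMD cores per CPU,
# 10 Gflop/s per core:
peak = cluster_peak_flops(1024, 2, 4, 10e9)
print(f"{peak / 1e12:.1f} Tflop/s")  # 81.9 Tflop/s
```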

Supercomputing Applications Today

The primary tasks that today's and tomorrow's supercomputers are used for are solidly focused on number crunching: calculation-intensive tasks of enormous scale. By that we mean large-scale computational tasks involving massive datasets that require timely resolution, and that are for all intents and purposes beyond the reach of general-purpose computers (even in large numbers) within their generation's lifetime, or within an average human life expectancy today.

The types of tasks that supercomputers are built to tackle include:

Physics – quantum mechanics, thermodynamics, cosmology, astrophysics
Meteorology – weather forecasting, climate research, global-warming research, storm warnings
Molecular Modeling – computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals
Physical Simulations – aerodynamics, fluid dynamics, wind tunnels
Engineering Design – structural simulations: bridges, dams, buildings, earthquake tolerance
Nuclear Research – nuclear fusion research, simulation of the detonation of nuclear weapons, particle physics
Cryptography and Cryptanalysis – code and cipher breaking, encryption
Earth Sciences – geology, geophysics, volcanic behavior
Training Simulators – advanced astronaut training and simulation, civil aviation training
Space Research – mission planning, vehicle design, propulsion systems, mission proposals, and feasibility studies and simulations

The main users of these supercomputers include universities, military agencies, NASA, scientific research laboratories, and major corporations. For more supercomputer information, check out the Top500.org list.

RIT Scientists Use Supercomputers to ‘See’ Black Holes http://www.rit.edu/news/?v=47077

Supercomputer Simulates Stellar Evolution http://www.universetoday.com/2006/10/31/supercomputer-simulates-stellar-evolution/

Georgia Tech has used supercomputing to gain better insight into genomic evolution. http://www.hpcwire.com/offthewire/Georgia-Tech-Uses-Supercomputing-for-Better-Insight-into-Genomic-Evolution-70290117.html

Largest-Ever Simulation of Cosmic Evolution Calculated at San Diego Supercomputer Center http://www.calit2.net/newsroom/article.php?id=572

UC-Irvine Supercomputer Project Aims to Predict Earth's Environmental Future. Still not convinced that global warming is a problem? A new supercomputer at the University of California, Irvine, may help turn more skeptics into believers, says Charles Zender, an assistant professor of earth system science. In February, the university announced the debut of the Virtual Climate Time Machine, a computing system designed by IBM to help Irvine scientists predict Earth's meteorological and environmental future.