CSC/ECE 506 Fall 2007/wiki1 2 3K8i
Trends in Scientific and Engineering Computing
Scientific and engineering applications continue to demand the highest performance computers available, a trend established in the early days of supercomputers that continues today in the realm of "High Performance Computing" (HPC). Much work has been done in the areas of computational modeling and simulation. By using modeling, scientists are able to analyze hypothetical designs rather than relying on empirical evidence, which is often difficult or impossible to come by. Higher performance computers afford the resources to build richer, more complicated models that allow scientists to study more difficult problems. As problems are solved, new, more complex problems arise to take their place. Many Grand Challenge problems "on the radar" today were not feasible on the supercomputers of just a decade ago. Generally, each successive generation of Grand Challenge problems requires more memory and faster processing capabilities. The architectures used to deliver that memory and processing speed are changing, and along with them the algorithms and techniques used to make efficient use of them.
Before examining some of today's most challenging computational problems, it may be instructive to look at the trends in HPC over the last decade or so.
Hardware Trends
According to the U.S. Army Research Laboratory (http://www.arl.army.mil/www/default.cfm?Action=20&Page=272), there have been five generations of architectures in the realm of scientific computing. They are serial processors (1947-1986), vector processors (1986-2002), shared memory (1993-present), distributed memory (2000-present), and commodity clusters (2001-present).
Indeed, the trend in recent years in multiprocessing has been away from vector architectures and towards low-cost, multithreaded, single-address-space systems. (http://www.nus.edu.sg/comcen/svu/publications/SVULink/vol_1_iss_1/hpc-trends.html) Multi-core processors are becoming increasingly mainstream. Today, many new desktop systems contain dual cores, and some quad-core systems are being sold. Tony Hey (http://en.wikipedia.org/wiki/Tony_Hey), a leading researcher in parallel computing, has predicted that by 2015, a single socket will contain hundreds of cores. (http://www.accessmylibrary.com/coms2/summary_0286-30177016_ITM)
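To make the shared-memory, multi-core trend concrete, the following is a minimal sketch (not drawn from any of the sources above) of how an application can spread a loop across the cores of a single-address-space machine using OpenMP in C. The array size and the work done per element are arbitrary illustrative choices.

/* Minimal sketch of shared-memory parallelism on a multi-core system,
 * using OpenMP (compile with, e.g., gcc -fopenmp). The array size and
 * the per-element work are arbitrary illustrations. */
#include <math.h>
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double a[N];

    /* Each core gets a share of the iterations; all threads read and
     * write the same (single) address space. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = sin((double)i) * cos((double)i);

    printf("ran with up to %d threads, a[42] = %f\n",
           omp_get_max_threads(), a[42]);
    return 0;
}

The same source compiles and runs serially if OpenMP is disabled, which is part of the appeal of this style of programming on commodity multi-core hardware.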
There have also been strong trends towards cluster and grid computing, which seek to take advantage of multiple, low-cost (commodity) computer systems and treat them as a single logical entity. Several categories of clusters exist, ranging from high performance computers linked together via high speed interconnects, to geographically separated systems linked together over the Internet. (http://en.wikipedia.org/wiki/Cluster_computing) Grid computing is a form of the latter, generally composed of multiple "collections" of computers (grid elements) that do not necessarily trust each other.
Putting these together, we see a clear movement towards commodity processors in both the areas of parallel computing (processors in a single computer system) and distributed computing (using multiple computer systems). Perhaps the strongest driving force behind these trends is one of supply and demand, or economies of scale. Specialized hardware is, by definition, in less demand than general purpose hardware, making it much more difficult to recover design and production costs. Hence, there is a strong economic incentive for consumers and manufacturers alike to favor more general purpose solutions.
Despite these trends, the "traditional" supercomputer is alive and well. Today's highest performance machines are capable of performing at hundreds of teraflops. (A teraflop is 1 x 10^12 floating-point operations per second, or FLOPS.) As of June 2007, the world's fastest computer, as measured by the Linpack (http://en.wikipedia.org/wiki/Linpack) benchmark, is IBM's massively parallel Blue Gene/L. IBM has plans to build a 3 petaflop (3 x 10^15 FLOPS) machine by fall 2007, and a 10 petaflop machine in the 2010-2012 timeframe. More information on IBM's Blue Gene and other supercomputers can be found at http://www.top500.org/.
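As a rough illustration of what a FLOPS figure measures, the sketch below times a naive matrix multiply in C and reports its floating-point rate. This is not the Linpack benchmark itself, which solves a dense system of linear equations and counts roughly 2/3 n^3 + 2 n^2 operations; the matrix size here is an arbitrary illustrative choice.

/* Illustrative sketch only: times a naive n-by-n matrix multiply and
 * reports a FLOPS rate (2*n^3 floating-point operations / elapsed time).
 * The real Linpack benchmark instead solves a dense linear system;
 * this just shows what a "FLOPS" figure means. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 512

static double a[N][N], b[N][N], c[N][N];

int main(void)
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            a[i][j] = (double)rand() / RAND_MAX;
            b[i][j] = (double)rand() / RAND_MAX;
        }

    clock_t start = clock();
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++)
                sum += a[i][k] * b[k][j];   /* one multiply + one add */
            c[i][j] = sum;
        }
    double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;

    double flops = 2.0 * N * N * N / seconds;   /* 2*n^3 operations */
    printf("%.2f MFLOPS (c[0][0] = %f)\n", flops / 1e6, c[0][0]);
    return 0;
}

Scaling this kind of measurement up by many orders of magnitude, and across tens of thousands of processors, is what the teraflop and petaflop figures above describe.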
Software Trends
With dramatic increases in hardware performance come more demanding applications, which in turn demand higher performance software. Many scientific applications available today, some even commercially, were not available on general purpose architectures even a few years ago.
Mathematica 6, by Wolfram Research, is one of a growing number of commercially available scientific/engineering applications. Mathematica does not require parallel processors, but it does require significant resources by the standards of the "average" personal computer: at least 512 MB of memory. However, on many operating systems Mathematica is able to take advantage of multiple processors, focusing on linear algebra and machine-precision real numbers. Wolfram does offer a parallel computing toolkit (http://www.wolfram.com/products/applications/parallel) that allows Mathematica application designers to take advantage of various parallel architectures, ranging from shared memory multiprocessor machines to supercomputers. Wolfram also offers a version of Mathematica tailored to cluster/grid computing; see http://www.wolfram.com/products/gridmathematica/.
There is a trend in HPC towards virtualization. Virtualization allows application programmers to write applications for virtual machines, without concerning themselves with physical platforms. There are many benefits to running on virtual machines, among them the ability to seamlessly migrate a virtual machine (along with its running applications) off its physical host machine. However, in practice there are many challenges related to virtualization, chief among them the performance limitations it imposes (http://www.scientificcomputing.com/ShowPR~PUBCODE~030~ACCT~3000000100~ISSUE~0707~RELTYPE~HPCC~PRODCODE~00000000~PRODLETT~C.html).
Grand Challenge Problems
Some problems are so complex that solving them would require a significant increase (in some cases by orders of magnitude) in the computational capabilities of today's computers. Such problems are loosely defined as "Grand Challenge Problems": problems that are solvable in principle, but not in a reasonable period of time on today's computers, and that are of significant social or economic importance.
Biology - Human Genome Project (http://en.wikipedia.org/wiki/Human_genome_project). The project took a "divide and conquer" approach: the genome was broken down into smaller pieces, approximately 150,000 base pairs in length, which were processed separately and then assembled to form chromosomes (a simplified sketch of this approach appears after this list).
Physics (nuclear technology)
Astronomy
Cognition/Strong AI - the idea that computers can become "self-aware" (as opposed to weak AI, whose goals are less grandiose, e.g., passing the Turing test)
Game playing - chess, checkers (Jonathan Schaeffer)
Linpack benchmark
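The following is a highly simplified sketch in C of the divide-and-conquer idea mentioned for the Human Genome Project above: a long sequence is cut into overlapping fragments that could be processed independently (in the real project, each piece was sequenced separately), and the results are stitched back together by matching the overlaps. The sequence, fragment length, and overlap below are tiny illustrative values, not the roughly 150,000 base pair pieces used in practice.

/* Simplified sketch of the divide-and-conquer approach described above.
 * Fragment and overlap sizes are tiny, illustrative values. */
#include <stdio.h>
#include <string.h>

#define FRAG_LEN 12   /* length of each fragment   (illustrative) */
#define OVERLAP   4   /* overlap between fragments (illustrative) */

int main(void)
{
    const char *genome = "ACGTACGGTTACCGATGCATTAGCCGTAACGT";
    size_t len = strlen(genome);
    char assembled[128] = "";

    /* Divide: cut the sequence into overlapping fragments. */
    for (size_t start = 0; start < len; start += FRAG_LEN - OVERLAP) {
        char frag[FRAG_LEN + 1];
        size_t n = len - start < FRAG_LEN ? len - start : FRAG_LEN;
        memcpy(frag, genome + start, n);
        frag[n] = '\0';

        /* "Conquer": each fragment could be handled by a separate
         * processor; here we simply print it. */
        printf("fragment at %2zu: %s\n", start, frag);

        /* Merge: append the non-overlapping tail of each fragment. */
        strcat(assembled, start == 0 ? frag : frag + OVERLAP);
        if (n < FRAG_LEN) break;
    }

    printf("reassembled:    %s\n", assembled);
    printf("matches input:  %s\n",
           strcmp(assembled, genome) == 0 ? "yes" : "no");
    return 0;
}

In the real project the "conquer" step (sequencing each piece) and the "merge" step (assembling the pieces by their overlaps) were enormously more complex, but the overall structure, which is what made the work parallelizable across many machines, is the same.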
Links: http://en.wikipedia.org/wiki/High_Performance_Computing