CSC/ECE 506 Fall 2007/wiki1 2 3K8i

''Section 1.1.1, first half: Scientific/engineering application trends. What characterizes present-day applications? How much memory, processor time, etc.? How high is the speedup?''
== Trends in Scientific and Engineering Computing ==
Scientific and engineering applications continue to demand the highest performance computers available, a trend established in the early days of supercomputers that continues today in the realm of "High Performance Computing" (HPC).  Interestingly, an increasing number of applications are being designed to run on more modest platforms, such as a cluster of commodity processors.


Much work has been done in the areas of computational modeling and simulation.  By using modeling, scientists are able to analyze hypothetical designs rather than relying on empirical evidence, which is often difficult or impossible to come by.  Higher performance computers afford the resources to build richer, more complicated models that allow scientists to study more difficult problems.  As problems are solved, new, more complex problems arise to take their place.  Many Grand Challenge Problems "on the radar" today were not feasible on the supercomputers of just a decade ago.  Generally, each successive generation of Grand Challenge problems ''requires more memory and faster processing capabilities''.  The architectures used to deliver that memory and processing speed are changing, and with them the algorithms and techniques needed to use those resources efficiently.


Before looking at some of today's most challenging computational problems, it may be instructive to look at the trends in HPC over the last decade or so.


== Hardware Trends ==
=== Architecture Evolution ===
According to the U.S. Army Research Laboratory, there have been five generations of architectures in the realm of scientific computing.  They are serial processors (1947-1986), vector processors (1986-2002), shared memory (1993-present), distributed memory (2000-present), and commodity clusters (2001-present) [http://www.arl.army.mil/www/default.cfm?Action=20&Page=272].


Indeed, the trend in recent years in multiprocessing has been away from vector architectures and towards low cost, multi-threaded, single-address-space systems [http://www.nus.edu.sg/comcen/svu/publications/SVULink/vol_1_iss_1/hpc-trends.html].  Multi-core processors are becoming increasingly mainstream.  Today, many new desktop systems contain dual-core processors, and quad-core systems are beginning to be sold.  [http://en.wikipedia.org/wiki/Tony_Hey Tony Hey], a leading researcher in parallel computing, has predicted that by 2015 a single socket will contain hundreds of cores [http://www.accessmylibrary.com/coms2/summary_0286-30177016_ITM].


=== Cluster and Grid Computing ===
There have also been strong trends towards [http://en.wikipedia.org/wiki/Cluster_computing cluster] and [http://en.wikipedia.org/wiki/Grid_computing grid] computing, which seek to take advantage of multiple, low cost (commodity) computer systems and treat them as a single logical entity.  Several categories of clusters exist, ranging from high performance computers linked together via high speed interconnects, to geographically separated systems linked together over the Internet [http://en.wikipedia.org/wiki/Cluster_computing].  Grid computing is a form of the latter, generally composed of multiple "collections" of computers (grid elements) that do not necessarily trust each other.


=== Why it is This Way ===
Putting these together, we see a clear movement towards commodity processors in both parallel computing (multiple processors in a single computer system) and distributed computing (multiple computer systems working together).  Perhaps the strongest driving force behind these trends is one of supply and demand, or economies of scale.  Specialized hardware is, by definition, in less demand than general purpose hardware, making it much more difficult to recover design and production costs.  Hence, there is a strong economic incentive for consumers and manufacturers alike to favor more general purpose solutions.


=== Supercomputers: faster than ever ===
Despite these trends, the "traditional" supercomputer is alive and well.  Today's highest performance machines are capable of hundreds of teraflops (a teraflop is 1 x 10^12 floating point operations per second).  As of June 2007, the world's fastest computer, as measured by the [http://en.wikipedia.org/wiki/Linpack Linpack Benchmark], is IBM's massively parallel Blue Gene/L.  IBM has plans to build a 3 petaflop (3 x 10^15 FLOPS) machine by fall 2007, and a 10 petaflop machine in the 2010-2012 timeframe.  More information on IBM's Blue Gene and other supercomputers can be found at [http://www.top500.org Top500.org].  A rough sense of what these figures mean is sketched below.
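
To put those units in perspective, the short Python sketch below converts a hypothetical workload into running time at teraflop and petaflop rates.  The operation count and machine speeds are illustrative assumptions, not published benchmark figures.

<pre>
TERAFLOP = 1e12   # floating point operations per second
PETAFLOP = 1e15

# Hypothetical workload of 3.6e18 floating point operations.
ops = 3.6e18

for label, rate in [("100-teraflop machine", 100 * TERAFLOP),
                    ("3-petaflop machine", 3 * PETAFLOP)]:
    hours = ops / rate / 3600
    print(f"{label}: about {hours:.1f} hours")   # prints ~10.0 and ~0.3 hours
</pre>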


== Software Trends ==
With dramatic increases in hardware performance come more demanding applications, which in turn demand higher performance software.  Many scientific applications available today, some even commercially, were not available on general purpose architectures even a few years ago.
 


=== Multiprocessor Capabilities ===
An increasing number of scientific/engineering applications, such as Mathematica 6 by Wolfram Research, are becoming '''multiprocessor capable'''.
While Mathematica does not ''require'' multiple processors, it does benefit from them, particularly when performing long-running linear algebra computations or when working with machine-precision real numbers.  Mathematica does require a significant amount of RAM for the average personal computer of its day - at least 512 MB.  Wolfram offers a [http://www.wolfram.com/products/applications/parallel parallel computing toolkit] that allows Mathematica application designers to take advantage of various parallel architectures, ranging from shared memory multiprocessor machines to supercomputers.  Wolfram also offers a version of Mathematica tailored to cluster/grid computing, see [http://www.wolfram.com/products/gridmathematica/ http://www.wolfram.com/products/gridmathematica/].
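
As a minimal sketch of what "multiprocessor capable" means in practice, the Python fragment below farms independent rows of a toy matrix out to a pool of worker processes.  This is a generic illustration, not how Mathematica itself is implemented; the kernel function and data are made up.

<pre>
from multiprocessing import Pool

def dominant_scale(row):
    # Stand-in for a per-row numeric kernel (e.g. one step of a matrix routine).
    return max(abs(x) for x in row)

if __name__ == "__main__":
    # Toy matrix: 8 rows of machine-precision reals.
    matrix = [[(i + 1) * 0.5 - j for j in range(6)] for i in range(8)]

    # The rows are independent, so a pool of worker processes can handle them
    # in parallel; the speedup approaches the core count only when each row
    # carries enough work to hide the process overhead.
    with Pool() as pool:
        scales = pool.map(dominant_scale, matrix)

    print(scales)
</pre>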


=== Virtualization ===
There is a trend in HPC towards [http://en.wikipedia.org/wiki/Virtualization '''virtualization'''].  Virtualization allows application programmers to write applications for virtual machines, without concerning themselves with the underlying physical platforms.  There are many benefits to running on virtual machines, among them the ability to seamlessly migrate a virtual machine (along with its running applications) off its physical host machine.  However, in practice there are many challenges related to virtualization, chief among them the performance limitations it imposes [http://www.scientificcomputing.com/ShowPR~PUBCODE~030~ACCT~3000000100~ISSUE~0707~RELTYPE~HPCC~PRODCODE~00000000~PRODLETT~C.html].


=== Visualization ===
Another trend that continues today is the [http://en.wikipedia.org/wiki/Scientific_visualization '''visualization'''] of structured, complex data.  Visualization often makes complex concepts easier to understand.  [http://en.wikipedia.org/wiki/Medical_imaging Medical imaging], for example, allows doctors to visualize internal organs without surgery.  An almost endless number of domains use data visualization, among them [http://en.wikipedia.org/wiki/Molecular_geometry molecular geometry], [http://en.wikipedia.org/wiki/Weather_forecasting weather forecasting], [http://en.wikipedia.org/wiki/Fluid_dynamics fluid dynamics], and [http://en.wikipedia.org/wiki/Earth_sciences earth sciences].


Visualization is a memory-intensive process.  How memory-intensive depends on the data set being rendered, but most modern PCs are now capable of running applications with complex visualization capabilities.  Several vendors, such as [http://www.avs.com/index_nf.html Advanced Visual Systems], offer software development kits to help developers incorporate visualization into their applications.


=== Hardware Requirements/Utilization ===
Generalizing from these examples, modern scientific/engineering applications are becoming increasingly capable and increasingly demanding of system resources (RAM and CPU).  However, many are targeted at consumer hardware platforms, and many are now multiprocessor capable.  In the best case, the speedup obtained from multiple processors is near linear; normally it is somewhat less, depending on how well the problem parallelizes, how well the program is written, and the communication latency involved.
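
One standard way to make that "somewhat less than linear" intuition concrete is Amdahl's law, which bounds speedup by the fraction of a program that must run serially.  The original text does not give the formula, so the short Python sketch below is offered only as an illustration; the 95% parallel fraction is an assumed example value.

<pre>
def amdahl_speedup(parallel_fraction, processors):
    # Upper bound on speedup when only part of a program can be parallelized.
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / processors)

# Even a program that is 95% parallelizable tops out far below linear speedup:
for p in (2, 4, 8, 64, 1024):
    print(p, "processors -> speedup <=", round(amdahl_speedup(0.95, p), 2))

# As p grows, the bound approaches 1 / 0.05 = 20x, however many processors are added.
</pre>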


== Applications for HPC ==
Some problems are so complex that solving them would require a significant increase (in some cases by orders of magnitude) in the computational capabilities of today's computers.  These problems are loosely defined as "Grand Challenge Problems."  Grand Challenge problems are solvable, but not in a reasonable period of time on today's computers.  Further, a Grand Challenge problem is a problem of some importance, either socially or economically.


Below is a sample of scientific/engineering applications that employ HPC.  (Note: not all are classified as "Grand Challenge" problems, but all are challenging!)


=== Biology ===
[http://en.wikipedia.org/wiki/Human_genome_project Human Genome Project (HGP)]. 
The goal of the HGP is to "understand the genetic makeup of the human species."  By some definitions of "complete sequencing," this work was completed in 2003.  However, some work remains, in that large areas of the genome remain unsequenced; some estimate the genome to be 92% sequenced.  The remaining portions, highly repetitive stretches of DNA such as the centromeres, are proving difficult to sequence with today's technology.


The basic approach of the HGP was a classic "divide and conquer": the genome was broken down into smaller pieces, approximately 150,000 base pairs in length, processed separately, and then assembled to form chromosomes.  A simplified sketch of this chunk-and-reassemble pattern appears below.
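
The Python sketch below is a loose illustration of the split/process/reassemble pattern, not a real assembly algorithm (actual sequence assembly must reconcile overlapping fragments and sequencing errors).  The chunk size echoes the 150,000-base-pair figure above; the per-piece analysis and the input data are made up.

<pre>
from concurrent.futures import ProcessPoolExecutor

CHUNK_SIZE = 150_000  # base pairs per piece, mirroring the figure quoted above

def split(genome, chunk_size=CHUNK_SIZE):
    # Break one long sequence into fixed-size pieces, preserving order.
    return [genome[i:i + chunk_size] for i in range(0, len(genome), chunk_size)]

def analyze(piece):
    # Stand-in for the expensive per-piece work; here, a toy GC-content count.
    return piece.count("G") + piece.count("C")

def combine(results):
    # Stitch the per-piece answers back together into one overall result.
    return sum(results)

if __name__ == "__main__":
    genome = "ACGT" * 1_000_000          # hypothetical 4,000,000-base-pair input
    pieces = split(genome)
    with ProcessPoolExecutor() as pool:  # pieces are independent, so they
        results = list(pool.map(analyze, pieces))  # can be processed in parallel
    print(combine(results))              # 2,000,000 for this toy input
</pre>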


=== Physics ===
Distributed and parallel computing is playing an increasing role in physics, including particle beam physics.  [http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6TJM-4B5R97X-3&_user=10&_coverDate=02%2F21%2F2004&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=721502408864d5addf95e432dfe4e0fe This paper] examines computational techniques by which cluster computing can be applied to a variety of beam physics problems.


The University of California at Santa Cruz uses a supercomputer to model supernova explosions, galaxy formation, and the fluid dynamics of the interiors of stars and planets. [http://www.unisci.com/stories/20021/0204026.htm]


=== Space Exploration ===
[http://www.seti.org Search for Extraterrestrial Intelligence (SETI)].  SETI's stated mission is to "explore, understand and explain the origin, nature and prevalence of life in the universe."  The SETI@Home project (which is not an official project of the SETI Institute) uses spare CPU cycles from millions of computers to process data from the Arecibo radio telescope in Puerto Rico.  To take part in the project, users simply download and install a screensaver which, when activated, downloads and processes data, then uploads the results - the work loop sketched below.
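
As a rough illustration of that fetch/compute/report cycle, here is a minimal volunteer-computing loop in Python.  The URLs, message format, and analysis step are invented for the example; this is not SETI@home's actual client or protocol.

<pre>
import time
import urllib.request  # standard library only

WORK_URL = "https://example.org/get_workunit"     # hypothetical project endpoints,
RESULT_URL = "https://example.org/submit_result"  # not the real SETI@home servers

def fetch_workunit():
    # Download one chunk of recorded telescope data to analyze.
    with urllib.request.urlopen(WORK_URL) as response:
        return response.read()

def analyze(workunit):
    # Stand-in for the signal-processing step (e.g. searching for narrowband spikes).
    return {"bytes": len(workunit), "checksum": sum(workunit) % 2**32}

def upload(result):
    # Report the result back to the project server (as a simple POST).
    request = urllib.request.Request(RESULT_URL, data=repr(result).encode())
    urllib.request.urlopen(request)

def run_when_idle():
    # Core volunteer-computing loop: fetch, compute, report, repeat.
    while True:
        upload(analyze(fetch_workunit()))
        time.sleep(1)  # real clients throttle themselves and yield to the user
</pre>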


=== Weather Forecasting ===
Weather forecasting, a classic "supercomputer" problem, remains in the realm of supercomputers and will for the foreseeable future.  Weather forecasting technology continues to improve, not only from increased hardware performance but also from improved software and modeling capabilities.  Three-day forecasts today are as accurate as one-day forecasts were just 20 years ago [http://www.hpcwire.com/hpcwire/hpcwireWWW/04/0813/108178.html].


=== Game Playing ===
Checkers was solved by the program [http://www.cs.ualberta.ca/~chinook/project/ Chinook] in March 2007, after a dozen or more computers had worked on the problem nearly nonstop for over a decade.  The search space for checkers is rather large - about 5 x 10^20 positions (chess is far larger still, with the number of positions estimated at around 10^44).  A number of [http://www.cs.ualberta.ca/~chinook/publications/research.html research publications] have emerged from the Chinook project, including "APHID: Asynchronous Parallel Game-Tree Search" and "Distributed Game-Tree Search Using Transposition Table Driven Work Scheduling."  A minimal serial sketch of the kind of game-tree search these systems parallelize is given below.
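
The sketch below is a plain serial alpha-beta (negamax) game-tree search in Python; the <code>game</code> interface (moves, play, evaluate, is_over) is hypothetical.  Projects like Chinook parallelize searches of this kind across many machines and share results through transposition tables, which is where most of the engineering difficulty lies.

<pre>
import math

def negamax(state, depth, alpha, beta, game):
    # Serial alpha-beta search in negamax form; scores are from the side to move.
    if depth == 0 or game.is_over(state):
        return game.evaluate(state)
    best = -math.inf
    for move in game.moves(state):
        score = -negamax(game.play(state, move), depth - 1, -beta, -alpha, game)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:   # cutoff: the opponent will never allow this line
            break
    return best
</pre>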
