CSC/ECE 506 Fall 2007/wiki1 2 3K8i
What characterizes present-day applications? How much memory, processor time, etc.? How high is the speedup? For several years the trend has been towards clusters built from commodity processors, programmed as multithreaded applications within a single address space. (http://www.arl.army.mil/www/default.cfm?Action=20&Page=272)
There has also been a trend towards grid computing.
Suzanne Tracy predicts "hundreds of cores on a socket by 2015." (http://www.accessmylibrary.com/coms2/summary_0286-30177016_ITM)
http://www.nus.edu.sg/comcen/svu/publications/SVULink/vol_1_iss_1/hpc-trends.html
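The "multithreaded, single address space" model mentioned above can be illustrated with a minimal sketch using OpenMP in C. The dot-product kernel, array size, and names below are illustrative assumptions, not drawn from the sources above:

<pre>
/* Minimal sketch of the shared-address-space threading model.
 * Compile with: gcc -fopenmp dot.c
 */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N];  /* one address space: every thread sees these */
    double sum = 0.0;
    int i;

    for (i = 0; i < N; i++) {  /* serial initialization */
        a[i] = 1.0;
        b[i] = 2.0;
    }

    /* Threads divide the iterations among themselves; each reads the
     * shared arrays directly, and the reduction combines partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < N; i++)
        sum += a[i] * b[i];

    printf("dot product = %f using up to %d threads\n",
           sum, omp_get_max_threads());
    return 0;
}
</pre>

Every thread dereferences the same arrays directly; no explicit message passing is needed, which is what distinguishes this model from cluster-level MPI codes.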
'''Off the Shelf Software Packages'''
Mathematica 6 - does not require parallel processors, but it does require significant resources by the standards of the "average" personal computer: at least 512 MB of memory. On many operating systems Mathematica can take advantage of multiple processors, chiefly for linear algebra and machine-precision real numbers. (http://support.wolfram.com/mathematica/systems/allplatforms/multipleprocessors.html)
See also: http://www.wolfram.com/products/applications/parallel/
http://www.wolfram.com/products/gridmathematica/
Some challenges today in keeping the "performance" in HPC (High Performance Computing):
http://www.scientificcomputing.com/ShowPR~PUBCODE~030~ACCT~3000000100~ISSUE~0707~RELTYPE~HPCC~PRODCODE~00000000~PRODLETT~C.html
All is not well in HPC. There is a trend towards virtualization, i.e., running on virtual machines. One benefit of this approach is that a virtual machine can be migrated seamlessly off its physical host machine. In practice, however, virtualization raises many challenges: current designs impose performance limitations on HPC workloads.
<examples, source>
Petascale computing - 10^15 floating-point operations per second! It should be a reality by 2010 (http://www.nsf.gov/pubs/2005/nsf05625/nsf05625.htm).
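To make that rate concrete, here is an illustrative back-of-the-envelope calculation (the workload size is an assumption, not from the source): a job requiring 10^18 floating-point operations would finish in 10^18 / 10^15 = 10^3 seconds (under 17 minutes) on a petascale machine, but would need 10^6 seconds (roughly 11.6 days) at one teraflop per second.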
Computational modeling/simulation: in-depth analysis can be performed cheaply on hypothetical designs. There is a direct correlation between computational performance and the range of problems that can be studied through simulation.
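As a purely illustrative example of that correlation, consider a toy Monte Carlo simulation: the estimate of pi below sharpens as the sample count grows, so a faster machine lets you run a larger and more accurate study of the same model. The sample count and seed are assumptions for illustration:

<pre>
/* Toy Monte Carlo simulation estimating pi. Accuracy improves with
 * the number of random samples, so the affordable problem size scales
 * directly with available compute. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const long samples = 10000000;  /* problem size: scale with compute */
    long hits = 0, i;
    unsigned int seed = 12345;

    for (i = 0; i < samples; i++) {
        double x = (double)rand_r(&seed) / RAND_MAX;
        double y = (double)rand_r(&seed) / RAND_MAX;
        if (x * x + y * y <= 1.0)
            hits++;                 /* point fell inside the quarter circle */
    }

    /* area ratio (quarter circle / unit square) = pi / 4 */
    printf("pi ~= %f with %ld samples\n", 4.0 * hits / samples, samples);
    return 0;
}
</pre>

The sampling loop here is serial for brevity; each iteration is independent, which is exactly the structure that parallel hardware exploits.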
Some problems are so complex that solving them would require a significant increase (in some cases by orders of magnitude) over the computational capabilities of today's computers. These are loosely defined as "Grand Challenge Problems": problems that are solvable, but not in a reasonable period of time on today's machines, and that carry some social or economic importance.
Biology - the Human Genome Project (http://en.wikipedia.org/wiki/Human_genome_project) used what looks like a "divide and conquer" approach: the genome was broken down into smaller pieces, approximately 150,000 base pairs in length, processed separately, and then assembled to form chromosomes (a toy sketch of this pattern follows this list).
Physics (nuclear technology)
Astronomy
Cognition/Strong AI - the idea that computers can become "self-aware" (versus weak AI, whose goal is not so grandiose - the Turing test).
Game playing - chess, checkers (Jonathan Schaeffer)
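As promised under Biology above, here is a toy sketch of the divide-and-conquer pattern: a short string stands in for a chromosome, it is cut into overlapping fragments (real shotgun fragments were approximately 150,000 base pairs; 8 characters here), and the pieces are rejoined by matching the known overlap. This illustrates the pattern only, not the actual Human Genome Project pipeline; all names and sizes are assumptions:

<pre>
/* Divide and conquer on a sequence: split into overlapping fragments
 * (each could be processed independently, i.e., in parallel), then
 * stitch the results back together using the shared overlap. */
#include <stdio.h>
#include <string.h>

#define FRAG 8     /* fragment length (toy scale)                    */
#define OVERLAP 3  /* shared characters used to rejoin adjacent ones */

int main(void)
{
    const char genome[] = "ACGTACGGTTCAGCTAACGT";
    char assembled[sizeof genome] = "";
    size_t pos = 0, len = strlen(genome);

    while (pos < len) {
        char frag[FRAG + 1];
        size_t n = len - pos < FRAG ? len - pos : FRAG;
        memcpy(frag, genome + pos, n);
        frag[n] = '\0';
        printf("fragment at %zu: %s\n", pos, frag);

        /* assembly step: skip the overlap already copied last time */
        strcat(assembled, pos == 0 ? frag : frag + OVERLAP);
        pos += FRAG - OVERLAP;  /* next fragment overlaps this one */
    }
    printf("reassembled: %s (matches input: %s)\n",
           assembled, strcmp(assembled, genome) == 0 ? "yes" : "no");
    return 0;
}
</pre>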
Linpack benchmark - measures the rate at which a machine solves a dense system of linear equations; it is the basis for the TOP500 supercomputer rankings.
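Linpack's core computation is Gaussian elimination (LU factorization with partial pivoting) followed by back substitution. The sketch below shows that kernel at toy scale; the 3x3 system is an illustrative assumption, and production runs use blocked, distributed implementations such as HPL:

<pre>
/* What Linpack measures, at toy scale: solving Ax = b by Gaussian
 * elimination with partial pivoting, then back substitution.
 * Compile with: gcc -std=c99 lu.c -lm     (solution here: x = 1, 1, 2) */
#include <stdio.h>
#include <math.h>

#define N 3

int main(void)
{
    double a[N][N] = {{2, 1, 1}, {4, -6, 0}, {-2, 7, 2}};
    double b[N]    = {5, -2, 9};
    double x[N];

    for (int k = 0; k < N - 1; k++) {
        /* partial pivoting: bring up the row with the largest |a[i][k]| */
        int p = k;
        for (int i = k + 1; i < N; i++)
            if (fabs(a[i][k]) > fabs(a[p][k])) p = i;
        for (int j = 0; j < N; j++) {
            double t = a[k][j]; a[k][j] = a[p][j]; a[p][j] = t;
        }
        double t = b[k]; b[k] = b[p]; b[p] = t;

        /* elimination: zero the entries below the pivot */
        for (int i = k + 1; i < N; i++) {
            double m = a[i][k] / a[k][k];
            for (int j = k; j < N; j++)
                a[i][j] -= m * a[k][j];
            b[i] -= m * b[k];
        }
    }

    /* back substitution */
    for (int i = N - 1; i >= 0; i--) {
        x[i] = b[i];
        for (int j = i + 1; j < N; j++)
            x[i] -= a[i][j] * x[j];
        x[i] /= a[i][i];
    }

    printf("x = (%g, %g, %g)\n", x[0], x[1], x[2]);
    return 0;
}
</pre>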
'''Links:'''
http://en.wikipedia.org/wiki/High_Performance_Computing