CSC/ECE 506 Fall 2007/wiki1 9 arubha

Array processing is a computer architectural concept that was first put to use in the early 1960s. As scientific computing evolved, the need to process large amounts of data with a common algorithm became important, and computers were built with an array of processing elements (PEs) controlled by a common control unit (CU). The PEs were usually ALUs capable of performing simple mathematical operations, while the CPU itself performed the job of the CU.

As computer architectures evolved, a new concept called vector processing was developed during the 1970s. In vector processing, a PE usually consists of a collection of functional units that operate on vectors of data. Compared to array processing, this greatly simplifies the interconnections and reduces data dependencies.
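To make the distinction concrete, here is a minimal sketch of the same element-wise addition, first as the per-element loop a scalar CPU executes, then as the single whole-vector instruction a vector processor would issue. The VLOAD/VADD mnemonics in the comment are illustrative only, not any real machine's instruction set.

```c
#include <stddef.h>

/* Element-wise add as a scalar CPU executes it: an add instruction is
 * fetched and decoded once per element, plus loop-control overhead. */
void scalar_add(const double *a, const double *b, double *c, size_t n)
{
    for (size_t i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}

/*
 * On a vector processor the entire loop collapses to something like:
 *
 *     VLOAD  V1, a        ; stream n elements of a into vector register V1
 *     VLOAD  V2, b
 *     VADD   V3, V1, V2   ; ONE instruction adds all n element pairs
 *     VSTORE V3, c
 *
 * A single fetch/decode covers the whole vector, and the elements
 * stream back-to-back through one add functional unit.
 */
```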

Vector processors and array processors form the basic building blocks of some of the earliest and most successful supercomputers. Vector and array processing techniques are used extensively in applications such as ocean mapping, 3D modeling, molecular modeling, weather forecasting, and wind tunnel simulation. The Airbus A380 project used NEC SX-5 scalable vector supercomputers to run simulations and fine-tune the design even before the aircraft's maiden flight.

Trends

The earliest array processors were used to operate on matrix-like data. The CU would load all the ALUs with a common instruction, and the ALUs would take their data inputs from an array of memory locations, each containing a different value from the matrix. This concept of using a separate ALU for each data element while performing the same operation on all of them is classified as Single-Instruction-Multiple-Data (SIMD) under Flynn's taxonomy.
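As a rough software model of that SIMD arrangement (the PE count and the broadcast operation here are invented for illustration), each loop iteration below stands in for one PE; in hardware, all of the PEs would execute the broadcast instruction in the same cycle on their own local data.

```c
#include <stdio.h>

#define NUM_PES 8                    /* illustrative number of PEs */

int main(void)
{
    /* Each PE holds one element of a matrix row in its local memory. */
    double pe_data[NUM_PES] = {1, 2, 3, 4, 5, 6, 7, 8};

    /* The CU broadcasts a single instruction, "multiply by 2.0".
     * The loop serializes what the PE array does simultaneously. */
    for (int pe = 0; pe < NUM_PES; pe++)
        pe_data[pe] *= 2.0;

    for (int pe = 0; pe < NUM_PES; pe++)
        printf("PE %d: %g\n", pe, pe_data[pe]);
    return 0;
}
```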

The first implementation of a vector-processing-based computer system was the CDC STAR-100, developed by Control Data Corporation (CDC) in the early 1970s and designed to perform 100 million floating-point operations per second (100 MFLOPS). The STAR-100 combined scalar and vector computation. In practice, though, it peaked at about 20 MFLOPS even when fully loaded, and its performance on real-life data sets was still less impressive.

The first system to fully exploit the vector processing architecture was the Cray-1, developed by Cray Research, the company Seymour Cray founded after leaving CDC. The Cray-1 overcame some of the pitfalls encountered during the STAR-100 project: the STAR-100 spent a great deal of time decoding vector instructions and had to re-fetch data from memory every time an instruction asked for it. The Cray-1 introduced a set of CPU registers that held not only data that would be used again but also successive instructions, thus introducing pipelining. This enabled the Cray-1 to work on more flexible data sets and improved its instruction decoding times, but the registers placed a limit on vector sizes and made the system expensive.

Instead of leaving the data in memory as the STAR-100 and the Texas Instruments ASC did, the Cray design had eight "vector registers," each holding sixty-four 64-bit words. The vector instructions were applied between registers, which is much faster than talking to main memory. In addition, the design had completely separate pipelines for different instructions; for example, addition/subtraction was implemented in different hardware than multiplication. This allowed a batch of vector instructions themselves to be pipelined, a technique called vector chaining. The Cray-1 normally delivered about 80 MFLOPS, but with up to three chains running it could peak at 240 MFLOPS, a respectable number even today.
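The textbook illustration of chaining is the DAXPY kernel, y = a*x + y, in which each product leaving the multiply pipeline can enter the add pipeline immediately. Below is a sketch assuming the 64-word vector registers described above; the mnemonics are illustrative, not actual Cray assembly.

```c
#include <stddef.h>

/* DAXPY: y[i] = a * x[i] + y[i] -- the classic chaining workload,
 * since it keeps a multiply pipeline and an add pipeline busy at once. */
void daxpy(size_t n, double a, const double *x, double *y)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/*
 * With 64-word vector registers, the hardware "strip-mines" the loop
 * into 64-element chunks, roughly:
 *
 *     for each 64-element strip:
 *         VLOAD  V1, x[i..i+63]
 *         VLOAD  V2, y[i..i+63]
 *         VMUL   V3, a, V1      ; multiply pipeline starts producing
 *         VADD   V4, V3, V2     ; add pipeline CHAINS off V3: each product
 *                               ; enters the adder as soon as it emerges,
 *                               ; before the VMUL has finished the strip
 *         VSTORE V4, y[i..i+63]
 *
 * Running the separate multiply and add pipelines in this overlapped
 * fashion is what let the Cray-1 climb well above its roughly 80 MFLOPS
 * single-pipeline rate.
 */
```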

Other examples followed. CDC tried to re-enter the high-end market with its ETA-10 machine, but it sold poorly, and the company took that as an opportunity to leave the supercomputing field entirely. Various Japanese companies (Fujitsu, Hitachi, and NEC) introduced register-based vector machines similar to the Cray-1, typically slightly faster and much smaller. Oregon-based Floating Point Systems (FPS) built add-on array processors for minicomputers, later building its own minisupercomputers. Cray, however, remained the performance leader, continually beating the competition with the series of machines that led to the Cray-2, Cray X-MP, and Cray Y-MP. Since then, the supercomputer market has focused much more on massively parallel processing than on better implementations of vector processors. Still, recognizing the benefits of vector processing, IBM developed its Virtual Vector Architecture for use in supercomputers, coupling several scalar processors to act as a single vector processor.

Today the average computer at home crunches as much data watching a short QuickTime video as all of the supercomputers of the 1970s did. Vector-processing elements have since been added to almost all modern CPU designs, although they are typically referred to as SIMD extensions. In these implementations the vector unit runs beside the main scalar CPU and is fed data by programs that know it is there.
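As one concrete example of such an extension, the sketch below uses the x86 SSE intrinsics (one widely available SIMD instruction set of this era) to add four pairs of floats with a single instruction; the scalar CPU sets up the arrays, and the SIMD unit performs the arithmetic.

```c
#include <xmmintrin.h>   /* x86 SSE intrinsics */
#include <stdio.h>

int main(void)
{
    float a[4] = { 1.0f,  2.0f,  3.0f,  4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float c[4];

    __m128 va = _mm_loadu_ps(a);     /* pack 4 floats into a 128-bit register */
    __m128 vb = _mm_loadu_ps(b);
    __m128 vc = _mm_add_ps(va, vb);  /* ONE instruction performs 4 additions */
    _mm_storeu_ps(c, vc);

    printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);
    return 0;
}
```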

Horizons

Links

Bibliography

* Parallel Computer Architecture: A Hardware/Software Approach. David E. Culler, Jaswinder Pal Singh, with Anoop Gupta. ISBN 1558603433.
* Computer Architecture: A Quantitative Approach. John L. Hennessy, David A. Patterson; with contributions by Andrea C. Arpaci-Dusseau. ISBN 0123704901.
* Lecture notes by Professor David A. Patterson: http://www.cs.berkeley.edu/~pattrsn/252S98/Lec07-vector.pdf
* Lecture notes by Professor David E. Culler: http://www.cs.berkeley.edu/~culler/cs252-s02/slides/Lec20-vector.pdf
* Cray-1 architecture: http://en.wikipedia.org/wiki/Cray-1
* Vector processors: http://en.wikipedia.org/wiki/Vector_processor