CSC/ECE 506 Spring 2013/2b ks
=Introduction=
Processor clock speeds have plateaued in the last several years, creating demand for alternative ways to achieve performance gains. At the heart of many scientific discoveries has been parallelism. Today’s computers are designed to take advantage of parallel processing to produce results more quickly; even off-the-shelf personal computers contain CPUs with several cores that execute in parallel.
While multi-core is a fairly new concept for the CPU, it has been a focus of GPU design for much longer. GPUs have had the luxury of being built for very specialized tasks related to graphics computation and rendering, and these kinds of computation require a great deal of parallelism to be efficient. In ''Understanding the Parallelism of GPUs'', the author uses the example of blending two images together: the GPU must perform a blending operation on the pixels of both images, and the process is largely the same operation applied to different data points. This intense level of floating-point calculation, combined with a focused purpose, has led GPUs to rely heavily on data parallelism, and that focus has driven very different engineering choices as GPUs have matured. Now that CPU clock speeds are no longer improving, the scientific and technology communities are looking to GPUs for continued performance gains.
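To make the blending example concrete, the following is a minimal CUDA sketch of that data-parallel pattern. The kernel name <code>blendKernel</code>, the flat greyscale pixel buffers, and the 50/50 blend weight are illustrative assumptions rather than details from the article; the point is the structure, in which every GPU thread performs the same blend operation on a different pixel.

<pre>
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// One thread per pixel: every thread executes the same blend operation,
// just on a different data element (the data-parallel pattern described above).
__global__ void blendKernel(const float *imgA, const float *imgB,
                            float *out, int numPixels, float alpha)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < numPixels)
        out[i] = alpha * imgA[i] + (1.0f - alpha) * imgB[i];
}

int main()
{
    const int numPixels = 1 << 20;               // 1M-pixel greyscale images (illustrative size)
    const size_t bytes  = numPixels * sizeof(float);

    // Host buffers filled with placeholder pixel values.
    float *hA   = (float *)malloc(bytes);
    float *hB   = (float *)malloc(bytes);
    float *hOut = (float *)malloc(bytes);
    for (int i = 0; i < numPixels; ++i) { hA[i] = 0.25f; hB[i] = 0.75f; }

    // Device buffers and host-to-device copies.
    float *dA, *dB, *dOut;
    cudaMalloc((void **)&dA, bytes);
    cudaMalloc((void **)&dB, bytes);
    cudaMalloc((void **)&dOut, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover every pixel.
    int threads = 256;
    int blocks  = (numPixels + threads - 1) / threads;
    blendKernel<<<blocks, threads>>>(dA, dB, dOut, numPixels, 0.5f);

    cudaMemcpy(hOut, dOut, bytes, cudaMemcpyDeviceToHost);
    printf("first blended pixel = %f\n", hOut[0]);   // expect 0.500000

    cudaFree(dA); cudaFree(dB); cudaFree(dOut);
    free(hA); free(hB); free(hOut);
    return 0;
}
</pre>

Because each pixel is independent, the launch simply maps one thread to each data element; this is the kind of workload that keeps the GPU's many cores busy with a single, uniform operation.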
Given the engineering choices behind GPU hardware, developers have long wanted to tap that raw processing power to build more responsive applications. The requirements for doing so were daunting to anyone but the most highly trained and experienced programmers: strong knowledge of the low-level details was needed to see any true benefit, and a developer who lacked that expertise could easily end up with the opposite effect. Because of these challenges, software alternatives slowly began to appear to make leveraging this power less daunting. In this article we take a look at General-Purpose Graphics Processing Unit (GPGPU) programming. We examine language abstractions such as CUDA and OpenCL to see how accessible this approach is becoming, compare its potential performance gains with evolving CPU-based solutions, and close with our thoughts on the future of this area of study and its expected place in the industry.