Chapter 2b: Data parallelism in GPUs

Take a modern GPU architecture and use it as an example to explain how data-parallel programming is done. Follow a discussion similar to that of the hypothetical array processor in Lecture 3: describe the problem, describe the instructions of the GPU, and show code for how the problem can be solved efficiently using GPU instructions. You may want to use multiple examples to illustrate different facilities of a GPU instruction set.

== Introduction ==

== Terminology ==

== Basics of CUDA GPU ==

=== Architecture overview ===

=== Instruction set overview ===

=== C Runtime overview ===

== Problem ==

Consider the problem of adding two vectors and storing the result in a third vector. A standard C program to do this would look like the following:

int main()
{
	float A[1000], B[1000], C[1000];

	/* ..... some initializations ..... */

	for (int i = 0; i < 1000; i++)
		C[i] = A[i] + B[i];

	/* ..... */

	return 0;
}

Here the for loop executes once for each array index. On a sequential (single-core) processor, instructions similar to the following are executed on each iteration of the loop:

Fetch i
Fetch A[i] into Register EAX
Fetch B[i] into Register EBX
Add EBX to EAX
Store EAX into Address of C[i]


== Solution ==

=== CUDA solution ===
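
As a starting point, here is a minimal sketch of how the vector-addition problem from the previous section maps onto CUDA. The kernel name vecAdd, the block size of 256, and the sample initialization are illustrative assumptions rather than details fixed by this chapter.

#include <stdio.h>
#include <cuda_runtime.h>

#define N 1000

// Each thread computes one element of C. The loop from the sequential
// version disappears: the hardware runs many threads in parallel, all
// executing this same instruction stream on different array indices.
__global__ void vecAdd(const float *A, const float *B, float *C, int n)
{
	int i = blockIdx.x * blockDim.x + threadIdx.x;
	if (i < n)              // the grid may contain more threads than elements
		C[i] = A[i] + B[i];
}

int main()
{
	float A[N], B[N], C[N];
	for (int i = 0; i < N; i++) {   // sample initialization (illustrative)
		A[i] = (float)i;
		B[i] = 2.0f * (float)i;
	}

	// Allocate device memory and copy the inputs to the GPU.
	float *dA, *dB, *dC;
	size_t bytes = N * sizeof(float);
	cudaMalloc((void **)&dA, bytes);
	cudaMalloc((void **)&dB, bytes);
	cudaMalloc((void **)&dC, bytes);
	cudaMemcpy(dA, A, bytes, cudaMemcpyHostToDevice);
	cudaMemcpy(dB, B, bytes, cudaMemcpyHostToDevice);

	// Launch enough 256-thread blocks to cover all N elements.
	int threadsPerBlock = 256;
	int blocks = (N + threadsPerBlock - 1) / threadsPerBlock;
	vecAdd<<<blocks, threadsPerBlock>>>(dA, dB, dC, N);

	// Copy the result back to the host.
	cudaMemcpy(C, dC, bytes, cudaMemcpyDeviceToHost);
	printf("C[10] = %f\n", C[10]);

	cudaFree(dA);
	cudaFree(dB);
	cudaFree(dC);
	return 0;
}

Note that each GPU thread computes exactly one element of C. The per-element loop of the sequential version is replaced by a grid of threads that all execute the same instructions on different data, which is the essence of data parallelism.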

== Other parallelism techniques ==

Besides data parallelism, programs can also exploit task parallelism, where different tasks execute concurrently (possibly over the same data), and program parallelism, where independent programs run side by side. A sketch of task parallelism follows.
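
To contrast with the data-parallel example above, here is a minimal sketch of task parallelism using POSIX threads: two different tasks, a sum and a maximum, run concurrently over the same array. The function names and sample data are illustrative assumptions, not part of this chapter.

#include <stdio.h>
#include <pthread.h>

#define N 1000

static float data[N];
static float sum_result, max_result;

// Task 1: compute the sum of the array.
static void *sum_task(void *arg)
{
	(void)arg;
	float s = 0.0f;
	for (int i = 0; i < N; i++)
		s += data[i];
	sum_result = s;
	return NULL;
}

// Task 2: find the maximum element of the same array.
static void *max_task(void *arg)
{
	(void)arg;
	float m = data[0];
	for (int i = 1; i < N; i++)
		if (data[i] > m)
			m = data[i];
	max_result = m;
	return NULL;
}

int main()
{
	for (int i = 0; i < N; i++)   // sample initialization (illustrative)
		data[i] = (float)i;

	// The two tasks are independent, so they can run concurrently.
	pthread_t t1, t2;
	pthread_create(&t1, NULL, sum_task, NULL);
	pthread_create(&t2, NULL, max_task, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);

	printf("sum = %f, max = %f\n", sum_result, max_result);
	return 0;
}

Unlike the data-parallel CUDA example, where every thread runs the same code on different elements, here each thread runs different code; the parallelism comes from the independence of the tasks rather than from the structure of the data.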