Chapter 2b: Data parallelism in GPUs
Take a modern GPU architecture and use it as an example to explain how data-parallel programming is done. Do this in a discussion similar to the discussion of the hypothetical array processor in Lecture 3. That is, describe the problem, then describe the instructions of the GPU, and show code for how the problem can be solved efficiently using GPU instructions. You might want to use multiple examples to illustrate different facilities of a GPU instruction set.
== Introduction ==
== Terminology ==
== Basics of CUDA GPUs ==
=== Architecture overview ===
=== Instruction set overview ===
=== C runtime overview ===
== Problem ==
== Solution ==
=== Example 1 ===
=== Example 2 ===
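As a starting point for the example sections above, a minimal sketch of how a data-parallel problem maps onto GPU threads might look like the following. It uses the classic SAXPY operation (y = a·x + y) in CUDA C; the kernel, array sizes, and launch parameters here are illustrative placeholders, not the article's final content:

```cuda
#include <cuda_runtime.h>

// One thread computes one element of y: the loop over elements in a
// sequential version becomes the thread index on the GPU.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                    // guard: the grid may have more threads than n
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;        // illustrative problem size
    size_t bytes = n * sizeof(float);

    float *d_x, *d_y;
    cudaMalloc(&d_x, bytes);
    cudaMalloc(&d_y, bytes);
    // ... copy or initialize input data in d_x and d_y here ...

    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;  // ceil(n / threads)
    saxpy<<<blocks, threadsPerBlock>>>(n, 2.0f, d_x, d_y);
    cudaDeviceSynchronize();      // wait for the kernel to finish

    cudaFree(d_x);
    cudaFree(d_y);
    return 0;
}
```

The key data-parallel idea this illustrates: instead of iterating over the array, the program launches one lightweight thread per element, and the hardware schedules those threads across the GPU's cores in groups.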
Revision as of 23:45, 30 January 2012