<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.expertiza.ncsu.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jmfoste2</id>
	<title>Expertiza_Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.expertiza.ncsu.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jmfoste2"/>
	<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=Special:Contributions/Jmfoste2"/>
	<updated>2026-05-12T05:11:03Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.41.0</generator>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45158</id>
		<title>CSC/ECE 506 Spring 2011/ch11 DJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45158"/>
		<updated>2011-04-19T00:47:17Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Supplemental to Chapter 11: SCI Cache Coherence''' ==&lt;br /&gt;
&lt;br /&gt;
This is intended to be a supplement to Chapter 11 of [[#References | Solihin ]], which deals with Distributed Shared Memory (DSM) in multiprocessors.  The book discusses the basic approaches to DSM and introduces a directory-based cache coherence protocol, in contrast to the bus-based cache coherence protocols introduced earlier.  This supplement focuses on a specific directory-based cache coherence protocol called the Scalable Coherent Interface.&lt;br /&gt;
&lt;br /&gt;
== Introduction: What is the Scalable Coherent Interface? ==&lt;br /&gt;
&lt;br /&gt;
The Scalable Coherent Interface (SCI) is an ANSI/IEEE standard that provides fast point-to-point connections for high-performance multiprocessor systems [http://standards.ieee.org/findstds/standard/1596-1992.html].  It supports both shared memory and message passing and was approved in 1992. In the 1980s, as processors grew faster, they began to outpace the speed of the bus.  SCI was created to provide an interconnection protocol that does not depend on a bus.  It was intended to be scalable, meaning it works with systems containing anywhere from a few processors to many.&lt;br /&gt;
&lt;br /&gt;
== SCI Cache Coherence ==&lt;br /&gt;
SCI utilizes a directory-based cache coherence protocol similar to the one described in Chapter 11 of [[#References | Solihin (2008)]]. Instead of cache coherence being handled by caches snooping for transactions on a bus, a directory manages the coherence of each individual cache by creating a doubly-linked list of the caches that share a particular block of memory.  The directory maintains a state for the block in memory and a pointer to the first cache on the sharing list.  The first cache in the list is the cache of the processor that most recently accessed the block.  When the &amp;quot;head&amp;quot; of the list modifies the block, it must notify main memory that it now holds the only clean copy of the data.  The head cache must also notify the next cache in the sharing list so that that cache can invalidate its copy.  The invalidation then propagates down the list.&lt;br /&gt;
&lt;br /&gt;
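The directory and sharing-list structure described above can be sketched as follows. This is an illustrative model, not the SCI standard's actual data layout; the class and method names (CacheNode, DirectoryEntry, attach_head) are invented for the example.&lt;br /&gt;

```python
class CacheNode:
    # One cache's slot on the doubly-linked sharing list for a block.
    def __init__(self, cpu_id):
        self.cpu_id = cpu_id
        self.fwd = None   # pointer toward the tail (next, older sharer)
        self.back = None  # pointer toward the head (previous sharer)

class DirectoryEntry:
    # Directory state for one memory block: a state plus a head pointer.
    def __init__(self):
        self.state = "HOME"  # HOME / FRESH / GONE
        self.head = None     # cache that most recently accessed the block

    def attach_head(self, node):
        # The newest reader/writer is prepended and becomes the new head.
        node.fwd = self.head
        if self.head is not None:
            self.head.back = node
        self.head = node

directory = DirectoryEntry()
directory.attach_head(CacheNode(1))  # P1 reads the block first
directory.attach_head(CacheNode(2))  # P2 reads: P2 becomes head, P1 follows
```

The head pointer is all the directory stores; the rest of the sharer set is reachable only by walking the list.&lt;br /&gt;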
The structure for SCI is fairly complicated and can lead to a number of race conditions.&lt;br /&gt;
&lt;br /&gt;
== Coherence Race Conditions ==&lt;br /&gt;
There are many instances in the SCI cache coherence protocol where race conditions are handled in such a way that they become a non-issue.  For instance, in SCI only the head node in the linked list of sharers is able to write to the block.  All sharers, including the head, can read.  If a sharer (not the head) wants to write to the block, it first has to detach itself from the linked list.  Since this node is no longer sharing the block, we will call it N1.  N1 then has to ask the directory for the block again with intent to write.  N1 swaps the directory's pointer to the head of the linked list with a pointer to itself.  Now N1 is the new head, and it holds the pointer to the old head.  N1 then invalidates the old head's cache line and those of all the sharers linked to it.  This allows the SCI protocol to preserve write exclusivity [http://www.cs.utexas.edu/users/dburger/teaching/cs395t-s08/papers/9_sci.pdf].  Write exclusivity means only one write to a given block at a time.  Without write exclusivity, each node sharing the block would be able to write to it, allowing multiple writes to happen simultaneously.  If each cache wrote simultaneously, every cache sharing the block would be invalidated, including the head, and the block in memory might not possess the correct value.  Implementing this requires special states for the head, middle, and tail of the linked list, as well as states that determine the read/write capabilities of the nodes.&lt;br /&gt;
&lt;br /&gt;
Another scenario, described in Section 11.4.1 of the Solihin textbook, occurs when a cache is in state M and the directory is in state EM for a given block.  If the cache holding the block flushes the value, and the value does not reach the directory before a read request is made, there is a potential race condition.  This race is handled by the use of pending states.  When the cache flushes a block, the line transitions from state M to a pending state.  The pending state transitions into the invalid state only when the directory has acknowledged receiving the flush.  This allows the cache to stall any read request made to the block until the block enters the steady state of I [[#References | (Solihin)]].&lt;br /&gt;
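The write-exclusivity sequence above (detach, re-request, invalidate the old list) can be sketched as a toy function. The list-of-ids representation and the function name acquire_write are assumptions for illustration, not SCI message names.&lt;br /&gt;

```python
def acquire_write(sharers, writer):
    # sharers: cache ids on the sharing list, head first.
    # Step 1: a non-head sharer first detaches itself from the list.
    if writer in sharers:
        sharers.remove(writer)
    # Step 2: the writer re-requests the block from the directory with
    # intent to write and becomes the new head of a fresh list.
    old_list = sharers[:]
    new_list = [writer]
    # Step 3: the new head invalidates the old head and, through it,
    # every remaining sharer, so only the writer holds a valid copy.
    invalidated = old_list
    return new_list, invalidated

sharers, invalidated = acquire_write(["P0", "P1", "P2"], "P2")
# Only P2 may now write; P0 and P1 hold invalid copies.
```
&lt;br /&gt;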
&lt;br /&gt;
== SCI States ==&lt;br /&gt;
&lt;br /&gt;
=== Directory States ===&lt;br /&gt;
In SCI, the directory and each cache maintain a series of states. The directory can be in one of three states:&lt;br /&gt;
&lt;br /&gt;
# HOME or UNCACHED (U) – The only clean copy of the memory block is in main memory&lt;br /&gt;
# FRESH or SHARED (S) – The memory block is shared, but the copy in main memory is updated&lt;br /&gt;
# GONE or EXCLUSIVE/MODIFIED (EM) – The memory block in main memory is not updated, but a cache has the updated memory block&lt;br /&gt;
&lt;br /&gt;
If the directory has the block of memory in the GONE or EM state, then the directory must forward any incoming read/write requests for the block to the last cache that modified it.  That cache is responsible for updating main memory and supplying the data to the requesting cache. This is why the directory must maintain a pointer to the first cache in the sharing list.&lt;br /&gt;
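The forwarding rule can be shown in a few lines: when the directory state for a block is GONE, memory's copy is stale, so the request is redirected to the head cache. The function and return-tuple shapes here are illustrative assumptions.&lt;br /&gt;

```python
def serve_request(dir_state, head_cache, requester):
    # Decide who answers a read/write request for one block.
    if dir_state == "GONE":
        # Memory is stale: the owning (head) cache supplies the data
        # and is responsible for updating main memory.
        return ("forward-to", head_cache)
    # HOME or FRESH: main memory holds an up-to-date copy.
    return ("reply-from-memory", requester)
```
&lt;br /&gt;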
&lt;br /&gt;
=== Cache States : Complicated ===&lt;br /&gt;
For a cache, the SCI standard describes 29 stable states and many pending states.  Each state name consists of two parts.  The first part tells where the cache is in the linked list. There are four possibilities [[#References | (Culler (1999))]]:&lt;br /&gt;
&lt;br /&gt;
# ONLY – if the cache is the only cache to have cached the memory block&lt;br /&gt;
# HEAD – if the cache is the most recent reader/writer of the block and so is the first in the list&lt;br /&gt;
# TAIL – if the cache is the least recent reader/writer of the block and so is the last in the list&lt;br /&gt;
# MID – if the cache is not the first or last in the list&lt;br /&gt;
&lt;br /&gt;
In addition to these four location designations, the second part of each state describes the actual state of the data in the cache.  Some of these include:&lt;br /&gt;
&lt;br /&gt;
# CLEAN – if the data in the memory block has not been modified, is the same as main memory, but can be modified.&lt;br /&gt;
# DIRTY – if the data in the memory block has been modified, is different from main memory, and can be written to again if needed.&lt;br /&gt;
# FRESH – if the data in the memory block has not been modified, is the same as main memory, but cannot be modified without notifying main memory.&lt;br /&gt;
# COPY – if the data in the memory block has not been modified, is the same as main memory, and can only be read, not modified.&lt;br /&gt;
&lt;br /&gt;
Each location designation can be paired with a data-state designation, creating combined states such as:&lt;br /&gt;
&lt;br /&gt;
* ONLY_DIRTY - if the cache is the only cache to hold the memory block, and it has modified the block so that it differs from main memory&lt;br /&gt;
* MID_FRESH - if the cache is neither the first nor last cache in the list, and the data it has is the same as what is in main memory.&lt;br /&gt;
&lt;br /&gt;
As implied in the [[#Coherence Race Conditions | Race Conditions ]] section, some states are impossible, like: MID_DIRTY or TAIL_CLEAN.&lt;br /&gt;
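A toy enumeration makes the naming scheme concrete. The exclusion set below is an assumption for illustration: the text names MID_DIRTY and TAIL_CLEAN as impossible, and the other two excluded names follow the same reasoning (only the head or an ONLY cache can hold writable data); the real standard's set of 29 valid states is larger and more subtle.&lt;br /&gt;

```python
# Build combined state names from the two designation lists above.
locations = ["ONLY", "HEAD", "MID", "TAIL"]
data_states = ["CLEAN", "DIRTY", "FRESH", "COPY"]

# Illustrative exclusions: non-head list members cannot hold
# modified (DIRTY) or writable (CLEAN) copies of the block.
impossible = {"MID_DIRTY", "MID_CLEAN", "TAIL_DIRTY", "TAIL_CLEAN"}

combined = []
for loc in locations:
    for data in data_states:
        name = loc + "_" + data
        if name not in impossible:
            combined.append(name)
```
&lt;br /&gt;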
&lt;br /&gt;
=== Cache States : Simplified ===&lt;br /&gt;
A simpler way to envision the SCI coherence protocol states is to limit the system to just two processors and expand the [http://en.wikipedia.org/wiki/MESI_protocol MESI protocol] states.  This results in a more compact state diagram that still gives a general sense of how the protocol causes transitions.  It also illustrates how the directory states influence the cache states. In this case, we keep the three directory states (U, S, EM), and the caches have the following states:&lt;br /&gt;
&lt;br /&gt;
# Invalid (I) – if the block in the cache's memory is invalid.&lt;br /&gt;
# Modified (M) – if the block in the cache's memory has been modified and no longer matches the block in main memory.&lt;br /&gt;
# Exclusive (E) – if the cache has the only copy of the memory block and it matches the block in main memory.&lt;br /&gt;
# Shared-Head (Sh) – if the block in the cache's memory is shared with the other processor's cache, but this processor was the last to read the memory.&lt;br /&gt;
# Shared-Tail (St) – if the block in the cache's memory is shared with the other processor's cache and the other processor was the last to read the memory.&lt;br /&gt;
&lt;br /&gt;
To further simplify this scenario, only the following operations are possible:&lt;br /&gt;
&lt;br /&gt;
# Read(O/S) – a read of the data by either the processor attached to the cache (S for Self) or the other processor (O).&lt;br /&gt;
# Write(O/S) – a write of the memory block by either the processor attached to the cache (S) or the other processor (O).&lt;br /&gt;
 &lt;br /&gt;
As mentioned in the [[#Coherence Race Conditions | Race Conditions ]] section, in order for a processor to perform a write, it has to be at the head of the list (the Sh state in this scenario).&lt;br /&gt;
&lt;br /&gt;
The following state diagram illustrates the transitions:&lt;br /&gt;
&lt;br /&gt;
[[Image:SCIStatesC.png]]&lt;br /&gt;
&lt;br /&gt;
Figure 1: Simplified cache state diagram from SCI&lt;br /&gt;
&lt;br /&gt;
Initially, both caches are in the &amp;quot;I&amp;quot; state and the directory is in the &amp;quot;U&amp;quot; state.  If processor 1 (P1) makes a read, its cache moves from state I to E and the directory moves from U to EM.  The directory records that P1 is the head of the sharing list.  At this point, processor 2 (P2) is in state I.  If P2 makes a read while the directory is in state EM, the directory notifies P1 that P2 is making a read; P1 transitions from E to St and sends the data to P2.  P2 transitions from I to Sh and creates a pointer to P1 to note that P1 is the next cache on the sharing list. The directory transitions from EM to S and updates its pointer from P1 to P2 to note that P2 is now the head of the sharing list.&lt;br /&gt;
&lt;br /&gt;
As you can see from the diagram, a Write(S) can only occur when a processor is in the &amp;quot;Sh&amp;quot; state.  In the actual SCI protocol, a write can occur when the cache is in the HEAD state combined with CLEAN/FRESH/DIRTY or when the cache is in the ONLY state.  In this simplified scenario, these states are combined into one, the Sh state.&lt;br /&gt;
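The two-read walk-through above can be sketched as a small transition function. The state names (I, E, Sh, St; U, EM, S) follow the text, but the function itself is a hand-built illustration covering only the reads described, not the full diagram.&lt;br /&gt;

```python
def read(requester_state, other_state, dir_state):
    # Read(S) by the requesting processor; returns the new
    # (requester, other, directory) states.
    if dir_state == "U":
        # First read of an uncached block: requester I to E, directory U to EM.
        return "E", other_state, "EM"
    if dir_state == "EM" and other_state == "E":
        # Second reader joins the list: requester I to Sh (new head),
        # the prior reader E to St, directory EM to S.
        return "Sh", "St", "S"
    return requester_state, other_state, dir_state  # remaining cases omitted

p1, p2, d = read("I", "I", "U")   # P1 reads first
p2, p1, d = read("I", p1, d)      # then P2 reads and becomes the head
```
&lt;br /&gt;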
&lt;br /&gt;
== Summary ==&lt;br /&gt;
The SCI protocol is a directory-based protocol for Distributed Shared Memory similar to the one described in [[#References | Solihin ]].  It is an extensive and complicated strategy that maintains coherence as the system scales to many processors.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* Culler, David E. and Jaswinder Pal Singh, ''Parallel Computer Architecture,'' Morgan Kaufmann Publishers, Inc, 1999.&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45152</id>
		<title>CSC/ECE 506 Spring 2011/ch11 DJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45152"/>
		<updated>2011-04-19T00:32:08Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Supplemental to Chapter 11: SCI Cache Coherence''' ==&lt;br /&gt;
&lt;br /&gt;
This is intended to be a supplement to Chapter 11 of [[#References | Solihin ]], which deals with Distributed Shared Memory (DSM) in multiprocessors.  The book discusses the basic approaches to DSM and introduces a directory-based cache coherence protocol, in contrast to the bus-based cache coherence protocols introduced earlier.  This supplement focuses on a specific directory-based cache coherence protocol called the Scalable Coherent Interface.&lt;br /&gt;
&lt;br /&gt;
== Introduction: What is the Scalable Coherent Interface? ==&lt;br /&gt;
&lt;br /&gt;
The Scalable Coherent Interface (SCI) is an ANSI/IEEE standard that provides fast point-to-point connections for high-performance multiprocessor systems [http://standards.ieee.org/findstds/standard/1596-1992.html].  It supports both shared memory and message passing and was approved in 1992. In the 1980s, as processors grew faster, they began to outpace the speed of the bus.  SCI was created to provide an interconnection protocol that does not depend on a bus.  It was intended to be scalable, meaning it works with systems containing anywhere from a few processors to many.&lt;br /&gt;
&lt;br /&gt;
== SCI Cache Coherence ==&lt;br /&gt;
SCI utilizes a directory-based cache coherence protocol similar to the one described in Chapter 11 of [[#References | Solihin (2008)]]. Instead of cache coherence being handled by caches snooping for transactions on a bus, a directory manages the coherence of each individual cache by creating a doubly-linked list of the caches that share a particular block of memory.  The directory maintains a state for the block in memory and a pointer to the first cache on the sharing list.  The first cache in the list is the cache of the processor that most recently accessed the block.  When the &amp;quot;head&amp;quot; of the list modifies the block, it must notify main memory that it now holds the only clean copy of the data.  The head cache must also notify the next cache in the sharing list so that that cache can invalidate its copy.  The invalidation then propagates down the list.&lt;br /&gt;
&lt;br /&gt;
The structure for SCI is fairly complicated and can lead to a number of race conditions.&lt;br /&gt;
&lt;br /&gt;
== Coherence Race Conditions ==&lt;br /&gt;
There are many instances in the SCI cache coherence protocol where race conditions are handled in such a way that they become a non-issue.  For instance, in SCI only the head node in the linked list of sharers is able to write to the block.  All sharers, including the head, can read.  If a sharer (not the head) wants to write to the block, it first has to detach itself from the linked list.  Since this node is no longer sharing the block, we will call it N1.  N1 then has to ask the directory for the block again with intent to write.  N1 swaps the directory's pointer to the head of the linked list with a pointer to itself.  Now N1 is the new head, and it holds the pointer to the old head.  N1 then invalidates the old head's cache line and those of all the sharers linked to it.  This allows the SCI protocol to preserve write exclusivity [http://www.cs.utexas.edu/users/dburger/teaching/cs395t-s08/papers/9_sci.pdf].  Write exclusivity means only one write to a given block at a time.  Without write exclusivity, each node sharing the block would be able to write to it, allowing multiple writes to happen simultaneously.  If each cache wrote simultaneously, every cache sharing the block would be invalidated, including the head, and the block in memory might not possess the correct value.  Implementing this requires special states for the head, middle, and tail of the linked list, as well as states that determine the read/write capabilities of the nodes.&lt;br /&gt;
&lt;br /&gt;
Another scenario, described in Section 11.4.1 of the Solihin textbook, occurs when a cache is in state M and the directory is in state EM for a given block.  If the cache holding the block flushes the value, and the value does not reach the directory before a read request is made, there is a potential race condition.  This race is handled by the use of pending states.  When the cache flushes a block, the line transitions from state M to a pending state.  The pending state transitions into the invalid state only when the directory has acknowledged receiving the flush.  This allows the cache to stall any read request made to the block until the block enters the steady state of I [[#References | (Solihin)]].&lt;br /&gt;
&lt;br /&gt;
== SCI States ==&lt;br /&gt;
&lt;br /&gt;
=== Directory States ===&lt;br /&gt;
In SCI, the directory and each cache maintain a series of states. The directory can be in one of three states:&lt;br /&gt;
&lt;br /&gt;
# HOME or UNCACHED (U) – The only clean copy of the memory block is in main memory&lt;br /&gt;
# FRESH or SHARED (S) – The memory block is shared, but the copy in main memory is updated&lt;br /&gt;
# GONE or EXCLUSIVE/MODIFIED (EM) – The memory block in main memory is not updated, but a cache has the updated memory block&lt;br /&gt;
&lt;br /&gt;
If the directory has the block of memory in the GONE or EM state, then the directory must forward any incoming read/write requests for the block to the last cache that modified it.  That cache is responsible for updating main memory and supplying the data to the requesting cache. This is why the directory must maintain a pointer to the first cache in the sharing list.&lt;br /&gt;
&lt;br /&gt;
=== Cache States : Complicated ===&lt;br /&gt;
For a cache, the SCI standard describes 29 stable states and many pending states.  Each state name consists of two parts.  The first part tells where the cache is in the linked list. There are four possibilities [[#References | (Culler (1999))]]:&lt;br /&gt;
&lt;br /&gt;
# ONLY – if the cache is the only cache to have cached the memory block&lt;br /&gt;
# HEAD – if the cache is the most recent reader/writer of the block and so is the first in the list&lt;br /&gt;
# TAIL – if the cache is the least recent reader/writer of the block and so is the last in the list&lt;br /&gt;
# MID – if the cache is not the first or last in the list&lt;br /&gt;
&lt;br /&gt;
In addition to these four location designations for the states, there are other states to describe the actual state of the memory on the cache.  Some of these include:&lt;br /&gt;
&lt;br /&gt;
# CLEAN – if the data in the memory block has not been modified, is the same as main memory, but can be modified.&lt;br /&gt;
# DIRTY – if the data in the memory block has been modified, is different from main memory, and can be written to again if needed.&lt;br /&gt;
# FRESH – if the data in the memory block has not been modified, is the same as main memory, but cannot be modified without notifying main memory.&lt;br /&gt;
# COPY – if the data in the memory block has not been modified, is the same as main memory, and can only be read, not modified.&lt;br /&gt;
&lt;br /&gt;
Each location designation can be paired with a data-state designation, which creates combined states like:&lt;br /&gt;
&lt;br /&gt;
* ONLY_DIRTY - if the cache is the only cache to hold the memory block, and the cache has modified the memory block so it is different from main memory&lt;br /&gt;
* MID_FRESH - if the cache is neither the first nor last cache in the list, and the data it has is the same as what is in main memory.&lt;br /&gt;
&lt;br /&gt;
As implied in the [[#Coherence Race Conditions | Race Conditions ]] section, some combinations are impossible, such as MID_DIRTY or TAIL_CLEAN.&lt;br /&gt;
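The two-part naming above can be sketched by enumerating the combinations and filtering out the impossible ones.  This toy example encodes only the two impossible pairs named in the text; the real standard rules out more.

```python
# Enumerate combined cache-state names from the two naming parts.
positions = ["ONLY", "HEAD", "MID", "TAIL"]
data_states = ["CLEAN", "DIRTY", "FRESH", "COPY"]

# Encode only the two impossible combinations named in the text.
impossible = {("MID", "DIRTY"), ("TAIL", "CLEAN")}

combined = [
    pos + "_" + ds
    for pos in positions
    for ds in data_states
    if (pos, ds) not in impossible
]
```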
&lt;br /&gt;
=== Cache States : Simplified ===&lt;br /&gt;
A simpler way to envision the SCI coherence protocol states is to limit the system to just two processors and expand the MESI protocol [LINK] states.  In this case, we envision three states in the directory (U, S, EM), and the caches have the following states:&lt;br /&gt;
&lt;br /&gt;
# Invalid (I) – if the block in the cache's memory is invalid.&lt;br /&gt;
# Modified (M) – if the block in the cache's memory has been modified and no longer matches the block in main memory.&lt;br /&gt;
# Exclusive (E) – if the cache has the only copy of the memory block and it matches the block in main memory.&lt;br /&gt;
# Shared-Head (Sh) – if the block in the cache's memory is shared with the other processor's cache, but this processor was the last to read the memory.&lt;br /&gt;
# Shared-Tail (St) – if the block in the cache's memory is shared with the other processor's cache and the other processor was the last to read the memory.&lt;br /&gt;
&lt;br /&gt;
In this scenario, the following operations are possible:&lt;br /&gt;
&lt;br /&gt;
# Read(O/S) – a read of the data by either the processor attached to the cache (S for Self) or the other processor (O).&lt;br /&gt;
# Write(O/S) – a write of the memory block by either the processor attached to the cache (S) or the other processor (O).&lt;br /&gt;
 &lt;br /&gt;
In this case, the following state diagram illustrates the transitions:&lt;br /&gt;
&lt;br /&gt;
[[Image:SCIStatesC.png]]&lt;br /&gt;
&lt;br /&gt;
Figure 1: Simplified cache state diagram from SCI&lt;br /&gt;
&lt;br /&gt;
Initially, the cache is in the &amp;quot;I&amp;quot; state and the directory is in the &amp;quot;U&amp;quot; state.  If processor 1 (P1) makes a read, the processor's cache moves from state I to E and the directory moves from U to EM.  The directory records that P1 is the head of the directory list.  At this point, processor 2 (P2) is in state I.  If P2 makes a read, the directory is in state EM, so the directory notifies P1 that P2 is making a read; P1 transitions from E to St and sends the data to P2.  P2 transitions from I to Sh and creates a pointer to P1 to note that P1 is the next member of the directory list. The directory transitions from EM to S and updates its pointer from P1 to P2 to note that P2 is now the head of the directory cache list.&lt;br /&gt;
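The read sequence just described can be traced with a small Python sketch of the simplified two-processor protocol.  State names follow the text, while the message exchange is condensed into a single function, which is an assumption for illustration only.

```python
# Simplified two-processor SCI read transitions (illustrative sketch).

def read(caches, directory, requester, other):
    """Apply a Read by `requester`; `other` is the remaining processor."""
    if directory["state"] == "U":
        # First reader: the requester gets an exclusive clean copy.
        caches[requester] = "E"
        directory["state"] = "EM"
        directory["head"] = requester
    elif caches[requester] == "I":
        # Second reader: the old head becomes the tail (St),
        # the requester becomes the new head (Sh).
        caches[other] = "St"
        caches[requester] = "Sh"
        directory["state"] = "S"
        directory["head"] = requester
    return caches, directory

caches = {"P1": "I", "P2": "I"}
directory = {"state": "U", "head": None}
read(caches, directory, "P1", "P2")   # P1: I to E, directory: U to EM
read(caches, directory, "P2", "P1")   # P1: E to St, P2: I to Sh, directory: EM to S
```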
&lt;br /&gt;
== References ==&lt;br /&gt;
* Solihin, Yan, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* Culler, David E., and Jaswinder Pal Singh, ''Parallel Computer Architecture,'' Morgan Kaufmann Publishers, Inc., 1999.&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45150</id>
		<title>CSC/ECE 506 Spring 2011/ch11 DJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45150"/>
		<updated>2011-04-19T00:27:22Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Supplemental to Chapter 11: SCI Cache Coherence''' ==&lt;br /&gt;
&lt;br /&gt;
This is intended to be a supplement to Chapter 11 of [[#References | Solihin ]], which deals with Distributed Shared Memory (DSM) in multiprocessors.  In the book, the basic approaches to DSM are discussed, and a directory based cache coherence protocol is introduced.  This protocol stands in contrast to the bus-based cache coherence protocols introduced earlier.  This supplement focuses on a specific directory based cache coherence protocol called the Scalable Coherent Interface.&lt;br /&gt;
&lt;br /&gt;
== Introduction: What is the Scalable Coherent Interface? ==&lt;br /&gt;
&lt;br /&gt;
The Scalable Coherent Interface (SCI) is an ANSI/IEEE standard protocol that provides fast point-to-point connections for high-performance multiprocessor systems [http://standards.ieee.org/findstds/standard/1596-1992.html].  It works with both shared memory and message passing and was approved in 1992. In the 1980s, as processors grew faster and faster, they began to outpace the speed of the bus.  SCI was created to provide a non-bus-based interconnection protocol.  It was intended to be scalable, meaning it works with few or many processors.&lt;br /&gt;
&lt;br /&gt;
== SCI Cache Coherence ==&lt;br /&gt;
SCI utilizes a directory based cache coherence protocol similar to the one described in Chapter 11 of [[#References | Solihin (2008)]]. Instead of cache coherence being handled by caches snooping for transactions on a bus, a directory manages the coherence of each individual cache by creating a doubly-linked list of the caches that are sharing a particular block of memory.   The directory maintains a state for the block in memory, and a pointer to the first cache on the shared list.  The first cache in the list is the cache of the processor that most recently accessed the block of memory.  When the &amp;quot;head&amp;quot; of the list modifies the block of memory, it has to notify main memory that it now has the only clean copy of the data.  This head cache must also notify the next cache in the shared list so that that cache can invalidate its copy.  The invalidation then propagates from there.   &lt;br /&gt;
&lt;br /&gt;
The structure for SCI is fairly complicated and can lead to a number of race conditions.&lt;br /&gt;
&lt;br /&gt;
== Coherence Race Conditions ==&lt;br /&gt;
There are many instances in the SCI cache coherence protocol where race conditions are handled in such a way that they become a non-issue.  For instance, in SCI only the head node in the linked list of sharers is able to write to the block.  All sharers, including the head, can read.  If a sharer (not the head) wants to write to the block, the sharer first has to detach itself from the linked list.  The sharer is no longer sharing the block, so from now on we will call it N1.  Then N1 has to ask the directory for the block again with intent to write.  N1 then swaps out the pointer pointing to the head of the linked list with a pointer to itself.  Now N1 is the new head, and it has the pointer to the old head.  N1 then invalidates the old head’s cache line and all of the sharers linked to it.  This allows the SCI protocol to maintain write exclusivity [http://www.cs.utexas.edu/users/dburger/teaching/cs395t-s08/papers/9_sci.pdf].  Write exclusivity means only one write to a given block at a time.  Without write exclusivity, each node sharing the block would be able to write to the block, allowing multiple writes to happen simultaneously.  If each cache wrote simultaneously, every cache sharing the block would get invalidated, including the head, and the block in memory might not possess the correct value.  To implement this, there need to be special states for the head, middle, and tail of the linked list, as well as states that determine the read/write capabilities of the nodes.  Another scenario, described in Section 11.4.1 of the Solihin textbook, occurs when a cache is in state M and the directory is in state EM for a given block.  If the cache holding the block flushes the value and the value does not make it to the directory before a read request is made, there is a potential race condition.  This race condition is handled by the use of pending states.  When the cache flushes a block, the line transitions from state M to a pending state.  
The pending state transitions into the invalid state only when the directory has acknowledged receiving the flush.  This allows the cache to stall any read request made to the block until the block enters the steady state of I [[#References | (Solihin)]].&lt;br /&gt;
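The write path described above (a non-head sharer detaching, becoming the new head, and invalidating the old list) can be sketched as follows.  The data structures and names are illustrative simplifications, not the standard's actual encoding.

```python
# A non-head sharer N1 acquires write permission (illustrative sketch).

def acquire_for_write(directory, sharing_list, n1):
    """Give cache `n1` exclusive write access; return the invalidated caches."""
    if n1 in sharing_list:
        sharing_list.remove(n1)       # step 1: N1 detaches from the list
    invalidated = list(sharing_list)  # the old head and everything after it
    directory["head"] = n1            # steps 2-3: directory points at N1,
                                      # which keeps a pointer to the old head
    directory["state"] = "EM"         # only N1 will hold a valid (dirty) copy
    sharing_list.clear()              # step 4: invalidate old head and sharers
    sharing_list.append(n1)
    return invalidated
```

Because only the current head ever reaches the point of writing, two caches can never hold write permission for the same block simultaneously, which is exactly the write-exclusivity property described above.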
&lt;br /&gt;
== SCI States ==&lt;br /&gt;
&lt;br /&gt;
=== Directory States ===&lt;br /&gt;
In SCI, the directory and each cache maintain a series of states. The directory can be in one of three states:&lt;br /&gt;
&lt;br /&gt;
# HOME or UNCACHED (U) – The only clean copy of the memory block is in main memory&lt;br /&gt;
# FRESH or SHARED (S) – The memory block is shared, but the copy in main memory is up to date&lt;br /&gt;
# GONE or EXCLUSIVE/MODIFIED (EM) – The copy of the memory block in main memory is stale, but a cache holds the up-to-date memory block&lt;br /&gt;
&lt;br /&gt;
If the directory has the block of memory in the GONE or EM state, then the directory must forward any incoming read/write requests for the block to the last cache that modified the block.  That cache is responsible for updating main memory and for supplying the data to the requesting cache. This is why the directory must maintain a pointer to the first cache in the cache list. &lt;br /&gt;
&lt;br /&gt;
=== Cache States : Complicated ===&lt;br /&gt;
For a cache, there are 29 stable states described in the SCI standard, along with many pending states.  Each state consists of two parts.  The first part tells where the cache is in the linked list. There are four possibilities [[#References | (Culler (1999))]]:&lt;br /&gt;
&lt;br /&gt;
# ONLY – if this is the only cache to have cached the memory block&lt;br /&gt;
# HEAD – if the cache is the most recent reader/writer of the block and so is the first in the list&lt;br /&gt;
# TAIL – if the cache is the least recent reader/writer of the block and so is the last in the list&lt;br /&gt;
# MID – if the cache is not the first or last in the list&lt;br /&gt;
&lt;br /&gt;
In addition to these four location designations, each state has a second part that describes the actual state of the data in the cache.  Some of these include:&lt;br /&gt;
&lt;br /&gt;
# CLEAN – if the data in the memory block has not been modified, is the same as main memory, but can be modified.&lt;br /&gt;
# DIRTY – if the data in the memory block has been modified, is different from main memory, and can be written to again if needed.&lt;br /&gt;
# FRESH – if the data in the memory block has not been modified, is the same as main memory, but cannot be modified without notifying main memory.&lt;br /&gt;
# COPY – if the data in the memory block has not been modified, is the same as main memory, and can only be read, not modified.&lt;br /&gt;
&lt;br /&gt;
=== Cache States : Simplified ===&lt;br /&gt;
A simpler way to envision the SCI coherence protocol states is to limit the system to just two processors and expand the MESI protocol [LINK] states.  In this case, we envision three states in the directory (U, S, EM), and the caches have the following states:&lt;br /&gt;
&lt;br /&gt;
# Invalid (I) – if the block in the cache's memory is invalid.&lt;br /&gt;
# Modified (M) – if the block in the cache's memory has been modified and no longer matches the block in main memory.&lt;br /&gt;
# Exclusive (E) – if the cache has the only copy of the memory block and it matches the block in main memory.&lt;br /&gt;
# Shared-Head (Sh) – if the block in the cache's memory is shared with the other processor's cache, but this processor was the last to read the memory.&lt;br /&gt;
# Shared-Tail (St) – if the block in the cache's memory is shared with the other processor's cache and the other processor was the last to read the memory.&lt;br /&gt;
&lt;br /&gt;
In this scenario, the following operations are possible:&lt;br /&gt;
&lt;br /&gt;
# Read(O/S) – a read of the data by either the processor attached to the cache (S for Self) or the other processor (O).&lt;br /&gt;
# Write(O/S) – a write of the memory block by either the processor attached to the cache (S) or the other processor (O).&lt;br /&gt;
 &lt;br /&gt;
In this case, the following state diagram illustrates the transitions:&lt;br /&gt;
&lt;br /&gt;
[[Image:SCIStatesC.png]]&lt;br /&gt;
&lt;br /&gt;
Figure 1: Simplified cache state diagram from SCI&lt;br /&gt;
&lt;br /&gt;
Initially, the cache is in the &amp;quot;I&amp;quot; state and the directory is in the &amp;quot;U&amp;quot; state.  If processor 1 (P1) makes a read, the processor's cache moves from state I to E and the directory moves from U to EM.  The directory records that P1 is the head of the directory list.  At this point, processor 2 (P2) is in state I.  If P2 makes a read, the directory is in state EM, so the directory notifies P1 that P2 is making a read; P1 transitions from E to St and sends the data to P2.  P2 transitions from I to Sh and creates a pointer to P1 to note that P1 is the next member of the directory list. The directory transitions from EM to S and updates its pointer from P1 to P2 to note that P2 is now the head of the directory cache list.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Solihin, Yan, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* Culler, David E., and Jaswinder Pal Singh, ''Parallel Computer Architecture,'' Morgan Kaufmann Publishers, Inc., 1999.&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45137</id>
		<title>CSC/ECE 506 Spring 2011/ch11 DJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45137"/>
		<updated>2011-04-19T00:18:46Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Supplemental to Chapter 11: SCI Cache Coherence''' ==&lt;br /&gt;
&lt;br /&gt;
This is intended to be a supplement to Chapter 11 of [[#References | Solihin ]], which deals with Distributed Shared Memory (DSM) in multiprocessors.  In the book, the basic approaches to DSM are discussed, and a directory based cache coherence protocol is introduced.  This protocol stands in contrast to the bus-based cache coherence protocols introduced earlier.  This supplement focuses on a specific directory based cache coherence protocol called the Scalable Coherent Interface.&lt;br /&gt;
&lt;br /&gt;
== Introduction: What is the Scalable Coherent Interface? ==&lt;br /&gt;
&lt;br /&gt;
The Scalable Coherent Interface (SCI) is an ANSI/IEEE standard protocol that provides fast point-to-point connections for high-performance multiprocessor systems [http://standards.ieee.org/findstds/standard/1596-1992.html].  It works with both shared memory and message passing and was approved in 1992. In the 1980s, as processors grew faster and faster, they began to outpace the speed of the bus.  SCI was created to provide a non-bus-based interconnection protocol.  It was intended to be scalable, meaning it works with few or many processors.&lt;br /&gt;
&lt;br /&gt;
== SCI Cache Coherence ==&lt;br /&gt;
SCI utilizes a directory based cache coherence protocol similar to the one described in Chapter 11 of [[#References | Solihin (2008)]]. Instead of cache coherence being handled by caches snooping for transactions on a bus, a directory manages the coherence of each individual cache by creating a doubly-linked list of the caches that are sharing a particular block of memory.   The first cache in the list is the cache of the processor that most recently accessed the block of memory.  When the &amp;quot;head&amp;quot; of the list modifies the block of memory, it has to notify main memory and the next cache in the list.  The information then propagates from there.   &lt;br /&gt;
&lt;br /&gt;
== Coherence Race Conditions ==&lt;br /&gt;
There are many instances in the SCI cache coherence protocol where race conditions are handled in such a way that they become a non-issue.  For instance, in SCI only the head node in the linked list of sharers is able to write to the block.  All sharers, including the head, can read.  If a sharer (not the head) wants to write to the block, the sharer first has to detach itself from the linked list.  The sharer is no longer sharing the block, so from now on we will call it N1.  Then N1 has to ask the directory for the block again with intent to write.  N1 then swaps out the pointer pointing to the head of the linked list with a pointer to itself.  Now N1 is the new head, and it has the pointer to the old head.  N1 then invalidates the old head’s cache line and all of the sharers linked to it.  This allows the SCI protocol to maintain write exclusivity [http://www.cs.utexas.edu/users/dburger/teaching/cs395t-s08/papers/9_sci.pdf].  Write exclusivity means only one write to a given block at a time.  Without write exclusivity, each node sharing the block would be able to write to the block, allowing multiple writes to happen simultaneously.  If each cache wrote simultaneously, every cache sharing the block would get invalidated, including the head, and the block in memory might not possess the correct value.  To implement this, there need to be special states for the head, middle, and tail of the linked list, as well as states that determine the read/write capabilities of the nodes.  Another scenario, described in Section 11.4.1 of the Solihin textbook, occurs when a cache is in state M and the directory is in state EM for a given block.  If the cache holding the block flushes the value and the value does not make it to the directory before a read request is made, there is a potential race condition.  This race condition is handled by the use of pending states.  When the cache flushes a block, the line transitions from state M to a pending state.  
The pending state transitions into the invalid state only when the directory has acknowledged receiving the flush.  This allows the cache to stall any read request made to the block until the block enters the steady state of I [[#References | (Solihin)]].&lt;br /&gt;
&lt;br /&gt;
== SCI States ==&lt;br /&gt;
&lt;br /&gt;
=== Directory States ===&lt;br /&gt;
In SCI, the directory and each cache maintain a series of states. The directory can be in one of three states:&lt;br /&gt;
&lt;br /&gt;
# HOME or UNCACHED (U) – The only clean copy of the memory block is in main memory&lt;br /&gt;
# FRESH or SHARED (S) – The memory block is shared, but the copy in main memory is up to date&lt;br /&gt;
# GONE or EXCLUSIVE/MODIFIED (EM) – The copy of the memory block in main memory is stale, but a cache holds the up-to-date memory block&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Cache States ===&lt;br /&gt;
For a cache, there are 29 stable states described in the SCI standard, along with many pending states.  Each state consists of two parts.  The first part tells where the cache is in the linked list. There are four possibilities [[#References | (Culler (1999))]]:&lt;br /&gt;
&lt;br /&gt;
# ONLY – if this is the only cache to have cached the memory block&lt;br /&gt;
# HEAD – if the cache is the most recent reader/writer of the block and so is the first in the list&lt;br /&gt;
# TAIL – if the cache is the least recent reader/writer of the block and so is the last in the list&lt;br /&gt;
# MID – if the cache is not the first or last in the list&lt;br /&gt;
&lt;br /&gt;
In addition to these four location designations, each state has a second part that describes the actual state of the data in the cache.  Some of these include:&lt;br /&gt;
&lt;br /&gt;
# CLEAN – if the data in the memory block has not been modified, is the same as main memory, but can be modified.&lt;br /&gt;
# DIRTY – if the data in the memory block has been modified, is different from main memory, and can be written to again if needed.&lt;br /&gt;
# FRESH – if the data in the memory block has not been modified, is the same as main memory, but cannot be modified without notifying main memory.&lt;br /&gt;
# COPY – if the data in the memory block has not been modified, is the same as main memory, and can only be read, not modified.&lt;br /&gt;
&lt;br /&gt;
A simpler way to envision the SCI coherence protocol states is to limit the system to just two processors and expand the MESI protocol [LINK] states.  In this case, we envision three states in the directory (U, S, EM), and the caches have the following states:&lt;br /&gt;
&lt;br /&gt;
# Invalid (I) – if the block in the cache's memory is invalid.&lt;br /&gt;
# Modified (M) – if the block in the cache's memory has been modified and no longer matches the block in main memory.&lt;br /&gt;
# Exclusive (E) – if the cache has the only copy of the memory block and it matches the block in main memory.&lt;br /&gt;
# Shared-Head (Sh) – if the block in the cache's memory is shared with the other processor's cache, but this processor was the last to read the memory.&lt;br /&gt;
# Shared-Tail (St) – if the block in the cache's memory is shared with the other processor's cache and the other processor was the last to read the memory.&lt;br /&gt;
&lt;br /&gt;
In this scenario, the following operations are possible:&lt;br /&gt;
&lt;br /&gt;
# Read(O/S) – a read of the data by either the processor attached to the cache (S for Self) or the other processor (O).&lt;br /&gt;
# Write(O/S) – a write of the memory block by either the processor attached to the cache (S) or the other processor (O).&lt;br /&gt;
 &lt;br /&gt;
In this case, the following state diagram illustrates the transitions:&lt;br /&gt;
&lt;br /&gt;
[[Image:SCIStatesC.png]]&lt;br /&gt;
&lt;br /&gt;
Figure 1: Simplified cache state diagram from SCI&lt;br /&gt;
&lt;br /&gt;
Initially, the cache is in the &amp;quot;I&amp;quot; state and the directory is in the &amp;quot;U&amp;quot; state.  If processor 1 (P1) makes a read, the processor's cache moves from state I to E and the directory moves from U to EM.  The directory records that P1 is the head of the directory list.  At this point, processor 2 (P2) is in state I.  If P2 makes a read, the directory is in state EM, so the directory notifies P1 that P2 is making a read; P1 transitions from E to St and sends the data to P2.  P2 transitions from I to Sh and creates a pointer to P1 to note that P1 is the next member of the directory list. The directory transitions from EM to S and updates its pointer from P1 to P2 to note that P2 is now the head of the directory cache list.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Solihin, Yan, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* Culler, David E., and Jaswinder Pal Singh, ''Parallel Computer Architecture,'' Morgan Kaufmann Publishers, Inc., 1999.&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45131</id>
		<title>CSC/ECE 506 Spring 2011/ch11 DJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45131"/>
		<updated>2011-04-19T00:08:49Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Supplemental to Chapter 11: SCI Cache Coherence'''&lt;br /&gt;
&lt;br /&gt;
== Introduction: What is the Scalable Coherent Interface? ==&lt;br /&gt;
&lt;br /&gt;
The Scalable Coherent Interface (SCI) is an ANSI/IEEE standard protocol that provides fast point-to-point connections for high-performance multiprocessor systems [http://standards.ieee.org/findstds/standard/1596-1992.html].  It works with both shared memory and message passing and was approved in 1992. In the 1980s, as processors grew faster and faster, they began to outpace the speed of the bus.  SCI was created to provide a non-bus-based interconnection protocol.  It was intended to be scalable, meaning it works with few or many processors.&lt;br /&gt;
&lt;br /&gt;
== SCI Cache Coherence ==&lt;br /&gt;
SCI utilizes a directory based cache coherence protocol similar to the one described in Chapter 11 of [[#References | Solihin (2008)]]. Instead of cache coherence being handled by caches snooping for transactions on a bus, a directory manages the coherence of each individual cache by creating a doubly-linked list of the caches that are sharing a particular block of memory.   The first cache in the list is the cache of the processor that most recently accessed the block of memory.  When the &amp;quot;head&amp;quot; of the list modifies the block of memory, it has to notify main memory and the next cache in the list.  The information then propagates from there.   &lt;br /&gt;
&lt;br /&gt;
== Coherence Race Conditions ==&lt;br /&gt;
There are many instances in the SCI cache coherence protocol where race conditions are handled in such a way that they become a non-issue.  For instance, in SCI only the head node in the linked list of sharers is able to write to the block.  All sharers, including the head, can read.  If a sharer (not the head) wants to write to the block, the sharer first has to detach itself from the linked list.  The sharer is no longer sharing the block, so from now on we will call it N1.  Then N1 has to ask the directory for the block again with intent to write.  N1 then swaps out the pointer pointing to the head of the linked list with a pointer to itself.  Now N1 is the new head, and it has the pointer to the old head.  N1 then invalidates the old head’s cache line and all of the sharers linked to it.  This allows the SCI protocol to maintain write exclusivity [http://www.cs.utexas.edu/users/dburger/teaching/cs395t-s08/papers/9_sci.pdf].  Write exclusivity means only one write to a given block at a time.  Without write exclusivity, each node sharing the block would be able to write to the block, allowing multiple writes to happen simultaneously.  If each cache wrote simultaneously, every cache sharing the block would get invalidated, including the head, and the block in memory might not possess the correct value.  To implement this, there need to be special states for the head, middle, and tail of the linked list, as well as states that determine the read/write capabilities of the nodes.  Another scenario, described in Section 11.4.1 of the Solihin textbook, occurs when a cache is in state M and the directory is in state EM for a given block.  If the cache holding the block flushes the value and the value does not make it to the directory before a read request is made, there is a potential race condition.  This race condition is handled by the use of pending states.  When the cache flushes a block, the line transitions from state M to a pending state.  
The pending state transitions into the invalid state only when the directory has acknowledged receiving the flush.  This allows the cache to stall any read request made to the block until the block enters the steady state of I [[#References | (Solihin)]].&lt;br /&gt;
&lt;br /&gt;
== SCI States ==&lt;br /&gt;
&lt;br /&gt;
=== Directory States ===&lt;br /&gt;
In SCI, the directory and each cache maintain a series of states. The directory can be in one of three states:&lt;br /&gt;
&lt;br /&gt;
# HOME or UNCACHED (U) – The only clean copy of the memory block is in main memory&lt;br /&gt;
# FRESH or SHARED (S) – The memory block is shared, but the copy in main memory is up to date&lt;br /&gt;
# GONE or EXCLUSIVE/MODIFIED (EM) – The copy of the memory block in main memory is stale, but a cache holds the up-to-date memory block&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Cache States ===&lt;br /&gt;
For a cache, there are 29 stable states described in the SCI standard, along with many pending states.  Each state consists of two parts.  The first part tells where the cache is in the linked list. There are four possibilities [[#References | (Culler (1999))]]:&lt;br /&gt;
&lt;br /&gt;
# ONLY – if this is the only cache to have cached the memory block&lt;br /&gt;
# HEAD – if the cache is the most recent reader/writer of the block and so is the first in the list&lt;br /&gt;
# TAIL – if the cache is the least recent reader/writer of the block and so is the last in the list&lt;br /&gt;
# MID – if the cache is not the first or last in the list&lt;br /&gt;
&lt;br /&gt;
In addition to these four location designations, each state has a second part that describes the actual state of the data in the cache.  Some of these include:&lt;br /&gt;
&lt;br /&gt;
# CLEAN – if the data in the memory block has not been modified, is the same as main memory, but can be modified.&lt;br /&gt;
# DIRTY – if the data in the memory block has been modified, is different from main memory, and can be written to again if needed.&lt;br /&gt;
# FRESH – if the data in the memory block has not been modified, is the same as main memory, but cannot be modified without notifying main memory.&lt;br /&gt;
# COPY – if the data in the memory block has not been modified, is the same as main memory, and can only be read, not modified.&lt;br /&gt;
&lt;br /&gt;
A simpler way to envision the SCI coherence protocol states is to limit the system to just two processors and expand the MESI protocol [LINK] states.  In this case, we envision three states in the directory (U, S, EM), and the caches have the following states:&lt;br /&gt;
&lt;br /&gt;
# Invalid (I) – if the block in the cache's memory is invalid.&lt;br /&gt;
# Modified (M) – if the block in the cache's memory has been modified and no longer matches the block in main memory.&lt;br /&gt;
# Exclusive (E) – if the cache has the only copy of the memory block and it matches the block in main memory.&lt;br /&gt;
# Shared-Head (Sh) – if the block in the cache's memory is shared with the other processor's cache, but this processor was the last to read the memory.&lt;br /&gt;
# Shared-Tail (St) – if the block in the cache's memory is shared with the other processor's cache and the other processor was the last to read the memory.&lt;br /&gt;
&lt;br /&gt;
In this scenario, the following operations are possible:&lt;br /&gt;
&lt;br /&gt;
# Read(O/S) – a read of the data by either the processor attached to the cache (S for Self) or the other processor (O).&lt;br /&gt;
# Write(O/S) – a write of the memory block by either the processor attached to the cache (S) or the other processor (O).&lt;br /&gt;
 &lt;br /&gt;
In this case, the following state diagram illustrates the transitions:&lt;br /&gt;
&lt;br /&gt;
[[Image:SCIStatesC.png]]&lt;br /&gt;
&lt;br /&gt;
Figure 1: Simplified cache state diagram from SCI&lt;br /&gt;
&lt;br /&gt;
Initially, the cache is in the &amp;quot;I&amp;quot; state and the directory is in the &amp;quot;U&amp;quot; state.  If processor 1 (P1) makes a read, the processor's cache moves from state I to E and the directory moves from U to EM.  The directory records that P1 is the head of the directory list.  At this point, processor 2 (P2) is in state I.  If P2 makes a read, the directory is in state EM, so the directory notifies P1 that P2 is making a read; P1 transitions from E to St and sends the data to P2.  P2 transitions from I to Sh and creates a pointer to P1 to note that P1 is the next member of the directory list. The directory transitions from EM to S and updates its pointer from P1 to P2 to note that P2 is now the head of the directory cache list.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* David E. Culler and Jaswinder Pal Singh, ''Parallel Computer Architecture: A Hardware/Software Approach,'' Morgan Kaufmann Publishers, Inc., 1999.&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45079</id>
		<title>CSC/ECE 506 Spring 2011/ch11 DJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45079"/>
		<updated>2011-04-18T22:03:22Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= '''Supplemental to Chapter 11: SCI Cache Coherence''' =&lt;br /&gt;
&lt;br /&gt;
== Introduction: What is the Scalable Coherent Interface? ==&lt;br /&gt;
&lt;br /&gt;
The Scalable Coherent Interface (SCI) is an ANSI/IEEE standard that provides fast point-to-point connections for high-performance multiprocessor systems [http://standards.ieee.org/findstds/standard/1596-1992.html].  It works with both shared memory and message passing and was approved in 1992.  In the 1980s, as processors were getting faster and faster, they were beginning to outpace the speed of the bus.  SCI was created to provide a non-bus-based interconnection protocol.  It was intended to be scalable, meaning it can work with few or many processors.&lt;br /&gt;
&lt;br /&gt;
== SCI Cache Coherence ==&lt;br /&gt;
SCI utilizes a directory-based cache coherence protocol similar to the one described in Chapter 11 of [[#References | Solihin (2008)]].  Instead of cache coherence being handled by caches snooping for transactions on a bus, a directory manages the coherence of each individual cache by creating a doubly-linked list of the caches that share a particular block of memory.  The first cache in the list is the cache of the processor that most recently accessed the block.  When the &amp;quot;head&amp;quot; of the list modifies the block, it has to notify main memory and the next cache in the list, and the information then propagates from there.&lt;br /&gt;
&lt;br /&gt;
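To make the list structure concrete, the following sketch models the directory's head pointer and each cache's forward and backward pointers.  The class and method names here are our own illustration, not part of the SCI standard.&lt;br /&gt;

```python
# Minimal sketch of SCI's distributed sharing list: the directory
# keeps a head pointer, and each cache keeps forward/backward
# pointers.  Names are illustrative, not taken from IEEE 1596.

class Cache:
    def __init__(self, name):
        self.name = name
        self.fwd = None    # next sharer, toward the tail
        self.back = None   # previous sharer, toward the head

class Directory:
    def __init__(self):
        self.head = None   # most recent accessor of the block

    def read(self, cache):
        # The new reader prepends itself: it becomes the head and
        # links forward to the old head.
        old_head = self.head
        cache.fwd = old_head
        if old_head is not None:
            old_head.back = cache
        self.head = cache

d = Directory()
p1, p2 = Cache("P1"), Cache("P2")
d.read(p1)   # P1 is the only sharer
d.read(p2)   # P2 becomes the head; P1 is now the tail
```

A read simply prepends the requesting cache at the head of the list, mirroring how the most recent reader becomes the new head in the protocol.&lt;br /&gt;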
== Coherence Race Conditions ==&lt;br /&gt;
The SCI cache coherence protocol handles many potential race conditions in such a way that they become non-issues.  For instance, in SCI only the head node in the linked list of sharers is able to write to the block; all sharers, including the head, can read.  If a sharer other than the head wants to write to the block, it first has to detach itself from the linked list.  Since this node is no longer sharing the block, we will call it N1 from now on.  N1 then has to ask the directory for the block again with intent to write.  N1 swaps out the pointer to the head of the linked list with a pointer to itself, so N1 is now the new head and holds the pointer to the old head.  N1 then invalidates the old head's cache line and all of the sharers linked to it.  This allows the SCI protocol to maintain write exclusivity [http://www.cs.utexas.edu/users/dburger/teaching/cs395t-s08/papers/9_sci.pdf].  Write exclusivity means only one write to a given block at a time.  Without write exclusivity, each node sharing the block would be able to write to it, allowing multiple writes to happen simultaneously; every cache sharing the block would then be invalidated, including the head, and the block in memory might not possess the correct value.  Implementing this requires special states for the head, middle, and tail of the linked list, as well as states that determine the read/write capabilities of the nodes.&lt;br /&gt;
&lt;br /&gt;
Another scenario, described in Section 11.4.1 of the Solihin textbook, arises when a cache is in state M and the directory is in state EM for a given block.  If the cache holding the block flushes the value and the value does not reach the directory before a read request is made, there is a potential race condition.  It is handled by the use of pending states: when the cache flushes a block, the line transitions from state M to a pending state, and the pending state transitions to the invalid state only when the directory has acknowledged receiving the flush.  This allows the cache to stall any read request made to the block until the block enters the steady state I [[#References | (Solihin)]].&lt;br /&gt;
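The detach-and-become-head sequence described above can be sketched as follows.  Again, the class and method names are our own illustration rather than terminology from the standard.&lt;br /&gt;

```python
# Sketch of a non-head sharer acquiring write permission in SCI.
# Illustrative only: Cache, Directory, and their methods are our
# own names, not taken from the IEEE 1596 standard.

class Cache:
    def __init__(self, name):
        self.name = name
        self.fwd = None    # next sharer, toward the tail
        self.back = None   # previous sharer, toward the head
        self.valid = True

class Directory:
    def __init__(self):
        self.head = None   # most recent reader/writer of the block

    def read(self, cache):
        # A new reader prepends itself and becomes the head.
        old_head = self.head
        cache.fwd = old_head
        if old_head is not None:
            old_head.back = cache
        self.head = cache

    def write(self, cache):
        # 1. The would-be writer detaches itself from the sharing list.
        if cache.back is not None:
            cache.back.fwd = cache.fwd
        else:
            self.head = cache.fwd
        if cache.fwd is not None:
            cache.fwd.back = cache.back
        cache.fwd = cache.back = None
        # 2. It re-requests the block with intent to write: the
        #    directory's head pointer is swapped to point at it, and
        #    it keeps a pointer to the old head.
        old_head = self.head
        cache.fwd = old_head
        if old_head is not None:
            old_head.back = cache
        self.head = cache
        # 3. It invalidates the old head and every sharer behind it,
        #    preserving write exclusivity.
        node = old_head
        while node is not None:
            node.valid = False
            node = node.fwd

# Build a sharing list with head P3, then P2, then P1, and let the
# tail sharer P1 write.
d = Directory()
p1, p2, p3 = Cache("P1"), Cache("P2"), Cache("P3")
d.read(p1)
d.read(p2)
d.read(p3)
d.write(p1)   # P1 detaches, becomes head, invalidates P3 and P2
```

After the write, P1 is the head with the only valid copy, and the former sharers P2 and P3 have been invalidated, matching the N1 scenario above.&lt;br /&gt;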
&lt;br /&gt;
== SCI States ==&lt;br /&gt;
In SCI, the directory and each cache maintain a series of states. The directory can be in one of three states:&lt;br /&gt;
&lt;br /&gt;
# HOME or UNCACHED (U) – The only clean copy of the memory block is in main memory&lt;br /&gt;
# FRESH or SHARED (S) – The memory block is shared, and the copy in main memory is up to date&lt;br /&gt;
# GONE or EXCLUSIVE/MODIFIED (EM) – The copy in main memory is out of date; a cache holds the updated memory block&lt;br /&gt;
&lt;br /&gt;
For a cache, there are 29 stable states described in the SCI standard, plus many pending states.  Each state consists of two parts.  The first part tells where the cache is in the linked list.  There are four possibilities [[#References | (Culler, 1999)]]:&lt;br /&gt;
&lt;br /&gt;
# ONLY – if this cache is the only cache to hold the memory block&lt;br /&gt;
# HEAD – if the cache is the most recent reader/writer of the block and so is the first in the list&lt;br /&gt;
# TAIL – if the cache is the least recent reader/writer of the block and so is the last in the list&lt;br /&gt;
# MID – if the cache is not the first or last in the list&lt;br /&gt;
&lt;br /&gt;
In addition to these four list-position designations, the second part of the state describes the condition of the data in the cache.  Some of these designations include:&lt;br /&gt;
&lt;br /&gt;
# CLEAN – if the data in the memory block has not been modified, is the same as main memory, but can be modified.&lt;br /&gt;
# DIRTY – if the data in the memory block has been modified, is different from main memory, and can be written to again if needed.&lt;br /&gt;
# FRESH – if the data in the memory block has not been modified, is the same as main memory, but cannot be modified without notifying main memory.&lt;br /&gt;
# COPY – if the data in the memory block has not been modified, is the same as main memory, but cannot be modified, only read.&lt;br /&gt;
&lt;br /&gt;
A simpler way to envision the SCI coherence protocol states is to limit the system to just two processors and expand the MESI protocol [LINK] states.  In this case, we can envision three states in the directory (U, S, EM), while the caches have the following states:&lt;br /&gt;
&lt;br /&gt;
# Invalid (I) – if the block in the cache's memory is invalid.&lt;br /&gt;
# Modified (M) – if the block in the cache's memory has been modified and no longer matches the block in main memory.&lt;br /&gt;
# Exclusive (E) – if the cache has the only copy of the memory block and it matches the block in main memory.&lt;br /&gt;
# Shared-Head (Sh) – if the block in the cache's memory is shared with the other processor's cache, but this processor was the last to read the memory.&lt;br /&gt;
# Shared-Tail (St) – if the block in the cache's memory is shared with the other processor's cache and the other processor was the last to read the memory.&lt;br /&gt;
&lt;br /&gt;
In this scenario, the following operations are possible:&lt;br /&gt;
&lt;br /&gt;
# Read(O/S) – a read of the data by either the processor attached to the cache (S for Self) or the other processor (O).&lt;br /&gt;
# Write(O/S) – a write of the memory block by either the processor attached to the cache (S) or the other processor (O).&lt;br /&gt;
 &lt;br /&gt;
In this case, the following state diagram illustrates the transitions:&lt;br /&gt;
&lt;br /&gt;
[[Image:SCIStatesC.png]]&lt;br /&gt;
&lt;br /&gt;
Figure 1: Simplified cache state diagram from SCI&lt;br /&gt;
&lt;br /&gt;
Initially, the cache is in the &amp;quot;I&amp;quot; state and the directory is in the &amp;quot;U&amp;quot; state.  If processor 1 (P1) makes a read, P1's cache moves from state I to E and the directory moves from U to EM; the directory records that P1 is the head of the sharing list.  At this point, processor 2 (P2) is in state I.  If P2 then makes a read, the directory is in state EM, so the directory notifies P1 that P2 is making a read; P1 transitions from E to St and sends the data to P2.  P2 transitions from I to Sh and creates a pointer to P1 to note that P1 is the next member of the sharing list.  The directory transitions from EM to S and updates its pointer from P1 to P2 to note that P2 is now the head of the sharing list.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* David E. Culler and Jaswinder Pal Singh, ''Parallel Computer Architecture: A Hardware/Software Approach,'' Morgan Kaufmann Publishers, Inc., 1999.&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45078</id>
		<title>CSC/ECE 506 Spring 2011/ch11 DJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45078"/>
		<updated>2011-04-18T22:02:57Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= '''Supplemental to Chapter 11: SCI Cache Coherence''' =&lt;br /&gt;
&lt;br /&gt;
== Introduction: What is the Scalable Coherent Interface? ==&lt;br /&gt;
&lt;br /&gt;
The Scalable Coherent Interface (SCI) is an ANSI/IEEE standard that provides fast point-to-point connections for high-performance multiprocessor systems [http://standards.ieee.org/findstds/standard/1596-1992.html].  It works with both shared memory and message passing and was approved in 1992.  In the 1980s, as processors were getting faster and faster, they were beginning to outpace the speed of the bus.  SCI was created to provide a non-bus-based interconnection protocol.  It was intended to be scalable, meaning it can work with few or many processors.&lt;br /&gt;
&lt;br /&gt;
== SCI Cache Coherence ==&lt;br /&gt;
SCI utilizes a directory-based cache coherence protocol similar to the one described in Chapter 11 of [[#References | Solihin (2008)]].  Instead of cache coherence being handled by caches snooping for transactions on a bus, a directory manages the coherence of each individual cache by creating a doubly-linked list of the caches that share a particular block of memory.  The first cache in the list is the cache of the processor that most recently accessed the block.  When the &amp;quot;head&amp;quot; of the list modifies the block, it has to notify main memory and the next cache in the list, and the information then propagates from there.&lt;br /&gt;
&lt;br /&gt;
== Coherence Race Conditions ==&lt;br /&gt;
The SCI cache coherence protocol handles many potential race conditions in such a way that they become non-issues.  For instance, in SCI only the head node in the linked list of sharers is able to write to the block; all sharers, including the head, can read.  If a sharer other than the head wants to write to the block, it first has to detach itself from the linked list.  Since this node is no longer sharing the block, we will call it N1 from now on.  N1 then has to ask the directory for the block again with intent to write.  N1 swaps out the pointer to the head of the linked list with a pointer to itself, so N1 is now the new head and holds the pointer to the old head.  N1 then invalidates the old head's cache line and all of the sharers linked to it.  This allows the SCI protocol to maintain write exclusivity [http://www.cs.utexas.edu/users/dburger/teaching/cs395t-s08/papers/9_sci.pdf].  Write exclusivity means only one write to a given block at a time.  Without write exclusivity, each node sharing the block would be able to write to it, allowing multiple writes to happen simultaneously; every cache sharing the block would then be invalidated, including the head, and the block in memory might not possess the correct value.  Implementing this requires special states for the head, middle, and tail of the linked list, as well as states that determine the read/write capabilities of the nodes.&lt;br /&gt;
&lt;br /&gt;
Another scenario, described in Section 11.4.1 of the Solihin textbook, arises when a cache is in state M and the directory is in state EM for a given block.  If the cache holding the block flushes the value and the value does not reach the directory before a read request is made, there is a potential race condition.  It is handled by the use of pending states: when the cache flushes a block, the line transitions from state M to a pending state, and the pending state transitions to the invalid state only when the directory has acknowledged receiving the flush.  This allows the cache to stall any read request made to the block until the block enters the steady state I [[#References | (Solihin)]].&lt;br /&gt;
&lt;br /&gt;
== SCI States ==&lt;br /&gt;
In SCI, the directory and each cache maintain a series of states. The directory can be in one of three states:&lt;br /&gt;
&lt;br /&gt;
# HOME or UNCACHED (U) – The only clean copy of the memory block is in main memory&lt;br /&gt;
# FRESH or SHARED (S) – The memory block is shared, and the copy in main memory is up to date&lt;br /&gt;
# GONE or EXCLUSIVE/MODIFIED (EM) – The copy in main memory is out of date; a cache holds the updated memory block&lt;br /&gt;
&lt;br /&gt;
For a cache, there are 29 stable states described in the SCI standard, plus many pending states.  Each state consists of two parts.  The first part tells where the cache is in the linked list.  There are four possibilities [[#References | Culler (1999)]]:&lt;br /&gt;
&lt;br /&gt;
# ONLY – if this cache is the only cache to hold the memory block&lt;br /&gt;
# HEAD – if the cache is the most recent reader/writer of the block and so is the first in the list&lt;br /&gt;
# TAIL – if the cache is the least recent reader/writer of the block and so is the last in the list&lt;br /&gt;
# MID – if the cache is not the first or last in the list&lt;br /&gt;
&lt;br /&gt;
In addition to these four list-position designations, the second part of the state describes the condition of the data in the cache.  Some of these designations include:&lt;br /&gt;
&lt;br /&gt;
# CLEAN – if the data in the memory block has not been modified, is the same as main memory, but can be modified.&lt;br /&gt;
# DIRTY – if the data in the memory block has been modified, is different from main memory, and can be written to again if needed.&lt;br /&gt;
# FRESH – if the data in the memory block has not been modified, is the same as main memory, but cannot be modified without notifying main memory.&lt;br /&gt;
# COPY – if the data in the memory block has not been modified, is the same as main memory, but cannot be modified, only read.&lt;br /&gt;
&lt;br /&gt;
A simpler way to envision the SCI coherence protocol states is to limit the system to just two processors and expand the MESI protocol [LINK] states.  In this case, we can envision three states in the directory (U, S, EM), while the caches have the following states:&lt;br /&gt;
&lt;br /&gt;
# Invalid (I) – if the block in the cache's memory is invalid.&lt;br /&gt;
# Modified (M) – if the block in the cache's memory has been modified and no longer matches the block in main memory.&lt;br /&gt;
# Exclusive (E) – if the cache has the only copy of the memory block and it matches the block in main memory.&lt;br /&gt;
# Shared-Head (Sh) – if the block in the cache's memory is shared with the other processor's cache, but this processor was the last to read the memory.&lt;br /&gt;
# Shared-Tail (St) – if the block in the cache's memory is shared with the other processor's cache and the other processor was the last to read the memory.&lt;br /&gt;
&lt;br /&gt;
In this scenario, the following operations are possible:&lt;br /&gt;
&lt;br /&gt;
# Read(O/S) – a read of the data by either the processor attached to the cache (S for Self) or the other processor (O).&lt;br /&gt;
# Write(O/S) – a write of the memory block by either the processor attached to the cache (S) or the other processor (O).&lt;br /&gt;
 &lt;br /&gt;
In this case, the following state diagram illustrates the transitions:&lt;br /&gt;
&lt;br /&gt;
[[Image:SCIStatesC.png]]&lt;br /&gt;
&lt;br /&gt;
Figure 1: Simplified cache state diagram from SCI&lt;br /&gt;
&lt;br /&gt;
Initially, the cache is in the &amp;quot;I&amp;quot; state and the directory is in the &amp;quot;U&amp;quot; state.  If processor 1 (P1) makes a read, P1's cache moves from state I to E and the directory moves from U to EM; the directory records that P1 is the head of the sharing list.  At this point, processor 2 (P2) is in state I.  If P2 then makes a read, the directory is in state EM, so the directory notifies P1 that P2 is making a read; P1 transitions from E to St and sends the data to P2.  P2 transitions from I to Sh and creates a pointer to P1 to note that P1 is the next member of the sharing list.  The directory transitions from EM to S and updates its pointer from P1 to P2 to note that P2 is now the head of the sharing list.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* David E. Culler and Jaswinder Pal Singh, ''Parallel Computer Architecture: A Hardware/Software Approach,'' Morgan Kaufmann Publishers, Inc., 1999.&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45077</id>
		<title>CSC/ECE 506 Spring 2011/ch11 DJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45077"/>
		<updated>2011-04-18T21:56:34Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= '''Supplemental to Chapter 11: SCI Cache Coherence''' =&lt;br /&gt;
&lt;br /&gt;
== Introduction: What is the Scalable Coherent Interface? ==&lt;br /&gt;
&lt;br /&gt;
The Scalable Coherent Interface (SCI) is an ANSI/IEEE standard that provides fast point-to-point connections for high-performance multiprocessor systems [http://standards.ieee.org/findstds/standard/1596-1992.html].  It works with both shared memory and message passing and was approved in 1992.  In the 1980s, as processors were getting faster and faster, they were beginning to outpace the speed of the bus.  SCI was created to provide a non-bus-based interconnection protocol.  It was intended to be scalable, meaning it can work with few or many processors.&lt;br /&gt;
&lt;br /&gt;
== SCI Cache Coherence ==&lt;br /&gt;
SCI utilizes a directory-based cache coherence protocol similar to the one described in Chapter 11 of Solihin [LINK].  Instead of cache coherence being handled by caches snooping for transactions on a bus, a directory manages the coherence of each individual cache by creating a doubly-linked list of the caches that share a particular block of memory.  The first cache in the list is the cache of the processor that most recently accessed the block.  When the &amp;quot;head&amp;quot; of the list modifies the block, it has to notify main memory and the next cache in the list, and the information then propagates from there.&lt;br /&gt;
&lt;br /&gt;
== Coherence Race Conditions ==&lt;br /&gt;
The SCI cache coherence protocol handles many potential race conditions in such a way that they become non-issues.  For instance, in SCI only the head node in the linked list of sharers is able to write to the block; all sharers, including the head, can read.  If a sharer other than the head wants to write to the block, it first has to detach itself from the linked list.  Since this node is no longer sharing the block, we will call it N1 from now on.  N1 then has to ask the directory for the block again with intent to write.  N1 swaps out the pointer to the head of the linked list with a pointer to itself, so N1 is now the new head and holds the pointer to the old head.  N1 then invalidates the old head's cache line and all of the sharers linked to it.  This allows the SCI protocol to maintain write exclusivity [http://www.cs.utexas.edu/users/dburger/teaching/cs395t-s08/papers/9_sci.pdf].  Write exclusivity means only one write to a given block at a time.  Without write exclusivity, each node sharing the block would be able to write to it, allowing multiple writes to happen simultaneously; every cache sharing the block would then be invalidated, including the head, and the block in memory might not possess the correct value.  Implementing this requires special states for the head, middle, and tail of the linked list, as well as states that determine the read/write capabilities of the nodes.&lt;br /&gt;
&lt;br /&gt;
Another scenario, described in Section 11.4.1 of the Solihin textbook, arises when a cache is in state M and the directory is in state EM for a given block.  If the cache holding the block flushes the value and the value does not reach the directory before a read request is made, there is a potential race condition.  It is handled by the use of pending states: when the cache flushes a block, the line transitions from state M to a pending state, and the pending state transitions to the invalid state only when the directory has acknowledged receiving the flush.  This allows the cache to stall any read request made to the block until the block enters the steady state I [SOLIHIN BOOK].&lt;br /&gt;
&lt;br /&gt;
== SCI States ==&lt;br /&gt;
In SCI, the directory and each cache maintain a series of states. The directory can be in one of three states:&lt;br /&gt;
&lt;br /&gt;
# HOME or UNCACHED (U) – The only clean copy of the memory block is in main memory&lt;br /&gt;
# FRESH or SHARED (S) – The memory block is shared, and the copy in main memory is up to date&lt;br /&gt;
# GONE or EXCLUSIVE/MODIFIED (EM) – The copy in main memory is out of date; a cache holds the updated memory block&lt;br /&gt;
&lt;br /&gt;
For a cache, there are 29 stable states described in the SCI standard, plus many pending states.  Each state consists of two parts.  The first part tells where the cache is in the linked list.  There are four possibilities:&lt;br /&gt;
&lt;br /&gt;
# ONLY – if this cache is the only cache to hold the memory block&lt;br /&gt;
# HEAD – if the cache is the most recent reader/writer of the block and so is the first in the list&lt;br /&gt;
# TAIL – if the cache is the least recent reader/writer of the block and so is the last in the list&lt;br /&gt;
# MID – if the cache is not the first or last in the list&lt;br /&gt;
&lt;br /&gt;
In addition to these four list-position designations, the second part of the state describes the condition of the data in the cache.  Some of these designations include:&lt;br /&gt;
&lt;br /&gt;
# CLEAN – if the data in the memory block has not been modified, is the same as main memory, but can be modified.&lt;br /&gt;
# DIRTY – if the data in the memory block has been modified, is different from main memory, and can be written to again if needed.&lt;br /&gt;
# FRESH – if the data in the memory block has not been modified, is the same as main memory, but cannot be modified without notifying main memory.&lt;br /&gt;
# COPY – if the data in the memory block has not been modified, is the same as main memory, but cannot be modified, only read.&lt;br /&gt;
&lt;br /&gt;
A simpler way to envision the SCI coherence protocol states is to limit the system to just two processors and expand the MESI protocol [LINK] states.  In this case, we can envision three states in the directory (U, S, EM), while the caches have the following states:&lt;br /&gt;
&lt;br /&gt;
# Invalid (I) – if the block in the cache's memory is invalid.&lt;br /&gt;
# Modified (M) – if the block in the cache's memory has been modified and no longer matches the block in main memory.&lt;br /&gt;
# Exclusive (E) – if the cache has the only copy of the memory block and it matches the block in main memory.&lt;br /&gt;
# Shared-Head (Sh) – if the block in the cache's memory is shared with the other processor's cache, but this processor was the last to read the memory.&lt;br /&gt;
# Shared-Tail (St) – if the block in the cache's memory is shared with the other processor's cache and the other processor was the last to read the memory.&lt;br /&gt;
&lt;br /&gt;
In this scenario, the following operations are possible:&lt;br /&gt;
&lt;br /&gt;
# Read(O/S) – a read of the data by either the processor attached to the cache (S for Self) or the other processor (O).&lt;br /&gt;
# Write(O/S) – a write of the memory block by either the processor attached to the cache (S) or the other processor (O).&lt;br /&gt;
 &lt;br /&gt;
In this case, the following state diagram illustrates the transitions:&lt;br /&gt;
&lt;br /&gt;
[[Image:SCIStatesC.png]]&lt;br /&gt;
&lt;br /&gt;
Figure 1: Simplified cache state diagram from SCI&lt;br /&gt;
&lt;br /&gt;
Initially, the cache is in the &amp;quot;I&amp;quot; state and the directory is in the &amp;quot;U&amp;quot; state.  If processor 1 (P1) makes a read, P1's cache moves from state I to E and the directory moves from U to EM; the directory records that P1 is the head of the sharing list.  At this point, processor 2 (P2) is in state I.  If P2 then makes a read, the directory is in state EM, so the directory notifies P1 that P2 is making a read; P1 transitions from E to St and sends the data to P2.  P2 transitions from I to Sh and creates a pointer to P1 to note that P1 is the next member of the sharing list.  The directory transitions from EM to S and updates its pointer from P1 to P2 to note that P2 is now the head of the sharing list.&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:SCIStatesC.png&amp;diff=45076</id>
		<title>File:SCIStatesC.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:SCIStatesC.png&amp;diff=45076"/>
		<updated>2011-04-18T21:55:37Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: SCI States&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SCI States&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45075</id>
		<title>CSC/ECE 506 Spring 2011/ch11 DJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45075"/>
		<updated>2011-04-18T21:55:02Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= '''Supplemental to Chapter 11: SCI Cache Coherence''' =&lt;br /&gt;
&lt;br /&gt;
== Introduction: What is the Scalable Coherent Interface? ==&lt;br /&gt;
&lt;br /&gt;
The Scalable Coherent Interface (SCI) is an ANSI/IEEE standard that provides fast point-to-point connections for high-performance multiprocessor systems [http://standards.ieee.org/findstds/standard/1596-1992.html].  It works with both shared memory and message passing and was approved in 1992.  In the 1980s, as processors were getting faster and faster, they were beginning to outpace the speed of the bus.  SCI was created to provide a non-bus-based interconnection protocol.  It was intended to be scalable, meaning it can work with few or many processors.&lt;br /&gt;
&lt;br /&gt;
== SCI Cache Coherence ==&lt;br /&gt;
SCI utilizes a directory-based cache coherence protocol similar to the one described in Chapter 11 of Solihin [LINK].  Instead of cache coherence being handled by caches snooping for transactions on a bus, a directory manages the coherence of each individual cache by creating a doubly-linked list of the caches that share a particular block of memory.  The first cache in the list is the cache of the processor that most recently accessed the block.  When the &amp;quot;head&amp;quot; of the list modifies the block, it has to notify main memory and the next cache in the list, and the information then propagates from there.&lt;br /&gt;
&lt;br /&gt;
== Coherence Race Conditions ==&lt;br /&gt;
The SCI cache coherence protocol handles many potential race conditions in such a way that they become non-issues.  For instance, in SCI only the head node in the linked list of sharers is able to write to the block; all sharers, including the head, can read.  If a sharer other than the head wants to write to the block, it first has to detach itself from the linked list.  Since this node is no longer sharing the block, we will call it N1 from now on.  N1 then has to ask the directory for the block again with intent to write.  N1 swaps out the pointer to the head of the linked list with a pointer to itself, so N1 is now the new head and holds the pointer to the old head.  N1 then invalidates the old head's cache line and all of the sharers linked to it.  This allows the SCI protocol to maintain write exclusivity [http://www.cs.utexas.edu/users/dburger/teaching/cs395t-s08/papers/9_sci.pdf].  Write exclusivity means only one write to a given block at a time.  Without write exclusivity, each node sharing the block would be able to write to it, allowing multiple writes to happen simultaneously; every cache sharing the block would then be invalidated, including the head, and the block in memory might not possess the correct value.  Implementing this requires special states for the head, middle, and tail of the linked list, as well as states that determine the read/write capabilities of the nodes.&lt;br /&gt;
&lt;br /&gt;
Another scenario, described in Section 11.4.1 of the Solihin textbook, arises when a cache is in state M and the directory is in state EM for a given block.  If the cache holding the block flushes the value and the value does not reach the directory before a read request is made, there is a potential race condition.  It is handled by the use of pending states: when the cache flushes a block, the line transitions from state M to a pending state, and the pending state transitions to the invalid state only when the directory has acknowledged receiving the flush.  This allows the cache to stall any read request made to the block until the block enters the steady state I [SOLIHIN BOOK].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== SCI States ==&lt;br /&gt;
In SCI, the directory and each cache maintain a series of states. The directory can be in one of three states:&lt;br /&gt;
&lt;br /&gt;
# HOME or UNCACHED (U) – The only clean copy of the memory block is in main memory&lt;br /&gt;
# FRESH or SHARED (S) – The memory block is shared, and the copy in main memory is up to date&lt;br /&gt;
# GONE or EXCLUSIVE/MODIFIED (EM) – The copy in main memory is out of date; a cache holds the updated memory block&lt;br /&gt;
&lt;br /&gt;
For a cache, there are 29 stable states described in the SCI standard, plus many pending states.  Each state consists of two parts.  The first part tells where the cache is in the linked list.  There are four possibilities:&lt;br /&gt;
&lt;br /&gt;
# ONLY – if the cache is the only cache to have cached the memory block&lt;br /&gt;
# HEAD – if the cache is the most recent reader/writer of the block and so is the first in the list&lt;br /&gt;
# TAIL – if the cache is the least recent reader/writer of the block and so is the last in the list&lt;br /&gt;
# MID – if the cache is not the first or last in the list&lt;br /&gt;
&lt;br /&gt;
On top of these four location designations, there are further designations that describe the actual state of the memory block in the cache.  Some of these include:&lt;br /&gt;
&lt;br /&gt;
# CLEAN – if the data in the memory block has not been modified, is the same as main memory, but can be modified.&lt;br /&gt;
# DIRTY – if the data in the memory block has been modified, is different from main memory, and can be written to again if needed.&lt;br /&gt;
# FRESH – if the data in the memory block has not been modified, is the same as main memory, but cannot be modified without notifying main memory.&lt;br /&gt;
# COPY – if the data in the memory has not been modified, is the same as main memory, but cannot be modified, only read.&lt;br /&gt;
&lt;br /&gt;
A simpler way to envision the SCI coherence protocol states is to limit the system to just two processors and expand the MESI protocol [LINK] states.  In this case, we can envision three states in the directory (U, S, EM), and the caches have the following states:&lt;br /&gt;
&lt;br /&gt;
# Invalid (I) – if the block in the cache's memory is invalid.&lt;br /&gt;
# Modified (M) – if the block in the cache's memory has been modified and no longer matches the block in main memory.&lt;br /&gt;
# Exclusive (E) – if the cache has the only copy of the memory block and it matches the block in main memory.&lt;br /&gt;
# Shared-Head (Sh) – if the block in the cache's memory is shared with the other processor's cache, but this processor was the last to read the memory.&lt;br /&gt;
# Shared-Tail (St) – if the block in the cache's memory is shared with the other processor's cache and the other processor was the last to read the memory.&lt;br /&gt;
&lt;br /&gt;
In this scenario, the following operations are possible:&lt;br /&gt;
&lt;br /&gt;
# Read(O/S) – a read of the data by either the processor attached to the cache (S for Self) or the other processor (O).&lt;br /&gt;
# Write(O/S) – a write of the memory block by either the processor attached to the cache (S) or the other processor (O).&lt;br /&gt;
 &lt;br /&gt;
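Assuming just these two processors, the read transitions among the states listed above can be sketched as a toy state machine. This is an illustrative Python sketch of the simplified protocol, not part of the SCI standard; the class and field names are our own, and only Read(S/O) is modeled:

```python
# Toy sketch (illustrative, not the SCI standard) of the simplified
# two-processor protocol above: a directory with states U/S/EM and
# two caches with states I/E/M/Sh/St. Only reads are modeled.

class SimpleSCI:
    def __init__(self):
        self.directory = "U"            # U, S, or EM
        self.cache = {1: "I", 2: "I"}   # per-processor cache state
        self.head = None                # head of the directory list

    def read(self, p):
        other = 2 if p == 1 else 1
        if self.directory == "U":
            # First reader gets the block exclusively.
            self.cache[p] = "E"
            self.directory = "EM"
        elif self.directory == "EM" and self.cache[other] in ("E", "M"):
            # The existing holder supplies the data and becomes the tail.
            self.cache[other] = "St"
            self.cache[p] = "Sh"
            self.directory = "S"
        self.head = p                   # most recent reader is the head

sci = SimpleSCI()
sci.read(1)
print(sci.directory, sci.cache)   # EM {1: 'E', 2: 'I'}
sci.read(2)
print(sci.directory, sci.cache)   # S {1: 'St', 2: 'Sh'}
```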
In this case, the following state diagram illustrates the transitions:&lt;br /&gt;
&lt;br /&gt;
[[Image:IMAGE]]&lt;br /&gt;
Figure 1: Simplified cache state diagram from SCI&lt;br /&gt;
&lt;br /&gt;
Initially, the cache is in the &amp;quot;I&amp;quot; state and the directory is in the &amp;quot;U&amp;quot; state.  If processor 1 (P1) makes a read, the processor's cache moves from state I to E and the directory moves from U to EM.  The directory records that P1 is the head of the directory list.  At this point, processor 2 (P2) is in state I.  If P2 makes a read, the directory is in state EM, so the directory notifies P1 that P2 is making a read; P1 transitions from E to St and sends the data to P2.  P2 transitions from I to Sh and creates a pointer to P1 to note that P1 is the next node on the directory list.  The directory transitions from EM to S and updates its pointer from P1 to P2 to note that P2 is now the head of the directory cache list.&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=44917</id>
		<title>CSC/ECE 506 Spring 2011/ch2 JR</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=44917"/>
		<updated>2011-04-13T23:52:35Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Supplement to Chapter 2: The Data Parallel Programming Model=&lt;br /&gt;
Chapter 2 of [[#References | Solihin (2008)]] covers the shared memory and message passing parallel programming models.  However, it does not give an historical context for the development of parallel programming models.  It also does not address other commonly recognized parallel programming models like the [[#Definitions | ''task parallel'']] model or the [[#Definitions | ''data parallel'']] model, which have been covered in other treatments like [[#References | Foster (1995)]] and [[#References | Culler (1999)]].&lt;br /&gt;
&lt;br /&gt;
Shared memory and message passing models are often presented as competing models, but the data and task parallel models address fundamentally different programming concerns and can therefore be used in conjunction with either.  The goal of this supplement is to provide historical context for the development of parallel programming models and a treatment of the data and task parallel models to complement Chapter 2 of [[#References | Solihin (2008)]].  &lt;br /&gt;
&lt;br /&gt;
= Overview =&lt;br /&gt;
Whereas the shared memory and message passing models focus on how parallel tasks access common data, the [[#Definitions | ''data parallel'']] model focuses on how to divide up work into parallel tasks.  Data parallel algorithms exploit parallelism by dividing a problem into a number of identical tasks which execute on different subsets of common data.  The logical opposite of data parallel is task parallel, in which a number of distinct tasks operate on common data.  Historically, each parallel programming model was developed to take advantage of advancements in computer architecture.&lt;br /&gt;
&lt;br /&gt;
= History =&lt;br /&gt;
As computer architectures have evolved, so have parallel programming models.  The two factors that influenced parallel computing performance the most were the speed of the individual processor and the speed of the communication connections.  These communication connections include access to memory (local and main), as well as communication between processors.  The earliest advancements in parallel computers took advantage of bit-level parallelism from improvements made to chip design.  These computers mainly used vector processing, and each processor had direct, fast connections to memory.  This gave rise to the shared memory programming model.  As performance returns from this architecture diminished and the complexity of building machines with direct access to memory increased, the emphasis shifted to instruction-level parallelism and distributed memory systems, and the message passing model began to dominate.  Most recently, with the move to cluster-based machines, there has been an increased emphasis on thread-level parallelism.  This has corresponded to an increased interest in the data parallel programming model.&lt;br /&gt;
&lt;br /&gt;
== Shared memory in the 1970's ==&lt;br /&gt;
The major performance improvements from computers during this time were due to the ability to execute 32-bit word size operations at one time ([[#References|Culler (1999), p. 15.]]).  The dominant supercomputers of the time, like the Cray and the ILLIAC IV, were mainly [[#Definitions| ''SIMD'']] architectures and used a shared memory programming model.  They each used different forms of vector processing ([[#References|Culler (1999), p. 21.]]). &lt;br /&gt;
Development of the ILLIAC IV began in 1964 and wasn't finished until 1975 [http://en.wikipedia.org/wiki/ILLIAC_IV].  A central processor was connected to the main memory and delegated tasks to individual PEs, each of which had its own memory cache [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].  Each PE could operate on an 8-, 32-, or 64-bit operand at a given time [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].&lt;br /&gt;
&lt;br /&gt;
The Cray machine was installed at Los Alamos National Laboratory in 1976 by Cray Research, an offshoot of Control Data Corporation, and had similar performance to the ILLIAC IV [http://en.wikipedia.org/wiki/ILLIAC_IV].  The Cray machine relied heavily on the use of registers for memory instead of individual caches connected directly to processors like the ILLIAC IV.  Each processor was connected to main memory and had a number of 64-bit registers used to perform operations [http://www.eecg.toronto.edu/~moshovos/ACA05/read/cray1.pdf].&lt;br /&gt;
&lt;br /&gt;
All of these early commercial machines relied on there being a direct connection between each processor and the memory.  Not only did there have to be a direct connection, but each connection had to allow relatively similar access to each part of memory.  As you increased the number of processors, you had to increase the number of connections, which meant that this architecture did not scale well.  At the same time, processors were becoming more and more advanced.  The first single chip microprocessor was introduced in 1971 [http://en.wikipedia.org/wiki/Intel_4004].  Due to these two pressures, there was a movement away from large shared memory supercomputers and towards distributed memory systems.&lt;br /&gt;
&lt;br /&gt;
== Move to message passing in the 1980's ==&lt;br /&gt;
&lt;br /&gt;
Increasing the word size above 32 bits offered diminishing returns in terms of performance ([[#References|Culler (1999), p. 15.]]), so there were fewer performance gains to be had from processor improvements.  At the same time, processors were becoming smaller, connections were becoming more efficient, and connecting processors to do work in parallel became more viable.  In the late 70's and early 80's, Massively Parallel Processors (MPPs) emerged [http://www.intel.com/pressroom/kits/upcrc/parallelcomputing_backgrounder.pdf].  These consisted of separate computational units with their own memory and a link to the network that connects each unit.  This structure allowed separate units to communicate the results of computations to each other without there needing to be a direct connection to each memory location.  This change in architecture shifted the emphasis from bit-level parallelism to instruction-level parallelism, which involved increasing the number of instructions that could be executed at one time ([[#References|Culler (1999), p. 15.]]).  This gave rise to the message passing model, which allowed programmers to divide up instructions in order to take advantage of this architecture. &lt;br /&gt;
&lt;br /&gt;
Organizing each of the nodes in an MPP posed its own problems.  Some MPPs organized the connections into hypercubes, but these proved difficult to build [http://en.wikipedia.org/wiki/MIMD].  Among the most successful were the Connection Machines [http://ed-thelen.org/comp-hist/vs-cm-1-2-5.html].  Other architectures used 2-D meshes.  All of these strategies meant that each message might have to pass through a number of nodes before reaching its final destination.  This introduced its own restrictions on performance because each node had to handle routing duties.  As networking technology became faster and individual processors became more efficient, it became reasonable to connect separate computers across a network, which gave rise to cluster machines.&lt;br /&gt;
&lt;br /&gt;
== Current trend to cluster machines ==&lt;br /&gt;
In the 1990's, off-the-shelf computers were becoming more and more powerful.  At the same time, network connections were becoming faster and faster.  These two trends meant that it was no longer necessary to build custom hardware for parallel computing, like the bit-serial processor machines.  Off-the-shelf computers connected via networks could offer similar performance.  These cluster-based machines added another layer of complexity to parallelism.  Since computers can be located across a network from each other, there is more emphasis on software acting as a bridge [http://cobweb.ecn.purdue.edu/~pplinux/ppcluster.html].  This has led to a greater emphasis on thread- or task-level parallelism [http://en.wikipedia.org/wiki/Thread-level_parallelism] and the addition of the data parallel programming model to existing message passing or shared memory models [http://en.wikipedia.org/wiki/Thread-level_parallelism].&lt;br /&gt;
&lt;br /&gt;
= Data-Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is the single control flow.  In Flynn's taxonomy, SIMD is analogous to doing the same operation repeatedly over a large data set.  There is only one control processor that directs the activities of all the processing elements.  In a multiprocessor system executing a single set of instructions (SIMD), data parallelism is achieved when each processor performs the same task on different pieces of distributed data.  In some situations, a single execution thread controls operations on all pieces of data.  In others, different threads control the operation, but they execute the same code.&lt;br /&gt;
&lt;br /&gt;
== Description and example ==&lt;br /&gt;
&lt;br /&gt;
This section shows a simple example adapted from the Solihin textbook (pp. 24 - 27) that illustrates the data-parallel programming model.  Each of the code fragments below is written in pseudo-code style.&lt;br /&gt;
&lt;br /&gt;
Suppose we want to perform the following task on an array &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt;: updating each element of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; by the product of itself and its index, and adding together the elements of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; into the variable &amp;lt;code&amp;gt;sum&amp;lt;/code&amp;gt;. The corresponding code is shown below.&lt;br /&gt;
&lt;br /&gt;
 // simple sequential task&lt;br /&gt;
 sum = 0;&lt;br /&gt;
 '''for''' (i = 0; i &amp;lt; a.length; i++)&lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    sum = sum + a[i];&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
When we orchestrate the task using the data-parallel programming model, the program can be divided into two parts.  The first part performs the same operations on separate elements of the array for each processing element (sometimes referred to as PE or pe), and the second part reorganizes data among all processing elements (in our example, data reorganization is summing up values across different processing elements).  Since the data-parallel programming model only defines the overall effects of parallel steps, the second part can be accomplished either through shared memory or message passing.  The code fragment below is an example of the first part of the program.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 // data parallel programming: let each PE perform the same task on different pieces of distributed data&lt;br /&gt;
 pe_id = getid();&lt;br /&gt;
 my_sum = 0;&lt;br /&gt;
 '''for''' (i = pe_id; i &amp;lt; a.length; i += number_of_pe)         //separate elements of the array are assigned to each PE &lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    my_sum = my_sum + a[i];                               //all PEs accumulate elements assigned to them into local variable my_sum&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the above code, data parallelism is achieved by letting each processing element perform actions on array's separate elements, which are identified using the PE's id. For instance, if three processing elements are used then one processing element would start at i = 0, one would start at i = 1, and the last would start at i = 2. Since there are three processing elements then the index of the array for each will increase by three on each iteration until the task is complete (note that in our example elements assigned to each PE are interleaved instead of continuous). If the length of the array is a multiple of three then each processing element takes the same amount of time to execute its portion of the task.&lt;br /&gt;
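The fragment above can also be written in runnable form. This is an illustrative Python sketch, not from the textbook: it uses one thread per PE, a per-PE local sum as in the pseudo-code, and a final combination of the partial sums standing in for the second (data-reorganization) part of the program. The array length of 7 and 3 PEs match the figure below.

```python
import threading

# Illustrative runnable sketch (not from the Solihin text) of the
# data-parallel fragment above: each "PE" (a thread here) updates
# interleaved elements of the array and accumulates a local sum.

def pe_task(pe_id, number_of_pe, a, partial_sums):
    my_sum = 0
    for i in range(pe_id, len(a), number_of_pe):  # interleaved elements
        a[i] = a[i] * i
        my_sum = my_sum + a[i]
    partial_sums[pe_id] = my_sum      # each PE writes its own slot

a = [5, 5, 5, 5, 5, 5, 5]             # a.length is 7, as in the figure
number_of_pe = 3
partial_sums = [0] * number_of_pe
threads = [threading.Thread(target=pe_task,
                            args=(pe, number_of_pe, a, partial_sums))
           for pe in range(number_of_pe)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# "Second part": combine the per-PE sums into the final result.
total = sum(partial_sums)
print(a, total)   # prints [0, 5, 10, 15, 20, 25, 30] 105
```

Note that the threads never race: each PE touches disjoint array indices and its own slot of `partial_sums`, so no locking is needed until the final reduction.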
&lt;br /&gt;
&lt;br /&gt;
The picture below illustrates how elements of the array are assigned among different PEs for the specific case: length of the array is 7 and there are 3 PEs available. Elements in the array are marked by their indexes (0 to 6). As shown in the picture, PE0 will work on elements with index 0, 3, 6; PE1 is in charge of elements with index 1, 4; and elements with index 2, 5 are assigned to PE2. In this way, these 3 PEs work collectively on the array, while each PE works on different elements. Thus, data parallelism is achieved.&lt;br /&gt;
&lt;br /&gt;
[[Image:506wiki1.png|frame|center|150px|Illustration of data parallel programming(adapted from [http://computing.llnl.gov/tutorials/parallel_comp/#ModelsData Introduction to Parallel Computing])]]&lt;br /&gt;
&lt;br /&gt;
== Combining with message passing and shared memory ==&lt;br /&gt;
Although the shared memory and message passing models may be combined into hybrid approaches, the two models are fundamentally different ways of addressing the same problem (of access control to common data). In contrast, the data parallel model is concerned with a fundamentally different problem (how to divide work into parallel tasks). As such, the data parallel model may be used in conjunction with either the shared memory or the message passing model without conflict. In fact, Klaiber (1994) compares the performance of a number of data parallel programs implemented with both shared memory and message passing models.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One of the major advantages of combining the data parallel and message passing models is a reduction in the amount and complexity of communication required relative to a task parallel approach. Similarly, combining the data parallel and shared memory models tends to simplify and reduce the amount of synchronization required. Much as the shared memory model can benefit from specialized hardware, the data parallel programming model can as well. [[#Definitions |''SIMD'']] processors are specifically designed to run data parallel algorithms. These processors perform a single instruction on many different data locations simultaneously. Modern examples include CUDA processors developed by nVidia and Cell processors developed by STI (Sony, Toshiba, and IBM). For the curious, example code for CUDA processors is provided in the Appendix. However, whereas the shared memory model can be a difficult and costly abstraction in the absence of hardware support, the data parallel model—like the message passing model—does not require hardware support.&lt;br /&gt;
&lt;br /&gt;
= Task-Parallel Model =&lt;br /&gt;
The logical opposite of data parallel is [[#Definitions | ''task parallel,'']] in which a number of distinct tasks operate on common data.  Task parallelism is a form of parallelization in which multiple instruction streams are executed, either on the same data or on different data.  It focuses on distributing the execution of processes (threads) across different parallel computing nodes.  As part of the workflow, different execution threads communicate with one another as they work, in order to share data.&lt;br /&gt;
&lt;br /&gt;
== Description and example ==&lt;br /&gt;
An example of a task parallel code which is functionally equivalent to the sequential and data parallel codes given above follows below.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 // Task parallel code.&lt;br /&gt;
 &lt;br /&gt;
 int id = getmyid(); // assume id = 0 for thread 0, id = 1 for thread 1&lt;br /&gt;
 &lt;br /&gt;
 if (id == 0)&lt;br /&gt;
 {&lt;br /&gt;
     for (i = 0; i &amp;lt; 8; i++)&lt;br /&gt;
     {&lt;br /&gt;
         a[i] = b[i] + c[i];&lt;br /&gt;
         send_msg(P1, a[i]);&lt;br /&gt;
     }&lt;br /&gt;
 }&lt;br /&gt;
 else&lt;br /&gt;
 {&lt;br /&gt;
     sum = 0;&lt;br /&gt;
     for (i = 0; i &amp;lt; 8; i++)&lt;br /&gt;
     {&lt;br /&gt;
         recv_msg(P0, a[i]);&lt;br /&gt;
         if (a[i] &amp;gt; 0)&lt;br /&gt;
             sum = sum + a[i];&lt;br /&gt;
     }&lt;br /&gt;
     Print sum;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
In the code above, work is divided into two parallel tasks.  The first performs the element-wise addition of arrays ''b'' and ''c'' and stores the result in ''a.''  The other sums the elements of ''a.''  These tasks both operate on all elements of ''a'' (rather than on separate chunks), and the code executed by each thread is different (rather than identical).&lt;br /&gt;
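The pseudo-code above can be sketched in runnable form. This is an illustrative Python version, not from the text: the two threads stand in for P0 and P1, and a queue stands in for the send_msg/recv_msg message link; the sample values of ''b'' and ''c'' are our own.

```python
import threading
import queue

# Runnable sketch (illustrative, not from the text) of the task-parallel
# fragment above: thread 0 produces a[i] = b[i] + c[i] and sends each
# value over a queue; thread 1 receives the values and sums the
# positive ones.

b = [1, -2, 3, -4, 5, -6, 7, -8]      # sample inputs (our own choice)
c = [1, 1, 1, 1, 1, 1, 1, 1]
a = [0] * 8
channel = queue.Queue()               # stands in for the P0-to-P1 link
result = []

def producer():                       # "thread 0": element-wise addition
    for i in range(8):
        a[i] = b[i] + c[i]
        channel.put(a[i])             # send_msg(P1, a[i])

def consumer():                       # "thread 1": sum positive elements
    s = 0
    for i in range(8):
        v = channel.get()             # recv_msg(P0, a[i]) blocks until sent
        if v > 0:
            s = s + v
    result.append(s)

t0 = threading.Thread(target=producer)
t1 = threading.Thread(target=consumer)
t0.start(); t1.start()
t0.join(); t1.join()
print(a, result[0])   # prints [2, -1, 4, -3, 6, -5, 8, -7] 20
```

Because `queue.Queue.get` blocks until a value arrives, the queue provides both the communication and the synchronization that the message-passing primitives imply.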
&lt;br /&gt;
= Data Parallel Model vs Task Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is the single control flow: there is only one control processor that directs the activities of all the processing elements.  In stark contrast to this is task parallelism (MIMD: Multiple Instruction, Multiple Data): characterized by its multiple control flows, it allows the concurrent execution of multiple instruction streams, each of which manipulates its own data and serves a separate function.  Below is a contrast between the data parallelism and task parallelism models from Wikipedia: [http://en.wikipedia.org/wiki/SIMD SIMD] and [http://en.wikipedia.org/wiki/MIMD MIMD].  In the following subsections we continue to compare and contrast different features of the data-parallel and task-parallel models to help the reader understand the unique characteristics of the data-parallel programming model.&lt;br /&gt;
[[Image:Smid.png|frame|center|425px|contrast between data parallelism and task parallelism]]&lt;br /&gt;
&lt;br /&gt;
Since each parallel task is unique, a major limitation of task parallel algorithms is that the maximum degree of parallelism attainable is limited to the number of tasks that have been formulated.  This is in contrast to data parallel algorithms, which can be scaled easily to take advantage of an arbitrary number of processing elements.  In addition, unique tasks are likely to have significantly different run times, making it more challenging to balance load across processors. [[#References | Haveraaen (2000)]] also notes that task parallel algorithms are inherently more complex, requiring a greater degree of communication and synchronization.&lt;br /&gt;
&lt;br /&gt;
== Synchronous vs asynchronous ==&lt;br /&gt;
While the [http://en.wikipedia.org/wiki/Lockstep_(computing) lockstep] imposed by data parallelism on all data streams ensures synchronous computation (all PEs perform their tasks at the exact same pace), every processor in task parallelism performs its task at its own pace, which we call asynchronous computation.  Thus, at certain points of a task parallel program's execution, communication and synchronization primitives are needed to allow different instruction streams to coordinate their efforts, and that is where variable-sharing and message-passing come into play.&lt;br /&gt;
&lt;br /&gt;
== Determinism vs. non-determinism ==&lt;br /&gt;
Data parallelism's synchronous nature and task parallelism's asynchronism give rise to another pair of features that further distinguish these two models: determinism versus non-determinism.  Data parallelism is deterministic, i.e., computing with the same input will always yield the same result, since its synchronism ensures that issues like relative timing between PEs will not arise.  In contrast, task parallelism's asynchronous updates of common data can give rise to non-determinism, i.e., the same input won't always yield the same computation result (the result of a computation will also depend on factors outside the program's control, such as the scheduling and timing of other PEs).  Obviously, non-determinism makes it harder to write and maintain correct programs.  This partially explains the advantage of the data parallel model over the task parallel model in terms of development effort (also discussed in section 4.2).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Major differences between the data parallel and task parallel models ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
|+ '''Comparison between data parallel and task parallel programming models.'''&lt;br /&gt;
|-&lt;br /&gt;
! Aspects&lt;br /&gt;
! Data Parallel&lt;br /&gt;
! Task Parallel&lt;br /&gt;
|-&lt;br /&gt;
| Decomposition&lt;br /&gt;
| Partition data into subsets&lt;br /&gt;
| Partition program into subtasks&lt;br /&gt;
|-&lt;br /&gt;
| Parallel tasks&lt;br /&gt;
| Identical&lt;br /&gt;
| Unique&lt;br /&gt;
|-&lt;br /&gt;
| Degree of parallelism&lt;br /&gt;
| Scales easily&lt;br /&gt;
| Fixed&lt;br /&gt;
|-&lt;br /&gt;
| Load balancing&lt;br /&gt;
| Easier&lt;br /&gt;
| Harder&lt;br /&gt;
|-&lt;br /&gt;
| Communication overhead&lt;br /&gt;
| Lower&lt;br /&gt;
| Higher&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Summary =&lt;br /&gt;
&lt;br /&gt;
As computer architectures have evolved, so have parallel computing models.  As discussed in Solihin [SITE], two models that have been prominent throughout computing history are the Shared Memory Model and the Message Passing Model.  These are by no means the only programming models in use.  In this supplement, we discussed two other important models: the Task Parallel Model and the Data Parallel Model.  We also gave some context for how each of the programming models emerged throughout the history of computer architecture.  Each model has different strengths and weaknesses, as discussed above.&lt;br /&gt;
&lt;br /&gt;
= Definitions =&lt;br /&gt;
&lt;br /&gt;
* ''Data parallel.''  A data parallel algorithm is composed of a set of identical tasks which operate on different subsets of common data.&lt;br /&gt;
* ''Task parallel.''  A task parallel algorithm is composed of a set of differing tasks which operate on common data.&lt;br /&gt;
* ''SIMD (single-instruction-multiple-data).''  A processor which executes a single instruction simultaneously on multiple data locations.&lt;br /&gt;
* '' MIMD (multiple-instruction-multiple-data).'' A processor architecture which can execute multiple instructions across multiple data elements simultaneously.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
* David E. Culler, Jaswinder Pal Singh, and Anoop Gupta, [http://portal.acm.org/citation.cfm?id=550071 ''Parallel Computer Architecture: A Hardware/Software Approach,''] Morgan-Kauffman, 1999.&lt;br /&gt;
* Ian Foster, [http://www.mcs.anl.gov/~itf/dbpp/ ''Designing and Building Parallel Programs,''] Addison-Wesley, 1995.&lt;br /&gt;
* Magne Haveraaen, [http://portal.acm.org/citation.cfm?id=1239917 &amp;quot;Machine and collection abstractions for user-implemented data-parallel programming,&amp;quot;] ''Scientific Programming,'' 8(4):231-246, 2000.&lt;br /&gt;
* W. Daniel Hillis and Guy L. Steele, Jr., [http://portal.acm.org/citation.cfm?id=7903 &amp;quot;Data parallel algorithms,&amp;quot;] ''Communications of the ACM,'' 29(12):1170-1183, December 1986.&lt;br /&gt;
* Alexander C. Klaiber and Henry M. Levy, [http://portal.acm.org/citation.cfm?id=192020 &amp;quot;A comparison of message passing and shared memory architectures for data parallel programs,&amp;quot;] in ''Proceedings of the 21st Annual International Symposium on Computer Architecture,'' April 1994, pp. 94-105.&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* Philip J. Hatcher, Michael Jay Quinn, ''Data-Parallel Programming on MIMD Computers'', The MIT Press, 1991.&lt;br /&gt;
* Blaise Barney, &amp;quot;Introduction to Parallel Computing: Data Parallel Model&amp;quot;, Lawrence Livermore National Laboratory, [https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData], January 2009.&lt;br /&gt;
* Guy Blelloch, &amp;quot;Is Parallel Programming Hard?&amp;quot;, Carnegie Mellon University, [http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard], April 2009.&lt;br /&gt;
* Björn Lisper, ''Data parallelism and functional programming'', Lecture Notes in Computer Science, Volume 1132/1996, pp. 220-251, Springer Berlin, 1996.&lt;br /&gt;
* ''SIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SIMD http://en.wikipedia.org/wiki/SIMD].&lt;br /&gt;
* ''MIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/MIMD http://en.wikipedia.org/wiki/MIMD].&lt;br /&gt;
* ''Lockstep'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/Lockstep_(computing) http://en.wikipedia.org/wiki/Lockstep_(computing)].&lt;br /&gt;
* ''SPMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SPMD http://en.wikipedia.org/wiki/SPMD].&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=44916</id>
		<title>CSC/ECE 506 Spring 2011/ch2 JR</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=44916"/>
		<updated>2011-04-13T23:44:33Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Supplement to Chapter 2: The Data Parallel Programming Model=&lt;br /&gt;
Chapter 2 of [[#References | Solihin (2008)]] covers the shared memory and message passing parallel programming models.  However, it does not give an historical context for the development of parallel programming models.  It also does not address other commonly recognized parallel programming models like the [[#Definitions | ''task parallel'']] model or the [[#Definitions | ''data parallel'']] model, which have been covered in other treatments like [[#References | Foster (1995)]] and [[#References | Culler (1999)]].&lt;br /&gt;
&lt;br /&gt;
Shared memory and message passing models are often presented as competing models, but the data and task parallel models address fundamentally different programming concerns and can therefore be used in conjunction with either.  The goal of this supplement is to provide historical context for the development of parallel programming models and a treatment of the data and task parallel models to complement Chapter 2 of [[#References | Solihin (2008)]].  &lt;br /&gt;
&lt;br /&gt;
= Overview =&lt;br /&gt;
Whereas the shared memory and message passing models focus on how parallel tasks access common data, the [[#Definitions | ''data parallel'']] model focuses on how to divide up work into parallel tasks.  Data parallel algorithms exploit parallelism by dividing a problem into a number of identical tasks which execute on different subsets of common data.  The logical opposite of data parallel is task parallel, in which a number of distinct tasks operate on common data.  Historically, each parallel programming model was developed to take advantage of advancements in computer architecture.&lt;br /&gt;
&lt;br /&gt;
= History =&lt;br /&gt;
As computer architectures have evolved, so have parallel programming models.  The two factors that influenced parallel computing performance the most were the speed of the individual processor and the speed of the communication connections.  These communication connections include access to memory (local and main), as well as communication between processors.  The earliest advancements in parallel computers took advantage of bit-level parallelism from improvements made to chip design.  These computers mainly used vector processing, and each processor had direct, fast connections to memory.  This gave rise to the shared memory programming model.  As performance returns from this architecture diminished and the complexity of building machines with direct access to memory increased, the emphasis shifted to instruction-level parallelism and distributed memory systems, and the message passing model began to dominate.  Most recently, with the move to cluster-based machines, there has been an increased emphasis on thread-level parallelism.  This has corresponded to an increased interest in the data parallel programming model.&lt;br /&gt;
&lt;br /&gt;
== Shared memory in the 1970's ==&lt;br /&gt;
The major performance improvements in computers during this time came from the ability to operate on 32-bit words at one time ([[#References|Culler (1999), p. 15.]]).  The dominant supercomputers of the time, like the Cray and the ILLIAC IV, were mainly [[#Definitions| ''SIMD'']] architectures and used a shared memory programming model.  They each used different forms of vector processing ([[#References|Culler (1999), p. 21.]]). &lt;br /&gt;
Development of the ILLIAC IV began in 1964 and wasn't finished until 1975 [http://en.wikipedia.org/wiki/ILLIAC_IV].  A central processor was connected to the main memory and delegated tasks to individual PEs, each of which had its own memory cache [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].  Each PE could operate on an 8-, 32-, or 64-bit operand at a given time [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].&lt;br /&gt;
&lt;br /&gt;
The Cray machine was installed at Los Alamos National Laboratory in 1976 by Cray Research, an offshoot of Control Data Corporation, and had similar performance to the ILLIAC IV [http://en.wikipedia.org/wiki/ILLIAC_IV].  The Cray machine relied heavily on the use of registers for memory instead of individual caches connected directly to processors like the ILLIAC IV.  Each processor was connected to main memory and had a number of 64-bit registers used to perform operations [http://www.eecg.toronto.edu/~moshovos/ACA05/read/cray1.pdf].&lt;br /&gt;
&lt;br /&gt;
All of these early commercial machines relied on there being a direct connection between each processor and the memory.  Not only did there have to be a direct connection, but each connection had to allow relatively similar access to each part of memory.  As you increased the number of processors, you had to increase the number of connections, which meant that this architecture did not scale well.  At the same time, processors were becoming more and more advanced.  The first single chip microprocessor was introduced in 1971 [http://en.wikipedia.org/wiki/Intel_4004].  Due to these two pressures, there was a movement away from large shared memory supercomputers and towards distributed memory systems.&lt;br /&gt;
&lt;br /&gt;
== Move to message passing in the 1980's ==&lt;br /&gt;
&lt;br /&gt;
Increasing the word size above 32 bits offered diminishing returns in terms of performance ([[#References|Culler (1999), p. 15.]]), so there were fewer performance gains to be had from processor improvements.  At the same time, processors were becoming smaller and connections more efficient, so connecting processors to do work in parallel became more viable.  In the late 70's and early 80's, Massively Parallel Processors (MPPs) emerged [http://www.intel.com/pressroom/kits/upcrc/parallelcomputing_backgrounder.pdf].  These consisted of separate computational units, each with its own memory and a link to the network that connects the units.  This structure allowed separate units to communicate the results of computations to each other without needing a direct connection to each memory location.  This change in architecture shifted the emphasis from bit-level parallelism to instruction-level parallelism, which involved increasing the number of instructions that could be executed at one time ([[#References|Culler (1999), p. 15.]]).  This gave rise to the message passing model, which gave programmers the ability to divide up instructions in order to take advantage of this architecture. &lt;br /&gt;
&lt;br /&gt;
Organizing the nodes in an MPP posed its own problems.  Some MPPs organized the connections into hypercubes, but these proved difficult to build [http://en.wikipedia.org/wiki/MIMD].  Among the most successful were the Connection Machines [http://ed-thelen.org/comp-hist/vs-cm-1-2-5.html].  Other architectures used 2-D meshes.  All of these strategies meant that a message might have to pass through a number of nodes before reaching its final destination.  This introduced its own restrictions on performance because each node had to handle routing duties.  As networking technology and individual processors became faster and more efficient, it became reasonable to connect separate computers across a network, which gave rise to cluster machines.&lt;br /&gt;
&lt;br /&gt;
== Current trend to cluster machines ==&lt;br /&gt;
In the 1990's, off-the-shelf computers were becoming more and more powerful.  At the same time, network connections were becoming faster and faster. These two trends meant that it was no longer necessary to build custom hardware for parallel computing, like the bit-serial processor machines.  Off-the-shelf computers connected via networks could offer similar performance. These cluster-based machines added another layer of complexity to parallelism.  Since computers could be located across a network from each other, there is more emphasis on software acting as a bridge [http://cobweb.ecn.purdue.edu/~pplinux/ppcluster.html]. This has led to a greater emphasis on thread- or task-level parallelism [http://en.wikipedia.org/wiki/Thread-level_parallelism] and the addition of the data parallelism programming model to existing message passing or shared memory models [http://en.wikipedia.org/wiki/Thread-level_parallelism].&lt;br /&gt;
&lt;br /&gt;
= Data-Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow. In Flynn's taxonomy, SIMD corresponds to performing the same operation repeatedly over a large data set. There is only one control processor that directs the activities of all the processing elements. In a multiprocessor system executing a single set of instructions (SIMD), data parallelism is achieved when each processor performs the same task on different pieces of distributed data. In some situations, a single execution thread controls the operations on all pieces of data. In others, different threads control the operation, but they execute the same code.&lt;br /&gt;
&lt;br /&gt;
== Description and example ==&lt;br /&gt;
&lt;br /&gt;
This section shows a simple example adapted from the Solihin textbook (pp. 24 - 27) that illustrates the data-parallel programming model. The code fragments below are written in a pseudo-code style.&lt;br /&gt;
&lt;br /&gt;
Suppose we want to perform the following task on an array &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt;: updating each element of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; by the product of itself and its index, and adding together the elements of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; into the variable &amp;lt;code&amp;gt;sum&amp;lt;/code&amp;gt;. The corresponding code is shown below.&lt;br /&gt;
&lt;br /&gt;
 // simple sequential task&lt;br /&gt;
 sum = 0;&lt;br /&gt;
 '''for''' (i = 0; i &amp;lt; a.length; i++)&lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    sum = sum + a[i];&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
When we orchestrate the task using the data-parallel programming model, the program can be divided into two parts. The first part performs the same operations on separate elements of the array for each processing element (sometimes referred to as a PE), and the second part reorganizes the data among all processing elements (in our example, the data reorganization is summing up values across the different processing elements). Since the data-parallel programming model only defines the overall effects of parallel steps, the second part can be accomplished either through shared memory or through message passing. The three code fragments below are examples of the first part of the program, the shared-memory version of the second part, and the message-passing version of the second part, respectively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 // data parallel programming: let each PE perform the same task on different pieces of distributed data&lt;br /&gt;
 pe_id = getid();&lt;br /&gt;
 my_sum = 0;&lt;br /&gt;
 '''for''' (i = pe_id; i &amp;lt; a.length; i += number_of_pe)         //separate elements of the array are assigned to each PE &lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    my_sum = my_sum + a[i];                               //all PEs accumulate elements assigned to them into local variable my_sum&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the above code, data parallelism is achieved by letting each processing element perform actions on the array's separate elements, which are identified using the PE's id. For instance, if three processing elements are used, then one processing element would start at i = 0, one would start at i = 1, and the last would start at i = 2. Since there are three processing elements, the array index for each will increase by three on each iteration until the task is complete (note that in our example the elements assigned to each PE are interleaved rather than contiguous). If the length of the array is a multiple of three, then each processing element takes the same amount of time to execute its portion of the task.&lt;br /&gt;
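The first-part fragment above, together with a shared-memory version of the second (data reorganization) part, can be sketched as runnable code. This is a minimal illustration, not Solihin's exact code; the helper name run_data_parallel and the use of Python threads with a lock are assumptions made for the sketch.&lt;br /&gt;

```python
# Minimal sketch (assumed names, not Solihin's exact code): each "PE" runs
# the same loop over its interleaved share of the array (first part), then
# the local sums are combined through shared memory under a lock (second part).
import threading

def run_data_parallel(a, number_of_pe):
    total = 0
    lock = threading.Lock()

    def pe_task(pe_id):
        nonlocal total
        my_sum = 0
        # first part: same operation on different, interleaved elements
        for i in range(pe_id, len(a), number_of_pe):
            a[i] = a[i] * i
            my_sum = my_sum + a[i]
        # second part: accumulate my_sum into the shared total
        with lock:
            total = total + my_sum

    threads = [threading.Thread(target=pe_task, args=(pe,))
               for pe in range(number_of_pe)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total
```

Because each array element is touched by exactly one PE, this produces the same array contents and the same sum as the sequential loop.&lt;br /&gt;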
&lt;br /&gt;
&lt;br /&gt;
The picture below illustrates how the elements of the array are assigned among the different PEs for a specific case: the array length is 7 and there are 3 PEs available. The elements of the array are marked by their indexes (0 to 6). As shown in the picture, PE0 works on the elements with indexes 0, 3, and 6; PE1 is in charge of the elements with indexes 1 and 4; and the elements with indexes 2 and 5 are assigned to PE2. In this way, the 3 PEs work collectively on the array, while each PE works on different elements. Thus, data parallelism is achieved.&lt;br /&gt;
&lt;br /&gt;
[[Image:506wiki1.png|frame|center|150px|Illustration of data parallel programming(adapted from [http://computing.llnl.gov/tutorials/parallel_comp/#ModelsData Introduction to Parallel Computing])]]&lt;br /&gt;
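The interleaved assignment shown in the figure can be reproduced directly; the helper name assign_indices below is hypothetical, used only to illustrate the mapping.&lt;br /&gt;

```python
# Hypothetical helper: compute which array indices each PE handles under the
# interleaved assignment i = pe_id, pe_id + number_of_pe, pe_id + 2*number_of_pe, ...
def assign_indices(array_length, number_of_pe):
    return {pe: list(range(pe, array_length, number_of_pe))
            for pe in range(number_of_pe)}

# For an array of length 7 and 3 PEs, this reproduces the figure:
# PE0 -> [0, 3, 6], PE1 -> [1, 4], PE2 -> [2, 5]
```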
&lt;br /&gt;
== Combining with message passing and shared memory ==&lt;br /&gt;
Although the shared memory and message passing models may be combined into hybrid approaches, the two models are fundamentally different ways of addressing the same problem (of access control to common data). In contrast, the data parallel model is concerned with a fundamentally different problem (how to divide work into parallel tasks). As such, the data parallel model may be used in conjunction with either the shared memory or the message passing model without conflict. In fact, Klaiber (1994) compares the performance of a number of data parallel programs implemented with both shared memory and message passing models.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One of the major advantages of combining the data parallel and message passing models is a reduction in the amount and complexity of communication required relative to a task parallel approach. Similarly, combining the data parallel and shared memory models tends to simplify and reduce the amount of synchronization required. Much as the shared memory model can benefit from specialized hardware, so can the data parallel programming model. [[#Definitions |''SIMD'']] processors are specifically designed to run data parallel algorithms. These processors perform a single instruction on many different data locations simultaneously. Modern examples include CUDA processors developed by NVIDIA and Cell processors developed by STI (Sony, Toshiba, and IBM). For the curious, example code for CUDA processors is provided in the Appendix. However, whereas the shared memory model can be a difficult and costly abstraction in the absence of hardware support, the data parallel model, like the message passing model, does not require hardware support.&lt;br /&gt;
&lt;br /&gt;
= Task-Parallel Model =&lt;br /&gt;
The logical opposite of data parallel is [[#Definitions | ''task parallel,'']] in which a number of distinct tasks operate on common data. Task parallelism is a form of parallelization in which multiple distinct instruction streams execute on the same or on different data. It focuses on distributing the execution of processes (threads) across different parallel computing nodes. As part of the workflow, the different execution threads communicate with one another in order to share data.&lt;br /&gt;
&lt;br /&gt;
== Description and example ==&lt;br /&gt;
An example of task parallel code is given below. Unlike the data parallel example above, the two threads execute different code: one performs an element-wise addition of two arrays, while the other sums the positive results.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 // Task parallel code.&lt;br /&gt;
 &lt;br /&gt;
 int id = getmyid(); // assume id = 0 for thread 0, id = 1 for thread 1&lt;br /&gt;
 &lt;br /&gt;
 if (id == 0)&lt;br /&gt;
 {&lt;br /&gt;
     for (i = 0; i &amp;lt; 8; i++)&lt;br /&gt;
     {&lt;br /&gt;
         a[i] = b[i] + c[i];&lt;br /&gt;
         send_msg(P1, a[i]);&lt;br /&gt;
     }&lt;br /&gt;
 }&lt;br /&gt;
 else&lt;br /&gt;
 {&lt;br /&gt;
     sum = 0;&lt;br /&gt;
     for (i = 0; i &amp;lt; 8; i++)&lt;br /&gt;
     {&lt;br /&gt;
         recv_msg(P0, a[i]);&lt;br /&gt;
         if (a[i] &amp;gt; 0)&lt;br /&gt;
             sum = sum + a[i];&lt;br /&gt;
     }&lt;br /&gt;
     Print sum;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
In the code above, work is divided into two parallel tasks.  The first performs the element-wise addition of arrays ''b'' and ''c'' and stores the result in ''a.''  The other sums the elements of ''a.''  These tasks both operate on all elements of ''a'' (rather than on separate chunks), and the code executed by each thread is different (rather than identical).&lt;br /&gt;
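The two-task pipeline above can be simulated as runnable code by standing in for send_msg/recv_msg with a queue; this is a sketch under that assumption, not a real message passing library, and the name run_task_parallel is hypothetical.&lt;br /&gt;

```python
# Sketch of the task parallel example: thread 0 produces a[i] = b[i] + c[i]
# and "sends" each value; thread 1 "receives" each value and accumulates the
# positive ones. A queue.Queue stands in for send_msg/recv_msg.
import threading
import queue

def run_task_parallel(b, c):
    chan = queue.Queue()   # message channel from thread 0 (P0) to thread 1 (P1)
    a = [0] * len(b)
    result = []

    def thread0():
        for i in range(len(b)):
            a[i] = b[i] + c[i]
            chan.put(a[i])             # send_msg(P1, a[i])

    def thread1():
        total = 0
        for _ in range(len(b)):
            v = chan.get()             # recv_msg(P0, a[i])
            total = total + max(v, 0)  # add v only when it is positive
        result.append(total)

    t0 = threading.Thread(target=thread0)
    t1 = threading.Thread(target=thread1)
    t0.start()
    t1.start()
    t0.join()
    t1.join()
    return result[0]
```

Note that the two threads run different code bodies, which is exactly what distinguishes this from the data parallel fragment earlier.&lt;br /&gt;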
&lt;br /&gt;
= Data Parallel Model vs Task Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow: there is only one control processor that directs the activities of all the processing elements. In stark contrast to this is task parallelism (MIMD: Multiple Instruction, Multiple Data): characterized by its multiple control flows, it allows the concurrent execution of multiple instruction streams, each manipulating its own data and serving separate functions. Below is a contrast between the data parallelism and task parallelism models from Wikipedia: [http://en.wikipedia.org/wiki/SIMD SIMD] and [http://en.wikipedia.org/wiki/MIMD MIMD]. In the following subsections we continue to compare and contrast different features of the data-parallel and task-parallel models to help the reader understand the unique characteristics of the data-parallel programming model.&lt;br /&gt;
[[Image:Smid.png|frame|center|425px|contrast between data parallelism and task parallelism]]&lt;br /&gt;
&lt;br /&gt;
Since each parallel task is unique, a major limitation of task parallel algorithms is that the maximum degree of parallelism attainable is limited to the number of tasks that have been formulated.  This is in contrast to data parallel algorithms, which can be scaled easily to take advantage of an arbitrary number of processing elements.  In addition, unique tasks are likely to have significantly different run times, making it more challenging to balance load across processors. [[#References | Haveraaen (2000)]] also notes that task parallel algorithms are inherently more complex, requiring a greater degree of communication and synchronization.&lt;br /&gt;
&lt;br /&gt;
== Synchronous vs asynchronous ==&lt;br /&gt;
While the [http://en.wikipedia.org/wiki/Lockstep_(computing) lockstep] imposed by data parallelism on all data streams ensures synchronous computation (all PEs perform their tasks at exactly the same pace), every processor in task parallelism performs its task at its own pace, which we call asynchronous computation. Thus, at certain points in a task parallel program's execution, communication and synchronization primitives are needed to allow the different instruction streams to coordinate their efforts, and that is where variable sharing and message passing come into play.&lt;br /&gt;
&lt;br /&gt;
== Determinism vs. non-determinism ==&lt;br /&gt;
Data parallelism's synchronous nature and task parallelism's asynchrony give rise to another distinguishing pair of features: determinism versus non-determinism. Data parallelism is deterministic, i.e., computing with the same input will always yield the same result, since its synchrony ensures that issues like relative timing between PEs do not arise. In contrast, task parallelism's asynchronous updates of common data can give rise to non-determinism, i.e., the same input won't always yield the same result (the result of a computation also depends on factors outside the program's control, such as the scheduling and timing of other PEs). Non-determinism makes it harder to write and maintain correct programs. This partially explains the advantage of the data parallel model over the task parallel model in terms of development effort (also discussed in section 4.2).&lt;br /&gt;
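One concrete way to see this timing dependence: floating-point addition is not associative, so a reduction whose combine order varies with PE timing can return different answers for the same input. The order dependence itself can be demonstrated deterministically:&lt;br /&gt;

```python
# Floating-point addition is not associative: summing the same three values
# in two different orders (as differently timed PEs might combine them)
# produces two different results.
vals = [1e16, 1.0, -1e16]

left_to_right = (vals[0] + vals[1]) + vals[2]   # 1.0 is absorbed by 1e16
cancel_first = (vals[0] + vals[2]) + vals[1]    # the large terms cancel first

assert left_to_right == 0.0
assert cancel_first == 1.0
```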
&lt;br /&gt;
&lt;br /&gt;
== Major differences between the data parallel and task parallel models ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
|+ '''Comparison between data parallel and task parallel programming models.'''&lt;br /&gt;
|-&lt;br /&gt;
! Aspects&lt;br /&gt;
! Data Parallel&lt;br /&gt;
! Task Parallel&lt;br /&gt;
|-&lt;br /&gt;
| Decomposition&lt;br /&gt;
| Partition data into subsets&lt;br /&gt;
| Partition program into subtasks&lt;br /&gt;
|-&lt;br /&gt;
| Parallel tasks&lt;br /&gt;
| Identical&lt;br /&gt;
| Unique&lt;br /&gt;
|-&lt;br /&gt;
| Degree of parallelism&lt;br /&gt;
| Scales easily&lt;br /&gt;
| Fixed&lt;br /&gt;
|-&lt;br /&gt;
| Load balancing&lt;br /&gt;
| Easier&lt;br /&gt;
| Harder&lt;br /&gt;
|-&lt;br /&gt;
| Communication overhead&lt;br /&gt;
| Lower&lt;br /&gt;
| Higher&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Definitions =&lt;br /&gt;
&lt;br /&gt;
* ''Data parallel.''  A data parallel algorithm is composed of a set of identical tasks which operate on different subsets of common data.&lt;br /&gt;
* ''Task parallel.''  A task parallel algorithm is composed of a set of differing tasks which operate on common data.&lt;br /&gt;
* ''SIMD (single-instruction-multiple-data).''  A processor which executes a single instruction simultaneously on multiple data locations.&lt;br /&gt;
* ''MIMD (multiple-instruction-multiple-data).''  A processor architecture which can execute multiple instructions on multiple data elements simultaneously.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
* David E. Culler, Jaswinder Pal Singh, and Anoop Gupta, [http://portal.acm.org/citation.cfm?id=550071 ''Parallel Computer Architecture: A Hardware/Software Approach,''] Morgan-Kauffman, 1999.&lt;br /&gt;
* Ian Foster, [http://www.mcs.anl.gov/~itf/dbpp/ ''Designing and Building Parallel Programs,''] Addison-Wesley, 1995.&lt;br /&gt;
* Magne Haveraaen, [http://portal.acm.org/citation.cfm?id=1239917 &amp;quot;Machine and collection abstractions for user-implemented data-parallel programming,&amp;quot;] ''Scientific Programming,'' 8(4):231-246, 2000.&lt;br /&gt;
* W. Daniel Hillis and Guy L. Steele, Jr., [http://portal.acm.org/citation.cfm?id=7903 &amp;quot;Data parallel algorithms,&amp;quot;] ''Communications of the ACM,'' 29(12):1170-1183, December 1986.&lt;br /&gt;
* Alexander C. Klaiber and Henry M. Levy, [http://portal.acm.org/citation.cfm?id=192020 &amp;quot;A comparison of message passing and shared memory architectures for data parallel programs,&amp;quot;] in ''Proceedings of the 21st Annual International Symposium on Computer Architecture,'' April 1994, pp. 94-105.&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* Philip J. Hatcher, Michael Jay Quinn, ''Data-Parallel Programming on MIMD Computers'', The MIT Press, 1991.&lt;br /&gt;
* Blaise Barney, &amp;quot;Introduction to Parallel Computing: Data Parallel Model&amp;quot;, Lawrence Livermore National Laboratory, [https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData], January 2009.&lt;br /&gt;
* Guy Blelloch, &amp;quot;Is Parallel Programming Hard?&amp;quot;, Carnegie Mellon University, [http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard], April 2009.&lt;br /&gt;
* Björn Lisper, ''Data parallelism and functional programming'', Lecture Notes in Computer Science, Volume 1132/1996, pp. 220-251, Springer Berlin, 1996.&lt;br /&gt;
* ''SIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SIMD http://en.wikipedia.org/wiki/SIMD].&lt;br /&gt;
* ''MIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/MIMD http://en.wikipedia.org/wiki/MIMD].&lt;br /&gt;
* ''Lockstep'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/Lockstep_(computing) http://en.wikipedia.org/wiki/Lockstep_(computing)].&lt;br /&gt;
* ''SPMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SPMD http://en.wikipedia.org/wiki/SPMD].&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=44911</id>
		<title>CSC/ECE 506 Spring 2011/ch2 JR</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=44911"/>
		<updated>2011-04-12T12:20:29Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Supplement to Chapter 2: The Data Parallel Programming Model=&lt;br /&gt;
Chapter 2 of [[#References | Solihin (2008)]] covers the shared memory and message passing parallel programming models.  However, it does not give an historical context for the development of parallel programming models.  It also does not address other commonly recognized parallel programming models like the [[#Definitions | ''task parallel'']] model or the [[#Definitions | ''data parallel'']] model, which have been covered in other treatments like [[#References | Foster (1995)]] and [[#References | Culler (1999)]].&lt;br /&gt;
&lt;br /&gt;
Shared memory and message passing models are often presented as competing models, but the data and task parallel models address fundamentally different programming concerns and can therefore be used in conjunction with either.  The goal of this supplement is to provide historical context for the development of parallel programming models and a treatment of the data and task parallel models to complement Chapter 2 of [[#References | Solihin (2008)]].  &lt;br /&gt;
&lt;br /&gt;
= Overview =&lt;br /&gt;
Whereas the shared memory and message passing models focus on how parallel tasks access common data, the [[#Definitions | ''data parallel'']] model focuses on how to divide up work into parallel tasks.  Data parallel algorithms exploit parallelism by dividing a problem into a number of identical tasks which execute on different subsets of common data.  The logical opposite of data parallel is task parallel, in which a number of distinct tasks operate on common data.  Historically, each parallel programming model was developed to take advantage of advancements in computer architecture.&lt;br /&gt;
&lt;br /&gt;
= History =&lt;br /&gt;
As computer architectures have evolved, so have parallel programming models. The two factors that influenced parallel computing performance the most were the speed of the individual processor and the speed of the communication connections.  These communication connections include access to memory (local and main), as well as communication between processors. The earliest advancements in parallel computers took advantage of bit-level parallelism from improvements made to chip design.  These computers mainly used vector processing, and each processor had direct, fast connections to memory.  This gave rise to the shared memory programming model.  As performance returns from this architecture diminished and the complexity of building machines with direct access to memory increased, the emphasis shifted to instruction-level parallelism and distributed memory systems, and the message passing model began to dominate.  Most recently, with the move to cluster-based machines, there has been an increased emphasis on thread-level parallelism. This has corresponded to an increased interest in the data parallel programming model.&lt;br /&gt;
&lt;br /&gt;
== Shared memory in the 1970's ==&lt;br /&gt;
The major performance improvements in computers during this time came from the ability to operate on 32-bit words at one time ([[#References|Culler (1999), p. 15.]]).  The dominant supercomputers of the time, like the Cray and the ILLIAC IV, were mainly [[#Definitions| ''SIMD'']] architectures and used a shared memory programming model.  They each used different forms of vector processing ([[#References|Culler (1999), p. 21.]]). &lt;br /&gt;
Development of the ILLIAC IV began in 1964 and wasn't finished until 1975 [http://en.wikipedia.org/wiki/ILLIAC_IV].  A central processor was connected to the main memory and delegated tasks to individual PEs, each of which had its own memory cache [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].  Each PE could operate on an 8-, 32-, or 64-bit operand at a given time [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].&lt;br /&gt;
&lt;br /&gt;
The Cray machine was installed at Los Alamos National Laboratory in 1976 by Cray Research, an offshoot of Control Data Corporation, and had similar performance to the ILLIAC IV [http://en.wikipedia.org/wiki/ILLIAC_IV].  The Cray machine relied heavily on the use of registers for memory instead of individual caches connected directly to processors like the ILLIAC IV.  Each processor was connected to main memory and had a number of 64-bit registers used to perform operations [http://www.eecg.toronto.edu/~moshovos/ACA05/read/cray1.pdf].&lt;br /&gt;
&lt;br /&gt;
All of these early machines relied on there being a direct connection between each processor and the memory.  Not only did there have to be a direct connection, but each connection had to allow relatively similar access to each part of memory.  As you increased the number of processors, you had to increase the number of connections, which meant that this architecture did not scale well.  At the same time, processors were becoming more and more advanced.  The first single chip microprocessor was introduced in 1971 [http://en.wikipedia.org/wiki/Intel_4004].  Due to these two pressures, there was a movement away from large shared memory supercomputers and towards distributed memory systems.&lt;br /&gt;
&lt;br /&gt;
== Move to message passing in the 1980's ==&lt;br /&gt;
&lt;br /&gt;
Increasing the word size above 32 bits offered diminishing returns in terms of performance ([[#References|Culler (1999), p. 15.]]), so there were fewer performance gains to be had from processor improvements.  At the same time, processors were becoming smaller and connections more efficient, so connecting processors to do work in parallel became more viable.  In the late 70's and early 80's, Massively Parallel Processors (MPPs) emerged [http://www.intel.com/pressroom/kits/upcrc/parallelcomputing_backgrounder.pdf].  These consisted of separate computational units, each with its own memory and a link to the network that connects the units.  This structure allowed separate units to communicate the results of computations to each other without needing a direct connection to each memory location.  This change in architecture shifted the emphasis from bit-level parallelism to instruction-level parallelism, which involved increasing the number of instructions that could be executed at one time ([[#References|Culler (1999), p. 15.]]).  This gave rise to the message passing model, which gave programmers the ability to divide up instructions in order to take advantage of this architecture. &lt;br /&gt;
&lt;br /&gt;
Organizing the nodes in an MPP posed its own problems.  Some MPPs organized the connections into hypercubes, but these proved difficult to build [http://en.wikipedia.org/wiki/MIMD].  Among the most successful were the Connection Machines [http://ed-thelen.org/comp-hist/vs-cm-1-2-5.html].  Other architectures used 2-D meshes.  All of these strategies meant that a message might have to pass through a number of nodes before reaching its final destination.  This introduced its own restrictions on performance because each node had to handle routing duties.  As networking technology and individual processors became faster and more efficient, it became reasonable to connect separate computers across a network, which gave rise to cluster machines.&lt;br /&gt;
&lt;br /&gt;
== Current trend to cluster machines ==&lt;br /&gt;
In the 1990's, off-the-shelf computers were becoming more and more powerful.  At the same time, network connections were becoming faster and faster. These two trends meant that it was no longer necessary to build custom hardware for parallel computing, like the bit-serial processor machines.  Off-the-shelf computers connected via networks could offer similar performance. These cluster-based machines added another layer of complexity to parallelism.  Since computers could be located across a network from each other, there is more emphasis on software acting as a bridge [http://cobweb.ecn.purdue.edu/~pplinux/ppcluster.html]. This has led to a greater emphasis on thread- or task-level parallelism [http://en.wikipedia.org/wiki/Thread-level_parallelism] and the addition of the data parallelism programming model to existing message passing or shared memory models [http://en.wikipedia.org/wiki/Thread-level_parallelism].  &lt;br /&gt;
&lt;br /&gt;
= Data-Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow. In Flynn's taxonomy, SIMD corresponds to performing the same operation repeatedly over a large data set. There is only one control processor that directs the activities of all the processing elements. In a multiprocessor system executing a single set of instructions (SIMD), data parallelism is achieved when each processor performs the same task on different pieces of distributed data. In some situations, a single execution thread controls the operations on all pieces of data. In others, different threads control the operation, but they execute the same code.&lt;br /&gt;
&lt;br /&gt;
== Description and example ==&lt;br /&gt;
&lt;br /&gt;
This section shows a simple example adapted from the Solihin textbook (pp. 24 - 27) that illustrates the data-parallel programming model. The code fragments below are written in a pseudo-code style.&lt;br /&gt;
&lt;br /&gt;
Suppose we want to perform the following task on an array &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt;: updating each element of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; by the product of itself and its index, and adding together the elements of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; into the variable &amp;lt;code&amp;gt;sum&amp;lt;/code&amp;gt;. The corresponding code is shown below.&lt;br /&gt;
&lt;br /&gt;
 // simple sequential task&lt;br /&gt;
 sum = 0;&lt;br /&gt;
 '''for''' (i = 0; i &amp;lt; a.length; i++)&lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    sum = sum + a[i];&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
When we orchestrate the task using the data-parallel programming model, the program can be divided into two parts. The first part performs the same operations on separate elements of the array for each processing element (sometimes referred to as PE or pe), and the second part reorganizes the data among all processing elements (in our example, data reorganization means summing up the values across the different processing elements). Since the data-parallel programming model only defines the overall effects of parallel steps, the second part can be accomplished either through shared memory or through message passing. The code fragment below implements the first part of the program; the second part can then be written in either a shared-memory or a message-passing style.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 // data parallel programming: let each PE perform the same task on different pieces of distributed data&lt;br /&gt;
 pe_id = getid();&lt;br /&gt;
 my_sum = 0;&lt;br /&gt;
 '''for''' (i = pe_id; i &amp;lt; a.length; i += number_of_pe)         //separate elements of the array are assigned to each PE &lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    my_sum = my_sum + a[i];                               //all PEs accumulate elements assigned to them into local variable my_sum&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the above code, data parallelism is achieved by letting each processing element perform actions on separate elements of the array, which are identified using the PE's id. For instance, if three processing elements are used, one processing element starts at i = 0, one starts at i = 1, and the last starts at i = 2. Since there are three processing elements, the array index for each increases by three on each iteration until the task is complete (note that in our example the elements assigned to each PE are interleaved rather than contiguous). If the length of the array is a multiple of three, each processing element takes the same amount of time to execute its portion of the task.&lt;br /&gt;
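Since only the first code fragment (the per-PE computation) is reproduced above, a minimal runnable sketch of both parts may help. The Python function below simulates the PEs sequentially and then performs a shared-memory-style reduction; the function name and the sequential simulation are illustrative assumptions, not code from Solihin.&lt;br /&gt;

```python
# Simulate the data-parallel pattern: each PE works on interleaved
# elements of the array (part 1), then the per-PE partial sums are
# combined (part 2).
def data_parallel_sum(a, number_of_pe):
    partial = [0] * number_of_pe          # one my_sum per PE
    # Part 1: each PE updates its interleaved elements and accumulates.
    for pe_id in range(number_of_pe):
        for i in range(pe_id, len(a), number_of_pe):
            a[i] = a[i] * i
            partial[pe_id] += a[i]
    # Part 2: data reorganization -- here a shared-memory style reduction.
    return sum(partial)

a = [1, 2, 3, 4, 5, 6, 7]
print(data_parallel_sum(a, 3))  # prints 112, same as the sequential loop
```

Here each entry of ''partial'' plays the role of a PE's ''my_sum''; a message-passing version of part 2 would instead send each ''my_sum'' to one PE that accumulates them.&lt;br /&gt;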
&lt;br /&gt;
&lt;br /&gt;
The picture below illustrates how the elements of the array are assigned among the different PEs for a specific case: the array has length 7 and 3 PEs are available. The elements of the array are marked by their indices (0 to 6). As shown in the picture, PE0 works on the elements with indices 0, 3, and 6; PE1 is in charge of the elements with indices 1 and 4; and the elements with indices 2 and 5 are assigned to PE2. In this way, the 3 PEs work collectively on the array, while each PE works on different elements. Thus, data parallelism is achieved.&lt;br /&gt;
&lt;br /&gt;
[[Image:506wiki1.png|frame|center|150px|Illustration of data parallel programming(adapted from [http://computing.llnl.gov/tutorials/parallel_comp/#ModelsData Introduction to Parallel Computing])]]&lt;br /&gt;
&lt;br /&gt;
== Combining with message passing and shared memory ==&lt;br /&gt;
Although the shared memory and message passing models may be combined into hybrid approaches, the two models are fundamentally different ways of addressing the same problem (of access control to common data). In contrast, the data parallel model is concerned with a fundamentally different problem (how to divide work into parallel tasks). As such, the data parallel model may be used in conjunction with either the shared memory or the message passing model without conflict. In fact, Klaiber (1994) compares the performance of a number of data parallel programs implemented with both shared memory and message passing models.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One of the major advantages of combining the data parallel and message passing models is a reduction in the amount and complexity of communication required relative to a task parallel approach. Similarly, combining the data parallel and shared memory models tends to simplify and reduce the amount of synchronization required. Much as the shared memory model can benefit from specialized hardware, the data parallel programming model can as well. [[#Definitions |''SIMD'']] processors are specifically designed to run data parallel algorithms. These processors perform a single instruction on many different data locations simultaneously. Modern examples include CUDA processors developed by nVidia and Cell processors developed by STI (Sony, Toshiba, and IBM). For the curious, example code for CUDA processors is provided in the Appendix. However, whereas the shared memory model can be a difficult and costly abstraction in the absence of hardware support, the data parallel model—like the message passing model—does not require hardware support.&lt;br /&gt;
&lt;br /&gt;
= Task-Parallel Model =&lt;br /&gt;
The logical opposite of data parallel is [[#Definitions | ''task parallel,'']] in which a number of distinct tasks operate on common data. Task parallelism is a form of parallelization in which different instruction streams are executed, either on the same data or on different data. It focuses on distributing the execution of processes (threads) across different parallel computing nodes. As part of the workflow, the different execution threads communicate with one another as they work in order to share data.&lt;br /&gt;
&lt;br /&gt;
== Description and example ==&lt;br /&gt;
An example of task parallel code follows below. Note that, unlike the data parallel example above, each thread executes different code: one produces the element-wise sum of arrays ''b'' and ''c'', while the other accumulates the positive elements of the result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 // Task parallel code.&lt;br /&gt;
 &lt;br /&gt;
 int id = getmyid(); // assume id = 0 for thread 0, id = 1 for thread 1&lt;br /&gt;
 &lt;br /&gt;
 if (id == 0)&lt;br /&gt;
 {&lt;br /&gt;
     for (i = 0; i &amp;lt; 8; i++)&lt;br /&gt;
     {&lt;br /&gt;
         a[i] = b[i] + c[i];&lt;br /&gt;
         send_msg(P1, a[i]);&lt;br /&gt;
     }&lt;br /&gt;
 }&lt;br /&gt;
 else&lt;br /&gt;
 {&lt;br /&gt;
     sum = 0;&lt;br /&gt;
     for (i = 0; i &amp;lt; 8; i++)&lt;br /&gt;
     {&lt;br /&gt;
         recv_msg(P0, a[i]);&lt;br /&gt;
         if (a[i] &amp;gt; 0)&lt;br /&gt;
             sum = sum + a[i];&lt;br /&gt;
     }&lt;br /&gt;
     Print sum;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
In the code above, work is divided into two parallel tasks.  The first performs the element-wise addition of arrays ''b'' and ''c'' and stores the result in ''a.''  The other sums the elements of ''a.''  These tasks both operate on all elements of ''a'' (rather than on separate chunks), and the code executed by each thread is different (rather than identical).&lt;br /&gt;
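The message-passing structure above can be sketched as a runnable Python program, with a queue standing in for the ''send_msg''/''recv_msg'' channel between P0 and P1; the queue-based channel and the function name are illustrative assumptions, not part of the original pseudocode.&lt;br /&gt;

```python
import threading
import queue

def task_parallel_sum(b, c):
    chan = queue.Queue()   # stands in for the P0-to-P1 message channel
    result = []

    def producer():        # task 0: element-wise addition of b and c
        for i in range(len(b)):
            chan.put(b[i] + c[i])   # send_msg(P1, a[i])

    def consumer():        # task 1: sum the positive elements of a
        total = 0
        for _ in range(len(b)):
            v = chan.get()          # recv_msg(P0, a[i]) blocks until data arrives
            if v > 0:
                total += v
        result.append(total)

    t0 = threading.Thread(target=producer)
    t1 = threading.Thread(target=consumer)
    t0.start(); t1.start()
    t0.join(); t1.join()
    return result[0]

print(task_parallel_sum([1, -2, 3, 4, -5, 6, 7, 8], [0] * 8))  # prints 29
```

As in the pseudocode, the two threads run different code on the same logical array, and the blocking receive provides the synchronization between them.&lt;br /&gt;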
&lt;br /&gt;
= Data Parallel Model vs Task Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow: there is only one control processor, which directs the activities of all the processing elements. In stark contrast is task parallelism (MIMD: Multiple Instruction, Multiple Data): characterized by multiple control flows, it allows the concurrent execution of multiple instruction streams, each of which manipulates its own data and serves a separate function. Below is a contrast between the data parallelism and task parallelism models from Wikipedia: [http://en.wikipedia.org/wiki/SIMD SIMD] and [http://en.wikipedia.org/wiki/MIMD MIMD]. In the following subsections we continue to compare and contrast features of the data-parallel and task-parallel models to help the reader understand the unique characteristics of the data-parallel programming model.&lt;br /&gt;
[[Image:Smid.png|frame|center|425px|contrast between data parallelism and task parallelism]]&lt;br /&gt;
&lt;br /&gt;
Since each parallel task is unique, a major limitation of task parallel algorithms is that the maximum degree of parallelism attainable is limited to the number of tasks that have been formulated.  This is in contrast to data parallel algorithms, which can be scaled easily to take advantage of an arbitrary number of processing elements.  In addition, unique tasks are likely to have significantly different run times, making it more challenging to balance load across processors. [[#References | Haveraaen (2000)]] also notes that task parallel algorithms are inherently more complex, requiring a greater degree of communication and synchronization.&lt;br /&gt;
&lt;br /&gt;
== Synchronous vs asynchronous ==&lt;br /&gt;
While the [http://en.wikipedia.org/wiki/Lockstep_(computing) lockstep] imposed by data parallelism on all data streams ensures synchronous computation (all PEs perform their tasks at the exact same pace), in task parallelism every processor performs its task at its own pace, which we call asynchronous computation. Thus, at certain points in a task parallel program's execution, communication and synchronization primitives are needed to allow the different instruction streams to coordinate their efforts, and that is where variable sharing and message passing come into play.&lt;br /&gt;
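One such synchronization primitive is the barrier, which forces asynchronous task-parallel threads back into step at a chosen point. A small sketch using Python's threading.Barrier follows; the two-phase worker structure is an illustrative assumption.&lt;br /&gt;

```python
import threading

NUM_THREADS = 3
barrier = threading.Barrier(NUM_THREADS)
lock = threading.Lock()
phase_log = []   # records which phase each thread reaches, in arrival order

def worker(tid):
    # Phase 1: each thread computes at its own pace.
    with lock:
        phase_log.append(("phase1", tid))
    barrier.wait()   # no thread may enter phase 2 until all finish phase 1
    with lock:
        phase_log.append(("phase2", tid))

threads = [threading.Thread(target=worker, args=(t,)) for t in range(NUM_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Every phase-1 record precedes every phase-2 record, regardless of scheduling.
```

Within a phase the ordering of threads is non-deterministic, but the barrier guarantees that no phase-2 work can be observed before all phase-1 work is complete.&lt;br /&gt;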
&lt;br /&gt;
== Determinism vs. non-determinism ==&lt;br /&gt;
Data parallelism's synchronous nature and task parallelism's asynchrony give rise to another pair of distinguishing features: determinism versus non-determinism. Data parallelism is deterministic, i.e., computing with the same input will always yield the same result, since its synchronism ensures that issues like relative timing between PEs do not arise. In contrast, task parallelism's asynchronous updates of common data can give rise to non-determinism, i.e., the same input will not always yield the same result (the result of a computation also depends on factors outside the program's control, such as the scheduling and timing of other PEs). Non-determinism makes it harder to write and maintain correct programs. This partially explains the advantage of the data parallel model over the task parallel model in terms of development effort (also discussed in section 4.2).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Major differences between the data parallel and task parallel models ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
|+ '''Comparison between data parallel and task parallel programming models.'''&lt;br /&gt;
|-&lt;br /&gt;
! Aspects&lt;br /&gt;
! Data Parallel&lt;br /&gt;
! Task Parallel&lt;br /&gt;
|-&lt;br /&gt;
| Decomposition&lt;br /&gt;
| Partition data into subsets&lt;br /&gt;
| Partition program into subtasks&lt;br /&gt;
|-&lt;br /&gt;
| Parallel tasks&lt;br /&gt;
| Identical&lt;br /&gt;
| Unique&lt;br /&gt;
|-&lt;br /&gt;
| Degree of parallelism&lt;br /&gt;
| Scales easily&lt;br /&gt;
| Fixed&lt;br /&gt;
|-&lt;br /&gt;
| Load balancing&lt;br /&gt;
| Easier&lt;br /&gt;
| Harder&lt;br /&gt;
|-&lt;br /&gt;
| Communication overhead&lt;br /&gt;
| Lower&lt;br /&gt;
| Higher&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Definitions =&lt;br /&gt;
&lt;br /&gt;
* ''Data parallel.''  A data parallel algorithm is composed of a set of identical tasks which operate on different subsets of common data.&lt;br /&gt;
* ''Task parallel.''  A task parallel algorithm is composed of a set of differing tasks which operate on common data.&lt;br /&gt;
* ''SIMD (single-instruction-multiple-data).''  A processor which executes a single instruction simultaneously on multiple data locations.&lt;br /&gt;
* '' MIMD (multiple-instruction-multiple-data).'' A processor architecture which can execute multiple instructions across multiple data elements simultaneously.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
* David E. Culler, Jaswinder Pal Singh, and Anoop Gupta, [http://portal.acm.org/citation.cfm?id=550071 ''Parallel Computer Architecture: A Hardware/Software Approach,''] Morgan-Kauffman, 1999.&lt;br /&gt;
* Ian Foster, [http://www.mcs.anl.gov/~itf/dbpp/ ''Designing and Building Parallel Programs,''] Addison-Wesley, 1995.&lt;br /&gt;
* Magne Haveraaen, [http://portal.acm.org/citation.cfm?id=1239917 &amp;quot;Machine and collection abstractions for user-implemented data-parallel programming,&amp;quot;] ''Scientific Programming,'' 8(4):231-246, 2000.&lt;br /&gt;
* W. Daniel Hillis and Guy L. Steele, Jr., [http://portal.acm.org/citation.cfm?id=7903 &amp;quot;Data parallel algorithms,&amp;quot;] ''Communications of the ACM,'' 29(12):1170-1183, December 1986.&lt;br /&gt;
* Alexander C. Klaiber and Henry M. Levy, [http://portal.acm.org/citation.cfm?id=192020 &amp;quot;A comparison of message passing and shared memory architectures for data parallel programs,&amp;quot;] in ''Proceedings of the 21st Annual International Symposium on Computer Architecture,'' April 1994, pp. 94-105.&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* Philip J. Hatcher, Michael Jay Quinn, ''Data-Parallel Programming on MIMD Computers'', The MIT Press, 1991.&lt;br /&gt;
* Blaise Barney, &amp;quot;Introduction to Parallel Computing: Data Parallel Model&amp;quot;, Lawrence Livermore National Laboratory, [https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData], January 2009.&lt;br /&gt;
* Guy Blelloch, &amp;quot;Is Parallel Programming Hard?&amp;quot;, Carnegie Mellon University, [http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard], April 2009.&lt;br /&gt;
* Björn Lisper, ''Data parallelism and functional programming'', Lecture Notes in Computer Science, Volume 1132/1996, pp. 220-251, Springer Berlin, 1996.&lt;br /&gt;
* ''SIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SIMD http://en.wikipedia.org/wiki/SIMD].&lt;br /&gt;
* ''MIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/MIMD http://en.wikipedia.org/wiki/MIMD].&lt;br /&gt;
* ''Lockstep'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/Lockstep_(computing) http://en.wikipedia.org/wiki/Lockstep_(computing)].&lt;br /&gt;
* ''SPMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SPMD http://en.wikipedia.org/wiki/SPMD].&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=44729</id>
		<title>CSC/ECE 506 Spring 2011/ch2 JR</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=44729"/>
		<updated>2011-04-01T14:06:34Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Supplement to Chapter 2: The Data Parallel Programming Model=&lt;br /&gt;
Chapter 2 of [[#References | Solihin (2008)]] covers the shared memory and message passing parallel programming models.  However, it does not give an historical context for the development of parallel programming models.  It also does not address other commonly recognized parallel programming models like the [[#Definitions | ''task parallel'']] model or the [[#Definitions | ''data parallel'']] model, which have been covered in other treatments like [[#References | Foster (1995)]] and [[#References | Culler (1999)]].&lt;br /&gt;
&lt;br /&gt;
Shared memory and message passing models are often presented as competing models, but the data and task parallel models address fundamentally different programming concerns and can therefore be used in conjunction with either.  The goal of this supplement is to provide historical context for the development of parallel programming models and a treatment of the data and task parallel models to complement Chapter 2 of [[#References | Solihin (2008)]].  &lt;br /&gt;
&lt;br /&gt;
= Overview =&lt;br /&gt;
Whereas the shared memory and message passing models focus on how parallel tasks access common data, the [[#Definitions | ''data parallel'']] model focuses on how to divide up work into parallel tasks.  Data parallel algorithms exploit parallelism by dividing a problem into a number of identical tasks which execute on different subsets of common data.  The logical opposite of data parallel is task parallel, in which a number of distinct tasks operate on common data.  Historically, each parallel programming model was developed to take advantage of advancements in computer architecture.&lt;br /&gt;
&lt;br /&gt;
= History =&lt;br /&gt;
As computer architectures have evolved, so have parallel programming models. The two factors that most influenced parallel computing performance were the speed of the individual processors and the speed of the communication connections.  These communication connections include access to memory (local and main), as well as communication between processors. The earliest advancements in parallel computers took advantage of bit-level parallelism made possible by improvements in chip design.  These computers mainly used vector processing, and each processor had direct, fast connections to memory.  This gave rise to the shared memory programming model.  As performance returns from this architecture diminished and the complexity of building machines with direct access to memory increased, the emphasis shifted to instruction-level parallelism and distributed memory systems, and the message passing model began to dominate.  Most recently, with the move to cluster-based machines, there has been an increased emphasis on thread-level parallelism. This has corresponded to an increased interest in the data parallel programming model.&lt;br /&gt;
&lt;br /&gt;
== Shared memory in the 1970's ==&lt;br /&gt;
The major performance improvements from computers during this time were due to the ability to execute 32-bit word size operations at one time ([[#References|Culler (1999), p. 15.]]).  The dominant supercomputers of the time, like the Cray and the ILLIAC IV, were mainly [[#Definitions| ''SIMD'']] architectures and used a shared memory programming model.  They each used different forms of vector processing ([[#References|Culler (1999), p. 21.]]). &lt;br /&gt;
Development of the ILLIAC IV began in 1964 and wasn't finished until 1975 [http://en.wikipedia.org/wiki/ILLIAC_IV].  A central processor was connected to the main memory and delegated tasks to individual PEs, each of which had its own memory cache [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].  Each PE could operate on an 8-, 32-, or 64-bit operand at a given time [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].&lt;br /&gt;
&lt;br /&gt;
The Cray-1 was installed at Los Alamos National Laboratory in 1976 by Cray Research and had performance comparable to the ILLIAC IV [http://en.wikipedia.org/wiki/ILLIAC_IV].  The Cray machine relied heavily on the use of registers instead of the individual processors used by the ILLIAC IV.  Each processor was connected to main memory and had a number of 64-bit registers used to perform operations [http://www.eecg.toronto.edu/~moshovos/ACA05/read/cray1.pdf].&lt;br /&gt;
&lt;br /&gt;
All of these early machines relied on there being a direct connection between each processor and the memory.  Not only did there have to be a direct connection, but each connection had to allow relatively similar access to each part of memory.  As you increased the number of processors, you had to increase the number of connections, which meant that this architecture did not scale well.  At the same time, processors were becoming more and more advanced.  The first single chip microprocessor was introduced in 1971 [http://en.wikipedia.org/wiki/Intel_4004].  Due to these two pressures, there was a movement away from large shared memory supercomputers and towards distributed memory systems.&lt;br /&gt;
&lt;br /&gt;
== Move to message passing in the 1980's ==&lt;br /&gt;
&lt;br /&gt;
Increasing the word size above 32 bits offered diminishing returns in performance ([[#References|Culler (1999), p. 15.]]), so there were fewer gains to be had from processor improvements.  At the same time, processors were becoming smaller, connections were becoming more efficient, and connecting processors to work in parallel became more viable.  In the late 70's and early 80's, Massively Parallel Processors (MPPs) emerged [http://www.intel.com/pressroom/kits/upcrc/parallelcomputing_backgrounder.pdf].  These consisted of separate computational units, each with its own memory and a link to the network connecting the units.  This structure allowed separate units to communicate the results of computations to each other without needing a direct connection to each memory location.  This change in architecture shifted the emphasis from bit-level parallelism to instruction-level parallelism, which involved increasing the number of instructions that could be executed at one time ([[#References|Culler (1999), p. 15.]]).  This gave rise to the message passing model, which gave programmers the ability to divide up instructions in order to take advantage of this architecture. &lt;br /&gt;
&lt;br /&gt;
Organizing the nodes in an MPP posed its own problems.  Some MPPs organized the connections into hypercubes, but these proved difficult to build [http://en.wikipedia.org/wiki/MIMD].  Among the most successful were the Connection Machines [http://ed-thelen.org/comp-hist/vs-cm-1-2-5.html].  Other architectures used 2-D meshes.  All of these strategies meant that each message might have to pass through a number of nodes before reaching its final destination.  This introduced its own restrictions on performance because each node had to handle routing duties.  As networking technology became faster and individual processors became more efficient, it became reasonable to connect separate computers across a network, which gave rise to cluster machines.&lt;br /&gt;
&lt;br /&gt;
== Current trend to cluster machines ==&lt;br /&gt;
In the 1990's, multi-core computers became the dominant trend in computer architecture.  At the same time, network connections were becoming faster and faster. These two trends meant that it was no longer necessary to build custom hardware for parallel computing.  Off-the-shelf computers connected via networks could offer similar performance. These cluster-based machines added another layer of complexity to parallelism.  Since computers could be located across a network from each other, there was more emphasis on software acting as a bridge [http://cobweb.ecn.purdue.edu/~pplinux/ppcluster.html]. This has led to a greater emphasis on thread- or task-level parallelism [http://en.wikipedia.org/wiki/Thread-level_parallelism] and the addition of the data parallel programming model to existing message passing or shared memory models [http://en.wikipedia.org/wiki/Thread-level_parallelism].  &lt;br /&gt;
&lt;br /&gt;
= Data-Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow. In Flynn's taxonomy, SIMD is analogous to performing the same operation repeatedly over a large data set: a single control processor directs the activities of all the processing elements. In a multiprocessor system executing a single set of instructions (SIMD), data parallelism is achieved when each processor performs the same task on different pieces of distributed data. In some situations, a single execution thread controls operations on all pieces of data. In others, different threads control the operation, but they execute the same code.&lt;br /&gt;
&lt;br /&gt;
== Description and example ==&lt;br /&gt;
&lt;br /&gt;
This section shows a simple example, adapted from the Solihin textbook (pp. 24-27), that illustrates the data-parallel programming model. Each of the code fragments below is written in pseudo-code style.&lt;br /&gt;
&lt;br /&gt;
Suppose we want to perform the following task on an array &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt;: updating each element of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; by the product of itself and its index, and adding together the elements of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; into the variable &amp;lt;code&amp;gt;sum&amp;lt;/code&amp;gt;. The corresponding code is shown below.&lt;br /&gt;
&lt;br /&gt;
 // simple sequential task&lt;br /&gt;
 sum = 0;&lt;br /&gt;
 '''for''' (i = 0; i &amp;lt; a.length; i++)&lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    sum = sum + a[i];&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
When we orchestrate the task using the data-parallel programming model, the program can be divided into two parts. The first part performs the same operations on separate elements of the array for each processing element (sometimes referred to as PE or pe), and the second part reorganizes the data among all processing elements (in our example, data reorganization means summing up the values across the different processing elements). Since the data-parallel programming model only defines the overall effects of parallel steps, the second part can be accomplished either through shared memory or through message passing. The code fragment below implements the first part of the program; the second part can then be written in either a shared-memory or a message-passing style.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 // data parallel programming: let each PE perform the same task on different pieces of distributed data&lt;br /&gt;
 pe_id = getid();&lt;br /&gt;
 my_sum = 0;&lt;br /&gt;
 '''for''' (i = pe_id; i &amp;lt; a.length; i += number_of_pe)         //separate elements of the array are assigned to each PE &lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    my_sum = my_sum + a[i];                               //all PEs accumulate elements assigned to them into local variable my_sum&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the above code, data parallelism is achieved by letting each processing element perform actions on separate elements of the array, which are identified using the PE's id. For instance, if three processing elements are used, one processing element starts at i = 0, one starts at i = 1, and the last starts at i = 2. Since there are three processing elements, the array index for each increases by three on each iteration until the task is complete (note that in our example the elements assigned to each PE are interleaved rather than contiguous). If the length of the array is a multiple of three, each processing element takes the same amount of time to execute its portion of the task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The picture below illustrates how the elements of the array are assigned among the different PEs for a specific case: the array has length 7 and 3 PEs are available. The elements of the array are marked by their indices (0 to 6). As shown in the picture, PE0 works on the elements with indices 0, 3, and 6; PE1 is in charge of the elements with indices 1 and 4; and the elements with indices 2 and 5 are assigned to PE2. In this way, the 3 PEs work collectively on the array, while each PE works on different elements. Thus, data parallelism is achieved.&lt;br /&gt;
&lt;br /&gt;
[[Image:506wiki1.png|frame|center|150px|Illustration of data parallel programming(adapted from [http://computing.llnl.gov/tutorials/parallel_comp/#ModelsData Introduction to Parallel Computing])]]&lt;br /&gt;
&lt;br /&gt;
== Combining with message passing and shared memory ==&lt;br /&gt;
Although the shared memory and message passing models may be combined into hybrid approaches, the two models are fundamentally different ways of addressing the same problem (of access control to common data). In contrast, the data parallel model is concerned with a fundamentally different problem (how to divide work into parallel tasks). As such, the data parallel model may be used in conjunction with either the shared memory or the message passing model without conflict. In fact, Klaiber (1994) compares the performance of a number of data parallel programs implemented with both shared memory and message passing models.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One of the major advantages of combining the data parallel and message passing models is a reduction in the amount and complexity of communication required relative to a task parallel approach. Similarly, combining the data parallel and shared memory models tends to simplify and reduce the amount of synchronization required. Much as the shared memory model can benefit from specialized hardware, the data parallel programming model can as well. [[#Definitions |''SIMD'']] processors are specifically designed to run data parallel algorithms. These processors perform a single instruction on many different data locations simultaneously. Modern examples include CUDA processors developed by nVidia and Cell processors developed by STI (Sony, Toshiba, and IBM). For the curious, example code for CUDA processors is provided in the Appendix. However, whereas the shared memory model can be a difficult and costly abstraction in the absence of hardware support, the data parallel model—like the message passing model—does not require hardware support.&lt;br /&gt;
&lt;br /&gt;
= Task-Parallel Model =&lt;br /&gt;
The logical opposite of data parallel is [[#Definitions | ''task parallel,'']] in which a number of distinct tasks operate on common data. Task parallelism is a form of parallelization in which different instruction streams are executed, either on the same data or on different data. It focuses on distributing the execution of processes (threads) across different parallel computing nodes. As part of the workflow, the different execution threads communicate with one another as they work in order to share data.&lt;br /&gt;
&lt;br /&gt;
== Description and example ==&lt;br /&gt;
An example of task parallel code follows below. Note that, unlike the data parallel example above, each thread executes different code: one produces the element-wise sum of arrays ''b'' and ''c'', while the other accumulates the positive elements of the result.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 // Task parallel code.&lt;br /&gt;
 &lt;br /&gt;
 int id = getmyid(); // assume id = 0 for thread 0, id = 1 for thread 1&lt;br /&gt;
 &lt;br /&gt;
 if (id == 0)&lt;br /&gt;
 {&lt;br /&gt;
     for (i = 0; i &amp;lt; 8; i++)&lt;br /&gt;
     {&lt;br /&gt;
         a[i] = b[i] + c[i];&lt;br /&gt;
         send_msg(P1, a[i]);&lt;br /&gt;
     }&lt;br /&gt;
 }&lt;br /&gt;
 else&lt;br /&gt;
 {&lt;br /&gt;
     sum = 0;&lt;br /&gt;
     for (i = 0; i &amp;lt; 8; i++)&lt;br /&gt;
     {&lt;br /&gt;
         recv_msg(P0, a[i]);&lt;br /&gt;
         if (a[i] &amp;gt; 0)&lt;br /&gt;
             sum = sum + a[i];&lt;br /&gt;
     }&lt;br /&gt;
     Print sum;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
In the code above, work is divided into two parallel tasks.  The first performs the element-wise addition of arrays ''b'' and ''c'' and stores the result in ''a.''  The other sums the elements of ''a.''  These tasks both operate on all elements of ''a'' (rather than on separate chunks), and the code executed by each thread is different (rather than identical).&lt;br /&gt;
&lt;br /&gt;
= Data Parallel Model vs Task Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow: there is only one control processor, which directs the activities of all the processing elements. In stark contrast is task parallelism (MIMD: Multiple Instruction, Multiple Data): characterized by multiple control flows, it allows the concurrent execution of multiple instruction streams, each of which manipulates its own data and serves a separate function. Below is a contrast between the data parallelism and task parallelism models from Wikipedia: [http://en.wikipedia.org/wiki/SIMD SIMD] and [http://en.wikipedia.org/wiki/MIMD MIMD]. In the following subsections we continue to compare and contrast features of the data-parallel and task-parallel models to help the reader understand the unique characteristics of the data-parallel programming model.&lt;br /&gt;
[[Image:Smid.png|frame|center|425px|contrast between data parallelism and task parallelism]]&lt;br /&gt;
&lt;br /&gt;
Since each parallel task is unique, a major limitation of task parallel algorithms is that the maximum degree of parallelism attainable is limited to the number of tasks that have been formulated.  This is in contrast to data parallel algorithms, which can be scaled easily to take advantage of an arbitrary number of processing elements.  In addition, unique tasks are likely to have significantly different run times, making it more challenging to balance load across processors. [[#References | Haveraaen (2000)]] also notes that task parallel algorithms are inherently more complex, requiring a greater degree of communication and synchronization.&lt;br /&gt;
&lt;br /&gt;
== Synchronous vs asynchronous ==&lt;br /&gt;
While the [http://en.wikipedia.org/wiki/Lockstep_(computing) lockstep] imposed by data parallelism on all data streams ensures synchronous computation (all PEs perform their tasks at the exact same pace), in task parallelism every processor performs its task at its own pace, which we call asynchronous computation. Thus, at certain points in a task parallel program's execution, communication and synchronization primitives are needed to allow different instruction streams to coordinate their efforts, and that is where variable-sharing and message-passing come into play.&lt;br /&gt;
&lt;br /&gt;
== Determinism vs. non-determinism ==&lt;br /&gt;
Data parallelism's synchronous nature and task parallelism's asynchrony give rise to another distinguishing pair of features: determinism versus non-determinism. Data parallelism is deterministic, i.e., computing with the same input will always yield the same result, since its synchronism ensures that issues like relative timing between PEs do not arise. In contrast, task parallelism's asynchronous updates of common data can give rise to non-determinism, i.e., the same input won't always yield the same result (the result of a computation also depends on factors outside the program's control, such as the scheduling and timing of other PEs). Non-determinism makes it harder to write and maintain correct programs. This partially explains the advantage of the data parallel programming model over task parallelism in terms of development effort (also discussed in section 4.2).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Summary of major differences between the data parallel and task parallel models ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
|+ '''Comparison between data parallel and task parallel programming models.'''&lt;br /&gt;
|-&lt;br /&gt;
! Aspects&lt;br /&gt;
! Data Parallel&lt;br /&gt;
! Task Parallel&lt;br /&gt;
|-&lt;br /&gt;
| Decomposition&lt;br /&gt;
| Partition data into subsets&lt;br /&gt;
| Partition program into subtasks&lt;br /&gt;
|-&lt;br /&gt;
| Parallel tasks&lt;br /&gt;
| Identical&lt;br /&gt;
| Unique&lt;br /&gt;
|-&lt;br /&gt;
| Degree of parallelism&lt;br /&gt;
| Scales easily&lt;br /&gt;
| Fixed&lt;br /&gt;
|-&lt;br /&gt;
| Load balancing&lt;br /&gt;
| Easier&lt;br /&gt;
| Harder&lt;br /&gt;
|-&lt;br /&gt;
| Communication overhead&lt;br /&gt;
| Lower&lt;br /&gt;
| Higher&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Definitions =&lt;br /&gt;
&lt;br /&gt;
* ''Data parallel.''  A data parallel algorithm is composed of a set of identical tasks which operate on different subsets of common data.&lt;br /&gt;
* ''Task parallel.''  A task parallel algorithm is composed of a set of differing tasks which operate on common data.&lt;br /&gt;
* ''SIMD (single-instruction-multiple-data).''  A processor which executes a single instruction simultaneously on multiple data locations.&lt;br /&gt;
* ''MIMD (multiple-instruction-multiple-data).'' A processor architecture which can execute multiple instructions across multiple data elements simultaneously.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
* David E. Culler, Jaswinder Pal Singh, and Anoop Gupta, [http://portal.acm.org/citation.cfm?id=550071 ''Parallel Computer Architecture: A Hardware/Software Approach,''] Morgan-Kauffman, 1999.&lt;br /&gt;
* Ian Foster, [http://www.mcs.anl.gov/~itf/dbpp/ ''Designing and Building Parallel Programs,''] Addison-Wesley, 1995.&lt;br /&gt;
* Magne Haveraaen, [http://portal.acm.org/citation.cfm?id=1239917 &amp;quot;Machine and collection abstractions for user-implemented data-parallel programming,&amp;quot;] ''Scientific Programming,'' 8(4):231-246, 2000.&lt;br /&gt;
* W. Daniel Hillis and Guy L. Steele, Jr., [http://portal.acm.org/citation.cfm?id=7903 &amp;quot;Data parallel algorithms,&amp;quot;] ''Communications of the ACM,'' 29(12):1170-1183, December 1986.&lt;br /&gt;
* Alexander C. Klaiber and Henry M. Levy, [http://portal.acm.org/citation.cfm?id=192020 &amp;quot;A comparison of message passing and shared memory architectures for data parallel programs,&amp;quot;] in ''Proceedings of the 21st Annual International Symposium on Computer Architecture,'' April 1994, pp. 94-105.&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* Philip J. Hatcher, Michael Jay Quinn, ''Data-Parallel Programming on MIMD Computers'', The MIT Press, 1991.&lt;br /&gt;
* Blaise Barney, &amp;quot;Introduction to Parallel Computing: Data Parallel Model&amp;quot;, Lawrence Livermore National Laboratory, [https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData], January 2009.&lt;br /&gt;
* Guy Blelloch, &amp;quot;Is Parallel Programming Hard?&amp;quot;, Carnegie Mellon University, [http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard], April 2009.&lt;br /&gt;
* Björn Lisper, ''Data parallelism and functional programming'', Lecture Notes in Computer Science, Volume 1132/1996, pp. 220-251, Springer Berlin, 1996.&lt;br /&gt;
* ''SIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SIMD http://en.wikipedia.org/wiki/SIMD].&lt;br /&gt;
* ''MIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/MIMD http://en.wikipedia.org/wiki/MIMD].&lt;br /&gt;
* ''Lockstep'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/Lockstep_(computing) http://en.wikipedia.org/wiki/Lockstep_(computing)].&lt;br /&gt;
* ''SPMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SPMD http://en.wikipedia.org/wiki/SPMD].&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=44728</id>
		<title>CSC/ECE 506 Spring 2011/ch2 JR</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=44728"/>
		<updated>2011-04-01T14:00:15Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Supplement to Chapter 2: The Data Parallel Programming Model=&lt;br /&gt;
Chapter 2 of [[#References | Solihin (2008)]] covers the shared memory and message passing parallel programming models.  However, it does not give an historical context for the development of parallel programming models.  It also does not address other commonly recognized parallel programming models like the [[#Definitions | ''task parallel'']] model or the [[#Definitions | ''data parallel'']] model, which have been covered in other treatments like [[#References | Foster (1995)]] and [[#References | Culler (1999)]].&lt;br /&gt;
&lt;br /&gt;
Shared memory and message passing models are often presented as competing models, but the data and task parallel models address fundamentally different programming concerns and can therefore be used in conjunction with either.  The goal of this supplement is to provide historical context for the development of parallel programming models and a treatment of the data and task parallel models to complement Chapter 2 of [[#References | Solihin (2008)]].  &lt;br /&gt;
&lt;br /&gt;
= Overview =&lt;br /&gt;
Whereas the shared memory and message passing models focus on how parallel tasks access common data, the [[#Definitions | ''data parallel'']] model focuses on how to divide up work into parallel tasks.  Data parallel algorithms exploit parallelism by dividing a problem into a number of identical tasks which execute on different subsets of common data.  The logical opposite of data parallel is task parallel, in which a number of distinct tasks operate on common data.  Historically, each parallel programming model was developed to take advantage of advancements in computer architecture.&lt;br /&gt;
&lt;br /&gt;
= History =&lt;br /&gt;
As computer architectures have evolved, so have parallel programming models. The two factors that most influenced parallel computing performance were the speed of the individual processor and the speed of the communication connections, including access to memory (local and main) as well as communication between processors.  The earliest advancements in parallel computers took advantage of bit-level parallelism from improvements made to chip design.  These computers mainly used vector processing, and each processor had a direct, fast connection to memory.  This gave rise to the shared memory programming model.  As performance returns from this architecture diminished and the complexity of building machines with direct access to memory increased, the emphasis shifted to instruction-level parallelism and distributed memory systems, and the message passing model began to dominate.  Most recently, with the move to cluster-based machines, there has been an increased emphasis on thread-level parallelism, which has corresponded to an increased interest in the data parallel programming model.&lt;br /&gt;
&lt;br /&gt;
== Shared memory in the 1970's ==&lt;br /&gt;
The major performance improvements from computers during this time were due to the ability to execute 32-bit word size operations at one time ([[#References|Culler (1999), p. 15.]]).  The dominant supercomputers of the time, like the Cray and the ILLIAC IV, were mainly [[#Definitions| ''SIMD'']] architectures and used a shared memory programming model.  They each used different forms of vector processing ([[#References|Culler (1999), p. 21.]]). &lt;br /&gt;
Development of the ILLIAC IV began in 1964 and wasn't finished until 1975 [http://en.wikipedia.org/wiki/ILLIAC_IV].  A central processor was connected to the main memory and delegated tasks to individual PEs, each of which had its own memory cache [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].  Each PE could operate on an 8-, 32-, or 64-bit operand at a given time [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].&lt;br /&gt;
&lt;br /&gt;
The Cray-1 was installed at Los Alamos National Laboratory in 1976 by Cray Research and had similar performance to the ILLIAC IV [http://en.wikipedia.org/wiki/ILLIAC_IV].  Rather than using an array of individual processors like the ILLIAC IV, the Cray machine relied heavily on registers: the processor was connected to main memory and had a number of 64-bit registers used to perform operations [http://www.eecg.toronto.edu/~moshovos/ACA05/read/cray1.pdf].&lt;br /&gt;
&lt;br /&gt;
All of these early machines relied on there being a direct connection between each processor and the memory.  Not only did there have to be a direct connection, but each connection had to allow relatively similar access to each part of memory.  As you increased the number of processors, you had to increase the number of connections, which meant that this architecture did not scale well.  At the same time, processors were becoming more and more advanced.  The first single chip microprocessor was introduced in 1971 [http://en.wikipedia.org/wiki/Intel_4004].  Due to these two pressures, there was a movement away from large shared memory supercomputers and towards distributed memory systems.&lt;br /&gt;
&lt;br /&gt;
== Move to message passing in the 1980's ==&lt;br /&gt;
&lt;br /&gt;
Increasing the word size above 32 bits offered diminishing returns in terms of performance ([[#References|Culler (1999), p. 15.]]), so there were fewer performance gains to be had from processor improvements.  At the same time, processors were becoming smaller, connections were becoming more efficient, and connecting processors to do work in parallel became more viable.  In the late 70's and early 80's, Massively Parallel Processors (MPPs) emerged [http://www.intel.com/pressroom/kits/upcrc/parallelcomputing_backgrounder.pdf].  These consisted of separate computational units, each with its own memory and a link to the network connecting the units.  This structure allowed separate units to communicate the results of computations to each other without needing a direct connection to each memory location.  This change in architecture shifted the emphasis from bit-level parallelism to instruction-level parallelism, which involved increasing the number of instructions that could be executed at one time ([[#References|Culler (1999), p. 15.]]).  This gave rise to the message passing model, which gave programmers the ability to divide up instructions in order to take advantage of this architecture. &lt;br /&gt;
&lt;br /&gt;
Organizing each of the nodes in an MPP posed its own problems.  Some MPPs organized the connections into hypercubes, but these proved difficult to build [http://en.wikipedia.org/wiki/MIMD].  Among the most successful were the Connection Machines [http://ed-thelen.org/comp-hist/vs-cm-1-2-5.html].  Other architectures used 2-D meshes.  All of these strategies meant that each message might have to pass through a number of nodes before reaching its final destination, which introduced its own restrictions on performance because each node had to handle routing duties.  As networking technology became faster and individual processors became more efficient, it became reasonable to connect separate computers across a network, which gave rise to cluster machines.&lt;br /&gt;
&lt;br /&gt;
== Current trend to cluster machines ==&lt;br /&gt;
In the 1990's, clusters of off-the-shelf computers became the dominant trend in parallel computer architecture.  Network connections were becoming faster and faster, so it was no longer necessary to build custom hardware for parallel computing: commodity computers connected via networks could offer similar performance.  These cluster-based machines added another layer of complexity to parallelism.  Since computers could be located across a network from each other, there is more emphasis on software acting as a bridge [http://cobweb.ecn.purdue.edu/~pplinux/ppcluster.html].  This has led to a greater emphasis on thread- or task-level parallelism and to the addition of the data parallel programming model alongside existing message passing or shared memory models [http://en.wikipedia.org/wiki/Thread-level_parallelism].  &lt;br /&gt;
&lt;br /&gt;
= Data-Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow: there is only one control processor that directs the activities of all the processing elements. In Flynn's taxonomy, SIMD corresponds to performing the same operation repeatedly over a large data set. In a multiprocessor system executing a single set of instructions (SIMD), data parallelism is achieved when each processor performs the same task on different pieces of distributed data. In some situations, a single execution thread controls operations on all pieces of data. In others, different threads control the operation, but they execute the same code.&lt;br /&gt;
&lt;br /&gt;
== Description and example ==&lt;br /&gt;
&lt;br /&gt;
This section shows a simple example, adapted from the Solihin textbook (pp. 24-27), that illustrates the data-parallel programming model. Each of the code fragments below is written in pseudo-code style.&lt;br /&gt;
&lt;br /&gt;
Suppose we want to perform the following task on an array &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt;: updating each element of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; by the product of itself and its index, and adding together the elements of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; into the variable &amp;lt;code&amp;gt;sum&amp;lt;/code&amp;gt;. The corresponding code is shown below.&lt;br /&gt;
&lt;br /&gt;
 // simple sequential task&lt;br /&gt;
 sum = 0;&lt;br /&gt;
 '''for''' (i = 0; i &amp;lt; a.length; i++)&lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    sum = sum + a[i];&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
When we orchestrate the task using the data-parallel programming model, the program can be divided into two parts. The first part performs the same operations on separate elements of the array for each processing element (sometimes referred to as PE or pe), and the second part reorganizes data among all processing elements (in our example, data reorganization means summing up values across different processing elements). Since the data-parallel programming model defines only the overall effects of parallel steps, the second part can be accomplished either through shared memory or message passing. The three code fragments below are examples of the first part of the program, the shared-memory version of the second part, and the message passing version of the second part, respectively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 // data parallel programming: let each PE perform the same task on different pieces of distributed data&lt;br /&gt;
 pe_id = getid();&lt;br /&gt;
 my_sum = 0;&lt;br /&gt;
 '''for''' (i = pe_id; i &amp;lt; a.length; i += number_of_pe)         //separate elements of the array are assigned to each PE &lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    my_sum = my_sum + a[i];                               //all PEs accumulate elements assigned to them into local variable my_sum&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the above code, data parallelism is achieved by letting each processing element perform actions on separate elements of the array, which are identified using the PE's id. For instance, if three processing elements are used, then one processing element starts at i = 0, one starts at i = 1, and the last starts at i = 2. The array index for each then increases by three on each iteration until the task is complete (note that in our example the elements assigned to each PE are interleaved rather than contiguous). If the length of the array is a multiple of three, then each processing element takes the same amount of time to execute its portion of the task.&lt;br /&gt;
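&lt;br /&gt;
The second part of the program, the data reorganization, can be accomplished in either model. The two fragments below are pseudo-code sketches only; the primitive names (lock, unlock, barrier, send_msg, recv_msg, PE(p)) are assumed in the style of the Solihin examples rather than drawn from a specific library.&lt;br /&gt;
&lt;br /&gt;
 // shared memory version of the second part: accumulate partial sums into a shared variable&lt;br /&gt;
 lock(sum_lock);                // ensure only one PE updates sum at a time&lt;br /&gt;
 sum = sum + my_sum;&lt;br /&gt;
 unlock(sum_lock);&lt;br /&gt;
 barrier();                     // wait until all PEs have added their my_sum&lt;br /&gt;
&lt;br /&gt;
 // message passing version of the second part: send partial sums to PE 0&lt;br /&gt;
 if (pe_id != 0)&lt;br /&gt;
     send_msg(PE(0), my_sum);&lt;br /&gt;
 else&lt;br /&gt;
 {&lt;br /&gt;
     sum = my_sum;&lt;br /&gt;
     for (p = 1; p &amp;lt; number_of_pe; p++)&lt;br /&gt;
     {&lt;br /&gt;
         recv_msg(PE(p), partial_sum);   // receive one partial sum from PE p&lt;br /&gt;
         sum = sum + partial_sum;&lt;br /&gt;
     }&lt;br /&gt;
 }&lt;br /&gt;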
&lt;br /&gt;
&lt;br /&gt;
The picture below illustrates how elements of the array are assigned among different PEs for the specific case: length of the array is 7 and there are 3 PEs available. Elements in the array are marked by their indexes (0 to 6). As shown in the picture, PE0 will work on elements with index 0, 3, 6; PE1 is in charge of elements with index 1, 4; and elements with index 2, 5 are assigned to PE2. In this way, these 3 PEs work collectively on the array, while each PE works on different elements. Thus, data parallelism is achieved.&lt;br /&gt;
&lt;br /&gt;
[[Image:506wiki1.png|frame|center|150px|Illustration of data parallel programming(adapted from [http://computing.llnl.gov/tutorials/parallel_comp/#ModelsData Introduction to Parallel Computing])]]&lt;br /&gt;
&lt;br /&gt;
== Combining with message passing and shared memory ==&lt;br /&gt;
Although the shared memory and message passing models may be combined into hybrid approaches, the two models are fundamentally different ways of addressing the same problem (of access control to common data). In contrast, the data parallel model is concerned with a fundamentally different problem (how to divide work into parallel tasks). As such, the data parallel model may be used in conjunction with either the shared memory or the message passing model without conflict. In fact, Klaiber (1994) compares the performance of a number of data parallel programs implemented with both shared memory and message passing models.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One of the major advantages of combining the data parallel and message passing models is a reduction in the amount and complexity of communication required relative to a task parallel approach. Similarly, combining the data parallel and shared memory models tends to simplify and reduce the amount of synchronization required. Much as the shared memory model can benefit from specialized hardware, the data parallel programming model can as well. [[#Definitions |''SIMD'']] processors are specifically designed to run data parallel algorithms. These processors perform a single instruction on many different data locations simultaneously. Modern examples include CUDA processors developed by nVidia and Cell processors developed by STI (Sony, Toshiba, and IBM). For the curious, example code for CUDA processors is provided in the Appendix. However, whereas the shared memory model can be a difficult and costly abstraction in the absence of hardware support, the data parallel model—like the message passing model—does not require hardware support.&lt;br /&gt;
&lt;br /&gt;
= Task-Parallel Model =&lt;br /&gt;
Task parallelism is a form of parallelization in which multiple instruction streams are executed, either on the same data or on multiple data. It focuses on distributing the execution of processes (threads) across different parallel computing nodes. As part of the workflow, different execution threads communicate with one another to share data as they work.&lt;br /&gt;
&lt;br /&gt;
== Description and example ==&lt;br /&gt;
Suppose the task to be accomplished is to compute the sum of the results associated with the execution of instruction &amp;lt;tt&amp;gt;A&amp;lt;/tt&amp;gt; and instruction &amp;lt;tt&amp;gt;B&amp;lt;/tt&amp;gt;. The following example illustrates how task parallelism can be achieved.&lt;br /&gt;
&lt;br /&gt;
The pseudocode below illustrates task parallelism:&lt;br /&gt;
&amp;lt;pre&amp;gt;program:&lt;br /&gt;
do &lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
if CPU=&amp;quot;a&amp;quot; then&lt;br /&gt;
   do task &amp;quot;A&amp;quot;&lt;br /&gt;
else if CPU=&amp;quot;b&amp;quot; then&lt;br /&gt;
   do task &amp;quot;B&amp;quot;&lt;br /&gt;
end if&lt;br /&gt;
&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If we write the code as above and launch it on a 2-processor system, then the runtime environment will execute it accordingly.&lt;br /&gt;
In an SPMD system, both CPUs will execute the code. In a parallel environment, both will have access to the same data. The &amp;quot;if&amp;quot; clause differentiates between the CPUs: CPU &amp;quot;a&amp;quot; will read true on the &amp;quot;if&amp;quot; and CPU &amp;quot;b&amp;quot; will read true on the &amp;quot;else if&amp;quot;, so each has its own task. The two CPUs then execute separate code blocks simultaneously, performing different tasks.&lt;br /&gt;
Code executed by CPU &amp;quot;a&amp;quot;:&lt;br /&gt;
 program:&lt;br /&gt;
 ...&lt;br /&gt;
 do task &amp;quot;A&amp;quot;&lt;br /&gt;
 ...&lt;br /&gt;
 end program&lt;br /&gt;
Code executed by CPU &amp;quot;b&amp;quot;:&lt;br /&gt;
 program:&lt;br /&gt;
 ...&lt;br /&gt;
 do task &amp;quot;B&amp;quot;&lt;br /&gt;
 ...&lt;br /&gt;
 end program&lt;br /&gt;
This concept can now be generalized to any number of processors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
program:&lt;br /&gt;
...&lt;br /&gt;
if CPU=&amp;quot;a&amp;quot; then&lt;br /&gt;
   do task &amp;quot;A&amp;quot;&lt;br /&gt;
else if CPU=&amp;quot;b&amp;quot; then&lt;br /&gt;
   do task &amp;quot;B&amp;quot;&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
else if CPU =&amp;quot;n&amp;quot; then&lt;br /&gt;
   do task &amp;quot;N&amp;quot;&lt;br /&gt;
&lt;br /&gt;
end if&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Data Parallel Model vs Task Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow: one control processor directs the activities of all the processing elements. In stark contrast, task parallelism (MIMD: Multiple Instruction, Multiple Data) is characterized by multiple control flows: it allows the concurrent execution of multiple instruction streams, each manipulating its own data and serving a separate function. Below is a contrast between the data parallelism and task parallelism models from Wikipedia: [http://en.wikipedia.org/wiki/SIMD SIMD] and [http://en.wikipedia.org/wiki/MIMD MIMD]. In the following subsections we continue to compare and contrast features of the data-parallel and task-parallel models to help the reader understand the unique characteristics of the data-parallel programming model.&lt;br /&gt;
[[Image:Smid.png|frame|center|425px|contrast between data parallelism and task parallelism]]&lt;br /&gt;
&lt;br /&gt;
Since each parallel task is unique, a major limitation of task parallel algorithms is that the maximum degree of parallelism attainable is limited to the number of tasks that have been formulated.  This is in contrast to data parallel algorithms, which can be scaled easily to take advantage of an arbitrary number of processing elements.  In addition, unique tasks are likely to have significantly different run times, making it more challenging to balance load across processors. [[#References | Haveraaen (2000)]] also notes that task parallel algorithms are inherently more complex, requiring a greater degree of communication and synchronization.&lt;br /&gt;
&lt;br /&gt;
== Synchronous vs asynchronous ==&lt;br /&gt;
While the [http://en.wikipedia.org/wiki/Lockstep_(computing) lockstep] imposed by data parallelism on all data streams ensures synchronous computation (all PEs perform their tasks at the exact same pace), in task parallelism every processor performs its task at its own pace, which we call asynchronous computation. Thus, at certain points in a task parallel program's execution, communication and synchronization primitives are needed to allow different instruction streams to coordinate their efforts, and that is where variable-sharing and message-passing come into play.&lt;br /&gt;
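&lt;br /&gt;
Such a coordination point can be sketched in pseudo-code as follows; the primitive names (do_my_task, barrier, send_msg, recv_msg, PE_other) are assumed for illustration rather than drawn from a specific library.&lt;br /&gt;
&lt;br /&gt;
 // each instruction stream computes at its own pace ...&lt;br /&gt;
 my_result = do_my_task();&lt;br /&gt;
 // ... then the streams must coordinate before using each other's results.&lt;br /&gt;
 // Variable-sharing style: wait at a barrier, then read shared results.&lt;br /&gt;
 barrier();&lt;br /&gt;
 // Message-passing style: exchange results explicitly.&lt;br /&gt;
 send_msg(PE_other, my_result);&lt;br /&gt;
 recv_msg(PE_other, their_result);&lt;br /&gt;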
&lt;br /&gt;
== Determinism vs. non-determinism ==&lt;br /&gt;
Data parallelism's synchronous nature and task parallelism's asynchrony give rise to another distinguishing pair of features: determinism versus non-determinism. Data parallelism is deterministic, i.e., computing with the same input will always yield the same result, since its synchronism ensures that issues like relative timing between PEs do not arise. In contrast, task parallelism's asynchronous updates of common data can give rise to non-determinism, i.e., the same input won't always yield the same result (the result of a computation also depends on factors outside the program's control, such as the scheduling and timing of other PEs). Non-determinism makes it harder to write and maintain correct programs. This partially explains the advantage of the data parallel programming model over task parallelism in terms of development effort (also discussed in section 4.2).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Summary of major differences between the data parallel and task parallel models ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
|+ '''Comparison between data parallel and task parallel programming models.'''&lt;br /&gt;
|-&lt;br /&gt;
! Aspects&lt;br /&gt;
! Data Parallel&lt;br /&gt;
! Task Parallel&lt;br /&gt;
|-&lt;br /&gt;
| Decomposition&lt;br /&gt;
| Partition data into subsets&lt;br /&gt;
| Partition program into subtasks&lt;br /&gt;
|-&lt;br /&gt;
| Parallel tasks&lt;br /&gt;
| Identical&lt;br /&gt;
| Unique&lt;br /&gt;
|-&lt;br /&gt;
| Degree of parallelism&lt;br /&gt;
| Scales easily&lt;br /&gt;
| Fixed&lt;br /&gt;
|-&lt;br /&gt;
| Load balancing&lt;br /&gt;
| Easier&lt;br /&gt;
| Harder&lt;br /&gt;
|-&lt;br /&gt;
| Communication overhead&lt;br /&gt;
| Lower&lt;br /&gt;
| Higher&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Definitions =&lt;br /&gt;
&lt;br /&gt;
* ''Data parallel.''  A data parallel algorithm is composed of a set of identical tasks which operate on different subsets of common data.&lt;br /&gt;
* ''Task parallel.''  A task parallel algorithm is composed of a set of differing tasks which operate on common data.&lt;br /&gt;
* ''SIMD (single-instruction-multiple-data).''  A processor which executes a single instruction simultaneously on multiple data locations.&lt;br /&gt;
* ''MIMD (multiple-instruction-multiple-data).'' A processor architecture which can execute multiple instructions across multiple data elements simultaneously.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
* David E. Culler, Jaswinder Pal Singh, and Anoop Gupta, [http://portal.acm.org/citation.cfm?id=550071 ''Parallel Computer Architecture: A Hardware/Software Approach,''] Morgan-Kauffman, 1999.&lt;br /&gt;
* Ian Foster, [http://www.mcs.anl.gov/~itf/dbpp/ ''Designing and Building Parallel Programs,''] Addison-Wesley, 1995.&lt;br /&gt;
* Magne Haveraaen, [http://portal.acm.org/citation.cfm?id=1239917 &amp;quot;Machine and collection abstractions for user-implemented data-parallel programming,&amp;quot;] ''Scientific Programming,'' 8(4):231-246, 2000.&lt;br /&gt;
* W. Daniel Hillis and Guy L. Steele, Jr., [http://portal.acm.org/citation.cfm?id=7903 &amp;quot;Data parallel algorithms,&amp;quot;] ''Communications of the ACM,'' 29(12):1170-1183, December 1986.&lt;br /&gt;
* Alexander C. Klaiber and Henry M. Levy, [http://portal.acm.org/citation.cfm?id=192020 &amp;quot;A comparison of message passing and shared memory architectures for data parallel programs,&amp;quot;] in ''Proceedings of the 21st Annual International Symposium on Computer Architecture,'' April 1994, pp. 94-105.&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* Philip J. Hatcher, Michael Jay Quinn, ''Data-Parallel Programming on MIMD Computers'', The MIT Press, 1991.&lt;br /&gt;
* Blaise Barney, &amp;quot;Introduction to Parallel Computing: Data Parallel Model&amp;quot;, Lawrence Livermore National Laboratory, [https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData], January 2009.&lt;br /&gt;
* Guy Blelloch, &amp;quot;Is Parallel Programming Hard?&amp;quot;, Carnegie Mellon University, [http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard], April 2009.&lt;br /&gt;
* Björn Lisper, ''Data parallelism and functional programming'', Lecture Notes in Computer Science, Volume 1132/1996, pp. 220-251, Springer Berlin, 1996.&lt;br /&gt;
* ''SIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SIMD http://en.wikipedia.org/wiki/SIMD].&lt;br /&gt;
* ''MIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/MIMD http://en.wikipedia.org/wiki/MIMD].&lt;br /&gt;
* ''Lockstep'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/Lockstep_(computing) http://en.wikipedia.org/wiki/Lockstep_(computing)].&lt;br /&gt;
* ''SPMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SPMD http://en.wikipedia.org/wiki/SPMD].&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=44727</id>
		<title>CSC/ECE 506 Spring 2011/ch2 JR</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=44727"/>
		<updated>2011-04-01T13:59:59Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Supplement to Chapter 2: The Data Parallel Programming Model=&lt;br /&gt;
Chapter 2 of [[#References | Solihin (2008)]] covers the shared memory and message passing parallel programming models.  However, it does not give an historical context for the development of parallel programming models.  It also does not address other commonly recognized parallel programming models like the [[#Definitions | ''task parallel'']] model or the [[#Definitions | ''data parallel'']] model, which have been covered in other treatments like [[#References | Foster (1995)]] and [[#References | Culler (1999)]].&lt;br /&gt;
&lt;br /&gt;
Shared memory and message passing models are often presented as competing models, but the data and task parallel models address fundamentally different programming concerns and can therefore be used in conjunction with either.  The goal of this supplement is to provide historical context for the development of parallel programming models and a treatment of the data and task parallel models to complement Chapter 2 of [[#References | Solihin (2008)]].  &lt;br /&gt;
&lt;br /&gt;
= Overview =&lt;br /&gt;
Whereas the shared memory and message passing models focus on how parallel tasks access common data, the [[#Definitions | ''data parallel'']] model focuses on how to divide up work into parallel tasks.  Data parallel algorithms exploit parallelism by dividing a problem into a number of identical tasks which execute on different subsets of common data.  The logical opposite of data parallel is task parallel, in which a number of distinct tasks operate on common data.  Historically, each parallel programming model was developed to take advantage of advancements in computer architecture.&lt;br /&gt;
&lt;br /&gt;
= History =&lt;br /&gt;
As computer architectures have evolved, so have parallel programming models. The two factors that most influenced parallel computing performance were the speed of the individual processors and the speed of the communication connections.  These communication connections include access to memory (both local and main), as well as communication between processors. The earliest advancements in parallel computers took advantage of bit-level parallelism from improvements made to chip design.  These computers mainly used vector processing, and each processor had a direct, fast connection to memory.  This gave rise to the shared memory programming model.  As performance returns from this architecture diminished and the complexity of building machines with direct access to memory increased, the emphasis shifted to instruction-level parallelism and distributed memory systems, and the message passing model began to dominate.  Most recently, with the move to cluster-based machines, there has been an increased emphasis on thread-level parallelism. This has corresponded to an increased interest in the data parallel programming model.&lt;br /&gt;
&lt;br /&gt;
== Shared memory in the 1970's ==&lt;br /&gt;
The major performance improvements from computers during this time were due to the ability to execute 32-bit word size operations at one time ([[#References|Culler (1999), p. 15.]]).  The dominant supercomputers of the time, like the Cray and the ILLIAC IV, were mainly [[#Definitions| ''SIMD'']] architectures and used a shared memory programming model.  They each used different forms of vector processing ([[#References|Culler (1999), p. 21.]]). &lt;br /&gt;
Development of the ILLIAC IV began in 1964 and wasn't finished until 1975 [http://en.wikipedia.org/wiki/ILLIAC_IV].  A central processor was connected to the main memory and delegated tasks to individual PEs, each of which had its own memory cache [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].  Each PE could operate on an 8-, 32-, or 64-bit operand at a given time [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].&lt;br /&gt;
&lt;br /&gt;
The Cray-1 was installed at Los Alamos National Laboratory in 1976 by Cray Research and had performance similar to the ILLIAC IV [http://en.wikipedia.org/wiki/ILLIAC_IV].  Unlike the ILLIAC IV's array of individual processors, the Cray machine relied heavily on the use of registers.  Each processor was connected to main memory and had a number of 64-bit registers used to perform operations [http://www.eecg.toronto.edu/~moshovos/ACA05/read/cray1.pdf].&lt;br /&gt;
&lt;br /&gt;
All of these early machines relied on a direct connection between each processor and memory.  Not only did there have to be a direct connection, but each connection had to allow relatively similar access to every part of memory.  As the number of processors increased, the number of connections had to increase as well, which meant that this architecture did not scale well.  At the same time, processors were becoming more and more advanced.  The first single-chip microprocessor was introduced in 1971 [http://en.wikipedia.org/wiki/Intel_4004].  Due to these two pressures, there was a movement away from large shared memory supercomputers and towards distributed memory systems.&lt;br /&gt;
&lt;br /&gt;
== Move to message passing in the 1980's ==&lt;br /&gt;
&lt;br /&gt;
Increasing the word size above 32 bits offered diminishing returns in terms of performance ([[#References|Culler (1999), p. 15.]]), so there were fewer gains to be had from processor improvements.  At the same time, processors were becoming smaller, connections were becoming more efficient, and connecting processors to work in parallel became more viable.  In the late 70's and early 80's, Massively Parallel Processors (MPPs) emerged [http://www.intel.com/pressroom/kits/upcrc/parallelcomputing_backgrounder.pdf].  These consisted of separate computational units, each with its own memory and a link to the network that connects the units.  This structure allowed separate units to communicate the results of computations to each other without needing a direct connection to each memory location.  This change in architecture shifted the emphasis from bit-level parallelism to instruction-level parallelism, which involved increasing the number of instructions that could be executed at one time ([[#References|Culler (1999), p. 15.]]).  This gave rise to the message passing model, which gave programmers the ability to divide up instructions in order to take advantage of this architecture. &lt;br /&gt;
&lt;br /&gt;
Organizing the nodes in an MPP posed its own problems.  Some MPPs organized the connections into hypercubes, but these proved difficult to build [http://en.wikipedia.org/wiki/MIMD].  One of the most successful designs was the Connection Machine [http://ed-thelen.org/comp-hist/vs-cm-1-2-5.html].  Other architectures used 2-D meshes.  All of these strategies meant that each message might have to pass through a number of nodes before reaching its final destination.  This introduced its own restrictions on performance because each node had to handle routing duties.  As networking technology and individual processors became faster and more efficient, it became reasonable to connect separate computers across a network, which gave rise to cluster machines.&lt;br /&gt;
&lt;br /&gt;
== Current trend to cluster machines ==&lt;br /&gt;
In the 1990's, multi-core computers became the dominant trend in computer architecture.  At the same time, network connections were becoming faster and faster. These two trends meant that it was no longer necessary to build custom hardware for parallel computing: off-the-shelf computers connected via networks could offer similar performance. These cluster-based machines added another layer of complexity to parallelism.  Since computers could be located across a network from each other, there is more emphasis on software acting as a bridge [http://cobweb.ecn.purdue.edu/~pplinux/ppcluster.html]. This has led to a greater emphasis on thread- or task-level parallelism [http://en.wikipedia.org/wiki/Thread-level_parallelism] and the addition of the data parallel programming model to existing message passing or shared memory models [http://en.wikipedia.org/wiki/Thread-level_parallelism].  &lt;br /&gt;
&lt;br /&gt;
= Data-Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is the single control flow. In Flynn's taxonomy, SIMD corresponds to performing the same operation repeatedly over a large data set. There is only one control processor that directs the activities of all the processing elements. In a multiprocessor system executing a single set of instructions (SIMD), data parallelism is achieved when each processor performs the same task on different pieces of distributed data. In some situations, a single execution thread controls operations on all pieces of data. In others, different threads control the operation, but they execute the same code.&lt;br /&gt;
&lt;br /&gt;
== Description and example ==&lt;br /&gt;
&lt;br /&gt;
This section shows a simple example, adapted from the Solihin textbook (pp. 24-27), that illustrates the data-parallel programming model. Each of the code fragments below is written in a pseudo-code style.&lt;br /&gt;
&lt;br /&gt;
Suppose we want to perform the following task on an array &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt;: updating each element of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; by the product of itself and its index, and adding together the elements of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; into the variable &amp;lt;code&amp;gt;sum&amp;lt;/code&amp;gt;. The corresponding code is shown below.&lt;br /&gt;
&lt;br /&gt;
 // simple sequential task&lt;br /&gt;
 sum = 0;&lt;br /&gt;
 '''for''' (i = 0; i &amp;lt; a.length; i++)&lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    sum = sum + a[i];&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
When we orchestrate the task using the data-parallel programming model, the program can be divided into two parts. The first part performs the same operations on separate elements of the array for each processing element (sometimes referred to as a PE), and the second part reorganizes the data among all processing elements (in our example, data reorganization means summing up the values across the different processing elements). Since the data-parallel programming model only defines the overall effects of parallel steps, the second part can be accomplished either through shared memory or through message passing. The code fragment below illustrates the first part of the program.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 // data parallel programming: let each PE perform the same task on different pieces of distributed data&lt;br /&gt;
 pe_id = getid();&lt;br /&gt;
 my_sum = 0;&lt;br /&gt;
 '''for''' (i = pe_id; i &amp;lt; a.length; i += number_of_pe)         //separate elements of the array are assigned to each PE &lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    my_sum = my_sum + a[i];                               //all PEs accumulate elements assigned to them into local variable my_sum&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the above code, data parallelism is achieved by letting each processing element perform actions on the array's separate elements, which are identified using the PE's id. For instance, if three processing elements are used, then one processing element would start at i = 0, one would start at i = 1, and the last would start at i = 2. Since there are three processing elements, the array index for each increases by three on each iteration until the task is complete (note that in our example the elements assigned to each PE are interleaved rather than contiguous). If the length of the array is a multiple of three, then each processing element takes the same amount of time to execute its portion of the task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The picture below illustrates how the elements of the array are assigned among the different PEs for the specific case in which the length of the array is 7 and 3 PEs are available. Elements of the array are marked by their indexes (0 to 6). As shown in the picture, PE0 works on the elements with indexes 0, 3, and 6; PE1 is in charge of the elements with indexes 1 and 4; and the elements with indexes 2 and 5 are assigned to PE2. In this way, the 3 PEs work collectively on the array, while each PE works on different elements. Thus, data parallelism is achieved.&lt;br /&gt;
&lt;br /&gt;
[[Image:506wiki1.png|frame|center|150px|Illustration of data parallel programming (adapted from [http://computing.llnl.gov/tutorials/parallel_comp/#ModelsData Introduction to Parallel Computing])]]&lt;br /&gt;
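The assignment in the figure can be checked with a short runnable sketch. The following Python version (ours, not from Solihin) simulates the PEs serially, which is enough to verify which indexes each PE touches and that the combined partial sums match the sequential loop.

```python
def data_parallel_sum(a, number_of_pe):
    # Part 1: each PE updates its interleaved elements and keeps a local sum.
    assigned = {pe: [] for pe in range(number_of_pe)}
    my_sum = [0] * number_of_pe
    for pe_id in range(number_of_pe):          # simulate the PEs one after another
        for i in range(pe_id, len(a), number_of_pe):
            assigned[pe_id].append(i)
            a[i] = a[i] * i
            my_sum[pe_id] += a[i]
    # Part 2: reorganize data among the PEs -- here, a reduction of the local sums.
    return sum(my_sum), assigned

a = [1] * 7                                    # 7 elements and 3 PEs, as in the figure
total, assigned = data_parallel_sum(a, 3)
print(assigned)                                # PE0: [0, 3, 6], PE1: [1, 4], PE2: [2, 5]
print(total)                                   # 0+1+2+3+4+5+6 = 21
```

With an array of seven ones, each updated element a[i] equals i, so the reduction yields 21, the same value the sequential loop produces.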
&lt;br /&gt;
== Combining with message passing and shared memory ==&lt;br /&gt;
Although the shared memory and message passing models may be combined into hybrid approaches, the two models are fundamentally different ways of addressing the same problem (of access control to common data). In contrast, the data parallel model is concerned with a fundamentally different problem (how to divide work into parallel tasks). As such, the data parallel model may be used in conjunction with either the shared memory or the message passing model without conflict. In fact, Klaiber (1994) compares the performance of a number of data parallel programs implemented with both shared memory and message passing models.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One of the major advantages of combining the data parallel and message passing models is a reduction in the amount and complexity of communication required relative to a task parallel approach. Similarly, combining the data parallel and shared memory models tends to simplify and reduce the amount of synchronization required. Much as the shared memory model can benefit from specialized hardware, the data parallel programming model can as well. [[#Definitions |''SIMD'']] processors are specifically designed to run data parallel algorithms. These processors perform a single instruction on many different data locations simultaneously. Modern examples include CUDA-capable processors developed by NVIDIA and the Cell processor developed by STI (Sony, Toshiba, and IBM). For the curious, example code for CUDA processors is provided in the Appendix. However, whereas the shared memory model can be a difficult and costly abstraction in the absence of hardware support, the data parallel model, like the message passing model, does not require hardware support.&lt;br /&gt;
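When the data parallel model is paired with shared memory, the reduction step becomes a synchronized update of a shared variable. The sketch below (a minimal illustration of ours, using Python threads in place of real PEs) runs the per-PE loop from the earlier fragment in parallel and protects only the final accumulation with a lock.

```python
import threading

def run_pes(a, number_of_pe):
    total = [0]                     # shared sum, updated by every PE
    lock = threading.Lock()         # the single synchronization point needed

    def pe(pe_id):
        my_sum = 0
        for i in range(pe_id, len(a), number_of_pe):   # identical task, different data
            a[i] = a[i] * i
            my_sum += a[i]
        with lock:                  # second part: combine the local results
            total[0] += my_sum

    threads = [threading.Thread(target=pe, args=(p,)) for p in range(number_of_pe)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total[0]

print(run_pes([1] * 7, 3))          # 21, same as the sequential version
```

Note how little synchronization is required compared with a task parallel decomposition: the loop bodies never touch each other's elements, so only the reduction needs the lock.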
&lt;br /&gt;
= Task-Parallel Model =&lt;br /&gt;
Task parallelism is a form of parallelization in which multiple instructions are executed, either on the same data or on different data. It focuses on distributing the execution of processes (threads) across different parallel computing nodes. As part of the workflow, the different execution threads communicate with one another to share data.&lt;br /&gt;
&lt;br /&gt;
== Description and example ==&lt;br /&gt;
Suppose the task to be accomplished is to compute the sum of the results of executing instruction &amp;lt;tt&amp;gt;A&amp;lt;/tt&amp;gt; and instruction &amp;lt;tt&amp;gt;B&amp;lt;/tt&amp;gt;. The following example illustrates how task parallelism can achieve this.&lt;br /&gt;
&lt;br /&gt;
The pseudocode below illustrates task parallelism:&lt;br /&gt;
&amp;lt;pre&amp;gt;program:&lt;br /&gt;
do &lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
if CPU=&amp;quot;a&amp;quot; then&lt;br /&gt;
   do task &amp;quot;A&amp;quot;&lt;br /&gt;
else if CPU=&amp;quot;b&amp;quot; then&lt;br /&gt;
   do task &amp;quot;B&amp;quot;&lt;br /&gt;
end if&lt;br /&gt;
&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If we write the code as above and launch it on a 2-processor system, then the runtime environment will execute it accordingly.&lt;br /&gt;
In an SPMD system, both CPUs will execute the code. In a parallel environment, both will have access to the same data. The &amp;quot;if&amp;quot; clause differentiates between the CPUs. CPU &amp;quot;a&amp;quot; will read true on the &amp;quot;if&amp;quot; and CPU &amp;quot;b&amp;quot; will read true on the &amp;quot;else if&amp;quot;, so each has its own task. The two CPUs then execute separate code blocks, performing different tasks simultaneously.&lt;br /&gt;
Code executed by CPU &amp;quot;a&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;program:&lt;br /&gt;
...&lt;br /&gt;
do task &amp;quot;A&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Code executed by CPU &amp;quot;b&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;program:&lt;br /&gt;
...&lt;br /&gt;
do task &amp;quot;B&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This concept can now be generalized to any number of processors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
program:&lt;br /&gt;
...&lt;br /&gt;
if CPU=&amp;quot;a&amp;quot; then&lt;br /&gt;
   do task &amp;quot;A&amp;quot;&lt;br /&gt;
else if CPU=&amp;quot;b&amp;quot; then&lt;br /&gt;
   do task &amp;quot;B&amp;quot;&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
else if CPU =&amp;quot;n&amp;quot; then&lt;br /&gt;
   do task &amp;quot;N&amp;quot;&lt;br /&gt;
&lt;br /&gt;
end if&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
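The pseudocode above, in which each CPU selects its task by id, maps naturally onto threads in an SPMD style: every worker runs the same function, and the branch on the CPU id selects its task. In this Python sketch (the worker tasks are made up for illustration), CPU "a" computes a sum while CPU "b" computes a maximum.

```python
import threading

results = {}

def spmd_body(cpu):
    # Every "CPU" executes the same program; the branch selects its task.
    if cpu == "a":
        results["A"] = sum(range(10))      # task "A" (hypothetical workload)
    elif cpu == "b":
        results["B"] = max(range(10))      # task "B" (hypothetical workload)

threads = [threading.Thread(target=spmd_body, args=(c,)) for c in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)                             # {'A': 45, 'B': 9} (key order may vary)
```

Adding more tasks means adding more branches, which is exactly why the degree of parallelism in a task parallel program is fixed by the number of tasks formulated.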
&lt;br /&gt;
= Data Parallel Model vs Task Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow: there is only one control processor that directs the activities of all the processing elements. In stark contrast is task parallelism (MIMD: Multiple Instruction, Multiple Data): characterized by multiple control flows, it allows the concurrent execution of multiple instruction streams, each of which manipulates its own data and serves separate functions. Below is a contrast between the data parallelism and task parallelism models from Wikipedia: [http://en.wikipedia.org/wiki/SIMD SIMD] and [http://en.wikipedia.org/wiki/MIMD MIMD]. In the following subsections we continue to compare and contrast features of the data-parallel and task-parallel models to help the reader understand the unique characteristics of the data-parallel programming model.&lt;br /&gt;
[[Image:Smid.png|frame|center|425px|contrast between data parallelism and task parallelism]]&lt;br /&gt;
&lt;br /&gt;
Since each parallel task is unique, a major limitation of task parallel algorithms is that the maximum degree of parallelism attainable is limited to the number of tasks that have been formulated.  This is in contrast to data parallel algorithms, which can be scaled easily to take advantage of an arbitrary number of processing elements.  In addition, unique tasks are likely to have significantly different run times, making it more challenging to balance load across processors. [[#References | Haveraaen (2000)]] also notes that task parallel algorithms are inherently more complex, requiring a greater degree of communication and synchronization.&lt;br /&gt;
&lt;br /&gt;
== Synchronous vs asynchronous ==&lt;br /&gt;
While the [http://en.wikipedia.org/wiki/Lockstep_(computing) lockstep] imposed by data parallelism on all data streams ensures synchronous computation (all PEs perform their tasks at exactly the same pace), in task parallelism every processor performs its task at its own pace, which we call asynchronous computation. Thus, at certain points in a task parallel program's execution, communication and synchronization primitives are needed to allow the different instruction streams to coordinate their efforts, and that is where variable sharing and message passing come into play.&lt;br /&gt;
&lt;br /&gt;
== Determinism vs. non-Determinism ==&lt;br /&gt;
Data parallelism's synchronous nature and task parallelism's asynchrony give rise to another pair of features that distinguish these two models: determinism versus non-determinism. Data parallelism is deterministic, i.e., computing with the same input will always yield the same result, since its synchronism ensures that issues like relative timing between PEs will not arise. In contrast, task parallelism's asynchronous updates of common data can give rise to non-determinism, i.e., the same input won't always yield the same computation result (the result of a computation also depends on factors outside the program's control, such as the scheduling and timing of other PEs). Non-determinism makes it harder to write and maintain correct programs. This partially explains the advantage of the data parallel programming model over the task parallel model in terms of development effort (also discussed in section 4.2).&lt;br /&gt;
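The determinism claim can be made concrete: because the PEs touch disjoint elements, the order in which they happen to finish cannot change the result. The check below (our construction, reusing the earlier interleaved assignment) permutes the simulated execution order of 3 PEs and asserts that every ordering produces the same output.

```python
from itertools import permutations

def run_in_order(order, n=7, number_of_pe=3):
    a = [1] * n
    partial = [0] * number_of_pe
    for pe_id in order:                        # PEs "finish" in an arbitrary order
        for i in range(pe_id, n, number_of_pe):
            a[i] = a[i] * i
            partial[pe_id] += a[i]
    return sum(partial), a

baseline = run_in_order((0, 1, 2))
for order in permutations(range(3)):
    assert run_in_order(order) == baseline     # same input, same result, any timing
print(baseline[0])                             # 21
```

A task parallel program that updated a truly shared variable without synchronization would not, in general, pass such a check.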
&lt;br /&gt;
&lt;br /&gt;
== Major differences between the data parallel and task parallel models ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
|+ '''Comparison between data parallel and task parallel programming models.'''&lt;br /&gt;
|-&lt;br /&gt;
! Aspects&lt;br /&gt;
! Data Parallel&lt;br /&gt;
! Task Parallel&lt;br /&gt;
|-&lt;br /&gt;
| Decomposition&lt;br /&gt;
| Partition data into subsets&lt;br /&gt;
| Partition program into subtasks&lt;br /&gt;
|-&lt;br /&gt;
| Parallel tasks&lt;br /&gt;
| Identical&lt;br /&gt;
| Unique&lt;br /&gt;
|-&lt;br /&gt;
| Degree of parallelism&lt;br /&gt;
| Scales easily&lt;br /&gt;
| Fixed&lt;br /&gt;
|-&lt;br /&gt;
| Load balancing&lt;br /&gt;
| Easier&lt;br /&gt;
| Harder&lt;br /&gt;
|-&lt;br /&gt;
| Communication overhead&lt;br /&gt;
| Lower&lt;br /&gt;
| Higher&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Definitions =&lt;br /&gt;
&lt;br /&gt;
* ''Data parallel.''  A data parallel algorithm is composed of a set of identical tasks which operate on different subsets of common data.&lt;br /&gt;
* ''Task parallel.''  A task parallel algorithm is composed of a set of differing tasks which operate on common data.&lt;br /&gt;
* ''SIMD (single-instruction-multiple-data).''  A processor which executes a single instruction simultaneously on multiple data locations.&lt;br /&gt;
* ''MIMD (multiple-instruction-multiple-data).'' A processor architecture which can execute multiple instructions across multiple data elements simultaneously.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
* David E. Culler, Jaswinder Pal Singh, and Anoop Gupta, [http://portal.acm.org/citation.cfm?id=550071 ''Parallel Computer Architecture: A Hardware/Software Approach,''] Morgan-Kauffman, 1999.&lt;br /&gt;
* Ian Foster, [http://www.mcs.anl.gov/~itf/dbpp/ ''Designing and Building Parallel Programs,''] Addison-Wesley, 1995.&lt;br /&gt;
* Magne Haveraaen, [http://portal.acm.org/citation.cfm?id=1239917 &amp;quot;Machine and collection abstractions for user-implemented data-parallel programming,&amp;quot;] ''Scientific Programming,'' 8(4):231-246, 2000.&lt;br /&gt;
* W. Daniel Hillis and Guy L. Steele, Jr., [http://portal.acm.org/citation.cfm?id=7903 &amp;quot;Data parallel algorithms,&amp;quot;] ''Communications of the ACM,'' 29(12):1170-1183, December 1986.&lt;br /&gt;
* Alexander C. Klaiber and Henry M. Levy, [http://portal.acm.org/citation.cfm?id=192020 &amp;quot;A comparison of message passing and shared memory architectures for data parallel programs,&amp;quot;] in ''Proceedings of the 21st Annual International Symposium on Computer Architecture,'' April 1994, pp. 94-105.&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* Philip J. Hatcher, Michael Jay Quinn, ''Data-Parallel Programming on MIMD Computers'', The MIT Press, 1991.&lt;br /&gt;
* Blaise Barney, &amp;quot;Introduction to Parallel Computing: Data Parallel Model&amp;quot;, Lawrence Livermore National Laboratory, [https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData], January 2009.&lt;br /&gt;
* Guy Blelloch, &amp;quot;Is Parallel Programming Hard?&amp;quot;, Carnegie Mellon University, [http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard], April 2009.&lt;br /&gt;
* Björn Lisper, ''Data parallelism and functional programming'', Lecture Notes in Computer Science, Volume 1132/1996, pp. 220-251, Springer Berlin, 1996.&lt;br /&gt;
* ''SIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SIMD http://en.wikipedia.org/wiki/SIMD].&lt;br /&gt;
* ''MIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/MIMD http://en.wikipedia.org/wiki/MIMD].&lt;br /&gt;
* ''Lockstep'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/Lockstep_(computing) http://en.wikipedia.org/wiki/Lockstep_(computing)].&lt;br /&gt;
* ''SPMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SPMD http://en.wikipedia.org/wiki/SPMD].&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=44726</id>
		<title>CSC/ECE 506 Spring 2011/ch2 JR</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=44726"/>
		<updated>2011-04-01T13:51:44Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Supplement to Chapter 2: The Data Parallel Programming Model=&lt;br /&gt;
Chapter 2 of [[#References | Solihin (2008)]] covers the shared memory and message passing parallel programming models.  However, it does not give an historical context for the development of parallel programming models.  It also does not address other commonly recognized parallel programming models like the [[#Definitions | ''task parallel'']] model or the [[#Definitions | ''data parallel'']] model, which have been covered in other treatments like [[#References | Foster (1995)]] and [[#References | Culler (1999)]].&lt;br /&gt;
&lt;br /&gt;
Shared memory and message passing models are often presented as competing models, but the data and task parallel models address fundamentally different programming concerns and can therefore be used in conjunction with either.  The goal of this supplement is to provide historical context for the development of parallel programming models and a treatment of the data and task parallel models to complement Chapter 2 of [[#References | Solihin (2008)]].  &lt;br /&gt;
&lt;br /&gt;
= Overview =&lt;br /&gt;
Whereas the shared memory and message passing models focus on how parallel tasks access common data, the [[#Definitions | ''data parallel'']] model focuses on how to divide up work into parallel tasks.  Data parallel algorithms exploit parallelism by dividing a problem into a number of identical tasks which execute on different subsets of common data.  The logical opposite of data parallel is task parallel, in which a number of distinct tasks operate on common data.  Historically, each parallel programming model was developed to take advantage of advancements in computer architecture.&lt;br /&gt;
&lt;br /&gt;
= History =&lt;br /&gt;
As computer architectures have evolved, so have parallel programming models. The two factors that most influenced parallel computing performance were the speed of the individual processors and the speed of the communication connections.  These communication connections include access to memory (both local and main), as well as communication between processors. The earliest advancements in parallel computers took advantage of bit-level parallelism from improvements made to chip design.  These computers mainly used vector processing, and each processor had a direct, fast connection to memory.  This gave rise to the shared memory programming model.  As performance returns from this architecture diminished and the complexity of building machines with direct access to memory increased, the emphasis shifted to instruction-level parallelism and distributed memory systems, and the message passing model began to dominate.  Most recently, with the move to cluster-based machines, there has been an increased emphasis on thread-level parallelism. This has corresponded to an increased interest in the data parallel programming model.&lt;br /&gt;
&lt;br /&gt;
== Shared Memory in the 1970's ==&lt;br /&gt;
The major performance improvements from computers during this time were due to the ability to execute 32-bit word size operations at one time ([[#References|Culler (1999), p. 15.]]).  The dominant supercomputers of the time, like the Cray and the ILLIAC IV, were mainly [[#Definitions| ''SIMD'']] architectures and used a shared memory programming model.  They each used different forms of vector processing ([[#References|Culler (1999), p. 21.]]). &lt;br /&gt;
Development of the ILLIAC IV began in 1964 and wasn't finished until 1975 [http://en.wikipedia.org/wiki/ILLIAC_IV].  A central processor was connected to the main memory and delegated tasks to individual PEs, each of which had its own memory cache [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].  Each PE could operate on an 8-, 32-, or 64-bit operand at a given time [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].&lt;br /&gt;
&lt;br /&gt;
The Cray-1 was installed at Los Alamos National Laboratory in 1976 by Cray Research and had performance similar to the ILLIAC IV [http://en.wikipedia.org/wiki/ILLIAC_IV].  Unlike the ILLIAC IV's array of individual processors, the Cray machine relied heavily on the use of registers.  Each processor was connected to main memory and had a number of 64-bit registers used to perform operations [http://www.eecg.toronto.edu/~moshovos/ACA05/read/cray1.pdf].&lt;br /&gt;
&lt;br /&gt;
All of these early machines relied on a direct connection between each processor and memory.  Not only did there have to be a direct connection, but each connection had to allow relatively similar access to every part of memory.  As the number of processors increased, the number of connections had to increase as well, which meant that this architecture did not scale well.  At the same time, processors were becoming more and more advanced.  The first single-chip microprocessor was introduced in 1971 [http://en.wikipedia.org/wiki/Intel_4004].  Due to these two pressures, there was a movement away from large shared memory supercomputers and towards distributed memory systems.&lt;br /&gt;
&lt;br /&gt;
== Move to Message Passing in the 1980's ==&lt;br /&gt;
&lt;br /&gt;
Increasing the word size above 32 bits offered diminishing returns in terms of performance ([[#References|Culler (1999), p. 15.]]), so there were fewer gains to be had from processor improvements.  At the same time, processors were becoming smaller, connections were becoming more efficient, and connecting processors to work in parallel became more viable.  In the late 70's and early 80's, Massively Parallel Processors (MPPs) emerged [http://www.intel.com/pressroom/kits/upcrc/parallelcomputing_backgrounder.pdf].  These consisted of separate computational units, each with its own memory and a link to the network that connects the units.  This structure allowed separate units to communicate the results of computations to each other without needing a direct connection to each memory location.  This change in architecture shifted the emphasis from bit-level parallelism to instruction-level parallelism, which involved increasing the number of instructions that could be executed at one time ([[#References|Culler (1999), p. 15.]]).  This gave rise to the message passing model, which gave programmers the ability to divide up instructions in order to take advantage of this architecture. &lt;br /&gt;
&lt;br /&gt;
Organizing the nodes in an MPP posed its own problems.  Some MPPs organized the connections into hypercubes, but these proved difficult to build [http://en.wikipedia.org/wiki/MIMD].  One of the most successful designs was the Connection Machine series [http://ed-thelen.org/comp-hist/vs-cm-1-2-5.html].  Other architectures used 2-D meshes.  All of these strategies meant that each message might have to pass through a number of nodes before reaching its final destination.  This introduced its own restrictions on performance because each node had to handle routing duties.  As networking technology became faster and individual processors became more efficient, it became reasonable to connect separate computers across a network, which gave rise to cluster machines.&lt;br /&gt;
&lt;br /&gt;
== Current Trend to Cluster Machines ==&lt;br /&gt;
In the 1990's, multi-core computers became the dominant trend in computer architecture.  At the same time, network connections were becoming faster and faster. These two trends meant that it was no longer necessary to build custom hardware for parallel computing: off-the-shelf computers connected via networks could offer similar performance. These cluster-based machines added another layer of complexity to parallelism.  Since computers can be located across a network from each other, there is more emphasis on software acting as a bridge [http://cobweb.ecn.purdue.edu/~pplinux/ppcluster.html]. This has led to a greater emphasis on thread- or task-level parallelism and the addition of the data parallel programming model to existing message passing or shared memory models [http://en.wikipedia.org/wiki/Thread-level_parallelism].  &lt;br /&gt;
&lt;br /&gt;
= Data-Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow. In Flynn's taxonomy, SIMD is analogous to performing the same operation repeatedly over a large data set: a single control processor directs the activities of all the processing elements. In a multiprocessor system executing a single set of instructions (SIMD), data parallelism is achieved when each processor performs the same task on different pieces of distributed data. In some situations, a single execution thread controls operations on all pieces of data. In others, different threads control the operation, but they execute the same code.&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
&lt;br /&gt;
This section shows a simple example, adapted from the Solihin textbook (pp. 24 - 27), that illustrates the data-parallel programming model. Each of the code fragments below is written in pseudo-code style.&lt;br /&gt;
&lt;br /&gt;
Suppose we want to perform the following task on an array &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt;: updating each element of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; by the product of itself and its index, and adding together the elements of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; into the variable &amp;lt;code&amp;gt;sum&amp;lt;/code&amp;gt;. The corresponding code is shown below.&lt;br /&gt;
&lt;br /&gt;
 // simple sequential task&lt;br /&gt;
 sum = 0;&lt;br /&gt;
 '''for''' (i = 0; i &amp;lt; a.length; i++)&lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    sum = sum + a[i];&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
When we orchestrate the task using the data-parallel programming model, the program can be divided into two parts. The first part performs the same operations on separate elements of the array for each processing element (sometimes referred to as PE or pe), and the second part reorganizes data among all processing elements (in our example, data reorganization means summing up values across different processing elements). Since the data-parallel programming model only defines the overall effects of parallel steps, the second part can be accomplished either through shared memory or through message passing. The three code fragments below are examples of the first part of the program, the shared-memory version of the second part, and the message-passing version of the second part, respectively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 // data parallel programming: let each PE perform the same task on different pieces of distributed data&lt;br /&gt;
 pe_id = getid();&lt;br /&gt;
 my_sum = 0;&lt;br /&gt;
 '''for''' (i = pe_id; i &amp;lt; a.length; i += number_of_pe)         //separate elements of the array are assigned to each PE &lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    my_sum = my_sum + a[i];                               //all PEs accumulate elements assigned to them into local variable my_sum&lt;br /&gt;
 }&lt;br /&gt;
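The shared-memory version of the second part is described above but its fragment is not reproduced here. A minimal Python sketch of that reduction step, assuming three hypothetical per-PE partial sums (9, 5, and 7) and a lock-protected shared variable, might look like this:

```python
import threading

sum_total = 0                     # the shared "sum" variable
lock = threading.Lock()

def accumulate(my_sum):
    """Second part, shared-memory style: each PE adds its local
    my_sum into the shared sum inside a critical section."""
    global sum_total
    with lock:
        sum_total += my_sum

# Illustrative partial sums produced by three PEs (hypothetical values).
partial_sums = [9, 5, 7]
threads = [threading.Thread(target=accumulate, args=(s,)) for s in partial_sums]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum_total)                  # prints 21
```

The lock is what makes the accumulation safe: without it, two PEs could read the same old value of the shared sum and one update would be lost.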
&lt;br /&gt;
&lt;br /&gt;
In the above code, data parallelism is achieved by letting each processing element perform actions on separate elements of the array, which are identified using the PE's id. For instance, if three processing elements are used, then one processing element would start at i = 0, one would start at i = 1, and the last would start at i = 2. Since there are three processing elements, the array index for each will increase by three on each iteration until the task is complete (note that in our example the elements assigned to each PE are interleaved rather than contiguous). If the length of the array is a multiple of three, then each processing element takes the same amount of time to execute its portion of the task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The picture below illustrates how elements of the array are assigned among different PEs for a specific case: the length of the array is 7 and there are 3 PEs available. Elements in the array are marked by their indices (0 to 6). As shown in the picture, PE0 will work on elements with indices 0, 3, 6; PE1 is in charge of elements with indices 1, 4; and elements with indices 2, 5 are assigned to PE2. In this way, the 3 PEs work collectively on the array, while each PE works on different elements. Thus, data parallelism is achieved.&lt;br /&gt;
&lt;br /&gt;
[[Image:506wiki1.png|frame|center|150px|Illustration of data parallel programming (adapted from [http://computing.llnl.gov/tutorials/parallel_comp/#ModelsData Introduction to Parallel Computing])]]&lt;br /&gt;
&lt;br /&gt;
== Combining with Message Passing and Shared Memory ==&lt;br /&gt;
Although the shared memory and message passing models may be combined into hybrid approaches, the two models are fundamentally different ways of addressing the same problem (of access control to common data). In contrast, the data parallel model is concerned with a fundamentally different problem (how to divide work into parallel tasks). As such, the data parallel model may be used in conjunction with either the shared memory or the message passing model without conflict. In fact, Klaiber (1994) compares the performance of a number of data parallel programs implemented with both shared memory and message passing models.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One of the major advantages of combining the data parallel and message passing models is a reduction in the amount and complexity of communication required relative to a task parallel approach. Similarly, combining the data parallel and shared memory models tends to simplify and reduce the amount of synchronization required. Much as the shared memory model can benefit from specialized hardware, the data parallel programming model can as well. [[#Definitions |''SIMD'']] processors are specifically designed to run data parallel algorithms. These processors perform a single instruction on many different data locations simultaneously. Modern examples include CUDA processors developed by nVidia and Cell processors developed by STI (Sony, Toshiba, and IBM). For the curious, example code for CUDA processors is provided in the Appendix. However, whereas the shared memory model can be a difficult and costly abstraction in the absence of hardware support, the data parallel model—like the message passing model—does not require hardware support.&lt;br /&gt;
&lt;br /&gt;
= Task-Parallel Model =&lt;br /&gt;
Task parallelism is a form of parallelization in which multiple instruction streams are executed, either on the same data or on different data. It focuses on distributing the execution of processes (threads) across different parallel computing nodes. As part of the workflow, different execution threads communicate with one another as they work, in order to share data.&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
Suppose the task to be accomplished is to compute the sum of the results associated with the execution of instruction &amp;lt;tt&amp;gt;A&amp;lt;/tt&amp;gt; and instruction &amp;lt;tt&amp;gt;B&amp;lt;/tt&amp;gt;. The following example illustrates how task parallelism can be achieved.&lt;br /&gt;
&lt;br /&gt;
The pseudocode below illustrates task parallelism:&lt;br /&gt;
&amp;lt;pre&amp;gt;program:&lt;br /&gt;
do &lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
if CPU=&amp;quot;a&amp;quot; then&lt;br /&gt;
   do task &amp;quot;A&amp;quot;&lt;br /&gt;
else if CPU=&amp;quot;b&amp;quot; then&lt;br /&gt;
   do task &amp;quot;B&amp;quot;&lt;br /&gt;
end if&lt;br /&gt;
&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If we write the code as above and launch it on a 2-processor system, then the runtime environment will execute it accordingly.&lt;br /&gt;
In an SPMD system, both CPUs will execute the code. In a parallel environment, both will have access to the same data. The &amp;quot;if&amp;quot; clause differentiates between the CPUs: CPU &amp;quot;a&amp;quot; will evaluate the &amp;quot;if&amp;quot; as true, and CPU &amp;quot;b&amp;quot; will evaluate the &amp;quot;else if&amp;quot; as true, so each has its own task. The two CPUs then execute separate code blocks simultaneously, performing different tasks.&lt;br /&gt;
Code executed by CPU &amp;quot;a&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;program:&lt;br /&gt;
...&lt;br /&gt;
do task &amp;quot;A&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Code executed by CPU &amp;quot;b&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;program:&lt;br /&gt;
...&lt;br /&gt;
do task &amp;quot;B&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This concept can now be generalized to any number of processors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
program:&lt;br /&gt;
...&lt;br /&gt;
if CPU=&amp;quot;a&amp;quot; then&lt;br /&gt;
   do task &amp;quot;A&amp;quot;&lt;br /&gt;
else if CPU=&amp;quot;b&amp;quot; then&lt;br /&gt;
   do task &amp;quot;B&amp;quot;&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
else if CPU=&amp;quot;n&amp;quot; then&lt;br /&gt;
   do task &amp;quot;N&amp;quot;&lt;br /&gt;
&lt;br /&gt;
end if&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
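The branching pseudocode above can be sketched in runnable form. In the following Python fragment, threads stand in for the CPUs, and `task_a`/`task_b` are hypothetical placeholder tasks writing to shared data:

```python
import threading

results = {}                     # shared data visible to both "CPUs"

def task_a():
    results["A"] = "did A"       # placeholder work for CPU "a"

def task_b():
    results["B"] = "did B"       # placeholder work for CPU "b"

# One thread per "CPU"; each executes a *different* task on common
# data, which is the essence of task parallelism.
threads = [threading.Thread(target=task_a), threading.Thread(target=task_b)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Generalizing to n tasks just means creating one thread per task, mirroring the chain of if/else-if branches in the pseudocode.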
&lt;br /&gt;
= Data Parallel Model vs Task Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow: there is only one control processor that directs the activities of all the processing elements. In stark contrast to this is task parallelism (MIMD: Multiple Instruction, Multiple Data): characterized by multiple control flows, it allows the concurrent execution of multiple instruction streams, each of which manipulates its own data and serves separate functions. Below is a contrast between the data parallelism and task parallelism models from Wikipedia: [http://en.wikipedia.org/wiki/SIMD SIMD] and [http://en.wikipedia.org/wiki/MIMD MIMD]. In the following subsections we continue to compare and contrast features of the data-parallel and task-parallel models to help the reader understand the unique characteristics of the data-parallel programming model.&lt;br /&gt;
[[Image:Smid.png|frame|center|425px|contrast between data parallelism and task parallelism]]&lt;br /&gt;
&lt;br /&gt;
Since each parallel task is unique, a major limitation of task parallel algorithms is that the maximum degree of parallelism attainable is limited to the number of tasks that have been formulated.  This is in contrast to data parallel algorithms, which can be scaled easily to take advantage of an arbitrary number of processing elements.  In addition, unique tasks are likely to have significantly different run times, making it more challenging to balance load across processors. [[#References | Haveraaen (2000)]] also notes that task parallel algorithms are inherently more complex, requiring a greater degree of communication and synchronization.&lt;br /&gt;
&lt;br /&gt;
== Synchronous vs Asynchronous ==&lt;br /&gt;
While the [http://en.wikipedia.org/wiki/Lockstep_(computing) lockstep] imposed by data parallelism on all data streams ensures synchronous computation (all PEs perform their tasks at the exact same pace), in task parallelism every processor performs its task at its own pace, which we call asynchronous computation. Thus, at certain points of a task parallel program's execution, communication and synchronization primitives are needed to allow different instruction streams to coordinate their efforts, and that is where variable-sharing and message-passing come into play.&lt;br /&gt;
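One common synchronization primitive for coordinating asynchronous instruction streams is a barrier, which restores a lockstep-like phase structure at chosen points. The following is a minimal sketch; the three-PE setup and the phase log are illustrative assumptions, not taken from the text:

```python
import threading

NUM_PE = 3
barrier = threading.Barrier(NUM_PE)
phase_log = []                   # records (phase, pe_id) events
log_lock = threading.Lock()

def worker(pe_id):
    # Each PE does phase-1 work at its own pace, then waits at the
    # barrier: no PE may begin phase 2 until all have finished phase 1.
    with log_lock:
        phase_log.append((1, pe_id))
    barrier.wait()
    with log_lock:
        phase_log.append((2, pe_id))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_PE)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every phase-1 entry in phase_log precedes every phase-2 entry,
# though the order of PEs *within* a phase is not determined.
```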
&lt;br /&gt;
== Determinism vs. Non-Determinism ==&lt;br /&gt;
Data parallelism's synchronous nature and task parallelism's asynchronous nature give rise to another pair of features that add to the difference between these two models: determinism versus non-determinism. Data parallelism is deterministic, i.e., computing with the same input will always yield the same result, since its synchronism ensures that issues like relative timing between PEs will not arise. In contrast, task parallelism's asynchronous updates of common data can give rise to non-determinism, i.e., the same input won't always yield the same computation result (the result of a computation will also depend on factors outside the program's control, such as scheduling and the timing of other PEs). Obviously, non-determinism makes it harder to write and maintain correct programs. This partially explains the advantage of the data parallel model over the task parallel model in terms of development effort (also discussed in section 4.2).&lt;br /&gt;
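The order-dependence behind this non-determinism can be shown deterministically by simulating two possible schedules of the same pair of unsynchronized, non-commutative updates to a shared value (the updates themselves are hypothetical):

```python
# Two tasks update a shared value x with no synchronization:
#   task 1: x = x + 3        task 2: x = x * 2
# The scheduler decides the order, and the order decides the result.
def run(schedule, x=1):
    ops = {1: lambda v: v + 3, 2: lambda v: v * 2}
    for task in schedule:
        x = ops[task](x)
    return x

print(run([1, 2]))   # task 1 first: (1 + 3) * 2 = 8
print(run([2, 1]))   # task 2 first: (1 * 2) + 3 = 5
```

Because the two schedules give different answers (8 versus 5) for the same input, a real task-parallel program with these updates would be non-deterministic unless synchronization pins down the order.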
&lt;br /&gt;
&lt;br /&gt;
== Major differences between the data parallel and task parallel models ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
|+ '''Comparison between data parallel and task parallel programming models.'''&lt;br /&gt;
|-&lt;br /&gt;
! Aspects&lt;br /&gt;
! Data Parallel&lt;br /&gt;
! Task Parallel&lt;br /&gt;
|-&lt;br /&gt;
| Decomposition&lt;br /&gt;
| Partition data into subsets&lt;br /&gt;
| Partition program into subtasks&lt;br /&gt;
|-&lt;br /&gt;
| Parallel tasks&lt;br /&gt;
| Identical&lt;br /&gt;
| Unique&lt;br /&gt;
|-&lt;br /&gt;
| Degree of parallelism&lt;br /&gt;
| Scales easily&lt;br /&gt;
| Fixed&lt;br /&gt;
|-&lt;br /&gt;
| Load balancing&lt;br /&gt;
| Easier&lt;br /&gt;
| Harder&lt;br /&gt;
|-&lt;br /&gt;
| Communication overhead&lt;br /&gt;
| Lower&lt;br /&gt;
| Higher&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Definitions =&lt;br /&gt;
&lt;br /&gt;
* ''Data parallel.''  A data parallel algorithm is composed of a set of identical tasks which operate on different subsets of common data.&lt;br /&gt;
* ''Task parallel.''  A task parallel algorithm is composed of a set of differing tasks which operate on common data.&lt;br /&gt;
* ''SIMD (single-instruction-multiple-data).''  A processor which executes a single instruction simultaneously on multiple data locations.&lt;br /&gt;
* ''MIMD (multiple-instruction-multiple-data).'' A processor architecture which can execute multiple instructions across multiple data elements simultaneously.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
* David E. Culler, Jaswinder Pal Singh, and Anoop Gupta, [http://portal.acm.org/citation.cfm?id=550071 ''Parallel Computer Architecture: A Hardware/Software Approach,''] Morgan-Kauffman, 1999.&lt;br /&gt;
* Ian Foster, [http://www.mcs.anl.gov/~itf/dbpp/ ''Designing and Building Parallel Programs,''] Addison-Wesley, 1995.&lt;br /&gt;
* Magne Haveraaen, [http://portal.acm.org/citation.cfm?id=1239917 &amp;quot;Machine and collection abstractions for user-implemented data-parallel programming,&amp;quot;] ''Scientific Programming,'' 8(4):231-246, 2000.&lt;br /&gt;
* W. Daniel Hillis and Guy L. Steele, Jr., [http://portal.acm.org/citation.cfm?id=7903 &amp;quot;Data parallel algorithms,&amp;quot;] ''Communications of the ACM,'' 29(12):1170-1183, December 1986.&lt;br /&gt;
* Alexander C. Klaiber and Henry M. Levy, [http://portal.acm.org/citation.cfm?id=192020 &amp;quot;A comparison of message passing and shared memory architectures for data parallel programs,&amp;quot;] in ''Proceedings of the 21st Annual International Symposium on Computer Architecture,'' April 1994, pp. 94-105.&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* Philip J. Hatcher, Michael Jay Quinn, ''Data-Parallel Programming on MIMD Computers'', The MIT Press, 1991.&lt;br /&gt;
* Blaise Barney, &amp;quot;Introduction to Parallel Computing: Data Parallel Model&amp;quot;, Lawrence Livermore National Laboratory, [https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData], January 2009.&lt;br /&gt;
* Guy Blelloch, &amp;quot;Is Parallel Programming Hard?&amp;quot;, Carnegie Mellon University, [http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard], April 2009.&lt;br /&gt;
* Björn Lisper, ''Data parallelism and functional programming'', Lecture Notes in Computer Science, Volume 1132/1996, pp. 220-251, Springer Berlin, 1996.&lt;br /&gt;
* ''SIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SIMD http://en.wikipedia.org/wiki/SIMD].&lt;br /&gt;
* ''MIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/MIMD http://en.wikipedia.org/wiki/MIMD].&lt;br /&gt;
* ''Lockstep'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/Lockstep_(computing) http://en.wikipedia.org/wiki/Lockstep_(computing)].&lt;br /&gt;
* ''SPMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SPMD http://en.wikipedia.org/wiki/SPMD].&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=44659</id>
		<title>CSC/ECE 506 Spring 2011/ch2 JR</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=44659"/>
		<updated>2011-03-31T15:51:12Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Supplement to Chapter 2: The Data Parallel Programming Model=&lt;br /&gt;
Chapter 2 of [[#References | Solihin (2008)]] covers the shared memory and message passing parallel programming models.  However, it does not give an historical context for the development of parallel programming models.  It also does not address other commonly recognized parallel programming models like the [[#Definitions | ''task parallel'']] model or the [[#Definitions | ''data parallel'']] model, which have been covered in other treatments like [[#References | Foster (1995)]] and [[#References | Culler (1999)]].&lt;br /&gt;
&lt;br /&gt;
Shared memory and message passing models are often presented as competing models, but the data and task parallel models address fundamentally different programming concerns and can therefore be used in conjunction with either.  The goal of this supplement is to provide historical context for the development of parallel programming models and a treatment of the data and task parallel models to complement Chapter 2 of [[#References | Solihin (2008)]].  &lt;br /&gt;
&lt;br /&gt;
= Overview =&lt;br /&gt;
Whereas the shared memory and message passing models focus on how parallel tasks access common data, the [[#Definitions | ''data parallel'']] model focuses on how to divide up work into parallel tasks.  Data parallel algorithms exploit parallelism by dividing a problem into a number of identical tasks which execute on different subsets of common data.  The logical opposite of data parallel is task parallel, in which a number of distinct tasks operate on common data.  Historically, each parallel programming model was developed to take advantage of advancements in computer architecture.&lt;br /&gt;
&lt;br /&gt;
= History =&lt;br /&gt;
As computer architectures have evolved, so have parallel programming models. The earliest advancements in parallel computers took advantage of bit-level parallelism.  These computers used vector processing, which required a shared memory programming model.  As performance returns from this architecture diminished, the emphasis was placed on instruction-level parallelism and the message passing model began to dominate.  Most recently, with the move to cluster-based machines, there has been an increased emphasis on thread-level parallelism. This has corresponded to an increased interest in the data parallel programming model.&lt;br /&gt;
&lt;br /&gt;
== Shared Memory in the 1970's ==&lt;br /&gt;
The major performance improvements from computers during this time were due to the ability to execute 32-bit word size operations at one time ([[#References|Culler (1999), p. 15.]]).  The dominant supercomputers of the time, like the Cray and the ILLIAC IV, were mainly [[#Definitions| ''SIMD'']] architectures and used a shared memory programming model.  They each used different forms of vector processing ([[#References|Culler (1999), p. 21.]]). &lt;br /&gt;
Development of the ILLIAC IV began in 1964 and wasn't finished until 1975 [http://en.wikipedia.org/wiki/ILLIAC_IV].  A central processor was connected to the main memory and delegated tasks to individual PEs, each of which had its own memory cache [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].  Each PE could operate on an 8-, 32- or 64-bit operand at a given time [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].&lt;br /&gt;
&lt;br /&gt;
The Cray-1 was installed at Los Alamos National Laboratory in 1976 by Cray Research and had similar performance to the ILLIAC IV [http://en.wikipedia.org/wiki/ILLIAC_IV].  The Cray machine relied heavily on the use of registers instead of individual processors like the ILLIAC IV.  Each processor was connected to main memory and had a number of 64-bit registers used to perform operations [http://www.eecg.toronto.edu/~moshovos/ACA05/read/cray1.pdf].&lt;br /&gt;
&lt;br /&gt;
All of these early machines relied on a direct connection between each processor and the memory.  Not only did there have to be a direct connection, but each connection had to allow relatively similar access to each part of memory.  This architecture did not scale well: as you increased the number of processors, you had to increase the number of connections.  At the same time, processors were becoming more and more advanced.  The first single chip microprocessor was introduced in 1971 [SITE].  Due to these two pressures, there was a movement away from large shared memory supercomputers and towards distributed memory systems.&lt;br /&gt;
&lt;br /&gt;
== Move to Message Passing in the 1980's ==&lt;br /&gt;
&lt;br /&gt;
There were a variety of reasons for the move to distributed memory systems in the 1980's. Increasing the word size above 32 bits offered diminishing returns in terms of performance ([[#References|Culler (1999), p. 15.]]). In the mid-1980's the emphasis changed from bit-level parallelism to instruction-level parallelism, which involved increasing the number of instructions that could be executed at one time ([[#References|Culler (1999), p. 15.]]).  The message passing model allowed programmers to divide up instructions in order to take advantage of this architecture.&lt;br /&gt;
&lt;br /&gt;
At the same time, processors were becoming smaller and more efficient, and connecting processors to do work in parallel became more viable.  In the late 70's and early 80's Massively Parallel Processors (MPPs) emerged [http://www.intel.com/pressroom/kits/upcrc/parallelcomputing_backgrounder.pdf].  These consisted of separate computational units, each with its own memory and a link to the network that connects the units.  This structure allowed separate units to communicate the results of computations to each other without a direct connection to each memory location.  This gave rise to the message passing model. &lt;br /&gt;
&lt;br /&gt;
Organizing the nodes in an MPP posed its own problems.  Some MPPs organized the connections into hypercubes, but these proved difficult to build [SITE].  Other architectures used 2-D meshes, but this meant that each message might have to pass through a number of nodes before reaching its destination.  This introduced its own restrictions on performance.  As networking technology became faster and individual processors became more efficient, it became reasonable to connect separate computers across a network, which gave rise to cluster machines.&lt;br /&gt;
&lt;br /&gt;
== Current Trend to Cluster Machines ==&lt;br /&gt;
The move to cluster-based machines in the past decade has added another layer of complexity to parallelism.  Since computers can be located across a network from each other, there is more emphasis on software acting as a bridge [http://cobweb.ecn.purdue.edu/~pplinux/ppcluster.html]. This has led to a greater emphasis on thread- or task-level parallelism and the addition of the data parallel programming model to existing message passing or shared memory models [http://en.wikipedia.org/wiki/Thread-level_parallelism].  &lt;br /&gt;
&lt;br /&gt;
= Data-Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow. In Flynn's taxonomy, SIMD is analogous to performing the same operation repeatedly over a large data set: a single control processor directs the activities of all the processing elements. In a multiprocessor system executing a single set of instructions (SIMD), data parallelism is achieved when each processor performs the same task on different pieces of distributed data. In some situations, a single execution thread controls operations on all pieces of data. In others, different threads control the operation, but they execute the same code.&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
&lt;br /&gt;
This section shows a simple example, adapted from the Solihin textbook (pp. 24 - 27), that illustrates the data-parallel programming model. Each of the code fragments below is written in pseudo-code style.&lt;br /&gt;
&lt;br /&gt;
Suppose we want to perform the following task on an array &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt;: updating each element of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; by the product of itself and its index, and adding together the elements of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; into the variable &amp;lt;code&amp;gt;sum&amp;lt;/code&amp;gt;. The corresponding code is shown below.&lt;br /&gt;
&lt;br /&gt;
 // simple sequential task&lt;br /&gt;
 sum = 0;&lt;br /&gt;
 '''for''' (i = 0; i &amp;lt; a.length; i++)&lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    sum = sum + a[i];&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
When we orchestrate the task using the data-parallel programming model, the program can be divided into two parts. The first part performs the same operations on separate elements of the array for each processing element (sometimes referred to as PE or pe), and the second part reorganizes data among all processing elements (in our example, data reorganization means summing up values across different processing elements). Since the data-parallel programming model only defines the overall effects of parallel steps, the second part can be accomplished either through shared memory or through message passing. The three code fragments below are examples of the first part of the program, the shared-memory version of the second part, and the message-passing version of the second part, respectively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 // data parallel programming: let each PE perform the same task on different pieces of distributed data&lt;br /&gt;
 pe_id = getid();&lt;br /&gt;
 my_sum = 0;&lt;br /&gt;
 '''for''' (i = pe_id; i &amp;lt; a.length; i += number_of_pe)         //separate elements of the array are assigned to each PE &lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    my_sum = my_sum + a[i];                               //all PEs accumulate elements assigned to them into local variable my_sum&lt;br /&gt;
 }&lt;br /&gt;
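The message-passing version of the second part is described above but its fragment is not reproduced here. A minimal Python sketch, using a queue as a stand-in for send/receive channels and hypothetical partial sums (PE0 holds 9, while PE1 and PE2 send 5 and 7), might be:

```python
import queue
import threading

channel = queue.Queue()           # stands in for send/recv between PEs

def non_main_pe(my_sum):
    # Every PE other than PE0 sends its local partial sum to PE0.
    channel.put(my_sum)

def main_pe(my_sum, num_other_pe):
    # PE0 receives the partial sum of every other PE and adds it
    # to its own, producing the global sum.
    total = my_sum
    for _ in range(num_other_pe):
        total += channel.get()    # blocks until a message arrives
    return total

# Illustrative partial sums: PE0 holds 9; PE1 and PE2 send 5 and 7.
senders = [threading.Thread(target=non_main_pe, args=(s,)) for s in (5, 7)]
for t in senders:
    t.start()
grand_total = main_pe(9, num_other_pe=2)
for t in senders:
    t.join()
print(grand_total)                # prints 21
```

Unlike the shared-memory variant, no lock is needed here: the queue serializes the messages, which is exactly the coordination message passing provides.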
&lt;br /&gt;
&lt;br /&gt;
In the above code, data parallelism is achieved by letting each processing element perform actions on separate elements of the array, which are identified using the PE's id. For instance, if three processing elements are used, then one processing element would start at i = 0, one would start at i = 1, and the last would start at i = 2. Since there are three processing elements, the array index for each will increase by three on each iteration until the task is complete (note that in our example the elements assigned to each PE are interleaved rather than contiguous). If the length of the array is a multiple of three, then each processing element takes the same amount of time to execute its portion of the task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The picture below illustrates how elements of the array are assigned among different PEs for a specific case: the length of the array is 7 and there are 3 PEs available. Elements in the array are marked by their indices (0 to 6). As shown in the picture, PE0 will work on elements with indices 0, 3, 6; PE1 is in charge of elements with indices 1, 4; and elements with indices 2, 5 are assigned to PE2. In this way, the 3 PEs work collectively on the array, while each PE works on different elements. Thus, data parallelism is achieved.&lt;br /&gt;
&lt;br /&gt;
[[Image:506wiki1.png|frame|center|150px|Illustration of data parallel programming (adapted from [http://computing.llnl.gov/tutorials/parallel_comp/#ModelsData Introduction to Parallel Computing])]]&lt;br /&gt;
&lt;br /&gt;
== Combining with Message Passing and Shared Memory ==&lt;br /&gt;
Although the shared memory and message passing models may be combined into hybrid approaches, the two models are fundamentally different ways of addressing the same problem (of access control to common data). In contrast, the data parallel model is concerned with a fundamentally different problem (how to divide work into parallel tasks). As such, the data parallel model may be used in conjunction with either the shared memory or the message passing model without conflict. In fact, Klaiber (1994) compares the performance of a number of data parallel programs implemented with both shared memory and message passing models.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One of the major advantages of combining the data parallel and message passing models is a reduction in the amount and complexity of communication required relative to a task parallel approach. Similarly, combining the data parallel and shared memory models tends to simplify and reduce the amount of synchronization required. Much as the shared memory model can benefit from specialized hardware, the data parallel programming model can as well. [[#Definitions |''SIMD'']] processors are specifically designed to run data parallel algorithms. These processors perform a single instruction on many different data locations simultaneously. Modern examples include CUDA processors developed by nVidia and Cell processors developed by STI (Sony, Toshiba, and IBM). For the curious, example code for CUDA processors is provided in the Appendix. However, whereas the shared memory model can be a difficult and costly abstraction in the absence of hardware support, the data parallel model—like the message passing model—does not require hardware support.&lt;br /&gt;
&lt;br /&gt;
= Task Parallel Model =&lt;br /&gt;
Task parallelism is a form of parallelization in which multiple, distinct instructions are executed either on the same data or on different data. It focuses on distributing the execution of processes (threads) across different parallel computing nodes. As part of the workflow, the different execution threads communicate with one another in order to share data.&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
Suppose the task to be accomplished is to compute the sum of the results of executing instruction &amp;lt;tt&amp;gt;A&amp;lt;/tt&amp;gt; and instruction &amp;lt;tt&amp;gt;B&amp;lt;/tt&amp;gt;. The following example illustrates how task parallelism achieves this.&lt;br /&gt;
&lt;br /&gt;
The pseudocode below illustrates task parallelism:&lt;br /&gt;
&amp;lt;pre&amp;gt;program:&lt;br /&gt;
do &lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
if CPU=&amp;quot;a&amp;quot; then&lt;br /&gt;
   do task &amp;quot;A&amp;quot;&lt;br /&gt;
else if CPU=&amp;quot;b&amp;quot; then&lt;br /&gt;
   do task &amp;quot;B&amp;quot;&lt;br /&gt;
end if&lt;br /&gt;
&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If we write the code as above and launch it on a 2-processor system, then the runtime environment will execute it accordingly.&lt;br /&gt;
In an SPMD system, both CPUs execute the same code. In a parallel environment, both have access to the same data. The &amp;quot;if&amp;quot; clause differentiates between the CPUs: CPU &amp;quot;a&amp;quot; evaluates the &amp;quot;if&amp;quot; as true and CPU &amp;quot;b&amp;quot; evaluates the &amp;quot;else if&amp;quot; as true, so each has its own task. Both CPUs then execute separate code blocks simultaneously, performing different tasks.&lt;br /&gt;
Code executed by CPU &amp;quot;a&amp;quot;:&lt;br /&gt;
program:&lt;br /&gt;
...&lt;br /&gt;
do task &amp;quot;A&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
Code executed by CPU &amp;quot;b&amp;quot;:&lt;br /&gt;
program:&lt;br /&gt;
...&lt;br /&gt;
do task &amp;quot;B&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
This concept can now be generalized to any number of processors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
program:&lt;br /&gt;
...&lt;br /&gt;
if CPU=&amp;quot;a&amp;quot; then&lt;br /&gt;
   do task &amp;quot;A&amp;quot;&lt;br /&gt;
else if CPU=&amp;quot;b&amp;quot; then&lt;br /&gt;
   do task &amp;quot;B&amp;quot;&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
if CPU=&amp;quot;n&amp;quot; then&lt;br /&gt;
   do task &amp;quot;N&amp;quot;&lt;br /&gt;
&lt;br /&gt;
end if&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
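The two-CPU example above can be rendered as a runnable Python sketch (the task bodies and result bookkeeping are illustrative assumptions, not from the original text): each thread receives an id, and the if/else-if dispatch ensures that each CPU runs only its own task.&lt;br /&gt;

```python
import threading

results = {}

def program(cpu_id):
    # The id test plays the role of the if/else-if in the pseudocode:
    # both threads run the same program, but each selects its own task.
    if cpu_id == "a":
        results["A"] = sum(x * x for x in range(5))  # placeholder task "A"
    elif cpu_id == "b":
        results["B"] = sum(2 * x for x in range(5))  # placeholder task "B"

threads = [threading.Thread(target=program, args=(c,)) for c in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After both threads join, results holds one entry per task; generalizing to n processors simply extends the dispatch chain, one branch per task.&lt;br /&gt;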
&lt;br /&gt;
= Data Parallel Model vs Task Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow: there is only one control processor that directs the activities of all the processing elements. In stark contrast is task parallelism (MIMD: Multiple Instruction, Multiple Data): characterized by multiple control flows, it allows the concurrent execution of multiple instruction streams, each of which manipulates its own data and serves a separate function. Below is a contrast between the data parallelism and task parallelism models from Wikipedia: [http://en.wikipedia.org/wiki/SIMD SIMD] and [http://en.wikipedia.org/wiki/MIMD MIMD]. In the following subsections we continue to compare and contrast features of the data-parallel and task-parallel models to help the reader understand the unique characteristics of the data-parallel programming model.&lt;br /&gt;
[[Image:Smid.png|frame|center|425px|contrast between data parallelism and task parallelism]]&lt;br /&gt;
&lt;br /&gt;
Since each parallel task is unique, a major limitation of task parallel algorithms is that the maximum degree of parallelism attainable is limited to the number of tasks that have been formulated.  This is in contrast to data parallel algorithms, which can be scaled easily to take advantage of an arbitrary number of processing elements.  In addition, unique tasks are likely to have significantly different run times, making it more challenging to balance load across processors. [[#References | Haveraaen (2000)]] also notes that task parallel algorithms are inherently more complex, requiring a greater degree of communication and synchronization.&lt;br /&gt;
&lt;br /&gt;
== Synchronous vs Asynchronous ==&lt;br /&gt;
While the [http://en.wikipedia.org/wiki/Lockstep_(computing) lockstep] imposed by data parallelism on all data streams ensures synchronous computation (all PEs perform their tasks at exactly the same pace), in task parallelism every processor performs its task at its own pace, which we call asynchronous computation. Thus, at certain points in a task parallel program's execution, communication and synchronization primitives are needed to allow the different instruction streams to coordinate their efforts, and that is where variable sharing and message passing come into play.&lt;br /&gt;
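As an illustrative sketch (assuming Python threads as the processing elements), a barrier is one such synchronization primitive: each thread performs its phase-1 work at its own pace, and the barrier guarantees that no thread begins phase 2 until every thread has finished phase 1.&lt;br /&gt;

```python
import threading

NUM_PES = 3
barrier = threading.Barrier(NUM_PES)
log = []
log_lock = threading.Lock()

def pe(pe_id):
    with log_lock:
        log.append(("phase1", pe_id))  # asynchronous work, any interleaving
    barrier.wait()                     # rendezvous: all PEs must arrive here
    with log_lock:
        log.append(("phase2", pe_id))  # runs only after every phase-1 entry

threads = [threading.Thread(target=pe, args=(i,)) for i in range(NUM_PES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Whatever order the scheduler chooses within each phase, every phase-1 log entry precedes every phase-2 entry.&lt;br /&gt;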
&lt;br /&gt;
== Determinism vs. Non-Determinism ==&lt;br /&gt;
Data parallelism's synchronous nature and task parallelism's asynchrony give rise to another pair of distinguishing features: determinism versus non-determinism. Data parallelism is deterministic, i.e., computing with the same input will always yield the same result, since its synchronism ensures that issues such as relative timing between PEs do not arise. In contrast, task parallelism's asynchronous updates of common data can give rise to non-determinism, i.e., the same input will not always yield the same result (the result of a computation also depends on factors outside the program's control, such as the scheduling and timing of other PEs). Non-determinism makes it harder to write and maintain correct programs. This partially explains the advantage of the data parallel programming model over the task parallel model in terms of development effort (also discussed in section 4.2).&lt;br /&gt;
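The deterministic case can be sketched as follows (a sequential Python simulation of the interleaved partitioning; the helper name is made up): because the partition of work is fixed by the data, the result is independent of the number of PEs.&lt;br /&gt;

```python
def parallel_sum(a, num_pes):
    # Simulate the data parallel reduction: PE p owns the interleaved
    # index set p, p + num_pes, ... and accumulates a partial sum.
    a = list(a)                  # work on a copy; partitioning is data-driven
    partial = [0] * num_pes
    for pe_id in range(num_pes):
        for i in range(pe_id, len(a), num_pes):
            a[i] = a[i] * i
            partial[pe_id] += a[i]
    return sum(partial)
```

For a 7-element array of ones, parallel_sum returns 21 whether 1, 3, or 7 PEs are used; a task parallel program with unsynchronized updates to shared data offers no such guarantee.&lt;br /&gt;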
&lt;br /&gt;
&lt;br /&gt;
== Major differences between the data parallel and task parallel models ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
|+ '''Comparison between data parallel and task parallel programming models.'''&lt;br /&gt;
|-&lt;br /&gt;
! Aspects&lt;br /&gt;
! Data Parallel&lt;br /&gt;
! Task Parallel&lt;br /&gt;
|-&lt;br /&gt;
| Decomposition&lt;br /&gt;
| Partition data into subsets&lt;br /&gt;
| Partition program into subtasks&lt;br /&gt;
|-&lt;br /&gt;
| Parallel tasks&lt;br /&gt;
| Identical&lt;br /&gt;
| Unique&lt;br /&gt;
|-&lt;br /&gt;
| Degree of parallelism&lt;br /&gt;
| Scales easily&lt;br /&gt;
| Fixed&lt;br /&gt;
|-&lt;br /&gt;
| Load balancing&lt;br /&gt;
| Easier&lt;br /&gt;
| Harder&lt;br /&gt;
|-&lt;br /&gt;
| Communication overhead&lt;br /&gt;
| Lower&lt;br /&gt;
| Higher&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Definitions =&lt;br /&gt;
&lt;br /&gt;
* ''Data parallel.''  A data parallel algorithm is composed of a set of identical tasks which operate on different subsets of common data.&lt;br /&gt;
* ''Task parallel.''  A task parallel algorithm is composed of a set of differing tasks which operate on common data.&lt;br /&gt;
* ''SIMD (single-instruction-multiple-data).''  A processor which executes a single instruction simultaneously on multiple data locations.&lt;br /&gt;
* ''MIMD (multiple-instruction-multiple-data).'' A processor architecture which can execute multiple instructions across multiple data elements simultaneously.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
* David E. Culler, Jaswinder Pal Singh, and Anoop Gupta, [http://portal.acm.org/citation.cfm?id=550071 ''Parallel Computer Architecture: A Hardware/Software Approach,''] Morgan-Kauffman, 1999.&lt;br /&gt;
* Ian Foster, [http://www.mcs.anl.gov/~itf/dbpp/ ''Designing and Building Parallel Programs,''] Addison-Wesley, 1995.&lt;br /&gt;
* Magne Haveraaen, [http://portal.acm.org/citation.cfm?id=1239917 &amp;quot;Machine and collection abstractions for user-implemented data-parallel programming,&amp;quot;] ''Scientific Programming,'' 8(4):231-246, 2000.&lt;br /&gt;
* W. Daniel Hillis and Guy L. Steele, Jr., [http://portal.acm.org/citation.cfm?id=7903 &amp;quot;Data parallel algorithms,&amp;quot;] ''Communications of the ACM,'' 29(12):1170-1183, December 1986.&lt;br /&gt;
* Alexander C. Klaiber and Henry M. Levy, [http://portal.acm.org/citation.cfm?id=192020 &amp;quot;A comparison of message passing and shared memory architectures for data parallel programs,&amp;quot;] in ''Proceedings of the 21st Annual International Symposium on Computer Architecture,'' April 1994, pp. 94-105.&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* Philip J. Hatcher, Michael Jay Quinn, ''Data-Parallel Programming on MIMD Computers'', The MIT Press, 1991.&lt;br /&gt;
* Blaise Barney, &amp;quot;Introduction to Parallel Computing: Data Parallel Model&amp;quot;, Lawrence Livermore National Laboratory, [https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData], January 2009.&lt;br /&gt;
* Guy Blelloch, &amp;quot;Is Parallel Programming Hard?&amp;quot;, Carnegie Mellon University, [http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard], April 2009.&lt;br /&gt;
* Björn Lisper, ''Data parallelism and functional programming'', Lecture Notes in Computer Science, Volume 1132/1996, pp. 220-251, Springer Berlin, 1996.&lt;br /&gt;
* ''SIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SIMD http://en.wikipedia.org/wiki/SIMD].&lt;br /&gt;
* ''MIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/MIMD http://en.wikipedia.org/wiki/MIMD].&lt;br /&gt;
* ''Lockstep'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/Lockstep_(computing) http://en.wikipedia.org/wiki/Lockstep_(computing)].&lt;br /&gt;
* ''SPMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SPMD http://en.wikipedia.org/wiki/SPMD].&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43598</id>
		<title>CSC/ECE 506 Spring 2011/ch2 JR</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43598"/>
		<updated>2011-02-01T00:44:45Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Supplement to Chapter 2: The Data Parallel Programming Model=&lt;br /&gt;
Chapter 2 of [[#References | Solihin (2008)]] covers the shared memory and message passing parallel programming models.  However, it does not give an historical context for the development of parallel programming models.  It also does not address other commonly recognized parallel programming models like the [[#Definitions | ''task parallel'']] model or the [[#Definitions | ''data parallel'']] model, which have been covered in other treatments like [[#References | Foster (1995)]] and [[#References | Culler (1999)]].&lt;br /&gt;
&lt;br /&gt;
Shared memory and message passing models are often presented as competing models, but the data and task parallel models address fundamentally different programming concerns and can therefore be used in conjunction with either.  The goal of this supplement is to provide historical context for the development of parallel programming models and a treatment of the data and task parallel models to complement Chapter 2 of [[#References | Solihin (2008)]].  &lt;br /&gt;
&lt;br /&gt;
= Overview =&lt;br /&gt;
Whereas the shared memory and message passing models focus on how parallel tasks access common data, the [[#Definitions | ''data parallel'']] model focuses on how to divide up work into parallel tasks.  Data parallel algorithms exploit parallelism by dividing a problem into a number of identical tasks which execute on different subsets of common data.  The logical opposite of data parallel is task parallel, in which a number of distinct tasks operate on common data.  Historically, each parallel programming model was developed to take advantage of advancements in computer architecture.&lt;br /&gt;
&lt;br /&gt;
= History =&lt;br /&gt;
As computer architectures have evolved, so have parallel programming models. The earliest advancements in parallel computers took advantage of bit-level parallelism.  These computers used vector processing, which required a shared memory programming model.  As performance returns from this architecture diminished, the emphasis shifted to instruction-level parallelism and the message passing model began to dominate.  Most recently, with the move to cluster-based machines, there has been an increased emphasis on thread-level parallelism. This has corresponded to an increased interest in the data parallel programming model.&lt;br /&gt;
&lt;br /&gt;
== Bit-level parallelism in the 1970's ==&lt;br /&gt;
The major performance improvements in computers during this time were due to the ability to execute operations on a 32-bit word at one time ([[#References|Culler (1999), p. 15.]]).  The dominant supercomputers of the time, like the Cray and the ILLIAC IV, were mainly [[#Definitions| ''SIMD'']] architectures and used a shared memory programming model.  They each used different forms of vector processing ([[#References|Culler (1999), p. 21.]]). &lt;br /&gt;
Development of the ILLIAC IV began in 1964 and wasn't finished until 1975 [http://en.wikipedia.org/wiki/ILLIAC_IV].  A central processor was connected to the main memory and delegated tasks to the individual PEs, each of which had its own memory cache [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].  Each PE could operate on an 8-, 32-, or 64-bit operand at a given time [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].&lt;br /&gt;
&lt;br /&gt;
The Cray-1 was installed at Los Alamos National Laboratory in 1976 by Cray Research and had similar performance to the ILLIAC IV [http://en.wikipedia.org/wiki/ILLIAC_IV].  The Cray machine relied heavily on the use of registers rather than the array of individual processors used in the ILLIAC IV.  Each processor was connected to main memory and had a number of 64-bit registers used to perform operations [http://www.eecg.toronto.edu/~moshovos/ACA05/read/cray1.pdf].&lt;br /&gt;
&lt;br /&gt;
== Move to instruction-level parallelism in the 1980's ==&lt;br /&gt;
&lt;br /&gt;
Increasing the word size above 32 bits offered diminishing returns in terms of performance ([[#References|Culler (1999), p. 15.]]). In the mid-1980's the emphasis changed from bit-level parallelism to instruction-level parallelism, which involved increasing the number of instructions that could be executed at one time ([[#References|Culler (1999), p. 15.]]).  The message passing model gave programmers the ability to divide up instructions in order to take advantage of this architecture. &lt;br /&gt;
&lt;br /&gt;
== Current Trend to Thread-level parallelism ==&lt;br /&gt;
The move to cluster-based machines in the past decade has added another layer of complexity to parallelism.  Since computers can be located across a network from each other, there is more emphasis on software acting as a bridge [http://cobweb.ecn.purdue.edu/~pplinux/ppcluster.html]. This has led to a greater emphasis on thread- or task-level parallelism [http://en.wikipedia.org/wiki/Thread-level_parallelism] and the addition of the data parallel programming model to existing message passing or shared memory models [http://en.wikipedia.org/wiki/Thread-level_parallelism].  &lt;br /&gt;
&lt;br /&gt;
= Data Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow. In Flynn's taxonomy, SIMD corresponds to performing the same operation repeatedly over a large data set. There is only one control processor that directs the activities of all the processing elements. In a multiprocessor system executing a single set of instructions (SIMD), data parallelism is achieved when each processor performs the same task on different pieces of distributed data. In some situations, a single execution thread controls the operations on all pieces of data. In others, different threads control the operation, but they execute the same code.&lt;br /&gt;
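A minimal illustration (using Python's built-in map as a stand-in for the single instruction stream) applies one logical operation uniformly to every element:&lt;br /&gt;

```python
# Single control flow: one operation, applied to all data elements.
# On SIMD hardware this would be issued as a single vector instruction.
data = [1, 2, 3, 4]
doubled = list(map(lambda x: 2 * x, data))
```

Here doubled is [2, 4, 6, 8]; there is no per-element branching, which is exactly what distinguishes this model from task parallelism.&lt;br /&gt;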
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
&lt;br /&gt;
This section shows a simple example, adapted from the Solihin textbook (pp. 24-27), that illustrates the data-parallel programming model. Each of the code fragments below is written in pseudocode style.&lt;br /&gt;
&lt;br /&gt;
Suppose we want to perform the following task on an array &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt;: updating each element of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; by the product of itself and its index, and adding together the elements of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; into the variable &amp;lt;code&amp;gt;sum&amp;lt;/code&amp;gt;. The corresponding code is shown below.&lt;br /&gt;
&lt;br /&gt;
 // simple sequential task&lt;br /&gt;
 sum = 0;&lt;br /&gt;
 '''for''' (i = 0; i &amp;lt; a.length; i++)&lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    sum = sum + a[i];&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
When we orchestrate the task using the data-parallel programming model, the program can be divided into two parts. The first part performs the same operations on separate elements of the array for each processing element (sometimes referred to as PE or pe), and the second part reorganizes the data among all processing elements (in our example, the data reorganization is summing up values across the different processing elements). Since the data-parallel programming model defines only the overall effects of the parallel steps, the second part can be accomplished through either shared memory or message passing. The three code fragments below are examples of the first part of the program, a shared-memory version of the second part, and a message-passing version of the second part, respectively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 // data parallel programming: let each PE perform the same task on different pieces of distributed data&lt;br /&gt;
 pe_id = getid();&lt;br /&gt;
 my_sum = 0;&lt;br /&gt;
 '''for''' (i = pe_id; i &amp;lt; a.length; i += number_of_pe)         //separate elements of the array are assigned to each PE &lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    my_sum = my_sum + a[i];                               //all PEs accumulate elements assigned to them into local variable my_sum&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the above code, data parallelism is achieved by letting each processing element perform actions on separate elements of the array, which are identified using the PE's id. For instance, if three processing elements are used, then one processing element would start at i = 0, one would start at i = 1, and the last would start at i = 2. Since there are three processing elements, the index of the array for each increases by three on each iteration until the task is complete (note that in our example the elements assigned to each PE are interleaved rather than contiguous). If the length of the array is a multiple of three, then each processing element takes the same amount of time to execute its portion of the task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The picture below illustrates how elements of the array are assigned among different PEs for the specific case: length of the array is 7 and there are 3 PEs available. Elements in the array are marked by their indexes (0 to 6). As shown in the picture, PE0 will work on elements with index 0, 3, 6; PE1 is in charge of elements with index 1, 4; and elements with index 2, 5 are assigned to PE2. In this way, these 3 PEs work collectively on the array, while each PE works on different elements. Thus, data parallelism is achieved.&lt;br /&gt;
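The distribution just described can be sketched with a small (hypothetical) Python helper that returns, for each PE, the interleaved indices it owns:&lt;br /&gt;

```python
def assign_elements(length, num_pes):
    # PE p owns indices p, p + num_pes, p + 2 * num_pes, ...
    return [list(range(p, length, num_pes)) for p in range(num_pes)]
```

For a 7-element array and 3 PEs this yields [[0, 3, 6], [1, 4], [2, 5]], matching the assignment shown in the picture below.&lt;br /&gt;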
&lt;br /&gt;
[[Image:506wiki1.png|frame|center|150px|Illustration of data parallel programming(adapted from [http://computing.llnl.gov/tutorials/parallel_comp/#ModelsData Introduction to Parallel Computing])]]&lt;br /&gt;
&lt;br /&gt;
== Combining with Message Passing and Shared Memory ==&lt;br /&gt;
Although the shared memory and message passing models may be combined into hybrid approaches, the two models are fundamentally different ways of addressing the same problem (of access control to common data). In contrast, the data parallel model is concerned with a fundamentally different problem (how to divide work into parallel tasks). As such, the data parallel model may be used in conjunction with either the shared memory or the message passing model without conflict. In fact, Klaiber (1994) compares the performance of a number of data parallel programs implemented with both shared memory and message passing models.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
One of the major advantages of combining the data parallel and message passing models is a reduction in the amount and complexity of communication required relative to a task parallel approach. Similarly, combining the data parallel and shared memory models tends to simplify and reduce the amount of synchronization required. Much as the shared memory model can benefit from specialized hardware, the data parallel programming model can as well. [[#Definitions |''SIMD'']] processors are specifically designed to run data parallel algorithms. These processors perform a single instruction on many different data locations simultaneously. Modern examples include CUDA processors developed by nVidia and Cell processors developed by STI (Sony, Toshiba, and IBM). For the curious, example code for CUDA processors is provided in the Appendix. However, whereas the shared memory model can be a difficult and costly abstraction in the absence of hardware support, the data parallel model—like the message passing model—does not require hardware support.&lt;br /&gt;
&lt;br /&gt;
= Task Parallel Model =&lt;br /&gt;
Task parallelism is a form of parallelization in which multiple, distinct instructions are executed either on the same data or on different data. It focuses on distributing the execution of processes (threads) across different parallel computing nodes. As part of the workflow, the different execution threads communicate with one another in order to share data.&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
Suppose the task to be accomplished is to compute the sum of the results of executing instruction 'A' and instruction 'B'. The following example illustrates how task parallelism achieves this.&lt;br /&gt;
&lt;br /&gt;
The pseudocode below illustrates task parallelism:&lt;br /&gt;
&amp;lt;pre&amp;gt;program:&lt;br /&gt;
do &lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
if CPU=&amp;quot;a&amp;quot; then&lt;br /&gt;
   do task &amp;quot;A&amp;quot;&lt;br /&gt;
else if CPU=&amp;quot;b&amp;quot; then&lt;br /&gt;
   do task &amp;quot;B&amp;quot;&lt;br /&gt;
end if&lt;br /&gt;
&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If we write the code as above and launch it on a 2-processor system, then the runtime environment will execute it accordingly.&lt;br /&gt;
In an SPMD system, both CPUs execute the same code. In a parallel environment, both have access to the same data. The &amp;quot;if&amp;quot; clause differentiates between the CPUs: CPU &amp;quot;a&amp;quot; evaluates the &amp;quot;if&amp;quot; as true and CPU &amp;quot;b&amp;quot; evaluates the &amp;quot;else if&amp;quot; as true, so each has its own task. Both CPUs then execute separate code blocks simultaneously, performing different tasks.&lt;br /&gt;
Code executed by CPU &amp;quot;a&amp;quot;:&lt;br /&gt;
program:&lt;br /&gt;
...&lt;br /&gt;
do task &amp;quot;A&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
Code executed by CPU &amp;quot;b&amp;quot;:&lt;br /&gt;
program:&lt;br /&gt;
...&lt;br /&gt;
do task &amp;quot;B&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
This concept can now be generalized to any number of processors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
program:&lt;br /&gt;
...&lt;br /&gt;
if CPU=&amp;quot;a&amp;quot; then&lt;br /&gt;
   do task &amp;quot;A&amp;quot;&lt;br /&gt;
else if CPU=&amp;quot;b&amp;quot; then&lt;br /&gt;
   do task &amp;quot;B&amp;quot;&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
if CPU=&amp;quot;n&amp;quot; then&lt;br /&gt;
   do task &amp;quot;N&amp;quot;&lt;br /&gt;
&lt;br /&gt;
end if&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Data Parallel Model vs Task Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow: there is only one control processor that directs the activities of all the processing elements. In stark contrast is task parallelism (MIMD: Multiple Instruction, Multiple Data): characterized by multiple control flows, it allows the concurrent execution of multiple instruction streams, each of which manipulates its own data and serves a separate function. Below is a contrast between the data parallelism and task parallelism models from Wikipedia: [http://en.wikipedia.org/wiki/SIMD SIMD] and [http://en.wikipedia.org/wiki/MIMD MIMD]. In the following subsections we continue to compare and contrast features of the data-parallel and task-parallel models to help the reader understand the unique characteristics of the data-parallel programming model.&lt;br /&gt;
[[Image:Smid.png|frame|center|425px|contrast between data parallelism and task parallelism]]&lt;br /&gt;
&lt;br /&gt;
Since each parallel task is unique, a major limitation of task parallel algorithms is that the maximum degree of parallelism attainable is limited to the number of tasks that have been formulated.  This is in contrast to data parallel algorithms, which can be scaled easily to take advantage of an arbitrary number of processing elements.  In addition, unique tasks are likely to have significantly different run times, making it more challenging to balance load across processors. [[#References | Haveraaen (2000)]] also notes that task parallel algorithms are inherently more complex, requiring a greater degree of communication and synchronization.&lt;br /&gt;
&lt;br /&gt;
== Synchronous vs Asynchronous ==&lt;br /&gt;
While the [http://en.wikipedia.org/wiki/Lockstep_(computing) lockstep] imposed by data parallelism on all data streams ensures synchronous computation (all PEs perform their tasks at exactly the same pace), in task parallelism every processor performs its task at its own pace, which we call asynchronous computation. Thus, at certain points in a task parallel program's execution, communication and synchronization primitives are needed to allow the different instruction streams to coordinate their efforts, and that is where variable sharing and message passing come into play.&lt;br /&gt;
&lt;br /&gt;
== Determinism vs. Non-Determinism ==&lt;br /&gt;
Data parallelism's synchronous nature and task parallelism's asynchrony give rise to another pair of distinguishing features: determinism versus non-determinism. Data parallelism is deterministic, i.e., computing with the same input will always yield the same result, since its synchronism ensures that issues such as relative timing between PEs do not arise. In contrast, task parallelism's asynchronous updates of common data can give rise to non-determinism, i.e., the same input will not always yield the same result (the result of a computation also depends on factors outside the program's control, such as the scheduling and timing of other PEs). Non-determinism makes it harder to write and maintain correct programs. This partially explains the advantage of the data parallel programming model over the task parallel model in terms of development effort (also discussed in section 4.2).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Major differences between the data parallel and task parallel models ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
|+ '''Comparison between data parallel and task parallel programming models.'''&lt;br /&gt;
|-&lt;br /&gt;
! Aspects&lt;br /&gt;
! Data Parallel&lt;br /&gt;
! Task Parallel&lt;br /&gt;
|-&lt;br /&gt;
| Decomposition&lt;br /&gt;
| Partition data into subsets&lt;br /&gt;
| Partition program into subtasks&lt;br /&gt;
|-&lt;br /&gt;
| Parallel tasks&lt;br /&gt;
| Identical&lt;br /&gt;
| Unique&lt;br /&gt;
|-&lt;br /&gt;
| Degree of parallelism&lt;br /&gt;
| Scales easily&lt;br /&gt;
| Fixed&lt;br /&gt;
|-&lt;br /&gt;
| Load balancing&lt;br /&gt;
| Easier&lt;br /&gt;
| Harder&lt;br /&gt;
|-&lt;br /&gt;
| Communication overhead&lt;br /&gt;
| Lower&lt;br /&gt;
| Higher&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Definitions =&lt;br /&gt;
&lt;br /&gt;
* ''Data parallel.''  A data parallel algorithm is composed of a set of identical tasks which operate on different subsets of common data.&lt;br /&gt;
* ''Task parallel.''  A task parallel algorithm is composed of a set of differing tasks which operate on common data.&lt;br /&gt;
* ''SIMD (single-instruction-multiple-data).''  A processor which executes a single instruction simultaneously on multiple data locations.&lt;br /&gt;
* ''MIMD (multiple-instruction-multiple-data).'' A processor architecture which can execute multiple instructions across multiple data elements simultaneously.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
* David E. Culler, Jaswinder Pal Singh, and Anoop Gupta, [http://portal.acm.org/citation.cfm?id=550071 ''Parallel Computer Architecture: A Hardware/Software Approach,''] Morgan-Kauffman, 1999.&lt;br /&gt;
* Ian Foster, [http://www.mcs.anl.gov/~itf/dbpp/ ''Designing and Building Parallel Programs,''] Addison-Wesley, 1995.&lt;br /&gt;
* Magne Haveraaen, [http://portal.acm.org/citation.cfm?id=1239917 &amp;quot;Machine and collection abstractions for user-implemented data-parallel programming,&amp;quot;] ''Scientific Programming,'' 8(4):231-246, 2000.&lt;br /&gt;
* W. Daniel Hillis and Guy L. Steele, Jr., [http://portal.acm.org/citation.cfm?id=7903 &amp;quot;Data parallel algorithms,&amp;quot;] ''Communications of the ACM,'' 29(12):1170-1183, December 1986.&lt;br /&gt;
* Alexander C. Klaiber and Henry M. Levy, [http://portal.acm.org/citation.cfm?id=192020 &amp;quot;A comparison of message passing and shared memory architectures for data parallel programs,&amp;quot;] in ''Proceedings of the 21st Annual International Symposium on Computer Architecture,'' April 1994, pp. 94-105.&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* Philip J. Hatcher, Michael Jay Quinn, ''Data-Parallel Programming on MIMD Computers'', The MIT Press, 1991.&lt;br /&gt;
* Blaise Barney, &amp;quot;Introduction to Parallel Computing: Data Parallel Model&amp;quot;, Lawrence Livermore National Laboratory, [https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData], January 2009.&lt;br /&gt;
* Guy Blelloch, &amp;quot;Is Parallel Programming Hard?&amp;quot;, Carnegie Mellon University, [http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard], April 2009.&lt;br /&gt;
* Björn Lisper, ''Data parallelism and functional programming'', Lecture Notes in Computer Science, Volume 1132/1996, pp. 220-251, Springer Berlin, 1996.&lt;br /&gt;
* ''SIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SIMD http://en.wikipedia.org/wiki/SIMD].&lt;br /&gt;
* ''MIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/MIMD http://en.wikipedia.org/wiki/MIMD].&lt;br /&gt;
* ''Lockstep'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/Lockstep_(computing) http://en.wikipedia.org/wiki/Lockstep_(computing)].&lt;br /&gt;
* ''SPMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SPMD http://en.wikipedia.org/wiki/SPMD].&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43597</id>
		<title>CSC/ECE 506 Spring 2011/ch2 JR</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43597"/>
		<updated>2011-02-01T00:31:55Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Supplement to Chapter 2: The Data Parallel Programming Model=&lt;br /&gt;
Chapter 2 of [[#References | Solihin (2008)]] covers the shared memory and message passing parallel programming models.  However, it does not give an historical context for the development of parallel programming models.  It also does not address other commonly recognized parallel programming models like the [[#Definitions | ''task parallel'']] model or the [[#Definitions | ''data parallel'']] model, which have been covered in other treatments like [[#References | Foster (1995)]] and [[#References | Culler (1999)]].&lt;br /&gt;
&lt;br /&gt;
Shared memory and message passing models are often presented as competing models, but the data and task parallel models address fundamentally different programming concerns and can therefore be used in conjunction with either.  The goal of this supplement is to provide historical context for the development of parallel programming models and a treatment of the data and task parallel models to complement Chapter 2 of [[#References | Solihin (2008)]].  &lt;br /&gt;
&lt;br /&gt;
= Overview =&lt;br /&gt;
Whereas the shared memory and message passing models focus on how parallel tasks access common data, the [[#Definitions | ''data parallel'']] model focuses on how to divide up work into parallel tasks.  Data parallel algorithms exploit parallelism by dividing a problem into a number of identical tasks which execute on different subsets of common data.  The logical opposite of data parallel is task parallel, in which a number of distinct tasks operate on common data.  Historically, each parallel programming model was developed to take advantage of advancements in computer architecture.&lt;br /&gt;
&lt;br /&gt;
= History =&lt;br /&gt;
As computer architectures have evolved, so have parallel programming models. The earliest advancements in parallel computers took advantage of bit-level parallelism.  These computers used vector processing, which required a shared memory programming model.  As performance returns from this architecture diminished, the emphasis shifted to instruction-level parallelism and the message passing model began to dominate.  Most recently, with the move to cluster-based machines, there has been an increased emphasis on thread-level parallelism. This has corresponded to an increased interest in the data parallel programming model.&lt;br /&gt;
&lt;br /&gt;
== Bit-level parallelism in the 1970's ==&lt;br /&gt;
The major performance improvements from computers during this time were due to the ability to execute 32-bit word size operations at one time ([[#References|Culler (1999), p. 15.]]).  The dominant supercomputers of the time, like the Cray and the ILLIAC IV, were mainly [[#Definitions| ''SIMD'']] architectures and used a shared memory programming model.  They each used different forms of vector processing ([[#References|Culler (1999), p. 21.]]). &lt;br /&gt;
Development of the ILLIAC IV began in 1964 and wasn't finished until 1975 [http://en.wikipedia.org/wiki/ILLIAC_IV].  A central processor was connected to the main memory and delegated tasks to individual PEs (processing elements), each of which had its own memory cache [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].  Each PE could operate on an 8-, 32-, or 64-bit operand at a given time [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].&lt;br /&gt;
&lt;br /&gt;
The Cray-1 was installed at Los Alamos National Laboratory by Cray Research in 1976 and had similar performance to the ILLIAC IV [http://en.wikipedia.org/wiki/ILLIAC_IV].  The Cray machine relied heavily on vector registers rather than the array of individual processors used by the ILLIAC IV.  Each processor was connected to main memory and had a number of 64-bit registers used to perform operations [http://www.eecg.toronto.edu/~moshovos/ACA05/read/cray1.pdf].&lt;br /&gt;
&lt;br /&gt;
== Move to instruction-level parallelism in the 1980's ==&lt;br /&gt;
&lt;br /&gt;
Increasing the word size above 32-bits offered diminishing returns in terms of performance ([[#References|Culler (1999), p. 15.]]). In the mid-1980's the emphasis changed from bit-level parallelism to instruction-level parallelism, which involved increasing the number of instructions that could be executed at one time ([[#References|Culler (1999), p. 15.]]).  The message passing model allowed programmers the ability to divide up instructions in order to take advantage of this architecture. &lt;br /&gt;
&lt;br /&gt;
== Current Trend to Thread-level parallelism ==&lt;br /&gt;
The move to cluster-based machines in the past decade has added another layer of complexity to parallelism.  Since the computers in a cluster may be located across a network from one another, there is more emphasis on software acting as a bridge [http://cobweb.ecn.purdue.edu/~pplinux/ppcluster.html]. This has led to a greater emphasis on thread- or task-level parallelism [http://en.wikipedia.org/wiki/Thread-level_parallelism] and the addition of the data parallel programming model to existing message passing or shared memory models [http://en.wikipedia.org/wiki/Thread-level_parallelism].  &lt;br /&gt;
&lt;br /&gt;
= Data Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow. In Flynn's taxonomy, SIMD is analogous to performing the same operation repeatedly over a large data set. There is only one control processor that directs the activities of all the processing elements. In a multiprocessor system executing a single set of instructions (SIMD), data parallelism is achieved when each processor performs the same task on different pieces of distributed data. In some situations, a single execution thread controls operations on all pieces of data. In others, different threads control the operation, but they execute the same code.&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
&lt;br /&gt;
This section shows a simple example, adapted from the Solihin textbook (pp. 24-27), that illustrates the data-parallel programming model. Each of the code fragments below is written in pseudo-code style.&lt;br /&gt;
&lt;br /&gt;
Suppose we want to perform the following task on an array &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt;: updating each element of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; by the product of itself and its index, and adding together the elements of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; into the variable &amp;lt;code&amp;gt;sum&amp;lt;/code&amp;gt;. The corresponding code is shown below.&lt;br /&gt;
&lt;br /&gt;
 // simple sequential task&lt;br /&gt;
 sum = 0;&lt;br /&gt;
 '''for''' (i = 0; i &amp;lt; a.length; i++)&lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    sum = sum + a[i];&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
When we orchestrate the task using the data-parallel programming model, the program can be divided into two parts. The first part performs the same operations on separate elements of the array for each processing element (sometimes referred to as PE or pe), and the second part reorganizes data among all processing elements (in our example, data reorganization is summing up values across different processing elements). Since the data-parallel programming model only defines the overall effects of parallel steps, the second part can be accomplished through either shared memory or message passing. The code fragment below illustrates the first part of the program; the second part could be written in either a shared-memory or a message-passing style.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 // data parallel programming: let each PE perform the same task on different pieces of distributed data&lt;br /&gt;
 pe_id = getid();&lt;br /&gt;
 my_sum = 0;&lt;br /&gt;
 '''for''' (i = pe_id; i &amp;lt; a.length; i += number_of_pe)         //separate elements of the array are assigned to each PE &lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    my_sum = my_sum + a[i];                               //all PEs accumulate elements assigned to them into local variable my_sum&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the above code, data parallelism is achieved by letting each processing element perform actions on separate elements of the array, which are identified using the PE's id. For instance, if three processing elements are used, then one processing element would start at i = 0, one would start at i = 1, and the last would start at i = 2. Since there are three processing elements, the index of the array for each will increase by three on each iteration until the task is complete (note that in our example the elements assigned to each PE are interleaved rather than contiguous). If the length of the array is a multiple of three, then each processing element takes the same amount of time to execute its portion of the task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The picture below illustrates how elements of the array are assigned among different PEs for the specific case: length of the array is 7 and there are 3 PEs available. Elements in the array are marked by their indexes (0 to 6). As shown in the picture, PE0 will work on elements with index 0, 3, 6; PE1 is in charge of elements with index 1, 4; and elements with index 2, 5 are assigned to PE2. In this way, these 3 PEs work collectively on the array, while each PE works on different elements. Thus, data parallelism is achieved.&lt;br /&gt;
&lt;br /&gt;
[[Image:506wiki1.png|frame|center|150px|Illustration of data parallel programming(adapted from [http://computing.llnl.gov/tutorials/parallel_comp/#ModelsData Introduction to Parallel Computing])]]&lt;br /&gt;
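The interleaved decomposition above can be made concrete in a short runnable sketch. This is not from the Solihin text: it uses Python threads as stand-ins for the PEs, and a lock-protected shared variable for the second (reduction) part in a shared-memory style; the names `pe_task` and `number_of_pe` simply mirror the pseudo-code.

```python
# Sketch (assumed, not from the text): 3 "PEs" work on a 7-element array,
# each on the interleaved indices i = pe_id, pe_id + number_of_pe, ...
import threading

number_of_pe = 3
a = [1.0] * 7          # after the update, a[i] = 1 * i = i
total = 0.0
total_lock = threading.Lock()

def pe_task(pe_id):
    global total
    my_sum = 0.0
    # First part: same operation on this PE's interleaved elements.
    for i in range(pe_id, len(a), number_of_pe):
        a[i] = a[i] * i
        my_sum += a[i]
    # Second part (shared-memory style): combine local sums under a lock.
    with total_lock:
        total += my_sum

threads = [threading.Thread(target=pe_task, args=(pe,))
           for pe in range(number_of_pe)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(total)  # 0 + 1 + 2 + ... + 6 = 21.0
```

As in the figure, PE0 touches indices 0, 3, 6; PE1 touches 1, 4; and PE2 touches 2, 5.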
&lt;br /&gt;
== Comparison with Message Passing and Shared Memory ==&lt;br /&gt;
Although the shared memory and message passing models may be combined into hybrid approaches, the two models are fundamentally different ways of addressing the same problem (of access control to common data). In contrast, the data parallel model is concerned with a fundamentally different problem (how to divide work into parallel tasks). As such, the data parallel model may be used in conjunction with either the shared memory or the message passing model without conflict. In fact, Klaiber (1994) compares the performance of a number of data parallel programs implemented with both shared memory and message passing models.&lt;br /&gt;
&lt;br /&gt;
One of the major advantages of combining the data parallel and message passing models is a reduction in the amount and complexity of communication required relative to a task parallel approach. Similarly, combining the data parallel and shared memory models tends to simplify and reduce the amount of synchronization required. Where a task parallel program must send a separate message or signal for each value exchanged between threads, the data parallel code above requires only a single barrier before the local sums are added to compute the full sum.&lt;br /&gt;
Much as the shared memory model can benefit from specialized hardware, so can the data parallel programming model. SIMD (single-instruction-multiple-data) processors are specifically designed to run data parallel algorithms. These processors perform a single instruction on many different data locations simultaneously. Modern examples include CUDA processors developed by NVIDIA and Cell processors developed by STI (Sony, Toshiba, and IBM). However, whereas the shared memory model can be a difficult and costly abstraction in the absence of hardware support, the data parallel model, like the message passing model, does not require hardware support.&lt;br /&gt;
&lt;br /&gt;
Since data parallel code tends to simplify communication and synchronization, data parallel code may be easier to develop than a task parallel approach. However, data parallel code also requires writing code to split program data into chunks and assign them to different threads. In addition, a problem may not decompose easily into subproblems relying on largely independent chunks of data. In this case, it may be impractical or impossible to apply the data parallel model.&lt;br /&gt;
Once written, data parallel programs can scale easily to large numbers of processors. The data parallel model implicitly encourages data locality by having each thread work on a chunk of data. The regular data chunks also make it easier to reason about where to locate data and how to organize it.&lt;br /&gt;
&lt;br /&gt;
= Task Parallel Model =&lt;br /&gt;
Task parallelism is a form of parallelization in which different instruction streams execute concurrently, either on the same data or on different data. It focuses on distributing the execution of processes (threads) across different parallel computing nodes. As part of the workflow, the different execution threads communicate with one another to share data.&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
Suppose the task to be accomplished is to compute the sum of the results of executing two sets of instructions, 'A' and 'B'. The following example illustrates how task parallelism can achieve this.&lt;br /&gt;
&lt;br /&gt;
The pseudo code below illustrates task parallelism:&lt;br /&gt;
&amp;lt;pre&amp;gt;program:&lt;br /&gt;
do &lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
if CPU=&amp;quot;a&amp;quot; then&lt;br /&gt;
   do task &amp;quot;A&amp;quot;&lt;br /&gt;
else if CPU=&amp;quot;b&amp;quot; then&lt;br /&gt;
   do task &amp;quot;B&amp;quot;&lt;br /&gt;
end if&lt;br /&gt;
&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If we write the code as above and launch it on a 2-processor system, then the runtime environment will execute it accordingly.&lt;br /&gt;
In an SPMD system, both CPUs will execute the code. In a parallel environment, both will have access to the same data. The &amp;quot;if&amp;quot; clause differentiates between the CPUs: CPU &amp;quot;a&amp;quot; will read true on the &amp;quot;if&amp;quot; and CPU &amp;quot;b&amp;quot; will read true on the &amp;quot;else if&amp;quot;, so each has its own task. The two CPUs then execute separate code blocks concurrently, performing different tasks.&lt;br /&gt;
Code executed by CPU &amp;quot;a&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;program:&lt;br /&gt;
...&lt;br /&gt;
do task &amp;quot;A&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
end program&amp;lt;/pre&amp;gt;&lt;br /&gt;
Code executed by CPU &amp;quot;b&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;program:&lt;br /&gt;
...&lt;br /&gt;
do task &amp;quot;B&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
end program&amp;lt;/pre&amp;gt;&lt;br /&gt;
This concept can now be generalized to any number of processors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
program:&lt;br /&gt;
...&lt;br /&gt;
if CPU=&amp;quot;a&amp;quot; then&lt;br /&gt;
   do task &amp;quot;A&amp;quot;&lt;br /&gt;
else if CPU=&amp;quot;b&amp;quot; then&lt;br /&gt;
   do task &amp;quot;B&amp;quot;&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
if CPU =&amp;quot;n&amp;quot; then&lt;br /&gt;
   do task &amp;quot;N&amp;quot;&lt;br /&gt;
&lt;br /&gt;
end if&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
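The two-CPU pattern above can be sketched as runnable code. This is an illustration assumed by us, not from the text: Python threads stand in for CPUs &quot;a&quot; and &quot;b&quot;, each running a distinct task, and the main thread sums the results as posed in the example.

```python
# Sketch (assumed, not from the text) of task parallelism: two threads
# execute distinct tasks "A" and "B" concurrently on shared data.
import threading

results = {}

def task_a():              # plays the role of: if CPU is "a", do task "A"
    results["A"] = 2 + 3

def task_b():              # plays the role of: if CPU is "b", do task "B"
    results["B"] = 4 * 5

ta = threading.Thread(target=task_a)
tb = threading.Thread(target=task_b)
ta.start()
tb.start()
ta.join()
tb.join()

print(results["A"] + results["B"])  # 5 + 20 = 25
```

Generalizing to n processors, as in the pseudo-code, would simply mean launching one thread per task &quot;A&quot; through &quot;N&quot;.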
&lt;br /&gt;
= Data Parallel Model vs Task Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model or data parallelism (SIMD) is its single control flow: there is only one control processor that directs the activities of all the processing elements. In stark contrast is task parallelism (MIMD: Multiple Instruction, Multiple Data): characterized by its multiple control flows, it allows the concurrent execution of multiple instruction streams, each of which manipulates its own data and serves a separate function. Below is a contrast between the data parallelism and task parallelism models from Wikipedia: [http://en.wikipedia.org/wiki/SIMD SIMD] and [http://en.wikipedia.org/wiki/MIMD MIMD]. In the following subsections we continue to compare and contrast different features of the data-parallel and task-parallel models to help the reader understand the unique characteristics of the data-parallel programming model.&lt;br /&gt;
[[Image:Smid.png|frame|center|425px|contrast between data parallelism and task parallelism]]&lt;br /&gt;
&lt;br /&gt;
Since each parallel task is unique, a major limitation of task parallel algorithms is that the maximum degree of parallelism attainable is limited to the number of tasks that have been formulated.  This is in contrast to data parallel algorithms, which can be scaled easily to take advantage of an arbitrary number of processing elements.  In addition, unique tasks are likely to have significantly different run times, making it more challenging to balance load across processors. [[#References | Haveraaen (2000)]] also notes that task parallel algorithms are inherently more complex, requiring a greater degree of communication and synchronization.&lt;br /&gt;
&lt;br /&gt;
== Synchronous vs Asynchronous ==&lt;br /&gt;
While the [http://en.wikipedia.org/wiki/Lockstep_(computing) lockstep] imposed by data parallelism on all data streams ensures synchronous computation (all PEs perform their tasks at the exact same pace), in task parallelism every processor performs its task at its own pace, which we call asynchronous computation. Thus, at certain points of a task parallel program's execution, communication and synchronization primitives are needed to allow different instruction streams to coordinate their efforts, and that is where variable-sharing and message-passing come into play.&lt;br /&gt;
&lt;br /&gt;
== Determinism vs. Non-Determinism ==&lt;br /&gt;
Data parallelism's synchronous nature and task parallelism's asynchrony give rise to another pair of distinguishing features: determinism versus non-determinism. Data parallelism is deterministic, i.e., computing with the same input will always yield the same result, since its synchronism ensures that issues like relative timing between PEs will not arise. In contrast, task parallelism's asynchronous updates of common data can give rise to non-determinism, i.e., the same input won't always yield the same computation result (the result will also depend on factors outside the program's control, such as the scheduling and timing of other PEs). Obviously, non-determinism makes it harder to write and maintain correct programs. This partially explains the advantage of the data parallel programming model over task parallelism in terms of development effort (also discussed in the comparison with message passing and shared memory above).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Summary of major differences between the data parallel and task parallel models ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
|+ '''Comparison between data parallel and task parallel programming models.'''&lt;br /&gt;
|-&lt;br /&gt;
! Aspects&lt;br /&gt;
! Data Parallel&lt;br /&gt;
! Task Parallel&lt;br /&gt;
|-&lt;br /&gt;
| Decomposition&lt;br /&gt;
| Partition data into subsets&lt;br /&gt;
| Partition program into subtasks&lt;br /&gt;
|-&lt;br /&gt;
| Parallel tasks&lt;br /&gt;
| Identical&lt;br /&gt;
| Unique&lt;br /&gt;
|-&lt;br /&gt;
| Degree of parallelism&lt;br /&gt;
| Scales easily&lt;br /&gt;
| Fixed&lt;br /&gt;
|-&lt;br /&gt;
| Load balancing&lt;br /&gt;
| Easier&lt;br /&gt;
| Harder&lt;br /&gt;
|-&lt;br /&gt;
| Communication overhead&lt;br /&gt;
| Lower&lt;br /&gt;
| Higher&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Definitions =&lt;br /&gt;
&lt;br /&gt;
* ''Data parallel.''  A data parallel algorithm is composed of a set of identical tasks which operate on different subsets of common data.&lt;br /&gt;
* ''Task parallel.''  A task parallel algorithm is composed of a set of differing tasks which operate on common data.&lt;br /&gt;
* ''SIMD (single-instruction-multiple-data).''  A processor which executes a single instruction simultaneously on multiple data locations.&lt;br /&gt;
* '' MIMD (multiple-instruction-multiple-data).'' A processor architecture which can execute multiple instructions across multiple data elements simultaneously.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
* David E. Culler, Jaswinder Pal Singh, and Anoop Gupta, [http://portal.acm.org/citation.cfm?id=550071 ''Parallel Computer Architecture: A Hardware/Software Approach,''] Morgan Kaufmann, 1999.&lt;br /&gt;
* Ian Foster, [http://www.mcs.anl.gov/~itf/dbpp/ ''Designing and Building Parallel Programs,''] Addison-Wesley, 1995.&lt;br /&gt;
* Magne Haveraaen, [http://portal.acm.org/citation.cfm?id=1239917 &amp;quot;Machine and collection abstractions for user-implemented data-parallel programming,&amp;quot;] ''Scientific Programming,'' 8(4):231-246, 2000.&lt;br /&gt;
* W. Daniel Hillis and Guy L. Steele, Jr., [http://portal.acm.org/citation.cfm?id=7903 &amp;quot;Data parallel algorithms,&amp;quot;] ''Communications of the ACM,'' 29(12):1170-1183, December 1986.&lt;br /&gt;
* Alexander C. Klaiber and Henry M. Levy, [http://portal.acm.org/citation.cfm?id=192020 &amp;quot;A comparison of message passing and shared memory architectures for data parallel programs,&amp;quot;] in ''Proceedings of the 21st Annual International Symposium on Computer Architecture,'' April 1994, pp. 94-105.&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* Philip J. Hatcher, Michael Jay Quinn, ''Data-Parallel Programming on MIMD Computers'', The MIT Press, 1991.&lt;br /&gt;
* Blaise Barney, &amp;quot;Introduction to Parallel Computing: Data Parallel Model&amp;quot;, Lawrence Livermore National Laboratory, [https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData], January 2009.&lt;br /&gt;
* Guy Blelloch, &amp;quot;Is Parallel Programming Hard?&amp;quot;, Carnegie Mellon University, [http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard], April 2009.&lt;br /&gt;
* Björn Lisper, ''Data parallelism and functional programming'', Lecture Notes in Computer Science, Volume 1132/1996, pp. 220-251, Springer Berlin, 1996.&lt;br /&gt;
* ''SIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SIMD http://en.wikipedia.org/wiki/SIMD].&lt;br /&gt;
* ''MIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/MIMD http://en.wikipedia.org/wiki/MIMD].&lt;br /&gt;
* ''Lockstep'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/Lockstep_(computing) http://en.wikipedia.org/wiki/Lockstep_(computing)].&lt;br /&gt;
* ''SPMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SPMD http://en.wikipedia.org/wiki/SPMD].&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43596</id>
		<title>CSC/ECE 506 Spring 2011/ch2 JR</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43596"/>
		<updated>2011-02-01T00:30:41Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Supplement to Chapter 2: The Data Parallel Programming Model=&lt;br /&gt;
Chapter 2 of [[#References | Solihin (2008)]] covers the shared memory and message passing parallel programming models.  However, it does not give an historical context for the development of parallel programming models.  It also does not address other commonly recognized parallel programming models like the [[#Definitions | ''task parallel'']] model or the [[#Definitions | ''data parallel'']] model, which have been covered in other treatments like [[#References | Foster (1995)]] and [[#References | Culler (1999)]].&lt;br /&gt;
&lt;br /&gt;
Shared memory and message passing models are often presented as competing models, but the data parallel model addresses fundamentally different programming concerns and can therefore be used in conjunction with either.  The goal of this supplement is to provide historical context for the development of these parallel programming models and a treatment of the data and task parallel models to complement Chapter 2 of [[#References | Solihin (2008)]].  &lt;br /&gt;
&lt;br /&gt;
= Overview =&lt;br /&gt;
Whereas the shared memory and message passing models focus on how parallel tasks access common data, the [[#Definitions | ''data parallel'']] model focuses on how to divide up work into parallel tasks.  Data parallel algorithms exploit parallelism by dividing a problem into a number of identical tasks which execute on different subsets of common data.  The logical opposite of data parallel is task parallel, in which a number of distinct tasks operate on common data.  Historically, each parallel programming model was developed to take advantage of advancements in computer architecture.&lt;br /&gt;
&lt;br /&gt;
= History =&lt;br /&gt;
As computer architectures have evolved, so have parallel programming models. The earliest advancements in parallel computers took advantage of bit-level parallelism.  These computers used vector processing, which required a shared memory programming model.  As performance returns from this architecture diminished, the emphasis shifted to instruction-level parallelism and the message passing model began to dominate.  Most recently, with the move to cluster-based machines, there has been an increased emphasis on thread-level parallelism. This has corresponded to an increased interest in the data parallel programming model.&lt;br /&gt;
&lt;br /&gt;
== Bit-level parallelism in the 1970's ==&lt;br /&gt;
The major performance improvements from computers during this time were due to the ability to execute 32-bit word size operations at one time ([[#References|Culler (1999), p. 15.]]).  The dominant supercomputers of the time, like the Cray and the ILLIAC IV, were mainly [[#Definitions| ''SIMD'']] architectures and used a shared memory programming model.  They each used different forms of vector processing ([[#References|Culler (1999), p. 21.]]). &lt;br /&gt;
Development of the ILLIAC IV began in 1964 and wasn't finished until 1975 [http://en.wikipedia.org/wiki/ILLIAC_IV].  A central processor was connected to the main memory and delegated tasks to individual PEs (processing elements), each of which had its own memory cache [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].  Each PE could operate on an 8-, 32-, or 64-bit operand at a given time [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].&lt;br /&gt;
&lt;br /&gt;
The Cray-1 was installed at Los Alamos National Laboratory by Cray Research in 1976 and had similar performance to the ILLIAC IV [http://en.wikipedia.org/wiki/ILLIAC_IV].  The Cray machine relied heavily on vector registers rather than the array of individual processors used by the ILLIAC IV.  Each processor was connected to main memory and had a number of 64-bit registers used to perform operations [http://www.eecg.toronto.edu/~moshovos/ACA05/read/cray1.pdf].&lt;br /&gt;
&lt;br /&gt;
== Move to instruction-level parallelism in the 1980's ==&lt;br /&gt;
&lt;br /&gt;
Increasing the word size above 32-bits offered diminishing returns in terms of performance ([[#References|Culler (1999), p. 15.]]). In the mid-1980's the emphasis changed from bit-level parallelism to instruction-level parallelism, which involved increasing the number of instructions that could be executed at one time ([[#References|Culler (1999), p. 15.]]).  The message passing model allowed programmers the ability to divide up instructions in order to take advantage of this architecture. &lt;br /&gt;
&lt;br /&gt;
== Current Trend to Thread-level parallelism ==&lt;br /&gt;
The move to cluster-based machines in the past decade has added another layer of complexity to parallelism.  Since the computers in a cluster may be located across a network from one another, there is more emphasis on software acting as a bridge [http://cobweb.ecn.purdue.edu/~pplinux/ppcluster.html]. This has led to a greater emphasis on thread- or task-level parallelism [http://en.wikipedia.org/wiki/Thread-level_parallelism] and the addition of the data parallel programming model to existing message passing or shared memory models [http://en.wikipedia.org/wiki/Thread-level_parallelism].  &lt;br /&gt;
&lt;br /&gt;
= Data Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow. In Flynn's taxonomy, SIMD is analogous to performing the same operation repeatedly over a large data set. There is only one control processor that directs the activities of all the processing elements. In a multiprocessor system executing a single set of instructions (SIMD), data parallelism is achieved when each processor performs the same task on different pieces of distributed data. In some situations, a single execution thread controls operations on all pieces of data. In others, different threads control the operation, but they execute the same code.&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
&lt;br /&gt;
This section shows a simple example, adapted from the Solihin textbook (pp. 24-27), that illustrates the data-parallel programming model. Each of the code fragments below is written in pseudo-code style.&lt;br /&gt;
&lt;br /&gt;
Suppose we want to perform the following task on an array &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt;: updating each element of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; by the product of itself and its index, and adding together the elements of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; into the variable &amp;lt;code&amp;gt;sum&amp;lt;/code&amp;gt;. The corresponding code is shown below.&lt;br /&gt;
&lt;br /&gt;
 // simple sequential task&lt;br /&gt;
 sum = 0;&lt;br /&gt;
 '''for''' (i = 0; i &amp;lt; a.length; i++)&lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    sum = sum + a[i];&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
When we orchestrate the task using the data-parallel programming model, the program can be divided into two parts. The first part performs the same operations on separate elements of the array for each processing element (sometimes referred to as PE or pe), and the second part reorganizes data among all processing elements (in our example, data reorganization is summing up values across different processing elements). Since the data-parallel programming model only defines the overall effects of parallel steps, the second part can be accomplished through either shared memory or message passing. The code fragment below illustrates the first part of the program; the second part could be written in either a shared-memory or a message-passing style.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 // data parallel programming: let each PE perform the same task on different pieces of distributed data&lt;br /&gt;
 pe_id = getid();&lt;br /&gt;
 my_sum = 0;&lt;br /&gt;
 '''for''' (i = pe_id; i &amp;lt; a.length; i += number_of_pe)         //separate elements of the array are assigned to each PE &lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    my_sum = my_sum + a[i];                               //all PEs accumulate elements assigned to them into local variable my_sum&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the above code, data parallelism is achieved by letting each processing element perform actions on separate elements of the array, which are identified using the PE's id. For instance, if three processing elements are used, then one processing element would start at i = 0, one would start at i = 1, and the last would start at i = 2. Since there are three processing elements, the array index for each increases by three on each iteration until the task is complete (note that in our example the elements assigned to each PE are interleaved rather than contiguous). If the length of the array is a multiple of three, then each processing element takes the same amount of time to execute its portion of the task.&lt;br /&gt;
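For concreteness, the interleaved decomposition described above can be simulated sequentially in Python. This is only a sketch: the pseudocode's getid() and number_of_pe become explicit parameters, and summing the local results stands in for the second part of the program.

```python
# Sketch: simulate the interleaved data-parallel decomposition sequentially.
# 'pe_id' and 'number_of_pe' from the pseudocode become explicit parameters.

def pe_work(a, pe_id, number_of_pe):
    # Each PE updates its interleaved slice and accumulates a local sum.
    my_sum = 0
    for i in range(pe_id, len(a), number_of_pe):
        a[i] = a[i] * i
        my_sum += a[i]
    return my_sum

def data_parallel_sum(a, number_of_pe=3):
    # Run every PE's portion, then combine the local sums (the 'second part').
    partial = [pe_work(a, pe, number_of_pe) for pe in range(number_of_pe)]
    return sum(partial)

# 7 elements, 3 PEs: PE0 gets indices 0, 3, 6; PE1 gets 1, 4; PE2 gets 2, 5.
a = [5, 1, 4, 1, 5, 9, 2]
total = data_parallel_sum(a)
```

Running the sequential loop from the first fragment on the same input yields the same total, which is the point: the decomposition changes who does the work, not the result.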
&lt;br /&gt;
&lt;br /&gt;
The picture below illustrates how elements of the array are assigned among different PEs for the specific case: length of the array is 7 and there are 3 PEs available. Elements in the array are marked by their indexes (0 to 6). As shown in the picture, PE0 will work on elements with index 0, 3, 6; PE1 is in charge of elements with index 1, 4; and elements with index 2, 5 are assigned to PE2. In this way, these 3 PEs work collectively on the array, while each PE works on different elements. Thus, data parallelism is achieved.&lt;br /&gt;
&lt;br /&gt;
[[Image:506wiki1.png|frame|center|150px|Illustration of data parallel programming(adapted from [http://computing.llnl.gov/tutorials/parallel_comp/#ModelsData Introduction to Parallel Computing])]]&lt;br /&gt;
&lt;br /&gt;
== Comparison with Message Passing and Shared Memory ==&lt;br /&gt;
Although the shared memory and message passing models may be combined into hybrid approaches, the two models are fundamentally different ways of addressing the same problem (of access control to common data). In contrast, the data parallel model is concerned with a fundamentally different problem (how to divide work into parallel tasks). As such, the data parallel model may be used in conjunction with either the shared memory or the message passing model without conflict. In fact, Klaiber (1994) compares the performance of a number of data parallel programs implemented with both shared memory and message passing models.&lt;br /&gt;
&lt;br /&gt;
One of the major advantages of combining the data parallel and message passing models is a reduction in the amount and complexity of communication required relative to a task parallel approach. Similarly, combining the data parallel and shared memory models tends to simplify and reduce the amount of synchronization required. If the task parallel code were modified from a message passing model to a shared memory model, the two threads would require 8 signals to be sent between the threads (instead of 8 messages). In contrast, the data parallel code would require a single barrier before the local sums are added to compute the full sum.&lt;br /&gt;
Much as the shared memory model can benefit from specialized hardware, the data parallel programming model can as well. SIMD (single-instruction-multiple-data) processors are specifically designed to run data parallel algorithms. These processors perform a single instruction on many different data locations simultaneously. Modern examples include CUDA processors developed by nVidia and Cell processors developed by STI (Sony, Toshiba, and IBM). For the curious, example code for CUDA processors is provided in the Appendix. However, whereas the shared memory model can be a difficult and costly abstraction in the absence of hardware support, the data parallel model—like the message passing model—does not require hardware support.&lt;br /&gt;
&lt;br /&gt;
Since data parallel code tends to simplify communication and synchronization, data parallel code may be easier to develop than a more task parallel approach. However, data parallel code also requires writing code to split program data into chunks and assign it to different threads. In addition, it is possible that a problem may not decompose easily into subproblems relying on largely independent chunks of data. In this case, it may be impractical or impossible to apply the data parallel model.&lt;br /&gt;
Once written, data parallel programs can scale easily to large numbers of processors. The data parallel model implicitly encourages data locality by having each thread work on a chunk of data. The regular data chunks also make it easier to reason about where to locate data and how to organize it.&lt;br /&gt;
&lt;br /&gt;
= Task Parallel Model =&lt;br /&gt;
Task parallelism is a form of parallelization in which different instructions are executed concurrently, either on the same data or on different data. It focuses on distributing the execution of processes (threads) across different parallel computing nodes. As part of the workflow, different execution threads communicate with one another as they work, in order to share data.&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
Suppose the task to be accomplished is to compute the sum of the results produced by executing task 'A' and task 'B'. The following example illustrates how task parallelism can be achieved.&lt;br /&gt;
&lt;br /&gt;
The pseudo code below illustrates task parallelism:&lt;br /&gt;
&amp;lt;pre&amp;gt;program:&lt;br /&gt;
do &lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
if CPU=&amp;quot;a&amp;quot; then&lt;br /&gt;
   do task &amp;quot;A&amp;quot;&lt;br /&gt;
else if CPU=&amp;quot;b&amp;quot; then&lt;br /&gt;
   do task &amp;quot;B&amp;quot;&lt;br /&gt;
end if&lt;br /&gt;
&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If we write the code as above and launch it on a 2-processor system, then the runtime environment will execute it accordingly.&lt;br /&gt;
In an SPMD system, both CPUs execute the same code and, in a parallel environment, both have access to the same data. The &amp;quot;if&amp;quot; clause differentiates between the CPUs: for CPU &amp;quot;a&amp;quot; the &amp;quot;if&amp;quot; condition is true, while for CPU &amp;quot;b&amp;quot; the &amp;quot;else if&amp;quot; condition is true, so each has its own task. The two CPUs thus execute separate code blocks simultaneously, performing different tasks at the same time.&lt;br /&gt;
Code executed by CPU &amp;quot;a&amp;quot;:&lt;br /&gt;
program:&lt;br /&gt;
...&lt;br /&gt;
do task &amp;quot;A&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
Code executed by CPU &amp;quot;b&amp;quot;:&lt;br /&gt;
program:&lt;br /&gt;
...&lt;br /&gt;
do task &amp;quot;B&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
This concept can now be generalized to any number of processors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
program:&lt;br /&gt;
...&lt;br /&gt;
if CPU=&amp;quot;a&amp;quot; then&lt;br /&gt;
   do task &amp;quot;A&amp;quot;&lt;br /&gt;
else if CPU=&amp;quot;b&amp;quot; then&lt;br /&gt;
   do task &amp;quot;B&amp;quot;&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
if CPU =&amp;quot;n&amp;quot; then&lt;br /&gt;
   do task &amp;quot;N&amp;quot;&lt;br /&gt;
&lt;br /&gt;
end if&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
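The CPU dispatch above can be sketched with threads in Python. This is a sketch only: worker, task_a, and task_b are hypothetical names, and the per-task computations are stand-ins for tasks A and B.

```python
import threading

# Sketch: each worker runs a different task selected by its id (task parallelism).
results = {}

def task_a():
    results['A'] = sum(range(10))   # stand-in computation for task A

def task_b():
    results['B'] = max(3, 7)        # stand-in computation for task B

def worker(cpu_id):
    # The if/else below mirrors the CPU dispatch in the pseudocode above.
    if cpu_id == 'a':
        task_a()
    elif cpu_id == 'b':
        task_b()

threads = [threading.Thread(target=worker, args=(c,)) for c in ('a', 'b')]
for t in threads:
    t.start()
for t in threads:
    t.join()
total = results['A'] + results['B']
```

Note that every thread runs the same worker function, but the branch on cpu_id gives each one a distinct task, which is exactly the SPMD pattern shown in the pseudocode.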
&lt;br /&gt;
= Data Parallel Model vs Task Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow: there is only one control processor that directs the activities of all the processing elements. In stark contrast is task parallelism (MIMD: Multiple Instruction, Multiple Data), characterized by multiple control flows: it allows the concurrent execution of multiple instruction streams, each of which manipulates its own data and serves a separate function. Below is a contrast between the data parallelism and task parallelism models from Wikipedia: [http://en.wikipedia.org/wiki/SIMD SIMD] and [http://en.wikipedia.org/wiki/MIMD MIMD]. In the following subsections we continue to compare and contrast features of the data-parallel and task-parallel models to help the reader understand the unique characteristics of the data-parallel programming model.&lt;br /&gt;
[[Image:Smid.png|frame|center|425px|contrast between data parallelism and task parallelism]]&lt;br /&gt;
&lt;br /&gt;
Since each parallel task is unique, a major limitation of task parallel algorithms is that the maximum degree of parallelism attainable is limited to the number of tasks that have been formulated.  This is in contrast to data parallel algorithms, which can be scaled easily to take advantage of an arbitrary number of processing elements.  In addition, unique tasks are likely to have significantly different run times, making it more challenging to balance load across processors. [[#References | Haveraaen (2000)]] also notes that task parallel algorithms are inherently more complex, requiring a greater degree of communication and synchronization.&lt;br /&gt;
&lt;br /&gt;
== Synchronous vs Asynchronous ==&lt;br /&gt;
While the [http://en.wikipedia.org/wiki/Lockstep_(computing) lockstep] imposed by data parallelism on all data streams ensures synchronous computation (all PEs perform their tasks at exactly the same pace), in task parallelism every processor performs its task at its own pace, which we call asynchronous computation. Thus, at certain points of a task parallel program's execution, communication and synchronization primitives are needed to allow different instruction streams to coordinate their efforts, and that is where variable sharing and message passing come into play.&lt;br /&gt;
&lt;br /&gt;
== Determinism vs. Non-Determinism ==&lt;br /&gt;
Data parallelism's synchronous nature and task parallelism's asynchrony give rise to another pair of distinguishing features: determinism versus non-determinism. Data parallelism is deterministic, i.e., computing with the same input will always yield the same result, since its synchronism ensures that issues like relative timing between PEs do not arise. In contrast, task parallelism's asynchronous updates of common data can give rise to non-determinism, i.e., the same input will not always yield the same result (the result of a computation also depends on factors outside the program's control, such as the scheduling and timing of other PEs). Non-determinism makes it harder to write and maintain correct programs. This partially explains the advantage of the data-parallel programming model over task parallelism in terms of development effort (also discussed in section 4.2).&lt;br /&gt;
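The deterministic side of this contrast can be illustrated with a small Python sketch. The array and PE count here are arbitrary choices, and itertools.permutations stands in for the different orders in which a scheduler might combine the per-PE partial sums.

```python
import itertools

# Sketch: the data-parallel reduction is deterministic with respect to PE
# scheduling, since combining per-PE partial sums in any order gives one total.
def pe_partial_sum(a, pe_id, number_of_pe):
    my_sum = 0
    for i in range(pe_id, len(a), number_of_pe):
        my_sum += a[i] * i
    return my_sum

a = [5, 1, 4, 1, 5, 9, 2]
partials = [pe_partial_sum(a, pe, 3) for pe in range(3)]

totals = set()
for order in itertools.permutations(partials):
    totals.add(sum(order))   # every combining order yields the same answer
```

Because each PE touches a disjoint set of elements, no combining order can change the total; it is unsynchronized updates to shared data, absent here, that introduce the non-determinism described above.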
&lt;br /&gt;
&lt;br /&gt;
== Major differences between data parallel and task parallel models can broadly be classified as the following ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
|+ '''Comparison between data parallel and task parallel programming models.'''&lt;br /&gt;
|-&lt;br /&gt;
! Aspects&lt;br /&gt;
! Data Parallel&lt;br /&gt;
! Task Parallel&lt;br /&gt;
|-&lt;br /&gt;
| Decomposition&lt;br /&gt;
| Partition data into subsets&lt;br /&gt;
| Partition program into subtasks&lt;br /&gt;
|-&lt;br /&gt;
| Parallel tasks&lt;br /&gt;
| Identical&lt;br /&gt;
| Unique&lt;br /&gt;
|-&lt;br /&gt;
| Degree of parallelism&lt;br /&gt;
| Scales easily&lt;br /&gt;
| Fixed&lt;br /&gt;
|-&lt;br /&gt;
| Load balancing&lt;br /&gt;
| Easier&lt;br /&gt;
| Harder&lt;br /&gt;
|-&lt;br /&gt;
| Communication overhead&lt;br /&gt;
| Lower&lt;br /&gt;
| Higher&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Definitions =&lt;br /&gt;
&lt;br /&gt;
* ''Data parallel.''  A data parallel algorithm is composed of a set of identical tasks which operate on different subsets of common data.&lt;br /&gt;
* ''Task parallel.''  A task parallel algorithm is composed of a set of differing tasks which operate on common data.&lt;br /&gt;
* ''SIMD (single-instruction-multiple-data).''  A processor which executes a single instruction simultaneously on multiple data locations.&lt;br /&gt;
* ''MIMD (multiple-instruction-multiple-data).'' A processor architecture which can execute multiple instructions on multiple data elements simultaneously.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
* David E. Culler, Jaswinder Pal Singh, and Anoop Gupta, [http://portal.acm.org/citation.cfm?id=550071 ''Parallel Computer Architecture: A Hardware/Software Approach,''] Morgan-Kauffman, 1999.&lt;br /&gt;
* Ian Foster, [http://www.mcs.anl.gov/~itf/dbpp/ ''Designing and Building Parallel Programs,''] Addison-Wesley, 1995.&lt;br /&gt;
* Magne Haveraaen, [http://portal.acm.org/citation.cfm?id=1239917 &amp;quot;Machine and collection abstractions for user-implemented data-parallel programming,&amp;quot;] ''Scientific Programming,'' 8(4):231-246, 2000.&lt;br /&gt;
* W. Daniel Hillis and Guy L. Steele, Jr., [http://portal.acm.org/citation.cfm?id=7903 &amp;quot;Data parallel algorithms,&amp;quot;] ''Communications of the ACM,'' 29(12):1170-1183, December 1986.&lt;br /&gt;
* Alexander C. Klaiber and Henry M. Levy, [http://portal.acm.org/citation.cfm?id=192020 &amp;quot;A comparison of message passing and shared memory architectures for data parallel programs,&amp;quot;] in ''Proceedings of the 21st Annual International Symposium on Computer Architecture,'' April 1994, pp. 94-105.&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* Philip J. Hatcher, Michael Jay Quinn, ''Data-Parallel Programming on MIMD Computers'', The MIT Press, 1991.&lt;br /&gt;
* Blaise Barney, &amp;quot;Introduction to Parallel Computing: Data Parallel Model&amp;quot;, Lawrence Livermore National Laboratory, [https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData], January 2009.&lt;br /&gt;
* Guy Blelloch, &amp;quot;Is Parallel Programming Hard?&amp;quot;, Carnegie Mellon University, [http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard], April 2009.&lt;br /&gt;
* Björn Lisper, ''Data parallelism and functional programming'', Lecture Notes in Computer Science, Volume 1132/1996, pp. 220-251, Springer Berlin, 1996.&lt;br /&gt;
* ''SIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SIMD http://en.wikipedia.org/wiki/SIMD].&lt;br /&gt;
* ''MIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/MIMD http://en.wikipedia.org/wiki/MIMD].&lt;br /&gt;
* ''Lockstep'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/Lockstep_(computing) http://en.wikipedia.org/wiki/Lockstep_(computing)].&lt;br /&gt;
* ''SPMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SPMD http://en.wikipedia.org/wiki/SPMD].&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43595</id>
		<title>CSC/ECE 506 Spring 2011/ch2 JR</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43595"/>
		<updated>2011-02-01T00:30:18Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Supplement to Chapter 2: The Data Parallel Programming Model=&lt;br /&gt;
Chapter 2 of [[#References | Solihin (2008)]] covers the shared memory and message passing parallel programming models. However, it does not give a historical context for the development of parallel programming models. It also does not address other commonly recognized parallel programming models like the [[#Definitions | ''task parallel'']] model or the [[#Definitions | ''data parallel'']] model, which have been covered in other treatments like [[#References | Foster (1995)]] and [[#References | Culler (1999)]].&lt;br /&gt;
&lt;br /&gt;
Shared memory and message passing models are often presented as competing models, but the data parallel model addresses fundamentally different programming concerns and can therefore be used in conjunction with either.  The goal of this supplement is to provide historical context for the development of these parallel programming models and a treatment of the data and task parallel models to complement Chapter 2 of [[#References | Solihin (2008)]].  &lt;br /&gt;
&lt;br /&gt;
= Overview =&lt;br /&gt;
Whereas the shared memory and message passing models focus on how parallel tasks access common data, the [[#Definitions | ''data parallel'']] model focuses on how to divide up work into parallel tasks.  Data parallel algorithms exploit parallelism by dividing a problem into a number of identical tasks which execute on different subsets of common data.  The logical opposite of data parallel is task parallel, in which a number of distinct tasks operate on common data.  Historically, each parallel programming model was developed to take advantage of advancements in computer architecture.&lt;br /&gt;
&lt;br /&gt;
= History =&lt;br /&gt;
As computer architectures have evolved, so have parallel programming models. The earliest advancements in parallel computers took advantage of bit-level parallelism. These computers used vector processing, which required a shared memory programming model. As performance returns from this architecture diminished, the emphasis shifted to instruction-level parallelism and the message passing model began to dominate. Most recently, with the move to cluster-based machines, there has been an increased emphasis on thread-level parallelism, which has corresponded to increased interest in the data parallel programming model.&lt;br /&gt;
&lt;br /&gt;
== Bit-level parallelism in the 1970's ==&lt;br /&gt;
The major performance improvements from computers during this time were due to the ability to execute 32-bit word size operations at one time ([[#References|Culler (1999), p. 15.]]).  The dominant supercomputers of the time, like the Cray and the ILLIAC IV, were mainly [[#Definitions| ''SIMD'']] architectures and used a shared memory programming model.  They each used different forms of vector processing ([[#References|Culler (1999), p. 21.]]). &lt;br /&gt;
Development of the ILLIAC IV began in 1964 and wasn't finished until 1975 [http://en.wikipedia.org/wiki/ILLIAC_IV]. A central processor was connected to the main memory and delegated tasks to individual PEs, each of which had its own memory cache [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf]. Each PE could operate on either an 8-, 32-, or 64-bit operand at a given time [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].&lt;br /&gt;
&lt;br /&gt;
The Cray-1 machine was installed at Los Alamos National Laboratory in 1976 by Cray Research and had similar performance to the ILLIAC IV [http://en.wikipedia.org/wiki/ILLIAC_IV]. The Cray machine relied heavily on the use of registers instead of individual processors like the ILLIAC IV. Each processor was connected to main memory and had a number of 64-bit registers used to perform operations [http://www.eecg.toronto.edu/~moshovos/ACA05/read/cray1.pdf].&lt;br /&gt;
&lt;br /&gt;
== Move to instruction-level parallelism in the 1980's ==&lt;br /&gt;
&lt;br /&gt;
Increasing the word size above 32-bits offered diminishing returns in terms of performance ([[#References|Culler (1999), p. 15.]]). In the mid-1980's the emphasis changed from bit-level parallelism to instruction-level parallelism, which involved increasing the number of instructions that could be executed at one time ([[#References|Culler (1999), p. 15.]]).  The message passing model allowed programmers the ability to divide up instructions in order to take advantage of this architecture. &lt;br /&gt;
&lt;br /&gt;
== Current Transition to Thread-level parallelism ==&lt;br /&gt;
The move to cluster-based machines in the past decade has added another layer of complexity to parallelism. Since computers may be located across a network from each other, there is more emphasis on software acting as a bridge [http://cobweb.ecn.purdue.edu/~pplinux/ppcluster.html]. This has led to a greater emphasis on thread- or task-level parallelism [http://en.wikipedia.org/wiki/Thread-level_parallelism] and to the addition of the data parallel programming model alongside existing message passing or shared memory models [http://en.wikipedia.org/wiki/Thread-level_parallelism].&lt;br /&gt;
&lt;br /&gt;
= Data Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow. In Flynn's taxonomy, SIMD corresponds to performing the same operation repeatedly over a large data set. There is only one control processor that directs the activities of all the processing elements. In a multiprocessor system executing a single set of instructions (SIMD), data parallelism is achieved when each processor performs the same task on different pieces of distributed data. In some situations, a single execution thread controls the operations on all pieces of data. In others, different threads control the operation, but they execute the same code.&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
&lt;br /&gt;
This section shows a simple example, adapted from the Solihin textbook (pp. 24-27), that illustrates the data-parallel programming model. Each of the code fragments below is written in pseudo-code.&lt;br /&gt;
&lt;br /&gt;
Suppose we want to perform the following task on an array &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt;: updating each element of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; by the product of itself and its index, and adding together the elements of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; into the variable &amp;lt;code&amp;gt;sum&amp;lt;/code&amp;gt;. The corresponding code is shown below.&lt;br /&gt;
&lt;br /&gt;
 // simple sequential task&lt;br /&gt;
 sum = 0;&lt;br /&gt;
 '''for''' (i = 0; i &amp;lt; a.length; i++)&lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    sum = sum + a[i];&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
When we orchestrate the task using the data-parallel programming model, the program can be divided into two parts. The first part performs the same operations on separate elements of the array for each processing element (sometimes referred to as a PE), and the second part reorganizes the data among all processing elements (in our example, data reorganization means summing up values across different processing elements). Since the data-parallel programming model only defines the overall effects of parallel steps, the second part can be accomplished through either shared memory or message passing. The code fragment below illustrates the first part of the program.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 // data parallel programming: let each PE perform the same task on different pieces of distributed data&lt;br /&gt;
 pe_id = getid();&lt;br /&gt;
 my_sum = 0;&lt;br /&gt;
 '''for''' (i = pe_id; i &amp;lt; a.length; i += number_of_pe)         //separate elements of the array are assigned to each PE &lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    my_sum = my_sum + a[i];                               //all PEs accumulate elements assigned to them into local variable my_sum&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the above code, data parallelism is achieved by letting each processing element perform actions on separate elements of the array, which are identified using the PE's id. For instance, if three processing elements are used, then one processing element would start at i = 0, one would start at i = 1, and the last would start at i = 2. Since there are three processing elements, the array index for each increases by three on each iteration until the task is complete (note that in our example the elements assigned to each PE are interleaved rather than contiguous). If the length of the array is a multiple of three, then each processing element takes the same amount of time to execute its portion of the task.&lt;br /&gt;
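For concreteness, the interleaved decomposition described above can be simulated sequentially in Python. This is only a sketch: the pseudocode's getid() and number_of_pe become explicit parameters, and summing the local results stands in for the second part of the program.

```python
# Sketch: simulate the interleaved data-parallel decomposition sequentially.
# 'pe_id' and 'number_of_pe' from the pseudocode become explicit parameters.

def pe_work(a, pe_id, number_of_pe):
    # Each PE updates its interleaved slice and accumulates a local sum.
    my_sum = 0
    for i in range(pe_id, len(a), number_of_pe):
        a[i] = a[i] * i
        my_sum += a[i]
    return my_sum

def data_parallel_sum(a, number_of_pe=3):
    # Run every PE's portion, then combine the local sums (the 'second part').
    partial = [pe_work(a, pe, number_of_pe) for pe in range(number_of_pe)]
    return sum(partial)

# 7 elements, 3 PEs: PE0 gets indices 0, 3, 6; PE1 gets 1, 4; PE2 gets 2, 5.
a = [5, 1, 4, 1, 5, 9, 2]
total = data_parallel_sum(a)
```

Running the sequential loop from the first fragment on the same input yields the same total, which is the point: the decomposition changes who does the work, not the result.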
&lt;br /&gt;
&lt;br /&gt;
The picture below illustrates how elements of the array are assigned among different PEs for the specific case: length of the array is 7 and there are 3 PEs available. Elements in the array are marked by their indexes (0 to 6). As shown in the picture, PE0 will work on elements with index 0, 3, 6; PE1 is in charge of elements with index 1, 4; and elements with index 2, 5 are assigned to PE2. In this way, these 3 PEs work collectively on the array, while each PE works on different elements. Thus, data parallelism is achieved.&lt;br /&gt;
&lt;br /&gt;
[[Image:506wiki1.png|frame|center|150px|Illustration of data parallel programming(adapted from [http://computing.llnl.gov/tutorials/parallel_comp/#ModelsData Introduction to Parallel Computing])]]&lt;br /&gt;
&lt;br /&gt;
== Comparison with Message Passing and Shared Memory ==&lt;br /&gt;
Although the shared memory and message passing models may be combined into hybrid approaches, the two models are fundamentally different ways of addressing the same problem (of access control to common data). In contrast, the data parallel model is concerned with a fundamentally different problem (how to divide work into parallel tasks). As such, the data parallel model may be used in conjunction with either the shared memory or the message passing model without conflict. In fact, Klaiber (1994) compares the performance of a number of data parallel programs implemented with both shared memory and message passing models.&lt;br /&gt;
&lt;br /&gt;
One of the major advantages of combining the data parallel and message passing models is a reduction in the amount and complexity of communication required relative to a task parallel approach. Similarly, combining the data parallel and shared memory models tends to simplify and reduce the amount of synchronization required. If the task parallel code were modified from a message passing model to a shared memory model, the two threads would require 8 signals to be sent between the threads (instead of 8 messages). In contrast, the data parallel code would require a single barrier before the local sums are added to compute the full sum.&lt;br /&gt;
Much as the shared memory model can benefit from specialized hardware, the data parallel programming model can as well. SIMD (single-instruction-multiple-data) processors are specifically designed to run data parallel algorithms. These processors perform a single instruction on many different data locations simultaneously. Modern examples include CUDA processors developed by nVidia and Cell processors developed by STI (Sony, Toshiba, and IBM). For the curious, example code for CUDA processors is provided in the Appendix. However, whereas the shared memory model can be a difficult and costly abstraction in the absence of hardware support, the data parallel model—like the message passing model—does not require hardware support.&lt;br /&gt;
&lt;br /&gt;
Since data parallel code tends to simplify communication and synchronization, data parallel code may be easier to develop than a more task parallel approach. However, data parallel code also requires writing code to split program data into chunks and assign it to different threads. In addition, it is possible that a problem may not decompose easily into subproblems relying on largely independent chunks of data. In this case, it may be impractical or impossible to apply the data parallel model.&lt;br /&gt;
Once written, data parallel programs can scale easily to large numbers of processors. The data parallel model implicitly encourages data locality by having each thread work on a chunk of data. The regular data chunks also make it easier to reason about where to locate data and how to organize it.&lt;br /&gt;
&lt;br /&gt;
= Task Parallel Model =&lt;br /&gt;
Task parallelism is a form of parallelization in which different instructions are executed concurrently, either on the same data or on different data. It focuses on distributing the execution of processes (threads) across different parallel computing nodes. As part of the workflow, different execution threads communicate with one another as they work, in order to share data.&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
Suppose the task to be accomplished is to compute the sum of the results produced by executing task 'A' and task 'B'. The following example illustrates how task parallelism can be achieved.&lt;br /&gt;
&lt;br /&gt;
The pseudo code below illustrates task parallelism:&lt;br /&gt;
&amp;lt;pre&amp;gt;program:&lt;br /&gt;
do &lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
if CPU=&amp;quot;a&amp;quot; then&lt;br /&gt;
   do task &amp;quot;A&amp;quot;&lt;br /&gt;
else if CPU=&amp;quot;b&amp;quot; then&lt;br /&gt;
   do task &amp;quot;B&amp;quot;&lt;br /&gt;
end if&lt;br /&gt;
&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If we write the code as above and launch it on a 2-processor system, then the runtime environment will execute it accordingly.&lt;br /&gt;
In an SPMD system, both CPUs execute the same code and, in a parallel environment, both have access to the same data. The &amp;quot;if&amp;quot; clause differentiates between the CPUs: for CPU &amp;quot;a&amp;quot; the &amp;quot;if&amp;quot; condition is true, while for CPU &amp;quot;b&amp;quot; the &amp;quot;else if&amp;quot; condition is true, so each has its own task. The two CPUs thus execute separate code blocks simultaneously, performing different tasks at the same time.&lt;br /&gt;
Code executed by CPU &amp;quot;a&amp;quot;:&lt;br /&gt;
program:&lt;br /&gt;
...&lt;br /&gt;
do task &amp;quot;A&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
Code executed by CPU &amp;quot;b&amp;quot;:&lt;br /&gt;
program:&lt;br /&gt;
...&lt;br /&gt;
do task &amp;quot;B&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
This concept can now be generalized to any number of processors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
program:&lt;br /&gt;
...&lt;br /&gt;
if CPU=&amp;quot;a&amp;quot; then&lt;br /&gt;
   do task &amp;quot;A&amp;quot;&lt;br /&gt;
else if CPU=&amp;quot;b&amp;quot; then&lt;br /&gt;
   do task &amp;quot;B&amp;quot;&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
if CPU =&amp;quot;n&amp;quot; then&lt;br /&gt;
   do task &amp;quot;N&amp;quot;&lt;br /&gt;
&lt;br /&gt;
end if&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
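The CPU dispatch above can be sketched with threads in Python. This is a sketch only: worker, task_a, and task_b are hypothetical names, and the per-task computations are stand-ins for tasks A and B.

```python
import threading

# Sketch: each worker runs a different task selected by its id (task parallelism).
results = {}

def task_a():
    results['A'] = sum(range(10))   # stand-in computation for task A

def task_b():
    results['B'] = max(3, 7)        # stand-in computation for task B

def worker(cpu_id):
    # The if/else below mirrors the CPU dispatch in the pseudocode above.
    if cpu_id == 'a':
        task_a()
    elif cpu_id == 'b':
        task_b()

threads = [threading.Thread(target=worker, args=(c,)) for c in ('a', 'b')]
for t in threads:
    t.start()
for t in threads:
    t.join()
total = results['A'] + results['B']
```

Note that every thread runs the same worker function, but the branch on cpu_id gives each one a distinct task, which is exactly the SPMD pattern shown in the pseudocode.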
&lt;br /&gt;
= Data Parallel Model vs Task Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow: there is only one control processor that directs the activities of all the processing elements. In stark contrast is task parallelism (MIMD: Multiple Instruction, Multiple Data), characterized by multiple control flows: it allows the concurrent execution of multiple instruction streams, each of which manipulates its own data and serves a separate function. Below is a contrast between the data parallelism and task parallelism models from Wikipedia: [http://en.wikipedia.org/wiki/SIMD SIMD] and [http://en.wikipedia.org/wiki/MIMD MIMD]. In the following subsections we continue to compare and contrast features of the data-parallel and task-parallel models to help the reader understand the unique characteristics of the data-parallel programming model.&lt;br /&gt;
[[Image:Smid.png|frame|center|425px|contrast between data parallelism and task parallelism]]&lt;br /&gt;
&lt;br /&gt;
Since each parallel task is unique, a major limitation of task parallel algorithms is that the maximum degree of parallelism attainable is limited to the number of tasks that have been formulated.  This is in contrast to data parallel algorithms, which can be scaled easily to take advantage of an arbitrary number of processing elements.  In addition, unique tasks are likely to have significantly different run times, making it more challenging to balance load across processors. [[#References | Haveraaen (2000)]] also notes that task parallel algorithms are inherently more complex, requiring a greater degree of communication and synchronization.&lt;br /&gt;
&lt;br /&gt;
== Synchronous vs Asynchronous ==&lt;br /&gt;
While the [http://en.wikipedia.org/wiki/Lockstep_(computing) lockstep] imposed by data parallelism on all data streams ensures synchronous computation (all PEs perform their tasks at exactly the same pace), in task parallelism every processor performs its task at its own pace, which we call asynchronous computation. Thus, at certain points in a task parallel program's execution, communication and synchronization primitives are needed to allow the different instruction streams to coordinate their efforts, and that is where variable sharing and message passing come into play.&lt;br /&gt;
&lt;br /&gt;
== Determinism vs. Non-Determinism ==&lt;br /&gt;
Data parallelism's synchronous nature and task parallelism's asynchronism give rise to another pair of distinguishing features: determinism versus non-determinism. Data parallelism is deterministic, i.e., computing with the same input will always yield the same result, since its synchronism ensures that issues such as relative timing between PEs do not arise. In contrast, task parallelism's asynchronous updates of common data can give rise to non-determinism, i.e., the same input will not always yield the same result; the outcome of a computation also depends on factors outside the program's control, such as the scheduling and timing of other PEs. Non-determinism makes it harder to write and maintain correct programs. This partially explains the advantage of the data parallel programming model over the task parallel model in terms of development effort (also discussed in section 4.2).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Major differences between the data parallel and task parallel models ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
|+ '''Comparison between data parallel and task parallel programming models.'''&lt;br /&gt;
|-&lt;br /&gt;
! Aspects&lt;br /&gt;
! Data Parallel&lt;br /&gt;
! Task Parallel&lt;br /&gt;
|-&lt;br /&gt;
| Decomposition&lt;br /&gt;
| Partition data into subsets&lt;br /&gt;
| Partition program into subtasks&lt;br /&gt;
|-&lt;br /&gt;
| Parallel tasks&lt;br /&gt;
| Identical&lt;br /&gt;
| Unique&lt;br /&gt;
|-&lt;br /&gt;
| Degree of parallelism&lt;br /&gt;
| Scales easily&lt;br /&gt;
| Fixed&lt;br /&gt;
|-&lt;br /&gt;
| Load balancing&lt;br /&gt;
| Easier&lt;br /&gt;
| Harder&lt;br /&gt;
|-&lt;br /&gt;
| Communication overhead&lt;br /&gt;
| Lower&lt;br /&gt;
| Higher&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Definitions =&lt;br /&gt;
&lt;br /&gt;
* ''Data parallel.''  A data parallel algorithm is composed of a set of identical tasks which operate on different subsets of common data.&lt;br /&gt;
* ''Task parallel.''  A task parallel algorithm is composed of a set of differing tasks which operate on common data.&lt;br /&gt;
* ''SIMD (single-instruction-multiple-data).''  A processor which executes a single instruction simultaneously on multiple data locations.&lt;br /&gt;
* ''MIMD (multiple-instruction-multiple-data).'' A processor architecture which can execute multiple instructions on multiple data elements simultaneously.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
* David E. Culler, Jaswinder Pal Singh, and Anoop Gupta, [http://portal.acm.org/citation.cfm?id=550071 ''Parallel Computer Architecture: A Hardware/Software Approach,''] Morgan-Kauffman, 1999.&lt;br /&gt;
* Ian Foster, [http://www.mcs.anl.gov/~itf/dbpp/ ''Designing and Building Parallel Programs,''] Addison-Wesley, 1995.&lt;br /&gt;
* Magne Haveraaen, [http://portal.acm.org/citation.cfm?id=1239917 &amp;quot;Machine and collection abstractions for user-implemented data-parallel programming,&amp;quot;] ''Scientific Programming,'' 8(4):231-246, 2000.&lt;br /&gt;
* W. Daniel Hillis and Guy L. Steele, Jr., [http://portal.acm.org/citation.cfm?id=7903 &amp;quot;Data parallel algorithms,&amp;quot;] ''Communications of the ACM,'' 29(12):1170-1183, December 1986.&lt;br /&gt;
* Alexander C. Klaiber and Henry M. Levy, [http://portal.acm.org/citation.cfm?id=192020 &amp;quot;A comparison of message passing and shared memory architectures for data parallel programs,&amp;quot;] in ''Proceedings of the 21st Annual International Symposium on Computer Architecture,'' April 1994, pp. 94-105.&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* Philip J. Hatcher, Michael Jay Quinn, ''Data-Parallel Programming on MIMD Computers'', The MIT Press, 1991.&lt;br /&gt;
* Blaise Barney, &amp;quot;Introduction to Parallel Computing: Data Parallel Model&amp;quot;, Lawrence Livermore National Laboratory, [https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData], January 2009.&lt;br /&gt;
* Guy Blelloch, &amp;quot;Is Parallel Programming Hard?&amp;quot;, Carnegie Mellon University, [http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard], April 2009.&lt;br /&gt;
* Björn Lisper, ''Data parallelism and functional programming'', Lecture Notes in Computer Science, Volume 1132/1996, pp. 220-251, Springer Berlin, 1996.&lt;br /&gt;
* ''SIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SIMD http://en.wikipedia.org/wiki/SIMD].&lt;br /&gt;
* ''MIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/MIMD http://en.wikipedia.org/wiki/MIMD].&lt;br /&gt;
* ''Lockstep'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/Lockstep_(computing) http://en.wikipedia.org/wiki/Lockstep_(computing)].&lt;br /&gt;
* ''SPMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SPMD http://en.wikipedia.org/wiki/SPMD].&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43594</id>
		<title>CSC/ECE 506 Spring 2011/ch2 JR</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43594"/>
		<updated>2011-02-01T00:17:08Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Supplement to Chapter 2: The Data Parallel Programming Model=&lt;br /&gt;
Chapter 2 of [[#References | Solihin (2008)]] covers the shared memory and message passing parallel programming models.  However, it does not give a historical context for the development of parallel programming models.  It also does not address other commonly recognized parallel programming models, like the [[#Definitions | ''task parallel'']] model and the [[#Definitions | ''data parallel'']] model, which have been covered in other treatments like [[#References | Foster (1995)]] and [[#References | Culler (1999)]].&lt;br /&gt;
&lt;br /&gt;
Shared memory and message passing models are often presented as competing models, but the data parallel model addresses fundamentally different programming concerns and can therefore be used in conjunction with either.  The goal of this supplement is to provide historical context for the development of these parallel programming models and a treatment of the data and task parallel models to complement Chapter 2 of [[#References | Solihin (2008)]].  &lt;br /&gt;
&lt;br /&gt;
= Overview =&lt;br /&gt;
Whereas the shared memory and message passing models focus on how parallel tasks access common data, the [[#Definitions | ''data parallel'']] model focuses on how to divide up work into parallel tasks.  Data parallel algorithms exploit parallelism by dividing a problem into a number of identical tasks which execute on different subsets of common data.  The logical opposite of data parallel is task parallel, in which a number of distinct tasks operate on common data.  Historically, each parallel programming model was developed to take advantage of advancements in computer architecture.&lt;br /&gt;
&lt;br /&gt;
= History =&lt;br /&gt;
As computer architectures have evolved, so have parallel programming models. The earliest advancements in parallel computers took advantage of bit-level parallelism.  These computers used vector processing, which required a shared memory programming model.  As performance returns from this architecture diminished, the emphasis shifted to instruction-level parallelism and the message passing model began to dominate.  Most recently, with the move to cluster-based machines, there has been an increased emphasis on thread-level parallelism. This has corresponded to an increased interest in the data parallel programming model.&lt;br /&gt;
&lt;br /&gt;
== Bit-level parallelism in the 1970's ==&lt;br /&gt;
The major performance improvements from computers during this time were due to the ability to execute 32-bit word size operations at one time ([[#References|Culler (1999), p. 15.]]).  The dominant supercomputers of the time, like the Cray and the ILLIAC IV, were mainly Single Instruction Multiple Data architectures and used a shared memory programming model.  They each used different forms of vector processing ([[#References|Culler (1999), p. 21.]]). &lt;br /&gt;
Development of the ILLIAC IV began in 1964 and was not finished until 1975 [http://en.wikipedia.org/wiki/ILLIAC_IV].  A central processor was connected to the main memory and delegated tasks to individual PEs, each of which had its own memory cache [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].  Each PE could operate on an 8-, 32-, or 64-bit operand at a given time [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].&lt;br /&gt;
&lt;br /&gt;
The Cray-1 machine, built by Cray Research, was installed at Los Alamos National Laboratory in 1976 and had similar performance to the ILLIAC IV [http://en.wikipedia.org/wiki/ILLIAC_IV].  The Cray machine relied heavily on the use of registers instead of individual processors like the ILLIAC IV.  Each processor was connected to main memory and had a number of 64-bit registers used to perform operations [http://www.eecg.toronto.edu/~moshovos/ACA05/read/cray1.pdf].&lt;br /&gt;
&lt;br /&gt;
== Move to instruction-level parallelism in the 1980's ==&lt;br /&gt;
&lt;br /&gt;
Increasing the word size above 32 bits offered diminishing returns in terms of performance ([[#References|Culler (1999), p. 15.]]). In the mid-1980's the emphasis changed from bit-level parallelism to instruction-level parallelism, which involved increasing the number of instructions that could be executed at one time ([[#References|Culler (1999), p. 15.]]).  The message passing model gave programmers the ability to divide up instructions in order to take advantage of this architecture. &lt;br /&gt;
&lt;br /&gt;
== Current Transition to Thread-level parallelism ==&lt;br /&gt;
The move to cluster-based machines in the past decade has added another layer of complexity to parallelism.  Since computers may be located across a network from each other, there is more emphasis on software acting as a bridge [http://cobweb.ecn.purdue.edu/~pplinux/ppcluster.html]. This has led to a greater emphasis on thread- or task-level parallelism [http://en.wikipedia.org/wiki/Thread-level_parallelism] and to the addition of the data parallel programming model to existing message passing or shared memory models [http://en.wikipedia.org/wiki/Thread-level_parallelism].  &lt;br /&gt;
&lt;br /&gt;
= Data Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow. In Flynn's taxonomy, SIMD corresponds to performing the same operation repeatedly over a large data set. There is only one control processor, which directs the activities of all the processing elements. In a multiprocessor system executing a single set of instructions (SIMD), data parallelism is achieved when each processor performs the same task on different pieces of distributed data. In some situations, a single execution thread controls the operations on all pieces of data. In others, different threads control the operation, but they execute the same code.&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
&lt;br /&gt;
This section shows a simple example, adapted from the Solihin textbook (pp. 24-27), that illustrates the data-parallel programming model. Each of the code fragments below is written in a pseudo-code style.&lt;br /&gt;
&lt;br /&gt;
Suppose we want to perform the following task on an array &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt;: updating each element of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; by the product of itself and its index, and adding together the elements of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; into the variable &amp;lt;code&amp;gt;sum&amp;lt;/code&amp;gt;. The corresponding code is shown below.&lt;br /&gt;
&lt;br /&gt;
 // simple sequential task&lt;br /&gt;
 sum = 0;&lt;br /&gt;
 '''for''' (i = 0; i &amp;lt; a.length; i++)&lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    sum = sum + a[i];&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
When we orchestrate the task using the data-parallel programming model, the program can be divided into two parts. The first part performs the same operations on separate elements of the array for each processing element (sometimes referred to as a PE or pe), and the second part reorganizes data among all processing elements (in our example, data reorganization means summing up the values across the different processing elements). Since the data-parallel programming model defines only the overall effects of parallel steps, the second part can be accomplished either through shared memory or through message passing. The three code fragments below are examples of the first part of the program, a shared-memory version of the second part, and a message-passing version of the second part, respectively.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 // data parallel programming: let each PE perform the same task on different pieces of distributed data&lt;br /&gt;
 pe_id = getid();&lt;br /&gt;
 my_sum = 0;&lt;br /&gt;
 '''for''' (i = pe_id; i &amp;lt; a.length; i += number_of_pe)         //separate elements of the array are assigned to each PE &lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    my_sum = my_sum + a[i];                               //all PEs accumulate elements assigned to them into local variable my_sum&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the above code, data parallelism is achieved by letting each processing element perform actions on separate elements of the array, which are identified using the PE's id. For instance, if three processing elements are used, then one processing element starts at i = 0, one starts at i = 1, and the last starts at i = 2. Since there are three processing elements, the array index for each increases by three on each iteration until the task is complete (note that in our example the elements assigned to each PE are interleaved rather than contiguous). If the length of the array is a multiple of three, then each processing element takes the same amount of time to execute its portion of the task.&lt;br /&gt;
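The interleaved decomposition above can be sketched as runnable Python. This is a hypothetical illustration, not code from the Solihin text: the array contents and the choice of 3 PEs are assumptions, and the PEs are simulated by sequential calls rather than run on real parallel hardware (the iterations are independent, so the result is the same).&lt;br /&gt;

```python
# Hypothetical sketch of the data-parallel first part: each simulated PE
# updates its interleaved elements and accumulates a local partial sum.

def pe_task(pe_id, number_of_pe, a):
    """Work done by one processing element on its interleaved slice."""
    my_sum = 0
    for i in range(pe_id, len(a), number_of_pe):  # interleaved assignment
        a[i] = a[i] * i
        my_sum += a[i]
    return my_sum

# Simulate 3 PEs working on a 7-element array (indexes 0..6).
a = [1, 1, 1, 1, 1, 1, 1]
partial_sums = [pe_task(pe, 3, a) for pe in range(3)]
total = sum(partial_sums)  # the "second part": data reorganization
```

With this illustrative input, PE0 handles indexes 0, 3, 6, PE1 handles 1, 4, and PE2 handles 2, 5, so the partial sums are 9, 5, and 7 and the total is 21.&lt;br /&gt;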
&lt;br /&gt;
&lt;br /&gt;
The picture below illustrates how the elements of the array are assigned among the different PEs for a specific case: the array has length 7 and 3 PEs are available. Elements in the array are marked by their indexes (0 to 6). As shown in the picture, PE0 works on the elements with indexes 0, 3, and 6; PE1 is in charge of the elements with indexes 1 and 4; and the elements with indexes 2 and 5 are assigned to PE2. In this way, the 3 PEs work collectively on the array, while each PE works on different elements. Thus, data parallelism is achieved.&lt;br /&gt;
&lt;br /&gt;
[[Image:506wiki1.png|frame|center|150px|Illustration of data parallel programming(adapted from [http://computing.llnl.gov/tutorials/parallel_comp/#ModelsData Introduction to Parallel Computing])]]&lt;br /&gt;
&lt;br /&gt;
== Comparison with Message Passing and Shared Memory ==&lt;br /&gt;
&lt;br /&gt;
Although the shared memory and message passing models may be combined into hybrid approaches, the two models are fundamentally different ways of addressing the same problem (of access control to common data). In contrast, the data parallel model is concerned with a fundamentally different problem (how to divide work into parallel tasks). As such, the data parallel model may be used in conjunction with either the shared memory or the message passing model without conflict. In fact, Klaiber (1994) compares the performance of a number of data parallel programs implemented with both shared memory and message passing models.&lt;br /&gt;
One of the major advantages of combining the data parallel and message passing models is a reduction in the amount and complexity of communication required relative to a task parallel approach. Similarly, combining the data parallel and shared memory models tends to simplify and reduce the amount of synchronization required. If the task parallel code given above were modified from a message passing model to a shared memory model, it would require that 8 signals be sent between the two threads (instead of 8 messages). In contrast, the data parallel code would require only a single barrier before the local sums are added to compute the full sum.&lt;br /&gt;
Much as the shared memory model can benefit from specialized hardware, the data parallel programming model can as well. SIMD (single-instruction-multiple-data) processors are specifically designed to run data parallel algorithms. These processors perform a single instruction on many different data locations simultaneously. Modern examples include CUDA processors developed by nVidia and Cell processors developed by STI (Sony, Toshiba, and IBM). For the curious, example code for CUDA processors is provided in the Appendix. However, whereas the shared memory model can be a difficult and costly abstraction in the absence of hardware support, the data parallel model—like the message passing model—does not require hardware support.&lt;br /&gt;
Since data parallel code tends to simplify communication and synchronization, data parallel code may be easier to develop than a comparable task parallel approach. However, data parallel code also requires writing code to split the program data into chunks and assign them to different threads. In addition, a problem may not decompose easily into subproblems relying on largely independent chunks of data. In that case, it may be impractical or impossible to apply the data parallel model.&lt;br /&gt;
Once written, data parallel programs can scale easily to large numbers of processors. The data parallel model implicitly encourages data locality by having each thread work on a chunk of data. The regular data chunks also make it easier to reason about where to locate data and how to organize it.&lt;br /&gt;
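As a concrete sketch of the data parallel plus shared memory combination described above, the fragment below uses Python threads and a single barrier before one thread combines the local sums. The thread count and data are illustrative assumptions, not code from the referenced sources.&lt;br /&gt;

```python
# Hypothetical sketch: shared-memory reduction of per-PE local sums,
# synchronized by a single barrier (as described in the text above).
import threading

NUM_PE = 3
a = list(range(8))              # illustrative data: 0..7
local_sums = [0] * NUM_PE       # shared array of per-PE partial sums
result = []
barrier = threading.Barrier(NUM_PE)

def worker(pe_id):
    s = 0
    for i in range(pe_id, len(a), NUM_PE):  # interleaved assignment
        s += a[i]
    local_sums[pe_id] = s
    barrier.wait()              # all PEs must finish before the reduction
    if pe_id == 0:              # one PE performs the final sum
        result.append(sum(local_sums))

threads = [threading.Thread(target=worker, args=(p,)) for p in range(NUM_PE)]
for t in threads: t.start()
for t in threads: t.join()
```

Because every thread writes its local sum before reaching the barrier, thread 0 can safely read all of `local_sums` afterwards; no other locking is needed.&lt;br /&gt;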
&lt;br /&gt;
= Task Parallel Model =&lt;br /&gt;
Task parallelism is a form of parallelization in which different instructions are executed on the same data or on different data. It focuses on distributing the execution of processes (threads) across different parallel computing nodes. As part of the workflow, the different execution threads communicate with one another to share data as they work.&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
Suppose the task to be accomplished is to compute the sum of the results of executing instruction 'A' and instruction 'B'. The following example illustrates how task parallelism can be achieved.&lt;br /&gt;
&lt;br /&gt;
The pseudo code below illustrates task parallelism:&lt;br /&gt;
&amp;lt;pre&amp;gt;program:&lt;br /&gt;
do &lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
if CPU=&amp;quot;a&amp;quot; then&lt;br /&gt;
   do task &amp;quot;A&amp;quot;&lt;br /&gt;
else if CPU=&amp;quot;b&amp;quot; then&lt;br /&gt;
   do task &amp;quot;B&amp;quot;&lt;br /&gt;
end if&lt;br /&gt;
&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If we write the code as above and launch it on a 2-processor system, then the runtime environment will execute it accordingly.&lt;br /&gt;
In an SPMD system, both CPUs execute the same code and, in a parallel environment, both have access to the same data. The &amp;quot;if&amp;quot; clause differentiates between the CPUs: CPU &amp;quot;a&amp;quot; evaluates the &amp;quot;if&amp;quot; as true and CPU &amp;quot;b&amp;quot; evaluates the &amp;quot;else if&amp;quot; as true, so each is assigned its own task. Both CPUs then execute their separate code blocks at the same time, performing different tasks in parallel.&lt;br /&gt;
Code executed by CPU &amp;quot;a&amp;quot;:&lt;br /&gt;
program:&lt;br /&gt;
...&lt;br /&gt;
do task &amp;quot;A&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
Code executed by CPU &amp;quot;b&amp;quot;:&lt;br /&gt;
program:&lt;br /&gt;
...&lt;br /&gt;
do task &amp;quot;B&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
This concept can now be generalized to any number of processors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
program:&lt;br /&gt;
...&lt;br /&gt;
if CPU=&amp;quot;a&amp;quot; then&lt;br /&gt;
   do task &amp;quot;A&amp;quot;&lt;br /&gt;
else if CPU=&amp;quot;b&amp;quot; then&lt;br /&gt;
   do task &amp;quot;B&amp;quot;&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
if CPU=&amp;quot;n&amp;quot; then&lt;br /&gt;
   do task &amp;quot;N&amp;quot;&lt;br /&gt;
&lt;br /&gt;
end if&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
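The generalized SPMD pattern above can be sketched in a few lines of Python. The CPU ids and task names here are purely illustrative assumptions: every &amp;quot;CPU&amp;quot; runs the same program, and an id check selects which task it performs.&lt;br /&gt;

```python
# Hypothetical SPMD sketch: the same program runs on every CPU, and the
# branch on the CPU's id selects that CPU's own task, mirroring the
# if/else-if chain in the pseudo-code above.

def program(cpu_id):
    if cpu_id == "a":
        return "task A"
    elif cpu_id == "b":
        return "task B"
    else:
        return "task " + cpu_id.upper()   # generalizes to CPU "n"

# Simulate launching the same program on three CPUs.
results = [program(cpu) for cpu in ("a", "b", "n")]
```

Each simulated CPU executes identical code but performs a different task, which is exactly the SPMD generalization described in the text.&lt;br /&gt;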
&lt;br /&gt;
= Data Parallel Model vs Task Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow: there is only one control processor, which directs the activities of all the processing elements. In stark contrast, task parallelism (MIMD: Multiple Instruction, Multiple Data) is characterized by multiple control flows: it allows the concurrent execution of multiple instruction streams, each manipulating its own data and serving a separate function. Below is a contrast between the data parallelism and task parallelism models from Wikipedia: [http://en.wikipedia.org/wiki/SIMD SIMD] and [http://en.wikipedia.org/wiki/MIMD MIMD]. In the following subsections we continue to compare and contrast the data-parallel and task-parallel models to help the reader understand the unique characteristics of the data-parallel programming model.&lt;br /&gt;
[[Image:Smid.png|frame|center|425px|contrast between data parallelism and task parallelism]]&lt;br /&gt;
&lt;br /&gt;
== Synchronous vs Asynchronous ==&lt;br /&gt;
While the [http://en.wikipedia.org/wiki/Lockstep_(computing) lockstep] imposed by data parallelism on all data streams ensures synchronous computation (all PEs perform their tasks at exactly the same pace), in task parallelism every processor performs its task at its own pace, which we call asynchronous computation. Thus, at certain points in a task parallel program's execution, communication and synchronization primitives are needed to allow the different instruction streams to coordinate their efforts, and that is where variable sharing and message passing come into play.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Determinism vs. Non-Determinism ==&lt;br /&gt;
Data parallelism's synchronous nature and task parallelism's asynchronism give rise to another pair of distinguishing features: determinism versus non-determinism. Data parallelism is deterministic, i.e., computing with the same input will always yield the same result, since its synchronism ensures that issues such as relative timing between PEs do not arise. In contrast, task parallelism's asynchronous updates of common data can give rise to non-determinism, i.e., the same input will not always yield the same result; the outcome of a computation also depends on factors outside the program's control, such as the scheduling and timing of other PEs. Non-determinism makes it harder to write and maintain correct programs. This partially explains the advantage of the data parallel programming model over the task parallel model in terms of development effort (also discussed in section 4.2).&lt;br /&gt;
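The determinism issue can be made concrete with a small Python sketch (an illustrative assumption, using threads as stand-ins for task-parallel PEs): four tasks update a shared counter, and a lock serializes the updates so the result is deterministic. Removing the lock can allow increments to interleave and be lost, giving a different result on different runs.&lt;br /&gt;

```python
# Hypothetical sketch: asynchronous updates of common data, made
# deterministic with a lock. Without the lock, the read-modify-write of
# "counter" could interleave across threads and lose increments.
import threading

counter = 0
lock = threading.Lock()

def task(n_increments):
    global counter
    for _ in range(n_increments):
        with lock:              # synchronization restores determinism
            counter += 1

threads = [threading.Thread(target=task, args=(10000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```

With the lock held, every run yields the same final counter value (4 threads times 10000 increments), regardless of thread scheduling.&lt;br /&gt;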
&lt;br /&gt;
&lt;br /&gt;
== Major differences between the data parallel and task parallel models ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
|+ '''Comparison between data parallel and task parallel programming models.'''&lt;br /&gt;
|-&lt;br /&gt;
! Aspects&lt;br /&gt;
! Data Parallel&lt;br /&gt;
! Task Parallel&lt;br /&gt;
|-&lt;br /&gt;
| Decomposition&lt;br /&gt;
| Partition data into subsets&lt;br /&gt;
| Partition program into subtasks&lt;br /&gt;
|-&lt;br /&gt;
| Parallel tasks&lt;br /&gt;
| Identical&lt;br /&gt;
| Unique&lt;br /&gt;
|-&lt;br /&gt;
| Degree of parallelism&lt;br /&gt;
| Scales easily&lt;br /&gt;
| Fixed&lt;br /&gt;
|-&lt;br /&gt;
| Load balancing&lt;br /&gt;
| Easier&lt;br /&gt;
| Harder&lt;br /&gt;
|-&lt;br /&gt;
| Communication overhead&lt;br /&gt;
| Lower&lt;br /&gt;
| Higher&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Definitions =&lt;br /&gt;
&lt;br /&gt;
* ''Data parallel.''  A data parallel algorithm is composed of a set of identical tasks which operate on different subsets of common data.&lt;br /&gt;
* ''Task parallel.''  A task parallel algorithm is composed of a set of differing tasks which operate on common data.&lt;br /&gt;
* ''SIMD (single-instruction-multiple-data).''  A processor which executes a single instruction simultaneously on multiple data locations.&lt;br /&gt;
* ''MIMD (multiple-instruction-multiple-data).'' A processor architecture which can execute multiple instructions on multiple data elements simultaneously.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
* David E. Culler, Jaswinder Pal Singh, and Anoop Gupta, [http://portal.acm.org/citation.cfm?id=550071 ''Parallel Computer Architecture: A Hardware/Software Approach,''] Morgan-Kauffman, 1999.&lt;br /&gt;
* Ian Foster, [http://www.mcs.anl.gov/~itf/dbpp/ ''Designing and Building Parallel Programs,''] Addison-Wesley, 1995.&lt;br /&gt;
* Magne Haveraaen, [http://portal.acm.org/citation.cfm?id=1239917 &amp;quot;Machine and collection abstractions for user-implemented data-parallel programming,&amp;quot;] ''Scientific Programming,'' 8(4):231-246, 2000.&lt;br /&gt;
* W. Daniel Hillis and Guy L. Steele, Jr., [http://portal.acm.org/citation.cfm?id=7903 &amp;quot;Data parallel algorithms,&amp;quot;] ''Communications of the ACM,'' 29(12):1170-1183, December 1986.&lt;br /&gt;
* Alexander C. Klaiber and Henry M. Levy, [http://portal.acm.org/citation.cfm?id=192020 &amp;quot;A comparison of message passing and shared memory architectures for data parallel programs,&amp;quot;] in ''Proceedings of the 21st Annual International Symposium on Computer Architecture,'' April 1994, pp. 94-105.&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* Philip J. Hatcher, Michael Jay Quinn, ''Data-Parallel Programming on MIMD Computers'', The MIT Press, 1991.&lt;br /&gt;
* Blaise Barney, &amp;quot;Introduction to Parallel Computing: Data Parallel Model&amp;quot;, Lawrence Livermore National Laboratory, [https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData], January 2009.&lt;br /&gt;
* Guy Blelloch, &amp;quot;Is Parallel Programming Hard?&amp;quot;, Carnegie Mellon University, [http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard], April 2009.&lt;br /&gt;
* Björn Lisper, ''Data parallelism and functional programming'', Lecture Notes in Computer Science, Volume 1132/1996, pp. 220-251, Springer Berlin, 1996.&lt;br /&gt;
* ''SIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SIMD http://en.wikipedia.org/wiki/SIMD].&lt;br /&gt;
* ''MIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/MIMD http://en.wikipedia.org/wiki/MIMD].&lt;br /&gt;
* ''Lockstep'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/Lockstep_(computing) http://en.wikipedia.org/wiki/Lockstep_(computing)].&lt;br /&gt;
* ''SPMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SPMD http://en.wikipedia.org/wiki/SPMD].&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43593</id>
		<title>CSC/ECE 506 Spring 2011/ch2 JR</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43593"/>
		<updated>2011-02-01T00:11:38Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Supplement to Chapter 2: The Data Parallel Programming Model=&lt;br /&gt;
Chapter 2 of [[#References | Solihin (2008)]] covers the shared memory and message passing parallel programming models.  However, it does not address the [[#Definitions | ''data parallel'']] model, another commonly recognized parallel programming model covered in other treatments like [[#References | Foster (1995)]] and [[#References | Culler (1999)]].  It also does not give any historical context for how parallel programming models have evolved.&lt;br /&gt;
&lt;br /&gt;
Shared memory and message passing models are often presented as competing models, but the data parallel model addresses fundamentally different programming concerns and can therefore be used in conjunction with either.  The goal of this supplement is to provide a treatment of the data parallel model which complements Chapter 2 of [[#References | Solihin (2008)]].  The [[#Definitions | ''task parallel'']] model will also be introduced as a point of contrast.&lt;br /&gt;
&lt;br /&gt;
= Overview =&lt;br /&gt;
Whereas the shared memory and message passing models focus on how parallel tasks access common data, the [[#Definitions | ''data parallel'']] model focuses on how to divide up work into parallel tasks.  Data parallel algorithms exploit parallelism by dividing a problem into a number of identical tasks which execute on different subsets of common data.  The logical opposite of data parallel is task parallel, in which a number of distinct tasks operate on common data.  Historically, each parallel programming model was developed to exploit performance gains made possible by advancements in computer architecture.&lt;br /&gt;
&lt;br /&gt;
= History =&lt;br /&gt;
As computer architectures have evolved, so have parallel programming models. The earliest advancements in parallel computers took advantage of bit-level parallelism.  These computers used vector processing, which required a shared memory programming model.  As performance returns from this architecture diminished, the emphasis shifted to instruction-level parallelism and the message passing model began to dominate.  Most recently, with the move to cluster-based machines, there has been an increased emphasis on thread-level parallelism. This has corresponded to an increased interest in the data parallel programming model.&lt;br /&gt;
&lt;br /&gt;
== Bit-level parallelism in the 1970's ==&lt;br /&gt;
The major performance improvements from computers during this time were due to the ability to execute 32-bit word size operations at one time ([[#References|Culler (1999), p. 15.]]).  The dominant supercomputers of the time, like the Cray and the ILLIAC IV, were mainly Single Instruction Multiple Data architectures and used a shared memory programming model.  They each used different forms of vector processing ([[#References|Culler (1999), p. 21.]]). &lt;br /&gt;
Development of the ILLIAC IV began in 1964 and wasn't finished until 1975 [http://en.wikipedia.org/wiki/ILLIAC_IV].  A central processor was connected to the main memory and delegated tasks to the individual PEs (processing elements), each of which had its own memory cache [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].  Each PE could operate on an 8-, 32-, or 64-bit operand at a given time [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].&lt;br /&gt;
&lt;br /&gt;
The Cray-1, built by Cray Research, was installed at Los Alamos National Laboratory in 1976 and had similar performance to the ILLIAC IV [http://en.wikipedia.org/wiki/ILLIAC_IV].  Rather than an array of individual processing elements as in the ILLIAC IV, the Cray machine relied heavily on registers: the processor was connected to main memory and had a number of 64-bit registers used to perform operations [http://www.eecg.toronto.edu/~moshovos/ACA05/read/cray1.pdf].&lt;br /&gt;
&lt;br /&gt;
== Move to instruction-level parallelism in the 1980's ==&lt;br /&gt;
&lt;br /&gt;
Increasing the word size beyond 32 bits offered diminishing returns in terms of performance ([[#References|Culler (1999), p. 15.]]). In the mid-1980's the emphasis changed from bit-level parallelism to instruction-level parallelism, which involved increasing the number of instructions that could be executed at one time ([[#References|Culler (1999), p. 15.]]).  The message passing model gave programmers a way to divide up instructions in order to take advantage of this architecture. &lt;br /&gt;
&lt;br /&gt;
== Thread-level parallelism ==&lt;br /&gt;
The move to cluster-based machines in the past decade has added another layer of complexity to parallelism.  Since the computers may be located across a network from one another, more emphasis is placed on software acting as a bridge [http://cobweb.ecn.purdue.edu/~pplinux/ppcluster.html]. This has led to a greater emphasis on thread- or task-level parallelism [http://en.wikipedia.org/wiki/Thread-level_parallelism] and the addition of the data parallel programming model to existing message passing or shared memory models [http://en.wikipedia.org/wiki/Thread-level_parallelism].  &lt;br /&gt;
&lt;br /&gt;
= Data Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow. In Flynn's taxonomy, SIMD is analogous to performing the same operation repeatedly over a large data set: a single control processor directs the activities of all the processing elements. In a multiprocessor system executing a single set of instructions (SIMD), data parallelism is achieved when each processor performs the same task on different pieces of distributed data. In some situations, a single execution thread controls operations on all pieces of data. In others, different threads control the operation, but they execute the same code.&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
&lt;br /&gt;
This section shows a simple example, adapted from the Solihin textbook (pp. 24-27), that illustrates the data-parallel programming model. Each of the code fragments below is written in a pseudo-code style.&lt;br /&gt;
&lt;br /&gt;
Suppose we want to perform the following task on an array &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt;: updating each element of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; by the product of itself and its index, and adding together the elements of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; into the variable &amp;lt;code&amp;gt;sum&amp;lt;/code&amp;gt;. The corresponding code is shown below.&lt;br /&gt;
&lt;br /&gt;
 // simple sequential task&lt;br /&gt;
 sum = 0;&lt;br /&gt;
 '''for''' (i = 0; i &amp;lt; a.length; i++)&lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    sum = sum + a[i];&lt;br /&gt;
 }&lt;br /&gt;
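&lt;br /&gt;
For readers who want to run the example, the sequential pseudo-code above can be written as a short Python program (a runnable sketch of ours, not taken from the textbook):&lt;br /&gt;

```python
# Sequential version: scale each element by its index, then accumulate the sum.
def scale_and_sum(a):
    total = 0
    for i in range(len(a)):
        a[i] = a[i] * i
        total = total + a[i]
    return total

values = [5, 5, 5, 5]
print(scale_and_sum(values))   # prints 30; values is now [0, 5, 10, 15]
```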
&lt;br /&gt;
When we orchestrate the task using the data-parallel programming model, the program can be divided into two parts. The first part performs the same operations on separate elements of the array for each processing element (sometimes referred to as PE or pe), and the second part reorganizes data among all processing elements (in our example, the data reorganization is summing up values across different processing elements). Since the data-parallel programming model only defines the overall effects of parallel steps, the second part can be accomplished through either shared memory or message passing. The code fragment below illustrates the first part of the program.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 // data parallel programming: let each PE perform the same task on different pieces of distributed data&lt;br /&gt;
 pe_id = getid();&lt;br /&gt;
 my_sum = 0;&lt;br /&gt;
 '''for''' (i = pe_id; i &amp;lt; a.length; i += number_of_pe)         //separate elements of the array are assigned to each PE &lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    my_sum = my_sum + a[i];                               //all PEs accumulate elements assigned to them into local variable my_sum&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the above code, data parallelism is achieved by letting each processing element perform actions on separate elements of the array, which are identified using the PE's id. For instance, if three processing elements are used, then one processing element would start at i = 0, one would start at i = 1, and the last would start at i = 2. Since there are three processing elements, the array index for each increases by three on each iteration until the task is complete (note that in our example the elements assigned to each PE are interleaved rather than contiguous). If the length of the array is a multiple of three, then each processing element takes the same amount of time to execute its portion of the task.&lt;br /&gt;
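&lt;br /&gt;
The second part (combining the per-PE partial sums) can be done through shared memory or message passing, as noted above. The Python sketch below simulates both with threads; the PE count, the lock-protected total, and the queue are our own illustrative choices, not the textbook's code:&lt;br /&gt;

```python
import threading
import queue

def data_parallel_sum(a, num_pe):
    # Part 1: each PE scales its interleaved elements and builds a local sum.
    # Part 2 combines the local sums; both variants described in the text are
    # shown: shared memory (a lock-protected total) and message passing (a
    # queue that PE 0 drains after all PEs have sent their local sums).
    total = {'sum': 0}
    lock = threading.Lock()
    q = queue.Queue()

    def pe_task(pe_id):
        my_sum = 0
        for i in range(pe_id, len(a), num_pe):    # interleaved assignment
            a[i] = a[i] * i
            my_sum = my_sum + a[i]
        with lock:                                # shared-memory combine
            total['sum'] = total['sum'] + my_sum
        q.put(my_sum)                             # message-passing combine

    threads = [threading.Thread(target=pe_task, args=(p,)) for p in range(num_pe)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    mp_sum = sum(q.get() for _ in range(num_pe))  # PE 0 receives every message
    assert mp_sum == total['sum']                 # both combines agree
    return total['sum']
```

With a = [5, 5, 5, 5, 5, 5, 5] and 3 PEs, both combines yield 0+5+10+15+20+25+30 = 105, matching the sequential result.&lt;br /&gt;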
&lt;br /&gt;
&lt;br /&gt;
The picture below illustrates how elements of the array are assigned among the different PEs for the specific case in which the length of the array is 7 and there are 3 PEs available. Elements in the array are marked by their indices (0 to 6). As shown in the picture, PE0 works on the elements with indices 0, 3, and 6; PE1 is in charge of the elements with indices 1 and 4; and the elements with indices 2 and 5 are assigned to PE2. In this way, the 3 PEs work collectively on the array, while each PE works on different elements. Thus, data parallelism is achieved.&lt;br /&gt;
&lt;br /&gt;
[[Image:506wiki1.png|frame|center|150px|Illustration of data parallel programming (adapted from [http://computing.llnl.gov/tutorials/parallel_comp/#ModelsData Introduction to Parallel Computing])]]&lt;br /&gt;
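&lt;br /&gt;
The interleaved assignment in the figure can be reproduced directly; the snippet below (our own illustration, not from the article's sources) prints which indices each PE receives for an array of length 7 and 3 PEs:&lt;br /&gt;

```python
# Interleaved assignment of array indices to PEs, as in the figure:
# PE p gets indices p, p + 3, p + 6, ... up to the array length.
num_pe, length = 3, 7
assignment = {p: list(range(p, length, num_pe)) for p in range(num_pe)}
print(assignment)   # {0: [0, 3, 6], 1: [1, 4], 2: [2, 5]}
```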
&lt;br /&gt;
== Comparison with Message Passing and Shared Memory ==&lt;br /&gt;
&lt;br /&gt;
Although the shared memory and message passing models may be combined into hybrid approaches, the two models are fundamentally different ways of addressing the same problem (of access control to common data). In contrast, the data parallel model is concerned with a fundamentally different problem (how to divide work into parallel tasks). As such, the data parallel model may be used in conjunction with either the shared memory or the message passing model without conflict. In fact, Klaiber (1994) compares the performance of a number of data parallel programs implemented with both shared memory and message passing models.&lt;br /&gt;
One of the major advantages of combining the data parallel and message passing models is a reduction in the amount and complexity of communication required relative to a task parallel approach. Similarly, combining the data parallel and shared memory models tends to simplify and reduce the amount of synchronization required. If the task parallel code given below were modified from a message passing model to a shared memory model, the two threads would require 8 signals to be sent between the threads (instead of 8 messages). In contrast, the data parallel code would require a single barrier before the local sums are added to compute the full sum.&lt;br /&gt;
Much as the shared memory model can benefit from specialized hardware, the data parallel programming model can as well. SIMD (single-instruction-multiple-data) processors are specifically designed to run data parallel algorithms. These processors perform a single instruction on many different data locations simultaneously. Modern examples include CUDA processors developed by nVidia and Cell processors developed by STI (Sony, Toshiba, and IBM). For the curious, example code for CUDA processors is provided in the Appendix. However, whereas the shared memory model can be a difficult and costly abstraction in the absence of hardware support, the data parallel model—like the message passing model—does not require hardware support.&lt;br /&gt;
Since data parallel code tends to simplify communication and synchronization, it may be easier to develop than a task parallel approach. However, data parallel code also requires writing code to split program data into chunks and assign it to different threads. In addition, a problem may not decompose easily into subproblems relying on largely independent chunks of data; in this case, it may be impractical or impossible to apply the data parallel model.&lt;br /&gt;
Once written, data parallel programs can scale easily to large numbers of processors. The data parallel model implicitly encourages data locality by having each thread work on a chunk of data. The regular data chunks also make it easier to reason about where to locate data and how to organize it.&lt;br /&gt;
&lt;br /&gt;
= Task Parallel Model =&lt;br /&gt;
Task parallelism is a form of parallelization in which multiple distinct instructions are executed, either on the same data or on different data. It focuses on distributing the execution of processes (threads) across different parallel computing nodes. As part of the workflow, the different execution threads communicate with one another as they work in order to share data.&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
Suppose the task to be accomplished is to compute the sum of the results of executing task 'A' and task 'B'. The following example illustrates how task parallelism can be achieved.&lt;br /&gt;
&lt;br /&gt;
The pseudo code below illustrates task parallelism:&lt;br /&gt;
&amp;lt;pre&amp;gt;program:&lt;br /&gt;
do &lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
if CPU=&amp;quot;a&amp;quot; then&lt;br /&gt;
   do task &amp;quot;A&amp;quot;&lt;br /&gt;
else if CPU=&amp;quot;b&amp;quot; then&lt;br /&gt;
   do task &amp;quot;B&amp;quot;&lt;br /&gt;
end if&lt;br /&gt;
&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If we write the code as above and launch it on a 2-processor system, then the runtime environment will execute it accordingly.&lt;br /&gt;
In an SPMD system, both CPUs will execute the code. In a parallel environment, both will have access to the same data. The &amp;quot;if&amp;quot; clause differentiates between the CPUs. CPU &amp;quot;a&amp;quot; will read true on the &amp;quot;if&amp;quot; and CPU &amp;quot;b&amp;quot; will read true on the &amp;quot;else if&amp;quot;, so each has its own task. The two CPUs then execute separate code blocks simultaneously, performing different tasks.&lt;br /&gt;
Code executed by CPU &amp;quot;a&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;program:&lt;br /&gt;
...&lt;br /&gt;
do task &amp;quot;A&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Code executed by CPU &amp;quot;b&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;program:&lt;br /&gt;
...&lt;br /&gt;
do task &amp;quot;B&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This concept can now be generalized to any number of processors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
program:&lt;br /&gt;
...&lt;br /&gt;
if CPU=&amp;quot;a&amp;quot; then&lt;br /&gt;
   do task &amp;quot;A&amp;quot;&lt;br /&gt;
else if CPU=&amp;quot;b&amp;quot; then&lt;br /&gt;
   do task &amp;quot;B&amp;quot;&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
if CPU =&amp;quot;n&amp;quot; then&lt;br /&gt;
   do task &amp;quot;N&amp;quot;&lt;br /&gt;
&lt;br /&gt;
end if&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
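&lt;br /&gt;
The pseudo code above can also be emulated on a shared-memory machine with one thread per task. A minimal Python sketch follows; the task bodies and the results dictionary are our own placeholders for tasks 'A' and 'B':&lt;br /&gt;

```python
import threading

results = {}

def task_a():
    results['A'] = sum(range(10))     # stands in for task 'A'

def task_b():
    results['B'] = max(range(10))     # stands in for task 'B'

# Launch the distinct tasks concurrently, then combine their results,
# mirroring the CPU 'a' / CPU 'b' branches in the pseudo code above.
threads = [threading.Thread(target=task_a), threading.Thread(target=task_b)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results['A'] + results['B'])   # prints 54
```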
&lt;br /&gt;
= Data Parallel Model vs Task Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow: there is only one control processor that directs the activities of all the processing elements. In stark contrast is task parallelism (MIMD: Multiple Instruction, Multiple Data): characterized by multiple control flows, it allows the concurrent execution of multiple instruction streams, each manipulating its own data and serving a separate function. Below is a contrast between the data parallelism and task parallelism models from Wikipedia: [http://en.wikipedia.org/wiki/SIMD SIMD] and [http://en.wikipedia.org/wiki/MIMD MIMD]. In the following subsections we continue to compare and contrast features of the data-parallel and task-parallel models to help the reader understand the unique characteristics of the data-parallel programming model.&lt;br /&gt;
[[Image:Smid.png|frame|center|425px|contrast between data parallelism and task parallelism]]&lt;br /&gt;
&lt;br /&gt;
== Synchronous vs Asynchronous ==&lt;br /&gt;
While the [http://en.wikipedia.org/wiki/Lockstep_(computing) lockstep] imposed by data parallelism on all data streams ensures synchronous computation (all PEs perform their tasks at the exact same pace), every processor in task parallelism performs its task at its own pace, which we call asynchronous computation. Thus, at certain points of a task parallel program's execution, communication and synchronization primitives are needed to allow different instruction streams to coordinate their efforts, and that is where variable sharing and message passing come into play.&lt;br /&gt;
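&lt;br /&gt;
One such synchronization primitive is a barrier, which briefly imposes a lockstep-like point on otherwise asynchronous tasks. The Python sketch below is our own two-phase example, not code from any cited source:&lt;br /&gt;

```python
import threading

num_pe = 3
barrier = threading.Barrier(num_pe)
log = []
log_lock = threading.Lock()

def pe(pe_id):
    # Phase 1: each PE works at its own (asynchronous) pace.
    with log_lock:
        log.append(('phase1', pe_id))
    barrier.wait()   # no PE enters phase 2 until all have finished phase 1
    with log_lock:
        log.append(('phase2', pe_id))

threads = [threading.Thread(target=pe, args=(p,)) for p in range(num_pe)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# All phase-1 entries precede all phase-2 entries, regardless of scheduling.
```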
&lt;br /&gt;
&lt;br /&gt;
== Determinism vs. Non-Determinism ==&lt;br /&gt;
Data parallelism's synchronous nature and task parallelism's asynchrony give rise to another pair of features that distinguish these two models: determinism versus non-determinism. Data parallelism is deterministic, i.e., computing with the same input will always yield the same result, since its synchronism ensures that issues like relative timing between PEs will not arise. In contrast, task parallelism's asynchronous updates of common data can give rise to non-determinism, i.e., the same input won't always yield the same computation result (the result of a computation will also depend on factors outside the program's control, such as the scheduling and timing of other PEs). Non-determinism makes it harder to write and maintain correct programs. This partially explains the advantage of the data parallel model over the task parallel model in terms of development effort (also discussed in section 4.2).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Major differences between the data parallel and task parallel models ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
|+ '''Comparison between data parallel and task parallel programming models.'''&lt;br /&gt;
|-&lt;br /&gt;
! Aspects&lt;br /&gt;
! Data Parallel&lt;br /&gt;
! Task Parallel&lt;br /&gt;
|-&lt;br /&gt;
| Decomposition&lt;br /&gt;
| Partition data into subsets&lt;br /&gt;
| Partition program into subtasks&lt;br /&gt;
|-&lt;br /&gt;
| Parallel tasks&lt;br /&gt;
| Identical&lt;br /&gt;
| Unique&lt;br /&gt;
|-&lt;br /&gt;
| Degree of parallelism&lt;br /&gt;
| Scales easily&lt;br /&gt;
| Fixed&lt;br /&gt;
|-&lt;br /&gt;
| Load balancing&lt;br /&gt;
| Easier&lt;br /&gt;
| Harder&lt;br /&gt;
|-&lt;br /&gt;
| Communication overhead&lt;br /&gt;
| Lower&lt;br /&gt;
| Higher&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Definitions =&lt;br /&gt;
&lt;br /&gt;
* ''Data parallel.''  A data parallel algorithm is composed of a set of identical tasks which operate on different subsets of common data.&lt;br /&gt;
* ''Task parallel.''  A task parallel algorithm is composed of a set of differing tasks which operate on common data.&lt;br /&gt;
* ''SIMD (single-instruction-multiple-data).''  A processor which executes a single instruction simultaneously on multiple data locations.&lt;br /&gt;
* ''MIMD (multiple-instruction-multiple-data).'' A processor architecture which can execute multiple instructions across multiple data streams simultaneously.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
* David E. Culler, Jaswinder Pal Singh, and Anoop Gupta, [http://portal.acm.org/citation.cfm?id=550071 ''Parallel Computer Architecture: A Hardware/Software Approach,''] Morgan-Kauffman, 1999.&lt;br /&gt;
* Ian Foster, [http://www.mcs.anl.gov/~itf/dbpp/ ''Designing and Building Parallel Programs,''] Addison-Wesley, 1995.&lt;br /&gt;
* Magne Haveraaen, [http://portal.acm.org/citation.cfm?id=1239917 &amp;quot;Machine and collection abstractions for user-implemented data-parallel programming,&amp;quot;] ''Scientific Programming,'' 8(4):231-246, 2000.&lt;br /&gt;
* W. Daniel Hillis and Guy L. Steele, Jr., [http://portal.acm.org/citation.cfm?id=7903 &amp;quot;Data parallel algorithms,&amp;quot;] ''Communications of the ACM,'' 29(12):1170-1183, December 1986.&lt;br /&gt;
* Alexander C. Klaiber and Henry M. Levy, [http://portal.acm.org/citation.cfm?id=192020 &amp;quot;A comparison of message passing and shared memory architectures for data parallel programs,&amp;quot;] in ''Proceedings of the 21st Annual International Symposium on Computer Architecture,'' April 1994, pp. 94-105.&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* Philip J. Hatcher, Michael Jay Quinn, ''Data-Parallel Programming on MIMD Computers'', The MIT Press, 1991.&lt;br /&gt;
* Blaise Barney, &amp;quot;Introduction to Parallel Computing: Data Parallel Model&amp;quot;, Lawrence Livermore National Laboratory, [https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData], January 2009.&lt;br /&gt;
* Guy Blelloch, &amp;quot;Is Parallel Programming Hard?&amp;quot;, Carnegie Mellon University, [http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard], April 2009.&lt;br /&gt;
* Björn Lisper, ''Data parallelism and functional programming'', Lecture Notes in Computer Science, Volume 1132/1996, pp. 220-251, Springer Berlin, 1996.&lt;br /&gt;
* ''SIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SIMD http://en.wikipedia.org/wiki/SIMD].&lt;br /&gt;
* ''MIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/MIMD http://en.wikipedia.org/wiki/MIMD].&lt;br /&gt;
* ''Lockstep'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/Lockstep_(computing) http://en.wikipedia.org/wiki/Lockstep_(computing)].&lt;br /&gt;
* ''SPMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SPMD http://en.wikipedia.org/wiki/SPMD].&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43592</id>
		<title>CSC/ECE 506 Spring 2011/ch2 JR</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43592"/>
		<updated>2011-01-31T23:08:40Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Supplement to Chapter 2: The Data Parallel Programming Model=&lt;br /&gt;
Chapter 2 of [[#References | Solihin (2008)]] covers the shared memory and message passing parallel programming models.  However, it does not address the [[#Definitions | ''data parallel'']] model, another commonly recognized parallel programming model covered in other treatments like [[#References | Foster (1995)]] and [[#References | Culler (1999)]].  It also does not give any historical context for how parallel programming models have evolved.&lt;br /&gt;
&lt;br /&gt;
Shared memory and message passing models are often presented as competing models, but the data parallel model addresses fundamentally different programming concerns and can therefore be used in conjunction with either.  The goal of this supplement is to provide a treatment of the data parallel model which complements Chapter 2 of [[#References | Solihin (2008)]].  The [[#Definitions | ''task parallel'']] model will also be introduced as a point of contrast.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= History =&lt;br /&gt;
As computer architectures have evolved, so have parallel programming models. The earliest advancements in parallel computers took advantage of bit-level parallelism.  These computers used vector processing, which required a shared memory programming model.  As performance returns from this architecture diminished, the emphasis shifted to instruction-level parallelism and the message passing model began to dominate.  Most recently, with the move to cluster-based machines, there has been an increased emphasis on thread-level parallelism. This has corresponded to an increased interest in the data parallel programming model.&lt;br /&gt;
&lt;br /&gt;
== Bit-level parallelism in the 1970's ==&lt;br /&gt;
The major performance improvements from computers during this time were due to the ability to execute 32-bit word size operations at one time ([[#References|Culler (1999), p. 15.]]).  The dominant supercomputers of the time, like the Cray and the ILLIAC IV, were mainly Single Instruction Multiple Data architectures and used a shared memory programming model.  They each used different forms of vector processing ([[#References|Culler (1999), p. 21.]]). &lt;br /&gt;
Development of the ILLIAC IV began in 1964 and wasn't finished until 1975 [http://en.wikipedia.org/wiki/ILLIAC_IV].  A central processor was connected to the main memory and delegated tasks to the individual PEs (processing elements), each of which had its own memory cache [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].  Each PE could operate on an 8-, 32-, or 64-bit operand at a given time [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].&lt;br /&gt;
&lt;br /&gt;
The Cray-1, built by Cray Research, was installed at Los Alamos National Laboratory in 1976 and had similar performance to the ILLIAC IV [http://en.wikipedia.org/wiki/ILLIAC_IV].  Rather than an array of individual processing elements as in the ILLIAC IV, the Cray machine relied heavily on registers: the processor was connected to main memory and had a number of 64-bit registers used to perform operations [http://www.eecg.toronto.edu/~moshovos/ACA05/read/cray1.pdf].&lt;br /&gt;
&lt;br /&gt;
== Move to instruction-level parallelism in the 1980's ==&lt;br /&gt;
&lt;br /&gt;
Increasing the word size beyond 32 bits offered diminishing returns in terms of performance ([[#References|Culler (1999), p. 15.]]). In the mid-1980's the emphasis changed from bit-level parallelism to instruction-level parallelism, which involved increasing the number of instructions that could be executed at one time ([[#References|Culler (1999), p. 15.]]).  The message passing model gave programmers a way to divide up instructions in order to take advantage of this architecture. &lt;br /&gt;
&lt;br /&gt;
== Thread-level parallelism ==&lt;br /&gt;
The move to cluster-based machines in the past decade has added another layer of complexity to parallelism.  Since the computers may be located across a network from one another, more emphasis is placed on software acting as a bridge [http://cobweb.ecn.purdue.edu/~pplinux/ppcluster.html]. This has led to a greater emphasis on thread- or task-level parallelism [http://en.wikipedia.org/wiki/Thread-level_parallelism] and the addition of the data parallel programming model to existing message passing or shared memory models [http://en.wikipedia.org/wiki/Thread-level_parallelism].  &lt;br /&gt;
&lt;br /&gt;
= Data Parallel Model =&lt;br /&gt;
One important feature of the data-parallel programming model, or data parallelism (SIMD), is its single control flow. In Flynn's taxonomy, SIMD is analogous to performing the same operation repeatedly over a large data set: a single control processor directs the activities of all the processing elements. In a multiprocessor system executing a single set of instructions (SIMD), data parallelism is achieved when each processor performs the same task on different pieces of distributed data. In some situations, a single execution thread controls operations on all pieces of data. In others, different threads control the operation, but they execute the same code.&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
&lt;br /&gt;
This section shows a simple example, adapted from the Solihin textbook (pp. 24-27), that illustrates the data-parallel programming model. Each of the code fragments below is written in a pseudo-code style.&lt;br /&gt;
&lt;br /&gt;
Suppose we want to perform the following task on an array &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt;: updating each element of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; by the product of itself and its index, and adding together the elements of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; into the variable &amp;lt;code&amp;gt;sum&amp;lt;/code&amp;gt;. The corresponding code is shown below.&lt;br /&gt;
&lt;br /&gt;
 // simple sequential task&lt;br /&gt;
 sum = 0;&lt;br /&gt;
 '''for''' (i = 0; i &amp;lt; a.length; i++)&lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    sum = sum + a[i];&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
When we orchestrate the task using the data-parallel programming model, the program can be divided into two parts. The first part performs the same operations on separate elements of the array for each processing element (sometimes referred to as PE or pe), and the second part reorganizes data among all processing elements (in our example, the data reorganization is summing up values across different processing elements). Since the data-parallel programming model only defines the overall effects of parallel steps, the second part can be accomplished through either shared memory or message passing. The code fragment below illustrates the first part of the program.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 // data parallel programming: let each PE perform the same task on different pieces of distributed data&lt;br /&gt;
 pe_id = getid();&lt;br /&gt;
 my_sum = 0;&lt;br /&gt;
 '''for''' (i = pe_id; i &amp;lt; a.length; i += number_of_pe)         //separate elements of the array are assigned to each PE &lt;br /&gt;
 {&lt;br /&gt;
    a[i] = a[i] * i;&lt;br /&gt;
    my_sum = my_sum + a[i];                               //all PEs accumulate elements assigned to them into local variable my_sum&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the above code, data parallelism is achieved by letting each processing element perform actions on separate elements of the array, which are identified using the PE's id. For instance, if three processing elements are used, then one processing element would start at i = 0, one would start at i = 1, and the last would start at i = 2. Since there are three processing elements, the array index for each increases by three on each iteration until the task is complete (note that in our example the elements assigned to each PE are interleaved rather than contiguous). If the length of the array is a multiple of three, then each processing element takes the same amount of time to execute its portion of the task.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The picture below illustrates how elements of the array are assigned among the different PEs for the specific case in which the length of the array is 7 and there are 3 PEs available. Elements in the array are marked by their indices (0 to 6). As shown in the picture, PE0 works on the elements with indices 0, 3, and 6; PE1 is in charge of the elements with indices 1 and 4; and the elements with indices 2 and 5 are assigned to PE2. In this way, the 3 PEs work collectively on the array, while each PE works on different elements. Thus, data parallelism is achieved.&lt;br /&gt;
&lt;br /&gt;
[[Image:506wiki1.png|frame|center|150px|Illustration of data parallel programming (adapted from [http://computing.llnl.gov/tutorials/parallel_comp/#ModelsData Introduction to Parallel Computing])]]&lt;br /&gt;
&lt;br /&gt;
== Comparison with Message Passing and Shared Memory ==&lt;br /&gt;
&lt;br /&gt;
Although the shared memory and message passing models may be combined into hybrid approaches, the two models are fundamentally different ways of addressing the same problem (of access control to common data). In contrast, the data parallel model is concerned with a fundamentally different problem (how to divide work into parallel tasks). As such, the data parallel model may be used in conjunction with either the shared memory or the message passing model without conflict. In fact, Klaiber (1994) compares the performance of a number of data parallel programs implemented with both shared memory and message passing models.&lt;br /&gt;
One of the major advantages of combining the data parallel and message passing models is a reduction in the amount and complexity of communication required relative to a task parallel approach. Similarly, combining the data parallel and shared memory models tends to simplify and reduce the amount of synchronization required. If the task parallel code given below were modified from a message passing model to a shared memory model, the two threads would require 8 signals to be sent between the threads (instead of 8 messages). In contrast, the data parallel code would require a single barrier before the local sums are added to compute the full sum.&lt;br /&gt;
Much as the shared memory model can benefit from specialized hardware, the data parallel programming model can as well. SIMD (single-instruction-multiple-data) processors are specifically designed to run data parallel algorithms. These processors perform a single instruction on many different data locations simultaneously. Modern examples include CUDA processors developed by nVidia and Cell processors developed by STI (Sony, Toshiba, and IBM). For the curious, example code for CUDA processors is provided in the Appendix. However, whereas the shared memory model can be a difficult and costly abstraction in the absence of hardware support, the data parallel model—like the message passing model—does not require hardware support.&lt;br /&gt;
Since data parallel code tends to simplify communication and synchronization, it may be easier to develop than a task parallel approach. However, data parallel code also requires writing code to split program data into chunks and assign it to different threads. In addition, a problem may not decompose easily into subproblems relying on largely independent chunks of data; in this case, it may be impractical or impossible to apply the data parallel model.&lt;br /&gt;
Once written, data parallel programs can scale easily to large numbers of processors. The data parallel model implicitly encourages data locality by having each thread work on a chunk of data. The regular data chunks also make it easier to reason about where to locate data and how to organize it.&lt;br /&gt;
&lt;br /&gt;
= Task Parallel Model =&lt;br /&gt;
Task parallelism is a form of parallelization in which multiple instruction streams are executed concurrently, either on the same data or on different data. It focuses on distributing the execution of processes (threads) across different parallel computing nodes. As part of the workflow, the execution threads communicate with one another to share data as they work.&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
Suppose the task to be accomplished is to compute the sum of the results of executing instructions 'A' and instructions 'B'. The following example illustrates how task parallelism can accomplish this.&lt;br /&gt;
&lt;br /&gt;
The pseudocode below illustrates task parallelism:&lt;br /&gt;
&amp;lt;pre&amp;gt;program:&lt;br /&gt;
do &lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
if CPU=&amp;quot;a&amp;quot; then&lt;br /&gt;
   do task &amp;quot;A&amp;quot;&lt;br /&gt;
else if CPU=&amp;quot;b&amp;quot; then&lt;br /&gt;
   do task &amp;quot;B&amp;quot;&lt;br /&gt;
end if&lt;br /&gt;
&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If we write the code as above and launch it on a 2-processor system, the runtime environment will execute it as follows.&lt;br /&gt;
In an SPMD system, both CPUs execute the same code. In a parallel environment, both have access to the same data. The &amp;quot;if&amp;quot; clause differentiates between the CPUs: CPU &amp;quot;a&amp;quot; evaluates the &amp;quot;if&amp;quot; as true and CPU &amp;quot;b&amp;quot; evaluates the &amp;quot;else if&amp;quot; as true, so each has its own task. The two CPUs then execute separate code blocks simultaneously, each performing a different task.&lt;br /&gt;
Code executed by CPU &amp;quot;a&amp;quot;:&lt;br /&gt;
program:&lt;br /&gt;
...&lt;br /&gt;
do task &amp;quot;A&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
Code executed by CPU &amp;quot;b&amp;quot;:&lt;br /&gt;
program:&lt;br /&gt;
...&lt;br /&gt;
do task &amp;quot;B&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
This concept can now be generalized to any number of processors.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
program:&lt;br /&gt;
...&lt;br /&gt;
if CPU=&amp;quot;a&amp;quot; then&lt;br /&gt;
   do task &amp;quot;A&amp;quot;&lt;br /&gt;
else if CPU=&amp;quot;b&amp;quot; then&lt;br /&gt;
   do task &amp;quot;B&amp;quot;&lt;br /&gt;
...&lt;br /&gt;
else if CPU=&amp;quot;n&amp;quot; then&lt;br /&gt;
   do task &amp;quot;N&amp;quot;&lt;br /&gt;
end if&lt;br /&gt;
...&lt;br /&gt;
end program&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
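The generalized SPMD pattern above can also be sketched in ordinary threaded code (a hypothetical illustration, not from the text; the task functions and the TASKS table are invented for the example). Every worker runs the same program, and a branch on its identifier selects which task it performs.&lt;br /&gt;

```python
# Hypothetical sketch: SPMD-style task parallelism, one task per "CPU".
import threading

results = {}

def task_a():
    return "A done"

def task_b():
    return "B done"

def task_n():
    return "N done"

# The dispatch table generalizes the if/else-if chain to any number of CPUs:
# CPU "a" runs task A, CPU "b" runs task B, ..., CPU "n" runs task N.
TASKS = {"a": task_a, "b": task_b, "n": task_n}

def program(cpu_id):
    # Every worker executes this same program; only the branch differs.
    results[cpu_id] = TASKS[cpu_id]()

workers = [threading.Thread(target=program, args=(cid,)) for cid in TASKS]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

Each thread here performs a unique task, in contrast to the data parallel model, where every thread would run the identical task on a different chunk of data.&lt;br /&gt;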
&lt;br /&gt;
= Data Parallel Model vs Task Parallel Model =&lt;br /&gt;
One important feature of the data parallel programming model, or data parallelism (SIMD: Single Instruction, Multiple Data), is its single control flow: there is only one control processor that directs the activities of all the processing elements. In stark contrast is task parallelism (MIMD: Multiple Instruction, Multiple Data): characterized by multiple control flows, it allows the concurrent execution of multiple instruction streams, each of which manipulates its own data and serves a separate function. Below is a contrast between the data parallelism and task parallelism models from Wikipedia: [http://en.wikipedia.org/wiki/SIMD SIMD] and [http://en.wikipedia.org/wiki/MIMD MIMD]. In the following subsections we continue to compare and contrast features of the data parallel and task parallel models to help the reader understand the unique characteristics of the data parallel programming model.&lt;br /&gt;
[[Image:Smid.png|frame|center|425px|contrast between data parallelism and task parallelism]]&lt;br /&gt;
&lt;br /&gt;
== Synchronous vs Asynchronous ==&lt;br /&gt;
While the [http://en.wikipedia.org/wiki/Lockstep_(computing) lockstep] imposed by data parallelism on all data streams ensures synchronous computation (all PEs perform their tasks at exactly the same pace), each processor in task parallelism performs its task at its own pace, which we call asynchronous computation. Thus, at certain points in a task parallel program's execution, communication and synchronization primitives are needed to allow different instruction streams to coordinate their efforts, and that is where variable sharing and message passing come into play.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Determinism vs. Non-Determinism ==&lt;br /&gt;
Data parallelism's synchronous nature and task parallelism's asynchrony give rise to another pair of features that distinguish the two models: determinism versus non-determinism. Data parallelism is deterministic, i.e., computing with the same input will always yield the same result, since its synchronism ensures that issues like relative timing between PEs will not arise. In contrast, task parallelism's asynchronous updates of common data can give rise to non-determinism, i.e., the same input will not always yield the same result (the result of a computation also depends on factors outside the program's control, such as the scheduling and timing of other PEs). Non-determinism makes it harder to write and maintain correct programs. This partially explains the advantage of the data parallel programming model over task parallelism in terms of development effort (also discussed in section 4.2).&lt;br /&gt;
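This non-determinism can be demonstrated with a small sketch (hypothetical, not from the text; the counter and function names are invented for the example). Two threads perform an unprotected read-modify-write on a shared counter, so updates may be lost depending on thread timing, while a lock serializes the updates and restores a deterministic result.&lt;br /&gt;

```python
# Hypothetical sketch: lost updates from asynchronous access to common data.
import threading
import time

N = 1000
counter = {"unsafe": 0, "safe": 0}
lock = threading.Lock()

def unsafe_inc():
    for _ in range(N):
        v = counter["unsafe"]      # read
        time.sleep(0)              # invite a context switch mid-update
        counter["unsafe"] = v + 1  # write: may overwrite another thread's update

def safe_inc():
    for _ in range(N):
        with lock:                 # the lock serializes each read-modify-write
            counter["safe"] += 1

threads = [threading.Thread(target=f)
           for f in (unsafe_inc, unsafe_inc, safe_inc, safe_inc)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter["safe"] is always 2*N; counter["unsafe"] is often less than 2*N,
# and its exact value can change from run to run.
```

The locked version is deterministic at the cost of synchronization; the unlocked version illustrates why the outcome of asynchronous task parallel code can depend on scheduling and timing.&lt;br /&gt;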
&lt;br /&gt;
&lt;br /&gt;
== Major differences between the data parallel and task parallel models ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; border=&amp;quot;1&amp;quot; align=&amp;quot;center&amp;quot;&lt;br /&gt;
|+ '''Comparison between data parallel and task parallel programming models.'''&lt;br /&gt;
|-&lt;br /&gt;
! Aspects&lt;br /&gt;
! Data Parallel&lt;br /&gt;
! Task Parallel&lt;br /&gt;
|-&lt;br /&gt;
| Decomposition&lt;br /&gt;
| Partition data into subsets&lt;br /&gt;
| Partition program into subtasks&lt;br /&gt;
|-&lt;br /&gt;
| Parallel tasks&lt;br /&gt;
| Identical&lt;br /&gt;
| Unique&lt;br /&gt;
|-&lt;br /&gt;
| Degree of parallelism&lt;br /&gt;
| Scales easily&lt;br /&gt;
| Fixed&lt;br /&gt;
|-&lt;br /&gt;
| Load balancing&lt;br /&gt;
| Easier&lt;br /&gt;
| Harder&lt;br /&gt;
|-&lt;br /&gt;
| Communication overhead&lt;br /&gt;
| Lower&lt;br /&gt;
| Higher&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Definitions =&lt;br /&gt;
&lt;br /&gt;
* ''Data parallel.''  A data parallel algorithm is composed of a set of identical tasks which operate on different subsets of common data.&lt;br /&gt;
* ''Task parallel.''  A task parallel algorithm is composed of a set of differing tasks which operate on common data.&lt;br /&gt;
* ''SIMD (single-instruction-multiple-data).''  A processor which executes a single instruction simultaneously on multiple data locations.&lt;br /&gt;
* ''MIMD (multiple-instruction-multiple-data).'' A processor architecture which can execute multiple instructions on multiple data elements simultaneously.&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
* David E. Culler, Jaswinder Pal Singh, and Anoop Gupta, [http://portal.acm.org/citation.cfm?id=550071 ''Parallel Computer Architecture: A Hardware/Software Approach,''] Morgan-Kauffman, 1999.&lt;br /&gt;
* Ian Foster, [http://www.mcs.anl.gov/~itf/dbpp/ ''Designing and Building Parallel Programs,''] Addison-Wesley, 1995.&lt;br /&gt;
* Magne Haveraaen, [http://portal.acm.org/citation.cfm?id=1239917 &amp;quot;Machine and collection abstractions for user-implemented data-parallel programming,&amp;quot;] ''Scientific Programming,'' 8(4):231-246, 2000.&lt;br /&gt;
* W. Daniel Hillis and Guy L. Steele, Jr., [http://portal.acm.org/citation.cfm?id=7903 &amp;quot;Data parallel algorithms,&amp;quot;] ''Communications of the ACM,'' 29(12):1170-1183, December 1986.&lt;br /&gt;
* Alexander C. Klaiber and Henry M. Levy, [http://portal.acm.org/citation.cfm?id=192020 &amp;quot;A comparison of message passing and shared memory architectures for data parallel programs,&amp;quot;] in ''Proceedings of the 21st Annual International Symposium on Computer Architecture,'' April 1994, pp. 94-105.&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* Philip J. Hatcher, Michael Jay Quinn, ''Data-Parallel Programming on MIMD Computers'', The MIT Press, 1991.&lt;br /&gt;
* Blaise Barney, &amp;quot;Introduction to Parallel Computing: Data Parallel Model&amp;quot;, Lawrence Livermore National Laboratory, [https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData https://computing.llnl.gov/tutorials/parallel_comp/#ModelsData], January 2009.&lt;br /&gt;
* Guy Blelloch, &amp;quot;Is Parallel Programming Hard?&amp;quot;, Carnegie Mellon University, [http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard http://www.cilk.com/multicore-blog/bid/9108/Is-Parallel-Programming-Hard], April 2009.&lt;br /&gt;
* Björn Lisper, ''Data parallelism and functional programming'', Lecture Notes in Computer Science, Volume 1132/1996, pp. 220-251, Springer Berlin, 1996.&lt;br /&gt;
* ''SIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SIMD http://en.wikipedia.org/wiki/SIMD].&lt;br /&gt;
* ''MIMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/MIMD http://en.wikipedia.org/wiki/MIMD].&lt;br /&gt;
* ''Lockstep'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/Lockstep_(computing) http://en.wikipedia.org/wiki/Lockstep_(computing)].&lt;br /&gt;
* ''SPMD'', Wikipedia, the free encyclopedia, [http://en.wikipedia.org/wiki/SPMD http://en.wikipedia.org/wiki/SPMD].&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43532</id>
		<title>CSC/ECE 506 Spring 2011/ch2 JR</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43532"/>
		<updated>2011-01-29T22:53:35Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Supplement to Chapter 2: The Data Parallel Programming Model=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= History =&lt;br /&gt;
As computer architectures have evolved, so have parallel programming models. The earliest advancements in parallel computers took advantage of bit-level parallelism.  These computers used vector processing, which required a shared memory programming model.  As performance returns from this architecture diminished, the emphasis was placed on instruction-level parallelism and the message passing model began to dominate.  Most recently, with the move to cluster-based machines, there has been an increased emphasis on thread-level parallelism. This has corresponded to an increased interest in the data parallel programming model.&lt;br /&gt;
&lt;br /&gt;
== Bit-level parallelism in the 1970's ==&lt;br /&gt;
The major performance improvements from computers during this time were due to the ability to execute 32-bit word size operations at one time ([[#References|Culler (1999), p. 15.]]).  The dominant supercomputers of the time, like the Cray and the ILLIAC IV, were mainly Single Instruction Multiple Data architectures and used a shared memory programming model.  They each used different forms of vector processing ([[#References|Culler (1999), p. 21.]]). &lt;br /&gt;
Development of the ILLIAC IV began in 1964 and wasn't finished until 1975 [http://en.wikipedia.org/wiki/ILLIAC_IV].  A central processor was connected to the main memory and delegated tasks to individual PEs, which each had their own memory cache [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].  Each PE could operate on either an 8-, 32-, or 64-bit operand at a given time [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].&lt;br /&gt;
&lt;br /&gt;
The Cray machine was installed at Los Alamos National Laboratory in 1976 by Cray Research and had performance similar to the ILLIAC IV [http://en.wikipedia.org/wiki/ILLIAC_IV].  The Cray machine relied heavily on the use of registers instead of individual processors like the ILLIAC IV.  Each processor was connected to main memory and had a number of 64-bit registers used to perform operations [http://www.eecg.toronto.edu/~moshovos/ACA05/read/cray1.pdf].&lt;br /&gt;
&lt;br /&gt;
== Move to instruction-level parallelism in the 1980's ==&lt;br /&gt;
&lt;br /&gt;
Increasing the word size above 32 bits offered diminishing returns in terms of performance ([[#References|Culler (1999), p. 15.]]). In the mid-1980's the emphasis changed from bit-level parallelism to instruction-level parallelism, which involved increasing the number of instructions that could be executed at one time ([[#References|Culler (1999), p. 15.]]).  The message passing model gave programmers the ability to divide up instructions in order to take advantage of this architecture. &lt;br /&gt;
&lt;br /&gt;
== Thread-level parallelism ==&lt;br /&gt;
The move to cluster-based machines in the past decade has added another layer of complexity to parallelism.  Since computers may be located across a network from each other, there is more emphasis on software acting as a bridge [http://cobweb.ecn.purdue.edu/~pplinux/ppcluster.html]. This has led to a greater emphasis on thread- or task-level parallelism [http://en.wikipedia.org/wiki/Thread-level_parallelism] and the addition of the data parallel programming model to existing message passing or shared memory models [http://en.wikipedia.org/wiki/Thread-level_parallelism].  &lt;br /&gt;
&lt;br /&gt;
= Data Parallel Model =&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
&lt;br /&gt;
== Comparison with Message Passing and Shared Memory ==&lt;br /&gt;
&lt;br /&gt;
= Task Parallel Model =&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
&lt;br /&gt;
= Data Parallel Model vs Task Parallel Model =&lt;br /&gt;
&lt;br /&gt;
= Definitions =&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
* David E. Culler, Jaswinder Pal Singh, and Anoop Gupta, [http://portal.acm.org/citation.cfm?id=550071 ''Parallel Computer Architecture: A Hardware/Software Approach,''] Morgan-Kauffman, 1999.&lt;br /&gt;
* Ian Foster, [http://www.mcs.anl.gov/~itf/dbpp/ ''Designing and Building Parallel Programs,''] Addison-Wesley, 1995.&lt;br /&gt;
* Magne Haveraaen, [http://portal.acm.org/citation.cfm?id=1239917 &amp;quot;Machine and collection abstractions for user-implemented data-parallel programming,&amp;quot;] ''Scientific Programming,'' 8(4):231-246, 2000.&lt;br /&gt;
* W. Daniel Hillis and Guy L. Steele, Jr., [http://portal.acm.org/citation.cfm?id=7903 &amp;quot;Data parallel algorithms,&amp;quot;] ''Communications of the ACM,'' 29(12):1170-1183, December 1986.&lt;br /&gt;
* Alexander C. Klaiber and Henry M. Levy, [http://portal.acm.org/citation.cfm?id=192020 &amp;quot;A comparison of message passing and shared memory architectures for data parallel programs,&amp;quot;] in ''Proceedings of the 21st Annual International Symposium on Computer Architecture,'' April 1994, pp. 94-105.&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43531</id>
		<title>CSC/ECE 506 Spring 2011/ch2 JR</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43531"/>
		<updated>2011-01-29T22:53:12Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Supplement to Chapter 2: The Data Parallel Programming Model=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= History =&lt;br /&gt;
As computer architectures have evolved, so have parallel programming models. The earliest advancements in parallel computers took advantage of bit-level parallelism.  These computers used vector processing, which required a shared memory programming model.  As performance returns from this architecture diminished, the emphasis was placed on instruction-level parallelism and the message passing model began to dominate.  Most recently, with the move to cluster-based machines, there has been an increased emphasis on thread-level parallelism. This has corresponded to an increased interest in the data parallel programming model.&lt;br /&gt;
&lt;br /&gt;
== Bit-level parallelism in the 1970's ==&lt;br /&gt;
The major performance improvements from computers during this time were due to the ability to execute 32-bit word size operations at one time ([[#References|Culler (1999), p. 15.]]).  The dominant supercomputers of the time, like the Cray and the ILLIAC IV, were mainly Single Instruction Multiple Data architectures and used a shared memory programming model.  They each used different forms of vector processing ([[#References|Culler (1999), p. 21.]]). &lt;br /&gt;
Development of the ILLIAC IV began in 1964 and wasn't finished until 1975 [http://en.wikipedia.org/wiki/ILLIAC_IV].  A central processor was connected to the main memory and delegated tasks to individual PEs, which each had their own memory cache [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].  Each PE could operate on either an 8-, 32-, or 64-bit operand at a given time [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].&lt;br /&gt;
&lt;br /&gt;
The Cray machine was installed at Los Alamos National Laboratory in 1976 by Cray Research and had performance similar to the ILLIAC IV [http://en.wikipedia.org/wiki/ILLIAC_IV].  The Cray machine relied heavily on the use of registers instead of individual processors like the ILLIAC IV.  Each processor was connected to main memory and had a number of 64-bit registers used to perform operations [CITE: http://www.eecg.toronto.edu/~moshovos/ACA05/read/cray1.pdf].&lt;br /&gt;
&lt;br /&gt;
== Move to instruction-level parallelism in the 1980's ==&lt;br /&gt;
&lt;br /&gt;
Increasing the word size above 32 bits offered diminishing returns in terms of performance ([[#References|Culler (1999), p. 15.]]). In the mid-1980's the emphasis changed from bit-level parallelism to instruction-level parallelism, which involved increasing the number of instructions that could be executed at one time ([[#References|Culler (1999), p. 15.]]).  The message passing model gave programmers the ability to divide up instructions in order to take advantage of this architecture. &lt;br /&gt;
&lt;br /&gt;
== Thread-level parallelism ==&lt;br /&gt;
The move to cluster-based machines in the past decade has added another layer of complexity to parallelism.  Since computers may be located across a network from each other, there is more emphasis on software acting as a bridge [http://cobweb.ecn.purdue.edu/~pplinux/ppcluster.html]. This has led to a greater emphasis on thread- or task-level parallelism [http://en.wikipedia.org/wiki/Thread-level_parallelism] and the addition of the data parallel programming model to existing message passing or shared memory models [http://en.wikipedia.org/wiki/Thread-level_parallelism].  &lt;br /&gt;
&lt;br /&gt;
= Data Parallel Model =&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
&lt;br /&gt;
== Comparison with Message Passing and Shared Memory ==&lt;br /&gt;
&lt;br /&gt;
= Task Parallel Model =&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
&lt;br /&gt;
= Data Parallel Model vs Task Parallel Model =&lt;br /&gt;
&lt;br /&gt;
= Definitions =&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
* David E. Culler, Jaswinder Pal Singh, and Anoop Gupta, [http://portal.acm.org/citation.cfm?id=550071 ''Parallel Computer Architecture: A Hardware/Software Approach,''] Morgan-Kauffman, 1999.&lt;br /&gt;
* Ian Foster, [http://www.mcs.anl.gov/~itf/dbpp/ ''Designing and Building Parallel Programs,''] Addison-Wesley, 1995.&lt;br /&gt;
* Magne Haveraaen, [http://portal.acm.org/citation.cfm?id=1239917 &amp;quot;Machine and collection abstractions for user-implemented data-parallel programming,&amp;quot;] ''Scientific Programming,'' 8(4):231-246, 2000.&lt;br /&gt;
* W. Daniel Hillis and Guy L. Steele, Jr., [http://portal.acm.org/citation.cfm?id=7903 &amp;quot;Data parallel algorithms,&amp;quot;] ''Communications of the ACM,'' 29(12):1170-1183, December 1986.&lt;br /&gt;
* Alexander C. Klaiber and Henry M. Levy, [http://portal.acm.org/citation.cfm?id=192020 &amp;quot;A comparison of message passing and shared memory architectures for data parallel programs,&amp;quot;] in ''Proceedings of the 21st Annual International Symposium on Computer Architecture,'' April 1994, pp. 94-105.&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43530</id>
		<title>CSC/ECE 506 Spring 2011/ch2 JR</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43530"/>
		<updated>2011-01-29T22:52:01Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Supplement to Chapter 2: The Data Parallel Programming Model=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= History =&lt;br /&gt;
As computer architectures have evolved, so have parallel programming models. The earliest advancements in parallel computers took advantage of bit-level parallelism.  These computers used vector processing, which required a shared memory programming model.  As performance returns from this architecture diminished, the emphasis was placed on instruction-level parallelism and the message passing model began to dominate.  Most recently, with the move to cluster-based machines, there has been an increased emphasis on thread-level parallelism. This has corresponded to an increased interest in the data parallel programming model.&lt;br /&gt;
&lt;br /&gt;
== Bit-level parallelism in the 1970's ==&lt;br /&gt;
The major performance improvements from computers during this time were due to the ability to execute 32-bit word size operations at one time ([[#References|Culler (1999), p. 15.]]).  The dominant supercomputers of the time, like the Cray and the ILLIAC IV, were mainly Single Instruction Multiple Data architectures and used a shared memory programming model.  They each used different forms of vector processing ([[#References|Culler (1999), p. 21.]]). &lt;br /&gt;
Development of the ILLIAC IV began in 1964 and wasn't finished until 1975 [http://en.wikipedia.org/wiki/ILLIAC_IV].  A central processor was connected to the main memory and delegated tasks to individual PEs, which each had their own memory cache [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].  Each PE could operate on either an 8-, 32-, or 64-bit operand at a given time [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].&lt;br /&gt;
&lt;br /&gt;
The Cray machine was installed at Los Alamos National Laboratory in 1976 by Cray Research and had performance similar to the ILLIAC IV [http://en.wikipedia.org/wiki/ILLIAC_IV].  The Cray machine relied heavily on the use of registers instead of individual processors like the ILLIAC IV.  Each processor was connected to main memory and had a number of 64-bit registers used to perform operations [CITE: http://www.eecg.toronto.edu/~moshovos/ACA05/read/cray1.pdf pg 65].&lt;br /&gt;
&lt;br /&gt;
== Move to instruction-level parallelism in the 1980's ==&lt;br /&gt;
&lt;br /&gt;
Increasing the word size above 32 bits offered diminishing returns in terms of performance [CITE: Parallel Computer Architecture: A Hardware/Software Approach (The Morgan Kaufmann Series in Computer Architecture and Design) by David Culler, J.P. Singh, and Anoop Gupta (Hardcover – Aug 15, 1998) pg 15]. In the mid-1980's the emphasis changed from bit-level parallelism to instruction-level parallelism, which involved increasing the number of instructions that could be executed at one time [CITE: Parallel Computer Architecture: A Hardware/Software Approach (The Morgan Kaufmann Series in Computer Architecture and Design) by David Culler, J.P. Singh, and Anoop Gupta (Hardcover – Aug 15, 1998) pg 15].  The message passing model gave programmers the ability to divide up instructions in order to take advantage of this architecture. &lt;br /&gt;
&lt;br /&gt;
== Thread-level parallelism ==&lt;br /&gt;
The move to cluster-based machines in the past decade has added another layer of complexity to parallelism.  Since computers may be located across a network from each other, there is more emphasis on software acting as a bridge [CITE: http://cobweb.ecn.purdue.edu/~pplinux/ppcluster.html]. This has led to a greater emphasis on thread- or task-level parallelism [CITE: http://en.wikipedia.org/wiki/Thread-level_parallelism] and the addition of the data parallel programming model to existing message passing or shared memory models [CITE: http://en.wikipedia.org/wiki/Thread-level_parallelism].  &lt;br /&gt;
&lt;br /&gt;
= Data Parallel Model =&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
&lt;br /&gt;
== Comparison with Message Passing and Shared Memory ==&lt;br /&gt;
&lt;br /&gt;
= Task Parallel Model =&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
&lt;br /&gt;
= Data Parallel Model vs Task Parallel Model =&lt;br /&gt;
&lt;br /&gt;
= Definitions =&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
* David E. Culler, Jaswinder Pal Singh, and Anoop Gupta, [http://portal.acm.org/citation.cfm?id=550071 ''Parallel Computer Architecture: A Hardware/Software Approach,''] Morgan-Kauffman, 1999.&lt;br /&gt;
* Ian Foster, [http://www.mcs.anl.gov/~itf/dbpp/ ''Designing and Building Parallel Programs,''] Addison-Wesley, 1995.&lt;br /&gt;
* Magne Haveraaen, [http://portal.acm.org/citation.cfm?id=1239917 &amp;quot;Machine and collection abstractions for user-implemented data-parallel programming,&amp;quot;] ''Scientific Programming,'' 8(4):231-246, 2000.&lt;br /&gt;
* W. Daniel Hillis and Guy L. Steele, Jr., [http://portal.acm.org/citation.cfm?id=7903 &amp;quot;Data parallel algorithms,&amp;quot;] ''Communications of the ACM,'' 29(12):1170-1183, December 1986.&lt;br /&gt;
* Alexander C. Klaiber and Henry M. Levy, [http://portal.acm.org/citation.cfm?id=192020 &amp;quot;A comparison of message passing and shared memory architectures for data parallel programs,&amp;quot;] in ''Proceedings of the 21st Annual International Symposium on Computer Architecture,'' April 1994, pp. 94-105.&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43529</id>
		<title>CSC/ECE 506 Spring 2011/ch2 JR</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43529"/>
		<updated>2011-01-29T22:51:15Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Supplement to Chapter 2: The Data Parallel Programming Model=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= History =&lt;br /&gt;
As computer architectures have evolved, so have parallel programming models. The earliest advancements in parallel computers took advantage of bit-level parallelism.  These computers used vector processing, which required a shared memory programming model.  As performance returns from this architecture diminished, the emphasis was placed on instruction-level parallelism and the message passing model began to dominate.  Most recently, with the move to cluster-based machines, there has been an increased emphasis on thread-level parallelism. This has corresponded to an increased interest in the data parallel programming model.&lt;br /&gt;
&lt;br /&gt;
== Bit-level parallelism in the 1970's ==&lt;br /&gt;
The major performance improvements from computers during this time were due to the ability to execute 32-bit word size operations at one time ([[#References|Culler (1999), p. 15.]]).  The dominant supercomputers of the time, like the Cray and the ILLIAC IV, were mainly Single Instruction Multiple Data architectures and used a shared memory programming model.  They each used different forms of vector processing ([[#References|Culler (1999), p. 21.]]). &lt;br /&gt;
Development of the ILLIAC IV began in 1964 and wasn't finished until 1975 [http://en.wikipedia.org/wiki/ILLIAC_IV].  A central processor was connected to the main memory and delegated tasks to individual PEs, which each had their own memory cache [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].  Each PE could operate on either an 8-, 32-, or 64-bit operand at a given time [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf].&lt;br /&gt;
&lt;br /&gt;
The Cray machine was installed at Los Alamos National Laboratory in 1976 by Cray Research and had performance similar to the ILLIAC IV [http://en.wikipedia.org/wiki/ILLIAC_IV].  The Cray machine relied heavily on the use of registers instead of individual processors like the ILLIAC IV.  Each processor was connected to main memory and had a number of 64-bit registers used to perform operations [CITE: http://www.eecg.toronto.edu/~moshovos/ACA05/read/cray1.pdf pg 65].&lt;br /&gt;
&lt;br /&gt;
== Move to instruction-level parallelism in the 1980's ==&lt;br /&gt;
&lt;br /&gt;
Increasing the word size above 32 bits offered diminishing returns in terms of performance [CITE: Parallel Computer Architecture: A Hardware/Software Approach (The Morgan Kaufmann Series in Computer Architecture and Design) by David Culler, J.P. Singh, and Anoop Gupta (Hardcover – Aug 15, 1998) pg 15]. In the mid-1980's the emphasis changed from bit-level parallelism to instruction-level parallelism, which involved increasing the number of instructions that could be executed at one time [CITE: Parallel Computer Architecture: A Hardware/Software Approach (The Morgan Kaufmann Series in Computer Architecture and Design) by David Culler, J.P. Singh, and Anoop Gupta (Hardcover – Aug 15, 1998) pg 15].  The message passing model gave programmers the ability to divide up instructions in order to take advantage of this architecture. &lt;br /&gt;
&lt;br /&gt;
== Thread-level parallelism ==&lt;br /&gt;
The move to cluster-based machines in the past decade has added another layer of complexity to parallelism.  Since computers may be located across a network from each other, there is more emphasis on software acting as a bridge [CITE: http://cobweb.ecn.purdue.edu/~pplinux/ppcluster.html]. This has led to a greater emphasis on thread- or task-level parallelism [CITE: http://en.wikipedia.org/wiki/Thread-level_parallelism] and the addition of the data parallel programming model to existing message passing or shared memory models [CITE: http://en.wikipedia.org/wiki/Thread-level_parallelism].  &lt;br /&gt;
&lt;br /&gt;
= Data Parallel Model =&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
&lt;br /&gt;
== Comparison with Message Passing and Shared Memory ==&lt;br /&gt;
&lt;br /&gt;
= Task Parallel Model =&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
&lt;br /&gt;
= Data Parallel Model vs Task Parallel Model =&lt;br /&gt;
&lt;br /&gt;
= Definitions =&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
* David E. Culler, Jaswinder Pal Singh, and Anoop Gupta, [http://portal.acm.org/citation.cfm?id=550071 ''Parallel Computer Architecture: A Hardware/Software Approach,''] Morgan Kaufmann, 1999.&lt;br /&gt;
* Ian Foster, [http://www.mcs.anl.gov/~itf/dbpp/ ''Designing and Building Parallel Programs,''] Addison-Wesley, 1995.&lt;br /&gt;
* Magne Haveraaen, [http://portal.acm.org/citation.cfm?id=1239917 &amp;quot;Machine and collection abstractions for user-implemented data-parallel programming,&amp;quot;] ''Scientific Programming,'' 8(4):231-246, 2000.&lt;br /&gt;
* W. Daniel Hillis and Guy L. Steele, Jr., [http://portal.acm.org/citation.cfm?id=7903 &amp;quot;Data parallel algorithms,&amp;quot;] ''Communications of the ACM,'' 29(12):1170-1183, December 1986.&lt;br /&gt;
* Alexander C. Klaiber and Henry M. Levy, [http://portal.acm.org/citation.cfm?id=192020 &amp;quot;A comparison of message passing and shared memory architectures for data parallel programs,&amp;quot;] in ''Proceedings of the 21st Annual International Symposium on Computer Architecture,'' April 1994, pp. 94-105.&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43528</id>
		<title>CSC/ECE 506 Spring 2011/ch2 JR</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43528"/>
		<updated>2011-01-29T22:48:55Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Supplement to Chapter 2: The Data Parallel Programming Model=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= History =&lt;br /&gt;
As computer architectures have evolved, so have parallel programming models. The earliest advancements in parallel computers took advantage of bit-level parallelism.  These computers used vector processing, which required a shared memory programming model.  As performance returns from this architecture diminished, the emphasis was placed on instruction-level parallelism and the message passing model began to dominate.  Most recently, with the move to cluster-based machines, there has been an increased emphasis on thread-level parallelism. This has corresponded to an increased interest in the data parallel programming model.&lt;br /&gt;
&lt;br /&gt;
== Bit-level parallelism in the 1970's ==&lt;br /&gt;
The major performance improvements from computers during this time were due to the ability to execute 32-bit word size operations at one time [[#References|Culler (1999), p. 15]].  The dominant supercomputers of the time, like the Cray and the ILLIAC IV, were mainly Single Instruction Multiple Data architectures and used a shared memory programming model.  They each used different forms of vector processing [Parallel Computer Architecture: A Hardware/Software Approach (The Morgan Kaufmann Series in Computer Architecture and Design) by David Culler, J.P. Singh, and Anoop Gupta (Hardcover – Aug 15, 1998) pg 21].&lt;br /&gt;
Development of the ILLIAC IV began in 1964 and wasn't finished until 1975 [CITE: http://en.wikipedia.org/wiki/ILLIAC_IV].  A central processor was connected to the main memory and delegated tasks to individual PEs, each of which had its own memory [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf pg 4].  Each PE could operate on an 8-, 32-, or 64-bit operand at a given time [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf pg 4].&lt;br /&gt;
&lt;br /&gt;
The Cray machine was installed at Los Alamos National Laboratory in 1976 by Cray Research and had performance similar to that of the ILLIAC IV [http://en.wikipedia.org/wiki/ILLIAC_IV].  The Cray machine relied heavily on registers rather than the individual processing elements used by the ILLIAC IV.  The processor was connected to main memory and had a number of 64-bit registers used to perform operations [CITE: http://www.eecg.toronto.edu/~moshovos/ACA05/read/cray1.pdf pg 65].&lt;br /&gt;
&lt;br /&gt;
== Move to instruction-level parallelism in the 1980's ==&lt;br /&gt;
&lt;br /&gt;
Increasing the word size above 32 bits offered diminishing returns in terms of performance [CITE: Parallel Computer Architecture: A Hardware/Software Approach (The Morgan Kaufmann Series in Computer Architecture and Design) by David Culler, J.P. Singh, and Anoop Gupta (Hardcover – Aug 15, 1998) pg 15]. In the mid-1980's the emphasis shifted from bit-level parallelism to instruction-level parallelism, which involved increasing the number of instructions that could be executed at one time [CITE: Parallel Computer Architecture: A Hardware/Software Approach (The Morgan Kaufmann Series in Computer Architecture and Design) by David Culler, J.P. Singh, and Anoop Gupta (Hardcover – Aug 15, 1998) pg 15].  The message passing model gave programmers a way to divide work into separate instruction streams in order to take advantage of this architecture.&lt;br /&gt;
&lt;br /&gt;
== Thread-level parallelism ==&lt;br /&gt;
The move to cluster-based machines in the past decade has added another layer of complexity to parallelism.  Since computers can be located across a network from one another, there is more emphasis on software acting as a bridge [CITE: http://cobweb.ecn.purdue.edu/~pplinux/ppcluster.html]. This has led to a greater emphasis on thread- or task-level parallelism [CITE: http://en.wikipedia.org/wiki/Thread-level_parallelism] and to the addition of the data parallel programming model to existing message passing or shared memory models [CITE: http://en.wikipedia.org/wiki/Thread-level_parallelism].&lt;br /&gt;
&lt;br /&gt;
= Data Parallel Model =&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
&lt;br /&gt;
== Comparison with Message Passing and Shared Memory ==&lt;br /&gt;
&lt;br /&gt;
= Task Parallel Model =&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
&lt;br /&gt;
= Data Parallel Model vs Task Parallel Model =&lt;br /&gt;
&lt;br /&gt;
= Definitions =&lt;br /&gt;
&lt;br /&gt;
= References =&lt;br /&gt;
* David E. Culler, Jaswinder Pal Singh, and Anoop Gupta, [http://portal.acm.org/citation.cfm?id=550071 ''Parallel Computer Architecture: A Hardware/Software Approach,''] Morgan Kaufmann, 1999.&lt;br /&gt;
* Ian Foster, [http://www.mcs.anl.gov/~itf/dbpp/ ''Designing and Building Parallel Programs,''] Addison-Wesley, 1995.&lt;br /&gt;
* Magne Haveraaen, [http://portal.acm.org/citation.cfm?id=1239917 &amp;quot;Machine and collection abstractions for user-implemented data-parallel programming,&amp;quot;] ''Scientific Programming,'' 8(4):231-246, 2000.&lt;br /&gt;
* W. Daniel Hillis and Guy L. Steele, Jr., [http://portal.acm.org/citation.cfm?id=7903 &amp;quot;Data parallel algorithms,&amp;quot;] ''Communications of the ACM,'' 29(12):1170-1183, December 1986.&lt;br /&gt;
* Alexander C. Klaiber and Henry M. Levy, [http://portal.acm.org/citation.cfm?id=192020 &amp;quot;A comparison of message passing and shared memory architectures for data parallel programs,&amp;quot;] in ''Proceedings of the 21st Annual International Symposium on Computer Architecture,'' April 1994, pp. 94-105.&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43527</id>
		<title>CSC/ECE 506 Spring 2011/ch2 JR</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43527"/>
		<updated>2011-01-29T22:41:24Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Supplement to Chapter 2: The Data Parallel Programming Model=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= History =&lt;br /&gt;
As computer architectures have evolved, so have parallel programming models. The earliest advancements in parallel computers took advantage of bit-level parallelism.  These computers used vector processing, which required a shared memory programming model.  As performance returns from this architecture diminished, the emphasis was placed on instruction-level parallelism and the message passing model began to dominate.  Most recently, with the move to cluster-based machines, there has been an increased emphasis on thread-level parallelism. This has corresponded to an increased interest in the data parallel programming model.&lt;br /&gt;
&lt;br /&gt;
== Bit-level parallelism in the 1970's ==&lt;br /&gt;
The major performance improvements from computers during this time were due to the ability to execute 32-bit word size operations at one time [Parallel Computer Architecture: A Hardware/Software Approach (The Morgan Kaufmann Series in Computer Architecture and Design) by David Culler, J.P. Singh, and Anoop Gupta, pg 15].  The dominant supercomputers of the time, like the Cray and the ILLIAC IV, were mainly Single Instruction Multiple Data architectures and used a shared memory programming model.  They each used different forms of vector processing [Parallel Computer Architecture: A Hardware/Software Approach (The Morgan Kaufmann Series in Computer Architecture and Design) by David Culler, J.P. Singh, and Anoop Gupta (Hardcover – Aug 15, 1998) pg 21].&lt;br /&gt;
Development of the ILLIAC IV began in 1964 and wasn't finished until 1975 [CITE: http://en.wikipedia.org/wiki/ILLIAC_IV].  A central processor was connected to the main memory and delegated tasks to individual PEs, each of which had its own memory [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf pg 4].  Each PE could operate on an 8-, 32-, or 64-bit operand at a given time [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf pg 4].&lt;br /&gt;
&lt;br /&gt;
The Cray machine was installed at Los Alamos National Laboratory in 1976 by Cray Research and had performance similar to that of the ILLIAC IV [http://en.wikipedia.org/wiki/ILLIAC_IV].  The Cray machine relied heavily on registers rather than the individual processing elements used by the ILLIAC IV.  The processor was connected to main memory and had a number of 64-bit registers used to perform operations [CITE: http://www.eecg.toronto.edu/~moshovos/ACA05/read/cray1.pdf pg 65].&lt;br /&gt;
&lt;br /&gt;
== Move to instruction-level parallelism in the 1980's ==&lt;br /&gt;
&lt;br /&gt;
Increasing the word size above 32 bits offered diminishing returns in terms of performance [CITE: Parallel Computer Architecture: A Hardware/Software Approach (The Morgan Kaufmann Series in Computer Architecture and Design) by David Culler, J.P. Singh, and Anoop Gupta (Hardcover – Aug 15, 1998) pg 15]. In the mid-1980's the emphasis shifted from bit-level parallelism to instruction-level parallelism, which involved increasing the number of instructions that could be executed at one time [CITE: Parallel Computer Architecture: A Hardware/Software Approach (The Morgan Kaufmann Series in Computer Architecture and Design) by David Culler, J.P. Singh, and Anoop Gupta (Hardcover – Aug 15, 1998) pg 15].  The message passing model gave programmers a way to divide work into separate instruction streams in order to take advantage of this architecture.&lt;br /&gt;
&lt;br /&gt;
== Thread-level parallelism ==&lt;br /&gt;
The move to cluster-based machines in the past decade has added another layer of complexity to parallelism.  Since computers can be located across a network from one another, there is more emphasis on software acting as a bridge [CITE: http://cobweb.ecn.purdue.edu/~pplinux/ppcluster.html]. This has led to a greater emphasis on thread- or task-level parallelism [CITE: http://en.wikipedia.org/wiki/Thread-level_parallelism] and to the addition of the data parallel programming model to existing message passing or shared memory models [CITE: http://en.wikipedia.org/wiki/Thread-level_parallelism].&lt;br /&gt;
&lt;br /&gt;
= Data Parallel Model =&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
&lt;br /&gt;
== Comparison with Message Passing and Shared Memory ==&lt;br /&gt;
&lt;br /&gt;
= Task Parallel Model =&lt;br /&gt;
&lt;br /&gt;
== Description and Example ==&lt;br /&gt;
&lt;br /&gt;
= Data Parallel Model vs Task Parallel Model =&lt;br /&gt;
&lt;br /&gt;
= Definitions =&lt;br /&gt;
&lt;br /&gt;
= References =&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43526</id>
		<title>CSC/ECE 506 Spring 2011/ch2 JR</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43526"/>
		<updated>2011-01-29T22:39:02Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
As computer architectures have evolved, so have parallel programming models. The earliest advancements in parallel computers took advantage of bit-level parallelism.  These computers used vector processing, which required a shared memory programming model.  As performance returns from this architecture diminished, the emphasis was placed on instruction-level parallelism and the message passing model began to dominate.  Most recently, with the move to cluster-based machines, there has been an increased emphasis on thread-level parallelism. This has corresponded to an increased interest in the data parallel programming model.&lt;br /&gt;
&lt;br /&gt;
=== Bit-level parallelism in the 1970's ===&lt;br /&gt;
The major performance improvements from computers during this time were due to the ability to execute 32-bit word size operations at one time [Parallel Computer Architecture: A Hardware/Software Approach (The Morgan Kaufmann Series in Computer Architecture and Design) by David Culler, J.P. Singh, and Anoop Gupta, pg 15].  The dominant supercomputers of the time, like the Cray and the ILLIAC IV, were mainly Single Instruction Multiple Data architectures and used a shared memory programming model.  They each used different forms of vector processing [Parallel Computer Architecture: A Hardware/Software Approach (The Morgan Kaufmann Series in Computer Architecture and Design) by David Culler, J.P. Singh, and Anoop Gupta (Hardcover – Aug 15, 1998) pg 21].&lt;br /&gt;
Development of the ILLIAC IV began in 1964 and wasn't finished until 1975 [CITE: http://en.wikipedia.org/wiki/ILLIAC_IV].  A central processor was connected to the main memory and delegated tasks to individual PEs, each of which had its own memory [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf pg 4].  Each PE could operate on an 8-, 32-, or 64-bit operand at a given time [http://archive.computerhistory.org/resources/text/Burroughs/Burroughs.ILLIAC%20IV.1974.102624911.pdf pg 4].&lt;br /&gt;
&lt;br /&gt;
The Cray machine was installed at Los Alamos National Laboratory in 1976 by Cray Research and had performance similar to that of the ILLIAC IV [http://en.wikipedia.org/wiki/ILLIAC_IV].  The Cray machine relied heavily on registers rather than the individual processing elements used by the ILLIAC IV.  The processor was connected to main memory and had a number of 64-bit registers used to perform operations [CITE: http://www.eecg.toronto.edu/~moshovos/ACA05/read/cray1.pdf pg 65].&lt;br /&gt;
&lt;br /&gt;
=== Move to instruction-level parallelism in the 1980's ===&lt;br /&gt;
&lt;br /&gt;
Increasing the word size above 32 bits offered diminishing returns in terms of performance [CITE: Parallel Computer Architecture: A Hardware/Software Approach (The Morgan Kaufmann Series in Computer Architecture and Design) by David Culler, J.P. Singh, and Anoop Gupta (Hardcover – Aug 15, 1998) pg 15]. In the mid-1980's the emphasis shifted from bit-level parallelism to instruction-level parallelism, which involved increasing the number of instructions that could be executed at one time [CITE: Parallel Computer Architecture: A Hardware/Software Approach (The Morgan Kaufmann Series in Computer Architecture and Design) by David Culler, J.P. Singh, and Anoop Gupta (Hardcover – Aug 15, 1998) pg 15].  The message passing model gave programmers a way to divide work into separate instruction streams in order to take advantage of this architecture.&lt;br /&gt;
&lt;br /&gt;
=== Thread-level parallelism ===&lt;br /&gt;
The move to cluster-based machines in the past decade has added another layer of complexity to parallelism.  Since computers can be located across a network from one another, there is more emphasis on software acting as a bridge [CITE: http://cobweb.ecn.purdue.edu/~pplinux/ppcluster.html]. This has led to a greater emphasis on thread- or task-level parallelism [CITE: http://en.wikipedia.org/wiki/Thread-level_parallelism] and to the addition of the data parallel programming model to existing message passing or shared memory models [CITE: http://en.wikipedia.org/wiki/Thread-level_parallelism].&lt;br /&gt;
&lt;br /&gt;
== Data Parallel Model ==&lt;br /&gt;
&lt;br /&gt;
=== Description and Example ===&lt;br /&gt;
&lt;br /&gt;
=== Comparison with Message Passing and Shared Memory ===&lt;br /&gt;
&lt;br /&gt;
== Task Parallel Model ==&lt;br /&gt;
&lt;br /&gt;
=== Description and Example ===&lt;br /&gt;
&lt;br /&gt;
== Data Parallel Model vs Task Parallel Model ==&lt;br /&gt;
&lt;br /&gt;
== Definitions ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43525</id>
		<title>CSC/ECE 506 Spring 2011/ch2 JR</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch2_JR&amp;diff=43525"/>
		<updated>2011-01-28T17:08:22Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
&lt;br /&gt;
== Data Parallel Model ==&lt;br /&gt;
&lt;br /&gt;
=== Description and Example ===&lt;br /&gt;
&lt;br /&gt;
=== Comparison with Message Passing and Shared Memory ===&lt;br /&gt;
&lt;br /&gt;
== Task Parallel Model ==&lt;br /&gt;
&lt;br /&gt;
=== Description and Example ===&lt;br /&gt;
&lt;br /&gt;
== Data Parallel Model vs Task Parallel Model ==&lt;br /&gt;
&lt;br /&gt;
== Definitions ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38923</id>
		<title>CSC/ECE 517 Fall 2010/ch4 4a RJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38923"/>
		<updated>2010-10-20T17:34:36Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font size=5&amp;gt;Use Cases&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases can be defined in many ways. Very simply put, a use case is a reason to use a system. For example, a student borrowing a book from a library would be a use case of the library system, as would a bank cardholder using an ATM to get cash out of their account. More formally, “a use case is a collection of possible sequences of interactions between the system under discussion and its Users (or Actors), relating to a particular goal” [http://alistair.cockburn.us/Use+case+fundamentals].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A use case is initiated by a user with a particular goal in mind, and completes successfully when that goal is satisfied. The system is treated as a &amp;quot;black box&amp;quot;, and the interactions with the system, including system responses, are as perceived from outside the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;quot;Use cases capture who (actor) does what (interaction) with the system, for what purpose (goal), without dealing with system internals. A complete set of use cases specifies all the different ways to use the system, and therefore defines all behavior required of the system, bounding the scope of the system.&amp;quot;[http://www.bredemeyer.com/use_cases.htm]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Use Case Basics ==&lt;br /&gt;
&lt;br /&gt;
===Terms used with Use cases===&lt;br /&gt;
Now let us define some terms used with use cases: [http://en.wikipedia.org/wiki/Use_case]&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Actor:&amp;lt;/b&amp;gt; An actor is a type of user that interacts with the system (e.g., a student borrowing a book or a cardholder using an ATM). Actors are external entities (people or other systems) who interact with the system to achieve a desired goal. The goal must be of value to the actor.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Goal:&amp;lt;/b&amp;gt; Without a goal a use case is useless. There is no need for a use case when there is no need for any actor to achieve a goal. A goal briefly describes what the user intends to achieve with this use case. For example, the goal of a student using the library is to obtain the book. There is no point in having a use case like “the student enters the library” as that in itself has no value to the actor.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Stakeholder:&amp;lt;/b&amp;gt; A stakeholder is an individual or department that is affected by the outcome of the use case. Individuals are usually agents of the organization or department for which the use case is being created. A stakeholder might be called on to provide input, feedback, or authorization for the use case. The stakeholder section of the use case can include a brief description of which of these functions the stakeholder is assigned to fulfill.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Trigger:&amp;lt;/b&amp;gt; A trigger describes the event that causes the use case to be initiated. This event can be external or internal. If the trigger is not a simple true &amp;quot;event&amp;quot; (e.g., the customer presses a button), but instead &amp;quot;when a set of conditions are met&amp;quot;, there will need to be a triggering process that continually (or periodically) runs to test whether the &amp;quot;trigger conditions&amp;quot; are met: the &amp;quot;triggering event&amp;quot; is a signal from the trigger process that the conditions are now met. &lt;br /&gt;
In our example with the student, a trigger would be the need for the book due to an approaching exam or test which causes the student to go to the library to borrow a book.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Precondition:&amp;lt;/b&amp;gt; A precondition defines all the conditions that must be true (i.e., describes the state of the system) for the trigger to meaningfully cause the initiation of the use case. That is, if the system is not in the state described in the preconditions, the behavior of the use case is indeterminate. For example, the student should be a member of the library and have the required identity to borrow a book. If the student is not a member of the library, there is no point in the student trying to borrow a book from that library.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Scenarios:&amp;lt;/b&amp;gt; A scenario usually specifies when the use case starts and ends. It describes the interaction with actors and shows the flow of events between a user and the system. For example, when a student tries to borrow a particular book from the library, it doesn’t always turn out the same way. Sometimes the book is available, sometimes it is already borrowed by someone else, and sometimes the library may not have the book at all. These are all examples of use case scenarios. The outcome in each case is different depending on circumstances, but they all relate to the same goal; that is, they are all triggered by the same need (in this case, the need for the book), and all the scenarios have the same starting point.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Simple Example===&lt;br /&gt;
&amp;lt;p&amp;gt;Now that we know something about use cases, let us go ahead and describe a simple use case:&amp;lt;/p&amp;gt;&lt;br /&gt;
 Use Case 1: Request book from the library (automated system).&lt;br /&gt;
 &lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
   To borrow a particular book from the library&lt;br /&gt;
 &lt;br /&gt;
 1.2: Actors&lt;br /&gt;
   Student&lt;br /&gt;
 &lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
   Student should be a member of the library and have an id.&lt;br /&gt;
 &lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
   System requests the student's ID and checks whether he/she is a member&lt;br /&gt;
   Student selects “request book”&lt;br /&gt;
   Student enters name(s) of the book(s)&lt;br /&gt;
   System checks for availability of books and displays results accordingly&lt;br /&gt;
   Student confirms the order&lt;br /&gt;
   System displays details of where the requested books are stacked&lt;br /&gt;
 &lt;br /&gt;
 1.5: Alternate Path&lt;br /&gt;
   System does not recognize the ID or the student is not a member. System will not allow any books to be checked out.&lt;br /&gt;
   All books requested are already checked out. Displays this information to student and closes request.&lt;br /&gt;
&lt;br /&gt;
===Important Characteristics of Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;The above description shows a very simple use case. However, there are a few essential characteristics to be noticed about the use case:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have identified the key components of a use case, that is, the goal, actors, preconditions, and key scenarios/flow. It is essential that we identify these components before writing a use case.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have not gone into any sort of technical details about implementation or user interface design. Use cases only represent a very high level design. We are only trying to understand the flow and uses of the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have recorded a set of paths (scenarios) that traverse an actor from a trigger event (start of the use case) to the goal (success scenarios).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have recorded a set of scenarios that traverse an actor from a trigger event toward a goal but fall short of the goal (failure scenarios).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Where can we use ‘use cases’?===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases are usually used to capture the requirements of an interaction based system. When there is a lot of interaction between actors and the system, it makes sense to capture as many interactions and scenarios as possible before starting development of the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases help to eliminate rework due to requirements misunderstandings between developers and stakeholders by aiming to reach a point where there are no surprises for the users. Use cases help to build an explicit shared understanding that everyone can take away with them: the users, developers, testers, technical authors, and others.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases have received some interest as a starting point for test design. By analyzing use cases for the system, we can know various interactions between the system and actors which will help in drawing out test plans.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
===Where can’t we use ‘use cases’?===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://en.wikipedia.org/wiki/Use_case]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use case flows are not well suited to easily capturing non-interaction based requirements of a system (such as algorithm or mathematical requirements) or non-functional requirements (such as platform, performance, timing, or safety-critical aspects). These are better specified declaratively elsewhere.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Some systems are better described in an information/data-driven approach than in a functionality-driven approach of use cases. A good example of this kind of system is data-mining systems used for Business Intelligence. If you were to describe this kind of system in a use case model, it would be quite small and uninteresting (there are not many different functions here) but the set of data that the system handles may nevertheless be large and rich in details.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Common mistakes while writing Use cases: ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The system boundary is undefined or inconsistent.  A system boundary is a boundary that separates the internal components of a system from external entities.  If we are not able to identify the system boundaries, we will not be able to clearly define the actors, scenarios and other essential factors involved in writing a good and useful use case. For example, the system described in the library example has a clear boundary. It accepts book names as input, checks the ID, and provides a location for the books. We know its role very clearly. If the system were instead used to manage everything, such as security and employee details, we would not be able to identify the goal, actors, and scenarios very clearly.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases should not be used to capture all the details of a system. The granularity to which you define use cases in a diagram should be enough to keep the use case diagram uncluttered and readable, yet, be complete without missing significant aspects of the required functionality.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The use cases are written from the system’s (not the actors’) point of view. Use cases written from a system point of view tend to draw the writer into technical details. If we wrote the example use case above from the system’s point of view, we would have statements like “obtain location of book from database and display location of books to user”. This is more detail than necessary, and it would not capture the interaction with the actor very clearly. Use cases also give a brief insight into how the UI should look, but when written from the system’s point of view these details might not be captured clearly.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Writing Use Cases ==&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Cases for a system is a process undertaken between those designing the system and the Stakeholders.  Use Cases can take many different forms, depending on the type of development process being used [http://www.answers.com/topic/use-case?cat=technology].  The format that a Use Case takes is not as important as the process that it goes through [http://www.gatherspace.com/static/use_case_example.html].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Iterative Process ===&lt;br /&gt;
&amp;lt;p&amp;gt;Normally, this process is done iteratively, so that the iterations build upon each other [http://www.gatherspace.com/static/use_case_example.html].  Below is an example of three iterations of the Use Case writing process to illustrate how it can reveal things about a system and lay out the functional requirements.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;For this example, we will be creating Use Cases to solve the following problem statement:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Develop a system to allow users to schedule meetings with each other.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this system, the stakeholders will be the users of the system.  The Users will also be the only Actors in the system. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 1 ====&lt;br /&gt;
&amp;lt;p&amp;gt;For the first iteration, we will write out short sentences to describe the functionality that the system will have.  Some use cases could be:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Request a Meeting&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Approve a meeting request&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Suggest a new time&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;View a User's schedule&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this example, the fourth Use Case may not have been an obvious requirement that could be derived from the original problem statement, but in the process of creating the Use Cases, it was discovered that it would be a good requirement for the system to have.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 2 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The next iteration would involve writing out longer descriptions for each. This could be done in paragraph form, or by writing a list.  Below are the Use Cases in paragraph form:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 '''Use Case 1 – Request a meeting'''&lt;br /&gt;
 User A chooses the date and time for a meeting with User B.  User A chooses User B as the recipient of the meeting request.&lt;br /&gt;
 User A sends the meeting request to User B through the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 2 – Approve a Meeting Request'''&lt;br /&gt;
 User A receives a meeting request from User B and accepts the user request.  A notification that User A has accepted the meeting is sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 3 – Suggest a different meeting time'''&lt;br /&gt;
 User A receives a meeting request from User B and suggests a different time and/or date for the meeting.  The response is sent to User B through &lt;br /&gt;
 the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 4 – View a User's schedule'''&lt;br /&gt;
 User A would like to schedule a meeting with User B.  User A starts the system and opens up User B's schedule.  &lt;br /&gt;
 User A can see when User B has already scheduled  meetings and User A can then use that information to send User B a meeting request.&lt;br /&gt;
 User A can also access his or her own schedule to view already scheduled meetings.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;From writing out the Use Cases with more description, it should become clear that Use Case 2 and Use Case 3 are very similar.  In both Use Cases, User A responds to a meeting request and a response/notification is sent to User B.  This might lead the designers of the system to combine these two Use Cases into one Use Case for &amp;quot;Respond to Request.&amp;quot;&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 3 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In the next iteration, we're going to use a Use Case template.  There are many different Use Case templates, which include different information [SOURCES].  A template can be created that is unique for the system being described, provided each Use Case uses the same template. The template we will use will include:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case &amp;lt;Number&amp;gt;: Title&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.1: &amp;lt;Summary/Goal&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.2: &amp;lt;Actors&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.3: &amp;lt;Preconditions&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.4: &amp;lt;Main Path&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.5: &amp;lt;Alternate Paths – including sub-flows [S] and error-flows [E]&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Case 1 in this format yields:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case 1: Request a meeting&amp;lt;br&amp;gt;&lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
 User A can choose the date and time for a meeting with User B [Main].  User A can choose User B as the recipient of the meeting &lt;br /&gt;
 request [Main], or multiple Users as the recipient [S.2].  User A sends the meeting request to User B through the system [Main]. &lt;br /&gt;
 Before User A sends the meeting request, User A can opt not to send the request and delete it [S.1]. If User B is no longer in &lt;br /&gt;
 the system, User A receives notification that the meeting request cannot be sent [E.1].&amp;lt;br&amp;gt;&lt;br /&gt;
 1.2: Actors&lt;br /&gt;
 Users&amp;lt;br&amp;gt;&lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
 - User A and User B are recorded as Users in the system&lt;br /&gt;
 - User A has logged into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
 1) User A chooses a date for a meeting&lt;br /&gt;
 2) User A chooses a time for a meeting&lt;br /&gt;
 3) User A chooses User B as the recipient for the meeting request&lt;br /&gt;
 4) User A submits the meeting request&lt;br /&gt;
 5) User B receives the meeting request the next time User B logs into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.5: Alternative Path&lt;br /&gt;
 S.1 &lt;br /&gt;
 User A creates a meeting request, but does not submit it.  A meeting request is not sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 S.2&lt;br /&gt;
 User A creates a meeting request for more than one User.  User A submits the meeting request and each User receives it the next&lt;br /&gt;
 time they log in&amp;lt;br&amp;gt;&lt;br /&gt;
 E.1&lt;br /&gt;
 User B is no longer a User in the system.  User A receives notification that the meeting request cannot be sent.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
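&amp;lt;p&amp;gt;To make the template concrete, it can be sketched as a small data structure. This is only an illustrative Python sketch; the field names mirror the template sections above and are not prescribed by any standard.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
```python
# Illustrative only: one hypothetical way to keep use cases machine-readable.
# Field names mirror the sections of the template above.
from dataclasses import dataclass, field

@dataclass
class UseCase:
    number: int
    title: str
    summary: str
    actors: list
    preconditions: list
    main_path: list                                      # ordered steps
    alternate_paths: dict = field(default_factory=dict)  # e.g. {"S.1": "..."}

uc1 = UseCase(
    number=1,
    title="Request a meeting",
    summary="User A sends a meeting request to User B through the system.",
    actors=["User"],
    preconditions=["User A and User B are recorded as Users in the system",
                   "User A has logged into the system"],
    main_path=["Choose a date", "Choose a time",
               "Choose User B as the recipient", "Submit the request",
               "User B receives the request at next login"],
    alternate_paths={"S.1": "Request created but not submitted",
                     "S.2": "Request sent to multiple Users",
                     "E.1": "User B no longer in system; sender notified"},
)
print(uc1.title, "-", len(uc1.main_path), "main-path steps")
```
&lt;br /&gt;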
==== Results ====&lt;br /&gt;
&amp;lt;p&amp;gt;There are a few interesting things revealed about the system by re-writing the Use Case in this format.  The most important is likely that Users will need to &amp;quot;log in&amp;quot; to the system.  This implies that there could be another Actor in the system, namely an Admin.  This leads to these additional Use Cases:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;5&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Log into the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;6&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Create a User in the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Admin&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Each of these new Use Cases would then go through the iterations listed above until they are in the template form. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see from the example, each iteration refines the Use Case and helps to clarify the requirements of the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Use Case Diagrams ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use Case Diagrams are useful for showing how each component in a system will interact with the other components [http://agile.csc.ncsu.edu/SEMaterials/UseCaseRequirements.pdf].  Unlike written Use Cases, they are not good at showing the flow of events in a system [http://www.agilemodeling.com/artifacts/useCaseDiagram.htm].  Also, unlike written Use Cases, Use Case Diagrams follow a standard: [http://en.wikipedia.org/wiki/Unified_Modeling_Language UML], a standardized modeling language for software development.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Components of a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;quot;UCDs have only 4 major elements: The actors that the system you are describing interacts with, the system itself, the use cases, or services, that the system knows how to perform, and the lines that represent relationships between these elements.&amp;quot;[http://www.andrew.cmu.edu/course/90-754/umlucdfaq.html#uses]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Actors''' in Use Case Diagrams are represented by stick figures:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Actor.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Use Cases''' are represented by ovals:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UseCase.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Relationships''' are represented by solid lines.  Sometimes arrowheads are added to the lines to indicate the direction of the invocation, or to show which actor is the primary actor [http://www.agilemodeling.com/artifacts/useCaseDiagram.htm].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Lines.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Two special relationships that can be shown in a Use Case Diagram are the ''Extends'' and ''Includes'' relationships.  These relationships are usually shown with a dotted line with an arrowhead and &amp;lt;&amp;lt;extend&amp;gt;&amp;gt; or &amp;lt;&amp;lt;include&amp;gt;&amp;gt; written near the line.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Extends'' relationship is used to show when Use Case X is a special case of Use Case Y [SOURCE].  In this situation, the dotted line is drawn from Use Case X to Use Case Y with the arrowhead pointing to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Includes/Uses'' relationship is used to show that every time Use Case X is done, Use Case Y must also be done [SOURCE].  In this case, the arrow points to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
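&lt;br /&gt;
&amp;lt;p&amp;gt;The two relationships can be illustrated with a small sketch. This is a hypothetical Python analogy, not UML: the included use case always runs as part of the base use case, while the extending behavior runs only under a condition. The function and step names are illustrative only.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
```python
# A hypothetical analogy for the two special use case relationships.
def view_schedule():
    return ["open User B's schedule"]

def request_meeting():
    # Includes/Uses: request_meeting always performs view_schedule first.
    return view_schedule() + ["choose date and time", "send request"]

def respond_to_request(suggest_new_time=False):
    steps = ["open request"]
    if suggest_new_time:
        # Extends: "suggest new time" augments the basic response
        # only in this special case.
        steps.append("propose a different time")
    steps.append("send response")
    return steps

print(request_meeting())
print(respond_to_request(suggest_new_time=True))
```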
&lt;br /&gt;
=== Creating a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A Use Case Diagram for the same system described in the Writing Use Cases section might look something like this:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UCDiagram.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see, &amp;quot;Accept Meeting&amp;quot; and &amp;quot;Suggest new time&amp;quot; are special cases of &amp;quot;Respond to request&amp;quot;.  Also, from this diagram, the system designers are saying that in order to &amp;quot;Request a Meeting&amp;quot; the user must &amp;quot;View Schedule&amp;quot;.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Advanced Topics ==&lt;br /&gt;
&lt;br /&gt;
===Essential Use Cases vs. System Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://en.wikipedia.org/wiki/Use_case]&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases may be described at the abstract level (business use case, sometimes called essential use case), or at the system level (system use case). The difference between these is the scope.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A &amp;lt;b&amp;gt;business use case&amp;lt;/b&amp;gt; is described in technology-free terminology: it treats the system as a black box and describes the business process that its business actors (people or systems external to the process) use to achieve their goals (e.g., manual payment processing, expense report approval, managing corporate real estate). The business use case describes a process that provides value to the business actor, and it describes what the process does. Business Process Mapping is another method for this level of business description. A significant advantage of essential use cases is that they enable you to stand back and ask fundamental questions like &amp;quot;what's really going on&amp;quot; and &amp;quot;what do we really need to do&amp;quot; without letting implementation decisions get in the way.  These questions often lead to critical realizations that allow you to rethink, or reengineer, aspects of the overall business process.&lt;br /&gt;
A very good example of an essential use case can be seen at this link: http://www.agilemodeling.com/artifacts/essentialUseCase.htm &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A &amp;lt;b&amp;gt;system use case&amp;lt;/b&amp;gt; describes a system that automates a business use case or process. It is normally described at the system functionality level (for example, &amp;quot;create voucher&amp;quot;) and specifies the function or service that the system provides for the actor. The system use case details what the system will do in response to an actor's actions. For this reason, it is recommended that a system use case specification begin with a verb (e.g., create voucher, select payments, exclude payment, cancel voucher). An actor can be a human user or another system/subsystem interacting with the system being defined.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Pluggable Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://alistair.cockburn.us/Pluggable+use+cases]&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases describe the scenarios and sequences of operations between actors and a system that achieve a given goal. As we have seen, use cases can be written in various styles with varying degrees of detail, for varying aims, and for varying audiences, as with business and system use cases. Business use cases are very readable, with little technical content, so that stakeholders and business managers can understand the system as a black box. This may not work for developers, who want more technical detail and use cases that fully describe what a system must do under all circumstances.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Experience has shown that a certain amount of common behavior is replicated across many business use cases. By extracting these common processing details (e.g., create, read, update, delete), the content of individual use cases can be reduced. The common details are placed in what are called lower-level pluggable use cases. Essentially, we are creating use cases at various levels: at the highest level, the main use case shows the fundamental processing steps, with the names of the pluggable use cases 'plugged' in between. This abstracts out lower-level details and keeps the use cases simpler. Business managers and stakeholders can read the higher-level use case, which remains simple and readable, while the lower-level details of interest to developers are not lost. These levels provide the freedom to read at various degrees of granularity.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Pluggable use cases have to be written in a generic form so that they can be used wherever needed; they should be generic enough to be plugged into other use cases. All the rules of regular use cases still apply, in terms of finding goals (which are essentially sub-goals of the system), actors, and so on.&lt;br /&gt;
Pluggable use cases can be written so that their content is the same for all transactions, that is, common across scenarios and projects. What differs in each project or scenario is mainly the data handled and the sequence of activities performed. The unique data and rules of each scenario are separated from the process steps and documented independently in &amp;quot;Companion Tables,&amp;quot; which enables flexibility and maximum reuse.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Thus pluggable use cases become building blocks for higher level use cases. They are organized and applied within each use case to reach its goals. Whenever a pluggable use case is invoked, the invocation references the companion table that provides the unique data and rules for that use case. They are most effective when they are used in conjunction with these tables.&amp;lt;/p&amp;gt;&lt;br /&gt;
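&lt;br /&gt;
&amp;lt;p&amp;gt;The idea can be sketched in code. In this hypothetical Python sketch, a generic 'create' pluggable use case is written once, and a companion table supplies the data that differs per scenario; the function names and tables are illustrative assumptions, not taken from any source.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
```python
# Hypothetical sketch of a pluggable use case: the common step sequence is
# written once, and a companion table supplies the per-scenario data.
def create_record(companion):
    """Generic 'create' pluggable use case; returns the steps it performs."""
    return [f"validate {f}" for f in companion["required_fields"]] + \
           [f"store {companion['entity']}"]

# Companion tables hold the unique data and rules for each invocation.
VOUCHER = {"entity": "voucher", "required_fields": ["amount", "payee"]}
MEETING = {"entity": "meeting", "required_fields": ["date", "time"]}

# The main use case plugs the generic behavior in between its own steps.
def request_meeting():
    return ["User A opens scheduler"] + create_record(MEETING) + \
           ["User B is notified"]

print(request_meeting())
```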
&lt;br /&gt;
&amp;lt;p&amp;gt;Use case content is dependent on the requirements of the system under design. This is also true for pluggable use cases. While the majority of pluggable use case content can be used verbatim across any project or company, minor customizations may be needed to accommodate the individual needs of the project and company.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Some use case sequences are simple while others are complex. To manage complexity, a use case can exercise one, several, or a series of pluggable use cases wherever desired and in any order. To maximize cohesion and increase reusability, a pluggable use case may employ another pluggable use case. This versatility provides a solid foundation for the construction of project use cases.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The following link has excellent examples of how pluggable use cases can be written and used: http://alistair.cockburn.us/Pluggable+use+cases&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tools and Examples ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;These links are taken from http://pg-server.csc.ncsu.edu/mediawiki/index.php/CSC/ECE_517_Fall_2007/wiki2_4_np&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Tools ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different tools available for creating Use Cases. For a more comprehensive list, see [http://en.wikipedia.org/wiki/List_of_Unified_Modeling_Language_tools Wikipedia's list of UML modeling tools]. A sampling is below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;'''Rational Rose:'''&lt;br /&gt;
&lt;br /&gt;
One of the most popular tools for use-case-driven development.&lt;br /&gt;
&lt;br /&gt;
http://www-306.ibm.com/software/awdtools/developer/rose/index.html&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Sun Java Studio Enterprise:''' &lt;br /&gt;
&lt;br /&gt;
Sun Java Studio Enterprise offers a UML tool. &lt;br /&gt;
&lt;br /&gt;
http://developers.sun.com/jsenterprise/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Visual case:''' &lt;br /&gt;
&lt;br /&gt;
UML &amp;amp; E/R Database Design Tool&lt;br /&gt;
&lt;br /&gt;
http://www.visualcase.com/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different examples of Use Cases.  Some are listed below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.objectmentor.com/resources/articles/usecases.pdf &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.w3.org/2002/06/ws-example &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.agilemodeling.com/essays/useCaseReuse.htm &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.soi.wide.ad.jp/class/20040034/slides/07/9.html &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.cs.colorado.edu/~kena/classes/6448/s05/reference/usecases/examples.html &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://courses.softlab.ntua.gr/softeng/Tutorials/UML-Use-Cases.pdf &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
== Further Reading ==&lt;br /&gt;
&amp;lt;p&amp;gt;These links are taken from http://pg-server.csc.ncsu.edu/mediawiki/index.php/CSC/ECE_517_Fall_2007/wiki2_4_np&amp;lt;/p&amp;gt;&lt;br /&gt;
=== Quick references ===&lt;br /&gt;
&lt;br /&gt;
Some quick references for studying use cases&lt;br /&gt;
&lt;br /&gt;
•	http://www.oreilly.com.cn/samplechap/uml20inanutshell/UML20-ch07.pdf&lt;br /&gt;
&lt;br /&gt;
•	http://www.cs.rit.edu/~jaa/CS4/Lectures/UseCase.PDF&lt;br /&gt;
&lt;br /&gt;
•	http://www.alagad.com/go/blog-entry/uml-use-case-diagrams&lt;br /&gt;
&lt;br /&gt;
•	http://www.cs.nmsu.edu/~jeffery/courses/371/lecture.html&lt;br /&gt;
&lt;br /&gt;
=== Good Tutorials ===&lt;br /&gt;
&lt;br /&gt;
•	http://www.parlezuml.com/tutorials/usecases/usecases.pdf&lt;br /&gt;
&lt;br /&gt;
•	http://www.readysetpro.com/whitepapers/usecasetut.html &lt;br /&gt;
&lt;br /&gt;
Two very easy and informative tutorials for beginners who are not familiar with use cases. Both the tutorials contain some very good and simple examples coupled with easily understandable pictures. The first tutorial focuses on use case driven development and UML diagrams while the second deals with writing effective use cases.&lt;br /&gt;
&lt;br /&gt;
=== Presentations Online ===&lt;br /&gt;
&lt;br /&gt;
•	https://users.cs.jmu.edu/bernstdh/web/common/lectures/slides_use-cases.php&lt;br /&gt;
&lt;br /&gt;
This presentation describes writing use cases along with constructing use case diagrams&lt;br /&gt;
&lt;br /&gt;
•	http://www-rohan.sdsu.edu/faculty/rnorman/course/ids306/Lect_c4.ppt&lt;br /&gt;
&lt;br /&gt;
This is a very good presentation that explains the concepts with familiar real life examples.&lt;br /&gt;
&lt;br /&gt;
•	http://www.cragsystems.com/SFRWUC/index.htm&lt;br /&gt;
&lt;br /&gt;
This web-based tutorial describes creating a Use Case Model of the functional requirements for a computer system.&lt;br /&gt;
&lt;br /&gt;
=== Books ===&lt;br /&gt;
&lt;br /&gt;
The following are good books for learning about use cases:&lt;br /&gt;
&lt;br /&gt;
[[http://www.amazon.com/Writing-Effective-Cases-Alistair-Cockburn/dp/0201702258 Writing Effective Use Cases by Alistair Cockburn ]]&lt;br /&gt;
&lt;br /&gt;
[[http://www.amazon.com/Object-Oriented-Software-Engineering-Driven-Approach/dp/0201544350 Object-Oriented Software Engineering: A Use Case Driven Approach by Ivar Jacobson ]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[http://alistair.cockburn.us/Use+case+fundamentals Cockburn, Alistair. &amp;quot;Use case fundamentals.&amp;quot; Alistair Cockburn. May 10, 2006. http://alistair.cockburn.us/Use+case+fundamentals. Accessed: 10/18/2010]&lt;br /&gt;
&lt;br /&gt;
[http://alistair.cockburn.us/Pluggable+use+cases Cockburn, Alistair. &amp;quot;Pluggable use cases.&amp;quot; Alistair Cockburn. August 2, 2004. http://alistair.cockburn.us/Pluggable+use+cases. Accessed: 10/18/2010]&lt;br /&gt;
&lt;br /&gt;
[http://www.bredemeyer.com/use_cases.htm The Architecture Discipline. &amp;quot;Functional Requirements and Use Cases.&amp;quot; Bredemeyer Consulting. July 25, 2006. http://www.bredemeyer.com/use_cases.htm. Accessed: 10/18/10]&lt;br /&gt;
&lt;br /&gt;
[http://www.answers.com/topic/use-case?cat=technology &amp;quot;Use case.&amp;quot; Answers.com. ReferenceAnswers. Unknown. http://www.answers.com/topic/use-case?cat=technology. Accessed: 10/19/10]&lt;br /&gt;
&lt;br /&gt;
[http://www.gatherspace.com/static/use_case_example.html GatherSpace. &amp;quot;Writing effective Use Case Examples.&amp;quot; GatherSpace.com. Unknown. http://www.gatherspace.com/static/use_case_example.html. Accessed: 10/17/10]&lt;br /&gt;
&lt;br /&gt;
[http://agile.csc.ncsu.edu/SEMaterials/UseCaseRequirements.pdf Williams, Laurie, Phd. &amp;quot;Use Case Requirements.&amp;quot; agile.csc.ncsu.edu. North Carolina State University. 2004. http://agile.csc.ncsu.edu/SEMaterials/UseCaseRequirements.pdf. Accessed: 10/18/10]&lt;br /&gt;
&lt;br /&gt;
[http://www.agilemodeling.com/artifacts/useCaseDiagram.htm Ambler, Scott W. &amp;quot;UML 2 Use Case Diagrams.&amp;quot; agilemodeling.com. Agile Modeling. http://www.agilemodeling.com/artifacts/useCaseDiagram.htm. Accessed: 10/18/10]&lt;br /&gt;
&lt;br /&gt;
[http://www.andrew.cmu.edu/course/90-754/umlucdfaq.html Heywood, Rus. &amp;quot;.&amp;quot; andrew.cmu.edu. Carnegie Mellon. Unknown. http://www.andrew.cmu.edu/course/90-754/umlucdfaq.html. Accessed: 10/19/10]&lt;br /&gt;
&lt;br /&gt;
[http://www.agilemodeling.com/artifacts/essentialUseCase.htm Ambler, Scott W. &amp;quot;Essential Use Case.&amp;quot; agilemodeling.com. Agile Modeling. http://www.agilemodeling.com/artifacts/essentialUseCase.htm. Accessed: 10/18/10]&lt;br /&gt;
&lt;br /&gt;
http://pg-server.csc.ncsu.edu/mediawiki/index.php/CSC/ECE_517_Fall_2007/wiki2_4_np&lt;br /&gt;
&lt;br /&gt;
http://en.wikipedia.org/wiki/Use_case&lt;br /&gt;
&lt;br /&gt;
http://en.wikipedia.org/wiki/Unified_Modeling_Language&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38922</id>
		<title>CSC/ECE 517 Fall 2010/ch4 4a RJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38922"/>
		<updated>2010-10-20T17:24:30Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font size=5&amp;gt;Use Cases&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases can be defined in many ways. Very simply put, a use case is a reason to use a system. For example, a student borrowing a book from a library is a use case of the library, and a bank cardholder using an ATM to get cash from an account is a use case of the ATM. More formally, &#8220;a use case is a collection of possible sequences of interactions between the system under discussion and its Users (or Actors), relating to a particular goal&#8221; [http://alistair.cockburn.us/Use+case+fundamentals].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A use case is initiated by a user with a particular goal in mind, and completes successfully when that goal is satisfied. The system is treated as a &amp;quot;black box&amp;quot;, and the interactions with the system, including system responses, are described as perceived from outside the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;quot;Use cases capture who (actor) does what (interaction) with the system, for what purpose (goal), without dealing with system internals. A complete set of use cases specifies all the different ways to use the system, and therefore defines all behavior required of the system, bounding the scope of the system.&amp;quot;[http://www.bredemeyer.com/use_cases.htm]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Use Case Basics ==&lt;br /&gt;
&lt;br /&gt;
===Terms used with Use cases===&lt;br /&gt;
Now let us define some terms used with use cases: [http://en.wikipedia.org/wiki/Use_case]&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Actor:&amp;lt;/b&amp;gt; An actor is a type of user that interacts with the system (e.g., a student borrowing a book or a cardholder using an ATM). Actors are external entities (people or other systems) who interact with the system to achieve a desired goal, and the goal must be of value to the actor.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Goal:&amp;lt;/b&amp;gt; Without a goal, a use case is useless; there is no need for a use case when no actor needs to achieve a goal. The goal briefly describes what the user intends to achieve with the use case. For example, the goal of a student using the library is to obtain a book. There is no point in having a use case like &#8220;the student enters the library,&#8221; as that in itself has no value to the actor.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Stakeholder:&amp;lt;/b&amp;gt; A stakeholder is an individual or department that is affected by the outcome of the use case. Individuals are usually agents of the organization or department for which the use case is being created. A stakeholder might be called on to provide input, feedback, or authorization for the use case. The stakeholder section of the use case can include a brief description of which of these functions the stakeholder is assigned to fulfill.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Trigger:&amp;lt;/b&amp;gt; A trigger describes the event that causes the use case to be initiated. This event can be external or internal. If the trigger is not a simple &amp;quot;event&amp;quot; (e.g., the customer presses a button) but rather &amp;quot;when a set of conditions is met,&amp;quot; there must be a triggering process that continually (or periodically) runs to test whether the conditions are met; the triggering event is then a signal from that process that the conditions now hold. &lt;br /&gt;
In our example, a trigger would be the student's need for a book due to an approaching exam or test, which causes the student to go to the library to borrow it.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Precondition:&amp;lt;/b&amp;gt; A precondition defines all the conditions that must be true (i.e., describes the state of the system) for the trigger to meaningfully cause the initiation of the use case. That is, if the system is not in the state described in the preconditions, the behavior of the use case is indeterminate. For example, the student should be a member of the library and have the required identity to borrow a book. If the student is not a member of the library, there is no point in the student trying to borrow a book from that library.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Scenarios:&amp;lt;/b&amp;gt; A scenario usually specifies when the use case starts and ends. It describes the interaction with actors and shows the flow of events between a user and the system. For example, when a student tries to borrow a particular book from the library, it does not always turn out the same way: sometimes the book is available, sometimes it is already borrowed by someone else, and sometimes the library does not have the book at all. These are all examples of use case scenarios. The outcome in each case is different depending on circumstances, but they all relate to the same goal; that is, they are all triggered by the same need (in this case, the need for the book) and all the scenarios have the same starting point.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Simple Example===&lt;br /&gt;
&amp;lt;p&amp;gt;Now that we know something about use cases, let us go ahead and describe a simple use case:&amp;lt;/p&amp;gt;&lt;br /&gt;
 Use Case 1: Request book from the library (automated system).&lt;br /&gt;
 &lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
   To borrow a particular book from the library&lt;br /&gt;
 &lt;br /&gt;
 1.2: Actors&lt;br /&gt;
   Student&lt;br /&gt;
 &lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
   Student should be a member of the library and have an ID.&lt;br /&gt;
 &lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
   System requests the student ID and checks whether he/she is a member&lt;br /&gt;
   Student selects “request book”&lt;br /&gt;
   Student enters name(s) of the book(s)&lt;br /&gt;
   System checks for availability of books and displays results accordingly&lt;br /&gt;
   Student confirms the order&lt;br /&gt;
   System displays details of where the requested books are stacked&lt;br /&gt;
 &lt;br /&gt;
 1.5: Alternate Path&lt;br /&gt;
   System does not recognize the ID or the student is not a member. System will not allow any books to be checked out.&lt;br /&gt;
   All requested books are already checked out. System displays this information to the student and closes the request.&lt;br /&gt;
&lt;br /&gt;
===Important Characteristics of Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;The above description shows a very simple use case. However, there are a few essential characteristics to be noticed about the use case:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have identified the key components of a use case, that is, the goal, actors, preconditions, and key scenarios/flows. It is essential that we identify these components before writing a use case.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have not gone into any sort of technical details about implementation or user interface design. Use cases only represent a very high level design. We are only trying to understand the flow and uses of the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have recorded a set of paths (scenarios) that take an actor from a trigger event (the start of the use case) to the goal (success scenarios).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have recorded a set of scenarios that take an actor from a trigger event toward the goal but fall short of it (failure scenarios).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Where can we use ‘use cases’?===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases are usually used to capture the requirements of an interaction-based system. When there is a lot of interaction between actors and the system, it makes sense to capture as many interactions and scenarios as possible before starting development of the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases help to eliminate rework caused by requirements misunderstandings between developers and stakeholders by aiming for a point where there are no surprises for the users. Use cases help to build an explicit shared understanding that everyone can take away with them: users, developers, testers, technical authors, and others.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases have received some interest as a starting point for test design. By analyzing the use cases for a system, we can identify the various interactions between the system and actors, which helps in drawing up test plans.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
===Where can’t we use ‘use cases’?===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://en.wikipedia.org/wiki/Use_case]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use case flows are not well suited to easily capturing non-interaction based requirements of a system (such as algorithm or mathematical requirements) or non-functional requirements (such as platform, performance, timing, or safety-critical aspects). These are better specified declaratively elsewhere.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Some systems are better described in an information/data-driven approach than in a functionality-driven approach of use cases. A good example of this kind of system is data-mining systems used for Business Intelligence. If you were to describe this kind of system in a use case model, it would be quite small and uninteresting (there are not many different functions here) but the set of data that the system handles may nevertheless be large and rich in details.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Common Mistakes While Writing Use Cases===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The system boundary is undefined or inconsistent.  A system boundary separates the internal components of a system from external entities.  If we are not able to identify the system boundary, we will not be able to clearly define the actors, scenarios, and other essential factors involved in writing a good and useful use case. For example, the system in the library example has a clear boundary: it accepts book names as input, checks the ID, and provides a location for the books. We know its role very clearly. If the system instead managed everything (security, employee details, etc.), we would not be able to identify the goal, actors, and scenarios clearly.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases should not be used to capture all the details of a system. The granularity to which you define use cases in a diagram should be enough to keep the use case diagram uncluttered and readable, yet, be complete without missing significant aspects of the required functionality.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The use cases are written from the system’s (not the actors’) point of view. Use cases written from the system’s point of view tempt the writer to get into technical details. If we wrote the example use case above from the system’s point of view, we would have statements like “obtain location of book from database and display location of books to user”. This is more detail than necessary, and it would not capture the interaction with the actor very clearly. Use cases also give a brief insight into how the UI should look, but when written from the system’s point of view these details might not be captured clearly.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Writing Use Cases ==&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Cases for a system is a process taken between those designing the system and the Stakeholders.  Use Cases can take many different forms, depending on the type of development process being used [http://www.answers.com/topic/use-case?cat=technology].  The format that a Use Case takes is not as important as the process that it goes through [http://www.gatherspace.com/static/use_case_example.html].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Iterative Process ===&lt;br /&gt;
&amp;lt;p&amp;gt;Normally, this process is done iteratively, so that the iterations can build upon each other [http://www.gatherspace.com/static/use_case_example.html].  Below is an example of three iterations of the Use Case writing process to illustrate how it can reveal things about a system and lay out the functional requirements. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;For this example, we will be creating Use Cases to solve the following problem statement:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Develop a system to allow users to schedule meetings with each other.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this system, the stakeholders will be the users of the system.  The Users will also be the only Actors in the system. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 1 ====&lt;br /&gt;
&amp;lt;p&amp;gt;For the first iteration, we will write out short sentences to describe the functionality that the system will have.  Some use cases could be:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Request a Meeting&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Approve a meeting request&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Suggest a new time&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;View a User's schedule&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this example, the fourth Use Case may not have been an obvious requirement that could be derived from the original problem statement, but in the process of creating the Use Cases, it was discovered that it would be a good requirement for the system to have.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 2 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The next iteration would involve writing out longer descriptions for each. This could be done in paragraph form, or by writing a list.  Below are the Use Cases in paragraph form:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 '''Use Case 1 – Request a meeting'''&lt;br /&gt;
 User A chooses the date and time for a meeting with User B.  User A chooses User B as the recipient of the meeting request.&lt;br /&gt;
 User A sends the meeting request to User B through the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 2 – Approve a Meeting Request'''&lt;br /&gt;
 User A receives a meeting request from User B and accepts the user request.  A notification that User A has accepted the meeting is sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 3 – Suggest a different meeting time'''&lt;br /&gt;
 User A receives a meeting request from User B and suggests a different time and/or date for the meeting.  The response is sent to User B through &lt;br /&gt;
 the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 4 – View a User's schedule'''&lt;br /&gt;
 User A would like to schedule a meeting with User B.  User A starts the system and opens up User B's schedule.  &lt;br /&gt;
 User A can see when User B has already scheduled  meetings and User A can then use that information to send User B a meeting request.&lt;br /&gt;
 User A can also access their own schedule to view their already-scheduled meetings.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;From writing out the Use Cases with more description, it should become clear that Use Case 2 and Use Case 3 are very similar.  In both Use Cases, User A responds to a meeting request and a response/notification is sent to User B.  This might lead the designers of the system to combine these two Use Cases into one Use Case for &amp;quot;Respond to Request.&amp;quot;&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 3 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In the next iteration, we're going to use a Use Case template.  There are many different Use Case templates, which include different information [SOURCES].  A template can be created that is unique for the system being described, provided each Use Case uses the same template. The template we will use will include:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case &amp;lt;Number&amp;gt;: Title&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.1: &amp;lt;Summary/Goal&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.2: &amp;lt;Actors&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.3: &amp;lt;Preconditions&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.4: &amp;lt;Main Path&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.5: &amp;lt;Alternate Paths – including sub-flows [S] and error-flows [E]&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Case 1 in this format yields:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case 1: Request a meeting&amp;lt;br&amp;gt;&lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
 User A can choose the date and time for a meeting with User B [Main].  User A can choose User B as the recipient of the meeting &lt;br /&gt;
 request [Main], or multiple Users as the recipient [S.2].  User A sends the meeting request to User B through the system [Main]. &lt;br /&gt;
 Before User A sends the meeting request, User A can opt not to send the request and delete it [S.1]. If User B is no longer in &lt;br /&gt;
 the system, User A receives notification that the meeting request cannot be sent [E.1].&amp;lt;br&amp;gt;&lt;br /&gt;
 1.2: Actors&lt;br /&gt;
 Users&amp;lt;br&amp;gt;&lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
 - User A and User B are recorded as Users in the system&lt;br /&gt;
 - User A has logged into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
 1) User A chooses a date for a meeting&lt;br /&gt;
 2) User A chooses a time for a meeting&lt;br /&gt;
 3) User A chooses User B as the recipient for the meeting request&lt;br /&gt;
 4) User A submits the meeting request&lt;br /&gt;
 5) User B receives the meeting request the next time User B logs into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.5: Alternate Paths&lt;br /&gt;
 S.1 &lt;br /&gt;
 User A creates a meeting request, but does not submit it.  A meeting request is not sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 S.2&lt;br /&gt;
 User A creates a meeting request for more than one User.  User A submits the meeting request and each User receives it the next&lt;br /&gt;
 time they log in&amp;lt;br&amp;gt;&lt;br /&gt;
 E.1&lt;br /&gt;
 User B is no longer a User in the system.  User A receives notification that the meeting request cannot be sent.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
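&amp;lt;p&amp;gt;The template above can also be captured as a simple data structure, which is handy when use cases are kept in a tracking tool. The sketch below is a minimal, hypothetical Python model of the template and the "Request a meeting" use case; the class and field names are our own illustration, not part of any standard.&amp;lt;/p&amp;gt;&lt;br /&gt;

```python
# Hypothetical sketch: the use-case template as a Python dataclass.
# Field names mirror the template sections (Summary/Goal, Actors,
# Preconditions, Main Path, Alternate Paths) but are not standardized.
from dataclasses import dataclass, field

@dataclass
class UseCase:
    number: int
    title: str
    goal: str
    actors: list = field(default_factory=list)
    preconditions: list = field(default_factory=list)
    main_path: list = field(default_factory=list)        # numbered steps
    alternate_paths: dict = field(default_factory=dict)  # keys like "S.1", "E.1"

# Use Case 1 from the text, encoded in the structure above.
request_meeting = UseCase(
    number=1,
    title="Request a meeting",
    goal="User A sends a meeting request to User B",
    actors=["User"],
    preconditions=["User A and User B are recorded as Users",
                   "User A has logged into the system"],
    main_path=[
        "User A chooses a date for a meeting",
        "User A chooses a time for a meeting",
        "User A chooses User B as the recipient",
        "User A submits the meeting request",
        "User B receives the request at next log-in",
    ],
    alternate_paths={
        "S.1": "User A creates but does not submit the request",
        "S.2": "User A sends the request to multiple Users",
        "E.1": "User B is no longer a User; User A is notified",
    },
)
print(len(request_meeting.main_path))  # 5 steps in the main path
```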
==== Results ====&lt;br /&gt;
&amp;lt;p&amp;gt;There are a few interesting things revealed about the system by re-writing the Use Case in this format.  The most important is likely that Users will need to &amp;quot;log-in&amp;quot; to the system.  This implies that there could be another Actor in the system, namely an Admin.  This leads to these additional Use Cases:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;5&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Log into the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;6&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Create a User in the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Admin&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Each of these new Use Cases would then go through the iterations listed above until they are in the template form. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see from the example, each iteration refines the Use Case and helps to clarify the requirements of the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Use Case Diagrams ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use Case Diagrams are useful for showing how each component in a system will interact with other components of the system [http://agile.csc.ncsu.edu/SEMaterials/UseCaseRequirements.pdf].  They are not good for showing the flow of events that a system will have, like the written Use Cases are [http://www.agilemodeling.com/artifacts/useCaseDiagram.htm].  Also, unlike written Use Cases, Use Case Diagrams use UML so that there is a standard. [http://en.wikipedia.org/wiki/Unified_Modeling_Language UML] is a standardized modeling language for software development.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Components of a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;quot;UCDs have only 4 major elements: The actors that the system you are describing interacts with, the system itself, the use cases, or services, that the system knows how to perform, and the lines that represent relationships between these elements.&amp;quot;[http://www.andrew.cmu.edu/course/90-754/umlucdfaq.html#uses]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Actors''' in Use Case Diagrams are represented by stick figures:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Actor.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Use Cases''' are represented by ovals:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UseCase.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Relationships''' are represented by Solid Lines.  Sometimes, arrowheads are added to the lines to indicate the direction of the invocation, or to show which actor is the primary actor [http://www.agilemodeling.com/artifacts/useCaseDiagram.htm]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Lines.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Two special relationships that can be shown in a Use Case Diagram are the ''Extends'' and ''Includes'' relationships.  These relationships are usually shown with a dotted line with an arrowhead and &amp;lt;&amp;lt;extend&amp;gt;&amp;gt; or &amp;lt;&amp;lt;include&amp;gt;&amp;gt; written near the line.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Extends'' relationship is used to show when Use Case X is a special case of Use Case Y [SOURCE].  In this situation, the dotted line is drawn from Use Case X to Use Case Y with the arrowhead pointing to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Includes/Uses'' relationship is used to show that every time Use Case X is done, Use Case Y must also be done [SOURCE].  In this case, the arrow points to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Creating a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A Use Case Diagram for the same system described in the Writing Use Cases section might look something like this:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UCDiagram.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see, &amp;quot;Accept Meeting&amp;quot; and &amp;quot;Suggest new time&amp;quot; are special cases of &amp;quot;Respond to request&amp;quot;.  Also, from this diagram, the system designers are saying that in order to &amp;quot;Request a Meeting&amp;quot; the user must &amp;quot;View Schedule&amp;quot;.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Advanced Topics ==&lt;br /&gt;
&lt;br /&gt;
===Essential Use Cases vs. System Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://en.wikipedia.org/wiki/Use_case]&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases may be described at the abstract level (business use case, sometimes called essential use case), or at the system level (system use case). The difference between these is the scope.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A &amp;lt;b&amp;gt;business use case&amp;lt;/b&amp;gt; is described in technology-free terminology; it treats the system as a black box and describes the business process that is used by its business actors (people or systems external to the process) to achieve their goals (e.g., manual payment processing, expense report approval, managing corporate real estate). The business use case describes a process that provides value to the business actor, and it describes what the process does. Business Process Mapping is another method for this level of business description. A significant advantage of essential use cases is that they enable you to stand back and ask fundamental questions like &amp;quot;what's really going on&amp;quot; and &amp;quot;what do we really need to do&amp;quot; without letting implementation decisions get in the way.  These questions often lead to critical realizations that allow you to rethink, or reengineer if you prefer that term, aspects of the overall business process.&lt;br /&gt;
A very good example of an essential use case can be seen at this link: http://www.agilemodeling.com/artifacts/essentialUseCase.htm &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A &amp;lt;b&amp;gt;system use case&amp;lt;/b&amp;gt; describes a system that automates a business use case or process. It is normally described at the system functionality level (for example, &amp;quot;create voucher&amp;quot;) and specifies the function or the service that the system provides for the actor. The system use case details what the system will do in response to an actor's actions. For this reason it is recommended that system use case specifications begin with a verb (e.g., create voucher, select payments, exclude payment, cancel voucher). An actor can be a human user or another system/subsystem interacting with the system being defined.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Pluggable Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://alistair.cockburn.us/Pluggable+use+cases]&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases describe the various scenarios and sequences of operations between actors and systems needed to achieve a given goal. We have seen that we can write use cases in various styles, with varying degrees of detail, varying aims, and for varying audiences, in the form of business and system use cases. Business use cases are very readable, with very little technical content, so that stakeholders and business managers can understand the system as a black box. This may not work for developers, who would like to see more technical detail and have use cases that fully describe what a system must do under all circumstances.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Experience shows that a certain amount of common behavior is replicated in many business use cases. By extracting these common processing details (e.g., create, read, update, delete), the contents of use cases can be reduced. These common processing details can be put into what are called lower-level pluggable use cases. Essentially, we are creating various levels of use cases. At the highest level, the main use case shows the more fundamental processing steps with the names of the pluggable use cases 'plugged' in between. This abstracts out lower-level details and keeps the use cases simpler. As a result, business managers and stakeholders can read the higher-level use case, which remains simple and readable, while the lower-level details of interest to developers are not lost either. These use case levels provide the freedom of reading at various degrees of granularity.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Pluggable use cases have to be written in a generic form so that they can be used wherever needed. They should be generic enough to be plugged into other use cases. All the rules of regular use cases still apply in terms of finding goals (which are essentially sub-goals of the system), actors, and so on.&lt;br /&gt;
Pluggable use cases can be produced so that their content is the same for all transactions, that is, common to various scenarios and projects. The difference in each project or scenario is mainly the data handled and the sequence of activities performed. The unique data and rules of each scenario are separated from the process steps and documented independently in &amp;quot;Companion Tables&amp;quot;, which enables flexibility and maximum reuse.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Thus pluggable use cases become building blocks for higher level use cases. They are organized and applied within each use case to reach its goals. Whenever a pluggable use case is invoked, the invocation references the companion table that provides the unique data and rules for that use case. They are most effective when they are used in conjunction with these tables.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use case content is dependent on the requirements of the system under design. This is also true for pluggable use cases. While the majority of pluggable use case content can be used verbatim across any project or company, minor customizations may be needed to accommodate the individual needs of the project and company.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Some use case sequences are simple while others are complex. To manage degrees of complexity, a use case can exercise one, multiple or a series of pluggable use cases wherever desired and in any order. To maximize use case cohesion and increase reusability a pluggable use case may employ another pluggable use case. The versatility of pluggable use cases provides a solid foundation for the construction of project use cases.        &amp;lt;/p&amp;gt;&lt;br /&gt;
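&amp;lt;p&amp;gt;As a rough illustration (the names, steps, and entities below are invented for this sketch, not taken from any cited methodology), a pluggable use case can be thought of as a reusable, parameterized list of steps, with a companion table supplying the scenario-specific data:&amp;lt;/p&amp;gt;&lt;br /&gt;

```python
# Hypothetical sketch: a pluggable use case as a reusable step list.
# The "companion table" is modeled as a dict of scenario-specific data.
def pluggable_update(companion):
    # Generic "update record" steps, common to many scenarios;
    # the companion table fills in the entity and its fields.
    return [
        "System locates the {0} record".format(companion["entity"]),
        "Actor edits the fields: {0}".format(", ".join(companion["fields"])),
        "System validates and saves the {0}".format(companion["entity"]),
    ]

def main_use_case():
    # A higher-level use case that 'plugs in' the generic steps
    # between its own fundamental steps.
    steps = ["Librarian opens the catalog"]
    steps += pluggable_update({"entity": "book",
                               "fields": ["title", "location"]})
    steps.append("Librarian closes the catalog")
    return steps

for step in main_use_case():
    print("-", step)
```

Invoking the same pluggable step list with a different companion table (say, an "employee" entity) reuses the process steps unchanged, which is the reuse the text describes.&lt;br /&gt;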
&lt;br /&gt;
&amp;lt;p&amp;gt;The following link has excellent examples of how pluggable use cases can be written and used http://alistair.cockburn.us/Pluggable+use+cases&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tools and Examples ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;These links are taken from http://pg-server.csc.ncsu.edu/mediawiki/index.php/CSC/ECE_517_Fall_2007/wiki2_4_np&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Tools ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different tools available to create Use Cases. For a more comprehensive list, see [http://en.wikipedia.org/wiki/List_of_Unified_Modeling_Language_tools Wikipedia's list of UML modeling tools]. A sampling is below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;'''Rational Rose:'''&lt;br /&gt;
&lt;br /&gt;
One of the most popular tools for use-case-driven development.&lt;br /&gt;
&lt;br /&gt;
http://www-306.ibm.com/software/awdtools/developer/rose/index.html&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Sun Java Studio Enterprise:''' &lt;br /&gt;
&lt;br /&gt;
Sun Java Studio Enterprise offers a UML tool. &lt;br /&gt;
&lt;br /&gt;
http://developers.sun.com/jsenterprise/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Visual case:''' &lt;br /&gt;
&lt;br /&gt;
UML &amp;amp; E/R Database Design Tool&lt;br /&gt;
&lt;br /&gt;
http://www.visualcase.com/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different examples of Use Cases.  Some are listed below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.objectmentor.com/resources/articles/usecases.pdf &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.w3.org/2002/06/ws-example &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.agilemodeling.com/essays/useCaseReuse.htm &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.soi.wide.ad.jp/class/20040034/slides/07/9.html &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.cs.colorado.edu/~kena/classes/6448/s05/reference/usecases/examples.html &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://courses.softlab.ntua.gr/softeng/Tutorials/UML-Use-Cases.pdf &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
== Further Reading ==&lt;br /&gt;
&amp;lt;p&amp;gt;These links are taken from http://pg-server.csc.ncsu.edu/mediawiki/index.php/CSC/ECE_517_Fall_2007/wiki2_4_np&amp;lt;/p&amp;gt;&lt;br /&gt;
=== Quick references ===&lt;br /&gt;
&lt;br /&gt;
Some quick references for studying use cases:&lt;br /&gt;
&lt;br /&gt;
•	http://www.oreilly.com.cn/samplechap/uml20inanutshell/UML20-ch07.pdf&lt;br /&gt;
&lt;br /&gt;
•	http://www.cs.rit.edu/~jaa/CS4/Lectures/UseCase.PDF&lt;br /&gt;
&lt;br /&gt;
•	http://www.alagad.com/go/blog-entry/uml-use-case-diagrams&lt;br /&gt;
&lt;br /&gt;
•	http://www.cs.nmsu.edu/~jeffery/courses/371/lecture.html&lt;br /&gt;
&lt;br /&gt;
=== Good Tutorials ===&lt;br /&gt;
&lt;br /&gt;
•	http://www.parlezuml.com/tutorials/usecases/usecases.pdf&lt;br /&gt;
&lt;br /&gt;
•	http://www.readysetpro.com/whitepapers/usecasetut.html &lt;br /&gt;
&lt;br /&gt;
Two very easy and informative tutorials for beginners who are not familiar with use cases. Both the tutorials contain some very good and simple examples coupled with easily understandable pictures. The first tutorial focuses on use case driven development and UML diagrams while the second deals with writing effective use cases.&lt;br /&gt;
&lt;br /&gt;
=== Presentations Online ===&lt;br /&gt;
&lt;br /&gt;
•	https://users.cs.jmu.edu/bernstdh/web/common/lectures/slides_use-cases.php&lt;br /&gt;
&lt;br /&gt;
This presentation describes writing use cases along with constructing use case diagrams.&lt;br /&gt;
&lt;br /&gt;
•	http://www-rohan.sdsu.edu/faculty/rnorman/course/ids306/Lect_c4.ppt&lt;br /&gt;
&lt;br /&gt;
This is a very good presentation that explains the concepts with familiar real life examples.&lt;br /&gt;
&lt;br /&gt;
•	http://www.cragsystems.com/SFRWUC/index.htm&lt;br /&gt;
&lt;br /&gt;
This web-based tutorial describes creating a Use Case Model of the functional requirements for a computer system.&lt;br /&gt;
&lt;br /&gt;
=== Books ===&lt;br /&gt;
&lt;br /&gt;
The following are good books for learning about use cases:&lt;br /&gt;
&lt;br /&gt;
[[http://www.amazon.com/Writing-Effective-Cases-Alistair-Cockburn/dp/0201702258 Writing Effective Use Cases by Alistair Cockburn ]]&lt;br /&gt;
&lt;br /&gt;
[[http://www.amazon.com/Object-Oriented-Software-Engineering-Driven-Approach/dp/0201544350 Object-Oriented Software Engineering: A Use Case Driven Approach by Ivar Jacobson ]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[http://alistair.cockburn.us/Use+case+fundamentals Cockburn, Alistair. &amp;quot;Use case fundamentals.&amp;quot; Alistair Cockburn. May 10, 2006. http://alistair.cockburn.us/Use+case+fundamentals. Accessed: 10/18/2010]&lt;br /&gt;
&lt;br /&gt;
[http://alistair.cockburn.us/Pluggable+use+cases Cockburn, Alistair. &amp;quot;Pluggable use cases.&amp;quot; Alistair Cockburn. August 2, 2004. http://alistair.cockburn.us/Pluggable+use+cases. Accessed: 10/18/2010]&lt;br /&gt;
&lt;br /&gt;
[http://www.bredemeyer.com/use_cases.htm The Architecture Discipline. &amp;quot;Functional Requirements and Use Cases.&amp;quot; Bredemeyer Consulting. July 25, 2006. http://www.bredemeyer.com/use_cases.htm. Accessed: 10/18/10]&lt;br /&gt;
&lt;br /&gt;
[http://www.answers.com/topic/use-case?cat=technology &amp;quot;Use case.&amp;quot; Answers.com. ReferenceAnswers. Unknown. http://www.answers.com/topic/use-case?cat=technology. Accessed: 10/19/10]&lt;br /&gt;
&lt;br /&gt;
[http://www.gatherspace.com/static/use_case_example.html GatherSpace. &amp;quot;Writing effective Use Case Examples.&amp;quot; GatherSpace.com. Unknown. http://www.gatherspace.com/static/use_case_example.html. Accessed: 10/17/10]&lt;br /&gt;
&lt;br /&gt;
http://agile.csc.ncsu.edu/SEMaterials/UseCaseRequirements.pdf&lt;br /&gt;
&lt;br /&gt;
http://www.agilemodeling.com/artifacts/useCaseDiagram.htm&lt;br /&gt;
&lt;br /&gt;
http://en.wikipedia.org/wiki/Unified_Modeling_Language&lt;br /&gt;
&lt;br /&gt;
http://www.andrew.cmu.edu/course/90-754/umlucdfaq.html&lt;br /&gt;
&lt;br /&gt;
http://www.agilemodeling.com/artifacts/essentialUseCase.htm&lt;br /&gt;
&lt;br /&gt;
http://alistair.cockburn.us/Pluggable+use+cases&lt;br /&gt;
&lt;br /&gt;
http://pg-server.csc.ncsu.edu/mediawiki/index.php/CSC/ECE_517_Fall_2007/wiki2_4_np&lt;br /&gt;
&lt;br /&gt;
http://en.wikipedia.org/wiki/Use_case&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38920</id>
		<title>CSC/ECE 517 Fall 2010/ch4 4a RJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38920"/>
		<updated>2010-10-20T17:21:00Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font size=5&amp;gt;Use Cases&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases can be defined in many ways, and there are many formal definitions. Very simply put, a use case is a reason to use a system. For example, a student borrowing a book from a library would be a use case of the library, or a bank cardholder might use an ATM to get cash out of their account. More formally, “a use case is a collection of possible sequences of interactions between the system under discussion and its Users (or Actors), relating to a particular goal” [http://alistair.cockburn.us/Use+case+fundamentals].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A use case is initiated by a user with a particular goal in mind, and completes successfully when that goal is satisfied. The system is treated as a &amp;quot;black box&amp;quot;, and the interactions with the system, including system responses, are as perceived from outside the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;quot;Use cases capture who (actor) does what (interaction) with the system, for what purpose (goal), without dealing with system internals. A complete set of use cases specifies all the different ways to use the system, and therefore defines all behavior required of the system, bounding the scope of the system.&amp;quot;[http://www.bredemeyer.com/use_cases.htm]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Use Case Basics ==&lt;br /&gt;
&lt;br /&gt;
===Terms used with Use cases===&lt;br /&gt;
Now let us define some terms used with use cases: [http://en.wikipedia.org/wiki/Use_case]&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Actor:&amp;lt;/b&amp;gt; An actor is a type of user that interacts with the system (e.g., a student borrowing a book, or a cardholder using an ATM). Actors are external entities (people or other systems) who interact with the system to achieve a desired goal, and the goal must be of value to the actor.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Goal:&amp;lt;/b&amp;gt; Without a goal, a use case is useless; there is no need for a use case when no actor needs to achieve a goal. A goal briefly describes what the user intends to achieve with the use case. For example, the goal of a student using the library is to obtain a book. There is no point in having a use case like “the student enters the library”, as that in itself has no value to the actor.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Stakeholder:&amp;lt;/b&amp;gt; A stakeholder is an individual or department that is affected by the outcome of the use case. Individuals are usually agents of the organization or department for which the use case is being created. A stakeholder might be called on to provide input, feedback, or authorization for the use case. The stakeholder section of the use case can include a brief description of which of these functions the stakeholder is assigned to fulfill.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Trigger:&amp;lt;/b&amp;gt; A trigger describes the event that causes the use case to be initiated. This event can be external or internal. If the trigger is not a simple discrete &amp;quot;event&amp;quot; (e.g., the customer presses a button), but instead &amp;quot;when a set of conditions is met&amp;quot;, there will need to be a triggering process that continually (or periodically) runs to test whether the &amp;quot;trigger conditions&amp;quot; are met; the &amp;quot;triggering event&amp;quot; is then a signal from the trigger process that the conditions are now met. &lt;br /&gt;
In our example with the student, a trigger would be the need for the book, due to an approaching exam or test, which causes the student to go to the library to borrow a book.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Precondition:&amp;lt;/b&amp;gt; A precondition defines all the conditions that must be true (i.e., describes the state of the system) for the trigger to meaningfully cause the initiation of the use case. That is, if the system is not in the state described in the preconditions, the behavior of the use case is indeterminate. For example, the student should be a member of the library and have the required identity to borrow a book. If the student is not a member of the library, there is no point in the student trying to borrow a book from that library.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Scenarios:&amp;lt;/b&amp;gt; A scenario usually specifies when the use case starts and ends. It describes the interaction with actors and shows the flow of events between a user and the system. For example, when a student tries to borrow a particular book from the library, it does not always turn out the same way. Sometimes the book is available, sometimes it is already borrowed by someone else, and sometimes the library does not carry the book at all. These are all examples of use case scenarios. The outcome in each case is different depending on circumstances, but they all relate to the same goal; that is, they are all triggered by the same need (in this case, the need for the book) and all the scenarios have the same starting point.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Simple Example===&lt;br /&gt;
&amp;lt;p&amp;gt;Now that we know something about use cases, let us go ahead and describe a simple use case:&amp;lt;/p&amp;gt;&lt;br /&gt;
 Use Case 1: Request book from the library (automated system).&lt;br /&gt;
 &lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
   To borrow a particular book from the library&lt;br /&gt;
 &lt;br /&gt;
 1.2: Actors&lt;br /&gt;
   Student&lt;br /&gt;
 &lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
   Student should be a member of the library and have an id.&lt;br /&gt;
 &lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
   System requests the student's ID and checks that he/she is a member&lt;br /&gt;
   Student selects “request book”&lt;br /&gt;
   Student enters name(s) of the book(s)&lt;br /&gt;
   System checks for availability of books and displays results accordingly&lt;br /&gt;
   Student confirms the order&lt;br /&gt;
   System displays details of where the requested books are stacked&lt;br /&gt;
 &lt;br /&gt;
 1.5: Alternate Path&lt;br /&gt;
   System does not recognize the ID or the student is not a member. System will not allow any books to be checked out.&lt;br /&gt;
   All books requested are already checked out. Displays this information to student and closes request.&lt;br /&gt;
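&lt;br /&gt;
&amp;lt;p&amp;gt;The template above can also be captured in code. The following is an illustrative sketch only (not from the original article, and all names are hypothetical): a small Python data structure that forces every use case to fill in the same sections as the example.&amp;lt;/p&amp;gt;&lt;br /&gt;

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the sections of the use case template
# above, captured as a data structure so every use case fills in
# the same fields. All names here are hypothetical.
@dataclass
class UseCase:
    number: int
    title: str
    goal: str
    actors: list = field(default_factory=list)
    preconditions: list = field(default_factory=list)
    main_path: list = field(default_factory=list)
    alternate_paths: list = field(default_factory=list)

    def summary(self) -> str:
        # One-line overview, e.g. for a use case index
        return f"Use Case {self.number}: {self.title} ({len(self.main_path)} main steps)"

borrow_book = UseCase(
    number=1,
    title="Request book from the library",
    goal="To borrow a particular book from the library",
    actors=["Student"],
    preconditions=["Student is a library member and has an ID"],
    main_path=[
        "System requests student ID and checks membership",
        "Student selects 'request book'",
        "Student enters name(s) of the book(s)",
        "System checks availability and displays results",
        "Student confirms the order",
        "System displays where the requested books are stacked",
    ],
    alternate_paths=[
        "ID not recognized or not a member: no books may be checked out",
        "All requested books already checked out: inform student, close request",
    ],
)
print(borrow_book.summary())  # Use Case 1: Request book from the library (6 main steps)
```

&amp;lt;p&amp;gt;Modeling the template this way makes it easy to check that no section has been forgotten before the use case is reviewed with stakeholders.&amp;lt;/p&amp;gt;&lt;br /&gt;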
&lt;br /&gt;
===Important Characteristics of Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;The above description shows a very simple use case. However, there are a few essential characteristics to be noticed about the use case:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have identified the key components of a use case, that is, the goal, actors, preconditions, and key scenarios/flows. It is essential that we identify these components before writing a use case.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have not gone into any sort of technical details about implementation or user interface design. Use cases only represent a very high level design. We are only trying to understand the flow and uses of the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have recorded a set of paths (scenarios) that traverse an actor from a trigger event (start of the use case) to the goal (success scenarios).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have recorded a set of scenarios that traverse an actor from a trigger event toward a goal but fall short of the goal (failure scenarios).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Where can we use ‘use cases’?===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases are usually used to capture the requirements of an interaction-based system. When there is a lot of interaction between actors and the system, it makes sense to capture as many interactions and scenarios as possible before starting development of the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases help to eliminate rework due to requirements misunderstandings between developers and stakeholders by aiming to reach a point where there are no surprises for the users. Use cases help to build an explicit shared understanding that everyone can take away with them, the users, developers, testers, technical authors, and others.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases have received some interest as a starting point for test design. By analyzing use cases for the system, we can know various interactions between the system and actors which will help in drawing out test plans.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
===Where can’t we use ‘use cases’?===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://en.wikipedia.org/wiki/Use_case]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use case flows are not well suited to easily capturing non-interaction based requirements of a system (such as algorithm or mathematical requirements) or non-functional requirements (such as platform, performance, timing, or safety-critical aspects). These are better specified declaratively elsewhere.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Some systems are better described in an information/data-driven approach than in a functionality-driven approach of use cases. A good example of this kind of system is data-mining systems used for Business Intelligence. If you were to describe this kind of system in a use case model, it would be quite small and uninteresting (there are not many different functions here) but the set of data that the system handles may nevertheless be large and rich in details.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Common mistakes while writing Use cases: ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The system boundary is undefined or inconsistent.  A system boundary separates the internal components of a system from external entities.  If we cannot identify the system boundary, we will not be able to clearly define the actors, scenarios, and other essential elements of a good, useful use case. For example, the system described in the library example has a clear boundary: it accepts book names as input, checks the ID, and provides a location for the books. We know its role very clearly. If the system instead managed everything, such as security, employee details, and so on, we would not be able to identify the goal, actors, and scenarios clearly.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases should not be used to capture all the details of a system. The granularity to which you define use cases in a diagram should be enough to keep the use case diagram uncluttered and readable, yet, be complete without missing significant aspects of the required functionality.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The use cases are written from the system’s (not the actors’) point of view. Use cases written from the system’s point of view tempt the writer into technical details. If we wrote the example use case above from the system’s point of view, we would have statements like “obtain location of book from database and display location of books to user”. This is more detail than necessary, and it would not capture the interaction with the actor very clearly. Use cases also give a brief insight into how the UI should look, but when written from the system’s point of view these details might not be captured clearly.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Writing Use Cases ==&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Cases for a system is a process undertaken jointly by those designing the system and the Stakeholders.  Use Cases can take many different forms, depending on the type of development process being used [http://www.answers.com/topic/use-case?cat=technology].  The format that a Use Case takes is not as important as the process that it goes through [http://www.gatherspace.com/static/use_case_example.html].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Iterative Process ===&lt;br /&gt;
&amp;lt;p&amp;gt;Normally, this process is done iteratively, so that the iterations can build upon each other [http://www.gatherspace.com/static/use_case_example.html].  Below is an example of three iterations of the Use Case writing process to illustrate how it can reveal things about a system and lay out the functional requirements. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;For this example, we will be creating Use Cases to solve the following problem statement:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Develop a system to allow users to schedule meetings with each other.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this system, the stakeholders will be the users of the system.  The Users will also be the only Actors in the system. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 1 ====&lt;br /&gt;
&amp;lt;p&amp;gt;For the first iteration, we will write out short sentences to describe the functionality that the system will have.  Some use cases could be:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Request a Meeting&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Approve a meeting request&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Suggest a new time&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;View a User's schedule&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this example, the fourth Use Case may not have been an obvious requirement that could be derived from the original problem statement, but in the process of creating the Use Cases, it was discovered that it would be a good requirement for the system to have.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 2 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The next iteration involves writing out a longer description for each use case. This could be done in paragraph form, or by writing a list.  Below are the Use Cases in paragraph form:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 '''Use Case 1 – Request a meeting'''&lt;br /&gt;
 User A chooses the date and time for a meeting with User B.  User A chooses User B as the recipient of the meeting request.&lt;br /&gt;
 User A sends the meeting request to User B through the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 2 – Approve a Meeting Request'''&lt;br /&gt;
 User A receives a meeting request from User B and accepts the user request.  A notification that User A has accepted the meeting is sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 3 – Suggest a different meeting time'''&lt;br /&gt;
 User A receives a meeting request from User B and suggests a different time and/or date for the meeting.  The response is sent to User B through &lt;br /&gt;
 the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 4 – View a User's schedule'''&lt;br /&gt;
 User A would like to schedule a meeting with User B.  User A starts the system and opens up User B's schedule.  &lt;br /&gt;
 User A can see when User B has already scheduled  meetings and User A can then use that information to send User B a meeting request.&lt;br /&gt;
 User A can also access their own schedule to view when they have scheduled meetings.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;From writing out the Use Cases with more description, it should become clear that Use Case 2 and Use Case 3 are very similar.  In both Use Cases, User A responds to a meeting request and a response/notification is sent to User B.  This might lead the designers of the system to combine these two Use Cases into one Use Case for &amp;quot;Respond to Request.&amp;quot;&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 3 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In the next iteration, we're going to use a Use Case template.  There are many different Use Case templates, which include different information [SOURCES].  A template can be created that is unique for the system being described, provided each Use Case uses the same template. The template we will use will include:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case &amp;lt;Number&amp;gt;: Title&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.1: &amp;lt;Summary/Goal&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.2: &amp;lt;Actors&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.3: &amp;lt;Preconditions&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.4: &amp;lt;Main Path&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.5: &amp;lt;Alternate Paths – including sub-flows [S] and error-flows [E]&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Case 1 in this format yields:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case 1: Request a meeting&amp;lt;br&amp;gt;&lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
 User A can choose the date and time for a meeting with User B [Main].  User A can choose User B as the recipient of the meeting &lt;br /&gt;
 request [Main], or multiple Users as the recipient [S.2].  User A sends the meeting request to User B through the system [Main]. &lt;br /&gt;
 Before User A sends the meeting request, User A can opt not to send the request and delete it [S.1]. If User B is no longer in &lt;br /&gt;
 the system, User A receives notification that the meeting request cannot be sent [E.1].&amp;lt;br&amp;gt;&lt;br /&gt;
 1.2: Actors&lt;br /&gt;
 Users&amp;lt;br&amp;gt;&lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
 - User A and User B are recorded as Users in the system&lt;br /&gt;
 - User A has logged into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
 1) User A chooses a date for a meeting&lt;br /&gt;
 2) User A chooses a time for a meeting&lt;br /&gt;
 3) User A chooses User B as the recipient for the meeting request&lt;br /&gt;
 4) User A submits the meeting request&lt;br /&gt;
 5) User B receives the meeting request the next time User B logs into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.5: Alternate Paths&lt;br /&gt;
 S.1 &lt;br /&gt;
 User A creates a meeting request, but does not submit it.  A meeting request is not sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 S.2&lt;br /&gt;
 User A creates a meeting request for more than one User.  User A submits the meeting request and each User receives it the next&lt;br /&gt;
 time they log in.&amp;lt;br&amp;gt;&lt;br /&gt;
 E.1&lt;br /&gt;
 User B is no longer a User in the system.  User A receives notification that the meeting request cannot be sent.&amp;lt;br&amp;gt;&lt;br /&gt;
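&lt;br /&gt;
&amp;lt;p&amp;gt;As an illustrative sketch only (all names below are hypothetical, not part of the article's system design), the three kinds of path in Use Case 1 can be expressed directly in code: the main path, the multi-recipient sub-flow [S.2], and the error flow [E.1] where a recipient is no longer in the system.&amp;lt;/p&amp;gt;&lt;br /&gt;

```python
# Illustrative sketch only (names are hypothetical): the three kinds
# of path in Use Case 1 above: the main path, sub-flow S.2 (multiple
# recipients), and error flow E.1 (recipient no longer in the system).
def request_meeting(sender, recipients, registered_users):
    # E.1: a recipient is no longer a User in the system
    missing = [r for r in recipients if r not in registered_users]
    if missing:
        return ("E.1", "Cannot send: " + ", ".join(missing) + " not in system")
    # S.2: more than one recipient; otherwise the main path
    path = "S.2" if len(recipients) > 1 else "Main"
    return (path, f"Meeting request queued for {len(recipients)} user(s)")

registered = {"UserA", "UserB", "UserC"}
print(request_meeting("UserA", ["UserB"], registered))           # main path
print(request_meeting("UserA", ["UserB", "UserC"], registered))  # sub-flow S.2
print(request_meeting("UserA", ["UserX"], registered))           # error flow E.1
```

&amp;lt;p&amp;gt;Each labeled branch corresponds to one alternate path in section 1.5, which is one way to check that the written use case covers every outcome the code would need to handle.&amp;lt;/p&amp;gt;&lt;br /&gt;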
&lt;br /&gt;
==== Results ====&lt;br /&gt;
&amp;lt;p&amp;gt;There are a few interesting things revealed about the system by re-writing the Use Case in this format.  The most important is likely that Users will need to &amp;quot;log-in&amp;quot; to the system.  This implies that there could be another Actor in the system, namely an Admin.  This leads to these additional Use Cases:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;5&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Log into the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;6&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Create a User in the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Admin&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Each of these new Use Cases would then go through the iterations listed above until they are in the template form. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see from the example, each iteration refines the Use Case and helps to clarify the requirements of the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Use Case Diagrams ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use Case Diagrams are useful for showing how each component in a system will interact with other components of the system [http://agile.csc.ncsu.edu/SEMaterials/UseCaseRequirements.pdf].  They are not good for showing the flow of events that a system will have, like the written Use Cases are [http://www.agilemodeling.com/artifacts/useCaseDiagram.htm].  Also, unlike written Use Cases, Use Case Diagrams use UML so that there is a standard. [http://en.wikipedia.org/wiki/Unified_Modeling_Language UML] is a standardized modeling language for software development.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Components of a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;quot;UCDs have only 4 major elements: The actors that the system you are describing interacts with, the system itself, the use cases, or services, that the system knows how to perform, and the lines that represent relationships between these elements.&amp;quot;[http://www.andrew.cmu.edu/course/90-754/umlucdfaq.html#uses]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Actors''' in Use Case Diagrams are represented by stick figures:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Actor.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Use Cases''' are represented by ovals:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UseCase.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Relationships''' are represented by Solid Lines.  Sometimes, arrowheads are added to the lines to indicate the direction of the invocation, or to show which actor is the primary actor [http://www.agilemodeling.com/artifacts/useCaseDiagram.htm]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Lines.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Two special relationships that can be shown in a Use Case Diagram are the ''Extends'' and ''Includes'' relationships.  These relationships are usually shown with a dotted line with an arrowhead and &amp;lt;&amp;lt;extend&amp;gt;&amp;gt; or &amp;lt;&amp;lt;include&amp;gt;&amp;gt; written near the line.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Extends'' relationship is used to show when Use Case X is a special case of Use Case Y [SOURCE].  In this situation, the dotted line is drawn from Use Case X to Use Case Y with the arrowhead pointing to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Includes/Uses'' relationship is used to show that every time Use Case X is done, Use Case Y must also be done [SOURCE].  In this case, the arrow points to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Creating a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A Use Case Diagram for the same system described in the Writing Use Cases section might look something like this:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UCDiagram.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see, &amp;quot;Accept Meeting&amp;quot; and &amp;quot;Suggest new time&amp;quot; are special cases of &amp;quot;Respond to request&amp;quot;.  Also, from this diagram, the system designers are saying that in order to &amp;quot;Request a Meeting&amp;quot; the user must &amp;quot;View Schedule&amp;quot;.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Advanced Topics ==&lt;br /&gt;
&lt;br /&gt;
===Essential Use Cases vs. System Use Cases=== &lt;br /&gt;
&amp;lt;p&amp;gt;[http://en.wikipedia.org/wiki/Use_case]&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases may be described at the abstract level (business use case, sometimes called essential use case), or at the system level (system use case). The difference between these is the scope.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A &amp;lt;b&amp;gt;business use case&amp;lt;/b&amp;gt; is described in technology-free terminology which treats the system as a black box and describes the business process that is used by its business actors (people or systems external to the process) to achieve their goals (e.g., manual payment processing, expense report approval, manage corporate real estate). The business use case describes a process that provides value to the business actor, and it describes what the process does. Business Process Mapping is another method for this level of business description. A significant advantage of essential use cases is that they enable you to stand back and ask fundamental questions like &amp;quot;what's really going on&amp;quot; and &amp;quot;what do we really need to do&amp;quot; without letting implementation decisions get in the way.  These questions often lead to critical realizations that allow you to rethink, or reengineer if you prefer that term, aspects of the overall business process.&lt;br /&gt;
A very good example of an essential use case can be found at this link: http://www.agilemodeling.com/artifacts/essentialUseCase.htm &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A &amp;lt;b&amp;gt;system use case&amp;lt;/b&amp;gt; describes a system that automates a business use case or process. It is normally described at the system functionality level (for example, &amp;quot;create voucher&amp;quot;) and specifies the function or the service that the system provides for the actor. The system use case details what the system will do in response to an actor's actions. For this reason it is recommended that system use case specifications begin with a verb (e.g., create voucher, select payments, exclude payment, cancel voucher). An actor can be a human user or another system/subsystem interacting with the system being defined.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Pluggable Use Cases:===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://alistair.cockburn.us/Pluggable+use+cases]&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases describe various scenarios and sequences of operations to achieve a given goal between actors and systems. We have seen that we can write use cases in various styles, with varying degrees of detail, varying aims, and varying audiences, in terms of business and system use cases. Business use cases are very readable, with very little technical content, so that stakeholders and business managers can understand the system as a black box. This may not work for developers, who would like to see more technical details and have use cases that describe fully what a system must do under all circumstances.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Experience has shown that a certain amount of common behavior is replicated across many business use cases. By extracting these common processing details (e.g., create, read, update, delete), the contents of use cases can be reduced. These common processing details can be put into what are called lower-level pluggable use cases. Essentially, we are creating various levels of use cases. At the highest level, the main use case shows the more fundamental processing steps, with the names of the pluggable use cases 'plugged' in between. This abstracts out lower-level details and keeps the use cases simpler. Business managers and stakeholders can read the higher-level use case, which remains simple and readable, while the lower-level details of interest to developers are not lost either. These use case levels provide the freedom to read at various degrees of granularity.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Pluggable use cases have to be written in a generic form, so that they can be plugged into other use cases wherever needed. All the rules of a regular use case still apply in terms of finding goals (which are essentially sub-goals of the system), actors, and so on.&lt;br /&gt;
Pluggable use cases can be produced in a way where their content is the same for all transactions, that is, common to various scenarios and projects. The difference in each project or scenario is mainly the data handled and the sequence of activities performed. The unique data and rules of each scenario are separated and documented independently of the process steps in &amp;quot;Companion Tables&amp;quot;, which enables flexibility and maximum reuse.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Thus pluggable use cases become building blocks for higher level use cases. They are organized and applied within each use case to reach its goals. Whenever a pluggable use case is invoked, the invocation references the companion table that provides the unique data and rules for that use case. They are most effective when they are used in conjunction with these tables.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use case content is dependent on the requirements of the system under design. This is also true for pluggable use cases. While the majority of pluggable use case content can be used verbatim across any project or company, minor customizations may be needed to accommodate the individual needs of the project and company.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Some use case sequences are simple while others are complex. To manage this complexity, a use case can exercise one, several, or a series of pluggable use cases wherever desired and in any order. To maximize cohesion and increase reusability, a pluggable use case may itself employ another pluggable use case. This versatility provides a solid foundation for the construction of project use cases.&amp;lt;/p&amp;gt;&lt;br /&gt;
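&amp;lt;p&amp;gt;The composition described above is analogous to reusing functions in code. The sketch below is only an analogy, and every name in it (validate_record, create_record, main_use_case, the companion dictionary) is invented for illustration: each pluggable use case is a reusable step, the companion table supplies the scenario-specific data and rules, and the main use case composes the steps.&amp;lt;/p&amp;gt;&lt;br /&gt;

```python
# Analogy only: pluggable use cases as reusable steps, with a
# "companion table" supplying the scenario-specific data and rules.
# All names here are invented for illustration.

def validate_record(record, rules):
    # Lower-level pluggable step, generic across scenarios
    return all(rule(record) for rule in rules)

def create_record(store, record):
    # Another generic lower-level step
    store.append(record)
    return record

def main_use_case(store, record, companion_table):
    # Higher-level flow: the generic steps are plugged in between
    # the fundamental processing steps.
    if not validate_record(record, companion_table["rules"]):
        return "rejected"
    create_record(store, record)
    return "created"

orders = []
companion = {"rules": [lambda r: r["qty"] > 0]}  # scenario-specific rule
print(main_use_case(orders, {"qty": 3}, companion))  # created
print(main_use_case(orders, {"qty": 0}, companion))  # rejected
```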
&lt;br /&gt;
&amp;lt;p&amp;gt;The following link has excellent examples of how pluggable use cases can be written and used: http://alistair.cockburn.us/Pluggable+use+cases&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tools and Examples ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;These links are taken from http://pg-server.csc.ncsu.edu/mediawiki/index.php/CSC/ECE_517_Fall_2007/wiki2_4_np&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Tools ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different tools available to create Use Cases. For a more comprehensive list, go to [http://en.wikipedia.org/wiki/List_of_Unified_Modeling_Language_tools Wikipedia's list of UML modeling tools]. A sampling is given below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;'''Rational Rose:'''&lt;br /&gt;
&lt;br /&gt;
One of the most popular tools for use-case driven development.&lt;br /&gt;
&lt;br /&gt;
http://www-306.ibm.com/software/awdtools/developer/rose/index.html&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Sun Java Studio Enterprise:''' &lt;br /&gt;
&lt;br /&gt;
Sun Java Studio Enterprise offers a UML tool. &lt;br /&gt;
&lt;br /&gt;
http://developers.sun.com/jsenterprise/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Visual case:''' &lt;br /&gt;
&lt;br /&gt;
UML &amp;amp; E/R Database Design Tool&lt;br /&gt;
&lt;br /&gt;
http://www.visualcase.com/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different examples of Use Cases.  Some are listed below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.objectmentor.com/resources/articles/usecases.pdf &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.w3.org/2002/06/ws-example &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.agilemodeling.com/essays/useCaseReuse.htm &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.soi.wide.ad.jp/class/20040034/slides/07/9.html &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.cs.colorado.edu/~kena/classes/6448/s05/reference/usecases/examples.html &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://courses.softlab.ntua.gr/softeng/Tutorials/UML-Use-Cases.pdf &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
== Further Reading ==&lt;br /&gt;
&amp;lt;p&amp;gt;These links are taken from http://pg-server.csc.ncsu.edu/mediawiki/index.php/CSC/ECE_517_Fall_2007/wiki2_4_np&amp;lt;/p&amp;gt;&lt;br /&gt;
=== Quick references ===&lt;br /&gt;
&lt;br /&gt;
Some quick references for studying use cases:&lt;br /&gt;
&lt;br /&gt;
•	http://www.oreilly.com.cn/samplechap/uml20inanutshell/UML20-ch07.pdf&lt;br /&gt;
&lt;br /&gt;
•	http://www.cs.rit.edu/~jaa/CS4/Lectures/UseCase.PDF&lt;br /&gt;
&lt;br /&gt;
•	http://www.alagad.com/go/blog-entry/uml-use-case-diagrams&lt;br /&gt;
&lt;br /&gt;
•	http://www.cs.nmsu.edu/~jeffery/courses/371/lecture.html&lt;br /&gt;
&lt;br /&gt;
=== Good Tutorials ===&lt;br /&gt;
&lt;br /&gt;
•	http://www.parlezuml.com/tutorials/usecases/usecases.pdf&lt;br /&gt;
&lt;br /&gt;
•	http://www.readysetpro.com/whitepapers/usecasetut.html &lt;br /&gt;
&lt;br /&gt;
Two very easy and informative tutorials for beginners who are not familiar with use cases. Both the tutorials contain some very good and simple examples coupled with easily understandable pictures. The first tutorial focuses on use case driven development and UML diagrams while the second deals with writing effective use cases.&lt;br /&gt;
&lt;br /&gt;
=== Presentations Online ===&lt;br /&gt;
&lt;br /&gt;
•	https://users.cs.jmu.edu/bernstdh/web/common/lectures/slides_use-cases.php&lt;br /&gt;
&lt;br /&gt;
This presentation describes writing use cases along with constructing use case diagrams.&lt;br /&gt;
&lt;br /&gt;
•	http://www-rohan.sdsu.edu/faculty/rnorman/course/ids306/Lect_c4.ppt&lt;br /&gt;
&lt;br /&gt;
This is a very good presentation that explains the concepts with familiar real life examples.&lt;br /&gt;
&lt;br /&gt;
•	http://www.cragsystems.com/SFRWUC/index.htm&lt;br /&gt;
&lt;br /&gt;
This web-based tutorial describes creating a Use Case Model of the functional requirements for a computer system.&lt;br /&gt;
&lt;br /&gt;
=== Books ===&lt;br /&gt;
&lt;br /&gt;
The following are good books for learning about use cases:&lt;br /&gt;
&lt;br /&gt;
[[http://www.amazon.com/Writing-Effective-Cases-Alistair-Cockburn/dp/0201702258 Writing Effective Use Cases by Alistair Cockburn ]]&lt;br /&gt;
&lt;br /&gt;
[[http://www.amazon.com/Object-Oriented-Software-Engineering-Driven-Approach/dp/0201544350 Object-Oriented Software Engineering: A Use Case Driven Approach by Ivar Jacobson ]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[http://alistair.cockburn.us/Use+case+fundamentals Cockburn, Alistair. &amp;quot;Use case fundamentals.&amp;quot; Alistair Cockburn. May 10, 2006. http://alistair.cockburn.us/Use+case+fundamentals. Accessed: 10/18/2010]&lt;br /&gt;
&lt;br /&gt;
[http://alistair.cockburn.us/Pluggable+use+cases Cockburn, Alistair. &amp;quot;Pluggable use cases.&amp;quot; Alistair Cockburn. August 2, 2004. http://alistair.cockburn.us/Pluggable+use+cases. Accessed: 10/18/2010]&lt;br /&gt;
&lt;br /&gt;
[http://www.bredemeyer.com/use_cases.htm The Architecture Discipline. &amp;quot;Functional Requirements and Use Cases.&amp;quot; Bredemeyer Consulting. July 25, 2006. http://www.bredemeyer.com/use_cases.htm. Accessed: 10/18/10]&lt;br /&gt;
&lt;br /&gt;
[http://www.answers.com/topic/use-case?cat=technology &amp;quot;Use case.&amp;quot; Answers.com. ReferenceAnswers. Unknown. http://www.answers.com/topic/use-case?cat=technology. Accessed: 10/19/10]&lt;br /&gt;
&lt;br /&gt;
http://www.gatherspace.com/static/use_case_example.html&lt;br /&gt;
http://agile.csc.ncsu.edu/SEMaterials/UseCaseRequirements.pdf&lt;br /&gt;
http://www.agilemodeling.com/artifacts/useCaseDiagram.htm&lt;br /&gt;
http://en.wikipedia.org/wiki/Unified_Modeling_Language&lt;br /&gt;
http://www.andrew.cmu.edu/course/90-754/umlucdfaq.html&lt;br /&gt;
http://www.agilemodeling.com/artifacts/essentialUseCase.htm&lt;br /&gt;
http://alistair.cockburn.us/Pluggable+use+cases&lt;br /&gt;
http://pg-server.csc.ncsu.edu/mediawiki/index.php/CSC/ECE_517_Fall_2007/wiki2_4_np&lt;br /&gt;
&lt;br /&gt;
http://en.wikipedia.org/wiki/Use_case&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38918</id>
		<title>CSC/ECE 517 Fall 2010/ch4 4a RJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38918"/>
		<updated>2010-10-20T16:57:52Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font size=5&amp;gt;Use Cases&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases can be defined in many ways, some quite formal. Very simply put, a use case is a reason to use a system. For example, a student borrowing a book from a library would be a use case of the library, or a bank cardholder might use an ATM to get cash out of their account. More formally, “a use case is a collection of possible sequences of interactions between the system under discussion and its Users (or Actors), relating to a particular goal” [http://alistair.cockburn.us/Use+case+fundamentals].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A use case is initiated by a user with a particular goal in mind, and completes successfully when that goal is satisfied. The system is treated as a &amp;quot;black box&amp;quot;, and the interactions with the system, including system responses, are described as they are perceived from outside the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;quot;Use cases capture who (actor) does what (interaction) with the system, for what purpose (goal), without dealing with system internals. A complete set of use cases specifies all the different ways to use the system, and therefore defines all behavior required of the system, bounding the scope of the system.&amp;quot;[http://www.bredemeyer.com/use_cases.htm]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Use Case Basics ==&lt;br /&gt;
&lt;br /&gt;
===Terms used with Use cases===&lt;br /&gt;
Now let us define some terms used with use cases: [http://en.wikipedia.org/wiki/Use_case]&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Actor:&amp;lt;/b&amp;gt; An actor is a type of user that interacts with the system (e.g., a student borrowing a book or a cardholder using an ATM). Actors are external entities (people or other systems) who interact with the system to achieve a desired goal, and that goal must be of value to the actor.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Goal:&amp;lt;/b&amp;gt; Without a goal, a use case is useless; there is no need for a use case if no actor needs to achieve anything. A goal briefly describes what the user intends to achieve with the use case. For example, the goal of a student using the library is to obtain a book. There is no point in having a use case like “the student enters the library”, as that in itself has no value to the actor.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Stakeholder:&amp;lt;/b&amp;gt; A stakeholder is an individual or department that is affected by the outcome of the use case. Individuals are usually agents of the organization or department for which the use case is being created. A stakeholder might be called on to provide input, feedback, or authorization for the use case. The stakeholder section of the use case can include a brief description of which of these functions the stakeholder is assigned to fulfill.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Trigger:&amp;lt;/b&amp;gt; A trigger describes the event that causes the use case to be initiated. This event can be external or internal. If the trigger is not a simple true &amp;quot;event&amp;quot; (e.g., the customer presses a button), but instead &amp;quot;when a set of conditions are met&amp;quot;, there will need to be a triggering process that continually (or periodically) runs to test whether the &amp;quot;trigger conditions&amp;quot; are met: the &amp;quot;triggering event&amp;quot; is a signal from the trigger process that the conditions are now met. &lt;br /&gt;
In our example, a trigger would be the need for a book, due to an approaching exam or test, which causes the student to go to the library to borrow it.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Precondition:&amp;lt;/b&amp;gt; A precondition defines all the conditions that must be true (i.e., describes the state of the system) for the trigger to meaningfully cause the initiation of the use case. That is, if the system is not in the state described in the preconditions, the behavior of the use case is indeterminate. For example, the student should be a member of the library and have the required identity to borrow a book. If the student is not a member of the library, there is no point in the student trying to borrow a book from that library.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Scenarios:&amp;lt;/b&amp;gt; A scenario usually specifies when the use case starts and ends. It describes the interaction with actors and shows the flow of events between a user and the system. For example, when a student tries to borrow a particular book from the library, it does not always turn out the same way: sometimes the book is available, sometimes it is already borrowed by someone else, and sometimes the library does not have the book at all. These are all examples of use case scenarios. The outcome in each case is different depending on circumstances, but they all relate to the same goal; that is, they are all triggered by the same need (in this case, the need for the book) and all the scenarios have the same starting point.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Simple Example===&lt;br /&gt;
&amp;lt;p&amp;gt;Now that we know something about use cases, let us go ahead and describe a simple use case:&amp;lt;/p&amp;gt;&lt;br /&gt;
 Use Case 1: Request book from the library (automated system).&lt;br /&gt;
 &lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
   To borrow a particular book from the library&lt;br /&gt;
 &lt;br /&gt;
 1.2: Actors&lt;br /&gt;
   Student&lt;br /&gt;
 &lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
   Student should be a member of the library and have an id.&lt;br /&gt;
 &lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
   System requests the student ID and checks if he/she is a member&lt;br /&gt;
   Student selects “request book”&lt;br /&gt;
   Student enters name(s) of the book(s)&lt;br /&gt;
   System checks for availability of books and displays results accordingly&lt;br /&gt;
   Student confirms the order&lt;br /&gt;
   System displays details of where the requested books are stacked&lt;br /&gt;
 &lt;br /&gt;
 1.5: Alternate Path&lt;br /&gt;
   System does not recognize the ID or the student is not a member. The system will not allow any books to be checked out.&lt;br /&gt;
   All books requested are already checked out. Displays this information to student and closes request.&lt;br /&gt;
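&amp;lt;p&amp;gt;One quick way to sanity-check the paths of a use case like the one above is to sketch them as a small program. The sketch below is purely illustrative; the names (MEMBERS, CATALOG, request_book) are invented for this example, and the function models the main path and both alternate paths of Use Case 1.&amp;lt;/p&amp;gt;&lt;br /&gt;

```python
# Hypothetical sketch: the main and alternate paths of "Request book"
# modeled as a single function. MEMBERS, CATALOG, and request_book are
# invented names for illustration, not part of any real system.

MEMBERS = {"s123"}                      # registered student IDs
CATALOG = {"Intro to UML": "Shelf 4B"}  # title -> stack location

def request_book(student_id, title):
    # Alternate path 1: system does not recognize the ID
    if student_id not in MEMBERS:
        return "ID not recognized; no books may be checked out"
    # Alternate path 2: book unavailable or already checked out
    location = CATALOG.get(title)
    if location is None:
        return "Book unavailable; request closed"
    # Main path: confirm the order and show where the book is stacked
    return "Requested book is stacked at " + location

print(request_book("s123", "Intro to UML"))  # main path
print(request_book("s999", "Intro to UML"))  # alternate path 1
```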
&lt;br /&gt;
===Important Characteristics of Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;The above description shows a very simple use case. However, there are a few essential characteristics to be noticed about the use case:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have identified the key components of a use case: the goal, actors, preconditions, and key scenarios/flows. It is essential to identify these components before writing a use case.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have not gone into any sort of technical details about implementation or user interface design. Use cases only represent a very high level design. We are only trying to understand the flow and uses of the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have recorded a set of paths (scenarios) that traverse an actor from a trigger event (start of the use case) to the goal (success scenarios).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have recorded a set of scenarios that traverse an actor from a trigger event toward a goal but fall short of the goal (failure scenarios).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Where can we use ‘use cases’?===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases are usually used to capture the requirements of an interaction-based system. When there is a lot of interaction between actors and the system, it makes sense to capture as many interactions and scenarios as possible before starting development of the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases help to eliminate rework due to requirements misunderstandings between developers and stakeholders by aiming to reach a point where there are no surprises for the users. Use cases help to build an explicit shared understanding that everyone can take away with them, the users, developers, testers, technical authors, and others.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases have received some interest as a starting point for test design. By analyzing use cases for the system, we can know various interactions between the system and actors which will help in drawing out test plans.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
===Where can’t we use ‘use cases’?===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://en.wikipedia.org/wiki/Use_case]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use case flows are not well suited to easily capturing non-interaction based requirements of a system (such as algorithm or mathematical requirements) or non-functional requirements (such as platform, performance, timing, or safety-critical aspects). These are better specified declaratively elsewhere.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Some systems are better described in an information/data-driven approach than in a functionality-driven approach of use cases. A good example of this kind of system is data-mining systems used for Business Intelligence. If you were to describe this kind of system in a use case model, it would be quite small and uninteresting (there are not many different functions here) but the set of data that the system handles may nevertheless be large and rich in details.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Common mistakes while writing Use cases===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The system boundary is undefined or inconsistent.  A system boundary separates the internal components of a system from external entities.  If we cannot identify the system boundary, we will not be able to clearly define the actors, scenarios, and other essential elements of a good and useful use case. For example, the system described in the library example has a clear boundary: it accepts book names as input, checks the ID, and provides a location for the books. We know its role very clearly. If instead the system managed everything, such as security, employee details, etc., we would not be able to identify the goal, actors, and scenarios clearly.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases should not be used to capture all the details of a system. The granularity to which you define use cases in a diagram should be enough to keep the use case diagram uncluttered and readable, yet, be complete without missing significant aspects of the required functionality.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The use cases are written from the system’s (not the actors’) point of view. Use cases written from a system point of view tempt the writer into technical details. If we wrote the example use case above from the system’s point of view, we would have statements like “obtain location of book from database and display location of books to user”. This is more detail than necessary, and it does not capture the interaction with the actor very clearly. Use cases also give a brief insight into how the UI should look, but when written from the system’s point of view these details might not be captured clearly.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Writing Use Cases ==&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Cases for a system is a process taken between those designing the system and the Stakeholders.  Use Cases can take many different forms, depending on the type of development process being used [http://www.answers.com/topic/use-case?cat=technology].  The format that a Use Case takes is not as important as the process that it goes through [http://www.gatherspace.com/static/use_case_example.html].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Iterative Process ===&lt;br /&gt;
&amp;lt;p&amp;gt;Normally, this process is done iteratively, so that the iterations can build upon each other [http://www.gatherspace.com/static/use_case_example.html].  Below is an example of three iterations of the Use Case writing process to illustrate how it can reveal things about a system and layout the functional requirements. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;For this example, we will be creating Use Cases to solve the following problem statement:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Develop a system to allow users to schedule meetings with each other.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this system, the stakeholders will be the users of the system.  The Users will also be the only Actors in the system. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 1 ====&lt;br /&gt;
&amp;lt;p&amp;gt;For the first iteration, we will write out short sentences to describe the functionality that the system will have.  Some use cases could be:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Request a Meeting&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Approve a meeting request&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Suggest a new time&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;View a User's schedule&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this example, the fourth Use Case may not have been an obvious requirement that could be derived from the original problem statement, but in the process of creating the Use Cases, it was discovered that it would be a good requirement for the system to have.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 2 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The next iteration would involve writing out longer descriptions for each use case. This could be done in paragraph form, or by writing a list.  Below are the Use Cases in paragraph form:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 '''Use Case 1 – Request a meeting'''&lt;br /&gt;
 User A chooses the date and time for a meeting with User B.  User A chooses User B as the recipient of the meeting request.&lt;br /&gt;
 User A sends the meeting request to User B through the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 2 – Approve a Meeting Request'''&lt;br /&gt;
 User A receives a meeting request from User B and accepts the user request.  A notification that User A has accepted the meeting is sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 3 – Suggest a different meeting time'''&lt;br /&gt;
 User A receives a meeting request from User B and suggests a different time and/or date for the meeting.  The response is sent to User B through &lt;br /&gt;
 the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 4 – View a User's schedule'''&lt;br /&gt;
 User A would like to schedule a meeting with User B.  User A starts the system and opens up User B's schedule.  &lt;br /&gt;
 User A can see when User B has already scheduled meetings and can then use that information to send User B a meeting request.&lt;br /&gt;
 User A can also access their own schedule to view when User A has scheduled meetings.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;From writing out the Use Cases with more description, it should become clear that Use Case 2 and Use Case 3 are very similar.  In both Use Cases, User A responds to a meeting request and a response/notification is sent to User B.  This might lead the designers of the system to combine these two Use Cases into one Use Case for &amp;quot;Respond to Request.&amp;quot;&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 3 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In the next iteration, we're going to use a Use Case template.  There are many different Use Case templates, which include different information [SOURCES].  A template can be created that is unique for the system being described, provided each Use Case uses the same template. The template we will use will include:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case &amp;lt;Number&amp;gt;: Title&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.1: &amp;lt;Summary/Goal&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.2: &amp;lt;Actors&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.3: &amp;lt;Preconditions&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.4: &amp;lt;Main Path&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.5: &amp;lt;Alternate Paths – including sub-flows [S] and error-flows [E]&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Case 1 in this format yields:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case 1: Request a meeting&amp;lt;br&amp;gt;&lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
 User A can choose the date and time for a meeting with User B [Main].  User A can choose User B as the recipient of the meeting &lt;br /&gt;
 request [Main], or multiple Users as the recipient [S.2].  User A sends the meeting request to User B through the system [Main]. &lt;br /&gt;
 Before User A sends the meeting request, User A can opt not to send the request and delete it [S.1]. If User B is no longer in &lt;br /&gt;
 the system, User A receives notification that the meeting request cannot be sent [E.1].&amp;lt;br&amp;gt;&lt;br /&gt;
 1.2: Actors&lt;br /&gt;
 Users&amp;lt;br&amp;gt;&lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
 - User A and User B are recorded as Users in the system&lt;br /&gt;
 - User A has logged into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
 1) User A chooses a date for a meeting&lt;br /&gt;
 2) User A chooses a time for a meeting&lt;br /&gt;
 3) User A chooses User B as the recipient for the meeting request&lt;br /&gt;
 4) User A submits the meeting request&lt;br /&gt;
 5) User B receives the meeting request the next time User B logs into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.5: Alternate Paths&lt;br /&gt;
 S.1 &lt;br /&gt;
 User A creates a meeting request, but does not submit it.  A meeting request is not sent to User B&amp;lt;br&amp;gt;&lt;br /&gt;
 S.2&lt;br /&gt;
 User A creates a meeting request for more than one User.  User A submits the meeting request and each User receives it the next&lt;br /&gt;
 time they log in&amp;lt;br&amp;gt;&lt;br /&gt;
 E.1&lt;br /&gt;
 User B is no longer a User in the system.  User A receives notification that the meeting request cannot be sent.&amp;lt;br&amp;gt;&lt;br /&gt;
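&amp;lt;p&amp;gt;As with the library example, the labeled paths ([Main], [S.1], [S.2], [E.1]) can be double-checked by sketching them as a small program. The sketch below is only illustrative; the names (MeetingSystem, request_meeting) are invented and not part of any real scheduling system.&amp;lt;/p&amp;gt;&lt;br /&gt;

```python
# Hypothetical sketch of Use Case 1 (Request a meeting). The class and
# method names are invented for illustration only.

class MeetingSystem:
    def __init__(self, users):
        self.users = set(users)                    # Users recorded in the system
        self.inbox = {u: [] for u in self.users}   # pending requests per User

    def request_meeting(self, sender, recipients, date, time, submit=True):
        # S.1: the request is created but never submitted
        if not submit:
            return "request deleted; nothing sent"
        # E.1: a recipient is no longer a User in the system
        missing = [r for r in recipients if r not in self.users]
        if missing:
            return "cannot send: unknown recipients " + ", ".join(missing)
        # Main path / S.2: deliver to one or more recipients
        for r in recipients:
            self.inbox[r].append((sender, date, time))
        return "request sent to " + ", ".join(recipients)

system = MeetingSystem(["A", "B", "C"])
print(system.request_meeting("A", ["B"], "2010-10-25", "10:00"))       # Main
print(system.request_meeting("A", ["B", "C"], "2010-10-25", "10:00"))  # S.2
print(system.request_meeting("A", ["Z"], "2010-10-25", "10:00"))       # E.1
```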
&lt;br /&gt;
==== Results ====&lt;br /&gt;
&amp;lt;p&amp;gt;There are a few interesting things revealed about the system by re-writing the Use Case in this format.  The most important is likely that Users will need to &amp;quot;log-in&amp;quot; to the system.  This implies that there could be another Actor in the system, namely an Admin.  This leads to these additional Use Cases:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;5&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Log into the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;6&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Create a User in the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Admin&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Each of these new Use Cases would then go through the iterations listed above until they are in the template form. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see from the example, each iteration refines the Use Case and helps to clarify the requirements of the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Use Case Diagrams ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use Case Diagrams are useful for showing how each component in a system will interact with other components of the system [http://agile.csc.ncsu.edu/SEMaterials/UseCaseRequirements.pdf].  They are not good for showing the flow of events that a system will have, like the written Use Cases are [http://www.agilemodeling.com/artifacts/useCaseDiagram.htm].  Also, unlike written Use Cases, Use Case Diagrams use UML so that there is a standard. [http://en.wikipedia.org/wiki/Unified_Modeling_Language UML] is a standardized modeling language for software development.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Components of a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;quot;UCDs have only 4 major elements: The actors that the system you are describing interacts with, the system itself, the use cases, or services, that the system knows how to perform, and the lines that represent relationships between these elements.&amp;quot;[http://www.andrew.cmu.edu/course/90-754/umlucdfaq.html#uses]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Actors''' in Use Case Diagrams are represented by stick figures:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Actor.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Use Cases''' are represented by ovals:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UseCase.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Relationships''' are represented by Solid Lines.  Sometimes, arrowheads are added to the lines to indicate the direction of the invocation, or to show which actor is the primary actor [http://www.agilemodeling.com/artifacts/useCaseDiagram.htm]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Lines.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Two special relationships that can be shown in a Use Case Diagram are the ''Extends'' and ''Includes'' relationships.  These relationships are usually shown with a dotted line with an arrowhead and &amp;lt;&amp;lt;extend&amp;gt;&amp;gt; or &amp;lt;&amp;lt;include&amp;gt;&amp;gt; written near the line.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Extends'' relationship is used to show when Use Case X is a special case of Use Case Y [SOURCE].  In this situation, the dotted line is drawn from Use Case X to Use Case Y with the arrowhead pointing to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Includes/Uses'' relationship is used to show that every time Use Case X is done, Use Case Y must also be done [SOURCE].  In this case, the arrow points to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Creating a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A Use Case Diagram for the same system described in the Writing Use Cases section might look something like:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UCDiagram.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see, &amp;quot;Accept Meeting&amp;quot; and &amp;quot;Suggest new time&amp;quot; are special cases of &amp;quot;Respond to request&amp;quot;.  Also, from this diagram, the system designers are saying that in order to &amp;quot;Request a Meeting&amp;quot; the user must &amp;quot;View Schedule&amp;quot;.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Advanced Topics ==&lt;br /&gt;
&lt;br /&gt;
===Essential Use Cases vs. System Use Cases=== &lt;br /&gt;
&amp;lt;p&amp;gt;[http://en.wikipedia.org/wiki/Use_case]&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases may be described at the abstract level (business use case, sometimes called essential use case), or at the system level (system use case). The difference between these is the scope.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A &amp;lt;b&amp;gt;business use case&amp;lt;/b&amp;gt; is described in technology-free terminology which treats the system as a black box and describes the business process that is used by its business actors (people or systems external to the process) to achieve their goals (e.g., manual payment processing, expense report approval, managing corporate real estate). The business use case describes a process that provides value to the business actor, and it describes what the process does. Business Process Mapping is another method for this level of business description. A significant advantage of essential use cases is that they enable you to stand back and ask fundamental questions like &amp;quot;what's really going on&amp;quot; and &amp;quot;what do we really need to do&amp;quot; without letting implementation decisions get in the way.  These questions often lead to critical realizations that allow you to rethink, or reengineer if you prefer that term, aspects of the overall business process.&lt;br /&gt;
A very good example of an essential use case can be seen at this link: http://www.agilemodeling.com/artifacts/essentialUseCase.htm &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A &amp;lt;b&amp;gt;system use case&amp;lt;/b&amp;gt; describes a system that automates a business use case or process. It is normally described at the system functionality level (for example, &amp;quot;create voucher&amp;quot;) and specifies the function or the service that the system provides for the actor. The system use case details what the system will do in response to an actor's actions. For this reason it is recommended that system use case specifications begin with a verb (e.g., create voucher, select payments, exclude payment, cancel voucher). An actor can be a human user or another system/subsystem interacting with the system being defined.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Pluggable Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://alistair.cockburn.us/Pluggable+use+cases]&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases describe various scenarios and sequences of operations between actors and systems to achieve a given goal. We have seen that use cases can be written in various styles, with varying degrees of detail, varying aims, and for varying audiences, in the form of business and system use cases. Business use cases are very readable, with very little technical content, so that stakeholders and business managers can understand the system as a black box. This may not work for developers, who would like to see more technical details and have use cases that describe fully what a system must do under all circumstances.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Experience has shown that a certain amount of common behavior is replicated across many business use cases. By extracting these common processing details (e.g., create, read, update, delete), the contents of use cases can be reduced. These common processing details can be put into what are called lower-level pluggable use cases. Essentially, we are creating various levels of use cases. At the highest level, the main use case shows the more fundamental processing steps, with the names of the pluggable use cases 'plugged' in between. This abstracts out lower-level details and keeps the use cases simpler. As a result, business managers and stakeholders can read the higher-level use case, which remains simple and readable, while the lower-level details of interest to developers are not lost either. These use case levels provide the freedom of reading at various degrees of granularity.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Pluggable use cases have to be written in a generic form so that they can be used wherever needed. They should be generic enough to be plugged into other use cases. All the rules of a regular use case still apply in terms of finding goals (which are essentially sub-goals of the system), actors, and so on.&lt;br /&gt;
Pluggable use cases can be produced in a way where their content is the same for all transactions, that is, common to various scenarios and projects. The difference in each project or scenario is mainly the data they handle and the sequence of activities being performed. The unique data and rules of each scenario are separated from the process steps and documented independently in &amp;quot;Companion Tables&amp;quot;, which enables flexibility and maximum reuse.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Thus, pluggable use cases become building blocks for higher-level use cases. They are organized and applied within each use case to reach its goals. Whenever a pluggable use case is invoked, the invocation references the companion table that provides the unique data and rules for that use case. They are most effective when they are used in conjunction with these tables.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use case content is dependent on the requirements of the system under design. This is also true for pluggable use cases. While the majority of pluggable use case content can be used verbatim across any project or company, minor customizations may be needed to accommodate the individual needs of the project and company.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Some use case sequences are simple while others are complex. To manage degrees of complexity, a use case can exercise one, multiple or a series of pluggable use cases wherever desired and in any order. To maximize use case cohesion and increase reusability a pluggable use case may employ another pluggable use case. The versatility of pluggable use cases provides a solid foundation for the construction of project use cases.        &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The following link has excellent examples of how pluggable use cases can be written and used http://alistair.cockburn.us/Pluggable+use+cases&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tools and Examples ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;These links are taken from http://pg-server.csc.ncsu.edu/mediawiki/index.php/CSC/ECE_517_Fall_2007/wiki2_4_np&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Tools ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different tools available to create Use Cases. For a more comprehensive list, go to [http://en.wikipedia.org/wiki/List_of_Unified_Modeling_Language_tools Wikipedia's list of UML modeling tools]. A sampling is below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;'''Rational Rose:'''&lt;br /&gt;
&lt;br /&gt;
One of the most popular tools for use-case-driven development.&lt;br /&gt;
&lt;br /&gt;
http://www-306.ibm.com/software/awdtools/developer/rose/index.html&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Sun Java Studio Enterprise:''' &lt;br /&gt;
&lt;br /&gt;
Sun Java Studio Enterprise offers a UML tool. &lt;br /&gt;
&lt;br /&gt;
http://developers.sun.com/jsenterprise/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Visual case:''' &lt;br /&gt;
&lt;br /&gt;
UML &amp;amp; E/R Database Design Tool&lt;br /&gt;
&lt;br /&gt;
http://www.visualcase.com/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different examples of Use Cases.  Some are listed below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.objectmentor.com/resources/articles/usecases.pdf &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.w3.org/2002/06/ws-example &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.agilemodeling.com/essays/useCaseReuse.htm &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.soi.wide.ad.jp/class/20040034/slides/07/9.html &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.cs.colorado.edu/~kena/classes/6448/s05/reference/usecases/examples.html &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://courses.softlab.ntua.gr/softeng/Tutorials/UML-Use-Cases.pdf &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
== Further Reading ==&lt;br /&gt;
&amp;lt;p&amp;gt;These links are taken from http://pg-server.csc.ncsu.edu/mediawiki/index.php/CSC/ECE_517_Fall_2007/wiki2_4_np&amp;lt;/p&amp;gt;&lt;br /&gt;
=== Quick references ===&lt;br /&gt;
&lt;br /&gt;
Some quick references for studying use cases:&lt;br /&gt;
&lt;br /&gt;
•	http://www.oreilly.com.cn/samplechap/uml20inanutshell/UML20-ch07.pdf&lt;br /&gt;
&lt;br /&gt;
•	http://www.cs.rit.edu/~jaa/CS4/Lectures/UseCase.PDF&lt;br /&gt;
&lt;br /&gt;
•	http://www.alagad.com/go/blog-entry/uml-use-case-diagrams&lt;br /&gt;
&lt;br /&gt;
•	http://www.cs.nmsu.edu/~jeffery/courses/371/lecture.html&lt;br /&gt;
&lt;br /&gt;
=== Good Tutorials ===&lt;br /&gt;
&lt;br /&gt;
•	http://www.parlezuml.com/tutorials/usecases/usecases.pdf&lt;br /&gt;
&lt;br /&gt;
•	http://www.readysetpro.com/whitepapers/usecasetut.html &lt;br /&gt;
&lt;br /&gt;
Two very easy and informative tutorials for beginners who are not familiar with use cases. Both the tutorials contain some very good and simple examples coupled with easily understandable pictures. The first tutorial focuses on use case driven development and UML diagrams while the second deals with writing effective use cases.&lt;br /&gt;
&lt;br /&gt;
=== Presentations Online ===&lt;br /&gt;
&lt;br /&gt;
•	https://users.cs.jmu.edu/bernstdh/web/common/lectures/slides_use-cases.php&lt;br /&gt;
&lt;br /&gt;
This presentation describes writing use cases along with constructing use case diagrams.&lt;br /&gt;
&lt;br /&gt;
•	http://www-rohan.sdsu.edu/faculty/rnorman/course/ids306/Lect_c4.ppt&lt;br /&gt;
&lt;br /&gt;
This is a very good presentation that explains the concepts with familiar real life examples.&lt;br /&gt;
&lt;br /&gt;
•	http://www.cragsystems.com/SFRWUC/index.htm&lt;br /&gt;
&lt;br /&gt;
This web-based tutorial describes creating a Use Case Model of the functional requirements for a computer system.&lt;br /&gt;
&lt;br /&gt;
=== Books ===&lt;br /&gt;
&lt;br /&gt;
The following are good books for learning about use cases:&lt;br /&gt;
&lt;br /&gt;
[http://www.amazon.com/Writing-Effective-Cases-Alistair-Cockburn/dp/0201702258 Writing Effective Use Cases by Alistair Cockburn]&lt;br /&gt;
&lt;br /&gt;
[http://www.amazon.com/Object-Oriented-Software-Engineering-Driven-Approach/dp/0201544350 Object-Oriented Software Engineering: A Use Case Driven Approach by Ivar Jacobson]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38917</id>
		<title>CSC/ECE 517 Fall 2010/ch4 4a RJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38917"/>
		<updated>2010-10-20T16:56:58Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font size=5&amp;gt;Use Cases&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases can be defined in many ways, and there are many formal definitions for them. Very simply put, a use case is a reason to use a system. For example, a student borrowing a book from a library would be a use case of the library, or a bank cardholder might need to use an ATM to get cash out of their account. More formally, “a use case is a collection of possible sequences of interactions between the system under discussion and its Users (or Actors), relating to a particular goal” [http://alistair.cockburn.us/Use+case+fundamentals].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A use case is initiated by a user with a particular goal in mind, and completes successfully when that goal is satisfied. The system is treated as a &amp;quot;black box&amp;quot;, and the interactions with the system, including system responses, are as perceived from outside the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;quot;Use cases capture who (actor) does what (interaction) with the system, for what purpose (goal), without dealing with system internals. A complete set of use cases specifies all the different ways to use the system, and therefore defines all behavior required of the system, bounding the scope of the system.&amp;quot;[http://www.bredemeyer.com/use_cases.htm]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Use Case Basics ==&lt;br /&gt;
&lt;br /&gt;
===Terms used with Use cases===&lt;br /&gt;
Now let us define some terms used with use cases: [http://en.wikipedia.org/wiki/Use_case]&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Actor:&amp;lt;/b&amp;gt; An actor is a type of user that interacts with the system (e.g., a student borrowing a book or a cardholder using an ATM). Actors are also external entities (people or other systems) who interact with the system to achieve a desired goal. The goal must be of value to the actor.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Goal:&amp;lt;/b&amp;gt; Without a goal a use case is useless. There is no need for a use case when there is no need for any actor to achieve a goal. A goal briefly describes what the user intends to achieve with this use case. For example, the goal of a student using the library is to obtain the book. There is no point in having a use case like “the student enters the library” as that in itself has no value to the actor.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Stakeholder:&amp;lt;/b&amp;gt; A stakeholder is an individual or department that is affected by the outcome of the use case. Individuals are usually agents of the organization or department for which the use case is being created. A stakeholder might be called on to provide input, feedback, or authorization for the use case. The stakeholder section of the use case can include a brief description of which of these functions the stakeholder is assigned to fulfill.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Trigger:&amp;lt;/b&amp;gt; A trigger describes the event that causes the use case to be initiated. This event can be external or internal. If the trigger is not a simple true &amp;quot;event&amp;quot; (e.g., the customer presses a button), but instead &amp;quot;when a set of conditions are met&amp;quot;, there will need to be a triggering process that continually (or periodically) runs to test whether the &amp;quot;trigger conditions&amp;quot; are met: the &amp;quot;triggering event&amp;quot; is a signal from the trigger process that the conditions are now met. &lt;br /&gt;
In our example with the student, a trigger would be the need for the book due to an approaching exam or test which causes the student to go to the library to borrow a book.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Precondition:&amp;lt;/b&amp;gt; A precondition defines all the conditions that must be true (i.e., describes the state of the system) for the trigger to meaningfully cause the initiation of the use case. That is, if the system is not in the state described in the preconditions, the behavior of the use case is indeterminate. For example, the student should be a member of the library and have the required identity to borrow a book. If the student is not a member of the library, there is no point in the student trying to borrow a book from that library.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Scenarios:&amp;lt;/b&amp;gt; A scenario usually specifies when the use case starts and ends. It describes the interaction with actors and shows the flow of events between a user and the system. For example, when a student tries to borrow a particular book from the library, it doesn’t always turn out the same way. Sometimes the book is available, sometimes it is already borrowed by someone else, or the library may not have the book at all. These are all examples of use case scenarios. The outcome in each case is different depending on circumstances, but they all relate to the same goal; that is, they are all triggered by the same need (in this case, the need for the book), and all the scenarios have the same starting point.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Simple Example===&lt;br /&gt;
&amp;lt;p&amp;gt;Now that we know something about use cases, let us go ahead and describe a simple use case:&amp;lt;/p&amp;gt;&lt;br /&gt;
 Use Case 1: Request book from the library (automated system).&lt;br /&gt;
 &lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
   To borrow a particular book from the library&lt;br /&gt;
 &lt;br /&gt;
 1.2: Actors&lt;br /&gt;
   Student&lt;br /&gt;
 &lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
   Student should be a member of the library and have an ID.&lt;br /&gt;
 &lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
   System requests the student's ID and checks if he/she is a member&lt;br /&gt;
   Student selects “request book”&lt;br /&gt;
   Student enters name(s) of the book(s)&lt;br /&gt;
   System checks for availability of books and displays results accordingly&lt;br /&gt;
   Student confirms the order&lt;br /&gt;
   System displays details of where the requested books are stacked&lt;br /&gt;
 &lt;br /&gt;
 1.5: Alternate Path&lt;br /&gt;
   System does not recognize the ID or the student is not a member. The system will not allow any books to be checked out.&lt;br /&gt;
   All requested books are already checked out. The system displays this information to the student and closes the request.&lt;br /&gt;
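&amp;lt;p&amp;gt;The main and alternate paths above can also be sketched as a tiny executable flow. This is only an illustrative sketch of the use case's behavior, not part of any real library system; all names in it (request_books, MEMBERS, STACKS) are hypothetical.&amp;lt;/p&amp;gt;&lt;br /&gt;

```python
# Minimal sketch of Use Case 1 (request a book from the library).
# All function and data names here are hypothetical illustrations.

MEMBERS = {"st123"}                      # known student IDs (precondition data)
STACKS = {"Solihin": "Shelf 4B"}         # title -> location; absent = checked out

def request_books(student_id, titles):
    # Alternate path 1: system does not recognize the ID / not a member.
    if student_id not in MEMBERS:
        return "ID not recognized; no books may be checked out"
    # Main path: check availability and report where the books are stacked.
    available = {t: STACKS[t] for t in titles if t in STACKS}
    # Alternate path 2: every requested book is already checked out.
    if not available:
        return "All requested books are checked out; request closed"
    return available

print(request_books("st123", ["Solihin"]))   # main path
print(request_books("xx", ["Solihin"]))      # alternate path 1
```

Note how the preconditions and alternate paths of the written use case map directly onto the guard clauses, while the main path is the straight-line flow.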
&lt;br /&gt;
===Important Characteristics of Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;The above description shows a very simple use case. However, there are a few essential characteristics to be noticed about the use case:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have identified the key components of a use case, that is, the goal, actors, preconditions, and key scenarios/flows. It is essential that we identify these components before writing a use case.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have not gone into any sort of technical details about implementation or user interface design. Use cases represent only a very high-level design. We are only trying to understand the flow and uses of the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have recorded a set of paths (scenarios) that traverse an actor from a trigger event (start of the use case) to the goal (success scenarios).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have recorded a set of scenarios that traverse an actor from a trigger event toward a goal but fall short of the goal (failure scenarios).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Where can we use ‘use cases’?===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases are usually used to capture the requirements of an interaction-based system. When there is a lot of interaction between actors and the system, it makes sense to capture as many interactions and scenarios as possible before starting development of the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases help to eliminate rework due to requirements misunderstandings between developers and stakeholders by aiming to reach a point where there are no surprises for the users. Use cases help to build an explicit shared understanding that everyone (users, developers, testers, technical authors, and others) can take away with them.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases have received some interest as a starting point for test design. By analyzing use cases for the system, we can know various interactions between the system and actors which will help in drawing out test plans.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
===Where can’t we use ‘use cases’?===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://en.wikipedia.org/wiki/Use_case]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use case flows are not well suited to easily capturing non-interaction based requirements of a system (such as algorithm or mathematical requirements) or non-functional requirements (such as platform, performance, timing, or safety-critical aspects). These are better specified declaratively elsewhere.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Some systems are better described in an information/data-driven approach than in a functionality-driven approach of use cases. A good example of this kind of system is data-mining systems used for Business Intelligence. If you were to describe this kind of system in a use case model, it would be quite small and uninteresting (there are not many different functions here) but the set of data that the system handles may nevertheless be large and rich in details.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Common mistakes while writing use cases===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The system boundary is undefined or inconsistent.  A system boundary separates the internal components of a system from external entities.  If we are not able to identify the system boundary, we will not be able to clearly define the actors, scenarios, and other essential factors involved in writing a good and useful use case. For example, the system described in the library example has a clear boundary: it accepts book names as input, checks the ID, and provides a location for the books. We know its role very clearly. If the system were instead used to manage everything, like security, employee details, etc., we would not be able to identify the goal, actors, and scenarios very clearly.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases should not be used to capture all the details of a system. The granularity at which you define use cases in a diagram should keep the use case diagram uncluttered and readable, yet be complete without missing significant aspects of the required functionality.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The use cases are written from the system’s (not the actors’) point of view. Use cases written from a system point of view tempt the writer to get into technical details. If we wrote the example use case above from the system’s point of view, we would have statements like &amp;quot;obtain location of book from database and display location of books to user&amp;quot;. This is more detail than necessary, and it would not capture the interaction with the actor very clearly. Use cases also give a brief insight into how the UI should look, but when written from the system’s point of view these details might not be captured clearly.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Writing Use Cases ==&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Cases for a system is a process taken between those designing the system and the Stakeholders.  Use Cases can take many different forms, depending on the type of development process being used [http://www.answers.com/topic/use-case?cat=technology].  The format that a Use Case takes is not as important as the process that it goes through [http://www.gatherspace.com/static/use_case_example.html].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Iterative Process ===&lt;br /&gt;
&amp;lt;p&amp;gt;Normally, this process is done iteratively, so that the iterations can build upon each other [http://www.gatherspace.com/static/use_case_example.html].  Below is an example of three iterations of the Use Case writing process to illustrate how it can reveal things about a system and layout the functional requirements. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;For this example, we will be creating Use Cases to solve the following problem statement:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Develop a system to allow users to schedule meetings with each other.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this system, the stakeholders will be the users of the system.  The Users will also be the only Actors in the system. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 1 ====&lt;br /&gt;
&amp;lt;p&amp;gt;For the first iteration, we will write out short sentences to describe the functionality that the system will have.  Some use cases could be:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Request a Meeting&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Approve a meeting request&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Suggest a new time&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;View a User's schedule&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this example, the fourth Use Case may not have been an obvious requirement that could be derived from the original problem statement, but in the process of creating the Use Cases, it was discovered that it would be a good requirement for the system to have.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 2 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The next iteration would involve writing out longer descriptions for each. This could be done in paragraph form, or by writing a list.  Below are the Use Cases in paragraph form:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 '''Use Case 1 – Request a meeting'''&lt;br /&gt;
 User A chooses the date and time for a meeting with User B.  User A chooses User B as the recipient of the meeting request.&lt;br /&gt;
 User A sends the meeting request to User B through the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 2 – Approve a Meeting Request'''&lt;br /&gt;
 User A receives a meeting request from User B and accepts the request.  A notification that User A has accepted the meeting is sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 3 – Suggest a different meeting time'''&lt;br /&gt;
 User A receives a meeting request from User B and suggests a different time and/or date for the meeting.  The response is sent to User B through &lt;br /&gt;
 the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 4 – View a User's schedule'''&lt;br /&gt;
 User A would like to schedule a meeting with User B.  User A starts the system and opens up User B's schedule.  &lt;br /&gt;
 User A can see when User B has already scheduled  meetings and User A can then use that information to send User B a meeting request.&lt;br /&gt;
 User A can also access their own schedule to view when they have scheduled meetings.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;From writing out the Use Cases with more description, it should become clear that Use Case 2 and Use Case 3 are very similar.  In both Use Cases, User A responds to a meeting request and a response/notification is sent to User B.  This might lead the designers of the system to combine these two Use Cases into one Use Case for &amp;quot;Respond to Request.&amp;quot;&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 3 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In the next iteration, we're going to use a Use Case template.  There are many different Use Case templates, which include different information [SOURCES].  A template can be created that is unique for the system being described, provided each Use Case uses the same template. The template we will use will include:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case &amp;lt;Number&amp;gt;: Title&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.1: &amp;lt;Summary/Goal&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.2: &amp;lt;Actors&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.3: &amp;lt;Preconditions&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.4: &amp;lt;Main Path&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.5: &amp;lt;Alternate Paths – including sub-flows [S] and error-flows [E]&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Case 1 in this format yields:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case 1: Request a meeting&amp;lt;br&amp;gt;&lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
 User A can choose the date and time for a meeting with User B [Main].  User A can choose User B as the recipient of the meeting &lt;br /&gt;
 request [Main], or multiple Users as the recipient [S.2].  User A sends the meeting request to User B through the system [Main]. &lt;br /&gt;
 Before User A sends the meeting request, User A can opt not to send the request and delete it [S.1]. If User B is no longer in &lt;br /&gt;
 the system, User A receives notification that the meeting request cannot be sent [E.1].&amp;lt;br&amp;gt;&lt;br /&gt;
 1.2: Actors&lt;br /&gt;
 Users&amp;lt;br&amp;gt;&lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
 - User A and User B are recorded as Users in the system&lt;br /&gt;
 - User A has logged into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
 1) User A chooses a date for a meeting&lt;br /&gt;
 2) User A chooses a time for a meeting&lt;br /&gt;
 3) User A chooses User B as the recipient for the meeting request&lt;br /&gt;
 4) User A submits the meeting request&lt;br /&gt;
 5) User B receives the meeting request the next time User B logs into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.5: Alternative Path&lt;br /&gt;
 S.1 &lt;br /&gt;
 User A creates a meeting request, but does not submit it.  A meeting request is not sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 S.2&lt;br /&gt;
 User A creates a meeting request for more than one User.  User A submits the meeting request and each User receives it the next&lt;br /&gt;
 time they log in&amp;lt;br&amp;gt;&lt;br /&gt;
 E.1&lt;br /&gt;
 User B is no longer a User in the system.  User A receives notification that the meeting request cannot be sent.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
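&amp;lt;p&amp;gt;The template above is just structured text, and it can help to see it as a concrete data record.  Below is a minimal sketch in Python; the class and field names are illustrative choices, not part of any use case standard.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
```python
from dataclasses import dataclass

@dataclass
class UseCase:
    # Fields mirror the template sections above; the names are illustrative.
    number: int
    goal: str
    actors: list
    preconditions: list
    main_path: list
    alternate_paths: dict  # keyed by sub-flow [S] / error-flow [E] labels

request_meeting = UseCase(
    number=1,
    goal="Request a meeting",
    actors=["Users"],
    preconditions=[
        "User A and User B are recorded as Users in the system",
        "User A has logged into the system",
    ],
    main_path=[
        "User A chooses a date for a meeting",
        "User A chooses a time for a meeting",
        "User A chooses User B as the recipient for the meeting request",
        "User A submits the meeting request",
        "User B receives the meeting request at next login",
    ],
    alternate_paths={
        "S.1": "User A creates a meeting request but does not submit it",
        "S.2": "User A creates a meeting request for more than one User",
        "E.1": "User B is no longer a User; User A is notified",
    },
)

print(request_meeting.goal)  # prints Request a meeting
```
&amp;lt;p&amp;gt;Keeping each template section as its own field makes it easy to check, for example, that every use case has at least one actor and a non-empty main path.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;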
==== Results ====&lt;br /&gt;
&amp;lt;p&amp;gt;There are a few interesting things revealed about the system by re-writing the Use Case in this format.  The most important is likely that Users will need to &amp;quot;log-in&amp;quot; to the system.  This implies that there could be another Actor in the system, namely an Admin.  This leads to these additional Use Cases:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;5&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Log into the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;6&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Create a User in the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Admin&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Each of these new Use Cases would then go through the iterations listed above until they are in the template form. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see from the example, each iteration refines the Use Case and helps to clarify the requirements of the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Use Case Diagrams ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use Case Diagrams are useful for showing how each component in a system will interact with other components of the system [http://agile.csc.ncsu.edu/SEMaterials/UseCaseRequirements.pdf].  They are not good for showing the flow of events that a system will have, like the written Use Cases are [http://www.agilemodeling.com/artifacts/useCaseDiagram.htm].  Also, unlike written Use Cases, Use Case Diagrams use UML so that there is a standard. [http://en.wikipedia.org/wiki/Unified_Modeling_Language UML] is a standardized modeling language for software development.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Components of a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;quot;UCDs have only 4 major elements: The actors that the system you are describing interacts with, the system itself, the use cases, or services, that the system knows how to perform, and the lines that represent relationships between these elements.&amp;quot;[http://www.andrew.cmu.edu/course/90-754/umlucdfaq.html#uses]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Actors''' in Use Case Diagrams are represented by stick figures:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Actor.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Use Cases''' are represented by ovals:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UseCase.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Relationships''' are represented by solid lines.  Sometimes, arrowheads are added to the lines to indicate the direction of the invocation, or to show which actor is the primary actor [http://www.agilemodeling.com/artifacts/useCaseDiagram.htm].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Lines.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Two special relationships that can be shown in a Use Case Diagram are the ''Extends'' and ''Includes'' relationships.  These relationships are usually shown with a dotted line with an arrowhead and &amp;lt;&amp;lt;extend&amp;gt;&amp;gt; or &amp;lt;&amp;lt;include&amp;gt;&amp;gt; written near the line.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Extends'' relationship is used to show when Use Case X is a special case of Use Case Y [SOURCE].  In this situation, the dotted line is drawn from Use Case X to Use Case Y with the arrowhead pointing to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Includes/Uses'' relationship is used to show that every time Use Case X is done, Use Case Y must also be done [SOURCE].  In this case, the arrow points to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Creating a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A Use Case Diagram for the same system described in the Writing Use Cases section might look something like this:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UCDiagram.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see, &amp;quot;Accept Meeting&amp;quot; and &amp;quot;Suggest new time&amp;quot; are special cases of &amp;quot;Respond to request&amp;quot;.  Also, from this diagram, the system designers are saying that in order to &amp;quot;Request a Meeting&amp;quot; the user must &amp;quot;View Schedule&amp;quot;.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
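&amp;lt;p&amp;gt;Since ''Extends'' and ''Includes'' arrows are just directed edges between use cases, a diagram like this can also be sketched as plain data.  The use case names below come from the diagram above; everything else is an illustrative assumption.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
```python
# The use case names come from the diagram above; modeling them as
# (source, relationship, target) edges is an illustrative sketch only.
relationships = [
    ("Accept Meeting", "extend", "Respond to request"),
    ("Suggest new time", "extend", "Respond to request"),
    ("Request a Meeting", "include", "View Schedule"),
]

def targets(use_case, kind):
    # Use cases that `use_case` points at with the given relationship;
    # the arrowhead sits on the target, matching the conventions above.
    return [dst for src, rel, dst in relationships
            if src == use_case and rel == kind]

print(targets("Request a Meeting", "include"))  # prints ['View Schedule']
```
&lt;br /&gt;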
== Advanced Topics ==&lt;br /&gt;
&lt;br /&gt;
===Essential Use Cases vs. System Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://en.wikipedia.org/wiki/Use_case]&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases may be described at the abstract level (business use case, sometimes called essential use case), or at the system level (system use case). The difference between these is the scope.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A &amp;lt;b&amp;gt;business use case&amp;lt;/b&amp;gt; is described in technology-free terminology which treats the system as a black box and describes the business process that is used by its business actors (people or systems external to the process) to achieve their goals (e.g., manual payment processing, expense report approval, manage corporate real estate). The business use case will describe a process that provides value to the business actor, and it describes what the process does. Business Process Mapping is another method for this level of business description. A significant advantage of essential use cases is that they enable you to stand back and ask fundamental questions like &amp;quot;what's really going on&amp;quot; and &amp;quot;what do we really need to do&amp;quot; without letting implementation decisions get in the way.  These questions often lead to critical realizations that allow you to rethink, or reengineer if you prefer that term, aspects of the overall business process.&lt;br /&gt;
A very good example of an essential use case can be seen at this link: http://www.agilemodeling.com/artifacts/essentialUseCase.htm &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A &amp;lt;b&amp;gt;system use case&amp;lt;/b&amp;gt; describes a system that automates a business use case or process. It is normally described at the system functionality level (for example, &amp;quot;create voucher&amp;quot;) and specifies the function or the service that the system provides for the actor. The system use case details what the system will do in response to an actor's actions. For this reason it is recommended that system use case specification begin with a verb (e.g., create voucher, select payments, exclude payment, cancel voucher). An actor can be a human user or another system/subsystem interacting with the system being defined.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Pluggable Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://alistair.cockburn.us/Pluggable+use+cases]&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases describe the various scenarios and sequences of operations between actors and systems that achieve a given goal. As seen above, use cases can be written in various styles with varying degrees of detail, varying aims, and for varying audiences, as with business and system use cases. Business use cases are very readable, with very little technical content, so that stakeholders and business managers can understand the system as a black box. This may not work for developers, who would like to see more technical detail and have use cases that describe fully what a system must do under all circumstances.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Experience has shown that some amount of common behavior is replicated in many business use cases. By extracting these common processing details (e.g., create, read, update, delete), the contents of use cases can be reduced. These common processing details are placed into what are called lower-level pluggable use cases; essentially, we are creating several levels of use cases. At the highest level, the main use case shows the more fundamental processing steps with the names of the pluggable use cases 'plugged' in between. This abstracts out lower-level details and keeps the use cases simple. Business managers and stakeholders can read the higher-level use case, which remains simple and readable, while the lower-level details of interest to developers are not lost either. These use case levels provide the freedom of reading at various degrees of granularity.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Pluggable use cases have to be written in a generic form, so that they can be plugged into other use cases wherever needed. All the rules of regular use cases still apply in terms of finding goals (which are essentially sub-goals of the system), actors, etc.&lt;br /&gt;
Pluggable use cases can be written so that their content is the same for all transactions, that is, common to various scenarios and projects. The difference in each project or scenario is mainly the data handled and the sequence of activities performed. The unique data and rules of each scenario are separated from the process steps and documented independently in &amp;quot;Companion Tables&amp;quot;, which enables flexibility and maximum reuse.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Thus pluggable use cases become building blocks for higher level use cases. They are organized and applied within each use case to reach its goals. Whenever a pluggable use case is invoked, the invocation references the companion table that provides the unique data and rules for that use case. They are most effective when they are used in conjunction with these tables.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use case content is dependent on the requirements of the system under design. This is also true for pluggable use cases. While the majority of pluggable use case content can be used verbatim across any project or company, minor customizations may be needed to accommodate the individual needs of the project and company.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Some use case sequences are simple while others are complex. To manage degrees of complexity, a use case can exercise one, multiple or a series of pluggable use cases wherever desired and in any order. To maximize use case cohesion and increase reusability a pluggable use case may employ another pluggable use case. The versatility of pluggable use cases provides a solid foundation for the construction of project use cases.        &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The following link has excellent examples of how pluggable use cases can be written and used http://alistair.cockburn.us/Pluggable+use+cases&amp;lt;/p&amp;gt;&lt;br /&gt;
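&lt;br /&gt;
&amp;lt;p&amp;gt;The idea of a generic sub-flow plus a companion table can be sketched in a few lines of code.  This is only an illustration of the concept; the function, table fields, and messages below are made-up names, not part of any published pluggable use case catalog.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
```python
def validate_and_record(item, companion):
    # Generic pluggable sub-flow: check the required fields listed in the
    # companion table, then report success or the missing fields.
    missing = [f for f in companion["required_fields"] if f not in item]
    if missing:
        return "error: missing " + ", ".join(missing)
    return "recorded " + companion["entity"]

# Companion tables supply the unique data and rules for each calling use case.
meeting_table = {"entity": "meeting request",
                 "required_fields": ["date", "time", "recipient"]}
voucher_table = {"entity": "voucher",
                 "required_fields": ["amount", "payee"]}

# Two different higher-level use cases 'plug in' the same sub-flow.
print(validate_and_record({"date": "5/1", "time": "9:00", "recipient": "B"}, meeting_table))
print(validate_and_record({"amount": 100}, voucher_table))
```
&amp;lt;p&amp;gt;Here the same &amp;quot;validate and record&amp;quot; sub-flow is reused by two higher-level use cases, with the companion table supplying the unique data and rules for each.&amp;lt;/p&amp;gt;&lt;br /&gt;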
&lt;br /&gt;
== Tools and Examples ==&lt;br /&gt;
&lt;br /&gt;
=== Tools ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different tools available to create Use Cases. For a more comprehensive list, see [http://en.wikipedia.org/wiki/List_of_Unified_Modeling_Language_tools Wikipedia's list of UML modeling tools]. A sampling is below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;'''Rational Rose:'''&lt;br /&gt;
&lt;br /&gt;
One of the most popular tools for use-case driven development.&lt;br /&gt;
&lt;br /&gt;
http://www-306.ibm.com/software/awdtools/developer/rose/index.html&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Sun Java Studio Enterprise:''' &lt;br /&gt;
&lt;br /&gt;
Sun Java Studio Enterprise offers a UML tool. &lt;br /&gt;
&lt;br /&gt;
http://developers.sun.com/jsenterprise/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Visual case:''' &lt;br /&gt;
&lt;br /&gt;
UML &amp;amp; E/R Database Design Tool&lt;br /&gt;
&lt;br /&gt;
http://www.visualcase.com/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different examples of Use Cases.  Some are listed below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.objectmentor.com/resources/articles/usecases.pdf &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.w3.org/2002/06/ws-example &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.agilemodeling.com/essays/useCaseReuse.htm &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.soi.wide.ad.jp/class/20040034/slides/07/9.html &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.cs.colorado.edu/~kena/classes/6448/s05/reference/usecases/examples.html &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://courses.softlab.ntua.gr/softeng/Tutorials/UML-Use-Cases.pdf &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
== Further Reading ==&lt;br /&gt;
&amp;lt;p&amp;gt;These links are taken from http://pg-server.csc.ncsu.edu/mediawiki/index.php/CSC/ECE_517_Fall_2007/wiki2_4_np&amp;lt;/p&amp;gt;&lt;br /&gt;
=== Quick references ===&lt;br /&gt;
&lt;br /&gt;
Some quick references for studying use cases&lt;br /&gt;
&lt;br /&gt;
•	http://www.oreilly.com.cn/samplechap/uml20inanutshell/UML20-ch07.pdf&lt;br /&gt;
&lt;br /&gt;
•	http://www.cs.rit.edu/~jaa/CS4/Lectures/UseCase.PDF&lt;br /&gt;
&lt;br /&gt;
•	http://www.alagad.com/go/blog-entry/uml-use-case-diagrams&lt;br /&gt;
&lt;br /&gt;
•	http://www.cs.nmsu.edu/~jeffery/courses/371/lecture.html&lt;br /&gt;
&lt;br /&gt;
=== Good Tutorials ===&lt;br /&gt;
&lt;br /&gt;
•	http://www.parlezuml.com/tutorials/usecases/usecases.pdf&lt;br /&gt;
&lt;br /&gt;
•	http://www.readysetpro.com/whitepapers/usecasetut.html &lt;br /&gt;
&lt;br /&gt;
Two very easy and informative tutorials for beginners who are not familiar with use cases. Both the tutorials contain some very good and simple examples coupled with easily understandable pictures. The first tutorial focuses on use case driven development and UML diagrams while the second deals with writing effective use cases.&lt;br /&gt;
&lt;br /&gt;
=== Presentations Online ===&lt;br /&gt;
&lt;br /&gt;
•	https://users.cs.jmu.edu/bernstdh/web/common/lectures/slides_use-cases.php&lt;br /&gt;
&lt;br /&gt;
This presentation describes writing use cases along with constructing use case diagrams&lt;br /&gt;
&lt;br /&gt;
•	http://www-rohan.sdsu.edu/faculty/rnorman/course/ids306/Lect_c4.ppt&lt;br /&gt;
&lt;br /&gt;
This is a very good presentation that explains the concepts with familiar real life examples.&lt;br /&gt;
&lt;br /&gt;
•	http://www.cragsystems.com/SFRWUC/index.htm&lt;br /&gt;
&lt;br /&gt;
This web-based tutorial describes creating a Use Case Model of the functional requirements for a computer system.&lt;br /&gt;
&lt;br /&gt;
=== Books ===&lt;br /&gt;
&lt;br /&gt;
The following are good books for learning about use cases:&lt;br /&gt;
&lt;br /&gt;
[http://www.amazon.com/Writing-Effective-Cases-Alistair-Cockburn/dp/0201702258 Writing Effective Use Cases by Alistair Cockburn]&lt;br /&gt;
&lt;br /&gt;
[http://www.amazon.com/Object-Oriented-Software-Engineering-Driven-Approach/dp/0201544350 Object-Oriented Software Engineering: A Use Case Driven Approach by Ivar Jacobson]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38916</id>
		<title>CSC/ECE 517 Fall 2010/ch4 4a RJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38916"/>
		<updated>2010-10-20T16:55:06Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font size=5&amp;gt;Use Cases&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases can be defined in many ways. There are many formal definitions for it. Very simply put, a use case is a reason to use a system. For example, a student borrowing a book from a library would be a use case of the library or a bank cardholder might need to use an ATM to get cash out of their account. More formally, “a use case is a collection of possible sequences of interactions between the system under discussion and its Users (or Actors), relating to a particular goal” [http://alistair.cockburn.us/Use+case+fundamentals].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A use case is initiated by a user with a particular goal in mind, and completes successfully when that goal is satisfied. The system is treated as a &amp;quot;black box&amp;quot;, and the interactions with the system, including system responses, are as perceived from outside the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Use Case Basics ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;quot;Use cases capture who (actor) does what (interaction) with the system, for what purpose (goal), without dealing with system internals. A complete set of use cases specifies all the different ways to use the system, and therefore defines all behavior required of the system, bounding the scope of the system.&amp;quot;[http://www.bredemeyer.com/use_cases.htm]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Terms used with Use cases===&lt;br /&gt;
Now let us define some terms used with use cases: [http://en.wikipedia.org/wiki/Use_case]&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Actor:&amp;lt;/b&amp;gt; An actor is a type of user that interacts with the system (e.g., a student borrowing a book or a cardholder using an ATM). Actors are also external entities (people or other systems) who interact with the system to achieve a desired goal. The goal must be of value to the actor.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Goal:&amp;lt;/b&amp;gt; Without a goal a use case is useless. There is no need for a use case when there is no need for any actor to achieve a goal. A goal briefly describes what the user intends to achieve with this use case. For example, the goal of a student using the library is to obtain the book. There is no point in having a use case like “the student enters the library” as that in itself has no value to the actor.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Stakeholder:&amp;lt;/b&amp;gt; A stakeholder is an individual or department that is affected by the outcome of the use case. Individuals are usually agents of the organization or department for which the use case is being created. A stakeholder might be called on to provide input, feedback, or authorization for the use case. The stakeholder section of the use case can include a brief description of which of these functions the stakeholder is assigned to fulfill.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Trigger:&amp;lt;/b&amp;gt; A trigger describes the event that causes the use case to be initiated. This event can be external or internal. If the trigger is not a simple true &amp;quot;event&amp;quot; (e.g., the customer presses a button), but instead &amp;quot;when a set of conditions are met&amp;quot;, there will need to be a triggering process that continually (or periodically) runs to test whether the &amp;quot;trigger conditions&amp;quot; are met: the &amp;quot;triggering event&amp;quot; is a signal from the trigger process that the conditions are now met. &lt;br /&gt;
In our example with the student, a trigger would be the need for the book due to an approaching exam or test which causes the student to go to the library to borrow a book.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Precondition:&amp;lt;/b&amp;gt; A precondition defines all the conditions that must be true (i.e., describes the state of the system) for the trigger to meaningfully cause the initiation of the use case. That is, if the system is not in the state described in the preconditions, the behavior of the use case is indeterminate. For example, the student should be a member of the library and have the required identity to borrow a book. If the student is not a member of the library, there is no point in the student trying to borrow a book from that library.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Scenarios:&amp;lt;/b&amp;gt; A scenario usually specifies when the use case starts and ends. It describes the interaction with actors and shows the flow of events between a user and system. For example, when a student tries to borrow a particular book from the library, it doesn’t always turn out the same way. Sometimes the book is available, sometimes it is already borrowed by someone else, and sometimes the library may not have the book at all. These are all examples of use case scenarios. The outcome in each case is different depending on circumstances, but they all relate to the same goal; that is, they are all triggered by the same need (in this case, the need for the book) and all the scenarios have the same starting point.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Simple Example===&lt;br /&gt;
&amp;lt;p&amp;gt;Now that we know something about use cases, let us go ahead and describe a simple use case:&amp;lt;/p&amp;gt;&lt;br /&gt;
 Use Case 1: Request book from the library (automated system).&lt;br /&gt;
 &lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
   To borrow a particular book from the library&lt;br /&gt;
 &lt;br /&gt;
 1.2: Actors&lt;br /&gt;
   Student&lt;br /&gt;
 &lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
   Student should be a member of the library and have an id.&lt;br /&gt;
 &lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
   System requests the student ID and checks if he/she is a member&lt;br /&gt;
   Student selects “request book”&lt;br /&gt;
   Student enters name(s) of the book(s)&lt;br /&gt;
   System checks for availability of books and displays results accordingly&lt;br /&gt;
   Student confirms the order&lt;br /&gt;
   System displays details of where the requested books are stacked&lt;br /&gt;
 &lt;br /&gt;
 1.5: Alternate Path&lt;br /&gt;
   System does not recognize the ID or the student is not a member. The system will not allow any books to be checked out.&lt;br /&gt;
   All books requested are already checked out. Displays this information to student and closes request.&lt;br /&gt;
&lt;br /&gt;
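&amp;lt;p&amp;gt;The main and alternate paths above form a small branching flow, which can be sketched as executable code.  The member IDs, catalog entries, and messages below are illustrative assumptions, not part of the example system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
```python
def request_books(member_ids, catalog, student_id, titles):
    # Alternate path: ID not recognized or student is not a member.
    if student_id not in member_ids:
        return "ID not recognized; checkout not allowed"
    # A catalog value of None stands for "already checked out".
    available = {t: catalog.get(t) for t in titles if catalog.get(t) is not None}
    # Alternate path: every requested book is already checked out.
    if not available:
        return "all requested books are checked out; request closed"
    # Main path: confirm the order and show where the books are stacked.
    return {title: "stack " + loc for title, loc in available.items()}

members = {"s123"}                        # illustrative member IDs
catalog = {"SICP": "B2", "TAOCP": None}   # illustrative catalog

print(request_books(members, catalog, "s999", ["SICP"]))
print(request_books(members, catalog, "s123", ["TAOCP"]))
print(request_books(members, catalog, "s123", ["SICP", "TAOCP"]))
```
&amp;lt;p&amp;gt;Each call exercises one scenario: the membership check fails, all requested books are unavailable, or the main path succeeds and returns stack locations.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;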
===Important Characteristics of Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;The above description shows a very simple use case. However, there are a few essential characteristics to be noticed about the use case:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have identified the key components of a use case, that is, the goal, actors, preconditions, and key scenarios/flows. It is essential that we identify these components before writing a use case.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have not gone into any sort of technical details about implementation or user interface design. Use cases only represent a very high level design. We are only trying to understand the flow and uses of the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have recorded a set of paths (scenarios) that traverse an actor from a trigger event (start of the use case) to the goal (success scenarios).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have recorded a set of scenarios that traverse an actor from a trigger event toward a goal but fall short of the goal (failure scenarios).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Where can we use ‘use cases’?===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases are usually used to capture the requirements of an interaction based system. When there is a lot of interaction between actors and the system, it makes sense to capture as many interactions and scenarios as possible before starting development of the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases help to eliminate rework due to requirements misunderstandings between developers and stakeholders by aiming to reach a point where there are no surprises for the users. Use cases help to build an explicit shared understanding that everyone can take away with them, the users, developers, testers, technical authors, and others.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases have received some interest as a starting point for test design. By analyzing use cases for the system, we can know various interactions between the system and actors which will help in drawing out test plans.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
===Where can’t we use ‘use cases’?===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://en.wikipedia.org/wiki/Use_case]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use case flows are not well suited to easily capturing non-interaction based requirements of a system (such as algorithm or mathematical requirements) or non-functional requirements (such as platform, performance, timing, or safety-critical aspects). These are better specified declaratively elsewhere.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Some systems are better described in an information/data-driven approach than in a functionality-driven approach of use cases. A good example of this kind of system is data-mining systems used for Business Intelligence. If you were to describe this kind of system in a use case model, it would be quite small and uninteresting (there are not many different functions here) but the set of data that the system handles may nevertheless be large and rich in details.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Common mistakes while writing Use cases===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The system boundary is undefined or inconsistent.  A system boundary separates the internal components of a system from external entities.  If we are not able to identify the system boundaries, we will not be able to clearly define the actors, scenarios, and other essential factors involved in writing a good and useful use case. For example, the system described in the library example has a clear boundary: it accepts book names as input, checks the ID, and provides a location for the books. We know its role very clearly. If the system were instead used to manage everything, such as security and employee details, we would not be able to identify the goal, actors, and scenarios very clearly.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases should not be used to capture all the details of a system. The granularity to which you define use cases in a diagram should be enough to keep the use case diagram uncluttered and readable, yet, be complete without missing significant aspects of the required functionality.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The use cases are written from the system’s (not the actors’) point of view. Use cases written from a system point of view tempt the writer to get into technical details. If we wrote the example use case above from the system’s point of view, we would have statements like &amp;quot;obtain location of book from database and display location of books to user&amp;quot;. This is more detail than necessary. Also, this would not capture the interaction with the actor very clearly. Use cases also give a brief insight into how the UI should look, but when written from the system’s point of view these details might not be captured clearly.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Writing Use Cases ==&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Cases for a system is a process taken between those designing the system and the Stakeholders.  Use Cases can take many different forms, depending on the type of development process being used [http://www.answers.com/topic/use-case?cat=technology].  The format that a Use Case takes is not as important as the process that it goes through [http://www.gatherspace.com/static/use_case_example.html].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Iterative Process ===&lt;br /&gt;
&amp;lt;p&amp;gt;Normally, this process is done iteratively, so that the iterations can build upon each other [http://www.gatherspace.com/static/use_case_example.html].  Below is an example of three iterations of the Use Case writing process to illustrate how it can reveal things about a system and layout the functional requirements. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;For this example, we will be creating Use Cases to solve the following problem statement:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Develop a system to allow users to schedule meetings with each other.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this system, the stakeholders will be the users of the system.  The Users will also be the only Actors in the system. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 1 ====&lt;br /&gt;
&amp;lt;p&amp;gt;For the first iteration, we will write out short sentences to describe the functionality that the system will have.  Some use cases could be:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Request a Meeting&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Approve a meeting request&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Suggest a new time&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;View a User's schedule&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this example, the fourth Use Case may not have been an obvious requirement that could be derived from the original problem statement, but in the process of creating the Use Cases, it was discovered that it would be a good requirement for the system to have.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 2 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The next iteration would involve writing out longer descriptions for each Use Case. This could be done in paragraph form or as a list.  Below are the Use Cases in paragraph form:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 '''Use Case 1 – Request a meeting'''&lt;br /&gt;
 User A chooses the date and time for a meeting with User B.  User A chooses User B as the recipient of the meeting request.&lt;br /&gt;
 User A sends the meeting request to User B through the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 2 – Approve a Meeting Request'''&lt;br /&gt;
 User A receives a meeting request from User B and accepts the meeting request.  A notification that User A has accepted the meeting is sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 3 – Suggest a different meeting time'''&lt;br /&gt;
 User A receives a meeting request from User B and suggests a different time and/or date for the meeting.  The response is sent to User B through &lt;br /&gt;
 the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 4 – View a User's schedule'''&lt;br /&gt;
 User A would like to schedule a meeting with User B.  User A starts the system and opens up User B's schedule.  &lt;br /&gt;
 User A can see when User B has already scheduled  meetings and User A can then use that information to send User B a meeting request.&lt;br /&gt;
 User A can also access their own schedule to view their own scheduled meetings.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;From writing out the Use Cases with more description, it should become clear that Use Case 2 and Use Case 3 are very similar.  In both Use Cases, User A responds to a meeting request and a response/notification is sent to User B.  This might lead the designers of the system to combine these two Use Cases into one Use Case for &amp;quot;Respond to Request.&amp;quot;&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 3 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In the next iteration, we're going to use a Use Case template.  There are many different Use Case templates, which include different information [SOURCES].  A template can be created that is unique for the system being described, provided each Use Case uses the same template. The template we will use will include:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case &amp;lt;Number&amp;gt;: Title&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.1: &amp;lt;Summary/Goal&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.2: &amp;lt;Actors&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.3: &amp;lt;Preconditions&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.4: &amp;lt;Main Path&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.5: &amp;lt;Alternate Paths – including sub-flows [S] and error-flows [E]&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Case 1 in this format yields:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case 1: Request a meeting&amp;lt;br&amp;gt;&lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
 User A can choose the date and time for a meeting with User B [Main].  User A can choose User B as the recipient of the meeting &lt;br /&gt;
 request [Main], or multiple Users as the recipients [S.2].  User A sends the meeting request to User B through the system [Main]. &lt;br /&gt;
 Before User A sends the meeting request, User A can opt not to send the request and delete it [S.1]. If User B is no longer in &lt;br /&gt;
 the system, User A receives notification that the meeting request cannot be sent [E.1].&amp;lt;br&amp;gt;&lt;br /&gt;
 1.2: Actors&lt;br /&gt;
 Users&amp;lt;br&amp;gt;&lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
 - User A and User B are recorded as Users in the system&lt;br /&gt;
 - User A has logged into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
 1) User A chooses a date for a meeting&lt;br /&gt;
 2) User A chooses a time for a meeting&lt;br /&gt;
 3) User A chooses User B as the recipient for the meeting request&lt;br /&gt;
 4) User A submits the meeting request&lt;br /&gt;
 5) User B receives the meeting request the next time User B logs into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.5: Alternative Path&lt;br /&gt;
 S.1 &lt;br /&gt;
 User A creates a meeting request, but does not submit it.  A meeting request is not sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 S.2&lt;br /&gt;
 User A creates a meeting request for more than one User.  User A submits the meeting request and each User receives it the next&lt;br /&gt;
 time they log in.&amp;lt;br&amp;gt;&lt;br /&gt;
 E.1&lt;br /&gt;
 User B is no longer a User in the system.  User A receives notification that the meeting request cannot be sent.&amp;lt;br&amp;gt;&lt;br /&gt;
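 The template above can also be captured as a simple data structure. The sketch below is a hypothetical illustration, not part of the original article; the class and field names simply mirror the template sections, and the instance restates Use Case 1.

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    """One use case in the template form described above (fields mirror sections .1-.5)."""
    number: int
    title: str
    summary: str                                          # <Number>.1: Summary/Goal
    actors: list = field(default_factory=list)            # <Number>.2: Actors
    preconditions: list = field(default_factory=list)     # <Number>.3: Preconditions
    main_path: list = field(default_factory=list)         # <Number>.4: Main Path
    alternate_paths: dict = field(default_factory=dict)   # <Number>.5: sub-flows [S.x], error-flows [E.x]

uc1 = UseCase(
    number=1,
    title="Request a meeting",
    summary="User A sends a meeting request to User B through the system.",
    actors=["User"],
    preconditions=["User A and User B are recorded as Users in the system",
                   "User A has logged into the system"],
    main_path=["User A chooses a date for a meeting",
               "User A chooses a time for a meeting",
               "User A chooses User B as the recipient for the meeting request",
               "User A submits the meeting request",
               "User B receives the meeting request on next login"],
    alternate_paths={"S.1": "Request created but not submitted; nothing is sent",
                     "S.2": "Request sent to multiple Users",
                     "E.1": "User B is no longer in the system; User A is notified"},
)
print(uc1.title, len(uc1.main_path))
```

 Keeping every use case in one shared shape like this is the point of using a common template: each Use Case fills in the same sections, so none are accidentally omitted.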
&lt;br /&gt;
==== Results ====&lt;br /&gt;
&amp;lt;p&amp;gt;There are a few interesting things revealed about the system by rewriting the Use Case in this format.  The most important is likely that Users will need to &amp;quot;log in&amp;quot; to the system.  This implies that there could be another Actor in the system, namely an Admin.  This leads to these additional Use Cases:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;5&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Log into the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;6&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Create a User in the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Admin&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Each of these new Use Cases would then go through the iterations listed above until they are in the template form. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see from the example, each iteration refines the Use Case and helps to clarify the requirements of the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Use Case Diagrams ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use Case Diagrams are useful for showing how each component in a system will interact with other components of the system [http://agile.csc.ncsu.edu/SEMaterials/UseCaseRequirements.pdf].  They are not good for showing the flow of events that a system will have, like the written Use Cases are [http://www.agilemodeling.com/artifacts/useCaseDiagram.htm].  Also, unlike written Use Cases, Use Case Diagrams use UML so that there is a standard. [http://en.wikipedia.org/wiki/Unified_Modeling_Language UML] is a standardized modeling language for software development.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Components of a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;quot;UCDs have only 4 major elements: The actors that the system you are describing interacts with, the system itself, the use cases, or services, that the system knows how to perform, and the lines that represent relationships between these elements.&amp;quot;[http://www.andrew.cmu.edu/course/90-754/umlucdfaq.html#uses]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Actors''' in Use Case Diagrams are represented by stick figures:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Actor.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Use Cases''' are represented by ovals:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UseCase.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Relationships''' are represented by solid lines.  Sometimes, arrowheads are added to the lines to indicate the direction of the invocation, or to show which actor is the primary actor [http://www.agilemodeling.com/artifacts/useCaseDiagram.htm].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Lines.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Two special relationships that can be shown in a Use Case Diagram are the ''Extends'' and ''Includes'' relationships.  These relationships are usually shown with a dotted line with an arrowhead and &amp;lt;&amp;lt;extend&amp;gt;&amp;gt; or &amp;lt;&amp;lt;include&amp;gt;&amp;gt; written near the line.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Extends'' relationship is used to show when Use Case X is a special case of Use Case Y [SOURCE].  In this situation, the dotted line is drawn from Use Case X to Use Case Y with the arrowhead pointing to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Includes/Uses'' relationship is used to show that every time Use Case X is done, Use Case Y must also be done [SOURCE].  In this case, the arrow points to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Creating a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A Use Case Diagram for the same system described in the Writing Use Cases section might look something like this:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UCDiagram.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see, &amp;quot;Accept Meeting&amp;quot; and &amp;quot;Suggest new time&amp;quot; are special cases of &amp;quot;Respond to request&amp;quot;.  Also, from this diagram, the system designers are saying that in order to &amp;quot;Request a Meeting&amp;quot; the user must &amp;quot;View Schedule&amp;quot;.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Advanced Topics ==&lt;br /&gt;
&lt;br /&gt;
===Essential Use Cases vs. System Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://en.wikipedia.org/wiki/Use_case]&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases may be described at the abstract level (business use case, sometimes called essential use case), or at the system level (system use case). The difference between these is the scope.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A &amp;lt;b&amp;gt;business use case&amp;lt;/b&amp;gt; is described in technology-free terminology which treats the system as a black box and describes the business process that is used by its business actors (people or systems external to the process) to achieve their goals (e.g., manual payment processing, expense report approval, managing corporate real estate). The business use case describes a process that provides value to the business actor, and it describes what the process does. Business Process Mapping is another method for this level of business description. A significant advantage of essential use cases is that they enable you to stand back and ask fundamental questions like &amp;quot;what's really going on&amp;quot; and &amp;quot;what do we really need to do&amp;quot; without letting implementation decisions get in the way.  These questions often lead to critical realizations that allow you to rethink, or reengineer if you prefer that term, aspects of the overall business process.&lt;br /&gt;
A very good example of an essential use case can be seen at this link: http://www.agilemodeling.com/artifacts/essentialUseCase.htm &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A &amp;lt;b&amp;gt;system use case&amp;lt;/b&amp;gt; describes a system that automates a business use case or process. It is normally described at the system functionality level (for example, &amp;quot;create voucher&amp;quot;) and specifies the function or the service that the system provides for the actor. The system use case details what the system will do in response to an actor's actions. For this reason it is recommended that system use case specifications begin with a verb (e.g., create voucher, select payments, exclude payment, cancel voucher). An actor can be a human user or another system/subsystem interacting with the system being defined.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Pluggable Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://alistair.cockburn.us/Pluggable+use+cases]&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases describe the various scenarios and sequences of operations between actors and systems that achieve a given goal. We have seen that use cases can be written in various styles, with varying degrees of detail, varying aims, and for varying audiences, in terms of business and system use cases. Business use cases are very readable, with little technical content, so that stakeholders and business managers can understand the system as a black box. This may not work for developers, who would like to see more technical detail and have use cases that fully describe what a system must do under all circumstances.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Experience has shown that a certain amount of common behavior is replicated across many business use cases. By extracting these common processing details (e.g., create, read, update, delete), the content of each use case can be reduced. These common processing details can be placed into what are called lower-level pluggable use cases. Essentially, we are creating various levels of use cases. At the highest level, the main use case shows the fundamental processing steps with the names of the pluggable use cases 'plugged' in between. This abstracts out lower-level details and keeps the use cases simpler. As a result, business managers and stakeholders can read the higher-level use case, which remains simple and readable, while the lower-level details of interest to developers are not lost. These levels of use cases provide the freedom to read at various degrees of granularity.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Pluggable use cases have to be written in a generic form so that they can be used wherever needed. They should be generic enough to be plugged into other use cases. All the rules of regular use cases still apply in terms of finding goals (which are essentially sub-goals of the system), actors, etc.&lt;br /&gt;
Pluggable use cases can be produced in a way where their content is the same for all transactions, that is, common to various scenarios and projects. The difference in each project or scenario is mainly the data they handle and the sequence of activities being performed. The unique data and rules of each scenario are separated from the process steps and documented independently in &amp;quot;Companion Tables&amp;quot;, which enables flexibility and maximum reuse.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Thus pluggable use cases become building blocks for higher level use cases. They are organized and applied within each use case to reach its goals. Whenever a pluggable use case is invoked, the invocation references the companion table that provides the unique data and rules for that use case. They are most effective when they are used in conjunction with these tables.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use case content is dependent on the requirements of the system under design. This is also true for pluggable use cases. While the majority of pluggable use case content can be used verbatim across any project or company, minor customizations may be needed to accommodate the individual needs of the project and company.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Some use case sequences are simple while others are complex. To manage degrees of complexity, a use case can exercise one, multiple or a series of pluggable use cases wherever desired and in any order. To maximize use case cohesion and increase reusability a pluggable use case may employ another pluggable use case. The versatility of pluggable use cases provides a solid foundation for the construction of project use cases.        &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The following link has excellent examples of how pluggable use cases can be written and used http://alistair.cockburn.us/Pluggable+use+cases&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tools and Examples ==&lt;br /&gt;
&lt;br /&gt;
=== Tools ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different tools available to create Use Cases. For a more comprehensive list, go to [http://en.wikipedia.org/wiki/List_of_Unified_Modeling_Language_tools Wikipedia's list of UML modeling tools]. A sampling is below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;'''Rational Rose:'''&lt;br /&gt;
&lt;br /&gt;
One of the most popular tools for use-case-driven development.&lt;br /&gt;
&lt;br /&gt;
http://www-306.ibm.com/software/awdtools/developer/rose/index.html&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Sun Java Studio Enterprise:''' &lt;br /&gt;
&lt;br /&gt;
Sun Java Studio Enterprise offers a UML tool. &lt;br /&gt;
&lt;br /&gt;
http://developers.sun.com/jsenterprise/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Visual case:''' &lt;br /&gt;
&lt;br /&gt;
UML &amp;amp; E/R Database Design Tool&lt;br /&gt;
&lt;br /&gt;
http://www.visualcase.com/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different examples of Use Cases.  Some are listed below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.objectmentor.com/resources/articles/usecases.pdf &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.w3.org/2002/06/ws-example &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.agilemodeling.com/essays/useCaseReuse.htm &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.soi.wide.ad.jp/class/20040034/slides/07/9.html &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.cs.colorado.edu/~kena/classes/6448/s05/reference/usecases/examples.html &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://courses.softlab.ntua.gr/softeng/Tutorials/UML-Use-Cases.pdf &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
== Further Reading ==&lt;br /&gt;
&amp;lt;p&amp;gt;These links are taken from http://pg-server.csc.ncsu.edu/mediawiki/index.php/CSC/ECE_517_Fall_2007/wiki2_4_np&amp;lt;/p&amp;gt;&lt;br /&gt;
=== Quick references ===&lt;br /&gt;
&lt;br /&gt;
Some quick references for studying use cases&lt;br /&gt;
&lt;br /&gt;
•	http://www.oreilly.com.cn/samplechap/uml20inanutshell/UML20-ch07.pdf&lt;br /&gt;
&lt;br /&gt;
•	http://www.cs.rit.edu/~jaa/CS4/Lectures/UseCase.PDF&lt;br /&gt;
&lt;br /&gt;
•	http://www.alagad.com/go/blog-entry/uml-use-case-diagrams&lt;br /&gt;
&lt;br /&gt;
•	http://www.cs.nmsu.edu/~jeffery/courses/371/lecture.html&lt;br /&gt;
&lt;br /&gt;
=== Good Tutorials ===&lt;br /&gt;
&lt;br /&gt;
•	http://www.parlezuml.com/tutorials/usecases/usecases.pdf&lt;br /&gt;
&lt;br /&gt;
•	http://www.readysetpro.com/whitepapers/usecasetut.html &lt;br /&gt;
&lt;br /&gt;
Two very easy and informative tutorials for beginners who are not familiar with use cases. Both the tutorials contain some very good and simple examples coupled with easily understandable pictures. The first tutorial focuses on use case driven development and UML diagrams while the second deals with writing effective use cases.&lt;br /&gt;
&lt;br /&gt;
=== Presentations Online ===&lt;br /&gt;
&lt;br /&gt;
•	https://users.cs.jmu.edu/bernstdh/web/common/lectures/slides_use-cases.php&lt;br /&gt;
&lt;br /&gt;
This presentation describes writing use cases along with constructing use case diagrams&lt;br /&gt;
&lt;br /&gt;
•	http://www-rohan.sdsu.edu/faculty/rnorman/course/ids306/Lect_c4.ppt&lt;br /&gt;
&lt;br /&gt;
This is a very good presentation that explains the concepts with familiar real life examples.&lt;br /&gt;
&lt;br /&gt;
•	http://www.cragsystems.com/SFRWUC/index.htm&lt;br /&gt;
&lt;br /&gt;
This web-based tutorial describes creating a Use Case Model of the functional requirements for a computer system.&lt;br /&gt;
&lt;br /&gt;
=== Books ===&lt;br /&gt;
&lt;br /&gt;
Following are good books for learning use cases&lt;br /&gt;
&lt;br /&gt;
[[http://www.amazon.com/Writing-Effective-Cases-Alistair-Cockburn/dp/0201702258 Writing Effective Use Cases by Alistair Cockburn ]]&lt;br /&gt;
&lt;br /&gt;
[[http://www.amazon.com/Object-Oriented-Software-Engineering-Driven-Approach/dp/0201544350 Object-Oriented Software Engineering: A Use Case Driven Approach by Ivar Jacobson ]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38914</id>
		<title>CSC/ECE 517 Fall 2010/ch4 4a RJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38914"/>
		<updated>2010-10-20T16:51:58Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font size=5&amp;gt;Use Cases&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases can be defined in many ways, and there are many formal definitions. Very simply put, a use case is a reason to use a system. For example, a student borrowing a book from a library would be a use case of the library, or a bank cardholder might need to use an ATM to get cash out of their account. More formally, “a use case is a collection of possible sequences of interactions between the system under discussion and its Users (or Actors), relating to a particular goal” [http://alistair.cockburn.us/Use+case+fundamentals].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A use case is initiated by a user with a particular goal in mind, and completes successfully when that goal is satisfied. The system is treated as a &amp;quot;black box&amp;quot;, and the interactions with system, including system responses, are as perceived from outside the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Use Case Basics ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;quot;Use cases capture who (actor) does what (interaction) with the system, for what purpose (goal), without dealing with system internals. A complete set of use cases specifies all the different ways to use the system, and therefore defines all behavior required of the system, bounding the scope of the system.&amp;quot;[http://www.bredemeyer.com/use_cases.htm]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Terms used with Use cases===&lt;br /&gt;
Now let us define some terms used with use cases: [http://en.wikipedia.org/wiki/Use_case]&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Actor:&amp;lt;/b&amp;gt; An actor is a type of user that interacts with the system (e.g., a student borrowing a book or a cardholder using an ATM). Actors are external entities (people or other systems) who interact with the system to achieve a desired goal. The goal must be of value to the actor.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Goal:&amp;lt;/b&amp;gt; Without a goal a use case is useless. There is no need for a use case when there is no need for any actor to achieve a goal. A goal briefly describes what the user intends to achieve with this use case. For example, the goal of a student using the library is to obtain the book. There is no point in having a use case like “the student enters the library” as that in itself has no value to the actor.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Stakeholder:&amp;lt;/b&amp;gt; A stakeholder is an individual or department that is affected by the outcome of the use case. Individuals are usually agents of the organization or department for which the use case is being created. A stakeholder might be called on to provide input, feedback, or authorization for the use case. The stakeholder section of the use case can include a brief description of which of these functions the stakeholder is assigned to fulfill.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Trigger:&amp;lt;/b&amp;gt; A trigger describes the event that causes the use case to be initiated. This event can be external or internal. If the trigger is not a simple true &amp;quot;event&amp;quot; (e.g., the customer presses a button), but instead &amp;quot;when a set of conditions are met&amp;quot;, there will need to be a triggering process that continually (or periodically) runs to test whether the &amp;quot;trigger conditions&amp;quot; are met: the &amp;quot;triggering event&amp;quot; is a signal from the trigger process that the conditions are now met. &lt;br /&gt;
In our example with the student, a trigger would be the need for the book due to an approaching exam or test which causes the student to go to the library to borrow a book.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Precondition:&amp;lt;/b&amp;gt; A precondition defines all the conditions that must be true (i.e., describes the state of the system) for the trigger to meaningfully cause the initiation of the use case. That is, if the system is not in the state described in the preconditions, the behavior of the use case is indeterminate. For example, the student should be a member of the library and have the required identity to borrow a book. If the student is not a member of the library, there is no point in the student trying to borrow a book from that library.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Scenarios:&amp;lt;/b&amp;gt; A scenario usually specifies when the use case starts and ends. It describes the interaction with actors and shows the flow of events between a user and the system. For example, when a student tries to borrow a particular book from the library, it doesn’t always turn out the same way. Sometimes the book is available, sometimes it is already borrowed by someone else, and sometimes the library may not have the book at all. These are all examples of use case scenarios. The outcome in each case is different depending on circumstances, but they all relate to the same goal; that is, they are all triggered by the same need (in this case, the need for the book) and all the scenarios have the same starting point.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Simple Example===&lt;br /&gt;
&amp;lt;p&amp;gt;Now that we know something about use cases, let us go ahead and describe a simple use case:&amp;lt;/p&amp;gt;&lt;br /&gt;
 Use Case 1: Request a book from the library (automated system).&lt;br /&gt;
 &lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
   To borrow a particular book from the library&lt;br /&gt;
 &lt;br /&gt;
 1.2: Actors&lt;br /&gt;
   Student&lt;br /&gt;
 &lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
   Student should be a member of the library and have an id.&lt;br /&gt;
 &lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
   System requests the student's ID and checks whether he/she is a member&lt;br /&gt;
   Student selects “request book”&lt;br /&gt;
   Student enters name(s) of the book(s)&lt;br /&gt;
   System checks for availability of books and displays results accordingly&lt;br /&gt;
   Student confirms the order&lt;br /&gt;
   System displays details of where the requested books are stacked&lt;br /&gt;
 &lt;br /&gt;
 1.5: Alternate Path&lt;br /&gt;
   System does not recognize the ID or the student is not a member. The system will not allow any books to be checked out.&lt;br /&gt;
   All books requested are already checked out. Displays this information to student and closes request.&lt;br /&gt;
&lt;br /&gt;
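The main and alternate paths above can be modeled as a toy function. This is a hypothetical sketch, not part of the original article: the member IDs, book titles, and function name are all invented for illustration.

```python
# Toy model of the library request use case: the main path returns book
# locations, and each alternate path returns a distinct failure outcome.
MEMBERS = {"S123"}                      # known student IDs (invented data)
AVAILABLE = {"Clean Code": "Stack 4B"}  # title -> shelf location (invented data)

def request_books(student_id, titles):
    if student_id not in MEMBERS:       # alternate path: ID not recognized / not a member
        return "ID not recognized; no books may be checked out"
    found = {t: AVAILABLE[t] for t in titles if t in AVAILABLE}
    if not found:                       # alternate path: all requested books checked out
        return "All requested books are checked out; request closed"
    return found                        # main path: display where the books are stacked

print(request_books("S123", ["Clean Code"]))   # main path
print(request_books("X999", ["Clean Code"]))   # alternate path
```

Note that the sketch, like the use case itself, stays at the level of observable outcomes: which path is taken and what the actor sees, not how the system stores members or books internally.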
===Important Characteristics of Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;The above description shows a very simple use case. However, there are a few essential characteristics to be noticed about the use case:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have identified the key components of a use case, that is, the goal, actors, preconditions, and key scenarios/flows. It is essential that we identify these components before writing a use case.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have not gone into any sort of technical details about implementation or user interface design. Use cases only represent a very high level design. We are only trying to understand the flow and uses of the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have recorded a set of paths (scenarios) that take an actor from a trigger event (start of the use case) to the goal (success scenarios).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have recorded a set of scenarios that take an actor from a trigger event toward a goal but fall short of the goal (failure scenarios).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Where can we use ‘use cases’?===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases are usually used to capture the requirements of an interaction-based system. When there is a lot of interaction between actors and the system, it makes sense to capture as many interactions and scenarios as possible before starting development of the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases help to eliminate rework due to requirements misunderstandings between developers and stakeholders by aiming to reach a point where there are no surprises for the users. Use cases help to build an explicit shared understanding that everyone can take away with them: users, developers, testers, technical authors, and others.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases have received some interest as a starting point for test design. By analyzing use cases for the system, we can know various interactions between the system and actors which will help in drawing out test plans.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
===Where can’t we use ‘use cases’?===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://en.wikipedia.org/wiki/Use_case]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use case flows are not well suited to easily capturing non-interaction based requirements of a system (such as algorithm or mathematical requirements) or non-functional requirements (such as platform, performance, timing, or safety-critical aspects). These are better specified declaratively elsewhere.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Some systems are better described in an information/data-driven approach than in a functionality-driven approach of use cases. A good example of this kind of system is data-mining systems used for Business Intelligence. If you were to describe this kind of system in a use case model, it would be quite small and uninteresting (there are not many different functions here) but the set of data that the system handles may nevertheless be large and rich in details.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Common mistakes while writing Use Cases===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The system boundary is undefined or inconsistent.  A system boundary separates the internal components of a system from external entities.  If we are not able to identify the system boundary, we will not be able to clearly define the actors, scenarios, and other essential elements of a good and useful use case. For example, the system described in the library example has a clear boundary: it accepts book names as input, checks the ID, and provides a location for the books. We know its role very clearly. If the system were instead used to manage everything, like security, employee details, etc., we would not be able to identify the goal, actors, and scenarios very clearly.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases should not be used to capture all the details of a system. The granularity to which you define use cases in a diagram should be enough to keep the use case diagram uncluttered and readable, yet complete, without missing significant aspects of the required functionality.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The use cases are written from the system’s (not the actors’) point of view. Writing from the system's point of view tempts the writer to drift into technical details. If we wrote the example use case above from the system's point of view, we would have statements like “obtain location of book from database and display location of books to user”. This is more detail than necessary, and it does not capture the interaction with the actor very clearly. Use cases also give a brief insight into how the UI should look, but when written from the system's point of view these details might not be captured clearly.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Writing Use Cases ==&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Cases for a system is a process taken between those designing the system and the Stakeholders.  Use Cases can take many different forms, depending on the type of development process being used [http://www.answers.com/topic/use-case?cat=technology].  The format that a Use Case takes is not as important as the process that it goes through [http://www.gatherspace.com/static/use_case_example.html].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Iterative Process ===&lt;br /&gt;
&amp;lt;p&amp;gt;Normally, this process is done iteratively, so that the iterations can build upon each other [http://www.gatherspace.com/static/use_case_example.html].  Below is an example of three iterations of the Use Case writing process to illustrate how it can reveal things about a system and lay out the functional requirements. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;For this example, we will be creating Use Cases to solve the following problem statement:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Develop a system to allow users to schedule meetings with each other.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this system, the stakeholders will be the users of the system.  The Users will also be the only Actors in the system. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 1 ====&lt;br /&gt;
&amp;lt;p&amp;gt;For the first iteration, we will write out short sentences to describe the functionality that the system will have.  Some use cases could be:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Request a Meeting&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Approve a meeting request&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Suggest a new time&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;View a User's schedule&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this example, the fourth Use Case may not have been an obvious requirement that could be derived from the original problem statement, but in the process of creating the Use Cases, it was discovered that it would be a good requirement for the system to have.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 2 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The next iteration would involve writing out a longer description for each Use Case. This could be done in paragraph form or as a list.  Below are the Use Cases in paragraph form:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 '''Use Case 1 – Request a meeting'''&lt;br /&gt;
 User A chooses the date and time for a meeting with User B.  User A chooses User B as the recipient of the meeting request.&lt;br /&gt;
 User A sends the meeting request to User B through the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 2 – Approve a Meeting Request'''&lt;br /&gt;
 User A receives a meeting request from User B and accepts the user request.  A notification that User A has accepted the meeting is sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 3 – Suggest a different meeting time'''&lt;br /&gt;
 User A receives a meeting request from User B and suggests a different time and/or date for the meeting.  The response is sent to User B through &lt;br /&gt;
 the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 4 – View a User's schedule'''&lt;br /&gt;
 User A would like to schedule a meeting with User B.  User A starts the system and opens up User B's schedule.  &lt;br /&gt;
 User A can see when User B has already scheduled  meetings and User A can then use that information to send User B a meeting request.&lt;br /&gt;
 User A can also access their own schedule to view when they have meetings scheduled.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;From writing out the Use Cases with more description, it should become clear that Use Case 2 and Use Case 3 are very similar.  In both Use Cases, User A responds to a meeting request and a response/notification is sent to User B.  This might lead the designers of the system to combine these two Use Cases into one Use Case for &amp;quot;Respond to Request.&amp;quot;&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 3 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In the next iteration, we're going to use a Use Case template.  There are many different Use Case templates, which include different information [SOURCES].  A template can be created that is unique for the system being described, provided each Use Case uses the same template. The template we will use will include:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case &amp;lt;Number&amp;gt;: Title&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.1: &amp;lt;Summary/Goal&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.2: &amp;lt;Actors&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.3: &amp;lt;Preconditions&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.4: &amp;lt;Main Path&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.5: &amp;lt;Alternate Paths – including sub-flows [S] and error-flows [E]&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Case 1 in this format yields:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case 1: Request a meeting&amp;lt;br&amp;gt;&lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
 User A can choose the date and time for a meeting with User B [Main].  User A can choose User B as the recipient of the meeting &lt;br /&gt;
 request [Main], or multiple Users as the recipient [S.2].  User A sends the meeting request to User B through the system [Main]. &lt;br /&gt;
 Before User A sends the meeting request, User A can opt not to send the request and delete it [S.1]. If User B is no longer in &lt;br /&gt;
 the system, User A receives notification that the meeting request cannot be sent [E.1].&amp;lt;br&amp;gt;&lt;br /&gt;
 1.2: Actors&lt;br /&gt;
 Users&amp;lt;br&amp;gt;&lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
 - User A and User B are recorded as Users in the system&lt;br /&gt;
 - User A has logged into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
 1) User A chooses a date for a meeting&lt;br /&gt;
 2) User A chooses a time for a meeting&lt;br /&gt;
 3) User A chooses User B as the recipient for the meeting request&lt;br /&gt;
 4) User A submits the meeting request&lt;br /&gt;
 5) User B receives the meeting request the next time User B logs into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.5: Alternate Paths&lt;br /&gt;
 S.1 &lt;br /&gt;
 User A creates a meeting request, but does not submit it.  A meeting request is not sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 S.2&lt;br /&gt;
 User A creates a meeting request for more than one User.  User A submits the meeting request and each User receives it the next&lt;br /&gt;
 time they log in&amp;lt;br&amp;gt;&lt;br /&gt;
 E.1&lt;br /&gt;
 User B is no longer a User in the system.  User A receives notification that the meeting request cannot be sent.&amp;lt;br&amp;gt;&lt;br /&gt;
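As a rough illustration, the template above could also be captured as a small data structure. This is a hypothetical sketch: the class and field names simply mirror the template sections and are not part of any standard.

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    """One use case in the template form described above."""
    number: int
    title: str
    summary: str
    actors: list
    preconditions: list
    main_path: list                               # numbered steps, in order
    alternate_paths: dict = field(default_factory=dict)  # keys like "S.1", "E.1"

# Use Case 1 from the example, restated in this structure
request_meeting = UseCase(
    number=1,
    title="Request a meeting",
    summary="User A schedules a meeting with User B through the system.",
    actors=["User"],
    preconditions=["User A and User B are recorded as Users in the system",
                   "User A has logged into the system"],
    main_path=["Choose a date", "Choose a time",
               "Choose User B as the recipient", "Submit the meeting request",
               "User B receives the request at next login"],
    alternate_paths={"S.1": "Request created but not submitted",
                     "S.2": "Request sent to multiple Users",
                     "E.1": "User B is no longer in the system"},
)
```

Keeping every use case in the same shape makes it easy to check that each one fills in all the template sections.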
&lt;br /&gt;
==== Results ====&lt;br /&gt;
&amp;lt;p&amp;gt;There are a few interesting things revealed about the system by re-writing the Use Case in this format.  The most important is likely that Users will need to &amp;quot;log-in&amp;quot; to the system.  This implies that there could be another Actor in the system, namely an Admin.  This leads to these additional Use Cases:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;5&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Log into the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;6&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Create a User in the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Admin&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Each of these new Use Cases would then go through the iterations listed above until they are in the template form. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see from the example, each iteration refines the Use Case and helps to clarify the requirements of the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Use Case Diagrams ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use Case Diagrams are useful for showing how each component in a system will interact with other components of the system [http://agile.csc.ncsu.edu/SEMaterials/UseCaseRequirements.pdf].  They are not good for showing the flow of events that a system will have, like the written Use Cases are [http://www.agilemodeling.com/artifacts/useCaseDiagram.htm].  Also, unlike written Use Cases, Use Case Diagrams use UML so that there is a standard. [http://en.wikipedia.org/wiki/Unified_Modeling_Language UML] is a standardized modeling language for software development.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Components of a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;quot;UCDs have only 4 major elements: The actors that the system you are describing interacts with, the system itself, the use cases, or services, that the system knows how to perform, and the lines that represent relationships between these elements.&amp;quot;[http://www.andrew.cmu.edu/course/90-754/umlucdfaq.html#uses]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Actors''' in Use Case Diagrams are represented by stick figures:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Actor.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Use Cases''' are represented by ovals:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UseCase.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Relationships''' are represented by Solid Lines.  Sometimes, arrowheads are added to the lines to indicate the direction of the invocation, or to show which actor is the primary actor [http://www.agilemodeling.com/artifacts/useCaseDiagram.htm]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Lines.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Two special relationships that can be shown in a Use Case Diagram are the ''Extends'' and ''Includes'' relationships.  These relationships are usually shown with a dotted line with an arrowhead and &amp;lt;&amp;lt;extend&amp;gt;&amp;gt; or &amp;lt;&amp;lt;include&amp;gt;&amp;gt; written near the line.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Extends'' relationship is used to show when Use Case X is a special case of Use Case Y [SOURCE].  In this situation, the dotted line is drawn from Use Case X to Use Case Y with the arrowhead pointing to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Includes/Uses'' relationship is used to show that every time Use Case X is done, Use Case Y must also be done [SOURCE].  In this case, the arrow points to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
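By loose analogy (this analogy is ours, not part of UML itself), the two relationships can be sketched in code: ''Extends'' resembles specialization of a base behavior, while ''Includes'' resembles a step that is always invoked as part of another flow.

```python
class RespondToRequest:
    """Base use case: User A responds to a meeting request."""
    def run(self):
        return "respond to request"

# "Extends": Accept Meeting is a special case of Respond to Request
class AcceptMeeting(RespondToRequest):
    def run(self):
        return "accept meeting (a kind of " + super().run() + ")"

# "Includes": Request a Meeting always performs View Schedule as part of its flow
class ViewSchedule:
    def run(self):
        return "view schedule"

class RequestMeeting:
    def run(self):
        included = ViewSchedule().run()  # the included use case always runs
        return "request meeting after " + included
```

The analogy only goes so far (extend points in UML are conditional insertions, not subclasses), but it conveys the directionality of the arrows.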
&lt;br /&gt;
=== Creating a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A Use Case Diagram for the same system described in the Writing Use Cases section might look something like this:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UCDiagram.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see, &amp;quot;Accept Meeting&amp;quot; and &amp;quot;Suggest new time&amp;quot; are special cases of &amp;quot;Respond to request&amp;quot;.  Also, from this diagram, the system designers are saying that in order to &amp;quot;Request a Meeting&amp;quot; the user must &amp;quot;View Schedule&amp;quot;.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Advanced Topics ==&lt;br /&gt;
&lt;br /&gt;
===Essential Use Cases vs. System Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://en.wikipedia.org/wiki/Use_case]&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases may be described at the abstract level (business use case, sometimes called essential use case), or at the system level (system use case). The difference between these is the scope.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A &amp;lt;b&amp;gt;business use case&amp;lt;/b&amp;gt; is described in technology-free terminology which treats the system as a black box and describes the business process that is used by its business actors (people or systems external to the process) to achieve their goals (e.g., manual payment processing, expense report approval, manage corporate real estate). The business use case will describe a process that provides value to the business actor, and it describes what the process does. Business Process Mapping is another method for this level of business description. A significant advantage of essential use cases is that they enable you to stand back and ask fundamental questions like &amp;quot;what's really going on&amp;quot; and &amp;quot;what do we really need to do&amp;quot; without letting implementation decisions get in the way.  These questions often lead to critical realizations that allow you to rethink, or reengineer if you prefer that term, aspects of the overall business process.&lt;br /&gt;
A very good example of an essential use case can be seen at this link: http://www.agilemodeling.com/artifacts/essentialUseCase.htm &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A &amp;lt;b&amp;gt;system use case&amp;lt;/b&amp;gt; describes a system that automates a business use case or process. It is normally described at the system functionality level (for example, &amp;quot;create voucher&amp;quot;) and specifies the function or the service that the system provides for the actor. The system use case details what the system will do in response to an actor's actions. For this reason it is recommended that system use case specification begin with a verb (e.g., create voucher, select payments, exclude payment, cancel voucher). An actor can be a human user or another system/subsystem interacting with the system being defined.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Pluggable Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://alistair.cockburn.us/Pluggable+use+cases]&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases describe various scenarios and sequences of operations between actors and systems to achieve a given goal. We have seen that Use Cases can be written in various styles, with varying degrees of detail and varying aims, for varying audiences, as the distinction between business and system use cases shows. Business use cases are very readable, with very little technical content, so that stakeholders and business managers can understand the system as a black box. This may not work for developers, who would like to see more technical detail and have use cases that describe fully what a system must do under all circumstances.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Experience has shown that a certain amount of common behavior is replicated in many business use cases. By extracting these common processing details (e.g. create, read, update, delete, etc.), the contents of use cases can be reduced. These common processing details can be put into what are called lower-level pluggable use cases. Essentially, we are creating various levels of use cases. At the highest level, the main use case shows the more fundamental processing steps with the names of the pluggable use cases 'plugged' in between. This helps abstract out lower-level details and keeps the use cases simpler. As a result, business managers and stakeholders can read the higher-level use case, which remains simple and readable, while the lower-level details of interest to developers are not lost either. These use case levels provide the freedom of reading at various degrees of granularity.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Pluggable use cases have to be written in a generic form so that they can be used wherever needed; they should be generic enough to be plugged into other use cases. All the rules of regular use cases still apply in terms of identifying goals (which are essentially sub-goals of the system), actors, etc.&lt;br /&gt;
Pluggable use cases can be written so that their content is the same for all transactions, that is, common to various scenarios and projects. What differs in each project or scenario is mainly the data handled and the sequence of activities performed. The unique data and rules of each scenario are separated from the process steps and documented independently in &amp;quot;Companion Tables&amp;quot;, which enables flexibility and maximum reuse.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Thus pluggable use cases become building blocks for higher level use cases. They are organized and applied within each use case to reach its goals. Whenever a pluggable use case is invoked, the invocation references the companion table that provides the unique data and rules for that use case. They are most effective when they are used in conjunction with these tables.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use case content is dependent on the requirements of the system under design. This is also true for pluggable use cases. While the majority of pluggable use case content can be used verbatim across any project or company, minor customizations may be needed to accommodate the individual needs of the project and company.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Some use case sequences are simple while others are complex. To manage degrees of complexity, a use case can exercise one, multiple or a series of pluggable use cases wherever desired and in any order. To maximize use case cohesion and increase reusability a pluggable use case may employ another pluggable use case. The versatility of pluggable use cases provides a solid foundation for the construction of project use cases.        &amp;lt;/p&amp;gt;&lt;br /&gt;
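The idea of a generic pluggable use case driven by a companion table can be sketched as follows. This is a hypothetical illustration: the function names, the table keys (`entity`, `required_fields`), and the specific fields are ours, not from Cockburn's article.

```python
def create_record(companion_table, data):
    """Lower-level pluggable use case: generic 'create' behavior.

    The companion table supplies the scenario-specific data and rules;
    the process steps themselves stay the same for every scenario.
    """
    required = companion_table["required_fields"]
    missing = [f for f in required if f not in data]
    if missing:
        return "error: missing " + ", ".join(missing)
    return "created " + companion_table["entity"]

# A higher-level use case 'plugs in' the generic step with its own table.
meeting_table = {"entity": "meeting request",
                 "required_fields": ["date", "time", "recipient"]}

def submit_meeting_request(data):
    # the invocation references the companion table for this scenario
    return create_record(meeting_table, data)
```

A second scenario (say, creating a voucher) would reuse `create_record` unchanged and supply only a different companion table.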
&lt;br /&gt;
&amp;lt;p&amp;gt;The following link has excellent examples of how pluggable use cases can be written and used: http://alistair.cockburn.us/Pluggable+use+cases&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tools and Examples ==&lt;br /&gt;
&lt;br /&gt;
=== Tools ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different tools available to create Use Cases. For a more comprehensive list, go to [http://en.wikipedia.org/wiki/List_of_Unified_Modeling_Language_tools Wikipedia's list of UML modeling tools]. A sampling is below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;'''Rational Rose:'''&lt;br /&gt;
&lt;br /&gt;
One of the most popular tools for use-case-driven development.&lt;br /&gt;
&lt;br /&gt;
http://www-306.ibm.com/software/awdtools/developer/rose/index.html&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Sun Java Studio Enterprise:''' &lt;br /&gt;
&lt;br /&gt;
Sun Java Studio Enterprise offers a UML tool. &lt;br /&gt;
&lt;br /&gt;
http://developers.sun.com/jsenterprise/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Visual case:''' &lt;br /&gt;
&lt;br /&gt;
UML &amp;amp; E/R Database Design Tool&lt;br /&gt;
&lt;br /&gt;
http://www.visualcase.com/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different examples of Use Cases.  Some are listed below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.objectmentor.com/resources/articles/usecases.pdf &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.w3.org/2002/06/ws-example &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.agilemodeling.com/essays/useCaseReuse.htm &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.soi.wide.ad.jp/class/20040034/slides/07/9.html &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.cs.colorado.edu/~kena/classes/6448/s05/reference/usecases/examples.html &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://courses.softlab.ntua.gr/softeng/Tutorials/UML-Use-Cases.pdf &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
== Further Reading ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38909</id>
		<title>CSC/ECE 517 Fall 2010/ch4 4a RJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38909"/>
		<updated>2010-10-20T16:45:25Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font size=5&amp;gt;Use Cases&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases can be defined in many ways, and there are many formal definitions. Very simply put, a use case is a reason to use a system. For example, a student borrowing a book from a library would be a use case of the library, or a bank cardholder might need to use an ATM to get cash out of their account. More formally, “a use case is a collection of possible sequences of interactions between the system under discussion and its Users (or Actors), relating to a particular goal” [http://alistair.cockburn.us/Use+case+fundamentals].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A use case is initiated by a user with a particular goal in mind, and completes successfully when that goal is satisfied. The system is treated as a &amp;quot;black box&amp;quot;, and the interactions with system, including system responses, are as perceived from outside the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Use Case Basics ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;quot;Use cases capture who (actor) does what (interaction) with the system, for what purpose (goal), without dealing with system internals. A complete set of use cases specifies all the different ways to use the system, and therefore defines all behavior required of the system, bounding the scope of the system.&amp;quot;[http://www.bredemeyer.com/use_cases.htm]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Terms used with Use cases===&lt;br /&gt;
Now let us define some terms used with use cases: [http://en.wikipedia.org/wiki/Use_case]&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Actor:&amp;lt;/b&amp;gt; An actor is a type of user that interacts with the system (ex student borrowing book or cardholder using ATM). Actors are also external entities (people or other systems) who interact with the system to achieve a desired goal. The goal must be of value to the actor.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Goal:&amp;lt;/b&amp;gt; Without a goal a use case is useless. There is no need for a use case when there is no need for any actor to achieve a goal. A goal briefly describes what the user intends to achieve with this use case. For example, the goal of a student using the library is to obtain the book. There is no point in having a use case like “the student enters the library” as that in itself has no value to the actor.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Stakeholder:&amp;lt;/b&amp;gt; A stakeholder is an individual or department that is affected by the outcome of the use case. Individuals are usually agents of the organization or department for which the use case is being created. A stakeholder might be called on to provide input, feedback, or authorization for the use case. The stakeholder section of the use case can include a brief description of which of these functions the stakeholder is assigned to fulfill.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Trigger:&amp;lt;/b&amp;gt; A trigger describes the event that causes the use case to be initiated. This event can be external or internal. If the trigger is not a simple true &amp;quot;event&amp;quot; (e.g., the customer presses a button), but instead &amp;quot;when a set of conditions are met&amp;quot;, there will need to be a triggering process that continually (or periodically) runs to test whether the &amp;quot;trigger conditions&amp;quot; are met: the &amp;quot;triggering event&amp;quot; is a signal from the trigger process that the conditions are now met. &lt;br /&gt;
In our example with the student, a trigger would be the need for the book due to an approaching exam or test which causes the student to go to the library to borrow a book.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Precondition:&amp;lt;/b&amp;gt; A precondition defines all the conditions that must be true (i.e., describes the state of the system) for the trigger to meaningfully cause the initiation of the use case. That is, if the system is not in the state described in the preconditions, the behavior of the use case is indeterminate. For example, the student should be a member of the library and have the required identity to borrow a book. If the student is not a member of the library, there is no point in the student trying to borrow a book from that library.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Scenarios:&amp;lt;/b&amp;gt; A scenario usually specifies when the use case starts and ends. It describes the interaction with actors and shows the flow of events between a user and the system. For example, when a student tries to borrow a particular book from the library, it doesn’t always turn out the same way. Sometimes the book is available, sometimes it is already borrowed by someone else, and sometimes the library may not have the book at all. These are all examples of use case scenarios. The outcome in each case is different depending on circumstances, but they all relate to the same goal; that is, they are all triggered by the same need (in this case, the need for the book) and all the scenarios have the same starting point.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
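The "triggering process" mentioned under Trigger above, which periodically tests whether a set of conditions is met, can be sketched as a simple polling check. This is a hypothetical illustration; the condition names are invented for the student/library example.

```python
def trigger_conditions_met(state):
    # e.g. an exam is approaching and the student does not yet have the book
    return state["exam_approaching"] and not state["has_book"]

def poll_for_trigger(state):
    """Run periodically; emit the triggering event once the conditions hold."""
    if trigger_conditions_met(state):
        return "initiate use case: borrow book"
    return "no trigger"
```

In a real system this check would run on a timer or in response to state changes; here it is a single pass for clarity.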
&lt;br /&gt;
===Simple Example===&lt;br /&gt;
&amp;lt;p&amp;gt;Now that we know something about use cases, let us go ahead and describe a simple use case:&amp;lt;/p&amp;gt;&lt;br /&gt;
 Use Case 1: Request book from the library (automated system).&lt;br /&gt;
 &lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
   To borrow a particular book from the library&lt;br /&gt;
 &lt;br /&gt;
 1.2: Actors&lt;br /&gt;
   Student&lt;br /&gt;
 &lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
   Student should be a member of the library and have an id.&lt;br /&gt;
 &lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
   System requests the student ID and checks if he/she is a member&lt;br /&gt;
   Student selects “request book”&lt;br /&gt;
   Student enters name(s) of the book(s)&lt;br /&gt;
   System checks for availability of books and displays results accordingly&lt;br /&gt;
   Student confirms the order&lt;br /&gt;
   System displays details of where the requested books are stacked&lt;br /&gt;
 &lt;br /&gt;
 1.5: Alternate Path&lt;br /&gt;
   System does not recognize the ID or the student is not a member. The system will not allow any books to be checked out.&lt;br /&gt;
   All books requested are already checked out. Displays this information to student and closes request.&lt;br /&gt;
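The main and alternate paths above can be sketched as a small executable flow. This is a hypothetical illustration: the member IDs, catalog, and return messages are placeholders, not part of the example as written.

```python
# Illustrative placeholder data for the library system
MEMBERS = {"stu123"}
CATALOG = {"Algorithms": "Shelf 4B"}   # title -> stack location
CHECKED_OUT = {"Compilers"}

def request_book(student_id, title):
    """One pass through the 'request book' use case."""
    if student_id not in MEMBERS:              # alternate path: unknown ID
        return "not a member: cannot check out books"
    if title in CHECKED_OUT:                   # alternate path: already out
        return "already checked out: request closed"
    location = CATALOG.get(title)
    if location is None:
        return "library does not have this book"
    return "book available at " + location     # main path: display location
```

Note how each `return` corresponds to one scenario of the use case, all starting from the same trigger.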
&lt;br /&gt;
===Important Characteristics of Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;The above description shows a very simple use case. However, there are a few essential characteristics to be noticed about the use case:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have identified the key components of a use case, that is, the goal, actors, preconditions, and key scenarios/flows. It is essential that we identify these components before writing a use case.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have not gone into any sort of technical details about implementation or user interface design. Use cases only represent a very high level design. We are only trying to understand the flow and uses of the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have recorded a set of paths (scenarios) that traverse an actor from a trigger event (start of the use case) to the goal (success scenarios).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have recorded a set of scenarios that traverse an actor from a trigger event toward a goal but fall short of the goal (failure scenarios).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Where can we use ‘use cases’?===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases are usually used to capture the requirements of an interaction-based system. When there is a lot of interaction between actors and the system, it makes sense to capture as many interactions and scenarios as possible before starting development of the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases help to eliminate rework due to requirements misunderstandings between developers and stakeholders by aiming to reach a point where there are no surprises for the users. Use cases help to build an explicit shared understanding that everyone can take away with them, the users, developers, testers, technical authors, and others.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases have received some interest as a starting point for test design. By analyzing use cases for the system, we can know various interactions between the system and actors which will help in drawing out test plans.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
===Where can’t we use ‘use cases’?===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://en.wikipedia.org/wiki/Use_case]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use case flows are not well suited to easily capturing non-interaction based requirements of a system (such as algorithm or mathematical requirements) or non-functional requirements (such as platform, performance, timing, or safety-critical aspects). These are better specified declaratively elsewhere.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Some systems are better described in an information/data-driven approach than in a functionality-driven approach of use cases. A good example of this kind of system is data-mining systems used for Business Intelligence. If you were to describe this kind of system in a use case model, it would be quite small and uninteresting (there are not many different functions here) but the set of data that the system handles may nevertheless be large and rich in details.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Common mistakes while writing Use cases: ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The system boundary is undefined or inconsistent.  A system boundary separates the internal components of a system from external entities.  If we are not able to identify the system boundary, we will not be able to clearly define the actors, scenarios, and other essential factors involved in writing a good and useful use case. For example, the system described in the library example has a clear boundary: it accepts book names as input, checks the ID, and provides a location for the books. We know its role very clearly. If the system were instead used to manage everything, such as security and employee details, we would not be able to identify the goal, actors, and scenarios very clearly.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases should not be used to capture all the details of a system. The granularity to which you define use cases in a diagram should be enough to keep the use case diagram uncluttered and readable, yet, be complete without missing significant aspects of the required functionality.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The use cases are written from the system’s (not the actors’) point of view. A writer working from the system’s point of view will tend to drift into technical details. If we wrote the example use case above from the system’s point of view, we would have statements like “obtain location of book from database and display location of books to user”. This is more detail than necessary, and it would not capture the interaction with the actor very clearly. Use cases also give a brief insight into how the UI should look, but when written from the system’s point of view these details might not be captured clearly.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Writing Use Cases ==&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Cases for a system is a process undertaken between those designing the system and the stakeholders.  Use Cases can take many different forms, depending on the type of development process being used [http://www.answers.com/topic/use-case?cat=technology].  The format that a Use Case takes is not as important as the process that it goes through [http://www.gatherspace.com/static/use_case_example.html].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Iterative Process ===&lt;br /&gt;
&amp;lt;p&amp;gt;Normally, this process is done iteratively, so that the iterations can build upon each other [http://www.gatherspace.com/static/use_case_example.html].  Below is an example of three iterations of the Use Case writing process to illustrate how it can reveal things about a system and lay out the functional requirements. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;For this example, we will be creating Use Cases to solve the following problem statement:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Develop a system to allow users to schedule meetings with each other.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this system, the stakeholders will be the users of the system.  The Users will also be the only Actors in the system. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 1 ====&lt;br /&gt;
&amp;lt;p&amp;gt;For the first iteration, we will write out short sentences to describe the functionality that the system will have.  Some use cases could be:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Request a Meeting&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Approve a meeting request&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Suggest a new time&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;View a User's schedule&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this example, the fourth Use Case may not have been an obvious requirement that could be derived from the original problem statement, but in the process of creating the Use Cases, it was discovered that it would be a good requirement for the system to have.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 2 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The next iteration would involve writing out a longer description for each use case. This could be done in paragraph form or as a list.  Below are the Use Cases in paragraph form:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 '''Use Case 1 – Request a meeting'''&lt;br /&gt;
 User A chooses the date and time for a meeting with User B.  User A chooses User B as the recipient of the meeting request.&lt;br /&gt;
 User A sends the meeting request to User B through the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 2 – Approve a Meeting Request'''&lt;br /&gt;
 User A receives a meeting request from User B and accepts the user request.  A notification that User A has accepted the meeting is sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 3 – Suggest a different meeting time'''&lt;br /&gt;
 User A receives a meeting request from User B and suggests a different time and/or date for the meeting.  The response is sent to User B through &lt;br /&gt;
 the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 4 – View a User's schedule'''&lt;br /&gt;
 User A would like to schedule a meeting with User B.  User A starts the system and opens up User B's schedule.  &lt;br /&gt;
 User A can see when User B has already scheduled meetings and User A can then use that information to send User B a meeting request.&lt;br /&gt;
 User A can also access their own schedule to view when User A has scheduled meetings.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;From writing out the Use Cases with more description, it should become clear that Use Case 2 and Use Case 3 are very similar.  In both Use Cases, User A responds to a meeting request and a response/notification is sent to User B.  This might lead the designers of the system to combine these two Use Cases into one Use Case for &amp;quot;Respond to Request.&amp;quot;&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 3 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In the next iteration, we're going to use a Use Case template.  There are many different Use Case templates, which include different information [SOURCES].  A template can be created that is unique for the system being described, provided each Use Case uses the same template. The template we will use will include:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case &amp;lt;Number&amp;gt;: Title&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.1: &amp;lt;Summary/Goal&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.2: &amp;lt;Actors&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.3: &amp;lt;Preconditions&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.4: &amp;lt;Main Path&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.5: &amp;lt;Alternate Paths – including sub-flows [S] and error-flows [E]&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Case 1 in this format yields:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case 1: Request a meeting&amp;lt;br&amp;gt;&lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
 User A can choose the date and time for a meeting with User B [Main].  User A can choose User B as the recipient of the meeting &lt;br /&gt;
 request [Main], or multiple Users as the recipient [S.2].  User A sends the meeting request to User B through the system [Main]. &lt;br /&gt;
 Before User A sends the meeting request, User A can opt not to send the request and delete it [S.1]. If User B is no longer in &lt;br /&gt;
 the system, User A receives notification that the meeting request cannot be sent [E.1].&amp;lt;br&amp;gt;&lt;br /&gt;
 1.2: Actors&lt;br /&gt;
 Users&amp;lt;br&amp;gt;&lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
 - User A and User B are recorded as Users in the system&lt;br /&gt;
 - User A has logged into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
 1) User A chooses a date for a meeting&lt;br /&gt;
 2) User A chooses a time for a meeting&lt;br /&gt;
 3) User A chooses User B as the recipient for the meeting request&lt;br /&gt;
 4) User A submits the meeting request&lt;br /&gt;
 5) User B receives the meeting request the next time User B logs into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.5: Alternative Path&lt;br /&gt;
 S.1 &lt;br /&gt;
 User A creates a meeting request, but does not submit it.  A meeting request is not sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 S.2&lt;br /&gt;
 User A creates a meeting request for more than one User.  User A submits the meeting request and each User receives it the next&lt;br /&gt;
 time they log in&amp;lt;br&amp;gt;&lt;br /&gt;
 E.1&lt;br /&gt;
 User B is no longer a User in the system.  User A receives notification that the meeting request cannot be sent.&amp;lt;br&amp;gt;&lt;br /&gt;
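&lt;br /&gt;
&amp;lt;p&amp;gt;The template above can also be captured as a simple data structure so that every Use Case carries the same fields in the same order. The following Python sketch is purely illustrative; the class and field names are our own invention, not part of any Use Case standard.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
```python
from dataclasses import dataclass, field

# Illustrative sketch only: one record per use case, mirroring the
# <Number>.1 - <Number>.5 sections of the template above.
@dataclass
class UseCase:
    number: int
    title: str
    summary: str                                         # .1 Summary/Goal
    actors: list = field(default_factory=list)           # .2 Actors
    preconditions: list = field(default_factory=list)    # .3 Preconditions
    main_path: list = field(default_factory=list)        # .4 Main Path
    alternate_paths: dict = field(default_factory=dict)  # .5 keyed S.1, E.1, ...

request_meeting = UseCase(
    number=1,
    title="Request a meeting",
    summary="User A sends a meeting request to User B through the system.",
    actors=["User"],
    preconditions=[
        "User A and User B are recorded as Users in the system",
        "User A has logged into the system",
    ],
    main_path=[
        "User A chooses a date for a meeting",
        "User A chooses a time for a meeting",
        "User A chooses User B as the recipient for the meeting request",
        "User A submits the meeting request",
        "User B receives the meeting request at next log-in",
    ],
    alternate_paths={
        "S.1": "User A deletes the request before submitting it",
        "S.2": "User A sends the request to multiple Users",
        "E.1": "User B is no longer in the system; User A is notified",
    },
)
```
&lt;br /&gt;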
&lt;br /&gt;
==== Results ====&lt;br /&gt;
&amp;lt;p&amp;gt;There are a few interesting things revealed about the system by re-writing the Use Case in this format.  The most important is likely that Users will need to &amp;quot;log in&amp;quot; to the system.  This implies that there could be another Actor in the system, namely an Admin.  This leads to these additional Use Cases:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;5&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Log into the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;6&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Create a User in the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Admin&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Each of these new Use Cases would then go through the iterations listed above until they are in the template form. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see from the example, each iteration refines the Use Case and helps to clarify the requirements of the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Use Case Diagrams ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use Case Diagrams are useful for showing how each component in a system will interact with other components of the system [SOURCE].  They are not good for showing the flow of events that a system will have, like the written Use Cases are [SOURCE].  Also, unlike written Use Cases, Use Case Diagrams use UML so that there is a standard.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;UML (the Unified Modeling Language) is a standardized, general-purpose modeling language for visualizing, specifying, and documenting the artifacts of a software system. Because Use Case Diagrams follow the UML standard, they can be read consistently across teams and tools.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Components of a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;UCDs have only 4 major elements: The actors that the system you are describing interacts with, the system itself, the use cases, or services, that the system knows how to perform, and the lines that represent relationships between these elements.[SOURCE: http://www.andrew.cmu.edu/course/90-754/umlucdfaq.html#uses – DIRECT QUOTE]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Actors''' in Use Case Diagrams are represented by stick figures:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Actor.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Use Cases''' are represented by ovals:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UseCase.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Relationships''' are represented by Solid Lines.  Sometimes, arrowheads are added to the lines to indicate the direction of the invocation, or to show which actor is the primary actor [SOURCE: http://www.agilemodeling.com/artifacts/useCaseDiagram.htm]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Lines.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Two special relationships that can be shown in a Use Case Diagram are the ''Extends'' and ''Includes'' relationships.  These relationships are usually shown with a dotted line with an arrowhead and &amp;lt;&amp;lt;extend&amp;gt;&amp;gt; or &amp;lt;&amp;lt;include&amp;gt;&amp;gt; written near the line.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Extends'' relationship is used to show when Use Case X is a special case of Use Case Y [SOURCE].  In this situation, the dotted line is drawn from Use Case X to Use Case Y with the arrowhead pointing to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Includes/Uses'' relationship is used to show that every time Use Case X is done, Use Case Y must also be done [SOURCE].  In this case, the arrow points to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
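&lt;br /&gt;
&amp;lt;p&amp;gt;These two relationships can be illustrated with a small sketch. Below, plain Python functions stand in for Use Cases (the function names are hypothetical, taken from the meeting-scheduling example): an included Use Case runs every time the including one runs, while an extending Use Case is a special case that runs only under its condition.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
```python
# Illustrative sketch: functions stand in for Use Cases.

def view_schedule():
    return ["View Schedule"]

def request_meeting():
    # "Request a Meeting" <<includes>> "View Schedule":
    # the included use case runs every time this one does.
    return view_schedule() + ["Request a Meeting"]

def respond_to_request(accept):
    steps = ["Respond to request"]
    # "Accept Meeting" and "Suggest new time" <<extend>> "Respond to request":
    # exactly one special case runs, depending on the condition.
    steps.append("Accept Meeting" if accept else "Suggest new time")
    return steps
```
&lt;br /&gt;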
&lt;br /&gt;
=== Creating a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A Use Case Diagram for the same system described in the Writing a Use Case might look something like:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UCDiagram.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see, &amp;quot;Accept Meeting&amp;quot; and &amp;quot;Suggest new time&amp;quot; are special cases of &amp;quot;Respond to request&amp;quot;.  Also, from this diagram, the system designers are saying that in order to &amp;quot;Request a Meeting&amp;quot; the user must &amp;quot;View Schedule&amp;quot;.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Advanced Topics ==&lt;br /&gt;
&lt;br /&gt;
===Essential Use Cases vs. System Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://en.wikipedia.org/wiki/Use_case]&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases may be described at the abstract level (business use case, sometimes called essential use case), or at the system level (system use case). The difference between these is the scope.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A &amp;lt;b&amp;gt;business use case&amp;lt;/b&amp;gt; is described in technology-free terminology which treats the system as a black box and describes the business process that is used by its business actors (people or systems external to the process) to achieve their goals (e.g., manual payment processing, expense report approval, manage corporate real estate). The business use case will describe a process that provides value to the business actor, and it describes what the process does. Business Process Mapping is another method for this level of business description. A significant advantage of essential use cases is that they enable you to stand back and ask fundamental questions like &amp;quot;what's really going on&amp;quot; and &amp;quot;what do we really need to do&amp;quot; without letting implementation decisions get in the way.  These questions often lead to critical realizations that allow you to rethink, or reengineer if you prefer that term, aspects of the overall business process.&lt;br /&gt;
A very good example of an essential use case can be seen at this link: http://www.agilemodeling.com/artifacts/essentialUseCase.htm &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A &amp;lt;b&amp;gt;system use case&amp;lt;/b&amp;gt; describes a system that automates a business use case or process. It is normally described at the system functionality level (for example, &amp;quot;create voucher&amp;quot;) and specifies the function or the service that the system provides for the actor. The system use case details what the system will do in response to an actor's actions. For this reason it is recommended that system use case specifications begin with a verb (e.g., create voucher, select payments, exclude payment, cancel voucher). An actor can be a human user or another system/subsystem interacting with the system being defined.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Pluggable Use Cases:===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://alistair.cockburn.us/Pluggable+use+cases]&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases describe the scenarios and sequences of operations through which actors and systems achieve a given goal. As the distinction between business and system use cases shows, use cases can be written in various styles, with varying degrees of detail, and for varying audiences. Business use cases are very readable, with very little technical content, so that stakeholders and business managers can understand the system as a black box. This may not work for developers, who would like to see more technical detail and have use cases that fully describe what a system must do under all circumstances.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Experience has shown that a certain amount of common behavior is replicated across many business use cases. By extracting these common processing details (e.g., create, read, update, delete), the contents of use cases can be reduced. These common processing details can be put into what are called lower-level pluggable use cases. Essentially, we are creating various levels of use cases. At the highest level, the main use case shows the more fundamental processing steps, with the names of the pluggable use cases 'plugged' in between. This abstracts out lower-level details and keeps the use cases simpler. Business managers and stakeholders can read the higher-level use case, which remains simple and readable, while the lower-level details that interest developers are not lost either. These use case levels provide the freedom to read at various degrees of granularity.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Pluggable use cases have to be written in a generic form so that they can be reused wherever needed; they should be generic enough to be plugged into other use cases. All the rules of regular use cases still apply in terms of finding goals (which are essentially sub-goals of the system), actors, etc.&lt;br /&gt;
Pluggable use cases can be produced so that their content is the same for all transactions, that is, common to various scenarios and projects. What differs in each project or scenario is mainly the data handled and the sequence of activities performed. The unique data and rules of each scenario are separated from the process steps and documented independently in &amp;quot;Companion Tables&amp;quot;, which enables flexibility and maximum reuse.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Thus pluggable use cases become building blocks for higher level use cases. They are organized and applied within each use case to reach its goals. Whenever a pluggable use case is invoked, the invocation references the companion table that provides the unique data and rules for that use case. They are most effective when they are used in conjunction with these tables.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use case content is dependent on the requirements of the system under design. This is also true for pluggable use cases. While the majority of pluggable use case content can be used verbatim across any project or company, minor customizations may be needed to accommodate the individual needs of the project and company.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Some use case sequences are simple while others are complex. To manage degrees of complexity, a use case can exercise one, several, or a series of pluggable use cases wherever desired and in any order. To maximize use case cohesion and increase reusability, a pluggable use case may employ another pluggable use case. The versatility of pluggable use cases provides a solid foundation for the construction of project use cases.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The following link has excellent examples of how pluggable use cases can be written and used http://alistair.cockburn.us/Pluggable+use+cases&amp;lt;/p&amp;gt;&lt;br /&gt;
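&lt;br /&gt;
&amp;lt;p&amp;gt;As a rough sketch of the idea (the function and table names here are our own invention, not part of any published notation), a pluggable use case can be pictured as a generic step that reads its scenario-specific data and rules from a companion table, while a main use case is a sequence with those generic steps plugged in between its own steps.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
```python
# Illustrative sketch: generic, reusable "pluggable" steps whose behavior
# varies only through a companion table of scenario-specific data and rules.

def create(entity, table):
    # generic "create" pluggable use case
    return f"create {entity} with fields {table['fields']}"

def validate(entity, table):
    # generic "validate" pluggable use case
    return f"validate {entity} against rules {table['rules']}"

# companion table: the unique data and rules for one scenario
voucher_table = {"fields": ["amount", "payee"], "rules": ["amount > 0"]}

def create_voucher():
    # higher-level (main) use case with pluggable steps plugged in between
    return [
        "Actor requests a new voucher",
        create("voucher", voucher_table),    # pluggable step
        validate("voucher", voucher_table),  # pluggable step
        "System confirms the voucher to the actor",
    ]
```
&lt;br /&gt;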
&lt;br /&gt;
== Tools and Examples ==&lt;br /&gt;
&lt;br /&gt;
=== Tools ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different tools available to create Use Cases. For a more comprehensive list, go to [http://en.wikipedia.org/wiki/List_of_Unified_Modeling_Language_tools Wikipedia's list of UML modeling tools]. A sampling is below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;'''Rational Rose:'''&lt;br /&gt;
&lt;br /&gt;
One of the most popular tools for use-case-driven development.&lt;br /&gt;
&lt;br /&gt;
http://www-306.ibm.com/software/awdtools/developer/rose/index.html&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Sun Java Studio Enterprise:''' &lt;br /&gt;
&lt;br /&gt;
Sun Java Studio Enterprise offers a UML tool. &lt;br /&gt;
&lt;br /&gt;
http://developers.sun.com/jsenterprise/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Visual case:''' &lt;br /&gt;
&lt;br /&gt;
UML &amp;amp; E/R Database Design Tool&lt;br /&gt;
&lt;br /&gt;
http://www.visualcase.com/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different examples of Use Cases.  Some are listed below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.objectmentor.com/resources/articles/usecases.pdf &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.w3.org/2002/06/ws-example &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.agilemodeling.com/essays/useCaseReuse.htm &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.soi.wide.ad.jp/class/20040034/slides/07/9.html &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.cs.colorado.edu/~kena/classes/6448/s05/reference/usecases/examples.html &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://courses.softlab.ntua.gr/softeng/Tutorials/UML-Use-Cases.pdf &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
== Further Reading ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38870</id>
		<title>CSC/ECE 517 Fall 2010/ch4 4a RJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38870"/>
		<updated>2010-10-20T13:22:01Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font size=5&amp;gt;Use Cases&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases can be defined in many ways. There are many formal definitions for it. Very simply put, a use case is a reason to use a system. For example, a student borrowing a book from a library would be a use case of the library or a bank cardholder might need to use an ATM to get cash out of their account. More formally, “a use case is a collection of possible sequences of interactions between the system under discussion and its Users (or Actors), relating to a particular goal” [http://alistair.cockburn.us/Use+case+fundamentals].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A use case is initiated by a user with a particular goal in mind, and completes successfully when that goal is satisfied. The system is treated as a &amp;quot;black box&amp;quot;, and the interactions with the system, including system responses, are as perceived from outside the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Use Case Basics ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;&amp;quot;Use cases capture who (actor) does what (interaction) with the system, for what purpose (goal), without dealing with system internals. A complete set of use cases specifies all the different ways to use the system, and therefore defines all behavior required of the system, bounding the scope of the system.&amp;quot;[http://www.bredemeyer.com/use_cases.htm]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Terms used with Use cases===&lt;br /&gt;
Now let us define some terms used with use cases: [http://en.wikipedia.org/wiki/Use_case]&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Actor:&amp;lt;/b&amp;gt; An actor is a type of user that interacts with the system (e.g., a student borrowing a book or a cardholder using an ATM). Actors are also external entities (people or other systems) who interact with the system to achieve a desired goal. The goal must be of value to the actor.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Goal:&amp;lt;/b&amp;gt; Without a goal a use case is useless. There is no need for a use case when there is no need for any actor to achieve a goal. A goal briefly describes what the user intends to achieve with this use case. For example, the goal of a student using the library is to obtain the book. There is no point in having a use case like “the student enters the library” as that in itself has no value to the actor.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Stakeholder:&amp;lt;/b&amp;gt; A stakeholder is an individual or department that is affected by the outcome of the use case. Individuals are usually agents of the organization or department for which the use case is being created. A stakeholder might be called on to provide input, feedback, or authorization for the use case. The stakeholder section of the use case can include a brief description of which of these functions the stakeholder is assigned to fulfill.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Trigger:&amp;lt;/b&amp;gt; A trigger describes the event that causes the use case to be initiated. This event can be external or internal. If the trigger is not a simple true &amp;quot;event&amp;quot; (e.g., the customer presses a button), but instead &amp;quot;when a set of conditions are met&amp;quot;, there will need to be a triggering process that continually (or periodically) runs to test whether the &amp;quot;trigger conditions&amp;quot; are met: the &amp;quot;triggering event&amp;quot; is a signal from the trigger process that the conditions are now met. &lt;br /&gt;
In our example with the student, a trigger would be the need for the book due to an approaching exam or test which causes the student to go to the library to borrow a book.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Precondition:&amp;lt;/b&amp;gt; A precondition defines all the conditions that must be true (i.e., describes the state of the system) for the trigger to meaningfully cause the initiation of the use case. That is, if the system is not in the state described in the preconditions, the behavior of the use case is indeterminate. For example, the student should be a member of the library and have the required identity to borrow a book. If the student is not a member of the library, there is no point in the student trying to borrow a book from that library.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Scenarios:&amp;lt;/b&amp;gt; A scenario usually specifies when the use case starts and ends. It describes the interaction with actors and shows the flow of events between a user and the system. For example, when a student tries to borrow a particular book from the library, it doesn’t always turn out the same way. Sometimes the book is available, sometimes it is already borrowed by someone else, and sometimes the library may not have the book at all. These are all examples of use case scenarios. The outcome in each case is different depending on circumstances, but they all relate to the same goal; that is, they are all triggered by the same need (in this case, the need for the book) and all the scenarios have the same starting point.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Simple Example===&lt;br /&gt;
&amp;lt;p&amp;gt;Now that we know something about use cases, let us go ahead and describe a simple use case:&amp;lt;/p&amp;gt;&lt;br /&gt;
 Use Case 1: Request book from the library (automated system).&lt;br /&gt;
 &lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
   To borrow a particular book from the library&lt;br /&gt;
 &lt;br /&gt;
 1.2: Actors&lt;br /&gt;
   Student&lt;br /&gt;
 &lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
   Student should be a member of the library and have an ID.&lt;br /&gt;
 &lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
   System requests the student's ID and checks if he/she is a member&lt;br /&gt;
   Student selects “request book”&lt;br /&gt;
   Student enters name(s) of the book(s)&lt;br /&gt;
   System checks for availability of books and displays results accordingly&lt;br /&gt;
   Student confirms the order&lt;br /&gt;
   System displays details of where the requested books are stacked&lt;br /&gt;
 &lt;br /&gt;
 1.5: Alternate Path&lt;br /&gt;
   System does not recognize the ID or the student is not a member. The system will not allow any books to be checked out.&lt;br /&gt;
   All books requested are already checked out. Displays this information to student and closes request.&lt;br /&gt;
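&lt;br /&gt;
&amp;lt;p&amp;gt;Because a use case treats the system as a black box, the main and alternate paths above can be checked against a small behavioral sketch. The Python below is only an illustration of the flow; the membership list, stack locations, and messages are invented for this example.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
```python
# Illustrative sketch of the library use case: membership check first,
# then availability, then the stack location (main path).
members = {"s123"}                       # invented membership records
stacks = {"Algorithms": "Shelf 4B"}      # invented book -> location map
checked_out = {"Databases"}              # invented already-borrowed titles

def request_book(student_id, title):
    if student_id not in members:
        # alternate path: ID not recognized / not a member
        return "ID not recognized; no books may be checked out"
    if title in checked_out or title not in stacks:
        # alternate path: book already checked out or not held by the library
        return f"'{title}' is not available; request closed"
    # main path: confirm the order and show where the book is stacked
    return f"'{title}' is stacked at {stacks[title]}"
```
&lt;br /&gt;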
&lt;br /&gt;
===Important Characteristics of Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;The above description shows a very simple use case. However, there are a few essential characteristics to be noticed about the use case:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have identified the key components of a use case, that is, the goal, actors, preconditions, and key scenarios/flows. It is essential that we identify these components before writing a use case.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have not gone into any sort of technical details about implementation or user interface design. Use cases only represent a very high level design. We are only trying to understand the flow and uses of the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have recorded a set of paths (scenarios) that traverse an actor from a trigger event (start of the use case) to the goal (success scenarios).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have recorded a set of scenarios that traverse an actor from a trigger event toward a goal but fall short of the goal (failure scenarios).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Where can we use ‘use cases’?===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases are usually used to capture the requirements of an interaction-based system. When there is a lot of interaction between actors and the system, it makes sense to capture as many interactions and scenarios as possible before starting development of the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases help to eliminate rework due to requirements misunderstandings between developers and stakeholders by aiming to reach a point where there are no surprises for the users. Use cases help to build an explicit shared understanding that everyone can take away with them, the users, developers, testers, technical authors, and others.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases have received some interest as a starting point for test design. By analyzing use cases for the system, we can know various interactions between the system and actors which will help in drawing out test plans.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
===Where can’t we use ‘use cases’?===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://en.wikipedia.org/wiki/Use_case]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use case flows are not well suited to easily capturing non-interaction based requirements of a system (such as algorithm or mathematical requirements) or non-functional requirements (such as platform, performance, timing, or safety-critical aspects). These are better specified declaratively elsewhere.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Some systems are better described in an information/data-driven approach than in a functionality-driven approach of use cases. A good example of this kind of system is data-mining systems used for Business Intelligence. If you were to describe this kind of system in a use case model, it would be quite small and uninteresting (there are not many different functions here) but the set of data that the system handles may nevertheless be large and rich in details.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Common mistakes while writing Use cases: ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The system boundary is undefined or inconsistent.  A system boundary separates the internal components of a system from external entities.  If we cannot identify the system boundary, we will not be able to clearly define the actors, scenarios, and other essential factors involved in writing a good and useful use case. For example, the system described in the library example has a clear boundary: it accepts book names as input, checks the ID, and provides a location for the books. We know its role very clearly. If the system were instead used to manage everything, such as security, employee details, etc., we would not be able to identify the goal, actors, and scenarios very clearly.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases should not be used to capture all the details of a system. The granularity to which you define use cases in a diagram should be enough to keep the use case diagram uncluttered and readable, yet, be complete without missing significant aspects of the required functionality.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The use cases are written from the system’s (not the actors’) point of view. Use cases written from a system point of view tend to draw the writer into technical details. If we wrote the example use case above from the system’s point of view, we would have statements like “obtain location of book from database and display location of books to user”. This is more detail than necessary, and it would not capture the interaction with the actor very clearly. Use cases also give a brief insight into how the UI should look, but when written from the system’s point of view these details might not be captured clearly.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Writing Use Cases ==&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Cases for a system is a process undertaken between those designing the system and the stakeholders.  Use Cases can take many different forms, depending on the type of development process being used [http://www.answers.com/topic/use-case?cat=technology].  The format that a Use Case takes is not as important as the process that it goes through [SOURCE].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Iterative Process ===&lt;br /&gt;
&amp;lt;p&amp;gt;Normally, this process is done iteratively, so that the iterations can build upon each other [SOURCE].  Below is an example of three iterations of the Use Case writing process to illustrate how it can reveal things about a system and layout the functional requirements. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;For this example, we will be creating Use Cases to solve the following problem statement:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Develop a system to allow users to schedule meetings with each other.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this system, the stakeholders will be the users of the system.  The Users will also be the only Actors in the system. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 1 ====&lt;br /&gt;
&amp;lt;p&amp;gt;For the first iteration, we will write out short sentences to describe the functionality that the system will have.  Some use cases could be:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Request a Meeting&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Approve a meeting request&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Suggest a new time&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;View a User's schedule&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this example, the fourth Use Case may not have been an obvious requirement that could be derived from the original problem statement, but in the process of creating the Use Cases, it was discovered that it would be a good requirement for the system to have.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 2 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The next iteration would involve writing out longer descriptions for each. This could be done in paragraph form, or by writing a list.  Below are the Use Cases in paragraph form:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 '''Use Case 1 – Request a meeting'''&lt;br /&gt;
 User A chooses the date and time for a meeting with User B.  User A chooses User B as the recipient of the meeting request.&lt;br /&gt;
 User A sends the meeting request to User B through the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 2 – Approve a Meeting Request'''&lt;br /&gt;
 User A receives a meeting request from User B and accepts the user request.  A notification that User A has accepted the meeting is sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 3 – Suggest a different meeting time'''&lt;br /&gt;
 User A receives a meeting request from User B and suggests a different time and/or date for the meeting.  The response is sent to User B through &lt;br /&gt;
 the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 4 – View a User's schedule'''&lt;br /&gt;
 User A would like to schedule a meeting with User B.  User A starts the system and opens up User B's schedule.  &lt;br /&gt;
 User A can see when User B has already scheduled meetings and can then use that information to send User B a meeting request.&lt;br /&gt;
 User A can also access their own schedule to view when they have scheduled meetings.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;From writing out the Use Cases with more description, it should become clear that Use Case 2 and Use Case 3 are very similar.  In both Use Cases, User A responds to a meeting request and a response/notification is sent to User B.  This might lead the designers of the system to combine these two Use Cases into one Use Case for &amp;quot;Respond to Request.&amp;quot;&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 3 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In the next iteration, we're going to use a Use Case template.  There are many different Use Case templates, which include different information [SOURCES].  A template can be created that is unique for the system being described, provided each Use Case uses the same template. The template we will use will include:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case &amp;lt;Number&amp;gt;: Title&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.1: &amp;lt;Summary/Goal&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.2: &amp;lt;Actors&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.3: &amp;lt;Preconditions&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.4: &amp;lt;Main Path&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.5: &amp;lt;Alternate Paths – including sub-flows [S] and error-flows [E]&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Case 1 in this format yields:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case 1: Request a meeting&amp;lt;br&amp;gt;&lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
 User A can choose the date and time for a meeting with User B [Main].  User A can choose User B as the recipient of the meeting &lt;br /&gt;
 request [Main], or multiple Users as the recipient [S.2].  User A sends the meeting request to User B through the system [Main]. &lt;br /&gt;
 Before User A sends the meeting request, User A can opt not to send the request and delete it [S.1]. If User B is no longer in &lt;br /&gt;
 the system, User A receives notification that the meeting request cannot be sent [E.1].&amp;lt;br&amp;gt;&lt;br /&gt;
 1.2: Actors&lt;br /&gt;
 Users&amp;lt;br&amp;gt;&lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
 - User A and User B are recorded as Users in the system&lt;br /&gt;
 - User A has logged into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
 1) User A chooses a date for a meeting&lt;br /&gt;
 2) User A chooses a time for a meeting&lt;br /&gt;
 3) User A chooses User B as the recipient for the meeting request&lt;br /&gt;
 4) User A submits the meeting request&lt;br /&gt;
 5) User B receives the meeting request the next time User B logs into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.5: Alternate Paths&lt;br /&gt;
 S.1 &lt;br /&gt;
 User A creates a meeting request, but does not submit it.  A meeting request is not sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 S.2&lt;br /&gt;
 User A creates a meeting request for more than one User.  User A submits the meeting request and each User receives it the next&lt;br /&gt;
 time they log in&amp;lt;br&amp;gt;&lt;br /&gt;
 E.1&lt;br /&gt;
 User B is no longer a User in the system.  User A receives notification that the meeting request cannot be sent.&amp;lt;br&amp;gt;&lt;br /&gt;
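The template instance above can be sketched as a small structure that distinguishes the main path from sub-flows and error-flows. This is an illustrative sketch only; the "S.n"/"E.n" labels follow the template above, everything else is made up for the example:

```python
# Use Case 1 in the template form above; sub-flows are keyed "S.n",
# error-flows "E.n", matching the labels used in the written use case.
use_case_1 = {
    "title": "Request a meeting",
    "actors": ["User"],
    "preconditions": [
        "User A and User B are recorded as Users in the system",
        "User A has logged into the system",
    ],
    "main_path": [
        "User A chooses a date for a meeting",
        "User A chooses a time for a meeting",
        "User A chooses User B as the recipient for the meeting request",
        "User A submits the meeting request",
        "User B receives the request at next log-in",
    ],
    "alternate_paths": {
        "S.1": "User A creates a request but does not submit it",
        "S.2": "User A sends the request to more than one User",
        "E.1": "User B is no longer a User; User A is notified",
    },
}

# Separate sub-flows from error-flows by their label prefix
sub_flows = [k for k in use_case_1["alternate_paths"] if k.startswith("S")]
error_flows = [k for k in use_case_1["alternate_paths"] if k.startswith("E")]
print(sub_flows, error_flows)
```

Labeling flows this way lets reviewers quickly count how many failure scenarios a use case handles versus how many optional variations it allows.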
&lt;br /&gt;
==== Results ====&lt;br /&gt;
&amp;lt;p&amp;gt;There are a few interesting things revealed about the system by re-writing the Use Case in this format.  The most important is likely that Users will need to &amp;quot;log-in&amp;quot; to the system.  This implies that there could be another Actor in the system, namely an Admin.  This leads to these additional Use Cases:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;5&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Log into the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;6&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Create a User in the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Admin&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Each of these new Use Cases would then go through the iterations listed above until they are in the template form. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see from the example, each iteration refines the Use Case and helps to clarify the requirements of the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Use Case Diagrams ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use Case Diagrams are useful for showing how each component in a system will interact with other components of the system [SOURCE].  They are not good for showing the flow of events that a system will have, like the written Use Cases are [SOURCE].  Also, unlike written Use Cases, Use Case Diagrams use UML so that there is a standard.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;UML, the Unified Modeling Language, is a standardized, general-purpose modeling language for specifying and visualizing the design of software systems, maintained by the Object Management Group (OMG).&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Components of a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;UCDs have only 4 major elements: The actors that the system you are describing interacts with, the system itself, the use cases, or services, that the system knows how to perform, and the lines that represent relationships between these elements.[SOURCE: http://www.andrew.cmu.edu/course/90-754/umlucdfaq.html#uses – DIRECT QUOTE]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Actors''' in Use Case Diagrams are represented by stick figures:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Actor.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Use Cases''' are represented by ovals:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UseCase.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Relationships''' are represented by Solid Lines.  Sometimes, arrowheads are added to the lines to indicate the direction of the invocation, or to show which actor is the primary actor [SOURCE: http://www.agilemodeling.com/artifacts/useCaseDiagram.htm]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Lines.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Two special relationships that can be shown in a Use Case Diagram are the ''Extends'' and ''Includes'' relationships.  These relationships are usually shown with a dotted line with an arrowhead and &amp;lt;&amp;lt;extend&amp;gt;&amp;gt; or &amp;lt;&amp;lt;include&amp;gt;&amp;gt; written near the line.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Extends'' relationship is used to show when Use Case X is a special case of Use Case Y [SOURCE].  In this situation, the dotted line is drawn from Use Case X to Use Case Y with the arrowhead pointing to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Includes/Uses'' relationship is used to show that every time Use Case X is done, Use Case Y must also be done [SOURCE].  In this case, the arrow points to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
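The two relationship kinds can be treated as edges in a small graph: resolving every ''Includes'' edge yields the full set of use cases that must run together, while ''Extends'' edges identify specializations of a base use case. A minimal sketch, using the meeting-scheduler use cases from this chapter as data (the function name is illustrative):

```python
# <<include>>: whenever X runs, Y must also run.
# <<extend>>: X is a special case of Y.
includes = {"Request a Meeting": ["View Schedule"]}
extends = {
    "Accept Meeting": "Respond to request",
    "Suggest new time": "Respond to request",
}

def required_use_cases(name, includes):
    """All use cases that must execute when `name` does (transitive includes)."""
    result = [name]
    for inc in includes.get(name, []):
        result.extend(required_use_cases(inc, includes))
    return result

print(required_use_cases("Request a Meeting", includes))

# Walking <<extend>> the other way lists the specializations of a base case
specials = [x for x, base in extends.items() if base == "Respond to request"]
print(sorted(specials))
```

A check like this can catch diagram mistakes early, e.g. an ''Includes'' edge pointing at a use case that was later renamed or removed.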
&lt;br /&gt;
=== Creating a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A Use Case Diagram for the same system described in the Writing Use Cases section might look something like:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UCDiagram.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see, &amp;quot;Accept Meeting&amp;quot; and &amp;quot;Suggest new time&amp;quot; are special cases of &amp;quot;Respond to request&amp;quot;.  Also, from this diagram, the system designers are saying that in order to &amp;quot;Request a Meeting&amp;quot; the user must &amp;quot;View Schedule&amp;quot;.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Advanced Topics ==&lt;br /&gt;
&lt;br /&gt;
===Essential Use Cases vs. System Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://en.wikipedia.org/wiki/Use_case]&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases may be described at the abstract level (business use case, sometimes called essential use case), or at the system level (system use case). The difference between these is the scope.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A &amp;lt;b&amp;gt;business use case&amp;lt;/b&amp;gt; is described in technology-free terminology which treats the system as a black box and describes the business process that is used by its business actors (people or systems external to the process) to achieve their goals (e.g., manual payment processing, expense report approval, manage corporate real estate). The business use case will describe a process that provides value to the business actor, and it describes what the process does. Business Process Mapping is another method for this level of business description. A significant advantage of essential use cases is that they enable you to stand back and ask fundamental questions like &amp;quot;what's really going on&amp;quot; and &amp;quot;what do we really need to do&amp;quot; without letting implementation decisions get in the way.  These questions often lead to critical realizations that allow you to rethink, or reengineer if you prefer that term, aspects of the overall business process.&lt;br /&gt;
A very good example of an essential use case can be seen at this link: http://www.agilemodeling.com/artifacts/essentialUseCase.htm &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A &amp;lt;b&amp;gt;system use case&amp;lt;/b&amp;gt; describes a system that automates a business use case or process. It is normally described at the system functionality level (for example, &amp;quot;create voucher&amp;quot;) and specifies the function or the service that the system provides for the actor. The system use case details what the system will do in response to an actor's actions. For this reason it is recommended that system use case specification begin with a verb (e.g., create voucher, select payments, exclude payment, cancel voucher). An actor can be a human user or another system/subsystem interacting with the system being defined.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Pluggable Use Cases:===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://alistair.cockburn.us/Pluggable+use+cases]&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases describe various scenarios and sequences of operations between actors and systems to achieve a given goal. We have seen that we can write use cases in various styles, with varying degrees of detail, varying aims, and varying audiences, in terms of business and system use cases. Business use cases are very readable, with very little technical content, so that stakeholders and business managers can understand the system as a black box. This may not work for developers, who would like to see more technical details and have use cases that fully describe what a system must do under all circumstances.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Experience has shown that some amount of common behavior is replicated in many business use cases. By extracting these common processing details (e.g., create, read, update, delete), the contents of use cases can be reduced. These common processing details can be put into what are called lower-level pluggable use cases. Essentially, we are creating various levels of use cases. At the highest level, what we can call the main use case shows the more fundamental processing steps, with the names of the pluggable use cases 'plugged' in between. This abstracts out lower-level details and keeps the use cases simpler. It also means business managers and stakeholders can read the higher-level use case, which remains simple and readable, while the lower-level details that interest developers are not lost. These use case levels provide the freedom of reading at various degrees of granularity.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Pluggable use cases have to be written in a generic form so that they can be used wherever needed. They should be generic enough to be plugged into other use cases. All the rules of regular use cases still apply in terms of finding goals (which are essentially sub-goals of the system), actors, etc.&lt;br /&gt;
Pluggable use cases can be produced so that their content is the same for all transactions, that is, common to various scenarios and projects. The difference in each project or scenario is mainly the data handled and the sequence of activities performed. The unique data and rules of each scenario are separated from the process steps and documented independently in &amp;quot;Companion Tables&amp;quot;, which enables flexibility and maximum reuse.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Thus pluggable use cases become building blocks for higher level use cases. They are organized and applied within each use case to reach its goals. Whenever a pluggable use case is invoked, the invocation references the companion table that provides the unique data and rules for that use case. They are most effective when they are used in conjunction with these tables.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use case content is dependent on the requirements of the system under design. This is also true for pluggable use cases. While the majority of pluggable use case content can be used verbatim across any project or company, minor customizations may be needed to accommodate the individual needs of the project and company.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Some use case sequences are simple while others are complex. To manage degrees of complexity, a use case can exercise one, multiple or a series of pluggable use cases wherever desired and in any order. To maximize use case cohesion and increase reusability a pluggable use case may employ another pluggable use case. The versatility of pluggable use cases provides a solid foundation for the construction of project use cases.        &amp;lt;/p&amp;gt;&lt;br /&gt;
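The pairing of a generic pluggable use case with a companion table can be sketched as a parameterized step: the step's logic is shared, and the table supplies the data and rules unique to each invoking use case. All names below are illustrative, not from any referenced methodology:

```python
# A generic, reusable "create record" pluggable use case. The companion
# table supplies the data and validation rules unique to each invocation.
def create_record(companion):
    """Generic CRUD-style pluggable step: validate required fields, then 'store'."""
    missing = [f for f in companion["required_fields"]
               if f not in companion["data"]]
    if missing:
        return ("rejected", missing)   # error-flow: rules in the table not met
    return ("created", companion["data"])

# Companion table for one invoking use case ("Request a meeting")
meeting_table = {
    "required_fields": ["date", "time", "recipient"],
    "data": {"date": "2011-04-19", "time": "10:00", "recipient": "User B"},
}
status, payload = create_record(meeting_table)
print(status)
```

The same `create_record` step could be reused by an entirely different use case (say, registering a library book) simply by invoking it with a different companion table.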
&lt;br /&gt;
&amp;lt;p&amp;gt;The following link has excellent examples of how pluggable use cases can be written and used http://alistair.cockburn.us/Pluggable+use+cases&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tools and Examples ==&lt;br /&gt;
&lt;br /&gt;
=== Tools ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different tools available to create Use Cases. For a more comprehensive list, see [http://en.wikipedia.org/wiki/List_of_Unified_Modeling_Language_tools Wikipedia's list of UML modeling tools]. A sampling is below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;'''Rational Rose:'''&lt;br /&gt;
&lt;br /&gt;
One of the most popular tools for use-case-driven development.&lt;br /&gt;
&lt;br /&gt;
http://www-306.ibm.com/software/awdtools/developer/rose/index.html&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Sun Java Studio Enterprise:''' &lt;br /&gt;
&lt;br /&gt;
Sun Java Studio Enterprise offers a UML tool. &lt;br /&gt;
&lt;br /&gt;
http://developers.sun.com/jsenterprise/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Visual case:''' &lt;br /&gt;
&lt;br /&gt;
UML &amp;amp; E/R Database Design Tool&lt;br /&gt;
&lt;br /&gt;
http://www.visualcase.com/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different examples of Use Cases.  Some are listed below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.objectmentor.com/resources/articles/usecases.pdf &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.w3.org/2002/06/ws-example &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.agilemodeling.com/essays/useCaseReuse.htm &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.soi.wide.ad.jp/class/20040034/slides/07/9.html &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.cs.colorado.edu/~kena/classes/6448/s05/reference/usecases/examples.html &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://courses.softlab.ntua.gr/softeng/Tutorials/UML-Use-Cases.pdf &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
== Further Reading ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38869</id>
		<title>CSC/ECE 517 Fall 2010/ch4 4a RJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38869"/>
		<updated>2010-10-20T13:10:42Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font size=5&amp;gt;Use Cases&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
== Use Case Basics ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases can be defined in many ways, and there are many formal definitions. Very simply put, a use case is a reason to use a system. For example, a student borrowing a book from a library would be a use case of the library, or a bank cardholder might need to use an ATM to get cash out of their account. More formally, “a use case is a collection of possible sequences of interactions between the system under discussion and its Users (or Actors), relating to a particular goal” [http://alistair.cockburn.us/Use+case+fundamentals].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A use case is initiated by a user with a particular goal in mind, and completes successfully when that goal is satisfied. The system is treated as a &amp;quot;black box&amp;quot;, and the interactions with system, including system responses, are as perceived from outside the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases capture who (actor) does what (interaction) with the system, for what purpose (goal), without dealing with system internals. A complete set of use cases specifies all the different ways to use the system, and therefore defines all behavior required of the system, bounding the scope of the system.[http://www.bredemeyer.com/use_cases.htm]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Terms used with Use cases===&lt;br /&gt;
Now let us define some terms used with use cases: [http://en.wikipedia.org/wiki/Use_case]&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Actor:&amp;lt;/b&amp;gt; An actor is a type of user that interacts with the system (e.g., a student borrowing a book or a cardholder using an ATM). Actors are external entities (people or other systems) who interact with the system to achieve a desired goal. The goal must be of value to the actor.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Goal:&amp;lt;/b&amp;gt; Without a goal a use case is useless. There is no need for a use case when there is no need for any actor to achieve a goal. A goal briefly describes what the user intends to achieve with this use case. For example, the goal of a student using the library is to obtain the book. There is no point in having a use case like “the student enters the library” as that in itself has no value to the actor.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Stakeholder:&amp;lt;/b&amp;gt; A stakeholder is an individual or department that is affected by the outcome of the use case. Individuals are usually agents of the organization or department for which the use case is being created. A stakeholder might be called on to provide input, feedback, or authorization for the use case. The stakeholder section of the use case can include a brief description of which of these functions the stakeholder is assigned to fulfill.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Trigger:&amp;lt;/b&amp;gt; A trigger describes the event that causes the use case to be initiated. This event can be external or internal. If the trigger is not a simple true &amp;quot;event&amp;quot; (e.g., the customer presses a button), but instead &amp;quot;when a set of conditions are met&amp;quot;, there will need to be a triggering process that continually (or periodically) runs to test whether the &amp;quot;trigger conditions&amp;quot; are met: the &amp;quot;triggering event&amp;quot; is a signal from the trigger process that the conditions are now met. &lt;br /&gt;
In our example with the student, a trigger would be the need for the book due to an approaching exam or test which causes the student to go to the library to borrow a book.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Precondition:&amp;lt;/b&amp;gt; A precondition defines all the conditions that must be true (i.e., describes the state of the system) for the trigger to meaningfully cause the initiation of the use case. That is, if the system is not in the state described in the preconditions, the behavior of the use case is indeterminate. For example, the student should be a member of the library and have the required identity to borrow a book. If the student is not a member of the library, there is no point in the student trying to borrow a book from that library.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;b&amp;gt;Scenarios:&amp;lt;/b&amp;gt; A scenario usually specifies when the use case starts and ends. It describes the interaction with actors and shows the flow of events between a user and the system. For example, when a student tries to borrow a particular book from the library, it does not always turn out the same way. Sometimes the book is available, sometimes it is already borrowed by someone else, and sometimes the library does not have the book at all. These are all examples of use case scenarios. The outcome in each case is different depending on circumstances, but they all relate to the same goal; that is, they are all triggered by the same need (in this case, the need for the book) and all the scenarios have the same starting point.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Simple Example===&lt;br /&gt;
&amp;lt;p&amp;gt;Now that we know something about use cases, let us go ahead and describe a simple use case:&amp;lt;/p&amp;gt;&lt;br /&gt;
 Use Case 1: Request book from the library (automated system).&lt;br /&gt;
 &lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
   To borrow a particular book from the library&lt;br /&gt;
 &lt;br /&gt;
 1.2: Actors&lt;br /&gt;
   Student&lt;br /&gt;
 &lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
   Student should be a member of the library and have an ID.&lt;br /&gt;
 &lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
   System requests the student's ID and checks whether he/she is a member&lt;br /&gt;
   Student selects “request book”&lt;br /&gt;
   Student enters name(s) of the book(s)&lt;br /&gt;
   System checks for availability of books and displays results accordingly&lt;br /&gt;
   Student confirms the order&lt;br /&gt;
   System displays details of where the requested books are stacked&lt;br /&gt;
 &lt;br /&gt;
 1.5: Alternate Path&lt;br /&gt;
   System does not recognize the ID or the student is not a member. The system will not allow any books to be checked out.&lt;br /&gt;
   All books requested are already checked out. Displays this information to student and closes request.&lt;br /&gt;
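&amp;lt;p&amp;gt;The main and alternate paths above map directly onto ordinary control flow: the membership check and the availability check are the two branch points. The sketch below is purely illustrative; the function and parameter names (request_book, members, stacks) are invented for this example and are not part of any real library system.&amp;lt;/p&amp;gt;&lt;br /&gt;

```python
# Illustrative sketch of Use Case 1.  The two alternate paths are the
# two early returns; the main path runs straight through to the stack
# locations of the available books.

def request_book(student_id, titles, members, stacks):
    """members: set of valid IDs; stacks: mapping of title to location."""
    # Alternate path 1: ID not recognized or student is not a member.
    if student_id not in members:
        return "not a member"
    # System checks availability of each requested title.
    available = {t: stacks[t] for t in titles if t in stacks}
    # Alternate path 2: all requested books are already checked out.
    if not available:
        return "all checked out"
    # Main path: student confirms; system displays stack locations.
    return available

members = {"s123"}
stacks = {"UML Distilled": "Shelf 4B"}
print(request_book("s999", ["UML Distilled"], members, stacks))
print(request_book("s123", ["UML Distilled"], members, stacks))
```

&amp;lt;p&amp;gt;Note how the sketch stays at the same level of abstraction as the use case: no database or UI details appear, only the actor-visible outcomes.&amp;lt;/p&amp;gt;&lt;br /&gt;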
&lt;br /&gt;
===Important Characteristics of Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;The above description shows a very simple use case. However, there are a few essential characteristics to be noticed about the use case:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have identified the key components of a use case, that is, the goal, actors, trigger, preconditions, and key scenarios/flows. It is essential that we identify these components before writing a use case.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have not gone into any sort of technical details about implementation or user interface design. Use cases only represent a very high level design. We are only trying to understand the flow and uses of the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have recorded a set of paths (scenarios) that traverse an actor from a trigger event (start of the use case) to the goal (success scenarios).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We have recorded a set of scenarios that traverse an actor from a trigger event toward a goal but fall short of the goal (failure scenarios).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Where can we use ‘use cases’?===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases are usually used to capture the requirements of an interaction-based system. When there is a lot of interaction between actors and the system, it makes sense to capture as many interactions and scenarios as possible before starting development of the system.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases help to eliminate rework due to requirements misunderstandings between developers and stakeholders by aiming to reach a point where there are no surprises for the users. Use cases help to build an explicit shared understanding that everyone can take away with them: users, developers, testers, technical authors, and others.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases have received some interest as a starting point for test design. By analyzing use cases for the system, we can know various interactions between the system and actors which will help in drawing out test plans.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
===Where can’t we use ‘use cases’?===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://en.wikipedia.org/wiki/Use_case]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use case flows are not well suited to easily capturing non-interaction based requirements of a system (such as algorithm or mathematical requirements) or non-functional requirements (such as platform, performance, timing, or safety-critical aspects). These are better specified declaratively elsewhere.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Some systems are better described in an information/data-driven approach than in a functionality-driven approach of use cases. A good example of this kind of system is data-mining systems used for Business Intelligence. If you were to describe this kind of system in a use case model, it would be quite small and uninteresting (there are not many different functions here) but the set of data that the system handles may nevertheless be large and rich in details.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Common Mistakes While Writing Use Cases===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The system boundary is undefined or inconsistent.  A system boundary is a boundary that separates the internal components of a system from external entities.  If we are not able to identify the system boundaries, we will not be able to clearly define the actors, scenarios, and other essential factors involved in writing a good and useful use case. For example, the system described in the library example has a clear boundary. It accepts book names as input, checks the ID, and provides a location for the books. We know its role very clearly. If the system were instead used to manage everything, such as security, employee details, etc., we would not be able to identify the goal, actors, and scenarios very clearly.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use cases should not be used to capture all the details of a system. The granularity to which you define use cases in a diagram should be enough to keep the use case diagram uncluttered and readable, yet, be complete without missing significant aspects of the required functionality.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;The use cases are written from the system’s (not the actors’) point of view. Use cases written from a system point of view tend to lead the writer into technical details. If we wrote the example use case above from the system's point of view, we would have statements like “obtain location of book from database and display location of books to user”. This is more detail than necessary, and it does not capture the interaction with the actor very clearly. Use cases also give a brief insight into how the UI should look, but when written from the system's point of view these details might not be captured clearly.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Writing Use Cases ==&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Cases for a system is a process undertaken between those designing the system and the Stakeholders.  There are many different ways to write Use Cases [SOURCE].  Because of this, there are many different formats that Use Cases can take when they are written.  There are, however, certain guidelines that should be followed in the process of writing Use Cases.  In general, these guidelines are:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Never include implementation specific terminology in the Use Case [SOURCE]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Each Use Case should be a set of scenarios, which include different flows of events [SOURCE]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
=== Iterative Process ===&lt;br /&gt;
&amp;lt;p&amp;gt;Normally, this process is done iteratively, so that the iterations can build upon each other [SOURCE].  Below is an example of three iterations of the Use Case writing process to illustrate how it can reveal things about a system and lay out the functional requirements. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;For this example, we will be creating Use Cases to solve the following problem statement:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Develop a system to allow users to schedule meetings with each other.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this system, the stakeholders will be the users of the system.  The Users will also be the only Actors in the system. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 1 ====&lt;br /&gt;
&amp;lt;p&amp;gt;For the first iteration, we will write out short sentences to describe the functionality that the system will have.  Some use cases could be:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Request a Meeting&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Approve a meeting request&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Suggest a new time&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;View a User's schedule&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this example, the fourth Use Case may not have been an obvious requirement that could be derived from the original problem statement, but in the process of creating the Use Cases, it was discovered that it would be a good requirement for the system to have.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 2 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The next iteration would involve writing out longer descriptions for each Use Case. This could be done in paragraph form, or by writing a list.  Below are the Use Cases in paragraph form:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 '''Use Case 1 – Request a meeting'''&lt;br /&gt;
 User A chooses the date and time for a meeting with User B.  User A chooses User B as the recipient of the meeting request.&lt;br /&gt;
 User A sends the meeting request to User B through the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 2 – Approve a Meeting Request'''&lt;br /&gt;
 User A receives a meeting request from User B and accepts the user request.  A notification that User A has accepted the meeting is sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 3 – Suggest a different meeting time'''&lt;br /&gt;
 User A receives a meeting request from User B and suggests a different time and/or date for the meeting.  The response is sent to User B through &lt;br /&gt;
 the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 4 – View a User's schedule'''&lt;br /&gt;
 User A would like to schedule a meeting with User B.  User A starts the system and opens up User B's schedule.  &lt;br /&gt;
 User A can see when User B has already scheduled  meetings and User A can then use that information to send User B a meeting request.&lt;br /&gt;
 User A can also access their own schedule to view when User A has scheduled meetings.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;From writing out the Use Cases with more description, it should become clear that Use Case 2 and Use Case 3 are very similar.  In both Use Cases, User A responds to a meeting request and a response/notification is sent to User B.  This might lead the designers of the system to combine these two Use Cases into one Use Case for &amp;quot;Respond to Request.&amp;quot;&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 3 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In the next iteration, we're going to use a Use Case template.  There are many different Use Case templates, which include different information [SOURCES].  A template can be created that is unique for the system being described, provided each Use Case uses the same template. The template we will use will include:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case &amp;lt;Number&amp;gt;: Title&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.1: &amp;lt;Summary/Goal&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.2: &amp;lt;Actors&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.3: &amp;lt;Preconditions&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.4: &amp;lt;Main Path&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.5: &amp;lt;Alternate Paths – including sub-flows [S] and error-flows [E]&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
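&amp;lt;p&amp;gt;Because every Use Case written against this template carries the same five fields, the template can also be pictured as a small record type. The sketch below is one possible rendering in Python; no Use Case tool prescribes this representation, and the field values are invented for illustration.&amp;lt;/p&amp;gt;&lt;br /&gt;

```python
from dataclasses import dataclass, field

# Each attribute mirrors one numbered section of the template above.
@dataclass
class UseCase:
    number: int
    title: str
    goal: str                 # Number.1: Summary/Goal
    actors: list              # Number.2: Actors
    preconditions: list       # Number.3: Preconditions
    main_path: list           # Number.4: Main Path, as ordered steps
    # Number.5: Alternate Paths, keyed by sub-flow (S.n) or error-flow (E.n)
    alternate_paths: dict = field(default_factory=dict)

uc1 = UseCase(
    number=1,
    title="Request a meeting",
    goal="User A sends a meeting request to User B",
    actors=["User"],
    preconditions=["A and B are Users in the system", "A is logged in"],
    main_path=["choose date", "choose time", "choose recipient", "submit"],
    alternate_paths={"E.1": "recipient is no longer in the system"},
)
```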
&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Case 1 in this format yields:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case 1: Request a meeting&amp;lt;br&amp;gt;&lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
 User A can choose the date and time for a meeting with User B [Main].  User A can choose User B as the recipient of the meeting &lt;br /&gt;
 request [Main], or multiple Users as the recipient [S.2].  User A sends the meeting request to User B through the system [Main]. &lt;br /&gt;
 Before User A sends the meeting request, User A can opt not to send the request and delete it [S.1]. If User B is no longer in &lt;br /&gt;
 the system, User A receives notification that the meeting request cannot be sent [E.1].&amp;lt;br&amp;gt;&lt;br /&gt;
 1.2: Actors&lt;br /&gt;
 Users&amp;lt;br&amp;gt;&lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
 - User A and User B are recorded as Users in the system&lt;br /&gt;
 - User A has logged into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
 1) User A chooses a date for a meeting&lt;br /&gt;
 2) User A chooses a time for a meeting&lt;br /&gt;
 3) User A chooses User B as the recipient for the meeting request&lt;br /&gt;
 4) User A submits the meeting request&lt;br /&gt;
 5) User B receives the meeting request the next time User B logs into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.5: Alternate Paths&lt;br /&gt;
 S.1 &lt;br /&gt;
 User A creates a meeting request, but does not submit it.  A meeting request is not sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 S.2&lt;br /&gt;
 User A creates a meeting request for more than one User.  User A submits the meeting request and each User receives it the next&lt;br /&gt;
 time they log in&amp;lt;br&amp;gt;&lt;br /&gt;
 E.1&lt;br /&gt;
 User B is no longer a User in the system.  User A receives notification that the meeting request cannot be sent.&amp;lt;br&amp;gt;&lt;br /&gt;
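&amp;lt;p&amp;gt;The [Main], [S.1], [S.2] and [E.1] tags above can be read as branches of one flow. In the hypothetical sketch below, E.1 and S.1 are the early exits, while the main path and S.2 differ only in how many recipients receive the queued request; all names are invented for illustration, not taken from any real system.&amp;lt;/p&amp;gt;&lt;br /&gt;

```python
# Hypothetical sketch of Use Case 1's main and alternate paths.

def submit_request(sender, recipients, when, users, submit=True):
    """users: set of registered user names; returns (outcome, queued)."""
    missing = [r for r in recipients if r not in users]
    if missing:       # E.1: a recipient is no longer in the system
        return "cannot be sent", []
    if not submit:    # S.1: sender opts not to send and deletes the draft
        return "deleted", []
    # Main path and S.2: one pending request per recipient, delivered
    # the next time that recipient logs into the system.
    return "sent", [(sender, r, when) for r in recipients]

users = {"A", "B", "C"}
print(submit_request("A", ["B"], "Mon 10:00", users))       # Main
print(submit_request("A", ["B", "C"], "Mon 10:00", users))  # S.2
print(submit_request("A", ["Z"], "Mon 10:00", users))       # E.1
```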
&lt;br /&gt;
==== Results ====&lt;br /&gt;
&amp;lt;p&amp;gt;There are a few interesting things revealed about the system by re-writing the Use Case in this format.  The most important is likely that Users will need to &amp;quot;log-in&amp;quot; to the system.  This implies that there could be another Actor in the system, namely an Admin.  This leads to these additional Use Cases:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;5&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Log into the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;6&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Create a User in the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Admin&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Each of these new Use Cases would then go through the iterations listed above until they are in the template form. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see from the example, each iteration refines the Use Case and helps to clarify the requirements of the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Use Case Diagrams ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use Case Diagrams are useful for showing how each component in a system will interact with other components of the system [SOURCE].  They are not good for showing the flow of events that a system will have, the way written Use Cases are [SOURCE].  Also, unlike written Use Cases, Use Case Diagrams use UML notation, so there is a standard.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;UML (the Unified Modeling Language) is a standardized, general-purpose modeling notation for visualizing, specifying, and documenting the artifacts of software systems, maintained by the Object Management Group (OMG).&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Components of a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;UCDs have only 4 major elements: The actors that the system you are describing interacts with, the system itself, the use cases, or services, that the system knows how to perform, and the lines that represent relationships between these elements.[SOURCE: http://www.andrew.cmu.edu/course/90-754/umlucdfaq.html#uses – DIRECT QUOTE]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Actors''' in Use Case Diagrams are represented by stick figures:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Actor.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Use Cases''' are represented by ovals:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UseCase.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Relationships''' are represented by Solid Lines.  Sometimes, arrowheads are added to the lines to indicate the direction of the invocation, or to show which actor is the primary actor [SOURCE: http://www.agilemodeling.com/artifacts/useCaseDiagram.htm]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Lines.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Two special relationships that can be shown in a Use Case Diagram are the ''Extends'' and ''Includes'' relationships.  These relationships are usually shown with a dotted line with an arrowhead and &amp;lt;&amp;lt;extend&amp;gt;&amp;gt; or &amp;lt;&amp;lt;include&amp;gt;&amp;gt; written near the line.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Extends'' relationship is used to show when Use Case X is a special case of Use Case Y [SOURCE].  In this situation, the dotted line is drawn from Use Case X to Use Case Y with the arrowhead pointing to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Includes/Uses'' relationship is used to show that every time Use Case X is done, Use Case Y must also be done [SOURCE].  In this case, the arrow points to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
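&amp;lt;p&amp;gt;The semantic difference between the two relationships can be summed up as: an included Use Case always runs as part of its base, while an extending Use Case runs only when its condition holds. The toy interpreter below illustrates that difference; it is an informal sketch, not UML tool syntax, and the edge representation is invented for this example.&amp;lt;/p&amp;gt;&lt;br /&gt;

```python
# Each base use case lists (kind, target, condition) edges.  "include"
# edges always fire; "extend" edges fire only when their condition holds.

def run(use_case, edges, trace):
    trace.append(use_case)
    for kind, target, cond in edges.get(use_case, []):
        if kind == "include" or (kind == "extend" and cond):
            run(target, edges, trace)

edges = {
    "Request a Meeting": [("include", "View Schedule", True)],
    "Respond to Request": [("extend", "Suggest new time", False),
                           ("extend", "Accept Meeting", True)],
}
trace = []
run("Request a Meeting", edges, trace)
print(trace)   # "View Schedule" is always reached from "Request a Meeting"
```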
&lt;br /&gt;
=== Creating a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A Use Case Diagram for the same system described in the Writing Use Cases section might look something like:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UCDiagram.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see, &amp;quot;Accept Meeting&amp;quot; and &amp;quot;Suggest new time&amp;quot; are special cases of &amp;quot;Respond to request&amp;quot;.  Also, from this diagram, the system designers are saying that in order to &amp;quot;Request a Meeting&amp;quot; the user must &amp;quot;View Schedule&amp;quot;.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Advanced Topics ==&lt;br /&gt;
&lt;br /&gt;
===Essential Use Cases vs. System Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://en.wikipedia.org/wiki/Use_case]&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases may be described at the abstract level (business use case, sometimes called essential use case), or at the system level (system use case). The difference between these is the scope.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A &amp;lt;b&amp;gt;business use case&amp;lt;/b&amp;gt; is described in technology-free terminology which treats the system as a black box and describes the business process that is used by its business actors (people or systems external to the process) to achieve their goals (e.g., manual payment processing, expense report approval, managing corporate real estate). The business use case describes a process that provides value to the business actor, and it describes what the process does. Business Process Mapping is another method for this level of business description. A significant advantage of essential use cases is that they enable you to stand back and ask fundamental questions like &amp;quot;what's really going on&amp;quot; and &amp;quot;what do we really need to do&amp;quot; without letting implementation decisions get in the way.  These questions often lead to critical realizations that allow you to rethink, or reengineer if you prefer that term, aspects of the overall business process.&lt;br /&gt;
A very good example of an essential use case can be seen at this link: http://www.agilemodeling.com/artifacts/essentialUseCase.htm &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;A &amp;lt;b&amp;gt;system use case&amp;lt;/b&amp;gt; describes a system that automates a business use case or process. It is normally described at the system functionality level (for example, &amp;quot;create voucher&amp;quot;) and specifies the function or the service that the system provides for the actor. The system use case details what the system will do in response to an actor's actions. For this reason it is recommended that system use case specifications begin with a verb (e.g., create voucher, select payments, exclude payment, cancel voucher). An actor can be a human user or another system/subsystem interacting with the system being defined.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Pluggable Use Cases===&lt;br /&gt;
&amp;lt;p&amp;gt;[http://alistair.cockburn.us/Pluggable+use+cases]&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Use cases describe various scenarios and sequences of operations to achieve a given goal between actors and systems. We have seen that we can write use cases in various styles, with varying degrees of detail, varying aims, and varying audiences, in terms of business and system use cases. Business use cases are very readable, with very little technical content, so that stakeholders and business managers can understand the system as a black box. This may not work for developers, who would like to see more technical detail and have use cases that describe fully what a system must do under all circumstances.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Experience has shown that a certain amount of common behavior is replicated across many business use cases. By extracting these common processing details (e.g., create, read, update, delete), the contents of use cases can be reduced. These common processing details can be put into what are called lower-level pluggable use cases. Essentially, we are creating various levels of use cases. At the highest level, the main use case shows the more fundamental processing steps, with the names of the pluggable use cases 'plugged' in between. This abstracts out lower-level details and keeps the use cases simpler. It also means business managers and stakeholders can read the higher-level use case, which remains simple and readable, while the lower-level details that interest developers are not lost either. These use case levels provide the freedom to read at various degrees of granularity.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Pluggable use cases have to be written in a generic form so that they can be used wherever needed. They should be generic enough to be plugged into other use cases. All the rules of regular use cases still apply in terms of finding goals (which are essentially sub-goals of the system), actors, etc.&lt;br /&gt;
Pluggable use cases can be produced in a way where their content is the same for all transactions, that is, common to various scenarios and projects. The difference in each project or scenario is mainly the data handled and the sequence of activities performed. The unique data and rules of each scenario are separated from the process steps and documented independently in &amp;quot;Companion Tables&amp;quot;, which enables flexibility and maximum reuse.&lt;br /&gt;
&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Thus pluggable use cases become building blocks for higher level use cases. They are organized and applied within each use case to reach its goals. Whenever a pluggable use case is invoked, the invocation references the companion table that provides the unique data and rules for that use case. They are most effective when they are used in conjunction with these tables.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use case content is dependent on the requirements of the system under design. This is also true for pluggable use cases. While the majority of pluggable use case content can be used verbatim across any project or company, minor customizations may be needed to accommodate the individual needs of the project and company.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Some use case sequences are simple while others are complex. To manage degrees of complexity, a use case can exercise one, multiple or a series of pluggable use cases wherever desired and in any order. To maximize use case cohesion and increase reusability a pluggable use case may employ another pluggable use case. The versatility of pluggable use cases provides a solid foundation for the construction of project use cases.        &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The following link has excellent examples of how pluggable use cases can be written and used: http://alistair.cockburn.us/Pluggable+use+cases&amp;lt;/p&amp;gt;&lt;br /&gt;
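&amp;lt;p&amp;gt;The structure of pluggable use cases and companion tables can be made concrete in a few lines of code: the generic steps are written once, and the companion table supplies the scenario-specific data and rules. This is only an analogy; the names below (validate, create_voucher) are invented for the example and do not come from any published pluggable use case.&amp;lt;/p&amp;gt;&lt;br /&gt;

```python
# Two generic "pluggable" steps, reused by a higher-level use case.
# The companion table carries the data and rules unique to the scenario.

def validate(record, required, log):
    """Pluggable step: check that all required fields are present."""
    log.append("validate")
    return required.issubset(record)

def create(store, key, record, log):
    """Pluggable step: generic create."""
    store[key] = record
    log.append("create " + key)

def create_voucher(store, companion, log):
    """Higher-level use case with the pluggable steps plugged in."""
    if validate(companion["data"], companion["required"], log):
        create(store, companion["data"]["id"], companion["data"], log)
        return "created"
    return "rejected"

companion = {"required": {"id", "amount"},
             "data": {"id": "V-1", "amount": 40}}
store, log = {}, []
print(create_voucher(store, companion, log))
```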
&lt;br /&gt;
== Tools and Examples ==&lt;br /&gt;
&lt;br /&gt;
=== Tools ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different tools available to create Use Cases. For a more comprehensive list, go to [http://en.wikipedia.org/wiki/List_of_Unified_Modeling_Language_tools Wikipedia's list of UML modeling tools]. A sampling is below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;'''Rational Rose:'''&lt;br /&gt;
&lt;br /&gt;
One of the most popular tools for use-case driven development.&lt;br /&gt;
&lt;br /&gt;
http://www-306.ibm.com/software/awdtools/developer/rose/index.html&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Sun Java Studio Enterprise:''' &lt;br /&gt;
&lt;br /&gt;
Sun Java Studio Enterprise offers a UML tool. &lt;br /&gt;
&lt;br /&gt;
http://developers.sun.com/jsenterprise/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Visual case:''' &lt;br /&gt;
&lt;br /&gt;
UML &amp;amp; E/R Database Design Tool&lt;br /&gt;
&lt;br /&gt;
http://www.visualcase.com/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different examples of Use Cases.  Some are listed below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.objectmentor.com/resources/articles/usecases.pdf &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.w3.org/2002/06/ws-example &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.agilemodeling.com/essays/useCaseReuse.htm &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.soi.wide.ad.jp/class/20040034/slides/07/9.html &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.cs.colorado.edu/~kena/classes/6448/s05/reference/usecases/examples.html &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://courses.softlab.ntua.gr/softeng/Tutorials/UML-Use-Cases.pdf &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
== Further Reading ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38789</id>
		<title>CSC/ECE 517 Fall 2010/ch4 4a RJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38789"/>
		<updated>2010-10-20T00:29:34Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font size=5&amp;gt;Use Cases&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
== Use Case Basics ==&lt;br /&gt;
== Writing Use Cases ==&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Cases for a system is a process undertaken between those designing the system and the Stakeholders.  There are many different ways to write Use Cases [SOURCE].  Because of this, there are many different formats that Use Cases can take when they are written.  There are, however, certain guidelines that should be followed in the process of writing Use Cases.  In general, these guidelines are:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Never include implementation specific terminology in the Use Case [SOURCE]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Each Use Case should be a set of scenarios, which include different flows of events [SOURCE]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
=== Iterative Process ===&lt;br /&gt;
&amp;lt;p&amp;gt;Normally, this process is done iteratively, so that the iterations can build upon each other [SOURCE].  Below is an example of three iterations of the Use Case writing process to illustrate how it can reveal things about a system and lay out the functional requirements. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;For this example, we will be creating Use Cases to solve the following problem statement:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Develop a system to allow users to schedule meetings with each other.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this system, the stakeholders will be the users of the system.  The Users will also be the only Actors in the system. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 1 ====&lt;br /&gt;
&amp;lt;p&amp;gt;For the first iteration, we will write out short sentences to describe the functionality that the system will have.  Some use cases could be:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Request a Meeting&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Approve a meeting request&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Suggest a new time&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;View a User's schedule&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this example, the fourth Use Case may not be an obvious requirement derivable from the original problem statement; it was discovered in the process of creating the Use Cases that it would be a good requirement for the system to have.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 2 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The next iteration would involve writing out longer descriptions for each. This could be done in paragraph form, or by writing a list.  Below are the Use Cases in paragraph form:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 '''Use Case 1 – Request a meeting'''&lt;br /&gt;
 User A chooses the date and time for a meeting with User B.  User A chooses User B as the recipient of the meeting request.&lt;br /&gt;
 User A sends the meeting request to User B through the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 2 – Approve a Meeting Request'''&lt;br /&gt;
 User A receives a meeting request from User B and accepts the meeting request.  A notification that User A has accepted the meeting is sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 3 – Suggest a different meeting time'''&lt;br /&gt;
 User A receives a meeting request from User B and suggests a different time and/or date for the meeting.  The response is sent to User B through &lt;br /&gt;
 the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 4 – View a User's schedule'''&lt;br /&gt;
 User A would like to schedule a meeting with User B.  User A starts the system and opens up User B's schedule.  &lt;br /&gt;
 User A can see when User B has already scheduled meetings and User A can then use that information to send User B a meeting request.&lt;br /&gt;
 User A can also access their own schedule to view when User A has scheduled meetings.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;From writing out the Use Cases with more description, it should become clear that Use Case 2 and Use Case 3 are very similar.  In both Use Cases, User A responds to a meeting request and a response/notification is sent to User B.  This might lead the designers of the system to combine these two Use Cases into one Use Case for &amp;quot;Respond to Request.&amp;quot;&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 3 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In the next iteration, we're going to use a Use Case template.  There are many different Use Case templates, which include different information [SOURCES].  A template can be created that is unique for the system being described, provided each Use Case uses the same template. The template we will use will include:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case &amp;lt;Number&amp;gt;: Title&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.1: &amp;lt;Summary/Goal&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.2: &amp;lt;Actors&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.3: &amp;lt;Preconditions&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.4: &amp;lt;Main Path&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.5: &amp;lt;Alternate Paths – including sub-flows [S] and error-flows [E]&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Case 1 in this format yields:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case 1: Request a meeting&amp;lt;br&amp;gt;&lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
 User A can choose the date and time for a meeting with User B [Main].  User A can choose User B as the recipient of the meeting &lt;br /&gt;
 request [Main], or multiple Users as the recipient [S.2].  User A sends the meeting request to User B through the system [Main]. &lt;br /&gt;
 Before User A sends the meeting request, User A can opt not to send the request and delete it [S.1]. If User B is no longer in &lt;br /&gt;
 the system, User A receives notification that the meeting request cannot be sent [E.1].&amp;lt;br&amp;gt;&lt;br /&gt;
 1.2: Actors&lt;br /&gt;
 Users&amp;lt;br&amp;gt;&lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
 - User A and User B are recorded as Users in the system&lt;br /&gt;
 - User A has logged into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
 1) User A chooses a date for a meeting&lt;br /&gt;
 2) User A chooses a time for a meeting&lt;br /&gt;
 3) User A chooses User B as the recipient for the meeting request&lt;br /&gt;
 4) User A submits the meeting request&lt;br /&gt;
 5) User B receives the meeting request the next time User B logs into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.5: Alternative Path&lt;br /&gt;
 S.1 &lt;br /&gt;
 User A creates a meeting request, but does not submit it.  A meeting request is not sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 S.2&lt;br /&gt;
 User A creates a meeting request for more than one User.  User A submits the meeting request and each User receives it the next&lt;br /&gt;
 time they log in&amp;lt;br&amp;gt;&lt;br /&gt;
 E.1&lt;br /&gt;
 User B is no longer a User in the system.  User A receives notification that the meeting request cannot be sent.&amp;lt;br&amp;gt;&lt;br /&gt;
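For teams that also want to track requirements programmatically, the template above can be captured as a simple record type. Below is a minimal, illustrative sketch in Python; the class and field names are assumptions chosen to mirror the template's numbered parts, not part of any standard:

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    # Field names mirror the template: <Number>: Title, then .1 through .5
    number: int
    title: str
    goal: str                 # <Number>.1: Summary/Goal
    actors: list              # <Number>.2: Actors
    preconditions: list       # <Number>.3: Preconditions
    main_path: list           # <Number>.4: Main Path, one step per entry
    alternate_paths: dict = field(default_factory=dict)  # <Number>.5, keyed S.1/E.1 etc.

# Use Case 1 from this section, expressed in the record type above
request_meeting = UseCase(
    number=1,
    title="Request a meeting",
    goal="User A schedules a meeting with User B through the system.",
    actors=["User"],
    preconditions=[
        "User A and User B are recorded as Users in the system",
        "User A has logged into the system",
    ],
    main_path=[
        "User A chooses a date for a meeting",
        "User A chooses a time for a meeting",
        "User A chooses User B as the recipient for the meeting request",
        "User A submits the meeting request",
        "User B receives the meeting request the next time User B logs into the system",
    ],
    alternate_paths={
        "S.1": "User A creates a meeting request but does not submit it",
        "S.2": "User A sends the meeting request to more than one User",
        "E.1": "User B is no longer in the system; User A is notified the request cannot be sent",
    },
)

print(len(request_meeting.main_path))  # 5
```

Keeping every Use Case in the same structure enforces the guideline that each one follows the same template, and makes it easy to, say, list all error-flows across the system.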
&lt;br /&gt;
==== Results ====&lt;br /&gt;
&amp;lt;p&amp;gt;There are a few interesting things revealed about the system by re-writing the Use Case in this format.  The most important is likely that Users will need to log in to the system.  This implies that there could be another Actor in the system, namely an Admin.  This leads to these additional Use Cases:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;5&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Log into the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;6&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Create a User in the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Admin&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Each of these new Use Cases would then go through the iterations listed above until they are in the template form. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see from the example, each iteration refines the Use Case and helps to clarify the requirements of the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Use Case Diagrams ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use Case Diagrams are useful for showing how each component in a system interacts with the other components of the system [SOURCE].  They are not well suited to showing the flow of events in a system, as written Use Cases are [SOURCE].  Also, unlike written Use Cases, Use Case Diagrams use UML, so there is a standard notation.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;DEFINE UML HERE&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Components of a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;UCDs have only 4 major elements: The actors that the system you are describing interacts with, the system itself, the use cases, or services, that the system knows how to perform, and the lines that represent relationships between these elements.[SOURCE: http://www.andrew.cmu.edu/course/90-754/umlucdfaq.html#uses – DIRECT QUOTE]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Actors''' in Use Case Diagrams are represented by stick figures:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Actor.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Use Cases''' are represented by ovals:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UseCase.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Relationships''' are represented by solid lines.  Sometimes, arrowheads are added to the lines to indicate the direction of the invocation, or to show which actor is the primary actor [SOURCE: http://www.agilemodeling.com/artifacts/useCaseDiagram.htm].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Lines.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Two special relationships that can be shown in a Use Case Diagram are the ''Extends'' and ''Includes'' relationships.  These relationships are usually shown with a dotted line with an arrowhead and &amp;lt;&amp;lt;extend&amp;gt;&amp;gt; or &amp;lt;&amp;lt;include&amp;gt;&amp;gt; written near the line.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Extends'' relationship is used to show when Use Case X is a special case of Use Case Y [SOURCE].  In this situation, the dotted line is drawn from Use Case X to Use Case Y with the arrowhead pointing to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Includes/Uses'' relationship is used to show that every time Use Case X is done, Use Case Y must also be done [SOURCE].  In this case, the arrow points to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Creating a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A Use Case Diagram for the same system described in the Writing Use Cases section might look something like this:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UCDiagram.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see, &amp;quot;Accept Meeting&amp;quot; and &amp;quot;Suggest new time&amp;quot; are special cases of &amp;quot;Respond to request&amp;quot;.  Also, from this diagram, the system designers are saying that in order to &amp;quot;Request a Meeting&amp;quot; the user must &amp;quot;View Schedule&amp;quot;.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Advanced Topics ==&lt;br /&gt;
== Tools and Examples ==&lt;br /&gt;
&lt;br /&gt;
=== Tools ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different tools available for creating Use Cases. For a more comprehensive list, see [http://en.wikipedia.org/wiki/List_of_Unified_Modeling_Language_tools Wikipedia's list of UML modeling tools]. A sampling is below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;'''Rational Rose:'''&lt;br /&gt;
&lt;br /&gt;
One of the most popular tools for use-case-driven development.&lt;br /&gt;
&lt;br /&gt;
http://www-306.ibm.com/software/awdtools/developer/rose/index.html&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Sun Java Studio Enterprise:''' &lt;br /&gt;
&lt;br /&gt;
Sun Java Studio Enterprise offers a UML tool. &lt;br /&gt;
&lt;br /&gt;
http://developers.sun.com/jsenterprise/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Visual case:''' &lt;br /&gt;
&lt;br /&gt;
UML &amp;amp; E/R Database Design Tool&lt;br /&gt;
&lt;br /&gt;
http://www.visualcase.com/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different examples of Use Cases.  Some are listed below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.objectmentor.com/resources/articles/usecases.pdf &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.w3.org/2002/06/ws-example &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.agilemodeling.com/essays/useCaseReuse.htm &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.soi.wide.ad.jp/class/20040034/slides/07/9.html &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://www.cs.colorado.edu/~kena/classes/6448/s05/reference/usecases/examples.html &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; http://courses.softlab.ntua.gr/softeng/Tutorials/UML-Use-Cases.pdf &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
== Further Reading ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38788</id>
		<title>CSC/ECE 517 Fall 2010/ch4 4a RJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38788"/>
		<updated>2010-10-20T00:18:16Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font size=5&amp;gt;Use Cases&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
== Use Case Basics ==&lt;br /&gt;
== Writing Use Cases ==&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Cases for a system is a process undertaken by those designing the system together with the Stakeholders.  There are many different ways to write Use Cases [SOURCE].  Because of this, there are many different formats that Use Cases can take when they are written.  There are, however, certain guidelines that should be followed in the process of writing Use Cases.  In general, these guidelines are:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Never include implementation specific terminology in the Use Case [SOURCE]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Each Use Case should be a set of scenarios, which include different flows of events [SOURCE]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
=== Iterative Process ===&lt;br /&gt;
&amp;lt;p&amp;gt;Normally, this process is done iteratively, so that the iterations can build upon each other [SOURCE].  Below is an example of three iterations of the Use Case writing process to illustrate how it can reveal things about a system and lay out the functional requirements. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;For this example, we will be creating Use Cases to solve the following problem statement:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Develop a system to allow users to schedule meetings with each other.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this system, the stakeholders will be the users of the system.  The Users will also be the only Actors in the system. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 1 ====&lt;br /&gt;
&amp;lt;p&amp;gt;For the first iteration, we will write out short sentences to describe the functionality that the system will have.  Some use cases could be:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Request a Meeting&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Approve a meeting request&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Suggest a new time&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;View a User's schedule&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this example, the fourth Use Case may not be an obvious requirement derivable from the original problem statement; it was discovered in the process of creating the Use Cases that it would be a good requirement for the system to have.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 2 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The next iteration would involve writing out longer descriptions for each. This could be done in paragraph form, or by writing a list.  Below are the Use Cases in paragraph form:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 '''Use Case 1 – Request a meeting'''&lt;br /&gt;
 User A chooses the date and time for a meeting with User B.  User A chooses User B as the recipient of the meeting request.&lt;br /&gt;
 User A sends the meeting request to User B through the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 2 – Approve a Meeting Request'''&lt;br /&gt;
 User A receives a meeting request from User B and accepts the meeting request.  A notification that User A has accepted the meeting is sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 3 – Suggest a different meeting time'''&lt;br /&gt;
 User A receives a meeting request from User B and suggests a different time and/or date for the meeting.  The response is sent to User B through &lt;br /&gt;
 the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 4 – View a User's schedule'''&lt;br /&gt;
 User A would like to schedule a meeting with User B.  User A starts the system and opens up User B's schedule.  &lt;br /&gt;
 User A can see when User B has already scheduled meetings and User A can then use that information to send User B a meeting request.&lt;br /&gt;
 User A can also access their own schedule to view when User A has scheduled meetings.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;From writing out the Use Cases with more description, it should become clear that Use Case 2 and Use Case 3 are very similar.  In both Use Cases, User A responds to a meeting request and a response/notification is sent to User B.  This might lead the designers of the system to combine these two Use Cases into one Use Case for &amp;quot;Respond to Request.&amp;quot;&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 3 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In the next iteration, we're going to use a Use Case template.  There are many different Use Case templates, which include different information [SOURCES].  A template can be created that is unique for the system being described, provided each Use Case uses the same template. The template we will use will include:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case &amp;lt;Number&amp;gt;: Title&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.1: &amp;lt;Summary/Goal&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.2: &amp;lt;Actors&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.3: &amp;lt;Preconditions&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.4: &amp;lt;Main Path&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.5: &amp;lt;Alternate Paths – including sub-flows [S] and error-flows [E]&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Case 1 in this format yields:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case 1: Request a meeting&amp;lt;br&amp;gt;&lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
 User A can choose the date and time for a meeting with User B [Main].  User A can choose User B as the recipient of the meeting &lt;br /&gt;
 request [Main], or multiple Users as the recipient [S.2].  User A sends the meeting request to User B through the system [Main]. &lt;br /&gt;
 Before User A sends the meeting request, User A can opt not to send the request and delete it [S.1]. If User B is no longer in &lt;br /&gt;
 the system, User A receives notification that the meeting request cannot be sent [E.1].&amp;lt;br&amp;gt;&lt;br /&gt;
 1.2: Actors&lt;br /&gt;
 Users&amp;lt;br&amp;gt;&lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
 - User A and User B are recorded as Users in the system&lt;br /&gt;
 - User A has logged into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
 1) User A chooses a date for a meeting&lt;br /&gt;
 2) User A chooses a time for a meeting&lt;br /&gt;
 3) User A chooses User B as the recipient for the meeting request&lt;br /&gt;
 4) User A submits the meeting request&lt;br /&gt;
 5) User B receives the meeting request the next time User B logs into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.5: Alternative Path&lt;br /&gt;
 S.1 &lt;br /&gt;
 User A creates a meeting request, but does not submit it.  A meeting request is not sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 S.2&lt;br /&gt;
 User A creates a meeting request for more than one User.  User A submits the meeting request and each User receives it the next&lt;br /&gt;
 time they log in&amp;lt;br&amp;gt;&lt;br /&gt;
 E.1&lt;br /&gt;
 User B is no longer a User in the system.  User A receives notification that the meeting request cannot be sent.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Results ====&lt;br /&gt;
&amp;lt;p&amp;gt;There are a few interesting things revealed about the system by re-writing the Use Case in this format.  The most important is likely that Users will need to log in to the system.  This implies that there could be another Actor in the system, namely an Admin.  This leads to these additional Use Cases:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;5&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Log into the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;6&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Create a User in the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Admin&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Each of these new Use Cases would then go through the iterations listed above until they are in the template form. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see from the example, each iteration refines the Use Case and helps to clarify the requirements of the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Use Case Diagrams ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use Case Diagrams are useful for showing how each component in a system interacts with the other components of the system [SOURCE].  They are not well suited to showing the flow of events in a system, as written Use Cases are [SOURCE].  Also, unlike written Use Cases, Use Case Diagrams use UML, so there is a standard notation.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;DEFINE UML HERE&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Components of a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;UCDs have only 4 major elements: The actors that the system you are describing interacts with, the system itself, the use cases, or services, that the system knows how to perform, and the lines that represent relationships between these elements.[SOURCE: http://www.andrew.cmu.edu/course/90-754/umlucdfaq.html#uses – DIRECT QUOTE]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Actors''' in Use Case Diagrams are represented by stick figures:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Actor.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Use Cases''' are represented by ovals:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UseCase.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Relationships''' are represented by solid lines.  Sometimes, arrowheads are added to the lines to indicate the direction of the invocation, or to show which actor is the primary actor [SOURCE: http://www.agilemodeling.com/artifacts/useCaseDiagram.htm].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Lines.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Two special relationships that can be shown in a Use Case Diagram are the ''Extends'' and ''Includes'' relationships.  These relationships are usually shown with a dotted line with an arrowhead and &amp;lt;&amp;lt;extend&amp;gt;&amp;gt; or &amp;lt;&amp;lt;include&amp;gt;&amp;gt; written near the line.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Extends'' relationship is used to show when Use Case X is a special case of Use Case Y [SOURCE].  In this situation, the dotted line is drawn from Use Case X to Use Case Y with the arrowhead pointing to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Includes/Uses'' relationship is used to show that every time Use Case X is done, Use Case Y must also be done [SOURCE].  In this case, the arrow points to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Creating a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A Use Case Diagram for the same system described in the Writing Use Cases section might look something like this:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UCDiagram.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see, &amp;quot;Accept Meeting&amp;quot; and &amp;quot;Suggest new time&amp;quot; are special cases of &amp;quot;Respond to request&amp;quot;.  Also, from this diagram, the system designers are saying that in order to &amp;quot;Request a Meeting&amp;quot; the user must &amp;quot;View Schedule&amp;quot;.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Advanced Topics ==&lt;br /&gt;
== Tools and Examples ==&lt;br /&gt;
&lt;br /&gt;
=== Tools ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different tools available for creating Use Cases. For a more comprehensive list, see [http://en.wikipedia.org/wiki/List_of_Unified_Modeling_Language_tools Wikipedia's list of UML modeling tools]. A sampling is below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;'''Rational Rose:'''&lt;br /&gt;
&lt;br /&gt;
One of the most popular tools for use-case-driven development.&lt;br /&gt;
&lt;br /&gt;
http://www-306.ibm.com/software/awdtools/developer/rose/index.html&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Sun Java Studio Enterprise:''' &lt;br /&gt;
&lt;br /&gt;
Sun Java Studio Enterprise offers a UML tool. &lt;br /&gt;
&lt;br /&gt;
http://developers.sun.com/jsenterprise/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Visual case:''' &lt;br /&gt;
&lt;br /&gt;
UML &amp;amp; E/R Database Design Tool&lt;br /&gt;
&lt;br /&gt;
http://www.visualcase.com/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
== Further Reading ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38784</id>
		<title>CSC/ECE 517 Fall 2010/ch4 4a RJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38784"/>
		<updated>2010-10-20T00:16:28Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font size=5&amp;gt;Use Cases&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
== Use Case Basics ==&lt;br /&gt;
== Writing Use Cases ==&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Cases for a system is a process undertaken by those designing the system together with the Stakeholders.  There are many different ways to write Use Cases [SOURCE].  Because of this, there are many different formats that Use Cases can take when they are written.  There are, however, certain guidelines that should be followed in the process of writing Use Cases.  In general, these guidelines are:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Never include implementation specific terminology in the Use Case [SOURCE]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Each Use Case should be a set of scenarios, which include different flows of events [SOURCE]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
=== Iterative Process ===&lt;br /&gt;
&amp;lt;p&amp;gt;Normally, this process is done iteratively, so that the iterations can build upon each other [SOURCE].  Below is an example of three iterations of the Use Case writing process to illustrate how it can reveal things about a system and lay out the functional requirements. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;For this example, we will be creating Use Cases to solve the following problem statement:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Develop a system to allow users to schedule meetings with each other.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this system, the stakeholders will be the users of the system.  The Users will also be the only Actors in the system. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 1 ====&lt;br /&gt;
&amp;lt;p&amp;gt;For the first iteration, we will write out short sentences to describe the functionality that the system will have.  Some use cases could be:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Request a Meeting&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Approve a meeting request&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Suggest a new time&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;View a User's schedule&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this example, the fourth Use Case may not be an obvious requirement derivable from the original problem statement; it was discovered in the process of creating the Use Cases that it would be a good requirement for the system to have.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 2 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The next iteration would involve writing out longer descriptions for each. This could be done in paragraph form, or by writing a list.  Below are the Use Cases in paragraph form:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 '''Use Case 1 – Request a meeting'''&lt;br /&gt;
 User A chooses the date and time for a meeting with User B.  User A chooses User B as the recipient of the meeting request.&lt;br /&gt;
 User A sends the meeting request to User B through the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 2 – Approve a Meeting Request'''&lt;br /&gt;
 User A receives a meeting request from User B and accepts the meeting request.  A notification that User A has accepted the meeting is sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 3 – Suggest a different meeting time'''&lt;br /&gt;
 User A receives a meeting request from User B and suggests a different time and/or date for the meeting.  The response is sent to User B through &lt;br /&gt;
 the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 4 – View a User's schedule'''&lt;br /&gt;
 User A would like to schedule a meeting with User B.  User A starts the system and opens up User B's schedule.  &lt;br /&gt;
 User A can see when User B has already scheduled meetings and User A can then use that information to send User B a meeting request.&lt;br /&gt;
 User A can also access their own schedule to view when User A has scheduled meetings.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;From writing out the Use Cases with more description, it should become clear that Use Case 2 and Use Case 3 are very similar.  In both Use Cases, User A responds to a meeting request and a response/notification is sent to User B.  This might lead the designers of the system to combine these two Use Cases into one Use Case for &amp;quot;Respond to Request.&amp;quot;&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 3 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In the next iteration, we will write the Use Cases using a template.  There are many different Use Case templates, which include different information [SOURCES].  A template unique to the system being described can be created, provided each Use Case uses the same template.  The template we will use includes:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case &amp;lt;Number&amp;gt;: Title&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.1: &amp;lt;Summary/Goal&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.2: &amp;lt;Actors&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.3: &amp;lt;Preconditions&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.4: &amp;lt;Main Path&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.5: &amp;lt;Alternate Paths – including sub-flows [S] and error-flows [E]&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Case 1 in this format yields:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case 1: Request a meeting&amp;lt;br&amp;gt;&lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
 User A can choose the date and time for a meeting with User B [Main].  User A can choose User B as the recipient of the meeting &lt;br /&gt;
 request [Main], or multiple Users as the recipient [S.2].  User A sends the meeting request to User B through the system [Main]. &lt;br /&gt;
 Before User A sends the meeting request, User A can opt not to send the request and delete it [S.1]. If User B is no longer in &lt;br /&gt;
 the system, User A receives notification that the meeting request cannot be sent [E.1].&amp;lt;br&amp;gt;&lt;br /&gt;
 1.2: Actors&lt;br /&gt;
 Users&amp;lt;br&amp;gt;&lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
 - User A and User B are recorded as Users in the system&lt;br /&gt;
 - User A has logged into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
 1) User A chooses a date for a meeting&lt;br /&gt;
 2) User A chooses a time for a meeting&lt;br /&gt;
 3) User A chooses User B as the recipient for the meeting request&lt;br /&gt;
 4) User A submits the meeting request&lt;br /&gt;
 5) User B receives the meeting request the next time User B logs into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.5: Alternate Paths&lt;br /&gt;
 S.1 &lt;br /&gt;
 User A creates a meeting request, but does not submit it.  A meeting request is not sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 S.2&lt;br /&gt;
 User A creates a meeting request for more than one User.  User A submits the meeting request and each User receives it the next&lt;br /&gt;
 time they log in.&amp;lt;br&amp;gt;&lt;br /&gt;
 E.1&lt;br /&gt;
 User B is no longer a User in the system.  User A receives notification that the meeting request cannot be sent.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
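As a rough illustration only (not part of the Use Case itself), the Main Path and error flow above could be sketched in code.  All names in this sketch (MeetingScheduler, request_meeting, and so on) are hypothetical and are not prescribed by the Use Case:

```python
# Hypothetical sketch of Use Case 1: the Main Path [Main] and error flow [E.1].
# Class and method names are illustrative assumptions, not part of the Use Case.

class MeetingScheduler:
    def __init__(self):
        self.users = set()   # precondition 1.3: Users recorded in the system
        self.inbox = {}      # pending requests, delivered at next log-in

    def add_user(self, name):
        self.users.add(name)
        self.inbox[name] = []

    def request_meeting(self, sender, recipient, date, time):
        # E.1: recipient is no longer a User -> notify the sender
        if recipient not in self.users:
            return f"cannot send: {recipient} is not in the system"
        # Main Path steps 1-4: choose date, time, and recipient; submit
        self.inbox[recipient].append((sender, date, time))
        return "request sent"

    def log_in(self, user):
        # Main Path step 5: requests are received at the next log-in
        return self.inbox.get(user, [])
```

Note how the preconditions, main path, and error flow of the template each map to a distinct piece of the sketch; sub-flows S.1 and S.2 would extend it similarly.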
==== Results ====&lt;br /&gt;
&amp;lt;p&amp;gt;There are a few interesting things revealed about the system by re-writing the Use Case in this format.  The most important is likely that Users will need to &amp;quot;log-in&amp;quot; to the system.  This implies that there could be another Actor in the system, namely an Admin.  This leads to these additional Use Cases:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;5&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Log into the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;6&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Create a User in the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Admin&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Each of these new Use Cases would then go through the iterations listed above until they are in the template form. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see from the example, each iteration refines the Use Case and helps to clarify the requirements of the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Use Case Diagrams ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use Case Diagrams are useful for showing how each component in a system interacts with the other components of the system [SOURCE].  They are not as well suited as written Use Cases to showing a system's flow of events [SOURCE].  Also, unlike written Use Cases, Use Case Diagrams use UML, so there is a standard notation.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;DEFINE UML HERE&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Components of a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;UCDs have only 4 major elements: The actors that the system you are describing interacts with, the system itself, the use cases, or services, that the system knows how to perform, and the lines that represent relationships between these elements.[SOURCE: http://www.andrew.cmu.edu/course/90-754/umlucdfaq.html#uses – DIRECT QUOTE]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Actors''' in Use Case Diagrams are represented by stick figures:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Actor.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Use Cases''' are represented by ovals:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UseCase.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Relationships''' are represented by solid lines.  Sometimes, arrowheads are added to the lines to indicate the direction of the invocation, or to show which actor is the primary actor [SOURCE: http://www.agilemodeling.com/artifacts/useCaseDiagram.htm].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Lines.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Two special relationships that can be shown in a Use Case Diagram are the ''Extends'' and ''Includes'' relationships.  These relationships are usually shown with a dotted line with an arrowhead and &amp;lt;&amp;lt;extend&amp;gt;&amp;gt; or &amp;lt;&amp;lt;include&amp;gt;&amp;gt; written near the line.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Extends'' relationship is used to show when Use Case X is a special case of Use Case Y [SOURCE].  In this situation, the dotted line is drawn from Use Case X to Use Case Y with the arrowhead pointing to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Includes/Uses'' relationship is used to show that every time Use Case X is done, Use Case Y must also be done [SOURCE].  In this case, the arrow points to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Creating a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A Use Case Diagram for the same system described in the Writing Use Cases section might look something like this:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UCDiagram.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see, &amp;quot;Accept Meeting&amp;quot; and &amp;quot;Suggest new time&amp;quot; are special cases of &amp;quot;Respond to request&amp;quot;.  Also, from this diagram, the system designers are saying that in order to &amp;quot;Request a Meeting&amp;quot; the user must &amp;quot;View Schedule&amp;quot;.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Advanced Topics ==&lt;br /&gt;
== Tools and Examples ==&lt;br /&gt;
&lt;br /&gt;
=== Tools ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;There are many different tools available to create Use Cases.  A sampling is below: &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;'''Rational Rose:'''&lt;br /&gt;
&lt;br /&gt;
One of the most popular tools for use-case-driven development.&lt;br /&gt;
&lt;br /&gt;
http://www-306.ibm.com/software/awdtools/developer/rose/index.html&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Sun Java Studio Enterprise:''' &lt;br /&gt;
&lt;br /&gt;
Sun Java Studio Enterprise offers a UML tool. &lt;br /&gt;
&lt;br /&gt;
http://developers.sun.com/jsenterprise/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;'''Visual Case:''' &lt;br /&gt;
&lt;br /&gt;
UML &amp;amp; E/R Database Design Tool&lt;br /&gt;
&lt;br /&gt;
http://www.visualcase.com/&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For a more comprehensive list, go to [http://en.wikipedia.org/wiki/List_of_Unified_Modeling_Language_tools Wikipedia's list of UML modeling tools]&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
== Further Reading ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38780</id>
		<title>CSC/ECE 517 Fall 2010/ch4 4a RJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38780"/>
		<updated>2010-10-20T00:13:09Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font size=5&amp;gt;Use Cases&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
== Use Case Basics ==&lt;br /&gt;
== Writing Use Cases ==&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Cases for a system is a process taken between those designing the system and the Stakeholders.  There are many different ways to write Use Cases [SOURCE].  Because of this, there are many different formats that Use Cases can take when they are written.  There are, however, certain guidelines that should be followed in the process of writing Use Cases.  In general, these guidelines are:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Never include implementation specific terminology in the Use Case [SOURCE]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Each Use Case should be a set of scenarios, which include different flows of events [SOURCE]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
=== Iterative Process ===&lt;br /&gt;
&amp;lt;p&amp;gt;Normally, this process is done iteratively, so that the iterations can build upon each other [SOURCE].  Below is an example of three iterations of the Use Case writing process to illustrate how it can reveal things about a system and lay out the functional requirements. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;For this example, we will be creating Use Cases to solve the following problem statement:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Develop a system to allow users to schedule meetings with each other.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this system, the stakeholders will be the users of the system.  The Users will also be the only Actors in the system. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 1 ====&lt;br /&gt;
&amp;lt;p&amp;gt;For the first iteration, we will write out short sentences to describe the functionality that the system will have.  Some use cases could be:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Request a Meeting&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Approve a meeting request&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Suggest a new time&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;View a User's schedule&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this example, the fourth Use Case may not have been an obvious requirement that could be derived from the original problem statement, but in the process of creating the Use Cases, it was discovered that it would be a good requirement for the system to have.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 2 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The next iteration would involve writing out longer descriptions for each. This could be done in paragraph form, or by writing a list.  Below are the Use Cases in paragraph form:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 '''Use Case 1 – Request a meeting'''&lt;br /&gt;
 User A chooses the date and time for a meeting with User B.  User A chooses User B as the recipient of the meeting request.&lt;br /&gt;
 User A sends the meeting request to User B through the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 2 – Approve a Meeting Request'''&lt;br /&gt;
 User A receives a meeting request from User B and accepts the request.  A notification that User A has accepted the meeting is sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 3 – Suggest a different meeting time'''&lt;br /&gt;
 User A receives a meeting request from User B and suggests a different time and/or date for the meeting.  The response is sent to User B through &lt;br /&gt;
 the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 4 – View a User's schedule'''&lt;br /&gt;
 User A would like to schedule a meeting with User B.  User A starts the system and opens up User B's schedule.  &lt;br /&gt;
 User A can see when User B has already scheduled meetings and User A can then use that information to send User B a meeting request.&lt;br /&gt;
 User A can also access their own schedule to view when User A has scheduled meetings.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;From writing out the Use Cases with more description, it should become clear that Use Case 2 and Use Case 3 are very similar.  In both Use Cases, User A responds to a meeting request and a response/notification is sent to User B.  This might lead the designers of the system to combine these two Use Cases into one Use Case for &amp;quot;Respond to Request.&amp;quot;&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 3 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In the next iteration, we're going to use a Use Case template.  There are many different Use Case templates, which include different information [SOURCES].  A template can be created that is unique for the system being described, provided each Use Case uses the same template. The template we will use will include:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case &amp;lt;Number&amp;gt;: Title&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.1: &amp;lt;Summary/Goal&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.2: &amp;lt;Actors&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.3: &amp;lt;Preconditions&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.4: &amp;lt;Main Path&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.5: &amp;lt;Alternate Paths – including sub-flows [S] and error-flows [E]&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Case 1 in this format yields:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case 1: Request a meeting&amp;lt;br&amp;gt;&lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
 User A can choose the date and time for a meeting with User B [Main].  User A can choose User B as the recipient of the meeting &lt;br /&gt;
 request [Main], or multiple Users as the recipient [S.2].  User A sends the meeting request to User B through the system [Main]. &lt;br /&gt;
 Before User A sends the meeting request, User A can opt not to send the request and delete it [S.1]. If User B is no longer in &lt;br /&gt;
 the system, User A receives notification that the meeting request cannot be sent [E.1].&amp;lt;br&amp;gt;&lt;br /&gt;
 1.2: Actors&lt;br /&gt;
 Users&amp;lt;br&amp;gt;&lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
 - User A and User B are recorded as Users in the system&lt;br /&gt;
 - User A has logged into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
 1) User A chooses a date for a meeting&lt;br /&gt;
 2) User A chooses a time for a meeting&lt;br /&gt;
 3) User A chooses User B as the recipient for the meeting request&lt;br /&gt;
 4) User A submits the meeting request&lt;br /&gt;
 5) User B receives the meeting request the next time User B logs into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.5: Alternative Path&lt;br /&gt;
 S.1 &lt;br /&gt;
 User A creates a meeting request, but does not submit it.  A meeting request is not sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 S.2&lt;br /&gt;
 User A creates a meeting request for more than one User.  User A submits the meeting request and each User receives it the next&lt;br /&gt;
 time they log in&amp;lt;br&amp;gt;&lt;br /&gt;
 E.1&lt;br /&gt;
 User B is no longer a User in the system.  User A receives notification that the meeting request cannot be sent.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Results ====&lt;br /&gt;
&amp;lt;p&amp;gt;There are a few interesting things revealed about the system by re-writing the Use Case in this format.  The most important is likely that Users will need to &amp;quot;log-in&amp;quot; to the system.  This implies that there could be another Actor in the system, namely an Admin.  This leads to these additional Use Cases:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;5&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Log into the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;6&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Create a User in the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Admin&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Each of these new Use Cases would then go through the iterations listed above until they are in the template form. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see from the example, each iteration refines the Use Case and helps to clarify the requirements of the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Use Case Diagrams ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use Case Diagrams are useful for showing how each component in a system will interact with other components of the system [SOURCE].  They are not good for showing the flow of events that a system will have, like the written Use Cases are [SOURCE].  Also, unlike written Use Cases, Use Case Diagrams use UML so that there is a standard.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;DEFINE UML HERE&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Components of a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;UCDs have only 4 major elements: The actors that the system you are describing interacts with, the system itself, the use cases, or services, that the system knows how to perform, and the lines that represent relationships between these elements.[SOURCE: http://www.andrew.cmu.edu/course/90-754/umlucdfaq.html#uses – DIRECT QUOTE]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Actors''' in Use Case Diagrams are represented by stick figures:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Actor.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Use Cases''' are represented by ovals:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UseCase.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Relationships''' are represented by Solid Lines.  Sometimes, arrowheads are added to the lines to indicate the direction of the invocation, or to show which actor is the primary actor [SOURCE: http://www.agilemodeling.com/artifacts/useCaseDiagram.htm]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Lines.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Two special relationships that can be shown in a Use Case Diagram are the ''Extends'' and ''Includes'' relationships.  These relationships are usually shown with a dotted line with an arrowhead and &amp;lt;&amp;lt;extend&amp;gt;&amp;gt; or &amp;lt;&amp;lt;include&amp;gt;&amp;gt; written near the line.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Extends'' relationship is used to show when Use Case X is a special case of Use Case Y [SOURCE].  In this situation, the dotted line is drawn from Use Case X to Use Case Y with the arrowhead pointing to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Includes/Uses'' relationship is used to show that every time Use Case X is done, Use Case Y must also be done [SOURCE].  In this case, the arrow points to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Creating a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A Use Case Diagram for the same system described in the Writing a Use Case might look something like:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UCDiagram.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see, &amp;quot;Accept Meeting&amp;quot; and &amp;quot;Suggest new time&amp;quot; are special cases of &amp;quot;Respond to request&amp;quot;.  Also, from this diagram, the system designers are saying that in order to &amp;quot;Request a Meeting&amp;quot; the user must &amp;quot;View Schedule&amp;quot;.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Advanced Topics ==&lt;br /&gt;
== Tools and Examples ==&lt;br /&gt;
&lt;br /&gt;
=== '''Tools''' ===&lt;br /&gt;
'''Rational Rose:''' &lt;br /&gt;
&lt;br /&gt;
One of the most popular tools for use-case-driven development.&lt;br /&gt;
&lt;br /&gt;
http://www-306.ibm.com/software/awdtools/developer/rose/index.html&lt;br /&gt;
&lt;br /&gt;
'''Sun Java Studio Enterprise:''' &lt;br /&gt;
&lt;br /&gt;
Sun Java Studio Enterprise offers a UML tool. &lt;br /&gt;
&lt;br /&gt;
http://developers.sun.com/jsenterprise/&lt;br /&gt;
&lt;br /&gt;
'''Visual case:''' &lt;br /&gt;
&lt;br /&gt;
UML &amp;amp; E/R Database Design Tool&lt;br /&gt;
&lt;br /&gt;
http://www.visualcase.com/&lt;br /&gt;
&lt;br /&gt;
== Further Reading ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38779</id>
		<title>CSC/ECE 517 Fall 2010/ch4 4a RJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38779"/>
		<updated>2010-10-20T00:11:30Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font size=5&amp;gt;Use Cases&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
== Use Case Basics ==&lt;br /&gt;
== Writing Use Cases ==&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Cases for a system is a process taken between those designing the system and the Stakeholders.  There are many different ways to write Use Cases [SOURCE].  Because of this, there are many different formats that Use Cases can take when they are written.  There are, however, certain guidelines that should be followed in the process of writing Use Cases.  In general, these guidelines are:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Never include implementation specific terminology in the Use Case [SOURCE]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Each Use Case should be a set of scenarios, which include different flows of events [SOURCE]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
=== Iterative Process ===&lt;br /&gt;
&amp;lt;p&amp;gt;Normally, this process is done iteratively, so that the iterations can build upon each other [SOURCE].  Below is an example of three iterations of the Use Case writing process to illustrate how it can reveal things about a system and lay out the functional requirements. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;For this example, we will be creating Use Cases to solve the following problem statement:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Develop a system to allow users to schedule meetings with each other.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this system, the stakeholders will be the users of the system.  The Users will also be the only Actors in the system. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 1 ====&lt;br /&gt;
&amp;lt;p&amp;gt;For the first iteration, we will write out short sentences to describe the functionality that the system will have.  Some use cases could be:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Request a Meeting&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Approve a meeting request&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Suggest a new time&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;View a User's schedule&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this example, the fourth Use Case may not have been an obvious requirement that could be derived from the original problem statement, but in the process of creating the Use Cases, it was discovered that it would be a good requirement for the system to have.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 2 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The next iteration would involve writing out longer descriptions for each. This could be done in paragraph form, or by writing a list.  Below are the Use Cases in paragraph form:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 '''Use Case 1 – Request a meeting'''&lt;br /&gt;
 User A chooses the date and time for a meeting with User B.  User A chooses User B as the recipient of the meeting request.&lt;br /&gt;
 User A sends the meeting request to User B through the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 2 – Approve a Meeting Request'''&lt;br /&gt;
 User A receives a meeting request from User B and accepts the request.  A notification that User A has accepted the meeting is sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 3 – Suggest a different meeting time'''&lt;br /&gt;
 User A receives a meeting request from User B and suggests a different time and/or date for the meeting.  The response is sent to User B through &lt;br /&gt;
 the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 4 – View a User's schedule'''&lt;br /&gt;
 User A would like to schedule a meeting with User B.  User A starts the system and opens up User B's schedule.  &lt;br /&gt;
 User A can see when User B has already scheduled meetings and User A can then use that information to send User B a meeting request.&lt;br /&gt;
 User A can also access their own schedule to view when User A has scheduled meetings.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;From writing out the Use Cases with more description, it should become clear that Use Case 2 and Use Case 3 are very similar.  In both Use Cases, User A responds to a meeting request and a response/notification is sent to User B.  This might lead the designers of the system to combine these two Use Cases into one Use Case for &amp;quot;Respond to Request.&amp;quot;&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Iteration 3 ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In the next iteration, we're going to use a Use Case template.  There are many different Use Case templates, which include different information [SOURCES].  A template can be created that is unique for the system being described, provided each Use Case uses the same template. The template we will use will include:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case &amp;lt;Number&amp;gt;: Title&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.1: &amp;lt;Summary/Goal&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.2: &amp;lt;Actors&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.3: &amp;lt;Preconditions&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.4: &amp;lt;Main Path&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.5: &amp;lt;Alternate Paths – including sub-flows [S] and error-flows [E]&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Case 1 in this format yields:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case 1: Request a meeting&amp;lt;br&amp;gt;&lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
 User A can choose the date and time for a meeting with User B [Main].  User A can choose User B as the recipient of the meeting &lt;br /&gt;
 request [Main], or multiple Users as the recipient [S.2].  User A sends the meeting request to User B through the system [Main]. &lt;br /&gt;
 Before User A sends the meeting request, User A can opt not to send the request and delete it [S.1]. If User B is no longer in &lt;br /&gt;
 the system, User A receives notification that the meeting request cannot be sent [E.1].&amp;lt;br&amp;gt;&lt;br /&gt;
 1.2: Actors&lt;br /&gt;
 Users&amp;lt;br&amp;gt;&lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
 - User A and User B are recorded as Users in the system&lt;br /&gt;
 - User A has logged into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
 1) User A chooses a date for a meeting&lt;br /&gt;
 2) User A chooses a time for a meeting&lt;br /&gt;
 3) User A chooses User B as the recipient for the meeting request&lt;br /&gt;
 4) User A submits the meeting request&lt;br /&gt;
 5) User B receives the meeting request the next time User B logs into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.5: Alternate Paths&lt;br /&gt;
 S.1 &lt;br /&gt;
 User A creates a meeting request, but does not submit it.  A meeting request is not sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 S.2&lt;br /&gt;
 User A creates a meeting request for more than one User.  User A submits the meeting request and each User receives it the next&lt;br /&gt;
 time they log in.&amp;lt;br&amp;gt;&lt;br /&gt;
 E.1&lt;br /&gt;
 User B is no longer a User in the system.  User A receives notification that the meeting request cannot be sent.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Results ====&lt;br /&gt;
&amp;lt;p&amp;gt;There are a few interesting things revealed about the system by rewriting the Use Case in this format.  The most important is likely that Users will need to &amp;quot;log in&amp;quot; to the system.  This implies that there could be another Actor in the system, namely an Admin.  This leads to these additional Use Cases:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;5&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Log into the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;6&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Create a User in the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Admin&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Each of these new Use Cases would then go through the iterations listed above until they are in the template form. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see from the example, each iteration refines the Use Case and helps to clarify the requirements of the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Use Case Diagrams ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use Case Diagrams are useful for showing how each component in a system will interact with the other components of the system [SOURCE].  They are not well suited to showing the flow of events in a system, as written Use Cases are [SOURCE].  Also, unlike written Use Cases, Use Case Diagrams follow the UML standard.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;DEFINE UML HERE&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Components of a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;UCDs have only 4 major elements: The actors that the system you are describing interacts with, the system itself, the use cases, or services, that the system knows how to perform, and the lines that represent relationships between these elements.[SOURCE: http://www.andrew.cmu.edu/course/90-754/umlucdfaq.html#uses – DIRECT QUOTE]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Actors''' in Use Case Diagrams are represented by stick figures:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Actor.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Use Cases''' are represented by ovals:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UseCase.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Relationships''' are represented by solid lines.  Sometimes, arrowheads are added to the lines to indicate the direction of the invocation, or to show which actor is the primary actor [SOURCE: http://www.agilemodeling.com/artifacts/useCaseDiagram.htm].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Lines.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Two special relationships that can be shown in a Use Case Diagram are the ''Extends'' and ''Includes'' relationships.  These relationships are usually shown with a dotted line with an arrowhead and &amp;lt;&amp;lt;extend&amp;gt;&amp;gt; or &amp;lt;&amp;lt;include&amp;gt;&amp;gt; written near the line.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Extends'' relationship is used to show when Use Case X is a special case of Use Case Y [SOURCE].  In this situation, the dotted line is drawn from Use Case X to Use Case Y with the arrowhead pointing to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Includes/Uses'' relationship is used to show that every time Use Case X is done, Use Case Y must also be done [SOURCE].  In this case, the arrow points to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Creating a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A Use Case Diagram for the same system described in the Writing Use Cases section might look something like this:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UCDiagram.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see, &amp;quot;Accept Meeting&amp;quot; and &amp;quot;Suggest new time&amp;quot; are special cases of &amp;quot;Respond to request&amp;quot;.  Also, this diagram shows that in order to &amp;quot;Request a Meeting&amp;quot; the User must &amp;quot;View Schedule&amp;quot;.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Advanced Topics ==&lt;br /&gt;
== Tools and Examples ==&lt;br /&gt;
== Further Reading ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38778</id>
		<title>CSC/ECE 517 Fall 2010/ch4 4a RJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2010/ch4_4a_RJ&amp;diff=38778"/>
		<updated>2010-10-20T00:09:18Z</updated>

		<summary type="html">&lt;p&gt;Jmfoste2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;font size=5&amp;gt;Use Cases&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
== Use Case Basics ==&lt;br /&gt;
== Writing Use Cases ==&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Cases for a system is a process undertaken by those designing the system and the Stakeholders.  There are many different ways to write Use Cases [SOURCE].  Because of this, there are many different formats that Use Cases can take when they are written.  There are, however, certain guidelines that should be followed in the process of writing Use Cases.  In general, these guidelines are:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Never include implementation specific terminology in the Use Case [SOURCE]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Each Use Case should be a set of scenarios, which include different flows of events [SOURCE]&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Normally, this process is done iteratively, so that the iterations can build upon each other [SOURCE].  Below is an example of three iterations of the Use Case writing process to illustrate how it can reveal things about a system and lay out the functional requirements. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;For this example, we will be creating Use Cases to solve the following problem statement:&lt;br /&gt;
&lt;br /&gt;
	Develop a system to allow users to schedule meetings with each other.&lt;br /&gt;
&lt;br /&gt;
In this system, the stakeholders will be the users of the system.  The Users will also be the only Actors in the system. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;For the first iteration, we will write out short sentences to describe the functionality that the system will have.  Some use cases could be:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Request a Meeting&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Approve a meeting request&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Suggest a new time&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;View a User's schedule&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In this example, the fourth Use Case may not have been an obvious requirement that could be derived from the original problem statement, but in the process of creating the Use Cases, it was discovered that it would be a good requirement for the system to have.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The next iteration would involve writing out longer descriptions for each. This could be done in paragraph form, or by writing a list.  Below are the Use Cases in paragraph form:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 '''Use Case 1 – Request a meeting'''&lt;br /&gt;
 User A chooses the date and time for a meeting with User B.  User A chooses User B as the recipient of the meeting request.&lt;br /&gt;
 User A sends the meeting request to User B through the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 2 – Approve a Meeting Request'''&lt;br /&gt;
 User A receives a meeting request from User B and accepts the meeting request.  A notification that User A has accepted the meeting is sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 3 – Suggest a different meeting time'''&lt;br /&gt;
 User A receives a meeting request from User B and suggests a different time and/or date for the meeting.  The response is sent to User B through &lt;br /&gt;
 the system.&amp;lt;br&amp;gt;&lt;br /&gt;
 '''Use Case 4 – View a User's schedule'''&lt;br /&gt;
 User A would like to schedule a meeting with User B.  User A starts the system and opens up User B's schedule.  &lt;br /&gt;
 User A can see when User B has already scheduled  meetings and User A can then use that information to send User B a meeting request.&lt;br /&gt;
 User A can also access their own schedule to view when they have scheduled meetings.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;From writing out the Use Cases with more description, it should become clear that Use Case 2 and Use Case 3 are very similar.  In both Use Cases, User A responds to a meeting request and a response/notification is sent to User B.  This might lead the designers of the system to combine these two Use Cases into one Use Case for &amp;quot;Respond to Request.&amp;quot;&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In the next iteration, we're going to use a Use Case template.  There are many different Use Case templates, each of which includes different information [SOURCES].  A template can be created that is unique to the system being described, provided each Use Case uses the same template.  The template we will use includes:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case &amp;lt;Number&amp;gt;: Title&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.1: &amp;lt;Summary/Goal&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.2: &amp;lt;Actors&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.3: &amp;lt;Preconditions&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.4: &amp;lt;Main Path&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;Number&amp;gt;.5: &amp;lt;Alternate Paths – including sub-flows [S] and error-flows [E]&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Writing Use Case 1 in this format yields:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 Use Case 1: Request a meeting&amp;lt;br&amp;gt;&lt;br /&gt;
 1.1: Summary/Goal&lt;br /&gt;
 User A can choose the date and time for a meeting with User B [Main].  User A can choose User B as the recipient of the meeting &lt;br /&gt;
 request [Main], or multiple Users as the recipient [S.2].  User A sends the meeting request to User B through the system [Main]. &lt;br /&gt;
 Before User A sends the meeting request, User A can opt not to send the request and delete it [S.1]. If User B is no longer in &lt;br /&gt;
 the system, User A receives notification that the meeting request cannot be sent [E.1].&amp;lt;br&amp;gt;&lt;br /&gt;
 1.2: Actors&lt;br /&gt;
 Users&amp;lt;br&amp;gt;&lt;br /&gt;
 1.3: Preconditions&lt;br /&gt;
 - User A and User B are recorded as Users in the system&lt;br /&gt;
 - User A has logged into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.4: Main Path&lt;br /&gt;
 1) User A chooses a date for a meeting&lt;br /&gt;
 2) User A chooses a time for a meeting&lt;br /&gt;
 3) User A chooses User B as the recipient for the meeting request&lt;br /&gt;
 4) User A submits the meeting request&lt;br /&gt;
 5) User B receives the meeting request the next time User B logs into the system&amp;lt;br&amp;gt;&lt;br /&gt;
 1.5: Alternate Paths&lt;br /&gt;
 S.1 &lt;br /&gt;
 User A creates a meeting request, but does not submit it.  A meeting request is not sent to User B.&amp;lt;br&amp;gt;&lt;br /&gt;
 S.2&lt;br /&gt;
 User A creates a meeting request for more than one User.  User A submits the meeting request and each User receives it the next&lt;br /&gt;
 time they log in.&amp;lt;br&amp;gt;&lt;br /&gt;
 E.1&lt;br /&gt;
 User B is no longer a User in the system.  User A receives notification that the meeting request cannot be sent.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;There are a few interesting things revealed about the system by rewriting the Use Case in this format.  The most important is likely that Users will need to &amp;quot;log in&amp;quot; to the system.  This implies that there could be another Actor in the system, namely an Admin.  This leads to these additional Use Cases:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;table border=1&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Use Case #&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Description&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Actor&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;5&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Log into the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;6&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Create a User in the system&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Admin&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Each of these new Use Cases would then go through the iterations listed above until they are in the template form. &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see from the example, each iteration refines the Use Case and helps to clarify the requirements of the system.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Use Case Diagrams ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Use Case Diagrams are useful for showing how each component in a system will interact with the other components of the system [SOURCE].  They are not well suited to showing the flow of events in a system, as written Use Cases are [SOURCE].  Also, unlike written Use Cases, Use Case Diagrams follow the UML standard.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;DEFINE UML HERE&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Components of a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;UCDs have only 4 major elements: The actors that the system you are describing interacts with, the system itself, the use cases, or services, that the system knows how to perform, and the lines that represent relationships between these elements.[SOURCE: http://www.andrew.cmu.edu/course/90-754/umlucdfaq.html#uses – DIRECT QUOTE]&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Actors''' in Use Case Diagrams are represented by stick figures:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Actor.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Use Cases''' are represented by ovals:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UseCase.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;'''Relationships''' are represented by solid lines.  Sometimes, arrowheads are added to the lines to indicate the direction of the invocation, or to show which actor is the primary actor [SOURCE: http://www.agilemodeling.com/artifacts/useCaseDiagram.htm].&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Lines.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;Two special relationships that can be shown in a Use Case Diagram are the ''Extends'' and ''Includes'' relationships.  These relationships are usually shown with a dotted line with an arrowhead and &amp;lt;&amp;lt;extend&amp;gt;&amp;gt; or &amp;lt;&amp;lt;include&amp;gt;&amp;gt; written near the line.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Extends'' relationship is used to show when Use Case X is a special case of Use Case Y [SOURCE].  In this situation, the dotted line is drawn from Use Case X to Use Case Y with the arrowhead pointing to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;The ''Includes/Uses'' relationship is used to show that every time Use Case X is done, Use Case Y must also be done [SOURCE].  In this case, the arrow points to Use Case Y.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Creating a Use Case Diagram ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;A Use Case Diagram for the same system described in the Writing Use Cases section might look something like this:&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:UCDiagram.png]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;As you can see, &amp;quot;Accept Meeting&amp;quot; and &amp;quot;Suggest new time&amp;quot; are special cases of &amp;quot;Respond to request&amp;quot;.  Also, this diagram shows that in order to &amp;quot;Request a Meeting&amp;quot; the User must &amp;quot;View Schedule&amp;quot;.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Advanced Topics ==&lt;br /&gt;
== Tools and Examples ==&lt;br /&gt;
== Further Reading ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>Jmfoste2</name></author>
	</entry>
</feed>