CSC/ECE 506 Spring 2014/3a ns






'''•  ''pthread_cond_destroy(pthread_cond_t *cond):''''' This function destroys the condition variable specified by cond; the object becomes, in effect, uninitialized. An implementation may cause pthread_cond_destroy() to set the object referenced by cond to an invalid value. A destroyed condition variable object can be reinitialized using pthread_cond_init(); the results of otherwise referencing the object after it has been destroyed are undefined.


Return Value: If successful the function returns zero; otherwise, an error number is returned to indicate the error.


For example<ref>http://www.yolinux.com/TUTORIALS/LinuxTutorialPosixThreads.html#BASICS</ref>, consider the following code:


===== Waiting on condition=====
These functions are used to wait (block) on a condition variable.


'''•  ''int pthread_cond_timedwait(pthread_cond_t *restrict cond, pthread_mutex_t *restrict mutex, const struct timespec *restrict abstime);'' and ''int pthread_cond_wait(pthread_cond_t *restrict cond, pthread_mutex_t *restrict mutex):'''''
These functions must be called with mutex locked by the calling thread, or undefined behavior results. They atomically release mutex and cause the calling thread to block on the condition variable cond; atomically here means atomically with respect to access by another thread to the mutex and then the condition variable. That is, if another thread is able to acquire the mutex after the about-to-block thread has released it, then a subsequent call to pthread_cond_broadcast() or pthread_cond_signal() in that thread shall behave as if it were issued after the about-to-block thread has blocked. Upon successful return, the mutex is locked and owned by the calling thread.


===== Waking thread based on condition=====
These  functions unblock threads blocked on a condition variable.


'''•  ''pthread_cond_broadcast(pthread_cond_t *cond)'' and ''pthread_cond_signal(pthread_cond_t *cond):''''' The pthread_cond_broadcast() function unblocks all threads currently blocked on the specified condition variable cond. The pthread_cond_signal() function unblocks at least one of the threads that are blocked on the specified condition variable cond (if any threads are blocked on cond).


If more than one thread is blocked on a condition variable, the scheduling policy determines the order in which threads are unblocked. When each thread unblocked as a result of a pthread_cond_broadcast() or pthread_cond_signal() returns from its call to pthread_cond_wait() or pthread_cond_timedwait(), the thread owns the mutex with which it called pthread_cond_wait() or pthread_cond_timedwait(). The threads that are unblocked contend for the mutex according to the scheduling policy, as if each had called pthread_mutex_lock().


===== Barrier Wait =====
The wait function is used to synchronize parallel threads.


'''•  ''int pthread_barrier_wait(pthread_barrier_t *barrier):''''' The calling thread blocks until the required number of threads have called pthread_barrier_wait() referencing the barrier. When the required number of threads have arrived, zero is returned to all of the threads except one; the constant PTHREAD_BARRIER_SERIAL_THREAD is returned to one unspecified thread. The barrier is then reset to the state it had as a result of the most recent initialization.


The barrier wait completes once the required number of threads have arrived at the barrier. If a signal is delivered to a thread blocked on a barrier, then upon return from the signal handler the thread resumes waiting at the barrier if the barrier wait has not completed; otherwise, the thread continues as normal from the completed barrier wait. Until the thread in the signal handler returns from it, it is unspecified whether other threads may proceed past the barrier once they have all reached it. A thread that has blocked on a barrier does not prevent any unblocked thread that is eligible to use the same processing resources from eventually making forward progress in its execution. Eligibility for processing resources is determined by the scheduling policy.
Return Value: Upon successful completion, the function returns zero; otherwise, an error number is returned to indicate the error.


====='''Pseudocode'''=====
     pthread_barrier_t *barrier;
     for ( i = 1 ; i <= N ; i++ )
     {
         S1: a[i] = b[i] + c[i];      /* can be parallelized */
         S2: sum  = sum + a[i] ;      /* critical section that must be protected by synchronization mechanism */
     }


There are no loop-carried dependences in S1 or S2, but "sum" is a shared variable, so S2 is a critical section.
"sum" can be protected using various synchronization mechanisms such as semaphores, mutexes and condition variables. They are illustrated as follows.


'''Semaphore Pseudocode'''


     sem_t * sem;


Even though semaphore achieves the objective of protecting the critical section, mutex provides additional security by preventing accidental deletion of locks. This property is due to the exclusive ownership property associated with mutex variables where only the locked mutex can unlock the critical section.


'''Mutex Pseudocode'''
   
   
     pthread_mutex_t * mut_id;


Suppose in the above example the thread has to wait for some other arbitrary process to complete in the critical section; then condition variable synchronization can be used. Here the mutex will wait until a condition variable is satisfied. The setting of the condition variable is controlled by another thread. This is illustrated as follows.


'''Conditional Variable Pseudocode'''


     //thread 1: main thread


 
The previous example does not mandate barrier synchronization, as the purpose there was only to protect concurrent updates to shared data. Barrier synchronization addresses the case in which all parallel tasks must reach a common point before any of them proceeds. Consider the following code, amenable to DOALL parallelism, slightly modified to illustrate barrier synchronization.


     for ( i = 1 ; i <= N ; i++ )


In the above example, the minimum of "a" can be calculated only after all of the computations of "a" are complete. The barrier implementation for the above code segment is shown below:


'''Barrier Pseudocode'''


     pthread_barrier_t* bar;


S1 does not have any inter-loop dependences. However, S2 depends on S1, and S2 has inter-loop dependences. Thus S1 can be parallelized without any synchronization, but S2 requires a synchronization mechanism. The above code can be parallelized using semaphores and mutexes; the advantage of using a mutex over a semaphore is the same as mentioned before for DOALL parallelism. They are illustrated as follows.


'''Semaphore Pseudocode'''


     sem_t * sem;                    /* a vector of semaphores used to keep track of each loop */


'''Mutex Pseudocode'''


     pthread_mutex_t * mut_id;       // a vector of mutexes of length N is used


S1 has inter-loop dependences. S2 depends on S1, even though S2 itself has no inter-loop dependences. Hence, in each iteration, S2 can be executed only after S1; this necessitates the use of synchronization mechanisms.
The above code can be parallelized using semaphores and mutexes. They are illustrated as follows.


'''Semaphore Pseudocode'''


     sem_t * sem;


'''Mutex Pseudocode'''


     pthread_mutex_t * mut_id;

Latest revision as of 18:19, 26 April 2014


== Overview ==

The aim of this wiki is to highlight the synchronization mechanisms for DOALL, DOACROSS and DOPIPE parallelization techniques.

== Synchronization ==

=== Necessity ===


When using any parallel programming model, synchronization<ref>Rahman, M.M.; , "Process synchronization in multiprocessor and multi-core processor," Informatics, Electronics & Vision (ICIEV), 2012 International Conference on , vol., no., pp.554-559, 18-19 May 2012</ref> is needed to guarantee accuracy of the overall program. The following situations highlight the necessity of synchronization<ref>http://people.csail.mit.edu/rinard/paper/cpande99.pdf</ref>.

Case 1: A scenario in which the code following a parallelized loop requires that all of the parallel processes be completed before advancing.

Case 2: A scenario in which a code segment in the middle of a parallelized section needs to be executed sequentially (critical section), to ensure program correctness.

Case 3: A scenario in which multiple processes must update a global variable in such a way that one process does not overwrite the updates of a different process.

=== Synchronization Mechanisms ===


Let us now briefly understand the various process/thread synchronization mechanisms that help in achieving correct program execution order.

Semaphore: Semaphore is a variable that helps in controlling the access to a common resource in a parallel programming model. Not only do they arbitrate, but also help in avoiding race conditions. Semaphores keep only the count of the resource availability.

Mutex / Lock: Mutex refers to Mutual Exclusion. It helps avoid concurrency issues such as race conditions by ensuring that no two concurrently executing processes access a critical section at the same time. Though mutexes are essentially the same as binary semaphores, the fundamental differences between them are as follows [1]:

1. Mutexes allow exclusive access to a resource, while semaphores allow any process to access a resource. The element of ownership in a mutex ensures that only the process that has locked the mutex can unlock it; in the case of semaphores, the process locking and the process unlocking the semaphore can be different.

2. Mutexes support priority inheritance: if a higher-priority process starts waiting on a locked mutex, the priority of the process holding the lock can be temporarily promoted, which helps avoid priority inversion.

3. Mutexes, unlike semaphores, ensure that the lock held by a process cannot be accidentally released by other processes.

Conditional Variables: Though the use of a mutex protects an operation, it doesn't permit a thread to wait until another thread completes an arbitrary activity (e.g., a parent thread may want to wait until a child thread has completed its execution) [2]. By providing this facility, condition variables help in solving various synchronization problems such as the producer/consumer problem [3].

Barriers [4]: This form of synchronization introduces a common stop point for multiple threads and processes. It ensures that all threads/processes reach this barrier before any of them continues further.

The following section discusses two libraries that implement the aforementioned synchronization mechanisms.

== Libraries<ref>http://linux.die.net/man/3/</ref> ==

=== 1. Semaphore.h ===


A semaphore is a special variable that acts similar to a lock. For a process to enter the critical section it must first acquire the semaphore. If the semaphore cannot be acquired, the process is "put to sleep" and the processor is used for another process; the sleeping process's context is saved so that it can be restored when the process is "woken up". Once the semaphore is available, the "sleeping" process is woken up, acquires the semaphore and proceeds into the critical section. A simple way to use a semaphore is through the following functions for the various operations on semaphores:

==== Initializing a semaphore ====

int sem_init(sem_t *sem, int pshared, unsigned int value): This function initializes the unnamed semaphore at the address pointed to by sem; value is the initial value of the semaphore. The pshared argument indicates whether the semaphore is shared between the threads of a process or between processes. If pshared is zero, the semaphore is shared between the threads of a process; if it is nonzero, the semaphore can be shared between processes.

Return Value: sem_init() returns 0 on success; on error, -1 is returned, and errno is set to indicate the error.

==== Locking the semaphore ====

By acquiring a lock, it is ensured that the execution of other processes or threads trying to access the shared resource is postponed until the lock is released. Blocking the execution of a higher-priority thread or a thread responsible for performing real-time operations may be undesirable, which highlights the necessity of non-blocking calls. Depending upon the application, we may want a blocking or non-blocking acquisition of locks. This is achieved using the following constructs:

int sem_wait(sem_t *sem): [Blocking] This function decrements (locks) the semaphore pointed to by sem. The decrement proceeds only if the value of the semaphore is greater than zero. If the value is zero, the call blocks until the value of the semaphore becomes positive (so that it can be acquired) or a signal handler interrupts the call.

int sem_trywait(sem_t *sem): [Non-Blocking] This function is the same as sem_wait(), except that if the decrement cannot be performed immediately, the call returns an error (errno set to EAGAIN<ref>http://www-numi.fnal.gov/computing/minossoft/releases/R2.3/WebDocs/Errors/unix_system_errors.html</ref>) instead of blocking.

int sem_timedwait(sem_t *sem, const struct timespec *abs_timeout):[Blocking with Timeout] This function is also the same as sem_wait(), except that abs_timeout specifies a limit on the amount of time that the call should block if the decrement cannot be immediately performed. The abs_timeout argument points to a structure that specifies an absolute timeout in seconds and nanoseconds. This structure is defined as follows:

      struct timespec 
      {
      time_t tv_sec;          /* Seconds */
      long   tv_nsec;         /* Nanoseconds [0 .. 999999999] */
      };

If the timeout has already expired by the time of the call, and the semaphore could not be locked immediately, then sem_timedwait() fails with a timeout error (errno set to ETIMEDOUT<ref>http://pubs.opengroup.org/onlinepubs/7908799/xsh/errors.html</ref>). If the operation can be performed immediately, then sem_timedwait() never fails with a timeout error, regardless of the value of abs_timeout. Furthermore, the validity of abs_timeout is not checked in this case.

Return Value: All of these functions return 0 on success; on error, the value of the semaphore is left unchanged, -1 is returned, and errno<ref>http://www.kernel.org/doc/man-pages/online/pages/man3/errno.3.html</ref> is set to indicate the error.

==== Releasing the semaphore ====

int sem_post(sem_t *sem): This function increments (unlocks) the semaphore pointed to by sem, thus making the value of the semaphore positive. Some other process can now acquire it.

Return Value: sem_post() returns 0 on success; on error, the value of the semaphore is left unchanged, -1 is returned, and errno is set to indicate the error.

===='''Pseudocode'''====

     
     sem_t sem;
     int pshared = 0;                                                  /* shared between the threads of this process */
     unsigned int value = 1;                                           /* binary semaphore, initially free */
     int i = sem_init(&sem, pshared, value);                           /* initialize the semaphore */
     int wait = sem_wait(&sem);                                        /* decrement the value of the semaphore, i.e. acquire the lock */
     
     if (wait == -1)
         printf("Error occurred, the value of the semaphore was not decremented");
     
     /* critical section */
     int post = sem_post(&sem);                                        /* increment the value of the semaphore, i.e. release the lock */
     if (post == -1)
         printf("Error occurred, the value of the semaphore was not incremented");
     

=== 2. Pthread.h ===


POSIX threads<ref>http://maxim.int.ru/bookshelf/PthreadsProgram/toc.html</ref>, usually referred to as pthreads, define a set of programming-language types, functions and constants. They are implemented with a pthread.h header file and a thread library. The pthread library provides the following synchronization mechanisms:

1. Mutexes

2. Joins

3. Conditional Variables

4. Barriers

==== Mutexes ====

Mutual Exclusion Lock<ref>http://docs.oracle.com/cd/E19963-01/html/821-1601/sync-28983.html</ref>, mutex for short, is another synchronization method used to avoid race conditions. Mutexes are used in cases that can lead to data inconsistencies, such as when multiple threads must be prevented from operating on the same memory location simultaneously or when a specific order of operations is expected. A mutex blocks access to shared variables by other threads; in particular, mutexes are used to protect a critical region ("a segment of memory") from other threads<ref>Raghunathan, S.; , "Extending Inter-process Synchronization with Robust Mutex and Variants in Condition Wait," Parallel and Distributed Systems, 2008. ICPADS '08. 14th IEEE International Conference on , vol., no., pp.121-128, 8-10 Dec. 2008</ref>.

The following are the functions for using mutexes:<ref>http://pic.dhe.ibm.com/infocenter/aix/v7r1/index.jsp?topic=%2Fcom.ibm.aix.genprogc%2Fdoc%2Fgenprogc%2Fmutexes.htm</ref>

===== Initialising the mutex =====

pthread_mutex_init(pthread_mutex_t *restrict mutex, const pthread_mutexattr_t *restrict attr): This function initializes the mutex referenced by mutex with the attributes specified by attr. If attr is NULL, the default mutex attributes are used; the effect is the same as passing the address of a default mutex attributes object. Upon successful initialization, the state of the mutex becomes initialized and unlocked.

pthread_mutex_init() can be used to reinitialize an already destroyed mutex, but attempting to initialize an already initialized mutex results in undefined behavior. The macro PTHREAD_MUTEX_INITIALIZER can be used to initialize mutexes that are statically allocated, in cases where the default mutex attributes are appropriate. The effect is equivalent to dynamic initialization by a call to pthread_mutex_init() with attr specified as NULL, except that no error checks are performed.

Return Value: The function returns zero on success; otherwise, an error number is returned to indicate the error.

===== Destroying the mutex =====

pthread_mutex_destroy(pthread_mutex_t *mutex): This function is used to destroy a mutex that is no longer needed. It destroys the mutex object referenced by mutex, and the mutex object becomes, in effect, uninitialized; pthread_mutex_destroy() may set the object referenced by mutex to an invalid value. A destroyed mutex object can be reinitialized using pthread_mutex_init(); a mutex must not be referenced after it is destroyed. Only an unlocked, initialized mutex should be destroyed; attempting to destroy a locked mutex results in undefined behavior.

Return Value: The function returns zero on success; otherwise, an error number is returned to indicate the error.

Locking the mutex

pthread_mutex_lock (pthread_mutex_t *mutex): This function locks the mutex passed to it. If the mutex is already locked by another thread, the calling thread blocks until the mutex is unlocked. The operation returns with the mutex object referenced by mutex in the locked state and the calling thread as its owner. If the mutex is of type PTHREAD_MUTEX_ERRORCHECK, error checking is provided: an error is returned if a thread attempts to relock a mutex that it has already locked, or attempts to unlock a mutex that it has not locked or that is not locked.

Mutexes of type PTHREAD_MUTEX_RECURSIVE<ref>http://developer.apple.com/library/ios/#documentation/System/Conceptual/ManPages_iPhoneOS/man3/pthread_mutexattr_settype.3.html</ref> maintain a lock count. When a thread successfully acquires such a mutex for the first time, the lock count is set to one. Every time the thread relocks the mutex, the count is incremented by one, and each time the thread unlocks it, the count is decremented by one. When the lock count reaches zero, the mutex becomes available for other threads to acquire. An error is returned if a thread attempts to unlock a mutex that it has not locked or that is not locked.

If the mutex type is PTHREAD_MUTEX_DEFAULT, attempting to lock the mutex recursively, or to unlock it when it was not locked by the calling thread or is not locked at all, results in undefined behavior. If a signal is delivered to a thread waiting for a mutex, the thread resumes waiting for the mutex upon return from the signal handler, as if it had not been interrupted.

Return Value: The function returns zero on success; otherwise, an error number is returned to indicate the error.

Unlocking the mutex

pthread_mutex_unlock (pthread_mutex_t *mutex): This function releases a previously locked mutex; it releases the mutex object referenced by mutex. The manner in which the mutex is released depends on the mutex's type attribute. If threads are blocked on the mutex object when pthread_mutex_unlock() is called, making the mutex available, the scheduling policy determines which thread acquires it. An error is returned if the mutex is already unlocked or is owned by another thread.

Return Value: The function returns zero on success; otherwise, an error number is returned to indicate the error.

Pseudocode
   
    pthread_mutex_t mutex;
    int pi, pl, pu, pd;

    pi = pthread_mutex_init(&mutex, NULL);    /* NULL attr: use default attributes */
    if (pi != 0)
         printf("Error occurred: mutex was not initialized");

    pl = pthread_mutex_lock(&mutex);
    if (pl != 0)
         printf("Error occurred: mutex was not locked");

    /* critical section */

    pu = pthread_mutex_unlock(&mutex);
    if (pu != 0)
         printf("Error occurred: mutex was not unlocked");

    pd = pthread_mutex_destroy(&mutex);
    if (pd != 0)
         printf("Error occurred: mutex was not destroyed");
    
Avoiding Deadlock<ref>http://www2.chrishardick.com:1099/Notes/Computing/C/pthreads/mutexes.html</ref>

Deadlock can occur when a program holds more than one mutex: two or more threads each hold a mutex while waiting for a mutex held by another. A classic example of deadlock is:

Thread 1:

   lock mutex_a
   |
   lock mutex_b
   -blocked forever waiting for mutex_b

Thread 2:

   lock mutex_b
   |
   lock mutex_a
   -blocked forever waiting for mutex a


A few common techniques<ref>http://pages.cs.wisc.edu/~remzi/Classes/537/Fall2011/Book/threads-deadlock.pdf</ref> can be used to avoid deadlocks:

1. Establish a locking hierarchy.<ref>http://software.intel.com/sites/products/documentation/doclib/stdxe/2013/inspectorxe/lin/ug_docs/GUID-38B68BDA-257C-4D4E-9AC9-B0B2698AD950.htm</ref>

2. Spin lock.

3. Chaining.


1. Establish a locking hierarchy:

• A hierarchy must be established for holding multiple locks.

• For example, a rule could require that whenever both mutex_a and mutex_b are needed, mutex_a must be locked before mutex_b.

• If holding both locks simultaneously is unnecessary, they should not be held simultaneously.

• The locking order of the mutexes must always be maintained.

• The mutexes can be unlocked in any order the program prefers, because unlocking does not create deadlocks.

• A corresponding function should be provided to unlock the set of mutexes.


Thread 1:

   lock mutex_a
   lock mutex_b
   
   //perform processing
   
   unlock mutex_a
   unlock mutex_b
   ...

Thread 2:

   lock mutex_a(blocked)
   
   wake up(obtained mutex_a)
   lock mutex_b
   
   //perform processing
   
   unlock mutex_a
   unlock mutex_b
   ...

2. Spin lock:

• The first mutex can be locked unconditionally (blocking), but any additional mutexes must be acquired with the non-blocking<ref>http://docs.oracle.com/cd/E19683-01/806-6867/sync-36993/index.html</ref> pthread_mutex_trylock().

• If any trylock fails, unlock the mutexes in reverse order and try again.

• Unlocking in reverse order reduces the spinning done by other threads.


Thread 1:

   lock mutex_a
   try-lock mutex_b
   |
   perform processing 
   | 
   unlock mutex_b
   unlock mutex_a  
   |        
   ...

Thread 2:

   lock mutex_a (blocked)
   
   wake up (obtained mutex_a)
   try-lock mutex_b
   |
   perform processing
   |
   unlock mutex_b
   unlock mutex_a

3. Chaining:

• Chaining is useful when traversing linked lists and other linked data structures.

• A locking chain is created, such as: Lock1, Lock2, Unlock1, Lock3, Unlock2, Lock4, ... — each lock is acquired before the previous one is released.

Thread 1

   lock head node  
   head node processing  
   |                    
   traverse branch locking first node 
   unlock head node 
   first node processing

Condition Variables

Condition variables<ref>http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/index.jsp</ref> are synchronization objects that allow threads to wait for certain events (conditions) to occur; they are slightly more complex than mutexes. To ensure safe and consistent serialization, a thread using a condition variable must cooperatively follow a specific protocol involving a mutex, a boolean predicate, and the condition variable itself. Threads cooperating through condition variables can wait for a condition to occur, or wake up other threads that are waiting for a condition.

The following functions are used in conjunction with condition variables:

Creating/Destroying

int pthread_cond_init(pthread_cond_t *restrict cond, const pthread_condattr_t *restrict attr): This function initializes the condition variable referenced by cond with the attributes referenced by attr. If attr is NULL, the default condition variable attributes are used; this is the same as passing the address of a default condition variable attributes object. Upon successful initialization, the condition variable's state becomes initialized. Attempting to initialize an already initialized condition variable results in undefined behavior.

Return Value: If successful the function returns zero; otherwise, an error number is returned to indicate the error.

pthread_cond_destroy(pthread_cond_t *cond): This function destroys the condition variable specified by cond; the object becomes, in effect, uninitialized. An implementation may cause pthread_cond_destroy() to set the object referenced by cond to an invalid value. A destroyed condition variable can be reinitialized using pthread_cond_init(); the results of otherwise referencing the object after it has been destroyed are undefined. It is safe to destroy an initialized condition variable upon which no threads are currently blocked; attempting to destroy a condition variable upon which other threads are blocked results in undefined behavior.

Return Value: If successful the function returns zero; otherwise, an error number is returned to indicate the error.

For example<ref>http://www.yolinux.com/TUTORIALS/LinuxTutorialPosixThreads.html#BASICS</ref>, consider the following code:

 struct list 
 {
    pthread_mutex_t lm;
    ...
 };
 struct elt 
 {
    key k;
    int busy;
    pthread_cond_t notbusy;
    ...
 };
 /* Find a list element and reserve it. */
 struct elt * list_find(struct list *lp, key k)
 {
    struct elt *ep;
    pthread_mutex_lock(&lp->lm);
    while ((ep = find_elt(lp, k)) != NULL && ep->busy)
        pthread_cond_wait(&ep->notbusy, &lp->lm);
    if (ep != NULL)
        ep->busy = 1;
    pthread_mutex_unlock(&lp->lm);
    return ep;
 }
 void delete_elt(struct list *lp, struct elt *ep)
 {
    pthread_mutex_lock(&lp->lm);
    assert(ep->busy);
    ... remove ep from list ...
    ep->busy = 0;	 /* Paranoid. */
    (A) pthread_cond_broadcast(&ep->notbusy);
        pthread_mutex_unlock(&lp->lm);
    (B) pthread_cond_destroy(&ep->notbusy);
        free(ep); 
 }

In this example, the condition variable and its list element may be freed (line B) immediately after all threads waiting for it are awakened (line A), since the mutex and the code ensure that no other thread can touch the element to be deleted.

Waiting on condition

These functions are used to block on a condition variable.

int pthread_cond_timedwait(pthread_cond_t *restrict cond, pthread_mutex_t *restrict mutex, const struct timespec *restrict abstime); and int pthread_cond_wait(pthread_cond_t *restrict cond, pthread_mutex_t *restrict mutex): These functions must be called with mutex locked by the calling thread; otherwise, undefined behavior results. They atomically release mutex and cause the calling thread to block on the condition variable cond; "atomically" here means atomically with respect to access by another thread to the mutex and then the condition variable. That is, if another thread is able to acquire the mutex after the about-to-block thread has released it, then a subsequent call to pthread_cond_broadcast() or pthread_cond_signal() in that thread behaves as if it were issued after the about-to-block thread has blocked. Upon successful return, the mutex is locked and owned by the calling thread.

Waking thread based on condition

These functions unblock threads blocked on a condition variable.

pthread_cond_broadcast(pthread_cond_t *cond): and pthread_cond_signal(pthread_cond_t *cond): The pthread_cond_broadcast() function unblocks all threads currently blocked on the specified condition variable cond. The pthread_cond_signal() function unblocks at least one of the threads that are blocked on the specified condition variable cond (if any threads are blocked on cond).

If more than one thread is blocked on a condition variable, the scheduling policy determines the order in which threads are unblocked. When each thread unblocked as a result of a pthread_cond_broadcast() or pthread_cond_signal() returns from its call to pthread_cond_wait() or pthread_cond_timedwait(), the thread owns the mutex with which it called that function. The unblocked thread(s) contend for the mutex according to the scheduling policy, as if each had called pthread_mutex_lock().

Barriers

A barrier<ref>http://www.cs.utah.edu/~retrac/papers/jpdc05.pdf</ref> is a synchronization method applied to a group of threads or processes in the source code. It stops each thread/process at a certain point until all the other threads/processes reach it; only then are they allowed to proceed.

Initialising the Barrier

int pthread_barrier_init(pthread_barrier_t *restrict barrier, const pthread_barrierattr_t *restrict attr, unsigned count): The init function initializes the barrier with the specified attributes and reserves any resources required to use it. Attempting to initialize an already initialized barrier, initializing a barrier while any thread is blocked on it, or using an uninitialized barrier leads to undefined results.

The count argument specifies the number of threads that must call pthread_barrier_wait() before any of them successfully returns from the call; it must be greater than zero. If the init function fails, the barrier is not initialized and its contents are undefined. Only the object referenced by barrier may be used for performing synchronization; the result of referring to copies of that object in calls to pthread_barrier_destroy() or pthread_barrier_wait() is undefined.

Return Value: Upon successful completion, these functions shall return zero; otherwise, an error number shall be returned to indicate the error.

Barrier Wait

The wait function is used to synchronize parallel threads.

int pthread_barrier_wait(pthread_barrier_t *barrier): The calling thread blocks until the required number of threads have called pthread_barrier_wait() referencing the barrier. When that number is reached, the constant PTHREAD_BARRIER_SERIAL_THREAD is returned to one unspecified thread and zero is returned to each of the other threads, and the barrier is reset to the state it had as a result of the most recent pthread_barrier_init() that referenced it.

If a signal is delivered to a thread blocked on a barrier, then upon return from the signal handler the thread resumes waiting at the barrier if the barrier wait has not completed (that is, if the required number of threads have not arrived at the barrier during the execution of the signal handler); otherwise, the thread continues as normal from the completed barrier wait. Until the thread in the signal handler returns from it, it is unspecified whether other threads may proceed past the barrier once they have all reached it. A thread that has blocked on a barrier does not prevent any unblocked thread that is eligible to use the same processing resources from eventually making forward progress in its execution; eligibility for processing resources is determined by the scheduling policy.

Return Value: Upon successful completion, the function shall return PTHREAD_BARRIER_SERIAL_THREAD for an arbitrary thread synchronized at the barrier and zero for each of the other threads. Otherwise, an error number shall be returned to indicate the error.

Destroying the Barrier

int pthread_barrier_destroy(pthread_barrier_t *barrier): This function destroys the barrier referenced by barrier and releases any resources used by it. A destroyed barrier can be reused once reinitialized by another call to pthread_barrier_init(); the destroyed barrier is set to an invalid value. The results are undefined if pthread_barrier_destroy() is called while any thread is blocked on the barrier, or if it is called with an uninitialized barrier.

Return Value: Upon successful completion, the function returns zero; otherwise, an error number is returned to indicate the error.

Pseudocode
   
   pthread_barrier_t barrier;
   unsigned int count = 4;                                         /* number of participating threads */
   int i = pthread_barrier_init(&barrier, NULL, count);            // initialize the barrier
   
   if (i != 0)
      printf("Error occurred: barrier was not initialized");
   
   int b = pthread_barrier_wait(&barrier);                         //synchronize participating threads
   if (b != 0 && b != PTHREAD_BARRIER_SERIAL_THREAD)               /* one thread legitimately receives PTHREAD_BARRIER_SERIAL_THREAD */
      printf("Error occurred in synchronizing threads");
   
   /* work after the barrier */
   
   int d = pthread_barrier_destroy(&barrier);                      //destroy the barrier
   if (d != 0)
      printf("Error occurred: barrier was not destroyed");
   

Synchronization Mechanisms Examples

DOALL parallelism is supported by synchronization mechanisms such as semaphores, mutexes, condition variables, and barriers, depending on the application. DOACROSS and DOPIPE parallelism are supported by point-to-point synchronization mechanisms such as semaphores and mutexes.

DOALL Parallelism


Consider the following code, amenable to DOALL parallelism.

Code Segment

    for ( i = 1 ; i <= N ; i++ )
    {
       S1: a[i] = b[i] + c[i];      /* can be parallelized */
       S2: sum  = sum + a[i] ;      /* critical section that must be protected by synchronization mechanism */
    }

S1 has no loop-carried dependences, and the accumulation in S2 can be performed in any order, but "sum" is a shared variable whose update forms a critical section. "sum" can be protected using various synchronization mechanisms such as semaphores, mutexes, and condition variables, illustrated as follows.

Semaphore Pseudocode

   sem_t sem;
   sem_init(&sem, 0, 1);            /* initialize the semaphore to 1 (critical section free) */
   for ( i = 1 ; i <= N ; i++ )     /* parallelize the for loop */
   {       
           a[i] = b[i] + c[i];
           sem_wait(&sem);          /* enter critical section; blocks while sem == 0 */
           sum = sum + a[i];
           sem_post(&sem);          /* exit critical section */
   }

Even though the semaphore achieves the objective of protecting the critical section, a mutex provides additional safety by preventing accidental release of the lock. This is due to the exclusive ownership property of mutexes: only the thread that locked a mutex can unlock it.

Mutex Pseudocode

   pthread_mutex_t mut_id;
   pthread_mutex_init(&mut_id, NULL);       /* initialize mutex variable */
   for ( i = 1 ; i <= N ; i++ )             /* parallelize the for loop */
   {    
           a[i] = b[i] + c[i];
           pthread_mutex_lock(&mut_id);     /* enter critical section when mut_id is free */
           sum = sum + a[i];
           pthread_mutex_unlock(&mut_id);   /* exit critical section */
   }

Suppose that in the above example the thread must wait in the critical section for some other arbitrary work to complete; then condition variable synchronization can be used. Here the thread waits until a condition variable is signaled, and the signaling of the condition variable is controlled by another thread. This is illustrated as follows.

Condition Variable Pseudocode

   //thread 1: main thread
   pthread_cond_t cond;
   pthread_mutex_t mut_id;
   pthread_cond_init(&cond, NULL);
   pthread_mutex_init(&mut_id, NULL);
   for ( i = 1 ; i <= N ; i++ )           // parallelize the for loop
   {
       a[i] = b[i] + c[i];
       pthread_mutex_lock(&mut_id);       /* enter critical section when mut_id is free */
       sum = sum + a[i];
       pthread_cond_wait(&cond, &mut_id); /* sleep (releasing mut_id) until 'cond' is signaled */
       pthread_mutex_unlock(&mut_id);     /* exit critical section */
   }
   //thread 2: condition signaling 
   while (i > 0)
       i--;
   if (i == 0)
   {
       pthread_mutex_lock(&mut_id);       /* acquire the same mutex used by thread 1 */
       pthread_cond_signal(&cond);        /* wake a thread waiting on the condition variable */
       pthread_mutex_unlock(&mut_id);     /* release the mutex */
   }

The previous examples do not require barrier synchronization, since their purpose was only to protect access to shared data. Barrier synchronization addresses the case in which all parallel tasks must reach a common point before any of them proceeds. Consider the following code, amenable to DOALL parallelism, slightly modified to illustrate barrier synchronization.

   for ( i = 1 ; i <= N ; i++ )
   {	
       a[i] = b[i] + c[i] ;
       min(a);	              /*computes the minimum value from array a*/
   }

In the above example, the minimum of "a" can be calculated only after the computation of "a" is complete. The barrier implementation for the above code segment is shown below:

Barrier Pseudocode

   pthread_barrier_t bar;
   pthread_barrier_init(&bar, NULL, nthreads); /* initialize barrier for nthreads threads */
   for ( i = 1 ; i <= N ; i++ )                /* parallelize the for loop */
   {   
       a[i] = b[i] + c[i];
       pthread_barrier_wait(&bar);             /* wait until all threads reach the barrier */
       min(a);                                 /* computes the minimum value from array a */
   }

DOACROSS Parallelism


Consider the following code, amenable to DOACROSS parallelism

   for ( i = 1 ; i <= N ; i++ )
   {
       S1: d[i] = b[i] * c[i] ;  /*no loop dependencies*/
       S2: a[i] = a[i-1] + d[i]; /* loop dependency present*/
   }

S1 does not have any loop-carried dependences. However, S2 depends on S1, and S2 also has a loop-carried dependence. Thus S1 can be parallelized without any synchronization, but synchronization mechanisms are needed for S2. The above code can be parallelized using semaphores or mutexes; the advantage of using a mutex over a semaphore is the same as mentioned earlier for DOALL parallelism. They are illustrated as follows.

Semaphore Pseudocode

   sem_t sem[N+1];                 /* a vector of semaphores, one per iteration */
   for ( i = 0 ; i <= N ; i++ )
       sem_init(&sem[i], 0, 0);    /* initialized to 0: iteration i not yet complete */
   sem_post(&sem[0]);              /* iteration 1 has no predecessor to wait for */
   for ( i = 1 ; i <= N ; i++ )    /* parallelize the for loop */
   {
       S1: d[i] = b[i] * c[i];     /* no dependences, hence parallel execution */
       sem_wait(&sem[i-1]);        /* block until S2 of the previous iteration completes */
       S2: a[i] = a[i-1] + d[i];
       sem_post(&sem[i]);          /* allow the next iteration to proceed */
   }

Mutex Pseudocode

   pthread_mutex_t mut_id[N+1];          /* a vector of mutexes, one per iteration */
   for ( i = 0 ; i <= N ; i++ )
   {
       pthread_mutex_init(&mut_id[i], NULL);
       pthread_mutex_lock(&mut_id[i]);   /* start locked: iteration i not yet complete */
   }
   pthread_mutex_unlock(&mut_id[0]);     /* iteration 1 has no predecessor to wait for */
   for ( i = 1 ; i <= N ; i++ )          /* parallelize the for loop */
   {
       S1: d[i] = b[i] * c[i];           /* no dependences, hence parallel execution */
       pthread_mutex_lock(&mut_id[i-1]); /* block until S2 of the previous iteration completes */
       S2: a[i] = a[i-1] + d[i];
       pthread_mutex_unlock(&mut_id[i]); /* allow the next iteration to proceed */
   }

DOPIPE Parallelism


Consider the following code, amenable to DOPIPE parallelism

   for ( i = 1 ; i <= N ; i++ )
   {
       S1: a[i] = a[i-1] + b[i] ; /* loop dependent statement */
       S2: c[i] = c[i] + a[i];    /* loop independent but dependent on S1 */
   }

S1 has a loop-carried dependence. S2 depends on S1, even though S2 has no loop-carried dependence of its own. Hence S2 can be executed only after S1 of the same iteration has executed, which necessitates the use of synchronization mechanisms. The above code can be parallelized using semaphores or mutexes, illustrated as follows.

Semaphore Pseudocode

   sem_t sem[N+1];
   for ( i = 1 ; i <= N ; i++ )
   {
       sem_init(&sem[i], 0, 0);   /* initialized to 0: S1 of iteration i not yet done */
   }
   for ( i = 1 ; i <= N ; i++ )
   {
       S1: a[i] = a[i-1] + b[i];  /* loop-dependent statement executed first */
       sem_post(&sem[i]);         /* tell the second loop that S1 is completed */
   }
   for ( i = 1 ; i <= N ; i++ )
   { 
       sem_wait(&sem[i]);         /* wait for S1 of iteration i to complete */
       S2: c[i] = c[i] + a[i];    /* execute once synchronization is done */
   }

Mutex Pseudocode

   pthread_mutex_t mut_id[N+1];
   for ( i = 1 ; i <= N ; i++ )
   {
       pthread_mutex_init(&mut_id[i], NULL); /* initialize all mutex variables */
       pthread_mutex_lock(&mut_id[i]);       /* start locked: S1 of iteration i not yet done */
   }
   for ( i = 1 ; i <= N ; i++ )
   {
       S1: a[i] = a[i-1] + b[i];             /* loop-dependent statement executed first */
       pthread_mutex_unlock(&mut_id[i]);     /* tell the second loop that S1 is completed */
   }
   for ( i = 1 ; i <= N ; i++ )
   {
       pthread_mutex_lock(&mut_id[i]);       /* wait for S1 of iteration i to complete */
       S2: c[i] = c[i] + a[i];               /* execute once synchronization is done */
   }

References

<references/>

http://linux.die.net/man/3/

http://publib.boulder.ibm.com/infocenter/iseries/v7r1m0/index.jsp

http://maxim.int.ru/bookshelf/PthreadsProgram/toc.html

http://www.yolinux.com/TUTORIALS/LinuxTutorialPosixThreads.html#BASICS