CSC/ECE 506 Spring 2011/ch5 LL
Revision as of 16:25, 27 February 2011
Supplement to Chapter 5: Parallel Programming For Linked Data Structures
Introduction
Chapter 5 of Solihin (2008) discusses parallel algorithms for operating on linked data structures. The focus is primarily on thread-safe accesses based on locks. The semantics of locks imply that only a single thread can access the data structure at any given time. The chapter discusses optimizations such as finer-grained locks and read-write locks to enable more parallelism. This supplement discusses non-blocking algorithms for accessing linked data structures and the issues they raise.
Non-blocking algorithms
Motivation
Parallel algorithms enable threads (or processes) to concurrently access shared data structures in a thread-safe manner. The two categories of concurrent algorithms are:
- "Blocking algorithms"
All lock-based algorithms are examples of blocking algorithms, since only one thread can be in the critical section at a time, while other threads trying to enter it block on the lock. All the methods outlined in Chapter 5 of Solihin (2008) are lock-based and hence blocking in nature.
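As a concrete illustration of blocking access, the following minimal C sketch (not code from the text; the names and node layout are illustrative) protects a shared linked list with a single coarse-grained pthread mutex, so at most one thread can modify the list at a time:

```c
#include <pthread.h>
#include <stdlib.h>

/* Minimal singly linked list node (illustrative). */
typedef struct node {
    int key;
    struct node *next;
} node_t;

static node_t *head = NULL;
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Blocking insert: every thread must acquire the one coarse-grained
 * lock, so concurrent inserters serialize on list_lock. */
void insert_head(int key)
{
    node_t *n = malloc(sizeof(node_t));
    n->key = key;
    pthread_mutex_lock(&list_lock);   /* other threads block here */
    n->next = head;
    head = n;
    pthread_mutex_unlock(&list_lock);
}
```

Because every operation funnels through the same lock, the list is thread-safe but admits no concurrency among writers, which motivates the drawbacks listed below.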
Lock based algorithms have the following drawbacks:
- Blocking limits scalability, especially when the granularity of the lock is too coarse.
- Locks can result in deadlock and livelock.
- Locks can cause priority inversion: a high-priority thread/process cannot proceed while a low-priority thread/process holds the shared lock.
- Locks can result in convoying: all other threads must wait if the thread holding a lock is descheduled due to a time-slice interrupt or page fault (see lock convoy).
- Locks can contribute to other issues such as the thundering herd problem.
- "Non-blocking algorithms"
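Non-blocking algorithms instead rely on atomic primitives such as compare-and-swap (CAS). As a sketch under stated assumptions (illustrative names; uses the GCC `__sync` atomic builtins cited in the references), the following C code pushes a node onto a Treiber-style lock-free stack: a thread that loses a race with another updater simply re-reads and retries rather than blocking.

```c
#include <stdlib.h>

/* Illustrative node type. */
typedef struct node {
    int key;
    struct node *next;
} node_t;

static node_t *top = NULL;

/* Non-blocking push: no thread ever holds a lock. If the CAS fails
 * because another thread changed 'top' between our read and our
 * update, we retry with the new value of 'top'. */
void push(int key)
{
    node_t *n = malloc(sizeof(node_t));
    n->key = key;
    node_t *old;
    do {
        old = top;      /* snapshot the current top */
        n->next = old;
    } while (!__sync_bool_compare_and_swap(&top, old, n));
}
```

Note that the retry loop makes progress system-wide even if individual threads retry repeatedly, which is the defining property separating this style from the lock-based code above.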
Categories
Building blocks
Lock-free linked list implementation
Pop-only case
Push-pop case, a.k.a. the ABA problem
Tag-based solution
DCAS-based solutions
Memory model and barrier instructions
The x86 memory model
Barrier instructions
Linked-list example using barrier instructions
Skip lists
References
- Yan Solihin, Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems, Solihin Books, August 2009.
- David E. Culler, Jaswinder Pal Singh, and Anoop Gupta, Parallel Computer Architecture: A Hardware/Software Approach, Gulf Professional Publishing, August 1998.
- Nimar S. Arora, Robert D. Blumofe, and C. Greg Plaxton, "Thread Scheduling for Multiprogrammed Multiprocessors" (1998), http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.130.3853
- Wikipedia, "Non-blocking algorithm," http://en.wikipedia.org/wiki/Non-blocking_algorithm
- Wikipedia, "ABA problem," http://en.wikipedia.org/wiki/ABA_problem
- Wikipedia, "Hazard pointer," http://en.wikipedia.org/wiki/Hazard_pointer
- Wikipedia, "Compare-and-swap," http://en.wikipedia.org/wiki/Compare-And-Swap
- Wikipedia, "Load-Link/Store-Conditional," http://en.wikipedia.org/wiki/Load-Link/Store-Conditional
- GCC, "Atomic Builtins," http://gcc.gnu.org/onlinedocs/gcc/Atomic-Builtins.html#Atomic-Builtins
Appendix
// Sequential code, from Solihin (2008), p. 25.
for (i = 0; i < 8; i++)
    a[i] = b[i] + c[i];
sum = 0;
for (i = 0; i < 8; i++)
    if (a[i] > 0)
        sum = sum + a[i];
Print sum;