CSC/ECE 517 Summer 2008/wiki1 4 wm
= Introduction =
A thread is a basic unit of CPU utilization. A traditional process has a single thread of control. Many modern operating systems provide features enabling a process to contain multiple threads of control. If a process has multiple threads of control, it can do more than one task at a time, such as displaying graphics while reading keystrokes. Threads are used mainly to run asynchronous tasks, for example by handing an application's tasks to an executor, and they can reduce a program's overall execution time. All threads belonging to the same process share its code section, data section and other operating system resources, such as open files and signals. Multithreading is therefore more efficient than running parallel processes for the same program, which would require far more overhead for creation and context switching.
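As a concrete illustration, the following Java sketch (the class name <code>TwoTasks</code> and the <code>shared</code> variable are invented for this example) starts a second thread of control inside one process. Each thread runs its own task, yet both see the same data, reflecting the shared data section described above.

<pre>
// Minimal sketch: one process, two threads of control.
// Both threads see the same static field, illustrating that threads
// of a process share its data section.
public class TwoTasks {
    private static int shared = 0;                 // data shared by all threads

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            // asynchronous task running concurrently with main
            for (int i = 0; i < 5; i++) {
                System.out.println("worker: step " + i);
            }
            shared = 42;                           // visible to main after join()
        });

        worker.start();                            // second thread of control
        System.out.println("main: still responsive while worker runs");
        worker.join();                             // wait for the worker to finish
        System.out.println("main: shared value = " + shared);
    }
}
</pre>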
=Multi-threaded programming=
Multithreading support on a single processor works by giving each thread a “time slice” of the CPU, similar to the time-shared execution of processes. On multi-core machines, different threads can run on different processors, so threads truly run simultaneously. Support for threads may be provided either at the user level (user threads, also called green threads) or by the kernel (kernel threads, also called native threads). User threads are implemented above the kernel and are managed without kernel support, whereas kernel threads are supported and managed directly by the operating system.
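The sketch below (a minimal Java example; the class name is arbitrary) deliberately starts more threads than the machine has logical processors. On a single core the threads interleave through time slices; on a multi-core machine some of them run truly in parallel.

<pre>
// Sketch: more threads than cores still make progress via time slicing.
// availableProcessors() reports the number of logical CPUs the JVM sees.
public class TimeSlice {
    public static void main(String[] args) throws InterruptedException {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("logical processors: " + cores);

        Thread[] threads = new Thread[cores * 2];   // deliberately oversubscribe
        for (int i = 0; i < threads.length; i++) {
            final int id = i;
            threads[i] = new Thread(() -> {
                // On a single core these interleave via time slices;
                // on a multi-core machine some run truly in parallel.
                System.out.println("thread " + id + " running as "
                        + Thread.currentThread().getName());
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
    }
}
</pre>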
==Multi-threading models==
There are three common multithreading models:
===One-to-one===
This model maps each user thread to a kernel thread. It provides more concurrency than the many-to-one model by allowing another thread to run when one thread makes a blocking system call, and it allows multiple threads to run in parallel on multiprocessors. The main disadvantage of this model is the overhead of creating a kernel thread for every user thread, which hurts application performance and limits the number of threads the system can support.
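Assuming, as is the case on typical modern JVMs such as HotSpot, that each <code>java.lang.Thread</code> is backed by its own kernel thread, the following sketch shows the main benefit of the one-to-one model: a blocking call in one thread does not stall the others. The class name and the use of <code>sleep</code> as a stand-in for a blocking system call are just for illustration.

<pre>
// Sketch of the one-to-one model's key benefit: with each Java thread
// backed by its own kernel thread, a blocking call in one thread does
// not prevent the other thread from running.
public class OneToOne {
    public static void main(String[] args) throws InterruptedException {
        Thread blocker = new Thread(() -> {
            try {
                Thread.sleep(1000);   // stands in for a blocking system call
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("blocker: done blocking");
        });

        Thread worker = new Thread(() ->
                System.out.println("worker: ran while blocker was blocked"));

        blocker.start();
        worker.start();               // scheduled on its own kernel thread
        blocker.join();
        worker.join();
    }
}
</pre>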
===Many-to-one===
This model maps many user-level threads to one kernel thread. It is the model used by green threads, which are scheduled by a virtual machine instead of natively by the operating system and thus emulate a multithreaded environment without relying on native OS capabilities. These user-level threads are lightweight and efficient to create, and an application can have as many of them as it needs. True parallelism is not achieved, however, because the kernel sees, and can schedule, only one thread at a time. There are two main disadvantages (the first is illustrated by the sketch after this list):
1) If a thread makes a blocking system call, the entire process will block.
2) On a multi-core machine, the threads cannot run in parallel, because they all share a single kernel thread.
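The following Java sketch is only an analogy, not real green threads: it multiplexes two tasks onto a single underlying thread, so when the first task blocks, the second cannot run, just as an entire many-to-one process stalls on one blocking system call. The class name and the single-thread executor are chosen purely for this illustration.

<pre>
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Analogy sketch, not real green threads: many tasks multiplexed onto a
// single kernel thread behave like the many-to-one model. When one task
// blocks, every other task has to wait.
public class ManyToOneAnalogy {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService oneKernelThread = Executors.newSingleThreadExecutor();

        oneKernelThread.submit(() -> {
            try {
                Thread.sleep(1000);   // stands in for a blocking system call
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("task 1: finally done blocking");
        });

        // This task is ready to run, but it cannot start until task 1
        // releases the single underlying thread -- everything stalls.
        oneKernelThread.submit(() ->
                System.out.println("task 2: ran only after task 1 unblocked"));

        oneKernelThread.shutdown();
        oneKernelThread.awaitTermination(5, TimeUnit.SECONDS);
    }
}
</pre>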
===Many-to-Many===
This model multiplexes many user-level threads onto a smaller or equal number of kernel threads, offering the best of both worlds: developers can create as many user threads as they need, the kernel can schedule another thread for execution when a user thread makes a blocking system call, and the kernel threads can run in parallel on multi-core systems.
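As a rough Java analogy (a sketch, not a real M:N thread scheduler), a fixed-size thread pool multiplexes many units of work onto a smaller number of underlying threads; a blocked unit of work ties up only one of them, so the rest keep making progress. The class name, pool size, and task count are invented for this example.

<pre>
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Analogy sketch for the many-to-many idea: many units of work are
// multiplexed onto a smaller, fixed number of threads, and a blocked
// unit of work occupies only one of them, so the rest keep running.
public class ManyToManyAnalogy {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4); // the "kernel threads"

        for (int i = 0; i < 16; i++) {                          // the "user threads"
            final int id = i;
            pool.submit(() -> {
                try {
                    Thread.sleep(100);   // blocking ties up only one pool thread
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                System.out.println("task " + id + " ran on "
                        + Thread.currentThread().getName());
            });
        }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
</pre>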