CSC 456 Fall 2013/4b cv
Background
Non-Uniform Memory Access (NUMA) technology has become the preferred design for large systems as processor counts grow. NUMA distributes memory among the processors, giving each processor fast local access to its own share while still allowing it to reach remote memory attached to other processors. Because local and remote accesses have different costs, the system needs an efficient page-management scheme to use this memory effectively. NUMA memory is managed in a node-based model: each node consists of CPUs, caches, and local memory, and the nodes communicate over a NUMA interconnect. The system creates a page pool for each node and can swap pages between nodes using a swapper thread. Each page pool contains a free list that holds the available pages, while active and inactive lists are used to manage page reclamation.
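The node-based model described above can be pictured as a small per-node data structure: each node keeps its own pool with free, active, and inactive lists. The C sketch below is purely illustrative; the names (struct node_pool, alloc_page, and the node/page counts) are hypothetical and not taken from any actual kernel source.

 /* Minimal sketch of per-node page pools; names and sizes are assumptions. */
 #include <stdio.h>
 #include <stdlib.h>
 
 #define NODES 4          /* assumed number of NUMA nodes         */
 #define PAGES_PER_NODE 8 /* assumed pages of local memory / node */
 
 struct page {
     int frame;           /* physical frame number                */
     struct page *next;   /* link within a list                   */
 };
 
 struct node_pool {
     struct page *free;      /* pages available for allocation    */
     struct page *active;    /* recently used pages               */
     struct page *inactive;  /* candidates for reclamation        */
 };
 
 static struct node_pool pools[NODES];
 
 /* Put every frame of a node on that node's free list. */
 static void init_node(int node)
 {
     for (int i = 0; i < PAGES_PER_NODE; i++) {
         struct page *p = malloc(sizeof(*p));
         p->frame = node * PAGES_PER_NODE + i;
         p->next = pools[node].free;
         pools[node].free = p;
     }
 }
 
 /* Take a page from the node's free list and move it to the active
  * list; return NULL if the node is out of free pages (a swapper
  * thread would then reclaim pages from the inactive list). */
 static struct page *alloc_page(int node)
 {
     struct page *p = pools[node].free;
     if (!p)
         return NULL;
     pools[node].free = p->next;
     p->next = pools[node].active;
     pools[node].active = p;
     return p;
 }
 
 int main(void)
 {
     for (int n = 0; n < NODES; n++)
         init_node(n);
     struct page *p = alloc_page(2);
     printf("allocated frame %d on node 2\n", p ? p->frame : -1);
     return 0;
 }

A real implementation would also move pages between the active and inactive lists and let the swapper thread return reclaimed pages to the free list under memory pressure; that path is omitted here.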
History
Different Strategies
- first touch - allocates the frame on the node that incurs the page fault, i.e. on the same node as the processor that first accesses the page (the placement choice each strategy makes is sketched after this list).
- round robin - frames are allocated from the memory nodes in turn, interleaving a process's pages across the nodes.
- local to first access - defers the allocation, waiting until the page is first accessed and then placing the frame on the accessing node.
- local to first request - allocates the frame on the node of the processor that requested the memory, regardless of which processor accesses it first.
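The decision each of these strategies makes can be summarized in a small policy selector. The following C sketch assumes a simplified model in which a policy only needs to know which node faulted, which node issued the allocation request, and a running page counter for round robin; the enum and function names are illustrative, not part of any real allocator API.

 /* Hypothetical placement-policy selector for the four strategies above. */
 #include <stdio.h>
 
 #define NODES 4  /* assumed node count */
 
 enum policy { FIRST_TOUCH, ROUND_ROBIN, LOCAL_TO_FIRST_ACCESS, LOCAL_TO_FIRST_REQUEST };
 
 /* Decide which node should hold a newly allocated frame.
  * fault_node   - node of the CPU that first touched the page
  * request_node - node of the CPU that asked for the allocation
  * counter      - running page count, used by round robin        */
 static int place_page(enum policy p, int fault_node, int request_node,
                       unsigned long counter)
 {
     switch (p) {
     case FIRST_TOUCH:
     case LOCAL_TO_FIRST_ACCESS:
         /* Placement is decided at the first access (page fault);
          * the frame comes from the faulting node.               */
         return fault_node;
     case LOCAL_TO_FIRST_REQUEST:
         /* Frame comes from the node that issued the request,
          * even if another node touches the page first.          */
         return request_node;
     case ROUND_ROBIN:
         /* Frames are spread over the nodes in turn.             */
         return (int)(counter % NODES);
     }
     return 0;
 }
 
 int main(void)
 {
     /* Example: a CPU on node 1 requests memory, a CPU on node 3 touches it first. */
     printf("first touch      -> node %d\n", place_page(FIRST_TOUCH, 3, 1, 0));
     printf("round robin      -> node %d\n", place_page(ROUND_ROBIN, 3, 1, 5));
     printf("local to request -> node %d\n", place_page(LOCAL_TO_FIRST_REQUEST, 3, 1, 0));
     return 0;
 }

In practice, first touch corresponds to the default local-allocation behavior on Linux, where a page is physically allocated on the node of the CPU that first faults it in.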