CSC 456 Fall 2013/4b cv

Non-Uniform Memory Access (NUMA) has become the standard solution for scaling memory performance as the number of processors in a system grows. NUMA distributes memory among the processors, giving each processor fast local access to its own share while still allowing it to reach remote memory attached to other processors. NUMA is an important architectural feature, and if it is ignored one can expect sub-par application memory performance.

===Background===

NUMA is often grouped together with Uniform Memory Access (UMA) because the two methods of memory management have similar features. In the UMA architecture (see figure 1.1), a shared bus sits between the processors/caches and the memory. In NUMA (see figure 1.2), however, each processor/cache has a direct connection to its local memory, and the bus then connects the memories of the different nodes. The main trade-off between UMA and NUMA is memory access time: since NUMA memory is directly linked to its processor/cache, it provides faster access to local data but is slower when accessing remote data.
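
As a rough illustration of this trade-off (our own sketch, not part of the original article), the C program below uses the Linux libnuma API to pin a thread to one node, allocate one buffer locally and one on a remote node, and time a pass over each. The node numbers 0/1 and the 64 MiB buffer size are arbitrary assumptions, and the timings also include page-fault costs, so treat the numbers as indicative only.

<pre>
/* A minimal sketch of the local-vs-remote trade-off using the Linux
 * libnuma API (compile with: gcc -O2 numa_demo.c -lnuma). The node
 * numbers 0/1 and the 64 MiB buffer size are illustrative assumptions. */
#include <numa.h>
#include <stdio.h>
#include <time.h>

static double touch_pages(volatile char *buf, size_t size) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < size; i += 4096)     /* one write per 4 KiB page */
        buf[i] = 1;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    const size_t size = 64UL << 20;             /* 64 MiB */
    if (numa_available() < 0 || numa_max_node() < 1) {
        fprintf(stderr, "need a NUMA machine with at least two nodes\n");
        return 1;
    }
    numa_run_on_node(0);                        /* pin this thread to node 0  */
    char *local  = numa_alloc_onnode(size, 0);  /* memory on our own node     */
    char *remote = numa_alloc_onnode(size, 1);  /* memory on a different node */
    printf("local pass:  %.3f s\n", touch_pages(local, size));
    printf("remote pass: %.3f s\n", touch_pages(remote, size));
    numa_free(local, size);
    numa_free(remote, size);
    return 0;
}
</pre>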

The NUMA system memory is managed in a node-based model. Each node consists of CPUs, caches, and local memory, and nodes communicate with one another through a NUMA interconnect. The system creates page pools for each node and can swap pages between nodes using a swapper thread. The page pools contain free lists that hold the available pages; the active and inactive lists are used to manage page reclamation.
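
A purely hypothetical C sketch of this model is shown below; the struct and field names are invented for illustration and are not taken from any real kernel, but they show how each node can keep a free list alongside the active and inactive lists that a swapper thread works from.

<pre>
/* A purely illustrative sketch of the node-based model described above;
 * it is not actual kernel code, and every struct and field name here is
 * hypothetical. Each node owns a page pool holding a free list plus the
 * active/inactive lists that the swapper thread scans for reclamation. */
#include <stddef.h>

struct page {
    struct page *next;     /* linkage within whichever list holds the page */
    int          node;     /* home node of this page frame                 */
};

struct page_pool {
    struct page *free;     /* pages available for allocation               */
    struct page *active;   /* recently referenced pages                    */
    struct page *inactive; /* candidates for reclamation by the swapper    */
    size_t       nr_free;  /* low-water check that wakes the swapper       */
};

struct numa_node {
    int              id;   /* node number                                  */
    struct page_pool pool; /* one pool per node, as the text describes     */
};
</pre>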

===History===

===Different Strategies===

Page allocation strategies in NUMA systems break down into three categories:

* Fetch - determining which page to bring into main memory
** demand fetching
** prefetching
* Placement - determining where to hold the page
** Fixed-Node
** Preferred-Node
** Random-Node
* Replacement - determining which page to evict to make room for new pages
** Per-Task
** Per-Computation
** Global

Common placement policies include:

* first touch - allocates the frame on the node that incurs the page fault, i.e. on the same node where the processor that accesses it resides (see the sketch after this list)
* round robin - successive pages are allocated on different memory nodes in rotation, spreading them evenly across the system
* random - pages are allocated on randomly chosen memory nodes
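
To make the first-touch policy concrete, here is a minimal C/OpenMP sketch (an illustrative example, not from the original text): each thread first-touches the array slice it will later compute on, so the operating system should place those pages on that thread's node.

<pre>
/* A minimal first-touch sketch in C/OpenMP (compile with -fopenmp).
 * Each thread initializes the slice of the array it will later compute
 * on, so the OS places those pages on that thread's node. */
#include <omp.h>
#include <stdlib.h>

#define N (1L << 24)

int main(void) {
    double *a = malloc(N * sizeof *a);
    if (a == NULL) return 1;

    /* Initialization phase: the thread that first touches a page decides
     * which node backs it under the first-touch policy. */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < N; i++)
        a[i] = 0.0;

    /* Compute phase: the same static schedule reuses the same
     * thread-to-index mapping, so accesses stay node-local. */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < N; i++)
        a[i] = 2.0 * a[i] + 1.0;

    free(a);
    return 0;
}
</pre>

The key design point is using the same schedule in both loops: a serial initialization would fault every page in on the master thread's node and turn the compute phase into mostly remote accesses.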

===Page Allocation Support in OpenMP===

* OpenMP implementations provide directives for controlling how blocks of memory are allocated across nodes
** !dec$ migrate_next_touch(v1,...,v2) - migrates the selected pages to the node of the thread that next references them, giving that thread fast local access (a rough C analogue follows below)
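
Since !dec$ migrate_next_touch is a vendor-specific Fortran directive, a hedged Linux analogue is to migrate pages explicitly with the move_pages(2) call from libnuma; the sketch below moves one page to an assumed target node, standing in for the node of the thread that will touch the data next.

<pre>
/* migrate_next_touch is a vendor-specific Fortran directive; a rough
 * Linux analogue is to migrate pages explicitly with move_pages(2) from
 * libnuma (link with -lnuma). The target node 1 below is an assumed
 * value standing in for the node of the thread about to use the data. */
#include <numaif.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t page = 4096;
    char *buf = aligned_alloc(page, page);
    if (buf == NULL) return 1;
    buf[0] = 1;                      /* fault the page in on some node */

    void *pages[1] = { buf };
    int   nodes[1] = { 1 };          /* assumed destination node       */
    int   status[1];
    /* pid 0 means "this process"; MPOL_MF_MOVE moves only our pages. */
    if (move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE) != 0)
        perror("move_pages");
    else
        printf("page now resides on node %d\n", status[0]);

    free(buf);
    return 0;
}
</pre>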

===References===

<ref name="rot99">Rothman, Jeffrey B. and Alan Jay Smith. "Sector Cache Design and Performance." UC Berkeley Technical Report CSD-99-1034, 1999. http://www.eecs.berkeley.edu/Pubs/TechRpts/1999/CSD-99-1034.pdf</ref>

<references/>