CSC/ECE 506 Spring 2013/11a ad

Introduction

When dealing with a relatively small number of processors (8-16), a bus-based shared memory organization works well (Solihin, p. 320). To provide shared memory for a much larger number of processors, a different organization is needed, because the physical limitations of the bus prevent it from scaling. Such large-scale systems are organized as Distributed Shared Memory (DSM), also referred to as Non-Uniform Memory Access (NUMA), machines. The benefit of DSM/NUMA is that the system can scale to a much larger number of processors; the disadvantage is that scaling in this way may not be the most cost-effective solution (Solihin, p. 320). The remainder of this section discusses the performance of DSM systems.


According to Solihin (p. 320), two aspects restrict the scalability of bus-based multiprocessors: the physical limitations of the interconnect and the limitations of the coherence protocol. On a bus-based system, adding a processor does not change any other physical component, but it lengthens and further loads the bus, reducing the bus speed. In addition, the protocol needed to keep the caches coherent does not scale well: as the number of processors grows, the coherence traffic grows with it and risks overwhelming the available bus bandwidth. According to Solihin, there are a few ways to mitigate this problem; the following summary is from p. 321 of the Solihin textbook.
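To get a rough sense of scale before turning to those mitigations (the numbers below are assumed for illustration and are not from Solihin): if each of N processors generates 10 million cache misses per second and each miss transfers a 64-byte block, the bus must sustain roughly N × 10,000,000 × 64 bytes/s, or about 0.64 GB/s per processor. Sixteen processors therefore already demand on the order of 10 GB/s of sustained bus bandwidth, before counting coherence traffic such as invalidations, which is why broadcast over a single shared bus stops scaling.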


Figure 1. Ways to Scale Multiprocessors (multiple caches of a shared resource)


From the figure, we can see that there are three ways to scale a multiprocessor system. The first is a single-bus system. This is the least scalable, due to the limitations of the bus wire itself: as processors are added, the bus must grow longer and therefore slower, and the additional traffic can overwhelm it. The second is a point-to-point bus system. This keeps the individual links relatively fast, but since the traffic still scales with the number of processors, the system is eventually limited by traffic as well. The most scalable organization to date is a directory-based system. The links remain fast because the wires stay short, and the traffic remains low because the directory records where each block is cached, so requests are sent only to the caches involved rather than broadcast.
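To make the directory idea concrete, the sketch below (in C, with assumed field sizes and a stubbed network call; this is not taken from Solihin) shows one possible directory entry: the home node records which processors cache a block, so an invalidation only needs to be sent to the recorded sharers instead of being broadcast to every processor.

<pre>
#include <stdint.h>

/* Stand-in for the real network layer (assumed for this sketch). */
static void send_invalidation(int proc)
{
    (void)proc;  /* a real system would send an invalidation message here */
}

/* One directory entry per memory block, kept at the block's home node. */
enum dir_state { DIR_UNCACHED, DIR_SHARED, DIR_EXCLUSIVE };

struct dir_entry {
    enum dir_state state;    /* coherence state of the block              */
    uint64_t       sharers;  /* bit i set => processor i caches the block */
    uint16_t       owner;    /* valid when state == DIR_EXCLUSIVE         */
};

/* On a write request, invalidations go only to the recorded sharers,
   not to every processor as a bus broadcast would. */
static void invalidate_sharers(struct dir_entry *e, int requester)
{
    for (int p = 0; p < 64; p++)
        if (((e->sharers >> p) & 1u) && p != requester)
            send_invalidation(p);
    e->sharers = (uint64_t)1 << requester;
    e->state   = DIR_EXCLUSIVE;
    e->owner   = (uint16_t)requester;
}
</pre>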


DSM Classification

In order to provide a wide and extensive overview of the field of DSM, this section presents possible platforms for classification and a set of relevant parameters that must be considered in DSM design. The selection of classification criteria is somewhat conditional, since some of the parameters could also have been adopted as the platform for classification. Our choice of classification criteria relies on the possibility of classifying all existing systems into appropriate non-overlapping subsets of systems with common general advantages and drawbacks.

Criterion: DSM implementation level. Types:

 1. Hardware
 2. Software
    2.1. Operating system
         2.1.1. Inside the kernel
         2.1.2. Outside the kernel
    2.2. Runtime library routines
    2.3. Compiler-inserted primitives
 3. Hardware/software combination

The level of DSM implementation affects both the programming model and the overall system performance. While hardware solutions bring total transparency to the programmer and achieve very low access latencies, software solutions can better exploit the application behavior and represent an ideal testbed for experimenting with new concepts and algorithms.
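As a hypothetical illustration of the "runtime library routines" level (the names below are invented for this sketch and are not from any particular system), a library-level software DSM exposes shared memory through explicit calls rather than transparent hardware, and ties the coherence work to synchronization points:

<pre>
#include <stdlib.h>

/* Hypothetical library-level DSM interface; single-node stubs stand in
   for a real distributed runtime so the sketch is self-contained. */
void *dsm_alloc(size_t bytes)  { return malloc(bytes); }   /* shared allocation   */
void  dsm_acquire(int lock_id) { (void)lock_id; }          /* pull remote updates */
void  dsm_release(int lock_id) { (void)lock_id; }          /* push local updates  */

/* How application code would use such a runtime: coherence traffic is
   associated with the explicit synchronization calls, not every access. */
void increment_shared_counter(void)
{
    int *counter = dsm_alloc(sizeof *counter);
    dsm_acquire(0);
    (*counter)++;
    dsm_release(0);
}
</pre>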

As a consequence, the number of software DSM systems presented in the open literature is considerably higher, but the systems intended to become commercial products and standards are mostly hardware-oriented.

The architectural configuration of the system affects system performance, since it can offer or restrict the potential for parallel processing of requests related to DSM management. It also strongly affects scalability. Since a system applying a DSM mechanism is usually organized as a set of clusters connected by an interconnection network, the architectural parameters include:

a) Cluster configuration (single/multiple processors; with/without caches; shared/private and single/multiple-level caches; etc.)
b) Interconnection network (bus hierarchy, ring, mesh, hypercube, specific LAN, etc.)

Cluster configuration is usually very important for hardware-oriented proposals that integrate the mechanisms of cache coherence on the lower level with the DSM mechanisms on the higher level of the system organization, or that even store all shared data in large caches. Cluster configuration is mostly transparent for software solutions. It also includes the memory organization and the placement of the directory.

Almost all types of interconnection networks found in multiprocessors and distributed systems have also been used in DSM systems. The majority of software-oriented DSM systems were actually built on top of Ethernet, although some of the solutions tend to be architecture-independent and portable to various platforms. On the other hand, topologies such as a bus hierarchy or mesh are typical for hardware solutions. The choice of topology can also be very important for the implementation of the DSM algorithm, since it affects the possibility and cost of broadcast and multicast transactions.

Shared data organization represents the global layout of the shared address space, as well as the size and organization of the data items in it, and can be distinguished by:

a) Structure of shared data (non-structured, or structured into objects, language types, etc.)
b) Granularity of coherence unit (word, cache block, page, complex data structure, etc.)

The impact of this organization on overall system performance is closely related to the locality of data access. Hardware solutions always deal with non-structured data objects (typically cache blocks), while many software implementations tend to use data items that represent logical entities, in order to take advantage of the locality naturally expressed by the application. On the other hand, some software solutions, based on virtual memory mechanisms, organize data in larger physical blocks (pages), counting on coarse-grain sharing.
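The sketch below (assumed unit sizes, in C) shows that the choice of coherence unit is, at bottom, a choice of how an address is rounded down to a unit boundary; the larger the unit, the more unrelated data items share one unit and are fetched, invalidated, or faulted on together.

<pre>
#include <stdint.h>

#define CACHE_BLOCK_SIZE 64u     /* a typical hardware coherence unit (bytes)  */
#define PAGE_SIZE        4096u   /* a typical page-based software unit (bytes) */

/* The coherence unit containing addr is found by rounding the address down
   to a unit boundary; unit_size must be a power of two. */
static uintptr_t unit_base(uintptr_t addr, uintptr_t unit_size)
{
    return addr & ~(unit_size - 1);
}
</pre>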

Software Support

The first software-supported DSM was created in 1986. In the more than 20 years since, there have been great improvements on that initial system. First, the software support will usually relax the memory consistency model in some way, because communication in a DSM is much more expensive than in a bus-based shared memory system. Over the last 20 years, more than 20 different memory consistency models have been proposed <ref name="shi"></ref>. Second, cache coherence must be addressed: having multiple cached copies means that when one copy is updated, the other copies must be handled in some way so that stale values are not used. Traditionally there are two techniques, the snoopy protocol and the directory-based protocol. According to Shi <ref name="shi"></ref>, the snoopy protocol is less used because it requires hardware support. Lastly, according to Shi <ref name="shi"></ref>, the major problem is the interface: for a DSM system to be competitive, it has to work for many customers. Below is a listing of some representative software DSM implementations.


Figure 3. Representative Software DSM Implementations <ref name="shi"></ref>
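Many page-based software DSM systems detect remote accesses through the virtual memory system rather than through hardware. A minimal sketch of that general technique follows (POSIX C; the fetch_page_from_home stub is invented for illustration and a real system would exchange messages there): pages that are not yet valid locally are left unmapped or protected, so the first access traps, the handler fetches the data, and the faulting access is then retried transparently.

<pre>
#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

static size_t page_size;

/* Stand-in for the real messaging layer (assumed for this sketch):
   a real DSM would request the page contents from its home node here. */
static void fetch_page_from_home(void *page_base, size_t len)
{
    (void)page_base;
    (void)len;
}

/* SIGSEGV handler: a fault means the page is not yet valid on this node. */
static void dsm_fault_handler(int sig, siginfo_t *si, void *ctx)
{
    (void)sig;
    (void)ctx;
    void *page = (void *)((uintptr_t)si->si_addr & ~(uintptr_t)(page_size - 1));

    fetch_page_from_home(page, page_size);              /* bring the data in    */
    mprotect(page, page_size, PROT_READ | PROT_WRITE);  /* let the access retry */
}

int main(void)
{
    page_size = (size_t)sysconf(_SC_PAGESIZE);

    struct sigaction sa = {0};
    sa.sa_flags = SA_SIGINFO;
    sa.sa_sigaction = dsm_fault_handler;
    sigaction(SIGSEGV, &sa, NULL);

    /* The shared region starts with no access rights, so the first
       touch of every page traps into the handler above. */
    char *shared = mmap(NULL, 16 * page_size, PROT_NONE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    shared[0] = 1;   /* faults once, is fetched and unprotected, then succeeds */
    return 0;
}
</pre>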


Hardware Support

Although much of the research has been directed toward software support for DSM, there has also been some research into adding hardware support. Unfortunately, according to Shi <ref name="shi"></ref>, large corporations have resisted hardware support, because hardware support raises issues of compatibility. Fortunately, the recent adoption of certain hardware standards will allow some hardware support at the mass-market level.


11a. Performance of DSM systems. Distributed shared memory systems combine the programming model of shared memory systems with the scalability of distributed systems. However, since DSM systems need extra coordination between the software layer and the underlying hardware, achieving good performance can be a big challenge. The factors that harm performance include the overhead of maintaining cache coherence and memory consistency, and the latency of the interconnect. This section further explores the factors that affect the performance of DSM systems, and the improvements that have been made to existing systems.

Performance Concerns

Compared to bus-based shared memory machines, a DSM machine has unique requirements for providing cache coherence and memory consistency, and it communicates over an interconnection network rather than a bus. The performance concerns each of these raises in DSM machines are discussed in detail below.

Maintaining cache coherence

To maintain correct cache coherence, write propagation and write serialization must be provided, both of which can have adverse effects on performance.

Write serialization requires that all writes to a memory location be seen in the same order by all processors. Earlier, an example was given showing how write serialization can be violated by observing writes out of order. A naive implementation of write serialization would require that a request and all of its messages be performed atomically, to avoid overlapping of requests [1, p. 338]. Solihin [1, p. 342-344] discusses correctness issues that can occur if requests are allowed to overlap without special consideration. A non-overlapping approach requires that each request have well-defined begin and end conditions, so that the home node can observe a request and wait for its completion before processing other requests to the same block.

The performance cost of disallowing overlapping requests is that subsequent read or write operations to the same block are delayed from starting, even when some of the messages within the requests could be overlapped without correctness concerns.
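One common way to enforce this non-overlapping behavior (a sketch of the general idea in C, not Solihin's exact protocol) is for the home node to mark a directory entry as busy while a request for that block is in flight, and to defer or negatively acknowledge later requests to the same block until the outstanding one completes:

<pre>
#include <stdbool.h>

struct block_entry {
    bool busy;   /* a coherence request for this block is currently in flight */
    /* ... coherence state, sharer list, pending-request queue, etc. ...      */
};

/* Called at the home node when a new read/write request arrives for a block. */
static bool try_begin_request(struct block_entry *e)
{
    if (e->busy)
        return false;   /* caller queues or NACKs; the request is retried later */
    e->busy = true;     /* serialize: no other request may start on this block  */
    return true;
}

/* Called once all messages (and acknowledgements) of the request are done. */
static void end_request(struct block_entry *e)
{
    e->busy = false;
}
</pre>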

Another performance problem that can arise through cache coherence is false sharing misses. False sharing can be explained by an example. Suppose two processors have a cache block cached in the shared state, but processor A is reading and writing a variable x within this block, and processor B is reading and writing a variable y within the same block. Although neither processor accesses the other's variable, since both variables map to the same cache block, each processor invalidates or sends updates to the other without the other actually needing the data. In a DSM system, these invalidations or updates can unnecessarily consume a significant amount of interconnect bandwidth.
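The effect is easy to provoke, as the sketch below shows (C, assuming a 64-byte coherence unit): x and y are logically independent, but because they sit in the same block, every write by one thread invalidates or updates the other thread's copy. Padding each variable out to its own block removes the ping-ponging at the cost of some memory.

<pre>
/* Two logically independent counters that happen to share one coherence unit. */
struct shared_counters {
    long x;   /* written only by processor A                          */
    long y;   /* written only by processor B, but in the same block   */
};

/* Padding each counter to its own 64-byte block avoids false sharing. */
struct padded_counters {
    long x;
    char pad[64 - sizeof(long)];   /* pushes y into the next block     */
    long y;
};
</pre>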

Maintaining memory consistency

Depending on the memory consistency model being enforced, performance can be lost by having to ensure various degrees of atomicity of memory accesses and adherence to program order.

Sequential consistency requires following program order: the statements of each thread must execute in the order defined by the source code. The implication is that statements within a thread cannot be executed out of order, so compiler and processor optimizations that reorder instructions to reduce the latency of individual instructions and increase instruction-level parallelism on pipelined architectures must be avoided to varying degrees [1, p. 293].
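The classic two-thread idiom below shows why this matters (plain C, simplified; real code would need atomics, volatile, or fences): if either the compiler or an out-of-order processor moves the write to flag ahead of the write to data, the consumer can observe flag == 1 while still reading a stale data value, which sequential consistency forbids.

<pre>
/* Shared variables, initially zero; one thread runs producer(), another consumer(). */
int data = 0;
int flag = 0;

void producer(void)              /* runs on processor P0 */
{
    data = 42;                   /* under SC this write is seen first         */
    flag = 1;                    /* reordering this ahead of data breaks it   */
}

void consumer(void)              /* runs on processor P1 */
{
    while (flag == 0)
        ;                        /* spin until the producer signals           */
    int observed = data;         /* guaranteed to be 42 only under SC         */
    (void)observed;
}
</pre>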

For atomicity of memory accesses to be seen by all processors, special considerations need to be made for DSM systems. In general, all processors must be able to detect when a load or a store has completed. For a store (write atomicity) on a bus-based system, completion can be assumed to occur as soon as the read-exclusive request reaches the bus, because all processors on the bus are guaranteed to see the request and will immediately invalidate their copies; the next read to the location then becomes a read miss, and the block is re-cached with the most up-to-date value. On a DSM system, however, there is no bus, so a write cannot be assumed complete as soon as invalidations are sent on the network. Instead, the writer must wait until acknowledgements of the invalidations are received, which can take many network hops and incur high latency, especially if there is network congestion [1, p. 292].
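A sketch of the acknowledgement counting this implies (illustrative only, in C): the writer, or the block's home node, records how many invalidations were sent and does not treat the write as complete until every sharer has acknowledged.

<pre>
#include <stdbool.h>

struct pending_write {
    int acks_expected;   /* number of invalidations sent to sharers         */
    int acks_received;   /* incremented as acknowledgement messages arrive  */
};

/* Called when an invalidation acknowledgement arrives; returns true when
   the write may finally be considered complete (write atomicity holds). */
static bool ack_arrived(struct pending_write *w)
{
    w->acks_received++;
    return w->acks_received == w->acks_expected;
}
</pre>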

Overall, memory consistency on DSM systems can add latency while waiting for the acknowledgements that confirm completion of writes, and can lose performance by forcing in-order execution.

Relaxed memory consistency models are techniques normally used to alleviate performance concerns, and are discussed in detail as improvements in specific DSM systems.

Latency of interconnections

As mentioned in the cache coherence and memory consistency sections, interconnections are what distinguish DSM systems from bus-based systems. An interconnection network is unlike a bus in that it does not guarantee that a message reaches all recipients, and messages are certainly not seen by all receivers at the same moment. Each message must be sent as a transaction in a networking protocol, and each packet incurs at least the latency of its hops through the routers in the network, and can also incur latency in being generated.
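As a rough worked example (all numbers assumed for illustration): if a remote read miss requires a request to the home node, a forward to the current owner, and a reply back to the requester, and each of those three messages crosses 3 router hops at 50 ns per hop plus about 200 ns of processing at its endpoint, the miss costs on the order of 3 × (3 × 50 ns + 200 ns) ≈ 1 µs, compared with tens of nanoseconds for a local cache hit. Protocol design therefore focuses on reducing the number of messages on the critical path and overlapping their latency with useful work.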

Since messages would become pervasive if the DSM system were naively designed to behave identically to a bus-based system, care must be taken to design coherence protocols and consistency models that minimize the sending and receiving of messages, and that, when messages must be sent or received, allow execution to overlap with communication rather than blocking while waiting for messages or their receipt.