2.6 Blade Servers and Message Passing

Are blade servers an extension of message passing?

Blade servers are not an extension of message passing; rather, they use message passing to achieve fast and efficient performance. Parallel computing frequently relies on message passing to exchange information between computational units. In high-performance computing, the most common message passing technology is the Message Passing Interface (MPI), which has open-source implementations supported by Cisco Systems and other vendors.

High-performance computing (HPC) cluster applications require a high-performance interconnect between blade servers to achieve fast and efficient performance on computation-intensive workloads. When messages are passed between nodes, some time is spent transmitting them, and depending on the frequency of data synchronization between processes, that overhead can have a significant effect on total application run time. It is therefore critically important to understand the application's interprocess communication patterns and the frequency of its updates, because these affect the performance and design of the parallel application, the design of the interconnecting network, and the choice of network technology.
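
To make the cost of message transmission concrete, the following is a minimal sketch (an illustrative example, not taken from the article or any particular application) of the classic ping-pong benchmark, written in C with MPI: two ranks bounce a 1 KB message back and forth, and rank 0 reports the average one-way latency. Build and launch commands vary by MPI distribution; mpicc and mpirun are typical.

    /* Ping-pong latency sketch: run with exactly two ranks, e.g.
     *   mpicc pingpong.c -o pingpong && mpirun -np 2 ./pingpong
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int iters = 1000;
        char buf[1024] = {0};          /* 1 KB message payload */

        MPI_Barrier(MPI_COMM_WORLD);   /* synchronize before timing */
        double start = MPI_Wtime();

        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, 1024, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, 1024, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, 1024, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, 1024, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        double elapsed = MPI_Wtime() - start;
        if (rank == 0)  /* each iteration is two one-way messages */
            printf("avg one-way latency: %f us\n",
                   elapsed / (2.0 * iters) * 1e6);

        MPI_Finalize();
        return 0;
    }

Running this benchmark over different interconnects (for example, Gigabit Ethernet versus InfiniBand) is a simple way to see how much the choice of network technology affects communication cost.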

With traditional transport protocols such as TCP/IP, the CPU is responsible both for moving data between application memory and the network and for transport protocol processing. The effect is that time spent communicating between nodes is time not spent processing the application. Minimizing communication time is therefore a key consideration for certain classes of applications.
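
One common way to reduce the impact of communication time is to overlap it with computation. The sketch below (a hypothetical example, not a definitive implementation) uses MPI's non-blocking MPI_Isend and MPI_Irecv so that each of two ranks computes on purely local data while the interconnect moves the message, blocking in MPI_Waitall only at the point where the remote data is actually needed.

    /* Communication/computation overlap sketch: run with exactly
     * two ranks, e.g. mpirun -np 2 ./overlap */
    #include <mpi.h>
    #include <stdio.h>

    #define N 4096

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        int peer = 1 - rank;           /* assumes exactly two ranks */

        static double send_buf[N], recv_buf[N], local[N];
        for (int i = 0; i < N; i++) { send_buf[i] = rank; local[i] = i; }

        /* Post the communication first ... */
        MPI_Request reqs[2];
        MPI_Irecv(recv_buf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(send_buf, N, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

        /* ... then compute on data that does not depend on the message. */
        double sum = 0.0;
        for (int i = 0; i < N; i++) sum += local[i] * local[i];

        /* Block only when the remote data is actually needed. */
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        for (int i = 0; i < N; i++) sum += recv_buf[i];

        printf("rank %d: result %f\n", rank, sum);
        MPI_Finalize();
        return 0;
    }
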

MPI is “middleware” that sits between the application and the network hardware. It provides a portable mechanism for exchanging messages between processes regardless of the underlying network or parallel computing environment. Implementations of the MPI standard use underlying communication stacks such as TCP or UDP over IP, InfiniBand, or Myrinet to communicate between processes. MPI offers a rich set of functions that can be combined in simple or complex ways to express virtually any parallel computation. The ability to exchange messages allows instructions or data to be passed between nodes, for example to distribute data sets for calculation. MPI has been implemented on a wide variety of platforms, operating systems, and cluster and supercomputer architectures.
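
As a small illustration of how MPI functions can be combined to distribute a data set for calculation (an assumed example, not drawn from the article), the sketch below scatters an array from rank 0 across all processes with MPI_Scatter, has each process sum its own chunk, and then combines the partial sums on the root with MPI_Reduce.

    /* Scatter/reduce sketch: run with a rank count that divides 16,
     * e.g. mpirun -np 4 ./scatter */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int total = 16;
        int chunk = total / size;      /* assumes size divides total */
        double data[16], part[16];

        if (rank == 0)                 /* only the root holds the full set */
            for (int i = 0; i < total; i++) data[i] = i + 1;

        /* Distribute equal chunks to every process. */
        MPI_Scatter(data, chunk, MPI_DOUBLE,
                    part, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        double local_sum = 0.0;
        for (int i = 0; i < chunk; i++) local_sum += part[i];

        /* Combine the partial sums back on the root. */
        double global_sum = 0.0;
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE,
                   MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum 1..%d = %f\n", total, global_sum); /* expect 136 */

        MPI_Finalize();
        return 0;
    }

The same pattern scales from a handful of blade servers to large clusters, which is precisely the portability the MPI standard is designed to provide.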

See Also

[1] The best of both worlds