CSC/ECE 506 Fall 2007/wiki1 8 s5


Latest revision as of 02:22, 11 September 2007


Message Passing

When we have multiple processors, there needs to be a way for those processors to communicate. Message Passing forms one part of this communication architecture. Other methods of communication, such as Shared Address Space and Data Parallel Processing, contribute along with Message Passing to the communication abstraction. The communication abstraction is essentially a layer between the application software and the communication hardware, where the programmer uses available libraries to initiate communication between processors through programs.


Message Passing Model

The Message Passing Model is defined by:

1. A set of processes, each having only local memory

2. Processes that communicate by sending and receiving messages

3. Data transfer between processes that requires cooperative operations by each process (a send operation must have a matching receive)
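The three properties above can be sketched with Python's standard library, with threads standing in for processes: each side keeps its own local state and interacts only through matched send (put) and receive (get) operations on queues. This is only an in-process illustration of the pattern; real message passing runs in separate address spaces.

```python
# Minimal sketch of the message passing model: local memory only,
# communication solely by matched send/receive operations.
from queue import Queue
from threading import Thread

def worker(inbox: Queue, outbox: Queue) -> None:
    local_value = 21               # "local memory": never accessed directly by the parent
    msg = inbox.get()              # blocking receive; pairs with the parent's send
    outbox.put(local_value + msg)  # send back; pairs with the parent's receive

inbox, outbox = Queue(), Queue()
t = Thread(target=worker, args=(inbox, outbox))
t.start()
inbox.put(21)            # send: must have a matching receive in the worker
result = outbox.get()    # receive: must have a matching send in the worker
t.join()
print(result)  # 42
```

Note how neither side ever reads the other's variables; all cooperation happens through the explicit send/receive pairs, which is the defining constraint of the model.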


The message passing model has gained wide use in the field of parallel computing due to advantages that include:

1. Hardware match - The message passing model fits well on parallel supercomputers and on clusters of workstations, which are composed of separate processors connected by a communications network.

2. Functionality - Message passing offers a full set of functions for expressing parallel algorithms, providing control not found in the data-parallel model.

3. Performance - Message passing gives the programmer explicit control over data locality, which in turn enables effective management of memory and CPU caches.


Latest Developments in Message Passing

Although the Message Passing Model as a whole has not changed over time, the Message Passing Interface (MPI) has undergone continuous change. MPI is a communications protocol used to program parallel computers. MPI is not sanctioned by any major standards body; nevertheless, it has become the de facto standard for communication among the processes of a parallel program running on a distributed-memory system.


Message Passing Interface (MPI)

MPI is a specification for message passing libraries, designed to serve as a standard for message passing in distributed-memory parallel computing. The goal of the Message Passing Interface, simply stated, is to provide a widely used standard for writing message-passing programs. The interface attempts to establish a practical, portable, efficient, and flexible standard for message passing.
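The shape of MPI's point-to-point interface, with ranks, a communicator, and matched send/receive calls, can be mimicked with a toy stand-in. Real MPI programs obtain a communicator such as MPI_COMM_WORLD from a library like MPICH or Open MPI; the ToyComm class and its methods below are illustrative inventions, not MPI's actual API.

```python
# Toy analogue of MPI-style point-to-point messaging: one inbox per rank,
# with send/recv methods loosely modeled on MPI_Send/MPI_Recv.
from queue import Queue
from threading import Thread

class ToyComm:
    """Toy communicator: one inbox queue per rank."""
    def __init__(self, size: int) -> None:
        self.size = size
        self._inboxes = [Queue() for _ in range(size)]

    def send(self, obj, dest: int) -> None:
        # Roughly analogous to MPI_Send: deliver obj to rank `dest`.
        self._inboxes[dest].put(obj)

    def recv(self, rank: int):
        # Roughly analogous to MPI_Recv: block until a message arrives
        # for `rank` (no source filtering or tags in this toy version).
        return self._inboxes[rank].get()

def program(comm: ToyComm, rank: int, results: dict) -> None:
    # SPMD style, as in MPI: every rank runs the same program and
    # branches on its own rank.
    if rank == 0:
        comm.send(21, dest=1)         # pairs with rank 1's recv
        results[0] = comm.recv(0)     # wait for rank 1's reply
    elif rank == 1:
        value = comm.recv(1)          # matching receive
        comm.send(value * 2, dest=0)  # matching reply

comm = ToyComm(size=2)
results = {}
threads = [Thread(target=program, args=(comm, r, results)) for r in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results[0])  # 42
```

The single-program, multiple-rank structure of `program` is the key idiom: it is how real MPI codes are organized, even though the transport here is only a pair of in-process queues.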


Advantages

MPI is preferred over other approaches for several reasons:

1. Standardization - MPI is the only message passing library that can be considered a standard. It is supported on virtually all High Performance Computing (HPC) platforms.

2. Portability - Source code requires no modification when an application is ported to a different platform that supports MPI.

3. Performance - Vendor implementations can exploit native hardware features to optimize performance.

4. Functionality - the specification defines over 115 routines.

5. Availability - A variety of implementations are available, both vendor-supplied and public domain.


MPI Implementations

Some of the implementations of MPI include:

1. Classical Cluster and Supercomputer implementations

2. Python

3. OCaml

4. Java

5. Microsoft Windows

6. MATLAB

7. Hardware implementations

Blade Servers

A blade server is a server chassis housing multiple thin, modular electronic circuit boards known as server blades. Each blade is an independent server, often dedicated to a single application. The blades are servers on a card, containing processors, memory, integrated network controllers, an optional Fibre Channel host bus adapter (HBA), and other input/output (I/O) ports.

Blade servers allow more processing power in less rack space while simplifying cabling and reducing power consumption. According to a WinSystems article on server technology, enterprises moving to blade servers can experience as much as an 85% reduction in cabling compared with conventional 1U or tower servers. With so much less cabling, IT administrators can spend less time managing the infrastructure and more time ensuring high availability.


A blade server is sometimes referred to as a high-density server and is typically used in a clustering of servers that are dedicated to a single task, such as:

1. File sharing

2. Web page serving and caching

3. SSL encrypting of Web communication

4. Transcoding of Web page content for smaller displays

5. Streaming audio and video content


Architecture

A general blade server architecture consists of hardware components such as the switch blade (for network switching functions), the chassis (with fans, temperature sensors, etc.), and multiple compute blades (for compute server functions). Application-specific blades are positioned between the switch blade and the compute blades.

The outside world connects through the rear of the chassis to a switch card in the blade server. The switch card is provisioned to distribute packets to the blades within the blade server. All of these components are tied together by network management software provided by the blade server vendor. This network management could itself be done through message passing, which essentially makes blade servers an extension of message passing.
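The packet-distribution role of the switch card described above can be sketched as a small dispatcher that routes each incoming packet to the inbox of the blade it addresses. The blade names and packet fields here are illustrative inventions, not any vendor's management API.

```python
# Hedged sketch of a switch card's role: route each packet to the
# addressed blade's inbox within the chassis.
from queue import Queue

class ToySwitch:
    def __init__(self, blade_ids) -> None:
        # One inbox per blade installed in the chassis.
        self.inboxes = {blade: Queue() for blade in blade_ids}

    def route(self, packet: dict) -> None:
        # Deliver the packet to the addressed blade; silently drop
        # packets addressed to unknown blades.
        inbox = self.inboxes.get(packet["dest"])
        if inbox is not None:
            inbox.put(packet)

switch = ToySwitch(["blade-1", "blade-2"])
switch.route({"dest": "blade-2", "data": "health-check"})
pkt = switch.inboxes["blade-2"].get_nowait()
print(pkt["data"])  # health-check
```

Each blade consuming only its own inbox mirrors the message passing model from earlier in the article: blades cooperate through delivered messages rather than shared state.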


Evolution

Blade servers date back to the 1970s. The evolution chronology, as provided by Wikipedia (article: Blade server), can be summarized as follows:

1. In the 1970s, soon after the introduction of 8-bit microprocessors, complete microcomputers were placed on cards and packaged in standard 19-inch racks. This architecture was used in the industrial process control industry as an alternative to minicomputer control systems. Programs were stored in EPROM on early models and were limited to a single function.

2. In 1981 the VMEbus architecture was designed in California. VMEbus defined a computer interface that included a board-level computer installed in a chassis backplane, with multiple slots for pluggable boards providing I/O, memory, or additional computing. This architecture introduced the use of a chassis, which forms the backbone of blade servers today.

3. Later, the PCI Industrial Computer Manufacturers Group (PICMG) developed a chassis/blade structure for the Peripheral Component Interconnect (PCI) bus, called CompactPCI. Though these chassis-based computers included multiple computing elements to provide the desired level of performance, there was always one master board coordinating the operation of the entire system.

4. In the next phase, PICMG expanded the CompactPCI specification with the use of standard Ethernet connectivity between boards across the backplane.

5. The first open architecture for a multi-server chassis arrived in September 2001 with the adoption of the PICMG 2.16 CompactPCI Packet Switching Backplane specification. This was the closest precursor to the present-day blade server.

The name blade server is given to a card that includes the processor, memory, I/O, and non-volatile program storage (flash memory or small hard disks). This represents a complete server, with its operating system and applications packaged on a single card / board / blade. These blades operate independently within a common chassis, doing the work of multiple separate server boxes more efficiently. Less space consumption is the most obvious benefit of this packaging, but additional efficiency benefits have become clear in power, cooling, management, and networking, since common infrastructure supports the entire chassis rather than being provided on a per-box basis.


Future

Early versions of server blades will be primarily high-density, low-power devices with relatively low performance. This type of blade is suited for first-tier applications such as static Web servers, security, network services, and streaming media because the applications can be easily and inexpensively load balanced. The performance of an application depends on the aggregate performance of the servers rather than the performance of an individual server.

Higher performance, less dense blade designs will help drive blade usage into more mainstream applications in the corporate data center. These designs can offer the individual performance characteristics and features available in today's rack-dense servers along with the cost, deployment, serviceability, and density benefits of server blades. The blades will be well suited to high-performance Web servers, dedicated application servers, server-based or thin-client computing, and high-performance computing (HPC) clusters. The introduction of server blades and associated technology like IB (InfiniBand) will usher in a new IT infrastructure.


References

1. Message Passing

2. MPI Implementation

3. Blade Servers

4. Blade Server Evolution