CSC/ECE 506 Fall 2007/wiki4 001 a1
Current Supercomputer Interconnect Topologies
Gigabit Ethernet
Gigabit Ethernet is by far the communications choice for small, lower-powered clusters with a minimal need for communication bandwidth. Ethernet is defined by IEEE Standard 802.3. Gigabit Ethernet transfers data at up to 1 gigabit per second, and it is currently being replaced in the marketplace by the faster 10 Gigabit Ethernet. The standard defines the use of data frame collision detection rather than collision avoidance; the access method Ethernet uses is CSMA/CD, which stands for Carrier Sense Multiple Access with Collision Detection. If two stations operating Gigabit Ethernet send data frames which collide, the following procedure is followed according to Standard 802.3:
Main procedure:
1. Frame ready for transmission.
2. Is the medium idle? If not, wait until it becomes ready and wait the interframe gap period (9.6 µs in 10 Mbit/s Ethernet).
3. Start transmitting.
4. Does a collision occur? If so, go to the collision detected procedure.
5. Reset retransmission counters and end frame transmission.
Collision detected procedure:
1. Continue transmission until the minimum packet time is reached (jam signal) to ensure that all receivers detect the collision.
2. Increment the retransmission counter.
3. Has the maximum number of transmission attempts been reached? If so, abort transmission.
4. Calculate and wait a random backoff period based on the number of collisions.
5. Re-enter the main procedure at stage 1.
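The random backoff in step 4 is a truncated binary exponential backoff: after the nth collision a station waits a random number of slot times between 0 and 2^min(n,10) - 1, and gives up after 16 attempts. The Python sketch below is only an illustration of that retry loop; start_transmit, medium_idle, collision_detected, and wait_slots are hypothetical callbacks standing in for the network adapter hardware.

 import random

 MAX_ATTEMPTS = 16     # 802.3 aborts the frame after 16 transmission attempts
 BACKOFF_LIMIT = 10    # backoff exponent is capped at 10 (truncated binary exponential backoff)

 def backoff_slots(collisions):
     """Random backoff delay, in slot times, after the given number of collisions."""
     k = min(collisions, BACKOFF_LIMIT)
     return random.randint(0, 2 ** k - 1)

 def send_with_csma_cd(start_transmit, medium_idle, collision_detected, wait_slots):
     """Sketch of the 802.3 main / collision-detected procedures for one frame."""
     for attempt in range(1, MAX_ATTEMPTS + 1):
         while not medium_idle():               # main step 2: defer until the medium is idle
             pass
         start_transmit()                       # main step 3: start transmitting
         if not collision_detected():           # main step 4: no collision detected
             return True                        # main step 5: frame sent successfully
         wait_slots(backoff_slots(attempt))     # collision step 4: random backoff before retrying
     return False                               # collision step 3: maximum attempts reached, abort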
The most common Ethernet frame format (Type II) includes fields for the destination MAC address, source MAC address, EtherType, payload, and checksum. MAC (Media Access Control) is a layer 2 protocol that works below the Ethernet 802.2 LLC (Logical Link Control) sublayer and above the physical layer in most network topologies. Ethernet interfaces with MAC and LLC in the data link layer, below the network layer. The Ethernet Type II data frame format is sketched below.
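To make the frame layout concrete, the following sketch packs an Ethernet Type II frame in Python; the MAC addresses, EtherType, and payload are made-up example values, and the CRC-32 frame check sequence is computed with zlib purely for illustration.

 import struct, zlib

 def ethernet_frame(dst_mac, src_mac, ethertype, payload):
     """Build an Ethernet Type II frame: dst MAC (6) | src MAC (6) | EtherType (2) | payload | FCS (4)."""
     if len(payload) < 46:                                  # pad the payload up to the 46-byte minimum
         payload = payload + bytes(46 - len(payload))
     header = struct.pack('!6s6sH', dst_mac, src_mac, ethertype)
     fcs = struct.pack('!I', zlib.crc32(header + payload))  # 32-bit checksum over header + payload
     return header + payload + fcs

 # Example: an IPv4 (EtherType 0x0800) frame between two made-up addresses.
 frame = ethernet_frame(bytes.fromhex('aabbccddeeff'),
                        bytes.fromhex('112233445566'),
                        0x0800,
                        b'hello, cluster')
 print(len(frame), 'bytes on the wire (excluding preamble)')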
Supercomputers connected by Ethernet can choose from one of many physical layer links and network layer types. TCP/IP is the most common network layer implemented worldwide, which is one reason Gigabit Ethernet is so prevalent: less modification has to take place for usability in large-scale supercomputer networks. Many of the other most widely implemented supercomputer interconnects are simply custom implementations of the network layer. This is done to speed up Ethernet LAN clusters running on the data link layer, since TCP/IP can introduce too much latency and protocol overhead.
InfiniBand
InfiniBand is a switched-fabric interconnect that operates in a "data engine layer" below the upper layers over which client communications take place. To reduce TCP/IP latencies, a large portion of the protocol stack execution is offloaded from the host processor onto the channel adapter. A subnet of switches and end nodes can communicate with any other InfiniBand subnet through routers.
The standard InfiniBand data packet header order is defined as follows:
* LRH - Local Route Header
* GRH - Global Route Header
* BTH - Base Transport Header
* ExTH - Extended Transport Header
* Msg Payload - Message Payload
* Immediate Data Header
* I-CRC - Invariant CRC (32-bit); not used for raw datagrams
* V-CRC - Variant CRC (16-bit)
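As a rough illustration of how these headers add up, the sketch below totals the per-packet overhead, assuming typical sizes of 8 bytes for the LRH, 40 for the GRH, 12 for the BTH, 4 for the I-CRC, and 2 for the V-CRC; the exact figure depends on which extended transport and immediate data headers are present.

 # Assumed header sizes in bytes (typical values, for illustration only).
 LRH, GRH, BTH, ICRC, VCRC = 8, 40, 12, 4, 2

 def packet_overhead(global_routed):
     """Header plus CRC overhead for one packet, with or without a GRH."""
     overhead = LRH + BTH + ICRC + VCRC
     if global_routed:
         overhead += GRH          # the GRH is only present when the packet leaves the subnet
     return overhead

 print('within a subnet: ', packet_overhead(False), 'bytes of overhead')   # 26
 print('between subnets:', packet_overhead(True), 'bytes of overhead')     # 66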
The local route header (LRH) is added to data sent to another end node within a subnet. Its fields are further defined as follows:
The LRH is used to route packets within subnets.
* VL - Actual Virtual Lane used
* Vers - LRH Version
* NH - Next Header - indicates the next header: IBA transport, GRH, IPv6 (raw), or Ethertype (raw)
* SL - Service Level
* Destination LID - Destination Local Identifier (unique only within the subnet)
* Rsv - Reserved field
* Pkt Len - Packet Length (a multiple of 4 bytes, up to 8 KB)
* Source LID - Source Local Identifier
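To show how these fields pack into the 8-byte LRH, the sketch below unpacks one in Python, assuming the usual bit widths (VL 4 bits, Vers 4, SL 4, 2 reserved, NH 2, Destination LID 16, 5 reserved, Pkt Len 11, Source LID 16); both the widths and the example bytes are illustrative rather than authoritative.

 import struct

 def parse_lrh(lrh8):
     """Unpack an 8-byte Local Route Header into its fields (assumed bit layout)."""
     b0, b1, dlid, lenword, slid = struct.unpack('!BBHHH', lrh8)
     return {
         'VL': b0 >> 4,              # actual virtual lane used (4 bits)
         'Vers': b0 & 0x0F,          # LRH version (4 bits)
         'SL': b1 >> 4,              # service level (4 bits)
         'NH': b1 & 0x03,            # next header: IBA transport, GRH, IPv6 raw, Ethertype raw (2 bits)
         'DLID': dlid,               # destination local identifier (16 bits)
         'PktLen': lenword & 0x07FF, # packet length in 4-byte words (11 bits)
         'SLID': slid,               # source local identifier (16 bits)
     }

 # Example: a made-up LRH for a 64-byte packet from LID 0x0002 to LID 0x0001.
 print(parse_lrh(bytes.fromhex('7002000100100002')))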
The global route header (GRH) is added to data sent to an end node on another subnet. Its fields are further defined as follows:
* All end nodes are required to source/sink GRH packets.
* GID (Global Identifier) - a valid IPv6 address
* The GRH is used to route packets between subnets and for multicast.
* The GRH is consistent with the IPv6 header per RFC 2460.
* TC (Traffic Class) - communicates end-to-end class of service
* Flow ID (Flow Label) - can be used to identify an end-to-end flow
* The GRH is present in all packets if the LRH Next Header = GRH.
* Each end node shall be assigned a unique GID.
* Applications target an end node by its GID.
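Because the GRH follows the IPv6 header layout of RFC 2460, it can be unpacked the same way as an IPv6 header. The sketch below is illustrative only; the example bytes and the zeroed GIDs are made up.

 import struct

 def parse_grh(grh40):
     """Unpack a 40-byte Global Route Header using the IPv6 header layout (RFC 2460)."""
     word0, paylen, nxthdr, hoplim = struct.unpack('!IHBB', grh40[:8])
     return {
         'IPVer': word0 >> 28,            # IP version field (6)
         'TC': (word0 >> 20) & 0xFF,      # Traffic Class: end-to-end class of service
         'FlowID': word0 & 0xFFFFF,       # Flow Label: identifies an end-to-end flow
         'PayLen': paylen,                # payload length in bytes
         'NxtHdr': nxthdr,                # next header
         'HopLmt': hoplim,                # hop limit
         'SGID': grh40[8:24].hex(),       # source GID (IPv6-style global identifier)
         'DGID': grh40[24:40].hex(),      # destination GID (IPv6-style global identifier)
     }

 # Example: a made-up GRH with a 64-byte payload and all-zero GIDs.
 print(parse_grh(bytes.fromhex('60000000004000ff') + bytes(32)))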
In terms of processor efficiency and performance of supercomputer interconnects, InfiniBand ranks as the best according to the Top500 list. While Ethernet remains atop the most-implemented list because small-scale networks need low-cost scalability more than raw speed in an interconnect, InfiniBand is taking a larger portion of the market due to its high-performance specifications.

[[Image:graphjdbraman.jpg]]
Current InfiniBand installations use 4x links. This means four times the base lane rate of 2.5 Gbits/s each, or 10 Gbits/s at full duplex. Also available are 12x links, or 30 Gbits/s.
InfiniBand DDR
In InfiniBand DDR (Double Data Rate), the lane rate is increased to 5 Gbits/s, making a 4x link 20 Gbits/s and a 12x link 60 Gbits/s at full duplex. Quad Data Rate (QDR), demonstrated in 2007 with system production expected in 2008, runs at 10 Gbits/s per lane and can achieve 120 Gbits/s on a 12x link.
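Since a link's signaling rate is simply the lane rate multiplied by the lane count, the figures quoted above can be tabulated in a few lines of Python (the base rate is often called SDR, Single Data Rate); note that these are raw signaling rates, before the 8b/10b encoding overhead is subtracted.

 # Raw signaling rate per lane, in Gbits/s, for each InfiniBand generation.
 LANE_RATE = {'SDR': 2.5, 'DDR': 5.0, 'QDR': 10.0}

 # Supported link widths (number of lanes).
 WIDTHS = (1, 4, 12)

 for gen, rate in LANE_RATE.items():
     for width in WIDTHS:
         print(f'{gen} {width}x: {rate * width:g} Gbits/s per direction')

 # Among others: SDR 4x -> 10 Gbits/s, DDR 4x -> 20 Gbits/s, QDR 12x -> 120 Gbits/s.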
Federation
Myrinet
Myrinet is a platform developed by Myricom for LAN clusters that provides 5 to 10 times lower latency than Ethernet over TCP/IP. It is a 2.5 Gbit/s link that offloads protocol processing from the host processor. Users of Myrinet interconnects include the University of Illinois, Indiana University, the University of Southern California, Vanderbilt University, and Los Alamos National Laboratory.
NUMAlink
XT3 Internal Interconnect
Quadrics
Sources
[1] top500.org interconnect usage (Share %), performance statistics (Rmax Sum)
[2] IEEE 802.3 Ethernet Standard
[3] Ethernet protocol summary on Wikipedia
[4] Myrinet article
Dr. Steve Hunter, NCSU Architecture of Parallel Computers, Lecture 12, 6/12/2006
[5] RTC Magazine, Processor Efficiency chart