About InfiniBand™

InfiniBand™ is an industry-standard specification that defines an input/output architecture used to interconnect servers, communications infrastructure equipment, storage and embedded systems. InfiniBand is a true fabric architecture that leverages switched, point-to-point channels, with data transfers today at up to 120 gigabits per second, both in chassis backplane applications and through external copper and optical fiber connections.

With QDR 40Gb/s shipping today, InfiniBand has a robust roadmap defining increasing speeds. Many IBTA member companies are developing FDR 56Gb/s-enabled products or have recently announced them, with availability in 2011. Please check with individual vendors for product details and see the FDR InfiniBand fact sheet for more information.

The roadmap shows projected demand for even higher bandwidth, with new EDR 100Gb/s products expected beyond 2011.
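
The link speeds quoted on the roadmap follow from the link width (1x, 4x or 12x lanes per port) multiplied by the per-lane rate of each signaling generation. The short C sketch below makes that arithmetic explicit; the per-lane figures are the commonly quoted nominal rates and are an assumption for illustration (actual signaling rates and encoding overheads, 8b/10b for QDR and 64b/66b for FDR/EDR, differ slightly), so the IBTA roadmap remains the authoritative source.

    /* Back-of-the-envelope check of the headline link speeds, using
     * nominal per-lane rates (assumed for illustration). */
    #include <stdio.h>

    int main(void)
    {
        const double qdr_lane = 10.0;  /* Gb/s per lane, QDR */
        const double fdr_lane = 14.0;  /* Gb/s per lane, FDR */
        const double edr_lane = 25.0;  /* Gb/s per lane, EDR */

        printf("4x QDR  (host links)       : %5.0f Gb/s\n",  4 * qdr_lane);  /*  40 */
        printf("12x QDR (switch-to-switch) : %5.0f Gb/s\n", 12 * qdr_lane);  /* 120 */
        printf("4x FDR                     : %5.0f Gb/s\n",  4 * fdr_lane);  /*  56 */
        printf("4x EDR                     : %5.0f Gb/s\n",  4 * edr_lane);  /* 100 */
        return 0;
    }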

InfiniBand is a low-latency, high-bandwidth interconnect that requires low processing overhead and is ideal for carrying multiple traffic types (clustering, communications, storage, management) over a single connection. As a mature and field-proven technology, InfiniBand is used in thousands of data centers, high-performance compute clusters and embedded applications that scale from two nodes up to single clusters interconnecting thousands of nodes. Through the availability of long-haul InfiniBand over WAN technologies, InfiniBand can efficiently move large data sets between data centers around the globe.


Advantages

InfiniBand™ addresses the challenges IT infrastructures face as ever-increasing demands for computing and storage resources place more load on the interconnect. Specifically, InfiniBand has the following advantages:

  • Superior performance: InfiniBand is the only shipping solution that supports 40Gb/s host connectivity and 120Gb/s switch-to-switch links.
  • Low latency: InfiniBand's ultra-low latencies, with measured delays of 1µs end to end, greatly accelerate many data center and high performance computing (HPC) applications.
  • High efficiency: InfiniBand provides direct support of advanced reliable transport protocols such as Remote Direct Memory Access (RDMA) to enhance the efficiency of customer workload processing (a minimal programming sketch follows this list).
  • Cost effectiveness: InfiniBand Host Channel Adapters (HCAs) and switches are very competitively priced and create a compelling price/performance advantage over alternative technologies. 
  • Fabric consolidation and low energy usage: InfiniBand can consolidate networking, clustering, and storage data over a single fabric which significantly lowers the overall power, real estate and management overhead required for servers and storage. 
  • Reliable, stable connections: InfiniBand is perfectly suited to meet the mission-critical needs of today's enterprise by enabling fully redundant and lossless I/O fabrics, with automatic path failover and link layer multi-pathing abilities to meet the highest levels of availability.
  • Data integrity: InfiniBand enables the highest levels of data integrity by performing cyclic redundancy checks (CRCs) at each fabric hop and end to end across the fabric to ensure the data is correctly transferred. 
  • Rich, growing ecosystem: InfiniBand is at the center of an ecosystem that includes open-source software distribution from the OpenFabrics Alliance, innovative and cost-effective cabling, and long-haul solutions that reach outside the data center and across the globe.
  • Highly interoperable environment: InfiniBand compliance testing conducted by the IBTA, combined with interoperability testing conducted by the OpenFabrics Alliance, results in a highly interoperable environment, which benefits end users in terms of product choice and vendor independence.
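
The RDMA capability noted in the list above is exposed to applications through the verbs interface of the OpenFabrics software stack (libibverbs). The minimal C sketch below, which assumes a Linux host with libibverbs and at least one HCA installed, shows the first steps common to RDMA applications: opening a device, allocating a protection domain and registering a buffer so the adapter can read and write it without involving the host CPU. It is an illustration only, not a complete RDMA transfer; a full program would also create completion queues and queue pairs and exchange buffer addresses and keys with its peer.

    /* Minimal libibverbs sketch: open an HCA, allocate a protection domain
     * and register a buffer for remote read/write access. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);
        if (!devices || num_devices == 0) {
            fprintf(stderr, "no InfiniBand devices found\n");
            return 1;
        }

        /* Open the first HCA reported by the verbs library. */
        struct ibv_context *ctx = ibv_open_device(devices[0]);
        if (!ctx) {
            fprintf(stderr, "failed to open device\n");
            return 1;
        }

        /* A protection domain groups resources that may be used together. */
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Register a 4 KB buffer so the HCA can access it directly;
         * this registration is what lets RDMA transfers bypass the CPU. */
        void *buf = malloc(4096);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) {
            fprintf(stderr, "memory registration failed\n");
            return 1;
        }

        /* The rkey would be sent to the remote peer so it can issue
         * RDMA reads and writes targeting this buffer. */
        printf("registered buffer at %p, lkey=0x%x rkey=0x%x\n",
               buf, mr->lkey, mr->rkey);

        /* Tear down in reverse order. */
        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devices);
        return 0;
    }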

InfiniBand Market Momentum

InfiniBand™ is seeing continued growth on the TOP500 list, which serves as a precursor to commercial high performance computing (HPC) adoption. On the November 2010 list, InfiniBand represents more than 43 percent of all systems and is used by almost 46 percent of the TOP500 sites. InfiniBand connects the majority of the top 100 systems (61 percent), the top 200 (58 percent) and the top 300 (51 percent).

The total number of InfiniBand-connected CPU cores on the TOP500 list has grown 65 percent, from 1.4 million in Nov. 2009 to 2.3 million in Nov. 2010. InfiniBand connects the majority of the Petaflop systems in the Top 10, four out of seven.

With 96 percent system efficiency, InfiniBand is the only non-proprietary, open-standard I/O technology that provides the interconnect required to handle supercomputing's high demand for CPU cycles without wasting time on I/O transactions. InfiniBand is the most efficient I/O used for inter-server communication in the TOP500.

Below is a breakdown of the November 2010 TOP500 list as the data relates to InfiniBand.

Performance of the systems on the TOP500 list has increased by a factor of 10 every five years.

The TOP500 performance increase over the years has been driven by the use of cluster systems, which now represent 82.8 percent of the list.

InfiniBand is becoming the preferred HPC cluster interconnect, with 43 percent of the TOP500 list currently using it. Use of InfiniBand on the TOP500 is now approaching that of Gigabit Ethernet.

InfiniBand now enables almost 46 percent of all systems on the TOP500 list, compared to Gigabit Ethernet's 20 percent.