Specification FAQ

The InfiniBand™ specification defines an interconnect technology for servers and storage that changes the way data centers are built, deployed and managed. By creating a centralized I/O fabric, InfiniBand enables greater server and storage performance and design density while creating data center solutions that offer greater reliability and performance scalability. InfiniBand technology is based upon a channel-based, switched fabric, point-to-point architecture. 

What is InfiniBand? 
InfiniBand is an industry standard, channel-based, switched fabric interconnect architecture for server and storage connectivity.
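
For software, the channel semantics defined by the specification are exposed through a verbs interface; in practice most deployments use the OpenFabrics libibverbs library (an assumption about the environment, since the specification defines verbs behavior rather than a concrete API). A minimal sketch that simply lists the InfiniBand devices visible to a host:

    /* Minimal sketch: enumerate InfiniBand devices with the OpenFabrics
     * verbs API. Assumes libibverbs is installed; build with:
     *   gcc list_devices.c -o list_devices -libverbs */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs) {
            perror("ibv_get_device_list");
            return 1;
        }
        for (int i = 0; i < num; i++)
            /* the node GUID is reported in network byte order */
            printf("device %d: %s (node GUID 0x%016llx)\n", i,
                   ibv_get_device_name(devs[i]),
                   (unsigned long long)ibv_get_device_guid(devs[i]));
        ibv_free_device_list(devs);
        return 0;
    }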

Why is InfiniBand important? 
High-performance applications – such as bioscience and drug research, data mining, digital rendering, electronic design automation, fluid dynamics, and weather analysis – require high-performance message passing and I/O to accelerate multiple processes that each require access to large datasets to compute and store results. 

In addition, enterprise applications – such as customer relationship management, database, virtualization, and web services, as well as key vertical markets such as financial services, insurance services, and retail – demand the highest possible performance from their computing systems. 

The combination of 40Gb/s InfiniBand interconnect solutions and servers based on multi-core processors delivers optimum performance to meet these challenges.

What performance range is offered by InfiniBand? 
InfiniBand offers three levels of link performance: 10Gb/s (SDR), 20Gb/s (DDR) and 40Gb/s (QDR). Each of these link speeds also provides low-latency communication within the fabric, enabling higher aggregate throughput than other protocols. This uniquely positions InfiniBand as the ideal I/O interconnect for data centers.
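
As a rough worked illustration (the per-lane signaling rates and 8b/10b encoding overhead below are standard figures for these link generations, not taken from the text above), each advertised speed corresponds to a 4X link of four lanes, and the usable data rate is 80 percent of the signaling rate:

    /* Rough illustration of InfiniBand 4X link arithmetic (SDR/DDR/QDR).
     * Per-lane signaling rates and the 8b/10b encoding overhead are
     * standard figures, stated here as an assumption. */
    #include <stdio.h>

    int main(void)
    {
        const char *gen[]   = { "SDR", "DDR", "QDR" };
        const double lane[] = { 2.5, 5.0, 10.0 };  /* Gb/s per lane, signaling */
        const int lanes = 4;                       /* a 4X link has four lanes */

        for (int i = 0; i < 3; i++) {
            double link = lane[i] * lanes;         /* advertised link rate  */
            double data = link * 8.0 / 10.0;       /* 8b/10b: 80% is payload */
            printf("%s 4X: %4.0f Gb/s signaling, %4.0f Gb/s data\n",
                   gen[i], link, data);
        }
        return 0;
    }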

What will it take to integrate InfiniBand Architecture into a virtualized data center? 
Growth of multi-core CPU-based servers and use of multiple virtual machines are driving the need for more I/O connectivity per physical server. Typical VMware ESX server environments, for example, require use of multiple Gigabit Ethernet NICs and Fibre Channel HBAs. This increases I/O cost, cabling and management complexity. 

InfiniBand I/O virtualization solves these problems by providing unified I/O on the compute server farm, enabling significantly higher LAN and SAN performance from virtual machines. It allows for effective segregation of the compute, LAN and SAN domains to enable independent scaling of resources. The result is a more change-ready virtual infrastructure.

Finally, in VMware ESX environments, the virtual machines, applications and vCenter-based infrastructure management operate on familiar NIC and HBA interfaces, making it easy for the IT manager to take advantage of these benefits with minimal disruption and learning.

InfiniBand optimizes data center productivity in enterprise vertical applications, such as customer relationship management, database, financial services, insurance services, retail, virtualization, cloud computing and web services. InfiniBand-based servers provide data center IT managers with a unique combination of performance and energy-efficiency resulting in a hardware platform that delivers peak productivity, flexibility, scalability and reliability to optimize TCO.

What is the expected demand for InfiniBand-enabled solutions? 
According to IDC research, high performance computing (HPC), scale-out database environments, shared and virtualized I/O, and increasing demand from financial applications with HPC-like characteristics are driving and will continue to drive the rapid adoption of InfiniBand. 

IDC projects the following market growth:

  • IDC expects factory revenue for InfiniBand products, which include host channel adapters (HCAs), to increase at a 29.3 percent compound annual growth rate (CAGR) from $62.3 million in 2006 to $224.7 million in 2011.
  • IDC predicts factory revenue for InfiniBand switches will increase at a 45.2 percent CAGR over the next five years, from $94.9 million in 2006 to over $612 million by 2011.
  • IDC says that although InfiniBand penetration in the market is led mostly by the HPC market, the demand will expand outside this traditional market space into mainstream datacenters with workloads that require attributes provided by InfiniBand.

How are InfiniBand fabrics managed? 
The InfiniBand specification standardizes the fabric management infrastructure. Each InfiniBand subnet is configured and monitored by a Subnet Manager, which discovers the topology, assigns local identifiers (LIDs) to ports and programs the forwarding tables in the switches. InfiniBand fabric management is expected to snap into existing enterprise management solutions.
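
As a small software-side illustration, a host can read the attributes the Subnet Manager has programmed on one of its ports, such as the assigned LID and the port state. The sketch below assumes the OpenFabrics libibverbs library, a single adapter, and port number 1:

    /* Sketch: read attributes assigned to a local port by the Subnet Manager.
     * Assumes libibverbs and at least one InfiniBand adapter; port 1 is used. */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        struct ibv_device **devs = ibv_get_device_list(NULL);
        if (!devs || !devs[0]) {
            fprintf(stderr, "no InfiniBand devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_port_attr port;
        if (!ctx || ibv_query_port(ctx, 1, &port)) {
            fprintf(stderr, "failed to query port 1\n");
            return 1;
        }

        /* The LID is assigned by the Subnet Manager when it sweeps the fabric. */
        printf("port 1: LID 0x%04x, state %s, SM LID 0x%04x\n",
               port.lid,
               port.state == IBV_PORT_ACTIVE ? "ACTIVE" : "not active",
               port.sm_lid);

        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }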

How big is the InfiniBand architecture development effort? 
The InfiniBand Trade Association has grown from 7 companies to more than 40 since its launch in August 1999. Membership is open to any company, government department or academic institution interested in the development of the InfiniBand architecture. To see a list of current trade association members, please visit the member roster.

What is the relationship between InfiniBand and Fibre Channel or Gigabit Ethernet? 
The InfiniBand architecture is complementary to Fibre Channel and Gigabit Ethernet. InfiniBand is uniquely positioned to become the I/O interconnect of choice for data center implementations. Networks such as Ethernet and Fibre Channel are expected to connect into the edge of the InfiniBand fabric and benefit from better access to InfiniBand architecture-enabled compute resources. This will enable IT managers to better balance I/O and processing resources within an InfiniBand fabric. 

What type of cabling does InfiniBand support? 
In addition to a board form factor connection, InfiniBand supports both active and passive copper cabling (up to 30 meters, depending on link speed) and fiber-optic cabling (up to 10km).

How many nodes does InfiniBand support? 
The InfiniBand Architecture is capable of supporting tens of thousands of nodes in a single subnet.
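
The figure follows from the subnet addressing scheme: ports within a subnet are addressed by 16-bit LIDs, most of which are available for unicast addressing. A back-of-the-envelope check, taking the commonly cited range boundaries (unicast 0x0001-0xBFFF, multicast 0xC000-0xFFFE) as an assumption:

    /* Back-of-the-envelope check of the per-subnet address space.
     * The LID range boundaries below are the commonly cited values,
     * stated here as an assumption rather than quoted from the FAQ. */
    #include <stdio.h>

    int main(void)
    {
        const unsigned unicast_lo = 0x0001, unicast_hi = 0xBFFF;
        const unsigned mcast_lo   = 0xC000, mcast_hi   = 0xFFFE;

        printf("unicast LIDs per subnet:   %u\n", unicast_hi - unicast_lo + 1);
        printf("multicast LIDs per subnet: %u\n", mcast_hi - mcast_lo + 1);
        return 0;
    }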