Latest InfiniBand Integrators’ List and RoCE Interoperability List Now Available
After a successful Plugfest 31, the latest InfiniBand Integrators’ List and RoCE Interoperability List are now available for download. The Lists feature InfiniBand and RoCE solutions from IBTA member companies, including Broadcom Limited, Finisar, Fujitsu Component Limited, Intel Corp., Jess-Link Products Co. Ltd., Luxshare-ICT, Mellanox Technologies, Molex, NetApp, Prime World International Holdings Ltd., QLogic – A Cavium Company, TE Connectivity and Volex. Additionally, test vendors Ace Unitech, Anritsu, Keysight Technologies, Molex, Software Forge, Tektronix and Wilder Technologies generously contributed their equipment, software and expertise to the IBTA Plugfest 31.
Access the Lists and additional Plugfest resources below:
Check in on the IBTA Plugfest page frequently for updates on future compliance and interoperability events.
InfiniBand Continues to Lead the TOP500 Supercomputer List
The June 2017 TOP500 List reveals that InfiniBand remains the most widely used High Performance Computing (HPC) interconnect and accelerates the majority of newly listed TOP500 systems. The TOP500 List is published twice a year and ranks the top supercomputers worldwide based on the LINPACK benchmark, providing valuable statistics for tracking trends in system performance and architecture. These results reflect continued industry demand for InfiniBand’s unparalleled combination of network bandwidth, low latency, scalability and efficiency.
Key InfiniBand metrics from the latest TOP500 List include:
- InfiniBand connects 60% of the HPC systems on the List
- InfiniBand connects 56% of the new systems on the List
- InfiniBand connects 48% of the Petascale systems on the List
- EDR 100 Gb/s deployments grew 2.5 times compared to the November 2016 rankings
Read the full IBTA announcement for more information on InfiniBand-powered TOP500 systems and view the complete rankings on the TOP500 website.
InfiniBand Roadmap Outlines Path to 1000 Gb/sec
The IBTA’s InfiniBand Roadmap was developed with the intent to keep the rate of InfiniBand performance increases in line with systems-level performance gains. For those with a stake in the interconnect business, the roadmap offers a vendor-neutral outline for the progression of InfiniBand technology so that they may plan their product development accordingly. For enterprise and HPC end users, it provides specific milestones around expected improvements to ensure their InfiniBand investment is protected. The roadmap has been updated to account for HDR 200 Gb/sec technologies shipping this year and to pave the way toward 1000 Gb/sec server connectivity with XDR.
Read insideHPC's article for more insight on the IBTA’s InfiniBand Roadmap.
Data Center Networks Prepare for AI with InfiniBand
TechTarget’s Erica Mixon explores the storage, networking and compute challenges that data centers will face when implementing AI applications. To best prepare their infrastructures for AI, data center network architects must account for scalability, high bandwidth and low latency – capabilities InfiniBand is well suited to deliver now and into the future.
For a detailed overview of network considerations for data center architectures running AI applications, read TechTarget’s coverage.
When Trying to Achieve Exascale, InfiniBand Delivers Best ROI
Co-design architecture is the latest advancement helping the HPC industry reach Exascale performance. By taking a holistic system-level approach and leveraging system efficiencies, the HPC industry can achieve fundamental performance improvements. According to HPCWire, the high performance, efficiency and scalability advantages that InfiniBand offers will enable this next step on the path towards Exascale.
Read HPCWire's article to discover InfiniBand’s role in reaching Exascale.
New Survey Cements InfiniBand’s Dominance in HPC
A recent report from Intersect360 reveals that InfiniBand remains the dominant protocol for high performance systems, while its share of HPC storage and LAN segments continues to grow. The report also states that most academic, government and even commercial sites are turning to InfiniBand given the value they place on the interconnect’s unparalleled performance.
To read more on Intersect360’s survey results, read HPCWire's article.
Use RDMA if You Want Absolute Best Performance
A recent blog from Microsoft’s Windows and Windows Server storage engineering teams poses the question – to RDMA, or not to RDMA, for Storage Spaces Direct? After comparing RDMA-enabled and RDMA-disabled Ethernet on identical new hardware, the teams found that enabling RDMA significantly boosts performance. RDMA-based technologies like RoCE have a proven record of reducing latency while delivering solid performance, further underscoring the point.
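As a rough illustration (not taken from the Microsoft post itself), on Windows Server an administrator can check whether network adapters are RDMA-capable, and enable RDMA per adapter, with the built-in NetAdapter cmdlets; the adapter name below is a placeholder:

```powershell
# List all adapters and whether RDMA is enabled on each
Get-NetAdapterRdma

# Enable RDMA on a specific adapter ("SLOT 2 Port 1" is a placeholder name)
Enable-NetAdapterRdma -Name "SLOT 2 Port 1"

# Verify that SMB (the transport Storage Spaces Direct uses) sees RDMA-capable interfaces
Get-SmbClientNetworkInterface | Where-Object RdmaCapable
```

This sketch assumes a RoCE- or iWARP-capable NIC with vendor drivers installed; RoCE deployments typically also require Priority Flow Control to be configured on the adapters and switches.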
For more information on the benefits of using RDMA, read Storage at Microsoft’s blog.
Cavium Announces New FastLinQ® 41000 Series of Ethernet NICs
Cavium’s FastLinQ 41000 Series is a low-power, second-generation family of 10/25/40/50GbE NICs, delivering advanced networking for cloud and telco architectures by leveraging over a decade of technology expertise. With support for RoCE, Cavium positions the QL41000 family as the industry’s only NIC adapter family with Universal RDMA, delivering the technology choice and investment protection that customers need.
For more on the news, check out the official press release from Cavium.
HPE Synergy Platform to Use New Mellanox Ethernet Switch ASIC
Mellanox Technologies announced its Spectrum™ Ethernet switch ASIC will power the first HPE Synergy Switch Module supporting native 25, 50, and 100 Gb/s Ethernet connectivity. Leveraging RoCE, the new solution enables efficiency and scalability, extreme computing, real-time response, I/O consolidation and power savings for Cloud, Web 2.0, Data Analytics, Database, and Storage Platforms. The RoCE-powered solutions also support a broad range of enterprise application environments including financial services, government and education, industrial design and manufacturing, life sciences, and web serving and collaboration.
Read Mellanox's official announcement here.
Mellanox InfiniBand and HPE Solutions Power New NASA Supercomputer
NASA Ames Research Center selected Mellanox EDR InfiniBand solutions and the new HPE SGI 8600 liquid-cooled platform to expand its Electra supercomputing cluster with next-generation interconnect and processor technology. This expansion will enable smart offloading, helping the system achieve the highest levels of application performance and efficiency.
Read Mellanox's full announcement for more details.
Flash Memory Summit
August 7-10, 2017
Flash Memory Summit in Santa Clara, CA, is the world’s largest conference dedicated to flash memory technology, showcasing flash-based storage solutions from leading suppliers revolutionizing every aspect of the data center. Through keynotes, technical sessions and exhibit floor booths, attendees will learn about the latest flash trends and developments driving the industry.
Stop by the “Beer, Pizza and Chat with the Experts” session on Tuesday, August 8, from 7:30 to 9 p.m., where IBTA representatives will host a table on RDMA interconnects. This informal event gives attendees a chance to sit and “talk shop” with a diverse group of experts on the latest trends happening within the flash industry.
For information on the event, visit the website.
For questions or comments, please contact email@example.com.