InfiniBand interconnect infrastructure for scalable data centers

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
High-performance computing, big data, Web 2.0 and search applications depend on managing, understanding and responding to massive amounts of user-generated data in real time. With more users feeding more applications and platforms, the data is no longer growing arithmetically; it is growing exponentially. To keep up, data centers need to grow as well, both in data capacity and in the speed at which data can be accessed and analyzed.
Scalable data centers today consist of parallel infrastructures, both in the hardware configurations (clusters of compute and storage) and in the software configuration (for example Hadoop), and require the most scalable, energy-efficient, high-performing interconnect infrastructure: InfiniBand.
While Ethernet is used widely in data centers, it must remain backward compatible with decades' worth of legacy equipment, and its architecture is layered: top of rack, aggregation and core. While this is a suitable match for a dedicated data center, it is more of a challenge for a fast-growing, scalable compute infrastructure.
InfiniBand was first used in the high-performance computing arena due to its performance and agility. But it isn't just InfiniBand's extremely low latency, high throughput and efficient transport (which requires little CPU power) that has made it the obvious choice for scalable data centers. Rather, it's InfiniBand's ability to build flat networks of virtually unlimited size from the same switch components, its guarantee of lossless, reliable delivery of data, and its congestion management and support for shallow buffers.
The basic building blocks of the InfiniBand network are the switches (ranging from 36 to 648 ports in a single enclosure) and gateways from InfiniBand to Ethernet (10G or 40G). The InfiniBand switch fabric runs at 56Gbps, allowing flexible configurations and oversubscription in cases where the throughput to the server can be lower. The InfiniBand fabric and the applications that run on top of InfiniBand adapters are managed the same way we manage Ethernet fabrics and applications running on Ethernet NICs.
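The port counts above are consistent with a two-tier fat-tree (Clos) topology: non-blocking two-tier networks built from n-port switch elements support n²/2 host-facing ports, which is how 36-port switch silicon scales to a 648-port enclosure. A minimal sketch of that arithmetic (the formula is the standard fat-tree result, used here for illustration, not vendor documentation):

```python
def two_tier_fat_tree_ports(n: int) -> int:
    """Host ports of a non-blocking two-tier fat tree built from n-port switches.

    Each of up to n leaf switches dedicates n/2 ports to hosts and n/2 uplinks
    to n/2 spine switches, giving n * n/2 = n^2/2 host-facing ports in total.
    """
    return n * (n // 2)

# 36-port switch elements -> 648 host ports, matching the enclosure size above.
print(two_tier_fat_tree_ports(36))
```

The same formula explains why larger switch radices pay off so quickly: doubling the element's port count quadruples the size of the flat network it can build.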
InfiniBand is a lossless fabric that does not suffer from the spanning tree problems of Ethernet. Scaling is made easy through the ability to add simple switch elements and grow the network to 40,000 server and storage endpoints in a single subnet and to 2^128 (~3.4e+38) endpoints in a full fabric. InfiniBand adapters consume extremely little power, less than 0.1 watt per gigabit, and InfiniBand switches less than 0.03 watts per gigabit. [Also see: “Figuring out the data center fabric maze”]
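Those efficiency and scale figures are easy to sanity-check. A back-of-envelope calculation (the 56Gbps line rate is taken from the fabric speed above; the per-gigabit wattages are the quoted upper bounds, so the results are worst-case estimates):

```python
# Quoted upper bounds from the text: adapters < 0.1 W/Gbit, switches < 0.03 W/Gbit.
ADAPTER_W_PER_GBIT = 0.1
SWITCH_W_PER_GBIT = 0.03
LINE_RATE_GBPS = 56  # fabric speed cited above

adapter_watts = ADAPTER_W_PER_GBIT * LINE_RATE_GBPS     # ~5.6 W per adapter
switch_port_watts = SWITCH_W_PER_GBIT * LINE_RATE_GBPS  # ~1.68 W per switch port

# Address space: ~40,000 endpoints in one subnet, 2^128 across the full fabric.
FULL_FABRIC_ENDPOINTS = 2 ** 128  # ~3.4e+38

print(adapter_watts, switch_port_watts, FULL_FABRIC_ENDPOINTS)
```

Even at full 56Gbps line rate, the per-port power budget stays in single-digit watts, which is what makes the energy-efficiency claim concrete.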
Because InfiniBand competes with Ethernet, its pricing has remained competitive, and the higher throughput enables the lowest cost per endpoint.
10X performance improvement, 50% capex reduction
The combination of sub-1-microsecond latency, 56Gbps throughput, Remote Direct Memory Access (RDMA), transport offload, and lossless, congestion-free operation enables InfiniBand users to dramatically increase their application performance and reduce their capital and operational expenses.
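As a rough model of what those numbers mean for a single message, one-way delivery time can be approximated as fixed latency plus serialization time at line rate. A hedged sketch (the 1-microsecond latency and 56Gbps rate come from the text; the additive model is an illustrative assumption that ignores switch hops and protocol overhead):

```python
def transfer_time_us(payload_bytes: int,
                     latency_us: float = 1.0,
                     line_rate_gbps: float = 56.0) -> float:
    """Approximate one-way transfer time: fixed latency plus serialization."""
    bits = payload_bytes * 8
    bits_per_us = line_rate_gbps * 1000  # 1 Gbps = 1,000 bits per microsecond
    return latency_us + bits / bits_per_us

# A 4KB message serializes in ~0.6us at 56Gbps, so for small transfers
# the fabric's sub-microsecond latency dominates end-to-end time.
print(round(transfer_time_us(4096), 2))
```

This is why latency, not raw bandwidth, is the headline figure for transactional and in-memory workloads: below a few kilobytes, halving latency helps more than doubling throughput.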
Oracle, for example, jumped on the InfiniBand train a few years ago and has built database, cloud, in-memory and storage solutions based on InfiniBand. That decision enabled it to deliver a 10X or greater performance improvement for its users.