CIOs and IT organizations worldwide are under tremendous pressure to keep up with the demands of business for more services, applications and analytics aimed at sharpening competitive edge. Many have sought new ways of architecting and operating their data center infrastructure.
To address organizations’ need to boost network performance, efficiency, scalability and security cost-effectively, NetworkWorld Asia and Mellanox Technologies jointly held an executive briefing for a select group of business and IT leaders from a wide range of sectors, including banking and financial services, pharmaceuticals, hospitality, government, marine, e-commerce, technology and environment services.
Marc Sultzbaugh, senior vice president of Worldwide Sales at Mellanox Technologies (pictured below, left), acknowledged how growing demands are driving IT executives’ decisions around data center infrastructure. For instance, as organizations move from hard disk drives (HDDs) to solid state disks (SSDs), far more bandwidth, at far lower latency, is needed to feed that storage. Similarly, as organizations move from dual-socket, six-core CPUs to dual-socket, 24-core CPUs, server I/O requirements increase greatly. Further, applications today run across multiple servers that need to communicate with each other.
“The point is that your biggest investments inside the data center are typically the CPU and the memory associated with that,” Sultzbaugh told the executives during the session, which was moderated by Victor Ng, editorial director of the Enterprise Group at Questex Media, publisher of NetworkWorld Asia. “The biggest impact that you’ll have on the efficiency of that investment will be from the network performance. That’s the biggest revolution that has happened with the hyperscale web.”
One thing that Mellanox has learned from its years of experience in the high-performance computing space is that the network or the physical layer can oftentimes be the weakest link in creating efficient, reliable and robust infrastructure. Over the years, the company has evolved in tandem with its largest customers – the hyperscale web and cloud companies – because their infrastructure is very supercomputer-like. “It’s really about scale-out, direct-attached storage, very high performance and squeezing every ounce of efficiency that they can out of their infrastructure dollars,” said Sultzbaugh.
Offloading to the network
When the data center environment is virtualized and big data analytics is added, faster and more efficient pipes are required to reduce latency as traffic patterns change. “The way we’ve done that is to take network functions and offload them from the general x86 processor into the network,” Sultzbaugh explained in response to a participant’s question. “So, for things like traditional TCP/IP, there are other ways of moving data across the Ethernet network today like Remote Direct Memory Access (RDMA), which allows the network to move data without going through the software stack and without interrupts through the CPU.”
The outcome is a two-fold improvement. Organizations not only increase the capacity of their network but also free up cycles in their most precious resource, the CPU, to run more virtual machines (VMs), analytics, applications and databases. For instance, instead of getting 1Gbps of performance out of a network stack that consumes 15-20% of CPU cycles, organizations can achieve 10, 25 and 50Gbps with less than 10% of the CPU cycles.
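The arithmetic behind that claim can be sketched roughly. The sketch below reuses the percentages quoted above; the "throughput per CPU-percent" metric is our own illustrative measure, not a Mellanox benchmark:

```python
# Rough illustration of the CPU-efficiency point above, using the
# figures quoted in the text. "Throughput per CPU-percent" is an
# illustrative metric for comparison, not a vendor benchmark.

def throughput_per_cpu_pct(gbps, cpu_pct):
    """Gbps delivered per percentage point of CPU consumed."""
    return gbps / cpu_pct

# Traditional software TCP/IP stack: ~1 Gbps at 15-20% CPU (worst case here).
software_stack = throughput_per_cpu_pct(1, 20)   # 0.05 Gbps per CPU %

# Offloaded/RDMA path: 25 Gbps at under 10% CPU (10% assumed here).
offloaded = throughput_per_cpu_pct(25, 10)       # 2.5 Gbps per CPU %

print(f"Efficiency gain: {offloaded / software_stack:.0f}x")  # 50x
```

Even with conservative inputs, the per-cycle efficiency gap is an order of magnitude or more, which is why the freed-up cycles translate directly into more VMs and applications per server.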
This efficiency is significant as data centers move beyond virtualizing servers to converged and hyperconverged infrastructure, breaking down the silos of compute, storage and network and transforming the way IT is procured, architected and operated.
Sultzbaugh specifically addressed participants’ concerns around integration of hyperconverged systems into existing infrastructure by pointing out that instead of a rip-and-replace approach, they can start within a rack, making sure they have the right solutions and a scalable environment.
To this end, organizations have moved workloads from traditional symmetric multiprocessor systems to pizza-box-sized machines running Linux in the server rack. In enterprise storage environments, Sultzbaugh expects the NVMe (Non-Volatile Memory Express) standard to be a key driver. NVMe allows SSDs to make effective use of a computer’s high-speed Peripheral Component Interconnect Express (PCIe) bus. “NVMe is driven by Intel and others in the industry, and it has completely rewritten the storage software stack so that you have a much more efficient software as well as SSDs,” he said.
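One reason the rewritten NVMe stack is so much leaner can be sketched with the command-queueing limits of the two interface specifications. The queue figures below come from the AHCI (legacy SATA) and NVMe specifications; the side-by-side comparison is our illustration, not part of the briefing:

```python
# Command-queueing limits from the AHCI (SATA) and NVMe specifications.
# These numbers illustrate why NVMe suits PCIe SSDs; the comparison
# itself is ours, not from the briefing.

ahci_queues, ahci_depth = 1, 32            # AHCI: one queue, 32 commands deep
nvme_queues, nvme_depth = 65_535, 65_536   # NVMe: up to ~64K queues x 64K commands

ahci_outstanding = ahci_queues * ahci_depth
nvme_outstanding = nvme_queues * nvme_depth

print(f"AHCI max outstanding commands: {ahci_outstanding}")
print(f"NVMe max outstanding commands: {nvme_outstanding:,}")
```

The per-core queues also let NVMe avoid the lock contention of a single shared queue, which is where much of the software-stack efficiency comes from.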
Flexible, open networks
On the network side, Mellanox now runs Linux on its switches so organizations can use third-party network tools. “Imagine that now you can mix and match multi-vendor switches or open an RFQ for network switches independent of the OS,” Sultzbaugh suggested. “The same thing that has happened in the server and storage will happen in the network space.”
Mellanox is a founding member of the Open Ethernet initiative, which essentially enables organizations to run any operating system, hypervisor or other software on the network appliance of choice. By disaggregating the decisions organizations make on hardware from software, organizations can use third-party network operating systems to run both HP and Dell Ethernet switches, for instance, by leveraging common APIs.
“At some point, it may be a standard Linux distribution that runs your network just like it runs your servers,” Sultzbaugh added. “That has a lot of implications about not only your ability to have heterogeneous networking solutions but how you support and do DevOps on your networks. So, you don’t have to have Cisco Certified Internetwork Experts that know every little command line to deploy and configure your network.”
Sultzbaugh concluded the session by stressing that there is a better way to build data center infrastructure, one that is significantly more cost-effective and delivers better overall performance than solutions from traditional vendors that seek to maintain the status quo rather than innovate.
Mellanox is actively transforming its business model from being an OEM supplier to large companies such as EMC and Oracle into branding its own products for the enterprise data center. It aims to alleviate vendor lock-in, compete on price-performance at the hardware level, and offer choices in the software organizations use.