Network blind spots have become a costly and risky challenge as enterprises deploy 10G and higher-speed networks. Existing monitoring systems struggle to keep up with traffic or to filter out data "noise" at rates they were never designed to handle.
Virtualized data centers that have relied on physical network test access points (TAPs) or switched port analyzer (SPAN) port mirroring to monitor application performance find they are losing visibility into virtualized traffic. Unlike traffic on physical networks, data in virtualized environments may never traverse a physical switch; it may remain entirely within a single physical host.
Virtual machine (VM)-to-VM traffic, for example, passes from the virtual adapter to the virtual switch and back again without ever exiting the physical host, creating a network blind spot. A multi-tiered application may run all of its components on the same server, and IT teams often cannot observe or analyze communications among the database, middleware and web front end.
As a result, IT administrators cannot pinpoint where a performance problem lies in a virtualized application. Nor can they prove regulatory compliance for the application or establish where responsibility lies in large implementations that involve a virtualization team as well as application, network and security teams.
Further, security and performance tools and services, including application performance monitors, protocol analyzers and data loss prevention tools, rely on packet-level data for accurate analysis and troubleshooting.
To maintain network health and ensure a high-quality user experience, network operators need end-to-end visibility. CIOs must eliminate these blind spots and support business agility by providing actionable insights into performance degradations and potential security issues for rapidly growing complex networks.
Throwing more monitoring and security tools at the visibility problem may not be the answer, especially if the way they are implemented is problematic.
Virtualization vendors and third parties have begun providing visibility into VM traffic so existing monitoring technologies can see both the physical and virtual infrastructure. For example, the port-mirroring feature of VMware's vSphere Distributed Switch sends a copy of network packets out of the virtual environment, while the Cisco Nexus 1000V also provides packet-level data from virtual implementations.
That’s where a solution like the Ixia Net Tool Optimizer (NTO) network packet broker (NPB) enables existing tools to monitor and troubleshoot across both the virtual and physical aspects of the data center. Acting as an aggregation and filtering platform, the NPB strategically offloads much of the packet processing requirements from current monitoring tools and gets the right information to the right tools at the right time for analysis.
“Enterprises need to architect visibility into the network from the very beginning,” says Michael Scheppke, senior director of Sales at Ixia. “This requires new strategies and new thought processes. The goal is to build a more systematic means for harvesting packet streams (usually port mirroring, physical taps and inline bypass switches) and leveraging them out for multiple purposes (usually through the use of NPBs).”
Such an architecture integrates virtual and physical monitoring solutions and provides a way to implement fail-safe inline security. Its building blocks include physical and virtual TAPs; bypass switches that maintain connectivity when an inline security tool goes down; and full-featured NPBs that address visibility needs from single-point solutions to large-scale deployments.
With this set of visibility tools or visibility architecture in place, IT teams control network traffic with high precision, be it filtering out packets, load-balancing traffic to monitoring tools, aggregating packets from multiple sources, sending packets from the same source to two different places, or replicating or de-duplicating packets, say officials at Ixia.
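The traffic-handling operations listed above can be illustrated with a small Python sketch, assuming a simplified packet model. All names here are hypothetical; a real NPB performs filtering, de-duplication and replication in hardware at line rate, but the logic is the same.

```python
import hashlib

# Toy model of three NPB operations: filter packets by protocol,
# de-duplicate copies of the same packet (e.g. captured by multiple
# taps), and replicate the survivors to every attached tool port.
# The function and field names are illustrative, not a real API.
def npb_process(packets, tool_ports, allowed_protocols):
    seen = set()                                   # digests already forwarded
    delivered = {port: [] for port in tool_ports}
    for pkt in packets:                            # pkt: {"proto": str, "payload": bytes}
        if pkt["proto"] not in allowed_protocols:
            continue                               # filter out uninteresting traffic
        digest = hashlib.sha256(pkt["payload"]).digest()
        if digest in seen:
            continue                               # drop duplicate copies
        seen.add(digest)
        for port in tool_ports:                    # replicate to each tool port
            delivered[port].append(pkt)
    return delivered

packets = [
    {"proto": "HTTP", "payload": b"GET /"},
    {"proto": "HTTP", "payload": b"GET /"},        # duplicate seen by a second tap
    {"proto": "SSH",  "payload": b"banner"},       # not in the allowed set
]
out = npb_process(packets, ["apm", "ids"], {"HTTP", "DNS"})
print(len(out["apm"]), len(out["ids"]))            # 1 1
```

Each monitoring tool receives exactly one clean copy of the interesting traffic, which is the "right information to the right tools" behavior the NPB provides.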
For instance, network traffic speeds can be downshifted so 1G/10G tools can monitor 10G/40G networks. Clearly, this approach:
- Extends the life of IT tools investments and maximizes the usefulness of current tool capacity
- Integrates into automated and software-defined data centers
- Provides network visibility that is scalable to match traffic and business growth
This allows network and security professionals to view the same network data and collaborate effectively in solving or preventing problems. Businesses that implement this approach stand to improve the ROI of their monitoring tools and the productivity of their IT staff through cost reduction, cost avoidance and revenue generation.
For example, an NPB can eliminate the need for additional network equipment purchases, especially for monitoring tools in networks that have been upgraded to 10, 40 or 100 GbE. The load-balancing feature of an NPB distributes packets across monitoring tools that run at lower data rates than the network's line rate, offering a way to delay the purchase of higher-rate tools.
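One common way such load balancing works is flow-aware hashing, sketched below in Python under simplified assumptions (the tuple fields and tool count are illustrative). Hashing a flow identifier ensures every packet of one conversation lands on the same lower-rate tool, so each tool sees complete conversations rather than random fragments.

```python
import zlib

# Sketch of flow-aware load balancing: hash the flow's addressing
# tuple so all packets of one conversation map to the same
# monitoring tool. Field names and tool count are illustrative.
def tool_for_flow(src_ip, dst_ip, src_port, dst_port, n_tools):
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % n_tools

# Repeated packets of the same flow always choose the same tool,
# letting four 10G tools divide the traffic of a 40G link.
a = tool_for_flow("10.0.0.1", "10.0.0.2", 51000, 443, 4)
b = tool_for_flow("10.0.0.1", "10.0.0.2", 51000, 443, 4)
print(a == b, 0 <= a < 4)   # True True
```

The design choice matters: a per-packet round-robin would spread one conversation across several tools, breaking per-session analysis, while flow hashing keeps each session intact on a single tool.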
Data gathered from the filters set up in an NPB can be analyzed and used to improve the network, reduce or eliminate service level agreement penalties as applicable, and lower any associated costs, Ixia officials say.