Posted by: Andy Huckridge | June 19, 2012

Blueprint: Network Architecture


Introduction

It is well understood that placing an aggregator between a number of network links and an analytic tools layer can be valuable: it extends coverage across your network links, helps tools run at full capacity, and centralizes traffic from disparate network segments into a NOC. Aggregators allow a greater number of network segments to be monitored, increasing the utilization of your tools and widening network coverage as a whole. Network problems can be found more quickly, facilitating faster troubleshooting and resulting in greater network uptime and less churn.

 

But there’s an oft-overlooked side effect: the loss of Link Layer Visibility. Critical information about the nature of the data you’re trying to monitor and analyze can no longer be seen. You’ve lost information such as which port a packet came from, when it arrived, the underlying characteristics of the network link, and the nature of the traffic on it. In fact, a whole wealth of information disappears along with Link Layer Visibility, including the associated meta-data. What are the real network ramifications? You no longer know which network segment your tool is viewing, and you lose visibility into the meta-data about the packet itself, including:


• Delay

• Jitter

• Latency

• Packet loss

• Malformed packet count

• Duplicated packet count

• Fragmented packet count
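To make concrete why this meta-data matters, here is a minimal sketch (using hypothetical capture records, not any real tap’s format) of how packet loss and jitter can only be derived when per-packet arrival times and sequence information survive aggregation:

```python
# Hypothetical capture records: (ingress_port, sequence_number, arrival_time_s).
# If the aggregator discards arrival timestamps, none of this is computable.
records = [
    (1, 100, 0.000),
    (1, 101, 0.021),
    (1, 103, 0.044),  # sequence 102 never arrived
    (1, 104, 0.060),
]

# Packet loss: sequence numbers expected but never seen.
expected = range(records[0][1], records[-1][1] + 1)
seen = {seq for _, seq, _ in records}
loss = sorted(set(expected) - seen)

# Simple peak-to-peak jitter: spread of the inter-arrival gaps.
gaps = [t2 - t1 for (_, _, t1), (_, _, t2) in zip(records, records[1:])]
jitter = max(gaps) - min(gaps)

print(f"lost sequence numbers: {loss}")
print(f"jitter: {jitter * 1000:.1f} ms")
```

Running the sketch reports sequence 102 as lost and roughly 7 ms of peak-to-peak jitter; with timestamps stripped at the aggregator, both figures are simply unobtainable downstream.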

 

The problem extends all the way up to the Tool layer

By introducing an aggregator that doesn’t preserve Link Layer Visibility, tool results become incorrect, streams are discarded, and packets are lost because the aggregator introduces collisions at the ingress under bursty traffic. In fact, it’s worse than that. Sessions are now mixed together, and your analytic tool may be unable to differentiate one session from another, that is, packets arriving on one port from packets arriving on a different port. Tool results are often wrong, leading to a loss of accuracy and to incorrect decisions, since there may not be complete traffic flows to analyze; precision decreases as well, with the same test giving different results run after run. In short, your tool does more harm than good. Tools also become less efficient because, with no filtering, they are overloaded on the front end by too much traffic. Indeed, tools have to work harder to detect and remove duplicated packets, and to reassemble fragmented packets where packet size boundaries have been exceeded.
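The session-mixing problem is easy to illustrate with a sketch (hypothetical packets; the scenario assumes two monitored segments that both use overlapping private addressing, which is common in practice):

```python
# Two segments both numbered out of 10.0.0.0/8. Once the aggregator drops
# the ingress port, the two distinct sessions share one flow key.
packets = [
    # (ingress_port, src_ip, dst_ip, src_port, dst_port)
    ("portA", "10.0.0.5", "10.0.0.9", 49152, 80),
    ("portB", "10.0.0.5", "10.0.0.9", 49152, 80),  # a different segment!
]

# Flow keys as a tool would build them, with and without the port stamp.
flows_without_port = {(s, d, sp, dp) for _, s, d, sp, dp in packets}
flows_with_port = {(p, s, d, sp, dp) for p, s, d, sp, dp in packets}

print(len(flows_without_port))  # 1: two sessions merged into one flow
print(len(flows_with_port))     # 2: sessions kept distinct
```

With the ingress port preserved, the tool sees two flows; without it, two unrelated sessions collapse into one, which is exactly how accuracy is lost.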

 

The solution

There’s nothing wrong with adding an aggregator, or with aggregation as such. But the next time you need one, take a look at the VSS Monitoring line of intelligent network taps. Features for preserving Link Layer Visibility include:


• Port & time stamping

• Filtering

• Packet counters

• Microburst detection and mitigation

• Aggregation

• High availability / high resiliency

• Session-aware load balancing
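To give a feel for the last feature, here is a minimal sketch of session-aware load balancing (an assumption about the general technique, not VSS’s actual algorithm): hashing the flow 5-tuple so that every packet of a session lands on the same tool, avoiding the session-mixing problem described earlier.

```python
import zlib

# Hypothetical pool of downstream analytic tools.
TOOLS = ["tool0", "tool1", "tool2"]

def pick_tool(src, dst, sport, dport, proto="tcp"):
    """Hash the flow 5-tuple to choose a tool deterministically."""
    key = f"{src}|{dst}|{sport}|{dport}|{proto}".encode()
    return TOOLS[zlib.crc32(key) % len(TOOLS)]

# Every packet of the same session maps to the same tool:
first = pick_tool("10.0.0.5", "192.0.2.7", 49152, 443)
repeat = pick_tool("10.0.0.5", "192.0.2.7", 49152, 443)
print(first == repeat)  # True
```

Because the mapping is a pure function of the 5-tuple, a session is never split across tools, so each tool always sees complete flows to analyze.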

Combine that with the industry’s only true mesh deployment architecture, as opposed to the far less reliable hub-and-spoke approach, and you not only get to see multiple network segments with the same tool or with several different tools, you also get far more resilient monitoring: a network-wide view that self-learns, self-heals and never loses a packet!

 

Recovering a lost situation

By installing a VSS solution on top of your existing aggregator, you can reverse many of the problems described above, as well as gain an optimized tools layer, deferring new investment and preserving existing investments. You also increase the accuracy and precision of tool results, as the data stream is optimized through packet de-duplication, packet fragment reassembly and filtering. By surrounding the aggregator with a VSS Monitoring traffic capture layer, you preserve the ingress port and time as well as the associated meta-data.
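As an illustration of the de-duplication step (a sketch of the general mechanism, not VSS’s implementation), a capture layer might drop any packet whose bytes it has already seen, for example duplicates produced when the same traffic is mirrored from two SPAN ports:

```python
import hashlib

def dedupe(packets):
    """Drop packets whose exact bytes have already been seen."""
    seen = set()
    out = []
    for pkt in packets:
        digest = hashlib.sha256(pkt).digest()
        if digest not in seen:
            seen.add(digest)
            out.append(pkt)
    return out

# Hypothetical stream with one duplicated packet:
stream = [b"GET /a", b"GET /a", b"GET /b"]
print(dedupe(stream))  # [b'GET /a', b'GET /b']
```

A production tap would bound the `seen` set with a time window rather than keep it forever, but the principle is the same: the tool downstream no longer wastes cycles detecting duplicates itself.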

 

 

Conclusion

Aggregators are good at aggregating, but little else. A dedicated traffic access layer combined with Network Intelligence Optimization features delivers the highest uptime and reliability, with the most efficient tools layer and optimized traffic for service monitoring. Only by preserving Link Layer Visibility can you guarantee you’ll be able to find and process the packet that will get the network back up again. An aggregator that doesn’t preserve Link Layer Visibility helps conceal the problem that is keeping your network down, hiding the packet at issue and effectively impeding a resolution.
