On the Testing Edge was a column I wrote for TMC’s IMS Magazine during 2008. It featured education, thought leadership and news relevant to the Test & Measurement industry. Interesting to note that now, in 2012, IMS is actually coming back to life! Hence the re-posting of these earlier articles…

IMS Magazine

Oct/Nov 2008 | Volume 3/Number 5

By Andy Huckridge, M.Sc.

The Chrome browser from Google has been on everyone’s lips recently. But what does it tell us from a testing perspective? In short, it tells us that software quality is now a key marketing tool. In a highly contested marketplace, where everyone knows what bad quality means, a product that simply works as promised can change the landscape overnight, whether or not Google Chrome itself ultimately succeeds.

Have you had your browser die on you while filling in various questionnaires and other overly complex web content? Have you felt the sluggishness of the software when trying to multitask across several different sites? Still, everyone knows the web and its problems, and maybe even takes them for granted. But have you had bad experiences with other consumer products? VoIP, 3G, IMS, Digital Television? Most of you probably have.

Consumers Work Like a Herd
Testing has many faces, and if one of those aspects is missed, the end result will not be accepted by the marketplace. Consumers are increasingly well informed about the quality of products, and user experiences are shared openly through the Internet. For example, before buying a car, people first search the Internet for common failures in the brand and study carefully the opinions of other people. Not about the features, but about past problems or the lack of them. Such brand damage is difficult to repair, as the Internet never forgets a thing.

User Experience
People buy products with solid brands, and advertise those products openly. Critique is usually open as well. But the selection criteria change over time. The IMS quality assurance market is still caught up in legacy criteria such as quality of service and performance, which might well be the top criteria for carriers and service providers. Consumers, however, see the same issues through different eyes. The main selection criteria are almost always brand and reputation, and those are built from usability and reliability. In short: the overall quality of the product.

Test More with Less
But how do you keep up with the increasing demands of the consumers? How do you keep the brand untarnished? The solution is test automation.

Unit testing today is mostly automated. Almost every testing professional is also a programmer, fluently writing test scripts in a wide range of scripting languages. Test automation frameworks bind them together and automate the early testing steps. User interfaces, too, are explored automatically to try various test cases, including recording and reproducing common use cases. Think of these frameworks as cheap test engineers – teach them once and they will do the same thing over and over again.

A recent addition to most professional test automation frameworks is fuzzing, a negative testing approach that explores unexpected inputs to the software in order to find and eliminate security issues. In its marketing material, Google described the tests done by fuzzing tools as monkey testing: random inputs thrown at various APIs and network interfaces.
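
To make the idea concrete, here is a minimal sketch of that style of negative testing in Python. The parse_header function is a hypothetical stand-in for whatever API or protocol parser is being exercised, and a real fuzzer would be far more systematic about how it generates and mutates inputs.

```python
import random

def random_bytes(max_len=512):
    """Generate a byte string of unpredictable length and content."""
    return bytes(random.randint(0, 255) for _ in range(random.randint(0, max_len)))

def fuzz(target, allowed=(ValueError,), iterations=10_000):
    """Feed random inputs to `target`. A clean, expected rejection (an exception
    type listed in `allowed`) is fine; anything else is a potential robustness bug."""
    failures = []
    for _ in range(iterations):
        payload = random_bytes()
        try:
            target(payload)
        except allowed:
            pass                      # graceful rejection of bad input
        except Exception as exc:      # unexpected crash: record the input and error
            failures.append((payload, exc))
    return failures

def parse_header(data: bytes):
    """Hypothetical function under test: parses a 'Name: value' header line."""
    text = data.decode("utf-8")       # random bytes quickly show this can blow up
    name, sep, value = text.partition(":")
    if not sep or not name.strip():
        raise ValueError("malformed header")
    return name.strip(), value.strip()

if __name__ == "__main__":
    crashes = fuzz(parse_header, iterations=1_000)
    print(f"{len(crashes)} random inputs triggered unexpected exceptions")
```

Even this naive loop will usually surface the unhandled UnicodeDecodeError within a few hundred inputs, which is exactly the class of surprise fuzzing is meant to find before an attacker does.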

Collaborate
Testing at telecom companies has been dominated by large test vendors that do it all in a single piece of test equipment. Today those companies still dominate carrier testing. But when it comes to testing consumer products, the field is completely different. Client-side testing is far ahead of core-network testing in the area of test automation.

Thanks to the availability of test automation tools, testing today is simpler and faster. But test automation often involves a number of different tools and test tool vendors. Collaboration between those vendors is key to good-quality products. Various user environments and communication technologies require different tools, and very rarely will you find one vendor that can offer everything by itself. But that just lets us testing experts pick and choose the best products, the ones that fit our own special needs.

Andy Huckridge is Vice President, Marketing, Codenomicon. Andy has worked in the Silicon Valley telecommunications industry for more than a decade and has a broad background in defining and marketing products for the semiconductor, VoIP and IMS/NGN space. Andy is active in various Forums including the Multi-Service Forum, where he is chairperson of the Interoperability Working Group & NGN Certification Committee. Andy is a VoIP patent holder, an IETF RFC co-author and inaugural member of the “Top 100 Voices of IP Communications” list. He holds Bachelor’s and Master’s degrees in Telecommunication Engineering from the University of Surrey, England.

Posted by: Andy Huckridge | February 14, 2012

The Product Development Life Cycle

On the Testing Edge was a column I wrote for TMC’s IMS Magazine during 2008. It featured education, thought leadership and news relevant to the Test & Measurement industry. Interesting to note that now, in 2012, IMS is actually coming back to life! Hence the re-posting of these earlier articles…

IMS Magazine

By Andy Huckridge, M.Sc.

In last month’s edition of this column, I covered the vocabulary of testing and the main test methodologies. This month we’re going to cover the testing involved during the Product Development Life Cycle, which will enable us to look at test strategies for specific NGN Network Elements (NEs) in future editions of this column.

Role of Standards Bodies
To say it all starts here would be an understatement. A good test equipment vendor will be avidly involved in the telecom standards process (ITU, ETSI, ANSI, IETF, etc.) in order to provide the most up-to-date tools to Network Equipment Manufacturers (NEMs). By the same token, an NEM could get left behind if it were not represented there. Although the standards process is often long, seemingly complicated and old fashioned, the finished product — a standard — is well worth it.

Role of Industry Fora/Advocacy Groups
Within the IMS / NGN / Service Oriented Architecture (SOA) space there are many such groups — SIP Forum, Multiservice Forum (MSF), IMS/NGN Forum and IP Sphere to name but a few — with new groups appearing all the time. These groups normally advocate a specific awareness or implementation to suit the needs of their members. We often see the first signs of public testing efforts from within these groups. The IMS/NGN Forum holds Plug Tests every few months, and the SIP Forum holds periodic bake-offs with the MSF at the biennial Global MSF Interoperability event. Recent efforts from the industry have seen the introduction of certification programs for well-established technology areas as well as for areas where multi-vendor interoperability has not yet been achieved. In addition, these groups provide basic interoperability events and permanent test beds.

Testing Life Cycle and Considerations: What Test Methodologies to Use and Where
Design & Development Test. You have a team of developers — and they are scattered around the globe. How exactly do you test their code? This is the realm of dev-test — “not breaking the tree” and “checking working modules in” are common terms here. But more importantly, it’s about having a colleague walk through your code to test it. Unfortunately, bugs still make it to the quality assurance (QA) stage since all too often the same person is writing the code as well as testing it. Never a good idea!

Test methodologies employed here would facilitate the prototyping of code/protocol implementations. Code integrity testing tools are also common, specifically White and Gray box testing, often referred to by alternative names such as Security and Vulnerability testing, or Protocol Fuzzing. Interoperability testing in an open system is important at this stage, but in closed, single-vendor systems it can often be overlooked. Load or stress testing tools are seldom employed at this stage of a product’s life cycle.

Software Quality Assurance (SQA) / Product Verification (PV) Testing. Software QA or product verification is the department normally responsible for in-depth product testing. Common methodologies include load or stress testing as well as conformance testing where an external standard is referenced. Robustness and interoperability methodologies are also common. Even with all these different test methodologies employed, very often a company can be its own worst enemy by using internal test tools. For example, the developer of the code in the device under test (DUT) will often write the accompanying internal test tool, thus nullifying any independent observation, verification and validation.

A good rule of thumb here is for a company to spend at least 1 percent of its product’s market share on SQA/PV test equipment. In my experience the most successful companies have had the most diligent testing departments.

Manufacturing Test. Sometimes called ‘Mfg test’ or ‘Go/No-Go’ testing, this methodology is used only to prove the manufacturing process, not to verify the product design. The test verifies that the product has been built to the set specifications. For hardware, ATE or functional testing is the norm, very often performed at a subcontractor’s facility. For software, this is normally functional testing alone, often with hardware and software integration included; for example, voice quality testing that mimics the end-user experience.

Acceptance Testing. This test methodology generally involves running a suite of tests on a completed/installed system, which may also encompass third-party sub-components. Each individual test, known as a case, exercises a particular operating condition of the system’s environment or a particular feature, and results in an Accepted/Not-accepted outcome. There is generally no degree of success or failure.
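
As a sketch of what that binary outcome looks like in practice, here is a tiny acceptance checklist runner in Python. The case names and checks are placeholders; a real suite would drive the installed system and its third-party sub-components.

```python
# Hypothetical acceptance checklist; each case returns only True (Accepted)
# or False (Not accepted) -- there is no partial credit.
ACCEPTANCE_CASES = {
    "registers_with_core":  lambda: True,    # placeholder checks standing in for
    "basic_call_completes": lambda: True,    # real exercises of the installed system
    "cdr_record_generated": lambda: False,
}

def run_acceptance(cases):
    results = {name: bool(check()) for name, check in cases.items()}
    for name, ok in results.items():
        print(f"{('ACCEPTED' if ok else 'NOT ACCEPTED'):>12}  {name}")
    accepted = all(results.values())
    print("System accepted" if accepted else "System not accepted")
    return accepted

if __name__ == "__main__":
    run_acceptance(ACCEPTANCE_CASES)
```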

Field Test and Service Assurance. This is often the most time-consuming part of a system install, sometimes also called ‘Turn-up’ testing. This methodology deals with system-level testing against different components from the same vendor, or interoperability testing when connecting to components from different vendors. Having a call or service successfully invoked is normally the sign of a successful field test. Capacity tests often follow.

After a successful field test, the next methodology along the Product Development Life Cycle comes down to a management/monitoring or ‘Service assurance’ function, which verifies that the system stays in compliance with all industry standards and with both vendor and operator requirements, as well as making sure the end customer is happy, of course. Good testing!

Andy Huckridge is an independent consultant and an expert in NGN technologies. Andy has worked in the Silicon Valley Telecommunications industry for 12 years and has a broad background in defining and marketing products in the Semiconductor, VoIP and IMS/NGN space. He holds Bachelor’s and Master’s degrees in Telecommunication Engineering from the University of Surrey, England. Andy is active in various Forums including the Multi-Service Forum, where he is Chairperson of the Interoperability Working Group & NGN Certification Committee. Andy is a VoIP patent holder, an IETF RFC co-author and inaugural member of the “Top 100 Voices of IP Communications” list.

Posted by: Andy Huckridge | February 13, 2012

Vocabulary of Testing

On the Testing Edge was a column I wrote for TMC’s IMS Magazine during 2008. It featured education, thought leadership and news relevant to the Test & Measurement industry. Interesting to note that now, in 2012, IMS is actually coming back to life! Hence the re-posting of these earlier articles…

IMS Magazine

 

April 2008 | Volume 3 / Number 2

By Andy Huckridge, M.Sc.

In the first edition of On the Testing Edge, I covered why services are so important for Next Generation Networks and the many issues that testing can overcome to facilitate a trouble-free roll-out. This month we’re going to dig further, taking a look at the vocabulary of testing and the different categories of testing, and follow up with how they relate to the product development life cycle.

Precision: The degree of refinement with which an operation is performed or a measurement stated. This in simple terms means the following: can the same test be run with the same results observed? In a capacity test, how equal are the results each time the test is run?

Accuracy: The degree of conformity of a measurement to a standard or a true value. In simple terms, this means how well a value can be determined: in voice quality metric testing, for example, whether a MOS score is reported as 3.5 versus 3.51.

Reproducibility: The ability to produce the same outcome given a controlled set of variables. In a test situation, this is the ability of a test to produce the same bug time after time, often a crucial factor if a bug is to be found and subsequently remedied.
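
A tiny worked example helps separate the first two terms. The numbers below are made up: five repeated MOS measurements of the same call path, compared against an assumed true value of 3.50.

```python
from statistics import mean, pstdev

true_mos = 3.50                              # assumed reference value
runs = [3.62, 3.61, 3.63, 3.62, 3.61]        # made-up results from five identical runs

precision = pstdev(runs)                     # spread between repeated runs
accuracy_error = mean(runs) - true_mos       # systematic offset from the true value

print(f"precision (std dev across runs): {precision:.3f}")        # ~0.007: very precise
print(f"accuracy (bias vs. true value):  {accuracy_error:+.2f}")  # ~+0.12: not very accurate
```

The instrument is precise and reproducible, giving nearly the same answer run after run, yet not particularly accurate, since every run overstates the assumed true score by roughly 0.12.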

Independent observation, verification and validation: When testing, it is often not enough to have the same person or test setup to find and diagnose a problem. The same goes for a programmer who can’t find his or her own bug — they are often too close to the problem. Thus, it’s important to separate the observation and verification phases in testing.

Lord Kelvin offered commentary essential both to understanding a problem and to improving a product or curing a defect/bug:

  • “If you cannot measure it, you cannot improve it.”
  • “To measure is to know.”

Most Common Types of Testing
Black box testing treats the software as a black box, without any understanding of how the internals of the box behave. This level of testing usually requires thorough test cases to be provided to the tester, who can then simply verify that for a given input, the output value (or behavior) is the same as the expected value specified in the test case.

White box testing, by contrast, is when the tester has access to the internal data structures, code, and algorithms. For this reason, unit testing and debugging can be classified as white-box testing; it usually requires writing code, or at a minimum stepping through it, and thus demands more knowledge of the product than black-box testing does.

In recent years the term gray box testing has come into common usage. This involves having access to internal data structures and algorithms for purposes of designing the test cases, but testing at the user, or black-box level.
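
A quick sketch shows the difference in practice. The normalize_number function below is a hypothetical unit under test; the black-box cases know only inputs and expected outputs, while the gray-box case is chosen with knowledge of how the code works internally but still judges only the external result.

```python
def normalize_number(dialled: str, country_code: str = "1") -> str:
    """Hypothetical unit under test: normalises a dialled string to E.164-style form."""
    digits = "".join(ch for ch in dialled if ch.isdigit())
    if dialled.strip().startswith("+"):
        return "+" + digits
    return "+" + country_code + digits

# Black-box cases: input and expected output only, no knowledge of the internals.
BLACK_BOX_CASES = [
    ("(408) 555-0100", "+14085550100"),
    ("+44 20 7946 0000", "+442079460000"),
]

# Gray-box case: knowing the code silently strips non-digits, deliberately feed
# letters to expose that behaviour, while still checking only the visible output.
GRAY_BOX_CASES = [
    ("1-800-FLOWERS", "+11800"),
]

def run(cases):
    for given, expected in cases:
        actual = normalize_number(given)
        assert actual == expected, f"{given!r}: expected {expected!r}, got {actual!r}"

run(BLACK_BOX_CASES)
run(GRAY_BOX_CASES)
print("all cases passed")
```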

Functional testing covers how well the system executes the functions it is supposed to execute, which can include placing a call or performing a transfer in the case of a PBX, for example. Functional testing covers the obvious, surface-level functions as well as their back-end operation.

Conformance testing is used to make sure an implementation of a protocol actually conforms to the relevant standard. This type of testing facilitates better system and interoperability testing later on in the testing life cycle.

Capacity / Stress / Throughput / Load testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. Stress testing often refers to tests that put a greater emphasis on robustness, availability, and error handling under a heavy load, rather than on what would be considered correct behavior under normal circumstances.
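
As an illustration only, the sketch below ramps concurrent requests against a hypothetical HTTP health endpoint on a device under test, then reports the failure count and 95th-percentile latency at each step. Purpose-built load generators do this at vastly higher rates and with far better measurement, but the shape of the exercise is the same: keep raising the offered load until error rate or latency shows where the system breaks.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen
from urllib.error import URLError

TARGET = "http://192.0.2.10:8080/health"   # hypothetical device-under-test endpoint

def one_request(_):
    """Issue a single request and report (success, latency_in_seconds)."""
    start = time.monotonic()
    try:
        with urlopen(TARGET, timeout=2) as resp:
            ok = resp.status == 200
    except (URLError, OSError):
        ok = False
    return ok, time.monotonic() - start

def load_test(concurrency=50, total=1000):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(one_request, range(total)))
    failures = sum(1 for ok, _ in results if not ok)
    latencies = sorted(lat for ok, lat in results if ok)
    p95 = latencies[int(0.95 * len(latencies))] if latencies else float("nan")
    print(f"concurrency {concurrency}: {failures}/{total} failed, p95 latency {p95:.3f}s")

if __name__ == "__main__":
    for c in (10, 50, 100):                 # step the offered load upward
        load_test(concurrency=c)
```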

Interoperability testing generally appears at the system level, especially in complex telecoms systems like IMS. Most often just a single call or two (or a service interaction) is used to verify that two systems are interoperable.

Robustness testing is in many ways similar to conformance testing, but with the added flexibility and freedom of going outside the protocol or standard: sending bad or malformed packets into a Device Under Test (DUT), for example. This can also be referred to as “fuzzing the protocol” to see the resultant behavior of a specific network element or device.
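
Here is a rough sketch of that idea: deliberately broken variations of an otherwise valid SIP OPTIONS request are fired at a device under test over UDP. The target address is a placeholder and the message is heavily simplified; commercial robustness and fuzzing suites mutate inputs far more systematically and also watch the DUT for crashes, restarts and resource leaks after each message.

```python
import random
import socket

DUT = ("192.0.2.20", 5060)   # hypothetical SIP device under test

# A minimal, simplified SIP OPTIONS request used as the well-formed baseline.
BASELINE = (
    "OPTIONS sip:dut@192.0.2.20 SIP/2.0\r\n"
    "Via: SIP/2.0/UDP 192.0.2.1:5060;branch=z9hG4bK-robust-test\r\n"
    "Max-Forwards: 70\r\n"
    "From: <sip:tester@192.0.2.1>;tag=1\r\n"
    "To: <sip:dut@192.0.2.20>\r\n"
    "Call-ID: robustness-0001\r\n"
    "CSeq: 1 OPTIONS\r\n"
    "Content-Length: 0\r\n\r\n"
)

def mutate(msg: str) -> bytes:
    """Apply one random, standards-violating mutation to an otherwise valid message."""
    data = bytearray(msg, "ascii")
    choice = random.choice(("flip", "truncate", "pad"))
    if choice == "flip":
        data[random.randrange(len(data))] ^= 0xFF        # corrupt one byte
    elif choice == "truncate":
        data = data[: random.randrange(1, len(data))]    # cut the message short
    else:
        data += b"A" * random.randint(100, 5000)         # oversized trailing garbage
    return bytes(data)

def run(count=100):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for _ in range(count):
        sock.sendto(mutate(BASELINE), DUT)
    sock.close()

if __name__ == "__main__":
    run()
```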

Andy Huckridge is Director, NGN Solutions at Spirent Communications, where he leads Spirent’s strategy for the Multimedia Application Solutions division. His responsibilities include product management, strategic business planning & market development. Andy has worked in the Silicon Valley Telecommunications industry for 12 years and has a broad background in defining and marketing products in the Semiconductor, VoIP and IMS/NGN space. He holds Bachelor’s and Master’s degrees in Telecommunication Engineering from the University of Surrey, England. Andy is active in various Forums including the Multi-Service Forum, where he is Chairperson of the Interoperability Working Group & NGN Certification Committee. Andy is a VoIP patent holder, an IETF RFC co-author and inaugural member of the “Top 100 Voices of IP Communications” list.

Posted by: Andy Huckridge | February 13, 2012

IMS — From Network Deployment to Service Delivery

On the Testing Edge was a column I wrote for TMC’s IMS Magazine during 2008. It featured education, thought leadership and news relevant to the Test & Measurement industry. Interesting to note that now, in 2012, IMS is actually coming back to life! Hence the re-posting of these earlier articles…

IMS Magazine

 

February 2008 | Volume 3 / Number 1

By Andy Huckridge, M.Sc.

IMS has transitioned from a concept to a “here and now” architecture. The impact IMS stands to have on revenue streams is forcing service providers and equipment manufacturers to look closely at combining existing service offerings and to pay close attention to the quality of experience (QoE) these combined services deliver. As a delivery system, IMS provides subscribers with widespread access to new and existing services independent of location or device. The architecture comprises evolving protocols and interface specifications to make possible voice, video and data services over fixed and mobile environments. IMS also offers high scalability for network expansion along with system redundancy for improved reliability.

IMS is expected to work with any wireless or fixed network that uses packet switching, including older gateway-supported telephone systems. Operators and service providers seeking to employ IMS will be able to use a variety of network architectures including their existing systems to offer services such as Voice-over-Internet Protocol (VoIP), gaming, messaging, content sharing and presence information among other applications.

One of the most important promises of IMS is the rapid introduction of new multimedia services. By separating the Application/Services Layer from the control and transport planes, new individual applications can be developed faster at lower cost. Also, service providers can use third parties for application development. Subscribers will enjoy greater service selection from just about any device. They will have broader access from workstations, cell phones, PDAs, fixed and mobile viewing devices, and the latest devices presently in development. Subscribers are expected to use many additional services, generating revenue potential for the operators and providers.

While IMS is not distinctly about developing new services, it certainly enables the introduction of new services by using Internet Protocol (IP) and the IETF-designated Session Initiation Protocol (SIP). Described in RFC 3261, SIP is an application-layer signaling protocol that starts, changes and stops sessions between participants. The 3rd Generation Partnership Project (3GPP) standardized the SIP variant used in IMS, but other protocols and functions also contribute to the viability of IMS across a variety of networks and devices.

A New Service Architecture
Properly set up, IMS core networks will interwork with 2G/2.5G and 3G cellular networks, public switched telephone networks (PSTN) and other existing VoIP networks. IMS network and device performance standards, however, have not yet been fully adopted, which leaves open-ended questions as to how IMS goodness metrics will be achieved and measured. One thing is certain: the pressure is on to create new services even as standards and measurements are still being solidified.

IMS Architecture

Fixed and mobile convergence (FMC) paves the way for merging wireless and traditional wireline technologies. The dissimilarity of fixed networks and mobile networks is clear; they were invented and implemented at different times and for different services. Mobile networks offer many exciting, customer-appealing services, while fixed services offer mainly caller ID, call back, second line and call block. Yet the two types must now come together. They need to be delivered with a single technology in such a way as to provide both operating and cost benefits. Carriers will be able to save money by merging the cores of fixed and mobile networks, and NEMs will realize savings by offering a common architecture to service providers.

This new architecture will benefit landline providers as they stave off mobile subscriber churn. Capital expenditures will decrease significantly after some increased operating expenditures. Ultimately, a win-win situation can be expected as providers and operators offer more compelling IMS services, but the steps leading to such success will have to be assured through proper testing of equipment and systems.

Delivering IMS Based Services
Service providers and network operators are independent businesses. None is likely to use IMS and protocols in the same manner. However, all will utilize combinations of protocols depending on their family of offerings and individual strategies. To succeed, providers and operators will have to understand how and where to test their system if they want the best network performance and maximum revenue generation. The first step is getting to know how FMC relates to IMS, and the next step is to begin comprehensive testing.

Until the industry fully implements IMS networks that provide IMS services, FMC and FMC-based services must be linked to IMS. Providers and operators who do not understand this concept may lose service revenue. For the time being, IMS will carry FMC services to subscribers over FMC-capable networks that will essentially be IMS-based. IMS is expected to carry FMC-based services such as call swapping between a landline and a mobile, as well as swapping between the Radio Access Network (the RAN controls transmission and reception of cellular radio signals) and the WiFi network at home. Eventually, new IMS-based services will be rolled out on IMS-based networks. The underlying network topology, whether mobile or fixed, will be irrelevant. At that time, IMS will be in full operating mode and the transition will be finalized.

IMS testing over mobility and FMC should be approached by isolating packet and security gateway devices while emulating WLAN access points, millions of mobile nodes and the entire 3G mobile packet core. A test methodology is required that covers all aspects of IMS service delivery — conformance, functional and performance. As the FMC transition approaches and IMS evolves, special attention must be paid to billing systems and security threats.

IMS is a paradigm shift and has a significant impact on testing strategies. Historically, 18 months or more have been required to introduce a new service. IMS can potentially reduce that time to a few months, even weeks. To do so, testing strategies must be nimble, which they have not traditionally been. Testers must allow quick prototyping of new services in the lab prior to deployment. As service providers and NEMs evaluate IMS test solutions, they should consider testers designed to be inherently flexible, so that new call flows can be quickly crafted for specific applications. These test systems should allow users to isolate individual application servers (AS) or test applications as a system, including the control plane and the AS. They should analyze and validate functionality and error handling, or tune an application server for performance. Furthermore, they should test most IMS applications, such as Presence, Push-to-Talk, Instant Messaging and Shared List Servers.

Taking Care of Billing
The implementation of IMS is a business decision, as payment systems are an integral part of IMS architecture. Standards-based interfaces and network elements have been defined to facilitate billing. IMS changes the rules on unique/customized billing schemes to maximize average revenue per user (ARPU). These billing schemes will permit subscribers to choose from a large selection of services and products by adding or deleting offerings in real-time.

Unlike in the past, when testing billing in an IMS service environment service providers need to consider several important aspects: validating the billing criteria and process when adding new services; identifying and testing new billing schemes that are not limited to per-minute charges; and ensuring that systems can handle content-sensitive billing, such as billing different types of content in the same service at different rates.
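
As a trivial illustration of content-sensitive rating, the sketch below charges different, entirely made-up per-megabyte rates for different content types carried within the same service. Real IMS charging runs through the standardized online and offline charging interfaces; this only shows the kind of rule a billing test would need to validate.

```python
# Assumed, illustrative tariffs in USD per megabyte, by content type.
RATES_PER_MB = {"video": 0.05, "audio": 0.02, "messaging": 0.002}
DEFAULT_RATE = 0.01                      # applied to any content type not listed

def rate_session(usage_mb_by_type):
    """Return the charge for one session given MB consumed per content type."""
    return sum(RATES_PER_MB.get(ctype, DEFAULT_RATE) * mb
               for ctype, mb in usage_mb_by_type.items())

charge = rate_session({"video": 120, "messaging": 3})
print(f"session charge: ${charge:.2f}")   # 120*0.05 + 3*0.002 = $6.01
```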

Seizing Security by the Horns
Dedicated testing for threats, conducted before and after deployment, will keep IMS systems functioning properly; without it, this new technology will suffer like many unprotected enterprises. While IMS promises easy access across multiple providers, the reality of implementation still faces interoperability hurdles between legacy and next-generation networks. This implementation issue applies not only to security but to billing accuracy as well. Vendors and operators must carefully evaluate and verify their IMS strategies prior to full-scale deployment. IMS networks must be able to interoperate with today’s existing networks, which is why thorough network and device testing is vital every step of the way, from before deployment and throughout the deployment process.

IMS security must be managed at two separate levels — Network-to-Network Interconnection (NNI) and User-to-Network Interconnection (UNI). Service providers must ensure that when connecting to other service provider networks, traffic passes securely between the networks and that billing information is transferred in a secure manner. Currently, service providers face a major set of issues surrounding users’ ability to access the network: authenticating the user and making sure they can access only the services they have been granted.

QoS in IMS

Services are the Future
IMS is a business decision that involves technology modification and creation. When a common architecture is implemented, the gate opens wide for the introduction of innovative new services to subscribers. Such new services are expected to drive the adoption of IMS and the global implementation of Next Generation Networks. It is important to remember that the IMS architecture is for delivering services and not necessarily for advocating an inherent service. For the first time in telecom history, an architecture separates the service layer from the network’s signaling and bearer layers. IMS allows NEMs and operators to focus on a “service architecture/service delivery” approach to enhance time-to-market for new products — boosting their ability to compete in an already highly competitive marketplace.

Our new columnist, Andy Huckridge, is Director, NGN Solutions at Spirent Communications, where he leads Spirent’s strategy for the Multimedia Application Solutions division. His responsibilities include product management, strategic business planning & market development. Andy has worked in the Silicon Valley Telecommunications industry for 12 years and has a broad background in defining and marketing products in the Semiconductor, VoIP and IMS/NGN space. He holds Bachelor’s and Master’s degrees in Telecommunication Engineering from the University of Surrey, England. Andy is active in various Forums including the Multi-Service Forum, where he is Chairperson of the Interoperability Working Group & NGN Certification Committee. Andy is a VoIP patent holder, an IETF RFC co-author and inaugural member of the “Top 100 Voices of IP Communications” list.

Posted by: Andy Huckridge | February 13, 2012

Script for the ROI Challenge Video

Network Intelligence Optimization: How to reduce operator Capex & Opex

Section 1: Introduction

Operators are in a bind. The race to upgrade key infrastructure for 4G means that CapEx and OpEx budgets are constrained like never before. Meanwhile, the need to support new premium services such as VoLTE, as well as surging over-the-top content (for which operators are largely losing out on potential revenues), means an acute need for monitoring systems. How does the operator ensure the network stays up without escalating both service and support costs?

Voice and Data traffic compared

An operator needs to see a clear ROI path, but traditional monitoring solutions simply don’t provide that story. A new approach is needed, and in this video we’ll show how a Network Intelligence Optimization layer solves this challenge and why it provides the best return on investment (ROI) for operators. So let’s look more closely at the challenges… On the CapEx side, new technology is being rolled out faster and more efficiently through all-IP platforms – but this break-neck pace calls for regular upgrades of the monitoring tools and analytic elements. For example, 10G monitoring tools are now widely available, but they can be prohibitively expensive and are often under-utilized when deployed. A more cost-effective path for the operator is to preserve the use of existing tools.

Benefits of deploying VSS Monitoring

 

On the OpEx side, new services are developed and launched every day. There are myriad possible interactions between the services and the network itself, and between the pay-for-play services and the ‘over the top’ services, where the possibility of a network’s performance being slowed, or the network even being brought down by the handset wake-up cycle, is very real. No operator can afford to grow its support department in this way – for services which essentially provide little or no revenue. Clearly, telecom, enterprise and government network operators must develop a holistic and future-minded strategy for network monitoring and network operation management.

Section 2: The solution — Network Intelligence Optimization Layer

An alternative, which preserves or often greatly reduces current levels of CapEx and OpEx, is to deploy a Network Intelligence Optimization layer. This extends the life of existing tools even as network capacity is upgraded, increases the visibility of the network system-wide, and allows future tool expenses to be predicted more accurately, facilitating better tool management. These advantages bring clear benefits to both the top and bottom lines of the operator’s business.

Network Intelligence Optimization Layer

This Network Intelligence Optimization layer becomes the path to a smart network-monitoring infrastructure. To sustain the increase in speed, the traffic capture layer must continue to operate at line rate in hardware, where deeper awareness of packets and applications, as well as more dynamic handling such as filtering, de-encapsulation or packet re-assembly, is essential. The first step in this direction is to eliminate the 1:1 approach used in the past for connecting network intelligence tools to the switching infrastructure.

Purpose-built appliances featuring a broad set of line-rate traffic intelligence capabilities can be deployed at strategic points in the network, providing passive or active monitoring between the network intelligence tools and the switching infrastructure. By deploying these appliances in a mesh, the Network Intelligence Optimization layer can become self-healing while gaining vast scalability, in step with the operator’s budget for monitoring coverage and future expansion.
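
As a toy model of what that capture layer is doing, the sketch below steers packet metadata to different tool ports based on simple filter rules. The rule set and port names are invented for illustration; real appliances evaluate rules like these in hardware at line rate rather than in Python.

```python
from typing import Callable, Dict, List, Tuple

# A rule is a (match predicate, destination tool port) pair; both are invented here.
Rule = Tuple[Callable[[Dict], bool], str]

RULES: List[Rule] = [
    (lambda p: p.get("vlan") == 100 and p.get("proto") == "SIP", "tool-voip-analyzer"),
    (lambda p: p.get("proto") == "GTP",                          "tool-mobile-core"),
]
DEFAULT_PORT = "tool-catchall"

def steer(packet_meta: Dict) -> str:
    """Return the tool port this packet should be forwarded to."""
    for matches, port in RULES:
        if matches(packet_meta):
            return port
    return DEFAULT_PORT

assert steer({"vlan": 100, "proto": "SIP"}) == "tool-voip-analyzer"
assert steer({"proto": "GTP"}) == "tool-mobile-core"
assert steer({"proto": "HTTP"}) == "tool-catchall"
```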

Section 3: Let’s calculate the ROI

Let’s consider a real case by plugging numbers into this ROI calculator. For instance, a tier 1 mobile operator that deployed LTE into a new market needed to install 10 regional POPs, with each POP having an overlap of analytic elements that contributed directly to financial inefficiency as well as management overhead.

ROI Calculator

After the operator installed the Network Intelligence Optimization layer, it was able to observe significant financial benefits. CapEx spend was reduced by 80% and OpEx spend by 50%, with the operator saving over $2m during the three-year Total Cost of Ownership period. In this particular study the operator’s Return on Investment was over 1700%, which yielded a payback period of just two months.
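
For readers who want to sanity-check the arithmetic, the sketch below reproduces that style of calculation. Only the percentage reductions and the headline results come from the case study; the absolute baseline costs and the solution cost are assumptions chosen purely so the numbers land in the same ballpark.

```python
def roi_summary(capex_before, opex_per_year_before, capex_cut, opex_cut,
                solution_cost, years=3):
    """Return (total saving, ROI %, payback in months) over a TCO period."""
    capex_saving = capex_before * capex_cut
    opex_saving = opex_per_year_before * opex_cut * years
    total_saving = capex_saving + opex_saving
    roi_pct = 100 * (total_saving - solution_cost) / solution_cost
    payback_months = solution_cost / (total_saving / (years * 12))
    return total_saving, roi_pct, payback_months

# Illustrative, assumed baseline figures; the 80% / 50% cuts are from the case study.
saving, roi, payback = roi_summary(
    capex_before=1_500_000, opex_per_year_before=600_000,
    capex_cut=0.80, opex_cut=0.50, solution_cost=115_000)

print(f"saved ${saving:,.0f} over 3 years, ROI {roi:.0f}%, payback {payback:.1f} months")
# -> saved $2,100,000 over 3 years, ROI 1726%, payback 2.0 months
```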

 

Section 4: The case for VSS

By optimizing the network intelligence tools in your network and data centers, VSS solves a variety of problems with a very clear ROI. By adopting a systems approach to Network Intelligence Optimization, you get the flexibility of modularity, deploying just what you need, when you need it. This approach ensures support for your existing set of network intelligence, analytic or security tools. You also gain maximum reliability with automatic, system-wide fault tolerance.

 

Full Mesh Architecture

 

Section 5: Conclusions & further recommendations

To wrap up, the network monitoring optimization framework discussed here allows organizations to migrate from a high initial CAPEX business model to a lower and variable CAPEX model in the network-monitoring component of the budget. It also opens the door for the network operator to do more in other areas such as network forensics, lawful intercepts, behavioral analysis, centralizing applications for compliance, as well as QoE differentiators.

ROI Calculator Inputs

 

With so much change happening in the network, it’s essential that operators stay one step ahead of the curve by solving the challenge of scaling network monitoring with a real framework that’s in step with the business plan. To learn more about how these efficiencies may be incorporated into your organization and how you can take the ROI challenge, please contact VSS Monitoring at www.vssmonitoring.com. Thank you.

Posted by: Andy Huckridge | February 12, 2012

Maintaining “Link Layer Visibility”

Which server is the data coming from?

An issue in the Network Monitoring / Network Management world that’s often not fully understood, nor corrected, is the loss of Link Layer Visibility after aggregation has been introduced between the network TAPs and the input side of the analytic tools.

So there you are: you’ve just bought an expensive analytic tool and you want to get more coverage of your network links, say to a few other local network segments. After you realise the tool’s not running at its full capacity, you buy an aggregator to place between your network segments and your analytic tool. Guess what – you’ve lost Link Layer Visibility… You’ve lost critical information about the nature of the data you’re trying to monitor and / or analyse.

What are the real network ramifications of losing Link Layer Visibility? You don’t know which network segment your tool is looking at, or what your tool does when looking at multiple different network segments; tool results are no longer correct; and you start losing packets because collisions are now introduced at the ingress, again by the aggregator, due to the characteristics of bursty traffic. In fact, it’s worse than that. Sessions are now mixed together and your analytic tool may not be able to differentiate one session from another – packets arriving on one port from packets arriving on a different port.

There’s nothing wrong with adding an aggregator – or with aggregation in general. But next time you’re in need, take a look at the VSS Monitoring line of products, which preserve or restore Link Layer Visibility when it has been lost, through a range of advanced features.

Add to that the industry’s only true Mesh deployment architecture, as opposed to the far less reliable Hub & Spoke approach, and not only do you benefit from seeing multiple network segments with the same tool, or with multiple different tools, but you also get much more resilient monitoring with a network-wide view that self-learns, self-heals and never loses a packet!

What’s the upshot?

Only by preserving Link Layer Visibility can you guarantee you’ll be able to find and process the packet that will lead to getting the network back up again. Using an aggregator that doesn’t preserve Link Layer Visibility will help to conceal the problem that is keeping your network down, hide the packet at issue and impede a resolution.

Posted by: Andy Huckridge | February 10, 2012

Network Intelligence Optimization: How to reduce Operator Capex / Opex

Network monitoring systems are essential in ensuring the performance and reliability of key infrastructure, but operators need to see a clear return-on-investment for these expensive tools. In this video, Andy Huckridge, Dir. of Marketing at VSS Monitoring, shows how a Network Intelligence Optimization Layer solves the challenge and why it provides the best ROI for operators.

Visual representation of Packet Fragmentation

Continuing on from the previous article, here is yet another common issue that dogs unintelligent monitoring systems and reduces the effectiveness of the connected analytic tools.

What is Packet fragmentation and what causes packets to become fragmented?

Within a network it is sometimes necessary to concatenate either the body or the header of an existing packet with a different header, or to encapsulate it completely within a separate packet – normally to better facilitate the packet’s passage through a network. There are many reasons to do this: perhaps there are two different network types, with different policies or with different MTU sizes; it could be that one network uses encryption and another one doesn’t; or, more commonly, it could be a case where two networks use dissimilar transport protocols – for example MPLS-TE and SCTP, TCP or UDP, and even VLAN tagging. Either way, something has to be done to transport a packet from one network through another, dissimilar network type to a final destination network type.

What causes this to happen?

Specifically, in 4G/LTE mobile networks there is the mandated use of GTP – the GPRS Tunneling Protocol. If the packet to be encapsulated has already reached its maximum MTU size and encapsulation is then added, the packet becomes too long to pass through the network as a single packet and must be split into two or more packet fragments. This can cause a headache for the analytic tools which sit downstream of the traffic capture system itself. If you also take into account that networks have multiple routes available to them, and that very often certain traffic types are sent through certain routes, a packet can end up with its fragments out of order, or even greatly separated in time, when they are received by the network element that is to process them for analytic information. Imagine a network where there are multiple transport routes between source and destination – the fragmented packet’s middle fragment may arrive in advance of the first fragment or behind the third fragment! The third fragment could arrive before the first two fragments, or get badly delayed in another part of the network, holding up the ability to process the first two fragments.

Why is this an issue?

All of a sudden, and for no apparent reason, the traffic no longer resembles how it was sent; its characteristics have changed completely. The network has affected the transport of the packet. In order for a tool to run at its full capacity, it expects perfect packets. Capacity will be reduced if the tool has to reassemble the fragments itself. Very often a more serious situation arises where the tool may or may not be able to remove the body of the fragment from the encapsulation type – leading to incomplete or inaccurate packet analysis. Where this happens – and analytic tools very often need to see the whole interchange between sender and receiver – losing just one packet, or even a fragment of a packet, renders the session null and void. The collected and backhauled traffic is useless.

How do you avoid fragmented packets flowing into your tool and reducing its performance?

Analytic tools are not built from the ground up to reassemble packets, nor to de-encapsulate them. This function is better left to another network element, ideally in the traffic capture layer. When you select a monitoring system, make sure it has the capability to re-assemble fragmented packets as a base feature…
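
To show what re-assembly in the capture layer means in principle, here is a stripped-down Python sketch that collects fragments arriving in any order and releases the whole payload only once every byte is accounted for. The fragment representation is invented for clarity; real systems work on actual IP/GTP headers and do this at line rate.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Fragment:
    packet_id: int      # identifies which original packet this piece belongs to
    offset: int         # byte offset of this piece within the original payload
    data: bytes
    last: bool          # True on the final fragment (no "more fragments" flag)

@dataclass
class _Pending:
    pieces: Dict[int, bytes] = field(default_factory=dict)
    total_len: Optional[int] = None

class Reassembler:
    """Collects fragments that may arrive out of order and emits whole payloads."""

    def __init__(self):
        self._pending: Dict[int, _Pending] = {}

    def add(self, frag: Fragment) -> Optional[bytes]:
        entry = self._pending.setdefault(frag.packet_id, _Pending())
        entry.pieces[frag.offset] = frag.data
        if frag.last:
            entry.total_len = frag.offset + len(frag.data)
        if entry.total_len is None:
            return None                 # final fragment not seen yet
        assembled, cursor = bytearray(), 0
        for off in sorted(entry.pieces):
            if off != cursor:
                return None             # a gap remains; keep waiting
            assembled += entry.pieces[off]
            cursor += len(entry.pieces[off])
        if cursor != entry.total_len:
            return None
        del self._pending[frag.packet_id]
        return bytes(assembled)

# Fragments arriving out of order (middle piece first) still reassemble correctly.
r = Reassembler()
assert r.add(Fragment(7, offset=4, data=b"WORL", last=False)) is None
assert r.add(Fragment(7, offset=0, data=b"HELO", last=False)) is None
assert r.add(Fragment(7, offset=8, data=b"D!", last=True)) == b"HELOWORLD!"
```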

Posted by: Andy Huckridge | February 8, 2012

How to avoid packet duplication through a monitoring system

Visual representation of packet duplication

Let’s take a look at another common issue that dogs unintelligent monitoring systems and reduces the effectiveness of the connected analytic tools.

What is Packet duplication and what causes packets to be duplicated?

This is where the same packet appears on the wire multiple times. It gets there in a multitude of ways, but the issue is that it causes problems for analytic tools: they end up spending more time than they should removing the duplicated packets before any actual analysis can take place. Analytic tool results can become skewed, and network issues can become deeply hidden.

With a router plugged into a single network segment, if multiple SPAN ports are connected to a single aggregator, identical traffic will be present from each of the SPAN ports, and your monitoring system will have collected multiple copies of the same packet. Similarly, if you are monitoring traffic from several points in a series of point-to-point links through an aggregator, you’ll capture the same packet as it traverses each link, again resulting in multiple copies of the same packet.

Why is this an issue?

If multiple identical packets are being delivered by the monitoring system / traffic visibility / network monitoring infrastructure, the tool has to do more work. The tool becomes less effective and essentially costs more, as you have to buy more tools to analyse the same amount of data. On top of that, there can be issues with the analysis results, since the same packets arrive with different delays relative to each other, which can cause different, unrepeatable results and a lack of precision. Also, issues with the network can be hidden because the tool can’t correctly analyse the data – the underlying issue is effectively masked by the duplicated packets.

How to avoid this issue?

When you select a monitoring system, make sure it has a capability to remove duplicated packets as a base feature…
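
Conceptually, that capability boils down to hashing each packet and dropping repeats seen within a short window, usually after masking fields that legitimately differ between copies (TTL, checksums, VLAN tags). The Python sketch below is only a model of the idea, not how any particular product implements it; real appliances do this in hardware at line rate.

```python
import hashlib
import time
from collections import OrderedDict

class Deduplicator:
    """Drops packets whose hash has already been seen within a short time window."""

    def __init__(self, window_seconds=0.05, max_entries=100_000):
        self.window = window_seconds
        self.max_entries = max_entries
        self._seen = OrderedDict()          # digest -> timestamp of first sighting

    def is_duplicate(self, packet: bytes, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Expire old entries so memory stays bounded.
        while self._seen and (next(iter(self._seen.values())) < now - self.window
                              or len(self._seen) > self.max_entries):
            self._seen.popitem(last=False)
        digest = hashlib.sha1(packet).digest()
        if digest in self._seen:
            return True
        self._seen[digest] = now
        return False

dedup = Deduplicator()
copies = [b"same packet", b"same packet", b"different packet"]
forwarded = [p for p in copies if not dedup.is_duplicate(p)]
assert forwarded == [b"same packet", b"different packet"]
```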

Affording the Upgrade: The New World of Network Intelligence

The ever-increasing demand for “anytime, anywhere” data and how to keep it from shutting you down
