Posted by: Andy Huckridge | March 3, 2012

US Tier 1 Mobile Carrier Launches 4G FDD-LTE Network

“VSS Monitoring provided the infrastructure we needed to differentiate our network offering.” – Network Architect

It was clear that VSS Monitoring was the best system to meet all the requirements. “We chose a VSS Monitoring solution as the foundation of our monitoring system because it supports all Ethernet interfaces in a single unit, can cost-effectively upgrade to 10G through SFP+ transceivers, and can easily configure complex filtering rules,” the architect said.

Leader in 4G Mobile Space Expands Technology Partner Ecosystem; Added Optimization Features; Supports Deployments for Eight of Top Ten Mobile Carriers Worldwide

 

BARCELONA, Spain (Mobile World Congress Booth: # 2B115) & SAN FRANCISCO, Calif. (RSA Booth: # 2533) – Feb. 27, 2012 – VSS Monitoring, the world leader in Network Intelligence Optimization, today announced that it is building strong customer and partner momentum by providing the best ROI in the industry and introducing unrivaled features including packet deduplication and line-rate packet reassembly.

VSS’ unique systems-based approach provides unmatched visibility and scalability to network intelligence tools without compromising security or link-level visibility. New packet deduplication and fragment reassembly features reduce unnecessary overhead for enterprise network intelligence tools while increasing the overall performance and accuracy of existing tools.

Unprecedented levels of optimization allow enterprise network operators and security organizations to accelerate information and security intelligence while ensuring that existing network intelligence tools and incident analysis and response operations keep up with the explosion of Big Data, evolving cyber threats, rapidly increasing network speeds and the convergence of voice, video and data over IP.

Growth highlights include:

  • Introduction of the monitoring industry’s most effective packet deduplication and first line-rate packet reassembly features
  • Expansion of technology solution partner ecosystem to more than 40 leading companies
  • Customers including eight out of the top ten mobile carriers in the world, as well as recent deployments for a major tier-one North American carrier and one of the largest carriers internationally
  • Publication of best practices for the MultiService Forum; selected as the only monitoring vendor for MSF VoLTE Interoperability event
  • Introduction of ROI calculator tool
  • VSS products won awards for applications within 4G/LTE networks including Product of the Year Award from 4G Wireless Evolution and Excellence Award from INTERNET TELEPHONY
  • VSS was recognized as one of Silicon Valley’s Top 50 Fastest Growing Companies by the Silicon Valley/San Jose Business Journal

Deduplication Feature:

  • Because routers send data upstream from multiple SPAN port configurations, and because traffic is monitored at multiple points on an aggregator, copies of the same packet appear on the wire multiple times, making analytic tools less effective and shortening their useful life.
  • VSS Monitoring’s packet deduplication feature removes duplicate network packets from traffic bound for analytics tools, improving the efficiency and effectiveness of existing monitoring solutions.
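
VSS performs this deduplication in hardware at line rate, and its exact matching rules are not published. As a rough software illustration of the idea only (the class name and the 50 ms window below are our assumptions, not VSS parameters), a deduplicator can drop any packet whose digest has already been seen within a short time window:

```python
import hashlib
import time

class Deduplicator:
    """Drop packets whose digest was already seen within `window` seconds.

    A production implementation would hash only the invariant parts of the
    packet (ignoring TTL, L2 headers and checksums); here we hash raw bytes.
    """

    def __init__(self, window: float = 0.05):
        self.window = window
        self.seen = {}  # digest -> last-seen timestamp

    def is_duplicate(self, pkt: bytes) -> bool:
        now = time.monotonic()
        # Evict entries older than the dedup window.
        self.seen = {d: t for d, t in self.seen.items() if now - t <= self.window}
        digest = hashlib.sha256(pkt).hexdigest()
        dup = digest in self.seen
        self.seen[digest] = now
        return dup

# Forward only unique packets to the analysis tool.
dedup = Deduplicator()
unique = [p for p in [b"pkt-a", b"pkt-a", b"pkt-b"] if not dedup.is_duplicate(p)]
print(unique)  # [b'pkt-a', b'pkt-b']
```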

Defragmentation and Packet Reassembly Capability:

  • 4G/LTE mobile networks mandate the use of GTP (GPRS Tunneling Protocol), a group of IP-based communication protocols. When encapsulated packets exceed the maximum transmission unit size, each packet must be split into two or more fragments. With multiple routes and traffic types downstream of the traffic capture system, fragments become separated and arrive out of order. Because the fragments no longer resemble the original traffic, monitoring tools must collect and reorder them before the packets can be analyzed, adding overhead to traffic collection and backhaul.
  • VSS Monitoring’s defragmentation capability reassembles packets flowing into analytic tools, optimizing network performance.
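
Again, VSS implements this in hardware at line rate. As a simplified software sketch of the reassembly logic only (the flow key, class structure and example values are our illustrative choices, and corner cases such as overlapping fragments and timeouts are omitted), fragments are buffered per flow until a contiguous run ending in a last fragment can be stitched back together:

```python
from collections import defaultdict

class Reassembler:
    """Toy IPv4-style reassembly keyed on (src, dst, proto, ident)."""

    def __init__(self):
        self.buffers = defaultdict(dict)  # key -> {byte offset: (payload, more_frags)}

    def add_fragment(self, key, offset, payload, more_frags):
        """Buffer one fragment; return the full payload once it is complete."""
        self.buffers[key][offset] = (payload, more_frags)
        return self._try_reassemble(key)

    def _try_reassemble(self, key):
        data, expected = b"", 0
        for offset in sorted(self.buffers[key]):
            payload, mf = self.buffers[key][offset]
            if offset != expected:
                return None          # gap: a fragment is still missing
            data += payload
            expected += len(payload)
            if not mf:               # last fragment reached with no gaps
                del self.buffers[key]
                return data
        return None

r = Reassembler()
k = ("10.0.0.1", "10.0.0.2", 17, 42)
print(r.add_fragment(k, 8, b"orld", more_frags=False))     # None: first half missing
print(r.add_fragment(k, 0, b"hello, w", more_frags=True))  # b'hello, world'
```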

Supporting Quotes:

  • Rob Markovich, senior vice president of worldwide sales and marketing, VSS Monitoring, said: “More organizations are migrating their network infrastructures and platforms to the cloud, increasing the need for intelligent filtering of corporate data to the most appropriate security and analysis tool. To meet the needs of the ever-evolving and growing network, VSS provides scalable and intelligent systems capable of pre-processing and pre-filtering data to provide the network intelligence tools with the appropriate traffic.”
  • Andy Huckridge, director of marketing, VSS Monitoring, said: “CapEx and OpEx constraints continue to be a critical challenge for operators rolling out new services. VSS solutions allow operators to optimize existing investments while maximizing the return on new investments – providing the monitoring industry’s best ROI. Our recent high profile deployments and the continued growth of our partner ecosystem highlight the importance of VSS’ innovative solutions across various industries, from government organizations to enterprises and carriers.”

About VSS Monitoring:
VSS Monitoring is the world leader in Network Intelligence Optimization, providing a visionary, systems approach for optimizing and scaling the connectivity between network switching and the network intelligence universe of analytics, security, and acceleration tools. VSS network intelligence optimization systems improve tool usage, simplify operations, increase efficiencies and greatly enhance ROI. The company is headquartered in San Mateo, Calif. For more information, visit www.vssmonitoring.com.

Posted by: Andy Huckridge | February 23, 2012

VSS Monitoring Delivers Monitoring Industry’s Best ROI

Network Intelligence Optimization Layer Provides Complete Visibility Across the Network Reducing CAPEX and OPEX, with Immediate Return on Investment

 

SAN MATEO, Calif. – Feb. 23, 2012 – VSS Monitoring, the world leader in Network Intelligence Optimization, today introduced a new calculator tool for its network operator customers to measure the ROI of their network deployments. VSS Monitoring also announced that its fixed and mobile operator customers are benefiting from the monitoring industry’s highest return on investment from day one, with Capital Expenditure (CAPEX) reductions of up to 80 percent and Operating Expenditure (OPEX) reductions of up to 50 percent.

With the explosive growth of network traffic, network intelligence tools face an increasing variety of constraints. To improve performance and security, these tools require operations personnel to monitor, analyze and examine network traffic in depth. Growing subscriber interest in, and revenue dependence on, new mobile services make reducing spotty network performance a higher priority. The increasing quantity and variety of analyzers that must attach to network links to support this traffic growth often dramatically increases CAPEX and OPEX.

VSS Monitoring’s system-based solutions proactively copy, forward and redirect network traffic to the appropriate analyzers in real-time.  This layer of optimization allows network operators to attain higher service availability, lower labor costs for network analysis, lower analysis costs overall and achieve greater subscriber and revenue security.

Increased Tool ROI

  • The intelligence of the VSS Monitoring system filters and grooms traffic to substantially improve the efficiency of connected tools. Many application-specific tools perform better when they receive only certain types of IP traffic from certain parts of the network.
  • Selective hardware-based filtering, high data burst buffers and session-aware load balancing ensure that tools are not overwhelmed with traffic and that no packets are lost due to oversubscription.

Reduced Tool Cost

  • A network intelligence optimization system between the tools and the network infrastructure, instead of a 1:1 connection of tools to network links, enables network operations to monitor several links or the entire network with a single tool, reducing the CAPEX needed to completely cover the network.
  • With the ability for a tool to receive only the traffic of interest, 1G tools can work with 10G network links. No longer receiving all network traffic, they receive only the required traffic at full line rates. The tools therefore continue to perform effectively and accurately for a higher speed link, deferring or eliminating the need to purchase costly 10G tools.
  • Often multiple tools are interested in subsets of the same traffic type. Session awareness in the system allows traffic to multiple tools to be load balanced, enabling each tool to analyze the entire session or conversation accordingly. The session-aware load balancing of high-speed traffic to lower speed tools (e.g. from 10G to 1G) provides better quality data to the tools.
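
To make the session-aware idea concrete, here is a minimal sketch (our own illustration, not VSS’s implementation): hashing a direction-normalized 5-tuple guarantees that every packet of a session, in either direction, lands on the same lower-speed tool port.

```python
import hashlib

def flow_key(src_ip, src_port, dst_ip, dst_port, proto):
    # Sort the endpoints so both directions of a session produce the same key.
    a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    return f"{a}|{b}|{proto}"

def assign_tool(src_ip, src_port, dst_ip, dst_port, proto, n_tools):
    """Map every packet of a session to one of `n_tools` 1G tool ports."""
    key = flow_key(src_ip, src_port, dst_ip, dst_port, proto)
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_tools

# Both directions of the same TCP session hash to the same tool port.
print(assign_tool("10.0.0.1", 443, "10.0.0.2", 55000, "tcp", 8))
print(assign_tool("10.0.0.2", 55000, "10.0.0.1", 443, "tcp", 8))  # same index
```

Because the assignment is a pure function of the flow key, no per-session state is needed and a conversation is never split across tools.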

Lowest Cost of Ownership

  • Network intelligence optimization systems scale with evolving network needs by adding more nodes. The overall solution cost of the network intelligence optimization layer is substantially lower than a non-systematic approach to deploying large numbers of tools.
  • Lower management overhead, shorter time to troubleshoot network anomalies and repair lead to further reduction in operating costs, faster ROI and the ability to meet service level agreements (SLAs).

Supporting Quotes:

  • Andy Huckridge, director of marketing, VSS Monitoring, said: “In a highly competitive industry, carriers and operators are constantly seeking ways to contain costs while responding to rapidly changing business conditions, all while maintaining high visibility and compliance in the network. Optimizing and securing the flow of network analysis helps network operators fuel a sustainable competitive advantage, guarantee future operating success, maximize the return on new tool investments and achieve a greater return on existing investments.”

About VSS Monitoring:
VSS Monitoring is the world leader in Network Intelligence Optimization, providing a visionary, systems approach for optimizing and scaling the connectivity between network switching and the network intelligence universe of analytics, security, and acceleration tools. VSS network intelligence optimization systems improve tool usage, simplify operations, increase efficiencies and greatly enhance ROI. The company is headquartered in San Mateo, Calif. For more information, visit http://www.vssmonitoring.com.

Posted by: Andy Huckridge | February 16, 2012

The “Flattening Effect” and Network Intelligence

How To Stay Ahead of the Growing Trend for “Anywhere, Anytime” Data

by Terence Martin Breslin, CEO, and Andy Huckridge, Director of Marketing

12/15/2011

The immense growth of IP-based data traffic and applications on mobile devices is pushing the adoption of 4G technologies and fueling the migration to faster data rates. Service providers busy with migration strategies are upgrading existing networks as stop-gap measures to allow an all-IP based services platform. Carriers and handset vendors are differentiating their offerings by rolling out application portals while providing improved monetization and ARPU. User mobility is pushing the trend for “anywhere, anytime” data technology, while applications are driving the subscriber need.

Continuous advancement in technology powers much of the above, including additional overall data traffic and the migration to mobile connectivity/broadband. Applications are becoming pervasive, with the subscriber in control of the what, where and how. To continue to drive down costs, operators are moving to an all-IP core, attempting to reduce network complexity and in some cases altogether outsourcing the management of their networks.

With so much change happening in the network, the migration itself doesn’t occur overnight. In the near future, network operators will need to combine next-generation systems and devices with a supportable hybrid network that interconnects various types of existing platforms. Because the network has simultaneously become both flatter and more complex, the journey toward a converged all-IP network comes with an entirely new set of network performance and management philosophies to be adopted and developed by IT organizations. To maintain and manage the subscriber experience, real-time monitoring, troubleshooting and provisioning of the network must be implemented strategically and methodically. Real-time monitoring of network traffic has proven crucial to diagnosing and analyzing network performance and services, and consequently the subscriber’s quality of experience (QoE).

Out With the Old, In With the New – Problems Associated With Legacy Monitoring Schemes

Fragmented monitoring approaches increase problems associated with performance and complexity. Several of these problems have emerged with the growing complexity of data on the network and the accumulation of outdated network monitoring components. Due to the constant push for more efficient connectivity, traditional network monitoring approaches are inadequate for managing network components on enterprise and service provider infrastructures.

The aforementioned traditional approach improves the visibility of network performance by placing a series of tools into the network, but while it solves some problems, new issues arise. IT managers are often unable to reach a particular point in the network with multiple tools, creating a “blind spot” that makes troubleshooting inefficient and difficult. Blind spots frequently occur with the kind of management overhead found in legacy monitoring schemes: different sets of tools from different vendors, dispersed throughout the network, each with its own management software that cannot interoperate with the others, can be a recipe for disaster.

With limited access to certain points on the network, IT managers must still cope with an overflow of data. Monitoring costs rise even as network management grows less efficient, and with reduced ROI, profit suffers from the lack of fast, efficient troubleshooting. In short, a fragmented approach to network monitoring compounds both performance and complexity problems.

The “Flattening” Effect:  A Pathway for Network Intelligence Optimization to Save Time and Money

Telecom, enterprise and government network operators must develop a holistic and future-minded strategy for network monitoring and network management. They must also keep in mind the key aspects of a traffic capture solution, such as price-performance, diversity, agility and intelligent capabilities. Depending on future requirements, network operators should weigh existing macro trends, such as the “flattening” of the network, technology development and economics, when deciding on network monitoring needs.

The continuous growth of IP will accelerate the pace at which legacy systems are displaced by an all-IP network. The “flattening” effect will create more distributed IP components and broader ranges of IP services rolling out in the network, leading to more potential points of failure and increased network complexity. This opens opportunities for additional monitoring points, where the monitoring infrastructure should be “flat” and flexible across all parts of the network. The Network Intelligence Optimization framework is paving a path for a smart network-monitoring infrastructure. To sustain the increase in speed, the traffic capture layer must continue to operate at line rate in hardware, with deeper awareness of packets and applications and more dynamic traffic handling.

Today, network managers must do more with less, delivering tighter budget control while improving service delivery. The network monitoring optimization framework helps by allowing organizations to migrate from a high initial CAPEX business model to a lower, variable CAPEX model for the network-monitoring portion of the budget. With the savings, network managers can do more in other areas such as network forensics, lawful intercept, behavioral analysis and centralizing applications for compliance. Managed service providers (MSPs) have also become mainstream and are focusing on monetization of QoS/QoE, rather than solely on monitoring network elements and packets. The layered approach to network monitoring is fundamental to enabling business model differentiation in such network environments.

About the Authors

 
Terence Martin Breslin founded VSS Monitoring in October 2003. Under his leadership the company has grown into the world’s leading innovator of Distributed Traffic Capture Systems™, Protector Series™ inline load balancers / speed converters for security appliances, and network TAPs. His vision of creating a distributed systems architecture to replace the practice of using only standalone TAPs for network traffic capture has changed the practice and potential of network analysis. By providing visibility of any link in even the largest network, VSS Monitoring’s products greatly increase the ROI and productivity of networks and the people who use them.
Martin brings to VSS Monitoring extensive technology and engineering experience domestically and internationally, including directing major projects for government and international enterprises. He holds an MBA from Golden Gate University in San Francisco and a Bachelor’s degree in Computer Science from the National University of Ireland.
Andy Huckridge is a seasoned telecom industry executive, currently serving as Director of Marketing at VSS Monitoring. He also serves as an independent telecom consultant to network equipment manufacturers (NEMs), test equipment vendors and service providers, focusing on the test and measurement industry. Andy has experience overseeing various international projects in the telecom, security and next-generation space with leading companies.
  VSS Monitoring, Inc. is the leader in network traffic capture, with the world’s largest and most feature-rich family of traffic capture devices allowing IT professionals to see into the farthest reaches of even the largest networks, preventing problems from reaching end users, and greatly accelerating the ROI of network monitoring and security tools. VSS’s innovative Distributed Traffic Capture Systems and active information assurance appliances such as the Protector Series™ herald a new architecture of network monitoring, one which fundamentally improves its capability and price-performance. The company is headquartered in San Mateo, California.
Posted by: Andy Huckridge | February 16, 2012

VSS Monitoring Targets 4G/LTE Monitoring

VSS Monitoring unveiled a new framework for monitoring traffic and services across 4G/LTE networks.

Service providers have traditionally employed a flat architecture of probes and testers for monitoring their 2G and 3G network traffic locally. To handle more complex 4G networks, VSS Monitoring is developing a hierarchical framework capable of providing a network-wide view of traffic in real time as well as packet-level visibility at any network node. The company said its “Network Monitoring 2.0” adopts a systems approach with no single point of failure. The system promises low latency and scales to a worldwide distributed network.

Essentially, by decoupling the monitoring infrastructure from the core network, the traffic capture system can act as a universal access layer for all monitoring tools. The traffic capture layer is possible because the network taps are distributed and intelligent.

“Mobile operators clearly need a solution optimized for 4G monitoring,” said Andy Huckridge, VSS Director of Marketing.

VSS Monitoring’s Optimizer 2016 is an intelligent traffic capture device for networks from 10 Mbps to 10 GigE. It provides session-aware load balancing, a technology that maintains network session integrity to the monitoring infrastructure, allowing users to deploy multiple one Gigabit analytical tools to monitor a 10 GigE line, ensuring full coverage and maximizing monitor ROI. Hardware-based filtering allows users to filter traffic at line rate by address and protocols; users can also create custom filters. It supports VSS’ intelligent stacking technology, “vStack+”, which enables traffic capture devices to be deployed in a redundant, low-latency mesh for total, dynamic, fault-tolerant visibility that scales to even the largest networks.
http://www.vssmonitoring.com
01-Jun-10

Posted by: Andy Huckridge | February 16, 2012

Assuring VoIP Quality for Triple Play: It’s a Whole New Ballgame

by Beth Wingerd and Andy Huckridge, M.Sc. (Telecoms)

5/19/2005

 

Guaranteeing VoIP quality across triple play deployments is a tricky process. First, routers and other standard IP equipment were not originally designed to deliver voice or video services. Second, service providers creating complex integration topologies and architectures are often using legacy equipment. Finally, problems such as latency, jitter and packet loss — while not of concern in a data-only network — can significantly affect the quality of both voice and video.

Is there a solution? Ensuring that voice, video and data applications run across a network in harmony is really a function of testing and diagnostics. Most at risk is voice quality: after all, if a file transfer is delayed by half a second, no one will even notice. But a half-second delay on voice will often result in immediate complaints — and if it happens often enough, will simply result in the loss of a customer.

But let’s back up a minute. Before testing even starts, it’s critical to know what is considered an acceptable level of voice quality. For a VoIP call to be considered POTS quality, quality measurements must meet the following generally accepted standards:

Call setup / post-dial delay: < 3 s
Latency: < 150 ms
Jitter: < 10 ms
Loss: < 10^-5 BER

If these QoS parameters are met, voice quality generally achieves a MOS score — the de facto industry standard for measuring voice quality — of 4.0 or higher, which indicates toll-quality voice. The factors affecting MOS include latency, jitter and loss, as well as codec, tandem encoding/decoding, background noise and echo. Maintaining a MOS score of 4.0 or higher requires controlling and testing these factors — as well as a host of other issues — in the equipment, within the network and in the service delivery.
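
As a back-of-the-envelope illustration only (the thresholds come from the list above; a real MOS calculation uses the ITU-T E-model, which is not reproduced here), a monitoring script might gate calls against these targets as follows:

```python
def meets_pots_targets(setup_s, latency_ms, jitter_ms, ber):
    """True if a call's measurements satisfy the generally accepted targets above."""
    return (setup_s < 3.0
            and latency_ms < 150.0
            and jitter_ms < 10.0
            and ber < 1e-5)

# A call meeting all four targets is likely to score MOS >= 4.0 (toll quality).
print(meets_pots_targets(setup_s=1.2, latency_ms=95, jitter_ms=4, ber=1e-6))   # True
print(meets_pots_targets(setup_s=1.2, latency_ms=210, jitter_ms=4, ber=1e-6))  # False: latency
```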

Covering the Basics

On the equipment side, savvy network equipment manufacturers (NEMs) are testing the basic functionality of their equipment in the labs before it even reaches their customers. Such testing includes ensuring that voice is prioritized ahead of data.

NEMs are also relying on test equipment to determine the ROI of the equipment for their service provider customers. This means determining specific quality and capacity metrics such as how many voice channels the equipment can support as well as how many packets per second can be processed. They are also pre-testing the robustness of their equipment to withstand security attacks that are designed to kill or crash the systems.

On the network side — particularly with enterprise networks — it is critical to test a network prior to deployment to ensure that it can support high quality VoIP. If service providers try to deploy VoIP for an enterprise customer without first testing that customer’s network, they will be blamed when service quality is not acceptable — even if the problem lies within the customer’s network. And with an increasing number of VoIP providers available it’s easy for a customer to switch providers if quality does not meet expectations from day one.

When pre-testing a network, the test systems need to simulate flooding the network with high call volumes to determine the effect on call quality. When moving to trial a service during a First Office Application, or bringing a new customer online during the provisioning verification stage, the ability to emulate the customer’s traffic is essential. Service providers need testing equipment that allows them to simulate the transmission of packets using the customer’s actual IP addresses and subnets.

Testing a network before service turn-up is also crucial to uncovering configuration mistakes that could lead to quality problems. For instance, if gateways are not configured for proper IP address to phone number translations, calls cannot be set up. Router configurations must be accurate to enable optimal forwarding and QoS for voice traffic, which can be accomplished by implementing DiffServ prioritization and queuing, and MPLS traffic engineering. In addition, queuing choice and configuration are critical, because incorrect queuing can cause inappropriate packet delays.

Post-Deployment: A Look at Services

The services side mainly involves assuring quality after the service is in the production phase. There are three aspects of a good VoIP service assurance solution:

* Fault management, or the identification of fault events (via alarms) in the network and the correlation of these events

* Performance management, or the collection and trending of performance-related data over time for customer reporting and proactive monitoring purposes

* Diagnostics, or the ability to identify the exact cause and required repair for a known performance or fault event

By far the most important aspect of the three listed above is diagnostics. Good diagnostic solutions can decrease a service provider’s mean time to repair by 50 percent — while also allowing less skilled technicians to easily diagnose network problems.

Uncovering the Various Layers

Having a good diagnostic solution is particularly critical when supporting triple play services, where the same transmission equipment is often used to support a variety of different applications. This makes it very difficult to uncover what equipment is causing a particular problem. That is why one of the most critical features of a good diagnostic system is the ability to diagnose problems across the various layers of the network using one integrated tool.

For instance, a quality issue in Layer 7 — the application layer — may not be caused by a problem in that layer. It could be caused by a Layer 1 problem — that is, a physical layer problem — that was created by a backhoe that brought down the connection to a DSLAM. Or it could have been caused by a Layer 2 problem — for instance, by a glitch in the Ethernet switch that was causing traffic to be sent to the wrong location. With the ability to run tests at different layers — as well as within layers — technicians can quickly isolate the problem to a specific protocol or equipment without having to be IP experts themselves. Even further diagnostics can also be conducted to isolate problems to a specific network segment.
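
That bottom-up triage is straightforward to automate. The fragment below is a minimal sketch under stated assumptions (the Linux sysfs path, interface name and target host are illustrative, and it covers only Layers 1 through 3); like a cross-layer diagnostic tool, it stops at the lowest failing layer:

```python
import subprocess
from pathlib import Path

def diagnose(host: str, iface: str = "eth0") -> str:
    """Bottom-up triage: check the physical link first, then IP reachability."""
    # Layer 1/2: is the interface up? (Linux sysfs; the path is OS-specific.)
    state = Path(f"/sys/class/net/{iface}/operstate").read_text().strip()
    if state != "up":
        return f"Layer 1/2 fault: {iface} is {state}"
    # Layer 3: can we reach the far end?
    ping = subprocess.run(["ping", "-c", "3", "-W", "2", host], capture_output=True)
    if ping.returncode != 0:
        return f"Layer 3 fault: {host} unreachable"
    return "Layers 1-3 healthy; escalate to Layer 4-7 (application) analysis"

print(diagnose("192.0.2.10"))
```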

Without the ability to diagnose services issues across layers, resolving them becomes much more complex. The inability to correctly diagnose network problems can actually cost service providers money. For instance, one Ethernet service provider was paying a customer rebates every month because the customer claimed that the service provider was not meeting the quality standards outlined in its SLA. Using diagnostic tools, the service provider monitored the customer’s traffic and discovered that the customer was exceeding the agreed-upon bandwidth, which was in turn causing dropped packets. The service provider no longer had to pay rebates, the customer was able to reconfigure its equipment, and the problem was easily fixed.

The bottom line is that offering triple play services — whether they are delivered to enterprises or residential customers — introduces complexity into a network, which in turn makes the need for network testing critical. As more and more VoIP providers enter the marketplace, ensuring high quality VoIP services from day one of deployment will be imperative to retaining customers. After all, isn’t creating sticky customers the goal of offering triple play services to begin with?

 

About the Authors

  Beth Wingerd, Senior Director, IP Services and Andy Huckridge, Product Marketing Manager, IP Telephony, for Spirent Communications Systems

About Spirent Communications

  Spirent Communications  is a global provider of integrated performance analysis and service assurance systems that enable the development and deployment of next-generation networking technology such as Internet Telephony, broadband services, 3G wireless, global navigation satellite systems, and network security equipment. Spirent’s solutions are used by more than 1,500 customers in 30 countries, including the world’s largest equipment manufacturers, service providers, enterprises and governments.
Posted by: Andy Huckridge | February 16, 2012

Codenomicon Appoints New CEO and Vice President of Marketing

Codenomicon, which specializes in security and quality testing software, announced the appointment of David Chartier as its new CEO. Chartier brings 20 years of technology industry experience that includes serving as chairman of Maxware, which was subsequently acquired by SAP in 2007. He has also held CEO positions for startup companies that include IntelliSearch and Computas, and has served as chairman of Active ISP. Chartier founded InfoStream and led the company through a successful IPO in 1999.

Codenomicon also announced the appointment of Andy Huckridge as vice president of marketing. Huckridge most recently served as director of NGN solutions for Spirent Communications, a Codenomicon customer. Andy is closely engaged with industry bodies and forums, including the MultiService Forum where he has served as chairperson of the Interoperability Working Group & NGN Certification Committee. Before Spirent, Huckridge was the director of product management for Centile and the director of marketing for 8X8, Inc.
http://www.codenomicon.com
30-Jun-08

Posted by: Andy Huckridge | February 16, 2012

Test for VoIP security

Converge Magazine. July 1st, 2006

The growing popularity of IP telephony services is stimulating concern over VoIP security, with potential security threats including attacks that disrupt service and attacks that steal confidential information. Denial-of-service (DoS) attacks, viruses, worms and legal, but unwanted, spam impact the quality of VoIP services or make them unavailable. DoS attacks can be specific to known VoIP protocols or applications, or they can be general in nature.

Given the distributed nature of VoIP networks, there is also the potential for intruders to eavesdrop on confidential phone conversations. Attackers might try to monitor the signaling to track call patterns and discover identity, affiliation, presence and usage of callers. Or, the data stream itself might be recorded. Calls also can be hijacked to gain access to private information exchanged during sessions between a VoIP endpoint and the network. The hijacked transactions may be signaling, media or both.

Security testing for these next-generation telephony networks should be sophisticated enough to ensure that VoIP networks can withstand real-world conditions. Furthermore, VoIP security testing should move beyond mere conformance testing and into performance-centric testing.

At a basic level, VoIP security testing should establish that the equipment conforms to the high-level call-signaling protocols, either H.323 or Session Initiation Protocol (SIP), that define VoIP networks. If the technology does not meet the specifications, it opens loopholes that can be exploited by hackers.

Testing should be performed under real-world, voice-stream load generation to assess the robustness of the network. Many VoIP network components may maintain a secure posture under artificially light traffic loads generated in a test environment, but fail under the strain of live service deployment. Meeting the specification with conformance testing and assuring the network will not fail under real-world traffic loads are minimal standards for network security.

Advances in security testing mirror those in voice-quality testing. The key to assuring network performance is drilling down from the high-level VoIP signaling protocols to test specific layers and component standards that address security, user authentication and encryption. An effective security testing methodology should analyze at the media and transport layers, as well as the signaling layer.

Security test capabilities will move beyond gateway signaling and begin testing implementations of transport layer security (TLS) and secure real-time protocol (SRTP) that address the transport layer and the media stream, respectively. The TLS standard provides authentication on both ends of a VoIP call to counter DoS and flooding attacks. The SRTP standard encrypts the transmitted data or the conversation part of the VoIP call to prevent eavesdropping.
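
A test harness can verify, for example, that a SIP element actually negotiates TLS before any signaling is trusted. Below is a minimal sketch using Python's standard ssl module (the hostname is hypothetical; 5061 is the conventional SIP-over-TLS port):

```python
import socket
import ssl

def check_tls(host: str, port: int = 5061, timeout: float = 5.0):
    """Complete a TLS handshake and report the negotiated version and cipher."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            return tls.version(), tls.cipher()

print(check_tls("sip.example.com"))  # e.g. ('TLSv1.3', ('TLS_AES_256_GCM_SHA384', ...))
```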

Although security testing is a natural follow-on to voice-quality assurance testing in stimulating market adoption of VoIP telephony, security measures place additional burdens on network performance. IT managers should deploy more intelligent firewalls that are VoIP aware, along with other deep-packet inspection engines, such as intrusion-detection and intrusion-prevention systems to ensure that malicious traffic does not impact communications.

Testing is necessary to properly assess the latency impact that this additional inspection is adding to the network. Delay and jitter are the two biggest impacts on VoIP voice quality. Here again, real-world testing should emulate high loads of converged traffic.
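
Jitter, at least, is inexpensive to measure in software. RFC 3550 (RTP) defines the standard interarrival jitter estimator as an exponentially smoothed average of transit-time differences; the sketch below is a direct transcription of that formula:

```python
def rfc3550_jitter(transit_ms):
    """Interarrival jitter per RFC 3550 sec. 6.4.1: J += (|D(i-1,i)| - J) / 16.

    transit_ms: per-packet transit values (arrival time minus RTP timestamp), in ms.
    """
    j = 0.0
    prev = None
    for t in transit_ms:
        if prev is not None:
            j += (abs(t - prev) - j) / 16.0  # smoothed |difference in transit time|
        prev = t
    return j

# Five packets with variable transit delay; jitter should stay well under 10 ms.
print(round(rfc3550_jitter([20.0, 22.5, 19.0, 30.0, 21.0]), 2))
```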

Andy Huckridge is product marketing manager at Spirent Communications

Posted by: Andy Huckridge | February 15, 2012

Overcoming Deployment Limitations of IMS

by Andy Huckridge, M.Sc. (Telecoms)

Director of Marketing, IMS Solutions

8/22/06

There’s little doubt that the future of IMS (IP Multimedia Subsystem) is bright. According to ABI Research, IMS services market revenue is expected to reach somewhere between $50 billion and $90 billion by 2010. There are dozens of IMS trials underway around the world, and it is difficult to attend a telecom conference or seminar without IMS being discussed.

IMS is expected to play a key role in the convergence of telecom services and offers the opportunity for revenue growth by attracting new customers and increasing the average revenue per user. The IMS architecture, based on IP, allows for the delivery of services to subscribers independent of device or network type – regardless of location. It can be used to provision true mobility with services that will follow the user from a computer, mobile device or even television through a single account.

While this convergence will facilitate easier access to services and a variety of revenue generating applications, it will also open the door to potential complaints from customers about networks failing, poor media, and less-than-satisfactory performance.

IMS is a continually evolving series of protocols and interface specifications designed to facilitate standards-based fixed/mobile, voice/data and voice/video convergence. With dozens of specifications — not all of which are standards-based, or set to become standards any time soon — these protocols may create some ambiguity in the development and implementation of communications services. However, this has done little to dampen the industry’s collective spirit. Network convergence presents an unprecedented opportunity for a new wave of network and handset upgrade cycles.

The transition to a full-IP network will be characterized by a series of intermediate steps over the course of several years as equipment manufacturers work to meet the technical and business challenges that their customers — the service providers — must tackle. This complex process will likely be best addressed through partnering and consolidation.

Curbing IMS Complexity 

The inherent complexity of IMS — and its numerous standards, interfaces and protocols — may present a stumbling block for many. Implementing IMS is far from simple. Prototyping IMS services, and predicting how their traffic will impact a network, will be difficult to manage initially.

Complex interoperability requirements such as handshaking, media conversion and synchronization must be resolved in order to guarantee quality of service. These issues underscore the need for a deep understanding of underlying technologies as well as the importance of interoperability testing, feature testing, performance and scalability assessment, and Quality of Experience (QoE) and service management. Each of these phases of testing will play a critical role in the successful deployment of IMS-based networks.

Ensuring Success through Multi-Level Testing

While network equipment manufacturers begin to create Network Elements to handle IMS protocols, service providers must be certain these systems are configured properly and provide consistent, integrated services. This process will include testing equipment to ensure functionality, interoperability, scalability, security and fault tolerance. The uncertainty associated with IMS specifications will likely impact the amount of system testing and tuning required during early trials and ongoing service management. Routers/switches, gateways, session border controllers, softswitches, DSLAMs, PBXs and endpoints will have to support a variety of protocols. Understanding the types of service that a network may handle will help to determine potential issues and what could impact the delivery of service.

Simply testing the operation of IMS infrastructure is not enough. QoE must also be examined to verify that multimedia applications, including voice and video, are meeting customer demands. Testing should also continue into the initial phase of an IMS implementation so that service providers can assess any issues from the field and guarantee that data has been correctly formatted and is delivered to the simulated end point.

Essential for proper simulation, emulation and testing is the use of equipment that understands session state. Simply testing individual message packets to review standards compliance is a recipe for failure in an IMS environment. The complex handshaking and session management requirements of a real-world IMS network demand that such sessions, bridging multiple networks, be simulated. This must occur with realistic appraisals of performance in session setup, operation and teardown, as well as of the corrective actions taken when something goes wrong.

Traffic simulation, equipment and device emulation, and other tests are required to verify that the IMS network and associated infrastructure can accommodate this type of traffic. Testing and stressing different formats of SIP headers, and handshakes across wireless and wireline networks and circuit-switched and packet-switched networks, will be the first stage of testing. This will ensure that handshakes occur properly; that errors, drops and retries are responded to in a timely manner using agreed-upon parameters; and that call accounting, authorization and access controls meet required policies.

Integration testing is another key step in this process. It must not only be performed in test environments but in operational networks as part of validating the work of partners and testing the deployment of new services.

Given the rate at which new IMS protocols are introduced and revisions are made to existing protocols, regression testing is critical. It enables network equipment manufacturers and service providers to ensure that the IMS platform being deployed can carry new and revised protocols through the operational network without introducing new problems.
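
Even a lightweight, scriptable smoke test can anchor such a regression suite. The sketch below is a toy illustration, not a substitute for a full test platform (the target host is hypothetical): it sends a SIP OPTIONS request over UDP and reports the status line, a quick check after each new software load that an element still answers basic signaling.

```python
import socket
import uuid

def sip_options_probe(host: str, port: int = 5060, timeout: float = 2.0) -> str:
    """Send a SIP OPTIONS request over UDP and return the response status line."""
    call_id = uuid.uuid4().hex
    local = socket.gethostbyname(socket.gethostname())
    request = (
        f"OPTIONS sip:{host}:{port} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP {local}:5060;branch=z9hG4bK{call_id[:8]}\r\n"
        f"Max-Forwards: 70\r\n"
        f"From: <sip:probe@{local}>;tag={call_id[:6]}\r\n"
        f"To: <sip:{host}>\r\n"
        f"Call-ID: {call_id}@{local}\r\n"
        f"CSeq: 1 OPTIONS\r\n"
        f"Content-Length: 0\r\n\r\n"
    )
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request.encode(), (host, port))
        data, _ = sock.recvfrom(4096)
    return data.decode(errors="replace").splitlines()[0]

print(sip_options_probe("ims-core.example.net"))  # expect "SIP/2.0 200 OK"
```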

The Untested Potential 

The IP Multimedia Subsystem is poised for tremendous growth – and the rewards will affect the entire telecommunications industry. Service providers and carriers will create compelling new services for their consumer and business customers based on fixed/mobile convergence, device-independent mobility, the delivery of advanced VoIP and video services, Internet integration, Cellular Push-to-Talk, and more.

However, the ability to successfully deploy IMS and maintain a high QoE is clearly dependent on rigorous testing of all IMS Network Elements and of the network infrastructure itself. Because of the complexity associated with IMS, service providers will face an entirely new set of challenges in the network management arena. The right testing during deployment as well as ongoing monitoring and service management can support an IMS network and applications that lead to increased revenue and subscriber loyalty.

About the Author

  Andy Huckridge is director, IMS Solutions Marketing at Spirent Communications, where he leads Spirent’s IMS strategy for the VoIP market. His responsibilities include business planning and market development. Andy has worked in the communications industry for 12 years, including roles at Centile, Inc. and 8×8, Inc., where he was director of product marketing, and has a broad background in defining and marketing products in the semiconductor and IP telephony space. He holds bachelor’s and master’s degrees in Telecommunication Engineering from the University of Surrey, England.

Andy is active in various Forums including the Multi-Service Forum, where he is Chairperson of the Interoperability Working Group.

About Spirent Communications

  Spirent Communications  is a global provider of integrated performance analysis and service assurance systems that enable the development and deployment of next-generation networking technology such as Internet Telephony, broadband services, 3G wireless, global navigation satellite systems, and network security equipment. Spirent’s solutions are used by more than 1,500 customers in 30 countries, including the world’s largest equipment manufacturers, service providers, enterprises and governments.
Posted by: Andy Huckridge | February 15, 2012

Reducing 4G/LTE Testing Time with Trial Acceleration

6/1/2011

by Andy Huckridge, M.Sc. (Telecoms)

Director of Marketing

The GSA (Global mobile Suppliers Association) recently published an update to its Evolution to LTE report. The findings: more than 200 operators are investing in LTE, with 154 LTE network deployments in progress or planned in 60 countries, including 20 networks which have commercially launched. A further 54 operators in 20 additional countries, according to the GSA, are engaged in LTE technology pilot trials or tests ahead of formal commitments to deploy commercial networks.

The above numbers don’t include the WiMAX, WiMAN and/or WiBro tests underway by Intel, KDDI, KT, LG, Sprint, Samsung and others. Complicating this further is the adoption of Voice over LTE (VoLTE), for which standards are still developing, such as the GSMA’s IR.92 / OneVoice recommendation. With an estimated 140-plus 4G trials of all kinds underway, it’s safe to say there’s a whole lot of testing going on.

LTE trials usually involve a lot of repetitive cabling/un-cabling and configuration of both the test network and the device under test itself. Very often there is more than one device in the test, each needing to be swapped in and out multiple times. Many test protocols call for serialized testing, whereby dozens or hundreds of steps must be followed in order, the better to simulate real-world interoperability conditions. And interoperability can be complex given the need to work well with 2G/3G, whether GSM or CDMA, and with analog systems.

In talking with those implementing tests at large carriers and listening to others at industry meetings, I’ve heard that more than 50% of testing time is consumed by these standard operations as opposed to the actual testing itself. This has been the nature of the beast for quite some time for trials and evaluations.

Trial Acceleration

More recently there has been a significant advance in 4G telecoms testing for how trials and evaluations are carried out. The new method is called Trial Acceleration. It helps service providers to get through the evaluation phase and to market faster, and it allows much more control when working with existing products and new software loads.

The main breakthrough comes in eliminating the repeated wiring and rewiring involved in configuring equipment and the test network: wiring needs to be done just once per physical test scenario. According to one leading worldwide mobile carrier, timeline gains of up to 60% are not uncommon, mainly due to the heavily repetitive nature of 4G trial and evaluation testing.

Another problem in trialing is that of trace capture and analysis. This brings into consideration the issues with switch SPAN ports, switches and hubs as a means of accessing the network for packet capture.

Here is a test case for LTE with repeated swapping-in or swapping-out of multiple same-function network elements, such as the P or S Gateways, the Policy and Charging Rules Function or the Mobility Management Entity function as well as logical elements within the IP Multimedia Subsystem core:

LTE pooling test scenario without Trial Acceleration

The Trial Acceleration System is in many ways similar to a switch or router, but with several additional capabilities. Unlike a switch or router, the system acts as a network of intelligent TAP points connected as a system, sitting in-line between the mobile core and the system under test. The monitoring system therefore has access to all levels of communication, from OSI Layer 2 upwards, overcoming many of the shortcomings of switches: their lack of visibility into a test scenario, and the way SPAN or mirror ports mask Layer 2 information such as jitter.

Initial Connection to the Network

The Trial Acceleration System takes the place of the device or devices under test. All wiring is carried out as usual, but terminates in the Trial Acceleration System instead of in multiple wirings to several similar or different network elements. Bi-directional traffic flows through the Trial Acceleration System between the core and the specific device under test at that time for that scenario. Results are logged and traces can be collected. All traffic is made available to a special monitoring port to which analyzers or monitoring tools can be connected.

In practice, the Trial Acceleration System is reconfigured through a management GUI for each test run, with each network element that is not currently involved in the testing being isolated and unable to see any third-party traffic or packets. With several dedicated monitoring ports built in, trials can now be carried out more comprehensively, with greater depth of monitoring and results analysis:

LTE pooling test scenario after inclusion of Trial Acceleration System

Benefits include:

* More in-depth reporting with greater monitored network and interface coverage.
* Trace the path of a packet through the network end-to-end.
* Minimize manual equipment configuration.
* Reduce switch SPAN / mirror port contention.
* Effectively use Layer 2 information such as jitter and latency in diagnosing potential QoS / QoE issues.
* Speed up fault-finding when systems are not working as expected.
* Monitor all network nodes / interfaces at the same time.
* Attach multiple analyzer devices at the same time.

Given how much testing time is devoted to physical connectivity tasks, techniques like Trial Acceleration can help carriers complete tests faster.

About the Author

  Andy Huckridge is Director of Marketing at VSS Monitoring, Inc.
VSS Monitoring, Inc. is the leader in network traffic capture, with the world’s largest and most feature-rich family of traffic capture devices allowing IT professionals to see into the farthest reaches of even the largest networks, preventing problems from reaching end users, and greatly accelerating the ROI of network monitoring and security tools. VSS’s innovative Distributed Traffic Capture Systems and active information assurance appliances such as the Protector Series™ herald a new architecture of network monitoring, one which fundamentally improves its capability and price-performance. The company is headquartered in San Mateo, California.
