Posted by: Andy Huckridge | February 13, 2012

Vocabulary of Testing

On the Testing Edge was a column I wrote for TMC’s IMS Magazine during 2008. It featured education, thought leadership and news relevant to the Test & Measurement industry. It’s interesting to note that now, in 2012, IMS is actually coming back to life! Hence the re-posting of these earlier articles…

IMS Magazine


April 2008 | Volume 3 / Number 2

By Andy Huckridge, M.Sc.

In the first edition of On the Testing Edge, I covered why services are so important for Next Generation Networks and many of the issues testing can overcome to facilitate a trouble-free roll-out. This month we’re going to dig further, taking a look at the vocabulary of testing and the different categories of testing, and following up with how they relate to the product development life cycle.

Precision: The degree of refinement with which an operation is performed or a measurement stated. In simple terms: can the same test be run with the same results observed? In a capacity test, how close are the results each time the test is run?

Accuracy: The degree of conformity of a measurement to a standard or a true value. In simple terms, this means how well a value can be determined: in voice quality metric testing, for example, the difference between reporting a MOS score of 3.5 versus 3.51.

Reproducibility: The ability to produce the same outcome given a controlled set of variables. In a test situation, this is the ability of a test to produce the same bug time after time, which is often a crucial factor if a bug is to be found and subsequently remedied.
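The distinction between precision and accuracy can be made concrete with a few numbers. Below is a minimal sketch using hypothetical repeated MOS readings from a voice-quality probe, where the "true" MOS of the reference call is assumed to be 3.50:

```python
import statistics

# Hypothetical repeated MOS readings from a voice-quality probe.
# The true MOS of the reference call is assumed to be 3.50.
TRUE_MOS = 3.50
readings = [3.51, 3.52, 3.51, 3.50, 3.52]

# Precision: how tightly the repeated measurements cluster together
# (a small standard deviation means high precision).
precision = statistics.stdev(readings)

# Accuracy: how close the average measurement is to the true value.
accuracy_error = abs(statistics.mean(readings) - TRUE_MOS)

print(f"spread of repeated readings (precision): {precision:.4f}")
print(f"offset from true value (accuracy):       {accuracy_error:.4f}")
```

A probe could be very precise (readings cluster tightly) while still being inaccurate (the cluster sits away from the true value), and vice versa.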

Independent observation, verification and validation: When testing, it is often not enough to rely on the same person or test setup to both find and diagnose a problem. The same goes for a programmer who can’t find his or her own bug; they are often too close to the problem. Thus, it’s important to separate the observation and verification phases in testing.

Lord Kelvin offered commentary that remains essential, both for understanding a problem and for improving a product or curing a defect / bug:

  • “If you cannot measure it, you cannot improve it.”
  • “To measure is to know.”

Most Common Types of Testing
Black box testing treats the software as a black box, without any understanding of how the internals of the box behave. This level of testing usually requires thorough test cases to be provided to the tester, who can then simply verify that for a given input, the output value (or behavior) is the same as the expected value specified in the test case.
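A black-box test case is just an input paired with an expected output. The sketch below uses a hypothetical number-normalization function as the system under test; the tester never looks inside it, only at the supplied test cases:

```python
# Black-box test sketch: the tester only knows the test cases
# (input -> expected output), not how the system under test works.
# `normalize_number` is a hypothetical system under test.

def normalize_number(dialed: str) -> str:
    """System under test; its internals are opaque to the tester."""
    digits = "".join(ch for ch in dialed if ch.isdigit())
    return "+1" + digits if len(digits) == 10 else "+" + digits

# The test cases provided to the tester: input and expected value only.
test_cases = [
    ("(408) 555-0100", "+14085550100"),
    ("14085550100", "+14085550100"),
]

for given, expected in test_cases:
    assert normalize_number(given) == expected
print("all black-box cases passed")
```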

White box testing, by contrast, is when the tester has access to the internal data structures, code, and algorithms. For this reason, unit testing and debugging can be classified as white-box testing; it usually requires writing code, or at a minimum stepping through it, and thus requires more knowledge of the product than black-box testing does.
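Because the white-box tester has read the code, the test can target a specific internal path, such as a boundary condition in a retry counter. The class below is a hypothetical call-setup component, invented purely to illustrate the point:

```python
# White-box test sketch: the tester has read the code and knows there
# is an internal retry counter with a hard limit, so the test drives
# that internal path to its boundary. All names here are hypothetical.

class CallSetup:
    MAX_RETRIES = 3

    def __init__(self):
        self._retries = 0  # internal state a white-box tester inspects

    def attempt(self, succeeds: bool) -> bool:
        if succeeds:
            self._retries = 0
            return True
        self._retries += 1
        if self._retries >= self.MAX_RETRIES:
            raise RuntimeError("call setup failed after max retries")
        return False

# Exercise the retry branch up to its limit, checking internal state
# along the way (knowledge a pure black-box tester would not have).
setup = CallSetup()
assert setup.attempt(False) is False and setup._retries == 1
assert setup.attempt(False) is False and setup._retries == 2
try:
    setup.attempt(False)
    raise AssertionError("expected RuntimeError at retry limit")
except RuntimeError:
    print("retry limit enforced as expected")
```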

In recent years the term gray box testing has come into common usage. This involves having access to internal data structures and algorithms for purposes of designing the test cases, but testing at the user, or black-box level.

Functional testing covers how well the system executes the functions it is supposed to execute, which can include placing a call or performing a transfer on a PBX, for example. Functional testing covers the obvious surface-level functions, as well as their back-end operation.
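A functional test verifies each advertised function end to end. The sketch below checks "place a call" and "transfer a call" against a toy PBX model; the PBX class and its methods are hypothetical stand-ins for a real system:

```python
# Functional-test sketch: verify that a (simulated) PBX executes its
# advertised functions: placing a call and transferring it.
# The Pbx model here is a hypothetical stand-in.

class Pbx:
    def __init__(self):
        self.calls = {}     # call_id -> currently connected extension
        self._next_id = 0

    def place_call(self, extension: str) -> int:
        self._next_id += 1
        self.calls[self._next_id] = extension
        return self._next_id

    def transfer(self, call_id: int, new_extension: str) -> None:
        if call_id not in self.calls:
            raise KeyError("no such call")
        self.calls[call_id] = new_extension

pbx = Pbx()
call_id = pbx.place_call("1001")
assert pbx.calls[call_id] == "1001"   # function: place a call
pbx.transfer(call_id, "2002")
assert pbx.calls[call_id] == "2002"   # function: transfer the call
print("functional checks passed")
```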

Conformance testing is used to make sure an implementation of a protocol actually conforms to the relevant standard. This type of testing facilitates better system / interoperability testing later on in the testing life-cycle.
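A conformance check compares what a device sends against what the standard mandates. The sketch below checks a SIP-style request for the header fields RFC 3261 requires in every request; it is a deliberately simplified subset of the real conformance rules:

```python
# Conformance-test sketch: check that a SIP-style request carries the
# header fields the standard makes mandatory. This is a simplified
# subset of RFC 3261's actual requirements.

MANDATORY_HEADERS = {"To", "From", "CSeq", "Call-ID", "Max-Forwards", "Via"}

def check_conformance(raw_request: str) -> set:
    """Return the set of mandatory headers missing from the request."""
    lines = raw_request.split("\r\n")
    present = {line.split(":", 1)[0].strip()
               for line in lines[1:] if ":" in line}
    return MANDATORY_HEADERS - present

request = (
    "INVITE sip:bob@example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP client.example.com\r\n"
    "To: <sip:bob@example.com>\r\n"
    "From: <sip:alice@example.com>\r\n"
    "Call-ID: a84b4c76e66710\r\n"
    "CSeq: 1 INVITE\r\n"
)

missing = check_conformance(request)
print("missing mandatory headers:", missing or "none")
```

Catching a missing Max-Forwards here, in isolation, is far cheaper than discovering it later as a mysterious failure during multi-vendor interoperability testing.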

Capacity / Stress / Throughput / Load testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. Stress testing often refers to tests that put a greater emphasis on robustness, availability, and error handling under a heavy load, rather than on what would be considered correct behavior under normal circumstances.
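Pushing beyond normal operational capacity to a breaking point can be sketched as a simple ramp. In the hedged example below, `system_under_test` is a stand-in for a real device, and its 500 calls/sec capacity limit is an arbitrary assumption:

```python
# Load-test sketch: ramp offered call attempts per second against a
# simulated system until it breaks, recording the last stable rate.
# `system_under_test` is a hypothetical stand-in for a real DUT; its
# capacity limit of 500 calls/sec is an arbitrary assumption.

def system_under_test(calls_per_second: int) -> bool:
    """Pretend DUT: accepts load up to a fixed capacity, then fails."""
    return calls_per_second <= 500

def ramp_to_breaking_point(start: int = 100, step: int = 50) -> int:
    """Increase load step by step; return the last rate that held."""
    rate = start
    last_stable = 0
    while system_under_test(rate):
        last_stable = rate
        rate += step  # deliberately push beyond normal capacity
    return last_stable

print("last stable rate:", ramp_to_breaking_point(), "calls/sec")
```

In a real stress test, the interesting observations are what happens past the breaking point: does the system shed load gracefully, recover, or crash outright.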

Interoperability testing generally appears at the system level, especially in complex telecoms systems like IMS. Most often just a single call or two (or a service interaction) is used to verify that two systems are interoperable.
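The "single call" style of interoperability check can be sketched as one stack placing a call through another vendor's stack and verifying it completes. Both stacks below are hypothetical simulations, invented for illustration:

```python
# Interoperability-test sketch: stack A places a call through stack B
# (a different "vendor") and we verify it completes end to end.
# Both stacks are hypothetical stand-ins for real implementations.

class VendorAStack:
    def invite(self, target: str) -> dict:
        return {"method": "INVITE", "to": target}

class VendorBStack:
    def handle(self, request: dict) -> str:
        if request.get("method") == "INVITE":
            return "200 OK"
        return "501 Not Implemented"

a, b = VendorAStack(), VendorBStack()
response = b.handle(a.invite("sip:bob@vendor-b.example"))
assert response == "200 OK"  # one successful call: interoperable
print("interop check passed:", response)
```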

Robustness testing is in many ways similar to conformance testing, but with the added flexibility and freedom of going outside the protocol or standard: sending bad or malformed packets into a Device Under Test (DUT), for example. This is also referred to as “fuzzing” the protocol, to observe the resultant behavior of a specific network element or device.
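A minimal fuzzing loop mutates bytes of a valid message and watches how the parser reacts. The toy header parser below is a hypothetical stand-in for the protocol stack in a DUT; a clean rejection is acceptable, while any other unhandled exception would count as a robustness bug:

```python
import random

# Fuzzing sketch: mutate bytes of a valid message and feed each variant
# to a parser, watching for unhandled exceptions. `parse_header` is a
# hypothetical stand-in for the protocol stack in the DUT.

def parse_header(data: bytes) -> tuple:
    """Toy parser: 1-byte version, 1-byte type, 2-byte length."""
    if len(data) < 4:
        raise ValueError("truncated header")
    version, msg_type = data[0], data[1]
    length = int.from_bytes(data[2:4], "big")
    if version != 1:
        raise ValueError("unsupported version")
    return version, msg_type, length

valid = bytes([1, 7, 0, 32])
random.seed(42)  # fixed seed keeps the fuzz run reproducible

crashes = 0
for _ in range(1000):
    mutated = bytearray(valid)
    mutated[random.randrange(len(mutated))] = random.randrange(256)
    try:
        parse_header(bytes(mutated))
    except ValueError:
        pass         # rejected cleanly: acceptable robust behavior
    except Exception:
        crashes += 1  # anything else would be a robustness bug

print("unhandled crashes:", crashes)
```

Real fuzzers also mutate lengths, reorder fields, and replay state machines out of order, but the principle is the same: deliberately violate the standard and observe the behavior.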

Andy Huckridge is Director, NGN Solutions at Spirent Communications, where he leads Spirent’s strategy for the Multimedia Application Solutions division. His responsibilities include product management, strategic business planning & market development. Andy has worked in the Silicon Valley telecommunications industry for 12 years and has a broad background in defining and marketing products in the semiconductor, VoIP and IMS/NGN space. He holds Bachelor’s and Master’s degrees in Telecommunication Engineering from the University of Surrey, England. Andy is active in various forums including the Multi-Service Forum, where he is Chairperson of the Interoperability Working Group & NGN Certification Committee. Andy is a VoIP patent holder, an IETF RFC co-author and an inaugural member of the “Top 100 Voices of IP Communications” list.

