Calculate Test Coverage: How Coverage Can Catch The Cheat

Coverage is a measurement of how thorough a test has been, based on identified coverage elements. If no elements have been touched, the coverage is 0%; if all have been touched, the coverage is 100%. All too often, the coverage for a test is not calculated, and useful information about the quality of the test is lost. If coverage is not calculated, it is very difficult to describe how thorough a test has been. A tester may think the test is ‘good’, only to find out the hard way that many defects have survived it. That is why it is so important to calculate test coverage.

This blog describes how calculating coverage revealed how superficially a supplier had tested a product. Calculating the coverage was easy, and the result made a significant difference for the future users.

This is a story about an IT system, based on a standard system, that was about to be released to more than 50,000 users. More than 500,000 people were going to be indirectly affected by the system. Managers of the future users and the project manager were worried: there were rumours that other customers for the same system were unhappy with it. A test consultant was called in by the project manager to oversee the testing. The end of the story was that the system release was delayed by more than 7 months, the supplier accepted a penalty of about €700,000, and the users were happy with the new system.

The system had been put out to tender under an EU procurement contract. The test principle in such contracts is that the supplier provides all the test procedures. These are reviewed and accepted by the customer, and then executed by the supplier with the customer present. Such contracts are used all the time all over Europe, mainly by the public sector, where public money is being spent. The test consultant was called in to review the supplier’s test specification shortly before the planned release; only the acceptance testing was still to be executed. The consultant’s very first impression of the test specification was that it was too small for the 570 requirements, including about 30 complex use cases.

Test Specification

The test specification consisted of about 50 test procedures; for each of these, the requirements allegedly tested by the procedure were listed in its introduction.
With reference to the contract, the test consultant asked for a bi-directional traceability table. This proved difficult to get, but after a while the supplier produced a list of ‘all’ requirements with the id of the test procedure that covered each. At first glance this list looked OK, but further analysis showed that there were ‘holes’ in the requirement list; the numbers went, for example, … 18, 19, 23, 31, 32, 33 …. In fact, the test specification covered just under 50% of the requirements, and for some of the use cases, considered the most important of the requirements, the condition coverage was down to under 20%.
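This kind of gap analysis is mechanical once the traceability table exists. A minimal sketch, assuming requirements are identified by consecutive numbers and the table maps requirement ids to test procedure ids (the ids and the total of 40 requirements here are hypothetical, not the project's real data):

```python
# Hypothetical traceability table: requirement id -> test procedure id.
# In practice this would be extracted from the supplier's test specification.
traceability = {18: "TP-07", 19: "TP-07", 23: "TP-12",
                31: "TP-15", 32: "TP-15", 33: "TP-15"}

all_requirements = range(1, 41)  # assume requirements numbered 1..40

# The 'holes': requirements no test procedure claims to cover.
holes = [r for r in all_requirements if r not in traceability]

coverage = len(traceability) / len(all_requirements) * 100
print(f"Requirement coverage: {coverage:.0f}%")  # 6 of 40 -> 15%
print(f"Uncovered requirements: {holes}")
```

The same few lines scale to hundreds of requirements, which is why there is little excuse for not running the check before accepting a test specification.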

Future users were brought in to prioritise functionality areas and perform guided exploratory testing to get an idea of the general quality of the system. Nobody was impressed. Based on the results of this test, negotiations between the customer and the supplier took place, and the supplier agreed to expand the test specification to obtain an average condition coverage of 60% – not brilliant, but that was what was possible. During the test execution more than 80 defects were found, some of them rather serious defects that would have gone undetected had we not been able to argue for an expanded test specification, with coverage as the strongest argument.

Test Coverage

Coverage is an exact measurement once you agree on how to calculate it; the number obtained cannot easily be questioned.

Unless a test is fully exploratory, it is usually designed against some sort of test basis, for example textual requirements, acceptance criteria for a set of user stories or entries in a checklist. Both the test basis and most of the known and described test case design techniques make it possible to count the number of specific coverage elements that can be tested, for example equivalence partitions or code statements. The percentage of the coverage elements being touched is the coverage.

When the test is designed, it is possible to count how many of the identified coverage elements have been covered in the test specification. The coverage obtained in the design can be calculated, and this is the first guideline for the thoroughness of the test. When the test is executed according to the test specification, it is possible to count how many coverage elements have actually been touched and calculate the actual coverage obtained.
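Both numbers are simple ratios over the same set of identified coverage elements. A sketch, using made-up equivalence partitions for one imagined requirement (the element names are purely illustrative):

```python
# Hypothetical coverage elements: equivalence partitions for one requirement.
identified = {"empty input", "single item", "many items", "max size", "over max"}

# Elements addressed by the designed test cases.
designed = {"empty input", "single item", "many items", "max size"}

# Elements actually touched during execution (one case was blocked).
executed = {"empty input", "single item", "many items"}

design_coverage = len(designed & identified) / len(identified) * 100
actual_coverage = len(executed & identified) / len(identified) * 100

print(f"Design coverage: {design_coverage:.0f}%")  # 4 of 5 -> 80%
print(f"Actual coverage: {actual_coverage:.0f}%")  # 3 of 5 -> 60%
```

The gap between the two numbers is itself useful information: it shows how much of the designed thoroughness was lost during execution.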

Calculating coverage is not difficult, but it is a very powerful way to be explicit about the thoroughness of a given test.

About the Author

Anne Mette Hass, M.Sc.C.E., has worked in IT since 1980. She started as a programmer but quickly turned to testing.

The pervasiveness of IT has allowed Anne Mette Hass to work in many different businesses, including life science, the oil industry, banking, insurance, the public sector, and universities. Anne Mette Hass has lived and worked in Denmark and also in Norway, England, France, and Italy for longer periods.
It is important to Anne Mette Hass to have a deep understanding of what she is doing, and therefore she mixes practical work and theory. She holds an ISEB Practitioner certification, and has taught ISTQB Foundation and Advanced Manager and Analyst.

Anne Mette Hass has been a member of the ISO 29119 Software Testing Standard working group since 2005, serving as editor of Part 3.
Anne Mette Hass is a frequent speaker at conferences. She has written more than 30 papers and 4 books, primarily about testing, but also covering requirements and configuration management issues. She is also creator of the poster “Software Testing at a Glance – or two.” Privately Anne Mette Hass is married and has a daughter and a small dog. The family lives in Copenhagen.

While working on two quality assurance assignments for the European Space Agency in the early 1990s, she discovered her passion for processes, testing, and compliance, and she has since built on that experience working as a test, quality assurance, and process consultant.
Find out more about @annemettehass