Defect Clustering and the Pesticide Paradox are familiar testing principles. The Pesticide Paradox, translated to the field of testing, refers to the fact that when you repeat the same tests over and over, those same test cases will eventually stop finding new bugs.
This is especially relevant when you add automated testing into your mix of testing methods.
Automated testing has become almost a necessity given today’s expectations of speedy delivery, brought on largely by the shift to Agile practices. This shift has created a ripple effect throughout product development, testing and release cycles. Everyone seems to expect things to go faster. Development is under pressure to deliver quickly without affecting product quality, QA is under pressure to test and discover all the existing bugs, and management is pressured to release the next new thing successfully. Even the end users now expect “instant gratification”. This is where many turn to automation in order to help cope with the pressure and speed up delivery while maintaining quality.
First off, let me state that automation is a good solution under the right circumstances, but it also has pitfalls that we need to try to avoid. Even with manual testing, we tend to create test sets that we then continue using repeatedly. This is even more of a concern with automated test suites, which are run more frequently and reviewed less often.
It is important not to get “attached” to your tests and fall into the Pesticide Paradox: your testing will suffer, bugs will go unnoticed, and well, you can imagine the rest. Test cases need to be continuously reviewed and updated, whether they are automated or run manually.
How do your test sets become irrelevant?
- You can never have complete application coverage
Even a very straightforward application will require an impossibly large number of test cases in order to fully cover all possible scenarios. The solution is to prioritize test cases by risk, probability, time restrictions and many other factors. But these priorities change over time, which means that tests that had a high priority some months ago may be less relevant and effective today.
- Your application changes over time
Whether your product has new features, small UI or UX changes, or has simply been updated for better compatibility (with the newest mobile gizmo, for instance), it has changed. The test cases you’ve used to assure quality so far need to be reviewed and updated as well.
- You are too relaxed when not in danger
While developers and testers will be extra alert in areas where bugs have been found in the past, other areas of the code or testing might be taken for granted as being “in the clear”.
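The risk-based prioritization mentioned under the first point can be sketched as a simple weighted score. This is a minimal illustration, not a prescribed formula; the field names, weights, and example tests are all assumptions made up for the sketch:

```python
# Illustrative sketch of risk-based test prioritization.
# Weights and fields are assumptions, not a real framework's API.

def priority(test):
    # Higher risk and a history of failures raise priority;
    # longer runtime lowers it slightly.
    return test["risk"] * 0.5 + test["fail_rate"] * 0.4 - test["minutes"] * 0.1

tests = [
    {"name": "login_flow",    "risk": 9, "fail_rate": 0.3, "minutes": 2},
    {"name": "legacy_export", "risk": 3, "fail_rate": 0.0, "minutes": 8},
    {"name": "checkout",      "risk": 8, "fail_rate": 0.6, "minutes": 4},
]

# Run the highest-priority tests first, and re-score periodically,
# since priorities drift as the product and its risks change.
ordered = sorted(tests, key=priority, reverse=True)
print([t["name"] for t in ordered])
```

The key point is the last comment: the scoring itself is trivial, but it only stays useful if the inputs are re-evaluated regularly.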
How to maintain relevant test cases?
The main rule of thumb would be to continuously review and evaluate your existing test cases. Here are a few practical points of advice:
- Make sure you are up to date on changes in the product or application.
This includes the obvious direct changes you will need to make to your test cases, but you need to also think of new indirect scenarios to cover with new tests.
- “Clean house” – Get rid of duplicate test cases or irrelevant tests no longer used. They clutter up the test repository, can create confusion, and waste your time when you are trying to find the tests you truly need.
- Change your test data to add randomness. Because many of our bugs are data-specific, if a test consistently stops reporting bugs (more common with automated testing), it is wise to change the data the tests feed on; the added “randomness” will give you truer readouts.
- Drop the formalities. Don’t use only one testing approach. Add some informal testing to each testing cycle, or at minimum per release — exploratory testing sessions, bug hunts, etc. Adding some ‘human heuristics’ can increase test coverage effectiveness.
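The data-randomization tip above can be sketched in Python. The function under test, the value range, and the seed handling are all illustrative assumptions; the pattern is what matters: draw fresh inputs each run from a logged seed, so failures are still reproducible:

```python
import random

def format_price(cents):
    # Toy function under test: 1234 -> "$12.34"
    return f"${cents // 100}.{cents % 100:02d}"

def test_format_price_randomized(seed=42):
    # Use a seeded RNG and log (or print) the seed, so any failing
    # run can be replayed exactly.
    rng = random.Random(seed)
    for _ in range(100):
        cents = rng.randint(0, 1_000_000)
        result = format_price(cents)
        assert result.startswith("$")
        # Round-trip property: parsing the output recovers the input,
        # so the assertion holds for any random value, not one fixture.
        dollars, frac = result[1:].split(".")
        assert int(dollars) * 100 + int(frac) == cents

test_format_price_randomized()
```

Instead of asserting against one hard-coded expected value, the test checks a property that must hold for every input — which is exactly what lets randomized data find the data-specific bugs a fixed fixture keeps missing.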
To conclude, it is not possible to achieve full coverage of your system. Even if you managed to create a broad-coverage testing suite, you can’t let your guard down for long. Continuously re-evaluate your test cases, make changes and modifications, never assume anything, and add some “human intuition” to your testing, even if you rely heavily on automation.
About the Author
Joel is a software tester and chief architect at PractiTest. He blogs at QA Intelligence.