Unconventional Wisdom V9: Proactive Testing: Part 2 – Drives Development

In a forthright post each month, Robin shares some truth that you otherwise might not realise in his regular Unconventional Wisdom blog. In the latest instalment, Robin continues his exploration of what Proactive Testing™ is and addresses how it can improve your risk-based testing.

—————————————————————————————————————————–

 

“Letting testing drive development” is a key unconventional concept of my risk-based Proactive Testing™ methodology. It’s not a political statement. We’re not saying QA/Testing should be the boss. Rather, we feed information revealed by powerful Proactive Testing™ techniques back into the development process, which helps developers deliver software more quickly and cheaply, as well as better.

Proactive Testing™ incorporates familiar proven testing concepts and techniques, along with additional less-well-known but more powerful methods, within unconventional models and approaches that can overcome many typical traditional testing weaknesses. The prior article discussed what Proactive Testing™ is not. This article describes what it is.

Conventional Risk-Based Testing

Most of the testers I encounter in consulting, at conferences, and in classes seem to share similarly frustrating testing experiences. They care a lot about quality and work exceedingly hard to test well, yet they are never able to catch nearly as many defects as they want to, and they never achieve the influence they feel they deserve.

I find conventional testing is “reactive.” Tests are largely reacting to whatever software is being created. That’s pretty inevitable for the many testers I meet who don’t get involved until the software is delivered to them to test. Those fortunate enough to get in earlier still often lack sufficient information, so they have to wait for the code anyhow to see what it actually does before they can create tests for it. When a tester’s first crack at the code comes very late in the project, defects they do catch are especially difficult and costly to fix, which can mean some detected defects do not get fixed even then.

Oh, and although testers do not create defects, they nonetheless frequently get blamed for the defects they miss. Ironically, such unfavorable perceptions can further reduce testing’s already-limited reputation, resources, and time, which in turn further reduces testing’s capabilities.

When confronted with more tests to do than time to do them, testers prioritize, generally based on risk. All testing does this, but often implicitly and perhaps unconsciously. When testing consciously and explicitly analyzes risks to determine which tests to run, many refer to it as “risk-based testing”.

Testers tell me they usually focus on feature risks (what the code does) and component risks (how the code is built) to create the tests they think they need. Then they analyze and prioritize the tests based on the risks they address. They run the tests addressing the highest risks most and frequently won’t get to run those addressing lower risks.
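To make that conventional approach concrete, here is a minimal sketch in Python, assuming a simple likelihood-times-impact scoring scheme; the risks, tests, and scores are hypothetical illustrations of conventional risk-based prioritization, not part of Proactive Testing™:

```python
# Minimal sketch of conventional risk-based test prioritization.
# Assumes a simple likelihood x impact scoring scheme; the risks and
# tests below are hypothetical examples.
from dataclasses import dataclass


@dataclass
class Risk:
    name: str        # feature or component the risk concerns
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (trivial) .. 5 (showstopper)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact


@dataclass
class TestCase:
    name: str
    risk: Risk       # the risk this test addresses


risks = [
    Risk("checkout payment component", likelihood=4, impact=5),
    Risk("search results feature", likelihood=3, impact=3),
    Risk("profile photo upload", likelihood=2, impact=2),
]

tests = [
    TestCase("declined card is handled", risks[0]),
    TestCase("search returns relevant hits", risks[1]),
    TestCase("oversized photo is rejected", risks[2]),
]

# Run (or schedule) tests addressing the highest-scoring risks first;
# lower-scoring ones often never get run when time runs out.
for test in sorted(tests, key=lambda t: t.risk.score, reverse=True):
    print(f"{test.risk.score:2d}  {test.name}")
```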

Not Wrong, but Not Very Effective

Most testers I meet realize such typical reactive tests still tend to come too late. However, they generally don’t recognize reactive testing’s bigger weaknesses. They spend most of their limited time on test cases, which represent small risks; but digging into detail easily obscures bigger risks. Furthermore, tests based on reacting to what’s in the code, and in whatever design there is, won’t test all the other conditions that the design and code miss, which tend to cause the most serious and undetected defects.

Proactive Testing™ Detects Ordinarily-Overlooked Defects

Reactive testing relies almost entirely on high-overhead, too-late dynamic test execution of explicit conditions. Proactive Testing™ also includes much earlier static testing that can far more economically catch and prevent the biggest source of defects: overlooked errors in requirements and designs (see here).


Proactive Testing™ also does dynamic testing, but it starts with large risks rather than with small, test-case-sized risks. Special test planning and design techniques help identify many of the large risks that turn into showstoppers when overlooked. Merely identifying them reduces their likelihood, and we then have the option of also creating tests for them and even running some tests of the highest-priority risks much earlier.

Additional special techniques help drive the highest-priority large risks down to medium-sized risks, including many that otherwise would be overlooked. Still more special techniques, along with conventional test case design techniques, drive the highest-priority medium-sized risks down to a set of small-risk test cases, again including many that ordinarily are overlooked.
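The article doesn’t spell out those special techniques, but one widely known conventional test case design technique is boundary value analysis. As an illustration only, here is a minimal sketch that drives a hypothetical medium-sized risk (“the order quantity field mishandles its limits”, assuming a valid range of 1 to 100) down to a handful of small-risk test cases:

```python
# Minimal sketch: boundary value analysis, a conventional test case design
# technique, deriving small-risk test cases from a medium-sized risk.
# The quantity field and its 1..100 valid range are hypothetical.
def boundary_values(low: int, high: int) -> dict[str, int]:
    """Return the classic boundary test inputs around a valid [low, high] range."""
    return {
        "just below minimum (invalid)": low - 1,
        "at minimum (valid)": low,
        "just above minimum (valid)": low + 1,
        "just below maximum (valid)": high - 1,
        "at maximum (valid)": high,
        "just above maximum (invalid)": high + 1,
    }


# Medium-sized risk: "order quantity mishandles its limits" becomes
# six small-risk test cases, each exercising one boundary condition.
for description, value in boundary_values(1, 100).items():
    print(f"quantity = {value:4d}  -> {description}")
```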

In this way, time-consuming test creation and execution is limited to the set of test cases that truly address the highest risks, not just the ones reactive testing identifies. Because this Proactive Testing™ risk analysis can be done earlier in the life cycle, more of the most important tests actually can be run earlier as well as more often.

More importantly, developers can use Proactive Testing™ information to drive their development and build systems that are more correct in the first place. Developers spend their time doing the right things right rather than wastefully creating, hopefully catching, fixing, and retesting so many defects. Without so many design and coding errors, more and better code gets implemented earlier and more cheaply.

“It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so.”
– Will Rogers

 

This limited article can’t cover all the Proactive Testing™ benefits, let alone the techniques for achieving them. Proactive Testing™ doesn’t happen magically. Learning how to do it takes time, guidance, and follow-through, which I provide through direct advisory assistance and via my public and in-house, in-person and online training. Contact me to find out more.

About the Blog Series

Much of conventional wisdom is valid and usefully time-saving. However, too much of it is mistaken yet misleadingly accepted blindly as truth, what I call “conventional we’s dumb”. Each month, I’ll share some alternative, possibly unconventional, ideas and perspectives that I hope you’ll find wise and helpful.

About the Author

Robin

Consultant and trainer on quality and testing, requirements, process measurement and improvement, project management, return on investment, metrics
Find out more about @robingoldsmith