“It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so.”
– Will Rogers
Welcome to my Unconventional Wisdom blog. Much of conventional wisdom is valid and usefully time-saving. Too much of it, though, is mistaken yet blindly accepted as truth, what I call "conventional we's dumb". Each month, I'll share some alternative, possibly unconventional ideas and perspectives I hope you'll find wise and helpful.
Conventional Risk-Based Testing
Let’s start with “risk-based testing.” I find many testers use the term to distinguish a particular type of testing that they perceive as somehow different from what they ordinarily do. Generally it’s raised as an approach to enlist when time and resources are inadequate for needed testing. My experience is that most use “risk-based testing” to mean a fairly formal and explicit analysis of the risks addressed by prospective tests.
The most common formal method involves rating impact if the risk occurs and likelihood of the risk occurring. Ratings typically are assigned numeric values, such as 1 for low, 2 for medium, and 3 for high. Each risk’s impact rating is multiplied by its likelihood rating to compute risk exposure. Limited test time and resources then are devoted to the tests addressing the highest risk exposures.
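The impact-times-likelihood method described above can be sketched in a few lines of code. This is a minimal illustration of the conventional calculation, not the author's tooling; the risk names and ratings are hypothetical examples.

```python
# Conventional risk-exposure calculation: ratings use 1 = low, 2 = medium,
# 3 = high. Exposure = impact x likelihood; highest exposures get tested first.
# All risk names and rating values below are hypothetical examples.
risks = {
    "data loss on concurrent update":  {"impact": 3, "likelihood": 3},
    "payment rejected for valid card": {"impact": 3, "likelihood": 2},
    "report layout misaligned":        {"impact": 1, "likelihood": 3},
}

def exposure(rating):
    # Multiply the impact rating by the likelihood rating.
    return rating["impact"] * rating["likelihood"]

# Sort risks by descending exposure to decide where limited test time goes.
prioritized = sorted(risks, key=lambda name: exposure(risks[name]), reverse=True)

for name in prioritized:
    print(f"exposure {exposure(risks[name])}: {name}")
```

With the example ratings above, the concurrent-update risk (exposure 9) would be tested before the payment risk (6) and the layout risk (3).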
I would contend that all testing is risk-based. Testing is our main method of controlling risks in software products and systems. Thus, risk is the main basis for identifying what to test and how to prioritize which tests should receive the most time and resources. What’s at issue is not whether we’re doing risk-based testing, but rather how conscious, formal, and explicit it is.
For much if not most testing, risk analysis is performed, but unconsciously, informally, and implicitly. Regardless of where their risk analysis falls on the continuum, I find testers tend to have great confidence in the wisdom of their (risk-based) judgments about what to test.
Yet, when I show testers on projects or in training seminars how to apply my powerful Proactive Testing™ risk analysis techniques, they routinely become aware of numerous risks their informal (and formal) risk analysis ordinarily overlooks. Overlooked risks of course will not be prioritized or tested. The problem is even greater because we find typical testers routinely overlook up to 75 percent of large showstopper-sized risks. Percentages appear even higher for overlooking medium-sized feature/function/capability risks and small test-case-sized risks.
Other Aspects of Risk in Testing
Testing tends to focus on risks of product/system/software features (what the software does) and components (how it’s built). In contrast, project management risk analysis primarily is concerned with having adequate time, resources, and methods. Most risk books and training I’ve seen concentrate on identifying and addressing project risk, not testing’s product risk.
Project and product risk address different things but are related. Conventional wisdom indeed is correct in recognizing that a project’s squeezed schedules, insufficient resources, and inadequate methods cause more product defects and impede testing’s ability to detect them.
Clients and students say their projects practically always suffer from too little time and resources and from poorly-defined requirements. Common experience as well as various studies, such as the annual CHAOS reports from The Standish Group, say most projects are late, over-budget, and/or fail to deliver promised functionality.
One Shoe Drops
CHAOS and other studies regularly report that requirements-related issues are the major cause of project failures. That conventional wisdom indeed is true. People recognize management routinely imposes too-low budgets and schedules. People also are all too familiar with how requirements creep necessitates unplanned extra work that is a major cause of driving projects over-budget and past schedule deadlines.
However, what’s seldom recognized is that misunderstood methodology relating to requirements essentially destines many if not most projects to fail by saddling them with impossibly low budgets and impossibly short schedules.
And Now the Other Shoe
Conventional project management risk analysis wisdom, which we’ve said generally gets most of the risk attention, says inadequate requirements, budgets, and schedules are project risks. Yet virtually every project anyone I meet has experienced, even the ones that “succeed,” has inadequate requirements, budgets, and schedules.
Risk is supposed to involve uncertainty. But inadequate requirements, budgets, and schedules are essentially certain. Consequently, the major project management issues of inadequate requirements, budgets, and schedules are certainties, not risks.