Unconventional Wisdom V5: Challenging Exploratory Testing

“It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so.”

 – Will Rogers

Welcome to my Unconventional Wisdom blog.  Much of conventional wisdom is valid and usefully time-saving.  Too much of it, however, is mistaken yet blindly accepted as truth, what I call “conventional we’s dumb”.  Each month, I’ll share some alternative, possibly unconventional ideas and perspectives that I hope you’ll find wise and helpful. Read the previous posts in this series: 4, 3, 2, and 1. This post challenges exploratory testing.

 

Unchallenged Assertions as Conventional Wisdom

Exploratory testing is a useful technique, but perhaps its greatest success has been convincing many in the testing community to accept as conventional wisdom (getting them to “drink the Kool-Aid,” Google it) that exploratory is the best testing technique.

To a considerable extent, exploratory’s assertions haven’t been challenged, which can quickly lead to their being accepted as conventional wisdom.  Please note that while I’ll address the truth of exploratory’s assertions below, my point here is only that their truth seems to have been accepted at face value, without the kinds of confirmation that testers routinely pride themselves on performing.

As a side note, perhaps recognizing the risks of unchallenged assertions, the largest US cellular carrier, Verizon, recently aired a series of commercials featuring comedian Ricky Gervais ridiculing supposed competitor claims.  Full disclosure: I’ve not seen any of the competitor ads the commercials mock, so Verizon may well be fabricating the claims.  Regardless, Verizon’s attacks do seem likely to keep competitors’ claims from being accepted, let alone from becoming conventional wisdom.

 

Buttressing Exploratory’s Conventional Wisdom

Accepting exploratory testing as the best implies comparison, which can be problematic.  It reminds me of one Gervais ad in which a claim of “four times better” wireless service turns out to be measured against the competitor’s own previously pitiful performance and still fails to beat Verizon.

Demonizing the competition is a key tactic for making one’s advocated approach appear better by comparison.  Demonizing involves attacking those who use other techniques and portraying some outrageously stupid practice as the only alternative to one’s own preferred ways.  Exploratory advocates have won wide acceptance for the conventional wisdom that the only alternative to exploratory is spending lots of time writing tediously time-consuming, highly-procedural, detailed test scripts.  Indeed, many testers do document tests that way, often accepting as (non-exploratory) conventional wisdom that that’s how all tests must be.

Exploratory is correct that the more time spent documenting tests, the less time is available for executing them.  However, in rightly rejecting the you-must-write-detailed-procedural-test-scripts conventional wisdom, exploratory seeks to replace it with the IMHO equally mistaken opposite extreme: that testers should skip writing tests and go directly to executing them, figuring out from the execution context what tests to run.  Also largely accepted is exploratory’s demonizing of executing written tests as blindly ignoring context, and of requirements-based testing, which exploratory rejects out of hand because some requirements often are wrong.

 

Challenging Exploratory’s Assertions

I’ve found there really are two forms of exploratory testing.  What I call the “journeyman” version is the very common but seldom-mentioned spontaneous testing that real testers perform as a supplement to their other testing.  In contrast, what I call “guru exploratory testing” gets almost all the attention because a small number of prominent testers tout it, often as the primary if not the only way to test.

At the risk of unduly attributing intent, and despite gurus’ claims of being interested only in helping the testing community, it seems to me that much of guru exploratory is driven by some gurus’ huge need to be the center of attention and especially to be recognized as the smartest person in the room.  Ironically, I’ve found that despite the gurus’ craving for fame, journeymen often are unaware of them.

Gurus’ presentations frequently demonstrate how they can detect numerous “cool” defects in an application just by poking at it, even though the application may already be well established and the guru appears to have come at it “cold.”  On the one hand, the gurus claim they can teach you (for a fee) how to do it too; yet, to remain the smartest, they need to be the only ones who actually can do it.  Although the demos detect often-spectacular defects, it’s questionable how many of them real users actually would encounter.  Moreover, the detected defects’ flash and sizzle can obscure whether the exploratory testing missed more important defects that really would affect real users.

 

Exploratory’s (Denied but) Undeniable Issues

Every testing technique has the opportunity to reveal defects some other technique would miss.  (My Proactive Testing™ exploits this truth with what I call the “CAT-Scan Approach™.”)  What’s not evident with exploratory is how many of those defects in fact could have been detected in some other, possibly more effective/efficient way.  Exploratory tends to assume (perhaps unjustifiably) that only exploratory testing could find the defects it detects.

While a program’s execution context indeed must be taken into account, effective testers need to address a broader business context to which the guru approach easily can be oblivious.  By rejecting attention to requirements, gurus further diminish the likely relevance of their tests.

Perhaps the biggest reasons exploratory is not the best type of testing stem from the fact that, by definition, it cannot be performed until the very end of the life cycle, only after code has been written.  All the potential defects are then in the code, so the challenge is greatest; test time often is minimal; and the cost and effort to fix defects are largest.  Moreover, most software defects are design errors, which are most effectively detected before they ever make it into the code.  Execution-based, tail-end exploratory testing is the least effective and efficient way to detect design errors and cannot possibly help prevent design errors from making their way into code.

About The Author

Robin F. Goldsmith, JD advises and trains business and systems professionals on risk-based Proactive Software Quality Assurance and Testing™, requirements, REAL ROI™, metrics, outsourcing, and project and process management.  He is author of the book Discovering REAL Business Requirements for Software Project Success and the forthcoming book Cut Creep–Put Business Back in Business Analysis to Discover REAL Business Requirements for Agile, ATDD, and Other Project Success.
