Each month in his forthright Unconventional Wisdom blog, Robin dispels a piece of received wisdom that you might otherwise accept without question. In the latest instalment, Robin asks: what is a test case, does it matter, and why?
“We don’t have test cases” is a pretty widely-touted piece of what I call “conventional we’s dumb.” Usually it’s someone from the Exploratory Testing camp who raises the topic, although I’ve recently given it an airing too with my contrary talk, “YOU Don’t Need No Stinking Test Cases?” The topic gets a brief bit of buzz, typically with some head-nodding toward Exploratory speakers and more animated responses to me, and then shortly thereafter it returns peacefully to dormancy until it’s next mentioned.
What Is a Test Case?
The premise of not having test cases is that test cases (1) must be written and (2) must be written in a particular format. Specifically, it presumes that a test case must be written as a script with a set of steps and embedded keystroke-level procedural detail directing the execution of each step.
Exploratory Testing began largely as a reaction against spending so much time writing such high-overhead test case scripts. The rationale was, and continues to be, that the more time one spends writing test cases, the less of one’s limited testing time is left for actually executing tests. That’s math, and figures don’t lie.
Since testers generally prefer executing tests to documenting them, and since many testers find little actual value in overly extensive documentation, Exploratory’s advocacy of writing no tests at all and instead spending one’s entire time executing tests was understandably very appealing.
By Exploratory’s definition above, then, Exploratory Testing does not have test cases.
But Liars Figure
Pardon the alliterative turn of phrase and the literary licence. I’m not really saying Exploratory folks lie about test cases. However, there are other ways to view test cases, and I believe those ways are both more appropriate and more useful.
Before elaborating on why Exploratory in fact does have test cases, and the ramifications thereof, let me digress to expand on a relevant distinction I described previously. In that article, I contrasted what I call “guru Exploratory testing” with the “journeyman” version. A small number of prominent guru speakers tend to monopolise attention to Exploratory. Journeymen, who actually use exploratory techniques day in and day out, greatly outnumber the gurus and often may not even be aware of them; the gurus, in turn, probably are not aware of the journeymen.
Upon further reflection, though, I’ve recognised the need to acknowledge a third group: the gurus’ followers. While they of course span a range of testing understanding, many of them follow their gurus blindly, sometimes in a very black-and-white manner, latching onto gurus’ phrases like, “We don’t have test cases.”
Similarly, some gurus seem most interested in attracting attention with often outrageous statements, but sometimes gurus do refrain from dogmatism. In fact, I was pleasantly surprised recently to hear one guru give a presentation so nuanced that he described several thoughtful approaches to determining what tests to run, beyond merely trying whatever came to mind, and even allowed that in certain situations some written test cases could be suitable.
What Is a Test Case? Revisited
My top-quality-tip-of-the-year article, “What Is a Test Case?”, analyses the results of surveys I conducted with leading testers. The responses showed considerable consensus that a “test case” is essentially “inputs and/or conditions and expected results.”
Note, this definition does NOT say a test case has to be written, and it especially does not say a test case has to be written as a keystroke-level procedural detail script or in any other particular format.
Thus, regardless of whether the inputs/conditions and expected results are documented in written form, and regardless of whether they were identified prior to the moment of execution, executing a test means there is a test case.
Something is input in some relevant way, sometimes in the presence of given conditions, which may simply exist or may need to be created in order to execute the test. The test produces actual results, which are compared to expected results to determine whether they seem suitable. Whether or not the expectation is defined explicitly, or even reasonably, Exploratory and all other testers evaluate a test’s actual results against some expectation.
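To make the point concrete, here is a minimal sketch of that structure in Python. The function under test, its behaviour, and the price-parsing scenario are entirely hypothetical; the point is only that any executed test, however informal, reduces to inputs plus an expected result against which the actual result is compared.

```python
def parse_price(text):
    """Toy, hypothetical function under test: parse "$3.50" into cents."""
    dollars, _, cents = text.lstrip("$").partition(".")
    return int(dollars) * 100 + int(cents or 0)

# The "test case", whether written down in advance or improvised at the
# keyboard mid-session, has the same three parts:
test_case = {
    "inputs": "$3.50",   # something is input in some relevant way
    "expected": 350,     # the expectation the result will be judged against
}

actual = parse_price(test_case["inputs"])       # execute the test
passed = (actual == test_case["expected"])      # compare actual to expected
print("PASS" if passed else "FAIL")
```

An exploratory tester typing “$3.50” into a form and eyeballing the result performs the same comparison; the expectation simply lives in their head rather than in a document.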
Sometimes that evaluation involves judgment and weighing various forms of uncertainty, and sometimes the evaluation can be clearly certain and straightforward. A test could involve either type of evaluation or anything in between. It is entirely inappropriate, and the height of arrogance, to declare the former “testing” while denigrating the latter as mere “checking.”
Effective testers write things in economical useful ways so they can remember, share, reuse, and refine based on additional information, including from using Exploratory techniques. My courses and consulting show powerful Proactive ways to identify, and low-overhead ways to capture, important risks to test for that conventional and Exploratory testing commonly overlook.
“It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so.”
– Will Rogers
About The Blog Series
Much of conventional wisdom is valid and usefully time-saving. However, too much of it is mistaken and misleading, yet blindly accepted as truth, which is what I call “conventional we’s dumb”. Each month, I’ll share some alternative, possibly unconventional, ideas and perspectives that I hope you’ll find wise and helpful. View more posts in the Unconventional Wisdom series here.