A couple of months ago I had a discussion about test planning and the right way of testing with someone who claims to be an expert in software testing. It went something like this:
Q: Do you make a test plan in your test projects?
A: Well, uh, it depends.
Q:
A: You know, it depends.
Q: Ok, but on what does it depend?
A: On the context.
Q:
A:
Q: Yes, ok. Could you, for instance, tell me how you do test planning in your projects?
A: Yes, well, I’m in between projects right now and I think my last project is not a very good example.
The next question I had was one I asked myself: how fast could I retreat from this conversation?
I hate it when people answer a question with 'It depends' and then stop talking, because usually the person who gives that answer doesn't tell you what it depends on, what choices you have and what determines which choice is preferable in which situation. And I would already be happy if the person in question simply described how he did it in his own project and why he did it that way. But that explanation is rarely given; usually it stops after the 'it depends'.
But what I expect from others, others can expect from me! In the last few years I have learned a lot, experimented a lot and gained experience in applying different forms of exploratory testing: session-based testing, bug hunts, test tours and freestyle exploratory testing. And I think that detailed scripting and global scripting are also useful approaches in some situations. But what determines which way of testing is preferable in which situation? I found six aspects that influence that decision: system, test goals, organisation, documentation, development method and test skills.
Personally I think the characteristics of the system are the most important aspect. If the user interface is the most important part of the system, I think a form of exploratory testing is preferable. When we are talking about a back-end system with complex calculations, a form of scripting would probably be better.
Another aspect is the test goals. The question of what information you are looking for strongly influences the form of testing. Do you want to know whether the system complies with a specific law, for instance? Scripting! Do you want to know whether the system is self-explanatory? Exploratory testing!
The organisation is also important. In some organisations upfront planning and accountability are non-negotiable; in that case I would use scripting. In other organisations pragmatism is more important, which gives you the possibility to use a form of exploratory testing.
Suppose you are in an organisation where there is a lot of useful and up-to-date documentation. Then you can use scripting (but you don't have to). If there is no documentation, you can always use exploratory testing.
The system development method also influences the choice of test form. In projects that use Agile or DevOps, scripting is almost impossible unless you apply a form of TDD, BDD and/or ATDD. In my experience, exploratory testing forms like session-based testing and bug hunts work very well in Agile, and so do test tours and freestyle exploratory testing. In waterfall projects both scripting and exploratory testing can be used.
Last but not least are test skills. Some testers are just really good at writing scripts; others are born exploratory testers. I once heard someone say that the tester should adjust to what the context requires of him or her. I understand that statement, but it is easier said than done. Changing the way people work is not an overnight thing, if it is possible at all. So in the short term, test skills are an aspect to consider if the testers are a given.
Is this list of aspects complete? Probably not, but so far considering these aspects has worked for me. Do you want to add an aspect or replace one? Please let me know.
In context-driven testing, heuristics (a heuristic being a fallible method for solving a problem or making a decision) are very popular. For a great list of heuristics, please look here.
May I propose the SGODDS heuristic (System, testGoals, Organization, Documentation, Development method, testSkills) for determining which way of testing is preferable in which situation?
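To make the heuristic a little more concrete, here is a minimal sketch in Python of how the six SGODDS aspects could feed a simple decision aid. The questions and the scoring below are just one possible interpretation of the aspects described above, not a fixed formula; in practice you weigh the aspects yourself.

```python
# Illustrative sketch only: the questions and scoring are one possible
# interpretation of the SGODDS aspects, not a prescribed formula.
from dataclasses import dataclass


@dataclass
class Context:
    ui_is_central: bool              # System: is the user interface the most important part?
    goal_is_compliance: bool         # test Goals: e.g. checking conformance to a specific law
    upfront_accountability: bool     # Organisation: is upfront planning/accountability non-negotiable?
    good_documentation: bool         # Documentation: useful and up to date?
    agile_or_devops: bool            # Development method: Agile/DevOps rather than waterfall?
    testers_prefer_scripting: bool   # test Skills: what the current testers are good at


def sgodds_advice(ctx: Context) -> str:
    """Count which aspects point towards scripting and which towards exploratory testing."""
    scripting = 0
    exploratory = 0

    # System: UI-heavy systems lean exploratory, calculation-heavy back ends lean scripted.
    exploratory += ctx.ui_is_central
    scripting += not ctx.ui_is_central

    # Test goals: compliance questions lean scripted, usability questions lean exploratory.
    scripting += ctx.goal_is_compliance
    exploratory += not ctx.goal_is_compliance

    # Organisation: non-negotiable upfront accountability leans scripted.
    scripting += ctx.upfront_accountability
    exploratory += not ctx.upfront_accountability

    # Documentation: good documentation enables scripting but does not force it.
    scripting += ctx.good_documentation

    # Development method: Agile/DevOps leans towards exploratory forms.
    exploratory += ctx.agile_or_devops

    # Test skills: in the short term, the testers you have are a given.
    scripting += ctx.testers_prefer_scripting
    exploratory += not ctx.testers_prefer_scripting

    if scripting > exploratory:
        return "lean towards scripting"
    if exploratory > scripting:
        return "lean towards exploratory testing"
    return "mixed context: combine both, or weigh the aspects that matter most here"


# Example: an Agile project with a UI-heavy system and no compliance goal.
print(sgodds_advice(Context(True, False, False, False, True, False)))
```

Of course the real decision remains a judgment call; the point of the sketch is only that each aspect pushes you towards scripting or towards exploratory testing, and that you look at all six before choosing.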
About The Author
Jan Jaap Cannegieter is a well-known consultant, author, (keynote) speaker and requirements and test specialist from the Netherlands. He has 20 years of experience in ICT and did assignments in testing, quality assurance, TMMi, CMMI, SPI, Agile and requirements. In testing he has been a tester, test manager, test consultant and workshop leader. At this moment Jan Jaap is test/QA manager and delivery manager at DinamiQs and works with Squerist. Jan Jaap was previously vice-president of SYSQA B.V., a company of 180 employees specialising in requirements, software testing, quality assurance and IT governance. He is the driving force behind Situational Testing and he wrote several articles and books in the Netherlands.
Twitter: @jjcannegieter