Q&A: The Logic of Verification, Michael Bolton

 


Thank you to Michael Bolton for taking the time to answer the questions asked during his webinar ‘The Logic of Verification’.

 

1. We are now transitioning to Continuous Integration/Delivery, and the credo is just to automate all and get rid of manual testing, or reduce it to a minimum. What is your advice on this, knowing that automated testing cannot replace manual testing?

There is an error in the premise here: the idea that “automated testing” or “manual testing” exists. No one does “automated research” or “manual research”; you do not visit a “manual doctor” who refers you to an “automated doctor”; developers don’t do “manual programming” or “automated programming”. The managers delivering the credo “just to automate all”: are they doing “manual management” or “automated management”?

Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes to some degree questioning, studying, modeling, observing, making inferences, and so on. Notice that while tools can help us, neither machines nor programs do any of these things: evaluate, learn, explore, experiment, question, study, model, make inferences. Procedures within testing can be scripted or automated, but testing cannot be automated. In that sense, testing is like programming and management; tools are important, but tools don’t do the work.

If your manager wants to automate testing, remind him or her that to do that, you’d have to automate all of the evaluation and learning and exploration and experimentation and modeling and studying of the specs and observation of the product and inference-drawing and questioning and risk assessment and prioritization and coverage analysis.

Of course, you’d also have to automate all of the pattern recognition and decision making and design of the test lab and preparation of the test lab and sensemaking and test code development and tool selection and recruiting of helpers and making test notes and preparing simulations and bug advocacy and triage.

Once you were done with that, you’d also have to automate relationship building and product configuration and application of oracles and spontaneous playful interaction with the product and discovery of new information.

If you got over those hurdles, you’d have to automate all of the preparation of reports for management and recording of problems and investigation of problems and working out puzzling situations and building the test team and analyzing competitors and resolving conflicting information and benchmarking…

I believe that talk of “manual testing” and “automated testing” is unhelpful.  http://www.developsense.com/blog/2017/11/the-end-of-manual-testing/

I would like to warn all testers: we will be vulnerable to management dreams of “automating all the testing” as long as we testers keep talking about “automated testing” and “manual testing” ourselves. This is one reason why I emphasize that checking can be automated, but testing cannot.  http://www.satisfice.com/blog/archives/856
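To make that distinction concrete, consider a minimal sketch (the function, values, and checks below are hypothetical, invented purely for illustration): the checks can be scripted and run by a machine, while the evaluation and learning around them cannot.

```python
# A check is an algorithmic decision rule applied to an observation of the
# product; a machine can run it and report pass or fail. The product code
# and the threshold here are hypothetical, invented for illustration.

def discount(total: float) -> float:
    """Hypothetical product code: 10% off orders over 100."""
    return round(total * 0.9, 2) if total > 100 else total

def test_discount_applied_above_threshold():
    # The machine can evaluate this assertion and report the outcome...
    assert discount(150.00) == 135.00

def test_no_discount_at_or_below_threshold():
    assert discount(100.00) == 100.00

# ...but it cannot decide whether these are the right checks to run, wonder
# what happens at 100.01 or with a negative total, or judge whether a failure
# here matters to anyone. That evaluation and learning is the testing.
```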

 

2. How would you distribute testing work in a large project, where there are several testers involved? I think it is rather easy to distribute verification, as verification is a mathematically measurable process, but it’s difficult to truly distribute testing.

How do you distribute any kind of work in a large project?  Can you distribute programming? Design work?  Management?  Sure you can.

As I noted above, testing is evaluating a product by learning about it through experimentation and exploration. Is distributing evaluation of a product difficult? Is distributing learning difficult? Experimentation? Exploration? Just as you can distribute any kind of work, you can distribute testing work too. It’s easy to distribute the work. It’s only marginally harder to co-ordinate it, but that’s still quite manageable.

You can assign individual testers, or pairs, or groups, to particular feature areas of the product.

You can direct testers to investigate specific quality criteria (like performance or security; there are many others).  You can create a product coverage outline, and assign individual testers to specific parts of it, or multiple testers to the same part of it.

You can assign a tester or two to the work of one or more developers who are working on some feature or some technology.

You can distribute work based on what would be easy for specific testers because they’re already experts in that kind of work, or based on things that they need to learn so that they can develop expertise.

You can assign testers who aspire to development work to act as toolsmiths, creating software tools that help other testers and developers, while other testers commission and use those tools.

You might want to look into session-based test management (http://www.satisfice.com/tools/sbtm.pdf) as a possible approach to making the process accountable.  An earlier webinar that I did for EuroSTAR outlines how to develop and charter testing sessions (https://www.youtube.com/watch?v=vN3E_jjbBpc).  I wrote an article several years ago on the approach my colleague Paul Holland used to track the assignment of testing sessions and make the process legible to managers (http://www.developsense.com/articles/2012-02-AStickySituation.pdf).
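As one possible illustration (not taken from the SBTM materials above; the names, areas, and fields are hypothetical), session assignments might be tracked with something as simple as this, so that coverage and overlap stay visible to the whole team:

```python
# A minimal sketch of tracking charters and session assignments so that the
# distribution of work stays visible. The fields, names, and coverage areas
# are hypothetical illustrations, not part of SBTM itself.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Charter:
    mission: str           # what to explore, and why
    area: str              # area from the product coverage outline
    tester: str            # who is assigned to the session
    duration_minutes: int  # intended session length

charters = [
    Charter("Explore checkout with invalid coupon codes", "Checkout", "Asha", 90),
    Charter("Explore report export under load", "Reporting", "Tomas", 60),
    Charter("Explore checkout on slow or flaky networks", "Checkout", "Mei", 90),
]

def planned_minutes_by_area(charters):
    """Summarize planned effort per coverage area, to spot gaps and overlaps."""
    totals = defaultdict(int)
    for charter in charters:
        totals[charter.area] += charter.duration_minutes
    return dict(totals)

print(planned_minutes_by_area(charters))
# {'Checkout': 180, 'Reporting': 60}
```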

I don’t believe that verification is a mathematically measurable process; I’m not sure what you mean by the assertion, and I’m not sure how it’s relevant to your question.  I’m happy to help you out, though.  Please drop me a line at [email protected].

 

3. There was a talk earlier in the week about an AI testing tool. What are your views on this type of tool?

“Artificial intelligence” these days is a marketing term.  People say “artificial intelligence” when they really mean “reasonably sophisticated software”.  “AI”, to me, stands for “algorithm improvement”, wherein a piece of software has routines that sharpen its algorithms such that the tool cybernetically zeroes in on results that are interesting to some people. Since that process is opaque and beyond the designers’ ability to explain, some people call it “intelligence”.  I’d argue that it’s opaque largely because the software has not been designed to report on how it reaches its results, which says something to me about the intelligence of its designers.

We should definitely use software, including reasonably sophisticated software — tools of all kinds — to help us test.  At the same time, let us not pretend that there’s anything particularly remarkable about using tools.  Let us also remember that “AI” tools are creations of human beings, and they’re applied by human beings. All reasonably sophisticated software has bugs, and all tools amplify what we are.  If we are wonderful testers, then tools will increase our wonderfulness.  If we are lousy testers, then tools will allow us to do bad testing faster and worse than we’ve ever done it before.

 

4. You previously gave a talk about Testing vs. Checking. Has your position on this evolved?

The Logic of Verification talk is part of the ongoing process of refining ideas about testing and the checking that goes on inside of it.

One big evolutionary point is that we’ve been trying to avoid referring to testing versus checking since 2013.  We talk about testing AND checking, checking INSIDE OF testing.  Checking is a tactic of testing.  http://www.satisfice.com/blog/archives/856

 

5. What would be your recommendation for distributing work among several test team members, to reduce overlap? How do you, Michael Bolton, do it?

Start with my answer to Question 2.

Also note that when we’re investigating software, overlapping is to some degree inevitable, and to some degree a feature and not a bug.  Two testers performing “the same test” may notice different problems, based on their skills, their temperaments, their domain knowledge, and their familiarity with the product.  A tester with lots of experience with the product may miss a bug because of that experience, where a new tester will see the problem right away; “fresh eyes find failure”.