• Author
    Posts
  • #8032
    @paul-madden

    Hi folks, I read a recent blog post where Jim Cherry (on behalf of Colin) looked at testing and checking – and asked what’s in a name? Jim finished with a quote:

    Someone far cleverer than I once said “A common mistake people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools”…

    Do you agree? Is checking different to testing? If so, who does the checking?

    #8050
    @kasper

    Yes, checking is different to testing.
    Someone who checks uses a precompiled list of check items to determine whether described conditions are met.
    Someone who tests uses all available information, skill, expertise and experience to determine the quality of the product under test.
    Unfortunately both activities are performed by people calling themselves testers.

    #8077
    @paul-french

    I see this one come up a lot and I find it quite an emotive subject in that I can find some people’s snobbery over this quite irritating. It’s as though some people resent other people calling themselves testers when they perceive there to be little skill involved, and it somehow devalues how they see themselves?

    For example, if a Test Analyst creates scripts and someone other than the TA runs them, is that person just checking as well? Can an act of “checking” actually test the product too? So is “checking” literally when the “checker” is not applying, or does not need to apply, any thought to what they are doing?

    I have the opinion that testing is on a spectrum and “checking” is at the lower, less applied end of this spectrum. I define checking to be applicable only to a certain subset of activity that is not really exercising an application but “checking” that certain features or functional requirements/specifications have been met – requirements that don’t really need any applied test analysis techniques. There is a very fine line between the two activities. Putting testing on a spectrum then certainly allows the elitist to put themselves suitably high on the spectrum.

    For the sake of argument, could we say that a lot of us are just “checking” most of the time, because we have so much experience that the “tests” we run are so automatic that zero thought is applied?

    A lot of questions there, I know. But to summarise again, I would categorise “checking” to be something which neither stretches nor tests the application/system, nor the thought process of the person who designed it.

    #8087
    @paul-madden

    Thanks Kasper and Paul for the explanations.

    I’ve since read a blog post by James Bach and Michael Bolton on this, where they say (amongst other things) that “…checking is a tactic of testing…”.

    Here’s a link to the post on James’ blog:

    http://www.satisfice.com/blog/archives/856

    I didn’t realise this had been a topic of discussion in the testing community for so long; Michael wrote a blog post about it almost six years ago.

    Is the differentiation between checking and testing commonly used? And if so, I wonder whether “automated testing” will be replaced by “automated checking” in testing parlance.

    #8125
    @kasper

    As always, James and Michael can state my opinion much more eloquently than I can.
    I am not sure whether the differentiation is commonly used, but in my 20-odd years of programming and testing I have always made the distinction.

    @Paul French

    It’s as though some people resent other people calling themselves testers when they perceive there to be little skill involved, and it somehow devalues how they see themselves?

    Let’s try to put things into perspective.
    A pilot is a highly skilled professional who uses check lists with the goal of successfully transporting passengers from point A to point B within the boundaries as determined by the check lists.
    A test pilot is a highly skilled professional who uses check lists to determine the behaviour of an airplane when it is put outside the boundaries as determined by the check lists.
    Both groups are highly skilled professionals but they are hardly the same.
    Software checking and software testing follow much the same logic.
    Software checking uses check lists to determine the behaviour of a system within the boundaries as described by the check lists.
    Software testing uses the check lists to determine the boundaries and then goes on to test the behaviour of a system outside the boundaries as described.
    Both activities can be performed by highly trained professionals, but they are hardly the same.
    Checking or testing does not say a lot about skills – performing checks well is a job for highly skilled people with lots of product knowledge – but it does say something about the mindset required for the job.
    So, at least for me personally, how someone describes himself or herself has no impact on my value – either to me or to the companies that pay for my expertise.

    An act of “checking” can actually test the product as well? So is “checking” literally when the “checker” is not applying, or does not need to apply, any thought to what they are doing?

    An act of checking can test the product but the main difference with testing is that checking can be done by automated scripts. As such performing these activities does not require a great deal of thought by the person performing it.

    I have the opinion that testing is on a spectrum and “checking” is at the lower, less applied end of this spectrum.

    Putting testing on a spectrum is just what I do not like to do. A pilot is not a test pilot, a driver is not a test driver, and a software checker – no matter how skilled – is not a software tester.
    I would prefer another definition where a differentiation is made between a checker and a tester. If that makes me an elitist, I can live with it.

    For the sake of argument, could we say that a lot of us are just “checking” most of the time as we have a lot of experience so that the “tests” we run are so automatic, zero thought is applied?

    I sure hope not. If a “test” I run requires zero thought I automate it. That is a much better use of my scarce time than manually running a test on auto pilot.

    #8242
    @stevean

    Kasper well put.

    Just because a tester does some checking at some point doesn’t mean they are unskilled. A machine can check a lot, but a human doing checking will also, by nature, test: they will stray at times, notice the environment of the ‘check’, and observe side behaviour or alternate routes.
    Whilst grading on a curve, or using a spectrum, is not palatable to some, including me, there can be a more digital/stepped view, e.g. checkers who don’t have the skills or desire to test, testers who don’t have the patience or desire to check, and steps between the two.
    If you had to put labels on it you could talk about
    Test Techs
    Test Engineers
    Test Analysts
    Test Leads
    etc.
    But labels to people don’t always help. Role and Responsibility Differentiation does.

    I know that after 15+ years in software testing, I still do checking alongside my testing. It doesn’t make me less skilled or less important. I don’t think you can complete a test lifecycle efficiently without some checking, and you can’t do it effectively without some testing.

    #8250
    @paul-madden

    Thanks Stephen. Is there a correlation between the amount of checking and the level of experience of the tester? Is checking something less experienced testers are more likely to find themselves doing, or does it continue to be something you do regardless of your level of experience?

    #8251
    @stevean

    @paul: I think there are too many parameters to make a generalised judgement.
    Of course an inexperienced tester will need guidance and is likely to do more checking. But as you gain experience, it depends on the project and environment you work in.
    Some teams will automate as much as possible, leaving very little checking for the human tester to do.
    Others will not, and there may be a lot of checking to do.
    Take a form-based product, e.g. a CRM tool. There is a lot of checking to be done here. All the forms have a defined layout and validation rules. Of course this can be automated, but what if the team you work in doesn’t have the budget, or the forms change each release, e.g. are customisable for each deployment? In this case the effort to automate a form check is wasted, as the next deployment will have different content and/or layout. So this team will do a lot of checking, no matter how experienced you are. Of course you might want to automate some of those checks: some of the optional form fields will have standard defined validation rules. Others may not; e.g. a user name for one customer can be a staff ID that is bespoke to them, or an email, or a name. Validation is specific to their deployment (if any validation has been implemented at all).
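    The data-driven form checking described above might be sketched like this, with the per-deployment validation rules kept in configuration rather than hard-coded into the check, so that only the config changes between deployments. The field names and rules here are purely hypothetical:

```python
# Sketch of a data-driven form check. The per-deployment validation
# rules (max length + regex pattern) live in config, not in the check
# code itself; a new deployment supplies a new DEPLOYMENT_RULES dict.
import re

# Hypothetical rules for one deployment: field -> (max length, pattern).
DEPLOYMENT_RULES = {
    "username": (20, r"^[A-Za-z0-9._@-]+$"),
    "staff_id": (8, r"^[0-9]+$"),  # bespoke to this customer
}

def check_field(field, value):
    """Apply this deployment's rule to one submitted value; return a bit."""
    max_len, pattern = DEPLOYMENT_RULES[field]
    return len(value) <= max_len and re.fullmatch(pattern, value) is not None

print(check_field("staff_id", "12345"))   # True: all digits, short enough
print(check_field("staff_id", "12x45"))   # False: contains a non-digit
```

    The point of the sketch is the split: the rules are data, so re-deriving them for the next customisation is analysis work for a human, while running them is mechanical.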

    Ideally, as you evolve in testing, you would like to automate parts of a test – those parts that are checks or laborious – but it’s not always possible.

    #8380
    @michaelabolton

    This original question takes the same form as “Is there a difference between eating and biting?”

    Biting takes place during eating, but when we treat the two as the same, people look at us funny. “Honey, that was delicious. I’ve never bitten a better cake.” “I’m hungry; I really need something to bite.” “Let’s go out to bite before the movie.”

    Yes, checking is different from testing. Checking is something that happens when we’re testing. As Paul points out, checking is a tactic of testing.

    It’s a mistake to ask about the “balance” between testing and checking. When you’re eating soup, or ice cream, or mashed potatoes, you don’t allocate some time to eating and some time to biting. You bite when there’s something that you need to bite.

    Similarly, when there are things that must be checked, you check.

    To Paul, above: “For the sake of argument, could we say that a lot of us are just “checking” most of the time as we have a lot of experience so that the “tests” we run are so automatic, zero thought is applied?” Please note: when no thought is applied, you are not testing. Testers: when no thought is applied, please stay away from the medical device that could save my life or kill me; when no thought is applied, please stay away from anything that has to do with my money; please stay away from my car or the airplane in which I am travelling.

    To Kasper, above: I’d like to reframe your metaphor here. Whether a test pilot or a pilot, both are engaged in complex cognitive activity; both are continuously learning and processing and responding to events, with different information missions. I’d point to the use of tools in aviation; some things can be (and are) handed off to machinery and algorithms. But notice: the machinery does not fly the airplane, and unskilled people cannot delegate the piloting work to the machinery; unskilled people certainly cannot set it up properly; and unskilled people would have significant trouble with using the radio.

    —Michael B.

    #8436
    @hilaryj

    One of the problems in IT is mapping imprecise human languages onto precise meanings. Checking means different things to different testers.

    I would use the term ‘checking’ to refer to the action of comparing actual and expected results. If the test can be rerun exactly, with the same inputs and results, much of the checking effort can be automated and the tester may only need to check for a pass/fail status. (However, if the status is ‘fail’, then the tester should investigate the failure. This will enable them to correct any faulty tests and to pass useful information to developers to fix genuine defects.)
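    That sense of checking – rerunning fixed inputs and comparing actual against expected results, with the tester looking only at pass/fail – might be sketched like this. The discount function and its expected values are purely hypothetical:

```python
# Sketch of checking as actual-vs-expected comparison over rerunnable
# inputs. The system under test and the expected values are invented
# for illustration.

def calculate_discount(order_total):
    """Toy system under test: 10% off orders of 100 or more."""
    return order_total * 0.9 if order_total >= 100 else order_total

# Precompiled list of (input, expected) pairs -- the "check items".
EXPECTED_RESULTS = [
    (50, 50),      # below threshold: no discount
    (100, 90.0),   # at threshold: 10% off
    (200, 180.0),  # above threshold: 10% off
]

def run_checks():
    """Return 'pass', or the failures that a human should investigate."""
    failures = [(inp, exp, calculate_discount(inp))
                for inp, exp in EXPECTED_RESULTS
                if calculate_discount(inp) != exp]
    return "pass" if not failures else failures

print(run_checks())  # prints: pass
```

    On a ‘fail’, the returned (input, expected, actual) triples are the starting point for the investigation described above.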

    A tester doing exploratory testing should be continuously checking. ‘Is the screen layout clear?’ ‘Are the fields the correct size for valid data?’ ‘Is the response time reasonable?’ ‘Does the report show the correct selection of data?’ ‘Are values correctly totalled?’ An experienced tester may be doing much of this subconsciously but is alert to pick up anything that is out of line.

    #8441
    @michaelabolton

    We have lately taken to declaring the Rapid Testing namespace when we talk about testing and checking, explicitly for the purpose of mapping more precise meanings onto imprecise human languages. Anyone is welcome to use whatever language they like, of course. Our intention is to reduce confusion by providing a straightforward way of deciding what is a test and what is a check. This in turn allows us to determine what is checkable and what is not, and whether the value of setting up an explicit check meets the cost.

    For a check, there are three required elements: (1) an observation of the product; (2) a decision rule; and (3) an output, a bit, such that all three can be produced entirely algorithmically. “Comparing actual and expected results” doesn’t make the distinction clear, since that’s also a definition that many people apply to testing (notwithstanding the fact that it’s a pretty lame definition of testing, for reasons outlined here). Moreover, there seems to be an assumption here that passing checks require no investigation. That, it seems, would prompt the tester to fall for the Narcotic Comfort of the Green Bar. Tools can do a lot of things for us, but like chain saws and food processors, they can do a lot of harm if we fall asleep.

    In our post Testing and Checking Refined, James and I say

    Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, study, modeling, observation, inference, etc.

    (A test is an instance of testing.)

    Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.

    (A check is an instance of checking.)

    To us, that’s nice and clear.

    By our definition of checking, “Is the screen layout clear?” is not a checkable question, since clarity is a value judgment that cannot be evaluated by an algorithm. “Are the fields the correct size for valid data?” is a checkable question, in that you could program (algorithmically) something to calculate the size of the field, the size of the font in use, and the size of the largest valid data element. Notice, however, that there is a lot of tacit knowledge required in order to recognize the possibility of different fonts or font sizes, to determine what is valid, and to determine what the largest valid value might be. To become a check, this tacit knowledge must be made explicit and encoded into a program. “Is the response time reasonable?” is not checkable; reasonableness is a relationship between the product and some person, not an attribute, and not algorithmically decidable. But it is testable; we can question, model, study, and make inferences about reasonableness. We need a human to do that stuff. Should we decide that seven seconds is a reasonable response time, “Is the response time less than seven seconds?” is checkable.
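    Once a human has decided that seven seconds counts as reasonable, the resulting check might be sketched like this, with the three required elements visible: an observation, an algorithmic decision rule, and a bit. The `fetch_report` operation is a hypothetical stand-in for whatever is being timed:

```python
# Sketch of a response-time check: observation (timing the call),
# decision rule (elapsed < threshold), output (a single bit).
# fetch_report is a hypothetical stand-in for the real operation.
import time

def fetch_report():
    time.sleep(0.01)  # stand-in for real work
    return "report"

def response_time_check(operation, threshold_seconds=7.0):
    """Observe, apply the decision rule, emit a bit."""
    start = time.perf_counter()
    operation()
    elapsed = time.perf_counter() - start
    return elapsed < threshold_seconds  # True = green, False = red

print(response_time_check(fetch_report))  # True (well under 7 seconds)
```

    Deciding on the seven-second threshold – and deciding whether a green bit from this check actually tells us anything about reasonableness – remains testing work for a human.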

    We also now maintain that all testing is exploratory; we don’t know where the problems are, and we often must discover where to look for problems. Checking is a tactic of testing work, so we’d invert the logic at the beginning of the last paragraph: a tester doing checking should be continuously (and consciously) exploring—among other things, exploring all the ways in which he or she might be fooled by passing checks.

    —Michael B.

    #8456
    @hilaryj

    There seems to be an assumption here that passing checks require no investigation. That, it seems, would prompt the tester to fall for the Narcotic Comfort of the Green Bar.

    Who in the real world gets to investigate the checks (or tests) that have passed? I don’t mean that you blindly run a regression pack and accept that as a full test (or check). You need to work out your test coverage and, at the very least, ask ‘does the pack cover all the tests I need?’ The answer is going to depend partly on why you are testing/checking the system. If the functionality is unchanged but the software has been refactored or ported, then a good regression pack may be sufficient. If there is new functionality then new tests or checks are required and old ones may need to be adjusted (e.g. boundary values may have changed). If you are testing a fix for a bug that wasn’t identified by the regression pack, then the pack is missing a check.

    #8492
    @halperinko

    As I have said many times before, this kind of discussion, putting the “vs.” between these activities, mostly does harm to our occupation, since many people just interpret it wrongly.
    1. Both activities are needed.
    2. In some cases it is even “Smarter” to run Checks – for instance when we get a new product/version and first need to verify the basics: are the required fields there? Are their boundaries correct?
    This will result in feedback which, when fixed, stabilises the basics, so we can dive further in order to test & explore the unknowns.
    3. We could all use a common list of Checks and save a lot of time for many organisations – smart testers will do so, stupid ones will try to invent it all over again and again every time from scratch, thinking they have just invented the wheel (unaware of tons of ideas they have missed by not learning from the experience of the crowd).

    Stop generalizing people and their roles

    @halperinko Kobi Halperin

    #8650
    @michaelabolton

    @Kobi

    There are two senses of “versus”. One is “in opposition to”, identifying things that are opposed to each other, like “Liverpool vs. Manchester United”. The other, “as distinct from”, identifies a distinction between two things, like trees vs. leaves. However, for people who were unable to acknowledge the second sense (vs. people who were), we retired the “versus” part over two years ago. Please read this: http://www.satisfice.com/blog/archives/856

    1) Neither testing nor checking is “needed”; there is no law of the universe that says you must do either one. You choose to do these things.

    http://www.developsense.com/blog/2014/02/we-have-to-automate/

    However, no one needs to do checking (the kind of testing that can be performed entirely algorithmically)—or at least no one needs to encode it such that it is done algorithmically. (It might be very reasonable to choose to do so.) But if you are checking, you are testing, because whether you are programming a machine to perform checks or performing checks without a machine, you are inevitably doing testing. Checking is embedded inside of testing. Checking is a tactic of testing.

    2) I don’t know what the difference is between smarter and “smarter” (or smarter vs. “smarter”, if you like). But what you’re saying in your (2) about “verifying the basics” suggests to me that you’re using “checking” in a different sense from the sense we mean it (again, see http://www.satisfice.com/blog/archives/856). It seems to me that you’re talking about something that we would call smoke testing or sanity testing.

    3) I question the idea that “we can all use a common set of checks”. If you mean checks in the sense that we do, a common set of checks is infeasible, since each check must be encoded–programmed–in a way that is specific to the application, the environment, and the purpose that the software is intended to fill (among other factors). I think you might mean that we could use a common set of heuristics, or a common set of checklists. That could be reasonable in a very general sense; I think of Michael Hunter’s “You Are Not Done Yet” as a good example of this kind of thing; something that would act as a reminder to consider certain activities, but not as a mandate. Considering the variety of factors involved in developing and deploying software, I think it would be a good idea to recognize both the value and the limitations of checklists.

    Meanwhile, it seems odd to me that you say “stop generalizing people and their roles” immediately after a paragraph in which you talk about stupid testers and smart testers, and suggest what they will generally do.

    —Michael B.

    #8793
    @andygorman

    Apologies for coming late to the party. Yes, this is a longstanding debate. Years ago, saying testers merely checked lists of features was a way of denigrating the role, a common resort of developers who knew little about testing. Fortunately, we have all moved on, as these posts have proven, though I think the observation that “A check is part of testing, not an instance of it” may be a nuance that the uninitiated find hard to grasp.

    I have to agree with Michael and James’ definition as the most appropriate i.e. “Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product” and therefore checking is something that can be done by automation. Which brings me to the reason for putting in my tuppenny worth. Whether you agree that checking is testing or not, checking is necessary and testers usually take responsibility for ensuring checking takes place. At EuroStar 2012 Paul Gerrard made the point that testers are a valuable (i.e. expensive) asset and those who spend the majority of the time ‘checking’ manually will go the way of the dinosaur. Checking can and should be automated whenever possible, leaving testers to do the more sapient (sorry Michael) tasks.

    #10100
    @ronan

    This topic, along with @paul-gerrard’s New Model of Testing, was debated quite a bit this year at EuroSTAR. I wonder whether anyone attended and changed their opinions or modified their thoughts on these models?

    #10109
    @paul-gerrard

    @paul-madden – testing and checking are different if you define them differently, which of course has been done by Michael and James. The ‘debate’, as I see and hear it, centres around two issues.

    1. Checking is a relatively easily understood definition. But testing – being very roughly ‘everything else that testers do’ – is harder to put one’s finger on.
    2. Some people have taken checking to be ‘testing done by idiots’ and others have taken offence at that.

    I’ll pick up on a few points made by others in this thread and put my case at the end.

    By the way, Michael and I had a long long chat till around 5am on the Wednesday night of EuroSTAR. After a lot of Guinness I don’t know how much sense I was making at least, and my memory is definitely hazy. Please correct me Michael if I misrepresent your view.

    @kasper labels people as checkers and testers. I don’t think this is Michael and James’ intention at all, but it *is* a common and partial representation of the definitions. I’ve never met a checker or a tester. I only meet people.

    @paul-french mentions his irritation. It’s a common response, as I say above. His suggestion that checking requires ‘zero thought’ isn’t quite right. I’ll explain later in this post.

    @kasper-2. As I understand it, a ‘normal’ airline pilot is basically an operator of a big lump of software and hardware. Pilots, through software automation, have been reduced to computer operators in many respects. Because they don’t ‘fly’ anymore, they are losing some of the skills (and much of the joy, I expect) of flying a real plane. Many/most air accidents and near misses are caused by human error – pilots blindly trusting their instruments more than their eyes and the seat of their pants. A test pilot will have some goals and constraints and some checks to perform. But most (and I mean almost all) test flights are simulations done in a lab on computers. The test pilot’s job is initially to ensure that the models used on the ground are reasonably well calibrated. Mostly checking, I expect. Test pilots do not and cannot have the freedom to explore/experiment as (most) software testers do, because many manoeuvres would result in a crash, death and some awkward explanations. So pilot/test piloting is not a helpful analogy, I fear.

    If we understand what testers (or developers, when they test) do, we don’t need analogies, I believe.

    @stevean Agree labels are not helpful, unless people label themselves. Labelling is a form of bigotry. Let’s not go there.

    @hilaryj This is much closer to my view, in spirit at least.

    @michaelabolton – thanks for the clarifications in Maastricht and here. I understand where you’re coming from and agree. But, as you know I use a different model. Our models don’t align, but I believe our explanations of our models do. That’s what I got from our conversation. I find my model more useful (and dare I say, easier to explain) than yours, that’s all. :O)

    @halperinko Agree we shouldn’t be arguing about ‘versus’ – Michael explains this. But it’s easy to fall into this trap – I was a culprit too, until I read the explanations carefully.

    @andygorman I think it is wrong to prefix the word checking with ‘merely’. Checking is the last 0.001% (or less) of a thought process that starts with utter ignorance and reaches (NB – never ‘ends’) a point of understanding, and trust in it, whereby a check has meaning and value.

    The check itself is (usually) some form of comparison between, for example, an observed behaviour and some expectation or expected outcome. One cannot perform a check without a high level of confidence that the behaviour and expectation are what you believe them to be and that your interpretation of the outcome has meaning and value to stakeholders.

    You cannot reach this level of confidence without exploring your sources of knowledge (people, documents, your own experience, biases and ignorance, and those of other people, and the system under test, if it exists and is available and usable) and making sense of these fallible sources by means of constructing a model (mental or recorded by some means). That process, what Michael and James call “testing”, is the other 99.999% of the job we call testing. And that is why people fall into the trap of believing that checking is not skilled. How can it require skill if it requires such a small fraction of our thinking?

    If your requirements are poor, or your stakeholders cannot agree on requirements, or your system is flaky, unstable or just hard to understand (or explain), and in all cases where you simply do not know enough to construct a meaningful check, then you cannot say with any confidence that the system meets some requirement, in part, in some circumstance because you have run a check.

    But people try, and do write software before they know what the software truly should do. And then others try to check that software with the same, or less or different knowledge. So it’s no wonder we have problems.

    I am wrong about the .001% guess – of course I am. The number is meaningless. But…

    We should spend more time focusing on the skills to do the other 99%. Then, and only then can we expect people to define meaningful checks and perform them with tools or humans or whatever.

    This is the context of my suggestion that we should aim to eliminate human checking and replace it with tools – if we can.

    Put it another way. Only the *best* of our testers should be trusted to do checking. If the best testers have tools at their disposal, and they trust them to do what tools are good at, then I have to say, we could dispense with the services of quite a lot of people in the testing business.

    I believe this perspective aligns quite well with sentiments expressed by Michael and James. If you want a quick outline of my New Model for Testing there’s a short video of a talk I did for the BBC here: http://gerrardconsulting.com/?q=node/656

    #10112
    @michaelabolton

    Paul: “@kasper labels people as checkers and testers. I don’t think this is Michael and James’ intention at all, but it *is* a common and partial representation of the definitions.”

    It is a misrepresentation of our definitions. People can say whatever brilliant or foolish things they like, but they are not representing our ideas simply because they happen to use the same words.

    Paul: Checking is a relatively easily understood definition. But testing – being very roughly ‘everything else that testers do’ – is harder to put one’s finger on.

    It’s pretty odd to talk about parts of an activity in terms of “all the other parts except this one thing”. How would we “put our finger on” all the parts of driving that are not managing the accelerator pedal? We’d call that “driving”, I think. How would we “put our finger on” all the parts of programming that are not turning source code into machine code? We’d call that “programming”, I’d say. The existence of “cruise control” or “compiling” does not require us to come up with a single word to describe all the other parts of driving or programming; speed management and compilation are things that happen inside the task of driving or programming. To come up with a word for “everything else that is not X” would seem to me to restore the problem that people have historically had with “vs.”, seeing checking in opposition to other parts of testing rather than as a complement to them.

    If there are specific parts of testing that we’d like to pay attention to, we can specify them, including the things that we specifically call out (questioning, study, modeling, observation, inference) and others that we don’t (resourcing, collaborating, manipulating, navigating, reporting, recording, generating ideas, elaborating on ideas, refining ideas, overproducing ideas, abandoning ideas, recovering ideas, generating test data, coding, analysis,…)

    Paul: Some people have taken checking to be ‘testing done by idiots’ and others have taken offence at that.

    People who have taken checking to be “testing done by idiots” have almost certainly not read our work. If they have read it, they are welcome to point out where we have even hinted at such an idea. In any case, they are not talking about something we have written or said.

    Paul: (paul-french’s) suggestion that checking requires ‘zero thought’ isn’t quite right

    It is quite right, in our terms. I think you (and Kasper) are confusing preparing and interpreting checks with checking. The checking is precisely the part that requires zero thought; it is precisely the part that can be done by the machine. The preparatory and follow-up work are testing. Again, think of compiling. Writing a program to be compiled and analyzing the product is not called compiling; it is called programming. Compiling is an automated, skill-free activity within the activity of programming. We don’t talk of writing a program as compiling; we don’t talk of designing and encoding checks as checking. (We don’t speak of writing a compiler as compiling, either.)

    Kasper: If a “test” I run requires zero thought I automate it.

    If a test requires zero thought, it’s not a test. You are not automating the test. You are automating actions within a test that can be automated. The test is not those actions. The test is what you think and what you do. Automation may assist you in part of this process.

    Paul: Agree labels are not helpful, unless people label themselves. Labelling is a form of bigotry.

    Have you noticed that by saying labelling is a form of bigotry, you have just used labelling? And bigotry?

    Labelling might be bigotry if it’s used for bigotry. Labelling is the act of putting a name to something. Without labelling, we don’t have words for things.

    Paul: If we understand what testers (or developers, when they test) do, we don’t need analogies, I believe.

    1) I believe you just used labels there for testers and developers. 2) It certainly seems like some people don’t understand what testers and developers do. To help with that, humans have developed these things called associative speech, abstraction, metaphor, simile, and analogy. All of software is abstractions; all of software is mapping things that people can do or produce onto things that machinery can do or produce.

    http://www.developsense.com/blog/2010/09/the-motive-for-metaphor#educatedimagination
    http://www.joelonsoftware.com/articles/LeakyAbstractions.html

    Paul: @Kobi. Agree we shouldn’t be arguing about ‘versus’ – Michael explains this. But it’s easy to fall into this trap – I was a culprit too, until I read the explanations carefully.

    Thank you for saying so. I’m very glad our conversation was helpful.

    The biggest problem for testing and checking so far has been people who have read our work carelessly, or who are getting upset about things we did not write, but that they’ve made up in their heads. In general, we tend to hope that people do read our stuff carefully, since so many of our readers are in a craft where reading carefully and expressing one’s self precisely is extremely important (lest your spacecraft try to land 150 miles beneath Mars’ crust, or your save function scrambles all of your customers’ data). But it appears our faith in people reading carefully is sometimes misplaced. This is in fact quite discouraging some days.

    Paul: Checking is the last 0.001% (or less) of a thought process that starts with utter ignorance and reaches (NB – never ‘ends’) a point of understanding, and trust in it, whereby a check has meaning and value.

    I’d offer a little caution there. The check doesn’t have meaning or value except that which we ascribe to it. (I assume that’s what you mean, but I’d like to emphasize it.) The check is a signal, like a little bulb that glows green or red; it does not have meaning of its own. Indeed, this is one of the great risks of over-reliance on checking: the narcotic comfort that, because the checks are running green, everything must be okay. I’ve written about that here and here.

    Paul: We should spend more time focusing on the skills to do the other 99%. Then, and only then can we expect people to define meaningful checks and perform them with tools or humans or whatever.

    That’s only sensible, since checks themselves are (by definition) skill-free. And yes, James and I have been focusing on the other 99% for the entirety of our professional careers.

    On the other hand, humans will constantly perform zillions of little checks (observations and evaluations that could be performed algorithmically) as they’re testing something. We do it naturally; we can’t help but to do it; we do it automatically, pre-consciously. So it’s weird to suggest that we should eliminate human checking. I think we could try to eliminate wasteful, over-formalized, and over-structured scripting of human beings, though.

    Paul: Only the *best* of our testers should be trusted to do checking.

    It’s not clear to me what you’re suggesting here. Are you suggesting that only the best of our testers should be trusted to do the risk analysis; framing of questions; encoding of operations, observations, and evaluations such that they produce a bit; the reporting of the bits; and the analysis of the bits? That would be testing. And in fact, we wouldn’t need the best of our testers doing all that stuff; testers could collaborate with programmers or test toolsmiths on parts of that work. Indeed, doing that work would be part of the process of developing testing skill.

    Paul: If the best testers have tools at their disposal, and they trust them to do what tools are good at, then I have to say, we could dispense with the services of quite a lot of people in the testing business.

    We could. We might even want to do that, if the value of the service was sufficiently low (and in many places, I’m pretty sure it is). On the other hand, if you took the word “best” out of there, we might also get much greater value out of the testers we have.

    —Michael B.

    #10117
    @paul-gerrard

    @Michaelbolton – some follow ups…

    Re: Testing as ‘everything else’. I’ll stick with that for the time being as the diagram on http://www.satisfice.com/blog/archives/category/testing-vs-checking is the only graphical representation that I’ve seen. If it is a Venn diagram, it distinguishes checking as a subset of testing, and the remainder is what you’ve called ‘learning by experimenting, study…’ and so on. So as I see it in the picture, there are the checking variations, and the ‘rest’ of testing.

    What I’ve tried to do with the New Model is be explicit about those other things. Now, like you, I’m not interested in technologies, environment details, resourcing, paperwork etc. I call these logistics. They are important, but not germane to the thought process of testing.

    What I name as ‘Applying’ in the model is the application of a test – using tools, or people or… anything. Checks fit in there. The human ‘interpretation’ is separated out. This is one reason I regard checking as taking a tiny fraction of our thinking. It’s a fraction of one activity out of ten. Diagrams help. I’d like to see you guys produce one. Or use mine :O)

    Re: “checks=zero thinking”. I generously suggest 0.001%. But you cannot perform a trustworthy/useful check unless you do the other 99.999%. So it’s the very thin icing on a large cake. Cake-making is much more significant than icing. Icing has no meaning without cake. Checking has no meaning without enquiry, modelling, predicting, challenging and so on. Do checks require skill? I’m sure there are some that require it. I guess I’m far less confident than you are that they don’t. This is a whole different debate, but I believe your definition of checks allows for skill if the algorithm is followed by a human. The microchecks you suggest humans perform almost without thinking require intelligence, skill of some sort, don’t they?

    Re: Checks and value. Five or six Eurostars ago I did a ‘coffee break’ video http://gerrardconsulting.com/?q=node/598 that talked about tests, significance and value. I’d be happy to substitute checking for the word testing in that video now, and I think what I said then still reflects my position. Interested to hear what you think on that.

    Re: ‘Best’ testers. Since checking depends on mastery of the 99.999% that goes before it – good enquiry, modelling, prediction, challenge and so on – I argue it could be regarded as the hardest skill to acquire. Not something to be left to our most junior, least experienced, or least capable testers.

    My mind wanders… A ton of metaphors come to mind. I’m thinking of car-making.

    The creation of a car depends on a huge range of skills and logistics. The paint job (for mass-produced cars at least) involves dunking in vats of primer and paint, or the use of robots that spray paint according to some numerical control. The paint job has no meaning without the car, but the paint job has a disproportionate significance to the buyer. The appearance of the car might be what the buyer bases their buying decision on, but the costs (and skills) involved relate to what went on before the paint job. The value of what testers do for stakeholders (with respect to confidence in a product meeting their needs in the broadest sense) is based on checking. This is what most stakeholders believe they are paying for, I think. You might assign checking tasks to tools or juniors, but the leg-work – sorry, brain-work – that goes beforehand needs your best people.

    Is it a sensible ambition to want to eliminate human checking? In the context of checks that are significant (see my coffee break video), I would suggest yes (until a better criterion emerges). The checking that people do instinctively, unconsciously (as they perform almost any activity) – probably not.

    Could developers or other team members share these activities? Of course. The way I look at testing is as a thinking process. Everything else (that I can think of) is logistics. Could a developer do modelling (risk analysis is modelling, IMHO)? Of course. One of the challenges we’ve had over many years in the software development business is that we have called testers ‘testers’. We’ve tried continuously to assign activities to specialists à la Taylor’s Scientific Management. What Agile reminded us of is that skills/capabilities are not distributed ‘scientifically’ and that good teams find ways to dynamically assign tasks and pass on skills where possible. So ‘everyone is a tester’. Possibly.

    BTW, I think of myself more as a developer than a tester. I do not write ‘enough’ test automation. Most of my time spent developing is actually spent testing. In my model the left-hand side is common to devs and testers. Not that we think the same thoughts; rather, the processes are the same. So I argue (from my solo experience) that developers spend more time testing than testers do. ‘Testers’ could learn a lot from ‘developers’ :O)

    Regarding labelling: Like most folk I use words lazily. When I refer to a tester, I really mean, ‘someone doing testing’. That could be anyone from time to time. Testing is an activity, not a role. If you call yourself a tester, that’s cool. If someone labels me – I bristle. No offence to bristles intended.

    #10121
    @paulcoyne73

    I think this is a false dichotomy. You can’t perform testing unless there is a check, but if all there is is a check then you’re not testing. It’s necessary, but not sufficient. I’m struggling to find an adequate analogy, but I’ll try “living” and “breathing” (and I’ll restrict myself to mammals here, to avoid any zoological pedantry!). In order to live I have to breathe, but if all I’m doing is breathing it’s not much of a life. OK a poor analogy, but I hope you get my drift.

    #10126
    @kasper

    Hmm, most of the time I try not to answer posts that misinterpret my statements but this time I would like to end it before this gains a life of its own.

    @kasper labels people as checkers and testers. I don’t think this is Michael and James’ intention at all, but it *is* a common and partial representation of the definitions. I’ve never met a checker or a tester. I only meet people.

    @paul-gerrard labels people for labelling people. At least in Dutch it would be exceedingly clear that I was not talking about the holistic human being but about people performing a role. To be clear, my statement addresses the role. If my not being a native English speaker causes confusion in this respect, I am sorry and hope to have rectified it with this statement. I only meet people performing a role. In my work relations the role is a very important part.

    To be clear, I never state that I represent anything written or stated by Michael and James. So the “common and partial representation of the definitions” is in your misinterpretation of my words – not in the words themselves.

    Everything I write represents my opinion. If my opinion happens to coincide with that of someone who has stated it more eloquently than I can, I refer to that person rather than writing the same opinion less eloquently.
    I happen to agree with things Michael and James have written and I also happen to disagree with things Michael and James have written.

    The reference to James and Michael was in direct response to the question by @paul-madden: “Is the differentiation between checking and testing commonly used?”. Again, this would be perfectly clear if this forum were in Dutch, so if my not being a native English speaker causes confusion in this respect, I am sorry and hope to have rectified it with this statement.

    kasper-2. As I understand it, a ‘normal’ airline pilot is basically an operator of a big lump of software and hardware. Pilots, through software automation, have been reduced to computer operators in many respects. Because they don’t ‘fly’ anymore, they are losing some of the skills (and much of the joy, I expect) of flying a real plane. Many/most air accidents and near misses are caused by human error – pilots blindly trusting their instruments more than their eyes and the seat of their pants. A test pilot will have some goals and constraints and some checks to perform. But most (and I mean almost all) test flights are simulations done in a lab on computers. The test pilot’s job is initially to ensure that the models used on the ground are reasonably well calibrated. Mostly checking, I expect. Test pilots do not and cannot have the freedom to explore/experiment as (most) software testers do, because many manoeuvres would result in a crash, death and some awkward explanations. So pilot/test-piloting is not a helpful analogy, I fear.

    @paul-gerrard I think it would be very beneficial if you actually talked to pilots/test pilots before stating your opinion about their job.
    I actually quite like the reframe by @michaelabolton so I let this one rest.

    If we understand what testers (or developers, when they test) do, we don’t need analogies, I believe.

    @paul-gerrard This whole discussion was started by a question. If everything was understood both question and all later posts would be pointless. I respectfully disagree.

    Paul: “@kasper labels people as checkers and testers. I don’t think this is Michael and James’ intention at all, but it *is* a common and partial representation of the definitions.”

    It is a misrepresentation of our definitions. People can say whatever brilliant or foolish things they like, but they are not representing our ideas simply because they happen to use the same words.

    @michaelabolton Please read my statement – I do not claim to represent your ideas. The only time I represent someone else’s ideas is when I am asked to do so.

    @michaelabolton It is quite right, in our terms. I think you (and Kasper) are confusing preparing and interpreting checks with checking.

    @michaelabolton Since I stated my own description of checking – and yes, it is different from your definition – and I am completely in line with that description, I claim that the only confusion is the assumption that I follow your terms.

    Kasper: If a “test” I run requires zero thought I automate it.

    If a test requires zero thought, it’s not a test. You are not automating the test. You are automating actions within a test that can be automated. The test is not those actions. The test is what you think and what you do. Automation may assist you in part of this process.

    @michaelabolton This statement was in direct answer to

    @paul-french For the sake of argument, could we say that a lot of us are just “checking” most of the time as we have a lot of experience so that the “tests” we run are so automatic, zero thought is applied?

    I merely used the same terminology in the answer. Since Paul French used another definition of checking than I did, I found it prudent to keep the same terminology and just answer the question. From my previous posts it should be clear that I do not mean automating the test but am in fact referring to automating actions within a test that can be automated. Once again, if my poor command of the English language causes confusion here, I am sorry. In Dutch the use of quotation marks would make this self-explanatory, hence no further explanation was given.

    =============================================================================================================

    To be absolutely clear:
    The original question was “Do you agree? Is checking different to testing? If so, who does the checking?”

    My answers:
    Yes, checking is different to testing.

    People who are good at performing checks are not necessarily good at testing.
    YES I DO mean the people performing the checks. These people are not necessarily the same people that write the test cases, test scripts or any other label you want to put on the design process.
    Yes I do see that people that are happy to be performing checks can be extremely unhappy when forced to perform testing.
    I do define checking as the use of a precompiled list of check items to determine if described conditions are met.
    This does not mean that there is no thought or intelligence needed to be able to describe or perform the checks.
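    That definition can be sketched in code (a hypothetical example; the check items and observed product state are invented for illustration): a precompiled list of described conditions, each paired with an algorithmic evaluation, is run against an observation of the product.

```python
# Hypothetical sketch of checking as defined above: a precompiled list of
# check items is run to determine whether described conditions are met.

checks = [
    ("login page returns HTTP 200", lambda obs: obs["status"] == 200),
    ("page title mentions the product", lambda obs: "Widget" in obs["title"]),
]

def run_checks(observation):
    """Run every precompiled check item against one observed product state."""
    return [(description, bool(condition(observation)))
            for description, condition in checks]

# Invented observed state of the product under check:
observed = {"status": 200, "title": "Widget Shop"}
for description, met in run_checks(observed):
    print(description, "->", "met" if met else "not met")
```

    Describing the conditions well, and deciding what to do about unmet ones, is where the thought and intelligence mentioned above come in; running the list is the checking.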

    I do define testing as the use of all available information, skill, expertise and experience to determine the quality of the product under test.
    Yes, this includes (the results of) checking where needed, BUT – and this is important – it goes beyond checking. It is about finding problems; it is not (only) about confirming that described conditions are met.
    Testing needs more imagination and creativity than checking. I have held and stated this opinion for a long time – including in an (admittedly not very good) presentation, “Imagination is more important than knowledge”, at Eurostar 2008.
    This is solely my opinion. I do not claim it is a unique opinion, but to the best of my knowledge it is not derived from the ideas of someone else. I certainly do not claim that it represents the ideas of James and Michael, or even that it is aligned with their opinions.

    Another question was: “Is the differentiation between checking and testing commonly used”
    My answer is: Yes. Both Michael Bolton and James Bach have written enough about this that I can refer to them for further discussion without claiming that I represent them or their ideas in any way, shape or form.
    This is the only time I mention anybody else. In context it should be clear that I do not claim to represent Michael or James but use them to make the point that the differentiation is commonly used.

    @michaelabolton I might be overly sensitive here but I like to respond to

    The biggest problem for testing and checking so far has been people who have read our work carelessly, or who are getting upset about things we did not write, but that they’ve made up in their heads. In general, we tend to hope that people do read our stuff carefully, since so many of our readers are in a craft where reading carefully and expressing one’s self precisely is extremely important (lest your spacecraft try to land 150 miles beneath Mars’ crust, or your save function scrambles all of your customers’ data). But it appears our faith in people reading carefully is sometimes misplaced. This is in fact quite discouraging some days.

    I do read your work – and I believe rather carefully. I have stated my opinion on testing and checking in public since 2008, and I hold to my own descriptions; that does not mean I don’t value your ideas, it means that I have my own opinion. It would be rather silly for me to abandon my stated opinions.
    I repeat one last time that I do not claim to represent your ideas – I like to think that we develop our thoughts in the same general direction – but since we have never met, in person or virtually, it would be rather a stretch of the imagination to make any such foolish claims.
    If the statement was not partly directed at me, feel free to ignore my response to it.

    Disclaimer:
    These are my ideas, except where they are yours and you have published them earlier. In that case I am deeply sorry and will happily give credit where it is due. Please contact me if that is the case.
    My ideas do not represent anybody else or any company.
    If I am unclear or not making sense, try to remember that not everybody is a native English speaker. Please contact me and point out my mistakes; I am happy to correct them.

    #10129
    @kasper

    Reading my messages back with the comments in mind I see that my poor choice of words here:

    As always James and Michael can state my opinion much more eloquent than I can.

    led to a misunderstanding.
    My apologies for that. I hope that my last post cleared things up.

    Lesson learned: don’t post in the forum while working on something else.

    #10174
    @jerryweinberg

    I think the most (only?) interesting part of this thread is this:

    Why are people putting so much emotional energy into it?

    I think answering that question would tell us a lot about where we stand today as a profession.

    Let me post a similar question. Instead of “Is checking different to testing?” let’s discuss

    Is communicating different to testing?

    There’s no doubt in my mind that testing involves communicating, so what would be the point of discussing this question? And, we could pose similar questions about anything that’s involved in testing. How about breathing?

    So, what am I missing? Why is this question worth all the energy? What’s the hidden issue behind it all? Maybe that will teach us something about us.

    #10175
    @michaelabolton

    What’s the use of making distinctions between anything—between colours, for example? They’re all just colours, right? Just go when the light changes colour.

    The point of discussing this is, to me, pretty obvious: unless we’re clear about what we’re talking about, we cannot reasonably say that we have a profession.

    What we are seeing here is, in its own small way, a replay of the history of science. If you’re up for a long read, have a look at Leviathan and the Air Pump. For a shorter one, have a look at this: http://www.developsense.com/blog/2014/03/harry-collins-motive-for-distinctions/

    —Michael B.

    #10176
    @jerryweinberg

    Michael wrote: “The point of discussing this is, to me, pretty obvious: unless we’re clear about what we’re talking about, we cannot reasonably say that we have a profession.”

    And that accounts for all the emotional charge in the posts? Michael, you’re being super-superreasonable.

    Do you really think that the fuzzy distinction between checking and testing is a major impediment to our being a profession? To me, there are so, so many other, larger things blocking
    1. being a profession
    2. being seen by others as a profession

    How about we spend our time identifying those blocks and discussing how to do something about them (as you have done so well in the past)? If you post this question:

    What stands in the way of testing becoming a profession, and being regarded as a profession by others?

    I’ll gladly spend my time and thought on that thread, instead of hassling over how many angels can dance on the head of a pin.

    #10177
    @michaelabolton

    Emotional energy is an important clue to significance. (I learned more about this from you than from anyone else, Jerry.)

    One of the things that prevents a profession from being recognized as such is shallow and sloppy and lazy talk about its terminology. And that brings pain.

    I see and feel pain associated with the confusion between testing and checking. I see products being negligently tested when I visit client sites. Products that I strongly suspect have been checked, but negligently tested, regularly appear in my hands and on my computer. This hurts me and people I care about. I see testing being reduced to simple output checking, such that “anyone can do it if only we hand them the right tool or the right artifact”. This view offends me, because it implicitly denies that testing deserves the status of a profession—and an a priori assumption of unworthiness is a major barrier to something being seen as a profession.

    I believe making important distinctions—in this case, distinctions between what people can do, what tools can do, and what people can do using tools—is one of the essential elements of establishing a profession. (I learned the importance of speaking precisely from you, among others, too.) I believe that people will not recognize as a profession any activity that they believe can be scripted or automated away. And to me that’s not a hidden issue at all; to me, it’s in plain sight, right there on the table. The unprofessional part, to me, is not the arguing about it. That is the professional part. So another emotional reaction comes from my perception that people are trying to sweep both the distinction and the controversy about it under the rug, as though real professions don’t have controversy.

    I believe that making distinctions also helps to shine light on cultural patterns that exist within a profession (Quality Software Management, especially Volume 1, is an example of that). In developing these distinctions, we are doing exactly the work required to establish a profession. Arguing about them is part of that process of determining what it means to act professionally. A professional tester, we hold, does at least these things http://www.developsense.com/resources/et-dynamics3.pdf (And yes, we would argue it badly needs an update, especially since we’ve lately recognized that we’re talking not about exploratory testing, but testing generally; http://www.satisfice.com/blog/archives/1509)

    I believe that making those distinctions and having them available is a core part of clear, professional communication. As you say, “Most professional testers will know most of what’s in this book (Perfect Software and Other Illusions About Testing), but I hope that by reading on, they will see new ways to communicate what they know—to their managers, developers, coworkers, and customers.” Many colleagues have told me that talk of testing and checking has made it easier to communicate what they know. As an example, I could cite most of Perfect Software, but especially Chapter 4, in which you unpack many meanings—testing for discovery; pinpointing; locating; determining significance; repairing; troubleshooting; testing to learn—from “testing”. That’s because “in larger organizations with dedicated testers and/or customer support personnel, confusion about the differences among testing for discovery, pinpointing, locating, determining significance, repairing, troubleshooting, and testing to learn can lead to conflict, resentment, and failed projects.” For all of those other elements, so it goes for checking.

    I believe that naming things without knowing them is a great way to fool ourselves, and if testing has a purpose, its primary purpose is helping to reduce the risk of people being fooled.

    Let me post a similar question. Instead of “Is checking different to testing?” let’s discuss

    Is communicating different to testing?

    There’s no doubt in my mind that testing involves communicating, so what would be the point of discussing this question? And, we could pose similar questions about anything that’s involved in testing. How about breathing?

    Yes, communicating is different to testing. Someone can test without communicating (many testers do this, in my experience), and someone can communicate without testing (ditto). Yes, good testing involves good communicating. So what skills are required for good communication? To talk about those skills meaningfully (especially as they pertain to testing), we must start by acknowledging that testing and communicating are different, and by unpacking their elements.

    On breathing: taking a breath—pausing, at least—has moments relevant to testing. It might be worthwhile to unpack those too. So we do, here and here.

    —Michael B.

    #10178
    @jerryweinberg

    Thank you, Michael. That’s more like it.

    You see what happens when you follow where the emotional charge leads you? There can be no real profession without people following the lead of what they care about intensely.

    p.s. My biggest concern now about “checking” is that people don’t unpack what’s involved in that. Checking requires many things of the person carrying out those checks. I’d like to see those skills and attitude laid out for all to see. It’s not “just” running some “automatic” test cases, any more than “manual” testing is just banging keys like a monkey.

    #10179
    @michaelabolton

    Thanks, Jerry.

    To your PS, my immediate response would be: a blog post that I wrote six years ago. (For the time-pressured, do a text search for the paragraph that begins “It takes sapience to recognize”, and start there.)

    But that was just a start, so the second response is that James and I are in the midst of producing a white paper (maybe a little e-book) on the subject. We’ve been at it for a while. That work is currently being reviewed by a number of our colleagues, as well as by us. At the same time as we’d like to get it out soon, we don’t want to rush it either.

    —Michael B.

    #10202
    @kasper

    I think the most (only?) interesting part of this thread is this:

    Why are people putting so much emotional energy into it?

    I do hope discussing the original question had some merit.

    As for the emotional energy? Emotional energy equals passion. I am very glad people show passion about their profession.
