A New Agile Testing Ecosystem
This topic has 18 replies, 9 voices, and was last updated 10 years ago by Michael.
October 22, 2014 at 12:58 pm #5019
October 22, 2014 at 2:12 pm #5024
If you get the message “You Must be Logged in To Reply To This Topic” below after logging in, just hit Ctrl+F5 and that should solve that problem.
October 22, 2014 at 2:16 pm #5026
No questions? Hmm. Maybe other people are having the same kinds of problems logging in that I’ve been having?
October 22, 2014 at 2:17 pm #5027
Hello,
Thanks for a great webinar! My question is about user stories. If a user story is made true to real life, there should be a lot of unexpected things happening in it. How do you think that should be described? Or is it simply another skill of the tester to incorporate such disruptions?
Regards
Johani Karonen
University of Skövde, Sweden
October 22, 2014 at 2:23 pm #5028
Hi, Johani…
Thanks; great question. My answer is “both”. One of the testing skills is the capacity to anticipate, identify, and imagine the possibility of problems not only with the product but also with our ideas about the product and the conditions of its use. So, in advance of direct interaction with the product, we question our ideas about how it could be misused, or even abused (don’t forget the hackers!). During interaction with the product, ideas will come to mind, unexpected things will happen, and distractions will arise. We can try to reduce distractions, but we can also consider how distraction can be productive for testing. So, to some degree, embrace distractions and disruptions; they’ll happen to real people in the real world.
Cheers,
—Michael B.
October 22, 2014 at 2:35 pm #5029
I like that you used the quadrant, and your reviewing of the change of the build and the sliding of the meaning of the words. Have you been dwelling on this a lot, or did it come to you suddenly? That’s a silly question; let me rephrase. You seem to do a lot of philosophical reasoning and battling with the words. Is it hard, or does it come easily? What does your thinking cap look like?
/ Johani
October 22, 2014 at 2:37 pm #5030
Hi Michael,
One thing I wasn’t clear on was scripted testing – were you describing it as one of the tools of a tester, which can be a restrictive practice, as we are missing the other abilities that we possess, like exploration, hunches around common issues, taking on different roles, or using the software in different ways?
However, while this work has value, communicating it to a company is difficult – whether to the CEO, CTO, a senior developer, or a product owner – as a lot of what they believe the testing initiative provides is a quantitative understanding of the work being undertaken, facilitated by scripted tests.
So, how do we start dispelling this long-held belief? A belief so strongly held that I’ve had CEOs explain to me how they used to do the testing, and how it is a relatively simple approach!
Regards
Andrew Newton.
October 22, 2014 at 2:48 pm #5033
Hi Michael, great webinar, thank you!
I’m a tester in an agile environment. I’m not in any one team but support all of them (not many, luckily), especially for everything that concerns exploratory testing and UAT.
We have a disagreement on the E2E view: for me, E2E tests are there to stress the entire system (they fit perfectly in the regression phase, better if automated); for my developers, they are only about checking whether a single module works as required (which I would call integration tests).
I know that as a tester I must be able to adapt to (almost) anything, but I would like to hear an expert opinion – for my own education first, and to argue in a friendly way with them afterwards. Thank you.
Marzio
October 22, 2014 at 2:52 pm #5034
Hi Michael,
As someone who does a lot of ‘security’ testing as part of my day-to-day work – exploratory charters and so on – it’s good to see these kinds of testing, traditionally driven out of the “Testing with Tools” section of the quadrants.
Whilst I do use tools to augment and assist my testing, I still need a fundamental understanding of what the tool is doing in order to interpret the results. That means being able to do the same sorts of things manually on a smaller scale, so that they can then be driven to a larger scale by a tool.
A lot of the security testing I do is manual first – SQL injection on a new form, for example. We might then use a scanner to automate the bigger stuff across the application.
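For a concrete picture, a first-pass probe of that kind might look roughly like the sketch below. The URL, form fields, payloads, and error signatures are all invented for illustration, Python’s requests library is assumed, and it should only ever be pointed at a system you’re authorized to test.

```python
# Rough sketch of a first-pass SQL injection probe against a hypothetical login form.
# Nothing here is a definitive technique; it just illustrates the "manual first,
# small scale" idea before handing the breadth of the job over to a scanner.
import requests

TARGET = "https://staging.example.test/login"   # hypothetical endpoint
PAYLOADS = ["' OR '1'='1", "'; --", "\" OR \"\"=\"", "admin'--"]
ERROR_SIGNATURES = ["sql syntax", "odbc", "ora-", "sqlite", "unterminated quoted string"]

for payload in PAYLOADS:
    response = requests.post(
        TARGET,
        data={"username": payload, "password": "x"},  # invented field names
        timeout=10,
    )
    hits = [sig for sig in ERROR_SIGNATURES if sig in response.text.lower()]
    print(f"payload={payload!r} status={response.status_code} signatures={hits}")
    # A database error leaking into the response, or an unexpected success or
    # redirect, is a prompt for deeper manual investigation, not proof of a bug.
```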
Anyway…thanks for the insight and the learning. Looking forward to sharing this slide deck with my team.
Cheers,
Dan (@thetestdoctor)
October 22, 2014 at 7:27 pm #5045
Hi, Johani…
The Agile Quadrants had been bugging me, at a kind of background-radiation level, for a long time. When James and I had a chance to work together after Let’s Test last spring, we went down a list of things that we wanted to work on. One of them was the relationship between Rapid Software Testing and Agile.
Probing the meaning of words, to me, is an essential activity and skill in testing. We need to avoid being fooled. We need to recognize how words are models of things in the world, and as such they express what they represent imperfectly and subjectively. That’s a big risk for software development, which mostly deals in abstractions. Models help us, but failing to come to agreement (or reaching only shallow agreement) on what things mean is the source of many bugs.
Talking about words and what they mean comes pretty easily to me and to James. Some people find it tedious. And indeed, sometimes it is tedious, especially when we’re struggling not only to express what we mean, but to figure out what we mean. But mostly, it is a fascinating, valuable, and exploratory process.
—Michael B.
October 22, 2014 at 7:42 pm #5046
Hi, Andrew…
>One thing I wasn’t clear on was scripted testing – were you describing it as one of the tools of a tester, which can be a restrictive practice, as we are missing the other abilities that we possess, like exploration, hunches around common issues, taking on different roles, or using the software in different ways?
Here’s something worth thinking about: how do you script a process of discovery? In skilled testing, we can talk about coverage (that is, where to look for bugs) and we can talk about oracles (that is, the multiple ways in which we might recognize bugs). For the last several decades, much testing has been focused on constrained procedures, mostly ignoring ideas about oracles and coverage. This would be a great idea if only we could get the bugs to follow those procedures too.
>However, while this work has value, communicating it to a company is difficult – whether to the CEO, CTO, a senior developer, or a product owner – as a lot of what they believe the testing initiative provides is a quantitative understanding of the work being undertaken, facilitated by scripted tests.
There can be no quantitative understanding without a qualitative understanding behind the quantity of what is being expressed. Testing work is not the production of test cases, any more than management work is the production of management cases. Good management is not a scripted activity but a cybernetic one (some management procedures in large, dinosaur-like companies notwithstanding). Testing is something that informs management; testers are extensions of management’s senses. Notice that your eyes, ears, nose, fingertips, and taste buds obtain information about the world without following a script.
The issue here, I believe, is that testers have not learned how to talk about oracles and coverage, and management doesn’t know how to ask about them. This puts us into a feedback loop where testers keep talking about procedures because management keeps asking about them because testers keep talking about them.
>So, how do we start dispelling this long-held belief? A belief so strongly held that I’ve had CEOs explain to me how they used to do the testing, and how it is a relatively simple approach!
I provide ideas for talking about oracles here http://www.developsense.com/blog/2012/07/few-hiccupps/ and coverage here http://developsense.com/articles/2008-09-GotYouCovered.pdf; http://developsense.com/articles/2008-10-CoverOrDiscover.pdf; http://developsense.com/articles/2008-11-AMapByAnyOtherName.pdf. In my experience, management doesn’t care very much about your procedure for doing things when you provide them with what they’re actually looking for: timely, relevant, significant information about problems that threaten the on-time, successful release of the product.
There are two steps to dispelling the belief. First, learn to test expertly and learn to describe your testing. Then, invite the CEO to sit with you and to see how your testing is vastly more sophisticated, powerful, and valuable than something that can be done with a “relatively simple approach”.
Cheers,
—Michael B.
October 22, 2014 at 7:49 pm #5047
Hi, Marzio
>Hi Michael, great webinar, thank you!
Thank you.
>I’m a tester in an agile environment. I’m not in any one team but support all of them (not many, luckily), especially for everything that concerns exploratory testing and UAT.
>We have a disagreement on the E2E view: for me, E2E tests are there to stress the entire system (they fit perfectly in the regression phase, better if automated); for my developers, they are only about checking whether a single module works as required (which I would call integration tests).
Is your disagreement about the activity or about the label for the activity? It seems to me that you have to investigate the product from both perspectives. If you can get agreement on what’s actually going on, the label doesn’t really matter much within your organization, as long as the labels don’t cause intolerable confusion for anyone. But it seems to me that you’d better be doing both kinds of testing work.
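To make the distinction concrete, here is a deliberately simplified sketch of the two perspectives, written in Python with pytest and requests; the billing module, the functions, and the endpoint are invented for illustration, not taken from any real project.

```python
# Two simplified checks illustrating the two perspectives being discussed.
# All names and endpoints are hypothetical; pytest and requests are assumed.
import pytest
import requests

from billing import calculate_invoice_total  # hypothetical module under test


def test_single_module_works_as_required():
    # The "does a single module work as required" perspective: exercise one
    # component directly, in isolation from the rest of the system.
    assert calculate_invoice_total(items=[10.0, 5.5], tax_rate=0.2) == pytest.approx(18.6)


def test_entire_system_end_to_end():
    # The "stress the entire system" perspective: drive the product through its
    # external interface, so the whole chain (API, business logic, database,
    # configuration) is in play, the way it will be in production.
    response = requests.post(
        "https://staging.example.test/api/invoices",  # hypothetical endpoint
        json={"items": [10.0, 5.5], "tax_rate": 0.2},
        timeout=10,
    )
    assert response.status_code == 201
    assert response.json()["total"] == pytest.approx(18.6)
```

Whatever labels the team settles on, the two checks are answering different questions, and neither one substitutes for the other.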
>I know that as a tester I must be able to adapt to (almost) anything, but I would like to hear an expert opinion – for my own education first, and to argue in a friendly way with them afterwards.
A good idea in these kinds of discussions is to ask what matters about the distinctions and what matters about the labels. Anybody who has a concern about the risk or harm that either a distinction or a label (or the lack of either one) might cause should be listened to very carefully and not dismissed casually. In development work, we’re not only learning how to build the product; we are also learning how to describe the product, how to test it, how to describe the testing, and how to refine our understanding of anything else in the project. That’s why we call it software development, and not software assembly.
—Michael B.
October 22, 2014 at 7:58 pm #5048
Hi, Dan…
>As someone who does a lot of ‘security’ testing as part of my day-to-day work – exploratory charters and so on – it’s good to see these kinds of testing, traditionally driven out of the “Testing with Tools” section of the quadrants.
I think that it’s a mistake to have a “testing with tools” section of the quadrants. It’s as weird to me as having a “cooking with utensils” section in a recipe book. Tools and their use pervade testing, just as they pervade software development generally.
>Whilst I do use tools to augment and assist my testing, I still need a fundamental understanding of what the tool is doing in order to interpret the results. That means being able to do the same sorts of things manually on a smaller scale, so that they can then be driven to a larger scale by a tool.
Not only so that you can interpret the results, but also so that you can use the tool skillfully.
>A lot of the security testing I do is manual first – SQL injection on a new form, for example. We might then use a scanner to automate the bigger stuff across the application.
I’m grumpy about the manual versus automated dichotomy. http://www.developsense.com/blog/2013/02/manual-and-automated-testing/.
I’d prefer that people express things more precisely. For example: “A lot of the security testing I do starts with SQL injection on a new form. We might then use a scanner to extend and accelerate the testing across the application.” Do you see how this changes things a bit, putting you (and not the tool) at the centre of your testing?
>Anyway…thanks for the insight and the learning. Looking forward to sharing this slide deck with my team.
Thanks for the kind words. Let me know if I can be of help.
Cheers,
—Michael B.
October 24, 2014 at 11:58 pm #5144
Thank you for a great webinar!
Michael, what is your opinion on the statement “the developer should write unit tests for his own code himself”, meant as “to let a tester (with coding skills, of course) write them for the developer’s code is not a good idea”? I have heard that kind of statement especially from agile teams.
October 27, 2014 at 9:09 am #5150
Hi Michael,
Thank you for the webinar; it underscores things I am doing as a tester in an agile team, and it is helping us drive the conversation about what kind of service the team wants to get from the tester.
In one of the quadrants, we came to the part about “Modelling in diverse ways”.
Could you explain what this is about? I have an idea, but I would not want to be wrong because of my lack of understanding of the English language.
My understanding is that as a tester I can explain examples or stories about how a certain feature will work, enriching the original description, which tends to be quite spartan.
The happy path tends to be explained, but that might not be the most interesting part of it all. Thank you!
October 28, 2014 at 6:24 pm #5202
@Alexei…
>Michael, what is your opinion on the statement “the developer should write unit tests for his own code himself”, meant as “to let a tester (with coding skills, of course) write them for the developer’s code is not a good idea”?
My answer to questions of this nature is to point out a question that has been begged: “Compared to what?” or “For what purpose?”

Why would a programmer write unit tests for his own code himself? Here are some reasons: 1) to obtain rapid feedback after a change; 2) to aid in refining the design (as happens with TDD); 3) to prevent bugs from being buried over the long haul; 4) to aid in documenting the intention of the code; 5) in the case of legacy programs, to aid in all four of the preceding items while learning about the program that is being maintained or fixed.

Why would a tester do that? 1) as an extra check on the developer; 2) to learn something about how the code works interactively; 3) because the programmer asked him to help in broadening the code coverage; 4) to help save work in cases where a programmer adamantly refuses to write the unit checks himself; 5) in an attempt by a (probably misguided) manager to free the developer for “more productive work”.

Then repeat the exercise with “Why would a programmer NOT write unit checks for himself?” and “Why would a tester NOT write unit checks?” (That’s an exercise I’ll leave to the reader.) But it seems to me that the reasons for having the developer (rather than the tester) write unit checks dominate the reasons not to, or the reasons to have the tester do it; and both my experience and the experience of many other people (notably Agilists) support that. Nonetheless, each programmer and each team is entitled to an answer that fits their context.
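As a minimal illustration of the kind of unit check being talked about here (the module, the function, and its behaviour are invented, and pytest is assumed):

```python
# test_discount.py -- a unit check kept next to the code it exercises.
# Running "pytest" after every change gives rapid feedback, and the test names
# document the intended behaviour of the (hypothetical) apply_discount function.
import pytest

from discount import apply_discount  # hypothetical module under test


def test_discount_reduces_price_by_given_percentage():
    # 10% off a 100.00 order should come to 90.00.
    assert apply_discount(price=100.00, percent=10) == pytest.approx(90.00)


def test_zero_percent_discount_leaves_price_unchanged():
    assert apply_discount(price=42.50, percent=0) == pytest.approx(42.50)


def test_negative_discount_is_rejected():
    # Documenting intent: a negative discount is a programming error, not a bonus.
    with pytest.raises(ValueError):
        apply_discount(price=100.00, percent=-5)
```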
—Michael B.
October 28, 2014 at 6:36 pm #5203
@Jokin
>In one of the quadrants, we came to the part about “Modelling in diverse ways”. Could you explain what this is about?
Modelling means producing simplifications of things that are more complex, with the goal of helping us to learn about them, study them, or evaluate them. Testing is based on models, as is most everything that humans do. We have no choice but to express the product and its interactions with people and the world as something simpler than they really are.
Models are always missing something, practically by definition; they’re simplifications, after all. To reduce the likelihood that we’ll miss something important, we represent the product in terms of its structure, functions, data, interfaces, platform, operations, and time. We model risks by modelling the quality criteria for the product (capability, reliability, usability, charisma, security, compatibility, performance, installability, and development-focused quality criteria); then we develop risk models based on threats to those quality criteria. We might express our mental models through diversified models in the form of artifacts: running narrative, mind maps, videos, test data, drawings and sketches, user stories, tables, flowcharts… That’s what it means to model in diverse ways—and of course, as a description, this description is a model too.
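For those who like something concrete, here is one rough way to jot such a model down as a simple data structure. This is only a sketch: the coverage dimensions and quality criteria are the ones listed above, and the risk notes are invented examples.

```python
# A rough scaffold for a risk model: product coverage dimensions on one axis,
# quality criteria on the other, with free-form risk notes where they intersect.
# Everything here is illustrative; a mind map or a table would serve just as well.

coverage_dimensions = [
    "structure", "functions", "data", "interfaces",
    "platform", "operations", "time",
]

quality_criteria = [
    "capability", "reliability", "usability", "charisma", "security",
    "compatibility", "performance", "installability",
]

# Invented example entries: (dimension, criterion) -> a risk worth testing for.
risk_model = {
    ("data", "reliability"): "imported records with missing fields corrupt reports",
    ("interfaces", "security"): "form fields may pass unsanitized input to the database",
    ("time", "performance"): "month-end batch jobs overlap with peak user load",
}

for (dimension, criterion), risk in risk_model.items():
    print(f"{dimension} / {criterion}: {risk}")
```

The point is not the format but the diversity: each representation makes some risks easier to see and hides others.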
—Michael B.
October 28, 2014 at 8:22 pm #5204
I really liked your idea of analyzing by asking “why” and “why NOT”, thank you very much!
In detail, I would disagree a bit that some particular reasons belong only to “why write myself”:
>1) to obtain rapid feedback after a change; 2) to aid in refining the design (as happens with TDD); 3) to prevent bugs from being buried over the long haul; 4) to aid in documenting the intention of the code; 5) in the case of legacy programs, to aid in –
1) and 3) are for me valid reasons for “why the tester writes” as well – if we do it right. Doing it right, for me, means letting the tester write the tests in the same repository, with the continuous integration loop informing both tester and programmer about failed and passed tests.
4) is sometimes interesting too. I have had some experiences as a tester where being unbiased when writing tests for the code, without looking much into the internals, revealed some hidden “unintentional” behavior – not always wrong behavior, but sometimes use cases that the programmer didn’t think of, even if they “work” fine.
Just as a model: if pairing two programmers is a valid option, why not try pairing a programmer and a tester, so that the programmer drives the coding part and the tester the unit-checking part? What do you think about this kind of pairing?
October 29, 2014 at 2:30 pm #5244
Hi Alexei…
>In detail, I would disagree a bit that some particular reasons belong only to “why write myself”:
The point of what I wrote is not to get you to agree. The point of what I wrote is to get you to think for yourself about what works for you and your programmers, and the rest of your context. Whether you agree with me or not is largely irrelevant, since I’m not the boss of you; nor is anyone else.
>Just as a model: if pairing two programmers is a valid option, why not try pairing a programmer and a tester, so that the programmer drives the coding part and the tester the unit-checking part? What do you think about this kind of pairing?
I think that if you think it’s worthwhile, you should try it. I think you should set up some experiments (choose a fault-tolerant situation in case things go badly). See what happens—what works and what doesn’t—and observe and interview the participants. Form some theories on how to maximize the good stuff and minimize or eliminate the bad; then apply those theories through more little experiments.
Cheers,
—Michael B.