Rikard Edgren gave the opening keynote at EuroSTAR 2022. This post is a collection of some of the questions he was asked at the conference that he did not get to answer, with an introduction from Rikard.
I got loads of questions at the EuroSTAR conference about my talk “My Essence of Testing: Understanding Relations”. Only a few could be answered in Copenhagen, so here are some more answers.
If you weren’t there, you might want to read “In search of the potato”, “Software Quality Characteristics 1.0” (Emilsson/Jansson/Edgren), “37 Sources for Test Ideas” (Emilsson/Jansson/Edgren), and “Binary Disease” for context.
Should we create requirements from the testing potato when we as testers find important areas outside the documented parts?
Yes, that’s a good idea. Our learnings should be made available to others who are interested. I am usually not in charge of writing requirements, but I often suggest more items to those who are (who sometimes add them, and sometimes don’t). Another way is to add additional information to a JIRA ticket, or to mention things in other ways.
Many of the things in the potato outside of requirements are good to know and/or difficult to define properly in the way requirements tend to be written, so they will only be transferred to requirements sometimes. One thing I would like to have is a “requirements appendix”, where all the additional good information that didn’t fit as a requirement is documented.
You say fast learning is important. Is shallow learning a problem?
Could be, but shallow learning is better than no learning, and it is a path to deeper learning. Sometimes shallow learning is enough, but it is very difficult to know when it isn’t.
You talked about important relationships. Are there unimportant relations that you automatically overlook?
Yes, we do that all the time. But which things I overlook, I don’t know. If I miss something important, hopefully I get a second chance (which I might also miss), or someone else will catch it.
How do you decide how much, and what, to automate? (assuming the software is testable)
At my current work we are doing “late automation”, meaning that we build automated regression test suites after we have tested the software a lot, bugs have been fixed, and we know a lot about what is important, and what might hold more risk for the future.
We are doing acceptance testing, and build test suites at the GUI level with Selenium, Cucumber, Docker, and Jenkins.
We cover broad scenarios, and go deeper when we feel it is worthwhile. How much we do is a balance of available time and what is better covered by other people, or by other test methods. It is difficult to give a generic answer, but when you are heavily involved, it usually sorts itself out while building the tests.
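To illustrate the broad-first, deeper-when-worthwhile balance described above, here is a minimal sketch of a late-automation suite runner. The scenario names, the broad/deep tagging, and the runner itself are invented for illustration; the real suites are GUI tests built with Selenium/Cucumber, which this stdlib-only stub stands in for.

```python
# Hypothetical sketch of "late automation": broad regression scenarios always run,
# deeper scenarios only when we decide they are worth the time.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Scenario:
    name: str
    check: Callable[[], bool]  # stands in for real GUI steps (e.g. Selenium)
    depth: str = "broad"       # "broad" runs always; "deep" runs when worthwhile


def run_suite(scenarios: List[Scenario], include_deep: bool = False) -> Dict[str, str]:
    """Run broad scenarios always, deep ones only when include_deep is set."""
    results = {}
    for s in scenarios:
        if s.depth == "deep" and not include_deep:
            continue  # skip deep coverage when time is short
        results[s.name] = "OK" if s.check() else "NOT OK"
    return results


# Illustrative scenarios; the lambdas stand in for real checks.
suite = [
    Scenario("login and see start page", lambda: True),
    Scenario("search returns results", lambda: True),
    Scenario("search with edge-case characters", lambda: True, depth="deep"),
]

print(run_suite(suite))                     # broad scenarios only
print(run_suite(suite, include_deep=True))  # broad and deep
```

The point of the tagging is that the choice of depth stays visible in the suite itself, instead of being an ad-hoc decision each run.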
Any specific tips for test in agile context?
As a tester you are often pressed by short sprints, so the turnaround time for testing needs to be short.
We all work in some kind of agile context, don’t we?
Exploratory testing when you know a lot is by far the fastest way to test software pretty well. As for the learning, a lot happens automatically along the way, and the big challenge is to take the time to learn a bit more when you need it in order to test well. I can rarely schedule this, there are always matters to attend to, so instead I take some “learning time” when I realize that I really have to.
I think most of us would be more productive in the long term if there was more slack in our schedules, but that is rarely the case. Short-term delivery seems over-popular these days, and it is a pity.
How do you document your testing?
It depends on what documentation is needed. Tests run by machines are documented by the code, results, and log files. Planned tests run by me are most often documented by a pre-written test idea (a one-liner) and the result (OK, OK??, ??, NOT OK, <details> etc.). Tests I just do are typically not documented, unless the findings are interesting. If the testing needs documentation, I write a session charter (SBTM) and document what I do with appropriate granularity. I value information gathering over documentation no one might read.
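As a small illustration of the one-liner style above, here is a hypothetical log of pre-written test ideas with the result vocabulary from the answer (OK, OK??, ??, NOT OK), plus a tally. The test ideas themselves and the log format are invented for this sketch.

```python
# Hypothetical one-liner test-idea log using the result vocabulary
# OK / OK?? / ?? / NOT OK described in the answer above.
RESULTS = ("OK", "OK??", "??", "NOT OK")

log = [
    ("saving a draft keeps unsaved field values", "OK"),
    ("export handles names with diacritics", "OK??"),   # worked, but output looked odd
    ("concurrent edits from two sessions", "??"),       # not yet investigated
    ("upload of a 0-byte file", "NOT OK"),              # bug filed
]


def summarize(entries):
    """Count how many test ideas landed in each result category."""
    counts = {r: 0 for r in RESULTS}
    for _, result in entries:
        counts[result] += 1
    return counts


print(summarize(log))
```

The middle categories (OK?? and ??) are what make the vocabulary useful: they record doubt without forcing a premature pass/fail verdict.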
Does your model of testing change with every product you test? Do you have different models?
Yes and no. Each product has its own models, but some of them might be shared.
I work with an SOA architecture for Swedish health care, and the overarching model of information exchange is the same.
My model of what a service contract is, and the basic rules are the same; but the details are different.
The interesting factors for citizens (age, county, secrecy, first-timer etc.) are the same, but might mean slightly different things for different services.
Within a product, the models can differ as well, and they can also change over time.
Have you considered ISO 9126 in your list of quality aspects?
Yes, we looked at all lists of quality attributes/characteristics we could find to get inspiration. Your suggestion even made it to the attribution notes: “inspired by James Bach’s CRUSSPIC STMPL, ISO 9126-1, Wikipedia:Ilities and more…”
Do you think ‘charisma’ and the User Experience of the product are similar or completely different aspects?
I think they are very connected. User Experience can mean many things, but it is often Usability + Charisma.
But we believe the Charisma word has more charisma to it.
How do you treat someone that has the “binary disease”?
I don’t think it is possible to treat someone else with this disease; they have to do it themselves, starting with acknowledgement. But you can of course lead by example, and do your best so people aren’t forced into this kind of thinking even more. The disease doesn’t come from people, it comes from our tools and environment.
Do you agree that good communication skills are one of the most important skills of a tester, and also one of the most demanding skills to master?
Can you be a good “tester” in a business context you don’t “like”?
I think it is possible, but I could never do it. It would not help my intrinsic motivation to learn more and do good.
How high do you rank Accessibility in your list of quality criteria (which is great BTW)?
I don’t rank the characteristics, their importance differs for different applications, and they also change over time.
Accessibility has become more important in my and many others’ contexts, so if we redid the list it would probably need several bullets. Technical Accessibility (WCAG) could be one, and it is relevant to consider problems with sight, hearing, cognition, and more.
How do you explain the “complicated” model in your head to stakeholders and auditors?
I don’t need to. They want to have information about the quality, and are not very interested in how we get there.
One model I do want to share with them is “quality goals”, describing what is important to them. This starts with the stakeholders, but since it is generally “hidden”, I try to extract it through questions and conversations.
Do you have thoughts on the future of manual testing? (As automation becomes more and more important.)
Michael Bolton would be very upset with me if I wrote “manual testing”, since all testing is done by humans (even though a tool might do the actual execution, that is part of testing).
But I guess you mean testing primarily executed by humans, where tools at most act as support. This will stay for as long as we create software for people. The human mind is astonishing; it can see things, and understand things by testing, and I can’t see that being replaceable by tools in the foreseeable future. Software created for robots will have less need for this, though (APIs used by AI might be the first examples).
You mention important/not important as a binary choice. How do you determine what is most important?
I’m sorry if I inflicted binary thinking on you; I do believe that importance is one of the greyest scales we have. There will be many things that are important, and I don’t think there is a need to make a prioritized list. But you always have a test strategy (documented or not), you will have to make choices on what to spend most time on, and you should make sure these choices are anchored.
But the most important things might not get most of your testing (because it is covered elsewhere, or has less risk). At a given moment, you might need to decide what is most important right now, and it is hard to choose between several pressing matters. I solve this by going for a run.
Referring to your slide with requirements, important, and everything: “important” is really subjective, and you purposely skipped some of the requirements that are usually made by the business. Very brave, but is it also safe?
I suppose you refer to the “requirements” part that is outside the “important” of the potato.
To be honest, I have been in the position where I didn’t skip that testing, even though I wanted to. There are two reasons for this: one is that the discussions it might lead to could cost more time than the testing; the other is that I could be wrong.
What quality metrics can we use to help the leadership team understand the quality of the product?
I don’t like metrics, and avoid them as much as I can.
I have never seen them do any good (not saying that measurements like response time aren’t useful).
I try to inform with words that people understand, or by showing if possible.
One example is usability/accessibility where people often won’t understand the problem until they see someone experiencing it.
What about the most important relation between testers and developers? Not important or obvious?
Obvious. Good catch, the ”obvious” is sometimes forgotten.
Thank you all for attending, asking good questions, and reading this, it has been a pleasure!