Terrible Test Cases
This topic has 15 replies, 9 voices, and was last updated 8 years, 8 months ago by Ben.
February 17, 2016 at 8:38 pm #10874
Hi all,
Thanks for having me!
I’ve come on here initially because I switched companies just before Christmas to a far more interesting role. I’m loving it, but the person I work alongside writes the worst test steps for manual functional testing I have ever seen. Functions like updating numerical values in a field just to make sure the system recalculates the quantities correctly have been written in such a long-winded and indecipherable manner that it’s really getting on my nerves. I’ve politely asked why said person has to be so immoderate with such basic functions, but I received short shrift.
This, unfortunately, is what happens sometimes when you enter a project halfway through the development life cycle and you have to work to other people’s styles before you can write your own steps!
Should I suck it up or can you all please feel sorry for me and share similar experiences?
Thanks in advance.
G
February 20, 2016 at 11:41 am #10928
I’m sorry that you have to work with long-winded manual test cases about calculations 🙂 It seems these could be automated(?)
Try working with your own style and prove that it’s more interesting. Oh, and also consider what your coworker’s motivation for doing this is; that might reveal insights into how to motivate her.
When it comes down to it, it’s probably a people problem.
/J
February 26, 2016 at 5:10 pm #11035
Writing in an indecipherable way is often the sign of the knowledge hog, i.e. someone who doesn’t want to share their knowledge, either because: 1) they don’t really know anything, 2) they think they are better than others, or 3) they are trying to protect their position (if you can’t understand my work, you can’t do it, so you need to keep me to do it).
Giving you ‘short shrift’ seems to be backing up my assumption.
What role do you have in the team?
If you are a lead then you have the opportunity to put into place some test script standards and reject scripts that do not match it.
If you are a peer, are you suffering because you have to execute the tests this tester has written? If so, then you need to ask for clarification. Do it enough times and they may change their attitude, or complain, and then you get to put your point across.
As a leader I have had to deal with poor script writing in the past. It was extreme: the person was writing tests without sufficient info to run the test, and when you did work it out, it wasn’t even covering the requirement. He was given opportunities to improve. We had documented standards that he was pointed at; we explained what we felt was wrong; we explained what we expected to see; we gave him every chance. Then we had to let him go due to inability to perform the job. This was made easier because he was a contractor, but the process is still the same. The outcome may be different, and you have to be very careful with permanent staff who are not performing. It’s harder to ‘change out’ a permanent staff member who can’t or won’t perform.
If you are a peer, and new, then you have the ‘excuse’ to fall back on that you have worked differently in the past, that this style of test script is not what you are used to, and that some help in understanding them would be good. During that you can offer alternatives, like: wouldn’t this “….” do the same?
Good luck, I don’t envy you at all.
February 26, 2016 at 11:07 pm #11036
Hi Ben,
Changing mindsets and ways of working is not an easy job. What I see as a possible solution is using some defined standards for writing the test cases, including a mandatory review phase. Sometimes people are not happy when they have to deal with complex procedures around their work inside an organization, but such standards set a common way of working, which matters because people are very different and have different styles.
Since you are new to the project, you can tell the team or leader that you are used to designing tests in a totally different style and that something must be done in order to continue the work. The team can then agree to establish a common way of designing test cases by following some rules:
– tests must have a maximum number of steps: to be easier on the person who reads them, tests should be as short as possible.
– a step must have a maximum number of words: too many details are hard to follow, and if a step requires many actions it can be split into simpler steps.
– pictures, screenshots, diagrams etc. can be added to reduce the amount of wordy description, which also helps the reader understand the tests.
– a common set of phrases for describing behaviors can be used.
– when a new test is designed it must be reviewed by other team members, to be sure that everything is understood.
It may look like a procedure with many rules to follow, but it could be a solution in your case.
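Incidentally, the step and word limits listed above could even be checked automatically before the human review. A minimal sketch in Python, assuming a test case is just a list of step strings; the limits and structure here are illustrative assumptions, not an existing tool:

```python
# Hypothetical lint-style check for test case rules: maximum number of
# steps per test, and maximum number of words per step. The limits are
# arbitrary examples a team would agree on.
MAX_STEPS = 8
MAX_WORDS_PER_STEP = 15

def lint_test_case(steps):
    """Return a list of rule violations for a test case's steps."""
    problems = []
    if len(steps) > MAX_STEPS:
        problems.append(f"too many steps: {len(steps)} > {MAX_STEPS}")
    for i, step in enumerate(steps, start=1):
        words = len(step.split())
        if words > MAX_WORDS_PER_STEP:
            problems.append(f"step {i} too wordy: {words} words")
    return problems

case = [
    "Create new event profile",
    "Add numerical value to distance field",
    "Calculate against height field",
]
print(lint_test_case(case))  # [] -> no violations
```

A check like this could run as a pre-review gate, leaving the human reviewers to judge only clarity and coverage.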
I am also thinking about automation; maybe the tests (or at least some of them) can be automated, so these issues with writing manual tests can be avoided.
Regards,
Alin
February 27, 2016 at 12:26 am #11037
Never underestimate the power of simple changes, like templates. Providing a structure is helpful to people who need that support to get to all the relevant details. Checklists are another example, and as @MichaelBolton might argue, all testing is really check-listing.
You can also try to engage in pairwise testing. Sitting down with a developer to better understand their mindset is always helpful in facilitating communication.
February 28, 2016 at 9:21 am #11040
Hi Ben, I’d say this is one of those situations where you’re going to have to take a step back and look at the bigger picture. There could be a valid reason he’s doing things that way. Are you finding his tests terrible because they’re difficult to run, or because they are unclear? Be more specific about what the problems with them are. You’ll have to make the call whether you want to take the direct route to resolve it, or show him other ways by actually rewriting his tests the way you see fit, then have a session to review both yours and his and see what changes you can both agree on. Chances are he won’t just do it your way. You’ll basically have to get his buy-in by selling other options to him.
I’ve had a similar situation, but with a defect tracker. I was assigned to help a BA who was the defect co-ordinator on a project. He was using a spreadsheet provided by the client to manage defects, but all the reports etc. in the spreadsheet were broken and the steps for updating it were so long it took almost half a day of effort. My suggestions to fix and improve it fell on deaf ears. I was then tasked by the client to merge their spreadsheet and ours, and I used this opportunity to merge, fix and improve it, producing two copies (a merged-only copy and an improved copy).
This led to a heated argument, but in the end he saw the improvements and benefits, as it only took half an hour to make updates and the graphs worked. It caused a bit of conflict between us throughout the project, but it’s an outcome I expected and accepted. I later discovered he didn’t want to make any changes because it was the client that provided the spreadsheet, and he didn’t want to highlight issues directly to the client as the project was already delayed. His own initial attempts to use a different spreadsheet had also fallen on deaf ears.
So basically: identify what the actual problem is; saying they are bad tests is not enough to justify changing them. Identify solutions, include his solution in the list, and discuss all the options with him, plus someone more senior if possible.
February 28, 2016 at 6:22 pm #11042
Thank you all for your input. Some good points from all of your individual experiences. I really appreciate it.
Four months down the line, now that I’m a lot more familiar with the program under development, I’m in a much better position. I have had the unenviable task of completing over 500 manual tests in the last two months. It’s been a chore, but it has given me the chance to become quite intimate with the product, so I can now see what is written correctly and what isn’t.
I guess we can all take for granted how we perceive the details of a situation, and how we write them down so that another person can follow. Communication won the day here. I was able to politely point out that some of these scripts were pretty hard to follow when they really didn’t need to be. My colleague admitted that he was not 100% sure that what he was writing was correct either, so I called a meeting with the entire team with a view to providing greater clarity and communication.
There are many new faces all trying to get to know each other and it has been a slow process of breaking the ice so we can all build working relationships with each other.
To this end I have persuaded the company to send us all on a team building event!
Maybe shooting each other with paintballs and playing Zorb Soccer will bring people together! 🙂
February 28, 2016 at 10:14 pm #11043
Awesome. Good job 🙂
March 1, 2016 at 7:31 pm #11064
I have some experience of this sort of thing too, and I also don’t like test cases which are written to an extreme level of detail (however, I accept that there may be occasions when this is appropriate).
A bit of a pet hate of mine is when the test steps themselves aren’t pertinent to the test condition, i.e. what we’re actually testing. To give a very basic example: if we have a test to check that an XML message is produced by a system (for pick-up by an interfacing system) when a new record is created, I’d write the test as something along the lines of:
1) Create new record and submit
Expected Result: XML message generated and passed to XXX location
I don’t like it when, using the above as an example, the test steps give the details of how to actually create the record (and include specific data to use), as these steps aren’t what we’re testing. They are ‘enabling’ or ‘preparation’ steps: we’re not testing that a record is created, just that the XML is generated once the record has been created.
We probably should have a script for someone to follow IF they didn’t know how to create a record, but we can have a link to this for when/if it’s required.
As part of the test evidence we can capture the specifics of the record used in the test, but being too prescriptive can stifle the free thinking of the testers.
Maybe I’m at the other end of the spectrum though and I’m not specific enough haha 🙂
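Paul’s condensed test maps naturally onto a small automated check. A sketch of the idea follows, where `create_record` and the pickup directory are hypothetical stand-ins for whatever the real system provides; the point is that only the XML generation is asserted, not the record-creation mechanics:

```python
# Sketch of Paul's test as an automated check. create_record() is a
# hypothetical stand-in for the system under test: creating a record
# should drop an XML message into the interface pickup location.
import tempfile
import xml.etree.ElementTree as ET
from pathlib import Path

def create_record(outbox: Path) -> Path:
    # Stand-in: the real system would do this as a side effect.
    msg = outbox / "record_001.xml"
    msg.write_text("<record><id>001</id></record>")
    return msg

def check_xml_generated(outbox: Path) -> None:
    """Assert only the outcome under test: a well-formed XML message
    appears in the pickup location."""
    messages = list(outbox.glob("*.xml"))
    assert messages, "no XML message in pickup location"
    ET.fromstring(messages[0].read_text())  # raises if malformed

with tempfile.TemporaryDirectory() as d:
    outbox = Path(d)
    create_record(outbox)
    check_xml_generated(outbox)
    print("XML generated: check passed")
```

Note that the check deliberately says nothing about how the record was created, mirroring Paul’s point that preparation steps don’t belong in the test itself.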
March 1, 2016 at 11:52 pm #11065
Yes, exactly Paul. For the app we are developing, all I need is:
1) Create new event profile
2) Add numerical value to distance field
3) Calculate against height field
Expected: Trigonometric formula will calculate distance against height
Actual: Results displayed
What I repeatedly get are unnecessary process highlights. I just want the steps you need to take to say yes or no. Pass or fail.
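A recalculation check like this is also a natural candidate for the automation suggested earlier in the thread. A minimal sketch; the real formula is whatever Ben’s application specifies, so the right-triangle relation used here (height = distance × tan(angle)) is purely a hypothetical stand-in:

```python
# Hypothetical sketch of automating the recalculation check. The
# relation height = distance * tan(angle) stands in for whatever
# trigonometric formula the real application uses.
import math

def expected_height(distance: float, angle_deg: float) -> float:
    return distance * math.tan(math.radians(angle_deg))

def check_recalculation(distance, angle_deg, displayed_height, tol=1e-6):
    """Compare the value the UI displays against the formula."""
    return math.isclose(displayed_height,
                        expected_height(distance, angle_deg),
                        rel_tol=tol)

# 100 m at 45 degrees should give a height of 100 m.
print(check_recalculation(100.0, 45.0, 100.0))  # True
print(check_recalculation(100.0, 45.0, 120.0))  # False
```

Once a check like this exists, re-running it after every change to the numeric fields is free, which is exactly the case where long-winded manual steps hurt the most.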
March 2, 2016 at 11:33 am #11070
Hi,
Should I suck it up or can you all please feel sorry for me and share similar experiences?
what about checking the test cases with inspections based on simple rules, like
– simple
– complete
– clear
like the definitions in https://wiki.openoffice.org/wiki/The_Three_Golden_Rules_for_Writing_OpenOffice.org_Specifications, which can also be adapted to test case specifications? Define exit criteria, and the ones writing bad test specifications need to fulfill your rules too.
More documentation needed for doing inspections?
http://www.malotaux.eu/nrmc.php?id=inspections
Yours,
Jogi
March 2, 2016 at 8:27 pm #11078
Hi, Ben…
Yes exactly Paul. For the app we are developing all I need is:
1) Create new event profile
2) Add numerical value to distance field
3) Calculate against height field
Expected: Trigonometric formula will calculate distance against height
Actual: Results displayed
What I repeatedly get are unnecessary process highlights. I just want the steps that you need to make to say yes or no. Pass or fail.
I hasten to point out that, to me, test cases, “pass or fail” and “actual vs. expected” are pretty terrible ways of thinking about testing. Test cases constructed like this bias the tester in the direction of “making sure that the product works”, rather than towards an investigation of the product, with a focus on identifying problems.
A product can pass a test based on some expectations, and still have value-destroying problems not covered by those expectations. A product can fail such a test, too, but the actual vs. expected paradigm distracts testers and others from problems not described by the test case. Notice, for example, that your example doesn’t specify that the results displayed should be accurate, or precise; nor does it note the oracle by which you might recognize a problem with the result. Suppose the calculation happened correctly, but was annoyingly slow? Hard to read? Was truncated, instead of rounded? Did not clearly report a problem with problematic input?
Another issue here is that test cases make certain things explicit, and leave out a ton of other things. Notice, for instance, that in your account above, there’s no notion of any particular risk, and no notion of the kinds of things that might constitute a problem.
See “Test Cases Are Not Testing: Toward a Culture of Test Performance” by James Bach & Aaron Hodder (in http://www.testingcircus.com/documents/TestingTrapeze-2014-February.pdf#page=31)
Far better (and less expensive, too), I would argue, would be to
1) Identify a set of activities for the tester to perform, typically (but not always) looking for problems that might threaten the value of the product to some person who matters.
2) Identify oracles (ways of recognizing a problem), both general to the product, and specific to particular functions.
3) Instruct the tester to record his activities generally, with a specific focus on the activities that revealed problems. There’s a good example of this at http://www.developsense.com/examples/PCEScenarioTestPlan.pdf
I disagree with Paul’s advice above, wherein he says “We probably should have a script for someone to follow IF they didn’t know how to create a record.” I’d say that if someone doesn’t know how to create a record, it’s far more valuable to have them try it (perhaps with someone observing, to assist when absolutely necessary) and to report on the problems they have doing it. That is: a script won’t solve the problem of lack of requisite skill.
In other words, you seem to be wanting to replace your colleague’s unhelpful and overly structured test cases with almost-as-unhelpful and slightly-less-overly-structured test cases. But testing is not about test cases, and showing that the product can pass or fail them; it’s about a diligent search for problems.
You might find this helpful, too: http://www.satisfice.com/tools/htsm.pdf
Cheers,
—Michael B.
March 3, 2016 at 8:46 pm #11082
I disagree with Paul’s advice above, wherein he says “We probably should have a script for someone to follow IF they didn’t know how to create a record.” I’d say that if someone doesn’t know how to create a record, it’s far more valuable to have them try it (perhaps with someone observing, to assist when absolutely necessary) and to report on the problems they have doing it. That is: a script won’t solve the problem of lack of requisite skill.
@michaelabolton Though it may not have come across that way, I generally 100% agree with what you are saying.
If the tester is not actually testing the record creation (or whatever functional process), and it is a means to an end (in my example, the creation of the XML is what we really want to test/check), then the script can serve as a training aid, maybe?
However, I do feel that scripts generally don’t encourage free thinking and deviation from a path, and also don’t provide greater understanding of and context for your actions (which in turn limits defect and issue reporting): the person follows a set of instructions without really knowing why the steps, or the specific data etc., are being used. Over the years I’ve unfortunately lost count of the number of defects missed by others in places I’ve worked because of slavish following of scripts.
Overall, I really find this topic fascinating, and it’s amazing how many people you encounter who still feel that highly detailed scripts are “best practice”.
March 4, 2016 at 12:48 am #11083
Michael, thank you for taking the time to give me your view. I really appreciate it. I have read a great deal by both yourself and James Bach about testing with an emphasis on searching for problems, rather than simply going through a robotic, one-dimensional process of ticking a box when a function adheres to a requirement. I wish it weren’t the way for me, but I’m afraid it is. This is how the company I work for expects things to be done: over 800 requirements thrown into Quality Center with poorly written descriptions and even worse test cases, with the aim of, you guessed it, ploughing through over 800 “performance tests” using “pass or fail”, step by step!!
Bugs are discovered purely by accident, and they whoop for joy when one is found, as they seem to think this is the best way to gain confidence that the AUT does what is specified.
Sure, it may garner some sense of confidence that what is being developed works as required, but it does nothing to prove that it is bug-free.
They have no plans to test it for what they don’t want it to do.
I sincerely don’t want to write scripts like that. I want a less annoying way of dealing with an already frustrating process that is adding absolutely no value to the testing life cycle I am a part of. It’s in the medical industry, and it seems very stuck in its ways and unwilling to try any new ways of thinking.
March 7, 2016 at 9:37 am #11094
Hi, Ben…
You don’t want to gain confidence in the application; that’s the work of marketers. http://www.developsense.com/blog/2013/09/very-short-blog-posts-2-confidence/
You might want to look into the work of James Christie (who has worked as an auditor) and Griffin Jones (a colleague of James and me who consults with the medical industry).
It behooves you to read the regulations for yourself. This is hard, tedious work, but what you’ll find is that there is nothing in (for example) FDA regulations that requires this kind of testing. The FDA has no problem with exploratory work; they advocate it. http://www.satisfice.com/blog/archives/602
The script-obsessed may be telling you stuff that the regulations don’t bear out. Knowledge is power.
I’d go so far as to say that if a heavily and poorly scripted approach is actively inhibiting your organization’s ability to find bugs, you’re in an interesting ethical dilemma: eventually, some person may come to harm because of a bug that you didn’t find because you were doing busywork. You may want to consider discussing this with your testing clients. Maybe these will help: “Braiding the Stories (Test Reporting Part 2)” (http://www.developsense.com/blog/2012/02/braiding-the-stories/) “Delivering the News (Test Reporting Part 3)” (http://www.developsense.com/blog/2012/02/delivering-the-news-test-reporting-part-3/)
At some point, you’ll have to decide what your bottom line is: do you really want to be a part of this organization?
—Michael B.
March 7, 2016 at 9:42 pm #11099
Great reads there, @michaelabolton, thanks again.
Sometimes I do get so frustrated that I think of leaving for somewhere else, but the product fascinates me so much that it makes me change my mind and stay. It’s the only reason I love the job as a whole. I am determined to stick it out until the end and try my utmost to get them to change their ways and adopt a fresher way of thinking: to take more notice of quality and risk. I try to make a case for the more heuristic methods of testing, and I even shared James’s video from NY University for AST where he talks about the Lévy flight. But I run out of ammo when they quiz me on how these heuristic methods of exploratory bug hunting provide a better way of proving to senior management and the client that the software ‘works as specified’.
This is where I face palm myself!
It seems their entire testing ethos is just to prove it works. Just to please the bosses. Just to brown-nose a little more.
Is checking the functions against the requirements really a ‘can’t live without’ aspect of software testing?
For me the testing life cycle has always been: unit, integration, system and acceptance testing.
Not just unit testing and functional testing, then off you go to the client.
How can I make them understand?? 🙁