Should you put too much detail into test cases?
This topic has 19 replies, 18 voices, and was last updated 6 years, 4 months ago by Monica.
November 17, 2015 at 10:54 am #10067
How valuable is putting as much information as possible into test cases? If, as some suggest, test cases are the most important documentation, should we be adding as much to each test case as we can?
Should you be as detailed as possible with test cases and include details like the scripting language, the programs used, and so on?
Do you think your management would prefer you to not spend time on test cases and do the bare minimum?
If that’s the case, where do you draw the line when completing test cases? Do you try to find a middle ground, or do you think that testers don’t need to put as much detail into test cases as others argue?
I am sure there are members here with a few different approaches to this issue. What’s your approach?
December 23, 2015 at 9:36 am #10397
My opinion on this is: do not write test cases with too much detail; in some cases, don’t write test cases at all. What I have experienced with (detailed) test cases is that testers often lean back and only work through those test cases, not looking to the left or to the right. What I prefer now, as a first step, is collecting test ideas for a certain feature, change request, bug fix, etc. in a mind map, maybe with some data examples.
So, instead of “Write x test cases for feature A” I prefer “Develop test ideas to find important information about feature A”.
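For illustration, a flattened mind map of test ideas for a made-up feature (not one from this thread) might look like:
- Feature A: password reset
- Happy path: valid email, reset link received, new password accepted
- Expired or already-used reset links
- Data examples: very long passwords, Unicode characters, leading/trailing spaces
- Repeated reset requests for the same account (is there rate limiting?)
Each idea is a pointer for investigation, not a script to follow.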
January 8, 2016 at 12:20 pm #10474
I’m a developer who has come a long way in learning the benefits of embracing manual testing.
My limited experience in this area tells me this:
Manual testing is underutilised if it is used only to execute rigid, narrow test cases. Testing is an opportunity to learn about and get intimate with your product. This is mercilessly thrown away by cold, mechanical, narrowly-defined test cases that only a robot would be fulfilled following.
The flip side of this is that a set of test cases should sufficiently cover a product. The less precisely test cases are defined, the less certain you can be that you’ve thoroughly tested your product.
I think it’s a balance of:
- at least specifying the key areas of your product
- giving space to testers to use that as a starting point to really learn the product
- instilling a culture of exploration within the team.
@buschfunk:
“Develop test ideas to find important information about feature A”
I like this definition. “Important information” can be so much more than just a list of bugs.
January 8, 2016 at 12:43 pm #10475
It all depends on several factors:
1- The working practices and expectations of your company.
– I’ve worked in companies that value the testers and trust them to do a good testing job. I’ve also worked – too often – in companies that don’t trust the testers and need to see what they are going to do, need evidence to support coverage, need to see what their money is going to be spent on and evidence that it is being well spent; i.e. if we estimate 4 weeks to test, what is that time being spent doing?
2- A formal environment needs evidence of requirement coverage, whether that is contractual, legislative or otherwise.
3- An informal approach (i.e. not detailed scripts, but still governed and directed) does give better defect detection and risk assessment, but you do need to manage and assess the testing. Almost test the testing, to ensure it is providing coverage and that the testers are not all focusing on the same thing, or not testing effectively.
Thinking about it as I write: it depends a lot on whether you or your company trust the testers. If you have good testers who are worth their weight and are effective, then as long as there are guidelines and directives they will work better without detailed scripts. If, however, you have less than optimal testers, or not real testers, then you may need to give them more detailed instruction on what is expected and what they have to do.
e.g. in one company we employed three levels of test professionals. 1- Leads, who set up a team and project and managed it. 2- Senior Testers/Test Analysts, who did most of the work up front, wrote formal, detailed test scripts during ad hoc testing, and found the majority of the bugs early during ‘Test Prep’. 3- An army of Test Engineers/Technicians, who took the scripts written and ran them for each release whilst levels 1 and 2 looked at the next project. This works for a large team and can keep costs down: you don’t need to pay the engineers the same as you would an analyst, and you expect less of them. It also gives a good route into testing.
There is also another dimension – UAT. In UAT I prefer not to give detailed scripts – and fight anyone that wants me to. This is because you can’t test like a user, or ask a user to test and accept, if you have scripted their every move. They won’t have scripts in the wild and will do different things. UAT must be as live-like as possible in both the environment and the actions of the user.
January 8, 2016 at 1:01 pm #10476
Neither managers nor clients accept a “no test cases at all” approach. It is most often taken to mean that not enough testing has been done on the product.
Our approach is to write at least common test cases for the basic functionality of the product as well as for critical features, complementing them with automated cases (if applicable) and adding exploratory testing as a major part of the effort. In most cases this approach works for us, unless we are forced to write complete, intensive test cases to satisfy the client.
January 8, 2016 at 2:47 pm #10482
So, you are asking what my approach is.
Here is my approach to writing test cases:
1. I first need to know:
– What is the scope of the project?
– How much does it cost? How much is to be spent on testing?
– How long is the project? How long must we spend on testing?
– How much resource do we have for testing?
Eventually, we finalize:
– Resource – senior / junior
– Man-days – senior / junior
In some companies, seniors think up the test cases and juniors execute them.
If it is a fast-track project, I would suggest letting just one resource both design and execute the test cases; at times, executing test cases he wrote himself might prompt him to discover areas he had missed.
This planning must not take long.
And, most importantly, both the senior and the junior – the test case creator and the tester – must have a clear objective, aligned to produce ‘something’ that the client wants. Or, as we say, they must UNDERSTAND THE FEATURE, AND PRODUCE THIS FEATURE TO THE BEST EXPECTATION OF THEIR CLIENT.
Always, when we are asked to develop a FEATURE, we need to learn, to study, and to UNDERSTAND it, in order to give the best to our client.
2. I agree with the approach of coming up with a mind map, to know at a very high level the main objective of the testing and, most importantly, WHAT we want to produce/achieve out of this test.
January 8, 2016 at 4:05 pm #10490
Test cases are not the most important documentation. In my personal opinion the most important thing to document is the thinking around what to test and what not to test – and the resulting observations. Notice that I do not point to a specific type or representation of documentation. Even though I juggle test plan documents and various forms of test cases every day, their specific form is relevant to context. So
if you need to write down the scripting language, test data, environment, browser version, programs used – by all means, do so. Same thing as when I think I should get a haircut – then it’s time for a haircut (aka the Reefing Heuristic by Michael Bolton in the Rapid Software Testing vocabulary).
I have a hunch that adding more and more detail to the test cases just waters down the value, and makes the test cases 1) less useful for other situations and 2) less open to creative thinking about where to observe the object under test. (The wider you spread, the thinner it gets.)
The thing about both testing and details in general is perhaps that the more you dig in, the more there is to dig into.
/Jesper, shoe size 42 European, sitting in the kitchen – yet out of coffee.
January 8, 2016 at 8:22 pm #10492
The question already points to the answer: “…too much detail?” Anything too much is too much.
Detailed test cases are written for dumb testers.
Much more important is to think about what you should do (or should have done) to help development prevent most of the issues they were going to inject into their software, because testing won’t find them all. As Crosby said a long time ago: “The error that isn’t there cannot be missed.”
Testing itself is checking that the software works as it should. It’s not supposed to find any issues, unless there are any. Based on this, you can design your testing strategy (what to test, when and why). Only once your testing strategy is clear will you be able to answer for yourself whether to have test cases and, if so, which ones, how many and at which level of detail.
The mantra during testing should be: “No question, no issue.” The moment testers find one issue, or have to ask one question, the testing can be aborted as failed. Back to the developers, not to ‘repair’ but to start all over again with the design.
If you do that once or twice, developers won’t produce issues any more. Because initially you will find an issue quickly and probably have to ask something, writing a lot of test cases is quite useless, as the software will probably change anyway.
Better to check with developers while they are designing (if they are designing at all – if not, you can ask why), because then you will already find so many issues during design that checking the code is a waste of time.
If I’m asked to test some software, my first question is: “Where are the requirements?” and the second is “Where is the design?” Usually that creates time to go on holiday.
January 9, 2016 at 10:41 am #10494
Having test cases is like sticking to the syllabus in a textbook for an examination. It is true that we can learn more if we perform practical experiments and refer to other relevant books, not just to pass/fail an examination (in this context, to just pass/fail a test) but with an intent to learn more about a feature and the product.
Developers are not asked to pass on a how-to manual to others, so why expect the same from testers? Is this the reason why testers alone are held responsible for quality? Any tester who joins a team at any stage needs to perform testing in his or her own approach. Making them follow an instruction manual of test cases written by other testers will not make them thinking testers.
Documentation helps as far as we document the ideas that help build a feature and test it. There needs to be flexibility to add more test ideas, not a sign-off on a fixed set of tests. Yet I say that there are testers who, even when forced into a setup where they are asked, told, or at times commanded to execute only documented tests, still go ahead and test more. And someone in the role of test lead/test manager should have a mindset of acceptance when there are new test ideas; the tester should not be punished for adding new test ideas later or while testing.
The mindset of punishing a tester for adding tests later begins, I feel, with the number game associated with the execution of tests.
Example:
432 / 500 tests automated.
12.6% failure rate to be maintained every sprint.
If this is your goal/vision then I feel you are staying far from the real deal of adding value and quality, and are purely puppets dancing to the tune of a management which demands you function this way.
Get away from such a mindset; encourage free and open discussions with anybody inside or outside the team, which can really help us build a product we can all benefit from.
Document what is required and add to it when needed. Let the number game stop. Don’t let fear of management make you do something which is against your integrity.
January 9, 2016 at 10:59 am #10495
Peter Drucker wrote: “If you know you have to test for a problem, you can prevent the problem, or catch it earlier.”
Use ‘test cases’ for prevention, not for testing.
January 9, 2016 at 8:45 pm #10497
Ronan, you’ve already indicated by “too much detail” what you think the answers should be. Perhaps the presumed intent would be served better by asking, “What is the proper amount of detail in test cases?” For a different perspective, see my “What is a test case?” article, picked as the top software quality tip of the year, at http://itknowledgeexchange.techtarget.com/software-quality/top-ten-software-quality-tips-of-2010/.
Robin F. Goldsmith, JD advises and trains business and systems professionals on risk-based Proactive Software Quality Assurance.
August 1, 2016 at 1:31 pm #13240
I think it is better to have less detail in test cases in order to capture the best possible end-user experience. Also, less detail lets different people test very different scenarios with different approaches.
Regards.
September 14, 2016 at 3:31 pm #13655
In my opinion test cases that are written in a clear and unambiguous manner have many benefits.
– They ensure the core functionality that needs to be tested is covered
– In case of multiple executions of the test cases, it is easier to track what was working in the previous version and what was not
– A person new to the application can follow it easily.
- They are written once and used many times, so if they are clear you are actually saving time.
But just following test cases, however detailed they are, is not enough. Exploratory testing done from an end-user perspective is also a must, in addition to the test cases.
In some cases, like when the application is small, it is better not to write test cases at all.
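As an illustration of “clear and unambiguous”, a test case might look like this (the feature and data are made up for the example, not taken from any real product):
Test case TC-042: Login with valid credentials
Precondition: an account demo@example.com exists with password Passw0rd!
Steps:
1. Open the login page.
2. Enter demo@example.com in the Email field and Passw0rd! in the Password field.
3. Click “Sign in”.
Expected result: the dashboard opens and shows “Welcome, demo”.
Anyone new to the application could execute this and judge pass/fail without asking questions.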
December 2, 2016 at 11:52 am #14511
I think the more important use of test cases is that they should be understandable by others, so writing only test cases which are effective as well as reusable is very important; good test cases save a lot of time in the later stages of testing. But test cases must contain only precise information, not be overly descriptive. Also, it depends a lot upon people following different approaches.
December 2, 2016 at 10:32 pm #14518
Let’s take a new look at this. I am not here to say someone is wrong, but to initiate thinking – to make people look critically at what they think. There are some premises behind the words, but those premises are not stated. We only see the conclusions, so let me ask about the reasoning behind them.
@ronan wrote “If, as some suggest, test cases are the most important documentation, should we be adding as much to each test case as we can?” in the original post.
I believe code is more important documentation. It’s debatable, of course, as we didn’t define for whom the documentation should be valuable. Most important to the user/PM/tester/colleague/manager/x?
@softwaretesting / Monica above wrote many claims I would like people to explore.
1) Why should they be understandable by others? Is this for a specific context or applicable all-around?
2) Why do test cases need to be reusable? Are you sure you need to execute “the same” test case more than once? If you mean repeating the steps that you found important, have you really thought about what the valid reasons are to do that instead of doing something else?
3) They save time from what, exactly, in later stages? What stages? How do they save time? What kind of stages do you have?
4) What is precise information that is not very descriptive?
5) Why does it depend on those people? (I think here she made a wonderful point and I would like to emphasize that. We definitely should think about different needs, different people, different products, etc.)
January 19, 2017 at 1:48 pm #15072
As many here mention, I also think it depends on the context and on the environment (and company) you are testing in.
For us that means we do a mix of more detailed test cases and rough mind maps that give the executing testers ideas and impulses about what should be considered in their thinking.
Detailed test cases might work well whenever:
- you want to retest the exact same thing (we do it for regression tests when it fits); depending on your situation, such tests are replaceable by test automation
- several testers have to execute the test (especially if your application is very complex and some of the testers are not so experienced with the specific feature). In our case it can take years before you can test certain areas of an application on your own responsibility, without guidance (unless you were an experienced user before your tester life)
- for validation of requirements (did we deliver what we agreed?), but that’s only ever part of the “truth”; you have to complement such tests with other types to check “did we deliver something really useful?”
That brings us to some cases/reasons for less strictly documented tests:
- a first quick check
- all sorts of exploratory testing (reasons for it can be: to check whether the requirements/specifications defined a useful product; or that you can’t test your whole feature in depth on each test run, so you let the testers decide and vary from time to time; or that the details of the feature change so fast that documenting test steps would cause an insane documentation-adaptation overhead…)
- only one tester will test that feature (and in your industry no long-term documentation is required) – that usually doesn’t apply to us due to the team size. But still you could give only a rough idea based on a mind map and add a few comments with helpful background information for people less specialized in this feature.
- too many scripted tests will force testers’ brains onto rails too often, and as mentioned above they will stop thinking as testers (questioning things, breaking patterns, looking left and right and, most importantly, around the corner)
My opinion is: keep the documentation lean whenever there is no reason to document more.
The reply from Jesper (especially the first part, “thinking around what to test, what not”) made me wonder whether the test case is the right place to store useful background information (“according to x.y., even if not obvious, this feature also interacts with features B, E and F; consider that in your test”). Such information might change over time, while your test case becomes outdated. Probably documents shared and maintained by both developers and testers are the right place, and you just link to them from your test cases.
Cheers,
Andy
January 26, 2017 at 3:01 pm #15120
An XML-based test description file containing fields like
- SRS Reference
- TC Summary (information related to test case e.g. workflow etc.)
- Prerequisite
- Supported SW version
- Tested Items, Interface etc.
- Platform e.g. Windows (x86,x64)/ Linux etc.
- Possible Extensions
- Known Bugs
and, last but not least,
- Pass Fail Criteria
would be an approach towards writing a concise and easily understandable test case description.
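As an illustration, such a file might look like the sketch below. The element names and values are invented for the example; this is not a standard schema, just one possible mapping of the fields above.
<testcase id="TC-101">
  <srsReference>SRS-4.2.1</srsReference>
  <summary>Verify the report export workflow produces a valid file</summary>
  <prerequisite>User logged in; sample project loaded</prerequisite>
  <supportedSwVersion>2.3 and later</supportedSwVersion>
  <testedItems>Export module, REST interface</testedItems>
  <platform>Windows (x86, x64), Linux</platform>
  <possibleExtensions>Localized report formats</possibleExtensions>
  <knownBugs>BUG-77: header truncated for very long titles</knownBugs>
  <passFailCriteria>Exported file opens without errors and matches the reference output</passFailCriteria>
</testcase>
One advantage of a fixed structure like this is that the files can be validated against a schema and processed by tools, while still staying readable for people.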
July 18, 2017 at 10:53 am #16767
In my opinion we should not put too much detail into test cases, i.e. into that documentation.
A test case is a group or set of actions executed by a tester to check whether a particular functionality or feature of the software under test behaves as expected.
Whether it is manual testing or automation testing, test cases are important; without them, one cannot proceed further in the testing process. Selenium testing, carried out using the Selenium automation tool, also offers the facility to record test cases.
[Link removed MOD/JO]
August 2, 2017 at 6:36 am #16967
No need to add more detailed test cases. If we know the product or software very well, we can test it without test cases, or with fewer test cases.
[commercial link removed /JO]
July 20, 2018 at 1:45 pm #20063
In my view, not too much detail, but test cases should be written in such a way that anyone can understand them. For the detailed plan, you can create docs and share the links.
[commercial content removed /mod]