How do you document and keep test cases?
April 2, 2015 at 8:58 am #7438
Hi there,
The organization I'm testing in demands (so far) that at least one test case, with expected results, steps to be taken, etc., be written in a test management tool for each requirement. The result is that there are tons and tons of test cases which may never be used again. Somehow I feel that this approach is not very efficient. So my question to you guys out there is: to what degree do you document test cases in a test management tool? Do you follow the above approach? Do you keep only those test cases which are most relevant to the business value?
I’d love to hear your experiences 🙂
Cheers,
Robert

April 18, 2015 at 8:20 am #7775
Hi Robert.
Interesting mail. I certainly understand what you mean. We use a tool called ApTest which I find adequate but nothing more.
I find many of these test management tools very similar, to be honest. They are quite rigid in their structure and I don't find them particularly good as test design tools. Therefore I have recently started making more use of Confluence for my test design, which fits in with our attempts at working more lean and agile. It allows me to be more dynamic in my updates, providing a more live view of my test plan and cases. It has also improved our review process, as development and other reviewers can add their comments directly to the Confluence page. I would then map/import my test cases into the test management tool for recording test results and measuring test progress. That's one point!
The other aspect to this is looking at your process and asking yourself if it is correct. You've said that you have a granularity of one or more tests per requirement, which seems reasonable. Presumably afterwards you will be re-running all or some of these tests as a regression suite? So these tons and tons of tests that are never used again: are they tests that are not being carried forward into regression for some reason? In an ideal world maybe they would all be automated into a regression suite, but I understand that there are reasons why this might not be possible. Perhaps you need to identify which tests you are throwing away and then decide on the merit of having had these tests in the first place. Were they worthwhile tests? Can you categorize/streamline them in some way? It comes down to managing the test process, I think, and not relying on the tool to do it for you!
Let me know what you think. I'm always interested to discuss and find better ways of doing this. I'm also particularly interested at the moment in understanding how teams/people manage tests captured in a more exploratory manner, not necessarily defined at the outset of testing.
April 21, 2015 at 9:31 am #7801
Currently I use Excel sheets, because nothing else is available in our company that I can easily share with the customer for UAT.
But in the past I have used test management tools. For a reasonable tool, I insist on the concept of a pool of tests that can be assigned/branched to any test run, with a many-to-many test-to-requirement relationship.
Efficient tests will cover more than one requirement. As we progress through test design, tests are identified as potential regression tests and branched to a regression test suite.
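To make the many-to-many idea concrete, here is a rough sketch of how that relationship could be tracked even outside a dedicated tool (plain Python; all the test and requirement IDs are invented for illustration):

```python
# Sketch of a many-to-many test-to-requirement mapping.
# All IDs below are invented for illustration.
tests_to_requirements = {
    "TC-001 login with valid credentials": ["REQ-12", "REQ-15"],
    "TC-002 lockout after repeated failures": ["REQ-15"],
    "TC-003 password reset email": ["REQ-12", "REQ-20"],
}

def coverage_by_requirement(mapping):
    """Invert the mapping so each requirement lists the tests that cover it."""
    inverted = {}
    for test, requirements in mapping.items():
        for req in requirements:
            inverted.setdefault(req, []).append(test)
    return inverted

for req, tests in sorted(coverage_by_requirement(tests_to_requirements).items()):
    print(f"{req}: covered by {len(tests)} test(s)")
```

The inverted view doubles as a quick traceability report: any requirement that never appears in it has no test against it.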
It's true that for most test projects there will be tests that are never run again. I would still keep these for two reasons: they may become useful again, e.g. for re-running during defect verification, and you need some form of audit trail/historic proof of testing, which may have to be maintained for some time.

Once you have an established testing process and team, I work on building re-usability into our tests. That may mean making the test detail a little less specific and parameterising the test, i.e. moving the specifics out to a test data/test config file and leaving the common high-level test detail to be re-used time after time. If you do this right, the only redundant tests you will have are those related to retired features.
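As a simplified example of the parameterising I mean: with pytest you can keep the test body generic and feed the specifics in as data, so the same skeleton is reused run after run. The `transfer_funds` function and the values here are stand-ins for whatever your system actually does:

```python
import pytest

# Stand-in for the system under test; in practice this would be
# your application code or an API client.
def transfer_funds(balance, amount):
    if amount <= 0 or amount > balance:
        raise ValueError("invalid transfer amount")
    return balance - amount

# The specifics live in a data table; this could equally be loaded
# from a CSV or config file, so the test body stays generic.
CASES = [
    (100, 30, 70),   # typical transfer
    (100, 100, 0),   # transfer the full balance
]

@pytest.mark.parametrize("balance,amount,expected", CASES)
def test_transfer_funds(balance, amount, expected):
    assert transfer_funds(balance, amount) == expected

def test_transfer_rejects_overdraw():
    with pytest.raises(ValueError):
        transfer_funds(50, 80)
```

The CASES table is the part you would move out to a test data/config file; the test logic itself never needs to change.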
April 22, 2015 at 9:24 am #7813
I use either TestLink or Excel sheets for my test case documentation. A simple document with these columns:
Test Case or Scenario
Steps + Expected Results
Status
Bug IDs + Comments

May 2, 2015 at 6:46 pm #7938
… I find that, regardless of the tool used, the problem is how to report functional test coverage of the system. Code analysis tools can report that 56.43% of the code is unit tested (and I know this can be argued about till the cows come home!), but reporting to stakeholders how much is actually covered by manual tests (however they are recorded) and autotests is a nightmare. I have found no way to dynamically build a report of which autotests/scripts/test cases cover which functions. Such a report would also be useful to the team for identifying areas that lack testing. I have always resorted to tedious manual listing and to tags in test cases, which inevitably get stale really quickly.
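For what it's worth, the closest I've come is a sketch like the following: rather than maintaining a separate list, derive the report from the tests themselves. With pytest you can attach a marker to each automated test and walk the collected items to print a coverage-by-feature summary (the `feature` marker and the area names are my own invention, not anything built in):

```python
# conftest.py -- sketch: derive a feature-coverage report from the
# tests themselves instead of maintaining a separate list.
from collections import defaultdict

def pytest_configure(config):
    # Register the (invented) "feature" marker so pytest doesn't warn.
    config.addinivalue_line(
        "markers", "feature(name): functional area this test covers"
    )

def pytest_collection_finish(session):
    # Group collected tests by the feature they are tagged with.
    coverage = defaultdict(list)
    for item in session.items:
        for marker in item.iter_markers(name="feature"):
            coverage[marker.args[0]].append(item.nodeid)
    for feature, tests in sorted(coverage.items()):
        print(f"{feature}: {len(tests)} automated test(s)")
```

Tests then carry a tag such as `@pytest.mark.feature("login")`, and areas with no tagged tests stand out immediately. It does nothing for manual tests, and the tags can still go stale, but at least the report rebuilds itself on every run rather than being hand-maintained.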
May 2, 2015 at 7:36 pm #7939
It sounds like the management of your organization is not about efficiency or effectiveness. The approach you describe is neither; it is simply a rule that creates boxes for check marks. At the end of the day the only thing important to your management is: do all of the check boxes contain check marks? If yes, we are done. Your management team has a "one size fits all" mentality when it comes to a testing strategy. They really don't care about quality; this approach has clearly served them well, so they are content not to incur change. They don't like change and, even more, they especially don't like people who try to change their status quo.
You have the choice of sitting down and rowing the boat or finding another job. You can choose to rock the boat but you’ll end up looking for another job going that route, so save yourself the aggravation.
A management team that thinks this way is unlikely to be dissuaded by ANY argument, and I mean any! They are comfortable with the status quo and have rationalized it as the only acceptable solution. You risk your livelihood by challenging their beliefs, so just run! You will be better off in the long run.
Do you use "patterns" in designing and specifying your test cases? A collection of test patterns might prove very helpful, though probably not in your present organization. Do you document your test case specifications using UML? See the OMG specification of the UML Testing Profile; the Eclipse IDE supports UML Testing Profile modeling in the Eclipse modeling environment. By employing patterns and tracing test specifications to those patterns you may find opportunities to re-use and be more efficient. Patterns could make you more effective.
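To make the pattern idea a little more concrete (a plain-code sketch, not the UML profile itself): a test pattern is a skeleton you write once and instantiate against many features. A "round-trip" pattern, for example, says that whatever you store should come back unchanged; the `save`/`load` functions below are placeholders for a real system:

```python
# Sketch of a reusable "round-trip" test pattern: store something,
# read it back, expect it unchanged. save/load are placeholders for
# whatever persistence your system actually has.
def check_round_trip(save, load, value):
    key = save(value)
    assert load(key) == value

# Instantiating the pattern against a trivial in-memory store.
_store = {}

def save(value):
    key = len(_store)
    _store[key] = value
    return key

def load(key):
    return _store[key]

def test_round_trip_string():
    check_round_trip(save, load, "hello")

def test_round_trip_dict():
    check_round_trip(save, load, {"user": "robert", "active": True})
```

The pattern is written once; each new feature only supplies its own `save`/`load` and representative values.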
So much more to talk about, too little space.
May 4, 2015 at 1:11 pm #7940
We have the actual results associated with the test cases. Because of this, we keep all of our test cases. Not a big deal, as they are associated with specific test plans. This works out well, as our manual testing application allows one to copy the test cases and scripts from an existing test plan into a new test plan. In order to be more efficient, we attempt (when feasible) to validate multiple testing requirements within each test case.
May 5, 2015 at 9:20 am #7941
The organization I work in uses Quality Centre by default, and traditionally we have fallen into the habit of using it for every project regardless. This approach is quite labor intensive, and we have found, like many others, that the tests simply become redundant and are never really used again.
More recently I have been stressing the point that you should use the best tool for the job. If that happens to be Excel or scribbling on a piece of paper, that is fine.
It is useful, though, to capture information about what scenarios you want to test or have tested, mainly for regression purposes, if you want to automate or reuse them at a later date. But this doesn't have to be a tester-specific task: it can form part of the requirements definition/refinement process and can also be carried over into a test-first approach.
My advice would always be to keep it simple and do what you need to do to achieve your purpose.

September 30, 2015 at 12:39 pm #9518
@martinp I have only a vague understanding of Confluence. How do you use it for testing?
October 1, 2015 at 8:23 am #9535
Thanks to all of you guys for your ideas and comments. Sorry for not having replied sooner…
I especially agree with @gashuebrook:
it is simply a rule that creates boxes for check marks
In the current project phase I am trying to establish an approach from Aaron Hodder: using mind maps to visualize the areas under test and the test ideas, with a colour indicating the "status". I like this approach, but I'll have to work out how I can sell it to the management 🙂
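For anyone trying to picture it: the mind map is essentially a tree of test areas and ideas with a status colour on each node, so even a few lines of code can carry the same information (the areas, ideas, and statuses below are invented):

```python
# Sketch of a mind-map-style test overview: a tree of areas and
# test ideas, each with a status ("green"/"amber"/"red"/"untested").
# All names and statuses are invented for illustration.
mind_map = {
    "Login": {
        "valid credentials": "green",
        "lockout after failures": "amber",
    },
    "Reporting": {
        "export to PDF": "red",
        "date filters": "untested",
    },
}

for area, ideas in mind_map.items():
    print(area)
    for idea, status in ideas.items():
        print(f"  [{status}] {idea}")
```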
@martinp:
Could you give us some examples of how you use Confluence? We also have this tool, so maybe you could give us some new input?

October 2, 2015 at 10:33 am #9549
To answer the question in the title:
1. Yes, I sometimes document some test cases, because my lead or manager asks me to.
2. Yes, we keep the test cases, but we ignore them. The test cases become obsolete within a few weeks of being written.

October 5, 2015 at 7:36 pm #9577
I use Access and Excel to save my test cases, and only those worth documenting. I agree that most documented test cases are rarely used by me later.
But it is good when I mentor a new tester in my domain.

October 5, 2015 at 8:06 pm #9578
I like Jira with Zephyr, but TestLink is also a nice tool.