- January 19, 2017 at 10:05 am #15023
I know that generally test cases should pass but what if test cases fail?
Just out of curiosity, I wonder whether it is a good thing in the long run or whether it can be a bad reflection on testing/development?
- January 20, 2017 at 9:54 am #15078
What is the objective you want to achieve?
- January 23, 2017 at 8:43 pm #15100
During the development process it often happens that tests fail. There are many changes during implementation, and this is basically the reason for running tests (especially automated ones): to find problems. Of course, we all want to see “green” reports where all tests passed, but any small change in the code or configuration can affect the product’s functionality, so tests must detect whether bugs/defects have been introduced.
Generally, the tests must be reliable, and when they fail, they should reflect a bug or defect in the product. After the problem is solved, the tests should be rerun, and they should pass. I don’t agree with the idea of removing failed tests from subsequent runs and keeping only the passing ones just to have “green” reports that look good.
There are also situations where failed test cases are not related to development, such as random failures, which need investigation and a stabilization phase in order to achieve stable, reliable runs. Some tests fail because the product specifications have changed and the tests were not maintained accordingly, so they need to be updated to meet the latest requirements.
In later stages of the product, especially in the acceptance testing phase, the tests should pass.
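As a minimal illustration of the failure-then-fix cycle described above, here is a hypothetical sketch (the function and tests are invented for this example, using plain Python `assert` rather than any particular test framework): if a later change broke the trimming or the case folding, one of these checks would fail and flag the regression.

```python
# Hypothetical example: a tiny regression suite. Suppose a "cleanup"
# change to normalize() accidentally stopped trimming whitespace; the
# second assertion would then fail, flagging the regression.

def normalize(username):
    """Canonical form of a username: trimmed and lower-cased."""
    return username.strip().lower()

def test_normalize():
    assert normalize("Alice") == "alice"     # case folding
    assert normalize("  Bob  ") == "bob"     # surrounding whitespace
    assert normalize("CAROL") == "carol"     # all caps

test_normalize()
print("regression checks passed")
```

Once the hypothetical bug is fixed, rerunning the same checks should turn the report “green” again, which is exactly the rerun-after-fix step described above.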
Alin
- January 26, 2017 at 11:36 am #15114
What is the objective you want to achieve?
Good question. No real objective. I am just curious about the process: if test cases are made to pass, how important is it when they fail?
- January 26, 2017 at 2:30 pm #15119
Failed test cases are important for catching any regression caused by changes in your code.
But the ones that fail and are non-reproducible are the nastiest.
- January 26, 2017 at 7:42 pm #15123
This is a kind of unicorn question, I’m afraid. Providing a helpful answer involves taking the question to the emergency room and giving it a little treatment.
Let’s start with the idea of a test case. There are lots of ideas about what a test case could be. I’d say that a test case is a formally structured, specific, proceduralized, explicit, documented, and largely confirmatory expression of a test idea. More generally, we could say that a test case is a question that we’d like to ask and answer as we’re operating the program; a set of conditions for an experiment.
The idea that a test case “should pass” might be a pretty confused way of thinking about things too. It’s not that the test case “should pass”; it’s that, in the end, the program should do the things we want it to do, and should not do the things that we don’t want it to do. Testing is how we discover what the product actually does, based on experiments that we perform. If the test exposes a problem in a product, some people might say that the test failed. I would say that the test was successful: it revealed something we wanted to know!
Meanwhile, a test case focused on a particular output might “pass”—but there still might be terrible problems in the product with respect to some condition that the test case doesn’t identify. In that case, “no news” may get misinterpreted as “good news”. Even though it “passed”, the test failed to reveal something that we would probably prefer to know about.
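To make that concrete, here is a hypothetical sketch (the function and values are invented for illustration): a narrowly focused check that “passes” while the function still misbehaves on inputs the check never exercises.

```python
# Hypothetical illustration: a "green" test that fails to reveal a defect.

def average(numbers):
    # Bug: floor division (//) was used where true division (/) was
    # intended, so results are silently truncated whenever the sum
    # isn't evenly divisible by the count.
    return sum(numbers) // len(numbers)

def test_average_passes_anyway():
    # This single assertion happens to use values where the truncation
    # is invisible, so the test "passes" and "no news" looks like
    # "good news".
    assert average([2, 4, 6]) == 4

test_average_passes_anyway()
# Meanwhile the real problem goes unreported:
# average([1, 2]) returns 1, not 1.5.
print("test passed, defect not revealed")
```

The single “passing” assertion says nothing about the inputs it never tries, which is exactly how “no news” gets mistaken for “good news”.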
All this can lead to a world of trouble. Instead of thinking in terms of development, learning, and engineering, some people start thinking in terms of games and scorekeeping. Passing tests good! Failing tests bad! Make failing tests go away! (The easiest way to do that is not to perform any tests that reveal problems, or not to perform any testing at all.)
So let’s rephrase the question: Do test cases (rather: experiments) need to pass (rather: suggest a happy outcome) all the time? It seems to me that the answer to that question is No, unless we want to think of testing as demonstration, and unless we want to fool ourselves into believing that the product is problem-free.
As an alternative to thinking of testing in an overly simplistic, pass/fail kind of way, think of testing as empirical research, study, investigation, discovery, analysis, and learning. A test (perhaps framed in a test case, but by no means necessarily so) isn’t about passing or failing; it’s about developing and refining our understanding of what the product is and what the product does. In that case, “pass or fail” is the McGuffin; a distraction. The value of the test is in the information it reveals.
—Michael B.
- January 26, 2017 at 8:37 pm #15124
Test cases that fail due to errors in the test cases themselves reflect on those who wrote the tests. To many, this is not a good sign, but it might be due to reasons other than sloppiness (time pressure, no information, no access to subject matter experts). I would always expect a few percent to be disregarded. But it depends on the context. In pharma contexts there is strict adherence, so bugs in the tests themselves are also bugs to be managed as defects, etc.
- January 27, 2017 at 4:47 pm #15155
Thumbs up for Mr Bolton’s answer. I wanted to respond with a similar phrase that he already mentioned:
“So let’s rephrase the question: Do test cases (rather: experiments) need to pass (rather: suggest a happy outcome) all the time? It seems to me that the answer to that question is No, unless we want to think of testing as demonstration, and unless we want to fool ourselves into believing that the product is problem-free.”
- January 29, 2017 at 1:59 pm #15167
The idea that a test case “should pass” might be a pretty confused way of thinking about things too. It’s not that the test case “should pass”; it’s that, in the end, the program should do the things we want it to do, and should not do the things that we don’t want it to do.
I agree with this. I think this is a nice statement to sum it up.