On Test Cases: Sometimes Stress Lets You Do Stupid Things

Sometimes stress lets you do stupid things. Here is what I did not so long ago.

I had to automate some test cases that set (or un-set) options. To check whether a test had passed or failed, I decided to extract the complete options table from the database, so that I could use the same check for all of my new test cases. After running each test I verified the extracted table and then saved it as the expected result.

This went well for quite a while, so I was really surprised when suddenly all of these tests failed!

On closer examination the reason was obvious: development had added an option, so the freshly extracted table file was no longer identical to the expected results file, even though all of the tests had actually passed!

What irks me most is that in my very own Test Automation Patterns we have patterns that tell you how to decide whether a comparison should be sensitive or specific.

But I didn’t think about it until after the failures. Simply stupid!

For reference, here are the patterns from the wiki:

SPECIFIC COMPARE: Expected results are specific to the test case, so changes to objects not processed in the test case don’t affect the test results.

Description: The expected results check only that what the test actually performed is correct. For example, if a test changes just two fields, only those fields are checked, not the rest of the window or screen containing them.

Implementation: This depends strongly on what you are testing. Some ideas (the database idea is sketched below):

  • Extract from a database only the data that is processed by the test case
  • When checking a log, delete first all entries that don’t directly pertain to the test case
  • On the GUI check only the objects touched by the test case
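
To make the first idea concrete, here is a minimal sketch in Python with SQLite. The options table, its name/value columns, and the option names in the usage comment are assumptions for illustration, not taken from the original post:

    import sqlite3

    def check_specific(db_path, expected):
        """expected: dict mapping option name -> expected value."""
        conn = sqlite3.connect(db_path)
        try:
            # Fetch only the option rows this test case actually changed.
            placeholders = ",".join("?" for _ in expected)
            rows = conn.execute(
                f"SELECT name, value FROM options WHERE name IN ({placeholders})",
                list(expected),
            ).fetchall()
        finally:
            conn.close()
        return dict(rows) == expected

    # Example: the test only toggled these two (hypothetical) options.
    # assert check_specific("app.db", {"auto_save": "on", "beep": "off"})

Because only the rows the test touched are fetched and compared, an option added later by development would not change the outcome of this check.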

Potential problems

If all your test cases use this pattern, you could miss important changes and get a FALSE PASS. It makes sense to have at least some test cases that use a SENSITIVE COMPARE.

 

SENSITIVE COMPARE: Expected results are sensitive to changes beyond the specific test case.

Description: The expected results compare a large amount of information, more than just what the test case might have changed: for example, an entire screen or window (possibly with some data masked out). Sensitive tests are likely to find unexpected differences and regression defects.

Implementation: This depends strongly on what you are testing. Some ideas (the database idea is sketched below):

  • Extract from a database the entire tables touched by processing the test case
  • Check the whole log and not only the parts directly pertaining to the test case
  • On the GUI check all the objects on each page
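
Again as a hedged sketch, assuming the same SQLite options table as above, a sensitive check could dump the whole table and compare it line by line against a previously approved baseline file:

    import sqlite3

    def dump_options(db_path):
        """Dump the entire options table as 'name=value' lines, sorted by name."""
        conn = sqlite3.connect(db_path)
        try:
            rows = conn.execute(
                "SELECT name, value FROM options ORDER BY name"
            ).fetchall()
        finally:
            conn.close()
        return [f"{name}={value}" for name, value in rows]

    def check_sensitive(db_path, baseline_path):
        """Any change to the table, including a newly added option, shows up."""
        with open(baseline_path) as f:
            expected = f.read().splitlines()
        return dump_options(db_path) == expected

This whole-table comparison is exactly the kind of check described in the story above: the moment development adds a new option, every test that relies on the old baseline reports a difference.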

If you are checking the whole of a window or screen, you may want to mask out data that you are not interested in, such as the date and time of the test run. Otherwise, the date/time would show up as a difference in every comparison, and that is not a difference you care about!
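
A minimal sketch of such masking, assuming the dumped lines contain timestamps in a "YYYY-MM-DD hh:mm:ss" format (both the pattern and the placeholder are illustrative assumptions):

    import re

    # Matches timestamps like "2024-01-31 12:34:56" (illustrative assumption).
    TIMESTAMP = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

    def mask(lines):
        """Replace every timestamp with a fixed placeholder before comparing."""
        return [TIMESTAMP.sub("<TIMESTAMP>", line) for line in lines]

    # Usage: compare masked actual output against the (equally masked) baseline.
    # assert mask(actual_lines) == mask(expected_lines)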

Potential problems

If all your test cases use this pattern, you would probably get FALSE FAILs quite often! It makes sense to have at least some test cases using this pattern, for example in a smoke test or a high-level regression test. Other tests should use SPECIFIC COMPARE.

About the Author

Seretta

Find out more about @seretta