Sometimes stress lets you do stupid things

Sometimes stress lets you do stupid things. Here is what I did not so long ago.

I had to automate some test cases about setting (or un-setting) options. To check whether each test had passed or failed, I decided to also extract the complete options table from the database. That way I could use the same check for all my new test cases. After running each test I checked the extracted table and then saved it as the expected result.

This went well for quite a while, so I was really surprised when suddenly all of these tests failed!

On closer examination the reason was obvious: development had added an option, so the newly extracted table file was no longer identical to the expected results file, even though all the tests had actually passed!

What irks me most is that in my very own Test Automation Patterns[1] we have patterns that tell you how to make a comparison either sensitive or robust.

But I didn’t think about it until after the failures. Simply stupid!

Here, for reference, are the patterns from the wiki:

SPECIFIC COMPARE

Expected results are specific to the test case, so changes to objects not processed in the test case don't affect the test results.

Description

The expected results check only that what has been performed in the test is correct. For example, if a test changes just two fields, only those fields are checked, not the rest of the window or screen containing them.

Implementation

Implementation depends strongly on what you are testing. Some ideas:

  • Extract from a database only the data that is processed by the test case
  • When checking a log, first delete all entries that don't directly pertain to the test case
  • On the GUI check only the objects touched by the test case
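Here is a minimal sketch of such a check in Python, assuming a hypothetical SQLite database with an options table of name/value pairs; the option names and expected values are made up for illustration. Only the rows the test case actually changes are extracted and compared:

```python
import sqlite3

# Hypothetical: the two options this test case sets, and their expected values.
TOUCHED_OPTIONS = ("auto_save", "backup_interval")
EXPECTED = {"auto_save": "ON", "backup_interval": "15"}

def check_touched_options(db_path):
    """SPECIFIC COMPARE: verify only the options the test case changed."""
    conn = sqlite3.connect(db_path)
    try:
        placeholders = ",".join("?" for _ in TOUCHED_OPTIONS)
        rows = conn.execute(
            f"SELECT name, value FROM options WHERE name IN ({placeholders})",
            TOUCHED_OPTIONS,
        ).fetchall()
    finally:
        conn.close()
    actual = dict(rows)
    # An option added elsewhere by development never appears in this result,
    # so it cannot turn a passing test into a failure.
    return actual == EXPECTED
```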

Potential problems

If all your test cases use this pattern you could miss important changes and get a FALSE PASS. It makes sense to have at least some test cases using a SENSITIVE COMPARE.

SENSITIVE COMPARE

Expected results are sensitive to changes beyond the specific test case.

Description

The expected results cover a large amount of information, more than just what the test case might have changed. For example, the comparison covers an entire screen or window (possibly masking out some data). Sensitive tests are likely to find unexpected differences and regression defects.

Implementation

Implementation depends strongly on what you are testing. Some ideas:

  • Extract from a database the entire tables touched by processing the test case
  • Check the whole log and not only the parts directly pertaining to the test case
  • On the GUI check all the objects on each page
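By contrast, here is a minimal sketch of a SENSITIVE COMPARE, again assuming the hypothetical SQLite options table from the sketch above: the whole table is dumped and diffed against a stored expected-results file, so any change at all, including an option added by development, shows up.

```python
import sqlite3
import difflib
from pathlib import Path

def dump_options_table(db_path):
    """Dump the entire options table, sorted, as 'name=value' lines."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute("SELECT name, value FROM options ORDER BY name").fetchall()
    finally:
        conn.close()
    return [f"{name}={value}" for name, value in rows]

def sensitive_compare(db_path, expected_file):
    """SENSITIVE COMPARE: any difference in the whole table fails the check."""
    actual = dump_options_table(db_path)
    expected = Path(expected_file).read_text().splitlines()
    diff = list(difflib.unified_diff(expected, actual, "expected", "actual", lineterm=""))
    return len(diff) == 0, diff
```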

If you are checking the whole of a window or screen, you may want to mask out data that you are not interested in, such as the date and time of the test. Otherwise the date/time would show up as a difference in the comparison, and that is not information you want.
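One simple way to do the masking, assuming the captured output is plain text and the timestamps follow a known format (the pattern below is an assumption you would adapt to your own output), is to replace every date/time with a fixed placeholder on both sides before comparing:

```python
import re

# Assumed timestamp format, e.g. "2014-03-07 14:23:05"; adapt the pattern to your output.
TIMESTAMP = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

def mask_timestamps(text):
    """Replace every date/time with a placeholder so it never shows up as a difference."""
    return TIMESTAMP.sub("<TIMESTAMP>", text)

# Mask both the actual and the expected output before the comparison:
# assert mask_timestamps(actual_dump) == mask_timestamps(expected_dump)
```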

Potential problems

If all your test cases use this pattern you would probably get frequent FALSE FAILs. It makes sense to have at least some test cases using this pattern, for example in a smoke test or high-level regression test. Other tests should use SPECIFIC COMPARE.

[1] Testautomationpatterns.wikispaces.com

Blog Post Added By

Seretta Gamba has more than 30 years' experience in software development and testing. As test manager at Steria Mummert ISS GmbH, she improved the test automation process and developed Command-Driven Testing and a supporting framework, later enhanced to enable the test automation team to “harvest” test case information by supporting manual testing. A description of this experience became Chapter 21 of the book “Experiences of Test Automation” by Dorothy Graham and Mark Fewster. In 2012 she started writing about Test Automation Patterns, which she is now working on together with Dorothy Graham.
