March 10, 2014 at 11:58 am #874
March 12, 2014 at 6:18 pm #934 @testingmentor
I want to thank everyone who took the time yesterday to listen to the presentation on Crafting Better Automated Tests. If you missed it, you can view it here.
I would like to hear other ideas that have worked for you and helped you improve developing and maintaining your automated test suites.
March 13, 2014 at 1:49 pm #943 @matthew-churcher
Great presentation BJ. This was the first presentation I’ve seen on automation best practices that wasn’t a vendor pushing a particular angle or product. It fitted very well with my own philosophy on the best way to go about things, which was great to hear.
In particular for us here, it was great to hear your thoughts on logging. I’ve also observed that a lot of time can be spent diagnosing test failures and debugging tests, so I like to have very helpful logging to speed up the process. Some others here weren’t convinced, but I think you’ve helped sell it.
The concept of blocked vs failed tests was totally new to me and I love it, so I will be giving it a go to see how it pans out.
Thanks, Matt.
March 13, 2014 at 7:57 pm #946 @testingmentor
Hi Matthew, thank you for your kind words, and I am glad the talk sparked some new ideas.
In my opinion, an automated test should only report a pass or fail condition when all the steps necessary to achieve the objective of the test have executed successfully and there is proper and adequate confirmation of whether some pre-established condition or criterion is satisfied. So, when I get to the end of all defined steps and make a call to my oracle, I want the oracle to tell me either that the test passed and the feature is working as prescribed, or that the test failed and there is a bug in the product.
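The pass-or-fail oracle call described above, together with the blocked outcome Matt mentions earlier in the thread, might be sketched roughly as follows. This is a minimal illustration, not code from the talk; the names `BlockedTestError`, `require`, and `run_test` are all hypothetical:

```python
class BlockedTestError(Exception):
    """Raised when the system is not in an expected state, so the test
    outcome is indeterminate: investigate rather than report a failure."""


def require(condition, message):
    """Checks expected state between steps, much like a manual tester
    'seeing' that a dialog appeared; raises BlockedTestError otherwise."""
    if not condition:
        raise BlockedTestError(message)


def run_test(steps, oracle):
    """Runs each step, then consults the oracle. Reports 'pass' or 'fail'
    only when every step completed; anything abnormal is 'blocked'."""
    try:
        for step in steps:
            step()
        return "pass" if oracle() else "fail"
    except BlockedTestError:
        return "blocked"


# Simulated run: the expected dialog handle is None, so the test is
# reported as blocked (not failed) for a tester to investigate.
dialog_handle = None
result = run_test(
    steps=[lambda: require(dialog_handle is not None, "dialog did not appear")],
    oracle=lambda: True,
)
print(result)  # blocked
```

The key design point is that only the oracle may produce a pass or fail verdict; any helper check that trips beforehand short-circuits the run into a third, indeterminate outcome.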
However, just as when we execute a test manually, sometimes something unexpected happens as we manipulate the application and we see or encounter something abnormal, such as a dialog not appearing or buttons being grayed out, and we can determine whether there is a bug in the software or it was an errant anomaly. An automated test, by contrast, cannot always determine whether an anomaly that occurs during execution is a bug or some errant behavior.
So, in those situations I can write my helper methods to verify proper state before continuing to the next step in the test (very similar to what a person would do when testing manually). My method can “see” if the handle to an expected dialog is null, or it can “see” if the button state is grayed, etc. If my system is not in an expected state, then I throw an exception to notify me that something abnormal happened while executing the test that I need to go investigate. The cause of the failure could be a product bug, or it could be a bug in the test case. Basically, the outcome of the test is indeterminate, and I or a tester needs to go investigate what is going on. 🙂
May 10, 2015 at 3:28 pm #8051 @shicky
Really enjoyed that webinar; late to the party as you can see, but the content was fantastic. I’ll definitely be looking for more of Bj’s material!
May 11, 2015 at 9:19 am #8052
It was a popular webinar alright. Hopefully we will have Bj back presenting webinars for us again soon.
May 11, 2015 at 1:48 pm #8068 @shicky
Great to hear, Daragh. That’s the best webinar I’ve watched related to automated testing, inside or outside Test Huddle, and I would certainly love to hear from Bj again!