Better Practices for Crafting Automated Tests


Viewing 7 posts - 1 through 7 (of 7 total)
  • #874
    Daragh
    Participant
    @daraghm

    Presented by: Bj Rollison, Principal SDET Lead at Microsoft Corporation.

    View the webinar on-demand here

    #934
    Bj
    Participant
    @testingmentor

    I want to thank everyone who took the time yesterday to listen to the presentation on Crafting Better Automated Tests. If you missed it, you can view it here.

    I would like to hear other ideas that have worked for you that helped you improve developing and maintaining your automated test suites.

    #943
    Matthew Churcher
    Participant
    @matthew-churcher

    Great presentation BJ. This was the first presentation I’ve seen on automation best practices that wasn’t a vendor pushing a particular angle or product. It fit very much with my own philosophy on the best way to go about things, which was great to hear.

    In particular for us here, it was great to hear your thoughts on logging. I’ve also observed that a lot of time can be spent diagnosing test failures and debugging tests, so I like to have very helpful logging to speed up the process. Some others here weren’t convinced, and I think you’ve helped sell it.

    The concept of blocked vs failed tests was totally new to me and I love it, so I will be giving it a go to see how it pans out.

    Thanks, Matt.

    #946
    Bj
    Participant
    @testingmentor

    Hi Matthew, thank you for your kind words. I am glad the talk sparked some new ideas.

    In my opinion, an automated test should only report a pass or fail condition when all the steps necessary to achieve the objective of the test have executed successfully, and there is proper and adequate confirmation that some pre-established condition or criterion is (or is not) satisfied. So, when I get to the end of all the defined steps and make a call to my oracle, I want the oracle to tell me either that the test passed and the feature is working as prescribed, or that the test failed and there is a bug in the product.
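    A minimal sketch of that shape in Python (the fake editor and all names here are illustrative, not from the talk): the test runs every step to completion and consults the oracle only at the end.

```python
# Illustrative sketch: a test that executes all of its steps first,
# then calls the oracle exactly once at the end to report pass/fail.

class FakeEditor:
    """Stand-in for the application under test (hypothetical)."""
    def __init__(self):
        self.files = {}
        self.buffer = ""
        self.current = None

    def open_document(self, name):
        self.current = name
        self.buffer = self.files.get(name, "")

    def type_text(self, text):
        self.buffer += text

    def save(self):
        self.files[self.current] = self.buffer


def test_save_document():
    app = FakeEditor()
    # Steps necessary to achieve the objective of the test.
    app.open_document("report.txt")
    app.type_text("hello")
    app.save()
    # Oracle call: pass only if the pre-established criterion is satisfied.
    assert app.files["report.txt"] == "hello", "saved text differs: product bug"


test_save_document()
```

    The point of the shape is that the assert (the oracle) is the only place a pass or fail verdict comes from; everything before it is just steps.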

    However, just as when we execute a test manually, sometimes something unexpected happens as we manipulate the application: we see or encounter something abnormal, such as a dialog not appearing or buttons being grayed out, and we can determine whether there is a bug in the software or it was an errant anomaly. An automated test, on the other hand, cannot always determine whether an anomaly that occurs during execution is a bug or some errant behavior. So, in those situations I write my helper methods to verify proper state before continuing to the next step in the test (very similar to what a person would do when testing manually). My method can “see” whether the handle to an expected dialog is null, or whether a button is grayed out, and so on. If my system is not in an expected state, then I throw an exception to notify me that something abnormal happened while executing the test that I need to go investigate. The cause of the failure could be a product bug, or it could be a bug in the test case. Basically, the outcome of the test is indeterminate, and I or a tester needs to go investigate what is going on. 🙂
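    One way that idea might be sketched (the exception and helper names are my own, not Bj's): a state check that fails raises a distinct exception type, so the runner can report the outcome as blocked rather than as a genuine oracle failure.

```python
# Rough sketch of blocked vs failed outcomes (all names hypothetical):
# a failed state check raises BlockedError before the oracle ever runs,
# so the result is reported as indeterminate, not as a product failure.

class BlockedError(Exception):
    """The test could not reach its oracle; outcome is indeterminate."""


def require_dialog(handle):
    # Mimics a manual tester "seeing" that the expected dialog appeared
    # before continuing to the next step of the test.
    if handle is None:
        raise BlockedError("expected dialog never appeared; investigate")


def run_test(test):
    try:
        test()
        return "PASS"
    except BlockedError:
        return "BLOCKED"   # needs human investigation: product or test bug?
    except AssertionError:
        return "FAIL"      # oracle says the product misbehaved


def flaky_test():
    dialog = None            # e.g. no window handle came back
    require_dialog(dialog)   # raises BlockedError before the oracle runs
    assert dialog.title == "Save As"


print(run_test(flaky_test))  # prints BLOCKED, not FAIL
```

    Separating the two outcomes keeps the failure count meaningful: a FAIL always means the oracle fired, while a BLOCKED result means someone has to go look.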

    #8051
    Nicholas
    Participant
    @shicky

    Really enjoyed that webinar. Late to the party as you can see, but the content was fantastic. I’ll definitely be looking for more of Bj’s material!

    #8052
    Daragh
    Participant
    @daraghm

    It was a popular webinar alright. Hopefully we will have Bj back presenting webinars for us again soon 🙂

    #8068
    Nicholas
    Participant
    @shicky

    It was a popular webinar alright. Hopefully we will have Bj back presenting webinars for us again soon :-)

    Great to hear Daragh, that’s the best webinar I’ve watched related to automated testing inside and outside Test Huddle, would certainly love to hear from Bj again!
