• Author
  • #19170

Since bad automation test suites are not easy to spot, I would like to hear your thoughts on how to identify the problems in a bad automation suite.


    Hello buddy,

You join a new company and are handed an automation project built a few years ago.

Various teams worked on it over that time and created a few thousand tests.

    The tests run with a 40% fail/error rate.

The code has several problems:

    • it covers multiple applications (mobile site, desktop site) in one codebase
    • various static delays and implicit waits
    • classes with thousands of lines of code
    • tons of duplication
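On the static-delays point: a common first cleanup step is to replace fixed sleeps with an explicit, polling wait so tests proceed as soon as the condition holds instead of always sleeping the worst case. Here is a minimal, framework-agnostic sketch of that idea (the function name `wait_until` and its defaults are my own, not from the original project):

```python
import time

def wait_until(condition, timeout=10.0, poll=0.5):
    """Explicit wait: poll a condition instead of sleeping a fixed time.

    Replaces static time.sleep(N) calls, which either waste time when the
    app is fast or flake when the app is slower than N seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError("condition not met within %.1fs" % timeout)
```

In a Selenium-based suite the same pattern is already provided by `WebDriverWait` with expected conditions; the point is to have one suite-wide waiting mechanism rather than scattered sleeps.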

    The page object model is implemented inconsistently, with each page class following its own rules for clicking, typing, and reading the value of an element.
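One way to fix that inconsistency is to pull the element interactions into a single base class that every page object inherits, so clicking and typing behave the same everywhere. A minimal sketch, assuming a Selenium-style driver (the class and method names here are illustrative, not from the project described):

```python
class BasePage:
    """Shared element interactions for all page objects.

    Each page class inherits these helpers instead of defining its own
    rules for clicking, typing, and reading values.
    """

    def __init__(self, driver):
        self.driver = driver

    def _find(self, locator):
        # In a real Selenium suite this would go through WebDriverWait
        # with an expected condition; here we delegate straight to the
        # driver so the sketch stays framework-agnostic.
        return self.driver.find_element(*locator)

    def click(self, locator):
        self._find(locator).click()

    def type_text(self, locator, text):
        element = self._find(locator)
        element.clear()
        element.send_keys(text)

    def value_of(self, locator):
        return self._find(locator).text
```

Concrete page classes (`LoginPage`, `CheckoutPage`, ...) then only declare locators and page-level actions, which also cuts down the duplication mentioned above.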

    The test environment is shared with other manual test teams.

    The test data is very challenging to configure and recreate.

    It is your responsibility to clean the project up, stabilize it by reducing the fail/error rate, and bring it back into shape.




    @alishahenderson, when a test run has a 40% fail rate, it is of course the responsibility of the automation engineer who created those tests to make the code more reliable, so a bad automation test suite like that is easy to identify; code quality will also be reviewed with every pull request he makes. But when there is a 100% pass rate, one cannot always be sure that the application under test is bug-free.

    So I would like to give a list of suggestions to check whether the tests are crafted along these lines; if not, they are likely to lead to bad test automation.

    – Be more specific about expected results
    – Tests should follow the user's approach rather than taking shortcuts
    – Review the test artifacts (e.g. screenshots)
    – Assess the test suite and tests frequently to ensure coverage
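To illustrate the first point about expected results: a loose assertion only proves that something came back, while a specific one pins the exact expected value, so a wrong calculation actually fails the test. A small hypothetical example (the `apply_discount` function is made up for illustration):

```python
def apply_discount(price, percent):
    """Hypothetical function under test: percentage discount on a price."""
    return round(price * (1 - percent / 100), 2)

def test_discount_loose():
    # Vague: passes for ANY truthy return value, even a wrong one.
    assert apply_discount(200.0, 15)

def test_discount_specific():
    # Specific: fails unless the result is exactly what we expect.
    assert apply_discount(200.0, 15) == 170.0

test_discount_loose()
test_discount_specific()
```

The loose test would still pass if the formula returned 185.0 by mistake; only the specific assertion catches that.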

    If there are more points to add, feel free to add your comments 🙂
