Eliminating Flaky Tests
December 9, 2018 at 11:13 pm #21279
How do you eliminate flaky tests in your test scripts?
December 10, 2018 at 6:38 pm #21302
Nothing new and shiny here. Exclude the suspect test for a while and debug it; run a new or questionable script a few times in a row (say, four) to see whether it passes every time.
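A minimal sketch of that rerun idea, assuming pytest; the test id and run count are placeholders, not anything from this thread:

```python
# Run one suspected-flaky test several times in a row and count failures.
import subprocess

TEST_ID = "tests/test_checkout.py::test_apply_coupon"  # placeholder test id
RUNS = 4

failures = 0
for i in range(RUNS):
    result = subprocess.run(["pytest", "-q", TEST_ID])
    if result.returncode != 0:
        failures += 1
    print(f"run {i + 1}/{RUNS}: {'FAILED' if result.returncode else 'passed'}")

print(f"{failures}/{RUNS} runs failed")
```

If installing a plugin is an option, pytest-repeat (--count) and pytest-rerunfailures (--reruns) cover the same ground.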
December 22, 2018 at 1:20 pm #21354
From an optics perspective I'd carefully categorize them. "Flaky tests" implies the test itself is at fault, but it may well be that the application under test has intermittent defects; I've seen this frequently. I completely agree with Ramon here. I'd also ramp up the level of debug information gathered automatically on failure, to make isolating the cause easier. Hope it helps.
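One way to gather that extra debug information on failure, sketched here under the assumption of a pytest suite with a Selenium `driver` fixture (both are assumptions about the stack, not something stated in the thread):

```python
# conftest.py — save a screenshot and the page source whenever a test fails.
import os
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        driver = item.funcargs.get("driver")  # Selenium fixture, if the test used one
        if driver is not None:
            os.makedirs("artifacts", exist_ok=True)
            driver.save_screenshot(f"artifacts/{item.name}.png")
            with open(f"artifacts/{item.name}.html", "w", encoding="utf-8") as f:
                f.write(driver.page_source)  # DOM at the moment of failure
```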
July 3, 2019 at 7:45 pm #22674
There are many things to keep in mind to eliminate flaky tests (or at least minimize them), based on my experience:
- Not everything can be automated, and in trying to do so we move the focus away from actual testing and finding bugs. Some scenarios where automation may be a bad idea:
  - Using automation to catch rendering issues is a bad idea.
  - Using automation to check whether an element's location on the page has changed may not be a good idea. Once we start messing with x,y coordinates, it is a slippery slope.
  - Using automation to test integrated systems in which software, hardware, web services, APIs and cloud services all communicate with each other in real time would be a bad idea, e.g. testing a Fitbit. We can try as hard as possible to simulate real human movements and mock services, but automating the entire workflow of a fitness tracker is going to be really difficult. We could rather have real humans do exploratory testing in parallel with some automated checks.
- Having stable test environments to run automated tests
- Not trying to automate scenarios that are prone to constant changes
- Start small and automate simple scenarios. Once they are running consistently and are stable, start adding more tests to the suite
- Use more stable locator strategies such as IDs and names instead of XPaths, which can change constantly, or use tools that can resolve locators dynamically at run time (see the locator sketch below this list)
- If you have a large test suite, run it regularly and identify the tests that are unstable or fail sporadically. Separate them out from the stable tests, then start fixing or eliminating these flaky tests one by one (see the quarantine sketch at the end of this post).
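A brief sketch of the locator point, assuming Selenium with Python; the URL, XPath, and attribute values are invented for illustration:

```python
# Contrasting brittle and more stable Selenium locators.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # placeholder URL

# Brittle: tied to the exact DOM layout, breaks as soon as a wrapper div appears.
email = driver.find_element(By.XPATH, "/html/body/div[2]/div/form/div[1]/input")

# More stable: keyed to attributes the application sets deliberately.
email = driver.find_element(By.ID, "login-email")
email = driver.find_element(By.NAME, "email")

driver.quit()
```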
-Raj
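A sketch of the separation idea from the last bullet, assuming pytest; the marker name "quarantine" and the test names are conventions invented for the example:

```python
# Separating unstable tests with a custom pytest marker so the stable
# suite keeps running green while flaky tests are fixed one by one.
import pytest

@pytest.mark.quarantine
def test_search_autocomplete():
    ...  # fails sporadically; under investigation

def test_search_basic():
    ...  # stable
```

The marker gets registered under `markers` in pytest.ini to avoid the unknown-marker warning; the stable suite then runs with `pytest -m "not quarantine"` and the quarantined set runs separately with `pytest -m quarantine`.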
July 29, 2019 at 12:39 am #23016
Hi Raj, thanks for your comment based on your practical experience, I have now added a few of these to my current checklist 🙂
July 30, 2019 at 5:05 pm #23066
You're welcome 🙂