In March, we recorded a live webinar with Best Tutorial winner Jani Haapala on his award-winning tutorial ‘Rookie Tester to Test Automation Expert’. The response was so incredible that we asked Jani to sit down and answer the questions that we couldn’t get to on the day.
How do you decide how many tests is the right number?
I feel that one can’t say how many tests is the right number. There are just as many tests as the test case creator thought was the correct number and the review group accepted. If you have very complex code, it needs more tests per line of code, and if not, then you can manage with fewer… The important thing is that the people testing the system feel that they have covered all the necessary things.
I’m coming up against a big mindset issue where manual testers fear/distrust the existing automation suites. Any tips on getting manual testers on board with trusting the automation output so they don’t just duplicate the automation areas again manually?
Manual testers also need to be part of the team creating, running and debugging the test automation. If they are left only running those cases, it very easily creates a situation where automation is just something that somebody else knows how to do or debug. Without proper knowledge, it is fear of the unknown that makes manual testers fall back to the safe old ways. They can’t get proper confidence with automation if they do not understand it well enough.
Do you think companies are giving more or less importance to test automation?
Unfortunately, I have to say that when it is new and “cool” it gets lots of attention and importance, but when it starts to become hard, the importance quite often fades as well. It is much easier to do things the way we used to than to go the extra mile to get the automation to work perfectly. It is a shame, since not paying enough attention to automation leads to a vicious cycle where testing takes more and more time and fixing automation takes more and more effort.
Were you pushed towards the automation by others or did you organically shift towards that mindset on your own?
It was quite organic for me. I did manual testing for a while at the beginning of my career, but then with my thesis I found it very interesting that I could put machines to do the work that I should do by hand. I have always been very curious about coding, but since I do not have a developer background, it felt much easier to start learning coding with test automation. Lately, my interest in automating things has led me to infrastructure automation, which is still mostly untouched territory in many places.
At EuroSTAR 2018, you mentioned that maintenance is the toughest part of automation. What practices do you implement to manage maintenance as well as getting your current deliverable automated and added to a regression suite?
The best “tool” to keep your solutions maintainable is to first go through your needs very carefully and then implement an agreed concept for them. After that, you need to be extremely strict about the usage of your concept, so that it stays as it was intended and everybody knows how to use it. Too often people take some tool or concept and add this and that to it just because they like to. This creates a huge number of alternate usages and concepts that are hard to verify later. Especially when you add something new, it is very easy to break old concepts if you do not fully understand them. Also, if you decide to refactor or improve something, it means that you go through the whole concept and fix/modify all places at the same time; otherwise you quite easily end up in a situation where you have the old way, new way, new_2 way, new_new way to do things…
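To make that concrete, here is a purely hypothetical Python sketch (the helper and fixture names are invented for illustration, not taken from Jani’s own toolkit): every test authenticates through one shared helper, so when the login flow changes there is exactly one place to refactor, instead of the suite growing login_old/login_v2/login_new variants.

```python
# Hypothetical sketch of "one agreed concept": a single shared login helper.
# When the flow changes, this one function is refactored in a single pass.

def login(client, username: str, password: str):
    """The single, agreed way to authenticate in this test suite."""
    response = client.post("/login", data={"user": username, "pass": password})
    assert response.status_code == 200, "login concept broken - fix here, once"
    return response


def test_view_account(client):  # 'client' is an assumed HTTP test fixture
    login(client, "demo-user", "demo-pass")  # every test uses the same concept
    assert client.get("/account").status_code == 200
```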
When you encountered tests that were flaky, did you find yourself overhauling those flaky tests, getting rid of them or was it a simple case of refactoring? And did you find that flaky tests exposed gaps in code / test coverage?
In my case, flaky tests have mostly exposed some limitations or characteristics of the environments. Test automators are quite good at verifying that a test works on their machine when the test is written, but quite often it is extremely hard to see all the variations that different environments can have. The other top reason is writing the test at too low a level, so that even a slight change in sequences can break it for no obvious reason. I have felt that it is very hard to debug or refactor these cases while they are still flaky, since you do not yet fully understand why they are failing. When I have managed to label a small set of flaky tests into a certain category, then it is easier to see the bigger picture and also find gaps in the environment/code/coverage, but not before having multiple pieces of evidence on the same table.
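To illustrate the “evidence on the same table” idea, here is a small hypothetical Python sketch (the failure log and category names are invented) that tallies labelled failures so the dominant category stands out before any debugging starts.

```python
# Hypothetical sketch: gather labelled flaky-test failures over time, then
# count per category so the bigger picture is visible before debugging.

from collections import Counter

# Assumed failure log: (test name, suspected category) pairs.
failures = [
    ("test_login", "environment"),
    ("test_login", "environment"),
    ("test_report", "test_data"),
    ("test_login", "timing"),
    ("test_export", "environment"),
]

by_category = Counter(category for _, category in failures)
print(by_category.most_common())
# [('environment', 3), ('test_data', 1), ('timing', 1)]
# -> the environment, not any single test, is the first thing to investigate
```

Only once the tally shows a dominant bucket is it worth diving into an individual test.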
In my own journey, I found that tests could be categorised so multiple test failures could be resolved by the same fix. Test data was a major reason for failing test cases. Time spent investigating failures was the overall maintenance expense.
Test data is one major candidate for failing tests. I feel that there is not yet enough automation or good tooling to automate test data creation. Each system has its own characteristics that make it heavy on certain kinds of failures. For example, embedded systems are heavy on environment failures and insurance systems are heavy on test data failures. This should be taken into account when designing test automation and debugging it. It is important to have proper categories from the beginning, to be able to address a whole category as soon as possible.
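One hedged reading of “automate the test data creation” is a small data factory. The sketch below is an invented Python example (the policy fields are assumptions, not a real insurance schema): each test states only the fields it actually cares about and gets a valid record for everything else.

```python
# Hypothetical sketch: a factory builds valid test data with controlled
# variation, instead of every test hand-crafting (and breaking) its own.

import itertools
import random

_policy_ids = itertools.count(1000)

def make_policy(**overrides):
    """Return a valid policy record; tests override only what they test."""
    policy = {
        "policy_id": next(_policy_ids),
        "holder": "Test Holder",
        "premium": round(random.uniform(100, 1000), 2),
        "status": "active",
    }
    policy.update(overrides)
    return policy

# A test about cancelled policies states only the relevant field:
cancelled = make_policy(status="cancelled")
```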
In my experience only one in 10 to 50 test cases needed to be debugged… How do you abstract test cases? Have you encountered the Screenplay Pattern?
How many test cases per 100 need debugging depends a lot on the level of the people writing the test automation. The best way to avoid large numbers is to create clean and easy concepts and properly train all users at the beginning. Also, if new concepts are coming, it is worth having more training. In test case abstraction, I always try to draw it from the tested domain. If the domain is, for example, a person doing something, then the test automation should reflect the actions that the person can do. And if the domain is a machine talking with another machine, then the testing should reflect the message traffic. The Screenplay Pattern is a good way to get the focus on the right place. There are several techniques for getting the developer focus and mindset onto the things that matter; others are, for example, BDD (Behavior Driven Development) and TDD (Test Driven Development). For me, all of these methods are good if they work in your context; the ultimate goal is to do more verification, and more of the important verifications.
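For readers unfamiliar with it, here is a stripped-down, hypothetical Python sketch in the spirit of the Screenplay Pattern (the class and task names are invented here): the test reads as the domain actions a person performs, not as low-level steps.

```python
# Hypothetical Screenplay-style sketch: actors perform domain-named tasks,
# and the mechanics are hidden underneath those tasks.

class Actor:
    def __init__(self, name):
        self.name = name
        self.abilities = {}

    def who_can(self, **abilities):
        self.abilities.update(abilities)
        return self

    def attempts_to(self, *tasks):
        for task in tasks:
            task(self)

# Tasks are named after what a person does in the domain.
def open_account(actor):
    actor.abilities["bank"].create_account(actor.name)

def deposit(amount):
    def task(actor):
        actor.abilities["bank"].deposit(actor.name, amount)
    return task

# Usage (FakeBankApi would be an assumed test double):
# jani = Actor("Jani").who_can(bank=FakeBankApi())
# jani.attempts_to(open_account, deposit(100))
```

The design point is the same one Jani makes: whether it is Screenplay, BDD or TDD, the abstraction should mirror the tested domain.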
Do you love learning about software testing and improving your testing? Online learning is the first step of your journey with the EuroSTAR Community – the longest established and widest testing community in Europe. Join Huddle as a member and receive unlimited FREE learning resources.