Test Environment Virtualisation – Challenges of the Setup for Testing in a Complex Platform World

In the beginning…

…life was easy: you had one app, one web page, and you needed to test it. Manual testing was sufficient, and if you wanted to stand out in a crowd you learned some Perl or Python (I actually started with Basic) and you could go grab a coffee while the script was working. Everything you needed to test was there on the page; there were no 3rd party providers, no connected pages, no unstable services whose responses you depended on. There was an input and an output, some variations with positive and negative scenarios, boundary values, standard stuff. Easy life…

Fast forward…

…to today and you quickly realize that you’re in chaos. Every tree that you need to check is connected to a whole jungle. Your requests are being redirected, there are many connected endpoints, and providers are unstable…

Imagine you need to test a travel management system, searching for free seats on a certain flight. Not only can you not guarantee that there will be a flight on a certain date from A to B, but expecting free seats on every request is almost impossible. And if you need to check whether the passenger will be offered a rental car in addition to his flight search, things get really complicated. Manual exploratory testing can handle this uncertainty (at significant cost, and it slows down time to release) by switching to a different route or choosing a different class, but it is a real challenge for test automation.

To emphasize the point, let us scratch the surface of financial systems. High security levels, no test access to services, or even chargeable access impose hard limitations on any attempt at automation. One of the projects that I’m working on has seventy (70!) different providers, all connected and communicating, and of course not always accessible. The best the test team managed to pull off was a semi-automated test: some parts were automated, and elsewhere manual action was needed.

Obviously, that was not good enough. We wanted to achieve Continuous Integration, with fully automated tests attached to deployment in Jenkins, so we invested quite some effort in the search for a solution…which I will tell you more about in the second part 🙂

Solution

The main challenge in test automation is a stable environment. The only thing you can control is your test script. But what do you do with the environment? With an unavailable service or an uncertain scenario outcome? We wanted to remove provider restrictions, to enable quality assurance on complex platforms, and to significantly improve development and testing efficiency.

One solution could be mocking those services, and this can work well at the unit level. However, we wanted something more flexible, with the ability to record and play back traffic between our product and a provider. There were solutions on the market, but they were way too expensive for us. So we decided to sit down and build a…

SIMULATOR

We have agreed on 3 core functions:

Magic number – this was the function we used most in our testing. A special trigger in the request (a passenger name, an age, or something else) would trigger a predetermined response that was expected in our automated test.

Queue – usable for special cases. A single response is returned once for a certain request parameter.

Transparency – no interruption in data traffic. If no preset response has been triggered, the request goes to the real provider. The Simulator only observes the traffic and does not manipulate it, basically acting as a kind of proxy – useful when the simulator is set up but we need the original provider’s response.
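To make the magic-number and transparency ideas concrete, here is a minimal Python sketch. All names here (`handle_request`, `preset_responses`, `forward_to_provider`) are mine, not the actual Simulator’s: a request containing a preset trigger gets its predetermined response, and anything else passes through to the real provider.

```python
def handle_request(request, preset_responses, forward_to_provider):
    """Return a canned response if a magic trigger matches, else pass through."""
    for trigger, canned_response in preset_responses.items():
        if trigger in request:          # e.g. a special passenger name in the payload
            return canned_response      # "magic number" hit: predetermined response
    # Transparency: no preset matched, so forward to the real provider untouched
    return forward_to_provider(request)
```

In the automated test you seed `preset_responses` with the trigger your script will send, so the expected response is guaranteed no matter what the provider is doing.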

The Simulator has 3 modes:

Recording – In this mode the simulator records all traffic and stores it in files.

Playback – In this mode the simulator checks for a pre-existing recording of a request and, if such a request is found, returns its paired response.

Full transparency – This skips all checking inside the simulator. While in this mode you can still use recording.
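The recording and playback modes can be sketched roughly like this – a simplified, in-memory version with illustrative names (the real Simulator stores its recordings in files):

```python
class Simulator:
    def __init__(self, forward):
        self.forward = forward      # callable that reaches the real provider
        self.recordings = {}        # request -> recorded response pair

    def record(self, request):
        """Pass the request through and store the request/response pair."""
        response = self.forward(request)
        self.recordings[request] = response
        return response

    def playback(self, request):
        """Return the recorded response if one exists, else fall through."""
        if request in self.recordings:       # pre-existing recording found
            return self.recordings[request]
        return self.forward(request)         # no recording: ask the provider
```

Once a session is recorded, playback never touches the provider for known requests, which is what makes the tests repeatable.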

And a plugin mechanism is there in case we want to add some logic, modify the request or the response, and reuse it later.
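A plugin hook of that kind might look something like this (purely illustrative, not the Simulator’s actual API): each plugin gets a chance to rewrite the request before it is sent and the response before it is returned.

```python
def apply_plugins(request, request_plugins, response_plugins, send):
    """Run request plugins, send the request, then run response plugins."""
    for plugin in request_plugins:
        request = plugin(request)        # e.g. normalize a date or mask an ID
    response = send(request)
    for plugin in response_plugins:
        response = plugin(response)      # e.g. inject free seats into the reply
    return response
```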

Outcome:

Improved testability – automated and manual testing can now be done regardless of 3rd party service availability

Better efficiency – easy handling of corner cases. Development can work without full 3rd party access

Resolved dependencies (extra added value) – removal of internal dependencies, speeding up the environment

We noticed a couple of things happening once the Simulator was introduced to the team. First, there was obstruction or, at best, indifference – to implement it we needed support from developers, but they were reluctant to give any. However, that quickly changed once they realized that they didn’t need an unstable provider to check their code. They could use the simulator. It was much smoother with test automation engineers – their scripts stayed the same; the only thing that changed was the URL they pointed to.
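For the automation scripts, the switch really can be that small. A hypothetical configuration (the URLs and function below are made up for illustration) where only the base URL changes:

```python
# Hypothetical endpoints – in practice these come from the test configuration.
PROVIDER_URL = "https://provider.example.com/api"
SIMULATOR_URL = "https://simulator.internal.example/api"

USE_SIMULATOR = True
BASE_URL = SIMULATOR_URL if USE_SIMULATOR else PROVIDER_URL

def search_flights(origin, destination):
    """Build the request URL; the rest of the script is unchanged."""
    return f"{BASE_URL}/flights?from={origin}&to={destination}"
```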

In short, development became faster and testing became more stable. One very important advantage of developing this solution in-house (instead of buying a finished one on the market) was that we were able to upgrade it as we wished. The latest addition was a web app that shows providers’ availability – very useful for testers and developers when you work with 50+ providers.

And how do you solve your problems with unstable providers in a complex platform world?

About the Author

Aleksandar

Extensive Automated and Manual testing experience with several projects across various industries (storage, telco, gaming, education). Specialties: Experienced with Mercury’s Quick Test Pro (QTP), Test Complete, Rational XDE Tester, Selenium, Watir, Sikuli, SOAP UI and Protractor. QA in gaming.
Find out more about @acoristic