Mobile Test Automation Strategy – Best Practices and Potholes to Avoid

The mobile app landscape is changing rapidly, with revenue projected to reach an estimated $188.9 billion by 2020 (Source: Statista). The trend has permeated virtually every industry segment, as users find apps convenient, highly usable, and cost-effective. Enterprises have lately realized the potential of building mobile apps to reach their target customers quickly and consistently. In doing so, they have built apps aligned to categories such as utility, banking, retail, entertainment, gaming, and social networking, among others. However, notwithstanding the popularity of such apps among users and the inclination of enterprises to build them, the challenges are many. The first is building a robust mobile test automation strategy.

So, what makes building a mobile test automation strategy a challenge? The answer lies in cost escalation. Mobile app testing has become a costly proposition, perhaps even more so than development, for a number of reasons:

  • Testing apps on different device models, operating platforms, and networks takes time, effort, and money.
  • Mobile app testing needs to be done continuously due to aspects like upgrades to the operating systems, the release of new mobile features and functionalities, and the launch of new device models.

To overcome these mobile test automation challenges and improve the quality of apps, enterprises ought to focus on automating as many aspects of testing as possible. Let us discuss the best practices to follow and the challenges faced while planning a mobile test automation strategy.

Best practices (and challenges) for mobile test automation services

  1. Cost estimation

Best practices

  • Estimate the cost of setting up a test environment
  • Validate the most critical requirement in testing
  • Estimate the effort towards writing and executing scripts
  • Conduct dry runs of testing to ensure that third-party tools, such as CVS or Jenkins, integrate properly
  • Analyze the final testing report to understand any deviations. These highlight the areas that are not doing well, such as wrong tool selection or missing automation scripts
  • Target the quantum of automation based on the device and OS combinations and the user base of the application. Use mobile app analytics to estimate this quantum
  • Choose a test automation tool capable of recording user stories, irrespective of how it locates elements. For example, MonkeyTalk uses Monkey IDs while Appium uses UI locators to find native or web elements (see the sketch after this list)
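
As a point of reference for the locator approaches mentioned above, here is a minimal sketch using the Appium Python client (assuming Appium-Python-Client 2.x against a local Appium 2 server). The device name, app package, activity, locator values, and credentials are hypothetical placeholders for an app under test.

```python
# Minimal Appium sketch: locate native elements and drive a hypothetical login flow.
# Assumes a local Appium 2 server at its default endpoint and an Android emulator.
from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy

options = UiAutomator2Options()
options.device_name = "emulator-5554"          # hypothetical device id
options.app_package = "com.example.bankapp"    # hypothetical app under test
options.app_activity = ".LoginActivity"        # hypothetical launch activity

driver = webdriver.Remote("http://127.0.0.1:4723", options=options)
try:
    # Locate elements by accessibility id and resource id (UI locators).
    username = driver.find_element(AppiumBy.ACCESSIBILITY_ID, "username_field")
    password = driver.find_element(AppiumBy.ID, "com.example.bankapp:id/password")
    username.send_keys("test_user")
    password.send_keys("secret")
    driver.find_element(AppiumBy.ACCESSIBILITY_ID, "login_button").click()
finally:
    driver.quit()
```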

Challenges

  • Automation may not give the desired benefits vis-a-vis manual testing
  • Test automation may have a longer turnaround time, thereby necessitating the use of manual testing
  • Proofs of concept may take more time and effort than is feasible
  2. Tool selection

Best practices

  • Assess the feasibility of a tool by creating a test case with it during the planning stage. An improper tool can lead to missed deadlines and the loss of money and effort
  • Record a test case to save the time spent writing scripts by hand
  • Calibrate the test script by running it from the tool's UI panel. Thereafter, convert the recorded script into a programming language to handle dependency testing, looping, and parameterization (see the sketch after this list)
  • Select a tool with a programming interface for modifying the script. This is important because testers often need to make changes to reporting and user scenarios
  • Select a tool that allows testing on both simulators and real devices. The tool should also be backed by a developer community that provides periodic updates
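
To illustrate the conversion step above, the sketch below shows what a recorded login flow might look like once translated into a parameterized pytest script. The login() helper and the credential matrix are hypothetical stand-ins for the recorded UI steps, simplified so the example runs on its own.

```python
# Hedged sketch: a recorded, single-path login test converted into a
# data-driven pytest test with looping via parameterization.
import pytest

# Hypothetical credential matrix; in practice this could come from a CSV file
# or from the analytics data used to estimate the automation quantum.
CREDENTIALS = [
    ("valid_user", "correct_pass", True),
    ("valid_user", "wrong_pass", False),
    ("", "", False),
]

def login(username, password):
    # Placeholder for the recorded UI steps (locate fields, type, tap login).
    # Here the outcome is only simulated so the sketch stays self-contained.
    return username == "valid_user" and password == "correct_pass"

@pytest.mark.parametrize("username, password, expected", CREDENTIALS)
def test_login(username, password, expected):
    assert login(username, password) == expected
```

Running `pytest` executes the same flow once per credential row, which is the kind of looping and parameterization a plain recording cannot express.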

Challenges

  • Choosing the most suitable tool from a plethora of tools, each claiming to be the best, is difficult. Prices also vary widely, from free to premium
  • Tools come with multiple architectures and configurations, such as emulator-based or cloud-based setups
  • The fact sheet used to select a tool does not give complete information on all relevant parameters
  • Inconsistent commitment from vendors
  3. Continuous integration and reporting

Best practices

  • Incorporate continuous integration and reporting into the project plan, as a successful run can achieve a series of positive outcomes. These include creating automated builds, receiving failure summaries, and automating email notifications, among others.
  • Select a tool with good documentation and support
  • Make scripts more readable and dynamic by using an IDE such as Eclipse with the programming language of your choice
  • Execute the test script in parts if there is a problem with reporting due to data or network issues
  • Write stateless test cases so that each can run independently (see the sketch after this list)
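
The sketch below illustrates stateless test cases with pytest: each test receives a fresh session from a fixture and tears it down afterwards, so any subset can be executed in parts and in any order. The start_app_session() and end_app_session() helpers are hypothetical placeholders for launching and closing the app under test.

```python
# Hedged sketch of stateless tests: no test relies on state left behind by
# another, which keeps partial runs and CI retries reliable.
import pytest

def start_app_session():
    # Hypothetical: launch the app / create an Appium driver session.
    return {"logged_in": False, "cart": []}

def end_app_session(session):
    # Hypothetical: quit the driver / reset app data.
    session.clear()

@pytest.fixture
def session():
    s = start_app_session()
    yield s
    end_app_session(s)  # teardown runs even if the test fails

def test_login_screen_shows_logged_out_state(session):
    assert session["logged_in"] is False

def test_fresh_session_has_empty_cart(session):
    # Independent of the previous test; it never assumes a prior login.
    assert session["cart"] == []
```

Because no test depends on another, a failing subset can be re-run in isolation without distorting the report for the rest of the suite.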

Challenges

  • Requires knowledge of how to configure the CI tool
  • Every application may need its own continuous delivery pipeline to be set up

Conclusion

Testing mobile applications entails factors such as network and environment setup, the availability of tools and their compatibility with various test environments and devices, recording test outcomes, and configuring desktop and server systems. By following these best practices for automated mobile application testing, enterprises can deliver mobile apps with better user experiences and ROI.

About the Author

Hemanth

Hemanth Kumar Yamjala has 10+ years of experience in IT services, predominantly in marketing and branding, with a specialization in digital. He is currently part of the marketing team at Cigniti Technologies, where he leverages digital marketing channels for lead generation and promotion.
Find out more about @hemanth-yamjala