According to research conducted by Pinch Media Data in 2009, the average shelf-life of a mobile application is only 30 days. The number of mobile applications has since exploded, and the shelf-life figure has only shrunk further. These trends require software quality assurance teams to recalibrate their approach to software testing and to align closely with both the mobile app development teams and the customer base. Choosing the right mobile test automation tool should be part of this approach.
With the variety of applications available and the growing number of features users demand from them, ensuring the quality of mobile applications is indispensable both to retain an existing customer base and to onboard new users. Given the short time windows available for software development and quality assurance (testing), test automation becomes a necessity at some point in a company's lifetime, even though alternative strategies exist. An app development company could decide to automate its testing activities for a myriad of reasons, internal or external.
Regardless of the underlying reasons, once a company has decided to automate its testing activities, a structured approach is required to identify the tools for the automation process. Success in test automation depends to a high degree on the set of tools employed. Given the variety of automation tools available in today's marketplace, converging on the right set of tools to meet a company's unique testing needs can be a daunting task.
When a product is being developed, it is relatively unstable. During these phases, manual testing is relevant to quickly verify that the product works as expected. Software testers should use this phase not only to become familiar with the product specifications, but also to write test cases for Verification and Validation (V&V) purposes. Once the product specifications are finalized, testers should start thinking about how they could automate those test cases.
Oftentimes, software development companies have to reconcile between investing in select tools for specific short-term client projects and selecting tools for the long-term projects and products they are developing, to avoid re-tooling and expensive overheads later. The short shelf-life of mobile applications poses a particular management conundrum in forging a coherent tool strategy. In such cases, a scenario-based approach helps managers undertake a coherent investigation of their requirements, prepare their companies for mobile test automation and make the right tool investments for both tactical and strategic projects.
1. Supported mobile platforms
With any given requirement specification, one has to select the right set of tools that support not only the target operating systems, such as iOS, Android and Windows and their different versions, but also the underlying hardware configurations. Mobile applications present several unique challenges that quality assurance teams need to account for when structuring their test efforts. One of the most fundamental issues is to understand how an application (code base) will perform across several operating systems, interfaces and form factors. Though the major players in the mobile platform market are Google and Apple, developers may still need to account for Symbian and Windows Phone users as well. Even within a single platform, there could be a permutation of software versions and form factors to consider. It is, therefore, extremely important to check the oldest and newest supported versions of each platform before deciding on a mobile test automation tool.
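To make the version-and-form-factor permutations concrete, here is a minimal Python sketch that expands a device matrix into Appium-style capability dictionaries. The platform versions and device names are illustrative assumptions, not recommendations:

```python
from itertools import product

# Hypothetical device matrix: oldest and newest supported OS versions
# per platform, plus representative form factors. Values are examples only.
PLATFORMS = {
    "Android": {"versions": ["8.1", "14"], "devices": ["Pixel 4a", "Pixel Tablet"]},
    "iOS": {"versions": ["15.0", "17.4"], "devices": ["iPhone SE", "iPad Air"]},
}

def capability_matrix(platforms):
    """Expand the matrix into Appium-style capability dictionaries."""
    caps = []
    for name, cfg in platforms.items():
        for version, device in product(cfg["versions"], cfg["devices"]):
            caps.append({
                "platformName": name,
                "platformVersion": version,
                "deviceName": device,
            })
    return caps

matrix = capability_matrix(PLATFORMS)
print(len(matrix))  # 2 platforms x 2 versions x 2 devices = 8 combinations
```

Even this toy matrix yields eight combinations; adding one more version per platform pushes it to twelve, which is why the oldest/newest version bounds matter when sizing the test effort.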
2. Supported application types
Once an initial set of automation tools has been shortlisted, one has to check the type of application that could be managed using these tools. Most tools are so specific that they do not concurrently support native, hybrid and web applications. Most mobile testing processes do not lend themselves to a one-size-fits-all approach. Therefore, it is highly probable that several tools will have to be selected in the automation process chain. Depending on the type of application under test, at least 80% of testing activities could be automated, following Pareto's Law. However, when factoring in an application's functionality on a range of platforms, some amount of ad hoc manual testing is still required. Leveraging the right set of tools can help increase efficiency and reduce costs, while providing an objective environment to assess the quality of the application and predict the user experience in the actual environment where the application or service is deployed.
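As a toy illustration of why several tools may end up in the chain, the sketch below routes each application type to a different driver; the driver names are placeholders, not real tool identifiers:

```python
def pick_driver(app_type):
    """Map the application type to the (hypothetical) tool handling it.

    In practice each entry would be a different automation tool or
    driver configuration, since few tools cover all three types.
    """
    drivers = {
        "native": "appium-native-session",
        "hybrid": "appium-webview-context",
        "web": "browser-webdriver",
    }
    try:
        return drivers[app_type]
    except KeyError:
        raise ValueError(f"unsupported application type: {app_type}")

print(pick_driver("hybrid"))  # appium-webview-context
```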
3. Source code requirements
For the best testing quality, native mobile applications should have some tool-specific framework bundled within the installer so that software testers can send instructions to the device/emulator to perform activities directly within the native application. Most conventional browsers have their own web drivers, so testers can test web applications with the help of these browser-specific web drivers. In most cases, mobile applications are not delivered to the testing team with the source code or framework that would let them simulate the same functionality on different mobile platforms. In some cases, a solution like the App Package for iOS is available; though this module does not deliver the test coverage that a process with source access would, it does provide more capabilities for testing than the leaner application install file itself. Hence, the source code and platform frameworks are significant points to consider, as it is not always possible to gain access to the source code for testing purposes, especially when the testing activities are outsourced to a third party.
4. Application refactoring requirements
The next obstacle in mobile test automation is the requirement to modify the application, i.e. refactor it, to make it testable by the automation tool. The trick to refactoring is being able to verify that the functionality is retained. A testing professional needs to make sure that all changes are verified before and after refactoring. Though automating this process is not required, it could help during subsequent regressions. Refactoring complex applications or code modules is an art, and automating these elements should be performed with the utmost diligence. The chosen tool should meet such scalability requirements to deliver the expected results at different levels of granularity; it may be necessary to include third-party libraries in the test project and build a test version of the product, or to modify the existing app version that is delivered for testing.
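One common way to verify that functionality is retained across a refactoring is a characterization (golden-master) test: run the old and new implementations over the same grid of inputs and compare the outputs. A minimal sketch with a made-up discount function:

```python
def legacy_discount(price, qty):
    # Original implementation whose behaviour must be preserved.
    total = price * qty
    if total > 100:
        total = total - total * 0.1
    return round(total, 2)

def refactored_discount(price, qty):
    # Refactored version: clearer, but must match legacy behaviour exactly.
    total = price * qty
    return round(total * 0.9 if total > 100 else total, 2)

# Characterization check over a grid of inputs, including boundary values.
cases = [(p, q) for p in (1.0, 9.99, 60.0) for q in (1, 2, 10)]
mismatches = [(p, q) for p, q in cases
              if legacy_discount(p, q) != refactored_discount(p, q)]
print(mismatches)  # [] when behaviour is fully preserved
```

The same before/after comparison applies when refactoring an app for testability: capture the observable behaviour first, then assert it is unchanged after every modification.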
5. Test scripts generation
For mobile applications that require extensive test coverage, creating real-time test scripts could pose a significant challenge. Though test automation greatly improves execution efficiency, these efficiency gains come with significant costs, especially when developing a library of test scripts to meet test coverage requirements. Automated test-script generation tools could further improve efficiency and broaden test coverage by helping create scripted test scenarios around operational requirements. For scalability, the tools chosen to automatically generate test scripts should support script parameterization. This approach, however, is usually limited by tool capabilities and cannot deliver the same degree of coverage as a programmatic approach, whereby the full power and capabilities of the underlying programming language are leveraged. The programmatic option is not as fast as the automated test-script method, but the outcome is more effective and flexible. It is, therefore, necessary to evaluate the resources available in order to choose one approach over the other in the tool evaluation process.
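Script parameterization can be illustrated without any particular tool: one test template driven by many data rows, so coverage grows without duplicating script logic. A self-contained sketch (frameworks such as pytest's parametrize mechanism follow the same idea):

```python
# Toy function standing in for the application logic under test.
def is_valid_username(name):
    return 3 <= len(name) <= 20 and name.isalnum()

# Each row is one parameterized case: (input, expected result).
CASES = [
    ("bob", True),        # lower boundary
    ("ab", False),        # too short
    ("a" * 20, True),     # upper boundary
    ("a" * 21, False),    # too long
    ("bad name", False),  # invalid character
]

def run_parameterized(test_fn, cases):
    """Run one test template against every data row."""
    return {value: test_fn(value) == expected for value, expected in cases}

outcome = run_parameterized(is_valid_username, CASES)
print(all(outcome.values()))  # True when every data row passes
```

Adding a new scenario is one data row, not a new script, which is what makes parameterization scale.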
6. Programming language specifications
Broadly speaking, the programming language used in developing the application plays a significant role in the quality assurance process. Testers often choose scripting languages, such as Perl, Python or Ruby, to create scripts for automating test cases because these languages are usually easier to learn, do not require compilation (which results in significant time savings), and have a large user base and many libraries to choose from to solve various automation challenges. Object-oriented languages, such as Java, C++ or C#, are often chosen for automating tests when the test subject has been developed using an object-oriented programming language, which has a significant influence on the solution's architecture. In addition to selecting the right tools and programming languages, test staff allocation is also very important. It is more effective to reuse existing in-house knowledge, experience and skills than to adopt new technologies.
7. Runtime object recognition
There is a fundamental difference between functional and load testing tools. Functional testing tools operate at the user interface level, while load testing tools work at the protocol level. Runtime object recognition for functional testing tools is almost never 100%. If the object recognition success rate is less than 50%, the test automation team will have to perform so many workarounds that it defeats the objective of test automation. For load testing tools, this question is less relevant. Application changes and their impact on object recognition in test scripts are a constant challenge for the test automation team. Having unique object identification greatly reduces the impact of changes and simplifies test script maintenance. One has to understand and evaluate how object recognition is performed during runtime with a given tool and, if possible, gain access to the specific objects so that checks can easily be performed on the recognition properties in the collected object library.
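The preference for unique, stable identifiers can be expressed as a simple locator-selection policy. The strategy names and element attributes below are illustrative assumptions, not any particular tool's API:

```python
# Preference order for runtime object recognition: stable, unique
# identifiers first, brittle positional XPath only as a last resort.
LOCATOR_PRIORITY = ["accessibility_id", "resource_id", "name", "xpath"]

def best_locator(attributes):
    """Pick the most stable locator available for an element."""
    for strategy in LOCATOR_PRIORITY:
        if attributes.get(strategy):
            return strategy, attributes[strategy]
    raise LookupError("no usable locator; element is not uniquely identifiable")

# Hypothetical element: a fragile positional XPath and a stable resource id.
element = {
    "xpath": "/hierarchy/android.widget.FrameLayout/android.widget.Button[2]",
    "resource_id": "com.example:id/login_button",
}
print(best_locator(element))  # ('resource_id', 'com.example:id/login_button')
```

A layout change would invalidate the positional XPath but leave the resource id intact, which is exactly why unique identification simplifies script maintenance.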
8. Data driven inputs
Today, most applications are interactive, requiring users to key in something at some point. Knowing how the application responds to various sets of inputs is essential to delivering a stable, quality product to market. Data-driven testing helps understand how an application deals with a range of inputs. Rather than having testers manually enter endless combinations of data or hard-code specific values into the test script, the testing framework automatically pulls values from a data source, enters the fetched data into the application and verifies that the application responds appropriately before repeating the test with another combination of values. Automated data-driven testing significantly increases test coverage while simultaneously reducing the need to create additional tests for different variables. An important use of data-driven tests is ensuring that applications are tested for boundary conditions and invalid input. Data-driven tests are often part of model-based tests, which include randomization to cover a wide range of input data. To enable test execution with different combinations of data, the data sources should be properly managed. The chosen test automation tool should include drivers and support a range of data formats, such as flat files, spreadsheets and database stores.
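A minimal data-driven sketch, using an in-memory CSV as the data source and a toy validation function standing in for the application under test; a real framework would pull the rows from files, spreadsheets or a database:

```python
import csv
import io

# Data source: each row is one input plus the expected application response,
# including boundary and invalid values.
csv_data = """amount,expected
50,accepted
0,rejected
-10,rejected
1000000,rejected
"""

def validate_transfer(amount):
    # Application logic under test (toy example): accept 1..10000 only.
    return "accepted" if 0 < amount <= 10000 else "rejected"

failures = []
for row in csv.DictReader(io.StringIO(csv_data)):
    actual = validate_transfer(int(row["amount"]))
    if actual != row["expected"]:
        failures.append(row)
print(len(failures))  # 0: every data row behaves as expected
```

The test loop never changes; broadening coverage is purely a matter of managing the data source, which is why the tool's supported data formats matter.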
9. Result and error logging
During test case development and execution, it is often necessary to log messages with more developer-specific information. For a test manager, however, it would suffice to know whether a particular test passed or failed. Depending on the needs, it may be necessary to automatically capture screenshots or screen recordings of failed test runs to make it easier for developers to reproduce the issue and identify the root cause of the problem. The automation tool should also have the necessary filters to mine log messages by their type, text, priority, time and other important attributes. Tools that allow reviewing log summaries from one automated test run to another across timelines, and the ability to configure report formats, are also features to consider when choosing a test automation tool.
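The split between developer-level detail and a manager-level pass/fail summary can be sketched with Python's standard logging module, filtering by message level:

```python
import logging

# Route verbose developer detail to one handler and keep a terse
# failures-only summary in another, filtered by log level.
logger = logging.getLogger("test_run")
logger.setLevel(logging.DEBUG)

detail = logging.StreamHandler()   # full developer-facing stream
detail.setLevel(logging.DEBUG)

summary = []

class SummaryHandler(logging.Handler):
    """Collect only failures, the view a test manager needs."""
    def emit(self, record):
        if record.levelno >= logging.ERROR:
            summary.append(record.getMessage())

logger.addHandler(detail)
logger.addHandler(SummaryHandler())

logger.debug("tapping login button (developer detail)")
logger.error("FAIL test_login: expected dashboard, got error dialog")
print(summary)  # only the failure reaches the summary view
```

The same level-and-attribute filtering is what to look for in a candidate tool's log mining features; screenshot capture on failure would hook into the same failure path.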
The central idea behind a continuous testing approach is to frequently promote code changes and rapidly get feedback about the impact these changes have on the existing system. A strong test automation framework should be able to support teamwork and the integration of automated testing infrastructure components, such as the Integrated Development Environment (IDE), test framework, revision control, test configuration management, issue tracking, report generation, etc. Continuous testing and integration with the existing quality assurance tools and technologies are of paramount importance for the efficiency of QA processes. Not performing continuous testing leaves too much room for defects to creep in; by the time a defect is identified, more code has been layered on top of it, which makes defects discovered later harder and more expensive to fix. Testing changes right away dramatically reduces the cost of addressing defects, so the automation tools should trigger a build with each commit and execute the relevant tests automatically or at scheduled intervals throughout the day.

In addition, decomposing a test suite into smaller batches, running test cases in parallel and automatically dispatching defects to the developers working on the code branch where the defect is identified is the cheapest and fastest way to achieve quality outcomes. This approach also gives developers room to experiment, while simultaneously protecting the master code base from regressions. As a result, each code branch is tested as rigorously as the master. Applying continuous integration to new branches as soon as they are created helps uncover compatibility problems and eases the final integration with the master.
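The batching-and-parallel-execution idea can be sketched with the standard library; the test names and the simulated failure below are placeholders:

```python
from concurrent.futures import ThreadPoolExecutor

# Split a suite into batches and run them in parallel, collecting failures
# so they can be dispatched to whoever owns the affected branch.
def run_test(name):
    # Stand-in for real test execution; "flaky_checkout" simulates a failure.
    return name != "flaky_checkout"

suite = ["login", "search", "flaky_checkout", "profile", "logout", "payment"]
batch_size = 2
batches = [suite[i:i + batch_size] for i in range(0, len(suite), batch_size)]

def run_batch(batch):
    """Return the names of failing tests in this batch."""
    return [t for t in batch if not run_test(t)]

with ThreadPoolExecutor(max_workers=len(batches)) as pool:
    failed = [f for fails in pool.map(run_batch, batches) for f in fails]
print(failed)  # ['flaky_checkout']
```

In a real pipeline the batches would be distributed across build agents and the failure list fed into the issue tracker, but the decomposition logic is the same.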
One of the main reasons that test automation is often perceived as an expensive affair is that the automation activities are done in silos, entirely disconnected from the core development efforts. Effectively shielded from the ramifications of design decisions that hamper testability, developers continue to create software that is almost impossible to automate. Effective Agile teams break down these silos: every developer on the team is involved in automating the tests, and the automated tests go into a common repository where the developers check in their code. As a result, the cost of test automation decreases dramatically.
Choosing the right mobile test automation tool can be difficult. There are several free open source and proprietary test tools that are candidates for evaluation. When open source tools are selected, it is important to check how stable the tool's evolution is and how quickly the tool is upgraded to support the latest changes in technologies. As for proprietary solutions, the price of the tool is one of the key factors to consider in justifying the investment and ROI calculations. It is also important to check the licensing models, such as pay-per-use, per node, site license, period of validity, etc. Another important consideration is the availability of add-ons, support and updates, and whether these features cost extra. Last but not least, the chosen tool's ease of use trumps all other considerations. The tool's complexity should be in line with the test team's ability to adopt new tools and the programming talent at the company's disposal.
Mithun Sridharan is the Managing Director of Blue Ocean Solutions (BlueOS) PLC, a Germany-based Inbound Marketing and Digital Transformation company focussing on Technology companies. He brings with him over ten years of International experience in Business development, Marketing, Global Delivery and Consulting. He holds a Master of Business Administration (MBA) from ESMT European School of Management and Technology, Berlin and Master of Science (M.Sc) from Christian Albrechts Universität zu Kiel, Germany. He is a Harvard Manage Mentor Leadership Plus graduate, a Project Management Professional (PMP) and a Certified Information Systems Auditor (CISA). He also served as the Communication Chair for the German Outsourcing Association in 2013 and is based in Eschborn, Germany.