Continuing the Start-Up series, Gordon goes deep into the history of his testing company CodeFuse and how it devised an automated regression testing framework.
In this month’s blog, I’ll be covering the background to my start-up company, CodeFuse. For a part-by-part breakdown of this blog series, please see Part Zero.
When CodeFuse was conceived, we firmly believed (and still do) that people want to take pride in creating a quality product. Building in that quality requires great regression testing, particularly in an agile environment. Too often, however, problems undermine test automation endeavours, and too much time is wasted getting to the solution a team requires.
So, the questions are: what is the fundamental business problem that teams want solved? What barriers stand in the way of successfully introducing automated regression testing? And finally, what does CodeFuse believe the solution is?
The Business Problem
Fundamentally, businesses need to test their products because it lowers the risk of:
- A poor experience for customers
- Customer support having to deal with issues
- Reputation damage
Regression testing is a substantial part of this testing effort and checks whether any old features have been adversely affected by the addition of new ones.
However, software regression testing is often a time-consuming activity. The larger your product offering becomes, the higher the risk of failure and the more regression testing is required.
Companies can try to reduce this risk by ploughing in more manual resource, but they are likely to be hit by the following issues:
- Your development time to market suffers as manual testing is slow
- More defects escape as manual testing limits the scope of what can be tested
- Manual resource is taken away from other testing to perform rote regression testing
Automation is the obvious answer. But why does it fail so often?
Barrier 1: Poor understanding of what is to be automated
Most areas in most businesses are looking for efficiency gains through automation. This includes software but also areas like marketing and sales.
I have spent around 15 years of my career in software testing, and there appear to be some distinct differences between business functions when discussing automation. Take a marketing automation system as an example: there is commonly a good understanding of what the automation is doing, such as sending emails or gathering and analysing responses. There is no suggestion that the solution will generate content. The world of automation in software testing, however, is different. Automation is often viewed not as an aid but as a replacement for a human tester.

It’s unclear why this difference exists, but my theory is that it’s to do with knowledge and perception. What is it the testers are actually doing? Are testers a ‘necessary evil’ who simply mechanically execute tests and report negative events? Testers often do not help the situation by failing to communicate clearly what testing they have, and have NOT, completed.
Of course, software testers know the wide variety of activities where they add value: spotting defects early in requirements analysis, exploratory testing of new features, and regression testing of old ones. They also help ensure quality with bug re-tests and by creating test data to support the team. During all testing tasks, different dimensions such as functionality, performance, usability and security need to be considered. Add to that the fact that sub-sets of all of this may need to be done on different environments and platforms too! Some of these tasks are repetitive; some are not.
The perception should be that test automation, like marketing automation, enables the people using it to get more done and do a better job. Automation still needs guidance.
For anything to be considered a success there needs to be an understanding of what it is you were trying to accomplish.
Barrier 2: One of these tools must be perfect for me!
A fixation on the selection of the automation tool itself is very common. Of course it is sensible to do some analysis in this area to make sure that your application-under-test is compatible with the tool you will be investing in (in terms of time and/or money).
However, this fixation seems to distract from the far larger problems of introducing the tool and changing team ways-of-working.
Getting understanding and buy-in from team members about the changes, what investment they need to make and what the rewards will be should be 90% of the change management job, yet 90% of it often seems to be tool selection.
Barrier 3: What is it we want to regression test again?
There is also a slightly more granular version of the question “What are we trying to do?”
It may be that a team or department has moved on from, or hopefully bypassed altogether, “we want to automate testing” and now states something like “we want to automate all of MyApp v1.0 regression testing”. This is a giant leap forward, but what does “all” actually mean?
Some companies will have manual scripts representing all the regression tests they want to execute, but sometimes there is nothing. Are these manual scripts really everything? Are they correct and up-to-date?
At some point, the business logic for what “all” actually means needs to be encoded in a form a machine can execute. Is it even possible to communicate it?
Barrier 4: Test creation takes too long
Teams often underestimate, or fail to prioritise time for, test creation problems like the following:
- It’s too slow extracting UI information for tests
- My tests only work on one browser & OS
- Much of our test code is not re-usable
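The re-usability problem in particular has a well-known mitigation: keep UI locators in one central table (a simple form of the page-object idea) so test logic never hard-wires a selector. A rough sketch, where the page names and CSS selectors are invented purely for illustration:

```python
# A minimal sketch of separating UI locators from test logic so the
# same test code can be reused across pages and browsers. The page
# names and selectors below are hypothetical examples.

LOCATORS = {
    "login": {
        "username": "#user",
        "password": "#pass",
        "submit":   "button[type=submit]",
    },
}

def locator(page: str, element: str) -> str:
    """Look up a selector in one central table instead of
    hard-wiring it into every test."""
    return LOCATORS[page][element]

# Test code now reads in domain terms; a selector change is a
# one-line edit to the table, not a hunt through every test.
print(locator("login", "submit"))  # -> button[type=submit]
```

The pay-off is that when the UI changes, only the table changes; every test that refers to “the submit button” keeps working unmodified.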
Barrier 5: Test execution takes too long
Likewise, the underestimation problem can apply to test execution…
- We waste time with test code compilation issues
- We spend too much time diagnosing test errors
- It’s tough to run the tests I want, when I want
- Not all our test code has error handling
Barrier 6: Results analysis takes too long
… results analysis…
- My test results are unclear and ambiguous
- It’s too hard to make the results visible to my team quickly
- I’ve no idea what my test automation ROI is
Barrier 7: Maintenance takes too long
… and maintenance time too!
- Keeping multiple platform test machines clean and operational
- Tests duplicated across data sets
- Tests duplicated across configurations
- Minor UI changes always break my tests
- UI and test data is hard-wired and duplicated
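The duplication problems above are usually tackled with data-driven testing: one test body run over a table of data or configurations, instead of near-identical copies of the test per language or environment. A hypothetical sketch, with an invented stand-in for the application under test:

```python
# A sketch of data-driven testing: one test body, many configurations.
# The configurations and the greet() stand-in are illustrative only.

CONFIGS = [
    {"language": "en", "greeting": "Hello"},
    {"language": "fr", "greeting": "Bonjour"},
]

def greet(language: str) -> str:
    """Stand-in for the application behaviour under test."""
    return {"en": "Hello", "fr": "Bonjour"}[language]

def run_suite():
    results = []
    for cfg in CONFIGS:
        # One assertion, many configurations: a change means editing
        # the table, not every duplicated test.
        ok = greet(cfg["language"]) == cfg["greeting"]
        results.append((cfg["language"], ok))
    return results

print(run_suite())  # -> [('en', True), ('fr', True)]
```

Adding a new language then means adding one row of data, not maintaining another copy of the test.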
The CodeFuse Solution
First up, to address Barrier 1, we make it very clear in our messaging that we do automated web regression testing. We do not claim to “automate testing”; that is highly likely to confuse. Of course, as CodeFuse grows we may be able to automate more, but the point is that if we want to be a success, we need our customers to agree on what success is.
But how can we deliver on that promise? If you look at the bulleted items in particular, the problems are diverse. You have probably seen some of them dealt with in a customised system, or have possibly coded solutions yourself. But these are issues that are common across companies the world over. Rather than re-invent the wheel, we have wrapped the solutions up into one easy-to-use, cloud-based framework that addresses the issues end-to-end. We have worked hard to provide features such as the “browser-in-browser” object spy tool, multi-platform test execution integration, clear reports, codeless keyword-driven tests and multiple-configuration (e.g. language) support.
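To make the keyword-driven idea concrete: a test becomes a table of (keyword, arguments) rows, and a small engine dispatches each row to a real action. The sketch below is a hypothetical illustration of that general pattern, not CodeFuse’s actual engine; every keyword, step and URL is invented:

```python
# A minimal, hypothetical keyword-driven test runner: tests are data,
# and a small engine maps each row onto an action. In a real tool the
# actions would drive a browser; here they just return a log line.

def open_page(url):
    return f"open {url}"

def type_text(field, text):
    return f"type {text!r} into {field}"

def click(element):
    return f"click {element}"

# The keyword table maps the words a non-coder writes to actions.
KEYWORDS = {"open": open_page, "type": type_text, "click": click}

# A "codeless" test: just data, no programming required to author it.
LOGIN_TEST = [
    ("open",  ["https://example.com/login"]),
    ("type",  ["username", "alice"]),
    ("click", ["submit"]),
]

def run(test):
    """Execute each row and collect a readable log of what happened."""
    return [KEYWORDS[keyword](*args) for keyword, args in test]

for line in run(LOGIN_TEST):
    print(line)
```

The appeal for a time-poor team is that new tests are authored as rows of data, while the engine and its error handling are written once.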
As with any solution, we don’t aim to please everyone. There is a trade-off between a SaaS solution and a customised solution that is expensive in time and money. But for a segment of the market, a simple and convenient answer to many of their regression testing problems is exactly what they are looking for.
Our target segment is often time-poor, so we also wrap the SaaS solution with a simple way to supplement their resources. Clients can ask CodeFuse to create, execute and report on tests. They can still use the CodeFuse SaaS solution directly, but they can call on CodeFuse experts too. In this way, there is a completely clean split: clients hold their domain knowledge, and CodeFuse holds the test automation expertise.
We hope that CodeFuse will allow more teams to be proud of the products they create.
About The Author
Gordon is the founder of CodeFuse Technology. CodeFuse reduces software development time by making regression testing faster, better and easier. Gordon graduated with a first-class degree in Computer Science and Artificial Intelligence from Sussex University and also holds an MBA from Imperial College with a specialisation in Entrepreneurship. He has worked successfully for blue-chip, SME and start-up companies. His passion is software quality and ensuring that continuous improvement enhances quality efforts across the entire development lifecycle.