Every decision we make, including in testing, has economic benefits and costs. But what is the economic cost of software testing?
The Economics of Testing
What makes us professional?
I’ve heard many answers. But over the last few years I’ve established my own definition: Professionals understand the impact of the money they are given. Professionals do not waste that money, but invest their skills for maximal customer value. Professionals understand that value in terms of business impact for the customers.
I came across this as part of writing my book, “Everyday Unit Testing”. I started asking myself, and others, how professionalism translates to the testing world. Apparently, professional testing, in terms of understanding and applying economics, means quite a lot.
To understand the economics of testing, we first need to go back in time.
The making of a tester
Testing as a profession is new, and so is the software world it belongs to. Testing itself, however, has been needed since the first piece of software. Back then, developers tested the software they wrote; it was part of the job. After World War II, the software field grew bigger. With bigger markets came economic opportunities, but also big risks: big foul-ups can lead to reputation loss, market loss and the loss of big piles of cash.
Then, in the 1980s, things changed: computers became cheaper, and operating systems were finally stable and accommodating. Software was now cheaper to write.
But developers became an even scarcer resource. With bigger markets and opportunities, demand exploded. Businesses had no choice but to make a logical, yet risky, decision: they transferred the testing activities to cheaper people, and kept the existing developers working on features. This made economic sense in terms of labor, but it hurt quality, as developers were now free to write more code without bearing responsibility for its quality.
The first testers were bug-catchers and later gate-keepers. Since then, the testing profession has grown, and it now involves many responsibilities. Testers report the status of the product from all sides: inside, outside and sideways. They give the business tools to make business decisions.
In order to do this valuable work, testers need to understand the risks, the market and the users. They find ways to reduce risks, by exploring areas of uncertainty in the product, proving and disproving assumptions, and suggesting corrections.
Because they don’t have all the time they need (and who does?), testers define strategies based on risks and value, and prioritize their activities accordingly. Testing strategies cover not only the testers’ work, but any testing activity by other team members in the organization. Testers understand the skills of the team members: who’s better at coming up with test scenarios, and who just tests the happy path; what kind of testing already exists, like unit and integration tests that cover parts of the scenarios, and what still needs to be tested; and finally, where automation fits, and where exploratory testing can help.
In short, good testers understand economics. That’s why good testers are expensive.
Let’s see how professional testers can make an impact on the bottom line.
Structure
In agile organizations, testers are part of the development team. Believe it or not, it’s not for social reasons. The economic benefit of co-location is shorter feedback loops. Collaboration with developers, product owners and anyone else on the team makes information flow quickly, errors get caught early, and assumptions get discussed. All this results in less waste and a better product.
This works wonders in small teams, but does it scale? When teams grow larger and work on big products, the economic impact is larger too, and probably where you don’t expect it to be.
Imagine you have five teams working on a web application, and only one performance testing expert. Her expertise is needed by all the teams, so what are your options?
You contemplate different models of meeting the growing need for performance testing in the company.
- Maybe have a performance tester in every team? It may be too much, and does that mean we need everyone to know everything about everything?
- Maybe start a rotation through the teams. Our expert will not belong to any one team, but will do a tour of duty where needed. However, we know teams work better when they don’t go through many changes.
- Maybe our expert can turn consultant: keep her on her team, but limit that time and lend the rest to other teams. But will this work over a long period of time?
Every choice has short- and long-term effects on the teams’ performance. It looks like a management issue at the team level, but go a bit higher and you’ll start seeing a talent management issue at the organization level. Who should be recruited and who should be let go? How can we retain testers and keep them happy? How do testers fit into teams, and how do they keep their collective profession growing?
HR organizations are already starting to struggle with this. They understand the impact of having the right people in place for the business goals of the company.
Structure and culture have a large economic impact. Then again, so do the testing activities themselves. Let’s talk about the humble bug.
Bug story
Everyone knows bugs are waste. They cause rework in reproducing, documenting, fixing and verifying them. All this while we lose the opportunity to work on something that brings in money.
Zoom in, and you’ll notice patterns of waste we usually aren’t even aware of. For example, the cost of documented bugs that never see the light of day. Once they are logged, bugs will be discussed, prioritized, discussed again, and re-triaged. It’s like bugs are the gifts that just keep taking (time). How much does that long backlog of bugs cost you?
And then there are the showstoppers. Drop everything, we need a hot-fix yesterday. We’ll fight the fire, but lose more on the context switch, working through the night and introducing more bugs along the way.
Bugs are a lot more work (and therefore more costly) than we seem to think.
With no bugs, imagine how much time would be saved on meetings, documentation, emails, and irritating each other.
Wishful thinking, you say. Yet professional testers understand that building quality in, fixing bugs as they emerge, working with developers before the “actual testing” begins and formal bugs are logged – all these are giant time savers.
And time is money.
Preventive testing, like other testing activities, requires support. For example, a testable architecture. The difference between a testable and a hard-to-test architecture also has a large economic impact.
Untestable code is expensive code
Architecture is usually not planned, and if it is, it’s planned only once. More often it’s a patch-up job, and building it test-ready comes as an afterthought: too little, too late.
In a hard-to-test system, either there are areas that never get tested, or testing them requires a big investment. In economic terms, we either accept big risks or slow development down. We need better options.
Testable architecture gives us these options. Plus, it comes with a benefit: a testable architecture can usually be changed in a less risky, cheaper way. As requirements change, the application changes, and we can keep testing it, putting features out and making money.
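To make this concrete, here is a minimal sketch of what testability looks like at the code level (in Python, with a pytest-style test; the names PaymentGateway and checkout are hypothetical, not taken from any specific product). The hard-to-test version creates its own dependency, so any test must go through a real external service; the testable version receives the dependency, so a test can substitute a fake and run in milliseconds.

```python
class RealPaymentGateway:
    """Talks to a real, slow, external payment service."""
    def charge(self, amount):
        raise NotImplementedError("calls an external service")


# Hard to test: the dependency is hardwired inside the function,
# so exercising the checkout logic always hits the real gateway.
def checkout_hardwired(amount):
    gateway = RealPaymentGateway()
    return gateway.charge(amount)


# Testable: the dependency is passed in, so the logic can be
# verified in isolation, with no external service involved.
def checkout(amount, gateway):
    if amount <= 0:
        raise ValueError("amount must be positive")
    return gateway.charge(amount)


class FakeGateway:
    """A stand-in used only by tests."""
    def __init__(self):
        self.charged = []

    def charge(self, amount):
        self.charged.append(amount)
        return "ok"


def test_checkout_charges_the_gateway():
    gateway = FakeGateway()
    assert checkout(100, gateway) == "ok"
    assert gateway.charged == [100]
```

The same idea scales up to the architecture level: when components receive their dependencies instead of creating them, each one can be tested in isolation and changed with less risk.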
Having tests around the application can also help delay “The Big Rewrite”. People underestimate the possible longevity of their code. It should last for decades, but without testability it will survive only until the developers have had enough, and decide it’s quicker to write the application from the ground up than to keep maintaining the existing code. Big rewrites are costly and risky, and as such we want to delay them as long as possible. Testability helps us do that.
Of course, having testable software is part of a holistic testing strategy. Preparing and maintaining that strategy can have a huge impact on the business.
The plan
Professional testers have many skills, among them project management: understanding the risks, the resources, what can be covered, and what has only marginal impact if it isn’t. Building a test plan is not just specifying test cases; it is making the best use of the resources at hand.
A good strategy relies on feedback loops. A better one shortens them continuously. Does a suite of automated tests take two days to run? That’s too long. Which tests can we drop, and which should we keep or replace?
It requires continuous analysis. What kinds of bugs do we find, and when? If manual testing finds most of the interesting bugs, we should divert automation resources there.
If the strategy includes ATDD (Acceptance Test Driven Development) combined with unit-level TDD, there’s a definite economic boost. A test-first approach aligns everyone with the business requirements. That means more of the code the application actually requires, and less YAGNI (You Ain’t Gonna Need It) code. Less code means fewer bugs, and we already know how much those cost us.
TDD has its own benefit: it pushes the thinking to before actually writing the code. Thinking before coding is one of the best bug prevention methods I know.
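As a minimal illustration (the discount rule and the names here are hypothetical, chosen only to show the rhythm), a TDD cycle starts with a pytest-style test that states the requirement, and only then the simplest code that makes it pass:

```python
# Step 1: write a failing test that captures the requirement.
def test_orders_over_100_get_a_ten_percent_discount():
    assert discounted_total(150) == 135
    assert discounted_total(80) == 80  # small orders pay full price


# Step 2: write the simplest code that makes the test pass.
def discounted_total(total):
    if total > 100:
        return total * 0.9
    return total
```

The test exists before the production code does, so the “what should this actually do?” conversation happens before any bug has a chance to be written.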
A good strategy follows the market needs. Start-ups, for example, want to be first out there and see if their solutions work. They require a different strategy than an enterprise product. Instead of a big investment in automating features that may not even be needed, the strategy should include just enough manual testing. For an enterprise producing version 3.0 of a product, the plan should cover the risk of losing existing customers, and therefore automated regression testing is of high importance.
The Cost of Software Testing
Everything we do has value and costs. Professional testers try to get the most value out of their investment, while reducing the risks to the product. They continuously examine the strategy, get feedback and adjust, in order to cut wasteful activities and invest in valuable ones. The cost of software testing varies, and can be invested in different ways. You have to think about it.
That’s what professionals do.