Regression Testing in the Agile Universe: Where It Fails and How to Fix It

Agile methodologies have taken over software development in recent years, and for good reason. The agile approach promotes multiple, frequent iterations of the product, which is great from a product development and customer perspective. But how does that work for quality? Imagine a mobile banking application under development with 20 developers pushing code within the same hour. How do you know which code adds value and which line of code crashes the system?

Agile methodologies encourage development teams to push their code consistently into a shared environment and build quality throughout the process. 

But without a clear and effective automated regression testing strategy, this would not be easy. Let’s dive deeper into where regression testing fails in Agile and how to fix it.

At the Management’s Desk:

1) Lack of Budget and Resource allocation: 

Many management teams prioritise development velocity over the need for a robust regression testing strategy. Sometimes there is simply not enough budget to invest in regression testing (tools and people), even when management understands why regression testing is important.

2) Lack of understanding of benefits: 

Some sponsors/stakeholders/managers may not fully understand the importance and value of regression testing, leading them to prioritize development capacity over a robust testing strategy.

How To Fix It:

1) Educate management on the benefits and ROI of regression testing. Show them data on how it can increase product quality, reduce production issues, and ultimately improve customer satisfaction and retention.

2) Within the team, allocate budget and resources towards regression testing, including hiring dedicated testers. In the worst case (if you cannot add capacity), make regression testing the first priority (over progression testing) in your team’s backlog.

3) Create a clear regression testing strategy and plan, with buy-in from management and all team members. This includes identifying high-risk areas and prioritizing them for testing.

4) Repeat step 1. Keep the management updated on how regression testing has improved your product quality and customer experience.

None of the suggestions in this article is rocket science. The technologies needed to execute them exist in abundance. The biggest challenges are a lack of understanding of the benefits and the will to execute.

That is why management buy-in is the single biggest challenge you’ll need to overcome. 

At the Developer’s Desk:

Most developers have a narrow focus on completing their user stories and may not have the time or interest to think about the bigger picture. And with frequent code pushes, it can be difficult to isolate which change caused a bug – leading to finger-pointing and delays in fixing the issue.

How To Fix It:

The simplest version is to require every code change to pass an automated regression test before it is pushed into the shared environment.

Developers usually add or edit code on their workstations or in sandboxes before checking it into a common repository. This is the best place to stop a bug from entering the system.

Stop the bug before it enters the shared environments!
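One concrete way to enforce this is a git pre-push hook that refuses the push unless the sanity pack passes. The sketch below is an illustration, not a prescribed implementation; it assumes the MVP sanity pack is a pytest suite whose tests carry a `sanity` marker, so adapt the command to whatever runner your team uses:

```python
#!/usr/bin/env python3
# Sketch of a git pre-push hook (installed as .git/hooks/pre-push).
# Assumption: the MVP sanity pack is a pytest suite marked "sanity".
import subprocess
import sys

def push_allowed(suite_exit_code: int) -> bool:
    # git aborts the push whenever the hook exits non-zero, so an
    # exit code of 0 is the only way code reaches the shared repo.
    return suite_exit_code == 0

def main() -> int:
    # Run the sanity/regression pack against the local working copy.
    result = subprocess.run([sys.executable, "-m", "pytest", "-m", "sanity", "-q"])
    if not push_allowed(result.returncode):
        print("Push blocked: the sanity/regression pack failed. Fix it first.")
        return 1
    return 0

# When installing this as a real hook, end the file with: sys.exit(main())
```

The same gate can live server-side (a pre-receive hook or a protected branch rule) if you cannot rely on every developer installing the hook locally.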

Step 0: Ensure developers have the time to run unit tests. Unit testing is a non-negotiable step before checking in code. Measure how much unit test coverage you have on your code base.
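As a sketch of Step 0, here is what a minimal unit test looks like for a small, entirely hypothetical piece of banking logic; the function and file names are illustrative, not from any real code base:

```python
# transfer.py -- a tiny, illustrative piece of banking logic
def apply_transfer(balance: float, amount: float) -> float:
    """Debit `amount` from `balance`, rejecting invalid transfers."""
    if amount <= 0:
        raise ValueError("transfer amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount


# test_transfer.py -- unit tests that run before every check-in
def test_happy_path():
    assert apply_transfer(100.0, 40.0) == 60.0

def test_overdraft_rejected():
    try:
        apply_transfer(10.0, 40.0)
    except ValueError:
        pass  # expected: overdrafts must be rejected
    else:
        raise AssertionError("overdraft was not rejected")
```

To answer the “how much coverage” question, a tool such as coverage.py can report it: `coverage run -m pytest` followed by `coverage report`.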

Step 1: Integrate an automated regression test tool into your CI/CD pipeline. Define an MVP sanity/regression coverage that must run for every code check-in.

Step 2: Trigger (automatically) the sanity/regression test suite each time there is a code check-in. 

Step 3: Allow only code check-ins that pass the automated sanity/regression test. 

Step 4: Run a deep-dive on the most common sanity/regression failures – the results could lead to better system design or user story refinement. Fix the root cause!
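Steps 2–4 can be sketched as a single gate: run every sanity test on check-in, allow the merge only if all pass, and tally failures so the team can deep-dive the most common ones later. The test names and structure below are illustrative stand-ins, not a real suite:

```python
import collections

def run_sanity_gate(tests, failure_counts):
    """Steps 2/3: run every sanity test; allow the check-in only if all pass.
    Step 4: tally failures per test so the most common ones can be
    root-caused later."""
    all_passed = True
    for name, test in tests.items():
        try:
            test()
        except AssertionError:
            failure_counts[name] += 1
            all_passed = False
    return all_passed

# Illustrative sanity pack for a banking app
def login_flow():
    assert True   # placeholder for a real end-to-end login check

def transfer_flow():
    assert False  # simulates a regression introduced by this check-in

failure_counts = collections.Counter()
sanity_pack_tests = {"login": login_flow, "transfer": transfer_flow}
merge_allowed = run_sanity_gate(sanity_pack_tests, failure_counts)
# merge_allowed is False here, and failure_counts records the "transfer" failure
```

In a real pipeline the gate is your CI server’s build status plus a branch protection rule, and the failure tally is whatever your CI tool’s test-history report provides; the point is that both the go/no-go decision and the root-cause data come from the same automated run.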

Automation testing like clockwork
Automate your sanity/regression tests – remove manual decision-making from the process

More about automating regression testing in CI/CD here.

This additional check might temporarily slow down the rate of merges. Still, each merge will be more assured, and new builds are more likely to go through without additional troubleshooting overhead.

Typically, a ‘sanity’ regression pack covers 5%–10% of the critical business flows and runs within a few minutes. While extended coverage might improve quality, adding a few hours of testing to each change can become expensive enough to outweigh the benefits.
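One simple way to keep the sanity pack at that 5%–10% slice is to tag every regression test with a priority and select only the critical business flows for the per-check-in run. The flow names and tags below are illustrative assumptions:

```python
# The full regression pack, tagged by priority (names are illustrative)
REGRESSION_PACK = [
    {"name": "login",            "priority": "critical"},
    {"name": "balance_enquiry",  "priority": "critical"},
    {"name": "money_transfer",   "priority": "critical"},
    {"name": "statement_export", "priority": "medium"},
    {"name": "profile_update",   "priority": "low"},
]

def sanity_pack(pack):
    """The per-check-in subset: only the critical business flows."""
    return [test for test in pack if test["priority"] == "critical"]

critical = sanity_pack(REGRESSION_PACK)
# 3 of the 5 tests make the cut; the rest run in the extended (e.g. nightly) cycle
```

With pytest, the same idea is usually expressed with markers (`@pytest.mark.sanity`) and selected at run time with `pytest -m sanity`, which is how the check-in trigger stays fast while the full pack still runs on a schedule.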

Each organisation needs to determine the right balance for them. A team of experienced developers who have worked on the product for years might need less; a new product team might need extended coverage.

Pro tip: Find ways to assure your management that a slightly reduced speed (temporarily) is better as it will help you achieve a higher velocity & quality later on.

A simple checklist for the team to run through before code check-ins:

  • Does this functionality contradict any existing functionality?
  • Are there any edge cases we haven’t considered?
  • Is the code easy to read and maintain for future changes?
  • Are we hard coding any values/configurations that could cause problems later on?
  • Will the current regression pack (& test data) cover the changes implemented via this functionality?
  • What additional test scenarios should we consider adding to the regression pack?

At the Testing Desk:

In agile, there is always a focus on delivering value quickly and frequently. This can lead to a temptation to focus on only testing the new features and skipping regression on old features.

But this can be a slippery slope to disaster – you never know when a seemingly small change to an old feature may cause a larger ripple effect and break something else entirely.

The pressure to get to the ‘Definition of Done’ is real and leads to testers overlooking some minute but critical details.


How To Fix It:

Again, the solution lies in proper planning and communication.

Include regression scenarios for new and old features in your sprint plans, and ensure they are executed during each sprint – not just left for the end (or the big release).

‘A stitch in time saves nine’ holds true for testing. A bug stopped at the user-story level saves you nine more in production!

Running a regression cycle manually each sprint (or more frequently) is a recipe for failure. As the product and its feature set grow, manual testing cannot keep up, and teams end up cutting coverage more aggressively than is safe.

A clear automation strategy and prioritization of automated regression testing in the team’s backlog would help ensure teams do not leave ‘automation’ tech debt open each sprint.

A tester’s (and the team’s) checklist to run through each sprint:

  • Did we plan for enough capacity to cover progression + regression tests adequately?
  • Did we automate the MVP regression pack to include the latest features/changes?
  • Are we carrying forward any automation technical debt?
  • Are we leaving any critical business flows for the ‘release testing’? Is that a worthwhile risk?
  • Are we picking up any configuration that is hardcoded? 

Wrap Up

The most successful scrum teams put quality and customer experience at the centre of their product. How often regression tests are actually run is a good metric for whether that is just talk or real action. Ultimately, regression testing in an agile environment requires effort from the entire team – developers and testers alike. By incorporating it into your CI/CD flow and sprint plans, you can prevent disasters and maintain the high quality of your product.

About the Author:

Ilam Padmanabhan is a veteran in the Tech/Financial services industry. He’s delivered many multi-million dollar projects across the globe in various Delivery and QA roles. He’s very passionate about sharing his knowledge and is the founder of


Find out more about @ilampadmanabhan
