Startup Series II – How We Test At Our Startup

“QA is ready to deploy to Production”

Seven words that always put a smile on my face. Why? Because my Co-founder and I have signed off yet another release over Slack.

We’re doing things differently from how we used to in an established product development team. With just the two of us in our startup, careful prioritisation is key, and we’re learning what that really means again and again.

There is a very real gulf between wanting to produce something of quality and actually doing so. Will some automated tests get you there, or a room full of testers? Can’t you just outsource it, or should you put quality aside until you’re funded? Quality is at the heart of our product, and it always will be.

We’re fortunate to share great common ground in our approach to building a product and running a company. We’re obsessed with continuous improvement and follow the lean triad of build, measure & learn.

Startups are different. It’s okay to be different.

Testing at a startup involves a lot of guesswork and out-and-out experimentation. The product domain is still an unknown. There’s no product owner or business analyst handing over a spec and saying “Hey, please test this!”.

In a startup you are defining the product. There are no boundaries. Sounds great, right? This is what makes testing at a startup so exciting! But imagine that same product owner or business analyst handing you a blank sheet of paper. Where do you start?

Testing in a startup doesn’t have to be overwhelming. In fact it can be empowering, particularly if you have a passion for building something awesome, you know your domain inside out and you believe you’re bringing genuine value to your “ideal customer”.

Performance & stress testing is not our priority: we have some users on a pre-alpha version and Heroku has us covered. We made the choice to support just one device for now.

We’re not building a product to satisfy the goals of internal stakeholders, and the most important conversation is with potential users. That’s not to say that my Co-founder and I don’t talk to each other! We talk constantly about what exactly we’re trying to learn with each development task and how we’re going to measure those learnings.

How we test features and get them live

We use Trello, GitHub and Heroku, and integrate them with two Slack channels. Our ‘Trello’ channel keeps us informed of priorities and our ‘Release’ channel carries all GitHub and Heroku activity. All automated out of the box.

Slack is super important as it keeps the two of us connected. We have clear visibility of what’s going on with code commits, deploys and story updates.

We have three columns on our Trello board:

  • Futurelog: for every item we could work on, roughly prioritised. Includes dev and some business work
  • Presentlog: items that are top priority but haven’t started yet, limited to two cards max à la Kanban (see the sketch after this list)
  • In progress: work items that are actually in progress. If we pause work the item goes back to the Presentlog
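
Trello won’t enforce that two-card limit for us; it comes down to discipline. If you did want to police it programmatically, a few lines against Trello’s REST API would do. Here’s a minimal sketch in Python, assuming a hypothetical API key, token and list ID (none of these are from our real setup):

    # Hedged sketch: warn when the Presentlog column breaks its WIP limit.
    # The key, token and list ID below are placeholders.
    import requests

    TRELLO_KEY = "your-api-key"
    TRELLO_TOKEN = "your-api-token"
    PRESENTLOG_LIST_ID = "your-list-id"
    WIP_LIMIT = 2  # two cards max, as described above

    def check_wip_limit() -> None:
        # Fetch the open cards on the Presentlog list
        resp = requests.get(
            f"https://api.trello.com/1/lists/{PRESENTLOG_LIST_ID}/cards",
            params={"key": TRELLO_KEY, "token": TRELLO_TOKEN},
            timeout=10,
        )
        resp.raise_for_status()
        cards = resp.json()
        if len(cards) > WIP_LIMIT:
            print(f"Presentlog has {len(cards)} cards; the limit is {WIP_LIMIT}.")

    if __name__ == "__main__":
        check_wip_limit()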

Every release has three test phases.

Phase 1: Feature and Exploratory testing

  • Define test scenarios, ideally before coding
  • Add missing scenarios on-the-fly during testing
  • Raise bugs as GitHub issues and mark the scenario ‘Failed’ with the issue link (see the sketch after this list)
  • Capture & share learnings
  • On completion, share on Trello
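
To make the bug-raising step concrete, here’s roughly what it could look like against GitHub’s REST API. This is a hedged sketch only; the token and repository are placeholders, and filing the issue through the GitHub UI works just as well:

    # Hedged sketch: raise a failed scenario as a GitHub issue and return
    # the link we attach to the 'Failed' scenario. The token and repo are
    # placeholders, not our real setup.
    import requests

    GITHUB_TOKEN = "your-personal-access-token"
    REPO = "example-org/example-repo"

    def raise_bug(scenario: str, notes: str) -> str:
        resp = requests.post(
            f"https://api.github.com/repos/{REPO}/issues",
            headers={"Authorization": f"token {GITHUB_TOKEN}"},
            json={
                "title": f"Failed scenario: {scenario}",
                "body": notes,
                "labels": ["bug"],
            },
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["html_url"]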

Phase 2: Regression testing

  • Add scenarios from Phase 1 to regression session
  • Run tests collaboratively
  • Post “QA is ready to deploy to Production” in Slack (a webhook sketch follows)
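
We post that message ourselves, but the same sign-off could easily be automated with a Slack incoming webhook. A minimal sketch, where the webhook URL is a placeholder for the one Slack generates when you add an incoming webhook to a channel:

    # Hedged sketch: post the sign-off message to our 'Release' channel via
    # a Slack incoming webhook. The URL below is a placeholder.
    import requests

    RELEASE_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

    def announce_ready() -> None:
        resp = requests.post(
            RELEASE_WEBHOOK_URL,
            json={"text": "QA is ready to deploy to Production"},
            timeout=10,
        )
        resp.raise_for_status()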

Phase 3: Release testing

  • Run tests from Phase 1 on production
  • Post “All release tests pass” in Slack
  • <excitable people may insert high five here>

Key takeaways

Takeaway 1: Eat your own dogfood

We’re building a product that helps facilitate the three test phases, so we’d be pretty silly if we didn’t use it in our own work. You’ll be pleased to know we do!

Running real-life testing scenarios helps create mountains of unexpected learnings, stuff we wouldn’t have got from testing in a synthesised environment. This is invaluable.

It takes effort to think about how to use your own product, particularly when it’s far from complete. For us it means thinking about how our product can fit into day-to-day work life, which means integrating with existing tools.

This approach is extremely powerful, yet it doesn’t negate the need to observe and learn how users interact with our product. Tests or assumptions made ‘inside the building’ need to be validated ‘outside the building’.

Eating your own dogfood works hand in hand with getting out of the building.

Takeaway 2: Embrace the unknown, and the joy of exploratory testing

“The richness of [the exploratory testing] process is only limited by the breadth and depth of our imagination and our emerging insights into the nature of the product under test” — James Bach

For a product evolving at a rapid pace it feels natural that tests should evolve in the same way. Powered by creative thinking, exploratory testing yields great results.

Not limited to a predefined list of scenarios, we are free to explore our product and capture scenarios in real time. This approach, combined with pre-defined scenarios, is a force to be reckoned with.

Ultimately, you’re not only exploring what to test. You’re exploring your product, and what your product could be.

Takeaway 3: Share learnings and talk about everything

We continually ask ourselves the following questions:

  • Is there something you’re not talking about?
  • Is there someone you’re not talking to?

We prefer to document our learnings over documenting a specification. We use our product to capture learnings per scenario and per test session, then continue the conversation on Trello or Slack. Anything that can’t be resolved there warrants a FaceTime call.

My Co-founder and I share everything we learn, and I mean everything! When we don’t, assumptions thrive and we risk losing our way.

We’re at an advantage with just the two of us. We don’t know how this will scale, but we’re adamant that sharing and learning is key to the success of our business.

Takeaway 4: Don’t obsess over test automation too early

“What! You don’t have any automated tests! Are you crazy?” — A lot of testers

None of our three test phases currently has automated tests. Some may argue that we’re doomed, but this is a matter of priority. We’re in customer discovery, so why maintain a load of automated tests for a product still looking for market fit?

We don’t often regress, but when we do it’s straightforward to fix. We have 106 manual test scenarios, and it takes the two of us about 30 minutes to run them all. Running manual tests keeps us close to our product, which is far more valuable to us right now.

Large development teams can learn from the world of startups

“Testing is, amongst other things, gathering information about the product, its users, and conditions of its use, to help defend value.” — Michael Bolton

The core of our startup is perhaps not yet to defend value but to validate that the product creates value for our users. And then defend it.

That’s how we test. Now tell us how you do it in the comments below, or feel free to share your experiences with just us.

Read Simon’s previous blog post on Why Am I Building A Tech Startup For Testers?


About the Author

Simon

I’m Co-founder of Qeek, and my mission is to help product makers get intimate with their products and features. I love tech, music, startups, quality assurance, design, business culture, organisational behaviour and playing the drums. Prior to Qeek I cut my teeth in various QA and Dev leadership roles at Rightmove, Gumtree and eBay. I believe in simplicity, transparency and collaboration.
Find out more about @simon-tomes