RisingSTAR February update

February was about research and reflection… you might say exploratory!

Once talk about automation starts, talk of the tester often fades out as the spotlight shines on the technology. A big part of Inclusive Automation is keeping the tester center stage in automation. To help get more clarity and understanding around exploratory testing, I opted to read Explore It! by Elisabeth Hendrickson; plus, it's been on my reading list for some time :).

I’ve also been trying to focus on roles and relationships in automation. I have categorized the roles into three groups:

  • Creator – The person or system that authors the automated tests.
  • Executor – The person or system responsible for running the automated test suite.
  • Consumer – The person or system that derives value from the results of test execution.

Varying the people or job titles filling these roles has been helpful in getting at the motivations and outcomes, both positive and negative, that can occur. For instance, the dynamics when a tester serves as Creator, Executor, and Consumer are very different from when the Creator is an Automation Engineer on a separate team, the Executor is the nightly build, and the Consumer is a test manager reviewing the test results.
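To make this concrete, here is a rough sketch in Python of the kind of enumeration I've been doing. Everything in it (the Scenario class, the sample fillers, and the one-line commentary on dynamics) is a hypothetical illustration of the approach, not a formal model.

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical sketch: the names and fillers below are illustrative only.

@dataclass(frozen=True)
class Scenario:
    creator: str   # who or what authors the automated tests
    executor: str  # who or what runs the automated test suite
    consumer: str  # who or what derives value from the results

fillers = ["tester", "Automation Engineer", "developer",
           "nightly build", "test manager"]

# Vary who fills each role, then ask what dynamics each combination implies.
for creator, executor, consumer in product(fillers, repeat=3):
    scenario = Scenario(creator, executor, consumer)
    if creator == executor == consumer:
        note = "tight feedback loop, one party owns the whole cycle"
    else:
        note = "hand-offs between roles, watch for lost context and motivation"
    print(f"{scenario}: {note}")
```

Even a crude enumeration like this makes it easier to spot combinations worth examining more closely.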

I have also been applying the same modeling process in the context of unit testing, and it's been an interesting comparison. I plan to write about this in more detail in the coming months and look forward to sharing it with you.

I’ve also been reflecting on spectrums within testing and their heuristic value. Rather than treating manual and automated testing as a binary, what insights and opportunities do we find if we consider testing a spectrum? If you see polarization rather than a spectrum, it might be a sign of dysfunction or a missed opportunity.

Engagement while testing is another spectrum.

In exploratory testing, high engagement is important since information gathering is the primary focus during testing. After a testing session, engagement drops off dramatically; often the notes or reports we create are of lower value than the internal models and insights that were built up.

The opposite is true of automated testing. There should be little to no engagement while automated tests are running; all the attention and focus is on the results and reporting. In this case, polarization is likely a sign of health.

I’ve also started to consider Informative / Indicative tests. I'm still working out the details, but I think there is something of interest there.

I’ve intentionally spent less time on Jupyter Notebooks and technology. I am a fan of the notebooks and believe they are a useful tool worth pursuing, but I also believe there is more to Inclusive Automation as a concept. I want to decouple the concept from a specific tool.

To that end, in the coming month I am targeting investigation, research, and effort on:

  • Creating a working definition of Inclusive Automation, then, armed with that, identifying the types of tools that can be assembled or created to start filling in the spectrum between manual and automated testing.
  • Digging more into test automation patterns. I am now reading A Journey through Test Automation Patterns by Seretta Gamba and Dorothy Graham, and I hope to glean clarity and insights from it.
  • Continuing work on building out the Quality Cathedral Model and Inclusive Testing Strategy.

See all RisingSTAR updates and consider submitting an idea for the 2020 RisingSTAR Award. The RisingSTAR is about mentorship and experienced hands helping testers take their new ideas further.

About the Author

Brendan

I am a Software Design Engineer in Test based out of Santa Barbara, California, and have worked in a variety of testing roles since 2009. I am responsible for creating and executing testing strategies and for using my coding powers to develop tooling that helps make testers' lives easier. I write tests at all levels, from unit and integration tests to API and UI tests. I blog about testing and automation at Brendanconnolly.net, or you can follow me on Twitter @theBConnolly.
Find out more about @brendanconnolly