RisingSTAR June Update

This month has mostly been about getting ready for the Award Winner’s week webinar. It was a great opportunity to transition more of the work from fuzzy ideas into cohesive concepts backed by more tangible materials. In case you missed it, you can view the recording here. I have also been looking at use cases and examples of inclusive automation.

Expectations of Automation

Much of the work so far has been about framing automation differently. To set the stage for this, I started with two questions:

  • What makes automation good?
  • What makes testing good?

When comparing the answers to these questions, some common themes tend to emerge. Good automation is typically test-centric, described by qualities of execution: how fast, reliable, and frequent the tests themselves are. Good testing, on the other hand, is typically product- or information-centric, describing the state of a system in comparison to expectations and helping to identify risk or potential impact.

Relationship Dynamics

The gap between those underlying expectations is what needs to be bridged to bring automation into testing in a healthier, more holistic way.

This serves as a segue into the roles/personas I’ve been developing (creator/executor/consumer). The personas help show just how different a tester’s relationship with automated tests is, and what expectations or outcomes are reasonable based on other types of automated tests.

One of the conclusions I’ve been drawing is that other types of automated tests are a means to enhance productivity. The automated tests focused on in the QA/testing space, by contrast, are primarily some form of attempt to replace the actions of a tester. This has been helping me home in on a better definition and explanation of what inclusive automation is in practice. When the only narrative around automation in the testing world is replacement, it’s no wonder it often triggers an emotional response. Inclusive Automation is a means to openly acknowledge the value of testers by intentionally creating tools that aid their productivity and, hopefully, decrease frustration and increase their happiness while testing.

Use Cases and Examples

This month I took Alan Richardson’s course Automating the Browser Using JavaScript on Test Automation University and found it to be a great example of inclusive automation. It promoted a very healthy approach, encouraging testers to learn just enough about the browser, CSS, HTML, and JavaScript to improve their testing. The end result wasn’t a set of tests; it was a bot that would interact with a website at random, but on the journey to building that bot you end up assembling a toolbox for interacting with the application as well.

Inspired by that, I recreated the process, but instead of the browser devtools and JavaScript, I used Ruby, Selenium, and Jupyter notebooks. In practice this demonstrates two of the use cases I have been focusing on: a testing toolbox, and automating setup and teardown.
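To give a flavor of the approach, here is a minimal sketch of the kind of helpers a notebook might accumulate cell by cell. It assumes the selenium-webdriver gem and a local Chrome driver; the helper names and the target URL are illustrative, not the actual code from my notebooks.

```ruby
require 'selenium-webdriver'

# Setup cell: start a browser session once; the driver stays alive
# while you explore, and quitting it is the matching teardown.
driver = Selenium::WebDriver.for :chrome

# Toolbox helper: collect the visible links on the current page.
def visible_links(driver)
  driver.find_elements(css: 'a').select(&:displayed?)
end

# Toolbox helper: click a random link, mimicking the course's
# "interact with the site at random" bot.
def click_random_link(driver)
  target = visible_links(driver).sample
  return if target.nil?
  puts "Clicking: #{target.attribute('href')}"
  target.click
end

driver.navigate.to 'https://example.com' # illustrative starting page
3.times { click_random_link(driver) }

driver.quit # teardown
```

Because each helper lives in its own notebook cell, a tester can mix automated steps with manual ones: run a helper, look at the page, then decide what to do next.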

To showcase the awkward-to-automate use case, I used GitHub and the diff view of a set of changes in a pull request. Using the GitHub API inside the notebook, I could demonstrate using automation to drive certain steps while letting the tester interact with and explore the system under test with minimal context switching.
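As a rough sketch of what that can look like (assuming the octokit gem; the repository name and pull request number below are placeholders, not real values), the notebook can list the changed files in a pull request and then point the browser at the diff view, leaving the tester free to explore in between:

```ruby
require 'octokit'

# Authenticate against the GitHub API; the token comes from the environment.
client = Octokit::Client.new(access_token: ENV['GITHUB_TOKEN'])

repo      = 'owner/repo' # placeholder repository
pr_number = 42           # placeholder pull request number

# List the files changed in the pull request.
files = client.pull_request_files(repo, pr_number)

files.each do |file|
  puts "#{file.status}: #{file.filename} (+#{file.additions}/-#{file.deletions})"
end

# From here the notebook can drive the Selenium session from the
# previous example straight to the diff view, e.g.:
# driver.navigate.to "https://github.com/#{repo}/pull/#{pr_number}/files"
```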

I was able to demonstrate some of this in the webinar; I believe it was well received and hopefully it sparks some interest. I also have the sample code and notebooks on GitHub for people to reference and explore. I still need to add content to the README, but hopefully it’s a good start at providing realistic examples that serve as a reference point.

What’s next

I plan to take a deeper read through A Journey through Test Automation Patterns, focusing on identifying existing patterns, or potentially identifying new ones.

I’m also looking to gather more insight and feedback from the supporters, and need to start coordinating those chats.


See all RisingSTAR updates and consider submitting an idea for the 2020 RisingSTAR Award. The RisingSTAR is about mentorship and experienced hands helping testers bring their new ideas further.

About the Author

Brendan

I am a Software Design Engineer in Test based out of Santa Barbara, California, and have worked in a variety of testing roles since 2009. I am responsible for creating and executing testing strategies and for using my coding powers to develop tooling that helps make testers’ lives easier. I write tests at all levels, from unit and integration tests to API and UI tests. I blog about testing and automation at Brendanconnolly.net, or you can follow me on Twitter @theBConnolly.
Find out more about @brendanconnolly