The RisingSTAR Award was created to encourage new ideas from within the testing industry. It brings together an impressive group of influential testers as ‘Supporters’ who will mentor the winner to help develop their idea for the overall benefit of the testing community. The RisingSTAR ‘Supporters’ will vote for the testing idea they deem the best and the one they see bringing the most benefit to the community.
Congratulations to Brendan Connolly, winner of the 2019 RisingSTAR Award.
RisingSTAR Award Winner 2019: Brendan Connolly
Idea: Inclusive Automation – Foster, support and promote automation that integrates exploration, collaboration, monitoring/observability, reporting and ultimately the human tester into a holistic solution to help teams succeed in the agile/devops world.
Automation in its current state is a highly exclusionary activity. It excludes tests that don’t fit well into the confines of headless execution. It excludes those testers who cannot code, encouraging a class divide between “manual” testers and SDETs or automation engineers. It excludes the monitoring and observability data that teams are increasingly relying upon. I would like to build, demonstrate, encourage and enable automation that draws people and teams together, providing a holistic approach to an automation-enabled testing strategy by:
- Integrating and instrumenting exploratory testing with observability to enable easier access to deeper application insights during testing sessions. Leveraging automation to track user actions while at the same time capturing real-time data from the system under test and external systems that can be used for detailed testing notes.
- Integrating UI and API automation projects into hybrid automation-enabled testing playbooks where automated steps can be intermixed with manual steps. Reducing repetitive tasks, allowing deeper focus while still being open to and encouraging of exploration. Extending their use beyond headless execution and allowing value to accrue in frameworks earlier, since not every component has to be automated before the framework can be used.
- Integrating the human element of testing with the drive for rapid feedback and the repeatability automation can enable. Extending the value and return on investment in automation frameworks beyond simply executing automated tests.
- Enabling and educating testers to understand the roles of different types of testing to reduce the number of expensive and brittle end to end tests.
Ultimately driving a healthier and more robust approach to automated testing that is inclusive of the diverse range of team skills, while shifting automation from something testers run towards something testers use. Helping teams to reduce the number of expensive and brittle end to end tests and have a deeper focus on tools that facilitate quality.
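The hybrid playbook idea above – automated steps intermixed with manual exploratory steps – can be illustrated with a minimal sketch. Everything here is hypothetical (the decorator names, the `log_in` step and the prompt are invented for illustration, not part of any existing framework):

```python
# Hypothetical sketch of a hybrid test playbook: automated steps run
# unattended, while manual steps pause and hand control to the human
# tester. All names are illustrative inventions.

def automated(fn):
    # Mark a step as runnable without human intervention.
    fn.is_automated = True
    return fn

def manual(prompt):
    # Wrap a prompt for the tester as a playbook step.
    def step():
        print(f"MANUAL STEP: {prompt}")
        return "pending"   # in a real run, wait for tester confirmation
    step.is_automated = False
    return step

@automated
def log_in():
    # e.g. drive the UI or call an API to reach a known state
    return "passed"

def run_playbook(steps):
    # Execute each step in order, recording what kind it was and its result.
    results = []
    for step in steps:
        kind = "auto" if getattr(step, "is_automated", False) else "manual"
        results.append((step.__name__, kind, step()))
    return results

playbook = [log_in, manual("Explore the settings page for layout issues")]
print(run_playbook(playbook))
```

The point of the sketch is that value accrues immediately: the playbook is useful even while most steps are still manual, and each step can be automated later without restructuring the run.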
This concept is something I have been building towards over the past year in my writing and speaking under the umbrella title of Automation Yoga. I have developed a maturity model for automation frameworks inspired by the Richardson Maturity Model for REST APIs, and a session and workshop around leveraging Jupyter Notebooks in test automation with further descriptions here and here.
I have also begun writing about how testers and automation engineers can use Jupyter for test automation. This article is the first in a planned series to start exposing testers to additional ways they can stretch their automation to gain more value out of it.
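The Jupyter-for-testing idea can be sketched as a notebook-style "cell" in which automation both asserts and records observations, so a session produces testing notes as a by-product. The payload and labels below are invented for illustration:

```python
import json

# Hypothetical notebook-style exploratory session: each assertion also
# records a note, so automation output doubles as session notes.
notes = []

def observe(label, value):
    # Record an observation and pass the value through for assertion.
    notes.append(f"{label}: {value!r}")
    return value

# Pretend this payload came from the system under test; in a real
# notebook it would come from an API call made in an earlier cell.
payload = json.loads('{"state": "open", "items": 2}')

assert observe("order state", payload["state"]) == "open"
assert observe("item count", payload["items"]) > 0
print("\n".join(notes))
```

In an actual notebook the tester would run a cell, eyeball the result, then write the next cell (an assertion, a chart, a note) in response – which is exactly the exploration/automation blend the idea describes.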
Read Brendan’s article
2019 RisingSTAR Finalists – in order of their submission date
RisingSTAR Finalist 2019: Wayne Rutter
Idea: Visualising high-risk product areas by using machine learning to analyse data from diverse sources
While holding a manager’s role I was always aware I didn’t know as much about some legacy systems, or new systems that I had never worked on. So when people asked how long I thought it would take my team to test an area within a system I didn’t know, I would go and ask the people who had worked, or were working, on those systems. I got fed up of hearing the air hissing through someone’s teeth as they went ‘well, if I was wearing red shoes and turned my chair clockwise the system could end up doing this’. I wanted to know how problematic systems, or areas of systems, were, so I created an ML system that took all the stories from Jira and clustered them together to create areas of similar stories. It would then count how many bugs were in each area and report that in a heatmap like the one shown in the linked image.
This simple proof of concept only works with Jira at the moment and needs improvement, but when I looked at other tools that might be able to do the job better, I found them to be either more expensive than my company was willing to pay or concentrated on the wrong areas (BT have also developed their own in-house tool like this). This got me thinking that I should open-source the project and try to get some backing to expand it so that it can help inform everyone. I then started to think that I shouldn’t stop there, and that the product needs expanding.
So say the heatmap can be produced by linking into the product backlog (Jira, Trello, Monday, Excel, etc.) as well as any service-desk tool (Zendesk, ServiceDesk, etc.), clustering the information together (using k-means clustering, for example) and going through it to see what proportion is bugs or unwanted features. This could then be used to show any high-risk areas, so that if you are about to do any development in those areas you know there has already been a high number of problems and might want to focus your testing more. The map could also show you the least-liked area of your product, so that you could then see why and maybe try to improve or rebuild it.
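The clustering step can be prototyped roughly as follows. A real implementation would pull items from the backlog API and run k-means over text embeddings (e.g. with scikit-learn); this pure-Python sketch stands in for that by bucketing invented backlog items on a dominant keyword and computing the bug proportion per cluster – the number a heatmap would colour by:

```python
from collections import defaultdict

# Invented backlog items: (summary, is_bug). A real version would fetch
# these from Jira/Trello and cluster with k-means on text embeddings.
issues = [
    ("checkout fails on retry", True),
    ("checkout supports coupons", False),
    ("login rejects valid password", True),
    ("login page redesign", False),
    ("login timeout after idle", True),
]

KEYWORDS = ["checkout", "login"]

def cluster_key(summary):
    # Crude stand-in for k-means: bucket by the first known keyword.
    for kw in KEYWORDS:
        if kw in summary:
            return kw
    return "other"

def bug_heat(issues):
    # Heat per cluster = proportion of its items that are bugs.
    clusters = defaultdict(list)
    for summary, is_bug in issues:
        clusters[cluster_key(summary)].append(is_bug)
    return {k: sum(v) / len(v) for k, v in clusters.items()}

print(bug_heat(issues))  # e.g. {'checkout': 0.5, 'login': 0.666...}
```

The output dictionary is exactly the per-area risk score the heatmap visualises: a high proportion flags an area worth extra testing focus before new development lands there.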
This could be expanded further into looking at logs and server reports to see if any patterns could cause issues with the product, or searching documents from all over and linking common sections into one area, so that testers save time by only having to go to one place to find information about the product that might already exist or might be out of date. Saving time and energy, and allowing a better single source of truth to exist.
This only skims the surface of what this could become, but implementing just a few of these ideas in an open-source product could benefit everyone.
See more about Wayne’s idea here
RisingSTAR Finalist 2019: Luca Finzi Contini
Idea: PuppetMaster semi-automated test generator – a framework to help generate instrumented UI tests to cover all possible usage paths of an Android application
I am currently writing instrumented Android UI tests for our main application using the PageObject pattern, adapted to our Android application, and the Espresso testing framework. I usually write classes representing activities, with methods containing Espresso statements that either perform actions on the UI (e.g. clicks) or check the UI information that is displayed, and combine them in JUnit instrumentation tests. In this model:
– Any click that leads to another activity returns a new PageObject representing the target activity.
– Any click that significantly modifies the state of the current activity without leaving it returns either the same PageObject or a new instance of it with modified state.
In order to achieve UI coverage, my idea is to:
- Create annotations to be inserted in test PageObjects by the test developer, to identify PageObjects representing activities (graph nodes) and methods that represent actions leading from one activity to another, or that generically ‘move’ the UI (graph arcs).
- Extract the ‘graph’ by processing the annotations. Create a graphic representation of it as a by-product.
- Given the graph, standard graph algorithms could be used to obtain all paths from one node to another.
- Given the paths, new Java androidTest JUnit classes could be generated to implement the paths.
- The generated test code could be used as is to enhance coverage testing at the UI level, and/or as a starting point for more in-depth testing.
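The path-enumeration step in the list above is a standard all-simple-paths search over the activity graph. As a sketch (in Python rather than the Java of the real generator, and with made-up activity names), once the annotations have been processed into an adjacency map, a depth-first search yields every cycle-free path, each of which would become one generated test:

```python
# Sketch of the path-enumeration step: activities are nodes, annotated
# navigation methods are edges, and a DFS enumerates all simple paths.
# The activity graph here is a made-up example.
graph = {
    "Main": ["Settings", "Profile"],
    "Settings": ["Profile"],
    "Profile": [],
}

def all_paths(graph, start, goal, path=None):
    # Depth-first search collecting every simple (cycle-free) path.
    path = (path or []) + [start]
    if start == goal:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:          # skip nodes already on this path
            paths.extend(all_paths(graph, nxt, goal, path))
    return paths

# Each resulting path would become one generated JUnit test:
# a chain of PageObject method calls from "Main" to "Profile".
for p in all_paths(graph, "Main", "Profile"):
    print(" -> ".join(p))
```

Note that the number of simple paths can grow quickly with graph size, so a real generator would likely need to cap path length or prioritise paths rather than emit every one.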
See more about Luca’s idea here
RisingSTAR Finalist 2019: Lokesh Gulechha
Idea: Intelligent Testing approach for Robotics implementation
RPA (Robotic Process Automation) brings efficiency and effectiveness to repeatable business processes by introducing an ecosystem of humans and robots (Co-HuBoTs), which achieves operational efficiency and a better customer experience. An RPA implementation can be attended, where humans are required, or unattended, where no human intervention is required, and it can use AI/ML and cognitive features to complete its tasks.
Testing RPA implementations can be challenging, as it’s not just about testing the automation of functional aspects but also the technical implementation of these bots. The technical implementation is not limited to automation scripting; it extends to the rules engine, algorithms and the effectiveness of cognitive features. The testing approach should address each step in the RPA journey, from requirements gathering through to deployment and live proving, and should address the following challenges, among others:
- Assuring RPA efficiency, which introduces a new level of quality, productivity and accuracy improvement
- Adherence to regulatory and compliance policies and not altering data ‘on its own’
- Ensuring data integrity across heterogeneous systems and applications
- Guaranteeing security of sensitive data that is being consumed & transferred
- Exception handling to provide a robust recovery mechanism
The approach provides the information required to establish a successful testing capability for robotics implementations and ensures the above challenges are met by detailing:
- What focus areas need to be tested?
- How to validate and verify the identified focus areas?
- When should they be executed and measured?
This holy trinity of What, How and When needs to be continuously fed with patterns and learning data to make it more effective. This insight-driven testing approach not only identifies focus areas but also optimizes the testing effort and prioritizes the testing assets. These actionable insights will help identify defects early and avoid failures.
My idea also includes a testing framework which is tool-agnostic and can be seamlessly integrated to consume data from any tool (e.g. ALM tools, test data) as well as provide actionable insights to testing tools to drive action. This framework can be consumed by any testing tool of choice for functional, automation and NFR testing of a robotics solution.
See more about Lokesh’s idea here
RisingSTAR Award Winner 2018 – Sanne Visser
Idea: To adapt and/or develop a testing framework for Blockchain-based applications to positively shape the adoption and development of blockchain technology
1. I want to correctly represent this technology in a fun yet accurate manner. I want to develop a testing framework for Blockchain.
2. Thus my primary goal became to enable people without any IT or Blockchain knowledge to understand the most basic principles so that they can ask the right questions before considering a blockchain solution. Most coverage of blockchain in the media is about cryptotrading, scams and bitcoin exchange rates. It misrepresents the technology and overhypes the risk that a blockchain solution is vulnerable to scamming from third parties. This is the first part of my mission, to inform. To break down complex technology into terms anyone can understand, which is why my explanation uses Pizzas and Excel.
3. Secondary to this would be developing a testing framework for dealing with Blockchain-based applications. One of the current bottlenecks is that there are very few Blockchain software projects, which increases the need to gather diverse experiences of testing Blockchain and to share that knowledge, so this will be my focus in the short term. I will not do this alone and will build a framework with other testers and other companies, potentially forming a Blockchain testing group.
4. I also hope to develop a workshop with a demo-blockchain environment to teach about the technology and beyond that to teach how to test this technology.
How will it help? It will develop a testing approach for Blockchain solutions. How will I make it real? I will continue my current efforts to explain Blockchain basics and how to test Blockchain through presentations at testing conferences. Secondly, I hope to join (or form) a Blockchain testing group, share experiences and knowledge there, and build and write a testing approach for Blockchain-based applications.
Video – Video Intro Link