RisingSTAR Finalists 2024
These are the finalists of the EuroSTAR 2024 RisingSTAR Award. This award fosters new ideas, and helps bring those ideas into being through the mentorship and support of industry experts called ‘The Supporters’.
The 2024 RisingSTAR Award winner is Bart Van Raemdonck (Belgium), for his idea of creating a Quality Evolution Tracker that revolutionizes quality measurement of a product.
The winner is chosen by the Supporters, based on the idea they think will benefit the testing community the most.
Geosley Andrades - India
Geosley is currently Product Evangelist & Community Builder at ACCELQ.
Green Coding: Driving Sustainability in AI-Driven Testing
As a software tester for more than 15 years, I have witnessed the rapid pace of technology’s evolution, especially around artificial intelligence and machine learning. These technologies have transformed our approach to software testing, providing incredible efficiencies and capabilities. However, this advancement comes with its own hidden cost: significant energy consumption and consequential environmental impact.
With the rat race to deploy AI models and to leverage AI for our testing needs, we must recognize AI’s environmental impact. Therefore, estimating and curbing the energy used and emissions produced by testing and deploying AI models is essential. This led to the idea of creating a tool that enables testers to track CO₂ emissions across their test code and AI experiments.
The primary goal of my project is to introduce green coding practices into the software testing field, thereby minimizing the carbon footprint. By integrating carbon tracking into our testing workflows, we can become aware of the environmental impact and make informed decisions to reduce it. This tool will be especially beneficial for teams leveraging AI and ML or working on projects that require large amounts of data and processing power, which often involves high energy consumption and CO₂ emissions.
This initiative will directly address the environmental impact of the most rapidly advancing sector of AI and ML by integrating sustainability into our core testing process. The project goes beyond creating efficient code to creating responsible code.
My vision is to standardize green coding practices across the software testing industry, making carbon tracking as fundamental as security or accessibility testing. Testers should not only strive to write clean code but also be aware of its environmental impact.
The next steps involve developing a robust platform that can seamlessly integrate with diverse testing frameworks and CI/CD pipelines to provide real-time feedback on carbon metrics.
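Real carbon-tracking libraries such as codecarbon already exist for this purpose. As a minimal, purely illustrative sketch of the idea of per-test carbon feedback, the following Python fragment estimates emissions from runtime, using assumed constants for machine power draw and grid carbon intensity (both values are hypothetical and vary widely in practice):

```python
import time

# Assumed average grid carbon intensity (g CO2 per kWh) -- varies by region.
GRID_INTENSITY_G_PER_KWH = 400.0
# Assumed average power draw of the test machine, in watts.
MACHINE_POWER_WATTS = 65.0

def estimate_co2_grams(runtime_seconds: float) -> float:
    """Rough CO2 estimate for a test run of the given duration."""
    energy_kwh = MACHINE_POWER_WATTS * runtime_seconds / 3_600_000  # W*s -> kWh
    return energy_kwh * GRID_INTENSITY_G_PER_KWH

def tracked(test_func):
    """Decorator that reports an estimated carbon figure per test."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = test_func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{test_func.__name__}: ~{estimate_co2_grams(elapsed):.6f} g CO2")
        return result
    return wrapper

@tracked
def test_example():
    assert sum(range(1000)) == 499500

test_example()
```

A real integration would measure energy rather than assume a constant power draw, and would feed the figures into CI/CD dashboards as described above.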
By advancing green coding practices within the testing community, we are not only improving our professional standards but also contributing positively to a more sustainable planet. Join me on this critical mission to pave the way for a sustainable future in software testing.
Willem Keesman - Netherlands
Provide an Accessible Platform where companies can host Testpeditions for Testers.
In March 2023, the very first testpedition took place in the Netherlands. Four testpeditions and lots of positive feedback later, I look forward to taking the concept of a testpedition to the next level.
A testpedition is an accessible event consisting of two parties. The testpeditionists and the testpedition host.
As a testpeditionist you will visit other companies and learn from the challenges they face. You engage in discussions with other test professionals and meet lots of friendly and highly motivated people. You encounter real world testing problems and use your experience and/or fresh ideas to help solve these.
As a testpedition host you open your company’s doors to the world and showcase the testing challenges you face. In return you receive lots of feedback on your challenges from different angles and often new perspectives as well.
A testpedition is a win-win event, it usually consists of:
– An opening and welcome by the testpedition host
– Presenting and discussing the challenges
– An informal gathering such as a lunch or beer & bites
– An interactive session to really get the blood pumping (A brainstorm or a workshop)
– A retrospective of the testpedition and the sharing of slides, ideas and contact details
My idea is to create a global platform on which companies can host testpeditions and testpeditionists can join them. Lessons learned would also be added to an archive freely available to all visitors of the site: the testpedition knowledge bank.
Willem is Practice Lead Auto|Q at Ordina Noord, a Sopra Steria company.
Isuri Navarathna Mudiyanselage - Finland
Isuri is currently a Quality Engineer at If Insurance.
Concept: AI Tool for REST API Testing
I have been engaged in API testing for a considerable duration, encountering a notable challenge in managing the testing of an increasing number of APIs manually. Even with tools like SoapUI or Postman, there is significant manual effort involved. Automating the process helps to some extent, but it still necessitates manual intervention for maintaining and updating data, versions, and various other aspects. Consequently, I recognized the need for a tool capable of analyzing patterns, generating, and maintaining test cases for APIs. Such a tool would streamline the testing process and eliminate the requirement for extensive technical expertise to comprehend the APIs. Hence, during my master’s thesis, I delved into this domain and conceptualized a tool. For further insights into this idea, please refer to the attached thesis link, specifically the Discussion section.
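As a sketch of the kind of generation such a tool might automate, the following Python fragment derives boundary and invalid-value cases from a hypothetical, OpenAPI-like parameter spec. The endpoint, parameter names, and rules here are invented for illustration; a real tool would parse full API specifications and observed traffic patterns:

```python
# Hypothetical parameter spec (OpenAPI-like fragment) for one endpoint.
spec = {
    "path": "/users/{id}",
    "method": "GET",
    "params": {"id": {"type": "integer", "minimum": 1, "maximum": 9999}},
}

def generate_cases(spec: dict) -> list[dict]:
    """Derive boundary and invalid-value cases from a parameter spec --
    a toy version of the generation the proposed tool would automate."""
    cases = []
    for name, rules in spec["params"].items():
        lo, hi = rules["minimum"], rules["maximum"]
        cases += [
            {"param": name, "value": lo, "expect": "2xx"},      # lower bound
            {"param": name, "value": hi, "expect": "2xx"},      # upper bound
            {"param": name, "value": lo - 1, "expect": "4xx"},  # below range
            {"param": name, "value": hi + 1, "expect": "4xx"},  # above range
        ]
    return cases

for case in generate_cases(spec):
    print(case)
```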
Brijesh Patel - India
Intelligent Reporting Assistance for Software Test Management Tools
Problem Statement: Software test management tools often provide static, predefined reporting that lacks the depth for meaningful data interpretation and analysis. This results in insufficient support for understanding complex data dependencies and predicting trends, which are crucial for optimizing testing strategies and improving software quality.
Solution: Intelligent Reporting Assistance to revolutionise software test management by introducing an integrated system comprising four advanced components designed to enhance data analysis and user interaction:
AI ChatBOT: The AI ChatBOT employs natural language processing (NLP) and machine learning (ML) to understand and respond to user queries in simple English. It processes large datasets efficiently, offering functionalities such as reporting on test coverage, identifying high failure rates by platform, and generating team effectiveness scorecards.
Virtual Assistant: Functioning as a digital personal assistant, this tool executes tasks based on vocal or typed commands. It can handle administrative functions like sending emails, adding platforms, and managing custom statuses and user permissions, thus streamlining workflow management.
Summary & Actionable Intelligence: This component delivers concise report summaries and suggests actionable steps based on in-depth analysis. It includes anomaly detection to spotlight outliers and unexpected patterns, providing targeted insights that drive decision-making.
Predictive Analytics: By analyzing historical data, our predictive analytics feature forecasts potential future software failures. This proactive approach helps in prioritizing critical testing areas, enhancing resource allocation, and ultimately improving software reliability.
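In its simplest form, the anomaly-detection idea described above amounts to flagging metrics that deviate strongly from the rest. A minimal, stdlib-only Python sketch, where the platforms and failure rates are invented for illustration:

```python
from statistics import mean, stdev

def flag_anomalies(metrics: dict[str, float], threshold: float = 2.0) -> list[str]:
    """Return metric names whose value lies more than `threshold`
    standard deviations from the mean -- a toy stand-in for the
    anomaly-detection component described above."""
    values = list(metrics.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [name for name, v in metrics.items() if abs(v - mu) / sigma > threshold]

# Hypothetical per-platform failure rates; "desktop" is the outlier.
failure_rates = {
    "android": 0.04, "ios": 0.05, "web": 0.06, "desktop": 0.45,
    "tv": 0.05, "watch": 0.04, "api": 0.06, "cli": 0.05,
}
print(flag_anomalies(failure_rates))  # -> ['desktop']
```

A production component would use richer models and historical baselines, but the interaction pattern is the same: the system surfaces the outlier, the user asks why.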
Benefits and Impact: The Intelligent Reporting Assistance system significantly enhances decision-making by providing deeper insights and actionable intelligence, streamlining workflow through automation, and fostering proactive problem solving with predictive analytics. Users benefit from a more efficient testing cycle and improved software quality, while the intuitive user interfaces ensure easy and productive interaction with the tool. This integration of AI and ML into software test management not only transforms static reporting into dynamic systems but also empowers QA teams to anticipate challenges and strategize effectively, ultimately leading to more robust software solutions.
Brijesh is currently a Product Specialist at QMetry.
Bart Van Raemdonck - Belgium
Bart is currently a QA Coach at Axxes.
A Quality Evolution Tracker that revolutionizes quality measurement of a product
Are you tired of traditional methods of measuring product quality that are time-consuming, lack accuracy and are most of the time a little vague? Because I am! Imagine a solution that harnesses the power of artificial intelligence to revolutionize quality measurement. Introducing the innovative AI-driven Quality Evolution Tracker!
This solution will benefit the testing community by providing a robust way to measure product quality and track its evolution over time. I am often asked how I can prove that testing improves product quality; this product will help deliver that proof. Our solution helps the various stakeholders involved in the product lifecycle, who can use it to measure quality and to optimize and evaluate the (testing) process. The idea came from my own frustrations with outdated quality measurement methods, or the wrong ones, such as counting defects (or the absence of them), or relying on little more than an app-store rating. I envisioned a solution that leverages AI to analyze vast amounts of data and provide actionable insights.
The vision is to create a go-to platform for quality measurement. If this becomes a standard, we won’t constantly have to explain or prove why software testing and quality matter; people will refer to this kind of metric. Our next steps involve gathering information, exploring and studying algorithms, and forging partnerships with industry leaders.
I plan to conduct pilot studies with select partners to validate the effectiveness of our solution in real-world scenarios. Additionally, we can set up user feedback mechanisms to iterate on and improve the platform continuously.
While we anticipate challenges such as data integration complexities and model interpretability issues, we are confident in our ability to overcome them through collaboration and innovation.
We seek support from the testing community, industry partners, AI and LLM experts and investors who share our vision for transforming quality measurement through AI. Together, we can revolutionize how product quality is measured, ensuring higher quality standards and greater customer satisfaction.
Konstantin Sakhchinskiy - Poland
Structured approach to complex testing scenarios for data migration and state machine testing with detailed logging for debugging.
Using a clear testing approach for state machines and data migrations, with detailed logging, can help QA and dev teams a lot. It gives QAs/testers and devs a better way to find, reproduce, and fix bugs quickly. These ideas came from my experience of how hard it is to test complicated systems and reproduce “random” bugs. I considered using automation/scripting and detailed logging to make testing more useful and effective, but without requiring advanced programming skills. Teams can spend less time on complex bugs and more on improving apps.
My goal is to make testing more straightforward, find bugs faster, and make better software – and users happier. My ideas look at the whole testing process for complex data migrations and an app’s internal states and transitions, combining thorough testing with deep logging to understand and solve issues better. I would like to share my experience and use it as a guideline for others with similar projects, share these ideas with more people, and get their feedback to improve them. I expect some challenges, like getting people used to these ways of testing, the mindset, and learning new things. I’ll need support to share my experience, and help from the community to make my ideas better with their ideas and experiences.
Data migration and state machine testing: The use of real and synthetic data for migration testing, and systematic testing of state transitions using scripting, gives a guideline for dealing with complex testing tasks – thorough coverage, from backward compatibility to handling heavy loads and race conditions – improving the reliability of software systems. This approach is particularly beneficial in agile development environments where rapid iterations require quick but robust testing.
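The systematic transition testing described above can be sketched as enumerating every (state, event) pair against a transition table, so that invalid transitions are exercised as deliberately as valid ones. A minimal Python illustration, using a hypothetical migration-job state machine invented for this example:

```python
import itertools

# Hypothetical state machine for a data-migration job.
TRANSITIONS = {
    ("pending", "start"): "migrating",
    ("migrating", "complete"): "done",
    ("migrating", "fail"): "error",
    ("error", "retry"): "migrating",
}
STATES = {"pending", "migrating", "done", "error"}
EVENTS = {"start", "complete", "fail", "retry"}

def apply(state: str, event: str):
    """Return the next state, or None if the transition is invalid."""
    return TRANSITIONS.get((state, event))

def enumerate_cases():
    """Yield every (state, event, expected) triple, so invalid
    transitions are covered as deliberately as valid ones."""
    for state, event in itertools.product(sorted(STATES), sorted(EVENTS)):
        yield state, event, TRANSITIONS.get((state, event))

# Exercise all 16 combinations (4 states x 4 events).
for state, event, expected in enumerate_cases():
    assert apply(state, event) == expected
```

The same enumeration pattern scales to real systems by generating the table from the app’s actual state model, without requiring advanced programming skills.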
Logging for problem solving: Utilizing detailed logs for troubleshooting and analyzing complex, “random” issues. My examples include overcoming the limitations of built-in logging and avoiding the loss of critical data during long tests.
By writing scripts with detailed logging, QAs/testers and devs can improve their diagnostics. This will help identify elusive bugs and improve the debugging process. Correlating actions, responses, and system states with specific issues helps to pinpoint root causes more accurately and faster.
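One way to realize that correlation is to stamp every log record of a test step with a shared correlation id, so actions, responses, and system state can later be tied back to one step. A minimal sketch using Python’s standard logging module (the step name and log format are illustrative):

```python
import logging
import uuid

class CorrelationFilter(logging.Filter):
    """Attach the current correlation id to every log record."""
    def __init__(self):
        super().__init__()
        self.correlation_id = "-"

    def filter(self, record):
        record.correlation_id = self.correlation_id
        return True

logger = logging.getLogger("qa")
handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(asctime)s [%(correlation_id)s] %(message)s"))
corr = CorrelationFilter()
logger.addFilter(corr)
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def run_step(name: str):
    corr.correlation_id = uuid.uuid4().hex[:8]  # fresh id per test step
    logger.info("step started: %s", name)
    # ... actions, API calls, state checks would be logged here ...
    logger.info("step finished: %s", name)

run_step("migrate user table")
```

Grepping the log for one id then reconstructs everything that happened in that step, which is exactly what makes “random” bugs reproducible.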
Konstantin is currently a Lead QA Engineer at Octo Browser.
Rini Susan V S - USA
Rini Susan is currently pursuing a Post Graduate Program in AI & ML at UT Austin.
Enhance Software Performance Testing with Machine Learning techniques
Software performance testing checks and validates an application’s capacity and ensures that it works well within the acceptable Service Level Agreements. Performance testing has evolved a lot over time, and to keep up with the agile mode of development, testing teams need to bring in automation. Artificial Intelligence (AI) can play an important role in test automation, reducing the time consumption and manual intervention involved in various test phases, and Machine Learning (ML), a subset of AI, can aid in these activities.
ML-based anomaly detection systems can help identify performance bottlenecks faster and more accurately. Machine learning models can help predict server performance for future events. Accordingly, the manual effort required for test monitoring and result analysis, and the time taken to identify performance issues, can be reduced.
Various licensed and open-source toolkits are available that can be easily integrated with testing tools to enable machine learning capabilities. To sum up, organizations with foresight can utilize machine learning technologies to take a proactive approach to performance testing, rather than a reactive approach after performance issues hit the application.
Best of Luck to All
Join us in congratulating this year’s RisingSTAR Award Finalists, and in particular the 2024 RisingSTAR Award winner, Bart Van Raemdonck, chosen by The Supporters from these entries.
See our RisingSTAR Award introduction page for more information about this award and see further details about the 32nd EuroSTAR Software Testing Conference.