RisingSTAR Finalists 2023


Meet the finalists of the EuroSTAR 2023 RisingSTAR Award. This award fosters new ideas and helps bring them into being through the mentorship and support of industry experts known as ‘The Supporters’. It supports individuals and is not a corporate award.

The winner is chosen by the Supporters, based on the idea they think will benefit the testing community the most. The 2023 winner is Joonas Palomäki, for his idea of using an artificial intelligence technique to make manual test case creation 50% faster.


Geosley Andrades - India


">

 

Geosley Andrades - India

Geosley is currently Product Evangelist & Community Builder at ACCELQ.

Game On with iSAT: Gamified Platform for Streamlined Resume Screening, Skill Evaluation, and Engaging Onboarding

As someone deeply entrenched in the world of software testing, I’ve personally experienced the extensive hours spent searching for the right candidate, only to face further hurdles during the onboarding process. Swamped under piles of resumes, even after meticulous screening, it is easy to miss the perfect candidate. This observation propelled me to seek a solution.

Introducing iSAT, an intelligent Screening, Assessing, and Training platform designed to automate, gamify, and streamline the hiring and onboarding journey, saving time, enhancing efficiency, and ensuring the selection of the most competent candidates.

Step 1 involves a cutting-edge resume analyzer. It screens and scores resumes against the provided job description, effectively reducing hundreds of applications to a manageable and highly relevant shortlist. This tool will allow hiring managers to focus on the most promising candidates, reducing the risk of overlooking potential talent.
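To make the scoring step concrete, here is a minimal sketch of how resumes could be ranked against a job description using TF-IDF cosine similarity with scikit-learn. This is an illustration only, not iSAT’s actual model; the file names and texts are invented.

```python
# Illustrative only: rank resumes against a job description by TF-IDF similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Senior QA engineer: test automation, Selenium, API testing, CI/CD pipelines."
resumes = {
    "candidate_a.txt": "Five years of Selenium and API test automation, Jenkins CI/CD, Python.",
    "candidate_b.txt": "Frontend developer, React and CSS, some experience with unit testing.",
}

# Vectorise the job description together with all resumes so they share a vocabulary
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job_description, *resumes.values()])

# Score each resume by its cosine similarity to the job description
scores = cosine_similarity(matrix[0:1], matrix[1:]).flatten()
for name, score in sorted(zip(resumes, scores), key=lambda pair: -pair[1]):
    print(f"{name}: {score:.2f}")   # the highest-scoring resumes form the shortlist
```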

Step 2 reimagines the interview process, replacing traditional Q&A with real-time, gamified challenges that measure a candidate’s practical skills. This innovative approach will ensure that we assess candidates’ ability to apply their knowledge in a practical setting. The end goal is to identify candidates who are true problem-solvers with robust exploratory, analytical, and technical skills.

Finally, Step 3 revolutionizes the onboarding process. Instead of dry, one-way training modules, we incorporate gamified learning paths tailored to the specific roles being hired. These paths will integrate expert advice, years of testing research, and role-specific skills, leading to a comprehensive and engaging learning experience. The platform will also allow the easy addition of organization-specific modules to customize the learning journey further.

What sets this idea apart is its holistic approach, integrating every phase of the hiring process into a cohesive, streamlined, and engaging experience. This platform will not only benefit hiring managers, testers, and organizations, but it will also foster a more skilled and competent testing community.

The vision is to transform hiring and onboarding into an engaging, efficient, and effective process. With the platform freely available to the community, the aim is to elevate the standards of hiring, training, and professional development in the testing field. Together, let’s redefine the future of testing. Link to Presentation

This will be an individual project worked on by Geosley.

 


Robin Gupta - India


">

Testavatar: A Chatbot version of a Famous Software Tester to Revolutionize the way Software Testing is Taught and Learned

With the advancements in Large Language Models, it is now possible to create a chatbot that can imitate the communication patterns of a human based on their publications. Think: asking Michael Bolton, “What is a test case?”

The first step in creating this digital version of a software tester is to identify the individual whose traits will be imitated. Once the individual has been identified, their publications should be collected and analyzed to extract their key traits and communication patterns.

The second step is to create a chatbot that can interact with users in a natural and engaging way. This chatbot should be trained on the publications of the software tester to ensure that it accurately imitates their traits. The chatbot should be able to understand the natural language of the users and respond to their queries in a way that is relevant and helpful.

The third step is to promote the digital version of the software tester to the software testing community. This can be done through various channels, including social media and conferences.

The final step is to continue to refine and improve the chatbot over time. As more users interact with it, the chatbot will learn and evolve, becoming even more effective at imitating the traits of the software tester. Regular updates should be made to ensure that the chatbot remains relevant and useful to the software testing community.

In conclusion, the creation of a digital version of a famous software tester using a chatbot is an innovative idea that has the potential to revolutionize the way that software testing is taught and learned. By following the steps outlined above, it is possible to create a valuable resource for anyone who is interested in improving their software testing skills.

Beyond a certain threshold we might face the problem of scaling the backend services; this can be handled by open-sourcing the code so that users can run local versions of the application.

The above work will be based on explicit permission from, and collaboration with, the person we pick for the Testavatar project. Also, I will be taking this on as a personal endeavour on weekends and evenings.

Proposed Tech Stack: LangChain and GPT-3
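As a rough sketch of how the chatbot could be grounded in the chosen tester’s (permission-granted) publications, the classic LangChain retrieval pattern looks roughly like the following. The module paths, model choice and file names are assumptions, and newer LangChain releases have moved these classes into separate packages.

```python
# Sketch only: a retrieval-augmented chatbot over a tester's publications.
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

# 1. Load and chunk the publications collected with the author's permission
text = open("publications.txt", encoding="utf-8").read()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_text(text)

# 2. Index the chunks so answers can be grounded in the author's own words
store = FAISS.from_texts(chunks, OpenAIEmbeddings())

# 3. Answer questions by retrieving relevant passages and letting the LLM respond
qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(), retriever=store.as_retriever())
print(qa.run("What is a test case?"))
```

A production version would add a persona prompt so the responses mirror the author’s tone, plus the feedback loop described in the refinement step above.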

NOTE: “Avatar” comes from the Sanskrit word avatāra meaning “descent”. Within Hinduism, it means a manifestation of a deity in bodily form on earth, such as a divine teacher. For those of us who don’t practice Hinduism, it technically means “an incarnation, embodiment, or manifestation of a person or idea”.

 


Robin is currently VP of Engineering at Provar.


Parveen Khan - United Kingdom


">

 

Parveen Khan - UK RisingSTAR Finalist

Parveen is a Senior QA Consultant at Thoughtworks.

Making CFRs Inclusive – To Identify, Adapt and Approach to Achieve Holistic Quality as a Whole Team

Cross-functional requirements (CFRs), most commonly referred to as non-functional requirements (NFRs), form an integral part of software quality. Testing for them and making them part of the team’s process is an absolute necessity for any team that promises to deliver high-quality software to its users. Often, the emphasis placed on functional requirements is not equally placed on cross-functional requirements by delivery teams and business stakeholders. There can be multiple reasons for this, but one I have seen across different teams is a lack of awareness of how to approach CFR testing, because CFRs can come across as really vague – testability or maintainability, for example.

My main goal is to create awareness and enable people to understand CFRs by creating and sharing a list of CFRs that they can apply in the context of their own project or product. I would like to create a visual (something like risk storming – https://riskstormingonline.com/) that any tester can use within their team to pick the right, most important CFRs, along with heuristics and question prompts for each CFR that they can use to generate ideas when thinking about them.

This would help testers and their teams build CFRs into each functional feature and apply them as part of continuous testing, because features are visible while CFRs are very easy to forget. For example, the list could include performance, accessibility, reliability, security, operability, testability, and data.
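One possible shape for the heuristics and question prompts described above, so they can be dropped into a story template or a small tool, might look like the sketch below; the CFRs and questions here are illustrative examples, not the finished list.

```python
# Illustrative only: a small catalogue of CFR question prompts per quality attribute.
CFR_PROMPTS = {
    "performance": [
        "What response time is acceptable for this feature under expected load?",
        "Which downstream calls could become a bottleneck?",
    ],
    "accessibility": [
        "Can the feature be used with a keyboard only and with a screen reader?",
        "Do colour contrast and focus states meet WCAG guidance?",
    ],
    "security": [
        "What data does this feature expose, and to whom?",
        "What happens when authentication or authorisation fails?",
    ],
    "testability": [
        "Can we observe and control this feature in lower environments?",
        "Which logs or hooks would we need to debug it in production?",
    ],
}

def story_checklist(selected_cfrs):
    """Return the prompts for the CFRs a team has prioritised for a story."""
    return {cfr: CFR_PROMPTS[cfr] for cfr in selected_cfrs if cfr in CFR_PROMPTS}

print(story_checklist(["performance", "security"]))
```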

I would also like to create a workshop to demonstrate this entire approach and write some blogs/papers that help testers learn the approach and adapt it to their context.

I would need help to make this scalable, so that it is available to everyone in different formats, and to build the visual.

Background
As a consultant I have worked on many different projects and teams. I realised very soon that CFRs were never part of the quality discussions. I facilitated a workshop with the teams and business stakeholders using this list of CFRs and prioritised the most important ones. I then added the list to the Jira story template with prompts and questions, to help the team generate questions before implementing a feature or user story. It worked well, so I kept refining it and facilitated this workshop with most of the teams I worked with, and I now want to share the approach with the wider testing community.


Joonas Palomäki - Finland


">

An Artificial Intelligence Technique to make Manual Test Cases 50% Faster

Artificial intelligence is a very helpful tool for creating manual test cases from a specification. It saves a lot of time and helps create better test cases, and it is a technique that will soon change how manual QA is done. The idea will be incorporated as a feature of our tool to make it easier for everyone to use, but the technique works with general-purpose LLMs (ChatGPT, for example), so there is no need to use Meliora’s service to benefit from it. It makes sense for our company to integrate the feature into our tool, but it is not a requirement.

The idea I’m promoting is how these AIs can be used to help design test cases from a specification. AI can create test cases in seconds (or minutes when creating several with a slower, better model), which is still very fast compared to a human. A human still needs to inspect and correct the output, which is why the time saved is around 50% rather than more.
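As an illustration of the general technique (not Meliora’s actual implementation), generating a first draft of test cases from a user story with an LLM can be as simple as the sketch below; the OpenAI client, model name and prompt wording are assumptions.

```python
# Sketch only: draft manual test cases from a user story with an LLM.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

user_story = (
    "As a registered user, I want to reset my password via an emailed link "
    "so that I can regain access to my account."
)

prompt = (
    "You are a senior QA engineer. Write manual test cases for the user story below. "
    "For each test case give a title, preconditions, numbered steps and an expected result. "
    "Cover positive, negative and edge cases.\n\n" + user_story
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model would do
    messages=[{"role": "user", "content": prompt}],
)

# A human still reviews and corrects the draft, which is where the ~50% figure comes from
print(response.choices[0].message.content)
```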

With that in mind, I see the benefit as learning a new technique that can be used to generate test cases faster.

If that sounds like an interesting topic, I’ll be happy to demonstrate it in a general way and show how the community can use it. It will not work for everyone, but it works well where there is a text-based specification (requirements or user stories).

This is fairly new and evolves every day, so no public description of the feature is available yet. I can demonstrate it if needed.

 


Joonas is currently Lead Consultant at Meliora.


Sushmitha Shivakumar - India


">

 

Sushmitha Shivakumar - India

Sushmitha is currently Performance Test Manager at BT.

Performance Automated Tool for Execution of Load Testing – HaRboR

HaRboR stands for High-level analysis of Response time and Bandwidth measurement/Bottleneck identification of Resources. It is a dashboard. The idea is to run load tests using JMeter’s distributed model; an agent on the remote machine picks up the results and plots them on the dashboard for monitoring.

Benefits of HaRboR
1. HaRboR is a cost-efficient, open-source tool, integrated with the DevOps pipeline, with zero-touch automation.
2. Load generators come at no licence cost, and we can run load tests for any number of users, endurance tests, and load generator and server monitoring as well.
3. HaRboR keeps track of application downtime, detects anomalies, identifies trends, optimizes resource usage, and helps troubleshoot performance issues before they impact end users. It empowers us to gather insights that help ensure customer satisfaction and drive business growth.
4. HaRboR manages overall performance, covering code, application dependencies, transaction times, and user experience.
5. Monitoring is a small but critical part of every business, and it gives confidence in go-live decisions.
6. HaRboR measures user satisfaction, response time, error rates, the number of application instances, request rates, CPU usage of applications, servers, virtual machines, or containers, and application availability/uptime (SLAs), and it points out, typically via alert notifications, that there is a problem.

Background
Businesses used to spend on licences for load/endurance/volume/spike testing, with licence costs based on the number of users simulated for performance testing. We wanted to move away from the licensed tool and create our own benchmark for performance execution, irrespective of what kind of script we build; we wanted a base tool to perform the execution under user load. Then we got a grip on the distributed load testing model, a concept that helped us create an execution space for our load testing.

We had multiple servers that we used as load generators with the licensed tool. We connected them all internally, picked one high-end system as the master server (on which we place all the performance scripts), and made the remaining servers slaves (across which the user load is distributed for execution). We wanted a clear, focused front end to view the performance executions scheduled from master to slaves, so we built an agent system that picks up all the performance tool information and presents it on the dashboard in a comprehensive manner. We then wanted to automate starting and stopping script execution, and to let developers, testers, and any other team access the relevant script and trigger a test at any time, so we built a scheduler that keeps track of the list of scripts on the master system and schedules an execution with one click.

During execution, the agent picks up the traffic details and plots them on the HaRboR dashboard for a detailed view. We built the HaRboR dashboard to contain all the performance parameter details.
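To illustrate the mechanics, a minimal sketch of the agent side might trigger a distributed JMeter run from the master and summarise the results file for the dashboard. The host names, file names and the 90th-percentile metric are illustrative, and the real HaRboR agent streams results continuously rather than post-processing a single file.

```python
# Sketch only: kick off a distributed JMeter run and summarise the JTL results.
import subprocess
import pandas as pd

REMOTE_LGS = "lg1.internal,lg2.internal"      # slave load generators
TEST_PLAN, RESULTS = "checkout_load.jmx", "results.jtl"

# -n non-GUI, -t test plan, -R remote hosts, -l results file (standard JMeter CLI flags)
subprocess.run(
    ["jmeter", "-n", "-t", TEST_PLAN, "-R", REMOTE_LGS, "-l", RESULTS],
    check=True,
)

df = pd.read_csv(RESULTS)                     # JTL results written in CSV format
success = df["success"].astype(str).str.lower().eq("true")
summary = {
    "samples": len(df),
    "error_rate_pct": round(100 * (1 - success.mean()), 2),
    "avg_response_ms": round(df["elapsed"].mean(), 1),
    "p90_response_ms": round(df["elapsed"].quantile(0.90), 1),
}
print(summary)                                # the agent would push this to the dashboard
```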

Link to a video about the idea.


Indranil Sinha & Yaroslav Himko - Sweden


">

Automated Production Bug Tracking in Test Autobahn

Tracking production bugs has multiple benefits. We have been tracking ours since 2019. We started with only 2 teams, but over time the number of development teams grew to 10 and it became harder and harder to track production bugs manually. We wanted to automate this process and make the number of production bugs visible in a dashboard. Based on this idea, we implemented the following: as soon as a bug (found during testing in Test or UAT, or in production) is registered in Azure DevOps, it is picked up by this new implementation and shown as a bar in a dashboard diagram.

In the dashboard (the name of the dashboard is Test Autobahn, which was a finalist in the 2022 RisingSTAR Award), we have a tab called production bugs. There we can select any team (out of 10) and any year (since 2019), and an excellent bar diagram is presented with green bars for Test/UAT bugs and red bars for production bugs. This gives us an overview of the number of production bugs in each sprint of each year.

Based on these numbers, the chart automatically calculates and displays the average number of production bugs for each team, each year. This empowers us to follow the overall quality of our software products over a long period of time.

When we hover over any red bar (representing production bugs), a tooltip shows the number of production bugs. When we click on any red bar, the production bug IDs, titles, and statuses are displayed as a list from which we can open those bugs directly in Azure DevOps. In this view, we also calculate the ratio of Test and UAT bugs to production bugs. A pie chart shows the areas/microservices the production bugs come from. This automated process is helping us enormously today and will continue to do so for years to come. Based on the available data, we can predict future production bug occurrence and plan resources.
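For readers curious how such bugs can be pulled out of Azure DevOps automatically, a minimal sketch using the work-item WIQL REST API is shown below. The organisation, project, personal access token and the ‘Production’ tag are placeholders; the actual implementation may classify and query the bugs differently.

```python
# Sketch only: fetch production-tagged bugs from Azure DevOps via the WIQL REST API.
import requests

ORG, PROJECT, PAT = "my-org", "my-project", "<personal-access-token>"
WIQL_URL = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/wiql?api-version=7.0"

query = {
    "query": (
        "SELECT [System.Id] FROM WorkItems "
        "WHERE [System.WorkItemType] = 'Bug' "
        "AND [System.Tags] CONTAINS 'Production' "
        "AND [System.TeamProject] = @project "
        "ORDER BY [System.CreatedDate] DESC"
    )
}

resp = requests.post(WIQL_URL, json=query, auth=("", PAT))
resp.raise_for_status()
bug_ids = [item["id"] for item in resp.json()["workItems"]]
print(f"{len(bug_ids)} production bugs found")  # counts like these feed the bar chart
```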

Link to a video explaining the idea.

This nomination covers two people: the work was conceptualized by Indranil Sinha and implemented by Yaroslav Himko.


Indranil is currently Head of Quality Assurance and Yaroslav is a Technical Tester & Test Automation Developer at Marginalen Bank.


Cas van der Kooij - Netherlands


">

 

Cas van der Kooij

Cas is currently a Test Automation Engineer at Capgemini.

Testing Strategies for Ensuring Reliability of Cloud-Based Applications

Cloud computing has gained significant importance in today’s landscape due to its advantages over conventional hosting methods. The main contributors to the growth of cloud computing are higher flexibility, cost-efficiency, agility, and availability. By investing in testing, organizations can identify and resolve issues early, minimize risks, and build trust among users and stakeholders.

Goal:
This concept aims to provide testers entering the domain of cloud-based application testing with practical testing strategies. Testing cloud-based applications presents specific challenges that differ from testing traditional applications. We will gain a better understanding of how and when to apply these practical testing strategies by investigating the unique challenges of cloud-based applications, such as scalability, distributed architecture, dependency on external services, virtualization, data storage, and security considerations. The concept will include real-life examples of successfully tested and deployed cloud-based applications, demonstrating the benefits of applying these testing strategies.

What’s next?
The knowledge shared in this concept will be used to create a manual on testing strategies for cloud-based applications. The manual can act as a guide for testers newly entering this domain of software testing, helping them get up to speed quickly and understand the quirks and complexities of testing cloud-based applications. The ultimate goal is to empower readers with the knowledge and tools necessary to confidently test and deliver reliable cloud-based applications. Knowing when and how to apply these testing strategies helps create trust between testers and stakeholders, which is crucial for positive collaboration, minimizes risk, and contributes to success.

Creating a manual on testing strategies for cloud-based applications is not new. However, when you Google ‘cloud-based application testing’, most of the results are about using cloud-based tools for testing, not about testing a cloud-based system under test. The results that do cover strategies for testing cloud-based applications are mostly paid lectures and, in my opinion, could benefit from best practices based on real-world application of the testing strategies described in this manual.


Bart Van Raemdonck - Belgium


">

A Multi Browser Device/Tool where you can see one action on different devices and browsers instantly

In today’s digital landscape, ensuring a seamless browsing experience across multiple web browsers is crucial for businesses to reach their target audience effectively. However, manual cross-browser testing can be time-consuming, tedious, and error-prone. That’s where my idea for a multi-browser tool comes in: a software testing tool designed to streamline and simplify the process of manually testing web or mobile applications across multiple browsers and devices simultaneously. This multi-browser tool will empower your team to conduct efficient and comprehensive cross-browser testing, ensuring your web or mobile application functions flawlessly for every user.

Currently, I can run my web automation tests on different browsers at the same time, but if I want to check the same scenarios manually, I have to go through the browsers one by one. That doesn’t make sense to me. I want to perform one action and see it instantly on different browsers.

• Unparalleled Efficiency: This multi-browser tool empowers testers to execute manual tests simultaneously on a range of popular browsers, including Chrome, Firefox, Safari, and Edge, and also on several devices, drastically reducing the time required to validate the compatibility and functionality of your web or mobile application.
• Recording and comparison: The tool could also offer recording functionality, allowing testers to share and perhaps compare screenshots or recordings.
• Comprehensive Test Coverage: Expand your testing capabilities from different browser versions to various operating systems and devices; the tool ensures your web or mobile application performs flawlessly for all users, regardless of their chosen platform.

This idea will need the support of engineers who are familiar with web browser anatomy; we can also reach out to the makers of the great browser testing tools that already exist.

The idea is that you perform an action in one browser (e.g. Google Chrome), and every action you perform as a user immediately also happens in other browsers such as Firefox, Edge, and Safari, so you can see the behaviour simultaneously in the other browsers on the same screen. The approach would be to use the underlying layers (in this case Chromium) to send or translate the actions to the other browsers. You could do the same for apps and mobile devices, but I would tackle that at a more advanced stage, because you would need to build an extension that screens an app and creates page objects with the right elements before you can send the actions across different devices at the same time. I would first focus on making it work for browsers and web applications.
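One way to prototype the browser part is with an existing automation framework: capture clicks in a ‘driver’ browser and replay them in ‘follower’ browsers. The rough Playwright sketch below uses a deliberately crude selector strategy and skips typing, scrolling and synchronisation, which a real tool would have to solve properly; the URL is a placeholder.

```python
# Sketch only: mirror clicks from one browser to others with Playwright.
from playwright.sync_api import sync_playwright

CAPTURE_JS = """
document.addEventListener('click', e => {
  const el = e.target;
  const selector = el.id ? '#' + el.id : el.tagName.toLowerCase();  // crude, for illustration
  window.__mirrorClick(selector);
}, true);
"""

with sync_playwright() as p:
    driver = p.chromium.launch(headless=False).new_page()
    followers = [p.firefox.launch(headless=False).new_page(),
                 p.webkit.launch(headless=False).new_page()]

    def mirror_click(source, selector):
        for page in followers:          # replay the captured click everywhere else
            page.click(selector)

    driver.expose_binding("__mirrorClick", mirror_click)
    driver.add_init_script(CAPTURE_JS)

    url = "https://example.com"         # placeholder site under test
    driver.goto(url)
    for page in followers:
        page.goto(url)

    driver.wait_for_timeout(60_000)     # interact manually with the driver browser
```

Mirroring against real mobile devices would need a device farm and something like Appium on top, as described above.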

I found many tools that let you easily use a browser in a certain environment, or even open several browsers at the same time on the same screen, but I always miss the function where you perform the actions in only one browser and simultaneously see the behaviour in all the others. Tools such as Sizzy or Polypane focus on viewports of different (simulated) mobile devices at the same time.

My idea would speed up the process of manually checking whether, for example, a menu works on a website or app.

Today I have to open Chrome, Safari and Firefox on a Mac, Edge on Windows, or use a tool like BrowserStack to check that menu. Every browser is a separate test session, so I have to click the menu and check that it opens several times instead of once. I would like a solution where I click that menu once and see the behaviour on real browsers. If existing tools cover that, great, but I haven’t found that functionality yet.

The second part of my idea is that you could do the same with an app that you are developing within your project:
• You could click on a menu on an iPhone X and it performs the same action on a real iPhone 13, iPhone 8, or even an iPad at the same time
• You could click on a menu on a Samsung S20 and it performs the same action on a real OnePlus 9, Pixel 7, or even an Android tablet at the same time

This is an idea from myself and my company is not involved. If I were to win, we could discuss it with different parties willing to support it, including my company, but at the moment they are not in the loop.

There are a lot of (open source) automation tools out there that can control browsers and apps, so I think we need to start from there and draw on expertise from that community (Selenium, Appium, Playwright, …). In an ideal world this would also be an open-source tool built and maintained by the community.

 


Bart is currently a QA Coach at Axxes.


Ken S'ng Wong - Singapore


">

 

Ken S'ng Wong Singapore RisingSTAR Finalist

Ken is currently a Senior QA Engineer at Autodesk.

Goal-based Gamification of Visual Workflow Testing to train Reinforcement Learning Models to allow high-level Exploratory Testing to be performed on applications with complex UI workflows

A key task in application testing is to test the UI against a designed, or predefined, workflow to see if it fulfils its design goals. The most conventional method is to pre-write the steps a typical user takes to achieve what the particular UI is meant to do. However, this covers only the predefined workflow path, not the alternate paths that users may discover by accident, through experimentation, or through word of mouth. These alternate paths can be found during exploratory testing. At present, it is not possible to fully automate exploratory testing to discover discrepancies in the workflow steps. Such tests are usually done manually, with QA engineers taking the role of the end user, or through beta or early releases to users.

To assist the QA engineers, the same, or a similar, application can be gamified for reinforcement learning (RL), where an AI agent is trained in an RL environment, commonly called a Gym. The agent takes the role of the end user and slowly, but surely, experiments with different ways of achieving the same goal. For each success, the agent records the workflow steps it took and logs the application’s behaviour at each step. These successful workflows are documented as possible alternative paths, and the development teams can decide whether they will do the application any harm in the long run, or add blockers in the next release to prevent users from invoking them. For each failure, the steps taken are compared with the predefined, or previously recorded, workflows. If the deviation is too large, the QA engineer (a human) labels those workflows as not possible and imposes heavy penalties on the agent; otherwise, a bug ticket can be filed for the development team to investigate further.
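As a sketch of what such a Gym could look like, the toy Gymnasium-style environment below rewards the agent for reaching a predefined workflow goal, penalises human-labelled impossible flows, and reports successful traces as candidate alternate paths. The UI actions, goal and penalty values are invented for illustration; a real environment would drive the application under test rather than a list of strings.

```python
# Sketch only: a toy RL environment that gamifies a UI workflow.
import gymnasium as gym
import numpy as np
from gymnasium import spaces

UI_ACTIONS = ["open_menu", "fill_form", "click_save", "click_cancel", "open_settings"]
GOAL = ("open_menu", "fill_form", "click_save")   # the designed workflow
FORBIDDEN = {("click_save",)}                     # human-labelled impossible flows

class WorkflowEnv(gym.Env):
    """Each episode the agent chains UI actions, trying to reach the workflow goal."""

    def __init__(self, max_steps=10):
        self.action_space = spaces.Discrete(len(UI_ACTIONS))
        self.observation_space = spaces.MultiBinary(len(UI_ACTIONS))
        self.max_steps = max_steps

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.history = []
        return self._obs(), {}

    def step(self, action):
        self.history.append(UI_ACTIONS[action])
        trace = tuple(self.history)
        if trace[-len(GOAL):] == GOAL:            # goal reached: log as an alternate path
            return self._obs(), 1.0, True, False, {"workflow": trace}
        if trace in FORBIDDEN:                    # heavy penalty for impossible flows
            return self._obs(), -5.0, True, False, {}
        truncated = len(self.history) >= self.max_steps
        return self._obs(), -0.05, False, truncated, {}

    def _obs(self):
        return np.array([a in self.history for a in UI_ACTIONS], dtype=np.int8)
```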

This idea will greatly augment QA’s job, as it becomes possible to have the pre-release application experimented on by an AI before the end users see it. The alternate workflows discovered by the AI can be used to validate complaints from end users, or be fixed by the developers.


Best of Luck to All


Join us in congratulating this year’s RisingSTAR Award Finalists and wish them well as The Supporters now review their entries and choose the 2023 RisingSTAR Award winner.

See our RisingSTAR Award introduction page for more information about this award and see further details about the 31st EuroSTAR Software Testing Conference.