RisingSTAR Finalists 2022


Meet the finalists of the 2022 EuroSTAR RisingSTAR Award. This award was created to encourage innovation within the software testing community. It aims to foster new ideas and to support those ideas through mentorship from an impressive selection of industry experts called ‘The Supporters’.

The Supporters will select the winner: the idea they believe will benefit the testing community the most. We will announce the winner at this year’s 30th EuroSTAR Conference, taking place in Copenhagen, 7-10 June 2022.


Robin Gupta - India


Robin Gupta RisingSTAR Finalist 2022

Mentoring for Testers to Supercharge their Careers

Mentorship for testers is a real problem. All of us have been lost, out of work, or marginalized at some point in our careers. My idea is to set up a free mentorship platform for QAs, testers, leads, and managers in the software testing community, so that we can seek and give advice among fellow software testers and grow together. While there are numerous YouTube tutorials, blogs, and articles around the same questions, I want to facilitate human connections in our community.

I’ve been among the top 1% of mentors on ADPList, and one of my mentees there mentioned that it was really hard to find one-on-one advice from peers in the testing community. At that moment I realized that we (as testers) have many communities for collaborating on work and on topics outside of testing, but we don’t have a centralized platform to either grow as a mentor or seek advice as a mentee.

Therefore, my vision is to set up this open-source safe space for testers to seek advice at all levels, and to strengthen the bonds between members of the software testing circle.

If the platform hits critical mass, we might need to move away from the no-code/low-code stack to a full-fledged web platform. I will need support on both the marketing and onboarding sides, because a community and mentoring platform can only grow through active engagement and authentic conversations. I’ve already started building out the work-in-progress platform using Softr (a no-code tool) and Airtable.


Rahul Parwal - India

Create a Testing Assistant for Software Testing Practitioners

Over the past few decades, technological advancements have revolutionised the software industry. Yet most of the innovation and development has been on the automation tooling and development fronts. Consequently, testers have also shifted their attention from testing to tools, automation, and programming languages. Having all of these skills is certainly beneficial, but at the same time these pursuits have distracted a lot of testers from understanding and practicing the fundamental role of testing. The majority of the available testing tools fall under the automation or test management category. Modern software testers require powerful “testing” tools that help them think and test more effectively. In order to assist testers, I plan to develop a testing tool called “Testistant.”

“Testistant” is a thinking tool that would prompt testers to think well by asking them questions and sharing guidewords. It will also guide them to resources, ideas, possibilities, considerations, suitable tools, test techniques, quality criteria, learning pathways, cheat sheets, checklists, etc.

In short, it will help testers think well and test better. It would be built from publicly available resources and Creative Commons licensed materials. Testers would use it as a companion. It would also provide them with mind-sharpening challenges, exercises, and updates about upcoming testing sessions and events. This tool would be developed as an open-source project, made available to testers for free. In the prototype, I visualize this as a web application, which would make it easily available across platforms and devices.
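To make the guideword idea concrete, here is a minimal, hypothetical sketch in Python of how such a prompter could work. The quality criteria, question bank, and prompts helper are illustrative inventions for this example, not the actual Testistant design.

```python
# A minimal, illustrative sketch (not the actual Testistant design): map quality
# criteria to guiding questions, then surface a few prompts to nudge a tester's
# thinking during a session. A real tool would draw its question bank from the
# publicly available and Creative Commons licensed resources mentioned above.
import random

GUIDEWORDS = {
    "usability": ["Is the happy path obvious to a first-time user?",
                  "What happens if the user abandons the flow halfway?"],
    "reliability": ["What happens on retry after a failure?",
                    "How does the feature behave when a dependency is slow?"],
    "security": ["What could a user see that they should not?",
                 "Where does untrusted input enter the feature?"],
}

def prompts(criteria: list[str], per_criterion: int = 1) -> list[str]:
    """Pick a few guiding questions for the quality criteria under test."""
    picked = []
    for criterion in criteria:
        questions = GUIDEWORDS.get(criterion, [])
        picked += random.sample(questions, min(per_criterion, len(questions)))
    return picked

for question in prompts(["usability", "security"]):
    print("Consider:", question)
```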

I prepared this mindmap to brainstorm and note down the idea of this testing assistant tool.


Rahul Parwal RisingSTAR Finalist 2022


Laveena Ramchandani - UK


Laveena Ramchandani RisingSTAR Finalist 2022

Quality Embedded in Data Science – Data is the New Gold

Everyone is excited about data science and machine learning models, but as a tester there isn’t much exposure to the world of data science. Spreading testing eminence is great, but bringing a quality mindset into data science can help teams deliver an even better product with a holistic approach. Data science is the study of data: by studying and analyzing data we can gain many insights. Ultimately the aim is to learn from the data and keep optimizing the models to create tangible business value. IBM anticipates that data science will soon account for 28% of all digital jobs.

Bringing this mindset to data science has helped my team. I started by pairing up with data scientists and data engineers to understand the core product and how the model would help clients make better decisions. My strategy covered defect management, risk management, and testing processes, along with the different types of testing one could perform on the product, such as automation testing, accessibility testing, unit tests, integration tests, and front-end GUI tests.
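As a flavour of what unit-level testing on a data science model can look like, here is a minimal, hypothetical sketch of two checks on a model’s output contract. The predict_churn_probability function is an invented stand-in, not a real model or test from Laveena’s team.

```python
# A minimal, hypothetical sketch of unit-style tests for a data science model:
# check that predictions respect the output contract downstream consumers rely on.
def predict_churn_probability(features: dict) -> float:
    """Stand-in for a real model's scoring function (assumed interface)."""
    score = 0.3 + 0.5 * features.get("months_inactive", 0) / 12
    return max(0.0, min(1.0, score))

def test_prediction_is_a_valid_probability():
    for months in (0, 6, 24):
        p = predict_churn_probability({"months_inactive": months})
        assert 0.0 <= p <= 1.0

def test_more_inactivity_never_lowers_churn_risk():
    # A metamorphic check: increasing inactivity should not decrease the score.
    p_low = predict_churn_probability({"months_inactive": 1})
    p_high = predict_churn_probability({"months_inactive": 10})
    assert p_high >= p_low

test_prediction_is_a_valid_probability()
test_more_inactivity_never_lowers_churn_risk()
print("model contract tests passed")
```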

As far as I am aware, I was one of the first testers to have tested data science models, and I have shared my experience to help motivate others to get involved. I have done this by speaking at conferences on the topic of ‘Testing data science models’ and by sharing blogs with testers around the world to spread knowledge and contribute to the community. The blogs, which can be found on Medium, explain various testing topics in very simple language, including how testers can get involved in data science teams. Testers have also reached out to ask what resources would help them gain the right skills and expertise in this area.


Pallavi Sharma - India

Shift Left Locator

We say testing is everyone’s responsibility. But my question is: are we doing enough about it? It is 2022, and there are still more tools on the market to help web automation testers find locators than there is knowledge and tooling to help development teams build more testable web applications. The more thought-provoking part is that before making the application testable, teams first have to ensure that the application being built is actually W3C compliant. The idea came to me in 2005, when I was building a no-code platform called OpKey along with a dedicated team. One of the areas I worked on was creating our own locator finder, building complex algorithms to construct just the right locator for an element that doesn’t have a unique identifier on the HTML page.

The idea is unique and timely, as it will bring a shift-left approach to the way web applications are built. We test at the HTML level while applications are being built, and ensure they follow all compliance standards. We empower developers and testers, through learning and tools, to make the thing right by doing the right things. The problem I foresee is breaking the current mindset: instead of finding locators and building ever-cooler AI tools to automatically manage and heal tests, people will have to unlearn those habits and build the thing the right way from the start. The support I am looking for is to be empowered to work on this idea with like-minded people who believe that change is the only way forward. https://www.w3.org/WAI/ER/tools/
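As an illustration of what testing at the HTML level for locator-friendliness might look like, here is a minimal sketch using only Python’s standard library. The element and attribute lists are assumptions chosen for the example, not part of OpKey or Pallavi’s actual proposal.

```python
# A minimal sketch of a shift-left testability check: scan an HTML document for
# interactive elements that lack a stable, unique locator (id or data-testid),
# and for duplicate ids, which break locators and W3C validity alike.
from html.parser import HTMLParser

INTERACTIVE = {"a", "button", "input", "select", "textarea"}
LOCATOR_ATTRS = {"id", "data-testid"}

class TestabilityChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.findings = []
        self.seen_ids = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        line = self.getpos()[0]
        if tag in INTERACTIVE and not LOCATOR_ATTRS & attrs.keys():
            self.findings.append(f"<{tag}> at line {line} has no id/data-testid")
        if "id" in attrs:
            if attrs["id"] in self.seen_ids:
                self.findings.append(f"duplicate id '{attrs['id']}' at line {line}")
            self.seen_ids.add(attrs["id"])

checker = TestabilityChecker()
checker.feed('<form><input name="q"><button id="go">Go</button>'
             '<button id="go">Again</button></form>')
for finding in checker.findings:
    print(finding)
```

Run as part of the build, a check like this surfaces untestable markup before any automation code is ever written, which is exactly where shift-left puts it.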


Pallavi Sharma RisingSTAR Finalist 2022


Indranil Sinha - Sweden


Indranil Sinha RisingSTAR Finalist 2022

Automatic Calculation & Display of Test Automation Coverage of Microservices

I work at Marginalen Bank, where we have conceptualized and implemented a tool for automatically calculating the test automation coverage of microservices. The coverage is displayed in a dashboard we named “Test Autobahn”, a term we coined.

Any team in the world working with microservices can relate to how hard it is to measure test automation coverage. Worse still, there are no ready-made tools on the market today that can do the calculation automatically. Thus, on the one hand test automation is being developed, but on the other hand we don’t know how much automation has been done so far unless we calculate it manually. Manual calculation is possible, but it takes a lot of time (depending on how many microservices you have) and is error prone, and the calculated value becomes stale after a few weeks (depending on the pace of system and test automation development).

Our solution automatically calculates the total number of microservices, the total number of endpoints, and the total number of automated tests developed, and displays the automation coverage as a percentage on a dashboard. We are not selling our implementation, but the idea can be embraced by teams across the world and developed within their own organizations to best fit their needs.
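The calculation itself is straightforward once endpoints and tests are discoverable. Here is a minimal sketch, assuming (hypothetically) that each microservice’s endpoints can be enumerated, for example from OpenAPI specs, and that automated tests are tagged with the endpoint they exercise; this is not Marginalen Bank’s implementation, only an illustration of the arithmetic the text describes.

```python
# A minimal sketch of per-service and overall test automation coverage,
# assuming endpoint lists and test tags are already collected somehow.
from dataclasses import dataclass

@dataclass
class Microservice:
    name: str
    endpoints: set[str]         # e.g. discovered from OpenAPI specs
    tested_endpoints: set[str]  # e.g. collected from test annotations

def coverage(service: Microservice) -> float:
    """Percentage of a service's endpoints exercised by automated tests."""
    if not service.endpoints:
        return 100.0
    covered = service.endpoints & service.tested_endpoints
    return 100.0 * len(covered) / len(service.endpoints)

services = [
    Microservice("accounts", {"GET /accounts", "POST /accounts"}, {"GET /accounts"}),
    Microservice("payments", {"POST /payments"}, {"POST /payments"}),
]
for svc in services:
    print(f"{svc.name}: {coverage(svc):.0f}% of {len(svc.endpoints)} endpoints automated")

total_covered = sum(len(s.endpoints & s.tested_endpoints) for s in services)
total_endpoints = sum(len(s.endpoints) for s in services)
print(f"overall: {100.0 * total_covered / total_endpoints:.0f}%")
```

The hard part, and the value of a tool like Test Autobahn, is keeping the endpoint and test inventories up to date automatically, so the percentage never goes stale.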

This is a video of a talk from last year where I spoke about Test Automanuation and Test Autobahn.


Kimberly Snoyl - Netherlands

HeuriChecker to Automate the Usability Heuristics

The idea came about as I have been speaking about UX testing at conferences and meetups. I see that testers focus too much on test automation and working functionality, rather than on the usability and user experience of the product. I think UX testing should be done at all stages of development. One of the reasons testers don’t test UX is that they don’t know much about how to test it in a structured way. And since testers are usually in a time squeeze, UX testing needs to be easy and fast.

UX is seen as something quite subjective, and difficult – or maybe even impossible – to automate. In my talk I discuss Nielsen’s 10 usability heuristics: ten rules of thumb that are always the same… That sounds like something that CAN be automated!

In 2020 I asked an intern to research whether these heuristics could be automated, and he found that some of them can! Since I am not the most tech-savvy person myself, I have asked some colleagues from the UX Testing Guild at Capgemini, who are now building an MVP of the HeuriCheck application!

My vision for this application is that it will help companies easily check whether they are following usability rules and whether their application can be improved. This tool can also become a permanent check that testers always use when testing front-end applications. They can learn from the tool too, since it will give them information about all the heuristics, why they aren’t compliant, and what they can improve.
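As one hypothetical example of automating a heuristic, here is a minimal sketch of a check inspired by Nielsen’s heuristic #4, “Consistency and standards”: it flags pages that mix different words for the same action. The synonym groups and the helper function are illustrative assumptions for this example, not part of the HeuriCheck MVP.

```python
# A minimal, hypothetical heuristic check: flag inconsistent terminology for
# the same action within one page, a partial proxy for Nielsen's heuristic #4.
import re

# Word groups that should be used consistently within one application.
SYNONYM_GROUPS = [
    {"sign in", "log in", "login"},
    {"sign out", "log out", "logout"},
    {"delete", "remove"},
]

def consistency_findings(page_text: str) -> list[str]:
    """Flag synonym groups where a page mixes more than one variant."""
    text = page_text.lower()
    findings = []
    for group in SYNONYM_GROUPS:
        used = {w for w in group if re.search(rf"\b{re.escape(w)}\b", text)}
        if len(used) > 1:
            findings.append(f"inconsistent wording: {sorted(used)}")
    return findings

page = "Log in to continue. New here? Create an account, then sign in."
for finding in consistency_findings(page):
    print(finding)
```

A check like this will never capture all of a heuristic, but as the text argues, even partial automation gives time-squeezed testers a fast, structured starting point.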


Kimberly Snoyl RisingSTAR Finalist 2022


Best of Luck to All


Join us in congratulating this year’s RisingSTAR Award Finalists and in wishing them well as The Supporters review their entries and choose the 2022 RisingSTAR Award winner.

See our RisingSTAR Award introduction page for more information about this award, and find further details about the 30th EuroSTAR Software Testing Conference.