Language, Growing Your Career, and the Tester’s AI Toolkit: Jakub Konicki & Adam Matłacz on Evolving Roles and Skills

Jakub Konicki and Adam Matłacz

QA Manager at MakoLab and CTO at Space To Grow

This episode was recorded LIVE at the EuroSTAR Conference in Stockholm.

Thank you to our Community Hosts for Season One, Russell Craxford from the UK Department for Work and Pensions, and Gek Yeo of AIPCA.

On this episode, you will hear from Russell Craxford from the UK Department for Work and Pensions, Jakub Konicki, QA Manager at MakoLab and podcaster, and Adam Matłacz, CTO at Space To Grow, Poland, and EuroSTAR speaker and tutorial lead. They talk about growing in your job with AI, the future of LLMs, and the complications of using different languages while creating with AI. We hope you enjoy!

Joseph: Welcome to the EuroSTAR Community Podcast. For over 30 years, we have been bringing the global testing community together to share knowledge, connect, and grow. Check out eurostarconferences.com for our in-person conferences and access to online testing resources. Thank you for listening to this podcast.

Our first season was recorded live at the EuroSTAR conference in Stockholm.

In this episode, you will hear from Russell Craxford from the UK Department for Work and Pensions, Jakub Konicki, QA Manager at MakoLab and podcaster, and Adam Matłacz, CTO at Space To Grow, Poland, and EuroSTAR speaker and tutorial lead. They talk about growing in your job with AI, the future of LLMs, and the complications of using different languages while creating with AI. We hope you enjoy!

Russell Craxford: Hello and welcome to another episode of the EuroSTAR podcast. Today we are in Stockholm at the EuroSTAR conference, and we're recording in a special studio area with some special guests. But first of all, let me introduce myself. I'm Russell Craxford, and I'm hosting this episode today. And with me I have

Jakub Konicki: Jakub Konicki.

I am the creator of the Polish podcast Poszklanie i Natestowanie. I think it is the most popular testing podcast in Poland at this moment. So I'm glad to be here.

Adam Matłacz: And I'm Adam Matłacz, CTO at Space To Grow, a small startup, and also a speaker here at EuroSTAR.

Russell Craxford: So thank you for joining me. And it's good to hear about another podcast, a bit of competition on the podcast front. But you know,

Jakub Konicki: You know, my podcast is only in Polish, though, so you're probably not going to listen to it if your Polish skills are a bit minimal. But recently I used AI to make English subtitles. So,

Adam Matłacz: Okay. And you can also use ElevenLabs to translate it to English, in your own voice.

Yeah.

Russell Craxford: Okay. Well, see, this is what modern technology can do. It's interesting, isn't it? So, is this the future of testing? Is this the future of IT for us? Or is there another direction you think we're going in?

Adam Matłacz: Would you like to start? No? Okay, I'll start. So, about the future of testing: I think at the conference over the last few days we had several opportunities to see different angles and different approaches, with people talking a lot about technology that can transform the future of testing right now, especially AI and LLMs, which from my perspective are a very empowering tool.

So in the past, to enter the testing world, you would have to have some technical skills, technical knowledge, some experience, and only after that could you reach a mid or senior position, right? But right now, with tools like LLMs that can help you gather the skills, automate some of the work, and help you out when you don't know what to do, this tool can support you, guide you, even generate code for you, right?

I think it will be much, much easier for people to actually enter the stage and start working in the IT industry as testers. And it will also empower people who previously didn't have the technical background to work here, right? So, for example, instead of completing a university degree in mathematics or physics, you can complete one in English.

And that basically could be even better, because your language skills are great, and that's the thing needed for an LLM.

Jakub Konicki: Yeah, I think that AI is the future, but I think this is a tool that will be used, not one replacing the testers. And in one of the lectures, yesterday or two days ago, there were a few points that I liked a lot.

For example, that AI will widen the gap between juniors and seniors. And I think that juniors put too much effort into leaning on AI, instead of using it as a tool to help with their daily work. Okay. So interesting, right?

Adam Matłacz: Because I fully agree that we will still be in the driving seat. Yeah, that's my perspective, right? Mm-hmm. But what I think is that the tool is actually empowering juniors to be more senior, thanks to all of the additional possibilities that the tools are giving you, right?

Jakub Konicki: Okay, so you will get to the point where you can write automation, et cetera, much faster. But you don't understand the foundations lying behind it.

Adam Matłacz: Yeah, that's true. What I had in mind is that, for example, if I'm a junior, I don't know how to write a good test case, so I can support myself with the tool: okay, please help me write a test case. If I'm a junior, I don't necessarily know automation and programming, and I can ask for help.

Please generate an automated test case for me, and then please teach me how this test case was written. So then I can actually learn the basics as well, right? Okay, but,

Jakub Konicki: But you're talking right now from the senior perspective, and you know that you need things explained to you by the AI. But when you are a junior, you type something like: write me test cases.

Okay, that’s great. That works. That’s good enough. Yeah. And you don’t dive into it.

Adam Matłacz: Well, that's true. But again, I think that's kind of a matter of character traits, right? So if you're lazy and you just want to get through your job faster, then you probably will not dig deeper. But if you want to do your job well, and actually, I think this is a really important trait of a good tester, right?

You try to understand, you try to ask questions, you try to question things, right? Not like: okay, my job's done, now I go play PlayStation, right? Yeah,

Russell Craxford: Do you think, though, that with AI, as you mentioned, getting there faster and learning, it means that to be senior, or to reach the next level up, the skills actually change?

The priorities change. If you can do that with the AI, then as a leader or a line manager I'm going to value that skill plus other skills, and we encourage people to be more multi-pronged, more multi-skilled. Yeah, definitely. If you can do that with a tool, then anyone can do that.

So that's not important now; that doesn't mark you as a senior. Therefore, you need to also do X or Y, or you need the communication skills, or you need to work on some other

Jakub Konicki: technical skills.

You know, to be good at prompting, you need to be precise and you need to know how to communicate.

So if you are a senior and you are experienced in the area you work in, you will be better at using AI than other people. So I think that the skills for seniors will also change with AI, and the wider audience will only be able to use AI properly if they are experienced in some area.

Yeah.

Russell Craxford: Yeah. You’ve gotta have the skill to use it properly. Like any tool, anyone can use a tool but doesn’t mean you use it. Well. Yeah. , I’m curious ’cause it’s something that’s just dawned on me, which is obviously I’ve seen a lot of ais and I’ve dealt with more in English. Mm-hmm . So our curiosity, pair of you, I think aren’t from England.

I know that much. You’re a Polish podcast and so on. So, you know, how are the experiences of these sort of things? Do you have to use them in English? Are there ones in Polish? How does that work? Because you said the nuance of the language is so important in getting value out of them. So it forces everyone to use English.

Jakub Konicki: Yes, but it's dangerous, isn't it? With LLMs: we did a proof of concept of our own LLM in our company lately, and the problem is that the LLM, behind the scenes, translates Polish to English, then performs all operations in English, and then translates it back to Polish, so

Russell Craxford: Oh, okay. Extra layers, yeah. Lots of translation,

Jakub Konicki: right?

Yeah.

Adam Matłacz: So I also use it in English. Whenever I need something, even if I need the output at the end to be in Polish, I prefer to do all of my operations in English, and then at the end use Google Translate, or even the LLM, to translate it to Polish. And then I still have to polish it, right? So, polish the Polish, right?

So basically, I prefer to use it in English, and I don't think, at least at this point in time, that English will disappear as the lingua franca. I think it will keep going as the lingua franca, at least for some time. Because right now, more and more tools have native support, like even my Samsung phone, right?

Which has native support for live translation of conversations and so on. So when this becomes much, much better, then maybe it won't matter as much, yeah. But at least for now, I stick to English.

Russell Craxford: That makes sense. I was just wondering whether it made English more important or less.

Because in theory, AI could replace the need to speak in a language, it could just do it for you.

Jakub Konicki: Yes, but LLMs, or different AIs, are trained for particular things. So, for example, when we use ChatGPT or Gemini or something else, it's actually not a good translator. So we should write our prompts in ChatGPT or another LLM, and then use a proper tool for translating, for example DeepL.

This is a great translator, way better than Google Translate, but it is an AI only for translating.

Russell Craxford: That makes sense. Well, yeah, I think that's one of the things. Creating a general intelligence is obviously the ambition, but in reality, at the moment, we've got specialised intelligence.

We've got ones that focus on certain skill sets, certain abilities. It's a bit like Einstein's general theory of relativity, but let's not go down quite that road: we haven't got one thing that can describe everything yet. That's probably the ideal. At the moment we've got specialisms, tools built for a purpose. If we use them for that purpose, we get good results; if we use them for other purposes, we get mixed results. So we've got to be careful with these things, to a degree.

Adam Matłacz: I wonder about your opinion, guys, on one thing. When I went through all of the spots in the venue from the vendors, the vendor spots.

Russell Craxford: Yeah,

Adam Matłacz: 95 percent of the tools right now are like: okay, we have an LLM plus this for generating test data; we have an LLM for generating test cases, right?

So they put it almost everywhere. Yes. And I don't know how it works today, but in at least one of the talks, yesterday I think, one of the guys said that the future of testing, in his head, is that the tools will become more and more self-sufficient at, for example, generating test cases: you just enter the requirements, put the UI in the tool, and then each day you will have a different set of test cases, because each time they will be automatically generated by some kind of tool, maybe from one of the vendors here, right? So do you think that this is really the future? And if so, when will it come? Or is it totally not something that is possible?

Russell Craxford: So I reckon a lot of tool vendors are aiming for that, because that's what they think customers want. So you'd have a solution where you input some general text requirement of some form, give it some interface, be it an API, UI, message, whatever system, and it would put two and two together.

But there's still that element of judgment, which I think they're going to struggle to replicate. I think if you write great requirements, you could make tests out of them quite easily, like behaviour-driven testing frameworks and stuff like that. But you're going to struggle from a vague description.

And an interface, to see whether they match. Because it's opinion. And AI can eventually have that broad knowledge and understanding of human endeavour, but that's a lot of learning. That's a very big model, in my head. So I think it's an aspiration. I can see them going for it because I can see it selling, but I can't say it's something they could easily achieve.

I see it as something that they might market but will not achieve. I liken it to chatbots once upon a time. There was this idea that chatbots would mean call centres died, that we'd never need to speak to a human again because they'd just deal with everything intelligently, handle natural language perfectly fine.

Reality came around: they weren't that great, they weren't that intelligent, but they had a purpose. They could answer basic queries and provide information, but they were never going to answer complex things. Software's complex. So I think tools might generate ideas, but I don't think it's going to go as far as solving it.

So that’s my view.

Jakub Konicki: I think similarly. First of all, AI is a buzzword at this moment, so everyone will say: hey, we are using AI, so you need to buy our software. But second of all, it would generate fine test cases and plenty of data only if you provide perfect documentation, perfect requirements. Okay.

How many projects have you worked on where they were perfectly described? So,

Adam Matłacz: Zero. But maybe that's the thing: that right now we will move our focus to writing really good documentation. Shift further left,

Jakub Konicki: yeah. Shift

Adam Matłacz: left,

Jakub Konicki: yeah.

Russell Craxford: You're right, better requirements would get better outcomes, but a lot of it is that you only learn what's good once you've built it.

Yeah. The truest Agile sense is that you sketch something up, build it, get some feedback on it, and iterate. So it's going to be a

Jakub Konicki: struggle. I think that if we have clear requirements and a lot of documents that we put into the software, it would be great. But as you said, it would generate many ideas, many test cases, but basic test cases.

And I think that, for example, exploratory testing will sit on top of that, and it will need to be performed by a human. But, for example, if you have some simulator or something like that that needs to be pretty, you know, the UI must be attractive.

User friendly. So you need to look at it. And AI could give you some insights: okay, the trend is that the buttons are too big, et cetera. But as a human, you will be better placed to form an opinion on that.

Adam Matłacz: Yeah, I wonder about the progress, because today it's too hard for us to grasp that AI could do stuff like look at the UI and say if it's good or bad.

But my point here is, and I talk about this in my workshop: when I started preparing my workshop for this conference, I wanted to show everybody all of the limitations of the AI, how bad it is at things, and that basically it's just a mere assistant for us. And then, about one month before EuroSTAR, in May, GPT-4o was released, right? And I actually had to scrap one of my slides, because the limitation was no longer there. They fixed it, yeah. There is this puzzle about getting a wolf, a sheep, and a cabbage across the river. Yeah, yeah, everybody knows it in that version, right?

And when you put that standard version into ChatGPT, it works perfectly. But what I had done was change the characters: the farmer to an astronaut, the wolf to, I don't know, a robot or whatever, right? So I changed three different characters, right? Yeah. Three different characters. The logic was exactly the same.

And it sucked. It didn't solve the puzzle. But in May, the 14th of May I think, when GPT-4o was released, it solved it perfectly, right? So my point here is that at that point in time, I thought: okay, it's impossible for the chat to solve the puzzle, because you basically need the logic, and it's trained on text.

And since the text was always about the wolf, cabbage, and sheep, right, it would not solve it. And right now, I don't know how it works, right? But for some reason, it was able to get the logic, even though it's a language model, right? Uh huh. So I wonder whether in the future, things like the interface too: it could scan the interface, knowing the best practices and stuff like that, and give you some really valuable

Jakub Konicki: Okay, but I wonder: if we get to a future like that, wouldn't everything end up very similar to each other?

Because there will be good practices, et cetera, that could drive it all to the centre. Yeah, yeah. And AI will pick, AI would

Russell Craxford: feed AI in effect in the end.

Adam Matłacz: Yeah, that's true, that's true. The more we use AI and then feed AI with the outcome from AI, the more bland, the more AI-generated, the less distinctive things might become, right? Yeah. So we need to be aware of that as well,

Russell Craxford: If AI is using human-centric data as input, like how many sales this image generates, what the customer perception of the image is, if you have data like that to go off, it can judge more fairly than just looking at what websites exist.

Existence isn't good, if you know what I mean, and quantity doesn't equal good. But it is interesting, because these things will learn, and we will grow them and improve the technology. My history tells me that there are ceilings: we think things can solve everything, and they will solve most of it, but then we'll hit a complexity level.

That's really hard to get over. And for AI there was a long time where we really struggled. And then, was it 2022? I think ChatGPT came out, the buzz, you know, the genie came out of the bottle and the potential was known. Lots of people jumped on the bandwagon. It's made massive improvements, but where it can get to, I think, is the debate.

The potential is the same potential we were talking about with AI 20 years ago, but we've now seen a ginormous leap. There's much more we've seen it proven to do. And it will be interesting how it affects testers going forward, and software, and the tools, because it's created a new industry.

We now have to test AI-based solutions, which I think is interesting. So it may have got rid of some of our jobs and some of the roles, and changed them, but it's created a new skill, a new industry. You mentioned prompts: understanding how they work, what to do, how to manipulate them, that's a whole new area to talk about.

How many talks have been about AI or related to AI at this conference? That’s because of AI, so it’s shifting things.

Jakub Konicki: Okay, but on the one hand we have the lectures about AI, and on the other hand we still have the same problems as a decade ago. For example: why doesn't the business listen to testers? And how do we change that?

So, okay, we have AI, but it is only a tool to perform our work, and our work itself often still doesn't work. No.

Russell Craxford: Good point.

Adam Matłacz: Yeah. I don't like too much hype, that's true, right? I think there's too much hype right now. But at the same time, I wouldn't go to the other extreme and say it will never take our jobs, that we will always need more testers. Okay. Yesterday we had a conversation with a colleague of ours from Poland.

Today she's a tester, but she studied to be an English translator. And during her studies, it was like 10 years ago, they had conversations at the university about how Google Translate would not kill their jobs, and there would still be jobs for them, and all would be good, because it was so bad and people still needed human translators.

10 years later, some of these people work in McDonald's, right? Yeah. So that's the problem, right? At that point in time, it looked so bad that they didn't take it seriously. And, yeah, things

Russell Craxford: change to get better. Yeah.

Adam Matłacz: And testing will still be needed. It's just that we will be able to do it maybe faster.

And that means that either we need fewer people, or we need to reskill and think about what else we can do in the area. Right.

Russell Craxford: If you think about what testers do in teams now, there's the testing aspect of it, but there's also the question-asking: is this the right thing to do? Does it look good? Why are we doing this?

So you might see a shift towards more people-centric roles, facilitation-type roles, enabling-type roles. But again, it's future prediction; it's hard to tell, isn't it? So it's just interesting to see where it goes. Are there any other future directions of testing that you're seeing or thinking about?

Like AI, obviously, it is the thing, it is the hype, as you say. This is probably going to be podcast number 3,000 talking about AI, but that's fine. But are there any other areas? What kind of trends, transitions, or switches do you see, or anything else?

Adam Matłacz: So I’m still a big fan of exploratory testing as the thing that you mentioned, right?

So here people, humans, excel, and humans are needed. Again, AI might support them in that, but for sure this kind of creativity is something that, I think, for quite some time will still be beyond it. And testers who today are just writing test cases: writing test cases might be replaced by AI, so they can then move their skills to more analytical thinking, analytical skills, exploratory testing, and stuff like that. That's at least my perception.

Jakub Konicki: I think so too, first of all, as you said. But I also think that testers will have more technical stuff to do.

For example, 10 years ago, testers mostly just wrote the test cases, clicked through, and performed the tests. Now we have a lot of test automation, and many QAs are developers. Indeed. So I think the developer and tester roles will move closer together. Yeah. Makes a lot

Russell Craxford: of sense.

You know, the tools are there to help us build and test at the same time. I could see that evolutionary change. I think it's interesting. You mentioned test cases, and I'm curious whether the industry is going to move away from them a little bit. Because in certain fields, you need to have the evidence.

But with traditional test case solutions, test case management tools like some of the vendors have here: now we generate code, we generate automated scripts, documentation is code, and different models like that. And the AI tools, to come back to AI, make it easier to actually do some of these things in standard tool sets, to make them real.

So we may find that we converge, as you say, with developers a lot more, because historically I've seen lots of places where there are different systems for testers and developers, and I think what's going to happen is that the tool sets will converge. People will work together more on one thing, because, as you said, you can learn together more easily, the basic standards are there, you get hints, and if you're not sure, you can now ask: how do I do this?

And it'll give you a pretty good answer. Google exists for some of this already, but AI tools are going to make it so much easier and so much more in your face. You know, if you're in an IDE and you can get prompts that help you write the code, versus going to Google and figuring out how to select an element or something.

So I'm going to say that I think you might see convergence of tooling and, as you said, of the roles. I think that's probably likely. And we might see, not the death of test cases, I'm not going to be that dramatic, but lower expectations on test cases. Not that we stop writing down what we're going to do or what we did do, but it's going to be more about auditing what we did do.

Rather than writing down what we should do, if you see what I mean. So it's going to be more audit than creativity, because tools can do it as we go. So exploratory testing can get documented more easily, and things like that.

Adam Matłacz: True that. For years I was working for a company delivering hearing instruments, so medical devices, and for them, having test cases was something essentially needed, right?

Yeah. I would love to see a world in which I, as a tester, press a button, and the documentation, together with the test cases, their execution, and the results, is generated somehow by the AI. And then I audit it, just to check that all of the hazards for people were covered, just to double-check, and at the end say: okay, we are actually fine here, or maybe we missed some other direction. Then I, as a human, as the driver, make the change, and the next test session is done somewhere else, right? But basically, I don't spend so much time on the writing, which is not the most creative task in the world, right?

Russell Craxford: Yeah, exactly. And we can record our steps much more easily, and things like that. So all these people that built record-and-playback tools to help with automation: you've got a lot of AI tools that are going to, I think, replace that need, or evolve that use case a little, because they'll be more resilient, they'll be better written.

It won't just record your clicks; it'll actually be thinking about how to structure these things, how to write them down. It'll be interesting to see how the world does change. I think we can say one thing for certain in tech: the world will change. Are there any final notes anyone would like to add?

Jakub Konicki: No, I think we've said everything.

Adam Matłacz: One thing that I could add is that I wouldn't be so afraid of losing my job. If there were one piece of advice I could give to the listeners: try to keep up, right? So try to keep up, observe what is happening out there.

Don't be afraid. Be curious about what's happening. Curious, that's a good word for it. I think that's a much better word than just being afraid of what happens. Because you might just join the boat and then row with

Russell Craxford: Yeah, let the current take you, sort of thing. But it makes sense.

Jakub Konicki: I have one thing. These days, many testers complain about not having time to perform everything.

So by using AI to generate the boring stuff, et cetera, we can now sit and do what we actually like, and spend more time on it. Yeah. So don't be afraid, and get into it.

Russell Craxford: Tools are there to assist us; that's the point of them. Yes, they may replace things and change the world, but ultimately they are there to assist the outcome. I think we all care about quality software.

Adam Matłacz: But also don’t be too optimistic, right?

So don't expect that AI will do everything for you, right? I think Michael Bolton very often tries to attack AI, and to some degree I agree with him: if you buy into too much hype, remember you still need to test the tests that you have written using it. So

Jakub Konicki: yeah, yeah.

Treat AI like an assistant and like a Google alternative. Yeah. If we want to search for something or learn something, AI is a great tool, but it doesn't work for you.

Russell Craxford: Don't believe it's the perfect truth. Believe it's giving you an answer, but it may not be the answer, I guess. It's like with anyone: speak to Adam here, ask him a question, take what he says, learn from it, but ask the question of ten people.

Then use that knowledge.

Jakub Konicki: But we still need to remember that AI also makes many mistakes.

Russell Craxford: Yeah.

Jakub Konicki: So

Russell Craxford: Well,

Jakub Konicki: if

Russell Craxford: AI I’m gonna end on a note, I guess, which is, if AI uses general knowledge and information on the internet, you know, there’s people on the internet that say the Earth is flat, there’s people on the internet that say the Earth is round.

So for the answer to whether the Earth is flat or round, from AI-generated models and the things built on them: if you demand 100 percent certainty from the model, you're going to get vague answers and things like that. So you've got to take it with a pinch of salt, because the things we know aren't finite.

We don't know everything yet, so AI can't know everything. And sometimes we trust things implicitly when we shouldn't. So maybe we'll end on that. But you know, it will help us, it is helping us, and it will be interesting to see which way it goes. But thank you very much for joining us.

Thank you.

Adam Matłacz: Thank you very much for having us.

 
