In their haste to embrace new technology in the name of innovation, many software professionals are guilty of a lack of critical thinking. Artificial Intelligence (AI) is the latest example of a trend that has leapfrogged over “Let’s think about this” straight into “Quick! Let’s get AI into every corner of our platform now!” With so much ambiguity around what AI even means, let’s first consider the benefits of applying critical thinking before we rush to implementation.
The philosophy in the Critical Engineering Manifesto recognises that the language of technology and engineering is, “the most transformative language of our time, shaping the way we move, communicate, and think,” and it is the job of the critical engineer to study, understand, and expose misuses of this language and therefore misuses of technology. Granted, the Critical Engineering view is on the extreme end of the “technology scepticism” spectrum. Nevertheless, it’s a tantalising challenge for the modern technologist and particularly relevant to the new wave of AI products and resulting news cycles. It would be foolish to assume that software, and therefore AI products, are naturally benign, neutral, or even rational. Like any software application, AI systems will, consciously or not, inherit the biases and fallacies of the engineers who wrote them and the datasets on which their models are based.
Applying critical thinking to AI
In his 1961 book, Computer Programming Fundamentals, the influential computer scientist Jerry Weinberg wrote about “formalized testing,” meaning any kind of testing that could be given a definitive structure and therefore automated. Sound familiar? Weinberg said, “It is, of course, difficult to have the machine check how well the program matches the intent of the programmer without giving a great deal of information about that intent. If we had some simple way of presenting that kind of information to the machine for checking, we might just as well have the machine do the coding.” Sound more familiar?
Weinberg is an example of a kind of technologist who is increasingly rare in our age of ubiquitous automation. He understood the division between the work a human is best suited to and the work to which a machine is best suited. In 2023, it seems most of the industry is determined to automate every aspect of software development and testing that doesn’t require a human sitting at the keyboard.
How can we apply more critical thinking to our encounters with AI? When you see an article or video about the latest AI testing tool, ask the question that the great Roman thinkers often asked during criminal trials: cui bono? Who benefits?
The definition of AI is up for debate
The most enthusiastic articles that turn up after a Google search for “AI software testing” are essentially vendor infomercials marketing a new AI feature. TechCrunch’s Devin Coldewey put it best when he wrote, “‘AI-powered’ is tech’s meaningless equivalent of ‘all natural’”, i.e., a marketing buzzword. Much like automation, the cloud, and microservices before it, AI still has no universally accepted definition.
I have been working as a solutions engineer for more than five years and still encounter teams whose definition of “test automation” is completely different from other teams’ (sometimes within the same company). If we’re struggling to agree on what test automation is or isn’t, then we’re a long way from reaching a useful consensus definition of AI. As the late, great computer pioneer Larry Tesler said, “We might as well define AI as ‘whatever machines haven’t done yet.’”
At the risk of being called a Luddite, I do believe AI has its place in the toolbox of the modern software tester. If we refine our definition of AI to refer to technology like machine learning or large language models, we can start to enter more useful rhetorical territory.
Let’s slow down the rush to adopt AI
There’s only so much that an individual can do to mitigate AI hype at the macroeconomic level, but the good news is that we can take steps in our day-to-day work to think more clearly and make better decisions about the applications of AI. For example, the next time AI is proposed as a solution to a problem that your team or company is facing, don’t be afraid to ask why. Is there a reason that the solution has to involve natural language processing or generative AI? You might find that a better solution involves a simple process change or the refactoring of some suboptimal code. Apply Occam’s Razor to the challenges facing your team; the best solution is often the simplest one.
As James Bach puts it in Thinking Critically About AI, large language models can help software testers by “brainstorming test ideas, or creating a shallow set of output checks which we can quickly verify are not completely broken.” Nevertheless, it’s important to remember that, despite all the hype, a software testing tool that is “AI-powered” is still just a tool, and to paraphrase Bach, the best thing that software testers can do is to apply some good old-fashioned critical thinking and use AI responsibly.
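To make Bach’s idea of a “shallow set of output checks” concrete, here is a minimal, hypothetical Python sketch: the kind of quick sanity checks an LLM might draft for, say, a sorting routine, and which a human tester can verify at a glance before trusting them. The function name and test cases are my own illustration, not taken from Bach’s article.

```python
# A hypothetical sketch of "shallow output checks": quick, automatable
# sanity checks a tester can review in seconds. An LLM might draft a
# list like this for a sorting routine; the tester's job is to confirm
# the checks themselves are not completely broken before relying on them.

def shallow_checks(sort_fn):
    """Run a few shallow checks against sort_fn; return a list of failures."""
    failures = []
    cases = [
        ([], []),                    # empty input
        ([1], [1]),                  # single element
        ([3, 1, 2], [1, 2, 3]),      # typical case
        ([2, 2, 1], [1, 2, 2]),      # duplicates preserved
        ([-1, 0, -5], [-5, -1, 0]),  # negative numbers
    ]
    for given, expected in cases:
        got = sort_fn(list(given))   # copy so the input list is untouched
        if got != expected:
            failures.append((given, got, expected))
    return failures

if __name__ == "__main__":
    print(shallow_checks(sorted))  # → [] (Python's built-in sort passes)
```

Checks this shallow will not catch subtle defects, which is exactly the point: they are a cheap first filter, and the critical thinking about what to test next remains the human’s job.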
Joe Joyce has been a solutions engineer at SmartBear for more than five years. With over 10 years of experience in the tech industry, he has held roles at Blue Tree Systems (an ORBCOMM® Company), eir Ireland, and SQS Group.
EuroSTAR Huddle shares articles from our community. Check out our library of online talks from test experts and come together with the community in-person at the annual EuroSTAR Software Testing Conference. The EuroSTAR Conference has been running since 1993 and is the largest testing event in Europe, welcoming 1000+ software testers and QA professionals every year.