Historically, it’s safe to say you haven’t often heard AI and test automation discussed in tandem. But that is changing. AI testing automation is poised to play an increasingly important role in the future of automated testing.
AI test automation is still a relatively new concept to me, but it’s also one that I am exploring eagerly as I work to stay at the forefront of the automated testing field. In this article, I want to take the opportunity to highlight why AI testing is so important, explain how AI bots can be used in automated testing, and discuss some of the challenges that we still need to solve in order to make the most of AI testing.
The Role of AI Testing
Automated software testing is a definite MUST. It's an exciting time for the testing community. Everyone is embracing the importance of building testing guards around everything. But what is the role of AI testing? It will transform how we approach testing and how it gets done. In theory, I see three potential solutions involving AI within your testing ecosystem.
The first reasonable use of AI focuses on test management and the automatic creation of test cases. It reduces the level of effort (LOE) by enforcing built-in standards and keeping everyone consistent. The second reasonable use of AI focuses on generating test code or pseudocode automatically by reading the user story acceptance criteria. The third option, codeless test automation, would create and run tests automatically on your web or mobile application without writing any code.
These days AI is everywhere—from Siri, Alexa and Google Search to Google Assistant, Slackbot, and more. Each of these AI applications has specific roles and goals. In order for AI bots to work, you need to define the specific goal of your AI—whether it’s creating test cases automatically, generating test code, performing codeless tests, or something else.
Training the AI Bots
The general concept of AI is the ability of a machine to understand the environment and process the input data to perform an intelligent action, then learn how to improve itself automatically. Voice-activated search took to the road a couple of years ago in Android Auto. By pressing a button on the steering wheel of my Volkswagen GTI to activate Google Assistant and saying, “Play Chris Stapleton,” Google Assistant uses AI to process the input and perform an intelligent action. In a few seconds, Chris Stapleton music is playing. It adds safety to my daily commute and allows faster retrieval of my favorite music artists.
There’s a lesson here: Even the smartest developers let bugs through, and most of the time the development teams are reacting rather than preventing. If you are a tester or work with a tester, you know that they like to ask a lot of questions. To build AI test bots, we must train the bots to process input data by asking questions and then perform an intelligent action, just like Google Assistant in Android Auto. The bots will only get better as we continuously strengthen the algorithms to recognize input patterns and behaviors.
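The "recognize input patterns, then act" loop can be sketched in a few lines. Everything here is an illustrative assumption: the training phrases, action names, and word-overlap scoring are placeholders for what a real bot would learn with an actual model.

```python
# Hypothetical training data: input phrases labeled with test actions.
TRAINING = {
    "run the regression suite": "run_regression",
    "check the login flow": "test_login",
    "verify the checkout page": "test_checkout",
}

def tokenize(text):
    return set(text.lower().split())

def classify(phrase):
    """Pick the action whose training phrase shares the most words.

    A real bot would use a trained model; this sketch only shows the
    core idea of mapping an input pattern to an intelligent action.
    """
    words = tokenize(phrase)
    best_action, best_overlap = None, 0
    for example, action in TRAINING.items():
        overlap = len(words & tokenize(example))
        if overlap > best_overlap:
            best_action, best_overlap = action, overlap
    return best_action

print(classify("please check the login page"))
```

Strengthening the algorithm, as described above, amounts to growing the training data and replacing the crude overlap score with a learned similarity measure.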
Challenges with AI-powered Applications
AI test automation still has kinks to be worked out. The challenges and possible problems you may face when attempting to build AI-powered applications for testing are:
- Identifying and perfecting all the algorithms needed
- Collecting enough input data to train the bots
- Predicting how the bots will behave given new input data
- Ensuring the bots can repeat tasks even when the data inputs are new
- Accepting that training your bot will never end, as the algorithms are continuously improved
In many ways, AI testing is like teaching a child by example. It’s an arduous process, but one that pays off when done properly.
AI is no longer a buzzword. It's a reality. That’s just as true within the automated testing world as it is anywhere else.
If you take a moment to think about all the technologies we use on a daily basis, AI has already begun silently integrating into our lives. Get ready! The role of automated software testing is on the edge of dramatic change thanks to AI. They may not quite be here yet, but AI test bots are coming.
Greg is a Fixate IO Contributor and a Senior Engineer at Gannett | USA Today, responsible for test automation solutions, test coverage (from unit to end-to-end), and continuous integration across all Gannett | USA Today Network products.
In the last two years, he has helped change the testing approach from manual to automated testing across several products at Gannett | USA Today Network. To identify improvements and testing gaps, he conducted face-to-face interviews to understand all the product development and deployment processes, testing strategies, and tooling. He provides a formal training program that allows teams still performing manual testing to transition to automated testing.