With the adoption of agile methodology, companies are churning out new products like never before. This means that products need to be built, tested and validated in a matter of months. Though the shift to automated testing allowed for a huge leap in efficiency and accuracy, AI has the potential to do much more. Continuous testing backed by AI will change the way that we approach test creation and maintenance.
Automated tests reduce the potential for human error. Machines can “run” test cases and check for appropriate behavior, freeing people to spend more time on aesthetic issues and rare edge cases rather than performing mundane, repetitive checks. That is what the industry expected and wanted, until now. To meet the demands of continuous integration and delivery, we need to turn to continuous testing backed by AI.
The first viable use of AI will be the automatic creation of test cases. This will not only reduce the effort teams need to put in but also lead to more consistent and standardized tests.
AI will also make a great impact on the maintenance of generated test cases. As products evolve and grow, tests must be modified along with them. With an AI system in place, tests will be able to recognize changes in the product and adapt their end goals accordingly.
Companies have a wealth of product data from sources including log files, screen recordings of user actions, and the results of A/B testing. We can use AI/ML techniques to gather, examine, and observe production user data and look for patterns.
The first step is to choose stable production data. This means removing any data generated by erratic behavior, malicious activity, and of course, the bugs themselves. The AI model will use this cleaned data to generate tests, which is particularly useful for integration testing.
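To make the filtering step concrete, here is a minimal sketch of what "choosing stable production data" could look like. The log format, session IDs, and thresholds are all hypothetical; a real pipeline would work from your actual log schema and use statistical anomaly detection rather than fixed cutoffs.

```python
from collections import Counter

# Hypothetical log entries: (session_id, action, status) tuples,
# as might be extracted from production log files.
LOGS = [
    ("s1", "login", "ok"), ("s1", "add_to_cart", "ok"), ("s1", "checkout", "ok"),
    ("s2", "login", "ok"), ("s2", "add_to_cart", "error"),  # session that hit a bug
    ("s3", "login", "ok"), ("s3", "checkout", "ok"),
    ("s4", "login", "ok"), ("s4", "login", "ok"), ("s4", "login", "ok"),
    ("s4", "login", "ok"), ("s4", "login", "ok"),           # erratic: repeated logins
]

def stable_sessions(logs, max_repeats=3):
    """Keep only sessions with no errors and no suspiciously repeated actions."""
    by_session = {}
    for sid, action, status in logs:
        by_session.setdefault(sid, []).append((action, status))
    stable = {}
    for sid, events in by_session.items():
        if any(status != "ok" for _, status in events):
            continue  # drop sessions affected by bugs
        counts = Counter(action for action, _ in events)
        if max(counts.values()) > max_repeats:
            continue  # drop erratic or malicious-looking repetition
        stable[sid] = [action for action, _ in events]
    return stable

clean = stable_sessions(LOGS)
print(clean)  # only s1 and s3 survive the filter
```

The surviving action sequences are exactly the kind of realistic end-to-end flows a test generator could turn into integration tests.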
AI can also recommend which tests should be performed. For example, by feeding the AI video data of user sessions, it can uncover common interaction patterns and render them as heatmaps. These heatmaps can then be used to create unit test suggestions for developers.
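A toy version of this heatmap-to-suggestion idea can be sketched as follows. The click coordinates, grid size, and threshold are invented for illustration; real tooling would derive interaction data from session recordings and map hot regions back to named UI components.

```python
from collections import Counter

# Hypothetical click events from user sessions: (x, y) screen coordinates.
CLICKS = [(12, 8), (14, 9), (13, 8), (300, 40), (301, 41), (13, 9), (150, 200)]

def heatmap(clicks, cell=50):
    """Bucket clicks into a coarse grid; the cell counts form a simple heatmap."""
    return Counter((x // cell, y // cell) for x, y in clicks)

def suggest_tests(grid, cell=50, threshold=3):
    """Suggest test coverage for any grid cell hit at least `threshold` times."""
    return [f"add test for region around ({cx * cell}, {cy * cell})"
            for (cx, cy), n in grid.most_common() if n >= threshold]

grid = heatmap(CLICKS)
print(suggest_tests(grid))  # only the hottest region crosses the threshold
```

The point of the sketch is the shape of the pipeline: raw usage data in, ranked coverage suggestions out, with the hottest areas of the UI tested first.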
According to test automation architect Greg Sypolt, “we are closer than ever to eliminating the burden of manually understanding how customers use the entire system, which will allow us to generate tests automatically. Moving towards AI/ML builds the right kind of quality coverage — no more guessing how to test your system.” (Sypolt, Using AI/ML and Production Data to Improve Software Testing)
When preparing for test case automation, we usually only estimate the effort involved in their creation, and we tend to forget about the cost of maintenance. A suite of tests will become obsolete if no one makes an effort to update them as products evolve.
Tests are incredibly useful when making large, breaking changes to your product - like before a big release; however, each release brings a new UI, which renders your existing tests useless. Ideally, the tester would be provided with wireframes detailing the changes, but of course, that rarely happens in practice.
With an AI system in place, it will learn more about your application every time you run a test. Over time, it will learn enough to identify individual elements of the UI. Thus, when something eventually changes, the AI will be able to modify test cases.
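This "self-healing" behavior can be illustrated with a minimal sketch: when a test's stored locator no longer matches, fall back to the page element whose attributes best resemble the ones remembered from previous runs. The element dictionaries, attribute names, and similarity threshold here are all hypothetical; real tools learn richer fingerprints (position, styling, DOM context) than this.

```python
def similarity(remembered, candidate):
    """Fraction of remembered attributes the candidate still matches."""
    shared = sum(1 for k, v in remembered.items() if candidate.get(k) == v)
    return shared / len(remembered)

def find_element(remembered, page_elements, min_score=0.5):
    """Try an exact id match first; otherwise return the closest
    candidate above min_score, or None if nothing is similar enough."""
    for el in page_elements:
        if el.get("id") == remembered.get("id"):
            return el
    best = max(page_elements, key=lambda el: similarity(remembered, el))
    return best if similarity(remembered, best) >= min_score else None

# The button's id changed between releases, but its tag and text survived,
# so the test can re-identify it and update its own locator.
remembered = {"id": "btn-buy", "tag": "button", "text": "Buy now"}
new_page = [
    {"id": "nav-home", "tag": "a", "text": "Home"},
    {"id": "btn-purchase", "tag": "button", "text": "Buy now"},
]
print(find_element(remembered, new_page))  # matches the renamed button
```

Instead of failing on the renamed id, the test locates the element it has learned over previous runs and can update its stored locator for the next run.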
There is also scope for AI in improving UX. Instead of limiting UX to making products accessible on mobile devices, AI tools can be used to create automated alerts when SLAs are not met, and to automatically set up emergency meetings. In addition, AI tools can be further integrated with RPA to automate activities related to test data management, test environment provisioning, and real-time reporting.
More often than not, it will take far more time to build an AI tool to generate test cases than it would to create the test cases manually. Organizations will have to invest a great deal up front, and it will take time for the benefits to pay off.
Right now, AI/ML tools that help in testing are mostly theoretical. There is, however, a great market for these types of products. If your organization is already building with AI/ML, it would be wise to invest in software testing. The next innovators to do this successfully will be unrivaled.
Swaathi Kakarla is the co-founder and CTO at Skcript. She enjoys talking and writing about code efficiency, performance, and startups. In her free time, she finds solace in yoga, bicycling and contributing to open source.