Posted February 14, 2019

Using AI/ML and Production Data to Improve Software Testing

You may not think that artificial intelligence (AI) or machine learning (ML) have much to do with software testing. So far, software tests have not been a major part of the AI and ML conversation.

But I’m here to suggest that they should be. In this post, I offer some tips on how you can use AI or ML in conjunction with production data to drive a smarter type of regression testing to improve system quality.

What are AI and ML? And how are they different?

Let me start by explaining what AI and ML mean, how they relate to each other and how they are different from each other. These two buzzwordy terms are tossed around so frequently these days that it can be easy to misinterpret what they actually mean.

Artificial intelligence emphasizes the creation of machines that can apply intelligence to carry out tasks in ways that mimic human reasoning and reactions.

Machine learning is a subset of artificial intelligence. It relies on working with large datasets (big data): gathering, examining, and analyzing the data to discover common patterns and explore differences.

Thus, AI and ML both involve data and efforts to drive decision-making using data, but they are not the same thing.
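
To make the distinction concrete, here is a minimal pattern-discovery sketch, assuming scikit-learn is available. The session features, sample values, and cluster count are invented for illustration; the point is that nobody tells the model what the patterns are, it groups similar sessions on its own, which is what "discovering common patterns" means in practice.

```python
# Minimal sketch: machine learning as pattern discovery in usage data.
# The session features, sample values, and cluster count are invented.
import numpy as np
from sklearn.cluster import KMeans

# Each row is one user session: [pages_viewed, searches_run, purchases]
sessions = np.array([
    [12, 4, 1],
    [3,  0, 0],
    [15, 6, 2],
    [2,  1, 0],
    [14, 5, 1],
    [4,  0, 0],
])

# Group similar sessions; the model is never told what the groups mean.
model = KMeans(n_clusters=2, n_init=10, random_state=42).fit(sessions)

for label, center in enumerate(model.cluster_centers_):
    print(f"pattern {label}: ~{center[0]:.0f} pages, "
          f"~{center[1]:.0f} searches, ~{center[2]:.0f} purchases")
```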

Using AI/ML and production data to generate tests

Now, on to the meat of this article: What do AI and ML have to do with software testing?

In a nutshell, it’s this: We can use AI/ML techniques to gather, examine, and analyze production user data to generate a smarter type of regression testing.

Companies already collect large volumes of data to understand how customers use their systems on every visit. That data becomes part of their machine learning datasets, feeding models built to solve specific problems. There's a lot more to machine learning than just developing algorithms: a machine learning system involves a significant number of components to collect, examine, and extract the features customers actually use.
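
As a toy illustration of that collection step (the session IDs, event names, and in-memory store are invented for this sketch), the recording side can be as simple as appending each customer action to its session's journey:

```python
# Toy sketch of the collection step: record each customer action
# against its session. Session IDs, event names, and the in-memory
# store are invented for illustration.
from collections import defaultdict

# session_id -> ordered list of actions the customer performed
usage_log = defaultdict(list)

def record_event(session_id: str, action: str) -> None:
    """Append one customer action to that session's journey."""
    usage_log[session_id].append(action)

# Events as they might arrive from the production front end.
record_event("sess-1", "login")
record_event("sess-1", "search")
record_event("sess-1", "view_item")

print(dict(usage_log))  # {'sess-1': ['login', 'search', 'view_item']}
```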

To close quality gaps, we need to use that same collected data for testing. We are closer than ever to eliminating the burden of manually working out how customers use the entire system, which will allow us to generate tests automatically. Moving toward AI/ML builds the right kind of quality coverage: no more guessing how to test your system.
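
As a rough illustration of that idea, the sketch below assumes production sessions are exported as ordered sequences of action names (the sample journeys, event names, and top_n threshold are invented for this example). It counts the most common user journeys and emits them as candidate regression test flows.

```python
# Rough sketch: turn recorded production journeys into candidate
# regression test flows. The sample journeys, event names, and the
# top_n threshold are invented for illustration.
from collections import Counter

# Each session is the ordered sequence of actions a real user took.
production_sessions = [
    ("login", "search", "view_item", "add_to_cart", "checkout"),
    ("login", "search", "view_item"),
    ("login", "search", "view_item", "add_to_cart", "checkout"),
    ("login", "view_profile", "logout"),
    ("login", "search", "view_item", "add_to_cart", "checkout"),
]

def regression_candidates(sessions, top_n=3):
    """Return the most frequently observed journeys, most common first."""
    return Counter(sessions).most_common(top_n)

for journey, count in regression_candidates(production_sessions):
    print(f"seen {count}x -> cover: {' > '.join(journey)}")
```

The mining step here is deliberately trivial; in practice this is where models trained on the collected usage data would rank and parameterize the flows. The principle is the same either way: the journeys real customers take most often become the regression tests you build and maintain first.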

In principle, everyone can agree on the benefits of applying AI/ML to production data about customer usage to improve software testing. Most importantly, it can provide a better end-user experience.

Barriers to AI/ML in software testing

To be sure, the most obvious challenge in incorporating AI/ML into software testing routines is the effort required to build the requisite algorithms. Collecting test data is easy enough, but writing algorithms that can interpret it intelligently is much harder. There will be a great deal of upfront effort required in this respect before organizations can start reaping the benefits of AI or ML-assisted testing.

That said, the potential payoffs justify the time required to build a solution that generates smarter tests. That’s especially true if your company is already adopting AI/ML, since it can extend those efforts to cover testing as well.

Conclusion

For now, using AI or ML to improve software testing remains mostly theoretical; it’s not something most organizations are doing yet. But that’s true of most AI and ML technologies. They remain in their infancy with respect to what developers hope they’ll eventually become.

The benefits of applying AI and ML to software testing are clear enough. Now, it’s just a question of allocating the resources necessary to build the algorithms and routines. If your company is already pursuing AI/ML initiatives in other areas, I’d suggest extending them to software testing, too, so you’re not left behind when the AI and ML revolution reaches this niche.

Greg Sypolt, Director of Quality Engineering at Gannett | USA Today Network, maintains a developer, quality, and DevOps mindset, allowing him to bridge the gaps between all team members to achieve desired outcomes. Greg helps shape the organization’s approach to testing, tools, processes, and continuous integration, and supports development teams in delivering software that meets high quality standards. He's an advocate for automating the right things and ensuring that tests are reusable and maintainable. He actively contributes to the testing community by speaking at conferences, writing articles, blogging, and participating directly in various testing-related activities.
