How can you improve software quality while also increasing speed? Here’s one idea that I think will play an increasingly important role in the future of software testing: test generation. What I mean is that in the future, software testing will include auto-generated tests that allow code to reach the testing stage faster while also improving coverage. Let me explain...
Manually writing automated tests comes with many challenges. The biggest problem I see is that you have to fill in the inputs and expected states by hand for every scenario of a module you want to test. You find yourself copying and pasting boilerplate code into every test script, which is redundant and hard to manage! We need to make testing better with test generation methodologies.
What if we could auto-generate test pre-conditions, inputs, and expected states from models or template files? We could take it further and let a planning algorithm generate tests for specific scenarios, focusing on targeted testing. It sounds too good to be true, right? We are heading down a path where artificial intelligence (AI) and machine learning (ML) will soon become the norm in software testing. In fact, we already interact with some form of AI nearly every day: Netflix, Spotify, and Amazon make amazing recommendations based on our search inputs. There is no doubt that in the coming years, AI- and ML-driven testing will creep into our daily work the same way those recommendations have crept into our daily lives.
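To make the idea concrete, here is a minimal sketch of template-driven test generation: the scenarios (inputs and expected states) live in one central data structure, and the test functions are generated from it instead of being hand-written one by one. All names here (validate_discount, SCENARIOS) are hypothetical examples, not from any particular tool.

```python
def validate_discount(price, percent):
    """Toy function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

# Pre-conditions, inputs, and expected states live in one central template,
# not copy-pasted across individual test scripts.
SCENARIOS = [
    {"name": "no_discount",   "price": 10.0, "percent": 0,   "expected": 10.0},
    {"name": "half_off",      "price": 10.0, "percent": 50,  "expected": 5.0},
    {"name": "full_discount", "price": 10.0, "percent": 100, "expected": 0.0},
]

def generate_tests(scenarios):
    """Turn each scenario dict into a runnable test callable."""
    tests = []
    for s in scenarios:
        def test(s=s):
            assert validate_discount(s["price"], s["percent"]) == s["expected"], s["name"]
        tests.append((s["name"], test))
    return tests

# Run every generated test.
for name, test in generate_tests(SCENARIOS):
    test()
    print(f"{name}: ok")
```

Adding a new scenario is now a one-line change to the template, and every generated test stays uniform by construction.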
The advantages of auto-generated tests over manually developed automated tests greatly outweigh the disadvantages:
Eliminates the need to manually write automated tests, which is a huge time saver
Generated tests are written in a more uniform way across the codebase
All the business rules are centralized, not scattered across test scripts
Flexible generation options: tests built from user analytics data, tests targeting only pull request (PR) changes, selection among the various combinations of data inputs, coverage of one or multiple paths, and more
Random test data generation
Easier test maintenance
Reduction in costs associated with the development time of automated tests
Increased testing thoroughness and improved test coverage
Increases software testing efficiencies
Improves quality of life for the people who would otherwise author automated tests by hand; test writing is no longer a repetitive task
Developers and testers can spend time on more important things
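Two of the advantages above, combinations of data inputs and random test data generation, can be sketched in a few lines. The field names and values below are illustrative, not from any real test suite:

```python
import itertools
import random

# Axes of variation for the system under test (hypothetical examples).
browsers = ["chrome", "firefox"]
roles = ["admin", "guest"]
locales = ["en-US", "de-DE", "fr-FR"]

# Every combination: 2 * 2 * 3 = 12 generated cases, with no copy-paste.
cases = list(itertools.product(browsers, roles, locales))

# Random test data generation for fields where any valid value will do;
# seeding the generator keeps failing runs reproducible.
rng = random.Random(42)

def random_username():
    return "user_" + "".join(rng.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(8))

for browser, role, locale in cases:
    print(browser, role, locale, random_username())
```

Adding a fourth locale would grow the suite from 12 to 16 cases automatically; a hand-written suite would need four new copy-pasted scripts.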
If this all sounds too futuristic to you, bear in mind that tools for generating tests already exist, even if they are not as widely used as manual test-writing processes. Below are some examples of test generation solutions.
Simulato is an open source testing tool written in Node.js by the Quality Engineering team at Gannett. The tool is designed to meet the need for testing web user interfaces, using the theory of model-based testing. With model-based testing, developers create small, reusable components for the elements found in a website (such as input boxes, select boxes, forms, etc.), which are combined to describe the system under test. The tool then uses these components to generate test scripts. The generated Selenium tests can be run against the system through Sauce Labs to verify expected behavior.
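The component idea behind model-based testing can be sketched as follows. This is an illustrative toy in the spirit of the approach, not Simulato's actual API (which is Node.js): each small component declares its actions and the state they should produce, and a test script plus expected end state is generated by walking the combined model.

```python
class Component:
    """A small, reusable model of one UI element."""
    def __init__(self, name, actions):
        self.name = name
        self.actions = actions  # action name -> state fragment it should produce

# Hypothetical components describing a login page.
text_box = Component("loginInput", {"enterText": {"loginInput.value": "alice"}})
button = Component("submitButton", {"click": {"page": "dashboard"}})

def generate_script(components):
    """Walk each component's actions to build a test script and expected state."""
    script, expected_state = [], {}
    for c in components:
        for action, state in c.actions.items():
            script.append(f"{c.name}.{action}")
            expected_state.update(state)
    return script, expected_state

script, state = generate_script([text_box, button])
print(script)  # the steps a runner would execute
print(state)   # the state assertions to check after the run
```

The components are written once and recombined to describe any page that contains them, which is where the uniformity and maintenance savings come from.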
Curiosity is a solution that wraps model-based testing around the Selenium testing framework. It generates tests automatically from a model: each test corresponds to a path through the model of the system, and coverage algorithms produce the minimum number of paths with the maximum coverage.
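The "minimum paths, maximum coverage" idea can be illustrated with a small sketch (a hypothetical greedy implementation, not the product's actual algorithm): model the system as a graph of screens and transitions, then pick a small set of start-to-end paths that together cover every edge.

```python
from collections import defaultdict

# Hypothetical model: screens and the transitions between them.
edges = [("login", "home"), ("home", "search"), ("home", "cart"),
         ("search", "cart"), ("cart", "checkout")]
graph = defaultdict(list)
for a, b in edges:
    graph[a].append(b)

def all_paths(node, end, path=()):
    """Enumerate every path from node to end through the model."""
    path = path + (node,)
    if node == end:
        yield path
    for nxt in graph[node]:
        yield from all_paths(nxt, end, path)

def minimal_covering_paths(start, end):
    """Greedily choose paths until every transition is covered."""
    uncovered = set(edges)
    candidates = list(all_paths(start, end))
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda p: len(uncovered & set(zip(p, p[1:]))))
        gained = uncovered & set(zip(best, best[1:]))
        if not gained:
            break  # remaining edges are unreachable on start-to-end paths
        chosen.append(best)
        uncovered -= gained
    return chosen

for p in minimal_covering_paths("login", "checkout"):
    print(" -> ".join(p))
```

For this five-edge model, two generated paths cover every transition; a naive enumeration of all paths would produce redundant tests that exercise the same edges repeatedly.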
Testim, AutonomIQ, and TestCraft concentrate on script-less testing to speed up test creation. Testim and TestCraft leverage AI and ML for self-healing maintenance of automated tests when the system under test changes.
When it comes to automatic test generation, we still have a ways to go in terms of changing the way QA teams think about what is possible, as well as the tooling they use. Still, I am of the firm opinion that test generation is the next big thing in software testing. Never stop learning about the future of software testing, because it is changing rapidly.
Greg Sypolt, Director of Quality Engineering at Gannett | USA Today Network, maintains a developer, quality, and DevOps mindset, allowing him to bridge the gaps between all team members to achieve desired outcomes. Greg helps shape the organization’s approach to testing, tools, processes, and continuous integration and supports development teams to deliver software that meets high-quality software standards. He's an advocate for automating the right things and ensuring that tests are reusable and maintainable. He actively contributes to the testing community by speaking at conferences, writing articles, blogging, and through direct involvement in various testing-related activities.