Model-based testing is a software testing technique that helps you simplify and accelerate application development without jeopardizing quality. More importantly, it allows test development to proceed in parallel with implementation of the code for the user story, even as requirements change.
If that all sounds great to you and you’re yearning to learn more, keep reading. Below, I explain what model-based testing is and how it can benefit you.
Describing the problem with traditional UI testing
You will find web user interfaces everywhere, and Selenium is currently the most popular way to test them. You can write scripts to create UI tests, but this is a tedious process, because web user interfaces change quickly to adhere to the latest trends and technologies.
Yes, we all know that front-end testing is fragile. The real problem is that not all sections of the page are tested. We review the acceptance criteria with blinders on and don't realize all the journeys or combinations. My assumption is that most scripted automation validates a single journey. The image below shows five test script sequences validating a few pages and sections.
A few of the secondary problems originate from treating testing as an afterthought: it isn't brought into the discovery (planning) stage early enough, or there's no buy-in for automation.
A quick look at model-based testing
By shifting to model-based testing techniques, the system can be broken down into smaller, manageable components. Each component is modeled to capture its expected behavior; an algorithm then recombines the collection of models into a representation of the entire system and generates tests from those models.
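To make the idea concrete, here is a minimal sketch (not Simulato's actual implementation) of how small component models can be recombined and walked to generate test sequences. The component names, states, and actions are hypothetical.

```javascript
// Two small component models: each action names the state it requires
// (`from`) and the state it produces (`to`).
const loginModel = {
  actions: [
    { name: 'enterCredentials', from: 'loginPage', to: 'credentialsEntered' },
    { name: 'clickLogin', from: 'credentialsEntered', to: 'homePage' },
  ],
};

const searchModel = {
  actions: [
    { name: 'enterQuery', from: 'homePage', to: 'queryEntered' },
    { name: 'clickSearch', from: 'queryEntered', to: 'resultsPage' },
  ],
};

// Recombine the component models into one system graph, then walk it
// depth-first from the start state, emitting every complete action
// sequence (each path that ends in a state with no outgoing actions).
function generateTests(models, startState) {
  const actions = models.flatMap((model) => model.actions);
  const tests = [];
  (function walk(state, path) {
    const next = actions.filter((action) => action.from === state);
    if (next.length === 0) {
      if (path.length > 0) tests.push(path);
      return;
    }
    for (const action of next) {
      walk(action.to, [...path, action.name]);
    }
  })(startState, []);
  return tests;
}

const tests = generateTests([loginModel, searchModel], 'loginPage');
// One generated sequence covers both components end to end:
// ['enterCredentials', 'clickLogin', 'enterQuery', 'clickSearch']
```

Notice that neither model knows about the other; the generator stitches them together because the login model ends in the `homePage` state where the search model begins. That is what lets each model stay small while the generated tests span the whole system.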
The advantage of modeling is that when the interface changes, only the smaller modeled components need to be updated. The ownership belongs to the developer who made the change.
Simulato, the open source tool my test integration team created, allows developers to model the system they want to test using Selenium. Simulato feeds those models into its test generation algorithm, then generates and runs the resulting test scripts on Sauce Labs, covering paths outside the regular journeys. Let's take a more in-depth look at how model-based testing expands test coverage.
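As a rough illustration, a Simulato component model is a small module describing a page section's elements, expected state, and actions. The sketch below follows the general shape of Simulato's documented component format (type, elements, model, actions), but the page, selectors, and state names are all hypothetical, so treat it as an outline rather than a copy-paste example.

```javascript
// Hypothetical component model for a newsletter signup section.
const newsletterSignup = {
  type: 'NewsletterSignup',

  // Elements the tool should locate on the page (hypothetical selectors).
  elements() {
    return [
      { name: 'emailInput', selector: { type: 'getElementById', value: 'email' } },
      { name: 'submitButton', selector: { type: 'getElementById', value: 'submit' } },
    ];
  },

  // How those elements map onto the expected page state.
  model() {
    return {
      displayed: 'submitButton.isDisplayed',
    };
  },

  // Actions the test generation algorithm can weave into sequences.
  actions() {
    return {
      submit: {
        // State that must hold before this action is generated.
        preconditions() {
          return [['isTrue', 'pageState.newsletterSignup.displayed']];
        },
        // Selenium-style interaction; `driver` is supplied by the runner.
        perform(callback) {
          driver.findElement(By.id('submit')).click().then(callback, callback);
        },
        // How the expected page state should change after the click.
        effects(expectedState) {
          expectedState.modify('newsletterSignup', (component) => {
            component.displayed = false;
          });
        },
      },
    };
  },
};

module.exports = newsletterSignup;
```

The key point is the division of labor: the developer describes one section's behavior in isolation, and the generation algorithm decides when and how the `submit` action appears in test sequences.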
Expanded and better coverage
The more significant challenge lies in understanding model-based test coverage and providing insight into what it covers. A model describes the system's requirements and behavior in small, modular components.
The traditional testing method example above outlined five test script sequences.
Looking at the example of the model-based testing method below, we only needed to create three models to describe the system requirements and behavior, and the test generation algorithm optimized coverage by generating just two test script sequences.
How is that better coverage? The test generation only creates tests based on reachable states, and because each modular UI component's code is mapped to a model, the result reflects true UI code coverage.
Let’s take an in-depth look at the web page below showing five sections and a Submit button. The traditional testing method only creates one test script sequence.
In reality, a user can complete those five sections in any order before submitting, so there are 120 possible interaction paths.
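Where does 120 come from? Assuming the figure counts the orderings in which a user can complete the five sections, it is simply the number of permutations of five items, 5! = 5 × 4 × 3 × 2 × 1:

```javascript
// Number of orderings of n independent sections, i.e. n! (n factorial).
function countOrderings(n) {
  let total = 1;
  for (let i = 2; i <= n; i += 1) {
    total *= i;
  }
  return total;
}

const paths = countOrderings(5);
// → 120 possible interaction paths through the five sections
```

A single scripted test sequence exercises exactly one of those 120 orderings, which is why path coverage from traditional scripting stays so thin.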
The coverage of the test generation depends entirely on your test generation algorithms. For Simulato, we are looking to expand our algorithms to learn from data (how users actually use your application) and from tags and metadata to produce targeted testing, to support offline replanning with options for long tests, and to offer a distributed version for faster test generation.
In summary, there are many opportunities to inspire better model-based test generation, which leads to expanded and better test coverage. We're finalizing a new reporter that will show the actions generated and executed, along with a pass/fail summary. We believe that visualizing the testing will help instill greater confidence in model-based test coverage.
It's clear to me that model-based testing is the way to go. I would love to hear your thoughts on this technique and our open source tool.
Greg Sypolt (@gregsypolt) is Director of Quality Engineering at Gannett | USA Today Network, a Fixate IO Contributor, and co-founder of Quality Element. He is responsible for test automation solutions, test coverage (from unit to end-to-end), and continuous integration across all Gannett | USA Today Network products, and has helped change the testing approach from manual to automated testing across several products at Gannett | USA Today Network. To determine improvements and testing gaps, he conducted a face-to-face interview survey process to understand all product development and deployment processes, testing strategies, tooling, and interactive in-house training programs.