The Why and How of Tap Compare Testing
As software companies strive to release software faster and faster, they must ensure that quality is not sacrificed in the process.
One novel software testing strategy that can help to improve quality without compromising delivery speed is a process known as tap compare testing. Let’s take a look at what this entails and how it can help your organization.
What is tap compare testing?
Tap compare testing is unique in that it validates an application without developing test scripts. Instead, it relies on automation to fetch real production requests — for example, the top 25 production requests of the month — and replay them against a canary or blue-green environment that is not serving production traffic. Comparing the responses against those of the current version lets us evaluate the readiness and performance of the new implementation within a controlled environment. (The same technique could also be performed against a staging environment.)
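The compare step can be sketched in a few lines. In this minimal sketch, the two environments are represented as callables (in practice they would be HTTP calls to the live version and to the idle canary or blue-green environment); the stand-in functions, paths, and response shapes are illustrative only:

```python
def tap_compare(requests, baseline, candidate):
    """Replay each sampled production request against both versions
    and report any requests whose responses differ."""
    mismatches = []
    for req in requests:
        old, new = baseline(req), candidate(req)
        if old != new:
            mismatches.append({"request": req, "baseline": old, "candidate": new})
    return mismatches

# Illustrative stand-ins for the two environments. In a real harness,
# these would issue the sampled request to each environment over HTTP.
def baseline_v1(req):
    return {"status": 200, "body": f"hello {req}"}

def candidate_v2(req):
    # Hypothetical regression: the new version changes one response.
    body = f"hello {req}".upper() if req == "/caps" else f"hello {req}"
    return {"status": 200, "body": body}

diffs = tap_compare(["/home", "/caps"], baseline_v1, candidate_v2)
```

Here `diffs` would flag only the `/caps` request, pointing the team directly at the behavior that changed between versions.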
Why should you consider tap compare testing?
First, tap compare testing can help you feel more confident about the quality of a new version of an application. After an update, additional testing against the system helps ensure it remains highly available, scalable, recoverable, and non-disruptive to end users.
It starts with greater collaboration across roles (DevOps, developers, and QA) rather than putting all the quality responsibilities on the QA team. I believe this encourages everyone to start discussing and exploring different forms of testing in production — and everyone should consider the technique called “tap compare.” The strategy replays samples of production requests against the new implementation and compares the responses to those of the previous version. It provides another layer of quality validation, using real production requests to find potential bugs before they reach end users.
At the end of the day, the team is collaborating and thinking about quality at every stage of the deployment pipeline, which leads to higher quality and better end-user experiences.
Explore in more detail
The concept behind tap compare testing is simple, but the technique can be challenging for teams. It requires the following main components: an immutable infrastructure, storage for production requests, a testing harness, and a blue-green or canary deployment strategy.
Let's explore in more detail.
Immutable Infrastructure requires keeping environments in sync; dev, staging, and production must be exact replicas of one another. Achieving consistent, disposable, and repeatable infrastructure requires DevOps methodologies.
Storage for Production Requests so that requests can be fetched later. Requests can be stored by sending events to New Relic, Splunk, Sumo Logic, or cloud data storage.
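As a rough illustration of the capture side, requests could be recorded as one JSON event apiece. This is a minimal local sketch — in practice the events would be shipped to New Relic, Splunk, Sumo Logic, or cloud object storage rather than a local file, and the field names here are assumptions:

```python
import json
import time

def log_request(log_path, method, path, query=""):
    """Append one production request as a JSON-lines event so it can
    be fetched and replayed later by the tap compare harness."""
    event = {
        "ts": time.time(),   # when the request was served
        "method": method,    # e.g. "GET"
        "path": path,        # e.g. "/search"
        "query": query,      # raw query string, if any
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(event) + "\n")
```

One event per line keeps the store trivially appendable and easy to scan when the harness fetches requests later.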
Testing Harness to automatically fetch the top x number of production requests from the previous day, week, or month to replay the requests within a controlled production environment not serving production traffic. The replayed tests add an extra layer of validation to help us be more confident before switching traffic to the new version of the application.
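The fetch half of such a harness can be sketched with a frequency count — a hypothetical helper that picks the top N most common request paths from the stored log (the example paths are made up):

```python
from collections import Counter

def top_requests(logged_paths, n=25):
    """Return the n most frequent request paths from the stored
    production log, ready to be replayed against the idle environment."""
    return [path for path, _ in Counter(logged_paths).most_common(n)]

# Paths as they might be parsed from yesterday's stored request events.
paths = ["/home", "/search", "/home", "/cart", "/home", "/search"]
top_two = top_requests(paths, n=2)  # most frequent first
```

Each path returned would then be replayed against the controlled environment, as described above.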
Deployment Strategy to test applications in production in a safe way by using blue-green deployment or canary releases.
Now, let me explain how the magic happens with these blue-green and canary deployment strategies!
- Blue-Green is a deployment technique that reduces downtime and risk by running two identical production environments, only one of which serves production traffic at any time. After deploying and fully testing the new version of the application in the idle environment (say, blue), you switch the router so that all incoming requests go to blue instead of green. Blue is now live, and green is idle. The core idea is simply to have two identical environments you can switch between easily.
- Canary is a deployment technique that reduces the risk of introducing a new version of an application into production by slowly rolling the change out to a small subset of users before making it available to everyone. It lets you test the waters before committing to a full release.
The faster the feedback, the faster you can fail the deployment, or proceed cautiously.
The goal of tap compare testing is to help us measure the quality of a release in a controlled production environment before making it available to everyone. Everyone should continue testing early and often, and start considering shift-right testing techniques as part of their testing strategy moving forward.
Greg Sypolt (@gregsypolt) is Director of Quality Engineering at Gannett | USA Today Network, a Fixate IO Contributor, and co-founder of Quality Element. He is responsible for test automation solutions, test coverage (from unit to end-to-end), and continuous integration across all Gannett | USA Today Network products, and has helped shift several products at Gannett | USA Today Network from manual to automated testing. To identify improvements and testing gaps, he conducted face-to-face interviews across teams to understand their product development and deployment processes, testing strategies, tooling, and in-house training programs.