Companies with web and mobile applications are under increasing pressure to rapidly release features and provide fixes to user problems. Most companies do this on a set schedule where product managers can keep track of the specific deliverables and associated milestones and everything gets planned out.
With a team like Selenium, on the other hand, people just work on the things they think need to be worked on. Whenever someone asks when the next release will happen, the stock answer is: “when it is ready.”
Some of the most successful web applications mix these approaches: they relentlessly prioritize and track deliverables, but release each of them only when it is ready. They simply also make sure that a release is always ready.
The faster the release process is, the more challenging it is to provide the level of application quality users expect. A comprehensive testing strategy is an absolute requirement for ensuring a positive user experience from any given release. A traditional manual testing approach to validating features can’t scale to the required velocity, and automation is limited in the functionality it can cover.
This post provides ideas for how to maintain confidence in your product while achieving the delivery pace demanded by the market and your users.
Releasing software continuously and on demand requires a fundamental shift in how we think about testing and regressions. The traditional approach is to spend tons of time, money, and resources up front to prevent bugs from getting released. This approach optimizes for the Mean Time To Failure (MTTF). However, bugs happen. The cost of preventing every single bug from getting into production would be astronomical—if it were even possible—when considering the complexity of the systems involved.
The efficacy of extremely short release cycles is predicated on the idea that, since bugs are going to happen regardless, it's more beneficial to focus on minimizing their impact on users. This approach optimizes for the Mean Time To Recovery (MTTR). Fewer bugs getting released doesn’t matter as much if the user has to experience them for a much longer period because of a lengthy process to create a fix and validate that it didn’t break something else. Advanced approaches like feature flags, A/B testing, canary testing, monitoring and observability open up a wide array of possibilities for continued improvement in this area.
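Feature flags are one of the simplest ways to shrink MTTR: instead of rolling back a deployment, you flip a switch. The sketch below is a hypothetical in-memory flag store with percentage rollout; real systems typically back this with a flag service or config database, and the flag names and fields here are illustrative assumptions, not any particular product's API.

```python
# Minimal sketch of a feature-flag gate with percentage rollout.
# The in-memory FLAGS store and field names are hypothetical.

FLAGS = {
    "new_checkout_flow": {"enabled": True, "rollout_percent": 10},
}

def is_enabled(flag_name: str, user_id: int) -> bool:
    """Return True if the flag is on for this user.

    A percentage rollout lets a risky change reach a small slice of
    users first; flipping 'enabled' to False is an instant rollback,
    which is what keeps Mean Time To Recovery low.
    """
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    # Deterministic bucketing: the same user always lands in the same
    # bucket, so an individual user's experience is stable.
    return (user_id % 100) < flag["rollout_percent"]

# With a 10% rollout, roughly 1 in 10 users sees the new flow.
exposed = sum(is_enabled("new_checkout_flow", uid) for uid in range(1000))
print(exposed)  # 100
```

Because bucketing is deterministic, the same canary group keeps seeing the new code while monitoring watches for regressions; killing the flag recovers every user at once, with no redeploy.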
When the amount of time available for regression testing is limited, it becomes necessary to focus on quality throughout the entire development process. Gone are the days of the “Engineering Team” being able to “throw the code over the wall” to the “QA Team.”
Manual testers used to have sufficient time to evaluate the site for everything from bugs to look and feel to user expectations. This now needs to be everyone’s responsibility.
Testers need to work with developers to understand what code has changed and which areas might or might not be affected. User experience and design teams know how the application is supposed to look and need to be part of the process to ensure it matches their concepts. Business teams know what the features are supposed to do and need to be involved in verifying that those expectations were properly met.
Less time before release means that everyone needs to be responsible for their piece of the final product from the beginning.
The first instinct of many testers when writing automated tests is to write them the same way they performed their manual tests. The thought is that a computer will do “the same thing,” just faster and on demand. This can produce horrible results because that’s not how to make the best use of a computer, and it provides none of the advantages of having human eyeballs on the product.
Hopefully we can skip past the “checking vs testing” debate and agree that computers do most human things poorly and shouldn’t be relied on as direct replacements. Thankfully, the goal isn’t a better or faster human process, it is to provide sufficient confidence in the end product within the time frame required for the company to capitalize on the rapid pace of improvements.
Another hurdle is wasting resources on duplicate or unnecessary efforts.
For one, don’t test other people’s software. Test against the contract provided by that software and hold the other teams or companies accountable for testing the other side of that contract.
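One way to test against a contract rather than against the other party's software is to assert only on the fields and types your application actually depends on. The sketch below is a minimal, hand-rolled version of the idea; the contract shape, field names, and responses are hypothetical, and real teams often use a dedicated tool (Pact-style consumer/provider contracts) instead.

```python
# Hedged sketch of contract-style checking: verify only the shape we
# depend on, and let the provider team test their side of the contract.

def matches_contract(response: dict, contract: dict) -> bool:
    """Check that every field the contract promises is present and of
    the promised type. Extra fields in the response are tolerated,
    since the contract only covers what we rely on."""
    for field, expected_type in contract.items():
        if field not in response or not isinstance(response[field], expected_type):
            return False
    return True

# Hypothetical: the fields our application depends on from a payment API.
PAYMENT_CONTRACT = {"transaction_id": str, "status": str, "amount_cents": int}

# Simulated responses; in practice these would come from a recorded
# stub or a provider-verified contract file.
good = {"transaction_id": "tx-123", "status": "settled",
        "amount_cents": 499, "extra_field": True}
bad = {"transaction_id": "tx-456", "amount_cents": "4.99"}  # wrong type, missing status

print(matches_contract(good, PAYMENT_CONTRACT))  # True
print(matches_contract(bad, PAYMENT_CONTRACT))   # False
```

The design point is that the check ignores `extra_field`: the provider is free to evolve anything outside the contract without breaking your tests, which is exactly the duplication this approach avoids.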
Similarly, don’t test the same thing at different layers of the test pyramid—or test trophy. If you're using black box API tests to set the state of a system and evaluate desired behaviors, don’t repeat the same evaluation in the UI. Focus on making sure the UI is doing what it needs to do independently.
Regardless of who holds the role and what teams they are on, testers and developers must collaborate on how to best ensure the correctness of what gets released.
All testing processes need to be reviewed regularly for weaknesses and limitations. What changes have been made to the application or the process that might affect the tests that are being executed? How can new testing requirements like asset loading (performance testing), visual testing, or accessibility testing be best integrated into the existing tests? (pro tip: combining any of these types of testing with functional testing will cause problems, but all three of these tests can be easily managed together). What strategies can be deployed to improve performance without sacrificing confidence?
While test automation can’t replace everything that manual testing provides, the demands and requirements of the industry are forcing companies to find alternate ways to maintain confidence in their frequent releases that will minimize the impact on users. While focusing on strategies to improve MTTR is ultimately a prerequisite to actual continuous delivery, companies can take great strides toward improving release times with increased confidence without dramatically changing their process. Have teams talk to each other and hold each other accountable for their component of the overall success of the product. Make sure to get the most out of what computers and automation are good at while avoiding unnecessary duplication. Finally, make sure your strategy does not become stagnant, and don’t get so caught up in the daily details that you miss ways to improve the big picture.