A little more than a year ago, aiming to address what we saw as a significant gap in the testing market, we first launched the Sauce Labs Continuous Testing Benchmark, a report organizations could use to see how their continuous testing efforts stacked up against both critical best practices and the testing efforts of other enterprises. In the time since, hundreds of organizations have leveraged the report to better understand and improve the state of their continuous testing initiatives, and it has become an essential resource for the testing community.
It’s only fitting, then, that the Sauce Labs Continuous Testing Benchmark for 2020 is now available on the same day we kick off SauceCon Online. The theme for this year’s conference is “Test Better, Together,” and perhaps no community resource better exemplifies the spirit of testing better together than the Sauce Labs Continuous Testing Benchmark.
The report looks at real customer data from the Sauce Labs Continuous Testing Cloud, a platform on which more than 3 million tests are run each day, and on which more than 3 billion tests have been run since its inception. It articulates the four benchmark metrics critical to continuous testing excellence and measures how organizations’ collective testing efforts stack up against these defined standards. Those four metrics are test quality, test run time, test platform coverage, and test efficiency, the last of which looks closely at the extent to which organizations are leveraging available testing capacity to achieve parallelization.
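To make the efficiency metric concrete, here is a minimal sketch of how utilization of parallel capacity might be estimated. The formula, function name, and example numbers are illustrative assumptions, not the benchmark report’s actual methodology:

```python
# Hypothetical sketch: estimating test efficiency as the share of
# available parallel capacity actually used during a suite run.
# All names and numbers here are assumptions for illustration only.

def parallel_efficiency(total_test_minutes: float,
                        wall_clock_minutes: float,
                        available_slots: int) -> float:
    """Fraction of available concurrency actually utilized.

    total_test_minutes:  sum of all individual test durations
    wall_clock_minutes:  elapsed time for the whole suite
    available_slots:     concurrent sessions the plan allows
    """
    # If every slot were busy for the full run, capacity consumed
    # would equal wall-clock time times the number of slots.
    ideal_minutes = wall_clock_minutes * available_slots
    return total_test_minutes / ideal_minutes

# Example: 400 minutes of tests finish in 25 minutes of wall time
# on a plan that allows 20 parallel sessions.
print(round(parallel_efficiency(400, 25, 20), 2))  # 0.8
```

A value near 1.0 would suggest the suite keeps its parallel slots busy; a low value would suggest unused capacity that could shorten run times.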
New for 2020, the report examines the performance of mobile tests (for mobile apps and websites) in addition to desktop tests (for desktop apps and websites). As organizations increasingly adopt a mobile-first mindset, understanding and improving the efficacy of their mobile testing efforts is paramount. Equally paramount (as is laid bare in the 2020 benchmark report) is the need for organizations to improve test quality. Test quality is, after all, the foundation on which successful continuous testing is built. Though organizations’ collective performance against the test quality benchmark did improve over 2019, we still have a long way to go.
That’s why the report also looks for the first time at the potential impact of failure analysis, a new method of applying machine learning to pass/fail data in order to surface the most common underlying causes of failures within a test suite. Given the urgent need to improve test quality, failure analysis stands as one of the most promising new developments the testing industry has seen in some time.
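To illustrate the idea behind failure analysis, the toy sketch below simply counts normalized error messages to surface the most common failure causes. This is a deliberate simplification: the actual feature applies machine learning to pass/fail data, and the error strings here are invented examples:

```python
from collections import Counter

# Toy illustration of failure analysis: group test failures by their
# error class and report the most frequent underlying causes.
# The messages below are fabricated examples, not real Sauce Labs data.

failures = [
    "ElementNotFound: #checkout-button",
    "TimeoutError: page load exceeded 30s",
    "ElementNotFound: #checkout-button",
    "ElementNotFound: #login-form",
    "TimeoutError: page load exceeded 30s",
    "ElementNotFound: #checkout-button",
]

def top_failure_causes(messages, n=3):
    """Count failures by error class and return the n most common."""
    causes = Counter(m.split(":")[0] for m in messages)
    return causes.most_common(n)

print(top_failure_causes(failures))
# [('ElementNotFound', 4), ('TimeoutError', 2)]
```

Even this crude frequency view shows why the technique is useful: instead of triaging hundreds of individual red builds, a team can attack the handful of root causes responsible for most failures.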
More important than what the 2020 benchmark report reveals, however, is what it represents: an opportunity for testers and developers to continue learning, growing, and ultimately improving. Amid the day-to-day pressures of developing great software, it’s easy to forget that most organizations are just beginning their continuous testing journeys, and most will face challenges as those journeys evolve. The best way for us to collectively overcome those challenges is to sustain the sense of sharing, support, and community that has come to define the testing industry. We hope this report will continue to play a role in those efforts, making it that much easier for us to test better, together.