Posted February 21, 2019

Best Practices for Working with Test Data

We live in a world where data is king. Although you might not think that data has a huge role to play in software testing, it does. The reason? Because software testing is only productive if the data produced by the tests is used effectively.

That means that if you do not properly analyze and interpret data from tests, you might as well not be performing tests at all.

If you run just a few tests, it's easy enough to put this advice into practice. However, things get more complicated when you start thinking about how to use test data effectively at scale. If you are running dozens or hundreds of automated tests daily, how do you ensure that your test data is effectively analyzed, that the results of the analysis are communicated to all stakeholders, and that those stakeholders act upon the insights produced by the data?

This article addresses these questions by walking through best practices for interpreting test data when employing agile software testing strategies. Along the way, I'll cover the metrics that support this analysis and how to act on the results.

Various Types of Software Testing Strategies

Various types of software testing strategies exist to ensure application quality, and each tends to yield different metrics for the DevOps team to analyze and evaluate.

For instance, the practice of continuous testing is now commonplace among DevOps organizations. This strategy involves creating automated test scripts that provide end-to-end coverage for all critical test cases. These scripts are then executed throughout the development lifecycle, typically as part of continuous integration: the suite is wired into your CI tool and runs each time the modified application is built. If a test script fails, so does the build. This allows for early discovery of application issues and makes remediation quicker and easier for the development team, thanks to the agile nature of this particular testing strategy.
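To make this concrete, here is a minimal sketch of the kind of automated check that might run on every build. It assumes pytest and Selenium; the target URL and expected title are placeholders, not a real application. The key point is that a failing assertion exits nonzero, which in turn fails the CI build.

```python
# test_smoke.py -- an illustrative CI smoke test (pytest + Selenium).
# The URL and expected title below are placeholders for the example.
import pytest
from selenium import webdriver


@pytest.fixture
def driver():
    # Run Chrome headless so the test can execute on a CI agent with no display.
    options = webdriver.ChromeOptions()
    options.add_argument("--headless")
    drv = webdriver.Chrome(options=options)
    yield drv
    drv.quit()


def test_home_page_loads(driver):
    driver.get("https://www.example.com/")
    # If this assertion fails, pytest exits nonzero and the CI build fails,
    # surfacing the regression as soon as the change is built.
    assert "Example Domain" in driver.title
```

Invoked as a build step (for example, `pytest test_smoke.py`), a red test stops the pipeline before the change moves any further.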

Another strategy for testing your application is the popular practice of testing in production. While this may sound particularly dangerous, since the words “testing” and “production” in the same sentence can alarm any good developer or tester, the process is relatively simple. Testing in production typically involves techniques such as A/B testing and performance monitoring. A/B testing refers to releasing two versions of a feature into the production environment (version A and version B). The DevOps team then collects and analyzes data to determine which version is more effective and user-friendly; that “winner” becomes the sole version of the feature in future releases.
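A common way to implement the split is deterministic bucketing: hash a stable user identifier so each user always sees the same variant, then log the variant alongside the outcome. The sketch below is illustrative only; `record_event` stands in for whatever analytics sink your team uses.

```python
import hashlib


def assign_variant(user_id: str) -> str:
    """Deterministically bucket a user into variant 'A' or 'B'.

    Hashing a stable identifier keeps each user on the same variant
    across sessions, which keeps the experiment's data clean.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"


def handle_checkout(user_id: str, converted: bool, record_event) -> None:
    # record_event is a placeholder for your analytics client; the team
    # later compares conversion rates between the two variants.
    variant = assign_variant(user_id)
    record_event({"user": user_id, "variant": variant, "converted": converted})
```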

Performance monitoring, on the other hand, is essentially a form of continuous testing in production. Using a monitoring tool to collect this data, the DevOps team gains access to metrics such as how long particular pages take to load and which error codes the application throws, often for usage patterns that were difficult to identify before release.
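For page-load timing specifically, one lightweight option is to read the browser's Navigation Timing data through Selenium and forward it to your monitoring backend. A sketch, assuming an existing WebDriver session; `send_metric` is a stand-in for your monitoring tool's client:

```python
def report_page_load_time(driver, page_name: str, send_metric) -> int:
    # Ask the browser for its Navigation Timing data: loadEventEnd minus
    # navigationStart approximates the total page load time in milliseconds.
    load_ms = driver.execute_script(
        "return window.performance.timing.loadEventEnd - "
        "window.performance.timing.navigationStart;"
    )
    # send_metric is a placeholder for your monitoring tool's client library.
    send_metric(name="page_load_ms", value=load_ms, tags={"page": page_name})
    return load_ms
```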

Best Practices for Analytics Interpretation

Now that we have established some of the software testing strategies that can be employed in an agile environment, let's look at how to interpret the resulting data effectively and put it to work improving the application.

  • Understand the metrics being collected - The first step in analyzing your data properly is making certain that you fully understand what is being collected. Too often, teams look at test data from one angle and draw a generalized conclusion that may not reflect what is actually happening. For instance, tests run against your web application provide many useful metrics: the browser used for each test, the number of test runs over a given period, the number of successes, the number of failures, and so on. To draw actionable conclusions from this data, start with the big picture and make sure you know the definition of each metric.

  • Combine metrics to draw useful conclusions - Once you fully understand what the data means metric by metric, the next challenge is to examine it from different angles. Combining metrics lets you see the data in a different light. Maybe a particular test fails only in Internet Explorer, while no such issues exist in Google Chrome or Firefox. Maybe a test ran successfully for a particular date range but has failed repeatedly since a certain set of changes was committed to the code base. These permutations of the collected data provide valuable insight and help isolate potential issues within your application (see the sketch after this list).

  • Use your time efficiently and act on the conclusions that will have the biggest impact - Understand the impact of particular errors within your application and deal with the show-stoppers first. Errors that block critical processes produce an extremely poor user experience and damage your credibility with customers. Spending your time efficiently means prioritizing the data from tests that validate the features most critical to application functionality.

  • Take advantage of available software that helps analyze collected data - A good developer or tester takes advantage of every tool at their disposal. The more help you have filtering the test data, the more time you can spend interpreting it and drawing conclusions that improve application quality. Tools such as Sauce Labs Test Analytics provide this kind of assistance: by letting you filter test-run data by timeframe, browser, OS, and so on, they save time and support the effort to interpret collected information effectively.
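As a toy illustration of combining metrics, the sketch below groups invented test-run records by browser and date to expose a browser-specific failure. The record fields are made up for the example and do not reflect any particular tool's export format.

```python
from collections import defaultdict
from datetime import date

# Each record represents one test run; the fields are illustrative only.
runs = [
    {"test": "login", "browser": "IE11",    "day": date(2019, 2, 18), "passed": False},
    {"test": "login", "browser": "Chrome",  "day": date(2019, 2, 18), "passed": True},
    {"test": "login", "browser": "Firefox", "day": date(2019, 2, 19), "passed": True},
    {"test": "login", "browser": "IE11",    "day": date(2019, 2, 19), "passed": False},
]


def failure_rate_by_browser(runs, since):
    """Combine two metrics (browser and date) to localize a failure."""
    totals, failures = defaultdict(int), defaultdict(int)
    for run in runs:
        if run["day"] >= since:
            totals[run["browser"]] += 1
            failures[run["browser"]] += not run["passed"]
    return {browser: failures[browser] / totals[browser] for browser in totals}


print(failure_rate_by_browser(runs, since=date(2019, 2, 18)))
# -> {'IE11': 1.0, 'Chrome': 0.0, 'Firefox': 0.0}: the failures are browser-specific.
```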

Conclusion

Agile software testing is an important part of the application development process for any DevOps organization. But the information collected during testing is only useful if it is examined and interpreted to draw conclusions that improve application quality. By taking the time to analyze test data from different angles, and by taking advantage of the analytics tools at your disposal, you can track down issues more efficiently and raise the quality of your application in a timely manner.

Scott Fitzpatrick is a Fixate IO Contributor and has over 6 years of experience in software development. He has worked with many languages, including Java, ColdFusion, HTML/CSS, JavaScript and SQL. Twitter: @sc_fitzpatrick
