
Posted August 14, 2018

Measuring the Effectiveness of your Testing Strategy

Automated software testing has many advantages over its manual counterpart. From providing greater accuracy due to the elimination of human error to increasing both test coverage and the speed of testing with concepts such as parallel testing, automated testing has obvious advantages that cannot be ignored by a DevOps organization striving to build quality software on a tight delivery schedule.

The question, then, is not whether you should embrace automated testing, but how to make your test automation effective.

That raises several questions: How can an organization be sure that it is effective in its testing operations and strategy? What metrics can help determine the effectiveness of an organization’s testing strategy? What are some determining factors in the decision to add or remove a test from the testing process?

Below, I will answer these questions to provide insight into what makes a testing strategy effective.

Metrics Indicative of an Effective or Ineffective Testing Strategy

When evaluating your testing strategy, collecting and analyzing a few key metrics can help determine whether changes are necessary. The first metric to collect is the number of bugs found by the tests run prior to the release of the application. This can be derived from the number of test failures across the builds leading up to a particular release. The idea is to track and catalog each bug caught by your test set before the application is promoted to a higher testing environment or to production. The second metric to track is the number of bugs found outside of the test set: bugs that were either stumbled upon during manual testing in a higher environment or discovered in production after the release was completed.

After a reasonable time frame following the production release (a couple of weeks or even a month), we can use these two metrics to analyze the effectiveness of the current test set. The calculation is simple: How many of the bugs found in the application were caught by the automated test set, and how many were found outside of it? For example, if 50 total bugs were found, but 25 of them were found outside of the current set of automated tests, then some work needs to be done to catch those bugs earlier in the process; test coverage is likely inadequate and needs to be increased. On the other hand, if 45 of the 50 bugs were discovered by the automated test set, then the coverage is pretty good, and the tests are proving to be quite effective.
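As a minimal sketch of that calculation (the function name and the printed examples are my own, reusing the numbers from the scenario above), the ratio can be computed in a few lines of Python:

```python
def automated_detection_rate(bugs_caught_by_tests: int, bugs_found_elsewhere: int) -> float:
    """Share of all known bugs that the automated test set caught before release."""
    total = bugs_caught_by_tests + bugs_found_elsewhere
    if total == 0:
        return 0.0  # no bugs recorded yet; nothing to measure
    return bugs_caught_by_tests / total

# The two scenarios described above:
print(automated_detection_rate(25, 25))  # 0.5 -> coverage likely needs work
print(automated_detection_rate(45, 5))   # 0.9 -> the test set is doing its job
```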

No application with complex requirements (as many applications have) is going to be deployed to production bug-free. While that would be nice, it is likely an unachievable goal. Instead, the focus should be on expanding test coverage to a level that increases the chances of catching the majority of bugs through automated testing prior to a production release. Finding 45 out of 50 bugs in development thanks to the test set would be a pretty good indicator that the current testing strategy is working.

Does This Feature Require Automated Testing?

One major aspect to consider when evaluating testing operations and strategy is whether new or existing features actually warrant automated test scripts in the application’s test set. While the idea of automating as much testing as possible to increase coverage may make sense in theory, it doesn’t always make sense in practice. As pointed out in Angie Jones' slideshow from SauceCon 2018, test automation can be expensive. Not only does it take time to develop useful test scripts, but the process can also be costly from a maintenance perspective. As the application’s features evolve over time, so must the test scripts responsible for detecting issues with those features.

There are a couple of key factors to take into account when considering whether or not to automate testing for a particular feature. These factors include the following:

  • What is the impact on the customer if the feature is broken? Is this feature critical to the function of the application? Is the feature used by each customer each time the application is used? If this is the case, then it should be tested as thoroughly as possible, and thus would likely require automated testing to ensure that the functionality is working properly with each build. As mentioned in the slideshow by Angie Jones referenced earlier, adding/viewing tweets on Twitter would qualify as a high-impact feature requiring automated testing.

  • Is the feature difficult and time-consuming to test manually? Some features are far easier to test with an automated test script than by hand. If that is the case with a feature you are considering, it may well make sense to automate its testing. This includes features that require multiple test cases in order to fully verify their functionality. Suppose your web application includes a form for user registration. The registration process may have several paths depending on the data the user enters: one case may require only the minimum information (name, email and password), while other cases may allow the user to enter additional information on subsequent screens. Since user registration would likely be considered a critical feature of the application, and several paths need full testing to ensure proper functionality, it makes sense to avoid the tedium of manually testing this process with each build and to automate the test cases for user registration (see the sketch after this list).
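One way to cover several registration paths without writing a separate script for each is to parameterize a single test. The sketch below uses pytest; the register_user helper, the RegistrationResult type and the field names are hypothetical stand-ins for whatever actually drives your application (for example, a Selenium page object or an API client):

```python
import pytest
from dataclasses import dataclass

@dataclass
class RegistrationResult:
    succeeded: bool
    confirmation_email_sent: bool

def register_user(form_data: dict) -> RegistrationResult:
    # Placeholder for the real registration flow (e.g., a Selenium page
    # object or an API client). Here it only checks the required fields
    # so the sketch runs on its own.
    required = {"name", "email", "password"}
    ok = required.issubset(form_data)
    return RegistrationResult(succeeded=ok, confirmation_email_sent=ok)

# Each entry represents one registration path: the minimal sign-up,
# plus a variant where the user supplies optional details.
REGISTRATION_CASES = [
    {"name": "Ada Lovelace", "email": "ada@example.com", "password": "s3cret!"},
    {"name": "Alan Turing", "email": "alan@example.com", "password": "s3cret!",
     "company": "ACME", "phone": "555-0100"},
]

@pytest.mark.parametrize("form_data", REGISTRATION_CASES)
def test_user_registration(form_data):
    result = register_user(form_data)
    assert result.succeeded
    assert result.confirmation_email_sent
```

Adding a new registration path then means adding one dictionary to the list rather than writing and maintaining another test script.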

As with anything in life, these factors are open to interpretation, and a case can be made to automate testing for just about any feature of an application. But just because feature testing can be automated doesn’t mean that it should be. One thing to keep in mind when deciding whether automated testing is really necessary is the complexity of the feature in conjunction with its importance. If the feature you are building is simple in nature and not part of the application’s core functionality, it probably doesn’t need automated tests. A defect there isn’t a showstopper, and likely wouldn’t garner a fast response from the development team even if it were broken.

Conclusion

Building an effective testing strategy is key to the success of any DevOps organization. This will likely be an iterative process that evolves over time until it is fine-tuned to support the organization’s ability to continuously deliver quality software to the customer. Evaluating your testing strategy with bug-tracking metrics, and regularly assessing whether each automated test in your application’s test sets is still worth its cost, will contribute positively to overall application quality.

Scott Fitzpatrick is a Fixate IO Contributor and has over 6 years of experience as a software developer. He has worked with many languages, including Java, ColdFusion, HTML/CSS, JavaScript and SQL.
