Test logs. What are they good for? What can you do with them? What should you do with them? These aren't always easy questions to answer, but in this post, we'll take a look at what's possible and what's advisable when it comes to testing log data.
What are test logs good for? Or are they good for anything at all?
Let's start with an even more basic question: What is (and what isn't) a testing log? A testing log is not simply test output. Minimal pass/fail output may log the results of testing, but a true testing log should do more than that. At the very least, it should log the basics of the testing process itself, including test files used, specific test steps as they are performed, and any output messages or flags, with timestamps for each of these items. Ideally, it should also log key processes and variables indicating the state of the system before, during, and after the test.
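To make this concrete, here is a minimal Python sketch of what a structured test-log entry might look like, assuming a JSON-lines format. The helper function and field names are illustrative assumptions, not a standard:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("test_log")

# Hypothetical helper: emit one structured log line per test event,
# with a timestamp, the test file, the step performed, any message,
# and an optional snapshot of relevant system state.
def log_test_event(logger, test_file, step, message, state=None):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "test_file": test_file,
        "step": step,
        "message": message,
        "state": state or {},
    }
    logger.info(json.dumps(entry))
    return entry

entry = log_test_event(logger, "test_login.py", "submit_credentials",
                       "login form submitted", state={"session_open": True})
```

Because each line is self-describing JSON, later analysis tools can filter and aggregate entries without fragile text parsing.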
How important is this information? There are plenty of circumstances under which you probably won't need test logging: for example, when a change to the software consistently passes all tests, or when it fails as the result of an easy-to-identify error in the code. Testing logs can make a difference, however, under a variety of circumstances:
Identifying problems with the test process itself. Tests aren't perfect, and you need to be able to monitor the test process for errors and potential problems. This is particularly true with parallel testing, where concurrency is important. (See "Troubleshooting Parallel Tests.")
Even with individual tests, however, testing logs can help to identify problems with test data, basic testing assumptions, or initial test conditions.
As an adjunct to standard debugging tools. Sophisticated (or even basic) debugging tools are indispensable when it comes to such things as stepping through processes, tracing execution, and monitoring data values at key points during execution.
They may, however, miss such relatively simple factors as the initial state of the system, data values at the beginning of execution, or environmental conditions while the test is running. These are all things which a good testing log can record.
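As a sketch of what "recording the initial state" can mean in practice, here is a small Python helper that snapshots a few environmental values before and after a test run. The choice of fields is an assumption; what's worth capturing depends on your stack:

```python
import os
import platform
import sys
import time

# Sketch: capture the kind of pre-test context that a debugger
# typically won't show you. The keys here are illustrative.
def snapshot_environment():
    return {
        "captured_at": time.time(),
        "python_version": sys.version.split()[0],
        "platform": platform.platform(),
        "cwd": os.getcwd(),
        "env_path": os.environ.get("PATH", ""),
    }

before = snapshot_environment()
# ... run the test here ...
after = snapshot_environment()
```

Logging both snapshots alongside the test results makes it possible to spot environmental drift between a passing run and a failing one.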
Tracking down intermittent or hard-to-trace errors. Nobody likes intermittent bugs. Most developers would rather deal with a consistently occurring catastrophic system crash than with a problem that pops up unpredictably but still has to be fixed.
Standard debugging tools may provide little or no help in such cases. A testing log that includes a sufficient level of detail may, however, allow you to identify the conditions which lead to an intermittent error. This information is often the key to tracing such problems down to their roots.
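One simple example of "sufficient detail": if a test uses randomized data, logging the seed makes an intermittent failure reproducible on demand. A hedged Python sketch, with the function and log format invented for illustration:

```python
import logging
import random

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("flaky")

# Sketch: record the seed that drove a randomized test, so an
# intermittent failure can be replayed deterministically from the log.
def run_randomized_test(seed=None):
    if seed is None:
        seed = random.randrange(2**32)
    logger.info("test seed: %d", seed)
    rng = random.Random(seed)
    inputs = [rng.randint(0, 100) for _ in range(5)]
    logger.info("generated inputs: %s", inputs)
    return seed, inputs

seed, inputs = run_randomized_test()
# Re-running with the logged seed regenerates the same inputs:
_, replay = run_randomized_test(seed)
assert replay == inputs
```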
Identifying regressions and tracking the history of closely related errors. We've all seen it happen—a bug that was fixed several builds previously suddenly turns up again. How can you be sure that it's the same bug, though, and not a new problem that simply looks like the earlier error?
A detailed testing log may allow you to spot key similarities and differences in the system state during and after the test, making it possible to distinguish between an old bug and a new but similar one. Testing log data may also be helpful in identifying a related group of errors, based on how they affect the state of the system.
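A rough illustration of how logged state supports that comparison: diff the state snapshots recorded with the two failure reports. The field names and values below are invented for the example:

```python
# Sketch: compare the logged system state from two failure reports to
# judge whether a "returning" bug really matches the old one.
def diff_states(old, new):
    keys = set(old) | set(new)
    return {k: (old.get(k), new.get(k))
            for k in keys if old.get(k) != new.get(k)}

old_failure = {"heap_mb": 512, "open_files": 240, "error_code": "E_TIMEOUT"}
new_failure = {"heap_mb": 2048, "open_files": 241, "error_code": "E_TIMEOUT"}

differences = diff_states(old_failure, new_failure)
# Same error code, but very different memory pressure at the time of
# failure - a hint that this may be a new bug wearing an old face.
```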
What's the best overall strategy for dealing with testing log data?
Ultimately, you need to use the logging tools (and the configurations for those tools) which best suit your test environment and testing software, as well as your organization's specific testing needs. The best place to start is by taking full advantage of your testing system's built-in logging features, which, depending on the test system itself, may provide all of the functionality, flexibility, and integration that you need. When this is the case, you can simply configure the test system's logging features as required. If you find that you need logging capabilities which the test system does not provide, you may want to consider integrating third-party test logging tools or services with your testing system to provide the required features.
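As one concrete example of configuring a test system's built-in logging, pytest exposes logging options that can be set in its configuration file. The file path and levels below are placeholders to adapt:

```ini
# Example pytest.ini: stream logs to the console during runs,
# and keep a more detailed file log for later analysis.
[pytest]
log_cli = true
log_cli_level = INFO
log_file = logs/pytest.log
log_file_level = DEBUG
log_file_format = %(asctime)s %(levelname)s %(name)s %(message)s
```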
You may also want to consider setting up a system for full integration of all of your logs—not just in testing, but along the entire length of your development and operations delivery chain. Full log integration provides a number of advantages. It can allow you to easily compare your test environment with your software's actual operating conditions, as well as check system values logged during testing against those logged during operation, for example.
If you use a log integration tool that includes an overall logging dashboard, you can generally get a quick overview of log data from a selection of sources, often with drill-down capabilities for focusing on individual incidents, results, or types of data (test logs, combined with specific operations logs, for example). You can use a logging dashboard not just to organize, view, and search logs, but also to identify relationships between log data (by way of charts, graphs, and other means of visually representing information) which might not otherwise be apparent.
Adding testing logs to an overall system of integrated logging helps to keep them from becoming just another near-useless mess of raw data cluttering up your system. This is a serious consideration, since one of the most frequently expressed objections to test logging is that the process of searching through testing logs for useful information can consume considerable time and resources without producing any useful results.
Therefore, even if you do not integrate your testing logs with other logs, or make use of a logging dashboard, it is important to set up some kind of system for extracting useful data from testing logs quickly, efficiently, and accurately. A variety of scripted log analysis tools (both open-source and proprietary) are available. You can also create custom, in-house log analysis scripts.
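As a sketch of such a script, here is a short Python pass that counts failures per test across many runs, in order to surface candidate intermittent failures. The log-line format is invented for illustration; a real script would match your log's actual layout:

```python
import re
from collections import Counter

# Illustrative pattern for lines like "2024-01-01T10:00:01 FAIL test_checkout".
FAIL_PATTERN = re.compile(r"FAIL\s+(?P<test>\S+)")

# Count how often each test appears as a failure in the log.
def count_failures(log_lines):
    counts = Counter()
    for line in log_lines:
        match = FAIL_PATTERN.search(line)
        if match:
            counts[match.group("test")] += 1
    return counts

log = [
    "2024-01-01T10:00:00 PASS test_login",
    "2024-01-01T10:00:01 FAIL test_checkout",
    "2024-01-02T10:00:01 FAIL test_checkout",
    "2024-01-02T10:00:02 PASS test_checkout",
]
failures = count_failures(log)
# test_checkout failed in 2 of its 3 runs - a candidate intermittent failure.
```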
You should not allow testing log data (or any other kind of log data) to simply sit in storage, doing nothing. Most log data has some kind of value. If you analyze it and act on what you find, you can use it to improve the quality of your software and the efficiency of your entire delivery chain.
Michael Churchman started as a scriptwriter, editor, and producer during the anything-goes early years of the game industry. He spent much of the '90s in the high-pressure bundled software industry, where the move from waterfall to faster release was well under way, and near-continuous release cycles and automated deployment were already de facto standards. During that time he developed a semi-automated system for managing localization in over fifteen languages. For the past ten years, he has been involved in the analysis of software development processes and related engineering management issues.