How often should you parallel test? If that sounds like a trick question, maybe it is. In this post, we'll let you in on the "trick" part of the question, and then we'll talk about what really matters when it comes to when and how often you should parallel test.
First, the trick. It lies in what parallel testing is, and more to the point, what it isn't.
What is parallel testing? The term "parallel testing" is generic and rather broad, but it typically refers to automated systems for testing multiple applications or components simultaneously, with each application or component tested on a different computer. It is sometimes also used to describe automated testing of a single application or component on multiple platforms. The test computers can be individual hardware units or, more typically, separate virtual machines. In all cases, however, the combination of automation and multiple test systems makes it possible to run many more tests than would be practical with serial testing, and it cuts the time required for testing to a fraction of that needed for the equivalent serial tests. The key points to keep in mind about parallel testing are these:
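To make the idea concrete, here is a minimal sketch of the core mechanism: independent, automated test cases dispatched to multiple workers at once. This is a toy illustration only; the test names are hypothetical, and a real parallel testing framework would dispatch to separate machines or virtual machines, with threads standing in for them here.

```python
# Toy sketch of parallel testing: independent test cases run at the
# same time on separate workers. Threads stand in for the separate
# machines or VMs a real framework would use.
from concurrent.futures import ThreadPoolExecutor

# Hypothetical automated test cases, each returning (name, passed).
def check_login():    return ("login", True)
def check_checkout(): return ("checkout", True)
def check_search():   return ("search", True)

tests = [check_login, check_checkout, check_search]

# One worker per test; all tests start at roughly the same moment.
with ThreadPoolExecutor(max_workers=len(tests)) as pool:
    results = dict(pool.map(lambda t: t(), tests))

print(results)
```

Because the tests share no state, none of them has to wait for another to finish, which is exactly what lets the total wall-clock time shrink as workers are added.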
Parallel testing is a framework for testing. It consists basically of a system for controlling and running automated tests on multiple computers, the automated test scripts, the test systems, and reports of the test results. The framework can accommodate most, if not all, of the specific types of test that you are likely to run.
Parallel testing is, more than anything, the logical outcome of applying current IT technology to testing. It is coming into widespread use largely because it is now practical and relatively easy to run parallel tests on multiple systems. As is so often the case when new technology brings new capabilities, people are quick to take advantage of these capabilities, once they understand what is possible.
Parallel testing is in many ways a replacement for sequential testing, and for manual testing in general. Sequential and manual testing have dominated software QA since the early days of computing, but not because they offer any real advantages. For a long time, they were the only ways to test. With no feasible alternative, people made the best of the techniques that were available. The advantages of parallel testing (in terms of time, scope, and cost) are so great, however, that it is well on its way to becoming the new testing standard.
It is also important to keep in mind what parallel testing is not:
Parallel testing is not a subset of the software testing regime, tied to specific functions or conditions. Distributed testing (with which parallel testing is sometimes confused) also uses multiple computers, but its purpose is to test the interaction between components. As we noted above, parallel testing is a framework (which can include distributed testing, if appropriate), rather than a type of test.
Parallel testing is not a set of spot-checks. It makes no sense to add parallel testing to your test regime as a sort of afterthought, just to catch anything that might turn up. As a testing framework, it should be able to contain almost all of your existing sequential tests, including both thorough testing and spot-checks.
Parallel testing is not an add-on to your testing regime to meet new testing requirements or follow current trends. If you're adding some parallel tests because higher-ups told you to, or because you want to keep up with the latest trend, you're likely to miss almost everything that parallel testing can do for you.
That, then, is the trick part of the question. Parallel testing is not just an add-on to your current system of sequential testing, any more than computers are an add-on to pencil-and-paper calculation, and for much the same reason: it is a new framework, and a radical extension of what is possible.
So, what is important about how often you parallel test? In many ways, it really isn't a matter of how often, but a matter of when - when the required tests can be done in parallel, when parallel testing will be more productive than sequential testing, and when parallel testing will cost less than sequential testing. And the truth is that when parallel testing will do the job, it is almost invariably much more productive and cost-effective than the equivalent set of tests in a sequential-testing environment.
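The productivity claim is easy to check with back-of-the-envelope arithmetic: serial time is the sum of all test durations, while parallel wall-clock time is roughly the number of batches times the duration of a batch. The test counts, durations, and worker count below are purely hypothetical, chosen to keep the numbers round.

```python
# Back-of-the-envelope serial vs. parallel timing. All figures below
# (200 tests, 30 s each, 20 workers) are assumed for illustration.
import math

test_durations = [30] * 200   # 200 automated tests, ~30 s each
workers = 20                  # 20 parallel test machines/VMs

# Serial: every test waits for the previous one.
serial_time = sum(test_durations)                              # 6000 s

# Parallel (idealized): tests run in batches of `workers`.
batches = math.ceil(len(test_durations) / workers)             # 10 batches
parallel_time = batches * max(test_durations)                  # 300 s

print(f"serial: {serial_time} s, parallel: {parallel_time} s")
```

Even allowing for setup and coordination overhead that this idealized model ignores, a 100-minute serial run collapsing to about 5 minutes is the kind of difference that changes how often you can afford to test at all.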
You can break it down like this:
Analyze your existing test regime, break it down into the smallest logical units, and reorganize it for fully automated parallel testing.
Are there any tests left over, that is, tests that must be performed sequentially (or manually), outside of the parallel testing framework? Be careful in determining this: some tests may simply need to be reconceived or restructured before they will fit into the parallel framework.
Separate out the tests that genuinely cannot or should not be done in parallel, and schedule those as required. In effect, you will be treating those tests as add-ons to your standard (parallel) testing regime.
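The triage above can be sketched in a few lines: tag each test as parallel-safe or not, run the parallel-safe majority concurrently, and run the leftovers serially as add-ons. The test names and the `parallel_safe` flag are hypothetical; a real regime would derive the flag from an analysis of shared state and ordering dependencies.

```python
# Sketch of the test-regime triage: parallel-safe tests run
# concurrently, the rest run serially afterward. All test names
# and flags here are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor

tests = [
    {"name": "unit_math",    "parallel_safe": True,  "run": lambda: "pass"},
    {"name": "unit_parser",  "parallel_safe": True,  "run": lambda: "pass"},
    {"name": "db_migration", "parallel_safe": False, "run": lambda: "pass"},
]

parallel   = [t for t in tests if t["parallel_safe"]]
sequential = [t for t in tests if not t["parallel_safe"]]

results = {}
with ThreadPoolExecutor() as pool:
    for t, outcome in zip(parallel, pool.map(lambda t: t["run"](), parallel)):
        results[t["name"]] = outcome

for t in sequential:            # leftover tests, run in order as add-ons
    results[t["name"]] = t["run"]()

print(results)
```

The point of the structure is the ratio: after the analysis step, the sequential bucket should be small, so almost all of the regime benefits from the parallel speedup.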
So there it is. Welcome to the new world. Parallel testing is now the rule, not an optional, "Should we toss in a few of these?" add-on. And in this new world, the real question may be: how often should you go out of your way to perform time- and labor-consuming sequential tests, if you need to do them at all?
Michael Churchman started as a scriptwriter, editor, and producer during the anything-goes early years of the game industry. He spent much of the '90s in the high-pressure bundled software industry, where the move from waterfall to faster release was well under way, and near-continuous release cycles and automated deployment were already de facto standards. During that time he developed a semi-automated system for managing localization in over fifteen languages. For the past ten years, he has been involved in the analysis of software development processes and related engineering management issues.