How To Curate Your Test Suite
Software testing is easy, isn't it? Just run some tests, get the results, and do... well, whatever you do with test results. What kind of tests? Just standard software tests, of course. What other kinds of testing are there?
Sound familiar? It should. It's an all-too-common approach to testing among software developers, even among those who have enough experience to know better. The trouble with this approach is that you aren't really testing - you're going through the motions of testing, in the hope that they will cover the necessary ground. It may produce useful results, or it may not. And you may get useful information out of those results, or that information may sit undiscovered, because you have no system for reliably identifying and extracting relevant test data.
The Cost of Generic Testing
Taking a generic approach to testing is also likely to cost you time and money, rather than saving them. If half the tests you run are irrelevant to your application or its method of deployment, then the hours and money spent on them are simply wasted; it would probably be more productive to take everybody in your department out for dim sum on the company tab once or twice a week (a practice which I highly recommend, by the way).
So, what should you be doing? What is the right approach to managing your software test suite?
If software testing isn't generic, what is it? It is unique and distinctive, somewhat like a collection of fine art. And like a fine art collection, it needs to be curated. What do we mean by "curated"? Your software test suite should be formulated, organized, and managed based on the unique character of your application, and on the conditions of its deployment and operation.
Your Application Really is a Special Snowflake
Every non-trivial application that has been developed is unique. Even if an application is very similar in overall purpose and function to a thousand other programs currently in use, the specific collection of components and interactions of which it is composed will be distinctive. And more often than not, it will include elements which are new, or which are so uncommon that they rarely occur together in combination.
This means that you need to select and configure tests to adequately cover the unique or distinctive elements of your application, including unusual interactions between otherwise common components. Beyond individual components and interactions, the overall collection of elements in your application will always form a unique whole, and must be tested as such. Even if you use nothing but generic, out-of-the-box tests, you need to configure them and filter their outputs in order to capture these unique characteristics, both at the component and overall application level.
Deployment is Never Generic
No two deployments are completely the same. Applications always run on individual machines, servers, or cloud services, each with their own characteristics. Cloud deployment may insulate applications from the underlying hardware, but no cloud service provides a truly generic environment.
And deployment is never just a matter of the platform on which your software runs; it is also a function of the conditions under which it operates. Who uses your software, and how do they use it? Where are your users located, when do they use it most, and what parts of it receive the heaviest use?
The Stress of Stress
Load and stress testing always need to be tailored to the actual conditions of operation. This means that you need to test for expected day-to-day conditions, expected peaks, and unexpected-but-possible overloads. None of this is generic; it always depends on the nature of your application, your market, and your user base. The only thing that generic tests can guarantee is that they will not capture the actual conditions under which your software operates.
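One way to make this concrete is to define the three classes of conditions as explicit, named scenarios rather than a single generic load figure. The sketch below is illustrative only; the scenario names and traffic numbers are assumptions, and a real suite would feed these into whatever load-generation tool you actually use.

```python
# A minimal sketch of tailoring load scenarios to actual operating
# conditions, rather than running one generic "N requests" test.
# All figures and scenario names here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class LoadScenario:
    name: str
    requests_per_second: int
    duration_seconds: int


# Day-to-day baseline, an expected seasonal peak, and an
# unexpected-but-possible overload (e.g., two demand cycles coinciding).
BASELINE = LoadScenario("baseline", requests_per_second=50, duration_seconds=600)
SEASONAL_PEAK = LoadScenario("seasonal_peak", requests_per_second=400, duration_seconds=300)
OVERLOAD = LoadScenario("coinciding_peaks", requests_per_second=900, duration_seconds=120)


def scenarios_to_run() -> list[LoadScenario]:
    """Every load-test run covers all three classes of conditions."""
    return [BASELINE, SEASONAL_PEAK, OVERLOAD]


for s in scenarios_to_run():
    print(f"{s.name}: {s.requests_per_second} req/s for {s.duration_seconds}s")
```

The point of naming the scenarios is that they become part of the curated suite: when your market or user base changes, you revise the numbers in one place instead of hunting through ad hoc test scripts.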
Know What You Want, Understand What You Need
The bottom line is that you need to understand what tests your software requires, and you need to know what information you want to extract from the test results. This is in essence what software test curation is all about. Software testing should always be a function of the characteristics and functions of your application, and the conditions of its deployment and operation.
If, for example, your online store's inventory system includes a size/color/style matrix, you need to test its interaction with the store's GUI, with the inventory and order databases, and with the shopping cart functions. If you expect transaction peaks based on multiple seasonal demand cycles, your load and stress tests should reflect conditions if and when those peaks coincide. If parts of your application depend on services hosted elsewhere, you should test your application's behavior when those services become temporarily unavailable.
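The online-store example above can be sketched in test form. Everything here is hypothetical - the `Inventory`, `Cart`, and `ServiceUnavailable` names stand in for whatever your application actually calls these things - but it shows the shape of the tests: exercise the variant matrix against the cart rather than a single happy-path item, and verify graceful degradation when a dependent service is down.

```python
# A hedged sketch of the examples above: checking that a size/color/style
# matrix interacts correctly with the shopping cart, and that the
# application degrades gracefully when an external service is unavailable.
# All class and function names here are hypothetical stand-ins.

class ServiceUnavailable(Exception):
    pass


class Inventory:
    def __init__(self, stock):
        # stock maps (size, color, style) -> units available
        self.stock = stock

    def available(self, size, color, style):
        return self.stock.get((size, color, style), 0)


class Cart:
    def __init__(self, inventory):
        self.inventory = inventory
        self.items = []

    def add(self, size, color, style):
        if self.inventory.available(size, color, style) < 1:
            raise ValueError("variant out of stock")
        self.items.append((size, color, style))


def recommendations(fetch):
    """Degrade to an empty list if the remote service is down."""
    try:
        return fetch()
    except ServiceUnavailable:
        return []


# Test cells of the variant matrix against the cart, not just one item.
inv = Inventory({("M", "red", "v-neck"): 3})
cart = Cart(inv)
cart.add("M", "red", "v-neck")
assert cart.items == [("M", "red", "v-neck")]

try:
    cart.add("S", "blue", "crew")  # an out-of-stock cell of the matrix
    raise AssertionError("expected ValueError")
except ValueError:
    pass


def service_down():
    raise ServiceUnavailable


assert recommendations(service_down) == []  # degrades, doesn't crash
```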
Becoming a Test Suite Curator
How can you move from a generic test suite to one that is curated? There are five basic steps:
1. Application- and deployment-centric analysis
Start by analyzing the architecture of your software and the expected conditions of its deployment and operation.
2. List key testing targets
These include elements and interactions which are not generic, or which may be subject to stress, as well as those which are essential to the basic purpose of the software.
3. Create a test schema
Set up a high-level testing plan based on your test target list. It should identify the kinds of tests to be performed, and the outputs that you need.
4. Set up the test suite
This includes configuring existing tests and creating new tests based on your schema. This is by far the most detailed and labor-intensive part of the process, but it will be much easier (and require less revision) if you do a thorough job with steps 1 and 2.
5. Set up a filtering and aggregation system for your test output
This is almost as important as setting up the tests themselves. You need to filter out test noise, extract key results, and aggregate them in a way that coherently combines relevant items. This filtered, aggregated data should be available in the form of reports, and ideally in dashboard format as well. (And needless to say, all raw test data should be logged.)
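Step 5 can be sketched as a small pipeline. This assumes test results arrive as simple records - the field names and statuses below are illustrative, not any real tool's output format - and shows the three operations: filter out noise, extract the key results, and aggregate them per component for a report or dashboard, while the raw records go to the log untouched.

```python
# A minimal sketch of step 5, assuming test results arrive as simple
# dicts (field names and statuses are illustrative assumptions).

from collections import defaultdict

raw_results = [
    {"component": "cart", "test": "add_item", "status": "pass", "ms": 12},
    {"component": "cart", "test": "checkout", "status": "fail", "ms": 340},
    {"component": "search", "test": "fuzzy_match", "status": "flaky", "ms": 95},
    {"component": "search", "test": "index_sync", "status": "pass", "ms": 60},
]


def filter_noise(results):
    """Drop known-flaky results from the report; raw data is still logged."""
    return [r for r in results if r["status"] != "flaky"]


def aggregate(results):
    """Combine relevant items per component for the report or dashboard."""
    summary = defaultdict(lambda: {"pass": 0, "fail": 0, "slowest_ms": 0})
    for r in results:
        s = summary[r["component"]]
        s[r["status"]] += 1
        s["slowest_ms"] = max(s["slowest_ms"], r["ms"])
    return dict(summary)


report = aggregate(filter_noise(raw_results))
print(report)
# Every raw result, including the filtered noise, still goes to the log.
```

The design choice worth noting is that filtering and aggregation are separate functions: what counts as "noise" changes as your application evolves, and keeping it isolated means you can revise it without touching the aggregation logic.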
What should you do then? Run your tests, pay close attention to the results, and be prepared to revise your test suite as often as necessary in response to changes to the software or its operating conditions.
And the time and money that you save by curating your test suite? You just may find that you can use it to take your department out to dim sum on a regular basis, with more than enough left over for new development projects.
Michael Churchman started as a scriptwriter, editor, and producer during the anything-goes early years of the game industry. He spent much of the ‘90s in the high-pressure bundled software industry, where the move from waterfall to faster release was well under way, and near-continuous release cycles and automated deployment were already de facto standards. During that time he developed a semi-automated system for managing localization in over fifteen languages. For the past ten years, he has been involved in the analysis of software development processes and related engineering management issues.