There's no doubt about it: a user interface (whether it's graphic or text-only) can be very nice, at least when you need to make decisions in real time or enter data on the spot.
But when you know exactly what you're going to do and how you're going to perform each step, and you have a set of tasks that you're likely to perform more than once or twice, any kind of user interface can slow you down, get in the way, and eventually become a maddening, time-wasting annoyance. This is nowhere more true than with testing, where complex and highly repetitive tasks are the norm, and there is nothing to be gained by waiting to enter an occasional command or piece of data.
This article explains the value of test scripts, and the circumstances under which you should consider using them to perform software tests.
I'll begin by asking a very basic question: How are you currently testing? Are you still testing manually? For most people, the answer is probably "yes, sometimes." There are situations in which you need to do at least some testing by hand (for example, if you're dealing with specialized hardware or software that does not lend itself to automated testing, or you need to test physical user input).
But sometimes, testing is done manually simply because that's the way that it has always been done, and nobody has gotten around to changing the process. Needless to say, this kind of manual testing is virtually always sequential as well, in part because there's really no sane way to manually conduct genuine parallel tests, but mostly because "that's how we've always done it."
There's really no point in that kind of testing these days. This is the second decade of the 21st century -- not the questionably good old days of the IBM 360, punch card machines, and three-day turnarounds. You can automate just about any kind of test these days -- put it in a script once, and after that, all you have to do is run the script. And while you're at it, you can parallelize most of your tests. Just think of all the time you'll save -- you might even be able to live a normal life!
Test scripts are the most fundamental level of test automation. Once you place a sequence of test steps in a script, you no longer need to perform them manually. Any further change to the test simply becomes a change to the script itself. But scripted tests are only the beginning of test automation. With a full set of scripts for individual tests, you can still find yourself stuck to the keyboard and monitor, entering the commands to run each script as it comes up, and watching for the output. This is automation, but only a half-measure kind of automation. You can do better.
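To make that concrete, here is a minimal sketch of what a scripted test can look like, written in Python. The myapp module and its login() function are hypothetical stand-ins for whatever you are actually testing; the point is simply that every step you would otherwise perform by hand becomes code you can rerun at will.

```python
# smoke_test.py -- a minimal scripted test; runnable with pytest or directly.
# The "myapp" module and its login() function are hypothetical placeholders
# for whatever application you are actually testing.
import myapp

def test_valid_login():
    # A step you might otherwise perform by hand: submit known-good credentials.
    session = myapp.login("test_user", "correct-password")
    assert session.is_authenticated

def test_invalid_login():
    # The matching negative case, captured once and rerun forever after.
    session = myapp.login("test_user", "wrong-password")
    assert not session.is_authenticated

if __name__ == "__main__":
    # Running the file directly executes both checks with no further input.
    test_valid_login()
    test_invalid_login()
    print("smoke_test: all checks passed")
```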
The next level of test automation is the test regime itself. Once you place a sequence of test operations in a script, the test itself becomes code, and you can run it the same way you would run any other code -- including from another script.
So you create a higher-level script for each set of tests that you want to run as a group; these scripts simply invoke the individual test scripts in the proper sequence. This is also the point where you can script things like parallel testing (which must be script-driven to accomplish much) and the automatic creation of virtual machines or containers for testing (itself a basic part of most parallel testing).
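Here is a sketch of what such a driver script might look like in Python: it runs several individual test scripts in parallel and reports their results. The script names are hypothetical placeholders, and in a real setup this is also where you might spin up the virtual machines or containers the tests run in.

```python
# run_suite.py -- a sketch of a driver script that runs individual test
# scripts in parallel and reports the results. The script names below are
# hypothetical placeholders for your own tests.
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

TEST_SCRIPTS = [
    "smoke_test.py",
    "login_test.py",
    "checkout_test.py",
]

def run_one(script):
    # Each test script runs as its own process; a nonzero exit code means failure.
    result = subprocess.run([sys.executable, script],
                            capture_output=True, text=True)
    return script, result.returncode, result.stdout + result.stderr

def main():
    failures = 0
    with ThreadPoolExecutor(max_workers=len(TEST_SCRIPTS)) as pool:
        for script, code, output in pool.map(run_one, TEST_SCRIPTS):
            print(f"[{'PASS' if code == 0 else 'FAIL'}] {script}")
            if code != 0:
                failures += 1
                print(output)
    # Propagate failure so that a higher-level script or CI job can react.
    sys.exit(1 if failures else 0)

if __name__ == "__main__":
    main()
```

A script like this can, in turn, be called by yet another script, or by a continuous integration job, which is exactly the point.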
At this point, you've reached the basic infrastructure level of testing automation. Your tests run from scripts, and for the most part, those scripts are run from other scripts, which also control the test environment and at least the basic test output. But even at this level, you may find yourself spending too much time managing your test system by means of some kind of user interface. How much more can you automate?
A lot. With the right tools, you can in fact automate testing to the point where under most circumstances, your main user-interface contact with your test system will consist of viewing automatically generated test reports. Everything else can be handled by the same software that manages your continuous delivery system.
Consider, for example, the Sauce OnDemand plugin for Jenkins. Testing by means of the Sauce Labs cloud-based test service can be (and very often is) fully automated at the test-infrastructure level. The Sauce Connect™ proxy server lets you connect your test site to the Sauce test facilities when that site sits behind a firewall (a process that, simply by the nature of proxy connections and firewalls, can be technically complex). One way to handle the connection is to use the Firefox-based Sauce Connect Launcher. This is the UI-based approach, and it can be useful under a variety of circumstances.
You can avoid the UI, however, by adding Sauce Connect to your Selenium test scripts. This is automation at the test-infrastructure level, but it is still contained within the test system itself, not integrated with the overall continuous delivery infrastructure.
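As a rough illustration, here is a sketch of a Selenium test pointed at Sauce Labs rather than a local browser, written in Python. The endpoint URL, capability names, and environment-variable names are the commonly documented ones, but treat them as assumptions to check against the current Sauce Labs documentation, since they vary by Selenium version and account region; the Sauce Connect tunnel itself would be started separately, for example by the wrapper script that launches the test.

```python
# sauce_test.py -- a sketch of a Selenium test that runs on Sauce Labs
# instead of a local browser. The endpoint, capability names, and
# environment-variable names are assumptions; check the current Sauce Labs
# docs for the values that match your account and Selenium version.
import os
from selenium import webdriver

username = os.environ["SAUCE_USERNAME"]        # assumed variable names
access_key = os.environ["SAUCE_ACCESS_KEY"]

# Classic Sauce OnDemand endpoint; region-specific hosts also exist.
sauce_url = f"https://{username}:{access_key}@ondemand.saucelabs.com/wd/hub"

capabilities = {
    "browserName": "chrome",
    "platform": "Windows 10",
    "name": "login page smoke test",   # appears in the Sauce dashboard
}

# Selenium 3-style call; newer Selenium versions pass an Options object
# instead of desired_capabilities.
driver = webdriver.Remote(command_executor=sauce_url,
                          desired_capabilities=capabilities)
try:
    # The site under test is a hypothetical example; with Sauce Connect
    # running, this URL could just as easily sit behind your firewall.
    driver.get("https://example.com/login")
    assert "Login" in driver.title
finally:
    driver.quit()
```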
This is where the Sauce OnDemand plugin comes in. If you are using Jenkins to manage your continuous delivery/continuous integration chain, the plugin lets you automate the setup (and teardown) of Sauce Connect and integrate Sauce reports with Jenkins: Jenkins initiates the Sauce Connect link and displays the Sauce output. This effectively integrates your Sauce tests with your continuous delivery infrastructure in a way that can be fully script-driven, with virtually no UI interaction on your part.
Can you avoid the UI when it comes to testing? Yes, you can. Will it make your job easier? Yes, it will. So what are you waiting for? Start automating, and start integrating!
Michael Churchman started as a scriptwriter, editor, and producer during the anything-goes early years of the game industry. He spent much of the '90s in the high-pressure bundled software industry, where the move from waterfall to faster release was well under way, and near-continuous release cycles and automated deployment were already de facto standards. During that time he developed a semi-automated system for managing localization in over fifteen languages. For the past ten years, he has been involved in the analysis of software development processes and related engineering management issues.