Posted November 30, 2016

Teach Automated Testing With More Show and Less Tell

Writing automated tests can be daunting. Teaching someone else is even more intimidating.

For many technology professionals, testing just doesn’t seem worthwhile and certainly isn’t as interesting as writing new code. Far too many organizations boast impressive-looking test suites that, on close examination, don’t add value. However, high-quality automated testing not only spares untold problems down the line, but also can be an enjoyable challenge to create. Ultimately, good automated testing comes from people who really care about the process.

So how do you help your colleagues see the light?

Teach using real systems, not toys

Test curriculum often begins by defining a simple system to test. You can see why: it’s hard enough to teach testing without also having to explain the code under test. Maybe that’s sensible for generic training, but it feels silly to test code that’s obviously correct. If you want to really engage students, test something that’s complex and changeable. Any production software will do, including code you can’t actually fix. Testing a competitor’s product instead of your own can be just as informative (and a lot more fun).

If a test can never fail, it won’t feel worth writing, even as a training exercise. Instead, test something that might break. Perhaps the most complex and changeable system of all is the modern Internet: every new OS release and every browser update has the potential to break sites in new and interesting ways. So rather than making a fake system to test, why not test your project’s website? For that, you’ll want to get to know Selenium, a robust and mature browser automation tool. Webpages might not be the easiest thing to test, but your students won’t be bored.
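To make that concrete, here’s a minimal sketch of a Selenium smoke test in Python. The URL and the element checks are placeholders, not anything from this post; swap in your own project’s pages and markup.

```python
# A minimal Selenium smoke test (a sketch; the URL and locators below
# are placeholders for your own project's site).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # requires a local Chrome/chromedriver setup
try:
    driver.get("https://example.com")  # replace with your project's URL

    # The page should load with the title we expect.
    assert "Example" in driver.title, f"Unexpected title: {driver.title!r}"

    # A key element should exist and say something; adjust to your markup.
    heading = driver.find_element(By.TAG_NAME, "h1")
    assert heading.text.strip(), "Page heading is empty"
finally:
    driver.quit()
```

Point a test like this at a live site you don’t control and it will eventually fail on its own, which is exactly the teaching moment you want.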

Don’t just require testing, demonstrate its value

It’s not unusual for managers to mandate testing, send employees to a class and assume the result will be fewer regressions and bugs. Unfortunately, that’s not how people work. Worse yet, managers will sometimes create incentives for building more tests, which encourages cranking them out as quickly as possible. Poorly thought-out testing is worse than no testing at all.

I interned at the National Weather Service, where we evaluated new instruments for observing atmospheric conditions. Whenever a new version of our system’s software came out, we’d spend about a week conducting an exhaustive regression test by hand. It was a tedious process, but we knew that critical bugs could cost people their lives. We didn’t need requirements for motivation. In fact, we voluntarily added requirements to our test procedure to catch tricky edge cases. For instance, some sensors were tripped up by icy conditions, so we added tests to spot exactly that. None of us wanted to wonder whether we could have prevented an airplane crash.

If you want to improve the quality of testing, figure out how testing can actually make a difference. If you have a customer support team, ask them to name a few problems your customers report. Then go back and see how many of those problems are currently being tested. How many of those problems were fixed once, but came back months or years later? Just the task of having your developers talk to support staff could be enough to renew a passion for automated testing.
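One way to close that loop is to turn each support-reported problem into a named regression test, so a bug fixed once stays fixed. Here’s a sketch; the ticket number, module, and function are all invented for illustration.

```python
# A sketch of a regression test named for the support ticket that
# prompted it. The ticket, module, and function are hypothetical.
from billing import calculate_invoice_total  # hypothetical code under test

def test_zero_quantity_line_item_regression_ticket_4271():
    """Ticket #4271 (hypothetical): a zero-quantity line item crashed the
    invoice total. It was fixed once; this test keeps it from coming back."""
    items = [
        {"price": 9.99, "quantity": 0},   # the input that triggered the bug
        {"price": 5.00, "quantity": 2},
    ]
    assert calculate_invoice_total(items) == 10.00
```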

Write tests that matter

As Microsoft’s Eric Gunnerson notes, “… We should lighten up on the ‘you should write tests for everything,’ because these expensive complex tests aren’t doing anybody any good.” A lot of the reason people get burnt out and fed up with testing is that they’ve been taught to test to completion: when every path has been tested, you can call it a day. Unfortunately, testing overkill has a cost. Every time you need to refactor production code, you’ll need to make matching changes to the testing code. That kind of exhaustive coverage is only appropriate when your code is so stable that significant changes invite disaster. For instance, flight software for a spacecraft will get locked down once it’s been verified. In those rare situations, the more tests the better.

But most shops write code that changes constantly, and testing every branch and every input slows down development. Instead, think about writing tests that ensure specific functionality. Generally, that means looking at what a user expects and testing for that. If you have an API, by all means test all the entry points and boundary-case parameters. But if you have an actively changing website, don’t handcuff your students with excessive testing.
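For the API case, boundary tests can stay cheap and focused. Below is a sketch using pytest’s parametrize; the list_users entry point and its limits are assumptions, not a real API.

```python
# A sketch of focused boundary testing for an API entry point, using
# pytest. The list_users function and its limits are assumed here.
import pytest

from api import list_users  # hypothetical: list_users(page, per_page)

@pytest.mark.parametrize("page, per_page", [
    (1, 1),    # smallest valid request
    (1, 100),  # assumed maximum page size
])
def test_boundary_requests_succeed(page, per_page):
    assert len(list_users(page=page, per_page=per_page)) <= per_page

@pytest.mark.parametrize("page, per_page", [
    (0, 10),   # pages assumed to be 1-indexed
    (1, 0),    # zero-sized page
    (1, 101),  # just past the assumed maximum
])
def test_out_of_range_requests_are_rejected(page, per_page):
    with pytest.raises(ValueError):  # assumed error contract
        list_users(page=page, per_page=per_page)
```

A handful of cases at the edges of the contract catches most of what exhaustive input coverage would, at a fraction of the maintenance cost.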

To summarize:

  1. Teach using real-world tests.

  2. Demonstrate real problems testing can solve.

  3. Just because you can automatically test everything doesn’t mean you should.

Jon Ericson has spent over 20 years working as a developer tending production systems. He supported an instrument testbed for the National Weather Service, took the night shift on a Space Shuttle mission ground data system, and was the science data processing system technical lead for a NASA/JPL Earth-orbiting spectrometer. In 2013, he shifted careers to be a Community Manager at Stack Overflow, where he bridges the gap between developers building the site and developers using the site. He believes the key to organizational success, like marital bliss, is communication.
