Posted August 27, 2020

Ten More Commandments Of Automation

In this article, Paul Grizzaffi highlights 10 commandments of automation that he's learned and adopted throughout his career.

Go ahead, search the interwebs. There are more posts and articles on “The Ten Commandments of Test Automation” than you can shake a test case at. Go ahead…I’ll wait.

Welcome back!

To set the stage, I have not read any of those articles. Well, more accurately, I’ve not read any of them recently; most of them I’ve not read at all. I probably read one or two of them in the “before times,” but I don’t remember any of the specific commandments. Any of the ones I didn’t already know and found appropriate, I probably absorbed into my learnings and approaches long ago. I make this point not to belittle the other posts, but to highlight them in the highly likely case that some of the following “commandments” have been stated before; I have not copied them and if I repeat them, I’m hoping to amplify them. I also want to introduce some ideas that you’ve not yet considered.

As such, I will not be presenting THE Ten Commandments of Test Automation. Instead, I’ll be presenting Ten MORE Commandments of Test Automation. And away we go…

I - Thou shalt not only automate "from test cases"

There is a common misconception that automation for testing must necessarily be derived from test cases: automators take an existing, or newly written, test case and turn it into an automated test script. This is called the automation drive-thru.

While there can be value in this approach, other approaches can provide similar or better value. By expanding the definition of automation beyond test-case-to-test-script to “the judicious application of technology to help humans do their jobs,” we can exploit the power of computers to do the tasks for which they are best suited, leaving the humans—the testers—to do the remaining tasks. Fortunately, much of what humans are good at, computers are bad at, and vice versa.
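For instance, under this broader definition, a small data-seeding utility is automation even though it isn’t a test script; it does the tedious setup so a tester can spend time actually exploring. Here’s a minimal sketch, assuming a hypothetical REST endpoint for creating users:

```python
# A minimal sketch of "automation beyond test scripts": bulk-create disposable
# users so a human tester can focus on exploration. The endpoint and payload
# are assumptions for illustration, not a real API.
import requests

BASE_URL = "https://test-env.example.com/api"  # assumed test-environment URL

def seed_test_users(count: int) -> list:
    """Create `count` throwaway users and return their IDs."""
    user_ids = []
    for n in range(count):
        response = requests.post(
            f"{BASE_URL}/users",
            json={"name": f"tester-{n}", "role": "trial"},
            timeout=10,
        )
        response.raise_for_status()  # fail loudly if the environment is unhealthy
        user_ids.append(response.json()["id"])
    return user_ids

if __name__ == "__main__":
    print(seed_test_users(5))
```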

II - Thou shalt treat automation development as software development

Automation development is software development. Even if we are using a drag-and-drop or record-and-playback interface to create that automation, somewhere, in the stack, under the hood or behind the curtain, there is code sequenced by our actions. We must start treating our automation initiatives as software development initiatives, lest we end up in a quagmire of unsustainability and early project death.

Treating it as software development means we must account for most, if not all, of the same activities and processes that application developers require. These include:

  • Design – we have to decide what to implement and how to implement it so that it’s maintainable and supportable.

  • Implementation – we must write the code.

  • Storage – the code and related artifacts need to be stored somewhere.

  • Testing – Test the tests? Absolutely! We must have sufficient confidence that the automation behaves the way we want it to. If we don’t trust the automation, it’s useless; see the sketch after this list.

  • Bugs – All software has bugs; automation, being software, is no different. Testing will help but will not catch all bugs. Allow time in the schedule for bug fixes.

  • Logs – Logs are the lifeline of automation. Without them, we can neither understand what the automation is doing nor fix automation when it’s broken. Additionally, we wouldn’t be able to tell when there’s an issue with the automation as opposed to when there is an issue with the software being tested.
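To make the “Test the tests” point concrete, here’s a minimal sketch in pytest style; the helper and the price formats it handles are illustrative assumptions. We unit test a tiny parsing helper that our scripts rely on, so that a failing script points at the application, not at the helper:

```python
# Sketch: unit-testing an automation helper so we can trust the scripts built on it.
# The helper and the price formats it handles are assumed for illustration.

def parse_price(text: str) -> float:
    """Convert a UI price string like '$1,234.50' into a float."""
    return float(text.strip().lstrip("$").replace(",", ""))

def test_parse_price_strips_currency_symbol_and_commas():
    assert parse_price("$1,234.50") == 1234.50

def test_parse_price_accepts_plain_numbers():
    assert parse_price("42") == 42.0
```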

III - Thou shalt follow appropriate coding standards and idioms

In keeping with treating automation as software, we must apply appropriate implementation practices. Each tool or language has its own idioms and its own gotchas, but generally accepted approaches to design and implementation are usually appropriate. This article on encapsulation and abstraction provides a sample implementation that can be fodder for other implementations that are specific to our context.
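As one illustration of encapsulation and abstraction in an automation codebase (my own sketch, not the article’s implementation), here’s a page-object-style class using Selenium; the page, its locators, and the class name are assumptions for the example:

```python
# Sketch: a page object keeps locators and interaction details in one place,
# so tests never touch raw selectors. Page and locators are assumed examples.
from selenium.webdriver.common.by import By

class LoginPage:
    USERNAME = (By.ID, "username")                       # assumed locator
    PASSWORD = (By.ID, "password")                       # assumed locator
    SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")  # assumed locator

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username: str, password: str) -> None:
        # If the UI changes, only this class changes, not every test that logs in.
        self.driver.find_element(*self.USERNAME).send_keys(username)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
```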

IV - Thou shalt consider maintenance and upkeep

Software is neither perfect nor complete; this is no different for automation. There will be bugs, and as the application we are testing changes, our automation must change with it. We can’t prevent these bugs and changes, but we can develop our software in ways that reduce the effort it takes to support and maintain it; we must also allocate time for these activities. This article and this blog post shed some light on factors that are helpful when addressing anticipated maintenance.
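One small, hedged example of designing for maintenance: centralizing retry policy in a single helper, so when timing behavior changes we tune one function instead of touching every script. The names and defaults here are illustrative:

```python
# Sketch: one retry helper means one place to tune flakiness policy later.
import time

def retry(action, attempts: int = 3, delay_seconds: float = 1.0):
    """Run `action` up to `attempts` times, pausing between tries."""
    last_error = None
    for _ in range(attempts):
        try:
            return action()
        except Exception as error:  # broad on purpose; this is a sketch
            last_error = error
            time.sleep(delay_seconds)
    raise last_error
```

A script would call retry(lambda: page.log_in("user", "pass")) rather than hand-rolling its own loop, keeping the policy in one maintainable place.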

V - Thou shalt not make scripts dependent upon each other

Creating a test script that depends upon another script’s results is generally a strong anti-pattern. If scripts depend upon each other, they cannot be run singly, which makes debugging automation and app problems more time-consuming. Additionally, scripts that depend on other scripts cannot run in parallel with the scripts on which they depend.

There is a corner case in which all other scripts depend on the same, single script; this single script generally performs some setup of the test environment, test data, etc. This case is increasingly rare when using modern automation and continuous deployment frameworks, but it may still be appropriate when such frameworks are unavailable or unsuitable for a specific automation endeavor.
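In pytest terms, one way to get that shared setup without chaining scripts together is a session-scoped fixture; each test remains independently runnable and parallelizable. The fixture body below is an assumed sketch:

```python
# Sketch: shared setup via a pytest fixture instead of script-to-script dependency.
import pytest

@pytest.fixture(scope="session")
def test_environment():
    env = {"base_url": "https://test-env.example.com"}  # assumed setup work
    # ...provision test data, authenticate a service account, etc...
    yield env
    # ...teardown: remove provisioned data...

def test_homepage_loads(test_environment):
    # Runs on its own: pytest performs the setup on demand.
    assert test_environment["base_url"].startswith("https")

def test_search_is_reachable(test_environment):
    # Independent of test_homepage_loads; safe to run singly or in parallel.
    assert "test-env" in test_environment["base_url"]
```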

VI - Thou shalt emit appropriate logging and reports

As described in this blog post, appropriate logs, results, and error messages are critical to understanding, trusting, and maintaining automation. These logs are our detailed view into automation execution: what ran, how it ran, what failed, how it failed, how it succeeded, etc. That is, as long as we judiciously emit appropriate logs that deliver this information to us in a digestible manner.
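As a minimal sketch of what such logging can look like with Python’s standard logging module (the step names, levels, and messages are illustrative choices, not prescriptions):

```python
# Sketch: log enough to tell an automation failure from an application failure.
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("checkout_suite")  # assumed suite name

def run_step(name, action):
    """Run one automation step with logging on both success and failure."""
    log.info("starting step: %s", name)
    try:
        action()
        log.info("step passed: %s", name)
    except AssertionError:
        log.error("application check failed in step: %s", name)
        raise
    except Exception:
        log.exception("automation error (not an app failure) in step: %s", name)
        raise
```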

VII - Thou shalt influence testability and automatability

Testability, the extent to which an application or feature can be tested, and automatability, the extent to which testing-related activities can be performed by some automated mechanism, are not things that testers/QAs/QEs can implement ourselves, but they are certainly things we can influence. It’s incumbent on us to exert that influence: developers don’t always know what we require to perform testing and to appropriately create automation, so we must let them know. The blog posts here and here give some insight into aspects of testability and automatability.

VIII - Thou shalt not succumb to the sunk cost fallacy

Sometimes, we make mistakes. Sometimes, they are big mistakes. We do our best with the information we have at the time, but it doesn’t always work out. When something in our plan goes awry, our instinct is to try to fix it, and to keep trying to fix it. Sometimes, however, we should just start over; otherwise, we run the risk of “throwing good money after bad.”

This is called the sunk cost fallacy. Simply put, it is the thinking that because we’ve sunk so much money into an activity, we must spend more money to rehabilitate it and recover value for the money already spent, or sunk, into the endeavor.

Perhaps we are emotionally invested in our software “baby”; we spent so much time and money on it we can’t bear to throw it away and start over. Perhaps we are afraid; our leadership probably won’t be thrilled if we want to throw away the work that’s been done and “repeat the same work.” We must work to view situations like these through our business lenses instead of strictly emotional ones.

IX - Thou shalt beware of Rube Goldberg machines

Rube Goldberg machines are complex machines that perform comparatively simple tasks, such as the Self-Operating Napkin. In our automation world, building these kinds of machines can be a lot of fun and they can do very cool things, such as chaining unrelated tools together in order to complete a testing task. Their downsides include being hard to understand and maintain; we must take care not to create something that is more effort to maintain than it would be to do the automated task ourselves. This blog post gives more details about automation Rube Goldberg machines; this post gives some thoughts about the state of “being automated.”

X - Thou shalt not make test data depend on transient data

Recently, I came across a test script that, one day, just started failing. After some investigation, we determined the failure had happened because the month changed from July to August. The script had been written so that the oracle checked for dates in July, which was just fine while the script was running in July; the application being tested was returning dates in the current month, July. When the date changed to August, the application began returning dates in August, causing the script to fail.

In this case, instead of hardcoding dates in July, it would have been better to dynamically create the dates based on the current month.
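A minimal sketch of that fix: derive the expected month from the clock at run time rather than hardcoding “July.” The oracle function and its caller are assumed names:

```python
# Sketch: compute the expected month dynamically instead of hardcoding it.
from datetime import date

def expected_month_name() -> str:
    """Return the current month name, e.g. 'August', for the test oracle."""
    return date.today().strftime("%B")

def check_report_month(app_reported_month: str) -> None:
    # Passes in July, August, and every other month, with no annual surprises.
    expected = expected_month_name()
    assert app_reported_month == expected, (
        f"expected {expected}, got {app_reported_month}"
    )
```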

About the Author

As a Principal Automation Architect at Magenic, Paul Grizzaffi provides technology solutions to testing and QA organizations, including automation assessments and implementations. He is also involved in activities that benefit the broader testing community. Paul is an accomplished keynote speaker and writer who has presented at local and national conferences and meetings. He is an advisor to Software Test Professionals and STPCon, as well as a member of the Industry Advisory Board of the Advanced Research Center for Software Testing and Quality Assurance (STQA) at UT Dallas, where he is a frequent guest lecturer.
