Posted February 22, 2023

Slow Down! You’re Running Automation Way Too Fast

Speed in automation execution is important, but not at the cost of skipping checks of valuable, possibly critical, scenarios.

Ozzy Osbourne released his third studio album, Bark at the Moon, in late 1983. This was the first album to feature his new guitar player, Jake E. Lee. There are several versions of this album, including many from back in the ’80s. Some versions had different artwork while others had different songs. The European version had a song called Spiders; the U.S. version, instead, had a song called Slow Down. That was a shame, in my opinion. Though it took me many years to finally hear Spiders, I think the Europeans were (at the time) deprived of a great song. I loved the music, and I loved the lyrics, specifically, the chorus:

Slow down you’re moving way too fast

Slow down you know you’ll never last

Slow down your haste is making waste

Slow down and join the human race

As my frequent readers already know, I’m inspired by rock music, and I find many parallels between some of the lyrics and my career, so I often incorporate them into my writing and speaking. This song is no different, so, this time, I’m writing about automation execution speed and the value of slowing it down.

Do it faster! It’s always like that with us, isn’t it? Who’s us? Techies, software people, people trying to deliver technology to users. And we, as users, aren’t blameless, are we? We want our stuff faster. Get it to me faster and, when I have it, it better run fast.

The reality is, however, that as users, we want our technology to be fast when we want it to be fast, but we also want it to be patient with us when we’re not ready to move to the next step in a process flow. When would we not be ready? Picture these scenarios:

  • A user is in the middle of making an airline reservation but has to take a phone call; it seems reasonable that the user would like to resume the reservation from where they left off, once they conclude that phone call.

  • A customer support person is waiting for their current caller to find the necessary paperwork or execute a reboot of their computer.

  • A social media user reads a post that they would like to respond to but would like to include a link to one of their blog posts (oddly specific, right?); that user would expect that the post they just read would still be “in view” when they returned from finding the blog post link.

Now, I’m pretty sure the offending apps from the last bullet above behave the way they do on purpose because that behavior somehow drives additional revenue. You know the apps I’m talking about. Aside from “intentional state change” by our software, users should expect that their software has some ability to wait for them.

Our world is fraught with delays: answering the phone, answering the door, refereeing an issue with kids, stopping a dog from chewing electrical cords, cooking a meal, caring for a sick individual; the list goes on ad nauseam. Our software needs to account for at least some duration of inactivity. Also, not all delays are equal, especially when considering mobile devices. If we need to answer the door, perhaps we can remain in our app; if we stay at the door longer than our screen lock timeout, that might be a different state for the app. If we need to get a blog link from a different app, we’ll need to exit the first app, get the blog link from that different app, then return to the original app. This is yet a different scenario that each app might handle differently.

Do we test for this? I expect most of us do not.

I used to work in telecom. When testing a phone call between two entities, we often employed something we called a call hold time. People making phone calls seldom make the call, wait for the other person to answer, and then immediately hang up; unless you’re giving someone a secret signal, that is not a valuable use case for deeper testing. To make the testing scenarios closer to real life, we added the aforementioned call hold time, basically a “hard wait”, to cause the call to remain connected for a longer time. Longer call durations caused system resources to be further taxed, possibly revealing issues that would not be discovered without the hold time. Of course, waiting is a terrible bore. It’s also not usually a good use of a human’s (i.e., a tester’s) time. Do you know what’s good at waiting? Computers! So we automated these kinds of scenarios.

Hold on, aren’t we taught that hard waits are to be avoided in automation? Yes, we are taught that (or at least we’re supposed to be), but that is in the context of waiting for something to happen: an element to appear or disappear, an element to be enabled or disabled, a message to arrive, etc. In most cases, hard waits are discouraged.

What we’re talking about here is an intentional delay in a test script flow to imitate how a user uses the system. In fact, over the last few years, I’ve started implementing a function called HumanDelay(). This function is simply a wrapper around the programming language’s sleep/pause/wait function, but by giving it a context-specific name, users of the function can better understand its intent, namely “only use it where you want to imitate how a human might pause”. The added readability assists in code reviews; when I see a call to HumanDelay(), I know to pay attention to see if this hard wait is being used appropriately.
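
A minimal sketch of such a wrapper, in Python here, might look like the following; the language, name casing, and signature will vary by framework, so treat these as assumptions rather than a prescribed implementation:

    import time

    def HumanDelay(seconds):
        """Intentional, human-like pause.

        Deliberately a thin wrapper around time.sleep(); the descriptive
        name signals "this is deliberate think time," not a wait for an
        element, a message, or some other condition.
        """
        time.sleep(seconds)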

What does all this have to do with exiting and reentering an app, or answering the door in the middle of a software interaction? We can evolve the notion of call hold time to use HumanDelay() at strategic points in our test flows. If we know where our systems might be sensitive to delays or app exits, perhaps we can include these activities in some of our test runs; we may turn up previously unknown behaviors. If these are behaviors that we want to check frequently, automating them could provide some value. In general, having a human “just wait” has a higher cost than having a computer do that waiting for us.
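
As an illustration, a test flow might pause at a point where the system could time out a session. The skeleton below assumes the HumanDelay() wrapper sketched above is available; the commented-out application steps are placeholders rather than real API calls, and the 90-second duration is only an assumed value:

    def test_reservation_survives_interruption(driver):
        # ... search for a flight and start a reservation ...

        # Imitate the user stepping away to take a phone call. Choose a
        # duration that crosses the timeouts (session, screen lock, etc.)
        # your system actually has; 90 seconds is just an assumption.
        HumanDelay(90)

        # ... resume the reservation and assert nothing was lost ...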

We probably don’t want to include these think time delays on every one of our automation runs; there is something to be said for speedy execution in some scenarios. A great example of where execution speed is valuable is when we want to run a “smoke test suite” or “health check” on each deployment from an automated build-and-deploy system, such as a CI/CD pipeline. Teams want to know as quickly as possible if the current build and deployment are egregiously broken. Tests that include think time should probably be part of a non-gating run unless those think times are a required step in a core feature set. One way to handle the “I only want think time sometimes” problem is to make the think time conditional on a variable value that’s set at run time, or whatever is similar in your specific execution environment.
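
Continuing the Python sketch, one way to do that is to gate the pause on an environment variable; the variable name below is an assumption, so use whatever run-time switch your execution environment provides:

    import os
    import time

    # Assumed switch: set ENABLE_THINK_TIME=1 for runs that should include
    # human-like pauses; leave it unset for fast, gating pipeline runs.
    THINK_TIME_ENABLED = os.getenv("ENABLE_THINK_TIME", "0") == "1"

    def HumanDelay(seconds):
        """Pause only when think time is enabled for this run."""
        if THINK_TIME_ENABLED:
            time.sleep(seconds)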

Of course, we can get fancy:

  • If the binary nature of “waiting for X seconds” or “not waiting for X seconds” doesn’t fit your need, perhaps have the think time be a random value (see the sketch after this list). Be aware of the range of your random values; ensure that the minimum wait time is something sufficiently long for your needs and that the maximum wait time is not intolerably long for the value it may provide.

  •  At the risk of being absurd, if you’ve encapsulated your actions and behaviors sufficiently, you could insert potential think time delays after every action. They could be randomly turned on, have random durations, or both (insert diabolical laughter here). While I think this kind of implementation could be cool, I’m not currently aware of a context in which this level of “random think time” would provide sufficient value to counter the cost of implementation and maintenance.
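
To make the first bullet concrete, a bounded random think time might look like the sketch below; the minimum and maximum values are assumptions to be tuned to your context:

    import random
    import time

    # Assumed bounds: the floor keeps the pause long enough to matter,
    # the ceiling keeps overall run time tolerable.
    MIN_THINK_SECONDS = 5
    MAX_THINK_SECONDS = 30

    def RandomHumanDelay():
        """Pause for a human-like, but bounded, random duration."""
        time.sleep(random.uniform(MIN_THINK_SECONDS, MAX_THINK_SECONDS))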

Let’s use speed judiciously, appropriately. Perhaps our automation implementations should incorporate some additional user-like steps so we might, as the chorus from Slow Down stated, “join the human race”.

Like this? Catch me at an upcoming event!

About the Author

As a QE Automation Architect, Paul Grizzaffi is following his passion for providing technology solutions to testing, QE, and QA organizations, including automation assessments and implementations, as well as activities benefiting the broader testing community. An accomplished keynote speaker, international conference speaker, and writer, Paul has spoken at local and national conferences and meetings. He is an advisor to STPCon as well as a member of the Industry Advisory Board of the Advanced Research Center for Software Testing and Quality Assurance (STQA) at UT Dallas, where he is a frequent guest lecturer. In addition to spending time with his twins, Paul enjoys sharing his experiences and learning from other testing professionals, as well as reciting lyrics from ’80s metal songs; his mostly cogent thoughts can be read on his blog at responsibleautomation.wordpress.com.

