How to Write Less Chatty Selenium Tests

Posted Jan 25th, 2022

“Let’s get to the point” is something we’d all love to say in certain situations. The talkative restaurant server. The aunt who tells the same story over and over. The cooking blog that tells the author’s entire life story before the recipe. These people love to talk, but they can’t read the room. They make their audience work extra hard to find the point, like a needle in a haystack.

Chattiness benefits no one, including software development teams. If your Selenium test doesn’t run quickly, smoothly, and efficiently – or if it fails entirely – then you may have a chatty test.

This article explores how to minimize chattiness in automated tests so you can optimize the speed and quality of your web app testing process.

What Does a Chatty Automated Test Look Like?

Does your Selenium, Appium, or other automated test contain more than one assertion? Then you have a chatty test. The more assertions a test contains, the longer it takes to run, which adds latency to your overall test process.

How to Fix a Chatty Test

Here are some strategies and best practices for designing tests that keep the chatter to a minimum.

Write Selenium Test Scripts to Be Small, Atomic, and Autonomous

The best way to fix a chatty test is to break each test into smaller chunks. When you design your tests, each test should tick the following boxes:

  1. Small

  2. Atomic

  3. Autonomous

Small
Each test should be short, simple, and succinct. Keeping your Selenium tests small ensures that your test suite runs efficiently and delivers results faster. For example, if you have a test suite with 100 tests running concurrently on 100 VMs, then the time it takes to run the entire suite equals the duration of the longest/slowest test case in the suite. If 99 of those tests take around two minutes each and one takes five minutes, then your test suite will take five minutes to run.
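Under full concurrency, the suite's wall-clock time is simply its slowest test, which a quick sketch makes concrete (the durations here are illustrative):

```python
# With one VM per test, the whole suite finishes when the slowest test does.
durations_min = [2] * 99 + [5]  # 99 two-minute tests plus one five-minute test

suite_runtime = max(durations_min)  # wall-clock time, not the sum
print(suite_runtime)  # → 5
```

So shaving time off the slowest test, or splitting it into smaller tests, is what actually shortens the suite.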

Atomic
Each test should contain only one assertion. In other words, each test should verify exactly one thing. This also makes it easier to diagnose what went wrong, and what needs fixing, when a test fails.
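The contrast is easiest to see side by side. Here is a pure-Python sketch (no browser required); `checkout_totals` is a hypothetical stand-in for values you would scrape from the page with Selenium:

```python
# Hypothetical checkout calculation standing in for values read off the page.
def checkout_totals(subtotal, tax_rate):
    tax = round(subtotal * tax_rate, 2)
    return {"subtotal": subtotal, "tax": tax, "total": round(subtotal + tax, 2)}

# Chatty: one test, three assertions. If the first fails, the rest never run,
# and the failure doesn't tell you at a glance which behavior broke.
def test_checkout_chatty():
    totals = checkout_totals(100.00, 0.08)
    assert totals["subtotal"] == 100.00
    assert totals["tax"] == 8.00
    assert totals["total"] == 108.00

# Atomic: one assertion per test. Each failure points at exactly one behavior.
def test_subtotal():
    assert checkout_totals(100.00, 0.08)["subtotal"] == 100.00

def test_tax():
    assert checkout_totals(100.00, 0.08)["tax"] == 8.00

def test_total():
    assert checkout_totals(100.00, 0.08)["total"] == 108.00
```

The atomic versions also run independently, so a test runner can distribute them across machines.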

Autonomous
An autonomous test runs independently of other tests and doesn’t depend on the results of another test to run successfully. Each Selenium test script should test against its own data so it doesn’t create conflicts with other tests over the same data. For example, if a test script assumes a certain user exists in the database, then the script should create that user in the database before it runs.
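One way to sketch this pattern: each test creates a uniquely named user instead of assuming a shared one exists. The in-memory dict below is a stand-in for the application database; in a real Selenium suite this would be a call to a test-data API or fixture factory:

```python
import uuid

# In-memory dict standing in for the application database (an assumption for
# this sketch); a real suite would hit a test-data endpoint or DB fixture.
fake_db = {}

def create_user(db):
    """Create a uniquely named user so parallel tests never collide on data."""
    username = f"test-user-{uuid.uuid4().hex[:8]}"
    db[username] = {"username": username, "active": True}
    return username

def test_user_can_be_deactivated():
    # Arrange: the test creates the data it depends on, rather than
    # assuming some shared user already exists.
    username = create_user(fake_db)
    # Act: deactivate the user (here directly; in a real test, via the UI).
    fake_db[username]["active"] = False
    # Assert: one assertion, on this test's own data.
    assert fake_db[username]["active"] is False
```

Because every test owns its data, tests can run in any order, or in parallel, without stepping on each other.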

Add Visual Regression Tests to Your Suite

Another way to accelerate debugging is to add visual regression tests to your testing strategy. Not only are visual tests a great way to supplement your functional testing, but they can also make your test suite run faster by replacing multiple assertions with a single visual snapshot. This essentially lets you add shortcuts for more complex tasks in Selenium and speed up the run time of your tests.

For example, suppose your e-commerce app has a test that makes sure everything on the checkout page works correctly. The test has multiple assertions verifying that the order totals on the page are calculated correctly (e.g., total, subtotal, tax). Each assertion is verified one by one, which makes the test take longer to run and increases the chances of failure. You can replace all of these assertions with one visual snapshot: delete the assertions from your code and get the same functional coverage by comparing the new snapshot to the previous one. You can learn more about Sauce Visual here, including how quickly you can add visual testing to your existing Selenium test script.
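The idea can be sketched in miniature: capture the page's rendered totals as one value and compare it against a stored baseline in a single assertion. A real visual testing tool such as Sauce Visual compares screenshots rather than dicts, and the helper names below are illustrative, not a real API:

```python
# Toy "baseline snapshot": what the checkout page looked like last time.
BASELINE_SNAPSHOT = {"subtotal": "100.00", "tax": "8.00", "total": "108.00"}

def capture_checkout_snapshot():
    # Hypothetical stand-in for rendering the checkout page and capturing
    # its current state (a visual tool would capture pixels instead).
    return {"subtotal": "100.00", "tax": "8.00", "total": "108.00"}

def test_checkout_page_visual():
    # One snapshot comparison replaces three separate value assertions.
    assert capture_checkout_snapshot() == BASELINE_SNAPSHOT
```

If any total on the page changes, the single comparison fails and the diff shows exactly what moved.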

Conclusion

To get the best results from your testing process as quickly as possible, you need to optimize each test to run smoothly and efficiently. This starts with minimizing test chatter. Tests that are small, contain only one assertion, and run independently of other tests will run faster and be less likely to fail. And if a test does fail, you can more quickly diagnose what went wrong because you don’t have to sift through as much noise.


Written by

Erin Conrad


Topics

Automated Testing · Selenium · Appium · Cross-browser testing
