Posted June 16, 2016

Patterns and Coding Practices for Stable End-to-End GUI Tests

We all know the importance of the Test Automation Pyramid and why it makes sense to align our automated tests this way. Under that guiding principle, end-to-end GUI tests sit at the top: a considerably smaller number of tests than the other types (unit, integration, service tests), useful for verifying business workflows. In the book Agile Testing: A Practical Guide for Testers and Agile Teams, the authors explain the testing quadrants, where GUI tests fit in the grand scheme of things, how to rationalize their intent, and how to be smart about the overall quality strategy.

The intention of end-to-end (E2E) GUI tests is to verify “whether we build the right thing from the business perspective.”

Until some other technique evolves, WebDriver-based GUI tests are the best choice we have today. Because they mimic the end user's journey through the application, these tests naturally take an outside-in perspective. Testing from the surface of the application exposes them to many hidden factors: browser stability, browser performance, network speed and latency, the performance of the sub-systems underneath the GUI, and so on. With so many moving parts behind every small action, GUI tests are non-deterministic by nature. How do we win the game? The following are the top five coding practices that helped me achieve success.

Adopt a page object pattern

Our tests need to interact with the HTML elements on the web page. If we mix HTML element locators into the test cases, the tests will be brittle and break whenever the UI changes. When selectors are scattered and tangled with the logic that interacts with the page, maintenance costs climb and test logic becomes hard to reuse. Instead, separate the test from the logic that interacts with the page. A typical design layers the code: test → page object → UI map → WebDriver.

You’ll want to define responsibilities for these layers as well (this post from Martin Fowler explains the process in detail).

But in short —

The page object should:

  • Provide an API to perform actions on the application

  • Provide access to the state of the underlying page

  • Encapsulate and hide the details of UI/HTML structure from the rest of the world (i.e. tests)

The UI Map should:

  • Serve as a UI elements repository for the page object

  • Encourage reusability by creating a standard way of accessing UI elements, for easier maintenance

  • Abstract UI element finder logic from consumers
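
To make the split concrete, here is a minimal sketch in Java with Selenium WebDriver. The class names and locators (BookSearchUiMap, BookSearchPage, the By selectors) are hypothetical, not taken from a real page:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // UI Map: the single place that knows how elements are located.
    // These locators are illustrative placeholders, not real selectors.
    class BookSearchUiMap {
        static final By CATEGORY_DROPDOWN = By.id("searchDropdownBox");
        static final By SEARCH_BOX = By.id("searchTextBox");
        static final By SEARCH_BUTTON = By.cssSelector("input[type='submit']");
    }

    // Page object: exposes actions and page state to the tests and
    // hides the HTML structure behind the UI map.
    class BookSearchPage {
        private final WebDriver driver;

        BookSearchPage(WebDriver driver) {
            this.driver = driver;
        }

        public boolean isLoaded() {
            return driver.findElements(BookSearchUiMap.SEARCH_BOX).size() > 0;
        }
    }

If the markup changes, only the UI map needs to change; the page object's API, and every test built on it, stays stable.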

Follow the “Tell, Don’t Ask” pattern

Now that the responsibilities of tests and page objects have been defined, what is the protocol between them? Where should the logic live that performs the actions behind the behavior a test describes?

For example, below is the behavior we want to automate:

Given the user is logged into Amazon
When the user searches for the book “Mastering Ansible”
Then the system should perform the search and display results

While automating this, searching for a book is a multi-step process:

  1. Select “books” from the drop-down

  2. Provide the search string to the search box

  3. Perform search

We could perform these steps right from the test. However, the “Tell, Don’t Ask” principle recommends that instead of asking the page object for data and then acting on it, we should tell the page object what to do. This lets us move the logic (steps 1 through 3 above) into the page object, keeping the data and the behavior that operates on it together. That co-location helps developers understand the code and makes maintenance easier.
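
Here is a minimal sketch of what that looks like, reusing the hypothetical UI map from the earlier example (SearchResultsPage is sketched in the next section). The test calls one intention-revealing method and never sees the three steps:

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.support.ui.Select;

    class BookSearchPage {
        private final WebDriver driver;

        BookSearchPage(WebDriver driver) {
            this.driver = driver;
        }

        // "Tell, don't ask": the test tells the page object to search;
        // the drop-down, search box, and submit mechanics stay in here.
        public SearchResultsPage searchForBook(String title) {
            new Select(driver.findElement(BookSearchUiMap.CATEGORY_DROPDOWN))
                    .selectByVisibleText("Books");
            driver.findElement(BookSearchUiMap.SEARCH_BOX).sendKeys(title);
            driver.findElement(BookSearchUiMap.SEARCH_BUTTON).click();
            return new SearchResultsPage(driver);
        }
    }

The test then reads like the behavior it verifies: searchPage.searchForBook("Mastering Ansible").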

Follow the Single Responsibility Principle

Now that the page object owns the logic and data, where should we do the assertions? Who is responsible for assertions, exception handling and reporting?

I would suggest keeping this responsibility with the test, and letting page object actions always return an object with the necessary information, either to assert on or to handle as an exception. I prefer assertion-free page objects. For example, the page object action below returns an error object on exception and expects the test to handle the failure, report it, and so on.
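
A minimal sketch of that contract, with hypothetical names and selectors: the page object catches the WebDriver exception and hands back a plain result object, and the test owns the assertion and the reporting:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebDriverException;

    // Carries either the data the test asked for or the error that
    // prevented the action; never asserts anything itself.
    class ActionResult {
        final boolean success;
        final String message;
        final int resultCount;

        ActionResult(boolean success, String message, int resultCount) {
            this.success = success;
            this.message = message;
            this.resultCount = resultCount;
        }
    }

    class SearchResultsPage {
        private final WebDriver driver;

        SearchResultsPage(WebDriver driver) {
            this.driver = driver;
        }

        // Assertion-free action: report what happened, let the test judge.
        public ActionResult countResults() {
            try {
                // Hypothetical selector for result rows.
                int count = driver.findElements(By.cssSelector(".result-item")).size();
                return new ActionResult(true, "ok", count);
            } catch (WebDriverException e) {
                return new ActionResult(false, e.getMessage(), 0);
            }
        }
    }

In the test, a JUnit-style assertTrue(result.message, result.success) then keeps assertion, exception handling, and reporting in one place.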

Deal with asynchronous behavior

One of the most common challenges is dealing with an application's asynchronous behavior. WebDriver's implicit waits and hard-coded pauses hold execution for “x” seconds before proceeding; this is unreliable because we are betting on that magic number of “x” seconds. While there are ways to handle this in the framework, I would suggest thinking about it from the end-user experience perspective. Remember that the intention of E2E GUI tests is to verify the app from the business/end-user standpoint. So, how long are we willing to ask our end user to wait between actions? Wait for the expected condition (element visible, clickable, or whatever the case may be) up to that limit; beyond it, bubble an exception back to the test and let the test fail consistently.
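
For example, here is a sketch using Selenium's WebDriverWait and ExpectedConditions; the Duration-based constructor assumes Selenium 4, and the five-second budget and selector are hypothetical:

    import java.time.Duration;
    import org.openqa.selenium.By;
    import org.openqa.selenium.TimeoutException;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    class ResultsWait {
        // Poll for the condition we actually care about, bounded by the
        // time we would ask a real end user to wait (assumed 5 seconds).
        static WebElement waitForResults(WebDriver driver) {
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(5));
            try {
                return wait.until(
                        ExpectedConditions.visibilityOfElementLocated(
                                By.cssSelector(".result-item")));
            } catch (TimeoutException e) {
                // Bubble up so the test fails consistently, with a reason.
                throw new TimeoutException(
                        "Search results not visible within 5s", e);
            }
        }
    }

The wait returns as soon as the condition holds, so a fast page is never penalized, and a slow one fails with a consistent, explainable timeout.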

Lint, build, publish and share the logic

Since page objects represent composite actions, they can be very handy and reusable across many behaviors.

  • Depending on the stack, lint the page object code; it likely contains logic and data handling, so good coding practices are critical.

  • Run Sonar analysis and set quality gates on this codebase. I would even suggest writing custom rules to prevent traditional mistakes such as Thread.sleep() calls and brittle XPath selectors (see the sketch after this list).

  • Set up a CI build; fail the build on code quality SLA violations

  • Package and publish reusable page object APIs and encourage wider reusability
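
As a lightweight stand-in for a custom Sonar rule, even a small build-time scan can catch the classic mistakes. A sketch, assuming a Maven-style layout (src/test/java) and Java 11+:

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    // Fails the build if any test source calls Thread.sleep.
    public class ForbiddenApiCheck {
        public static void main(String[] args) throws IOException {
            Path testRoot = Paths.get("src/test/java"); // assumed layout
            try (Stream<Path> files = Files.walk(testRoot)) {
                boolean found = files
                        .filter(p -> p.toString().endsWith(".java"))
                        .anyMatch(p -> {
                            try {
                                return Files.readString(p).contains("Thread.sleep(");
                            } catch (IOException e) {
                                throw new UncheckedIOException(e);
                            }
                        });
                if (found) {
                    System.err.println("Thread.sleep found in test code; failing build");
                    System.exit(1);
                }
            }
        }
    }

Wire this (or a real Sonar quality gate) into the CI build so violations stop the pipeline instead of accumulating.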

Overall, treat test code the same way you treat production code. Write test code like you're developing an app, not an afterthought.

Sahas Subramanian (@sahaswaranamam) is a passionate engineer with experience spanning DevOps, quality engineering, web development and consulting. He is currently working as a Continuous Delivery and Quality Architect @CDK Global, and shares his thoughts on tech via https://cdinsight.wordpress.com
