
Posted July 28, 2016

Environment-Agnostic Testing and Test Data Management for End-to-End Test Stability


In the Design Patterns for Scalable Test Automation webinar, we discussed the importance of adopting proper patterns for scaling and maintaining end-to-end (E-E) tests. A couple of additional aspects are important for E-E test stability:

  • Environment-agnostic tests - Tests should be independent, self-contained units that run against any environment without code changes, and with no dependency on anything else (apart from the runner)

  • Test data - How to prevent tests from failing because expected data wasn’t available in the system

In the context of a web app (not a legacy, thick-client application), let’s take a look at how to deal with these challenges.

Environment-agnostic Tests

E-E tests need environment-specific configuration information such as the URL, role, username, password, etc. Needless to say, hardcoding these values in the test is not a good practice; it makes updates and maintenance difficult. A better solution is to tokenize them, keep the key/value pairs separate from the code, and use them as part of the test flow. Different technologies offer different tactics for handling this need.

For example, in the case of C# (and .NET), app.config is a good choice for carrying all the configuration tokens and values. However, the challenge shows up when you want to update the app.config seamlessly before execution. For example, the URL for DEV is different from the one for TEST. How do you find and replace those values in the config? There are a couple of ways to handle the situation:

  1. Create multiple app.configs, one per environment (dev.app.config, test.app.config, ..)

  2. Maintain one repository and one app.config, and update the app.config just before test execution

Both approaches work in practice. However, I prefer the second approach because it eliminates multiple sources of truth (multiple app.configs); a single source of truth is always better.

Tooling

We could roll out a quick utility to find and replace values in app.config. However, I want to introduce a utility that can help with this: xmlpreprocess.exe. It's a Windows-only tool, but it is self-contained, easy to use, and its CLI integrates easily with CI/CD systems. Usage is simple:

  1. Create app.config

  2. Create an XML file to supply values for each environment

  3. The xmlpreprocess CLI will take the values from your XML input and update the app.config

Explore more documentation and examples here. In the case of WebdriverIO or other JS frameworks, we could keep the configuration values in a simple .json file and import them for use, as sketched below.
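For instance, a minimal sketch of that idea could look like the following; the file layout and the TEST_ENV variable are illustrative assumptions, not something taken from the sample repo:

```javascript
// wdio.conf.js (sketch) - pick the environment at runtime and load its values
// from a plain .json file instead of hardcoding them in the tests.
const env = process.env.TEST_ENV || 'dev';        // e.g. dev, test, staging
const settings = require(`./config/${env}.json`); // { "baseUrl": "...", "username": "...", ... }

exports.config = {
  baseUrl: settings.baseUrl,
  // credentials and other tokens come from the same file, never from test code
};
```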

For example, in my sample repo, I’ve followed a composition model, which looks like this:

  1. master.conf.js - Carries all common config across environments

  2. local/wdio.conf.js - Carries local test execution configurations, which will be merged with the master config before execution

  3. saucelabs/wdio.conf.sauce.js - Carries Sauce Labs-specific configurations which will be merged with the master config before execution

Similarly, we could create separate configs per environment inside the Sauce Labs folder. (I’ve yet to find an xmlpreprocess.exe-like utility for the JS world.) A sketch of such a merged config is shown below.
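Here is a minimal sketch of what the Sauce Labs config merge could look like, assuming the deepmerge package and environment variables for credentials; the exact keys and capability values are illustrative:

```javascript
// saucelabs/wdio.conf.sauce.js (sketch) - layer Sauce Labs-specific settings
// over the shared master config so there is a single source of truth.
const merge = require('deepmerge');
const master = require('../master.conf').config;

exports.config = merge(master, {
  user: process.env.SAUCE_USERNAME,   // kept out of the repo
  key: process.env.SAUCE_ACCESS_KEY,
  capabilities: [{ browserName: 'chrome', platform: 'Windows 10' }]
}, { clone: false });
```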

These are just a few tactics I’ve used, and I’m sure there are many other ways to achieve the goal as well. The advantages these approaches offer are a clean separation of concerns between test config and code, easy maintenance, and the ability to dynamically update the config as needed. So, give it a try.

Test Data Management

Here again, I’m talking about web application E-E testing, not thick-client applications. There are some heavyweight tools that offer professional test data management: restoring data from production, scrubbing and masking the sensitive data, setting up data on the target environment for non-prod testing, etc. However, there are a few ways to tackle the problem within the test automation code itself. They include:

CRUD flow - Try to combine scenarios into meaningful end-user behaviors. For example, let's assume that you are testing a WordPress blogging application (create a blog post, view the blog post, verify visitors, view by geography, delete the post, etc.). If we logically group the end-user actions for that persona (i.e., the author) into a Create, Read, Update and Delete flow, the data needed for the next step (Read) is created by the previous step (Create). We end up testing a larger flow while the necessary data is generated by the application as part of the process, since the data each step needs has been created by the step before it. We don’t run into stale data sitting in the system, or data management issues where a code refactoring expects a new field in the dataset but we didn’t get a chance to update the test data generation script. In addition, if each one of the actions (Create, Read, Update and Delete) is an independent scenario, some steps are potentially repeated (i.e., launching the browser, navigating to the website, logging in, navigating to the posts page, etc.). By forming a CRUD flow, those repeated steps are optimized away, and as a result, tests complete faster.
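As an illustration, such a CRUD flow could be expressed as one ordered spec (Mocha-style, as WebdriverIO uses); the LoginPage and PostPage page objects, their methods, and the settings values are hypothetical:

```javascript
// One spec covering the author's whole CRUD flow; later steps reuse the data
// created by earlier ones, so no pre-seeded test data is required.
describe('blog post CRUD flow', () => {
  const title = `Post ${Date.now()}`;   // unique data generated by the test itself

  before(() => {
    LoginPage.open();
    LoginPage.signIn(settings.username, settings.password); // log in once for the flow
  });

  it('creates a post', () => PostPage.create(title, 'first draft'));
  it('reads the post created above', () => PostPage.open(title));
  it('updates the post', () => PostPage.update(title, 'edited body'));
  it('deletes the post, leaving nothing stale behind', () => PostPage.remove(title));
});
```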

Generate and clean up the test data as part of the test - Another approach is to call your backend REST endpoints to generate the necessary data as part of the test setup. It’s similar to how we approach unit testing with setup and teardown. Depending on the language of choice, there are libraries that can help in making calls to the backend and setting up the data. At the end, clean up the data as necessary. Yet another option is to use tools like JMeter to input the necessary data before the E-E suite executes.
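A minimal sketch of that setup/teardown idea, assuming an axios HTTP client, a WordPress-style REST endpoint, and an API_TOKEN environment variable (all illustrative):

```javascript
const axios = require('axios');

const api = axios.create({
  baseURL: 'https://test.example.com/wp-json/wp/v2',            // assumed endpoint
  headers: { Authorization: `Bearer ${process.env.API_TOKEN}` }  // assumed auth scheme
});

let postId;

before(async () => {
  // seed the data the UI tests expect to find
  const res = await api.post('/posts', { title: 'Seeded post', status: 'publish' });
  postId = res.data.id;
});

after(async () => {
  // tear the data down so the next run starts clean
  await api.delete(`/posts/${postId}`, { params: { force: true } });
});
```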

Service virtualization - This is another approach that we leverage, typically when we need to interact with third-party services and every hit costs something, or in cases where a third party simply can’t stand up matching environments for all of our non-prod environments. We use tools like wiremock.org or mountebank, or some commercial tools, to help create the virtual services. These work much like record and replay if the dependent service is available at least once, or we can handcraft the requests/responses as well. Once these stubs are created, we need to run the virtual services in our data center and configure the UI to go through them. (There is an interesting walk-through here.) This offers some stability, but we need to be cognizant about keeping the stubs up to date; otherwise we might run into issues.
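As an example of the handcrafted-stub route, here is a minimal sketch that registers a stub with mountebank over its REST API before the suite runs; the ports, path, and payload are assumptions for illustration:

```javascript
const axios = require('axios');

before(async () => {
  // stand up an HTTP imposter that answers for the third-party geo service
  await axios.post('http://localhost:2525/imposters', {
    port: 4545,
    protocol: 'http',
    stubs: [{
      predicates: [{ equals: { method: 'GET', path: '/geo/visitors' } }],
      responses: [{ is: { statusCode: 200, body: JSON.stringify({ visitors: 42 }) } }]
    }]
  });
});

after(async () => {
  // remove the imposter so stubs don't leak into the next run
  await axios.delete('http://localhost:2525/imposters/4545');
});
```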

While the commercial TDM tools might offer much more sophisticated features, it’s worth trying these approaches as a first step.
