Simon Stewart, creator of Selenium WebDriver and a software engineer at Apple, shared with The Test Automation Experience.
Test automation quickly catches regressions introduced by changes to an application, minimizing manual regression effort. It increases test coverage, reduces test execution time, frees up resources, and saves money. In addition, receiving timely feedback about your codebase can greatly enhance the efficiency of your application development process.
While the benefits of test automation abound, a lot can go wrong if you don't implement it properly. When planning your test automation suite, you need to consider its overall architecture and devise a strong testing strategy that delivers maximum value. This is where best practices come into play. A test automation suite that doesn't implement current best practices can suffer from flaky, ineffective, and unmanageable tests.
In this article, you'll learn about several best practices you can implement when establishing your test automation suite and why it's important to observe them.
Also take note of a few helpful tips from developers and engineers, like the one above, courtesy of the Test Automation Experience and Test Case Scenario community, sprinkled throughout!
Some of the best test automation practices include identifying the right tests to automate, utilizing the right tools and framework, and keeping records for testing analysis. By following these best practices, organizations can not only achieve better testing outcomes but also reduce costs, enhance efficiency, and accelerate the delivery of high-quality software products.
Let's take a look at these best practices in more detail.
Exhaustive test case automation is impossible, and not every automated test case will yield a high return on investment. It's important to select the subset of test cases to automate that will bring the most value to your application. While selecting these test cases, try answering the following questions:
Will you run the test repeatedly?
Are human errors likely to occur when you're running the test manually?
Is the test time-consuming?
Does the test cover a critical feature of the business?
Is the test impossible to perform manually?
If you answered yes to any of the previous questions, you should consider automation. Review the following guidelines to help you further identify which test cases are best for automation:
Test cases that execute repeatedly: Automating test cases that execute repeatedly against every build/release of the application will save you time and help you avoid involving other developers in the testing process. Smoke tests, sanity tests, and regression tests fall into this category.
Test cases that are prone to human error: Automating test cases that are prone to human error can help increase the accuracy of test execution.
Test cases that assess the performance of the application (i.e., stress tests and load tests): These tests are hard to run manually because you need a lot of traffic to your application, which is why automating them is ideal.
Test cases that cover the same workflow but with varying sets of input data (i.e., data-driven tests): Since this type of test involves repeating data sets, automating data-driven tests saves you time and eliminates a monotonous task.
Test cases that run on multiple platforms and browsers: Automating this kind of test means you don't have to switch between different browsers and different operating systems, which can be cumbersome and time-consuming.
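The data-driven style mentioned above can be sketched in a few lines of Python. The rule under test here (a hypothetical order-quantity check) is a stand-in for your own application logic; the point is the shape of the test: one workflow, many data rows, so adding a scenario means adding a row rather than writing a new test.

```python
# Minimal data-driven test sketch. The function under test and the
# data rows are hypothetical stand-ins for your application's logic.

def is_valid_quantity(qty):
    """Pretend business rule: order quantity must be an integer from 1 to 99."""
    return isinstance(qty, int) and 1 <= qty <= 99

# Each row: (input, expected result) - the same workflow runs against
# every data set, including boundary and invalid values.
test_data = [
    (1, True),     # lower boundary
    (99, True),    # upper boundary
    (0, False),    # just below range
    (100, False),  # just above range
    ("5", False),  # wrong type
]

for qty, expected in test_data:
    actual = is_valid_quantity(qty)
    assert actual == expected, f"quantity {qty!r}: got {actual}, expected {expected}"
```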
Tip #1 - Screenplay is a great way to act out a sample test case scenario from the point of view of an actor (the people and external systems interacting with your interfaces) and the tasks they would perform.
- Jan Molak, Creator of SerenityJS
The overall ease of use and complexity of creating and running the tests depends on the tool you choose and the functionalities it provides. The following are some of the things you should consider when figuring out what tools are best for you.
Your application will determine the testing tools you use. For example, you can use tools such as Selenium, Cypress, and Playwright to build end-to-end test automation suites for web applications. Or you can use Appium or Espresso for mobile applications. Make sure you check each tool's features, community support, and maintenance status when deciding which one to use.
When selecting the right tools for your team, it's important to consider your specific team's expertise and experiences. This especially applies to the languages and frameworks the test automation tools are using.
For instance, some popular programming languages for test automation include Java, Python, Ruby, JavaScript, and C#. These languages have enormous how-to tutorials, blog posts, videos, books, and discussions on sites like Stack Overflow, and they have actively maintained testing libraries. For example, if your team is more comfortable with Java and Python, consider Playwright; if your developers specialize in Ruby, Selenium may be a better option.
Tip #2 - Test in the language that is most popular on the platform, because you'll be able to get the most help from (your) developer community and you may not struggle as much with the language; it's easy to learn a new language, but it's easier not to have to do it during testing.
- Hanson Ho, Android Architect, Embrace
Your budget will probably determine whether you choose to use an open source or commercial testing tool. However, keep in mind that each has pros and cons. While commercial testing tools typically provide more reliability, advanced features, and technical support, an open source testing tool can also provide what you need.
For instance, Playwright, an open source testing tool with frequent releases and rich features, has become a favorite of programmers all over the world. On the other hand, Cypress offers both a free and paid version. The free version provides basic features such as cross-browser testing, auto-wait, screenshots, videos, and record-and-playback functionality to help with automation, while the paid version adds dashboard access, which stores all your test logs, video recordings, and test results, as well as facilitating test parallelization.
Tip #3 - "Assess your current situation: the project, your testing environment, skills, and work culture and team to choose the right automated software tool for you."
- Nikolay Advolodkin, Staff Developer Advocate, Sauce Labs
Ultimately, picking the right tool is an evaluation process. Create proofs of concept with a few different tools and remember to evaluate them objectively.
Tests fail, and it's crucial to find out why to fix the underlying problem. To debug failed test cases, you need screenshots, logs, exception reporting, and video recordings of the test run. You also need to know the outcome of all the tests to determine your application's status.
One crucial component of test analysis is a test report. After introducing any change in your codebase, you can run an automated test to create the report. A test report guides you through test attributes such as outcomes, execution time, and environment-specific parameters that you can use to increase test coverage. After analyzing the test report, you can identify slow-running and flaky tests that cause bottlenecks, and you can identify valuable insights and feedback about your application's health, such as failure reasons, coverage, total execution time, and the percentage of tests that passed or failed.
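Most test runners can emit reports in a JUnit-style XML format, which makes this kind of analysis easy to script. The sketch below embeds a tiny sample report (a stand-in for one your runner would actually write to disk) and uses Python's standard library to pull out the pass rate, the failed tests, and the slowest test.

```python
import xml.etree.ElementTree as ET

# A tiny embedded JUnit-style report standing in for a real file
# produced by your test runner.
REPORT = """
<testsuite name="checkout" tests="3" failures="1" time="4.80">
  <testcase name="test_add_to_cart" time="0.42"/>
  <testcase name="test_apply_coupon" time="3.90"/>
  <testcase name="test_pay" time="0.48">
    <failure message="expected 200, got 500"/>
  </testcase>
</testsuite>
"""

suite = ET.fromstring(REPORT)
cases = suite.findall("testcase")

# Failure reasons and counts for the health summary.
failed = [c.get("name") for c in cases if c.find("failure") is not None]
total = len(cases)
pass_rate = 100 * (total - len(failed)) / total

# The slowest test is a candidate bottleneck worth investigating.
slowest = max(cases, key=lambda c: float(c.get("time")))
print(f"pass rate: {pass_rate:.0f}%, failed: {failed}, slowest: {slowest.get('name')}")
```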
Setting up your test environment is a crucial step to help you benefit from your test automation efforts. It's in this step that you'll implement strategies for test execution, report generation, and data storage.
A test environment includes the hardware, software, and network configurations necessary to support your test execution. To help identify any configuration-specific issues, you need to set up your test environment to be as similar to the production environment as possible.
Following are a few more suggestions to implement when setting up your testing environment:
Ensure the hardware and network configurations are properly set up. If not, your tests will produce false alarms.
Organize and create your test data so that it's available during test execution. After you've run the tests, remember to clean up the test data.
If you use production data, consider masking it to hide sensitive information.
After you've set up your test environment, perform a smoke test to validate the test environment's build stability.
If you use an on-premises infrastructure for running your tests, make sure to invest some effort in maintenance.
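The masking suggestion above can be sketched with the standard library. This is one simple approach among many: the record and field names are hypothetical, and the email's local part is replaced with a stable hash so the value is anonymized but two occurrences of the same address still mask to the same token (useful when test data must join across tables).

```python
import hashlib

def mask_email(email):
    """Replace the local part of an email with a stable short hash.

    The same input always masks to the same output, so relationships
    in the data survive masking while the real address does not.
    """
    local, _, domain = email.partition("@")
    digest = hashlib.sha256(local.encode()).hexdigest()[:8]
    return f"user_{digest}@{domain}"

# Hypothetical production record being prepared for the test environment.
record = {"name": "Ada Lovelace", "email": "ada@example.com", "order_total": 99.50}
masked = {**record, "name": "MASKED", "email": mask_email(record["email"])}
```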
Instead of assigning all the tasks necessary for test automation to a single team member, consider dividing these tasks among several members based on their skills and level of expertise. For example, you can assign test script creation to members who have relevant programming skills and experience. Members who aren't well versed in programming but have experience testing user workflows can handle test case creation and test data generation for the test scripts. Dividing testing efforts helps promote collective ownership of tests.
Based on the tools your team uses, you can fine-tune the allocation. For example, you can assign more non-programmers to create test cases if your team uses low-code or codeless testing tools.
Dividing test automation effort improves efficiency, encourages collaboration, and mitigates the dependency on a single team member.
A set of input data is at the heart of any good data-driven test, so pay extra attention when planning and generating meaningful test data. Well-designed data ultimately produces more trustworthy results.
When you're creating test data, consider the following:
Ensure data is accurate: Test data should mimic real-life scenarios as closely as possible. For example, a reasonable test value for a student's age is 16.
Some data should be invalid: Test data, such as "abc" for a student's age, should trigger errors.
Data should cover boundary conditions: Test data should include boundary values because applications often break around them. For example, if the legal voting age is 18, your test data should include ages 17, 18, and 19.
Data should cover exceptions: Test data should cover rare scenarios (i.e., a yearly discount for a purchase from an online shopping site).
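The boundary and invalid-data points above can be made concrete with a small sketch. The eligibility rule here is hypothetical; the values just below, at, and just above the cutoff are exactly where off-by-one bugs hide, and the invalid input should fail loudly rather than be silently accepted.

```python
def can_vote(age):
    """Hypothetical rule under test: eligible to vote at 18 or older."""
    return age >= 18

# Boundary values around the cutoff.
assert can_vote(17) is False  # just below the boundary
assert can_vote(18) is True   # on the boundary
assert can_vote(19) is True   # just above the boundary

# Invalid data such as "abc" should trigger an error, not pass silently.
try:
    can_vote("abc")
    raise AssertionError("expected a TypeError for a non-numeric age")
except TypeError:
    pass  # comparing str to int raises TypeError, as desired
```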
Typically, you'll store the data in external files such as CSV, XLSX, JSON, or XML because they facilitate reusability, extensibility, and maintainability. Design your test automation framework in a way that makes parsing the test data and iterating its contents easy.
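A minimal sketch of that parse-and-iterate design, using the standard library's `csv` module. The sample data is embedded in a string here so the example is self-contained; in practice you would `open()` an external file instead, and the column names and validation rule are illustrative assumptions.

```python
import csv
import io

# Embedded sample standing in for an external students.csv file.
CSV_DATA = """age,expected_valid
16,true
0,false
-1,false
120,false
"""

def is_valid_student_age(age):
    """Hypothetical validation rule exercised by the data rows."""
    return 5 <= age <= 100

# The framework parses the file once, then iterates its rows, so new
# scenarios are added by editing data rather than test code.
for row in csv.DictReader(io.StringIO(CSV_DATA)):
    age = int(row["age"])
    expected = row["expected_valid"] == "true"
    assert is_valid_student_age(age) == expected, f"age {age} failed"
```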
End-to-end (E2E) test automation relies on the user interface of your application. The core of these tests is locating web elements on a web page and performing actions on them. Your tests should be robust enough to remain unaffected as your application's user interface changes, whether in the early stages of development or as you enhance the application later. Otherwise, your tests will fail.
Your tests are likely to be brittle when you use selectors that heavily depend on the ordering of web elements in the Document Object Model (DOM). You can resolve this by providing unique selectors to the web elements of your application. This also benefits you because you'll need fewer test code changes to adjust for user interface changes.
In addition, you have to incorporate design patterns such as the Page Object Model (POM) while designing your test automation framework. A good design pattern implementation helps minimize code duplication and code updates as you introduce changes in your application. It also improves the extensibility of your codebase. Creating atomic and autonomous end-to-end tests is also extremely important for receiving reliable results.
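Here is a minimal POM sketch assuming a Selenium-like driver API; the `FakeDriver` is a stand-in so the structure is visible without a real browser, and the page and selector names are hypothetical. The page object owns the selectors (note the stable `data-testid` attributes rather than position-dependent ones), so a UI change means editing one class instead of every test.

```python
class FakeDriver:
    """Stand-in for a real WebDriver so the pattern runs anywhere."""
    def __init__(self):
        self.actions = []  # records every interaction for inspection

    def find_element(self, css_selector):
        self.actions.append(css_selector)
        return self  # a real driver would return a WebElement

    def send_keys(self, text):
        self.actions.append(f"type:{text}")

    def click(self):
        self.actions.append("click")


class LoginPage:
    """Page object: selectors and workflows live here, not in tests."""
    USERNAME = '[data-testid="username"]'      # unique, order-independent
    PASSWORD = '[data-testid="password"]'
    SUBMIT = '[data-testid="login-submit"]'

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find_element(self.USERNAME).send_keys(user)
        self.driver.find_element(self.PASSWORD).send_keys(password)
        self.driver.find_element(self.SUBMIT).click()


# The test reads as a user workflow; no selectors leak into it.
driver = FakeDriver()
LoginPage(driver).log_in("ada", "s3cret")
assert driver.actions[-1] == "click"
```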
Tip #4 - “Approach test automation just as you would with any kind of software because test automation is software development. It uses the same tools. It requires the same skills and same practices. Test automation is a domain of greater software engineering.”
- Andy Knight, Principal Architect, Cycle Labs
Test automation is essential if you want to quickly ship quality applications. It results in faster test execution, greater test coverage, and increased accuracy. But to make sure you're implementing test automation correctly, there are a few best practices you should incorporate, including identifying the test cases to automate, using quality test data, using an appropriate testing framework, and keeping records of tests for analysis.
If you enjoyed these tips and would like to learn more, head over to the Test Automation Experience for a testing cheat sheet from a few tech legends.