Let’s get started.
Code that runs as part of automated test cases should always be considered 'production ready.' The key difference is that the QA team needs to adhere to additional procedures and best practices when executing tests. Here is what they have to do:
Test Failure Analysis: When tests fail, the QA team needs to provide a complete recording of steps and a trail of the events that led to those failures before escalating the issues to developers. This is what we call Test Failure Analysis, and it needs to be done diligently so as not to alert people unnecessarily.
Recording test results: Keeping a good record of the events that led to a test failure is the first step towards understanding root causes and their criticality.
Using new testing methodologies: Exploring new ways to test, covering aspects such as accessibility, visual regression, or testing in production.
Automating the Automation: Taking it a step further, they can employ novel techniques involving AI, ML, or code generation to proactively generate test cases without touching the keyboard or recording any steps in a browser or mobile device.
To achieve higher flexibility when writing test cases, the quality of the code needs to be consistently high. For example, in terms of code organization, here are some general guidelines you can follow:
Follow each tool's configuration best practices: For example, if you use TestCafe, invest time in mastering the tool and keep its official documentation close for reference.
Keep configuration constants in one place: Configuration constants should be defined per environment or target. For example, you might have a common config called e2e.config.js, plus a more specific one for each testing environment: chrome.local.config.js for running local headless Chrome and firefox.local.config.js for running tests in Firefox.
Separate helpers and common testing workflows into library-style code: Keep reusable pieces of code in a library, for example: helper functions, page elements, login flows, or classes that act as drivers for component testing. This way, you can write test cases and specs without worrying about how to locate specific page elements or selectors.
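To illustrate the layered-configuration idea, here is a minimal sketch. The file names follow the examples above, but the fields and values are invented for illustration and will vary per tool:

```javascript
// e2e.config.js: shared constants for every test environment.
const baseConfig = {
  baseUrl: 'http://localhost:3000', // hypothetical app under test
  defaultTimeoutMs: 10000,
  screenshotsOnFail: true,
};

// chrome.local.config.js: overrides for local headless Chrome.
const chromeLocalConfig = {
  ...baseConfig,
  browser: 'chrome:headless',
};

// firefox.local.config.js: overrides for local Firefox.
const firefoxLocalConfig = {
  ...baseConfig,
  browser: 'firefox',
};

module.exports = { baseConfig, chromeLocalConfig, firefoxLocalConfig };
```

Each environment-specific config inherits the shared constants and overrides only what differs, so there is a single source of truth for timeouts, URLs, and similar values.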
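And a sketch of a library-style helper for a common login flow. The `t` test-controller methods follow TestCafe's conventions, but the selectors and function names here are made up for illustration:

```javascript
// helpers/auth.js: a reusable login flow shared by all specs.
// Accepts the framework's test controller (TestCafe-style `t`),
// so individual specs never deal with raw selectors themselves.
const SELECTORS = {
  username: '#username',
  password: '#password',
  submit: '#login-button',
};

async function login(t, user, pass) {
  await t.typeText(SELECTORS.username, user);
  await t.typeText(SELECTORS.password, pass);
  await t.click(SELECTORS.submit);
}

module.exports = { login, SELECTORS };
```

If the login page changes, only this one module needs updating; every spec that calls `login(t, user, pass)` keeps working unchanged.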
Setting up Testing Goals
When setting up an automated test framework, it's really important to define and agree on a testing strategy. That means setting concrete goals about what is valuable to automate and what is not. You cannot realistically automate every part of the system, and even if you could, doing so would require substantial maintenance overhead.
So what are the main considerations for creating those testing goals? Let’s step back a bit and borrow some ideas from Scrum.
In Scrum, each Sprint takes on tasks from the Product Backlog. The Product Owner chooses to bring into each Sprint the tasks with the highest value, and the developers work to complete them according to a Definition of Done. Once this process completes, the Sprint delivers an Increment of value.
If we align this methodology with our testing strategy, we can gain valuable insights. Testing happens as part of each Sprint, so you have a time box for automating test cases. Within that time box, automated tests should cover the most valuable behaviors so that you can catch breaking changes. Each Sprint should deliver an increment of value in terms of test automation, with the most critical business operations covered first.
Keeping counts and various metrics for bugs offers opportunities for improvement. For example, tracking over time how many bugs were found during development versus how many were found after deployment to production can act as a motivating factor. Ideally, keeping a good ratio boosts morale and builds confidence that the team is performing well. Your test cases should reflect the overall momentum and the team's ability to deliver new features.
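The dev-versus-production bug ratio mentioned above comes down to a trivial calculation; the numbers below are invented purely for illustration:

```javascript
// Escape rate: the share of bugs that slipped past testing
// and were only discovered in production.
function escapeRate(bugsFoundInDev, bugsFoundInProd) {
  const total = bugsFoundInDev + bugsFoundInProd;
  return total === 0 ? 0 : bugsFoundInProd / total;
}

// Example: 45 bugs caught during development, 5 found in production,
// giving an escape rate of 0.1 (10%). Watching this number fall
// release after release is the motivating trend to chart.
const rate = escapeRate(45, 5);
module.exports = { escapeRate };
```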
Once your testing goals have started delivering value, you can focus on improving the quality of test cases and coverage scenarios, for example:
Perform regular code reviews and refactorings so that anything pushed to production is peer reviewed and free of obvious bugs. Modern IDEs' built-in diffing helpers or GitHub PR review templates can help here.
Be on the lookout for tool updates or upgrades so that your testing frameworks stay up to date with the most recent features. You can install bots like Dependabot to automate this process.
Write idempotent test cases so that you can parallelize them more easily. For example, to make deterministic parallel requests to APIs while mimicking browser behaviour, consider using mocks such as TestCafe's RequestMock class.
Clean up resources after every test run. Test cases should clean up every file, database, or service they create as part of the process.
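One simple way to keep test cases idempotent and safe to parallelize is to have each run generate its own unique fixtures instead of sharing hard-coded records. A minimal sketch, where the naming scheme is just an illustration:

```javascript
// Generate collision-free test data so that parallel runners
// never fight over the same user record.
let counter = 0;

function uniqueTestUser(prefix = 'qa') {
  counter += 1;
  // Combine a timestamp, the process id, and a per-process
  // counter so two workers can never produce the same name.
  const id = `${prefix}-${Date.now()}-${process.pid}-${counter}`;
  return {
    username: id,
    email: `${id}@example.test`,
  };
}

module.exports = { uniqueTestUser };
```

Each test then creates (and afterwards deletes) its own user, so tests can run in any order and in any number of parallel workers while still cleaning up after themselves.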
The plethora of frameworks in this domain helps QA teams approach the problem of test automation from different angles. As each framework has pros and cons, there are often cases where it makes sense to use multiple frameworks, mainly to support different testing backends or drivers.
For example, Nightwatch.js, a Node.js automated E2E testing framework, uses the WebDriver API to interact with various browsers. With WebdriverIO you have the option of multiple backends, as it supports both the WebDriver API and the Chrome DevTools Protocol via Puppeteer.
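As a rough sketch, switching backends in WebdriverIO comes down to a configuration choice. The exact option names depend on your WebdriverIO version, so treat this as an assumption to verify against the official documentation:

```javascript
// wdio.conf.js (sketch): choosing the automation backend.
exports.config = {
  // 'webdriver' talks to a WebDriver-compliant server (e.g. Selenium);
  // 'devtools' drives the browser via the Chrome DevTools Protocol
  // through Puppeteer, with no separate driver server required.
  automationProtocol: 'devtools',
  capabilities: [{ browserName: 'chrome' }],
  specs: ['./test/specs/**/*.js'],
};
```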
TestCafe adopts an alternative approach: it places a URL proxy server between the test runner and the browser and uses it to emulate user behavior. This means it does not depend on any testing backend, neither Selenium nor WebDriver, and acts as a standalone framework. This architecture gives it a unique advantage over the others in terms of flexibility.
Another popular choice is Cypress.io, which advertises itself as the most convenient way to test anything that runs in a browser. It is a well-marketed testing framework that offers a nice UI for both developers and QA engineers, which makes it very approachable and user friendly. It does not use Selenium as a driver; instead, it uses a Node.js server process to delegate events and tasks to the application in real time. For example, it delegates tasks to run in the browser and provides a dashboard helper that follows the test cases in real time. It's important to know that Cypress is optimized as a tool for local-development E2E testing rather than production testing. This leaves enough room for other tools such as WebdriverIO to perform those tasks.
CI/CD Pipeline Integrations
Continuous Integration and Delivery are not just buzzwords you read in every QA tutorial; they are key business capabilities that promote agile operations and operational responsiveness. They also unlock modern workflows such as shift-left testing and security testing.
To move existing test automation into a CI/CD pipeline, you need to align several things, in terms of processes and tools as well as communication.
The first and foremost action is to evaluate the feasibility of running the chosen framework as part of a CI/CD pipeline, and to understand what happens if something fails. For example, when using Cypress.io, you should perform a spike to check how the tests actually run inside the CI/CD infrastructure and to confirm the tests remain valid. This is not because you cannot trust the tool, but to verify it does not break the team's momentum or any other workflows.
Identifying ways to improve scalability and test parallelization is the next critical step. E2E tests are notorious for taking longer to run than unit tests, often by orders of magnitude. You must explore options to speed things up, such as writing independent test cases that can be split apart, so that you can scale the test runners on demand. Set reasonable time limits for each test case, and test on the right browsers so that you do not overstress the testing infrastructure.
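The "split test cases apart and scale runners on demand" idea can be sketched as a simple sharding step that distributes spec files across N parallel workers. The file names below are placeholders:

```javascript
// Split a list of spec files into N roughly equal shards,
// one shard per parallel CI worker.
function shardSpecs(specs, workerCount) {
  const shards = Array.from({ length: workerCount }, () => []);
  specs.forEach((spec, i) => {
    shards[i % workerCount].push(spec); // round-robin assignment
  });
  return shards;
}

// Example: five spec files spread over two workers. Worker 0 would
// run shards[0], worker 1 would run shards[1], roughly halving the
// wall-clock time of the suite.
const shards = shardSpecs(
  ['login.spec.js', 'cart.spec.js', 'search.spec.js', 'checkout.spec.js', 'profile.spec.js'],
  2
);

module.exports = { shardSpecs };
```

This only works if the test cases are independent, which is exactly why the idempotency guidelines above matter.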
Lastly, you need a monitoring and alerting stack in place that triggers precise actions when a test fails. This should not be a catch-all setup that delivers thousands of emails to every team member, as those will most often be ignored. Ideally, test results should be consolidated and forwarded, together with an action plan, only to the relevant parties, teams, or channels. For example, a successful run should provide a link to the status report and a preview of the deployed application. A failed run should provide all the necessary contextual information ordered by severity, plus a list of recommended actions for debugging the test case. This helps teams stay on top of any issues that come up during testing.
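A sketch of the "consolidate and route" idea: a small formatter that orders failures by severity and picks a single channel to notify. The severity levels, team names, and channel names here are hypothetical:

```javascript
// Turn raw test results into one consolidated alert,
// ordered by severity and routed to one relevant channel.
const SEVERITY_RANK = { critical: 0, major: 1, minor: 2 };

function buildAlert(results, channelByTeam) {
  const failures = results
    .filter((r) => r.status === 'failed')
    .sort((a, b) => SEVERITY_RANK[a.severity] - SEVERITY_RANK[b.severity]);

  if (failures.length === 0) {
    return { channel: channelByTeam.default, text: 'All tests passed.' };
  }

  const lines = failures.map(
    (f) => `[${f.severity}] ${f.name}: ${f.recommendedAction}`
  );
  // Route the alert to the team owning the most severe failure,
  // falling back to a default channel.
  return {
    channel: channelByTeam[failures[0].team] || channelByTeam.default,
    text: lines.join('\n'),
  };
}

module.exports = { buildAlert };
```

The output of such a formatter would then be handed to whatever notification transport the team uses, rather than being broadcast to everyone.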
Next Level in Test Automation