Last month, QASource and Sauce Labs partnered to present a webinar, Reducing False Positives in Automated Testing. We wanted to provide you with answers to the most commonly asked questions in response to this webinar. Please feel free to comment with additional questions and let us know how these techniques for reducing false positives have impacted your automation testing.
Q: Are there specific tests to avoid while automating to eliminate false positives in automated testing?
A: When automating tests, you must first define your goal to determine which types of tests to automate. While setting your goal, you should avoid the following:
Unstable areas or areas with frequent UI changes
Scenarios that are not supported by your automation tool. For example, if you are using Selenium, you should avoid tests that require interaction with native Win32 components, because Selenium does not support desktop-based applications.
Areas which have been identified to have performance issues
Areas which cannot be identified using unique locators
Q: How do you identify a well written automated test?
A: A well written automated test is defined by the way we structure our test script, workflow, and teardown fixture. The script should contain only test steps and verification points. This allows test cases to have a 1:1 mapping. In addition, well written automated tests should not contain any hardcoded data, and exceptions should be handled. All well written automated tests should follow best coding practices, including commenting and naming conventions.
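As a rough illustration, such a test might be structured like this. This is a minimal sketch, not the presenters' code: `LoginPage`, its `login` method, and the test data values are all hypothetical stand-ins.

```python
import unittest

# Hypothetical application stub so the sketch is self-contained;
# in a real suite this would be a page object driving the UI.
class LoginPage:
    def login(self, username, password):
        # Pretend any non-empty credentials succeed.
        return bool(username and password)

# Test data kept out of the test logic -- nothing hardcoded inline.
TEST_DATA = {"username": "qa_user", "password": "secret"}

class LoginTest(unittest.TestCase):
    def setUp(self):
        # Setup fixture: prepare the environment for each test.
        self.page = LoginPage()

    def test_valid_login(self):
        # The test body holds only test steps and verification points.
        result = self.page.login(TEST_DATA["username"], TEST_DATA["password"])
        self.assertTrue(result, "Expected login to succeed for a valid user")

    def tearDown(self):
        # Teardown fixture: release resources and reset state.
        self.page = None
```

The setup/steps/teardown split keeps every test readable and gives each manual test case a single automated counterpart.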
Q: Does a frequent review of tests identify or reduce false positives? If so, who is the best resource to perform these reviews: a resource with QA background or development background?
A: Yes, having regular peer code reviews of your tests is vital. Reviewing tests can reduce the chances of false positives. The reviewer should have a good understanding of the product and the underlying functional changes. This is essential so that the script can be modified with functional knowledge, making it more robust. In addition, the reviewer should have a good working knowledge of the language in which the framework and scripts are developed.
Q: When false positives occur, is it code specific or the functionality of the project under test?
A: There is no single point at which false positives occur, and there can be several reasons why they do. They may be caused by code changes or by a change in product functionality. False positives can also arise from the automation approach, the automation script as written, or the implemented framework.
Q: Should the test automation framework be written in the same language the application is being written or developed in?
A: Yes. Creating the automation framework in the same language as the application will help the development team test specific areas whenever there is a code change or defect fix. It also helps in integrating the automation framework with unit tests and in integrating the application's APIs with the framework. Another advantage of using the same language is that if the automation team gets stuck in specific areas, they can always seek help from the development team.
Q: What is dynamic synchronization of objects?
A: Dynamic synchronization of objects is a process in which the script waits up to a specified amount of time for the target element to become available for the automation event. This improves execution time because the script waits only while the element is not yet available.
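The idea can be sketched with a small polling helper. This is a simplified stand-in for the explicit waits that tools such as Selenium provide, not any particular library's implementation; the fake `element_visible` condition at the bottom is purely illustrative.

```python
import time

def wait_until(condition, timeout=10, poll_interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Returns the condition's value as soon as it is truthy; raises
    TimeoutError otherwise. Unlike a fixed sleep, this waits only as
    long as the element actually takes to appear.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        value = condition()
        if value:
            return value
        time.sleep(poll_interval)
    raise TimeoutError(f"Condition not met within {timeout} seconds")

# Illustrative use: a fake element that becomes "visible" after 0.2 seconds.
start = time.monotonic()
element_visible = lambda: time.monotonic() - start > 0.2
wait_until(element_visible, timeout=5, poll_interval=0.05)
```

Because the helper returns as soon as the condition holds, a fast page costs almost no wait time, while a slow page still gets the full timeout before the test is declared a failure.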
Q: Does test data setup or expected results lead to false positives?
A: Yes. Test data setup can lead to false positives, as it can change the underlying scenario. For example, if the test expects a user with Admin privileges, logging in as a guest will definitely break the test flow. In contrast, an expected result does not directly lead to false positives.
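To make the Admin-versus-guest example concrete, here is a hypothetical sketch: the application logic, user records, and `open_settings_page` function are invented for illustration. The test flow itself never changes; only the data does, and that alone breaks it.

```python
# Hypothetical application logic: only admins may open the settings page.
def open_settings_page(user):
    if user.get("role") != "admin":
        raise PermissionError("Settings page requires admin privileges")
    return "settings-page"

# Test data drives the scenario; swapping it silently changes what is tested.
admin_user = {"name": "alice", "role": "admin"}
guest_user = {"name": "bob", "role": "guest"}

# With the intended data, the flow passes.
assert open_settings_page(admin_user) == "settings-page"

# With a guest in the test data, the identical flow breaks.
try:
    open_settings_page(guest_user)
except PermissionError:
    print("Guest login broke the test flow")
```

A failure like this reports a product bug when the real problem is the fixture, which is exactly how data setup produces false positives.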
Q: For test data, is it a better practice to use a known set of pre-existing data, or is it better for each test to create their own data dynamically during run-time?
A: It is better for a test to have its own independent test data. However, in some cases that is not feasible. In those cases, test data can be grouped into global and local sets: global data is available to the entire test suite, while local data is available to a specific test. Dynamically creating data is also a valid approach, but it depends more on the functionality of the application and can increase execution time as well.
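One common way to express the global/local split is with `unittest` fixtures, sketched below. The catalog and cart data are hypothetical examples, not anything from the webinar.

```python
import unittest

class CheckoutTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # "Global" data: created once and shared by every test in the suite.
        cls.catalog = {"sku-1": 9.99, "sku-2": 4.50}

    def setUp(self):
        # "Local" data: created fresh for each individual test.
        self.cart = []

    def test_add_item(self):
        # Reads shared global data, mutates only its own local cart.
        self.cart.append(("sku-1", self.catalog["sku-1"]))
        self.assertEqual(len(self.cart), 1)

    def test_cart_starts_empty(self):
        # Another test's cart changes never leak in, because the
        # local fixture rebuilt self.cart before this test ran.
        self.assertEqual(self.cart, [])
```

Keeping mutable state local and read-only state global is what makes the tests order-independent.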
Q: How does moving the setup methods out of the test methods help you to make the test suites more stable? Can you give a more concrete example? It seems like it’s just moving the failures from one place to another?
A: It allows you to keep the setup method uniform. If something is wrong with the setup (e.g., launching a browser, passing desired capabilities, setting resolution), you only need to fix it in one place. Code should be as modular and reusable as possible, and moving the setup to a common place allows you to do just that. The failures do not simply move: a broken setup now fails once, clearly attributed to the setup phase, instead of being scattered across many tests and mistaken for product defects.
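A sketch of the idea: all environment setup lives in one base class, so a fix (say, a changed resolution) is made once. The `FakeBrowser` class stands in for a real WebDriver so the sketch runs anywhere; the class and test names are hypothetical.

```python
import unittest

class FakeBrowser:
    # Stand-in for a real browser session so the sketch is self-contained.
    def __init__(self, resolution):
        self.resolution = resolution

class BaseUITest(unittest.TestCase):
    def setUp(self):
        # The only place that launches the browser and sets capabilities.
        # Fixing a setup problem here fixes it for every test suite below.
        self.browser = FakeBrowser(resolution=(1920, 1080))

class HomePageTest(BaseUITest):
    def test_resolution_applied(self):
        self.assertEqual(self.browser.resolution, (1920, 1080))

class SearchPageTest(BaseUITest):
    def test_browser_available(self):
        self.assertIsNotNone(self.browser)
```

Every suite that inherits from `BaseUITest` gets an identical, already-verified environment, so its test bodies contain only steps and verifications.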
Q: How do you reduce inconsistency when clicking on elements?
A: To reduce inconsistency when clicking elements, wait until the element has fully finished rendering (and any animation has completed). For instance, if a web app uses AJAX and we have to click an element loaded via AJAX, we would have to wait for that element to be present in the DOM and visible before interacting with it. Use "dynamic" waits instead of "static" waits (i.e., use wait_until instead of sleep 3).