
Posted June 23, 2016

Q&A: Design Patterns for Scalable Test Automation


Thanks to everyone who joined the webinar given by Sahas Subramanian, “Design Patterns for Scalable Test Automation with Selenium and WebdriverIO”. There were a number of great questions that were posed prior to and during the session, and we asked Sahas to consolidate some of these questions and answer them in this follow-up post. Disclaimer: opinions shared below are Sahas’ and not those of his employer or Sauce Labs.

Should you have any additional questions, send a tweet to @Sahaswaranamam.

Q: How can you best handle security authentication pop-ups from specific browsers? What are the best ways to switch between tabs and to close tabs?

A: Use the getCurrentTabId API to get the handle of the current window. Once you have the pop-up window handle, you can close it using browser.close(popUpHandle).
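A minimal sketch of that flow, assuming WebdriverIO v4 in sync mode. The selector that opens the pop-up is hypothetical, and note that in v4 close() closes the currently focused window and then switches to the handle you pass, so check the exact semantics against the docs for the version you use:

    const mainWindow = browser.getCurrentTabId();   // remember the main window handle

    browser.click('#opens-popup');                  // hypothetical action that opens the pop-up

    // the handle that is not the main window belongs to the pop-up
    const popUpHandle = browser.getTabIds().filter(id => id !== mainWindow)[0];

    browser.switchTab(popUpHandle);                 // focus the pop-up (interact with it if needed)
    browser.close(mainWindow);                      // close the pop-up and switch back to the main window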

Q: How should I handle SOAP/SOAPUI testing?

A: Generally speaking, Selenium and Webdriver are appropriate for UI testing. If your intention is to test the APIs, I would suggest using tools like JMeter and/or Taurus. Reference: http://gettaurus.org

Q: How do I create my own wrapper? (How can I check for page title?)

Q: What are your thoughts on using a recording IDE versus writing your own automated scripts in terms of time efficiency, maintenance, robustness, and efficiency?

A: While record-and-replay tools can help you get started faster, they have an inherent weakness that leads to brittle tests: when the UI changes, it is harder to update the generated code because the team doesn’t know the architecture and design behind it. Other limitations:

– They are often proprietary and licensed.

– Some tools are inflexible: a small change might force you to regenerate the entire workflow.

Overall, record/replay tools might be a good solution for a UI that doesn’t change. For a changing interface, well-understood hand-crafted code is better from every perspective.

Q: It was my understanding that there is no guarantee of the order in which unit tests will run. In your example you have unit tests that are running one part of your workflow, but if they did not run in the order you expect they would fail.

A: Mocha runs the it() blocks inside a describe() in the order they are declared, and it can handle both synchronous and asynchronous execution (by passing a callback to the it() block). My example uses Mocha and leverages that synchronous, in-order execution to orchestrate the workflow.
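A minimal sketch of what that looks like, assuming Mocha with the WebdriverIO test runner in sync mode; the steps and comments are illustrative:

    describe('blog post workflow', () => {
      // Mocha executes these it() blocks in declaration order,
      // so later steps can rely on state created by earlier ones.
      it('creates a post', () => {
        // browser.url('/wp-admin/post-new.php'); ...
      });

      it('verifies the post is visible', () => {
        // depends on the post created in the previous step
      });

      it('deletes the post', () => {
        // final cleanup step of the workflow
      });
    });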

Q: Is there an expected condition that can do page reload?

A: Sometimes a test has to wait for a process to complete, but the relevant element is not updated except on a page reload. Webdriver.io offers an API to reload/refresh the page. Check out http://webdriver.io/v3.0/api/protocol/refresh.html. Depending on your workflow, reload the page and use one of the waitFor APIs to wait for that specific element to be visible or enabled.
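For example, a minimal sketch assuming WebdriverIO v3/v4 in sync mode; the selector and timeout are hypothetical:

    browser.refresh();                                    // reload the page via the protocol refresh command
    browser.waitForVisible('#status-complete', 10000);    // wait up to 10 seconds for the element to appear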

Q: How do you write tests that aren’t fragile? How do you best introduce approaches to limiting brittle integration testing and improving test reliability?

A: Some of my top picks:

  1. Make sure a UI test is the right technique to automate the requirement. If the requirement can be tested through view testing or API testing, prefer that over Webdriver-driven UI tests.

  2. Prefer declarative over imperative BDD.

  3. Adopt the Page Object pattern and follow a clear chain of responsibility between tests and page objects (see the sketch after this list).

  4. Prefer the “Tell, Don’t Ask” pattern: build the logic in the page objects and keep the tests lean.

  5. Avoid Thread.Sleep and handle asynchronous behavior in the page object logic.

  6. Share test logic and engineer the automation with appropriate coding patterns and principles.

  7. Constantly review and refactor test automation code, much like production code.
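To illustrate items 3 and 4, here is a minimal Page Object sketch in the “Tell, Don’t Ask” style, assuming WebdriverIO v4 in sync mode; the file names, selectors, and URL are hypothetical:

    // login.page.js - the page object owns the locators and the waiting logic
    class LoginPage {
      open() {
        browser.url('/login');
      }

      loginAs(username, password) {
        browser.setValue('#username', username);
        browser.setValue('#password', password);
        browser.click('#submit');
        browser.waitForVisible('#dashboard', 10000);   // async handling lives here, not in the test
      }
    }
    module.exports = new LoginPage();

    // login.spec.js - the test tells the page what to do and stays lean
    const loginPage = require('./login.page');

    describe('login workflow', () => {
      it('lands on the dashboard after login', () => {
        loginPage.open();
        loginPage.loginAs('demo', 'secret');
      });
    });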

Q: We have an old legacy app and no team adoption of test automation. Changes break things all the time. How would you suggest we talk to the team about adding test automation to our process?

A: I would suggest beginning by visualizing the value stream map for your delivery process to understand the current engineering cycle time and bottlenecks, and to make waste visible. Additionally, try to quantify feature development vs. bug-fix effort and the number of bugs found in production.

Typically, lack of automation will indicate high cycle time, long (manual) test effort, high defect rate and/or high bug fixing effort. With that initial measure you could work with the product and technology leadership to improve the situation.

Q: What are some good “quality” measurements we can use to demonstrate project success?

A: IMO, quality/test automation isn’t a separate effort; it’s part of development. It should help ship the product faster with greater quality, and each type of test should help increase confidence. Given that, a value stream map before and after the effort should expose the benefits (if test automation was the bottleneck).

In addition, measure:

  • Total test automation (#unit tests, #view test, #api workflow tests, #UI workflow tests, #A/B tests). Expectation – overall trend should be up (we should be adding more automation), individual test automation technique trend should align to a pyramid.

  • Customer reported bugs. Expectation: this should be trending down

  • Automation success rate over time. Expectation: should be trending up and stay close to 100%

  • Automation execution time over time. Expectation: should be trending down

Q: How do I avoid Thread.sleep()? I’ve put in all kinds of waits in my code, but I still get periodic failures because some element or another can’t be found. Is that just something you have to live with when doing browser testing? Or is there ever a reliable method that you can trust every time? How can we tell Webdriver to wait until Ajax is done?

A: Generally, thread.sleep or waits are used when the UI is waiting on an asynchronous request from the back-end. The easiest way to handle the situation is to use the Webdriver-provided expected conditions class.

If your language of choice doesn’t have anything like the ExpectedConditions class, I would suggest looking at how Webdriver.io implements the same logic and building your own if necessary. We’ll have another blog post soon walking through this.
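In WebdriverIO, a similar effect can be achieved with waitUntil, which polls a condition instead of sleeping for a fixed time. A minimal sketch, assuming v4 in sync mode; the selector, condition, and timeout are hypothetical:

    browser.waitUntil(
      () => browser.isVisible('#results'),          // whatever signals that the AJAX call has finished
      15000,                                        // give up after 15 seconds
      'expected the results to appear within 15s'
    );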

Q: How can you speed up tests with Sauce Labs?

A: If your test is slow due to asynchronous behavior on the app, I’m not sure the “test” can run faster than the app. We need to look at the application performance to improve the situation.

Given that the app is faster but tests are running slow, there could be many reasons. Some of the common things I would try:

  • CRUD flow – try to combine scenarios to be meaningful end-user behaviors. For example, let’s assume that you are testing the WordPress blogging app (create blog post, view the blog post, verify visitors, view by geography, delete the post etc). If each one of them is an independent scenario, potentially some steps are repeated (e.g., launching the browser, navigating to the website, logging in, navigating to posts page, etc). Instead of separate scenarios, if we combine them to be logical workflow for a given persona, repeated steps can be optimized and as a result tests complete faster.

  • Scaled infrastructure – If all your UI tests are essential, run them in parallel. Leverage Sauce Labs or a Selenium grid as appropriate

  • Test category & parallel execution – Split the tests into categories and run them in parallel (see the config sketch after this list).

  • Limit browser mix – From the utilization metrics, learn which browsers your customers use most and prioritize that browser mix. We can’t test all scenarios across all browsers and all versions over time.

  • Logging and visibility – Integrate logging with a time-series database, create visibility, and track flakiness and slow-test trends so you know where to focus and improve.
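As a sketch of the test-category and parallel-execution point above, a wdio.conf.js fragment along these lines splits specs into suites and runs sessions in parallel. The property names follow WebdriverIO v4's test-runner configuration; the paths and values are hypothetical:

    exports.config = {
      specs: ['./specs/**/*.spec.js'],
      suites: {
        smoke: ['./specs/smoke/*.spec.js'],
        checkout: ['./specs/checkout/*.spec.js'],
      },
      maxInstances: 10,                  // number of parallel sessions (e.g., against Sauce Labs)
      capabilities: [
        { browserName: 'chrome' },
        { browserName: 'firefox' },
      ],
    };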

Q: How difficult is it to integrate Webdriver.io with a CI server like Jenkins?

A: We’ll have another blog post on this soon. However, it’s fairly simple to integrate with any CI/CD system. In my example project, all you need to do is:

  • Check out the source from your repo

  • Navigate to the *_tests directory

  • Run npm install

  • Run the npm script that executes the tests against Sauce Labs

The last command returns a zero exit code on success; configure your CI system to fail the build on a non-zero exit code.

Q: Are view tests part of the product code base same as unit?

A: Yes. We tag them as “view specs” and run them as part of the unit tests.

Q: Can Selenium support shared object repositories (a concept of UFT)? Just like LeanFT can we build upon Selenium tests using such repos?

A: I’ve not used either of the above-mentioned products. However, a UI map can be considered a repository of UI elements, and it can be shared via package management.

Q: What is the best way to handle timeout issues with complex UI scripts from Jenkins, e.g. timeout occurred after 300 sec (randomly)?

A: IMO, this has less to do with Jenkins or Sauce Labs. It can be handled through the test runner’s timeout settings (e.g., Mocha) and your testing framework (e.g., Webdriver.io).
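For example, here is a minimal sketch of raising the Mocha timeout in a WebdriverIO v4 configuration rather than relying on the CI job’s timeout; the value is illustrative:

    exports.config = {
      framework: 'mocha',
      mochaOpts: {
        timeout: 120000,   // allow each test up to two minutes before Mocha fails it
      },
    };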

Q: Is Webdriver.IO a part of Webdriver or a different product? In other words, can Webdriver.IO work with Webdriver? Is Webdriver.IO the same as Selenium Webdriver?

A: Webdriver.IO is a wrapper on top of Webdriver to control browser and mobile applications efficiently.

Q: Is there a way to do step-by-step debugging with Webdriver.io?

A: Yes:

  1. Configure your IDE for debugging Node.js. For example, I use VS Code; here is a reference: https://code.visualstudio.com/Docs/editor/debugging

Q: How should logic be passed to the configuration in WebdriverIO to support a variety of modes, e.g., multiple brands, environments, resolutions, and local vs. cloud runners?

A: Webdriver.io leverages JavaScript to receive configuration parameters. You can create a master config (generic one), environment specific configs, and merge them at execution time. Reference: https://github.com/sahas-/webdriverio-examples/tree/master/googleSearch_tests/config
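A minimal sketch of that approach: a generic base config plus an environment-specific overlay merged at execution time. The file names and the use of the deepmerge package are assumptions for illustration, not necessarily what the referenced repository does:

    const deepmerge = require('deepmerge');
    const baseConfig = require('./wdio.base.conf').config;

    // overlay cloud-specific settings on top of the shared base config
    exports.config = deepmerge(baseConfig, {
      user: process.env.SAUCE_USERNAME,
      key: process.env.SAUCE_ACCESS_KEY,
      capabilities: [{ browserName: 'chrome', platform: 'Windows 10' }],
    });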

Q: What is the best way to implement TDM (Test Data Management)? Is it a good practice to hardcode my data in the test itself or use an external resource like Excel, CSV?

  1. Use a separate test DB.

  2. Generate data on the fly in code and use it.

  3. Use an Excel spreadsheet.

  4. Use a TDM tool (a free one, ideally).

  5. Use SQL inserts directly into the app DB.

A: IMO, I try to do #2 as much as possible: as part of the test setup, call the back-end service and create the necessary data, then delete it as part of cleanup. Secondly, if we follow a CRUD-workflow-based approach, create the data as the first step of the workflow, test other operations such as edit and update, and finally delete the record as the last step of the process.
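A minimal sketch of option #2, assuming Mocha hooks and an HTTP client such as axios; the endpoint, payload, and field names are hypothetical:

    const axios = require('axios');

    describe('post editing workflow', () => {
      let postId;

      before(() => {
        // create the record the UI test will work on
        return axios.post('https://example.com/api/posts', { title: 'fixture post' })
          .then(res => { postId = res.data.id; });
      });

      after(() => {
        // clean up so the next run starts from a known state
        return axios.delete('https://example.com/api/posts/' + postId);
      });

      it('edits the post through the UI', () => {
        // browser.url('/posts/' + postId + '/edit'); ...
      });
    });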

Q: Which is the most commonly recommended framework to use with Sauce Labs?

A: It’s hard to say. IMO, your choice of tool depends on your:

  • Goal (i.e., we need ONE framework to test legacy app + web app + mobile app + native mobile app)

  • Development stack – the framework should align with your development stack so that developers can contribute to and maintain the tests. Ultimately this supports the “team owns quality” principle.

Q: Why use WebdriverIO instead of Protractor? For testing an Angular-based website, how much can Selenium help? Or should I just use Protractor by itself? How different is this framework than Nightwatch.js? Is Webdriver.io a replacement/alternative to Nightwatch.js?

A: The webinar’s intention was to look at some practical patterns that can help stabilize and scale test automation. These patterns are applicable to almost any language of choice, and it helps if the framework you pick supports implementing them. I use WebdriverIO for several reasons mentioned in the talk; you should evaluate your choice of language/framework based on your goals.

Q: What is the best way to run just specific tests within our test suite?

A: It depends on the testing framework you have chosen. For example, I use Mocha in my example, and its --grep option allows me to run only the tests matching a regular expression, as sketched below.
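A minimal sketch, assuming the WebdriverIO test runner with the Mocha framework; the pattern is hypothetical and the exact CLI flag may vary by version:

    // wdio.conf.js
    exports.config = {
      framework: 'mocha',
      mochaOpts: {
        grep: 'checkout',   // run only tests whose titles match this pattern
      },
    };

    // or override from the command line, e.g.:
    //   ./node_modules/.bin/wdio wdio.conf.js --mochaOpts.grep checkout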

Q: What is the ROI for automation specialists spending time explaining the advantages of programming unique ID and NAME tags to web developers?

A: Unique IDs and names do offer stable ways to locate and act on an element. However, there are some implementations where providing a unique ID is impossible. For example, a grid component populates data dynamically based on the response from the back-end, and it’s not easy to provide a unique ID for every single cell. In these situations, the best bet is to collaborate with the UI/HTML developer who builds the component and let them provide the UI map class, since they know the best technique to locate the elements. This is one of the reasons I recommend separating the UI map.
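For illustration, a minimal sketch of a separated UI map: the locators live in one module that whoever owns the markup can maintain, and page objects and tests import it. The file name and selectors are hypothetical:

    // results.map.js
    module.exports = {
      searchBox: '#search',
      resultsGrid: '[data-test="results-grid"]',
      // dynamic grid cells are located relative to the grid instead of needing per-cell IDs
      gridCell: (row, col) =>
        '[data-test="results-grid"] tr:nth-child(' + row + ') td:nth-child(' + col + ')',
    };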

Q: Does this technique work with Appium for Mobile App testing?

A: Yes

Q: Should we automate all scenarios included in a user story? If yes, why? If no, why not?

A: You should automate as much as possible and leverage the machine to help boost confidence in your app. However, you should pick the appropriate automation technique to achieve your goal.

Q: Do you prefer a monolithic structure for mobile application and website automation? Or is the best practice to create and develop as separate projects?

A: IMO, it would be great to leverage common code as much as possible and drive the web/mobile workflow based on the configuration. Less code, less maintenance.

Q: What is the best way to shorten Selenium code, other than POM (Page Object Model) and PF (Page Factory)?

A: I need a bit more context. However, at a high level:

  1. Implement possible OOP concepts to cut down redundant code and reduce maintenance.

  2. Leverage the Agile testing quadrants to rationalize the automation and adopt the appropriate technique. For example, view testing can be used to increase automation confidence and reduce the Webdriver-based automation footprint.

Q: How best to use Selenium or WebdriverIO for microservices?

A: Microservices architecture doesn’t change the UI test automation paradigm. However, I would strongly suggest referencing the book “Building Microservices”, in which the author dedicates a chapter (Chapter 7) to testing and briefly explains different techniques for stable, maintainable test automation.
