How Yahoo! Mail Transformed Its Functional Testing + Continuous Delivery Process [Q&A]

Posted by Bill McGee in Webinars

Thanks to those of you who joined us for our last webinar, How Yahoo! Mail Transformed Its Functional Testing + Continuous Delivery Process, with Front End Developer Neil Manvar. You can check out Neil's slides and the audio from the webinar HERE. Neil's been kind enough to participate in a follow-up Q&A. See his answers below.

Q: How stable are your tests? Do you get flaky test results often? What driver do you use to control the browser?
A: The tests are fairly stable. About 90% of them pass on the first run. We have to use re-runs to weed out intermittent failures as well as failures due to parallelism and account conflicts. Our first re-run is in parallel, and our second re-run is linear. After that, we rarely see intermittent issues or flakiness.
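For illustration, a minimal harness along these lines might look like the following Ruby sketch; the rspec command, spec paths, and worker counts are assumptions, not a description of our actual tooling:

  require 'open3'

  # Run each spec in its own process across N worker threads; collect failures.
  def run(specs, workers:)
    queue    = Queue.new
    failures = Queue.new
    specs.each { |s| queue << s }
    Array.new(workers) do
      Thread.new do
        loop do
          spec = (queue.pop(true) rescue nil)   # non-blocking pop; nil when the queue is drained
          break unless spec
          _out, _err, status = Open3.capture3('rspec', spec)
          failures << spec unless status.success?
        end
      end
    end.each(&:join)
    Array.new(failures.size) { failures.pop }
  end

  specs  = Dir['spec/functional/**/*_spec.rb']
  failed = run(specs, workers: 8)                        # full suite, in parallel
  failed = run(failed, workers: 8) unless failed.empty?  # first re-run, in parallel
  failed = run(failed, workers: 1) unless failed.empty?  # second re-run, linear
  abort("Persistent failures: #{failed.join(', ')}") unless failed.empty?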
 
Q: Are the integration functional tests automated in the CI pipeline as well?
A: All tests are automated in the CI pipeline. Continuous Delivery means that every commit reaches production without any manual intervention. This means the tests must be in the pipeline, otherwise untested code could reach production.
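As a rough illustration of that gating, a Rakefile-style dependency is enough to keep untested code out of the deploy path; this is a sketch with hypothetical task names and commands, not our real pipeline configuration:

  # deploy cannot run unless the functional suite passes first;
  # `sh` aborts the task chain on a non-zero exit status.
  task :functional do
    sh 'ruby scripts/run_functional_suite.rb'   # hypothetical test harness
  end

  task :deploy => [:functional] do
    sh 'bundle exec cap production deploy'      # hypothetical deploy command
  end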
 
Q: Do devs write functional tests? If so, how do you get them to do so?
A:  Yes, devs do write functional tests at Yahoo; they include them as part of their pull request when editing or adding features and functionality to our products. To educate devs, we provide written tutorials and training on how to write effective functional tests.
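For a sense of what that looks like, here is a hypothetical Watir-webdriver functional test a developer might attach to a pull request; the URL, selectors, and flow are illustrative, not taken from our product:

  require 'watir-webdriver'

  browser = Watir::Browser.new :firefox
  begin
    browser.goto 'https://mail.example.com'                       # hypothetical URL
    browser.button(:class => 'compose').when_present.click
    browser.text_field(:name => 'to').when_present.set 'someone@example.com'
    browser.text_field(:name => 'subject').set 'Functional test'
    browser.button(:class => 'send').click
    confirmation = browser.div(:class => 'sent-confirmation')     # hypothetical selector
    confirmation.wait_until_present
    raise 'Send confirmation never appeared' unless confirmation.present?
  ensure
    browser.close
  end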
 
Q: How often are you updating production? Every change, grouping of changes (daily, weekly...), etc?
A: We aim to update production once a day. We update our test boxes with the latest code on every commit.
 
Q: Have you changed any of the tools in your stack since you started? In other words, can you easily swap tools in and out as new ones come out?
A: No, we haven't changed any tools since we started. I did my research and picked these because of the open source activity around them, the number of people using them, and the ongoing support for them. However, if we needed to, we could easily swap out a technology (e.g., switch the web driver or the test runner), since we abstract and encapsulate things pretty nicely in the framework.
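As a rough sketch of what that encapsulation can look like, tests talk to a thin wrapper instead of the driver directly, so swapping backends touches one class rather than every test (class and method names here are hypothetical, not our framework's API):

  require 'watir-webdriver'
  require 'selenium-webdriver'

  # Thin adapter: tests call click_by_class and never touch the driver API directly.
  class BrowserDriver
    def initialize(backend = :watir)
      @backend = backend
      @browser = backend == :watir ? Watir::Browser.new(:firefox)
                                   : Selenium::WebDriver.for(:firefox)
    end

    def click_by_class(css_class)
      if @backend == :watir
        @browser.element(:class => css_class).when_present.click
      else
        @browser.find_element(:class => css_class).click
      end
    end
  end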
 
Q: Why didn't you use Selenium? Why Watir?
A: Watir-webdriver is a wrapper around Selenium that makes interacting with elements easier and more concise. Under the hood it translates into Selenium-webdriver commands. We chose Watir-webdriver because it is easier to read, learn, and use than raw Selenium-webdriver.
For example, a click in Watir-webdriver is a one-liner:
@browser.element(:class => 'someclass').when_present.click
The same click would take noticeably more code in raw selenium-webdriver.
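As a rough illustration (not from our codebase; the 30-second timeout is an assumption to mirror Watir's built-in waiting), the selenium-webdriver version might look like:

  # Explicit wait for the element to exist and be visible, then click it.
  wait = Selenium::WebDriver::Wait.new(:timeout => 30)
  element = wait.until do
    el = @driver.find_element(:class => 'someclass')
    el if el.displayed?
  end
  element.click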
 
Q: Seeing as you have functional tests as a check before CD, how do you manage to catch UI defect bugs (for example, if a button is positioned incorrectly)?
A: We push in stages. Once the functional tests pass for a given commit, it is pushed to test servers, and eventually it reaches all internal users (our company), where we can dogfood it for a few hours before it goes to production. So the entire company is using the new package of Yahoo Mail, and if we hear no complaints (i.e. no visual defects), it is automatically pushed to production. In other words, we depend on all of our employees' eyes and experience to make sure there are no UI defects that affect usability.
 
Q: Fundamentally, the QA effort is still there; it's just that the effort was shifted to the dev team. Is that right?
A: Yes and no. The test automation effort was shifted to the dev teams. We no longer do any explicit manual testing before we push, whereas previously our QA team would certify the new code before a push. Now, if the tests pass, the code automatically gets pushed to the next stage.
 
Q: As one transitions from manual testing to a CD model, do you think it is better to get all the test scripting for existing regression scenarios completed first before switching to this model? What do you recommend from experience?
A: It is better to write out all the scenarios / test cases before switching to this model. Once you go to a CD model, the only thing protecting you from broken features and functionality is those test cases. Therefore, you must automate every check or test you currently perform manually before a push and put it into the CD pipeline; anything that would be done manually to certify a build must now happen automatically as part of the CD process, otherwise you would be pushing without that check.
 