
Posted October 13, 2015

Using BDD To Automate Testing Of Single Page Apps (SPA) In Clinical Trials


This guest post was written by Raymond Ponoran and Mihai Balint from Cmed Technology.

Over the past year, we at Cmed Technology have focused our engineering efforts on building a simple-to-use yet powerful automated testing stack. In that time we've moved testing of the web apps in our e-Clinical product suite from a mostly manual process to a fully automated one.

Looking back, we think we did well: testing web apps is faster than ever, and we still have room for improvement by using Sauce Labs' parallel jobs and reducing the overlap in our tests. Along the way, we've learned a lot about shaping the testing stack and improving the testability of our web apps. Our goal was to establish a setup that would stay current for a long time and scale with our testing needs. Working toward this goal, we chose to integrate a number of different test automation technologies, carefully selecting existing tools backed by large communities. We wanted to benefit from the technical advances developed within those communities, and we are excited to contribute ourselves. Now that our stack has become more mature and stable, we feel the time is right to start sharing our experience!

This is an introductory post in which we'll discuss our approach in general, talk about the individual pieces and the connections between them, and touch on some key gotchas and best-practice rules that we found along the way.

A Few Facts About Our Approach and Tools

Sauce Labs

  • We test our e-Clinical web apps on the four major browsers (Chrome, Firefox, Safari and Internet Explorer) on Linux, Mac OS X and several versions of Windows, as well as Apple's mobile iOS. These days, maintaining an in-house server farm that spins up VMs with various OS and browser combinations is costly, time-consuming, and cumbersome. Using the Sauce Labs platform for cloud-based testing made these problems go away and let us focus on quickly testing mobile and web apps on whichever platforms we wanted our product to support (see the sketch after this list). It makes running, debugging and scaling test suites easier, saving us both time and money. Sauce Labs also lets us run tests in parallel, considerably speeding up our test suites.

  • Sauce Labs features such as videos and screenshots help us identify issues faster and are essential for the auditability of our engineering process. Each software version is accompanied by documentation and videos that show evidence of testing and business control of the SDLC, both essential for releasing apps in a regulated environment.

  • Of course we had technical issues along the way. Who doesn't?! For example, we found that Linux browser windows are much smaller than on Windows, and there were networking issues. But the Sauce Labs support team was really helpful in resolving them.
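To make this concrete, here is a minimal sketch of how a test session can be pointed at Sauce Labs from Python with Selenium's Remote driver, as it worked at the time of writing. The browser/platform choice, the job name, and the SAUCE_USERNAME / SAUCE_ACCESS_KEY environment variables are assumptions for the example, not our exact configuration.

[code]# A minimal sketch (not our exact setup): starting a browser session
# on Sauce Labs with Selenium's Remote driver.
import os
from selenium import webdriver

capabilities = {
    "browserName": "chrome",    # assumed browser choice
    "platform": "Windows 10",   # assumed platform choice
    "name": "Search for BDD",   # job name shown in the Sauce dashboard
}
hub = "http://%s:%s@ondemand.saucelabs.com:80/wd/hub" % (
    os.environ["SAUCE_USERNAME"], os.environ["SAUCE_ACCESS_KEY"])

driver = webdriver.Remote(command_executor=hub,
                          desired_capabilities=capabilities)
driver.get("http://www.wikipedia.org/")
driver.quit()[/code]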

BDD With Behave & Behaving

  • We write our tests with behave and behaving: behave is a Python BDD (Behaviour Driven Development) framework with a Gherkin-based language (similar to Cucumber, Lettuce, and Freshen), and behaving is a web application testing framework built on top of it, with scenarios backed by Python step code. Below is an example scenario (taken from the examples showcased on the behaving website).

  • We contribute back to the community with the test steps we develop and keep them as general as possible rather than making them app-specific so that others can benefit too.

Sample Behave Code

[code]Feature: Text presence

  Background:
    Given a browser

  Scenario: Search for BDD
    When I visit "http://www.wikipedia.org/"
    And I fill in "search" with "BDD"
    And I press "go"
    Then I should see "Behavior-driven development" within 5 seconds[/code]
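Each Gherkin step above maps to a Python function. As an illustration of what that Python side can look like, here is a sketch of a custom step in behave's style; the step text and the assertion are our own example rather than a behaving built-in, and context.browser is the splinter browser object that behaving attaches to the test context.

[code]# A sketch of a custom BDD step (example only, not a behaving built-in).
# behaving exposes the running splinter browser as context.browser.
from behave import then

@then('the page title should contain "{text}"')
def page_title_should_contain(context, text):
    title = context.browser.title
    assert text in title, 'expected "%s" in page title "%s"' % (text, title)[/code]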

Atlassian Tools

  • We use JIRA to track User Stories and Bugs, and Confluence for team collaboration and release documentation.

  • We use Mercurial to store and version our test scripts in Bitbucket. This is obvious from an engineering perspective, but critical in the regulated industry where we operate, as it helps us provide traceability from requirements through to the Scenarios used to test our releases.

Behave Pro

  • We write our tests in Behave Pro, a JIRA plugin that lets us link each JIRA issue to at least one test Scenario, manage our tests, and maintain requirements traceability.

  • We export and version the Scenarios into Bitbucket and then, finally, pass them to Sauce Labs for execution on various OS/browser combinations (see the sketch after this list).

  • As with Sauce Labs, Behave Pro's support team was very responsive and helped us with the technical issues we encountered.
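One way to drive an exported suite against different OS/browser combinations is behave's userdata mechanism. The sketch below is an assumption about how such a run could be parameterized, not our exact environment.py:

[code]# environment.py -- a hypothetical sketch: choose the OS/browser
# combination per run, e.g.:
#   behave -D browser=firefox -D platform="Windows 10" features/

def before_all(context):
    # behave collects -D key=value pairs into config.userdata
    context.browser_name = context.config.userdata.get("browser", "chrome")
    context.platform = context.config.userdata.get("platform", "Linux")
    # these values would feed the desired capabilities shown earlier[/code]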

Fixtures

Our server implementation supports saving and restoring previous states. This state includes everything from database content to user access tokens. We use these fixtures to set up our test environment to a known state before running our tests.
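In behave, such a reset naturally lives in an environment.py hook. The endpoint and snapshot name below are hypothetical; this is only a sketch of the idea, assuming the server exposes a fixture-restore API:

[code]# environment.py -- hypothetical sketch of restoring a known server
# state before each scenario (URL and snapshot name are assumptions).
import requests

FIXTURE_API = "https://test-server.example.com/api/fixtures"

def before_scenario(context, scenario):
    # Roll the server back to a named snapshot so every scenario starts
    # from the same database content and user access tokens.
    response = requests.post(FIXTURE_API + "/restore",
                             json={"snapshot": "baseline"})
    response.raise_for_status()[/code]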

Summary So Far

All of the above is summarised in the following diagram:

[Diagram: overview of the test automation stack described above]

Practical Behaviors That Helped To Greatly Improve The Quality & Efficiency Of Our Automated Testing

  • We continuously evaluate and improve the testability of our web apps. Often this is as trivial as adding CSS classes or HTML IDs to elements. Then there are other, more challenging cases, like testing form fields that POST their values in response to blur (focus lost) events; see the sketch after this list.

  • We review every test scenario to make sure its coverage is correct and the scenario itself is readable and easily understood. We use lots of CSS and jQuery selectors, and we absolutely love it when these are short and read like natural language.

  • Any test steps that touch third-party apps (such as OAuth2 providers) we package in macros or custom BDD steps. This isolates test suites from any changes the third party makes to their app, for example when Google decides to change its OAuth2 login screens.

  • We have created a couple of custom BDD steps to give us macros, so that sequences of steps can be reused and even take parameters. Macros are created within our BDD feature files just like any other scenario. Each web app has its own library of macros, which we've defined in a feature file distinct from the feature files of the test suite. The macro library is tagged with @MACRO.

  • Creating macros relies on an I define "macro_name(variables)" custom step, as follows:

  • [code]@MACRO
Feature: Test macros library

  Scenario: Define macro
    When I define "set_date(field,day,month,year)" as
    """
    When I set "$field .day" to "$day"
    And I set "$field .month" to "$month"
    And I set "$field .year" to "$year"
    """[/code]

  • Using macros is as simple as an I execute "macro_name(values)" custom step, for example:

  • [code]Feature: Test macros

  Scenario: Execute macro
    When I execute "set_date(#page .field__birthdt, 19, 02, 1983)"[/code]

  • Macros are defined in a ".feature" file of their own and are available across all of our tests.
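For the blur-triggered fields mentioned above, a custom step can fill the field and then explicitly fire the blur, so the app's focus-lost handler POSTs the value. The step text and the JavaScript nudge below are an illustrative assumption, not one of behaving's built-in steps:

[code]# A hypothetical custom step for form fields that POST on blur;
# context.browser is the splinter browser that behaving provides.
from behave import when

@when('I fill in "{selector}" with "{value}" and blur')
def fill_and_blur(context, selector, value):
    field = context.browser.find_by_css(selector).first
    field.fill(value)
    # Explicitly blur the field so the app's focus-lost handler fires.
    context.browser.execute_script(
        'document.querySelector("%s").blur();' % selector)[/code]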

"Migrating People" To Automated Testing

We've discovered that shifting people's mindsets when migrating to automated testing can be quite a challenge:

  • Tests are alive! We have a dedicated test team, and we found that its members are surprised, and may become frustrated, when perfectly good tests start failing because developers have changed the app. Such breakages will happen with absolute certainty, so be prepared and allocate time to update the tests to match the app. When changes do happen, make sure you have countermeasures like macros or data-driven tables to keep test maintenance easy.

  • Developers need to be asked to consider the testability of web apps. At first it does not come naturally to them to add CSS classes and IDs to HTML elements just to shorten the CSS selectors used by tests.

Want to hear more? Leave a comment... we look forward to your questions and feedback!

Mihai Balint

Mihai Balint is a Software Technical Lead with Cmed Technology. He is interested in software engineering and maintenance research and has been involved in several publicly funded projects researching software design defects and formal software specification. Recently he's been tasked with guiding the design and development of the next-gen browser-based web application suite that Cmed Technology is developing to support the conduct of clinical trials. mibalint@cmedtechnology.com

Raymond Ponoran

Raymond Ponoran is a Testing Technical Lead at Cmed Technology who strives for quality and likes software that is not only well done but well tested too. He's been leading functional, regression, system and sanity test runs. Ray has worked on defining testing strategies, improving processes and tools, and on supporting the transition from mainly manual testing to automated testing with BDD. rponoran@cmedtechnology.com
