Making Your App Testable
When writing test automation, one of the most important factors in determining the amount of time and resources you will consume (and ultimately the success or failure of the endeavor) is the testability of your application. By testability, I'm referring to how the app interacts with UI (and other) automation frameworks, the ease with which a test script can set up the scenarios you wish to test, and how you make your tests safe for concurrency.
Making Elements Accessible
Let's address the matter of controlling your application with automation frameworks first. Given that you are reading this blog, it is likely you are using either Selenium or Appium as your automation framework, so this blog post will only address these frameworks.
Web Content (Selenium)
Let's start with Selenium. In order for your web app to be easily controlled by Selenium, you need to think ahead as to how you will identify important pieces of the DOM when constructing tests. Selenium provides many means by which you can do this, called "Locator Strategies". Some are better than others. You should consider which of these you will be using in your tests as you develop the user interface. Ideally, an ID attribute would be applied directly to the tag of any element a test will exercise.
Sometimes, for one reason or another, you may not be able to use an ID. If this is the case, the next recommended technique is to use a CSS selector. If your web app is developed with good principles, such as BEM (Block Element Modifier), it is likely easier to automate, as important elements should have relatively short, globally unique CSS selectors. If it does not, I would not recommend adding a CSS class just for automation; serving automation isn't the purpose of the styling language. Instead, I would suggest using an HTML data-* attribute. You can still use the CSS selector locator to grab an element by one of these attributes, but you avoid adding unnecessary classes to your CSS, and the purpose of the HTML attribute will be much clearer.
As for some of the other locator strategies out there, it is my opinion they should be used sparingly. If you have an Angular app, where tag names are often unique, perhaps you would use the tag name locator strategy. I would strongly discourage using XPath, as its implementation across different browsers is inconsistent, which can result in flaky tests. In particular, anything using indexes, or that is heavily dependent on the organization of the DOM, is likely to be brittle, as you can expect the DOM to change as you develop your application. I would also strongly discourage locating by text, as this will inhibit the ability of your tests to work in multiple languages (e.g., "Hello" in English content would be "Hola" in Spanish content).
Good article on XPath vs. CSS Selectors: http://elementalselenium.com/tips/32-xpath-vs-css
CSS Selector Reference: https://developer.mozilla.org/en/docs/Web/Guide/CSS/Getting_started/Selectors
XPath Reference: http://archive.oreilly.com/pub/a/perl/excerpts/system-admin-with-perl/ten-minute-xpath-utorial.html
Native Mobile Content (Appium)
Making mobile apps easier to automate follows similar principles to the web. In general, if you follow the accessibility guidelines set forth by Apple and Google, especially when constructing custom UI elements, you should be in pretty good shape. However, in my experience most apps do not adhere closely to these guidelines. No worries; if this is the case, you should still be able to make it work.
For iOS, the best technique for identifying elements in the app is the accessibilityIdentifier property on UI elements. There are three ways of setting it. You can add it in your XIB or storyboard using Interface Builder; it appears on the "Identity Inspector" tab in the Accessibility section. However, some UI does not use XIBs or storyboards and is generated programmatically. This is not a problem, as you can call setAccessibilityIdentifier: on most UI elements. You can also add a user-defined runtime attribute for the key "accessibilityIdentifier" in either Interface Builder or via code. Using accessibilityIdentifier is the preferred method, though using XPath for more complicated pieces of UI is acceptable, especially for dynamic content such as tables.
Note that you may have heard of the accessibilityLabel and accessibilityHint fields in the past. The problem with using these fields is that users relying on VoiceOver or other assistive technologies will hear the values you place there spoken to them, so it is strongly encouraged that you use accessibilityIdentifier instead. You may also need to set the isAccessibilityElement property to true, as some UI elements are excluded from the accessibility layer by default. Lastly, you may want to explore the accessibilityElementsHidden attribute on parent elements if you notice your element is not accessible. The easiest way to verify this is to use the Appium inspector, or the built-in accessibility inspector that can be enabled in your phone's settings.
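Once an accessibilityIdentifier is in place, an Appium test can find the element directly. The sketch below uses Appium's Python client conventions; the identifier name "login_button" is a hypothetical example, and the live-session calls are shown in comments because they need a running Appium server and device.

```python
# Sketch: locating an iOS element by its accessibilityIdentifier via Appium.
# On iOS, the identifier surfaces as the element's "name" attribute, so an
# iOS predicate string can also match on it.

def ios_predicate_for_identifier(identifier: str) -> str:
    """Build an iOS predicate string matching an accessibilityIdentifier."""
    return f'name == "{identifier}"'

# Usage with a live Appium session (requires Appium-Python-Client):
#
#   from appium.webdriver.common.appiumby import AppiumBy
#
#   driver.find_element(AppiumBy.ACCESSIBILITY_ID, "login_button")
#   driver.find_element(AppiumBy.IOS_PREDICATE,
#                       ios_predicate_for_identifier("login_button"))

print(ios_predicate_for_identifier("login_button"))
```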
For Android, the story is a bit simpler. The resource-id is the most reliable way to identify an element (e.g., By.id("android:id/text1")). You can find information on how to set the resource-id in the Android developer documentation. If a resource-id will not do it, then I'd recommend building an XPath expression that heavily leverages resource-ids. You may have seen previous guidance to use the content-desc attribute for Android, but the problem is the same as with accessibilityLabel and accessibilityHint on iOS: users with assistive technologies will hear or see the values you put there.
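A small sketch of both recommendations: fully qualifying a resource-id, and anchoring an XPath expression on a resource-id rather than on DOM position. The package name `com.example.app` and the ids are hypothetical; the live calls are commented since they need a running Appium session.

```python
# Sketch: building Android locators around resource-ids for Appium.

def resource_id(package: str, short_id: str) -> str:
    """Fully qualify a resource-id, e.g. com.example.app:id/username."""
    return f"{package}:id/{short_id}"

def xpath_under(parent_id: str, child_class: str) -> str:
    """An XPath anchored on a resource-id instead of on document position."""
    return f'//*[@resource-id="{parent_id}"]//{child_class}'

# Usage with a live Appium session (requires Appium-Python-Client):
#
#   from appium.webdriver.common.appiumby import AppiumBy
#
#   driver.find_element(AppiumBy.ID, resource_id("com.example.app", "username"))
#   driver.find_element(
#       AppiumBy.XPATH,
#       xpath_under(resource_id("com.example.app", "message_list"),
#                   "android.widget.TextView"))

print(resource_id("com.example.app", "username"))
```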
Setting Up Scenarios
As part of your tests you will inevitably need to get the app into certain states to test certain things. There are many techniques to go about this, and you should consider how you will set up UI tests for different features in your application as they are created.
Technique #1 - UI Automate your way to the scenario
Imagine a scenario where you need to create two user accounts and then perform some action in your application involving those two accounts (e.g., send a message from one to the other). Without any tools, your test will have to complete your product's registration flow twice before attempting to send the message between the two accounts. This is lengthy and slow. When writing UI automation, such as Selenium or Appium tests, a good rule of thumb is not to use the UI unless you are verifying it. The test for the registration flow should ideally be the one test that automates the UI of the registration flow rigorously. Automating this flow in other tests will make your tests take longer and will inject more brittleness into your test suite. However, I understand that sometimes this is the only way, especially when using 3rd party components.
Technique #2 - Build a test API
Good applications are well factored into many layers. A common layer is an API. A good API will handle all transactions that modify the business data, i.e. the front-end is responsible for display and the back-end makes changes to the data on your servers and performs business logic and calculations.
Having an API just for testing can greatly simplify your scripts and will increase the reliability with which they run. Imagine the previously mentioned scenario. Instead of running the lengthy registration UI automation two times, you could simply create an API endpoint that will generate the two accounts in the required states. Perhaps your API might even already have endpoints that your web front-end already calls to do the same thing. Be sure to test the UI flow once in a specific test, but for follow on tests, such as the example of sending a message, you can just call the API to set up state and only test the piece you care about, in this case, sending messages.
Note: Please remember to secure your test API as it is likely you will create endpoints you wouldn't want your customers to use... (e.g., set account balance, etc.)
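A sketch of what calling such a test API might look like from a test's setup code. The endpoint `/test/accounts` and its bearer-token auth are assumptions about a hypothetical backend, not a real service; the helper only builds the request, and the actual send is shown in a comment.

```python
# Sketch: setting up test state through a hypothetical test-only API endpoint
# instead of driving the registration UI. Endpoint and auth are assumptions.
import json
import urllib.request

def create_account_request(base_url: str, token: str, username: str) -> urllib.request.Request:
    """Build the POST that would ask the test API for a fresh account."""
    body = json.dumps({"username": username}).encode()
    return urllib.request.Request(
        f"{base_url}/test/accounts",
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# In a real test you would send it during setup:
#   with urllib.request.urlopen(create_account_request(...)) as resp:
#       account = json.load(resp)
req = create_account_request("https://staging.example.com", "secret-token", "sender")
print(req.get_method(), req.full_url)
```

With two such calls in the test's setup, the message-sending test exercises only the messaging UI it actually cares about.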
Concurrency
One of the biggest ways I see developers shoot themselves in the foot is by not considering that their tests will run in parallel. Whenever possible, avoid writing tests that depend on shared state in your application, and always take appropriate measures to isolate your tests from one another. It can be extremely time-consuming to unravel this mess later on.
Consider the above example concerning sending a message between two accounts. If we share the same accounts with other tests we may have a message show up that is detected by one test, but was actually sent as part of another test, leading to a false result. This is why testing in parallel requires advance planning, such as creating separate accounts for different tests.
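The simplest way to get that separation is to give every test its own freshly generated accounts, so no message sent in one test can ever be observed by another. A minimal sketch, assuming your test API (or registration flow) accepts arbitrary account names:

```python
# Sketch: per-test unique account names so parallel tests never share state.
import uuid

def unique_account_name(prefix: str = "test") -> str:
    """Generate an account name that no concurrently running test will share."""
    return f"{prefix}-{uuid.uuid4().hex[:12]}"

# Each test run creates its own sender/receiver pair:
sender = unique_account_name("sender")
receiver = unique_account_name("receiver")
assert sender != receiver
print(sender, receiver)
```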
Take another example wherein we need to test blocking a user from our service. If we create an account for the test and then block it, we may think we have isolated the side effects of our test. However, if at some point we block enough accounts from the same IP address, the server may blacklist that IP address entirely. Now we have introduced consequences that will cause hard-to-track-down errors in other tests.
Bottlenecks and Chokepoints
Bottlenecks in automation amount to resources that tests are required to share. They can be anything from environments (e.g., servers), accounts, and IP addresses (as mentioned in the example above) to the devices on which the tests execute. Completely isolating every aspect of all of your tests can demand a great deal of time and resources; imagine having to reformat a device and spin up an entire environment and private network for every single test.
You should try to intelligently choose where your bottlenecks will be and how to manage them. For example, if you have a limit on the number of accounts for your tests, you may have to write code to lease a test account to a test and then, when it is returned, clean and reset it for the next test. You may have to run certain tests in series rather than in parallel to avoid them stepping on each other's toes.
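The leasing idea above can be sketched with a bounded pool: a test checks an account out, no other test can touch it meanwhile, and it is reset before going back. The account names are hypothetical, and the reset step is a stub standing in for whatever cleanup your test API provides.

```python
# Sketch: leasing shared test accounts so parallel tests never collide.
import queue
from contextlib import contextmanager

class AccountPool:
    def __init__(self, accounts):
        self._pool = queue.Queue()
        for acct in accounts:
            self._pool.put(acct)

    @contextmanager
    def lease(self, timeout=30):
        acct = self._pool.get(timeout=timeout)  # blocks until an account is free
        try:
            yield acct
        finally:
            self._reset(acct)        # leave it clean for the next test
            self._pool.put(acct)

    def _reset(self, acct):
        # Stub: in a real suite this would call your test API to wipe
        # messages, unblock users, restore balances, etc.
        pass

pool = AccountPool(["qa-account-1", "qa-account-2"])
with pool.lease() as acct:
    print("leased", acct)   # run the test against this account
```

Because `Queue.get` blocks when the pool is empty, tests that exceed the pool size automatically serialize instead of stepping on each other's toes.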
In a perfect world, a well-written application can be spun up instantly in the cloud. Tools from cloud providers, such as AWS CloudFormation, make this possible. For devices and browsers, services such as Sauce Labs can dispense nearly any configuration you desire almost instantly. It's noble to strive for complete isolation and independence, but be prepared to make calculated compromises as you develop your application.
Having a highly testable application will make writing automated tests for that application a much less time-consuming and much more worthwhile experience. Creating a user interface that is automatable and conforms to vendor-supplied accessibility standards, providing convenient and reasonable ways to access application states, and preparing your test suite to run concurrently will start you off on the right foot towards achieving sustainable application stability via test automation.
I hope you found this article helpful. Feel free to tweet me at @thedancuellar with your comments.
Dan Cuellar is the creator of the open source mobile automation framework Appium, and Head of Testing at Foodit. Previously, he headed the test organizations for Shazam Entertainment in London and Zoosk in San Francisco, and worked as a software engineer on Microsoft Outlook for Mac, and other products in the Microsoft Office suite. He is an advocate of open source technologies and technical software testing. He earned a Bachelors degree in Computer Science, with a minor in Music Technology, from the world renowned School of Computer Science at Carnegie Mellon University in Pittsburgh.