Posted September 14, 2016

Effectively Managing Appium-Based Test Automation Projects

Managing development projects is always a challenge, but how are test automation projects different? What are the things to keep in mind, the obstacles to avoid and the tools to take advantage of? In this article, we are going to share our experience with you.

Before we jump into the details, let us say there is one mean beast you will have to deal with during an Appium-based test automation project, and that is maintenance overhead. The final purpose of a lot of the things you will be doing throughout the project will be to keep this overhead down to a manageable level. But what is maintenance overhead in this context?

Maintenance overhead

To put it simply, maintenance overhead is the sum of all the time and resources you will have to spend on adapting or rewriting parts of your test codebase for the duration of the project.

It is the single factor that, if not properly managed, is most likely to turn a potentially successful Appium automation project into a failure, and one that is very easily overlooked by people with no previous experience in scaling test automation infrastructure. But what influences maintenance overhead, and how? Among the most important factors, we can find:

  • Rapid changes in the application under test: If your tests are failing because one of the UI elements they check against is simply no longer there in the latest app build, they are not useful anymore. Keeping up with an evolving application in an Agile environment does not happen on its own; it requires a well-rounded process in place (the locator sketch after this list shows one way to limit the damage).

  • Testing on multiple platforms: Cross-platform automation is really neat, but it comes at a cost. Unless the Android and iOS versions of the application are perfectly aligned, with near-identical flows and functionality, you will have to increase the complexity of your testing setup to take care of these differences. And, in turn, this means more points where your setup can break.

  • Testing on multiple OS versions: Mobile operating systems evolve over time, so it’s no surprise that if you are testing across a wide range of OS versions, you might have to adjust to some major changes that have occurred over years of development.

  • Testing on multiple devices: Supporting different form factors can definitely increase the amount of time spent fine-tuning your tests. Depending on what you are doing, device performance can come into the picture. Believe it or not, sometimes even unexpected details, such as the device manufacturer, could influence how your tests run.

  • The app itself: Some apps are easy to automate, while others aren’t. Appium excels at targeting standard UI components, but things like maps, custom widgets and graphics (think mobile games) are not really its cup of tea. And sometimes, for the most varied reasons, you might have a hard time testing one or more specific features that looked perfectly standard at first glance.
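Much of the overhead from the first two factors can be contained by keeping locators out of the test logic. Below is a minimal page-object sketch using the Appium Python client; the screen, element IDs and class structure are hypothetical, but the idea is that a renamed element or a platform-specific difference is handled in one place rather than in every test.

```python
# A minimal page-object sketch. The accessibility IDs below are placeholders.

class LoginPage:
    """Wraps the login screen so tests never reference raw locators."""

    # Platform-specific accessibility IDs live together, so an Android/iOS
    # divergence is absorbed here instead of in every test that logs in.
    LOCATORS = {
        "Android": {"username": "username_input",
                    "password": "password_input",
                    "submit": "login_button"},
        "iOS": {"username": "usernameField",
                "password": "passwordField",
                "submit": "loginButton"},
    }

    def __init__(self, driver, platform):
        self.driver = driver
        self.ids = self.LOCATORS[platform]

    def log_in(self, username, password):
        self.driver.find_element_by_accessibility_id(self.ids["username"]).send_keys(username)
        self.driver.find_element_by_accessibility_id(self.ids["password"]).send_keys(password)
        self.driver.find_element_by_accessibility_id(self.ids["submit"]).click()
```

When the UI changes or the platforms drift apart, only this class needs updating, which is exactly the kind of containment that keeps maintenance overhead manageable.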

To make this clear, let’s consider a couple of examples.

Low maintenance overhead

Let’s say you are writing Appium tests for a small client who is developing a mobile application for Android devices. Specifically:

  • You are the only engineer working on this small project, and your contacts are internal to your company and are quick and effective in providing any information you might need;

  • The app is only meant to be run on latest-generation Android smartphones;

  • The application, which is already live in the app store, is evolving very slowly from update to update, usually with only minor changes to its UI, of which you are even notified beforehand;

  • The app is easily automatable: you know the developers personally, and at your request they have been kind enough to provide IDs for each and every element in the application, so everything is nicely accessible without any issues.

This translates to:

  • Quick information exchange with the people who can provide the information you need to keep the automation process rolling smoothly;

  • No cross-platform or form factor related pains;

  • Minimal changes to the application mean tests stay relevant longer and are easier to update;

  • Tests are easy to write and run reliably, as the short sketch below illustrates.
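This is a minimal sketch of what a test against such an automation-friendly build could look like. It assumes a local Appium server and the Appium Python client; the capability values and element IDs are placeholders.

```python
# A single-platform test against a build where every element exposes a
# stable accessibility ID. Paths, device name and IDs are illustrative.

from appium import webdriver

caps = {
    "platformName": "Android",
    "deviceName": "Android Emulator",      # any connected device or emulator
    "app": "/path/to/app-under-test.apk",
}

driver = webdriver.Remote("http://localhost:4723/wd/hub", caps)
try:
    # Stable accessibility IDs mean no fragile XPath queries are needed.
    driver.find_element_by_accessibility_id("search_field").send_keys("appium")
    driver.find_element_by_accessibility_id("search_button").click()
    assert driver.find_element_by_accessibility_id("results_list").is_displayed()
finally:
    driver.quit()
```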

Of course, we would be lying if we told you that this is a real-world example: what we just described only serves to illustrate what we mean by a “low maintenance overhead” situation. Run-of-the-mill test automation projects usually fall somewhere between this kind of situation and the one shown in the following example.

High maintenance overhead

Now, keeping the same metric in mind, consider the following, very different scenario:

  • You are managing a team of five engineers;

  • The customer you are dealing with, a large enterprise, puts you in a situation in which all of your contacts need approval from a supervisor before they can give you any of the things you need;

  • The app is cross-platform and is supposed to work (and to be tested) on all Android and iOS smartphones and tablets released in the last few years;

  • The Android and iOS apps are not aligned, with the app sporting different features and looking quite different on the two platforms at this point.

  • Some of the features you are supposed to test are actually not trivial to automate.

  • The application is at an early stage of development, which means you are getting a new version of the app every week, sometimes with major changes to the UI and flow.

In practical terms, this translates to:

  • Keeping track of the status of the project is a full-time job, and communication with the customer requires both quick decision making and careful planning;

  • The setup will need to handle a number of exceptional cases, and your tests will require a great deal of work to run reliably on a variety of different devices, operating systems, operating system versions and so on;

  • You will need to adapt your tests very frequently, with entire tests needing an almost complete rewrite with every release.

This is the quintessential “high maintenance overhead” situation. A project of this magnitude will require several times the resources of our previous, terribly optimistic example; even just enumerating the device and OS combinations to cover quickly becomes code in its own right, as the sketch below suggests.
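One way to keep such a matrix manageable is to generate the desired capabilities from data rather than hard-coding them in each test. The device names, OS versions and file paths below are illustrative, not a recommended coverage list.

```python
# Generate one desired-capabilities dict per platform/device/OS combination,
# so adding a device means adding a line of data, not another copy of a test.

import itertools

ANDROID_DEVICES = ["Samsung Galaxy S6", "Nexus 5X"]
ANDROID_VERSIONS = ["5.1", "6.0", "7.0"]
IOS_DEVICES = ["iPhone 5S", "iPad Air 2"]
IOS_VERSIONS = ["8.4", "9.3"]


def capability_matrix():
    """Yield a capabilities dict for every combination to be covered."""
    for device, version in itertools.product(ANDROID_DEVICES, ANDROID_VERSIONS):
        yield {"platformName": "Android", "deviceName": device,
               "platformVersion": version, "app": "/path/to/app.apk"}
    for device, version in itertools.product(IOS_DEVICES, IOS_VERSIONS):
        yield {"platformName": "iOS", "deviceName": device,
               "platformVersion": version, "app": "/path/to/app.ipa"}
```

A test runner can then loop over capability_matrix() and execute the same test logic against every combination, which is considerably cheaper to maintain than duplicating tests per device.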

Five Questions to Ask Yourself When Evaluating the Complexity of a Test Automation Project

What we just said highlights the importance of understanding the complexity of the project from the very beginning (be aware that this is also considered one of the toughest tasks in the field).

Do not expect everything to be crystal clear from day one, but try to answer these fundamental questions early on:

  1. How many test cases need to be automated on how many platforms? On how many different OS versions?

  2. What will the average test case look like? What will its complexity be?

  3. What does the application look like? Is it automation-friendly? Are its features easily testable with automated tools?

  4. How often will you get a new testable version of the application? How quickly is the application changing?

  5. What kind of device coverage will you need to aim for?

Remember that, while we are diving into the issues specific to test automation projects, you will also be facing the more typical problems that are encountered when tackling software projects, such as communication mishaps, people problems, duplication of effort and so on. These will also contribute towards the overall complexity of the project, so don’t forget to factor them into your estimates and considerations.

Setting up the process

Anyone who has ever led any sort of timeboxed project knows how important it is to start on the right foot. The first few days before your project kicks off might very well be the most important of them all. The time invested early on in coming up with a clear strategy for taking on the project might be the investment that brings the biggest payoff. Among other things, you should definitely have a clear idea about:

  • The overall workflow your team will follow to implement the tests: how are you going to ensure team-wide collaboration that works in synergy with the rest of your setup? (Git Flow, as well as its variations, could be one example of this);

  • What kind of tools you will need to put in place to allow frictionless communication and collaboration with your teammates (and possibly with your customer);

  • How you are going to keep track of the status of the tests across multiple devices and platforms;

  • How you are going to make sure switching to a more recent application version won’t make you lose track of the current project status;

  • How you are going to ensure your tests run reliably (explicit waits, sketched after this list, are one concrete example);

  • What sort of guidelines you will have in place for writing code / reporting issues / approaching automation challenges;

  • How you will manage to exchange feedback with your teammates in a timely manner;

  • How you will monitor the overall project status over time;

  • How you will distribute responsibilities between your teammates and encourage autonomy.
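As an example of the kind of guideline worth agreeing on early, here is a minimal sketch of an explicit-wait helper. It uses the Selenium support classes bundled with the Appium Python client; the helper name and the accessibility ID in the usage note are placeholders. Replacing fixed sleeps with explicit waits is one of the simplest ways to make tests behave consistently across faster and slower devices.

```python
# Guideline example: never rely on time.sleep(); wait for a condition instead.

from appium.webdriver.common.mobileby import MobileBy
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


def wait_for_element(driver, accessibility_id, timeout=30):
    """Block until the element is visible, or fail with a clear timeout error."""
    return WebDriverWait(driver, timeout).until(
        EC.visibility_of_element_located((MobileBy.ACCESSIBILITY_ID, accessibility_id))
    )


# Usage inside a test:
#     wait_for_element(driver, "results_list").click()
# Tests stay fast when the app responds quickly, and tolerant when a slower
# device or OS version needs a few extra seconds.
```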

At TestObject, we routinely work on kickstarting our enterprise customers’ test automation infrastructure, leveraging our deep knowledge of Appium and mobile device testing to quickly automate a large number of test cases on one or more platforms.
