Where to Start with Mobile Automation?
Where do you start with mobile automation?
The approach I see most often is to take a set of test cases, pass them off to a group of automators, and let them work through the stack. Over time, you get some end-to-end tests, some create, read, update, delete ("CRUD") tests that handle data testing, and some amount of feature coverage.
You also get goal displacement.
Instead of helping the delivery team assess the quality of the software and find important problems, the goal becomes working through that stack of test cases. Test automation is reduced to just another software development project.
I suggest a slightly different route, something I call sprint and churn.
The popular way to build mobile UI automation is the sprint lag. In a two-week sprint, the first week might be spent building new software. The second week belongs to testing, bug fixes, and last-minute changes. Any automation that happens during the sprint is usually done by the developers writing new features, and is underneath the user interface. It is common, though not advisable, for the automation team to hang behind by one full sprint. (A skilled reader will point out that is not Scrum, and that is correct. Most organizations compromise their Scrum adoption...but that is a different post.) Once a development sprint is done and the changes are in production, automators finally start building new tests. Because the tooling is created after the code passes initial testing, it is what Matthew Heusser calls "not really test automation as much as change detection." Hopefully, the tests they are building are still relevant when the current sprint is over.
I prefer to make mobile UI automation part of the definition of done for a feature, along with any unit and service-level tests. That means the technical team gets a tiny bit less created this sprint, but it is done-done-done-done-done. Also, writing the automated tests can find bugs that can be fixed during the sprint, bringing the creation of the tooling into the feedback loop of development. In practice, that would look something like this:
Developers write unit tests along with their code and pair with someone in a testing role to build service-level tests. Once there is a user interface, testing and development are more intermingled. A tester might begin building a test that loads the app, navigates to a page, and starts submitting data, and then discover that a field doesn't take floating-point values when they run the test in their private mobile device cloud. At that point, they have to open the software on a real device, explore that page, and ask questions: will the user need to enter decimal values, and should the software accept them? The interesting thing about UI automation is that it is very difficult to build when the user interface doesn't work. Building these tests becomes an exercise in exploratory testing. The tester then jumps back and forth between creating tooling and exploring the software by hand.
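The floating-point discovery above is the kind of question a UI test makes concrete. Here is a minimal sketch in plain Python with the Appium plumbing elided: `submit_quantity` is a hypothetical stand-in for whatever the app does with the field's raw text, not real app code.

```python
# A sketch of the open question the tester found: does the quantity
# field accept decimal values? `submit_quantity` is hypothetical --
# it models a strict integer-only field, with the real UI calls elided.

def submit_quantity(raw_text):
    """Parse the field's text the way an integer-only field would."""
    try:
        return int(raw_text)  # "2.5" raises ValueError here
    except ValueError:
        return None           # the app would show a validation error instead

assert submit_quantity("3") == 3       # whole numbers pass
assert submit_quantity("2.5") is None  # the behavior the tester has to question
```

Whichever way the team answers the decimal question, the assertion gets updated, and the test becomes documentation of the decision.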
When feature development is done and the change is ready to go to production, the change has been explored and there are a couple of new mobile UI tests running in the device cloud when needed: fewer tests, but more powerful ones. There is also no lag between development and tooling.
Hunting the churn is a mobile automation strategy designed to seek out risk.
Take a look in your code repository. You should be able to find a chart that displays specific repositories, or even files, based on the number of changes they have had over time. Changing code is a form of risk; the more a bit of code changes, the more likely it is that something could go wrong. Create a story for each of the peaks so that exploration can be put into the next sprint. When that story makes it into your queue, pair up with a developer. The actual work will be a combination of exploration, coverage analysis, and potentially updating or building new tests.
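Finding those peaks doesn't require special tooling; the version control history is enough. A sketch of the counting in Python, under one assumption: in a real repository the input would come from `git log --name-only --pretty=format:`, while here `sample_log` is stand-in data so the sketch is self-contained.

```python
# A sketch of "hunting the churn": count how often each file changes.
# In a real repo, feed this the output of:
#   git log --since="3 months ago" --name-only --pretty=format:
# `sample_log` below is stand-in data, not output from a real repository.
from collections import Counter

sample_log = """\
app/checkout.py
app/checkout.py
app/login.py
app/checkout.py
tests/test_login.py
app/login.py
"""

def churn(log_text, top=3):
    """Return the most frequently changed files -- the peaks worth a churn story."""
    files = [line for line in log_text.splitlines() if line.strip()]
    return Counter(files).most_common(top)

print(churn(sample_log))
# [('app/checkout.py', 3), ('app/login.py', 2), ('tests/test_login.py', 1)]
```

The files at the top of that list are the candidates for churn stories; each one gets a card in the next sprint's queue.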
The first step here is to perform exploration in a developer-tester pair. The developer will probably understand the area of the product affected by the change and help guide the focus, while the tester will understand which test ideas to try in order to learn more about the product and potentially find new problems. After this, take a look at the unit tests, API tests, and UI tests that are already running. Find the gaps: are there missing tests? Are there existing tests that could easily be updated?
The goal of a churn story is to discover problems that might be hiding in a risky part of the product, and also to do some future-proofing. When the task is complete, the product is explored, bugs are fixed, and any new automated tests have been checked in and are running.
Instead of looking for a list of end-to-end tests, or CRUD scenarios, or something else, ask yourself what the goal of your mobile automation project is. For most people, the goal is a combination of discovering risk and helping the development team get good software to production. The combination of a sprint and churn strategy will help you get there.
Justin Rohrman has been a professional software tester in various capacities since 2005. He is a consulting software tester and writer working with Excelon Development. Outside of work, he currently serves as President of the Association for Software Testing Board of Directors, helping to facilitate and develop projects like BBST and WHOSE. Justin is also a student in the Miagi-Do school of software testing, and is an occasional facilitator for Weekend Testing Americas.