Android is, far and away, the world’s most popular mobile operating system. Its global market share across mobile devices of all types stands at about 72 percent, as of 2021. Even on tablets, where Android was historically far behind iOS in terms of market share, Android now powers more than 40 percent of devices. And although Android’s market share in the U.S. is lower than it is globally, nearly half of North American mobile devices run on Android, according to 2021 data.
If you’re new to Android mobile testing, creating a testing routine may seem easier said than done. The wide variety of Android testing frameworks, languages, tools, and methods can be intimidating to navigate.
That’s why we’ve created this page: to help practitioners wrap their heads around Android testing tools, strategies, and best practices. Keep reading for actionable Android testing advice targeted at app developers and QA specialists.
Android testing can be broken down into several categories. Each type of test evaluates a different facet of your app. The point during the development lifecycle when you perform a test also varies depending on which type of test you are running.
Still, in most cases, you’ll want to perform multiple types of Android tests. Rather than thinking of the different tests described here as alternatives to each other, think of them as multiple techniques that work in different ways to help you achieve your ultimate goal: releasing an app that delights your users.
Manual testing is the most well-known kind of testing, and often highly valuable. It involves manually putting a new build of the application on a physical device and running the software, end-to-end, against real servers.
While some tests will always need to be run manually, manual testing is typically not a practical way to test how your app runs across the thousands of Android devices in existence. You’d have to load and test your app manually on each device, and that’s just not feasible in the Android market.
Compared to Apple, which produces a relatively small selection of mobile devices, the Android market includes a much broader variety of operating systems, screen sizes, hardware specifications, and backward compatibility requirements.
Compatibility testing ensures that your app works smoothly across these various devices and device configurations. It is the process of taking an application that has passed testing on one configuration and verifying that it behaves correctly on another, slightly different configuration.
Compatibility testing is a useful way to test app functionality across a limited range of Android configurations – such as those that are most popular among your particular user base. However, because compatibility testing requires manual application deployment and evaluation, it doesn’t scale. Dan Nosowitz’s well-known Android fragmentation visualization shows why: even if you tested on the top 12 most popular Android configurations, you wouldn’t cover even 25 percent of the total Android landscape.
Compatibility testing is also difficult because it requires you to have multiple physical devices on hand for running tests. Most developers don’t keep collections of Android phones and tablets in the closet to run tests. That said, one way to perform effective compatibility testing is to use simulators, emulators, or real devices in the cloud, a service that Sauce Labs can provide. Another is to rotate through cloud-based devices while performing test automation.
If you need to run Android tests at scale, automated testing is a must. Automated testing lets you evaluate how an application works under different configurations using an automated framework. In other words, instead of having to set up compatibility tests by hand, you can write tests that evaluate parameters you define, then execute the tests automatically. This makes it possible to test across hundreds or thousands of different configurations easily.
There are three main open source test frameworks that streamline automated Android tests: Selenium (for mobile web), Appium (for native Android), and Espresso. These frameworks treat the application like an object: the programmer can click on elements, read the text on the screen, and so on.
Sauce Labs provides virtual and real devices to run these tests on, with no device lab required. So, by pairing Sauce Labs with an automated testing framework, you can efficiently run a large number of tests across virtually any type of Android environment configuration.
Appium, Espresso, and Selenium support two main types of automated tests: user interface (end-to-end) tests and unit tests.
User interface tests mirror the way manual tests work: they take the perspective of a user and programmatically navigate the app’s various user paths. They operate at the Graphical User Interface (GUI) level, which means they exercise the full stack of the application, from the user interface to the front-end business logic to any kind of server interaction. They can therefore find errors at any level of the technology stack.
On the other hand, these tests are relatively slow. A full suite of Android tests might involve two thousand user interface actions. At two seconds per click, that will take over an hour on one device.
To help alleviate this issue, Sauce Labs provides a grid product, making it possible to run tests in parallel. A set of tests for 16 devices, which might take about an hour without a test grid, can run in just four minutes with a grid. Due to the delays and maintenance efforts, end-to-end tests are usually reserved for testing the most crucial parts of the user experience. Selenium and Appium run in a dozen different programming languages; Espresso is limited to Kotlin or Java.
Generally written in the same programming language as the production code, unit tests isolate the individual methods of the code to exercise small, discrete bits of application logic. For example, if an internal app function is responsible for converting something the user enters into a different type of string, unit tests would pair sample inputs with the expected results.
Because unit tests have access to app code and can directly target the smallest bits of logic in your application, they are blazingly fast. Hundreds or thousands of unit tests can run in seconds. Unit tests do not test interactions between app components, however, and thus they are no substitute for end-to-end tests.
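A unit test of that kind is short enough to show in full. In a real Android project this would typically be a JUnit test written in Kotlin or Java; the Python sketch below, with a hypothetical `normalize_phone` helper, just illustrates the input/expected-output pattern:

```python
# Hypothetical helper: convert a user-entered phone number into a
# normalized digits-only string.
def normalize_phone(raw: str) -> str:
    """Strip everything but digits, e.g. '(555) 123-4567' -> '5551234567'."""
    return "".join(ch for ch in raw if ch.isdigit())

def test_normalize_phone():
    # Each assertion pairs a sample input with its expected result.
    assert normalize_phone("(555) 123-4567") == "5551234567"
    assert normalize_phone("555.123.4567") == "5551234567"
    assert normalize_phone("") == ""

test_normalize_phone()  # hundreds of checks like these run in seconds
```

Because nothing here touches a device, a network, or even the UI layer, thousands of such tests can run in the time a single end-to-end test takes.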
Once tests are automated, they can run on every build immediately after the build finishes, finding problems introduced by that change. Continuous Integration (CI) systems can run tests to validate every change, with an awareness of who made the change and what it was, making debugging and fixing (or maintaining the tests) a snap.
Using Continuous Integration tests, small, isolated changes can be verified against the entire regression suite quickly and rolled out without a large human regression test effort. Then, if something does go wrong, it will be easier to undo the small change. A variety of CI systems, such as Jenkins and Travis CI, automate pulling new code, building it, and exercising it with the various types of tests we discussed. Sauce Labs supports integrations with many CI servers.
Testing with one user is unlikely to simulate the load that may happen with many users in production. That’s why you may want to run performance tests, which evaluate how your application behaves under stress.
Performance testing evaluates how quickly the app responds under realistic conditions, and it includes load testing, which simulates many simultaneous users. Both primarily stress the server, but client-side page rendering can also be important on mobile devices. Soak testing involves running the system for an extended period of time to uncover issues like memory leaks, and it can target either the client or the server.
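The core of a load test is simply many concurrent "users" hitting the system at once while you record outcomes. The sketch below simulates this with a stub request function; in practice, `fake_request` would be a real HTTP call to your backend (names and the 50-user figure are illustrative, not prescriptive):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for a real server call; sleeps briefly and returns a status code."""
    time.sleep(0.01)
    return 200

def load_test(users: int) -> dict:
    """Fire `users` concurrent requests and summarize success count and elapsed time."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=users) as pool:
        statuses = list(pool.map(lambda _: fake_request(), range(users)))
    return {
        "ok": statuses.count(200),
        "total": users,
        "elapsed_s": round(time.time() - start, 2),
    }

result = load_test(50)
print(result["ok"], "/", result["total"], "requests succeeded")
```

A soak test follows the same shape but runs for hours rather than seconds, watching for memory growth or degrading response times instead of a single pass/fail count.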
The two primary methods to perform security testing are automated scanning of source code and penetration testing. Because security tests require different types of expertise and focus on different aspects of an application, they are usually written separately from other types of Android tests.
What’s essential to remember, however, is that any Android app – like any app in general – can be subject to a variety of security vulnerabilities, so it’s critical to invest just as much in security testing as you do in automated compatibility and performance testing.
As you develop a strategy for testing your Android app, you’ll need to answer questions such as:
Which types of Android testing do you need? As noted above, most teams will want to use multiple types of tests – such as automated compatibility testing and performance testing – simultaneously. But very small-scale projects may be able to get away with just manual tests.
Which test frameworks will you use? Selenium, Appium, and Espresso are the gold standards for Android testing. But you may choose to use other test libraries depending on developer preferences or the specific nature of your app. For example, applications written in less common programming languages may benefit from testing with a framework designed for those languages.
Who will write your tests? If you have QA engineers on staff, they typically take the lead in writing automated tests. Developers may also be able to assist. And don’t forget to loop in designers and possibly even CRM leads, since these stakeholders may also have advice to give about what to test.
Which devices and configurations will you test on? It’s not practical to test across every possible type of Android device configuration. Identify the most common device types and software configurations used by your customers, and prioritize testing there.
Where will you run tests? If you have just a few tests to run and just a few devices to test on, you may be able to run tests on devices you own. For testing across many devices with ease, however, we recommend the Sauce Labs test cloud.
Will you test in parallel? Parallel testing on a test grid cuts total run time roughly in proportion to the number of parallel sessions. The only real drawback is that testing in parallel requires a bit more coordination, because you need to be able to deploy multiple tests at once.
When you can answer each of these questions, you can devise an Android testing routine that aligns with the type of app you need to test, the types of environments your users run, and the aspects of the app that are most important to test.
There are several steps you can take to make Android testing as fast and efficient as possible:
Automate, automate, automate: While some amount of manual testing is unavoidable, you should strive to automate Android tests where possible. This may seem obvious, but some teams fall into the trap of performing too many manual tests because the initial investment required to write automated tests feels too intimidating. The reality is that, once your automated tests are written, you’ll dramatically increase both the speed at which you can test and the quality of your apps – so a little upfront effort is well worth it.
Find the “sweet spot” for the number of tests: Again, you can’t expect to test every kind of configuration. But you also don’t want to test too few. You need to figure out what the sweet spot is in this regard. Consider your app and user base, and think strategically about how many configurations you can reasonably test for. If you can cover at least 80 percent of all configurations in your tests, you are doing pretty well. 90 percent is even better, and 95 percent is exceptional.
Use emulators sparingly: Emulators let you run tests in virtual software environments rather than on real mobile hardware. Although emulators can simplify some types of tests, and are useful earlier in the testing process, it’s a best practice to ensure that you run tests on real hardware devices before deploying software to your users.
Test every time code changes: When a test fails, you can fix the issue much faster if you know which code commit triggered it. That’s why continuously testing every time your code changes is a best practice: it improves app quality and reduces developer effort.
Make testing a collective effort: Testing shouldn’t be the province of QA engineers alone. Developers, designers, end-user support teams, and more should all be looped into the process. Even if they don’t run tests day-to-day, these other stakeholders can provide guidance about what to test for, as well as which devices and configurations to prioritize.
Lean toward open source testing: While proprietary test libraries may be useful for certain types of applications, using open source test frameworks like Selenium, Appium, and Espresso is a best practice. It maximizes test flexibility and ensures you won’t be left in the lurch in the event that a vendor stops developing a proprietary test library.