Is emulator testing right for you? Or should you be relying instead on real-device testing as part of your mobile testing routine?
The answer is that those are the wrong questions to ask, and in most cases, you should be doing both types of testing. Emulator testing and real-device testing both have important roles to play in the software delivery process, but they are typically best used during different stages of that process.
Keep reading for tips on when to test with emulators, when to test with real devices, and when you might even want to test in both types of environments simultaneously.
In software testing—and particularly in mobile software testing—an emulator is a software-defined environment designed to mimic the hardware and software conditions that would exist on an actual mobile device.
In other words, an emulator provides a way to run an application for testing purposes within an environment that (at least in theory) is identical to the real-world environment in which the application will run when it is deployed on an actual mobile device. But because the emulator environment is created using software rather than actual hardware, it is faster and easier to create and to modify. You don’t need to set up and maintain an actual piece of hardware, then load your applications into it to test them; instead, you can simply tell your emulator platform which type of environment you want to create, then fire the environment up and start running your tests.
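To make that concrete, here is a minimal sketch of what "telling your emulator platform which environment you want" can look like with the Android SDK's command-line tools. The `avdmanager` and `emulator` flags below are the real ones; the `EmulatorProfile` helper and the profile values are hypothetical illustrations.

```python
# Sketch: describing an emulated device as data, then generating the
# Android SDK commands that would create and launch it.
# EmulatorProfile is a made-up helper; the CLI flags mirror avdmanager/emulator.
from dataclasses import dataclass

@dataclass
class EmulatorProfile:
    name: str          # AVD name, e.g. "pixel6_api33"
    system_image: str  # SDK package path for the system image
    device: str        # hardware profile to mimic

    def create_cmd(self):
        # Command to create the virtual device definition
        return ["avdmanager", "create", "avd",
                "--name", self.name,
                "--package", self.system_image,
                "--device", self.device]

    def launch_cmd(self, headless=True):
        # Command to boot the emulated device
        cmd = ["emulator", "-avd", self.name]
        if headless:
            cmd += ["-no-window", "-no-audio"]  # typical for CI servers
        return cmd

profile = EmulatorProfile(
    name="pixel6_api33",
    system_image="system-images;android-33;google_apis;x86_64",
    device="pixel_6",
)
print(" ".join(profile.launch_cmd()))
```

Because the environment is just a description, swapping to a different device or OS version is a one-line change rather than a trip to the hardware closet.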
In addition to saving time, emulators make it fast and easy to test applications against a range of different types of mobile environments — which is a key consideration for mobile testing in particular. There are something like 24,000 different types of Android phones out in the world, not to mention dozens of different iPhones and iPads, as well as more obscure types of mobile devices (like Windows Phones, which are still lurking out in the wild despite having been discontinued in 2015). Setting up and managing thousands of different physical devices in order to test your applications would be unrealistic in many cases. But emulators make it easy to create testing environments that mimic all of these devices.
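One way teams get that breadth in practice is to enumerate emulated targets as a matrix of device traits. The sketch below shows the idea; the specific models, API levels, and locales are illustrative placeholders, not a recommended coverage list.

```python
# Sketch: building a device-coverage matrix by combining traits.
# Each combination becomes one emulated configuration to test against.
from itertools import product

models = ["pixel_6", "galaxy_s22", "moto_g_power"]  # illustrative
api_levels = [30, 31, 33]
locales = ["en_US", "de_DE"]

matrix = [
    {"model": m, "api": a, "locale": loc}
    for m, a, loc in product(models, api_levels, locales)
]
print(len(matrix))  # 3 models x 3 API levels x 2 locales = 18 configurations
```

Even this toy matrix yields 18 configurations from three short lists, which hints at how quickly coverage scales when the environments are software-defined rather than physical.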
The caveat to emulator testing is that the environments created by emulators don’t always do a perfect job of mirroring actual mobile devices. The real mobile devices might contain hardware that the emulator cannot represent, for example. Or there might be unforeseen quirks in the interplay between the hardware and software of a real device that cannot be reproduced within an emulated environment. For these reasons, tests run on an emulator might yield different results from those that run on a real device.
(Parenthetically, I should mention simulators, which, like emulators, let you create software-defined mobile testing environments. Some people use the terms emulator and simulator interchangeably, although in reality there is a difference: Simulators are designed only to mimic the software environment of a given mobile device, while emulators mimic both software and, to the extent possible, hardware.)
Because emulators make it easy to test how an application behaves across a range of different mobile environments, they are typically most useful early in the software testing process.
As soon as new code has been built, testing it in emulators is an effective way to determine whether it runs in unexpected ways on any particular types of devices. By running tests on hundreds or thousands of emulated environments in parallel (each representing a different mobile device or configuration), software delivery teams can achieve broad device coverage while still keeping the testing process highly automated and efficient.
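The parallel fan-out described above can be sketched as follows. Here `run_suite` is a stand-in for whatever actually drives your emulator farm (a cloud testing API, a local device lab, and so on); it is stubbed out so the structure is clear.

```python
# Sketch: running one test suite across many emulated configs in parallel.
# run_suite is a hypothetical stub standing in for a real emulator driver.
from concurrent.futures import ThreadPoolExecutor

def run_suite(config):
    # Placeholder: in reality this would boot the emulator described by
    # `config`, install the build, execute the tests, and report pass/fail.
    return {"config": config, "passed": True}

# Illustrative set of emulated targets, one per Android API level
configs = [f"android-{api}" for api in range(26, 34)]

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(run_suite, configs))

failures = [r["config"] for r in results if not r["passed"]]
print(f"{len(results)} configs tested, {len(failures)} failures")
```

The key property is that adding another emulated device is just another entry in `configs`; the wall-clock cost of the run grows far more slowly than the device count.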
Once new code has passed tests on emulators, there is rarely a reason to continue testing it on emulated environments, because the test results are unlikely to change.
The important thing to remember is that just because your app passed emulator testing doesn't mean it's ready to deploy. To achieve the greatest level of confidence that your code will run as required in production, it is wise to perform testing on real devices as well.
Real-device testing typically occurs after all emulator tests have successfully completed, and just before code is deployed into production. Provided your emulator tests covered all the same ground as your real-device tests, and the emulators successfully emulated all aspects of the real devices, real-device tests should yield the same results as emulated tests. But if your emulated environments didn't perfectly reproduce the environments of actual physical devices, or if there were aspects of your application that you couldn't test in an emulated environment due to lack of access to resources such as an actual network or a biometric input device, real-device testing will help to reveal problems that may not have been detected during emulator testing.
So, you can think of real-device testing as a second line of defense. Although it may seem redundant in some cases, it’s necessary to help plug gaps that might crop up within your emulator testing.
The major caveat of real-device testing is that testing on many different types of real devices at once is often impractical. If you test locally (which you probably shouldn't, since local device labs scale poorly and are rarely cost-effective), your ability to obtain and maintain many different mobile devices is likely to be limited. And even in a cloud-based test environment where you can rent access to real devices, real-device testing typically costs more than emulator testing.
Thus, while it's common to test on hundreds of different devices using emulators, you might have the resources to test on only a dozen or so real devices.
For this reason, it's a best practice to be judicious about which real devices you test on. Identify the devices and configurations that are most relevant to your target market and target users, and test only on those. For example, iPhones are more popular in the United States than they are worldwide, so if your end users are mostly American, you might prioritize real-device testing on iPhones more heavily than you would for other markets. Likewise, women are more likely than men to own iPhones, another factor that might inform your real-device testing selection.
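That selection step can be as simple as ranking devices by how common they are among your actual users and taking the top of the list. In the sketch below, the share numbers are made-up placeholders; in practice you would pull them from your own analytics or from market-share data for your target region.

```python
# Sketch: choosing which few real devices to rent, ranked by usage share.
# The device names and share values are illustrative placeholders only.
usage_share = {
    "iPhone 14": 0.21,
    "iPhone 13": 0.17,
    "Galaxy S23": 0.12,
    "Pixel 7": 0.06,
    "Galaxy A14": 0.05,
    "Moto G Play": 0.02,
}

budget = 3  # how many real devices the team can afford to test on
targets = sorted(usage_share, key=usage_share.get, reverse=True)[:budget]
print(targets)
```

With a budget of three devices, this picks the three handsets your users are most likely to be holding, which is usually a better use of a limited real-device budget than an arbitrary sample.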
Although, in general, you would test on emulators first and then test on select real devices just before deployment, in some cases it can make sense to perform real-device testing at the same time as emulator testing. If you are trying to achieve a particularly fast release velocity, running real-device tests earlier in the CI/CD pipeline can save time by eliminating a separate real-device testing stage later in the pipeline.
Likewise, you may find that emulators do a poor job of mimicking certain types of devices or configurations. This tends to be an issue in particular with very new types of devices, or with more obscure devices (like non-iOS and non-Android options) that are not a major focus for emulator developers. In these situations, you might opt to skip emulator testing for the impacted devices and instead go straight to real-device testing.
Emulators and real devices are not an either/or proposition. Testing on both types of environments is typically critical in order to maximize test coverage and minimize the chances of releasing bugs into production. And while, in general, emulator testing comes before real-device testing, in certain situations you might perform both simultaneously.
Chris Tozzi has worked as a journalist and Linux systems administrator. He has particular interests in open source, agile infrastructure and networking. He is Senior Editor of content and a DevOps Analyst at Fixate IO. His latest book, For Fun and Profit: A History of the Free and Open Source Software Revolution, was published in 2017.