Simulated or real-device testing: Which one is best for your software?
Here's a hint: the answer isn't the same for everybody. It depends largely on the platform, the type of tests, and the nature of your software. But before we go into details, let's take a quick look at the terminology, which can be inconsistent (if not outright confusing) at times.
There are three terms that you're likely to hear frequently with reference to simulation and testing: "simulator," "emulator," and "real device."
What are the differences?
This one's easy. A real device is exactly what you'd expect it to be—the actual hardware (plus OS and built-in support resources) on which your software will run in production. For mobile software, it's the mobile phone or tablet. For specialized industrial, scientific, or medical monitoring software, it's the actual monitoring device.
An emulator is a virtual device that is designed to be a precise digital replica of the actual device in question. It duplicates the internal operation of the hardware, allowing the OS, software, and support resources to function exactly as if they were running on the real device. Ideally, this means that it will include accurate emulations not only of processors and ROMs, but also of such things as memory and I/O hardware.
It's worth noting that most programs for testing Android devices are actual emulators. As noted in the next section, that makes them different from iOS Simulator.
A simulator superficially resembles an emulator, but it doesn't attempt to duplicate the actual operation of the device's hardware. Instead, it duplicates the device's outward behavior, running the OS and simulating I/O, for example, with little or no attempt to duplicate internal hardware-based processes. In effect, it treats the device as a black box, with known external behavior, but with no reference to internal behavior.
Probably the best known simulator is iOS Simulator, Apple’s official testing tool. Unlike most Android testing tools, iOS Simulator just simulates the iOS software environment. It does not attempt to emulate hardware.
Note, however, that even with real devices, input/environmental data may still be simulated. In many cases (manual user input, WiFi access, etc.), accurate real-world input may be easy to simulate if and when it becomes necessary to do so. In other cases, however (particularly with specialized monitoring devices), considerable care and attention to detail may be required in order to simulate realistic input and operating conditions.
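For a specialized monitoring device, simulating realistic input usually means more than feeding in constants. Here is a minimal sketch of one common approach: a seeded generator that combines a slow drift with random noise. The heart-rate scenario, parameter names, and specific values are all hypothetical, chosen only to illustrate the idea.

```python
import math
import random

def simulated_sensor_feed(samples, base=72.0, drift=0.05, noise=1.5, seed=42):
    """Generate plausible heart-rate readings (bpm) for a hypothetical
    monitoring device: a slow sinusoidal drift plus Gaussian noise."""
    rng = random.Random(seed)  # seeded, so test runs are reproducible
    for i in range(samples):
        value = base + 10 * math.sin(drift * i) + rng.gauss(0, noise)
        yield round(value, 1)

readings = list(simulated_sensor_feed(100))
```

Seeding the generator matters: a test that fails against simulated input is only useful if you can replay exactly the same input while debugging.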
At this point, it's important to take a look at the differences between an emulator and a simulator, because those differences can be significant.
What an emulator provides is a reasonably accurate virtual representation of the hardware's actual operation, including such (at least partially) hardware-dependent features as threading, stack operation, and caches. This allows it to uncover many potential low-level problems involving timing, conflicting priorities, and other logical bottlenecks.
A simulator, on the other hand, is more like a functional mockup. It allows you to test the user interface and high-level functionality, but gives you very little information about how the software is likely to interact with the real device.
In general, a simulator may be appropriate for testing during the early phases of development, particularly those involving the GUI and basic program logic. It is only of very limited (if any) use, however, when it comes to testing beyond the early-development stage.
An emulator, on the other hand, can be used both during early development and during at least some of the later phases of development; it will allow you to uncover and fix many of the hardware-based problems which a simulator is unlikely to detect.
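The timing and priority conflicts mentioned above often come down to how concurrent code behaves under a real scheduler. As a generic illustration (not tied to any particular platform), here is the kind of timing-sensitive logic whose failures depend on thread interleaving; the lock makes the result deterministic, and removing it is exactly the sort of bug that may surface only under realistic scheduling.

```python
import threading

class Counter:
    """A shared counter whose increments are guarded by a lock, so the
    final value is correct no matter how the scheduler interleaves threads."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self, times):
        for _ in range(times):
            with self._lock:     # remove this lock, and the final count
                self.value += 1  # can silently come up short under load

counter = Counter()
threads = [threading.Thread(target=counter.increment, args=(10_000,))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock in place, counter.value is exactly 80_000.
```

A simulator that never exercises realistic interleaving may let the unlocked version pass every test; an emulator (or a real device) is far more likely to expose it.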
For all of the differences between emulators and simulators, they have one basic feature in common: they are virtual rather than real-world test environments. The most important distinction, when it comes to software for mobile or specialized devices, is the difference between such virtual-environment testing and real-device testing.
The bottom line is that if your software is designed to run on a real device (rather than, for example, in a fully virtualized, cloud-based environment), there will come a point, sooner rather than later, where you will need to test it thoroughly on that device.
If an emulator faithfully duplicates actual hardware, why do you need real-device testing?
No emulation can be counted on to precisely duplicate the functioning of real-world electronics. Slight variations in timing, response to changes in temperature or voltage, and unanticipated types of behavior under load can all affect software performance.
When you take into account the differences between hardware specifications and their real-world implementation in silicon, as well as the allowable performance range for individual ICs (which may be quite wide in the case of generic/low-end devices), the probability that major performance issues or serious bugs may show up only during real-device testing is significant.
The physical environment within a device consists of more than just processor and memory ICs, of course. Does the software place excessive demands on the battery? How well does it work with external and internal storage? How does it handle slow read or write speeds? Does it hang or fail gracefully if it cannot read or write crucial data, or does not get a response from an on-board component?
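"Failing gracefully" when storage misbehaves is a concrete, testable property. Here is a minimal sketch, in generic Python rather than any platform API, of one way to handle it: write atomically, and queue the data instead of crashing when the write is rejected. The function name, paths, and payload are hypothetical.

```python
import json
import tempfile
from pathlib import Path

def save_state(state, path):
    """Write application state atomically; on failure, queue the state
    for a later retry instead of crashing."""
    pending = []
    try:
        tmp = Path(path).with_suffix(".tmp")
        tmp.write_text(json.dumps(state))
        tmp.replace(path)       # atomic rename: readers never see a partial file
        return True, pending
    except OSError:
        pending.append(state)   # degrade gracefully; retry later
        return False, pending

# Healthy storage: the write lands and nothing is queued.
ok, pending = save_state({"battery": 0.83}, Path(tempfile.mkdtemp()) / "state.json")

# Failing storage (a hypothetical unwritable path): degrade, don't crash.
ok2, pending2 = save_state({"battery": 0.83},
                           Path("/nonexistent_dir_f8d3/state.json"))
```

Real-device testing is where you find out whether this fallback path actually runs: slow flash, full storage, and flaky on-board components are hard to provoke in a virtual environment.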
A physical device operates in a physical environment. How does the software handle problems with WiFi connectivity? How does it deal with real-time interruptions from messaging or voice calls? How does it interact with other applications which make demands on system resources? What does it do when other applications are placing high-priority demands on the processor, memory, or storage—or on all three at the same time?
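Connectivity problems in particular reward a defensive pattern: retry with exponential backoff, and only surface the failure when retries are exhausted. The sketch below uses a stand-in object to simulate a flaky connection; the class, delays, and "payload" result are all illustrative, not any real network API.

```python
import time

def fetch_with_retry(request_fn, attempts=4, base_delay=0.01):
    """Call request_fn, retrying with exponential backoff whenever the
    (possibly flaky) connection raises ConnectionError."""
    for attempt in range(attempts):
        try:
            return request_fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise                                # out of retries: surface it
            time.sleep(base_delay * (2 ** attempt))  # back off: 10ms, 20ms, 40ms...

# A stand-in for a real network call that drops the first two requests,
# roughly the way WiFi can behave when a device moves between access points.
class FlakyEndpoint:
    def __init__(self, failures=2):
        self.failures = failures
        self.calls = 0
    def __call__(self):
        self.calls += 1
        if self.calls <= self.failures:
            raise ConnectionError("simulated WiFi drop")
        return "payload"

endpoint = FlakyEndpoint()
result = fetch_with_retry(endpoint)
```

The stand-in makes the retry logic unit-testable, but only a real device tells you how the code behaves when the radio itself is the thing failing.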
The user's experience of a mobile or specialized-device application cannot be fully separated from the device itself; comprehensive testing of user experience requires real-device tests. How does the application look on the device's display? Are there resolution problems on some devices? How does it interact with the device's controls? How does it handle voice input, and audio output? What is the actual speed of response?
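Some of the resolution questions above can at least be smoke-tested in code before real-device runs, by checking layout math against a matrix of screen sizes. The sketch below uses hypothetical device widths and clamp values; it verifies the scaling logic, not how the result actually looks on glass.

```python
def scaled_font_px(device_width_px, base_px=16, reference_width=375):
    """Scale a base font size to the device width, clamped so text stays
    legible on small screens and doesn't balloon on tablets."""
    scaled = base_px * device_width_px / reference_width
    return max(12, min(24, round(scaled)))

# Hypothetical device widths standing in for a real-device test matrix.
DEVICE_WIDTHS = {"small-phone": 320, "phone": 375, "large-phone": 428, "tablet": 834}
sizes = {name: scaled_font_px(w) for name, w in DEVICE_WIDTHS.items()}
```

A check like this catches arithmetic mistakes cheaply, but whether the clamped sizes are actually readable on a given display is exactly the kind of question only a real device answers.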
Even if your application's interaction with hardware features is minimal and generic, real-device testing will allow you to detect unanticipated hardware-based problems prior to release. In many cases (if not most), where hardware-specific features are not an issue, the most efficient use of testing resources may be to concentrate on emulated/simulated testing during all but the later stages of development, then switch to real-device testing during the beta phase.
How much of this testing needs to be done in-house? Perhaps very little, and depending on the type of device, perhaps none at all. A cloud-based testing service that provides emulator/simulator and real-device testing lets you move smoothly from one level of testing to another, and eliminates or reduces much of your setup time and labor.
A good online testing service will typically also provide a comprehensive suite of resources for monitoring and analysis, further reducing your testing setup time and overhead.
The actual balance between emulated/simulated and real-device testing will vary, of course, depending on the nature of your application. But the resources are available, and they can be scaled to your schedule and budget.
Michael Churchman started as a scriptwriter, editor, and producer during the anything-goes early years of the game industry. He spent much of the ‘90s in the high-pressure bundled software industry, where the move from waterfall to faster release was well under way, and near-continuous release cycles and automated deployment were already de facto standards. During that time he developed a semi-automated system for managing localization in over fifteen languages. For the past ten years, he has been involved in the analysis of software development processes and related engineering management issues.