Testing mobile apps isn't always easy, and testing the mobile user experience can be particularly difficult. But like virtually any other pursuit, it can be done with the right approach and the right amount of effort.
In this post, we'll look at some of the major hard-to-test mobile user experience challenges, and how you can address them in testing your mobile app.
Nobody wants their app to crash. It's not only frustrating for users and developers alike, but also costly to the business.
In the mobile world, the cost of a crash can be much higher than it is for a desktop app. Unless you're lucky enough to be the developer of an absolutely indispensable app with a locked-in user base, chances are very good that if your app crashes, most users will simply stop using it (or just uninstall it) in favor of a competing (and non-crashing) app.
The conditions that cause a crash, unfortunately, are not always easy to predict. Even if your app behaves well internally, it may crash in response to problems with access to operating system or hardware resources, or other external conditions. It doesn't even matter whether your app is the one causing the basic problem, as long as it's the one that crashes.
The best way to test for crashes and other catastrophic failures is generally by means of a large-scale automated testing platform with a broad range of devices. This will allow you to test under the widest variety of conditions, and anticipate the great majority of potential crash scenarios.
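To make that concrete, here is a minimal, monkey-style stress test sketch for Android. It assumes the AndroidX UIAutomator test library, and the package name com.example.myapp is a hypothetical stand-in; run across a device farm, any unhandled crash during the random-input loop surfaces as a per-device test failure.

```kotlin
// A minimal monkey-style stress test sketch, assuming AndroidX UIAutomator.
// The package name "com.example.myapp" is hypothetical.
import android.content.Intent
import androidx.test.platform.app.InstrumentationRegistry
import androidx.test.uiautomator.UiDevice
import org.junit.Test
import kotlin.random.Random

class CrashStressTest {
    @Test
    fun randomTapStress() {
        val instrumentation = InstrumentationRegistry.getInstrumentation()
        val device = UiDevice.getInstance(instrumentation)

        // Launch the app under test.
        val context = instrumentation.targetContext
        val launch = checkNotNull(
            context.packageManager.getLaunchIntentForPackage("com.example.myapp")
        ) { "app under test is not installed" }
        launch.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
        context.startActivity(launch)

        // Fire random taps; an unhandled crash fails the test on this device.
        repeat(500) {
            device.click(
                Random.nextInt(device.displayWidth),
                Random.nextInt(device.displayHeight)
            )
        }
    }
}
```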
Users don't like slow apps. In fact, they're almost as likely to uninstall a slow app as one that crashes. As with crashes, you can't always anticipate the conditions that will slow your app down, and those conditions may depend on circumstances that you can't control.
Slowdowns, however, may result from problems accessing data rather than system resources. When you're waiting for content, you may not be able to control how quickly (or whether) it will arrive, but you can control how your app deals with slow or missing content.
This means that in addition to the kind of large-scale automated tests that you would use to look for crashes, you need to test your app's behavior when content arrives slowly or is unavailable. It should always appear to the user that the app is doing something (loading the framework of a page, etc.), and the app should always display a simple, informative dialog box in the case of a timeout.
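What that looks like in code depends on your stack. As one hedged example, here is a sketch for a Kotlin app using OkHttp; the content URL and the UI hooks for the progress indicator and timeout dialog are hypothetical stand-ins.

```kotlin
// A sketch of slow/missing-content handling, assuming OkHttp 4.
// The URL and UI hooks below are hypothetical.
import okhttp3.OkHttpClient
import okhttp3.Request
import java.io.IOException
import java.util.concurrent.TimeUnit

// Hypothetical UI hooks; real code would drive actual views on the main thread.
fun showLoadingIndicator() { println("loading...") }
fun showTimeoutDialog() { println("Content is taking too long to load. Please try again.") }
fun render(content: String) { println("rendered ${content.length} chars") }

// Bounded timeouts: the app never waits on content forever.
val client = OkHttpClient.Builder()
    .connectTimeout(5, TimeUnit.SECONDS)
    .readTimeout(10, TimeUnit.SECONDS)
    .build()

fun loadContent(url: String) {
    showLoadingIndicator()  // the user always sees that something is happening
    Thread {
        try {
            client.newCall(Request.Builder().url(url).build()).execute().use { response ->
                render(response.body?.string().orEmpty())
            }
        } catch (e: IOException) {
            showTimeoutDialog()  // timeouts surface as IOExceptions
        }
    }.start()
}
```

A test can then simulate a throttled or dead network (for example, with a proxy or an emulator's network controls) and assert that the user sees an informative dialog rather than a spinner that never ends.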
Your app shouldn't draw any more power than it needs to. Chances are you've designed it to be at least reasonably good at not draining the battery under ordinary circumstances. But what happens when the user or the system does something you hadn't anticipated? Are there circumstances, for example, in which a process that should run only briefly will keep running in the background? Are there times when your app may download unnecessary data, or hold on to high-demand system resources when it doesn't need them?
The same kinds of large-scale automated tests that can be used for crashes and slowdowns are also good for detecting conditions that may drain the battery. Be sure to include power consumption issues in your test results analytics.
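On the development side, one cheap safeguard against the runaway-process scenario above is to give supposedly short-lived work an explicit deadline. The sketch below assumes kotlinx.coroutines; syncComplete() is a hypothetical status check.

```kotlin
// A sketch of capping "short" background work, assuming kotlinx.coroutines.
import kotlinx.coroutines.delay
import kotlinx.coroutines.isActive
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.withTimeoutOrNull

// Hypothetical status check for the background sync.
fun syncComplete(): Boolean = false

suspend fun boundedSync() {
    // withTimeoutOrNull cancels the block and returns null at the deadline,
    // so the task cannot quietly keep running (and draining the battery).
    val finished = withTimeoutOrNull(30_000) {
        while (isActive && !syncComplete()) {
            delay(500)  // poll cooperatively so cancellation can land
        }
        true
    }
    if (finished == null) {
        println("sync exceeded its 30 s budget and was cancelled")
    }
}

fun main() = runBlocking { boundedSync() }
```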
The majority of user experience problems are the result of bad (or at least not well thought-out) design. Unfortunately, these are often the problems that are the hardest to detect. They don't crash the app, and they don't cause any obvious functional problems. They simply make what should be simple actions difficult, annoy users, and make it more likely that they will switch to a competing app. For the most part, design problems fall into a few basic areas:
Users need to be able to navigate through your app easily. It should use standard navigation conventions whenever possible. If you need to provide a unique navigation method, it should not conflict with existing standards.
Flow of use is as important as navigation. Flow depends on the main purpose of your app. How quickly and how easily can users get to key functions? They shouldn't have to take an indirect route or back out of one function to get to another (unless it's absolutely necessary).
Good flow also means that you shouldn't stop users and ask them for things such as authorization when you don't need to. (The time to ask them is when you know that you'll need the authorization, not simply when they start the app, or when they take an action which may or may not eventually require authorization.)
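On Android, for example, the AndroidX Activity Result API makes this point-of-use pattern straightforward. In the sketch below, the camera permission and the photo-capture screen are purely illustrative; the request fires only when the user actually taps the camera feature.

```kotlin
// A sketch of requesting a permission at the point of use, assuming the
// AndroidX Activity Result API. The screen and callbacks are hypothetical.
import android.Manifest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class CaptureActivity : AppCompatActivity() {

    // Registered up front, but nothing is shown to the user yet.
    private val requestCamera =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) openCamera() else explainWhyCameraIsNeeded()
        }

    // Called when the user taps the (hypothetical) "take photo" button:
    // the first moment we actually know the permission is needed.
    fun onTakePhotoClicked() {
        requestCamera.launch(Manifest.permission.CAMERA)
    }

    private fun openCamera() { /* hypothetical camera flow */ }
    private fun explainWhyCameraIsNeeded() { /* hypothetical rationale UI */ }
}
```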
Each screen should be uncluttered, and whenever possible, focused on a single purpose. Functional elements should be visually distinct from strictly decorative elements. Screens should not present the user with more functions, information, or decisions than necessary, and users shouldn't have to decipher one screen to get to the next one (unless it's a game, and you're giving them puzzles).
Buttons should be of adequate size, with enough distance between them that the user is unlikely to press the wrong one by accident. If you find yourself crowding too many buttons onto one screen, it's probably a sign that you should reorganize your app's layout.
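Some of this can be checked mechanically. The sketch below, assuming Espresso and the commonly cited 48dp minimum touch-target guideline, defines a reusable size assertion; the view ID in the test is hypothetical.

```kotlin
// A sketch of an automated touch-target check, assuming Espresso.
// 48dp is a widely cited minimum; the view ID below is hypothetical.
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.ViewAssertion
import androidx.test.espresso.matcher.ViewMatchers.withId
import org.junit.Test

fun hasMinTouchTarget(minDp: Int) = ViewAssertion { view, noViewFoundException ->
    if (noViewFoundException != null) throw noViewFoundException
    val minPx = (minDp * view.resources.displayMetrics.density).toInt()
    if (view.width < minPx || view.height < minPx) {
        throw AssertionError(
            "Touch target is ${view.width}x${view.height}px; minimum is ${minPx}x${minPx}px"
        )
    }
}

class ButtonSizeTest {
    @Test
    fun buyButtonIsTappable() {
        onView(withId(R.id.buy_button)).check(hasMinTouchTarget(48))  // hypothetical ID
    }
}
```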
Text needs to be of adequate size and contrast, and in an easily readable font. Text input on most mobile devices isn't that easy for many users, so you should require them to enter as little text as possible, or avoid text input altogether.
How do you test for design problems such as these? Ultimately, you will need to test with a focus group of hands-on users who can give you honest and informed feedback. By recording their actions and listening to what they have to say, you can fine-tune your design for optimum flow and ease of use.
You can, however, use automated testing to pick up a considerable number of design problems early on, by logging metrics such as the time required to perform basic tasks and the number of failed attempts, and by examining screen recordings of automated user interface tests. A well-planned automated testing regime may allow you to detect even quite subtle user interface problems before you move on to (generally more costly and time-consuming) live user tests.
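As a minimal sketch of that kind of instrumentation, the Espresso test below times a hypothetical three-step checkout flow and writes the duration to the log, where your test analytics can track it across builds and devices.

```kotlin
// A sketch of logging task-completion time in an automated UI test,
// assuming Espresso. The checkout flow and view IDs are hypothetical.
import android.util.Log
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.matcher.ViewMatchers.withId
import org.junit.Test

class CheckoutTimingTest {
    @Test
    fun timeCheckoutFlow() {
        val start = System.currentTimeMillis()

        // Hypothetical three-step flow to a key function.
        onView(withId(R.id.cart_button)).perform(click())
        onView(withId(R.id.checkout_button)).perform(click())
        onView(withId(R.id.confirm_button)).perform(click())

        // One data point per run; trend it across builds to catch regressions.
        Log.i("UXTiming", "checkout took ${System.currentTimeMillis() - start} ms")
    }
}
```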
Michael Churchman started as a scriptwriter, editor, and producer during the anything-goes early years of the game industry. He spent much of the 90s in the high-pressure bundled software industry, where the move from waterfall to faster release was well under way, and near-continuous release cycles and automated deployment were already de facto standards. During that time he developed a semi-automated system for managing localization in over fifteen languages. For the past ten years, he has been involved in the analysis of software development processes and related engineering management issues. He is a regular Fixate.io contributor.