*Revised 26 Aug 2020 by Matthew Heusser @ Excelon Development firstname.lastname@example.org*

Building mobile applications is getting easier and easier thanks to toolkits and new programming languages. Testing those applications, on the other hand, is getting harder. The applications are more complex and need to adapt to more form factors, browsers, and operating systems, along with different amounts of bandwidth. Just because the application seems to work for one use case on the newest iPhone does not mean it will appear reasonable on an Android tablet.
Automation is one approach to mobile testing -- get the computer to check a wide variety of devices, over and over again, in order to release more frequently and cover more scenarios.
In the broadest terms, there are two approaches to testing a mobile application. You can have humans use the application under different situations to see how it responds, or you can have software drive the application into interesting places and look for expected results. This second approach is commonly referred to as "test automation" or, perhaps, "automated testing." It might be more accurate to call it "automated test execution and evaluation," or "automated checking," but the earlier terms have taken hold. Both approaches can be valid in different circumstances, and that validity can change over time at different moments in an app's lifecycle. In practice, most organizations use a blend of the two, perhaps pushing some of the human exploration out to customers.
Let’s have a closer look at these two testing practices.
Automated testing is the process in which pre-scripted tests are executed on an app before it is released into production. Automated testing helps you find flaws in your app more quickly; it is well suited to tests that are repetitive, that need to run periodically, and that can catch bugs at an early stage.
Sixteen years after I was first told that human testing would "go away," and nearly a decade after I first heard human explorers called "dinosaurs," manual testing remains the most common testing approach for both mobile and desktop applications. By manual testing, I mean a human actually using the application through the front end. There are a variety of places this manual testing might happen. Most programmers at least simulate an application, if not put it on a phone, before passing it on to someone else to test. Some companies employ testers to go deeper, looking beyond the happy path or into different models of devices. If the software is internal, the company may have the people who will use the software actually perform User Acceptance Testing, which is more focused on "can I do my job with this software." Some companies release the software early to "beta" testers, which might be employees, using a tool such as Microsoft AppCenter. Finally, companies like Applause and Testio exist to take that "beta" version and crowdsource it, providing dozens to thousands of eyeballs to look at the software, in a variety of configurations, over a short period of time.
Regardless of who, how, or when, testing manually gives you a real feel for what it is like to actually use the application. Manual testers can see whether the buttons are in the right position, whether they are big enough, whether they overlap, whether the colors look good together, and so on. Computers turn out to be particularly bad at evaluating whether a picture on a screen "looks right." There are, however, some actions computers can check easily. For example, when you type in a username and password and submit, you should land on a screen that shows your name and confirms you are logged in, while a wrong password should yield a particular error text. The challenge of mobile testing is less which of the two paradigms to accept than how much of each, when, and by whom.
Manual testing can be done with either a real device in your hand or with an emulator/simulator, but real devices will give results which are more similar to what your users will experience.
Manual testing provides feedback on usability and appearance along with functionality. The tester plays the role of a user, trying out everything in the application and doing the typical actions a user would, to see if or when the app crashes. With manual testing you often get feedback on performance, battery drain, or overheating early enough to fix the problems before release. That feedback is often "free," simply because the tester was paying attention.
You could try to manually test every supported device with every supported operating system version. The last time we checked, three years ago, there were over 24,000 different Android devices, and we stopped counting. Realistically, most organizations test with the newest version supported, one release back, and the oldest version supported. Between Android and iOS, tablet and phone, most midsize and larger companies we work with end up with a test lab of 10 to 20 devices.
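The "newest, one back, oldest" heuristic above can be sketched as a small selection function. This is illustrative only, assuming supported OS releases are represented as sortable version numbers; real projects would also weigh market-share data when picking devices.

```python
# Sketch of the version-selection heuristic: from the OS versions an
# app supports, test the newest, one release back, and the oldest.
# Integer version numbers here are a simplifying assumption.

def versions_to_test(supported):
    """Pick newest, one release back, and oldest from supported versions."""
    ordered = sorted(supported)
    picks = [ordered[-1]]              # newest supported
    if len(ordered) > 1:
        picks.append(ordered[-2])      # one release back
    if len(ordered) > 2:
        picks.append(ordered[0])       # oldest supported
    return picks

print(versions_to_test([11, 12, 13, 14, 15]))  # -> [15, 14, 11]
```

Multiply the result by two operating systems and by tablet versus phone, and you arrive at a lab of roughly 10 to 20 devices, as described above.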
When people use the phrase "automated testing," they generally mean having a tool like Selenium or Appium drive the user interface of the application, checking expected results along the way. The checks come from a series of commands and inspection points that can be stored in something like a spreadsheet or, more likely, an actual computer program. Thus the tests are pre-scripted; every test might be a computer program. Each test might click or type a dozen times and have another dozen verification points. Once the tests exist, if the application behavior has not changed, automated testing can find defects very quickly, typically within minutes of a commit to version control. Test automation is best used for tests that are repetitive, that do not require human discernment, that need to run periodically, and that can find bugs in early stages. While the aim of manual testing is to test the "user experience," automated testing aims to test all the functionality that characterizes an app. Automated testing will click the button that "looks wrong" and sits in the wrong place without registering a problem (unless you thought to check for that in advance); what it excels at is finding functional errors, incorrect search results, and so on.
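The "series of commands and inspection points" idea can be made concrete with a small sketch. The row format, the DemoApp stand-in, and the field names below are all invented for illustration; a real suite would send the same kinds of commands to Appium or Selenium rather than to an in-memory fake.

```python
# Pre-scripted checks stored as data, like rows in a spreadsheet:
# each row is a command; "check" rows are the inspection points.

LOGIN_TEST = [
    ("type",  "username", "pat"),
    ("type",  "password", "s3cret"),
    ("tap",   "submit",   None),
    ("check", "message",  "Logged in as pat"),   # inspection point
]

class DemoApp:
    """In-memory stand-in for the application under test (hypothetical)."""
    def __init__(self):
        self.fields = {}
        self.message = ""
    def type(self, field, text):
        self.fields[field] = text
    def tap(self, button):
        if self.fields.get("password") == "s3cret":
            self.message = "Logged in as " + self.fields.get("username", "")
        else:
            self.message = "Wrong username or password"
    def read(self, field):
        return self.message

def run_script(app, rows):
    """Execute each command; collect failures at the inspection points."""
    failures = []
    for action, target, value in rows:
        if action == "type":
            app.type(target, value)
        elif action == "tap":
            app.tap(target)
        elif action == "check":
            actual = app.read(target)
            if value not in actual:
                failures.append((target, value, actual))
    return failures

print(run_script(DemoApp(), LOGIN_TEST))  # -> [] (no failures)
```

Because the script is just data, adding another scenario (say, the wrong-password error text) means adding rows, not rewriting the interpreter.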
As the application grows, the time to test it grows as well. Automated testing brings that time back down, making frequent releases practical. That makes automated testing key to speeding the testing process, decreasing cost, and radically reducing time-to-feedback for major errors from days to minutes. Test automation allows you to:
- Test functionality that is repetitive, and therefore error-prone when checked manually, along with test cases that have a predictable outcome;
- Easily set up and run complicated and tedious test scenarios;
- Most important: test on a larger number of mobile devices simultaneously, saving time. Using simulators, emulators, or a device cloud, you can do this without buying or managing the devices.
How many automated tests you need will vary widely with the application and how big each "test case" is. If test cases are simple DOM-to-database tests that check one logical operation, a typical function might have four to ten tests, and a typical application might have four to ten features. If the application is coded in two different programming languages, one for iOS and one for Android, you may need to double that, or else write an abstraction layer with one set of business scenarios and two implementations that vary by operating system.
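The abstraction-layer idea can be sketched as one business-level scenario with two platform-specific screen objects behind it. The class names, locators, and return values are hypothetical; on a real project each class would wrap an Appium driver configured for its platform.

```python
# One set of business scenarios, two implementations that vary by OS.
# Locator strings below are invented examples of platform conventions.

class AndroidLoginScreen:
    locator = "com.example:id/login_button"   # Android resource-id style
    def sign_in(self, user, password):
        return f"android tapped {self.locator} for {user}"

class IosLoginScreen:
    locator = "loginButton"                   # iOS accessibility-id style
    def sign_in(self, user, password):
        return f"ios tapped {self.locator} for {user}"

def login_scenario(screen, user, password):
    """Business-level scenario: written once, runs on either platform."""
    return screen.sign_in(user, password)

print(login_scenario(AndroidLoginScreen(), "pat", "s3cret"))
print(login_scenario(IosLoginScreen(), "pat", "s3cret"))
```

The scenario layer stays stable while the screen objects absorb the per-platform differences, which is what keeps the test count from doubling.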
Both Sauce Labs and Excelon suggest a blend of these two approaches. You might call that approach integrated testing, because it combines manual and automated testing for maximum efficiency, saving both time and money. Skipping test automation results in slower feedback and a reluctance to cut and ship new versions because of the cost; skipping human exploration leaves entire categories of feedback ignored. Sauce Labs recommends roughly 80% automated testing and 20% manual testing. Based on its research, Sauce expects that combining the two approaches in that ratio can save up to 70% of the money and time spent testing. These percentages can change based on the complexity and the concept of your app.
The picture below shows which tests should be performed through manual testing and which through test automation. An agile approach is also advised when developing mobile applications. One caveat: exploring while writing the automated tests will happen anyway; doing it explicitly, with intention, can enhance the value.