SauceCon 2018: Mobile Content Roundup

I recently participated in the 2018 edition of the annual Sauce Labs conference, SauceCon. It was a fantastic event overall, and I had a lot of fun listening to talks and meeting people who are involved in all stages and modes of automated testing. What follows is a brief wrap-up of the nine mobile-focused talks from SauceCon. As a bit of an Appium guy (understatement anyone?), I was particularly interested in the Appium-specific talks, but there was a lot of mobile love to go around. Let’s dive in and see what each of these talks had to offer!

Using Mobile Analytics To Improve Both Our Testing And Our Apps (Julian Harty)

In this talk Julian motivates the use of a broad variety of analytics to make sure we’re testing the right things in our mobile apps. Information can come from a number of sources: analytics generated by automated tests, real usage, install rates, crash reports, ANR (app not responding) reports, vendor-run automated tests, user reviews, or vendor-generated benchmark comparisons (to name a few!). There are no silver bullets, and we have to sift the information carefully to find value. User reviews, for example, are extremely important (a 0.1-point difference in average user rating can mean a 10% difference in download volume), but it can be hard to extract actionable data from them in an automated fashion. We need a multifaceted approach that mixes automatically collected and manually collected feedback, and that looks at high-level concepts like user journeys. All analytics should be fed into a central database and shown on a dashboard.

Julian had lots of helpful case studies and specific advice, for example: always test your app on the devices you get bad reviews on—chances are something is not working well on that device! Finally, we must beware “automation bias”—the temptation to take what our dashboards tell us as gospel truth. We always need to keep a critical mind on the lookout for other explanations of what our dashboards show.

Transitioning From Selenium To Appium, How Hard Can It Be? (Sergio Neves Barros)

Sergio is one of Appium’s early contributors, and in this talk he tells the story of how and why he needed to hack Appium to make his mobile tests work. It’s focused on mobile web, with lots of entertaining details about how he convinced iOS to run mobile web tests using Safari on a real device. First, Sergio discusses how mobile web automation works with Appium for iOS in general, namely that it sends Selenium’s JavaScript Atoms over a remote debugging protocol to iOS webviews. There were two key technical challenges he faced in making this work on real devices in particular, and once he resolved them he contributed his technique to the Appium project so that everyone could run mobile web tests on real iOS devices. He closes the talk with some ideas about how Appium could improve its architecture moving forward, to make setup even easier for web testing. The project will have to look into that!
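
For a sense of what Safari automation on a real device looks like from the test author’s side, here is a minimal sketch using the Appium Java client (roughly current for this era of Appium). The device name, UDID, and server URL are placeholders, and real devices additionally need code-signing capabilities configured, which is exactly the kind of plumbing Sergio’s work helped smooth over:

```java
import java.net.URL;

import org.openqa.selenium.WebElement;
import org.openqa.selenium.remote.DesiredCapabilities;

import io.appium.java_client.ios.IOSDriver;

public class RealDeviceSafariSketch {
    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "iOS");
        caps.setCapability("automationName", "XCUITest");
        caps.setCapability("browserName", "Safari");   // a web session rather than a native app
        caps.setCapability("deviceName", "iPhone");    // placeholder
        caps.setCapability("udid", "auto");            // or the real device's UDID

        IOSDriver<WebElement> driver =
                new IOSDriver<>(new URL("http://localhost:4723/wd/hub"), caps);
        try {
            // From here on it's plain WebDriver: Appium relays the commands to Safari for us
            driver.get("https://saucelabs.com");
            System.out.println("Loaded page with title: " + driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}
```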

Mobile Testing of Web Apps: Emulators vs. Real Devices (Darya Alymova)

Darya’s talk covers how we should think about mobile web testing platforms. How do we know what kind of devices to test on? She kicks off with a bit of history of web apps and the variety of web apps that are now common, for example Single Page Apps (SPAs) or Progressive Web Apps (PWAs). These more complex types of web app make mobile web testing a bit more complicated than we might otherwise have assumed. For example, these types of apps make heavy use of behind-the-scenes network calls to fetch view data. Depending on the network provider, provider-wide proxy settings might get in the way of this data being retrieved, and we’d never discover this without testing on that particular network. In terms of the difference between emulators, simulators, and real devices, Darya recommends doing most of the testing of basic web apps on simulators and emulators, whereas SPAs/PWAs should lean more heavily on real devices.

There’s also the question of when to use the different types of devices. It’s all a matter of how expensive a bug is if found at a given point in the development process. Essentially, when bugs are extremely expensive, use real devices more frequently to be sure bugs are caught in a high-fidelity environment. For most teams, that will be the part of the dev process nearest production releases. Darya closes by sharing what Thomas Cook / OneWeb has settled on in terms of their physical / virtual device spread, namely the use of physical devices for acceptance testing of new features, and reliance on Sauce emulators / simulators for automated regression and smoke tests.

Appium: How To Write A Symphony (Dan Cuellar)

Appium creator Dan Cuellar came to SauceCon with some crazy futuristic ideas and laid them on us in his talk. Before that, though, he reached back into the past and summarized some learnings from the inception of Appium and Appium’s resulting success. For example, that basing Appium on a widely accepted standard was a good idea. It turns out that automation is generalizable across a lot of devices and platforms, so not much needed to be changed in WebDriver to make it work with mobile apps. But what’s next? Could we take the abstraction one step further and talk about a framework for automating everything? What would this universal automation standard look like? Dan himself doesn’t know, but he knows it would take us beyond the WebDriver protocol folks have spent so much time codifying as a W3C standard. To illustrate his point, he imagines an automated orchestra, with music notation playing the role of a test script. How do you represent music notation in something like WebDriver? Well, instruments have different capabilities, just like browsers. Some instruments are monophonic, whereas others are polyphonic, etc… But at the end of the day, WebDriver actions like “click” could be turned into something more generic like “press”, which could be interpreted by different instruments according to their method of being played. Finding elements, of course, maps to picking the right key or string to do the pressing / plucking.

Ultimately, Dan disposes of the musical notation illustration and moves on to his proposal, which is essentially that every level of software (app, OS, etc…) expose its own automation API based on a common future-WebDriver-type language, such that writing tests for these levels looks more like consuming a page object model than performing low-level instructions. A fascinating idea! Finally, Dan discusses the main challenge to his idea, which is adoption. How do we get everyone to buy in? That is a problem that we will have to figure out on our own!

Mobile Visual Testing: Uphill Battle Of Mobile Visual Regression (Dmitry Vinnik)

In this talk Dmitry proposes the novel idea of a parallel testing pyramid, this time for visual rather than functional testing. He spends some time motivating the notion of visual testing in the first place, and differentiating it from functional testing. The idea is that visual flaws affect user experience in a similar way as functional flaws, but aren’t caught by any part of the traditional testing pyramid. And manual visual testing is a lost cause—it’s been determined in various studies that humans aren’t that great at detecting minor visual differences. In his proposed visual testing pyramid, the base consists of CSS/DOM tests: tests which assert that the actual app hierarchy hasn’t changed between test runs. This helps prevent accidental changes of structure, but doesn’t catch other kinds of visual issues which aren’t reflected in the DOM/CSS.
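
As a rough illustration of that base layer, here is a hedged sketch in Java that captures the current hierarchy with WebDriver’s getPageSource() (which in an Appium session returns the native view hierarchy as XML) and compares it against a stored baseline. The baseline directory is my own invention, and a real implementation would normalize volatile attributes like coordinates before comparing:

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import org.openqa.selenium.WebDriver;

public class HierarchySnapshot {
    // Assert that the view hierarchy for a given screen matches a previously approved baseline.
    public static void assertHierarchyUnchanged(WebDriver driver, String screenName) throws Exception {
        Path baseline = Paths.get("baselines", screenName + ".xml"); // assumed location
        String current = driver.getPageSource();

        if (!Files.exists(baseline)) {
            // First run: record the baseline rather than failing.
            Files.createDirectories(baseline.getParent());
            Files.write(baseline, current.getBytes(StandardCharsets.UTF_8));
            return;
        }

        String approved = new String(Files.readAllBytes(baseline), StandardCharsets.UTF_8);
        if (!approved.equals(current)) {
            throw new AssertionError("View hierarchy for '" + screenName + "' has changed");
        }
    }
}
```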

The second level of the pyramid is the layer of visual component tests, where individual components are loaded into a view and screenshotted. Screenshots are then compared between runs on a per-component basis. The component granularity means that you’re not likely to be thrown off by the interaction effects of many components living together on one full page. However, at the end of the day you do need full-page visual testing, which forms the capstone of the pyramid. For this, services like Applitools are helpful, given the tools they provide to reduce the noise inherent in full-page visual comparison. Dmitry closes with a demo of using Appium and Applitools together to verify his toy Android app visually.
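
Dmitry’s demo leans on Applitools for the hard parts, but just to make the screenshot-comparison idea concrete, here is a naive, home-rolled sketch using WebDriver’s screenshot API and plain ImageIO. Real visual testing tools do far more (ignore regions, perceptual rather than pixel-exact diffing, cross-device layout handling), so treat this as an illustration of the concept rather than a recommendation:

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

public class NaiveScreenshotDiff {
    // Returns the fraction of pixels that differ between the current screen and a baseline image.
    public static double diffAgainstBaseline(WebDriver driver, File baselineFile) throws Exception {
        File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        BufferedImage actual = ImageIO.read(shot);
        BufferedImage baseline = ImageIO.read(baselineFile);

        if (actual.getWidth() != baseline.getWidth() || actual.getHeight() != baseline.getHeight()) {
            return 1.0; // different dimensions: treat as a total mismatch
        }

        long differing = 0;
        for (int y = 0; y < actual.getHeight(); y++) {
            for (int x = 0; x < actual.getWidth(); x++) {
                if (actual.getRGB(x, y) != baseline.getRGB(x, y)) {
                    differing++;
                }
            }
        }
        return (double) differing / ((long) actual.getWidth() * actual.getHeight());
    }
}
```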

From Point & Click to Pinch & Zoom (Dan Rabinovitz)

Dan’s talk is a nice introduction to mobile testing for those coming from a Selenium background. He first talks about why testing is so important, especially in the new world of mobile. He then dives into some of the conceptual differences between web and mobile testing, for example dealing with the additional capabilities devices have (geolocation, etc…) which create new testing requirements, or the new set of locator strategies for finding elements in Appium. To illustrate the transition from web to mobile, he shows example tests for the same basic test flow (checking out in a shopping app) on desktop web, mobile web (powered by Chrome’s Device Mode), and mobile native (powered by Appium).
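
To give a flavor of the native version of such a flow (the app path, the accessibility ids, and the capability values here are all invented for illustration, not taken from Dan’s demo), a hedged sketch with the Appium Java client might look like this:

```java
import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

import io.appium.java_client.MobileBy;
import io.appium.java_client.android.AndroidDriver;

public class CheckoutFlowSketch {
    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", "Android Emulator");
        caps.setCapability("app", "/path/to/shopping-app.apk"); // hypothetical app under test

        WebDriver driver = new AndroidDriver<>(new URL("http://localhost:4723/wd/hub"), caps);
        try {
            // Mobile-specific locator strategy: accessibility id instead of CSS selectors on a DOM
            driver.findElement(MobileBy.AccessibilityId("Add To Cart")).click();
            driver.findElement(MobileBy.AccessibilityId("Cart")).click();
            driver.findElement(MobileBy.AccessibilityId("Checkout")).click();
        } finally {
            driver.quit();
        }
    }
}
```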

The best way to get going with Appium is to use Appium Desktop, a GUI tool that makes discovering locators dead simple, and also has the option of recording app actions and generating Appium code, which can be useful for learning the Appium API. In sum, because of Appium’s similarities with Selenium, it’s not that hard to go from point and click to pinch and zoom.

Drivers Of Change: Appium’s Hidden Engines (Jonathan Lipps)

In my talk I try to make a point about Appium’s design, namely that it encourages the development of new drivers by offering a nice environment and a set of code libraries for plugging in new automation engines. I go through the history of Appium drivers, from iOS to Android to the full set of eight drivers we have today, and then underscore the point by live-coding a brand new Appium driver. An Appium driver does four things:

  1. Set up binaries and system state for an underlying automation engine
  2. Manage that underlying automation engine as a subprocess
  3. Translate the WebDriver protocol to what the underlying engine expects (or simply proxy if it too speaks WebDriver)
  4. Fix any flaws in the underlying engine

I take SafariDriver (for desktop Safari) as the “underlying automation engine” for my demo, and show how in less than 200 lines of code we can have a fully functional Appium SafariDriver, which can be plugged into the main Appium server or used on its own. Nobody really needs SafariDriver in Appium, but it illustrates how easy it would be to take an automation technology we do want to use (say, some imaginary Espresso 2) and set up the plumbing to connect it with Appium.
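
Appium drivers themselves are written in JavaScript on top of Appium’s shared driver libraries, but the “simply proxy” case from item 3 is easy to picture in any language. Here is a rough, language-neutral sketch in Java of the core idea: an HTTP server that forwards every WebDriver request verbatim to an underlying engine that already speaks the protocol. The port numbers are arbitrary, and a real driver also handles session bookkeeping, capability munging, and the quirk-fixing from item 4:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;

public class TinyWebDriverProxy {
    // Address of the underlying engine (e.g. a locally running SafariDriver); arbitrary for illustration
    private static final String UPSTREAM = "http://localhost:5555";

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(4723), 0);
        server.createContext("/", TinyWebDriverProxy::forward);
        server.start();
        System.out.println("Proxying WebDriver traffic from :4723 to " + UPSTREAM);
    }

    private static void forward(HttpExchange exchange) throws java.io.IOException {
        // WebDriver clients only use GET, POST, and DELETE, so pass method and body straight through
        URL target = new URL(UPSTREAM + exchange.getRequestURI());
        HttpURLConnection conn = (HttpURLConnection) target.openConnection();
        conn.setRequestMethod(exchange.getRequestMethod());
        conn.setRequestProperty("Content-Type", "application/json; charset=utf-8");
        if ("POST".equals(exchange.getRequestMethod())) {
            conn.setDoOutput(true);
            try (InputStream in = exchange.getRequestBody(); OutputStream out = conn.getOutputStream()) {
                in.transferTo(out);
            }
        }

        int status = conn.getResponseCode();
        InputStream upstream = (status >= 400 && conn.getErrorStream() != null)
                ? conn.getErrorStream() : conn.getInputStream();
        byte[] body = upstream.readAllBytes();
        exchange.sendResponseHeaders(status, body.length == 0 ? -1 : body.length);
        if (body.length > 0) {
            exchange.getResponseBody().write(body);
        }
        exchange.close();
    }
}
```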

The Power of Polymorphism: Testing Android And iOS From A Single Test Suite (Craig Schwarzwald)

In this talk, Craig draws upon his experience in managing a cross-platform app to come up with some suggestions for others. Everybody knows how to use a page object model pattern to keep test logic and app locators separate. It gets more complicated, however, when we are trying to keep our framework clean as we automate two similar but not identical apps. Craig proposes that we use a polymorphic approach, which starts off with a base page model that is extended by platform-specific page models. In Java, judicious use of the `abstract` keyword can help ensure that each platform only implements what is unique, and what is common is kept in one place.

At the end of the day, what we end up with is a set of tests which don’t know anything about the platform they’re run on; instead, they make use of interfaces which are implemented platform-specifically. Craig explains all this with code, not slides. It’s a great approach to follow and if you are working in Java you’ll find a lot to take away.
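
To sketch the shape of this approach (the screen, the locators, and the class names below are my own inventions, not Craig’s actual code), a minimal version might look like the following. A test only ever talks to LoginPage, and a small factory or the test setup decides which concrete class to instantiate based on the platform under test:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

import io.appium.java_client.MobileBy;

// Shared page model: all test-facing logic lives here and is written exactly once.
abstract class LoginPage {
    protected final WebDriver driver;

    protected LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Only the locators are platform-specific.
    protected abstract By usernameField();
    protected abstract By passwordField();
    protected abstract By loginButton();

    public void logIn(String username, String password) {
        driver.findElement(usernameField()).sendKeys(username);
        driver.findElement(passwordField()).sendKeys(password);
        driver.findElement(loginButton()).click();
    }
}

// Android supplies Android-flavored locators...
class AndroidLoginPage extends LoginPage {
    AndroidLoginPage(WebDriver driver) { super(driver); }
    protected By usernameField() { return MobileBy.AccessibilityId("username_input"); }
    protected By passwordField() { return MobileBy.AccessibilityId("password_input"); }
    protected By loginButton()   { return MobileBy.AccessibilityId("login_button"); }
}

// ...and iOS supplies its own, while tests stay completely platform-agnostic.
class IosLoginPage extends LoginPage {
    IosLoginPage(WebDriver driver) { super(driver); }
    protected By usernameField() { return MobileBy.AccessibilityId("Username"); }
    protected By passwordField() { return MobileBy.AccessibilityId("Password"); }
    protected By loginButton()   { return MobileBy.AccessibilityId("Log In"); }
}
```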

How Life360 Went From A 6 Week To 2 Week Mobile Release Cycle With Automation (Amol Kher)

In this very interesting talk, Amol gives a high-level case study of how his company improved their mobile release cadence in the way the title suggests, partially by improving their automation but mostly by looking at the problem holistically and making changes up and down the organization. His three ingredients for faster releases form a sort of pyramid, with a “culture of quality” at the base, followed by good tools and metrics, and topped off with process improvements.

Those of us in mid-size companies often grew out of a startup phase where, between resource constraints and just plain trying to stay alive, the company culture never became a culture of quality. It’s important to establish one by tying it to the company’s values and getting buy-in across the board that quality is everyone’s job; his company (Life360) did away with the “QA” label for exactly that reason. Amol then went through the tools and metrics that made the knobs of change easier to turn. Getting metrics related to quality is hard, but engineers love solving hard problems. At the end of the day everyone needs a dashboard of quality (regression suite health, crash rate, battery report analysis, etc…).

Finally, by fixing certain processes (for example adopting trunk-based development, or a release train model where releases go out on fixed dates with variable scope depending on what’s mature enough to go out that day), it’s possible to pull everything together into a virtuous cycle that dramatically improves release speed. The difference at Life360 was a threefold improvement, which as Amol pointed out in the case of a startup means 3 times as many opportunities to run product experiments before running out of funding!

Conclusion

That’s it for SauceCon 2018 from a mobile perspective. Of course, SauceCon was not just about mobile; there were two wholly different tracks devoted to a variety of other topics as well. If you’re into Selenium or automated testing in general, you should definitely check out the full set of talks (from Day 1 and Day 2) on YouTube. My personal favorites among the talks I attended were Simon Stewart’s and Jason Huggins’s keynotes, and Dave Haeffner’s take on test flakiness. There were so many talks that I couldn’t check them all out, though, and I’m sure there were many other great ones. Stay tuned for news of next year’s SauceCon!

In the meantime, join some of the world’s leading Appium experts for a special mobile-testing-focused webinar on Wednesday, March 14 to discuss all things Appium, mobile testing, and the upcoming London AppiumConf! Get more information and sign up here.
