Don't Get Overwhelmed By Cross-Browser Testing

Posted by Justin Rohrman

Today nearly any software product people care about -- banking, insurance, healthcare, travel booking and so on -- is accessed through a web browser. Not just one web browser, of course. That would be too simple. After you consider browser version, Operating System, OS version, and hardware platform, there are easily hundreds of different ways a customer might be using the software. This is the cross-browser testing problem: any one test that needs to be performed once might need to be performed in untold numbers of browsers or environments.

Luckily, there are methods to manage the complexity. This article covers what cross-browser testing is and why it is needed, when teams can solve the problem without many tools, and when they might want to use automation.

What Cross-Browser Testing Is and Why It Is Needed

There are a handful of popular web browsers available -- Internet Explorer, Firefox, Chrome, Safari, Edge, and their mobile versions. Add to that a few popular Operating Systems -- Windows, macOS, Android, and iOS. That is a lot, but it feels like a solvable problem, so let's make it even worse: there are also multiple versions of all of those in play at any point in time. And companies can't control their users' environments when it comes to consumer products. Software written for the web, particularly JavaScript and CSS, may look and behave slightly differently across different browsers, Operating Systems, and versions of both.

Each web browser handles JavaScript just a little bit differently. Firefox uses the SpiderMonkey JavaScript engine, written in a combination of C and C++. The latest version of Google Chrome uses the Chrome V8 engine, usually just called V8, which is also used in the Opera web browser, the Couchbase database, and Node.js. Starting with version 9, Internet Explorer uses the Chakra JavaScript engine, as does Edge.

Here is an example of why all this matters.

Think about a page element that is common across all different kinds of software, such as a date picker. Date pickers have some standard features: a way to select the month, a way to select the year, a way to select a specific day, and a way to trigger the picker to open and close. One browser might display the month chooser with left and right arrows for pagination, while another might leave them out altogether. Scenarios like this happen because of the combination of the JavaScript library used to implement the date picker and the JavaScript engine in the browser.
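To make that concrete, here is a minimal sketch in Python with Selenium WebDriver that runs the same check against the same date picker in two browsers. The page URL and the element locators are hypothetical stand-ins for a real application; the point is only that one check can surface different results per browser.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    URL = "https://example.com/booking"        # hypothetical page under test
    ARROWS = ".datepicker .month-nav"          # hypothetical pagination arrows

    for make_driver in (webdriver.Chrome, webdriver.Firefox):
        driver = make_driver()
        try:
            driver.get(URL)
            driver.find_element(By.ID, "travel-date").click()   # open the picker
            arrows = driver.find_elements(By.CSS_SELECTOR, ARROWS)
            print(f"{driver.name}: {len(arrows)} month navigation arrows found")
        finally:
            driver.quit()

If one browser reports two arrows and another reports zero, that is exactly the kind of rendering difference described above.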


When to Use Less Tooling

Testing across browsers and platforms is a game of combinatorics. There are X browsers, Y browser versions, and Z platforms that might get used when a piece of software goes to production. The equation is slightly more complicated than that in practice, because new platform and browser versions now emerge monthly; we'll call the emergent platforms E. If a tester attempts to get complete coverage, that means performing a single test (X * Y * Z) + E times. The result of that combination might be 5 or it might be 5,000.
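As a rough sketch of how quickly that multiplies, the snippet below enumerates the combinations with Python's itertools. The browser, version, and platform lists are made up for illustration; a real list would come from the team's own support matrix.

    from itertools import product

    browsers  = ["Chrome", "Firefox", "Safari", "Edge", "Internet Explorer"]
    versions  = ["current", "current - 1", "current - 2"]
    platforms = ["Windows", "macOS", "Android", "iOS"]
    emergent  = 4   # new browser or platform releases since the list was made

    combinations = list(product(browsers, versions, platforms))
    print(len(combinations) + emergent)   # (X * Y * Z) + E runs of a single test

Even with those small, made-up lists, one test already needs 64 runs.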

Complete testing is not possible, and trying to get there is a fool's errand. For example, imagine there is a text field where a customer can enter a discount code. There are a few obvious tests -- a current code, an expired code, a discount by percentage, a flat rate discount such as $5.00. Those few tests provide some base understanding of what is going on, but the tester should probably go deeper. Consider very long numbers, discounts for more than the total cost, and discounts that aren't a number at all, such as ~@^. The testing possibilities never end; instead, we need to consider when to stop and whether these tests are helping a tester discover important information.
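Those discount code cases could be captured as one parameterized test. The sketch below assumes pytest and a hypothetical apply_discount() helper standing in for the real checkout code; the codes and expected values are illustrative only.

    import pytest

    def apply_discount(total, code):
        # Hypothetical, highly simplified stand-in for the real checkout logic.
        rules = {"SPRING10": lambda t: t * 0.90, "FLAT5": lambda t: t - 5.00}
        return rules.get(code, lambda t: t)(total)

    @pytest.mark.parametrize("code, expected", [
        ("SPRING10", 90.00),    # current percentage code on a $100.00 order
        ("EXPIRED5", 100.00),   # expired code changes nothing
        ("FLAT5", 95.00),       # flat $5.00 discount
        ("9" * 500, 100.00),    # absurdly long number is ignored
        ("~@^", 100.00),        # not a number at all
    ])
    def test_discount_codes(code, expected):
        assert apply_discount(100.00, code) == pytest.approx(expected)

The parameter list is where test selection happens: every row added costs time in every browser and platform it eventually runs against.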

Test selection can make this easier, whether the problem is a discount code field or browser and platform combinations.

The combination of Google Analytics and good logging can be useful for software that is in production. Google Analytics can report on usage statistics such as peak traffic times, pageviews, length of stay on each page, where in the world the people using the software are located, and what they are using to access the webpage. Often these usage statistics are extremely skewed. A development group might discover that 30 different unique environments are being used to access their software. But, after taking a deeper look at the data, they discover that 86% of those users are on an iPhone 6 with the latest version of iOS, another 10% are using Chrome on the latest version of macOS, and everything else is a very small amount of usage spread across Windows, Internet Explorer, Opera, and other smaller browsers.

Based on that information alone, that development group might focus their testing efforts on the top two environments -- iPhone 6 with the current version of iOS, and Chrome on the current version of macOS. These two environments would get the most testing attention, perhaps in proportion to the percentage of users on each.
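A minimal sketch of how a team might rank environments by traffic share, assuming the analytics data can be exported as simple records (the rows shown here are invented):

    from collections import Counter

    pageviews = [
        {"device": "iPhone 6", "os": "iOS (latest)", "browser": "Safari"},
        {"device": "MacBook", "os": "macOS (latest)", "browser": "Chrome"},
        # ... thousands more rows from the analytics export ...
    ]

    traffic = Counter(
        (view["device"], view["os"], view["browser"]) for view in pageviews
    )
    total = sum(traffic.values())
    for environment, count in traffic.most_common(5):
        print(f"{environment}: {count / total:.0%} of traffic")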

Companies that generate revenue from their software -- online retailers and brick and mortar scheduling tools, for example -- might want to augment this strategy with information about where purchases are coming from. Good logging in combination with monitoring tools can help teams discover this information. In the example above, customers are using three platforms heavily and a spectrum of others minimally. Using good logging, a company that uses its website to schedule and take down payments for car tire service might discover that 80% of the people who click the 'Buy' button are not using those top three platforms. The customers who are spending money are using an iPad Mini with the current version of iOS and the Google Chrome browser. This makes the strategy slightly different, but it still tells testers and developers where they need to focus.
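To weight by purchases instead of raw traffic, the same kind of tally can be run over log records that capture the environment alongside each 'Buy' click. The records and amounts below are invented; the real data would come from the team's own logging and monitoring.

    from collections import Counter

    buy_clicks = [
        {"env": ("iPad Mini", "iOS (latest)", "Chrome"), "amount": 45.00},
        {"env": ("iPhone 6", "iOS (latest)", "Safari"), "amount": 45.00},
        {"env": ("iPad Mini", "iOS (latest)", "Chrome"), "amount": 90.00},
    ]

    revenue = Counter()
    for click in buy_clicks:
        revenue[click["env"]] += click["amount"]

    total = sum(revenue.values())
    for environment, amount in revenue.most_common():
        print(f"{environment}: ${amount:,.2f} ({amount / total:.0%} of revenue)")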

Clear-cut scenarios like this can easily be handled by a few testers and developers. Of course, software isn't always so clean and easy.

When to Use More Tooling

Sometimes, regardless of platform distribution and usage statistics, development groups really do need to test a larger set of browsers and platforms. That might be 5 environments rather than 20, but 5 platforms is enough to create the perennial 'too many tests, not enough time' problem. Even if there is enough time, it can be difficult to be effective when testing multiple environments. People have a tendency to become accidentally blind to important things when they see them repeatedly.

UI automation tools such as the Selenium tool set, especially when used in conjunction with visual testing tools, make it possible to gain a large amount of coverage quickly -- particularly if a test can be re-used by simply changing out the browser name.
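Changing out the browser name can be as simple as a lookup table in front of one test. A minimal sketch, with a hypothetical page and locator:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    DRIVERS = {
        "chrome": webdriver.Chrome,
        "firefox": webdriver.Firefox,
        "edge": webdriver.Edge,
    }

    def checkout_smoke_test(browser_name):
        driver = DRIVERS[browser_name]()
        try:
            driver.get("https://example.com/checkout")    # hypothetical page
            assert driver.find_element(By.ID, "submit-order").is_displayed()
        finally:
            driver.quit()

    for name in DRIVERS:
        checkout_smoke_test(name)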

The UI automation plus visual testing scenario might look something like this: developers and testers spend the majority of their time testing and developing on a single platform, regardless of how many others are in play. (My colleague, Matt Heusser, suggests having developers and testers work on different platforms from the start to find defects earlier. Both ideas have merit.) During primary development they explore, find and fix bugs, and either build a couple of automated UI tests or modify existing ones. Each of those tests includes a line of code that grabs a complete snapshot of each page. Every time the test is run, the new snapshot is compared with the baseline, and any differences are flagged in a dashboard that is accessible through the Continuous Integration system.
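The "grab a snapshot and compare it to the baseline" step might look like the sketch below, using plain Selenium screenshots and Pillow rather than a dedicated visual testing service. The paths and the pixel-exact comparison are simplifications.

    from pathlib import Path
    from PIL import Image, ImageChops

    def matches_baseline(driver, page_name):
        snapshots = Path("snapshots")
        snapshots.mkdir(exist_ok=True)
        current = snapshots / f"{page_name}_current.png"
        baseline = snapshots / f"{page_name}_baseline.png"

        driver.save_screenshot(str(current))
        if not baseline.exists():
            current.rename(baseline)        # first run establishes the baseline
            return True

        diff = ImageChops.difference(
            Image.open(baseline).convert("RGB"),
            Image.open(current).convert("RGB"),
        )
        return diff.getbbox() is None       # None means no pixel differences

A real visual testing tool would do fuzzier matching and publish results to a dashboard, but the flow is the same: snapshot, compare, flag.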

When the tests are run, instead of running against that single environment, they can be configured to run against the set of browsers and platforms that are important to the customer base. Each difference reported by the visual testing integration with Selenium would be a trigger: someone would need to look in the Continuous Integration dashboard and decide whether or not the difference matters. If it does matter -- for example, a text label that overlaps a submit button on Chrome on Android -- then the tester could spin up that environment to make the bug happen again and report it to the developer or in a bug tracking system.
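Running against that configured set usually means pointing the same test at a Selenium Grid or a cloud provider instead of a local browser. A sketch, with a hypothetical grid URL and a made-up browser and platform matrix:

    from selenium import webdriver

    GRID_URL = "http://selenium-grid.example.com:4444/wd/hub"   # hypothetical grid

    MATRIX = [
        ("chrome", "Android"),
        ("chrome", "macOS"),
        ("firefox", "Windows"),
    ]

    OPTIONS = {"chrome": webdriver.ChromeOptions, "firefox": webdriver.FirefoxOptions}

    for browser_name, platform in MATRIX:
        options = OPTIONS[browser_name]()
        options.set_capability("platformName", platform)    # standard W3C capability
        driver = webdriver.Remote(command_executor=GRID_URL, options=options)
        try:
            driver.get("https://example.com/checkout")       # hypothetical page
            driver.save_screenshot(f"checkout_{browser_name}_{platform}.png")
        finally:
            driver.quit()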

UI automation used in conjunction with visual testing is a powerful tool, but it is still incomplete. Selenium tools only return information through assertions, or accidentally if a page element can't be found. Groups that do this will still want a strategy built around a real live person interacting with and investigating the software.

The cross-browser and platform problem can feel insurmountable at first glance. Development groups that take a 'test all the things' approach will find that insurmountable is exactly the right word. Building a strategy -- not a strategy document, but an actual strategy -- can help. How many browsers and platforms are customers using? Of those, which actually matter? The intersection of time available and people who can help will point to how much tooling, like Selenium and visual testing, to sprinkle into the strategy.

