Posted November 17, 2020

A Beginner’s Guide to Manual Cross-Browser Testing

In this comprehensive article, Ashwini Sathe discusses what you need to know about manual cross-browser testing—and how it's still essential to delivering a flawless user experience to your customers.


Are you just getting started with testing? Here are some things you’ll need to know about cross-browser testing, including why to use it and factors to consider as you get set up. 

First of all, what is cross-browser testing? Simply put, it’s the process of making sure your application is compatible with different browsers. Cross-browser testing helps you verify that your website works as expected across various browsers, operating systems, devices, and resolutions. It is a key step to releasing software and applications that exceed your users’ expectations, regardless of which browser they’re using. 

Why is Cross-Browser Testing Important? 

Web applications are an integral part of the user experience. If a user comes to your site and the app (functionality) performs as intended, they get what they need and are likely to come back. That’s why the user experience is so critical. If a user has trouble with your app—it performs poorly, takes too long, or doesn’t work effectively on their chosen browser—you’re in a heap of trouble. They are likely to go elsewhere, never to return. In fact, according to research from McKinsey & Company, 25% of customers will abandon a brand after just one bad experience! This means without thorough testing, you risk losing customers (and their associated revenue) forever.

To be successful, your web application must deliver a flawless user experience to your customers, giving them the digital confidence that will turn them into loyal users and truly set your product or service apart. 

Manual vs. Automated Testing

When you’re getting started with cross-browser testing, you’ll need to decide how much of the process you automate vs. how much you perform manually. With manual testing, a human tester performs the tests step by step and executes test scenarios without automated test scripts. This enables manual testers to check application elements (such as look and feel, usability, etc.) that are not easily surfaced by automated testing, and to better evaluate the product from an end-user viewpoint. Depending on the project, manual testing could take hours or weeks to accomplish. 

With automated testing, tests are executed automatically, often using commercial test automation frameworks and tools. This allows you to shorten your release cycles and ship code faster. There are situations where both types of testing are needed, and a balanced test strategy includes both. For the purpose of this article, we’ll focus more on manual testing. 

Who Performs Cross-Browser Testing?

While the approach can vary for different companies, in general, any team that is responsible for the design, development, and quality of web applications is most likely to perform cross-browser testing.

  • Quality Assurance teams use cross-browser testing to evaluate the end-user experience and test the build’s compatibility with multiple browsers to ensure that it meets the specific quality standards.

  • Web Development teams use it to identify and report bugs, and to plan resolution actions for the issues found.

  • User Interface teams (UI/UX/Web Designers) focus on the front end, running cross-browser tests to ensure the appearance and behavior of the website meets the expected user standards.

  • Marketing teams (digital marketing) generally use cross-browser testing to check website design and the rendering of web pages across different browsers. They could also use it for A/B testing to assess the effectiveness of their content strategies.

  • Customer Support might use manual testing to better understand and troubleshoot customer issues.  

  • Product Managers could also get involved in testing, as it keeps them close to their product and helps them understand how it performs in real-life situations.

When to Choose Manual Cross-Browser Testing

Automated cross-browser testing helps cut down the time spent on testing, aids faster error resolution, and provides efficiencies to scale. However, manual testing is still essential in various scenarios, such as the following:

  • Usability (exploratory) testing to identify the UX challenges that a real user can encounter while interacting with different browsers and devices

  • Testing complex functionality in an application, or, more generally, any tests that are too complex to automate

  • Testing security components like CAPTCHAs that cannot be automated

  • Getting a closer look at bugs that were discovered via automated testing 

There are several types of cross-browser tests that you can run manually. 

Exploratory testing is where testers check the website or application freely on an ad hoc basis, instead of running pre-established test cases via automation. 

Functional testing can be done manually as well. This is a type of testing that validates the software system against functional requirements. 

Visual testing allows you to test your user interface (UI) against different browsers and devices, so you can make sure it looks right no matter what. 

Manual testing can be a helpful way to validate the look and feel of your application from an end-user perspective, round out your testing, and ensure that you have all critical bases covered.
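To make the idea behind visual testing concrete, here is a minimal, hypothetical sketch of the comparison step that visual testing tools perform under the hood. It assumes screenshots have already been captured and reduced to same-sized 2D grids of pixel values (real tools work on actual image files and use far more sophisticated comparisons); the function names and the 1% threshold are illustrative assumptions, not part of any specific tool.

```python
# Sketch of visual comparison: given two screenshots represented as
# 2D grids of pixel values (hypothetical stand-ins for real browser
# captures), report what fraction of pixels differ.
def pixel_diff_ratio(baseline, candidate):
    """Return the fraction of pixels that differ between two same-sized grids."""
    if len(baseline) != len(candidate) or any(
        len(r1) != len(r2) for r1, r2 in zip(baseline, candidate)
    ):
        raise ValueError("screenshots must have identical dimensions")
    total = sum(len(row) for row in baseline)
    differing = sum(
        1
        for row_a, row_b in zip(baseline, candidate)
        for px_a, px_b in zip(row_a, row_b)
        if px_a != px_b
    )
    return differing / total if total else 0.0

def needs_review(baseline, candidate, threshold=0.01):
    """Flag the pair for a human tester if more than 1% of pixels changed."""
    return pixel_diff_ratio(baseline, candidate) > threshold
```

In practice, a tester (or tool) would capture the same page in each browser on the priority list and route any flagged pairs to a human for review, since small rendering differences are often acceptable.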

Selecting and prioritizing browsers for cross-browser testing

To get started, you’ll need to determine which browsers to test on. Taking a strategic approach to selecting browsers can optimize your testing strategy and maximize market coverage. So what factors should you consider when determining your browser mix? 

  • Internal sources: analytics and website usage data. Analytics tools such as Google Analytics are a good way to track website statistics and find insights into which browser/OS combinations and devices are most commonly used by your existing users. You can view browser popularity by region and other factors that will help you narrow down the best ones to start testing on. 

  • External sources: browser/OS and device popularity data. A good strategy is to combine your internal analysis with external data from public websites that provide regional and global market-share statistics for browsers and devices. This way, you can better account for trends and the emerging preferences of your prospective user base.

  • Pre-release and older browsers. Testing on pre-release browsers helps you verify that issues identified in the current browser version have been addressed in upcoming releases, and lets you proactively manage risk by confirming that your website works seamlessly on those newer versions. Similarly, depending on the share of users still on an older browser version, it is wise to include that version in your tests to continue providing a seamless user experience.

The data you have collected can now be used to condense the list and prioritize the critical browsers and devices you will test on, so that the majority of your users get a flawless app experience. Depending on the size of your target market, you might prioritize 10-20 browsers or more. You may also want to consider the percentage of device/brand coverage for your user base as a determining factor. This will help you better understand the extent of users impacted if an issue occurs and mitigate the risk early on.
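As a sketch of how this prioritization can work in practice, the following selects browser/OS combinations, most used first, until a target share of observed traffic is covered. The share numbers and the 90% coverage target are made-up assumptions for illustration; in a real project they would come from your analytics tool and external market-share data.

```python
# Hedged sketch: rank browser/OS combinations by usage share and keep
# adding them to the test list until a coverage target is reached.
def prioritize(usage_shares, target_coverage=0.9):
    """Return (selected combos, coverage achieved), most used first."""
    ranked = sorted(usage_shares.items(), key=lambda kv: kv[1], reverse=True)
    selected, covered = [], 0.0
    for combo, share in ranked:
        selected.append(combo)
        covered += share
        if covered >= target_coverage:
            break
    return selected, covered

# Illustrative (fabricated) traffic shares from an analytics export.
shares = {
    "Chrome / Windows 10": 0.42,
    "Safari / iOS 14": 0.21,
    "Chrome / Android 11": 0.15,
    "Firefox / Windows 10": 0.08,
    "Edge / Windows 10": 0.07,
    "Safari / macOS 11": 0.05,
    "IE 11 / Windows 8.1": 0.02,
}
combos, covered = prioritize(shares, target_coverage=0.9)
```

With these example numbers, five combinations are enough to cover roughly 93% of sessions, which is the kind of trade-off this step is meant to make visible: each additional browser adds manual-testing effort for a shrinking slice of users.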

Manual testing is time- and labor-intensive, so it's important to make an educated decision that takes into account your team goals, target audience, expected market coverage, and estimates for the time and effort needed. 

Key steps in conducting manual cross-browser testing

1. Creating a cross-browser testing strategy

Your cross-browser testing strategy should align with what your organization needs to accomplish. The strategy should begin with clear goals. What does success look like to you? Determine where you want to go, exactly what you need to test, which browsers to test on, and how you’ll manage the process. 

Next, you need to determine how you’ll organize and optimize your live tests and maximize test coverage (browser/OS versions, devices, and resolution). You may want to investigate a cloud-based testing platform such as the Sauce Labs Continuous Testing Cloud that gives you access to all the latest browser/OS combinations and enables you to quickly reproduce user scenarios, identify issues, and fix bugs faster. 

Having a shared testing platform can be especially beneficial as it can improve collaboration and visibility between teams and help you work toward common goals. For example, if you’re using Sauce Labs, you can easily invite others to view your live testing session or share video recordings and screenshots via Slack or HipChat. Similarly, having the test results stored in one platform allows you to have several options (Public, Private, Teams, etc.) for quickly sharing these results with other stakeholders. 
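For teams using a cloud platform, a remote test session is typically described by a capabilities object naming the browser, browser version, and platform to test on. Below is an illustrative fragment following the W3C WebDriver capabilities format; the helper function, test name, and version strings are assumptions for the example, and you should check your platform's documentation (such as the Sauce Labs docs) for the exact option names it expects.

```python
# Illustrative configuration fragment for a remote cross-browser test
# session. Keys follow the W3C WebDriver capabilities format; the
# "sauce:options" vendor block shown here is a hedged example of how
# platform-specific settings (like a human-readable test name) are passed.
def build_capabilities(browser, browser_version, platform, test_name):
    return {
        "browserName": browser,
        "browserVersion": browser_version,
        "platformName": platform,
        "sauce:options": {"name": test_name},
    }

# One entry from a prioritized browser list, expressed as capabilities.
capabilities = build_capabilities(
    "firefox", "82.0", "Windows 10", "manual-smoke-check"
)
```

In a live manual session you would pick these same values in the platform's UI rather than writing code, but keeping the browser list in a structured form like this makes it easy to share one prioritized matrix across manual and automated testing.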

2. Setting up the testing infrastructure

As you build your testing infrastructure, there are various factors to consider. 

Do you want to go in-house or subscribe to an online service? Do you want to test on real devices, or on emulators and simulators? Let’s explore these options a little further.

  • Emulators and simulators both make it possible to run software tests inside flexible, software-defined environments. These are often good for testing during the early stages of application development. They allow you to run tests more quickly and easily than you could if you had to set up a real hardware device. 

  • Using real devices is indispensable if you want to test the real scenarios your users will encounter: replicating issues on the exact devices, taking into account hardware dependencies (GPS, etc.), and intimately understanding the user experience.

  • You could also consider testing on virtual machines (vs. physical computers), which emulate computer systems. This option can help you cut down costs with better utilization of the host and allow you to scale up your testing as needed with the ability to add more VMs to test out various combinations.

  • Another important consideration is whether to build or buy the testing infrastructure. Depending on your goals, you can consider building an in-house browser and device lab. This option is suitable when you have specific hardware/environment needs or security requirements, but it calls for higher investment, dedicated resources, and careful planning to keep up with the ever-growing range of device types and browser versions. Conversely, testing teams often subscribe to cloud-based testing platforms that enable functional testing of web and mobile apps without the need to purchase or maintain physical infrastructure, allowing them to cut costs and get started much faster.

3. Executing the test strategy

After planning and building your strategy and determining what infrastructure will best meet your testing needs, it’s time to execute! Start running tests to see where you need to focus your development efforts. And most importantly: refine your tests to constantly improve and ensure your efforts have a direct impact on product quality. 


We hope this post has given you some good guidance on how to approach cross-browser testing. Manual testing can be an effective way to round out your test strategy and ensure that you are covering aspects that your automated scripts might not touch. If you build and execute a solid strategy, you’ll be well on your way to reaping the benefits of testing and giving your product a real competitive advantage. If you’re looking for some help, give Sauce Labs a try by signing up for a free trial today.

Ashwini Sathe
Sr. Group Product Marketing Manager