Shouldn't the adoption of open standards make it easier to conduct cross-platform testing for your browser apps? It should, and in many ways, it has streamlined browser testing. But the limits to actual compliance, along with variations in the implementation of those standards, mean that you cannot rely on successful tests in one browser to guarantee that your app will be compatible with all standards-compliant browsers.
In this post, we'll take a look at why that's true, and how you can optimize your cross-browser testing regime while still maintaining your target level of compatibility, despite variations in open standards compliance.
Why don't open standards produce full browser standardization? How can browsers all be standards-compliant, yet inconsistent with one another?
In order to understand the answers to these questions, it is important to understand what open standards actually are, and what compliance with those standards actually means.
What are open standards? Any standard is formulated and enforced by some kind of governing body. This is true for official standards, such as those governing coins and currency or local building codes, and it is also true for standards with semi-official (but not governmental) standing.
Internet and software standards mostly fall into the latter category. Organizations such as the W3C and ISO are consortiums of industry, professional, and government bodies. The standards they formulate may be necessary for specific kinds of certification, but very often they function as guidelines rather than officially sanctioned and enforced requirements.
Vendors and developers are encouraged to comply with such standards, but compliance is voluntary, and they may choose to include features which are non-compliant.
And the truth is that at this point, no browser is 100% compliant. It is not entirely certain, in fact, that full compliance is possible on a practical level, since vendors need to maintain compatibility with existing applications and websites, which may themselves be out of compliance with basic open standards.
Browser vendors also have significant motivation to add enhancements of their own. Internet Explorer, to use the most obvious example, has always included both scripting commands and Windows-based features which were not available with other browsers. This has allowed it to make good use of underlying Windows resources, but it has also meant that applications and websites developed for IE cannot be guaranteed to function in other browsers, even if those browsers are fully compliant with existing open standards.
The bottom line is that you can't simply test your application or website with one common, standards-compliant browser, and expect the results to hold true for other browsers, even if they are supposed to be equally compliant with the same set of open standards.
What can you do? Even though the market is dominated by a relatively small number of browsers and operating systems, each browser and each operating system is likely to have a number of versions currently in use. The same is true of the development tools, runtime libraries, and other underlying resources which determine how a given feature is implemented.
The best and perhaps only effective testing strategies under the circumstances require some form of triage. You can, for example, determine (through analysis of logs or metrics, or even by means of user surveys) which browser and operating system combinations actually account for the majority of users in your target market, and devote most of your cross-browser testing efforts to determining compatibility with those platforms.
As an alternative, you could establish full compatibility with one major browser during your early development and testing phases, then use that browser as a benchmark for automated cross-browser testing with other browser/operating system combinations.
Along with triage-based testing strategies, there are some other key points to keep in mind when planning for true compatibility with standards-compliant browsers.
Even the most compliant browser vendors can make major unannounced changes in the basic functionality of their systems. This is particularly true when those changes are security-related. Vendors often have good reason not to give out details of security-related fixes, or even to announce them at all.
Ideally, of course, such changes should be fully compliant with existing standards. They may, however, involve functions which current open standards do not fully address, or they may override those standards, based on high-priority security considerations.
Browser vendors can also cause problems for developers by increasing their compliance with open standards. This can happen, for example, when an existing browser feature has not been fully compliant, and developers have incorporated the non-compliant elements of that feature into their code. Applications which had previously worked may no longer function correctly, simply because the browser is now more compliant with open standards.
This means that it is never safe to assume that since your application or website is compatible with current browser/operating system combinations, it will be fully compatible with the next incremental release, or even the next bug fix. Even if you are focusing your testing effort on a handful of specific platforms, you need to keep on top of any changes to your target platforms.
There are some basic principles that you can follow in order to make that task of maintaining full browser compatibility easier:
Use the most common, standard resources whenever possible and practical. Standard features and simple ways of doing things may be boring, but they are likely to remain the same across browsers, and from one browser version to the next.
There will always be people using the previous version of a browser or operating system, and others using versions that are considerably older than that. You need to decide how far back you will go by keeping a running backwards-compatibility list. The list can and should change frequently, of course, but as long as a browser or operating system is on that list, you should test for compatibility after implementing any new features.
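One way to keep that cutoff decision explicit is to store the support list as data in your test configuration, so it is versioned and reviewable rather than implicit. The browser names, version numbers, and the `isSupported` helper below are a made-up sketch of that idea:

```javascript
// A running backwards-compatibility list: the oldest version of each
// browser you still commit to testing. All versions here are illustrative.
const minimumVersions = {
  chrome: 100,
  firefox: 102,
  safari: 15,
  edge: 100,
};

// Decide whether a reported browser/version is still inside the support
// window, i.e. whether a compatibility failure there counts as a bug.
function isSupported(browser, version) {
  const minimum = minimumVersions[browser.toLowerCase()];
  if (minimum === undefined) return false; // unknown browser: out of scope
  return version >= minimum;
}

console.log(isSupported("Chrome", 99));   // older than the cutoff
console.log(isSupported("Firefox", 115)); // inside the support window
```

Dropping a platform then becomes a one-line, reviewable change to the list instead of a silent shift in what the test suite happens to cover.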
If you have good reason to use a new feature, include it (with provisions for backward compatibility, of course), but don't toss it in just to look flashy, since new features tend to have more bugs and compatibility problems.
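The usual way to make those provisions for backward compatibility is feature detection: check at runtime whether the newer capability exists, and fall back to the boring, widely supported path when it does not. A minimal sketch, using the choice between `fetch` and `XMLHttpRequest` as the example; passing the global object in as a parameter is just a device to make the logic testable outside a browser:

```javascript
// Choose a transport based on what the environment actually provides,
// rather than assuming the newest API is present. `globals` stands in
// for the browser's global object (window) so the logic runs anywhere.
function chooseTransport(globals) {
  if (typeof globals.fetch === "function") {
    return "fetch"; // modern, standard path
  }
  if (typeof globals.XMLHttpRequest === "function") {
    return "xhr"; // older but near-universal fallback
  }
  return "unsupported"; // the capability simply isn't there
}

// Simulated environments: a current browser, and an older one without fetch.
console.log(chooseTransport({ fetch: () => {}, XMLHttpRequest: function () {} }));
console.log(chooseTransport({ XMLHttpRequest: function () {} }));
```

Detecting the feature itself, rather than sniffing the browser's name and version, also keeps the code working when a vendor quietly adds or changes support.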
Avoid using platform-specific features, if possible, since they multiply testing and compatibility problems, increase the probability of bugs and vulnerabilities, and make your code even more sensitive to unannounced changes. Platform-specific features also tend to go against the idea of a uniform user experience.
And a final, major point: use automated parallel testing as a way to cover as much ground as possible, and to keep up with the often rapid changes to browsers and operating systems. Parallel cross-browser testing allows you to reduce testing time (often by several orders of magnitude), and scripted automated testing makes it easy to quickly update or retarget entire suites of tests.
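The speedup comes from fanning the same suite out across platforms concurrently, so total wall-clock time is roughly the slowest single suite rather than the sum of all of them. In this sketch, `runSuite` is only a stand-in for whatever your real runner (Selenium Grid, a cloud testing service, and so on) exposes:

```javascript
// `runSuite` is a placeholder for a real remote test runner; here it just
// simulates a suite that takes some time and reports a pass/fail result.
async function runSuite(platform) {
  await new Promise((resolve) => setTimeout(resolve, 50));
  return { platform, passed: true };
}

// Start every platform's suite at once and wait for all of them.
async function runInParallel(platforms) {
  return Promise.all(platforms.map(runSuite));
}

const platforms = ["Chrome/Windows", "Safari/macOS", "Firefox/Linux"];
runInParallel(platforms).then((results) => {
  // true only if every platform's suite passed
  console.log(results.every((r) => r.passed));
});
```

The same structure also makes retargeting cheap: adding or dropping a platform is a change to the `platforms` list, not to the tests themselves.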
In many ways, in fact, this type of testing is a necessity. It is not practical and it is probably not possible to rely on manual, serial testing to keep up with changes in browsers, operating systems, and underlying page-rendering and scripting resources. Automated cross-browser parallel testing is the only way to keep the playing field level, and to keep your applications and websites up to date.
Open standards are very important, and they have made life much easier for developers, but they are still no substitute for staying alert and maintaining an active, up-to-date, cross-platform testing regime.
Michael Churchman started as a scriptwriter, editor, and producer during the anything-goes early years of the game industry. He spent much of the ’90s in the high-pressure bundled software industry, where the move from waterfall to faster release was well under way, and near-continuous release cycles and automated deployment were already de facto standards. During that time he developed a semi-automated system for managing localization in over fifteen languages. For the past ten years, he has been involved in the analysis of software development processes and related engineering management issues. He is a regular Fixate.io contributor.