Update: Sauce Labs recently acquired Screener. To register your interest and learn more, please fill out the form on our Visual Testing page.
As development teams eliminate manual testing and focus on test automation, everyone lives and dies by automated tests. They are repeatable, faster than manual checks, and should prevent bugs, right? But how does your automated test coverage actually compare to what manual testing used to catch? You have plenty of automated Unit, API, Integration, and Functional UI tests, yet the forgotten coverage is often Visual testing. Typically, automated testing does not extend to checks of the visual components of your application.
I have found that teams writing automated Functional UI tests often do not understand how these differ from visual tests. Do you? Roughly speaking, functional coverage exercises the main workflows in the app, while visual coverage validates how specific visual components appear in the app. Without a clear picture of when to use which and why, you could be missing coverage, or even wasting time. It is important to keep the two levels of tests separate. Don't blend the two.
The goal of visual testing is to catch unintended visual bugs introduced when UI components are updated. This activity verifies that the UI appears correctly from the user's perspective; a functional automated test alone cannot tell you that an updated visual component is now broken. Here is my initial checklist when defining visual test coverage (a small code sketch follows the list):
Each UI component appears in the right color, shape, position and size
Ensure that a component doesn’t hide or overlap any other UI elements
Responsive content at different viewport sizes to ensure changes made at one screen size won’t break a layout on another screen size
No broken images
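To make the checklist concrete, here is a minimal sketch of a low-tech visual check using Selenium's Python bindings: it compares one component's rendered geometry against a stored baseline and flags broken images. The URL, selector, and baseline values are placeholders for illustration.

```python
# Minimal visual-style checks with Selenium (Python bindings).
# The URL, selector, and baseline geometry below are illustrative placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

BASELINE = {"x": 16, "y": 120, "width": 320, "height": 48}  # expected geometry
TOLERANCE = 2  # allow a couple of pixels of drift

driver = webdriver.Chrome()
try:
    driver.get("https://example.com")

    # 1. The component appears in the right position and size.
    button = driver.find_element(By.CSS_SELECTOR, ".signup-button")
    rect = button.rect  # {'x': ..., 'y': ..., 'width': ..., 'height': ...}
    for key, expected in BASELINE.items():
        assert abs(rect[key] - expected) <= TOLERANCE, f"{key} drifted: {rect[key]}"

    # 2. No broken images: a broken <img> reports naturalWidth == 0.
    for img in driver.find_elements(By.TAG_NAME, "img"):
        natural_width = driver.execute_script(
            "return arguments[0].naturalWidth;", img)
        assert natural_width > 0, f"Broken image: {img.get_attribute('src')}"
finally:
    driver.quit()
```

In practice a screenshot-comparison tool replaces the hand-rolled geometry assertions, but the intent is the same: fail the build when a component no longer looks the way it did.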
The goal of functional testing is to cover all workflows of the system by fully exercising the GUI itself. Here is my initial checklist when defining Functional UI test coverage (a short example follows the list):
Form validation, i.e. all mandatory fields
Navigation
Links on a page
Search returns and displays the correct results
Pop-up or confirmation messages
Sorting
Create, read, update, and delete (CRUD) tasks
JavaScript works properly in all browsers
Negative scenarios
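By contrast, a functional UI test drives a workflow end to end and asserts on behavior rather than appearance. Here is a minimal sketch of the form-validation item from the list, again using Selenium's Python bindings with a placeholder URL and selectors.

```python
# Functional UI check: submitting a form with a missing mandatory field
# should surface a validation message. URL and selectors are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/signup")

    # Leave the mandatory email field empty and submit.
    driver.find_element(By.ID, "name").send_keys("Jane Tester")
    driver.find_element(By.ID, "submit").click()

    # The workflow should be blocked and an error message displayed.
    error = driver.find_element(By.CSS_SELECTOR, ".field-error")
    assert error.is_displayed()
    assert "required" in error.text.lower()
finally:
    driver.quit()
```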
To expand, check out this complete web application checklist.
As software development shops adopt automated functional UI testing, they tend to eliminate manual testing. But how often are you changing or tweaking CSS? Today's web and mobile applications are built with data-rich interfaces, responsive layouts, AJAX, JavaScript, and complex UI. Whenever changes are made, it is critical to execute both visual and functional tests to ensure everything still works across browsers, screen resolutions, and responsive designs (a viewport-coverage sketch follows the list below).
Browsers. Vendors are pushing updates automatically and often.
Screen Resolutions. People browse web applications from a variety of devices (mobile, tablet, desktop), so even non-responsive applications need checking at multiple resolutions.
Responsive Designs. Checking content at different viewport sizes ensures changes made at one screen size won't break the layout at another.
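One simple way to cover the resolution and responsive-design items is to repeat the same checks at several window sizes. A minimal sketch, assuming Selenium's Python bindings and a few placeholder breakpoints:

```python
# Re-run the same checks at several viewport sizes.
# Breakpoints and URL are illustrative placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

VIEWPORTS = [(375, 667), (768, 1024), (1366, 768)]  # phone, tablet, desktop

driver = webdriver.Chrome()
try:
    for width, height in VIEWPORTS:
        driver.set_window_size(width, height)
        driver.get("https://example.com")

        # The navigation should never overflow the viewport horizontally.
        nav = driver.find_element(By.CSS_SELECTOR, "nav")
        assert nav.rect["width"] <= width, f"nav overflows at {width}x{height}"

        # Capture a screenshot per breakpoint for later visual comparison.
        driver.save_screenshot(f"home_{width}x{height}.png")
finally:
    driver.quit()
```

Each saved screenshot can then feed the kind of golden-image comparison described later in this post.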
Functional changes are easily detected, but a single UI component change can have unexpected consequences, and those visual mistakes are hard to spot with only automated Functional UI tests. Visual testing is as important as Functional UI testing. Users now access web applications from all types of devices, and the experience on a mobile device is quite different from a desktop, so both the visual components and the functional workflows need coverage.
It turns out that there are a lot of Visual testing frameworks available. They all work in different ways: some take screenshots, while others aren't strictly visual at all and compare CSS attributes instead. Of the many frameworks out there, the most notable include DPXDT, Applitools, and BackstopJS.
Applitools
Applitools is a cloud-based automated visual testing solution that automatically validates all the visual aspects of web, mobile and desktop apps (a usage sketch follows the feature list).
Cloud-based
Provides native SDK for development and testing
Supports multiple browsers and multiple devices
Commercial
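To give a sense of the workflow, here is a rough sketch of driving an Applitools Eyes check from Selenium in Python. The exact package and method names vary by SDK version, so treat the details as assumptions rather than a verbatim recipe; the API key, URL, and test names are placeholders.

```python
# Rough sketch of an Applitools Eyes + Selenium check (Python).
# Package/method names vary across SDK versions -- treat as an assumption.
from applitools.selenium import Eyes
from selenium import webdriver

eyes = Eyes()
eyes.api_key = "YOUR_APPLITOOLS_API_KEY"  # placeholder

driver = webdriver.Chrome()
try:
    eyes.open(driver, "Demo App", "Home page layout")
    driver.get("https://example.com")

    # Capture the page; the Applitools service compares it against the
    # stored baseline and flags perceptual differences for review.
    eyes.check_window("Home page")

    eyes.close()  # raises if unexpected visual differences were found
finally:
    eyes.abort_if_not_closed()  # clean up if the test failed mid-run
    driver.quit()
```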
DPXDT
DPXDT (pronounced Depicted) is a tool that compares before-and-after website screenshots for each release. It shows when any visual and/or perceptual differences are found.
From Google, written in Python
Supports PhantomJS by default
RESTful API style of development and testing
Reports support manual confirmation of diffs
Free and open source
Others
To fill in your missing Visual test coverage, I recommend checking some of these out to see if they fit your application. What do you want out of a Visual testing framework? I personally want to compare between iterations or releases to test the CSS, layout, graphics and other visual changes in our application.
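As a rough illustration of that iteration-to-release comparison, here is a minimal sketch using Pillow's ImageChops to diff a staging screenshot against a golden production screenshot. The file names are placeholders, and a real perceptual tool handles anti-aliasing and rendering noise far better than a raw pixel diff.

```python
# Naive golden-image comparison with Pillow: diff a staging screenshot
# against the production "golden" screenshot. File names are placeholders;
# real perceptual tools are much smarter about anti-aliasing and noise.
from PIL import Image, ImageChops

golden = Image.open("golden_production.png").convert("RGB")
candidate = Image.open("staging_release.png").convert("RGB")

if golden.size != candidate.size:
    raise SystemExit(f"Size changed: {golden.size} -> {candidate.size}")

diff = ImageChops.difference(golden, candidate)
bbox = diff.getbbox()  # None means the images are pixel-identical

if bbox is None:
    print("No visual differences detected")
else:
    # Save the changed region for a human to review, mirroring the
    # manual accept/reject step a release manager performs.
    diff.crop(bbox).save("diff_region.png")
    print(f"Visual difference found in region {bbox}, review diff_region.png")
```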
The perceptual testing frameworks compare between iterations or releases: the new iteration's screenshot (from the staging environment) is compared against the golden screenshot (from the production environment). The tool performs a graphical comparison by generating rendering differences, rather than using the traditional pixel-comparison method. The release manager still needs to manually inspect every difference flagged as a failure, and either confirms it as an unexpected regression or accepts it as an expected change (creating a new golden image).

Takeaways
As you can see, visual tests are an important part of test coverage. Every testing strategy should include visual testing, and it is critical to coach team members on the difference between Visual and Functional UI testing.
Greg Sypolt (@gregsypolt) is a senior engineer at Gannett and co-founder of Quality Element. He is a passionate automation engineer seeking to optimize software development quality, while coaching team members on how to write great automation scripts and helping the testing community become better testers. Greg has spent most of his career working on software quality — concentrating on web browsers, APIs, and mobile. For the past five years, he has focused on the creation and deployment of automated test strategies, frameworks, tools and platforms.