The Problem
When testing a web application with a responsive layout, you'll want to verify that it renders correctly at the common screen resolutions your users use. But how do you do that?
Historically this type of verification has been done manually at the end of a development workflow -- which tends to lead to delays and visual defects getting released into production.
A Solution
We can easily sidestep these concerns by automating responsive layout testing so we can get feedback fast. This can be done with a Selenium test, Applitools Eyes, and Sauce Labs.
Let's dig in with an example.
An Example
NOTE: This example is built using Ruby and the RSpec testing framework. To play along, you'll need Applitools Eyes and Sauce Labs accounts. They both have free trial accounts which you can sign up for here and here (no credit card required).
Let's test the responsive layout for the login page of a website (e.g., the one found on the-internet).
In RSpec, a test file is referred to as a "spec" and its name ends with _spec.rb. So our test file will be login_spec.rb. We'll start it by requiring our requisite libraries (e.g., selenium-webdriver to drive the browser and eyes_selenium to connect to Applitools Eyes) and specifying some initial configuration values with sensible defaults.
[code language="ruby"]
# filename: login_spec.rb
require 'selenium-webdriver'
require 'eyes_selenium'

ENV['browser']         ||= 'internet_explorer'
ENV['browser_version'] ||= '9'
ENV['platform']        ||= 'Windows 7'
ENV['viewport_width']  ||= '1000'
ENV['viewport_height'] ||= '600'

# ...
[/code]
By using Ruby's ||= operator we're able to specify default values for these environment variables. These default values will be used if we don't specify a value at run time (more on that later).
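If the ||= idiom is new to you, here's a quick standalone sketch (the demo_ key name is just for illustration and isn't used anywhere else):

```ruby
# ||= assigns only when the left-hand side is nil (or false),
# so an existing value -- e.g., one set at run time -- wins.
ENV.delete('demo_viewport_width')       # make sure it starts unset
ENV['demo_viewport_width'] ||= '1000'   # unset, so the default is applied
ENV['demo_viewport_width'] ||= '600'    # already set, so this is a no-op
width = ENV['demo_viewport_width']
```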
Next we need to configure our test setup so we can get a browser instance from Sauce Labs and connect it to Applitools Eyes.
[code language="ruby"]
# filename: login_spec.rb
# ...
describe 'Login' do
  before(:each) do |example|
    caps = Selenium::WebDriver::Remote::Capabilities.send(ENV['browser'])
    caps.version = ENV['browser_version']
    caps.platform = ENV['platform']
    caps[:name] = example.metadata[:full_description]
    @browser = Selenium::WebDriver.for(
      :remote,
      url: "http://#{ENV['SAUCE_USERNAME']}:#{ENV['SAUCE_ACCESS_KEY']}@ondemand.saucelabs.com:80/wd/hub",
      desired_capabilities: caps)
    @eyes = Applitools::Eyes.new
    @eyes.api_key = ENV['APPLITOOLS_API_KEY']
    @driver = @eyes.open(
      app_name: 'the-internet',
      test_name: example.metadata[:full_description],
      viewport_size: { width: ENV['viewport_width'].to_i,
                       height: ENV['viewport_height'].to_i },
      driver: @browser)
  end
# ...
[/code]
In RSpec you specify a test suite with the word describe followed by the name as a string and the word do at the end (e.g., describe 'Login' do).
We want our test setup to run before each test. To do that in RSpec we use before(:each) do. And to gain access to test details (e.g., the test name) we append a variable name in pipes to the incantation (e.g., before(:each) do |example|).
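If the pipes syntax is unfamiliar: in Ruby, a block can declare parameters between pipes, and whatever the method yields is bound to them. Here's a plain-Ruby sketch of the idea (run_hook and its hash are hypothetical stand-ins for RSpec's hook machinery and example object, not the real API):

```ruby
# Hypothetical stand-in for RSpec's before(:each) machinery: it yields
# a details object to the block, just as RSpec yields the current example.
def run_hook
  example = { full_description: 'Login succeeded' }  # stand-in for example.metadata
  yield example
end

test_name = nil
run_hook do |example|  # |example| receives what run_hook yielded
  test_name = example[:full_description]
end
```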
To control the browser and operating system we use a Selenium Remote Capabilities object (e.g., Selenium::WebDriver::Remote::Capabilities.send(ENV['browser'])). With it we're also able to specify the name of the test so it shows up correctly in the Sauce Labs job. We then connect to Sauce Labs by using Selenium Remote (specifying our credentials in the URL), passing our capabilities object along (e.g., desired_capabilities: caps), and storing the browser instance Sauce Labs provides in an instance variable (e.g., @browser).
Then we open a connection with Applitools Eyes by creating an instance of the Applitools Eyes object (e.g., @eyes = Applitools::Eyes.new), specifying the API key, and calling @eyes.open (providing the application name, test name, viewport size, and the browser instance from Sauce Labs). This returns a Selenium object that is connected to both the browser instance in Sauce Labs and Applitools Eyes. We store this in another instance variable (e.g., @driver) which we'll use to drive the browser in our test.
After each test runs we'll want to close the Applitools Eyes session and destroy the browser instance in Sauce Labs. To do that in RSpec, we'll place the necessary commands in an after(:each) do block.
[code language="ruby"]
# filename: login_spec.rb
# ...

  after(:each) do
    @eyes.close
    @browser.quit
  end
# ...
[/code]
Now we're ready to write our test. In it we will have access to two instance variables. One for the Selenium browser instance in Sauce Labs (e.g., @driver) and another for the job in Applitools Eyes (e.g., @eyes).
[code language="ruby"]
# filename: login_spec.rb
# ...

  it 'succeeded' do
    @driver.get 'http://the-internet.herokuapp.com/login'
    @eyes.check_window('Login Page')
    @driver.find_element(id: 'username').send_keys('tomsmith')
    @driver.find_element(id: 'password').send_keys('SuperSecretPassword!')
    @driver.find_element(id: 'login').submit
    @eyes.check_window('Logged In')
  end
end
[/code]
Tests in RSpec are specified with the word it, a string name for the test, and the word do (e.g., it 'succeeded' do).
Our test is simple. It visits the login page and completes the login form with two visual verifications being performed -- one after the page loads and another after completing the login.
If we save this file and run it (e.g., rspec login_spec.rb from the command-line) it will run in a single screen resolution (e.g., 1000x600). Now let's make it so we can specify multiple screen resolutions and have the same test run against all of them. To do that we'll need a little help from a library called Rake.
Packaging Things Up
With Rake we can create a file (e.g., Rakefile) and store tasks in it (using Ruby syntax) that we can call from the command line.
Let's create a task that will handle executing our test for each screen resolution we want in parallel.
[code language="ruby"]
# filename: Rakefile

desc 'Run tests against each screen resolution we care about'
task :run do
  RESOLUTIONS = [
    { width: '1000', height: '600' },
    { width: '414',  height: '699' },
    { width: '320',  height: '568' }
  ]
  threads = []
  RESOLUTIONS.each do |resolution|
    threads << Thread.new do
      # ENV is shared by every thread in this process, so we pass the
      # viewport values directly to the child process rather than
      # mutating ENV from concurrently running threads
      system({ 'viewport_width'  => resolution[:width],
               'viewport_height' => resolution[:height] },
             'rspec login_spec.rb')
    end
  end
  threads.each { |thread| thread.join }
end
[/code]

In Rake you can provide a description for a task with the keyword desc followed by the description text in a string (e.g., desc 'Run tests...'). Tasks are specified by the task keyword followed by the name of the task (specified as a symbol) and ending with the word do (e.g., task :run do).

We start our :run task off by specifying the screen resolutions we want as key/value pairs (a.k.a. hashes) inside of an array (a.k.a. a collection). This enables us to easily iterate through the collection (e.g., RESOLUTIONS.each do |resolution|) and grab the width and height values for each resolution. We hand those values to each test run as environment variables by passing a hash as the first argument to system, which our test code then reads. So when our test runs it will use the correct width and height values. (Passing the hash to system, rather than assigning to ENV inside each thread, avoids a race condition: ENV is shared process-wide, so concurrent threads could otherwise overwrite each other's values before the child processes launch.)
NOTE: The resolutions used here will trigger different screen layouts (e.g., desktop, smart phones, etc.). For a true test of your app, be sure to look at your usage analytics to see what screen resolutions your users are using.
For each of our screen resolutions we're creating a new thread, which makes the test runs execute at the same time. So when we run this task, our single Selenium test will get executed three times (once for each resolution specified), and each run will use a different screen resolution.
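The spawn-then-join pattern the task relies on can be distilled to a few lines (the resolution strings here are just placeholders for real work):

```ruby
results = []
mutex = Mutex.new  # protect the shared array from concurrent writes

threads = %w[1000x600 414x699 320x568].map do |resolution|
  Thread.new do
    # each thread runs concurrently; here it just records its input
    mutex.synchronize { results << resolution }
  end
end

threads.each(&:join)  # wait for every thread before moving on
```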
After saving this file we can do a quick sanity check to make sure Rake runs and the task is listed by issuing rake -T from the command-line.

> rake -T
rake run  # Run tests against each screen resolution we care about
To run this task it's as simple as rake run from the command-line. And to specify a different browser, browser version, or platform you just need to prepend the command with different values.

browser=internet_explorer browser_version=8 platform="Windows XP" rake run
browser=firefox browser_version=37 rake run
browser=safari browser_version=8 platform="OS X 10.10" rake run
browser=chrome browser_version=40 platform="OS X 10.8" rake run
See the Sauce Labs platform documentation for a full list of available browser/OS combinations.
Expected Behavior
If we run this (e.g., rake run from the command-line) here is what will happen:
The test will run numerous times (in parallel) -- once for each resolution specified
Each test will retrieve a browser instance from Sauce Labs and connect it to Applitools Eyes with the correct screen resolution
The test will run and perform its visual checks
The browser instance on Sauce Labs and connection to Applitools Eyes will close
The results for the job will be displayed in the terminal output
When the rake task is complete, you can view the visual checks for each resolution in your Applitools Eyes dashboard. Each resolution will have its own job. In each job you can either accept or decline the result. Accepting will set it as the baseline for subsequent test runs. If you do nothing, then the result will automatically be used as the baseline. You can also see each of the test runs in full detail (e.g., video, screenshots, Selenium log, etc.) in your Sauce Labs job dashboard.
On each subsequent run, if there is a visual anomaly for any of the given resolutions specified then the test will fail for that resolution -- and you'll be able to easily identify it.
Outro
Hopefully this tip has helped you add automated responsive layout testing to your suite, enabling you to catch visual layout bugs early on in your development workflow.
For reference, you can see the full code example here.
Happy Testing!
About Dave Haeffner: Dave is the author of Elemental Selenium (a free, once weekly Selenium tip newsletter that is read by hundreds of testing professionals) as well as a new book, The Selenium Guidebook. He is also the creator and maintainer of ChemistryKit (an open-source Selenium framework). He has helped numerous companies successfully implement automated acceptance testing; including The Motley Fool, ManTech International, Sittercity, and Animoto. He is a founder and co-organizer of the Selenium Hangout and has spoken at numerous conferences and meetups about acceptance testing.