
Posted November 14, 2012

The Eschaton: What The End Game Looks Like For Testing with Selenium

This is the first in a series of posts by QAOnDemand, which offers self-service QA scripting and testing. For more info, visit http://qaondemand.com.

In theological circles, "The Eschaton" is defined as the end of time. In fact, there's a whole field of study, Eschatology, devoted to what the end of the world looks like. While I have to believe their office parties are pretty grim, they're definitely on to something: visualizing the end result of what you're building before ever laying down any foundation leads to better decision-making, which is a key thing to keep in mind when setting up a mature testing environment for the first time. To help you with this effort, I've devised a zombie-readiness kit covering how to set up the five key components of a mature testing environment with Selenium:

  • A place to store your tests (source code repository)

  • A place to run your tests (Sauce Labs or a Selenium server)

  • A mechanism to trigger your tests (continuous integration server like Jenkins)

  • A place to log and track defects (a bug tracker like Bugzilla)

  • An army of human testers to test what cannot be (or has not yet been) automated

While you can certainly test software without all five in place, it's radically more productive with everything integrated and running smoothly. I’ll be diving into each component in more detail, so let’s get started with the guide!

A place to store your tests

The Z: drive of your network is not a place to store tests. Neither is Dropbox. Your tests belong under source control. All of them. And by all, yes, I mean both manual test scripts and your test automation code. There are many reasons why bringing your test assets under source control is a good idea, but here are the top 26:

A) Continuous integration servers are designed to pull from repositories. Put your test code in a well-organized repository and it's relatively easy to configure your CI server to pull the latest copies and execute your tests automatically. And when a test fails, it's just as easy to glance at recent revisions to see whether a change to the test code itself caused the failure.

B) Products like the Atlassian suite are designed to broadcast repository activity. This is a good thing. Too often, QA takes on a siege-like quality where we only come up from the dungeon when there's a problem or free food. By continuously broadcasting QA test results into your company's main communication streams, you normalize the process of QA for everyone outside the group. QA test results become less of an interrupt and more of a routine. That's a good thing and a worthy goal.

C-Z) Revision control. If your test code isn't under revision control, you've been living in the woods too long, so listen up: if it's worth doing, it's worth keeping track of. The ability to trace changes over time is one of the most underrated tools out there. If something breaks and you've got to pop open the code, the first thing to look at is what changed. Did the code change? Did the test change? With Git, CVS, SVN, Mercurial, whatever, you can easily see the evolution of your test code. It solves so many problems and enables so many good things: skills development, accountability, and humility.
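To make "well-organized repository" concrete, here's one possible layout. This is purely illustrative; the directory names are made up, and any structure your CI server can point at will do:

```text
qa-repo/
├── manual/            # versioned manual test scripts (plain text or markdown)
│   └── checkout_flow.md
├── automation/        # Selenium test code the CI server pulls and runs
│   ├── smoke/
│   └── regression/
└── ci/                # job definitions the CI server reads, e.g. Jenkins configs
```

The key property is that manual scripts, automation code, and CI configuration all live in one versioned place, so a failed test can be traced to a specific revision of any of the three.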

A place to run your tests

You're reading this on Sauce Labs' blog, so this hardly needs mentioning, but there's a nuance here worth talking about. One of the best qualities of Sauce Labs is the visibility it creates. The value of being able to rerun a test or capture a screenshot cannot be overstated. Whether you've provided a manual step-by-step set of instructions or a link to a screencast, the first step in remediating a bug is to reproduce it and observe it in action. So if that "place to run your tests" is "Joey's laptop," you're going to have a bad time. But if it's a generally available service that anyone can access, it's going to be a whole lot more fun.
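Pointing a Selenium test at a shared service instead of Joey's laptop is mostly a matter of where the remote WebDriver endpoint lives. A minimal sketch, assuming the classic Sauce Labs "ondemand" hub URL of this era; check current docs for your account's region, and note the helper name here is made up:

```python
def sauce_hub_url(username, access_key):
    """Build a remote WebDriver endpoint for Sauce Labs' hub.

    Assumption: uses the classic ondemand.saucelabs.com endpoint with
    credentials embedded in the URL; verify against current Sauce docs.
    """
    return "http://%s:%s@ondemand.saucelabs.com:80/wd/hub" % (username, access_key)

# Hypothetical usage (requires the selenium package and real credentials):
# from selenium import webdriver
# driver = webdriver.Remote(
#     command_executor=sauce_hub_url("my_user", "my_key"),
#     desired_capabilities={"browserName": "firefox"},
# )
```

Because the endpoint is just a URL, switching a suite from a local Selenium server to a shared service is a one-line configuration change rather than a rewrite.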

A mechanism to trigger your tests

We see a lot of test teams "kicking off" tests manually. This is fine; there are lots of cases where you need to do it, but it's waaaay better when a continuous integration (CI) server does it for you. Getting your CI to manage your test execution is tricky. I'd love to tell you there's a simple ./make-it-so.py script, but alas, there is not. There are different types of builds, different deployment scenarios, and different types of check-ins that should fire off different types of tests. But the net-net is that you want your QA process seamlessly integrated into the development process. And increasingly, CI drives development. Consider this question: in five years, is continuous integration and automated deployment going to be more or less prevalent? The answer is yes, so why bring up the rear of the parade? Get out in front. The sooner your QA process is wired into your CI, the easier it'll be in the end.
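"Different types of check-ins should fire off different types of tests" is really just a policy table your CI jobs consult. A minimal sketch, with invented trigger labels and suite names; no CI server ships this vocabulary built in:

```python
def suites_for(trigger):
    """Map a CI trigger to the test suites it should fire.

    Hypothetical policy: the trigger names and suite names are
    illustrative, not any CI server's built-in terminology.
    """
    policy = {
        "commit":  ["smoke"],                               # every push: fast checks only
        "nightly": ["smoke", "regression"],                 # scheduled build: full sweep
        "release": ["smoke", "regression", "cross_browser"],# release candidate: everything
    }
    # Unknown triggers still get the cheap safety net.
    return policy.get(trigger, ["smoke"])
```

Keeping the policy in one place means that when you add a new suite, you decide once which triggers run it, instead of editing every job definition.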

A place to log and track defects

I'm going to move quickly through this point because most of you probably have this covered at some level. The one feature I've seen in the last few years that's been a huge boon to development is tying tickets to check-ins, so that you can see the code related to a ticket. This makes it far easier to find the business requirements associated with a ticket and cross-reference them against the code itself. If you can tie your test code to tickets in the same fashion, so much the better, because you've created visibility into both the tests and their revisions. I highly recommend selecting bug-tracking software that does this. It's absolutely worth the expense.
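The ticket-to-check-in link usually hinges on nothing fancier than ticket IDs in commit messages, which the tracker scans for. A minimal sketch of that scanning, assuming Jira-style keys like BUG-123; Bugzilla and others use different ID formats, so treat the pattern as an example:

```python
import re

# Matches Jira-style keys such as BUG-123 or QA-7 (an assumption, not
# a universal convention; Bugzilla, e.g., uses plain numeric bug ids).
TICKET_RE = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def tickets_in(commit_message):
    """Return the ticket IDs referenced in a commit message."""
    return TICKET_RE.findall(commit_message)
```

Once test-code commits carry the same IDs, the tracker can show a bug, its fix, and the test guarding against regression on one page.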

An army of human testers to test what can't be (or hasn't yet been) automated

In its heart of hearts, quality assurance is a request for an opinion that is inescapably a human observation. "Does it work as you expect?" can sometimes be described in a way that can be automated. Sometimes it can be described in a way that a tester with nominal knowledge of the system can test. Sometimes it takes a domain expert to tell if something works as expected. The point is, human testing always has been and always will be a part of QA. Anyone who says it can be completely automated is either an academic or a fool. So plan for it and work towards an end game that uses human testers in a productive and efficient way. We think it's helpful to break up the problem of organizing human-based QA work into three buckets using the following guidance.

  • a) Automate the simple stuff that's tedious or easy to maintain. Automate the routine stuff that won't break often but is catastrophic when it does. There's no shortage of people who will tell you that with a good framework and their secret sauce, you can automate everything. You can't, and more importantly, it's not worth it. Automation is fundamentally an economic problem, not an engineering problem. Only automate checks whose cost of automation is meaningfully less than the cost of simply running them by hand whenever needed. The bottom line? Don't get dragged into complex automation strategies. Automate the simple, routine stuff in a simple, routine way.

  • b1) Outsource the intermediate stuff: layout, copy, new features, and regression-testing the fix for a well-defined but complex bug. Outsourcing works great where cultural nuance, domain expertise, and a qualified point of view don't matter all that much. The overhead is much lower than it used to be, and in the sage words of Eric S. Raymond: "Given enough eyeballs, all bugs are shallow."

  • b2) Outsource the creation of simple automation -- you know, like writing your Sauce Labs tests. Just sayin'. When a well-defined bug gets fixed, have an outsourced team write an automated test that keeps checking it. Test harnesses with a thousand tests get built one test at a time. ***

  • c) Use your domain experts as big guns. Focus them on the hard stuff: subtle features that require an understanding of how the code works or how a customer sees the world. If your in-house QA engineers are testing whether your upload feature correctly rejects oversized files or disallowed formats, you are wasting valuable expertise. Again, it's helpful to think about testing as an economic problem and put a price on your top talent's time. Give your top dogs a fully loaded cost and socialize the notion that it's not a few hours, it's a few dollars. Remind people what they're asking for in economic terms: a detailed, cross-browser, multi-platform manual test by your in-house team is easily a $1,000 request. A simple question to pose: "Is that the best way to spend $1,000?"
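The economics in points (a) and (c) reduce to back-of-the-envelope arithmetic. A minimal sketch, with made-up function names and numbers, of the two calculations: when automation pays for itself, and what a manual cross-browser request actually costs:

```python
def worth_automating(build_cost, upkeep_per_cycle, manual_cost_per_run,
                     runs_per_cycle, cycles):
    """Break-even check: automate only when building and then maintaining
    the script costs less than running the check by hand every time."""
    automated = build_cost + upkeep_per_cycle * cycles
    by_hand = manual_cost_per_run * runs_per_cycle * cycles
    return automated < by_hand

def manual_test_cost(hours_per_platform, platforms, loaded_hourly_rate):
    """Fully loaded price, in dollars, of one manual cross-platform test pass."""
    return hours_per_platform * platforms * loaded_hourly_rate

# Illustrative numbers: 2 hours on each of 5 browser/OS combos at a
# $100/hr fully loaded rate is the $1,000 request from the text.
```

The exact dollar figures don't matter; what matters is that both decisions become explicit comparisons instead of gut calls.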

So, what now?

In conclusion, take time to work through your endgame. Almost all QA work is time-bound, meaning we all work as hard and as fast as we can testing until the clock runs out. If nothing explodes during testing, we ship. It's not fair, but that's the way it is. So if you don't block off some time each cycle to build toward a better way of doing things, you're selling yourself and your team short. Hopefully this post gave you a little perspective on what that endgame could look like. In the next post, we'll get into the specifics of some of the tools we've found most helpful and more about what works for us.

*** Look, QAOnDemand basically does B1 and B2 for a living. We see a lot of people's QA efforts. Our services are structured and priced to make it easy for you to say yes to modest outsourcing. We'll knock out your intermediate testing without crushing you with a big contract or a lot of overhead. Plus we offer a decent free trial with no strings attached, so give it a go. We'll also bootstrap your Sauce Labs environment for you if you haven't done so already; it's much easier to add to a working system once it's set up correctly. Net-net, a little help in the beginning goes a long way. OK, 'nuff said.
