Waiting for Green
Every now and then, you may hit a point where you need to stabilize your automated UI tests (for me, that point is now). Although you don't want to add to a framework you are stabilizing, you probably don't want to halt development on new features either. (Warning — telling your leadership team that no one is allowed to add more tests until everything goes green might not go over well.) So what do you do in the meantime? The answer is simple, and I look to some practices in Behavior-Driven Development (BDD) as a guide: build test skeletons into your current framework.
The First Rule of Stabilization: Don’t Create a Manual Test Suite
While you may temporarily need to revert to manual execution, that does not mean you should go back to a manual test suite, for a couple of reasons:
1. Having a manual test suite adds to the number (and types) of artifacts you have to track and maintain for a feature, which means more overhead.
2. At some point (once you are stable), you will have to take those manual tests and automate them. I speak from experience: converting is not easy (and is best avoided altogether). Most manual tests were not written with automation in mind, so you end up reinventing the wheel, rethinking how each test is architected and automated.

From a QA point of view, not much should change. Whether you practice BDD or not, the process is very much the same:
- Teams should still define test plans (driven by acceptance criteria), and continue to decide which tests should be unit, UI, etc. (pushing as much testing as possible to the unit level).
- DURING your sprint, you should still test your feature in every way you can think of (and manually run through your tests).
- Any test identified as an automated UI test should have a disabled test created with just a comment describing the goal of the test. (Depending on your framework, tagging may also be helpful here to assist with reports).
- While working on a feature, the scrum team owns ensuring all applicable test skeletons are created, and impacted tests reflect the changes.
- If you need a test run on, say, a release candidate, it should be easy to identify the test skeletons to be run manually (and still rely upon your green tests to help indicate release readiness).
Let’s take a look at what this could look like. Since my team uses Protractor, I’ll show a sample skeleton that is disabled, but easy to run manually.
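A minimal sketch of such a skeleton, assuming a Jasmine-style spec (the suite name, test title, and ticket wording are illustrative, not from a real project):

```javascript
// Sketch of a disabled Protractor/Jasmine test skeleton.
// Requires a Protractor/Jasmine runner; not runnable standalone.
describe('Course announcements', function () {
  // xit disables the spec so it appears as "skipped" in reports.
  // Tags in the title (#uat, #manual) let us filter report results.
  xit('should let an instructor post an announcement #uat #manual', function () {
    // Goal: verify a posted announcement appears on the course page.
    pending('To be automated - see tracking ticket'); // reason surfaces in Jenkins results
  });
});
```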
In this example, our team runs a report that looks for skipped tests (shown here as xit), and can refine the results by looking for tests with particular tags (in this case, #uat or #manual). I now know what needs to be executed manually for the time being, and I also have a quick list of what needs to be added to our backlog to automate once we are stabilized. I use pending('reason'), which also surfaces as a message in Jenkins when I'm digging through results. I also like to include the number of the ticket I'm using to track the work. As we automate a test and bring it to green, we enable the test, remove the #manual tag, and remove the skip message. If you are implementing BDD, this could simply be using your Gherkin template and not writing out your step definitions yet.
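A minimal Gherkin skeleton might look like the following (the feature, scenario, and ticket placeholder are illustrative, and this assumes your runner is configured to exclude @wip, e.g. via the tag expression `not @wip`):

```gherkin
@wip
Feature: Course announcements
  # Tracking ticket: <your ticket number here>
  Scenario: Instructor posts an announcement
    # Goal: a posted announcement appears on the course page.
    # Step definitions intentionally not written yet.
```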
Here, I have used the tag @wip (work in progress) so Cucumber will skip the test (as I’m not ready to let it fail on purpose). Now, I have a skeleton created for when I’m ready to add my step definitions. As above, I recommend adding a comment for any ticket you have created to track the work.
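Either style of skeleton can be picked up by a simple report. As a sketch of that reporting step, a small Node script (a hypothetical helper, not part of Protractor or Cucumber; the spec text and titles are made up) could scan spec source for skipped tests and filter by tag:

```javascript
// Hypothetical report helper: list skipped tests (xit) tagged #manual
// so they can be scheduled for manual execution.
const specSource = `
  xit('should save a draft post #uat #manual', function () {
    pending('To be automated');
  });
  it('should log in with valid credentials', function () {});
  xit('should export a report #manual', function () {
    pending('To be automated');
  });
`;

// Find xit titles, then keep only those carrying the #manual tag.
function findManualTests(source) {
  const skipped = source.match(/xit\('([^']+)'/g) || [];
  return skipped
    .map((m) => m.replace(/^xit\('/, '').replace(/'$/, ''))
    .filter((title) => title.includes('#manual'));
}

console.log(findManualTests(specSource));
```

A real report would read the spec files from disk and group results by tag, but the filtering idea is the same.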
The Second Rule of Stabilization: Don’t Create a Manual Test Suite
It is very easy to fall into the trap (and sometimes comfort) of going back to a manual world, but the overhead is costly. Although you may need to run some automated tests manually to make sure your feature works, avoid the expense of producing more manual artifacts, maintaining them, and converting them once you are nice, green, and stable.

Ashley Hunsberger is a Quality Architect at Blackboard, Inc. and co-founder of Quality Element. She's passionate about making an impact in education and loves coaching team members in product and client-focused quality practices. Most recently, she has focused on test strategy implementation and training, development process efficiencies, and preaching Test Driven Development to anyone that will listen. In her downtime, she loves to travel, read, quilt, hike, and spend time with her family.