Testing Documentation: Like Pulling Teeth
Last year our team was suddenly in the midst of not only a hiring blitz but also a cultural shift. As the new hires rolled in, we knew we had to get our act together and do something we all hated: testing documentation.
Here’s why testing documentation presented such a challenge, and how we overcame it.
“What Should We Write?”
The problem was deciding what we needed. We didn’t want transient testing documentation that would be useless by the next sprint, so we focused on the areas we expected to stay stable for the foreseeable future. Since we were also trying to change our testing culture, we wanted to ensure everyone was on the same page (including developers).
First, we started by making sure we all had a common understanding of our tests, what we called them, and what they were for. This took a surprisingly long time! If you ask a scrum team to agree on a single definition, you’ll find everyone has a different point of view. We had assumed we were all thinking the same thing! As we sat down and really defined our tests, we realized we had different ideas of what “key workflow” meant, or what an E2E test should cover. So, we went to work on understanding our suites. For each suite, we established:
- Suite name - What are we going to call these tests?
- Goal - What is the purpose of the suite? For example, the unit suite was defined as verifying code correctness.
- Triggers - When are the tests run?
- Gate conditions - What happens if the test fails?
- Requirements - What are the guidelines? Are there timing restrictions? What about test size? Are there environmental needs?
- Tools - How are we writing these tests?
- Where the tests live - Where can you find the tests? Are they near the code or in a separate repo?
- Responsible group - Who owns and is responsible for the tests?
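To make the template concrete, here is a minimal sketch of those fields captured as a data structure. The field names follow the list above, but the example suite, its values, and the `SuiteDefinition` name are all hypothetical, not a description of any team's actual setup:

```python
# Illustrative only: the suite-definition fields above, as a small structure.
# All concrete values below are invented examples.
from dataclasses import dataclass

@dataclass
class SuiteDefinition:
    name: str              # Suite name: what are we going to call these tests?
    goal: str              # Goal: the purpose of the suite
    triggers: list         # Triggers: when the tests run
    gate_conditions: str   # Gate conditions: what happens if a test fails
    requirements: list     # Requirements: timing, size, environment guidelines
    tools: list            # Tools: how we write these tests
    location: str          # Where the tests live
    owner: str             # Responsible group

unit_suite = SuiteDefinition(
    name="unit",
    goal="Verify code correctness at the function/class level",
    triggers=["every commit", "pre-merge CI"],
    gate_conditions="Any failure blocks the merge",
    requirements=["under 100 ms per test", "no network or database access"],
    tools=["pytest"],
    location="alongside the code, in a tests/ folder next to each module",
    owner="Feature scrum team",
)
```

Whether this lives in a wiki table, a YAML file, or a page per suite matters less than that every field has an agreed answer.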
We found this first document to be extremely useful as we dove into deeper guidelines and worked with teams to meet the standards we established. We have since found ourselves referring back to this guide for training and reviews. I think it’s all too easy to fall into a “test everything” trap, and these guidelines have given us clear direction as to what should go into each suite (and more importantly, what should not).
Going from Feature to Test...
Another artifact we found useful was more process-related: how to go from a feature or story assigned to your team to the tests that cover it. We wrote it primarily for new hires, but it has also been useful for showing how to work with the scrum teams. Having examples in the context of our product has been so helpful to our team. The page linked to other areas, like a test approach template, some deeper agile practices we follow, and non-functional requirements, but overall we wanted one page that pulled together how to get from the inception of a feature to testing. This document outlines:
- Understanding the epic - What are the milestones? This should not be done in isolation.
- Understanding the stories - How to write user stories, agreeing on the definition of done, acceptance criteria, etc.
- What to do during the sprint (writing tests) - Working with the teams (and not just a tester in isolation) to understand what should be tested, and how. Is it unit? Integration? Manual?
- Best practices - While the document above goes into specific requirements for each suite, are there other general best practices every test should follow? DRY? SOLID? Especially for newer people (or those new to automation), it’s wise to remind them of good concepts in testing and automation.
- What not to do - It’s amazing what we assume to be common knowledge, and yet some people just don’t know what to avoid. We have seen plenty of requests come our way that we would not consider good candidates for automation. If you have examples, list them! (Especially within the context of your product!)
- Other testing strategies - Beyond the general feature, are you accounting for accessibility? Mobile? Localization? While you may not write them out here, handy links to other strategies can be useful.
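As one concrete illustration of the “best practices” bullet above, here is a hypothetical sketch of DRY applied to test code: shared setup is pulled into a single helper instead of being copy-pasted into every test. The `make_user` and `can_post` names are invented for illustration, and `can_post` is a minimal stand-in for real code under test:

```python
# Hypothetical example of the DRY best practice in test code.

def can_post(user):
    # Minimal stand-in for the production code under test.
    return user["active"] and user["role"] in ("author", "admin")

def make_user(name="ada", role="author", active=True):
    """One place to build a valid test user; each test overrides only
    the fields it actually cares about."""
    return {"name": name, "role": role, "active": active}

def test_active_author_can_post():
    assert can_post(make_user()) is True

def test_inactive_user_cannot_post():
    assert can_post(make_user(active=False)) is False

def test_admin_can_post():
    assert can_post(make_user(role="admin")) is True
```

When a new field is added to users, only `make_user` changes, not every test, which is exactly the kind of habit worth spelling out for people new to automation.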
I get it. Engineers don’t love to document. It’s like pulling teeth. But when you are in the midst of trying to change the way people think, or you’re hiring what feels like a small army, it helps to have handy references. I also love just being able to send someone a link and say, “Read this!” when they ask how to do something (after a few training sessions, to boot). Testing documentation is a necessary evil, and it will help you and your team in the long run. Happy writing!
Ashley Hunsberger is a Quality Architect at Blackboard, Inc. and co-founder of Quality Element. She’s passionate about making an impact in education and loves coaching team members in product and client-focused quality practices. Most recently, she has focused on test strategy implementation and training, development process efficiencies, and preaching Test Driven Development to anyone that will listen. In her downtime, she loves to travel, read, quilt, hike, and spend time with her family.