
Posted September 3, 2015

Can You Test it All? Test Coverage vs. Resources


During nearly every project I have worked on, the question "Can I test everything?" always comes up. The answer is (usually) a resounding NO. Sometimes it's because of time; sometimes it's a lack of people. How can we still ensure a quality product if we can't cover it all? Sometimes, we have to test smarter.

The usual suspects

The typical scramble to finish testing and get something released is usually (in my experience) a result of one of the following, or a combination thereof:

- **User stories that are WAY too big.** When user stories are too large, it is difficult to break out tasks and identify all the acceptance criteria. Oversized stories also make it harder to plan for unforeseen scenarios, and can blow estimates out of the water.
- **Complex workflows.** Depending on your feature, the workflow could be very complicated, and it can be difficult to anticipate how a user is actually going to use the product. This makes it more challenging to find every possible scenario for end-to-end tests. Even if your user stories are small, the overall workflow spanning all of them can still result in missed tests if it is too complex.
- **Not using Test Driven Development.** If you are still living in a world where Development works on its own and throws code over the proverbial fence to QA, you are opening the door to late surprises and to blocking bugs that hinder your testing progress.
- **Date-driven releases.** Have you ever worked on a project where someone has already told stakeholders it will be delivered by a certain date? A certain aggressive date? I have. These projects are no fun. If Development is behind schedule but you still have to release on that date, how does that impact testing?
- **Supporting too much.** Thinking about everything in a project that might have to be supported makes my head spin. Considering the number of versions of a product, browsers (and browser versions), mobile devices, operating systems, and so on can give anyone a headache. Is it really feasible to get 100% test coverage across everything you say you support?

Then what?

The list seems stacked against QA's ability to provide adequate coverage. But, in reality, do we need to cover everything? Not if you are testing smart.

- **Make those user stories small!** First and foremost, if stories are small enough, it is far easier to identify acceptance criteria and ensure coverage (at least for that isolated feature). You should still aim for the testing triangle: mostly unit tests, some integration tests, very few UI tests.
- **Understand your hero workflows!** Everyone is different, and there is no way to predict (in full) how your users will interact with the system. But you CAN know what most users will do. Have you talked to your designer or UX lately? You should. Understanding these workflows helps identify what you really need to cover (and hopefully automate), leaving the wonky scenarios for exploratory testing.
- **The Pareto Principle.** Test smart. What are your areas of biggest risk? Ask yourself: how can I do as little testing as possible and still uncover the most bugs? Roughly 20% of your tests should find 80% of your bugs. If an area has had few bugs in the past, do you want to put too many resources into it? Or would you rather focus your resources on an area that is consistently a problem?
- **Big Data.** I have to admit, I used to cringe when I heard this buzzword, until it helped me. Finding out what clients were doing, what browsers they were using (hint: not IE), and what they were clicking helped shape my test strategies. I then knew where to focus my testing, and my team, to make the most of the time we had.
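As a rough illustration of the Pareto idea, risk-based prioritization can be sketched in a few lines: rank product areas by historical bug counts and run the highest-risk ones first until a chosen share of past bugs is covered. The module names and bug counts here are invented for the example; your own bug tracker (or analytics data) would supply the real numbers.

```python
# Hypothetical bug counts per product area, pulled (in real life)
# from your bug tracker. These numbers are made up for illustration.
bug_history = {
    "checkout": 42,
    "search": 31,
    "profile": 4,
    "help_pages": 1,
}

def prioritize(history, coverage_budget=0.8):
    """Return the smallest set of areas, highest-risk first, whose
    historical bugs make up at least `coverage_budget` of the total."""
    total = sum(history.values())
    ranked = sorted(history.items(), key=lambda kv: kv[1], reverse=True)
    selected, covered = [], 0
    for area, bugs in ranked:
        if covered / total >= coverage_budget:
            break
        selected.append(area)
        covered += bugs
    return selected

print(prioritize(bug_history))  # → ['checkout', 'search']
```

With these numbers, two of the four areas account for over 90% of historical bugs, so a tight schedule would spend its testing time there and leave the quiet areas to lighter, exploratory passes.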

It’s a Team Thing

No, we'll never be able to test it all. But we can plan, and test, smarter. Remember, though, that determining the right coverage is the team's responsibility, not just yours as the tester. We can drive conversations and make recommendations, but the team should decide when we can't test it all, and what we should test.

Ashley Hunsberger is a Quality Architect at Blackboard, Inc. and co-founder of Quality Element. She’s passionate about making an impact in education and loves coaching team members in product and client-focused quality practices. Most recently, she has focused on test strategy implementation and training, development process efficiencies, and preaching Test Driven Development to anyone that will listen. In her downtime, she loves to travel, read, quilt, hike, and spend time with her family.
