How do you know if you are wasting your time? What tests are effective and which ones are done because “That’s the way we do things”?
Tracking bug metrics can bring testing holes to light.
You know that feature you worked on, the one you were so proud of? It was all fresh and shiny, like a red Porsche. But when it finally hit the app store, the ratings were nothing like you hoped.
You ask yourself: How did it happen? Why are the customers finding so many bugs? What are we missing?
It sounds like you’ve got some holes in your testing!
You’ve probably heard that the definition of insanity is doing the same thing and expecting different results. Have you ever felt like you are on a development team that practices this? While change can be hard to accept, it can be even harder to implement. How do you convince people they need to change?
That’s right! It’s all about the data. It’s pretty hard to argue when the facts are staring you in the face. Being able to pull valuable metrics from your bug tracking tool is a simple, powerful way to discover where change is needed.
It’s important to understand the different ways bugs are discovered for your product. As we’ve noted above, clients are definitely a source. That’s one.
Let’s look at a simple development life cycle of a feature and see what else could trigger a defect report:
First, your developers write the code, along with unit and integration tests that flag issues during continuous integration.
When stories are complete, they might go through a design audit and a smoke test by the QA team.
The team is also writing additional automated scripts, such as UI tests against common workflows to catch future regressions.
When the feature is complete, additional exploratory tests are run by QA, plus maybe a large group Bug Bash (see below).
Finally, the feature is released to the clients.
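One lightweight way to make these stages trackable is to require an issue-source field on every bug ticket. Here is a minimal sketch in Python; the stage names are illustrative, not tied to any particular bug tracker:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical lifecycle stages where a bug can be found;
# adjust these to match your own process.
class IssueSource(Enum):
    CONTINUOUS_INTEGRATION = "ci"   # unit/integration tests in CI
    DESIGN_AUDIT = "design_audit"   # QA design review
    SMOKE_TEST = "smoke_test"       # QA smoke test
    AUTOMATED_UI = "automated_ui"   # UI regression scripts
    EXPLORATORY = "exploratory"     # manual exploratory testing
    BUG_BASH = "bug_bash"           # whole-team bug bash
    CLIENT = "client"               # found in production by clients

@dataclass
class BugTicket:
    ticket_id: str
    summary: str
    source: IssueSource  # mandatory: no ticket without a source

ticket = BugTicket("BUG-101", "Crash on login", IssueSource.CLIENT)
print(ticket.source.value)  # client
```

Making the field mandatory is the key design choice: a report is only as good as the data behind it, and an optional field quickly fills up with blanks.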
Ok, so now we know when bug tickets are created. Are you tracking this? Why not?
I worked on a project that had a heavy investment in automated testing — so much so that it took the largest chunk of our budget. Our mantra was Everything Must Be Automated. We were successful — practically every test was automated. But it didn’t seem to affect the quality bottom line from our client’s perspective.
So, we started doing a deeper analysis of our bugs. Luckily, we had a mandatory field in our bug tickets, Issue Source, which identified where in the lifecycle each bug was found. By running a report showing the percentage of bugs found per stage, we could easily see the most effective sources of bug identification. This simple field allowed us to discover that our automation was not pulling its own weight. Why was our heaviest investment providing the lowest return?
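A report like this, the percentage of bugs found per stage, takes only a few lines once the source field is mandatory. Here is a sketch with made-up ticket counts (the numbers are illustrative, not the project's actual data):

```python
from collections import Counter

def bugs_per_stage(sources):
    """Return {stage: percentage of all bugs found at that stage}."""
    counts = Counter(sources)
    total = sum(counts.values())
    return {stage: round(100 * n / total, 1) for stage, n in counts.items()}

# Illustrative data: most bugs here come from clients and exploratory
# testing, while the automation ("ci") finds comparatively few.
sources = ["client"] * 40 + ["exploratory"] * 30 + ["bug_bash"] * 20 + ["ci"] * 10
report = bugs_per_stage(sources)
print(report)
# {'client': 40.0, 'exploratory': 30.0, 'bug_bash': 20.0, 'ci': 10.0}
```

A distribution like this one, where production finds four times as many bugs as the automated suite, is exactly the kind of result that forces a conversation about where the testing budget is going.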
This forced us to re-evaluate our automation strategy. We identified where our tests were weakly designed and implemented new best practices to address those weaknesses. (Stay tuned for another blog about the actual values of the metrics analysis.)
With a baseline established from the Issue Source reports, we could monitor the percentage of bugs reported at each stage and determine whether our new practices were successful.
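Monitoring against the baseline can be as simple as comparing per-stage percentages across two reporting periods. A hypothetical sketch (all numbers invented for illustration):

```python
def stage_deltas(baseline, current):
    """Percentage-point change per stage between two reports."""
    stages = set(baseline) | set(current)
    return {s: round(current.get(s, 0.0) - baseline.get(s, 0.0), 1)
            for s in stages}

baseline = {"client": 40.0, "exploratory": 30.0, "bug_bash": 20.0, "ci": 10.0}
current  = {"client": 25.0, "exploratory": 30.0, "bug_bash": 15.0, "ci": 30.0}

deltas = stage_deltas(baseline, current)
# A drop in client-found bugs alongside a rise in CI-found bugs
# suggests the new automation practices are paying off.
print(deltas["client"], deltas["ci"])  # -15.0 20.0
```

The direction of the shift matters more than the absolute numbers: you want bug discovery to move earlier in the lifecycle, away from the client.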
A great side benefit of using metrics is the ability to determine if a new process works.
I mentioned the Bug Bash as an Issue Source field option. My product development group is constantly reviewing the latest testing trends to bring new ideas to the team. You might have read a blog post by Ashley Hunsberger about Bug Bashes. This is where you dedicate a couple of hours for the whole development group, including designers, developers, testers, and anyone who wants to join in, to pound on a feature.
We tried out the Bug Bash as an experiment. The first time, it turned out to be the most successful tool in the QA toolbox. Once we implemented it on a regular basis, we were able to use the metrics from the results to determine when it is most useful in the process, and what tweaks we might make to keep it successful as a practice.
Get to know your metrics. They are powerful.
Knowledge is Power!
Joe Nolan is the Mobile QA team lead at Blackboard. He has over 10 years of experience leading multinationally located QA teams, and is the founder of the DC Software QA and Testing Meetup.