Sure, you see your bug reports in JIRA, but how do you actually know the level of quality in your apps and processes? Bug count metrics are a great starting point, but if you really want to know whether your team is producing a quality app and improving its internal processes, you need to look to other tools to see how your product is trending.
If your internal processes and test coverage are poor, that will most likely translate into a poor app. Get your house in order with the following:
Start by watching your automated build and test results, using a tool like Sauce Labs.
Use a tool like Datadog to decide what you want to measure, monitor trends in build failures from automated test results, and analyze how long deployments take. You can use this information to determine direct and indirect impacts on the different teams.
Determine the most important things to include in your CI and automated test coverage, and set a coverage percentage goal to strive for.
Monitor your bug backlog. Is it accumulating per release?
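As a quick sketch of the backlog check above: assuming you can export per-release opened/closed bug counts from your tracker (the release names and numbers here are illustrative), you can compute a running backlog and see whether it is accumulating.

```python
# Hypothetical per-release bug counts -- in practice these would come from
# a JIRA export or API query; the versions and numbers are made up.
releases = [
    {"version": "1.0", "opened": 40, "closed": 35},
    {"version": "1.1", "opened": 30, "closed": 32},
    {"version": "1.2", "opened": 45, "closed": 28},
]

def backlog_trend(releases):
    """Return the running open-bug backlog after each release."""
    backlog, trend = 0, []
    for r in releases:
        backlog += r["opened"] - r["closed"]
        trend.append((r["version"], backlog))
    return trend

print(backlog_trend(releases))
# A rising final number means bugs are piling up faster than they are fixed.
```

If the last backlog figure keeps climbing release over release, that is your signal to slow feature work and pay down the debt.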
Keep track of your system availability for customers. Use a tool like Crashlytics to see who is using the application and to gather real-time data about the app’s performance and problems.
How are the scrum teams doing? Are they improving their ability to produce more story points per sprint as they get used to working together and with their features?
Review the metrics from each of these to determine whether you are treading water or improving. Here is a shot of Datadog in action on a live system (the names were changed to protect the innocent):
The easiest and most obvious place to see your app’s quality is through the app store. Just look at the ratings. But ratings don’t always tell the whole story. Review all of the comments, and keep a running tally of things like positive vs. negative comments. Also note whether each comment is a request for new functionality or actual bug feedback. Compile these metrics by release version and use them to spot trends. Did some features cause more negative feedback than others?
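One lightweight way to turn raw review comments into the per-release tallies described above is a simple counter keyed by version. This is only a sketch; the categories and review data are illustrative, and in practice you would classify each comment yourself or with a support tool.

```python
from collections import Counter, defaultdict

# Illustrative data: (release_version, category) pairs, where each comment
# has already been classified as positive, negative, or feature_request.
reviews = [
    ("2.0", "positive"), ("2.0", "negative"), ("2.0", "feature_request"),
    ("2.1", "negative"), ("2.1", "negative"), ("2.1", "positive"),
]

def tally_by_release(reviews):
    """Count comment categories per release so releases can be compared."""
    tallies = defaultdict(Counter)
    for version, category in reviews:
        tallies[version][category] += 1
    return tallies

tallies = tally_by_release(reviews)
print(tallies["2.1"]["negative"])  # prints 2
```

A table of these counts per release makes it easy to spot a version where negative feedback spiked.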
Along the same lines as the feedback found in the app store, my team has a built-in feedback feature for users. This lets users send comments, questions, and issues directly to the team. While much of this is handled by the customer support team, the QA team reviews each item and produces the same types of metrics gleaned from the app store.
If your company has a user forum where you can see the conversations, take the time to monitor them. Are there bloggers out there reviewing your product? Follow them. This is where you will find the super users who can really dig out the issues, from usability problems to bugs, while also providing useful tips for improvement. Make use of social media sites like Reddit: it only takes a couple of users broadcasting a negative post or tweet to expose the quality of the app. Or use a tool like Brandwatch, which monitors social feeds so you can get a feel for the ratio of positive to negative feedback.
Think your app is doing well based on the feedback? Good rating in the app store? Check out your competitors. Not only can you do a good side-by-side comparison, but you can also review the comments on their pages to see if people are saying ‘much better/worse than x’.
Your client support team has the best pulse on the app. They are the first to be hit with customer feedback, though unfortunately it’s mostly negative at that point. They can provide metrics on the volume and criticality of the calls and issues they receive.
I’m saving my favorite for last — bug bash feedback from your internal teams. In Ashley Hunsberger’s blog post Testing from a Different Point of View, she discusses bug bashes, or as she calls them, “Exploratory Testing on Steroids.” Bug bashes are exploratory test charters in which members of your scrum team, plus anyone else (like customer support), can join in and run tests during a large-scale group test. These sessions are great because they surface bugs that all of your other QA processes have missed, and they call out things like usability problems. (As a bonus, they give participating teams early training on, and visibility into, the app!) They are most beneficial when you set up a survey to be completed immediately after the test. Keep it simple, and use rating scales:
How would you rate the app’s quality?
Do you think customers will like the features?
Is the app ready for release?
If you were rating the app in the app store, what would you give it?
Allow for comments.
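A minimal sketch of aggregating those survey answers: assuming each question is rated on a 1–5 scale, averaging the scores per question gives a number you can compare across bug bashes. The question keys and responses below are made up for illustration.

```python
# Illustrative bug bash survey responses on a 1-5 scale, keyed by question.
# Keys loosely mirror the survey questions above; the data is hypothetical.
responses = {
    "app_quality":     [4, 3, 5, 4],
    "customer_appeal": [5, 4, 4, 3],
    "release_ready":   [3, 3, 4, 2],
    "store_rating":    [4, 4, 5, 3],
}

def survey_summary(responses):
    """Average each question's ratings so bash results can be trended over time."""
    return {q: sum(scores) / len(scores) for q, scores in responses.items()}

print(survey_summary(responses))
```

Tracking these averages bash over bash tells you whether the team’s own confidence in the app is rising or falling ahead of a release.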
Pull all of this information together. Assign a rating system to each source to give you an overall quality number. Create simple dashboards that show the trends and allow you to drill down. Use your imagination! I’d love to hear what other methods you use to track the quality of YOUR apps.

Joe Nolan (@JoeSolobx) is a Mobile QA Team Manager with over 12 years of experience leading multinationally located QA teams, and is the founder of the DC Software QA and Testing Meetup.