My niece and her kids were visiting from out of town last week. Somehow I ended up in charge of the two boys, six and eight years old. It all started nice and quiet, everyone playing on their devices. Next thing I know they are wrestling. Heads are just missing table corners and I’m freaking out! Don’t they know the risks? As a lead on a QA team developing mobile apps, I am constantly evaluating risk. What is the risk to the schedule? To the end users? While (hopefully) nobody on a software project will be bloodied like two wrestling boys, risks still need to be accounted for.
Risks are evaluated differently by different people, and one person cannot identify all of the risks. So, how does risk identification happen?
One good opportunity is during sprint planning. Hopefully you are using an estimation technique such as planning poker with Fibonacci-sequence values. This lets developers, designers, and QA communicate about the stories. If someone’s estimate is noticeably higher than the rest, that is definitely a flag that they see a risk the others are not considering. The best way to trigger the thought process is to ask the group directly: “Does anyone see any risks involved with this story?” Developers can identify areas of functionality that are expected to be difficult. Designers might discuss the features most important to users and the impact if a feature isn’t working correctly. QA should take this feedback and emphasize these areas during testing.

Creating a checklist of questions for the QA team to consider always helps. If past surprises have impacted your testing, include them in your plan. For example, if you are relying on DevOps to stage some servers and you know they have a backlog, mark this as a risk.

Finally, record all of the risks and their mitigation strategies on a wiki or somewhere the team can easily review them. While each risk should also be noted in the related story ticket, a central page gives those management types easy visibility.
You asked for it, you’ve got it. Based on my experience in testing our company’s apps I’ve compiled a list of areas that get my risk sense tingling.
Data Security - Will the app use sensitive data? You might need additional expertise from someone to test this.
Performance - Is there a lot of data being passed? Should the team account for testing different levels of network latency? This might mean that additional test infrastructure and knowledge is needed.
Usability - When reviewing the design, does it seem like someone would actually use this app? Is it simple to use? QA will likely use the app more than anyone during development; if it isn’t easy for them to use, that’s a risk not only that users won’t like it, but also that the team’s own work will be slowed.
Quality - What is the risk if we release the app with X number of bugs? Will customers abandon it? Agree on the QA exit criteria up front; otherwise, continuing bug fixes become a risk to the schedule.
Team Capacity Focused
Team Skills - Does my team have the expertise to test this app? Have they done mobile testing before? Are they product SMEs?
PTO - Is anyone planning to take off during the sprint? Are they critical to the team? Can someone cover for them?
Test Automation Focused
Supported Devices - How many device types and operating systems must be supported? Do we have devices in inventory or should we use a mobile device cloud solution? Every supported platform/OS combination adds not only to the schedule but also to the automation scripts that must be maintained.
Reusable Components - Can we reuse code from previous tests?
New Technology - Are the developers inventing some crazy new animations that might be tough to automate against?
Interaction with the Device - Will we need to do things like file upload/download, or interact with the camera? Using natively supported apps to interact with our app can add to the complexity of the tests, along with having to make sure each device has the correct apps and versions installed.
Special Setup - Will some tests have special data needs? Any special setup, such as authentication methods or network latency?
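To make the device-matrix multiplication concrete, here is a minimal sketch; the device names, OS versions, and the helper function are all hypothetical, chosen just to show how the combinations a suite must cover grow:

```python
from itertools import chain

# Hypothetical support matrix; real projects would pull this from a
# requirements doc or device-cloud configuration.
devices = ["Pixel 8", "Galaxy S23", "iPhone 14", "iPhone SE"]
os_versions = {"Android": ["13", "14"], "iOS": ["16", "17"]}

def build_matrix(devices, os_versions):
    """Expand the supported device/OS combinations into individual runs."""
    matrix = []
    for device in devices:
        # Crude platform detection, fine for this illustration only.
        platform = "iOS" if device.startswith("iPhone") else "Android"
        for version in os_versions[platform]:
            matrix.append((device, platform, version))
    return matrix

matrix = build_matrix(devices, os_versions)
print(len(matrix))  # → 8
```

Every device or OS version added to those lists grows the matrix multiplicatively, which is why each newly supported combination is a schedule risk rather than just another line item.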
Manual Testing Focused
Visual Inspection - Will the app require a lot of manual testing for things that require “eyes on,” such as usability or live video? This will increase the risk to the schedule.
Dependent on Other Systems - Will the app rely on other systems to work? Do we have those in place? If not, and if you don’t have control of them, expect delays and issues with the interactions between systems.
Versioning - Do you have to worry about the app working with different versions of the connected infrastructure? Do you have access to each of these versions? Each component in the infrastructure has its own version, which can easily impact the schedule when you try to coordinate testing.
Painful Tests Focused
Dates - Dates are always a pain to test. If you see any dates in the app design, expect aggravation. Do you have tools in place to assist with date manipulation for testing, or do you need to build them?
Time - Did I say dates were a pain? If you see time being used in a design, ask yourself if your app needs to take into account things like time zones.
Authentication Methods - For our app, which allows users to log in and interact with their own system on their servers, we have to account for different authentication methods. If the tests require special setup, anticipate problems.
Deadline - If you have a deadline and are being squeezed, expect that you won’t be able to do all the thorough QA you’d like. At this point you need to identify the riskiest areas of the app and prioritize them.
On Other People or Groups - If your project depends on others you have no control over, your schedule could be at risk. This is one of the biggest risks I’ve experienced.
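As a small illustration of why dates and times earn their own risk lines, the sketch below (Python 3.9+ for the standard-library zoneinfo module; the scenario itself is invented) shows that the “same” local time names two instants 14 hours apart:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library, Python 3.9+

# "Midnight on March 1" is a different instant in every time zone.
# The zone names are real IANA identifiers; the scenario is illustrative.
ny_midnight = datetime(2024, 3, 1, 0, 0, tzinfo=ZoneInfo("America/New_York"))
tokyo_midnight = datetime(2024, 3, 1, 0, 0, tzinfo=ZoneInfo("Asia/Tokyo"))

# Normalized to UTC, the two "midnights" are 14 hours apart (New York is
# UTC-5 before its March DST switch; Tokyo is UTC+9 year-round). A test
# that compares naive local times would call these equal.
diff_hours = (ny_midnight - tokyo_midnight).total_seconds() / 3600
print(diff_hours)  # → 14.0
```

Note that the gap itself shifts when daylight saving time kicks in, which is exactly the kind of moving target that makes date and time logic worth flagging as a risk during planning.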
This is just a starting point. As you can see, risks can come at you from many different angles. The better prepared you are up front, the easier the risks are to mitigate.
Just remember, you are not little boys happily oblivious to risks. If you choose to ignore them, someone might end up with a bump on his head!
Joe Nolan is the Mobile QA team lead at Blackboard. He has over 10 years’ experience leading QA teams located across multiple countries, and is the founder of the DC Software QA and Testing Meetup.