A Functional Tester Looks at Performance

Even if you aren’t directly responsible for performance, it is important to consider it under the umbrella of quality. As a tester, how do you move forward and help drive performance quality (especially when you are new to this area, like me)? What are the ramifications of not considering performance within QA? Let’s take a look at what performance is, the questions QA can ask during design and implementation, some of the types of testing that can be done, and making performance part of your acceptance criteria (and, therefore, part of your Definition of Done).

What is software performance, and why is it important?

As an end user, I think of performance as simply how fast or stable something is. If I click on something on a website, does it take forever to load? Does my app crash every time I try to open it or submit something? Do I give up and find a better solution to meet my needs? Of course we want a feature to work, but do we think about the system holistically?

I can tell you now that if a website or app that I am using crashes, I instantly think that the quality is just not there. If I have a choice in what I use, I quickly delete it and find another that does work. You may be tied into an app and not have a choice, but your opinion of that app (and the company) can quickly plummet based on stability alone.

Although performance is multi-faceted, some basic topics to think about include:

  • Response time – How quickly does the system react to user input?
  • Throughput – How much can the system accomplish in a specified amount of time?
  • Scalability – Can the system increase throughput under an increased load when resources are added?
  • Availability – Is the system available for service when requested?
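To make the first two topics concrete, here is a minimal sketch (in Python, with a stand-in workload, since no real system is named here) of how response time and throughput can be measured for any callable operation:

```python
import time

def measure(operation, runs=100):
    """Time repeated calls to a callable and report basic performance numbers."""
    latencies = []
    start = time.perf_counter()
    for _ in range(runs):
        t0 = time.perf_counter()
        operation()                                  # the work under test
        latencies.append(time.perf_counter() - t0)   # response time of one call
    elapsed = time.perf_counter() - start
    return {
        "avg_response_time_s": sum(latencies) / runs,
        "max_response_time_s": max(latencies),
        "throughput_ops_per_s": runs / elapsed,      # work completed per second
    }

# Hypothetical workload standing in for a real request
stats = measure(lambda: sum(range(10_000)))
```

Against a real application you would replace the lambda with an actual request, but the same numbers (average and maximum latency, operations per second) are what you compare against your requirements.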

To retain customers, you must consider performance as part of overall quality.

Understanding performance during development

The problem I’ve seen is that performance is always deemed important, but it is not necessarily addressed up front. All too often I recall discussing performance long after a feature was coded and tested. It was pushed to the end, and it is difficult to make features meet performance expectations after they’ve been built. That was hard enough in a Waterfall world, but how do you handle what was once an afterthought as more and more companies move toward a Continuous model? Performance needs to be considered first and understood by the whole team.

Here are some sample questions to consider DURING design and development, to help ensure you are thinking about performance needs early as you work through your non-functional requirements (remember: discuss these as a team and get guidance on what is expected):

Response Time
  • What is the acceptable waiting time for users?
  • Do we need to consider users on various devices and connection speeds? Do we need to simulate slower speeds? Some users may be on modern desktops or laptops with high-speed Internet, or on modern mobile devices with 3G and above, but others may not.
  • Example – Changing a password. How long before I can expect the change to take effect? Do I need to show progress feedback?

Data Volume
  • How do we ensure that data volume does not impact the user experience?
  • What are the maximum and typical volumes of data involved?
  • Example – A page that lists users. Do I show all 20,000 users in the system, or just the first 25? How long does it take? Can I perform other actions while the list loads? Do I see a blank screen while I wait?

Caching
  • Cache is king: queried and calculated data can be reused to eliminate duplicate work.
  • When do we need to invalidate the cache? Which data cannot tolerate staleness?
  • How long can the cache live?
  • Does cache staleness impact the whole system, or only the user’s session?
  • Example – Notification badges. How long do notifications persist once a user has viewed them, or do they appear only on first login? If the cache is stale, does it affect only the authenticated user?
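The caching questions above can be illustrated with a minimal sketch. This is a hypothetical time-based cache written in Python for this article, not any particular library: entries expire after a TTL, which is one simple answer to “how long can the cache live?”

```python
import time

class TTLCache:
    """Minimal time-based cache: entries expire after ttl_seconds."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, stored_at)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]  # stale: invalidate and force a fresh query
            return None
        return value

cache = TTLCache(ttl_seconds=0.05)
cache.set("badge_count", 3)
assert cache.get("badge_count") == 3    # fresh read is served from cache
time.sleep(0.06)
assert cache.get("badge_count") is None  # expired: caller must recompute
```

Choosing that TTL is the design question: a longer TTL saves duplicate work but means users may see stale data (like an outdated notification badge) for longer.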

Testing performance

There are several types of testing that can help ensure your apps are performing as expected. Please note that this is not a comprehensive list, just a high-level overview to get you started (summarized from http://goo.gl/RC4AaS and http://goo.gl/5ukqAi):

  • Load – The application is tested for response times during normal and peak usage. How does the app respond with a few users completing a few interactions vs. thousands of users completing thousands of interactions at a time?
  • Stress – Finds ways to break the system by increasing the load. Start with a good benchmark (identified during your load testing) and increase the load until you see which components start lagging and fail first.
  • Volume – Tests whether application performance degrades as data volume grows. Do you access the database directly? How does it handle the query when there are millions of records?
  • Reliability/Recovery – If your app does fail, testing shows whether and how it recovers, and how long it takes to return to an acceptable state.
  • Scalability – Tests whether your app’s performance improves when you add resources (hardware, memory, etc.).
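A basic load test can be sketched in just a few lines. The example below is hypothetical Python: fake_request stands in for a real HTTP call, and a pool of concurrent “users” each issue several requests while latencies are collected for a 95th-percentile check:

```python
import concurrent.futures
import statistics
import time

def fake_request():
    """Stand-in for a real HTTP call to the system under test."""
    time.sleep(0.001)
    return 200  # pretend status code

def load_test(users, requests_per_user):
    """Run concurrent 'users', each issuing several requests; collect latencies."""
    latencies = []

    def one_user():
        for _ in range(requests_per_user):
            t0 = time.perf_counter()
            status = fake_request()
            latencies.append(time.perf_counter() - t0)
            assert status == 200  # every request must still succeed under load

    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        for future in [pool.submit(one_user) for _ in range(users)]:
            future.result()

    return {
        "requests": len(latencies),
        "p95_s": statistics.quantiles(latencies, n=20)[18],  # 95th percentile
    }

result = load_test(users=10, requests_per_user=5)
```

Raising the number of users until the p95 latency or error rate degrades turns the same harness into a rough stress test, and recording which component fails first is exactly the benchmark the table above describes.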

Improving performance quality faster

It’s time to stop pushing performance to the end and hoping for the best. As stories are designed, add performance to your acceptance criteria. Make sure that everything in your acceptance criteria is marked as complete (part of your Definition of Done).

As with anything, the longer you put something off, the more difficult (and/or expensive) it is to implement later. Be proactive, and build performance in.

Ashley Hunsberger is a Quality Architect at Blackboard, Inc. and co-founder of Quality Element. She’s passionate about making an impact in education and loves coaching team members in product and client-focused quality practices. Most recently, she has focused on test strategy implementation and training, development process efficiencies, and preaching Test Driven Development to anyone that will listen. In her downtime, she loves to travel, read, quilt, hike, and spend time with her family.
