What is an API?
Application programming interfaces, or APIs, provide a means by which software systems can communicate with one another. More specifically, an API is a mechanism that applications can use to access data or leverage functionality managed by another system.
APIs are services consumed by developers. From the consumer's perspective, an API is a set of procedures that lets development teams access data and functionality from other systems without exposing them to the complexity of the underlying implementations.
APIs allow developers to easily integrate with the systems responsible for managing data as they build features for their end users. Furthermore, when developers use APIs, they only need to understand how to successfully build the API calls they need to make and how to interpret the responses. This keeps things simple for the consuming developers, as it does not require that they have an in-depth understanding of the logic that exists behind the scenes.
What is API Testing?
Like any software product, APIs need to be tested. This testing is done to confirm that the procedures that make up the API function and perform as expected (among other things). The process of validating each facet of each API procedure is called API testing.
As mentioned above, APIs are built for use by developers, not the typical end user. While an end user interacts with an application through a user interface, a developer interacts with an API programmatically, writing code to construct API requests and then evaluating the responses. Therefore, API testing will differ greatly from the type of testing that is done to evaluate the look and feel of an application.
When Testing an API, What Should We Test?
To better illustrate what this looks like in practice, let’s take a closer look at some types of tests that need to be executed in order to thoroughly validate an API.
Testing for proper function
When testing an API, it’s critical to test each endpoint for proper function and to consider all cases in an effort to provide complete test coverage. This means testing all variations of each request and verifying the response for each variation.
For example, let’s consider the case of a REST API endpoint for making a GET request that is responsible for retrieving a set of products from a database and returning them in JSON format to the client application. The client may be provided with the option to pass various parameters to the endpoint to retrieve different result sets. For instance, the call might be designed to accept a product category, minimum or maximum price constraints, etc. This procedure would need to have a test case created for each possible combination of parameters in order to confirm that the correct data is returned in each scenario. This means testing requests where only the product category is passed as a parameter, where the product category and the minimum price are passed as parameters, and so on.
Furthermore, it’s important to verify that the data being returned is formatted as expected, thus ensuring that it can be parsed and utilized programmatically with ease. Finally, test cases should be created to validate that the appropriate HTTP status code and error message are returned in instances where the call fails.
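The pattern above can be sketched as a small functional test suite. This is a minimal, self-contained illustration: the endpoint is simulated by an in-process `get_products` function (a hypothetical stand-in; in a real suite each call would be an HTTP GET request against the deployed API), and every name and parameter here is illustrative rather than taken from any particular product.

```python
# Hypothetical stand-in for a GET /products endpoint. In real API testing,
# each call below would be an HTTP request and the response would be parsed
# from JSON; the test logic (one case per parameter combination, plus error
# cases) is the same either way.

PRODUCTS = [
    {"name": "mug", "category": "kitchen", "price": 8.50},
    {"name": "lamp", "category": "home", "price": 24.00},
    {"name": "kettle", "category": "kitchen", "price": 39.99},
]

def get_products(category=None, min_price=None, max_price=None):
    """Simulates GET /products: returns (status_code, response_body)."""
    if min_price is not None and max_price is not None and min_price > max_price:
        return 400, {"error": "min_price cannot exceed max_price"}
    results = [
        p for p in PRODUCTS
        if (category is None or p["category"] == category)
        and (min_price is None or p["price"] >= min_price)
        and (max_price is None or p["price"] <= max_price)
    ]
    return 200, {"products": results}

# One test case per parameter combination...
status, body = get_products(category="kitchen")
assert status == 200 and len(body["products"]) == 2

status, body = get_products(category="kitchen", min_price=10)
assert status == 200 and [p["name"] for p in body["products"]] == ["kettle"]

# ...plus a case verifying the error status code and message on a bad request.
status, body = get_products(min_price=50, max_price=10)
assert status == 400 and "error" in body
```

A full suite would enumerate every combination (no parameters, each parameter alone, each pair, and so on) and also assert on the structure of the returned JSON, not just its contents.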
While not a complete list, the following represent some of the issues that can be revealed through functional testing of an API:
Requests that return an incorrect response (e.g., improper filtering of results retrieved by a GET request with parameters).
Requests that return improperly formatted responses, which prevent client applications from effectively leveraging the data they return.
Exceptions that are handled in an unexpected manner.
Testing API performance (load testing)
Another crucial aspect of API testing is load testing, also known as performance testing. Some organizations define load testing narrowly, as a type of stress testing conducted under various traffic patterns, but performance testing of an API can provide broader insight. Increasingly, SREs and other application and performance owners prefer to elevate API performance tests into monitors (continuous API tests) that execute functional and load testing together for a holistic validation of performance. This way, engineers not only know when an API fails under stress; they can also quickly understand why it failed and how to fix the issue.
Load (or performance) testing is conducted to ensure that the API will not slow down when it is deployed to production and utilized by client applications, and it allows teams to evaluate the impact of various load and traffic patterns on API performance.
Performance issues can be tricky to diagnose, as the conditions that normally lead to these types of problems are not typically present in non-production environments. For instance, manually hitting an API endpoint to test the performance of a call in a development environment is not a true test of performance, since it's not facing the quantity of concurrent requests that it’s likely to face in production. Additionally, the datasets that an API is working against may be much smaller in a development environment than in production.
Modern API tests that are reused as both functional and performance monitors can be deployed in any environment. Because these monitors are scheduled independently of a CI/CD platform, there is little risk of a functional API monitor producing flaky results as endpoints change in production environments.
This being the case, it’s important to test API requests against dynamic datasets that are representative of the data that will exist in production to ensure that complex operations don’t become unexpectedly expensive with more data to analyze. Furthermore, it’s important to simulate a variety of traffic conditions to evaluate API performance in different scenarios.
All in all, performance testing lets the owners of an API minimize the risk that it becomes a performance bottleneck in consuming applications.
While not a complete list, the following represent scenarios to be tested in an effort to ensure that an API performs admirably in production:
Testing the design of API procedures
This can reveal issues with performance as datasets grow (e.g., an API GET request made in a local development environment performs well due to the small dataset it is running against, but experiences slowness against larger, production-like datasets due to poor query design).
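This dataset-size effect can be demonstrated in a few lines. The sketch below contrasts a hypothetical poorly designed lookup (a full scan per requested record, quadratic overall) with an indexed lookup; both return identical results, but only the timing difference exposes the design flaw, and only once the dataset is large enough. Exact timings depend on the machine.

```python
# Illustrates how an operation that looks fine on a small development dataset
# degrades at production-like scale. The quadratic "query" stands in for a
# poorly designed database lookup.
import time

def slow_lookup(records, wanted_ids):
    # O(n * m): scans every record for each wanted id (poor design)
    return [r for w in wanted_ids for r in records if r["id"] == w]

def fast_lookup(records, wanted_ids):
    # O(n + m): builds an index once, then does constant-time lookups
    index = {r["id"]: r for r in records}
    return [index[w] for w in wanted_ids if w in index]

records = [{"id": i} for i in range(5000)]   # production-like dataset
wanted = list(range(0, 5000, 5))             # 1,000 ids to retrieve

t0 = time.perf_counter()
slow_results = slow_lookup(records, wanted)
slow_elapsed = time.perf_counter() - t0

t0 = time.perf_counter()
fast_results = fast_lookup(records, wanted)
fast_elapsed = time.perf_counter() - t0

assert slow_results == fast_results  # identical output, very different cost
print(f"slow: {slow_elapsed:.4f}s  fast: {fast_elapsed:.4f}s")
```

With a ten-record development dataset, both versions would finish in microseconds and the flaw would go unnoticed; only a representative dataset reveals it.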
Testing API performance against various traffic patterns
Request volume consistent with that expected in a production environment.
Sudden spikes in request volume that are experienced for a short period of time.
Abnormally high request volume. This testing is conducted to determine the type of traffic the API can handle in its current state without experiencing a significant drop-off in performance.
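The traffic patterns above can be driven with a simple concurrent client. This is a sketch only, with assumed names throughout: `handle_request` simulates a single API call in-process (a real load test would issue HTTP requests, typically from a dedicated load-testing tool), and the concurrency and request counts are illustrative stand-ins for production-like volume versus a spike.

```python
# Sketch of exercising an API under different traffic patterns and reporting
# p95 latency for each. handle_request() is a hypothetical stand-in for one
# real API call; time.sleep simulates server-side processing time.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Simulates one API call and returns its observed latency in seconds."""
    start = time.perf_counter()
    time.sleep(0.002)  # pretend the server takes ~2 ms to respond
    return time.perf_counter() - start

def run_pattern(name, concurrency, total_requests):
    """Fires total_requests calls with the given concurrency; returns p95 latency."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(lambda _: handle_request(), range(total_requests)))
    p95 = latencies[int(len(latencies) * 0.95)]
    print(f"{name}: {total_requests} requests, p95 latency {p95 * 1000:.1f} ms")
    return p95

# Steady, production-like volume vs. a short, sudden spike.
baseline_p95 = run_pattern("steady volume", concurrency=10, total_requests=100)
spike_p95 = run_pattern("traffic spike", concurrency=100, total_requests=500)
```

Comparing the two p95 values (and ramping concurrency further for the "abnormally high volume" case) shows where latency begins to degrade significantly.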
Automated API Testing
As is true with any software product, testing an API is accomplished most effectively when automated. With automated API testing, a variety of benefits are realized.
Human error is an unavoidable aspect of manual, repetitive activity. It’s reasonable to expect that a person can develop a suite of automated tests to evaluate API functionality, but far less reasonable to expect that they can manually execute each test case and evaluate the results without missing something along the way. Furthermore, manual testing takes significantly more time than automated testing, so it cannot be conducted as often or as thoroughly.
One of the most effective ways to leverage automated API testing is as part of a CI/CD pipeline. This means conducting API testing continuously, as an API is developed and code changes are integrated. In doing so, modifications that result in unexpected consequences will be detected immediately.
Take functional API testing, for instance. Imagine a developer modifying functionality that backs an API procedure for pulling a list of products from a database. They could make a change that unintentionally breaks filtering when results are retrieved under specific circumstances. While the developer may not catch this error before delivering their changes, automated functional testing of this API — configured to run as part of a CI/CD pipeline — will ensure that the problem is flagged the next time the pipeline runs. By doing so, this testing helps prevent breaking changes from infiltrating production releases, thereby ensuring a consistent and positive experience for consuming applications.
As mentioned, API test automation enables testing earlier in (and continuously throughout) the development lifecycle. This concept of testing early and often is valuable for a number of reasons which we will dig into below.
Shift-Left API Testing: The Importance of Testing at the Earliest Stages of Development
It has always been important for development organizations to test their products as thoroughly as possible before delivering functionality to their end users. Traditionally, testing is performed after the development phase has been completed, just prior to release. In taking this approach, organizations leave themselves open to several potential problems:
Waiting until the end of application development to perform thorough testing means a greater risk of finding bugs that could derail the delivery schedule, with defect volume and complexity making it impossible to adhere to the original timeline.
By not testing sooner, development teams are more limited in how they can respond to serious problems. Consider this from a development perspective. If a critical design flaw is found early on, the situation can be mitigated in a graceful fashion, with developers having the flexibility to bake a solution into the code in a coordinated and thorough manner. In comparison, if such a flaw is identified late in the game, time constraints and the risk of a far-reaching change may prevent such a refactor from occurring. This makes it more likely that a patchwork fix will be bolted on — a fix that is more difficult to maintain and more likely to cause further issues as the product evolves.
Given these issues and the fact that development organizations are now prioritizing a quicker release velocity, greater emphasis is being placed on performing software testing as early as possible in the development lifecycle.
The practice of testing software during the earlier stages of the development lifecycle is known as shift-left testing — the idea being that testing earlier will lead to the earlier detection of defects. In turn, this early detection will lead to quicker, more comprehensive (and less expensive) resolutions that don’t threaten delivery timelines.
Putting it into practice: shift-left API testing
Shifting API testing to the left can be accomplished in various ways. One such method, mentioned in the previous section, is leveraging automated API functional testing as part of a CI/CD pipeline. In this way, API functionality can be evaluated throughout the development process as the API evolves. This enables teams to build upon and modify an API with confidence, knowing that any breaking changes will be identified at the earliest possible moment. As a result, the impact of an introduced bug on the release schedule is reduced, since development teams have the opportunity to address the problem when it is least expensive to fix.
Also worth mentioning here are mock APIs. A mock API is a lightweight component that provides the illusion of a fully functioning API, and can be made available while the real API is under development. Through the utilization of mock APIs that produce responses consistent with what will be delivered in actual APIs, feedback can be relayed back to the API developers regarding response structure, information that is desired but missing from the response, etc. This provides critical insight that increases quality in the early iterations of API development.
Mock APIs can also provide the foundation for defining functional API tests. By analyzing the responses from mock API calls, test scenarios can be identified and test scripts can be designed accordingly, paving the way for conducting functional API testing as early as possible in the design and development phases.
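A mock API can be very lightweight indeed. The sketch below, using only the Python standard library, serves a canned JSON response matching an assumed planned shape of the real (not yet built) API; the response content and the `/products` path are illustrative assumptions, not any particular product's API.

```python
# A minimal mock API: serves canned JSON that mirrors the planned response
# shape, so client developers and test authors can work against it before the
# real API exists. Stdlib only; the canned payload is a made-up example.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSE = {"products": [{"name": "mug", "category": "kitchen", "price": 8.5}]}

class MockAPIHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/products"):
            body = json.dumps(CANNED_RESPONSE).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep output quiet

server = HTTPServer(("127.0.0.1", 0), MockAPIHandler)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client (or an early functional test) consumes the mock exactly as it
# would the real API.
url = f"http://127.0.0.1:{server.server_port}/products"
with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)

server.shutdown()
assert payload["products"][0]["name"] == "mug"
```

Test scripts written against the mock's responses carry over to the real API once it ships, which is what makes mocks a useful foundation for early functional testing.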
Microservices and REST APIs
Over the past decade, a shift has occurred in the world of software development. Gone (for the most part) are the days of developing bulky, self-contained, monolithic applications in which changes and improvements go live slowly due to quarterly or annual release schedules. Monolithic apps are being broken down into microservices: building blocks that are each typically given a single purpose. This decreases development time and helps teams ship new features more quickly and more often.
Most of the world’s mobile apps and web platforms run on REST APIs. These function as the “connective tissue” among all of the microservices so that they can collectively deliver the same functionality and outcomes as the monolithic app. Incremental improvements or other changes can be made faster with microservices, and then deployed into production or live environments with far less risk.
APIs are immensely popular and widely used. Many times, when you look up an address, check the news, or otherwise browse the internet, the websites or mobile apps you’re using are leveraging APIs to power at least one facet of your user experience.
When developing an API, it’s important to test the product effectively. A few key considerations to take into account when developing an API testing strategy include the following:
What should be tested? Function and performance, as mentioned above, reflect two important areas to focus on when putting an API through its paces.
How should testing be conducted efficiently and effectively? Automation is your friend. Test early and test continuously to increase the quality of the end product.