By now, we’ve all heard of Docker containers, which have received lots of attention from developers and IT Ops teams for their ability to simplify application design, deployment and management.
But what do containers mean if you’re a QA engineer? That’s a question that has received less attention.
In this article, I want to address the latter question by explaining what you should be thinking about if you are performing software testing for a containerized application. As we'll see, some core aspects of software testing don't really change when you are dealing with containers, but other aspects require you to rethink your overall software testing strategy, and perhaps add some new tools to your testing toolbox.
Docker containers don’t change functionality
The first thing to keep in mind about Docker containers and software testing is that the functionality of a containerized application isn’t (or shouldn’t be) different from the same non-containerized application.
For this reason, your approach to functional testing should be the same whether you are working with containers or not. Functional tests can be run manually, but manual testing doesn't scale, and it is problematic because humans tend to vary tests ever so slightly, so no two runs are exactly identical. Using one of the many test automation tools available (such as Selenium) allows tests to be replayed with precision.
Packaging an application in a container doesn't change how these test scripts run. If you have a suite of automated tests, though, it can be wired into the continuous integration pipeline that builds your container images, so tests are performed and reported on more consistently and with much more timely results.
Traditionally, applications often need a custom build for each test environment, or a customized deployment process that must be followed after the build is delivered and before tests can run. With containers, the same image can be promoted through every environment unchanged.
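As a sketch of what such automated tests can look like, here is a minimal pytest-style suite in Python. The `apply_discount` function is hypothetical, standing in for real application code or a call to a deployed endpoint; the point is that the same test file runs unchanged on a developer's laptop and inside the CI pipeline that builds the container image.

```python
# Hypothetical function under test; in a real suite this would exercise
# your application's code or a deployed, containerized endpoint.
def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Tests written once, replayed with precision by the pipeline
# (e.g. via `pytest`) every time a container image is built.
def test_basic_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_rejects_invalid_percent():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for an invalid percent")
```

A CI job that runs this suite and fails the build on a non-zero exit code gives you the consistency and timeliness described above.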
How Docker containers change things
While containerizing your app might not change its core functionality or the basic requirements for testing that functionality, it does change several other things. I’ll explain each.
Linux vs. Windows containers and multiple testing branches
Although people often say that Docker lets you "build once, run anywhere," that's not strictly true. A container built for Linux won't run on Windows in most cases, nor will a Windows container run on Linux.
This is one way in which containers — and the software testing requirements associated with them — vary fundamentally from virtual machines. With a virtual machine, you truly can build once and run anywhere (or almost anywhere). But if you are using containers to deploy both a Linux and a Windows version of your app, you'll have to build and test separate branches for each operating system.
Security, certification and Docker container testing
Another caveat involves security and certifications. Each container image includes a small slice of an operating system. In most cases it isn't a full operating system; it's just enough to run the application in a small, portable environment.
But it's still enough of an OS for auditors to care about. If you have requirements around using only approved operating systems, you need to build every container image from an approved base image. Companies like Microsoft and Red Hat offer base container images that can be easily adopted.
This can become troublesome if the development team started with an arbitrary base image built on a purely open source distribution like Alpine Linux. There is nothing wrong with these distributions from a technical standpoint, but they have not gone through the same certification processes as the commercially backed distributions, so the whole application will need to be repackaged.
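Repackaging onto an approved base is often a small Dockerfile change, as in this hedged sketch (the Red Hat UBI image name is real; the application paths are hypothetical):

```dockerfile
# Before: an arbitrary community base image
# FROM alpine:3.19

# After: a certified, vendor-backed base image (Red Hat UBI shown here)
FROM registry.access.redhat.com/ubi9/ubi-minimal

# Hypothetical application layout for illustration
COPY ./app /app
CMD ["/app/run"]
```

The rest of the build and test pipeline stays the same; only the base layer (and anything that depended on distribution-specific tooling) changes.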
On a related security note: because containers make it much easier to introduce new libraries and frameworks into production applications, it is advisable to add security and license compliance scanning to the CI/CD pipeline to catch licenses that aren't compatible with your corporate guidelines. Not every library found on GitHub allows for production use without paying for it (HighSoft's Highcharts is a great example of this).
Testing for microservices
A third container-related challenge that arises for QA teams involves microservices.
To be sure, microservices and containers don't necessarily go hand in hand. Microservices by themselves are a concept that became popular slightly before Docker containers. However, containers have helped to make microservices easier to implement. Development teams are realizing they can use a container architecture to deploy numerous microservices in an efficient way.
Microservices make software testing more challenging because, in most cases, each microservice exposes an API that needs to be tested independently. These tests can be performed with common command-line tools like cURL, by forcing a UI test suite like Selenium to do things it isn't natively built for, or with a purpose-built tool like Postman.
Postman's command-line companion, Newman, is particularly useful: it allows the test scripts in your Postman collections to be run from the command line and integrated with your existing CI/CD pipeline, automating the API tests you will need.
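The kind of check a Postman or cURL test performs boils down to "call the endpoint, assert on the status and body." Here is a self-contained Python sketch: the `HealthHandler` stand-in service exists only so the example runs on its own, whereas in a real pipeline you would point the test at your containerized microservice's URL.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in microservice so the example is self-contained; a real test
# would target the deployed container instead.
class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The actual API test: the same assertions a Postman/Newman
# collection would encode for this endpoint.
url = f"http://127.0.0.1:{server.server_port}/health"
with urllib.request.urlopen(url) as resp:
    status = resp.status
    payload = json.loads(resp.read())

assert status == 200
assert payload["status"] == "ok"

server.shutdown()
print("API test passed")
```

Run as a CI step, a non-zero exit from a failed assertion blocks the pipeline, just as a failed Newman run would.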
Even with the complexities containers introduce, overall they are pushing teams toward better and more frequent reuse and change tracking throughout all environments. This will generally make the life of a quality testing engineer better, though there will be a few bumps along the way.
Vince Power is a Solution Architect who has a focus on cloud adoption and technology implementations using open source-based technologies. He has extensive experience with core computing and networking (IaaS), identity and access management (IAM), application platforms (PaaS), and continuous delivery.