Docker containers have become massively popular in recent years, due in large part to their promise to let you "build once, run anywhere." That portability, combined with containers' greater resource efficiency compared with virtual machines and the fact that Docker is fully open source, makes containers attractive if you're a developer or sysadmin.
But the reasons to consider using containers don't end there. Nor are containers beneficial only if you do development or IT Ops work.
For QA engineers and software testers, containers have a lot to offer, too. Above all, they can make automated software testing easier and more efficient. Let's explore how.
What are containers, and what makes them different?
Containers are portable, software-defined environments that can host an application or service. Containers can typically be moved from one host server to another with little fuss (hence the "build once, run anywhere" mantra). In both of these senses, containers are similar to virtual machines.
However, containers have other advantages that virtual machines lack. One is that a process inside a container can easily access bare-metal resources on the host server. (Technically, this is possible when using virtual machines in some cases, but it's tricky to do.) In addition, containers don't require a complex hypervisor to run. That means that fewer system resources have to be devoted to hosting the containers, leaving more available for your actual applications.
For all of the above reasons, containers have become popular as a leaner, meaner and more efficient solution for deploying applications. (It has helped that Docker and most complementary tools, such as the Kubernetes orchestrator, are open source, and therefore freely and easily available.)
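As a concrete sketch of what "software-defined environment" means in practice, here is a minimal, hypothetical Dockerfile for a small Python service; the base image, file names and entry point are assumptions for illustration, not a prescription:

```dockerfile
# Hypothetical Dockerfile for a small Python service.
# No hypervisor is involved: the resulting container shares the host kernel.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between rebuilds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# The same image runs unchanged on any host with a container runtime.
CMD ["python", "app.py"]
```

Everything the application needs is declared in this one file, which is why the resulting image can move from host to host with so little fuss.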
Containers and automated testing
At this point, you might be wondering, "How do containers help a software test engineer like me?"
Well, in several ways. Consider the following advantages of testing a containerized application as opposed to one that runs in a virtual machine or directly on bare metal.
Fewer environment variables

With Docker containers, environment variables outside the container typically have no bearing on how the containerized application behaves. Unless your container specifically depends on external resources, such as a data storage volume, you don't have to worry about how differences in configuration on the host server will impact your application.
This is a huge advantage for software testers. Not only does it mean that there are generally fewer variables to test for, but it also makes it easy to move your test environment from one server to another without having to make sure that both servers are identical.
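To make this concrete, here is a hypothetical Docker Compose file that declares a self-contained test environment; every service name, image tag and credential in it is invented for the example. The application container sees only the variables declared here, regardless of how the host server is configured:

```yaml
# Hypothetical docker-compose.yml for a self-contained test environment.
# All names, tags and credentials below are illustrative only.
services:
  app:
    image: myapp:test
    environment:
      # The app sees only what is declared here, not the host's variables.
      DATABASE_URL: postgres://test:test@db:5432/testdb
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: testdb
```

Moving this test environment to another server means copying one file, not auditing two machines for configuration drift.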
Clean and simple updates
If you need to test an updated version of a conventional application, you'll have to wipe out and recreate your test environment to test the new version cleanly. If you instead update the application in place, from the older version to the newer one in the same environment, you risk leftover configuration issues that can muddy your test results.
In contrast, containers make it easier to update your application in a clean way within an existing testing environment. That's because, when you update a containerized application, you build and deploy a new image of the container, rather than trying to modify an application that already exists. You can run the same test scripts in the same test environment on an updated container image with little difficulty.
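A minimal sketch of that update flow, assuming a hypothetical image name (myapp) and test entry point (./run-tests.sh); the DOCKER variable can be pointed at a stub such as echo for a Docker-free dry run:

```shell
#!/bin/sh
# Hypothetical update flow: build a fresh image, rerun the same tests.
# Set DOCKER=echo to dry-run this without a Docker daemon.
DOCKER="${DOCKER:-docker}"

update_and_test() {
  version="$1"
  # Build a fresh image for the new version instead of mutating the old one.
  $DOCKER build -t myapp:"$version" . || return 1
  # Run the unchanged test suite against the freshly built image.
  $DOCKER run --rm myapp:"$version" ./run-tests.sh
}
```

Calling update_and_test 2.0 builds a fresh myapp:2.0 image and runs the same test suite against it; nothing in the old environment is modified in place.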
Easy automation

Docker was designed in the age of DevOps and automation, and it shows. Docker containers are very easy to automate when it comes to building, deploying and orchestrating them. (In fact, without automation, it's probably not feasible to use Docker at all, because attempting to manage multiple containers by hand quickly becomes impractical.)
What this means for software testers is that Docker containers are particularly easy to integrate into an automated testing routine. Docker won't automate your tests themselves; for that, you'll have to use an automated testing framework. However, when it comes to starting a container in order to run tests, and stopping it when testing is done, it's easy to automate those processes using open source tools.
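For example, the start-test-stop cycle can be scripted with a small POSIX shell wrapper like this sketch; the image name and test command are assumptions, and DOCKER can again be overridden (e.g. DOCKER=echo) for a dry run:

```shell
#!/bin/sh
# Hypothetical wrapper for an automated pipeline: start a container,
# run a test command inside it, and always clean up afterwards.
DOCKER="${DOCKER:-docker}"

test_in_container() {
  image="$1"
  shift

  # Start the container in the background and capture its ID.
  cid=$($DOCKER run -d "$image") || return 1

  # Run the test command (e.g. pytest) inside the running container.
  $DOCKER exec "$cid" "$@"
  status=$?

  # Remove the container whether or not the tests passed.
  $DOCKER rm -f "$cid" >/dev/null
  return "$status"
}
```

A CI job might call test_in_container myapp:test pytest; because the container is removed whether the tests pass or fail, every run starts from a clean image.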
Containers start quickly
If you perform automated testing, delays are your enemy. Any process within your automation pipeline that takes a long time to complete can interfere with other processes.
This is why it's a big advantage to software testers that containers can start in a few seconds, as compared to a few minutes for virtual machines. Although virtual machines' delays may seem small on an individual scale, they add up when you are running multiple tests.
Better resource efficiency
Because containers can start quickly, it's feasible to run them only while you are actually performing your tests and keep them shut down otherwise. With virtual machines or bare-metal servers, you would often keep the application and its host running constantly to avoid waiting for the environment to come back up before each new test run. That's not an efficient use of infrastructure.
Because containers start quickly and waste few resources on overhead, you can fit more containerized applications on a single server than you could virtual machines. In other words, containers enable a high degree of density.
For software testing teams, this is advantageous because it makes it easier to test a variety of applications or application versions on a single server, reducing your overall infrastructure footprint.
Containers have a lot to offer software testing teams. In addition to simplifying testing complexity by reducing environment variables, containers make it easier to use infrastructure optimally and avoid delays in your automated testing pipeline.
So, if you are a test engineer and think it doesn't matter whether your organization uses containers or an older form of technology, think again. Containers can make your job easier while also saving resources.
Chris Riley (@HoardingInfo) is a technologist who has spent 15 years helping organizations transition from traditional development practices to a modern set of culture, processes and tooling. In addition to being an industry analyst, he is a regular author, speaker, and evangelist in the areas of DevOps, big data, and IT. Chris believes the biggest challenges faced in the tech market are not tools, but rather people and planning.