“Well, it works fine on my machine.”
That's a phrase I loathe hearing when an issue is discovered during code review. Left to their own devices, engineers will set up their development environments in whatever manner each deems most efficient. Very frequently this creates two critical issues: a lack of consistency between developers' environments and, by consequence, disparities between development and production. These issues are fundamental. Quality is compromised when the application or the environment behaves differently for each engineer. This is further complicated when there is a QA team. How were their environments built? How can they be confident they have written the appropriate test cases before a release goes to production?
You want none of this…
The obvious benefit of ensuring consistent environments is improved code quality, but another great benefit is reduced finger-pointing and improved team morale. When setting up a team's development environment, the first consideration is where to locate it: should it be local or remote?
Remote environments allow automation tools to spool up and control development servers, and can be instantiated to very closely replicate production. The limiting factors are the cost of hosting the remote cluster and the additional environment your DevOps engineers must manage. Other challenges include developers' dependence on an Internet connection and the need for IDEs that interface well with the remote environment.
Allowing developers to maintain local environments gives them the freedom to use their IDE of choice, is cheaper to maintain, and does not require an Internet connection to perform work. Hosting multiple virtual machines on a laptop can tax the system, and it can be cumbersome to replicate complex production environments. But as laptops have become increasingly powerful, locally instantiating the cluster necessary to build a feature has become much more reasonable. This approach also has the value of ensuring developers understand the infrastructure that supports their applications.
Vagrant has proven to be an effective tool for building consistent and repeatable local environments. Vagrant bootstraps virtual machines within your laptop/desktop and leverages optional provisioners such as Chef, Puppet, or Docker for doing setup and configuration. All of the required libraries to run an application are effectively sandboxed from the host machine since they live in the VM. When the VM is destroyed, all of those libraries are destroyed along with it and the developer's system remains clean.
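As a minimal sketch of that sandboxing (the box name, port, and memory settings here are illustrative assumptions, not any team's actual configuration), a Vagrantfile that boots a disposable VM might look like:

```ruby
# Vagrantfile -- a minimal, illustrative example.
# "ubuntu/trusty64" is a placeholder box; substitute whatever base
# image matches your production OS.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  # Forward the app's port so it can be hit from a host browser.
  config.vm.network "forwarded_port", guest: 3000, host: 3000

  # Give the VM enough memory to run the app and its services.
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048
  end
end
```

`vagrant up` builds the machine and `vagrant destroy` throws it away, along with every library installed inside it; the host system stays clean.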
My team uses the Chef Zero provisioner to configure all of our development machines and Chef Server for configuring production. We house all of our Chef cookbooks in a single repository that Chef Zero has access to via Vagrant. Vagrant spools up a virtual machine, calls the provisioner, and Chef Zero builds our machines based on the roles and recipes we have defined in our cookbook repository. This same cookbook repository is synced with the Chef Server through the Chef Workstation. Each cookbook is written to support production flags that get set on the Chef Server for leveraging any production-specific configurations we use (e.g. access keys). This means the code that builds our production servers is the same code that builds our development servers, from OS to application code and libraries.
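A hedged sketch of how that wiring can look in a Vagrantfile (the paths, role name, and production flag below are assumptions for illustration; an actual cookbook repository's layout and attribute names will differ):

```ruby
# Illustrative Chef Zero provisioner block -- directory and role
# names are assumed, not taken from a real repository.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"  # placeholder base box

  config.vm.provision "chef_zero" do |chef|
    # Point Chef Zero at the shared cookbook repository.
    chef.cookbooks_path = "cookbooks"
    chef.roles_path     = "roles"
    chef.nodes_path     = "nodes"

    # The same role that builds production servers builds this VM.
    chef.add_role "app_server"

    # Development attributes; in production the Chef Server sets a
    # flag that recipes check before loading production-only config.
    chef.json = { "myapp" => { "production" => false } }
  end
end
```

A recipe can then branch on an attribute like `node["myapp"]["production"]` to decide whether to pull production-specific configuration such as access keys.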
The combination of Vagrant and Chef also ensures repeatability and consistency between the local environments for each of the engineers and makes their development environments disposable. Quality engineers build off the same repos and can trust functional parity between the QA, development and production environments. New members of the team just need to pull the application and cookbook repositories, and can get started on a fully built environment with just a few commands in the terminal.
A few useful Vagrant plugins include:

- vagrant-vbguest, which keeps the VirtualBox Guest Additions inside the VM in sync with the host's VirtualBox version
- vagrant-cachier, which caches downloaded packages across VMs to speed up repeated provisioning
- vagrant-berkshelf, which resolves Chef cookbook dependencies with Berkshelf before provisioning
To round out the automation and ensure consistency in quality, task runners such as Grunt and Guard can be used to automatically run unit tests when file system changes occur, and perform livereload in browser windows. Find the tool appropriate for the framework you are coding with, and the engineering team can operate using a TDD or BDD approach in a streamlined and automated environment. As a final touch, have your continuous integration server call the same unit and browser tests before pushing builds to production, and you have a platform that encourages proper test coverage, streamlines the development process, and allows the team to focus on delivering code.
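With Guard, for example, a Guardfile can watch the file system, rerun tests on every save, and trigger a browser refresh. This sketch assumes an RSpec suite and the guard-rspec and guard-livereload plugins; the watch patterns are illustrative and should be adapted to your project layout:

```ruby
# Guardfile -- illustrative; assumes guard-rspec and guard-livereload
# are in the project's Gemfile.
guard :rspec, cmd: "bundle exec rspec" do
  # Rerun a spec when it changes.
  watch(%r{^spec/.+_spec\.rb$})
  # Rerun the matching spec when a library file changes.
  watch(%r{^lib/(.+)\.rb$}) { |m| "spec/lib/#{m[1]}_spec.rb" }
end

guard "livereload" do
  # Refresh open browser windows when views or assets change.
  watch(%r{app/views/.+\.erb$})
  watch(%r{public/.+\.(css|js|html)$})
end
```

Running `bundle exec guard` then keeps the feedback loop running in the background while engineers work.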
Tom Overton has leveraged full-stack technical experience to run engineering teams for companies including Technicolor, VMware, and VentureBeat.