Local Development in the Age of Containers
Managing an organization’s technical development is a long and winding road. Depending on the size and complexity of what is required over time, how development is managed locally can go from straightforward to overwhelming overnight. However, as containers become a smart solution for technical growth, managing local development becomes much more feasible and secure.
First, let’s start with a brief history of local development at Forum One. Our local development journey mirrors that of most organizations: as our tech team grew, so too did the number of possible configurations of local computers. The end result was that we needed to find better, smarter, more secure solutions to manage and deploy our work locally.
Many organizations may not have a clear strategy for how they manage their local development, and to be fair, it’s easy to have no process for local environments at all. In our case, our sites typically use PHP-based content management systems, and the platforms they require are extremely common — so common, in fact, that they have their own acronyms:
- LAMP (Linux, Apache, MySQL, and PHP)
- MAMP (ditto, but for macOS)
- WAMP (ditto, but for Windows)
Installing virtual machines
Vagrant is a command-line tool for creating and managing virtual machines using simple configuration files. Virtual machines also carry extremely useful isolation guarantees: each project gets its own virtual machine, containing only the code and data for that particular project, with zero risk of “contaminating” other projects with old data or misapplied configurations. This eases the first of our headaches: we can now run multiple sites and servers on the same physical machine. Vagrant’s configuration files enable a process called provisioning, which is essentially a fancy word for “install and configure.” Through provisioning (a minimal example follows the list below), we accomplish two things very easily:
- We are able to ship updates to the base system and simply re-provision
- We have a reproducible way to recreate virtual machines in the event the system state becomes corrupted
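To make this concrete, here is a minimal sketch of a Vagrantfile that provisions a project VM. The box name, network address, and provision.sh script are illustrative assumptions, not our exact configuration:

```ruby
# Vagrantfile: a minimal, illustrative sketch of a provisioned project VM.
Vagrant.configure("2") do |config|
  # Base image ("box") the virtual machine starts from (assumed here).
  config.vm.box = "centos/7"

  # Give the project VM its own private network address.
  config.vm.network "private_network", ip: "192.168.33.10"

  # Share the project code from the host into the VM.
  config.vm.synced_folder ".", "/var/www/project"

  # Provisioning: run an install-and-configure script on first boot,
  # and again on `vagrant provision` to ship base-system updates.
  config.vm.provision "shell", path: "provision.sh"
end
```

Re-running `vagrant provision` ships updates to an existing VM, and `vagrant destroy` followed by `vagrant up` rebuilds it from scratch, which gives us both of the properties above.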
Running into new problems
However, as we continued to use Vagrant, a number of other issues crept up. Provisioning an entire virtual machine is extremely slow: in some cases, downloading, installing, and configuring a single system could take up to 20 minutes. This process had to be repeated by each developer for each new project, and that much lost time quickly adds up.

As projects aged, we also ran into extremely difficult-to-fix packaging problems. Since we tracked the main RPM repositories, we hit issues when older projects could no longer install outdated PHP versions. This could happen to projects that didn’t yet have an update scheduled, or that had recently come back into our patching and support system. Fixing this issue is not as easy as simply changing the version, due to the number of incompatible changes introduced in PHP 7. If we provisioned a site with a new version of PHP, we weren’t guaranteed that the site would run because of the breaking changes. Addressing that often meant updating modules, which brought its own set of changes. This was of particular concern for long-running projects: if a Vagrant VM hadn’t been recently provisioned by a tech lead, a stale package might be running. For the website manager, the local site would still work perfectly; however, new developers joining the project would be unable to create their own VMs.

Finally, we had some issues with SaltStack, the tool we used to provision our VMs. Salt requires what it calls formulas — that is, packages of Salt templates — to provision. This meant that, even if a developer was familiar with the underlying package, they had to learn how to use Salt to interact with it. And if we needed to customize Salt to support a package we didn’t normally use, such as Python to support Django-based sites, we quickly ran into problems where people applied hacky, one-off solutions that made projects difficult to standardize.

Containers to the rescue
As we continued to grow as a tech team, it became clear that our Vagrant usage was becoming unsustainable. We needed a solution where we could use templates to support common project needs, while giving us room to support less-common configurations (e.g., Python) and grow into more sophisticated architectures (e.g., decoupled sites). Given these requirements, we decided to standardize on containers.

Docker is a container solution built on a few central pieces: images and containers. An image is a self-contained package containing the resources (executables, libraries, default configuration) needed to run a service. When we can, we use off-the-shelf images from Docker Hub, a community repository of trusted images, but a key feature of Docker is that new images can be built from other images — e.g., we can build a customized Node.js server by starting from Docker Hub’s official Node image. Once an image is built, Docker uses it to create a container: a running instance of that image.

In many ways, containers act like virtual machines. They are isolated from the rest of the system and can only see the resources granted to them by the supervising system (the Docker engine or the VM hypervisor). Containers communicate with other services over network protocols, which are easier to monitor, secure, and load balance. One crucial difference between the two concepts is that a container’s file system is temporary: it lasts only as long as the container does. Despite this drawback, containers have a number of benefits. Changes to a running system are effectively reverted once a container exits, which requires developers and users to track the desired configuration elsewhere — whether in code as part of the build, or in an external database with which other containers can synchronize.
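As a rough sketch of that workflow, a customized Node.js image built on top of the official Node image might look like the following; the version tag, port, and server.js entry point are illustrative assumptions:

```dockerfile
# Build a customized Node.js service image from the official Node image.
FROM node:18-alpine

# Work inside a dedicated application directory.
WORKDIR /app

# Copy dependency manifests first so the install step can be cached.
COPY package*.json ./
RUN npm install --production

# Copy in the rest of the application code.
COPY . .

# Document the listening port and set the default command.
EXPOSE 3000
CMD ["node", "server.js"]
```

Running `docker build -t my-node-app .` produces the image (the my-node-app tag is just a placeholder), and `docker run -p 3000:3000 my-node-app` starts a container from it.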
Navigating container challenges
But as we have seen with previous system changes, there is a downside here as well. Where we once had a single VM containing an entire project’s worth of dependencies, we now have multiple individual pieces. Starting a container project therefore involves the following (a sketch of how these pieces fit together follows this list):
- Acquiring all of the images necessary for the web server, database server, search server, and cache server
- Building any custom images
- Starting each service with the appropriate configuration and runtime settings
- Granting each service a way to communicate with others while restricting arbitrary access
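As an illustration of how those steps fit together, a docker-compose.yml for a typical project might look like the sketch below; the specific images (MariaDB, Solr, Memcached), versions, ports, and credentials are assumptions, not our standard stack:

```yaml
# docker-compose.yml: an illustrative sketch of a PHP project's services.
version: "3.8"

services:
  web:
    build: ./docker/php          # custom image built from our own Dockerfile
    ports:
      - "8080:80"                # only this port is published to the host
    volumes:
      - ./web:/var/www/html      # mount the project code into the container
    depends_on:
      - db
      - search
      - cache

  db:
    image: mariadb:10.6          # off-the-shelf image from Docker Hub
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db-data:/var/lib/mysql   # persist data beyond the container's lifetime

  search:
    image: solr:8                # off-the-shelf search server image

  cache:
    image: memcached:1.6         # off-the-shelf cache server image

volumes:
  db-data:
```

With a file like this, a single `docker compose up` pulls or builds the images, starts each service with its configuration, and places them on a shared project network where they can reach one another by service name, while only the explicitly published ports are reachable from the host.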