A Brief History
Containers and variations on containers as we know them today date back to the early 2000s, and arguably even further. However, a company named Docker was able to thrust containers into the mainstream spotlight in 2014, aided by critical partnerships with Red Hat and AWS. In fact, Docker has been so successful with containers that some people use the two words synonymously.
Awareness of Docker and containers exploded in 2014, and they seemed to be part of every technical discussion. Containers were quickly labeled the future of application architecture and development, and it seemed like every organization had aspirations to run them.
However, despite all of this excitement and discussion, not many organizations had “figured containers out,” and even fewer had adopted them. The tools for working with and managing containers were in their infancy, and enterprise-level capabilities simply didn’t exist. This artificially restricted the potential adoption of containers.
Around 2017, the Docker project and the various tools for managing containers matured to a level that allowed more mainstream adoption. These milestones, along with growing general comfort with containers, have caused adoption to explode and accelerate each year.
Containers are here to stay, and understanding them will become critical to anyone involved in IT or software development.
Why Do I Care About Microservices?
In many ways, the adoption of microservices has increased the value of adopting containers. Traditionally, application development assumed a single application server, and perhaps a database, running all of the services required by the application. This model lent itself very well to virtual machines.
Microservices adopt the philosophy that an application should be made up of “loosely coupled” services that all run independently. These microservices can then be developed, updated, and modified without having to update the entire application. This created the need for something different than a VM: somewhere a microservice could run that had a very small footprint, was highly optimized, was quickly and easily deployable, and, like a VM, was abstracted from the environment hosting it. The perfect fit is containers!
So What Are Containers?
Here is a graphic courtesy of Docker to get us started:
From this graphic we can see that there is something called “Docker” running on a host OS. The host can be physical or virtual, and the OS can be Windows or Linux. This “Docker” is more accurately called the Docker Engine, and it is responsible for running the actual containers.
The containers, represented by App A, App B, etc., run on the Docker Engine and behave almost like super-lightweight, portable virtual machines. Unlike a VM, however, a container does not carry a full copy of the operating system; it shares the host’s kernel and bundles only the OS files, configuration, and code needed to run that piece of the application or microservice.
Each container also has access to underlying host resources, but those resources are abstracted away from the container. This makes containers very flexible and easy to run on different hosts (much like VMs).
Containers Benefit: A developer can use Docker on their laptop to develop and code the application with containers. Those containers can then be deployed within the production environment while keeping everything within the container identical to the development environment.
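As a sketch of that workflow (the image name and registry address here are hypothetical):

```shell
# Build the image on the developer's laptop
docker build -t myapp:1.0 .

# Run it locally to develop and test against the containerized app
docker run -p 8080:80 myapp:1.0

# Tag and push the same image to a registry (hypothetical address),
# so production runs an image identical to the one developed against
docker tag myapp:1.0 registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0
```

Because the production environment pulls and runs the very same image, everything inside the container stays identical between development and production.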
It’s very common for a single application to require multiple containers. It’s also very common for applications to leverage microservices, with a different container for each microservice.
Containers are able to reduce their footprint because they only need the pieces of the OS and the software packages required to run that specific microservice. In the monolithic era, by contrast, a VM needed a larger OS footprint and many additional software packages to run all of the required services in one place.
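For example, a minimal image for a single microservice might be sketched with a Dockerfile like this (the base image, file names, and entry point are illustrative, not a prescribed layout):

```dockerfile
# Start from a slim base image that contains only the userland files
# this service needs -- not a full general-purpose OS install
FROM python:3.12-slim

# Copy in just the dependencies and code for this one microservice
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

# Run only this service; nothing else is installed in the image
CMD ["python", "service.py"]
```

The resulting image carries one service and its dependencies, rather than the broad OS footprint a monolithic VM would require.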
Containers Benefit: Security can be improved with containers by running minimal OS images optimized specifically for that service, which reduces the attack surface.
When adopting containers, it’s also common (and encouraged!) to move away from the idea of treasured, long-lived, persistent VMs. Instead, containers should have everything they need to run built and coded directly into them, so that if one dies, another container can simply replace it.
Containers Benefit: The ability to quickly create and destroy containers via code allows development teams to easily perform automated testing. Containers for the application are deployed automatically via code, the tests are run, and then all of the containers are destroyed.
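That create-test-destroy cycle might be sketched like this (the compose service name and test command are hypothetical, assuming a `docker-compose.yml` that defines the application’s containers):

```shell
# Spin up the application's containers entirely from code
docker compose up -d

# Run the automated test suite against the running containers
# ("app" and "pytest" are illustrative names for this sketch)
docker compose exec app pytest

# Tear everything down once the tests finish -- nothing persists
docker compose down
```

Because the whole environment is defined in code, every test run starts from a clean, identical set of containers.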
Hopefully this provides a basic foundation for what containers are and why so many organizations are rushing to adopt them. Realistically, the push almost always comes from software development, and typically from adopting microservices. Going forward, VMs will continue to play a critical role in IT environments even as container adoption explodes.