But for most of the market, containers officially hit the radar in 2013 with the introduction of Docker, and started mainstreaming with Docker 1.0 in 2014. The widespread adoption of Docker in the 2010s was a revolution for developers and set the stage for what’s now called cloud-native development. Docker’s hermetic application environment solved the longstanding “it works on my machine” problem and replaced heavy, mutable development tooling like Vagrant with the immutable patterns of Dockerfiles and container images. This shift enabled a renaissance in application development, deployment, and continuous integration (CI) systems. It also ushered in the era of cloud-native application architecture, which has seen mass adoption and become the default architecture in the cloud.
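To make that immutable pattern concrete, here is a minimal, illustrative Dockerfile sketch. The Python base image, requirements.txt, and app.py entry point are assumptions for the example, not details from any particular project:

# Illustrative Dockerfile: every build produces the same immutable image.
# Base image, dependency file, and entry point below are assumed for this sketch.
FROM python:3.12-slim
WORKDIR /app
# Copy the dependency manifest first so the dependency layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application source into the image.
COPY . .
# The image itself, not the host, defines how the application starts.
CMD ["python", "app.py"]

Built once with docker build -t myapp . the resulting image runs the same way on any host with a container runtime, which is exactly the property that put an end to “it works on my machine.”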
The container format was the right technology at the right time, bringing enormous agility to developers. Virtual machines, by comparison, looked expensive, heavyweight, and cumbersome to work with, and, most damning, they were thought of as something you had to wait on “IT” to provision, at a time when the public clouds made it possible for developers to simply grab their own infrastructure without going through a centralized IT model.
The virtues of virtual machines
When containers were first introduced to the masses, most virtual machines were packaged as appliances. The consumption model was generally a heavyweight VMware stack that required dedicated VM hosts, and licensing on that model was (and still is) very expensive. Today, when most people hear the term “virtualization,” they automatically think of heavyweight stacks that are slow to start, hard to move, and inefficient with resources. If you think of a container as a small laptop, a virtual machine is like a 1,000-pound server.