Jul 1, 2025 - 12 MIN READ
The Container Revolution: From Docker's Disruption to the OCI Framework


A critical look at how containers went from Docker's innovation to an industry standard, and why the orchestration wars ended with Kubernetes as the clear winner.

Julian Morley

Let me tell you something: containers changed everything. And I mean everything. The way we build software, deploy it, scale it, manage it—all fundamentally transformed by this one technology. But here's the thing most people don't realize: the container story isn't really about Docker, even though that's where it started. It's about how an entire industry rallied around a set of standards that nobody owned, and how the resulting ecosystem became more powerful than any single vendor could've made it.

I've been building infrastructure through this entire evolution—from the early Docker days when everyone thought containers were just "fancy VMs," through the orchestration wars, to today's OCI-standardized world where Kubernetes runs basically everything. And honestly? The journey has been messier, more political, and way more interesting than anyone talks about.

The Pre-Container Dark Ages

Before we talk about where we are, let's talk about where we were. Because if you didn't live through the pre-container era, you probably don't appreciate just how painful infrastructure used to be.

How Applications Were Deployed

You'd get a server—physical or virtual, didn't matter. Then you'd SSH in and start installing stuff. First the runtime (Node.js, Python, Java, whatever). Then the dependencies. Then your application. Then you'd configure it, set up logging, create systemd services, configure monitoring...

And when something broke? Good luck reproducing the issue. "Works on my machine" wasn't a joke—it was the reality we lived with. Production environments drifted from staging. Staging drifted from development. Nobody knew exactly what was installed where, or which version, or what obscure system library was the only thing keeping the whole house of cards from collapsing.

Scaling? You'd provision more servers and do the whole dance again. Hope you documented those installation steps correctly. Hope the new servers matched the old ones. Hope nothing changed in the package repositories since last time.

It was slow. It was brittle. It was expensive. And it absolutely didn't scale.

Virtual Machines: Better, But Not Good Enough

Virtual machines helped. At least you could snapshot a VM and spin up copies. Infrastructure as code tools like Terraform could provision them automatically. Configuration management tools like Chef or Puppet could configure them consistently.

But VMs were heavy. Each one needed its own operating system. Boot times were measured in minutes. Resource overhead was massive. And you still had the same configuration drift problems, just at a different layer.

We needed something better. We just didn't know what it looked like yet.

Docker: The Game Changer

Then Docker showed up in 2013, and everything changed.

Now, Docker didn't invent containers. Linux containers (LXC) had been around for years. Google had been running containers internally for ages. But Docker made containers accessible. They took a complex Linux kernel feature and wrapped it in a simple, developer-friendly interface that anyone could use.

What Docker Got Right

Simplicity

Docker made containers ridiculously easy. Write a Dockerfile describing your application environment. Run docker build. Run docker run. That's it. No complex configuration. No arcane commands. Just straightforward, intuitive tools that worked the way developers expected them to work.

FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]

That's a complete application environment, defined in code, reproducible anywhere Docker runs. Revolutionary.
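
Building and running it is just as terse. A minimal sketch, assuming the app listens on port 3000 and using a made-up image tag:

# Build an image from the Dockerfile in the current directory
# ("my-node-app" is just an example tag)
docker build -t my-node-app .

# Run it, mapping container port 3000 to the host
docker run -p 3000:3000 my-node-app

Two commands, and the exact same artifact can be handed to a teammate or shipped to production.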

Portability

"Build once, run anywhere" actually worked this time. A container built on your laptop would run identically in production. Same code. Same dependencies. Same behavior. No more "works on my machine" problems.

Speed

Containers shared the host kernel, so they started in seconds instead of minutes. Resource overhead was minimal. You could run dozens of containers on a single host that could barely run a handful of VMs.

Image Layering

Docker's layering system was genius. Each instruction in a Dockerfile created a layer. Layers were cached and reused. Change one line of code? You only rebuild from that point forward. Push an image? Only changed layers transfer. This made builds fast and efficient in ways VMs never could be.
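
You can see those layers for yourself with docker history, reusing the hypothetical image tag from the build sketch above:

# List each layer, the instruction that created it, and its size
docker history my-node-app

Rebuild after changing only application code and everything up through the npm install layer comes straight from cache.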

The Docker Ecosystem Explosion

Docker's success wasn't just about the technology—it was about the ecosystem that grew around it. Docker Hub became a registry of pre-built images. Need Redis? docker run redis. Need PostgreSQL? docker run postgres. Thousands of images available instantly.

Developers loved it. Operations teams loved it. The industry went all-in on containers, and Docker was synonymous with the technology itself.

But there was a problem brewing.

The Orchestration Wars

Running one container on one machine is easy. Running hundreds of containers across dozens of machines? That's hard. Really hard. You need orchestration: something to schedule containers, manage networking, handle failures, scale workloads, and keep everything running.

Docker's answer was Docker Swarm. Others had different ideas.

Docker Swarm: Simplicity First

Docker Swarm was Docker's native orchestration solution (Docker, n.d.). It was built into the Docker Engine itself, so no additional installation was required. The learning curve was minimal—if you knew Docker commands, you basically knew Swarm.

Swarm's pitch was simplicity: convert a single-node Docker setup to a cluster with one command. Declare services, set replica counts, and Swarm handled the rest. For small to medium deployments, it worked great (Docker, n.d.).
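
A rough sketch of that workflow (the service name, image, and replica count are placeholders):

# Turn this Docker host into a single-node swarm
docker swarm init

# Declare a service with three replicas published on port 80
docker service create --name web --replicas 3 -p 80:80 nginx

# Swarm schedules the tasks and keeps them running
docker service ls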

But Swarm had limitations. Its networking model was simpler but less flexible than alternatives. Its scaling capabilities were decent but not sophisticated. Its ecosystem was smaller. And critically, it was tied to Docker as a platform.

Kubernetes: The Google Brain

Then there was Kubernetes, the open-source container orchestration system that Google designed and later donated to the Cloud Native Computing Foundation. Kubernetes (often abbreviated K8s) took a fundamentally different approach (Kubernetes, n.d.).

Where Swarm was simple, Kubernetes was powerful—and complex. It wasn't built into anything. You had to install it, configure it, and learn an entirely new set of concepts: Pods, Deployments, Services, Ingresses, StatefulSets, DaemonSets, ConfigMaps, Secrets...
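
For comparison with the Swarm sketch above, here's roughly what the same "run three replicas of a web server" task looks like with kubectl (names and image are placeholders), and even this tiny example quietly creates a Deployment, a ReplicaSet, Pods, and a Service:

# Create a Deployment and scale it to three replicas
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=3

# Expose the Deployment inside the cluster as a Service
kubectl expose deployment web --port=80

# Inspect the Pods the Deployment manages
kubectl get pods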

But that complexity bought you capabilities. Kubernetes could scale to thousands of nodes. Its networking model was flexible and extensible. Its ecosystem exploded with third-party tools, operators, and extensions. And crucially, it was vendor-neutral—it didn't belong to Docker or anyone else (Kubernetes, n.d.).

Google's production experience showed through. Kubernetes was designed for massive scale from day one, drawing on 15 years of running containers at Google scale (Kubernetes, n.d.). That pedigree mattered.

Other Contenders

There were others. Apache Mesos. AWS ECS. HashiCorp Nomad. Each had strengths and carved out niches. But the real battle was Kubernetes versus Swarm, and it wasn't even close.

By 2017-2018, it was clear: Kubernetes had won. Every major cloud provider offered managed Kubernetes. Every enterprise was adopting Kubernetes. Even Docker Inc. integrated Kubernetes into Docker Desktop.

Swarm still exists, and it's still fine for certain use cases. But Kubernetes became the de facto standard for container orchestration, and that's not changing anytime soon.

The OCI: Standardizing the Foundation

While the orchestration wars raged, something important was happening at a lower level: the standardization of containers themselves.

Why Standards Mattered

Docker's early dominance created a problem: vendor lock-in. Docker Inc. controlled the Docker Engine, the Docker image format, and the runtime. If you built everything around Docker and Docker went away (or made decisions you disagreed with), you were screwed.

The industry needed standards that no single vendor controlled. That's where the Open Container Initiative (OCI) came in.

Birth of the OCI

Established in 2015 by Docker and other container industry leaders under the Linux Foundation umbrella, the OCI created open standards for container formats and runtimes (Open Container Initiative, n.d.). The goal was simple: ensure containers worked consistently across different tools and platforms, regardless of vendor.

The OCI currently maintains three specifications:

Runtime Specification (runtime-spec)

Defines how to run a container from a filesystem bundle. This ensures containers behave consistently across different runtime implementations (Open Container Initiative, n.d.).

Image Specification (image-spec)

Standardizes container image formats. A compliant image works with any compliant runtime, whether it's Docker, containerd, CRI-O, or something else (Open Container Initiative, n.d.).

Distribution Specification (distribution-spec)

Standardizes how container images are distributed across registries. This ensures registries from different vendors can interoperate seamlessly (Open Container Initiative, n.d.).
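
If you want to see the image and distribution specs in action, a registry-agnostic tool like skopeo can pull an image's manifest and metadata straight from a registry (the image reference here is just an example):

# Inspect a public image's manifest without pulling it
skopeo inspect docker://docker.io/library/alpine:latest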

What OCI Meant in Practice

The OCI standards fundamentally changed the container landscape. Suddenly, you weren't locked into Docker. Alternative runtimes emerged: containerd (which Docker itself uses internally), CRI-O (built for Kubernetes), and others.

Kubernetes could—and did—remove its direct Docker dependency. You could run Kubernetes with containerd, CRI-O, or any OCI-compliant runtime. This was huge for enterprise adoption, because it meant Kubernetes wasn't tied to Docker's business decisions or technical roadmap.
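
If you're curious which runtime a given cluster actually uses, kubectl will tell you:

# The CONTAINER-RUNTIME column shows containerd, CRI-O, etc. per node
kubectl get nodes -o wide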

The standards also enabled innovation. New tools could enter the ecosystem without rebuilding everything from scratch. Buildah and Podman emerged as Docker alternatives. Cloud providers built their own container services confident they'd remain compatible with the broader ecosystem.
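
Podman in particular leans on that compatibility: its CLI mirrors Docker's closely enough that most commands transfer as-is (the image and port mapping below are arbitrary):

# Run an OCI image with Podman, no Docker daemon required
podman run -d -p 8080:80 docker.io/library/nginx

# The familiar subcommands carry over
podman ps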

K3s: Kubernetes for the Edge

As Kubernetes matured, a new challenge emerged: Kubernetes was powerful, but it was also heavy. Installing a full Kubernetes cluster required significant resources and expertise. For edge computing, IoT devices, and resource-constrained environments, standard Kubernetes was overkill.

Enter K3s, developed by Rancher Labs (now part of SUSE).

What Makes K3s Different

K3s is a "lightweight Kubernetes"—a fully compliant Kubernetes distribution packaged as a single binary under 70MB (K3s, n.d.). It removes non-essential features, uses SQLite instead of etcd for state storage (though you can still use etcd if needed), and generally strips things down while maintaining full Kubernetes API compatibility.

Installing K3s is absurdly simple:

curl -sfL https://get.k3s.io | sh -

That's it. One command. You've got a working Kubernetes cluster. Try doing that with standard Kubernetes—you'll be there for hours.
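
Once the install script finishes, the bundled kubectl confirms the node is up (sudo because the default kubeconfig is root-owned):

# K3s ships its own kubectl wrapper
sudo k3s kubectl get nodes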

Where K3s Shines

Edge Computing

K3s is perfect for edge deployments where resources are limited. You can run a full Kubernetes cluster on a Raspberry Pi (K3s, n.d.).

IoT Devices

IoT applications need orchestration too, but standard Kubernetes is too heavy. K3s provides real Kubernetes capabilities in a tiny footprint.

Development Environments

Developers can run K3s locally for testing without the resource overhead of a full cluster. It's fast, lightweight, and realistic.

CI/CD Pipelines

Spinning up disposable Kubernetes clusters for testing is much faster with K3s than with standard Kubernetes.

The ARM Story

K3s supports both ARM64 and ARMv7, making it ideal for ARM-based devices from Raspberry Pi to AWS Graviton instances (K3s, n.d.). This opened Kubernetes to entire categories of hardware that couldn't realistically run standard Kubernetes.

The Current State: Who Won?

So where did we end up after all this evolution? Let's be honest about the winners and losers.

Kubernetes Won the Orchestration Wars

There's no debate anymore. Kubernetes is the standard for container orchestration. Every cloud provider offers managed Kubernetes (EKS, AKS, GKE). Every enterprise is running or planning to run Kubernetes. The ecosystem is massive and still growing.

Docker Swarm exists, but it's basically in maintenance mode. Mesos is fading. Nomad has a niche but never achieved broad adoption. Kubernetes won, and it won decisively.

OCI Standards Won Over Vendor Lock-in

The OCI's success means we're not locked into Docker—or anyone else. Container standards are genuinely open, enabling competition and innovation while maintaining interoperability. This is actually rare in tech, and we should appreciate it.

Docker Survived, But Transformed

Docker Inc. went through some rough times. They pivoted away from enterprise orchestration and refocused on developer tools. Docker Desktop remains popular. Docker Hub is still the default registry most people use. But Docker is no longer synonymous with containers—it's one player in a larger ecosystem.

Honestly? That's healthier for everyone. Docker's innovation sparked the container revolution, but the standardization and competition that followed made containers even better.

The Complexity Problem Remains Unsolved

Here's the uncomfortable truth: containers solved a lot of problems, but they created new ones. Kubernetes is incredibly complex. The learning curve is brutal. Operating it in production requires specialized expertise. Teams that could barely manage VMs are now expected to understand pods, services, ingresses, network policies, RBAC, admission controllers, and a hundred other concepts.

We traded one set of problems for another. The new problems are arguably better—at least they're consistent and well-defined—but they're still problems. Anyone telling you Kubernetes is "easy" is lying to you or selling you something.

Where Do We Go From Here?

So what's next in the container evolution? A few trends are emerging:

Platform Engineering

The complexity of Kubernetes is spawning an entire discipline: platform engineering. Companies are building internal platforms on top of Kubernetes that abstract away the complexity for application developers. This is where tools like Crossplane and solutions like Backstage come in.

Serverless Containers

Services like AWS Fargate, Google Cloud Run, and Azure Container Instances let you run containers without managing orchestration at all. You just deploy a container and the platform handles everything else. This could be where a lot of workloads end up—why deal with Kubernetes if you don't have to?
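
Deploying to one of these services is a one-liner once your image is in a registry. A hedged sketch using Cloud Run (the project, service, and region names are invented):

# Deploy a container image as a fully managed, autoscaling service
gcloud run deploy my-service \
  --image gcr.io/my-project/my-app \
  --region us-central1 \
  --allow-unauthenticated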

WebAssembly as a Complement

WebAssembly (Wasm) is emerging as a lightweight alternative to containers for certain use cases. It's faster, smaller, and more portable. Will it replace containers? Probably not entirely. But it might take over certain niches, especially in edge computing and serverless.

Better Developer Experiences

The tooling around containers keeps improving. Docker Desktop got better. Alternatives like Podman and Lima emerged. Local Kubernetes environments like Kind and K3d make development easier. The rough edges are slowly getting smoother.
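
Spinning up a throwaway local cluster really is a one-liner with either tool (cluster names are arbitrary):

# A single-node Kubernetes cluster running inside Docker
kind create cluster --name dev

# Or the K3s-based equivalent
k3d cluster create dev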

Lessons From the Container Journey

Looking back at this evolution, what should we take away?

Open Standards Beat Proprietary Lock-in

The OCI's success proves that open standards can work, even in competitive markets. When vendors compete on implementation rather than proprietary formats, everyone benefits.

Simplicity Has Value

Docker won initially because it was simple. K3s found a niche by simplifying Kubernetes. Complexity has costs, and tools that reduce unnecessary complexity will always find an audience.

The Best Technology Doesn't Always Win

Kubernetes didn't win because it was simpler or easier than Swarm—it wasn't. It won because Google's backing gave it credibility, because the ecosystem rallied around it, and because enterprises trusted its ability to scale. Sometimes momentum matters more than pure technical merit.

Evolution Never Stops

Containers aren't the final answer. They're just the current answer. Something better will come along eventually—maybe Wasm, maybe something we haven't imagined yet. Stay curious. Stay adaptable. Don't get too attached to any particular technology.

Conclusion: The Revolution Continues

Containers fundamentally changed how we build and deploy software. Docker made them accessible. The OCI made them open. Kubernetes made them scalable. K3s made them lightweight. And the ecosystem that grew around these standards made them indispensable.

But here's the thing: we're still figuring this out. Best practices are evolving. Tooling is improving. New patterns are emerging. The container revolution isn't over—we're just in a new phase of it.

If you're just getting into containers now, you're entering at a pretty good time. The standards are stable. The tooling is mature. The ecosystem is rich. Yeah, there's a learning curve, but there are also tons of resources to help you climb it.

And if you've been doing this for a while like me? Stay engaged. Keep learning. The landscape keeps shifting, and the best infrastructure engineers are the ones who evolve with it.

The container journey from Docker's disruption to today's OCI-standardized, Kubernetes-orchestrated world has been wild. I can't wait to see where it goes next.

References

Docker. (n.d.). Swarm mode overview. https://docs.docker.com/engine/swarm/

K3s. (n.d.). Lightweight Kubernetes. https://k3s.io/

Kubernetes. (n.d.). Production-Grade Container Orchestration. https://kubernetes.io/

Open Container Initiative. (n.d.). About OCI. https://opencontainers.org/


What's your container story? How has your infrastructure journey evolved? I'd love to hear about your experiences—reach out and let's talk.

Julian Morley • © 2025