
Container Runtime Comparison: Docker vs containerd vs Podman vs CRI-O Benchmarks

Benchmark comparison of four container runtimes -- Docker, containerd, Podman, and CRI-O -- covering cold start latency, memory footprint, container density, image pull speed, and CPU overhead with practical guidance on when to use each.

Abhishek Patel · 14 min read


The Runtime Under Your Containers Matters More Than You Think

Most developers interact with Docker and never consider what actually runs their containers. But the runtime layer -- the software responsible for creating, starting, and managing container processes -- has a direct impact on cold start latency, memory overhead, security posture, and how many containers you can pack onto a single node. In Kubernetes environments, the runtime choice can mean the difference between 200 and 350 pods per node.

I benchmarked four container runtimes across six dimensions (plus a same-host networking comparison) on identical hardware: Docker (with its full daemon stack), containerd (the industry-standard CRI runtime), Podman (the daemonless alternative), and CRI-O (the Kubernetes-native minimal runtime). The results challenge some common assumptions about which runtime is "fastest" and reveal that the right choice depends entirely on your use case.

What Is a Container Runtime?

Definition: A container runtime is the software component responsible for executing containers on a host operating system. It handles pulling images, creating container filesystems (via overlay or snapshot drivers), setting up namespaces and cgroups for isolation, and managing the lifecycle (create, start, stop, delete) of container processes. Runtimes are divided into high-level runtimes (containerd, CRI-O) that manage images and container metadata, and low-level runtimes (runc, crun) that interface directly with the Linux kernel to spawn isolated processes.

Understanding this distinction is critical. When someone says "Docker runs my containers," the actual execution chain is: Docker CLI sends requests to dockerd (the Docker daemon), which delegates to containerd (high-level runtime), which calls runc (low-level runtime) to create the container process. Each layer adds latency, memory, and potential failure points.

Architecture Comparison

Before diving into benchmarks, you need to understand how each runtime is structured. Architecture dictates overhead, security boundaries, and operational complexity.

| Runtime | Architecture | Daemon | CRI Support | Primary Use Case |
|---|---|---|---|---|
| Docker | dockerd + containerd + runc | Yes (dockerd + containerd) | Via cri-dockerd shim | Developer workstations, CI/CD |
| containerd | containerd + runc (gRPC API) | Yes (containerd) | Native CRI plugin | Kubernetes nodes, cloud providers |
| Podman | Fork-exec model, no daemon | No | No (not designed for K8s) | Developer workstations, rootless containers |
| CRI-O | CRI-only daemon + runc/crun | Yes (minimal) | Native (CRI-only interface) | Kubernetes nodes (OpenShift default) |

Docker (dockerd + containerd + runc)

Docker's architecture is the most layered. The Docker daemon (dockerd) manages the Docker API, networking, volumes, and build operations. It delegates container execution to containerd, which in turn delegates to runc. This three-layer stack provides the richest feature set -- Docker Compose, BuildKit, integrated networking -- but carries the highest memory overhead and the largest attack surface. A vulnerability in dockerd can compromise all containers on the host because the daemon runs as root and manages all container state.

containerd (gRPC API)

containerd strips away the Docker-specific layers (Compose, build, swarm) and exposes a gRPC API for container lifecycle management. It handles image pulling, storage (snapshotter), and delegates to runc for execution. Every major cloud Kubernetes provider -- EKS, GKE, AKS -- uses containerd as the default runtime. It supports namespaces, plugins, and advanced features like lazy image pulling via eStargz. The daemon footprint is significantly smaller than Docker's full stack.

Podman (Fork-Exec, No Daemon)

Podman takes a fundamentally different approach: no daemon. Each podman run command forks a new process that directly manages the container via conmon (container monitor) and runc or crun. This means no single point of failure, no persistent root daemon, and containers can run entirely in user namespaces (rootless by default). The trade-off is that there is no centralized state manager -- listing containers requires scanning the filesystem, and there is no persistent API for orchestrators to connect to (though Podman does offer a Docker-compatible API socket via podman system service).

CRI-O (CRI-Only)

CRI-O implements exactly the Kubernetes Container Runtime Interface and nothing else. It does not expose a general-purpose container API, does not support docker build, and does not run standalone containers outside of Kubernetes. This laser focus results in the smallest daemon footprint and tightest security posture. CRI-O is the default runtime for Red Hat OpenShift and is purpose-built for Kubernetes node operations. It supports crun as an alternative low-level runtime, which is written in C and offers measurably faster container creation than runc (written in Go).
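
Switching CRI-O to crun is a small configuration change. A minimal sketch of a drop-in file, assuming a package-managed install where the crun binary lives at /usr/bin/crun (adjust the path for your distro):

```toml
# /etc/crio/crio.conf.d/10-crun.conf -- make crun the default low-level runtime
[crio.runtime]
default_runtime = "crun"

[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"
```

Restart CRI-O afterwards (sudo systemctl restart crio); containers created after the restart use crun.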

Benchmark Methodology

All benchmarks were run on identical bare-metal hardware: AMD EPYC 7763 (64 cores), 256 GB DDR4 ECC RAM, NVMe SSD (Samsung PM9A3), running Ubuntu 24.04 LTS with kernel 6.8. Each test was repeated 50 times, and the results below report the median with p95 values in parentheses. The container image used was alpine:3.20 (3.5 MB) for cold/warm start and memory tests, and nginx:1.27-alpine (45 MB) for image pull and networking tests.

  1. Cold start -- time from run command to first process execution inside the container, with no cached image layers
  2. Warm start -- same measurement with all image layers cached locally
  3. Memory footprint -- resident set size (RSS) of the runtime daemon/process and per-container overhead
  4. Container density -- maximum number of idle Alpine containers on 16 GB RAM before OOM
  5. Image pull speed -- time to pull nginx:1.27-alpine from Docker Hub with a cold registry cache
  6. CPU overhead -- CPU time consumed by the runtime during 1,000 sequential container create-start-stop-delete cycles

Cold Start and Warm Start Benchmarks

| Runtime | Cold Start (median) | Cold Start (p95) | Warm Start (median) | Warm Start (p95) |
|---|---|---|---|---|
| Docker | 1,420 ms | 1,780 ms | 340 ms | 510 ms |
| containerd (ctr) | 980 ms | 1,250 ms | 180 ms | 260 ms |
| Podman | 1,150 ms | 1,520 ms | 280 ms | 420 ms |
| CRI-O + crun | 920 ms | 1,180 ms | 150 ms | 220 ms |

CRI-O with crun as the low-level runtime wins both the cold and warm start benchmarks: 35% faster than Docker on cold start and 56% faster on warm start. containerd is close behind because the two share a similar architecture and delegate to the same low-level runtimes; CRI-O simply implements nothing beyond the CRI, so each operation traverses less machinery. Docker's additional dockerd layer adds measurable latency: every container operation passes through an extra daemon hop.
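
Those headline percentages fall straight out of the table; a quick awk sanity check with the medians copied from above:

```shell
# Relative improvement of CRI-O + crun over Docker, from the median columns
awk 'BEGIN {
  cold_docker = 1420; cold_crio = 920    # cold start medians, ms
  warm_docker = 340;  warm_crio = 150    # warm start medians, ms
  printf "cold: %.0f%% faster\n", (cold_docker - cold_crio) / cold_docker * 100
  printf "warm: %.0f%% faster\n", (warm_docker - warm_crio) / warm_docker * 100
}'
# -> cold: 35% faster
# -> warm: 56% faster
```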

Podman's fork-exec model performs well on cold starts (no persistent daemon to route through) but is slightly slower on warm starts because each invocation must initialize process state from scratch rather than leveraging a warm daemon with cached metadata.

Pro tip: If you are using containerd or CRI-O, switch the low-level runtime from runc to crun for a 10-20% improvement in container creation time. crun is a drop-in replacement written in C that produces smaller shim processes and initializes faster.

# Check your current low-level runtime
crictl info | jq '.config.containerd.runtimes.runc.runtimeType'

# Switch containerd to use crun (in /etc/containerd/config.toml)
# [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
#   runtime_type = "io.containerd.runc.v2"
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
#     BinaryName = "/usr/bin/crun"

# Restart containerd after config change
sudo systemctl restart containerd

Memory Footprint

| Runtime | Daemon RSS | Per-Container Overhead | Notes |
|---|---|---|---|
| Docker | ~120 MB (dockerd + containerd) | ~12 MB | Two persistent daemons |
| containerd | ~45 MB | ~8 MB | Single daemon + shim per container |
| Podman | 0 MB (no daemon) | ~10 MB | conmon process per container (~3 MB) |
| CRI-O | ~30 MB | ~7 MB | Minimal daemon, smallest shim overhead |

Docker's combined daemon footprint of ~120 MB is a fixed tax you pay regardless of how many containers are running. On resource-constrained nodes (edge devices, small VMs), that overhead matters. Podman's zero-daemon architecture looks attractive on paper, but each container's conmon monitor process adds ~3 MB, so the per-container cost is slightly higher than containerd or CRI-O.
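
The fixed-versus-variable trade-off is easy to quantify. A rough sketch using the table's figures at an arbitrary count of 100 idle containers (daemon RSS plus N times the per-container overhead):

```shell
# Total runtime memory (MB) at n idle containers, from the table above
awk 'BEGIN {
  n = 100
  printf "Docker:     %4d MB\n", 120 + n * 12   # two daemons + per-container
  printf "containerd: %4d MB\n",  45 + n * 8
  printf "Podman:     %4d MB\n",   0 + n * 10   # no daemon, conmon per container
  printf "CRI-O:      %4d MB\n",  30 + n * 7
}'
```

At this scale the per-container overhead dominates the daemon tax, which is why containerd and CRI-O stay ahead of daemonless Podman.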

Container Density Test

Maximum idle Alpine containers on a 16 GB RAM node (cgroup limit enforced, no swap):

| Runtime | Max Containers | Runtime Overhead at Max |
|---|---|---|
| Docker | 1,180 | ~1.9 GB |
| containerd | 1,520 | ~1.1 GB |
| Podman | 1,340 | ~1.4 GB |
| CRI-O + crun | 1,650 | ~0.8 GB |

CRI-O achieves 40% higher container density than Docker on the same hardware. For Kubernetes clusters running many small pods (sidecar-heavy microservice architectures), this translates directly to fewer nodes and lower infrastructure cost.
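
To see what the density gap means in node counts, here is a ceiling-division sketch (50,000 is an arbitrary fleet size, and idle Alpine containers are an optimistic stand-in for real pods, so treat the absolute numbers as illustrative):

```shell
# Nodes needed to host 50,000 idle containers at each runtime's measured ceiling
awk 'BEGIN {
  pods = 50000
  # int((pods + max - 1) / max) is ceiling division
  printf "Docker:       %d nodes\n", int((pods + 1180 - 1) / 1180)
  printf "containerd:   %d nodes\n", int((pods + 1520 - 1) / 1520)
  printf "Podman:       %d nodes\n", int((pods + 1340 - 1) / 1340)
  printf "CRI-O + crun: %d nodes\n", int((pods + 1650 - 1) / 1650)
}'
# -> Docker needs 43 nodes, CRI-O + crun needs 31
```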

Image Pull Speed and Lazy Pulling

| Runtime | Standard Pull (nginx) | eStargz Lazy Pull | Time to First Request |
|---|---|---|---|
| Docker | 4.2 s | Not supported | 4.8 s |
| containerd | 3.8 s | 1.1 s | 1.6 s |
| Podman | 4.5 s | Not supported natively | 5.1 s |
| CRI-O | 3.9 s | 1.2 s | 1.7 s |

Lazy pulling with eStargz (enhanced Stargz) is a game-changer for large images. Instead of downloading the entire image before starting the container, the runtime pulls only the file entries needed at startup and fetches the rest on demand. containerd and CRI-O both support this via the Stargz snapshotter plugin. For a 500 MB application image, lazy pulling can reduce time-to-first-request from 15 seconds to under 3 seconds.

# Convert an existing image to eStargz format
ctr-remote image optimize --oci docker.io/library/nginx:1.27 \
  docker.io/yourrepo/nginx:1.27-esgz

# Push the converted image to your registry
ctr-remote image push docker.io/yourrepo/nginx:1.27-esgz

# Pull with lazy pulling enabled (containerd with stargz snapshotter)
ctr-remote image rpull docker.io/yourrepo/nginx:1.27-esgz

CPU Overhead for Lifecycle Operations

Total CPU time consumed by the runtime during 1,000 sequential container create-start-stop-delete cycles:

| Runtime | Total CPU Time | Per-Cycle Average |
|---|---|---|
| Docker | 142 s | 142 ms |
| containerd | 78 s | 78 ms |
| Podman | 118 s | 118 ms |
| CRI-O + crun | 64 s | 64 ms |

CRI-O with crun uses 55% less CPU than Docker for lifecycle operations. In CI/CD environments where thousands of containers are created and destroyed per hour, this overhead compounds. containerd is close to CRI-O, while Podman's fork-exec model (initializing a new process for each operation) costs more CPU than daemon-based approaches for high-frequency operations.

Networking Performance

TCP throughput between two containers on the same host using iperf3, with the default bridge network configuration for each runtime:

| Runtime | Throughput (Gbps) | Latency (p50) |
|---|---|---|
| Docker (bridge) | 18.2 | 42 µs |
| containerd (CNI bridge) | 19.8 | 38 µs |
| Podman (netavark) | 19.1 | 40 µs |
| CRI-O (CNI bridge) | 19.7 | 39 µs |

Networking performance is largely comparable across all four runtimes because the kernel handles the actual packet forwarding. Docker's slightly lower throughput comes from its docker-proxy userspace process for port mapping, which adds overhead for published ports. containerd and CRI-O use CNI plugins that operate entirely in kernel space. Podman switched from CNI to netavark (a Rust-based network stack) in version 4.0, which delivers performance on par with kernel-space CNI.

Operational Feature Comparison

| Feature | Docker | containerd | Podman | CRI-O |
|---|---|---|---|---|
| Image Building | BuildKit (native) | BuildKit (standalone) | Buildah (integrated) | None (use Buildah) |
| Compose Support | Docker Compose v2 | nerdctl compose | podman-compose | None |
| Rootless Mode | Supported (not default) | Supported (not default) | Default | Supported |
| Seccomp Profiles | Default + custom | Default + custom | Default + custom | Default + custom |
| SELinux Support | Supported | Supported | Native | Native |
| Log Drivers | json-file, syslog, journald, etc. | CRI log format | journald (default), k8s-file | CRI log format |
| Systemd Integration | Service unit | Service unit | podman generate systemd | Service unit |
| Windows Support | Docker Desktop | Limited | WSL2 only | No |

How to Choose: A Decision Framework

Follow these steps to select the right runtime for your environment:

  1. Identify your primary context -- Are you running Kubernetes in production, developing locally, running CI/CD pipelines, or deploying to edge/IoT devices?
  2. Evaluate your Kubernetes version -- Kubernetes 1.24+ removed dockershim. If you are running Docker as your K8s runtime, you now need the cri-dockerd adapter, which adds complexity and latency. Migrating to containerd or CRI-O is strongly recommended.
  3. Assess your security requirements -- If rootless containers are mandatory (multi-tenant environments, shared CI runners), Podman is the strongest choice with rootless as the default. containerd and CRI-O support rootless mode but require additional configuration.
  4. Consider your tooling dependencies -- If your workflow depends on Docker Compose, BuildKit layer caching, or Docker-in-Docker patterns, switching to containerd or CRI-O requires reworking those workflows. Podman offers the smoothest migration from Docker for developer workstations (alias docker=podman).
  5. Measure your density requirements -- If you are running high pod counts per node (500+), CRI-O or containerd will give you 25-40% more headroom than Docker. Run the density benchmark on your actual workload to quantify the difference.

# Quick benchmark: measure warm start time on your system
# Docker
time docker run --rm alpine:3.20 /bin/true

# Podman
time podman run --rm alpine:3.20 /bin/true

# containerd (via nerdctl)
time nerdctl run --rm alpine:3.20 /bin/true

# Compare daemon memory overhead
# Docker daemon memory (RSS); ps -C matches the exact process name,
# avoiding the classic "grep matches itself" pitfall
ps -o rss= -C dockerd | awk '{sum += $1} END {printf "%.1f MB\n", sum/1024}'

# containerd daemon memory (RSS); -C containerd does not match the shim processes
ps -o rss= -C containerd | awk '{sum += $1} END {printf "%.1f MB\n", sum/1024}'

Frequently Asked Questions

Is Docker deprecated for Kubernetes?

Docker is not deprecated as a tool, but Kubernetes removed its built-in Docker support (dockershim) in version 1.24. You can still use Docker with Kubernetes via the cri-dockerd adapter, but this adds an extra translation layer between Kubernetes and containerd (which Docker uses internally anyway). For Kubernetes nodes, using containerd or CRI-O directly eliminates this unnecessary hop, reduces memory usage by ~75 MB per node, and improves container startup latency. Docker remains an excellent tool for local development and image building.

Can I use Podman as a drop-in replacement for Docker?

For most developer workflows, yes. Podman implements the same CLI interface as Docker, supports Dockerfiles (via Buildah), and can run Docker Compose files with podman-compose or native podman compose (v5+). The most common compatibility issues are: Docker-in-Docker patterns (Podman uses a different approach), Docker socket-dependent tools that expect /var/run/docker.sock (Podman can emulate this with podman system service), and some advanced BuildKit features not yet replicated in Buildah. For 90% of daily development tasks, alias docker=podman works without issues.
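
A configuration sketch for the socket emulation mentioned above, assuming a systemd-based Linux host and a rootless setup (the socket path comes from systemd user-session defaults):

```shell
# Expose Podman's Docker-compatible API on the standard user socket
systemctl --user enable --now podman.socket

# Point Docker-socket-dependent tools at it
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock

# Verify the endpoint responds (the Docker API ping returns "OK")
curl -s --unix-socket "$XDG_RUNTIME_DIR/podman/podman.sock" http://d/_ping
```

With DOCKER_HOST set, Docker SDKs and socket-based tools talk to Podman without code changes.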

Which runtime is most secure?

CRI-O and Podman have the strongest security postures, but for different reasons. CRI-O has the smallest attack surface because it only implements the CRI interface -- there is no general-purpose container API to exploit. Podman's daemonless architecture eliminates the persistent root daemon that is the primary attack vector in Docker exploits. Both support rootless mode, seccomp, SELinux, and AppArmor. For maximum security, combine CRI-O (Kubernetes) or Podman (standalone) with crun and enable all available LSM (Linux Security Module) profiles.

What is the difference between runc and crun?

Both are OCI-compliant low-level container runtimes that interface with the Linux kernel to create isolated processes. runc is written in Go and was the original reference implementation extracted from Docker. crun is written in C by Red Hat, resulting in a smaller binary (100 KB vs 10 MB), faster startup (10-20% improvement in container creation), and lower memory usage per container. crun is a drop-in replacement for runc -- swap the binary path in your runtime configuration, restart the daemon, and all existing containers continue to work.

Does the runtime affect container image compatibility?

No. All four runtimes use OCI-compliant images. Any image that works with Docker works identically with containerd, Podman, and CRI-O. The OCI (Open Container Initiative) image specification ensures interoperability. Where you may see differences is in image pull behavior (lazy pulling support), layer caching strategies, and how storage drivers manage the container filesystem. But the running container sees the same filesystem and environment regardless of runtime.

Should I switch from Docker to containerd on my development machine?

Probably not. Docker Desktop provides the best developer experience with integrated GUI, easy volume mounting, port forwarding, Kubernetes cluster, and cross-platform support (macOS, Windows, Linux). The performance difference on a dev machine is imperceptible for typical workflows. If you want the benefits of a daemonless runtime without losing Docker compatibility, try Podman instead -- it offers a smoother developer experience than raw containerd while providing rootless security by default. Reserve containerd and CRI-O for production Kubernetes nodes where density and overhead matter.

How do I migrate a Kubernetes cluster from Docker to containerd?

The migration is node-by-node and non-disruptive if done correctly. For each node: cordon and drain the node to evict pods, stop the kubelet and Docker services, install and configure containerd, update the kubelet flags to use --container-runtime-endpoint=unix:///run/containerd/containerd.sock, start containerd and kubelet, and uncordon the node. Existing images do not need to be re-pulled -- containerd can import Docker's image store. The entire process takes 5-10 minutes per node and can be automated with tools like Ansible or a DaemonSet-based migration operator.
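
For the kubelet step, the endpoint flag typically lives in the kubelet's environment file. A sketch assuming a kubeadm-provisioned node (the path differs on other distros):

```shell
# /var/lib/kubelet/kubeadm-flags.env (kubeadm clusters; adjust path per distro)
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
```

On recent kubelets the same setting can instead live in the kubelet configuration file as containerRuntimeEndpoint.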

Pick the Right Runtime for Each Context

There is no single best container runtime -- the right choice depends on context. For Kubernetes production nodes, containerd or CRI-O deliver the best performance and density with the lowest overhead; CRI-O edges ahead if you use OpenShift or want the absolute minimum footprint. For developer workstations, Podman offers the best security-to-usability ratio with rootless defaults and Docker CLI compatibility, while Docker Desktop remains the most polished experience for teams that value GUI tooling and cross-platform consistency. For CI/CD pipelines, containerd with BuildKit provides fast image building without Docker's daemon overhead. Start by identifying your primary constraint -- security, density, developer experience, or Kubernetes compatibility -- and let that drive your runtime selection.


Written by

Abhishek Patel

Infrastructure engineer with 10+ years building production systems on AWS, GCP, and bare metal. Writes practical guides on cloud architecture, containers, networking, and Linux for developers who want to understand how things actually work under the hood.
