From "it works on my machine" → runs everywhere, every time
Amal Shibu · amalshibu.com
You write code. It works beautifully on your laptop. You send it to a teammate — or push it to a server. It immediately explodes. 💥
Estimated developer time lost to environment issues: a lot. A painful lot.
The solution? Ship the environment too, not just the code.
`docker compose up` → everything just works ✅

This is basically the entire value proposition of Docker.
The predecessor — and why it wasn't enough
A Virtual Machine emulates a complete physical computer — including its own full OS — running on top of a hypervisor.
Hotel rooms 🏨 — every guest has their own full kitchen, bathroom, TV. Full isolation, but expensive and heavy.
Each VM = full OS copy. That's a lot of redundant weight.
Lightweight · Portable · Consistent
A container packages your app with its dependencies — but shares the host OS kernel. No full OS per container. That's the key difference.
Apartments in a building 🏢 — shared electricity/plumbing (kernel), but each tenant has their own isolated space. Efficient AND private.
No Guest OS layer. That entire row is gone. 🎉
| Feature | Virtual Machine 🖥️ | Container 📦 |
|---|---|---|
| OS overhead | Full Guest OS per VM (GBs) | Shares Host kernel (MBs) |
| Startup | 1–5 minutes | < 1 second |
| Size | 2–20 GB | 5–500 MB |
| Isolation | Hardware-level (very strong) | Process-level (strong enough) |
| Portability | Limited (VM format varies) | Runs anywhere with Docker |
| Real use | Cloud VMs, full OS isolation | Apps, microservices, CI/CD |
💡 In reality: containers often run inside VMs.
AWS EC2 = VM. Your Docker containers run on top of it.
Containerisation is not a Docker invention. It's built directly into the Linux kernel. Docker, Podman, containerd — they're all just wrappers around two kernel features.
Namespaces: give each process its own isolated view of the system. The container thinks it's alone on the machine — its own process list, network, filesystem, hostname.

cgroups (control groups): control how much CPU and memory a process can use. Prevents one container from starving the rest.
Each namespace type isolates something different — you pick what to unshare:
```shell
# Isolate process IDs (PID namespace)
unshare --fork --pid --mount-proc /bin/bash

# Isolate the hostname (UTS namespace)
unshare --uts /bin/bash
hostname my-container        # host is unaffected

# Isolate the filesystem (mount namespace)
unshare --mount /bin/bash
mount --bind /tmp /mnt       # invisible outside

# Isolate the network stack (NET namespace)
unshare --net /bin/bash      # no network interfaces visible here

# Combine them all — now it's basically a container
unshare --fork --pid --mount-proc --uts --mount --net /bin/bash
```
This is powerful — but tedious and error-prone at scale. Docker wraps all of this in a clean, simple CLI. That's the entire value of the tool.
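The cgroup side of the story surfaces as plain `docker run` flags. A minimal sketch, assuming an `nginx` image is available locally or on Docker Hub (the container name `capped` is arbitrary):

```shell
# Cap the container at half a CPU core and 256 MB of RAM.
# Docker writes these limits into the container's cgroup for you.
docker run -d --name capped --cpus="0.5" --memory="256m" nginx

# Verify the limits that were recorded for the container
docker inspect capped --format '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}'
```

If the process inside tries to exceed the memory cap, the kernel's OOM killer steps in — exactly the "no starving the rest" guarantee described above.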
Namespaces have existed since Linux 2.4.19 (2002). Docker just made them usable by regular humans. 🙂
The tool that made containers mainstream
Docker is an open-source platform to build, ship, and run containerised applications. Think of it as the tool that made containers easy enough for mere mortals to use.
Linux containers have existed since 2008 (LXC). But they required deep kernel expertise. Docker made them accessible with a simple CLI and workflow.
Before 1956: every cargo ship needed custom loading. After standardised shipping containers: any box fits any ship, truck, or train.
Docker does the same for software.
Standard box. Any ship, truck, or train. Contents don't matter.
Standard package. Any machine. OS doesn't matter.
"Works on my machine" dies. New devs onboard in minutes, not days.
Build once. The same image goes through dev → test → staging → prod. No surprises.
Each service lives in its own container. Independent deploys. Independent scaling.
App A needs Python 3.8. App B needs Python 3.11. Both run fine. No conflict.
Deploy to AWS, GCP, Azure with the exact same container. No cloud lock-in.
Spin up Postgres in 10 seconds. Try it. Nuke it. Your system stays clean.
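That throwaway-database workflow is just two commands. A sketch — the container name `throwaway-pg` is arbitrary:

```shell
# Disposable Postgres: up in seconds
docker run -d --name throwaway-pg \
  -e POSTGRES_PASSWORD=secret \
  -p 5432:5432 \
  postgres:15

# ...experiment against localhost:5432...

# Nuke it: the container and its writable layer are gone,
# and nothing was ever installed on the host
docker rm -f throwaway-pg
```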
Docker is the most popular, but the container ecosystem has grown. These all use the same Linux kernel features — they just differ in approach.
| Tool | What it is | Key difference | Status |
|---|---|---|---|
| 🐳 Docker | Full developer platform — CLI, daemon, Desktop | Most beginner-friendly, largest ecosystem | Industry standard |
| 🦭 Podman | Daemonless, rootless container engine by Red Hat | No root daemon = more secure. `alias docker=podman` literally works | Growing fast |
| 📦 containerd | Low-level container runtime (what Docker uses internally) | No CLI for humans — used by Kubernetes directly. You rarely touch it. | K8s default |
| 🏠 LXC / LXD | Linux Containers — older, more VM-like containers | Designed for full OS containers, not single-app containers. Canonical (Ubuntu) backed. | Niche use |
| 🔧 Buildah | Build OCI-compatible images without a Docker daemon | Pairs with Podman. Build images in CI without Docker Desktop. | CI/CD focused |
All these tools produce OCI-standard images that are interchangeable. Learn Docker — the concepts transfer everywhere. Podman is the most credible alternative if you need rootless/daemonless setups.
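Interchangeability in practice: an image built by Docker loads and runs under Podman unchanged. A sketch, assuming both tools are installed and a Dockerfile exists in the current directory:

```shell
docker build -t myapp .           # build with Docker
docker save myapp | podman load   # hand the OCI image to another engine
podman run --rm myapp             # same image, different runtime
```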
Client: you type commands. It sends API calls to the Daemon.

Daemon: the background service that actually does all the work.

Registry: stores & distributes images. Docker Hub = public default.
Image: a read-only template/blueprint. Layered filesystem. Stored in a registry. Versioned with tags.

Container: a running instance of an image. An isolated process with its own filesystem, network, and process space.
Images are versioned as `name:tag`, e.g. `node:18-alpine`.

How layers stack when you run a container:

```
Writable container layer    ← unique per container
COPY . .                    ┐
RUN pip install             │ read-only image layers
FROM python:3.11-slim       ┘
```

10 containers from the same image share the same read-only layers on disk. Only the writable layer is unique per container. Huge storage saving.
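You can inspect those layers yourself. A sketch — `myapp` stands in for any locally built image:

```shell
# Each line is one layer; the SIZE column shows which
# instructions cost the most
docker history myapp

# The content-addressed layer digests that containers share
docker image inspect myapp --format '{{json .RootFS.Layers}}'
```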
A recipe to build your own image
A Dockerfile is a plain text file — a step-by-step recipe that tells Docker how to build your image. Docker reads it top to bottom, and each instruction creates a new layer.
Every Dockerfile starts with FROM — the base image. The file itself is named `Dockerfile` (no extension).

Pro tip: copy requirements.txt and install dependencies before copying the rest of your code. Your code changes often; dependencies don't. This keeps that expensive install layer cached. 🚀
```dockerfile
# Start from official Python image
FROM python:3.11-slim

# Set working directory inside container
WORKDIR /app

# ← Pro tip: copy this FIRST
COPY requirements.txt .
RUN pip install -r requirements.txt

# Now copy the rest of the code
COPY . .

# Document the port (doesn't open it)
EXPOSE 5000

# What runs when container starts
CMD ["python", "app.py"]
```
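The caching payoff from that ordering shows up on the second build. A sketch of the workflow:

```shell
docker build -t myapp .   # first build: every instruction runs

# Edit app.py only; requirements.txt untouched

docker build -t myapp .   # FROM, WORKDIR, COPY requirements.txt and
                          # RUN pip install all come from cache;
                          # only COPY . . onward re-runs
```

Touch requirements.txt, though, and the install layer — and everything after it — rebuilds from scratch.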
| Instruction | Purpose | Example |
|---|---|---|
| FROM | Base image to start from | FROM node:18-alpine |
| WORKDIR | Set working directory | WORKDIR /app |
| COPY | Copy files from host into image | COPY . . |
| RUN | Execute command at build time | RUN npm install |
| ENV | Set env variable (available at runtime too) | ENV PORT=3000 |
| ARG | Build-time variable only | ARG VERSION=1.0 |
| EXPOSE | Document which port app uses | EXPOSE 8080 |
| VOLUME | Mount point for persistent data | VOLUME /data |
| CMD | Default start command (overridable) | CMD ["node","index.js"] |
| ENTRYPOINT | Fixed executable (CMD = its args) | ENTRYPOINT ["nginx"] |
CMD vs ENTRYPOINT: CMD is the default command you can override at runtime. ENTRYPOINT is the fixed executable.
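A quick sketch of the difference, assuming a hypothetical image `myapp` whose Dockerfile ends with `ENTRYPOINT ["python"]` and `CMD ["app.py"]`:

```shell
docker run myapp                     # runs: python app.py
docker run myapp other.py            # CMD overridden: python other.py
docker run --entrypoint bash myapp   # ENTRYPOINT only changes via this flag
```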
Building & Running
| Command | What it does |
|---|---|
| docker build -t myapp . | Build image from current directory, tag it "myapp" |
| docker run myapp | Start a container from "myapp" |
| docker run -p 8080:5000 myapp | Map host port 8080 → container port 5000 |
| docker run -d myapp | Run in background (detached) |
| docker run -it myapp bash | Interactive terminal inside container |
Inspecting & Cleaning Up
| Command | What it does |
|---|---|
| docker ps | List running containers |
| docker ps -a | All containers, including stopped ones |
| docker logs <id> | See container output / errors |
| docker stop <id> | Gracefully stop a container |
| docker rm <id> | Remove a stopped container |
| docker images | List all local images |
The -p host:container flag is critical. Without it, your container's port is unreachable from your browser. This trips up every beginner.
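What that looks like in practice — a sketch, assuming a hypothetical `myapp` image that listens on port 5000 inside the container:

```shell
docker run -d --name a myapp               # no -p: port 5000 stays container-only
curl localhost:5000                        # fails; nothing is published

docker run -d -p 8080:5000 --name b myapp  # publish host 8080 → container 5000
curl localhost:8080                        # now reaches the app
```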
Real apps don't run in one container
Docker Compose lets you define your entire application stack in a single YAML file, then start everything with one command.
A typical web app needs:
Python / Node / Java / whatever
PostgreSQL or MySQL
Redis — blazing fast in-memory store
One command to start all of them: `docker compose up -d`
```yaml
# docker-compose.yml
services:
  web:
    build: .                 # build from Dockerfile
    ports:
      - "8000:8000"
    environment:
      - DATABASE_URL=postgresql://postgres:secret@db:5432/mydb
    depends_on:
      - db                   # start db first
      - redis
  db:
    image: postgres:15
    environment:
      - POSTGRES_DB=mydb
      - POSTGRES_PASSWORD=secret
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine

volumes:
  pgdata:                    # data persists on restart
```
💡 All services share a network. web can reach db by hostname "db". Docker's internal DNS does this automatically.
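You can watch that resolution happen from inside the running stack. A sketch — it assumes the `web` image ships Python, as in a typical Python web service:

```shell
# Ask Docker's embedded DNS for the "db" service's address
docker compose exec web python -c "import socket; print(socket.gethostbyname('db'))"
# prints an internal network address (e.g. something like 172.18.0.x)
```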
| Command | What it does |
|---|---|
| docker compose up | Start all services (foreground, see logs) |
| docker compose up -d | Start all services in background |
| docker compose up --build | Rebuild images, then start |
| docker compose down | Stop and remove all containers |
| docker compose down -v | Also delete volumes (⚠️ data loss) |
| docker compose ps | List all running services |
| docker compose logs | View logs from all services |
| docker compose logs -f web | Follow live logs for "web" service |
| docker compose exec web bash | Open shell inside "web" container |
| docker compose restart web | Restart a specific service |
`docker compose up -d` — and go get coffee ☕

Let's discuss, explore & experiment 🐳