Amal Shibu  ·  amalshibu.com
🐳

Containerization
& Docker

From "it works on my machine" → runs everywhere, every time


What We'll Cover

Agenda

  1. The Problem — environment hell ~10 min
  2. Bare Metal → VMs → Containers (evolution) ~8 min
  3. Virtual Machines — how they work ~10 min
  4. Containers — lighter, faster, portable ~10 min
  5. Docker — intro, history & architecture ~15 min
  6. Dockerfile — build your own image ~15 min
  7. Docker Compose — multi-container apps ~12 min
  8. Q&A ~10 min
The Problem

A tale as old as software itself

You write code. It works beautifully on your laptop. You send it to a teammate — or push it to a server. It immediately explodes. 💥

  • Python 3.10 on dev, Python 3.8 on server
  • Works on Mac, broken on Windows
  • Library version mismatch
  • Missing env variables on prod
  • "Did you install the correct Node version?"

Estimated developer time lost to environment issues: a lot. A painful lot.

🔥🐕🔥 "Works on my machine"
— Every developer, at some point

The solution? Ship the environment too, not just the code.

The Realisation

There's a better way

🙅‍♂️
Writing a 47-step setup guide that nobody follows and is always out of date
😍
docker compose up  → everything just works ✅

This is basically the entire value proposition of Docker.

Evolution

How we got here

🧠
Bare Metal (1990s): One app per physical server. Servers sitting idle 90% of the time. Buying hardware to scale = weeks of wait.
🧠✨
Virtual Machines (2000s): Run multiple OSes on one machine. Better utilization! But each VM carries a full Guest OS — gigabytes of overhead, minutes to boot.
🤯
Containers (2013): Share the host OS kernel. No full Guest OS per app. Starts in seconds, weighs megabytes. The same container runs on any machine.
🌌
Container Orchestration: Kubernetes managing 10,000 containers across 500 servers, auto-healing, auto-scaling. (We're not covering this today 😅)
01

Virtual Machines

The predecessor — and why it wasn't enough

Virtual Machines

How a VM works

A Virtual Machine emulates a complete physical computer — including its own full OS — running on top of a hypervisor.

  • Each VM has its own full Guest OS (2–20 GB overhead per VM)
  • Hypervisor manages hardware sharing
  • Complete isolation — like separate physical computers
  • Boot time: 1–5 minutes
  • Examples: VMware, VirtualBox, Hyper-V, AWS EC2
Analogy

Hotel rooms 🏨 — every guest has their own full kitchen, bathroom, TV. Full isolation, but expensive and heavy.

App A        App B        App C
Bins/Libs    Bins/Libs    Bins/Libs
Guest OS     Guest OS     Guest OS
~5 GB RAM    ~5 GB RAM    ~5 GB RAM
─────────────────────────────────────
Hypervisor (VMware / KVM / Hyper-V)
Host Operating System
Physical Hardware

Each VM = full OS copy. That's a lot of redundant weight.

02

Containers

Lightweight · Portable · Consistent

Containers

How a container works

A container packages your app with its dependencies — but shares the host OS kernel. No full OS per container. That's the key difference.

  • No Guest OS — just app + bins/libs (MBs not GBs)
  • Uses Linux namespaces for isolation (own filesystem, network, processes)
  • Uses cgroups to limit CPU/memory
  • Starts in milliseconds to seconds
  • Same container runs on any Linux machine — identical behaviour
Analogy

Apartments in a building 🏢 — shared electricity/plumbing (kernel), but each tenant has their own isolated space. Efficient AND private.

App A        App B        App C
Bins/Libs    Bins/Libs    Bins/Libs
─────────────────────────────────────
Docker Engine (Container Runtime)
Host OS — Shared Kernel ✓
Physical Hardware

No Guest OS layer. That entire row is gone. 🎉

Comparison

VM vs Containers — head to head

Feature      | Virtual Machine 🖥️            | Container 📦
OS overhead  | Full Guest OS per VM (GBs)    | Shares Host kernel (MBs)
Startup      | 1–5 minutes                   | < 1 second
Size         | 2–20 GB                       | 5–500 MB
Isolation    | Hardware-level (very strong)  | Process-level (strong enough)
Portability  | Limited (VM format varies)    | Runs anywhere with Docker
Real use     | Cloud VMs, full OS isolation  | Apps, microservices, CI/CD
😤 Virtual Machines (still useful) ← 😒 🤵‍♂️ Developer (distracted) → 😍 📦 Containers (looking good)

💡 In reality: containers often run inside VMs.
AWS EC2 = VM. Your Docker containers run on top of it.

Under the Hood

Containers are just Linux — Docker is the easy button

Containerisation is not a Docker invention. It's built directly into the Linux kernel. Docker, Podman, containerd — they're all just wrappers around two kernel features.

🔒

Linux Namespaces — Isolation

Give each process its own isolated view of the system. The container thinks it's alone on the machine — its own process list, network, filesystem, hostname.

PID · NET · MNT · UTS · IPC · USER
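You don't have to create anything to see namespaces — every process on a Linux box already lives in one namespace of each type:

```shell
# Each entry under /proc/<pid>/ns names one namespace this process belongs to
ls -l /proc/self/ns

# Two processes share a namespace exactly when these inode numbers match
readlink /proc/self/ns/uts /proc/self/ns/pid
```

Compare the inode numbers for your shell and for a running container's PID 1 — they'll differ, which is the whole trick.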
📊

cgroups — Resource Limits

Control how much CPU and memory a process can use. Prevents one container from starving the rest.

CPU limit · Memory limit · I/O limit

Each namespace type isolates something different — you pick what to unshare:

# Isolate process IDs (PID namespace)
# (needs root; or add --user --map-root-user to run unprivileged)
unshare --fork --pid --mount-proc /bin/bash

# Isolate the hostname (UTS namespace)
unshare --uts /bin/bash
hostname my-container   # host is unaffected

# Isolate the filesystem (mount namespace)
unshare --mount /bin/bash
mount --bind /tmp /mnt  # invisible outside

# Isolate the network stack (NET namespace)
unshare --net /bin/bash
# no network interfaces visible here

# Combine them all — now it's basically a container
unshare --fork --pid --mount-proc --uts --mount --net /bin/bash
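The cgroups half has no unshare-style one-liner, but you can inspect the limits already applied to your own shell — read-only, no root needed (paths assume a cgroup v2 host):

```shell
# Which cgroup is this shell in? (cgroup v2 shows a single "0::" line)
cat /proc/self/cgroup

# Read its limits — "max" means unlimited
cg=$(cut -d: -f3- /proc/self/cgroup | head -n1)
cat "/sys/fs/cgroup${cg}/memory.max" 2>/dev/null || echo "(not a cgroup v2 host)"
cat "/sys/fs/cgroup${cg}/cpu.max"    2>/dev/null || echo "(not a cgroup v2 host)"
```

Docker's `--memory` and `--cpus` flags simply write values into files like these for the container's cgroup.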
The takeaway

This is powerful — but tedious and error-prone at scale. Docker wraps all of this in a clean, simple CLI. That's the entire value of the tool.

The first Linux namespace (mount) shipped in kernel 2.4.19 (2002); the rest arrived over the following decade. Docker just made them usable by regular humans. 🙂

03

🐳 Docker

The tool that made containers mainstream

Docker

What is Docker?

Docker is an open-source platform to build, ship, and run containerised applications. Think of it as the tool that made containers easy enough for mere mortals to use.

Linux containers had existed since 2008 (LXC), but they required deep kernel expertise. Docker made them accessible with a simple CLI and workflow.

The Shipping Container Analogy 🚢

Before 1956: every cargo ship needed custom loading. After standardised shipping containers: any box fits any ship, truck, or train.

Docker does the same for software.

🚢

Shipping Container

Standard box. Any ship, truck, or train. Contents don't matter.

🐳

Docker Container

Standard package. Any machine. OS doesn't matter.

Build once · Run anywhere · Same every time
Why Docker?

Where Docker actually shows up

🔧

Consistent Dev Environments

"Works on my machine" dies. New devs onboard in minutes, not days.

🚀

CI/CD Pipelines

Build once. The same image goes through dev → test → staging → prod. No surprises.

🏗️

Microservices

Each service lives in its own container. Independent deploys. Independent scaling.

📦

Dependency Isolation

App A needs Python 3.8. App B needs Python 3.11. Both run fine. No conflict.

☁️

Cloud Deployment

Deploy to AWS, GCP, Azure with the exact same container. No cloud lock-in.

🔬

Quick Experiments

Spin up Postgres in 10 seconds. Try it. Nuke it. Your system stays clean.
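The Postgres example from the last card, spelled out (container name `scratch` is just an example; assumes Docker is installed and can pull images):

```shell
# Throwaway Postgres in seconds
docker run -d --name scratch -e POSTGRES_PASSWORD=secret postgres:15
sleep 5                                      # give initdb a moment
docker exec scratch pg_isready -U postgres   # check it accepts connections

# Done experimenting? Remove it — nothing is left on your system:
docker rm -f scratch
```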

Ecosystem

Docker isn't the only container tool

Docker is the most popular, but the container ecosystem has grown. These all use the same Linux kernel features — they just differ in approach.

Tool          | What it is                                                | Key difference                                                              | Status
🐳 Docker     | Full developer platform — CLI, daemon, Desktop            | Most beginner-friendly, largest ecosystem                                   | Industry standard
🦭 Podman     | Daemonless, rootless container engine by Red Hat          | No root daemon = more secure; alias docker=podman literally works           | Growing fast
📦 containerd | Low-level container runtime (what Docker uses internally) | No CLI for humans — used by Kubernetes directly; you rarely touch it        | K8s default
🏠 LXC / LXD  | Linux Containers — older, more VM-like containers         | Designed for full-OS containers, not single-app containers; Canonical-backed | Niche use
🔧 Buildah    | Builds OCI-compatible images without a Docker daemon      | Pairs with Podman; build images in CI without Docker Desktop                | CI/CD focused
Why learn Docker first?

All these tools produce OCI-standard images that are interchangeable. Learn Docker — the concepts transfer everywhere. Podman is the most credible alternative if you need rootless/daemonless setups.

History

Docker's origin story

2010
Solomon Hykes founds dotCloud — a PaaS startup (like Heroku). Docker starts as an internal tool to manage their containers.
Mar 2013
Solomon demos Docker in a 5-minute lightning talk at PyCon. The room goes silent. Then it goes viral. The internet collectively loses its mind.
Jun 2013
Docker is open-sourced. 6,000 GitHub stars in the first week. That was unheard of in 2013.
Jun 2014
Docker 1.0 — production-ready. Google, Microsoft, Red Hat, Amazon all announce support. The industry had spoken.
2015
Docker Compose released. Kubernetes 1.0 also released — the container orchestration race begins.
2017
Docker Desktop for Mac & Windows. Moby Project created. Containers go mainstream in enterprise.
2019
Docker sells Enterprise division to Mirantis. Refocuses on developer tooling. Smart pivot.
Today
14M+ developers, 800k+ images on Docker Hub. If you do backend work, you will use Docker. It's not optional anymore.
Docker Architecture

How Docker works internally

⌨️

Docker CLI

You type commands.
Sends API calls to the Daemon.

docker build
docker run
docker ps
⚙️

Docker Daemon

The background service that actually does all the work.

Manages images, containers, volumes, networks
🗄️

Registry

Stores & distributes images.
Docker Hub = public default.

Also: AWS ECR, GCR, Azure ACR
🖼️

Image

Read-only template/blueprint. Layered filesystem. Stored in registry. Versioned with tags.

📦

Container

A running instance of an image. Isolated process with its own filesystem, network, and process space.

Core Concept

Image vs Container

🖼️

Image — the blueprint

  • Read-only — never changes after build
  • Made of stacked layers — each Dockerfile instruction = one layer
  • Layers are cached & shared — if two images share a base layer, it's stored only once on disk
  • Identified by name:tag e.g. node:18-alpine
  • One image → unlimited containers
📦

Container — the running instance

  • Image layers + a thin writable layer on top
  • The writable layer is ephemeral — deleted when container is removed
  • Has its own lifecycle: created → running → stopped → deleted
  • Files written inside the container don't touch the image
  • Need to persist data? Use a volume (mounted from host)

How layers stack when you run a container:

Container Writable Layer  (your writes go here — deleted on rm)
↑ container adds this at runtime
Image Layer 3 — COPY . .
Image Layer 2 — RUN pip install
Image Layer 1 — FROM python:3.11-slim
read-only image layers (shared across containers)
Why this matters

10 containers from the same image share the same read-only layers on disk. Only the writable layer is unique per container. Huge storage saving.
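You can see the split with Docker itself (assumes Docker is installed and the `python:3.11-slim` image has been pulled):

```shell
# One row per image layer; most layers come from the base image's own build
docker history python:3.11-slim

# --size shows each container's writable layer vs. the shared "virtual" size
docker ps --size
```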

04

Dockerfile

A recipe to build your own image

Dockerfile

Your first Dockerfile

A Dockerfile is a plain text file — a step-by-step recipe that tells Docker how to build your image. Docker reads it top to bottom, and each instruction creates a new layer.

  • Always starts with FROM — the base image
  • Each instruction = one cached layer
  • Layers are reused if unchanged — fast rebuilds
  • File is literally named Dockerfile (no extension)
Pro Tip 200IQ move

Copy requirements.txt and install dependencies before copying the rest of your code. Your code changes often; dependencies don't. This keeps that expensive install layer cached. 🚀

# Start from official Python image
FROM python:3.11-slim

# Set working directory inside container
WORKDIR /app

# ← Pro tip: copy this FIRST
COPY requirements.txt .
RUN pip install -r requirements.txt

# Now copy the rest of the code
COPY . .

# Document the port (doesn't open it)
EXPOSE 5000

# What runs when container starts
CMD ["python", "app.py"]
docker build -t myapp .
docker run -p 5000:5000 myapp
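One companion file worth creating next to the Dockerfile: a `.dockerignore` keeps junk out of the build context, so `COPY . .` stays small and cache-friendly (a typical sketch for a Python project):

```
# .dockerignore — these never enter the image
.git
__pycache__/
*.pyc
venv/
.env
```

Works like `.gitignore`, but for `docker build` — and keeping `.env` out of images is a security win, not just a size one.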
Dockerfile

Key instructions — your cheat sheet

FROM       | Base image to start from                     | FROM node:18-alpine
WORKDIR    | Set working directory                        | WORKDIR /app
COPY       | Copy files from host into image              | COPY . .
RUN        | Execute command at build time                | RUN npm install
ENV        | Set env variable (available at runtime too)  | ENV PORT=3000
ARG        | Build-time variable only                     | ARG VERSION=1.0
EXPOSE     | Document which port app uses                 | EXPOSE 8080
VOLUME     | Mount point for persistent data              | VOLUME /data
CMD        | Default start command (overridable)          | CMD ["node","index.js"]
ENTRYPOINT | Fixed executable (CMD = its args)            | ENTRYPOINT ["nginx"]

CMD vs ENTRYPOINT: ENTRYPOINT is the fixed executable; CMD supplies its default arguments. Anything you pass after docker run <image> replaces CMD, while ENTRYPOINT stays put.
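A minimal sketch of how the two combine (image name `myimg` is hypothetical):

```
FROM alpine
ENTRYPOINT ["echo"]
CMD ["hello"]
```

docker run myimg → runs echo hello → prints "hello". docker run myimg world → runs echo world: CMD was overridden, ENTRYPOINT stayed fixed.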

Docker CLI

Commands you'll use every day

Building & Running

docker build -t myapp .        | Build image from current directory, tag it "myapp"
docker run myapp               | Start a container from "myapp"
docker run -p 8080:5000 myapp  | Map host port 8080 → container port 5000
docker run -d myapp            | Run in background (detached)
docker run -it myapp bash      | Interactive terminal inside container

Inspecting & Cleaning Up

docker ps         | List running containers
docker ps -a      | All containers, including stopped ones
docker logs <id>  | See container output / errors
docker stop <id>  | Gracefully stop a container
docker rm <id>    | Remove a stopped container
docker images     | List all local images
Tip

The -p host:container flag is critical. Without it, your container's port is unreachable from your browser. This trips up every beginner.
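A quick way to see the flag in action (assumes Docker is installed; `hello` is an arbitrary container name):

```shell
docker run -d --name hello -p 8080:80 nginx:alpine
sleep 2
curl -s localhost:8080 | head -n 4   # response comes from inside the container
docker rm -f hello                   # clean up
```

Drop the `-p 8080:80` and the same `curl` gets connection refused — nginx is still running, but nothing routes host traffic to it.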

05

Docker Compose

Real apps don't run in one container

Docker Compose

One file. Entire stack.

Docker Compose lets you define your entire application stack in a single YAML file, then start everything with one command.

A typical web app needs:

🐍

Your App

Python / Node / Java / whatever

🐘

Database

PostgreSQL or MySQL

Cache

Redis — blazing fast in-memory store

One command to start all of them:
docker compose up -d

# docker-compose.yml
services:

  web:
    build: .          # build from Dockerfile
    ports:
      - "8000:8000"
    environment:
      - DATABASE_URL=postgresql://postgres:secret@db:5432/mydb
    depends_on:
      - db            # start db first
      - redis

  db:
    image: postgres:15
    environment:
      - POSTGRES_DB=mydb
      - POSTGRES_PASSWORD=secret
    volumes:
      - pgdata:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine

volumes:
  pgdata:             # data persists on restart

💡 All services share a network. web can reach db by hostname "db". Docker's internal DNS does this automatically.
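You can watch that DNS in action from inside a running service (assumes the stack above is up, and that the web image includes getent):

```shell
# Resolve the "db" service name from inside the "web" container
docker compose exec web getent hosts db
```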

Docker Compose

Compose commands

docker compose up            | Start all services (foreground, see logs)
docker compose up -d         | Start all services in background
docker compose up --build    | Rebuild images, then start
docker compose down          | Stop and remove all containers
docker compose down -v       | Also delete volumes (⚠️ data loss)
docker compose ps            | List all running services
docker compose logs          | View logs from all services
docker compose logs -f web   | Follow live logs for "web" service
docker compose exec web bash | Open shell inside "web" container
docker compose restart web   | Restart a specific service
🙅‍♂️
Running 6 separate terminal commands to start your app stack
😍
docker compose up -d and going to get coffee ☕
Q&A

Questions?

Let's discuss, explore & experiment 🐳

Amal Shibu  ·  amalshibu.com

hub.docker.com · docs.docker.com · labs.play-with-docker.com