Docker Basics
Containers, images, Dockerfiles, and essential Docker commands for DevOps
What is Docker?
Docker lets you package an application and all its dependencies into a container: a lightweight, portable, isolated unit that runs the same everywhere.
"It works on my machine" becomes "Ship your machine."
Container vs Virtual Machine
| | Container | Virtual Machine |
|---|---|---|
| Size | MBs | GBs |
| Startup | Seconds | Minutes |
| OS | Shares host kernel | Full guest OS |
| Isolation | Process-level | Hardware-level |
| Use case | App packaging | Full OS isolation |
Core Concepts
Image
A read-only template. Like a class in OOP. Built from a Dockerfile.
Container
A running instance of an image. Like an object instantiated from a class.
Dockerfile
Instructions to build an image, layer by layer.
Registry
A storage server for images. Docker Hub is the public one. AWS ECR, GitHub Container Registry, and self-hosted options exist.
Volume
Persistent storage that survives container restarts.
Network
How containers talk to each other or the outside world.
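To see how these pieces fit together, here is a hypothetical minimal example: the Dockerfile builds an image, running that image creates a container, and the base image is pulled from a registry (Docker Hub).

```dockerfile
FROM alpine:3.19          # base image, pulled from a registry (Docker Hub)
CMD ["echo", "hello"]     # command the container runs when it starts
```

Building this with `docker build` produces an image; each `docker run` of that image starts a fresh container.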
Essential Docker Commands
Images
```bash
docker pull nginx                  # download image from registry
docker images                      # list local images
docker build -t myapp:1.0 .       # build from Dockerfile in current dir
docker rmi myapp:1.0               # delete image
docker push myregistry/myapp:1.0   # push to registry
```
Containers
```bash
docker run nginx                         # run a container (foreground)
docker run -d nginx                      # run detached (background)
docker run -d -p 8080:80 nginx           # map host port 8080 -> container port 80
docker run -d --name web nginx           # give it a name
docker run -d -v /data:/app/data nginx   # mount a volume

docker ps                  # list running containers
docker ps -a               # list all (including stopped)
docker stop web            # stop a container
docker start web           # start a stopped container
docker rm web              # delete container
docker logs web            # view logs
docker logs -f web         # follow logs (tail -f style)
docker exec -it web bash   # open shell inside container
```
Cleanup
```bash
docker system prune   # remove unused containers, networks, images
docker volume prune   # remove unused volumes
```
Writing a Dockerfile
```dockerfile
# Base image
FROM node:20-alpine

# Set working directory inside container
WORKDIR /app

# Copy dependency files first (cache optimization)
COPY package*.json ./

# Install dependencies
RUN npm ci --only=production

# Copy the rest of the source
COPY . .

# Build the app
RUN npm run build

# Expose the port the app listens on
EXPOSE 3000

# Command to run when container starts
CMD ["node", "dist/server.js"]
```
Layer Caching
Docker caches each layer. Put things that change least often at the top:
- Base image (changes rarely)
- System packages
- Dependencies (package.json)
- Source code (changes most)
This way, `npm ci` is only re-run when `package.json` or the lockfile changes, not on every code change.
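For contrast, here is a hypothetical ordering that defeats the cache, using the same Node app as above:

```dockerfile
FROM node:20-alpine
WORKDIR /app

# Anti-pattern: copying all source before installing dependencies.
# Any change to any source file invalidates this layer and every layer after it,
COPY . .

# so dependencies are reinstalled on every single build.
RUN npm ci
```

Moving `COPY package*.json ./` and `RUN npm ci` above `COPY . .` restores the cache-friendly ordering shown in the Dockerfile earlier.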
Multi-Stage Builds
Build the app in one stage, copy only the output to a smaller final image:
```dockerfile
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Drop dev dependencies so only runtime packages reach the final image
RUN npm prune --omit=dev
```

```dockerfile
# Stage 2: Production image
FROM node:20-alpine AS production
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "dist/server.js"]
```
The final image contains only the built output and the pruned runtime dependencies: no build tools, no source code. Much smaller and more secure.
Docker Compose
Run multiple containers together with a single docker-compose.yml:
```yaml
version: '3.9'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/mydb
    depends_on:
      - db

  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: mydb
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```

```bash
docker compose up -d     # start all services in background
docker compose down      # stop and remove containers
docker compose logs -f   # follow logs from all services
```
.dockerignore
Like .gitignore, but for Docker builds. It keeps images small and keeps files like `.env` out of the build context:
```
node_modules
.git
.env
dist
*.log
README.md
```
Best Practices
- Use official base images: `node:20-alpine`, not `ubuntu` + manual installs
- Minimize layers: chain `RUN` commands with `&&`
- Run as non-root: add `USER node` before `CMD`
- Never bake secrets into images: use environment variables or secrets at runtime
- Tag images properly: use semantic versions, not just `latest`
- Scan images: use `docker scout` or `trivy` for vulnerabilities
- Use multi-stage builds: keep production images lean
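Several of these practices can be combined in a single sketch. This is a hypothetical hardened build for the Node app used throughout; the unprivileged `node` user ships with the official Node images:

```dockerfile
# Stage 1: build with full dev dependencies, then drop them
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build && npm prune --omit=dev

# Stage 2: lean runtime image
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
# Drop root privileges before the app starts
USER node
EXPOSE 3000
CMD ["node", "dist/server.js"]
```

Tag and scan the result before pushing, e.g. `docker build -t myapp:1.2.3 .` followed by `trivy image myapp:1.2.3`.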