
Kubernetes — Pods, ReplicaSets & Deployments

Tutorial-style: YAML anatomy, first Pod from a file, ReplicaSet HA, Deployments, rollouts, strategies, daily kubectl


Before you start

  1. You have a cluster with at least one Ready node (kubectl get nodes).
  2. You are in a terminal in an empty folder where you can save small YAML files (for example lesson-pods/).

Lesson A — YAML building blocks (2 minutes)

Kubernetes object files are usually YAML. Almost every resource has four top-level keys:

apiVersion — which API group/version (e.g. v1 for Pod, apps/v1 for Deployment).
kind — object type: Pod, ReplicaSet, Deployment, Service, …
metadata — name, namespace, labels, …
spec — desired state: what you want running.

Indentation matters. Child keys must be indented more than their parent; siblings must line up. One wrong space often produces an "error converting YAML to JSON" message from kubectl.

Mini example (not Kubernetes — just YAML shape):

App:
  Name: demo
  Ports:
    - 80
    - 443
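
For contrast, here is the same data with one misplaced space. The exact error message depends on the parser, but every YAML tool will reject it:

```yaml
App:
  Name: demo
   Ports:     # one extra space: Ports is no longer a sibling of Name,
     - 80     # so the parser rejects the document
     - 443
```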

Lesson B — Your first Pod from a file

B.1 — What is a Pod?

A Pod is the smallest unit you deploy. Usually one app container per pod; sometimes a main app + a sidecar. Containers in the same pod share network (localhost) and volumes.

You scale by adding more pods, not by cramming unrelated apps into one pod.
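
As an illustration of the sidecar pattern (the pod name and sidecar command are made up for this sketch), here are two containers in one pod sharing a volume, with the sidecar reading what the main container writes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar   # hypothetical name
spec:
  volumes:
    - name: logs
      emptyDir: {}         # scratch volume shared by both containers
  containers:
    - name: app
      image: nginx:1.25-alpine
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-tailer     # sidecar: streams the app's access log
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
```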

Figure: Pod scheduled on a node

B.2 — Write the manifest

Create pod-definition.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
    type: front-end
spec:
  containers:
    - name: nginx-container
      image: nginx:1.25-alpine
      ports:
        - containerPort: 80

B.3 — Create and verify (follow in order)

Step 1 — create

Terminal window
kubectl apply -f pod-definition.yaml

Step 2 — list pods

Terminal window
kubectl get pods

Expect a row with READY 1/1 and STATUS Running after a short ContainerCreating phase.

Step 3 — details and logs

Terminal window
kubectl describe pod myapp-pod
kubectl logs myapp-pod -c nginx-container

Step 4 — clean up (optional)

Terminal window
kubectl delete pod myapp-pod

Lesson C — ReplicaSet (keep N copies running)

If one pod dies, users should not wait for you to notice. A ReplicaSet keeps the number of running pods equal to spec.replicas by watching for pods whose labels match its selector, creating or deleting pods as needed.

Figure: ReplicaSet across nodes

C.1 — Write replicaset.yaml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: nginx
          image: nginx:1.25-alpine
          ports:
            - containerPort: 80

Check: template.metadata.labels must include every key in selector.matchLabels.
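
A sketch of the mismatch to avoid: the selector below asks for app: myapp, but the template sets app: my-app, so the API server rejects the ReplicaSet with an error saying the selector does not match the template labels:

```yaml
# Invalid: selector and template labels disagree
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: my-app   # typo: does not satisfy the selector above
```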

C.2 — Apply and verify

Terminal window
kubectl apply -f replicaset.yaml
kubectl get replicaset
kubectl get pods -l app=myapp

You should see three pods.

C.3 — Prove self-healing

Pick one pod name from kubectl get pods and delete it:

Terminal window
kubectl delete pod POD_NAME
kubectl get pods -w

A new pod name appears; the ReplicaSet recreated it.
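
The replacement works because every managed pod records its owner in metadata. A trimmed sketch (UID and other fields omitted) of what kubectl get pod POD_NAME -o yaml shows:

```yaml
metadata:
  ownerReferences:
    - apiVersion: apps/v1
      kind: ReplicaSet
      name: myapp-rs        # the controller that recreates this pod
      controller: true
      blockOwnerDeletion: true
```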

C.4 — Clean up

Terminal window
kubectl delete replicaset myapp-rs

Lesson D — Deployment (how you run apps in real life)

A Deployment manages ReplicaSets for you: rolling updates, history, rollback, scaling. For stateless apps, prefer Deployment over a bare Pod.

Figure: Versioned rollout behind a Deployment

D.1 — Write deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: nginx
          image: nginx:1.25-alpine
          ports:
            - containerPort: 80

D.2 — Create and watch rollout

Terminal window
kubectl apply -f deployment.yaml
kubectl rollout status deployment/myapp
kubectl get deploy,rs,pods -l app=myapp

You will see a ReplicaSet with a hash suffix (the Deployment owns it).
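
The suffix comes from the pod-template-hash label that the Deployment stamps onto the ReplicaSet and its pods (the name and hash below are illustrative, not what your cluster will show):

```yaml
# Labels on a pod managed by the Deployment
metadata:
  name: myapp-5d4f8c7b9-x7k2p   # hypothetical generated name
  labels:
    app: myapp
    pod-template-hash: 5d4f8c7b9
```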

D.3 — Update the app (new rollout)

Option 1 — edit YAML (change image tag to nginx:1.27-alpine), then:

Terminal window
kubectl apply -f deployment.yaml
kubectl rollout status deployment/myapp

Option 2 — imperative one-liner

Terminal window
kubectl set image deployment/myapp nginx=nginx:1.27-alpine
kubectl rollout status deployment/myapp

The --record flag is deprecated. If you want a change cause to show up in the rollout history, set the kubernetes.io/change-cause annotation instead, e.g. kubectl annotate deployment/myapp kubernetes.io/change-cause="nginx 1.27".

D.4 — History and rollback

Terminal window
kubectl rollout history deployment/myapp
kubectl rollout undo deployment/myapp
kubectl rollout history deployment/myapp

To return to a specific revision rather than the previous one, pass --to-revision, e.g. kubectl rollout undo deployment/myapp --to-revision=2.

Lesson E — Deployment strategies (exam + real design)

Kubernetes supports two common patterns on Deployments:

RollingUpdate (default) — new pods come up gradually while old ones are scaled down in parallel; how far capacity may dip or overshoot is bounded by maxUnavailable and maxSurge. No downtime by design.
Recreate — all old pods are terminated first, then new ones are created. Downtime: yes, there is a gap while nothing runs.

The default is RollingUpdate; set it explicitly when you want to tune how aggressively it replaces pods:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1

Use kubectl explain deployment.spec.strategy for all fields.
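
If you genuinely need Recreate (for example a schema migration where the old and new versions must never run at the same time):

```yaml
spec:
  strategy:
    type: Recreate   # all old pods stop before any new pod starts
```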


Reference — create / apply / scale / pause (daily commands)

First-time create — kubectl create -f file.yaml
Ongoing updates — kubectl apply -f file.yaml
Safe dry run — kubectl apply -f file.yaml --dry-run=client
See diff vs live — kubectl diff -f file.yaml
Scale — kubectl scale deployment myapp --replicas=5
Pause / resume rollouts — kubectl rollout pause deployment/myapp, then kubectl rollout resume deployment/myapp
Inspect failure — kubectl describe deployment myapp

What’s next

Open Networking & Services: stable IPs/DNS for pods, ClusterIP, NodePort, and how to debug Endpoints.