Why Containers Matter in 2022
The Classic Problem:
Developer: “Works on my machine!”
Operations: “Doesn’t work in production!”
The Variables:
- Different OS versions
- Different dependencies
- Different configurations
- Different environments
The Solution: Containers
Package everything together:
- Application code
- Dependencies
- Runtime
- System libraries
- Configuration
Result: Works the same everywhere.
Docker Basics
What is Docker?
Platform for building, shipping, and running applications in containers.
Key Concepts:
Image: Blueprint (like a class)
Container: Running instance (like an object)
Dockerfile: Instructions to build image
Registry: Store for images (Docker Hub)
Installation:
# Mac/Windows: Docker Desktop
# Linux:
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
# Verify
docker --version
# Docker version 20.10.17
Your First Dockerfile
Simple Node.js App:
app.js:
const express = require('express');
const app = express();
app.get('/', (req, res) => {
  res.json({ message: 'Hello from Docker!' });
});
app.listen(3000, () => {
  console.log('Server running on port 3000');
});
package.json:
{
  "name": "docker-app",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.18.0"
  },
  "scripts": {
    "start": "node app.js"
  }
}
Dockerfile:
# Base image
FROM node:18-alpine
# Set working directory
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install dependencies
RUN npm install --production
# Copy application code
COPY . .
# Expose port
EXPOSE 3000
# Start command
CMD ["npm", "start"]
Build and Run:
# Build image
docker build -t my-app:1.0 .
# Run container
docker run -p 3000:3000 my-app:1.0
# Visit http://localhost:3000
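You can also test from the terminal (assuming curl is available on your host):
curl http://localhost:3000
# {"message":"Hello from Docker!"}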
Explanation:
FROM: Base image (Node.js 18 on Alpine Linux)
WORKDIR: Set working directory inside container
COPY: Copy files from host to container
RUN: Execute commands during build
EXPOSE: Document which port the app uses
CMD: Command to run when the container starts
Dockerfile Best Practices
1. Use Specific Base Images
# BAD: Version can change
FROM node:latest
# GOOD: Specific version
FROM node:18.12.1-alpine
2. Layer Caching (Order Matters)
# BAD: Code changes invalidate all layers
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN npm install
# GOOD: Dependencies cached separately
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
Why: package.json changes less frequently than code.
3. Multi-Stage Builds (Smaller Images)
# Stage 1: Build
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# Stage 2: Production
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY package*.json ./
CMD ["node", "dist/index.js"]
Result: Production image only contains what’s needed.
4. Use .dockerignore
# .dockerignore
node_modules
npm-debug.log
.git
.env
*.md
.vscode
coverage
Prevents: Copying unnecessary files into image.
5. Non-Root User
# Create user
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nodejs -u 1001
# Change ownership
RUN chown -R nodejs:nodejs /app
# Switch to user
USER nodejs
Security: Don’t run as root inside container.
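One way this could fit into the earlier Dockerfile (a sketch; the nodejs user and group are just example names, and COPY --chown avoids a separate chown layer):
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --production
# Create an unprivileged user and copy the app files as that user
RUN addgroup -g 1001 -S nodejs && adduser -S nodejs -u 1001
COPY --chown=nodejs:nodejs . .
USER nodejs
EXPOSE 3000
CMD ["npm", "start"]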
6. Health Checks
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD node healthcheck.js
healthcheck.js:
const http = require('http');
const options = {
  host: 'localhost',
  port: 3000,
  path: '/health',
  timeout: 2000
};
const healthCheck = http.request(options, (res) => {
  if (res.statusCode === 200) {
    process.exit(0);
  } else {
    process.exit(1);
  }
});
healthCheck.on('error', () => {
  process.exit(1);
});
healthCheck.end();
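Note that this probe requests /health, which the sample app.js above does not define yet. A minimal addition to app.js (the route name just has to match the probe path):
app.get('/health', (req, res) => {
  res.status(200).json({ status: 'ok' });
});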
Docker Compose (Multi-Container Apps)
The Problem:
Most apps need multiple services:
- Application server
- Database
- Redis cache
- Background workers
The Solution: Docker Compose
docker-compose.yml:
version: '3.9'
services:
  # Application
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgres://admin:secret@db:5432/myapp
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
    volumes:
      - ./logs:/app/logs
    restart: unless-stopped
    networks:
      - app-network
  # PostgreSQL Database
  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=secret
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - app-network
    restart: unless-stopped
  # Redis Cache
  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data
    networks:
      - app-network
    restart: unless-stopped
  # Nginx Reverse Proxy
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - app
    networks:
      - app-network
    restart: unless-stopped
volumes:
  postgres-data:
  redis-data:
networks:
  app-network:
    driver: bridge
Commands:
# Start all services
docker-compose up -d
# View logs
docker-compose logs -f
# Stop all services
docker-compose down
# Rebuild and restart
docker-compose up -d --build
# Scale services
docker-compose up -d --scale app=3
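One caveat on --scale: the compose file above publishes host port 3000 for the app service, and only one container can bind it, so scaling app past 1 will fail with a port conflict. A common workaround (sketched here) is to stop publishing the port and let the nginx service proxy to app:3000 over the compose network instead:
  app:
    build: .
    expose:
      - "3000" # reachable by other services on the network, not published to the host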
Docker Commands Cheat Sheet
Images:
# List images
docker images
# Build image
docker build -t myapp:1.0 .
# Pull image from registry
docker pull nginx:alpine
# Push image to registry
docker push myuser/myapp:1.0
# Remove image
docker rmi myapp:1.0
# Remove unused images
docker image prune -a
Containers:
# List running containers
docker ps
# List all containers
docker ps -a
# Run container
docker run -d -p 8080:80 --name web nginx
# Stop container
docker stop web
# Start container
docker start web
# Restart container
docker restart web
# Remove container
docker rm web
# Remove all stopped containers
docker container prune
# View logs
docker logs web
docker logs -f web # Follow logs
# Execute command in container
docker exec -it web sh
# Copy files
docker cp file.txt web:/app/
docker cp web:/app/file.txt ./
System:
# View disk usage
docker system df
# Clean up everything
docker system prune -a --volumes
# View resource usage
docker stats
Introduction to Kubernetes
What is Kubernetes (K8s)?
Container orchestration platform that:
- Deploys containers across multiple machines
- Scales applications automatically
- Self-heals (restarts failed containers)
- Manages load balancing
- Handles rolling updates
When You Need Kubernetes:
- Running in production at scale
- Need high availability
- Multiple microservices
- Auto-scaling requirements
- Multi-cloud or hybrid cloud
When You Don’t:
- Small applications
- Single server
- Simple deployments
- Just getting started (use Docker Compose first)
Kubernetes Core Concepts
Cluster: Set of machines running Kubernetes
Node: Single machine in cluster (worker)
Pod: Smallest deployable unit (runs one or more containers)
Deployment: Manages desired state of Pods
Service: Exposes Pods to network
Namespace: Virtual cluster (isolation)
ConfigMap: Configuration data
Secret: Sensitive data (passwords, keys)
Ingress: HTTP routing rules
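To make the Pod/Deployment distinction concrete, here is a minimal Pod manifest (for illustration only; in practice you almost always create Pods indirectly through a Deployment, as shown in the next sections):
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
    - name: myapp
      image: myapp:1.0
      ports:
        - containerPort: 3000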
Setting Up Local Kubernetes
Options:
1. Minikube (Recommended for learning)
# Install (macOS Intel binary shown; see the Minikube docs for other platforms)
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
sudo install minikube-darwin-amd64 /usr/local/bin/minikube
# Start cluster
minikube start
# Verify
kubectl get nodes
2. Docker Desktop (Easiest)
Settings → Kubernetes → Enable Kubernetes
3. Kind (Kubernetes in Docker)
# Install
brew install kind
# Create cluster
kind create cluster --name dev
Install kubectl:
# Mac
brew install kubectl
# Verify
kubectl version --client
Your First Kubernetes Deployment
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 3 # Run 3 pods
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:1.0
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: "production"
          resources:
            limits:
              cpu: "500m"
              memory: "512Mi"
            requests:
              cpu: "200m"
              memory: "256Mi"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer
Apply:
# Create deployment
kubectl apply -f deployment.yaml
# Create service
kubectl apply -f service.yaml
# Check status
kubectl get deployments
kubectl get pods
kubectl get services
# View logs
kubectl logs -f <pod-name>
# Describe pod
kubectl describe pod <pod-name>
ConfigMaps and Secrets
ConfigMap (Non-sensitive config):
configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  API_URL: "https://api.example.com"
  LOG_LEVEL: "info"
  MAX_CONNECTIONS: "100"
Use in Deployment:
spec:
  containers:
    - name: myapp
      image: myapp:1.0
      envFrom:
        - configMapRef:
            name: app-config
Secret (Sensitive data):
# Create secret from literal
kubectl create secret generic db-secret \
--from-literal=username=admin \
--from-literal=password=secretpass
# Or from file
kubectl create secret generic db-secret \
--from-file=./username.txt \
--from-file=./password.txt
secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
data:
  username: YWRtaW4= # base64 encoded
  password: c2VjcmV0cGFzcw==
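The values in the data block are base64-encoded, not encrypted; you can produce them from the shell (use base64 -d, or -D on macOS, to decode):
echo -n 'admin' | base64      # YWRtaW4=
echo -n 'secretpass' | base64 # c2VjcmV0cGFzcw==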
Use in Deployment:
spec:
  containers:
    - name: myapp
      image: myapp:1.0
      env:
        - name: DB_USERNAME
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: username
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: password
Scaling Applications
Manual Scaling:
# Scale to 5 replicas
kubectl scale deployment myapp --replicas=5
# Verify
kubectl get pods
Horizontal Pod Autoscaler (HPA):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
Apply:
kubectl apply -f hpa.yaml
# Watch autoscaling
kubectl get hpa -w
Result: Automatically scales between 2-10 pods based on CPU/memory usage.
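Note that the HPA needs resource metrics to act on: the Deployment must declare resource requests (as above) and the cluster must run metrics-server. On Minikube it ships as an addon:
# Enable metrics-server on Minikube
minikube addons enable metrics-server
# Verify metrics are being collected
kubectl top pods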
Rolling Updates & Rollbacks
Update Image:
# Update deployment image
kubectl set image deployment/myapp myapp=myapp:2.0
# Watch rollout
kubectl rollout status deployment/myapp
Rollout Strategy (in deployment.yaml):
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1 # Max pods above desired
      maxUnavailable: 0 # Max pods unavailable
Process:
- Create 1 new pod (myapp:2.0)
- Wait for it to be ready
- Terminate 1 old pod (myapp:1.0)
- Repeat until all updated
Zero downtime.
Rollback:
# View rollout history
kubectl rollout history deployment/myapp
# Rollback to previous version
kubectl rollout undo deployment/myapp
# Rollback to specific revision
kubectl rollout undo deployment/myapp --to-revision=2
Ingress (HTTP Routing)
Problem: Multiple services need external access
Solution: Ingress Controller
Install Ingress Controller (Nginx):
# Minikube
minikube addons enable ingress
# Or install manually
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/cloud/deploy.yaml
ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: myapp.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-service
                port:
                  number: 80
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
    - host: admin.myapp.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: admin-service
                port:
                  number: 80
Apply:
kubectl apply -f ingress.yaml
# Add to /etc/hosts (with Minikube, use the address from 'minikube ip' instead of 127.0.0.1)
echo "127.0.0.1 myapp.local admin.myapp.local" | sudo tee -a /etc/hosts
# Visit http://myapp.local
Persistent Storage
Problem: Pods are ephemeral (data lost when pod dies)
Solution: Persistent Volumes
PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard
Use in Deployment:
spec:
  containers:
    - name: postgres
      image: postgres:15
      volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: postgres-storage
      persistentVolumeClaim:
        claimName: postgres-pvc
Data persists even if pod is deleted and recreated.
Complete App Example (3-Tier)
namespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: myapp
postgres.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
  namespace: myapp
type: Opaque
data:
  password: cG9zdGdyZXNfcGFzc3dvcmQ=
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: myapp
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:15-alpine
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: myapp
            - name: POSTGRES_USER
              value: admin
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: password
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: myapp
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
backend.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: backend-config
  namespace: myapp
data:
  DATABASE_HOST: "postgres"
  DATABASE_PORT: "5432"
  DATABASE_NAME: "myapp"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: myapp-backend:1.0
          ports:
            - containerPort: 3000
          envFrom:
            - configMapRef:
                name: backend-config
          env:
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: password
          resources:
            limits:
              cpu: "500m"
              memory: "512Mi"
            requests:
              cpu: "200m"
              memory: "256Mi"
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: myapp
spec:
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 3000
frontend.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: myapp-frontend:1.0
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: myapp
spec:
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 80
ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  namespace: myapp
spec:
  rules:
    - host: myapp.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
Deploy Everything:
kubectl apply -f namespace.yaml
kubectl apply -f postgres.yaml
kubectl apply -f backend.yaml
kubectl apply -f frontend.yaml
kubectl apply -f ingress.yaml
# Check status
kubectl get all -n myapp
Production Best Practices
1. Resource Limits (Always)
resources:
  limits:
    cpu: "1000m"
    memory: "1Gi"
  requests:
    cpu: "500m"
    memory: "512Mi"
2. Health Checks (Required)
livenessProbe: # Restart if failing
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe: # Remove from load balancer if failing
  httpGet:
    path: /ready
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 5
3. Multiple Replicas
spec:
  replicas: 3 # Minimum 3 for HA
4. Pod Disruption Budgets
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  minAvailable: 2 # Always keep 2 pods running
  selector:
    matchLabels:
      app: myapp
5. Network Policies (Security)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 3000
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: postgres
      ports:
        - protocol: TCP
          port: 5432
6. Use Namespaces
# Separate environments
kubectl create namespace production
kubectl create namespace staging
kubectl create namespace development
7. RBAC (Role-Based Access Control)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp-sa
  namespace: myapp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: myapp-role
  namespace: myapp
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: myapp-rolebinding
  namespace: myapp
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: myapp-role
subjects:
  - kind: ServiceAccount
    name: myapp-sa
    namespace: myapp
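To actually use this ServiceAccount, reference it in the Deployment's Pod template (otherwise Pods keep running under the namespace's default account); a minimal sketch:
spec:
  template:
    spec:
      serviceAccountName: myapp-sa
      containers:
        - name: myapp
          image: myapp:1.0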
Monitoring & Logging
Prometheus + Grafana (Monitoring):
# Install using Helm
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/kube-prometheus-stack
# Access Grafana
kubectl port-forward svc/prometheus-grafana 3000:80
# Visit http://localhost:3000
# Default: admin / prom-operator
ELK Stack (Logging):
# Install Elasticsearch, Logstash, Kibana
helm repo add elastic https://helm.elastic.co
helm install elasticsearch elastic/elasticsearch
helm install kibana elastic/kibana
# Access Kibana
kubectl port-forward svc/kibana-kibana 5601:5601
kubectl Cheat Sheet
Contexts & Config:
kubectl config get-contexts
kubectl config use-context minikube
kubectl config current-context
Namespaces:
kubectl get namespaces
kubectl create namespace dev
kubectl delete namespace dev
kubectl config set-context --current --namespace=dev
Pods:
kubectl get pods
kubectl get pods -n myapp
kubectl get pods -o wide
kubectl describe pod <pod-name>
kubectl logs <pod-name>
kubectl logs -f <pod-name> # Follow
kubectl logs <pod-name> -c <container-name>
kubectl exec -it <pod-name> -- sh
kubectl delete pod <pod-name>
Deployments:
kubectl get deployments
kubectl describe deployment <name>
kubectl edit deployment <name>
kubectl scale deployment <name> --replicas=5
kubectl rollout status deployment/<name>
kubectl rollout history deployment/<name>
kubectl rollout undo deployment/<name>
kubectl delete deployment <name>
Services:
kubectl get services
kubectl describe service <name>
kubectl port-forward service/<name> 8080:80
kubectl delete service <name>
Everything:
kubectl get all
kubectl get all -n myapp
kubectl delete all --all -n myapp # note: 'all' does not include ConfigMaps, Secrets, PVCs, or Ingress
Docker vs Kubernetes Decision Matrix
| Factor | Use Docker Compose | Use Kubernetes |
|---|---|---|
| Team Size | 1-5 developers | 5+ developers |
| Scale | Single server | Multiple servers |
| Complexity | Simple apps | Complex microservices |
| Learning Curve | Days | Weeks/Months |
| Cost | Low | Higher (ops overhead) |
| High Availability | No | Yes |
| Auto-scaling | No | Yes |
| Self-healing | Limited | Yes |
| Multi-cloud | No | Yes |
Recommendation 2022:
- Starting out: Docker + Docker Compose
- Growing: Evaluate Kubernetes
- Production scale: Kubernetes
- Millions of users: Kubernetes + service mesh
Conclusion: Containers Are Standard
Docker:
- Solves “works on my machine”
- Consistent environments
- Easy local development
- Standard for 2022+
Kubernetes:
- Production orchestration
- Auto-scaling & self-healing
- Cloud-agnostic
- Industry standard
Learning Path:
- Master Docker first (weeks)
- Use Docker Compose for multi-container apps
- Learn Kubernetes basics (Minikube)
- Deploy to managed K8s (EKS, GKE, AKS)
- Advanced topics (service mesh, operators)
2022 Reality: Containers are no longer optional.
Key Takeaways:
- Containers solve environment consistency problems
- Dockerfile best practices reduce image size and improve security
- Docker Compose simplifies multi-container development
- Kubernetes orchestrates containers at scale
- Start with Docker, graduate to Kubernetes when needed
- Always set resource limits in production
- Health checks are non-negotiable
- Use ConfigMaps for config, Secrets for sensitive data
- Horizontal Pod Autoscaler enables automatic scaling
- Rolling updates enable zero-downtime deployments
Need help containerizing your application?
We’ve containerized and deployed 100+ applications to production. Kubernetes consultation available.
[Schedule Container Strategy Session →]