
Kubernetes Mastery: From Pods to Production
Master Kubernetes with practical commands and real-world examples on Mac and Windows. Deploy, scale, and manage containerized applications like a pro.
Introduction to Kubernetes
Kubernetes has become the de facto standard for container orchestration, but its complexity can be overwhelming. This comprehensive guide will take you from basic pod management to production-ready deployments with practical, tested commands you can use immediately on both Mac and Windows.
What is Kubernetes?
Kubernetes (K8s) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. Think of it as your application's autopilot: it ensures your containers are running, healthy, and can handle traffic spikes automatically.
Setting Up Your Kubernetes Environment
Before diving into commands, you need a Kubernetes cluster. For learning, minikube, kind (Kubernetes in Docker), or Docker Desktop's built-in Kubernetes are excellent choices for both Mac and Windows. For production, consider managed services like EKS, GKE, or AKS.
Understanding Kubernetes Architecture
Kubernetes follows a control plane/worker node architecture. The control plane manages the cluster state, while worker nodes run your applications. Key components include etcd (the cluster data store), the API server, the scheduler, the controller manager, and the kubelet (node agent) running on each node.
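To see these pieces on a local cluster, list the pods in the kube-system namespace (component names vary slightly between minikube, kind, and Docker Desktop):
# Control plane components typically run as pods in kube-system
kubectl get pods -n kube-system
# Expect entries such as etcd, kube-apiserver, kube-scheduler, kube-controller-manager, and kube-proxy
kubectl get nodes -o wide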
Working with Pods
Pods are the smallest deployable units in Kubernetes. They typically contain one container, though they can host multiple tightly coupled containers that share storage and network.
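As a minimal sketch of the multi-container pattern (the names below are illustrative), two containers share an emptyDir volume: a busybox sidecar writes a file that the nginx container serves.
# multi-container-pod.yaml (illustrative sketch)
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: web
    image: nginx:1.21
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: content-writer
    image: busybox:1.36
    command: ["sh", "-c", "echo 'written by the sidecar' > /data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data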
Deployments and ReplicaSets
While you can create pods directly, Deployments are the recommended way to manage applications. They provide declarative updates, rollback capabilities, and ensure your desired number of replicas are always running.
Services and Networking
Services provide stable network endpoints for your pods. They enable load balancing and service discovery, making your applications reachable within the cluster and, depending on the service type, from outside it.
ConfigMaps and Secrets
Configuration management is crucial for cloud-native applications. ConfigMaps store non-sensitive configuration data, while Secrets handle sensitive information like passwords and API keys.
Persistent Storage
Stateful applications need persistent storage that survives pod restarts. Kubernetes provides various storage options through Persistent Volumes and Persistent Volume Claims.
Monitoring and Troubleshooting
Effective monitoring and debugging are essential for maintaining healthy Kubernetes clusters. The troubleshooting steps below cover the core commands for inspecting cluster state and diagnosing issues.
Production Best Practices
Running Kubernetes in production requires careful consideration of security, resource management, and operational practices. Implement proper RBAC, resource limits, and monitoring from day one.
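As a minimal RBAC sketch (resource names here are illustrative), a Role granting read-only access to pods in one namespace, bound to a service account:
# rbac-readonly.yaml (illustrative sketch)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: development
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: development
subjects:
- kind: ServiceAccount
  name: app-service-account
  namespace: development
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
# Verify what the bound service account is allowed to do
kubectl auth can-i list pods --as=system:serviceaccount:development:app-service-account -n development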
Conclusion
Kubernetes might seem complex initially, but mastering these fundamental concepts and commands will give you a solid foundation. Start small, practice regularly, and gradually incorporate more advanced features as your confidence grows.
Step-by-Step Guide
Install kubectl and Setup - Mac
# Method 1: Using Homebrew (recommended)
brew install kubectl
# Method 2: Direct download
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
# Install minikube for local development
brew install minikube
# Alternative: Direct download
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/
# Start minikube cluster
minikube start --driver=docker
# Check cluster status
kubectl cluster-info
# Enable Docker Desktop Kubernetes (Alternative)
# Docker Desktop > Preferences > Kubernetes > Enable Kubernetes
Mac users have multiple options for Kubernetes setup. Homebrew is the easiest method for both kubectl and minikube. Docker Desktop's built-in Kubernetes is another convenient option that integrates well with Docker workflows.
Install kubectl and Setup - Windows
# Method 1: Using Chocolatey
choco install kubernetes-cli
# Method 2: Using Scoop
scoop install kubectl
# Method 3: Using winget
winget install Kubernetes.kubectl
# Method 4: Direct download (PowerShell)
curl.exe -LO "https://dl.k8s.io/release/v1.28.0/bin/windows/amd64/kubectl.exe"  # replace v1.28.0 with the current stable version
# Add kubectl.exe to your PATH
# Install minikube
choco install minikube
# Or using winget
winget install Kubernetes.minikube
# Start minikube cluster (PowerShell or Command Prompt)
minikube start --driver=docker
# Check cluster status
kubectl cluster-info
# Enable Docker Desktop Kubernetes (Alternative)
# Docker Desktop > Settings > Kubernetes > Enable Kubernetes
Windows users can use package managers like Chocolatey, Scoop, or winget for easy installation. Docker Desktop's Kubernetes integration works excellently on Windows and is often the simplest setup for beginners.
Your First Pod
# Create a simple nginx pod
kubectl run my-nginx --image=nginx:1.21 --port=80
# Check if pod is running
kubectl get pods
# Get detailed pod information
kubectl describe pod my-nginx
# View pod logs
kubectl logs my-nginx
# Execute commands inside the pod
kubectl exec -it my-nginx -- bash
# The shell runs inside the container, so this works the same from Mac or Windows
# If the image doesn't include bash, fall back to sh
kubectl exec -it my-nginx -- sh
# Access the pod via port forwarding
kubectl port-forward pod/my-nginx 8080:80
# Open browser (Mac)
open http://localhost:8080
# Open browser (Windows)
start http://localhost:8080
# Delete the pod
kubectl delete pod my-nginx
The 'kubectl run' command creates a pod directly. Port forwarding works identically on Mac and Windows. The browser opening commands differ between platforms. Windows users can use either PowerShell or Command Prompt for kubectl commands.
Creating Pods with YAML
# Create pod-definition.yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp-pod
  labels:
    app: webapp
    environment: development
spec:
  containers:
  - name: webapp-container
    image: nginx:1.21
    ports:
    - containerPort: 80
    env:
    - name: ENV
      value: "development"
    - name: PLATFORM
      value: "cross-platform"
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    # Health checks work the same on all platforms
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
YAML definitions work identically on Mac and Windows. This pod includes platform-agnostic configuration with labels, environment variables, and resource constraints. Health checks ensure reliable deployments across all platforms.
Apply and Manage YAML Resources
# Apply the pod configuration (Mac/Windows)
kubectl apply -f pod-definition.yaml
# Windows users can also use backslashes for local paths
# kubectl apply -f .\pod-definition.yaml
# View all pods with labels
kubectl get pods --show-labels
# Filter pods by label
kubectl get pods -l app=webapp
# Edit a running pod (opens default editor)
# Mac: Usually opens nano or vim
# Windows: Opens notepad or configured editor
kubectl edit pod webapp-pod
# Set editor preference via the KUBE_EDITOR (or EDITOR) environment variable
# Mac/Linux: export KUBE_EDITOR=nano
# Windows (cmd): set KUBE_EDITOR=notepad
# Windows (PowerShell): $env:KUBE_EDITOR="notepad"
# Get pod YAML definition
kubectl get pod webapp-pod -o yaml > pod-output.yaml
# Delete using YAML file
kubectl delete -f pod-definition.yaml
The 'apply' command works identically on both platforms. Windows users can use either forward slashes or backslashes for file paths. The default editor for 'kubectl edit' varies by platform but can be configured using environment variables.
Working with Deployments
# Create a deployment
kubectl create deployment web-app --image=nginx:1.21 --replicas=3
# Scale the deployment
kubectl scale deployment web-app --replicas=5
# Check deployment status
kubectl get deployments
kubectl get replicasets
kubectl get pods
# Update deployment image (rolling update)
kubectl set image deployment/web-app nginx=nginx:1.22
# Check rollout status
kubectl rollout status deployment/web-app
# View rollout history
kubectl rollout history deployment/web-app
# Rollback to previous version
kubectl rollout undo deployment/web-app
# Monitor pods during rollout (works great in PowerShell/Terminal)
kubectl get pods -w
Deployments manage ReplicaSets, which manage Pods. This provides high availability and rolling updates. The watch flag (-w) works well in both Mac Terminal and Windows PowerShell for monitoring real-time changes.
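To see that ownership chain for the web-app deployment created above (grep is Mac/Linux; use Select-String in PowerShell):
# The deployment created a ReplicaSet carrying the same app label
kubectl get replicasets -l app=web-app
# Each ReplicaSet records which Deployment controls it
kubectl describe replicaset -l app=web-app | grep "Controlled By"
# Pods in turn point back to their ReplicaSet via ownerReferences
kubectl get pods -l app=web-app -o jsonpath='{.items[0].metadata.ownerReferences[0].kind}'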
Deployment YAML Configuration
# Create deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
  labels:
    app: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-container
        image: nginx:1.21
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 20
        # Resource limits prevent any single container from consuming all resources
        resources:
          requests:
            memory: "128Mi"
            cpu: "250m"
          limits:
            memory: "256Mi"
            cpu: "500m"
This deployment includes health checks that work across all platforms. Readiness probes determine when a pod is ready to receive traffic, while liveness probes detect when a pod needs to be restarted. Resource limits are crucial for stable multi-tenant clusters.
Services and Load Balancing
# Create a ClusterIP service (internal access only)
kubectl expose deployment web-deployment --port=80 --target-port=80 --type=ClusterIP
# Create a NodePort service (external access via node IP)
kubectl expose deployment web-deployment --port=80 --target-port=80 --type=NodePort --name=web-nodeport
# On minikube, get the service URL
minikube service web-nodeport --url
# On Docker Desktop (Mac/Windows), access via localhost:nodePort
kubectl get svc web-nodeport
# Create a LoadBalancer service (cloud provider load balancer)
kubectl expose deployment web-deployment --port=80 --target-port=80 --type=LoadBalancer --name=web-loadbalancer
# List all services
kubectl get services
kubectl get svc
# Get service details
kubectl describe service web-deployment
# Port forwarding for local testing (Mac/Windows)
kubectl port-forward service/web-deployment 8080:80
Services provide stable network endpoints. ClusterIP is for internal communication, while NodePort exposes a service on a port of each node. On minikube, LoadBalancer services stay in Pending until you run 'minikube tunnel'; Docker Desktop exposes them on localhost. NodePort or port forwarding also work for local testing.
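On minikube, for example, you can give LoadBalancer services a reachable address with the tunnel command (run it in a separate terminal and leave it running):
# Terminal 1: route traffic to LoadBalancer services
minikube tunnel
# Terminal 2: EXTERNAL-IP should change from <pending> to an address
kubectl get svc web-loadbalancer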
Service YAML Definition
# Create service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
  labels:
    app: web-app
spec:
  selector:
    app: web-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    name: http
  type: ClusterIP
---
# NodePort service for external access
apiVersion: v1
kind: Service
metadata:
  name: web-service-external
spec:
  selector:
    app: web-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080
  type: NodePort
# Apply both services
# kubectl apply -f service.yaml
The selector matches pods with the specified labels. Multiple services can target the same pods. The '---' separator allows multiple resources in one file. NodePort services require a port in the 30000-32767 range and work consistently across Mac and Windows.
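To confirm which pod IPs a selector actually matched, compare the service's Endpoints with the pods themselves:
# Endpoints list the pod IPs currently backing the service
kubectl get endpoints web-service
# Cross-check against the pod IPs
kubectl get pods -l app=web-app -o wide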
ConfigMaps and Environment Variables
# Create ConfigMap from literal values
kubectl create configmap app-config --from-literal=database_url=mongodb://localhost:27017 --from-literal=debug_mode=true
# Create ConfigMap from file (Mac/Windows compatible paths)
echo "database.host=localhost" > app.properties
echo "database.port=5432" >> app.properties
kubectl create configmap app-config-file --from-file=app.properties
# Windows PowerShell alternative
# echo "database.host=localhost" | Out-File -Encoding UTF8 app.properties
# echo "database.port=5432" | Add-Content -Encoding UTF8 app.properties
# Create ConfigMap from YAML
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-settings
data:
  database_url: "postgresql://localhost:5432"
  log_level: "info"
  platform: "cross-platform"
  feature_flags: |
    feature1=enabled
    feature2=disabled
    cross_platform_support=true
EOF
# View ConfigMap
kubectl get configmaps
kubectl describe configmap app-settings
ConfigMaps store non-sensitive configuration data. File creation differs slightly between Mac/Linux (echo >>) and Windows PowerShell (Out-File/Add-Content). The pipe symbol '|' preserves multi-line values, useful for configuration files across all platforms.
Using ConfigMaps in Pods
# pod-with-configmap.yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp-with-config
spec:
  containers:
  - name: webapp
    image: nginx:1.21
    env:
    # Single environment variable from ConfigMap
    - name: DATABASE_URL
      valueFrom:
        configMapKeyRef:
          name: app-settings
          key: database_url
    - name: PLATFORM
      valueFrom:
        configMapKeyRef:
          name: app-settings
          key: platform
    # All keys from ConfigMap as environment variables
    envFrom:
    - configMapRef:
        name: app-settings
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
      readOnly: true
    ports:
    - containerPort: 80
  volumes:
  - name: config-volume
    configMap:
      name: app-settings
      # Set file permissions (important for security)
      defaultMode: 0644
ConfigMaps can be consumed as environment variables or mounted as files. Environment variables are good for simple configs, while volume mounts work better for configuration files. File permissions are important for security and work consistently across platforms.
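Assuming the webapp-with-config pod above is running, you can verify both consumption styles from inside the container:
# Environment variables injected from the ConfigMap
kubectl exec webapp-with-config -- env
# Keys mounted as individual files under /etc/config
kubectl exec webapp-with-config -- ls /etc/config
kubectl exec webapp-with-config -- cat /etc/config/feature_flags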
Secrets Management
# Create secret from literal values
kubectl create secret generic app-secrets --from-literal=username=admin --from-literal=password=complexpassword123
# Create secret from files (Mac/Linux)
echo -n 'admin' > username.txt
echo -n 'complexpassword123' > password.txt
kubectl create secret generic app-secrets-file --from-file=username.txt --from-file=password.txt
# Windows PowerShell version (-NoNewline avoids a trailing newline ending up in the secret)
# Set-Content -NoNewline -Path username.txt -Value 'admin'
# Set-Content -NoNewline -Path password.txt -Value 'complexpassword123'
# Create secret using YAML (base64 encoded)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: database-secret
type: Opaque
data:
  username: YWRtaW4= # base64 encoded 'admin'
  password: Y29tcGxleHBhc3N3b3JkMTIz # base64 encoded 'complexpassword123'
stringData:
  # stringData automatically encodes values
  api_key: "plain-text-api-key-here"
EOF
# View secrets (values are hidden)
kubectl get secrets
kubectl describe secret database-secret
Secrets store sensitive data like passwords and API keys. File creation syntax differs between platforms. The stringData field automatically base64 encodes values, making it easier to work with. Never commit secrets to version control regardless of platform.
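kubectl hides secret values in get and describe output; when you need to read one back for debugging, decode it explicitly (and avoid doing so on shared screens):
# Mac/Linux: decode a single key (older macOS may need 'base64 -D')
kubectl get secret database-secret -o jsonpath='{.data.password}' | base64 --decode
# Windows PowerShell equivalent
[Text.Encoding]::UTF8.GetString([Convert]::FromBase64String((kubectl get secret database-secret -o jsonpath='{.data.password}')))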
Using Secrets in Deployments
# deployment-with-secrets.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secure-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: secure-app
  template:
    metadata:
      labels:
        app: secure-app
    spec:
      containers:
      - name: app-container
        image: nginx:1.21
        env:
        - name: DB_USERNAME
          valueFrom:
            secretKeyRef:
              name: database-secret
              key: username
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: database-secret
              key: password
        - name: API_KEY
          valueFrom:
            secretKeyRef:
              name: database-secret
              key: api_key
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secrets
          readOnly: true
        ports:
        - containerPort: 80
      volumes:
      - name: secret-volume
        secret:
          secretName: database-secret
          # Set restrictive permissions
          defaultMode: 0400
Secrets can be injected as environment variables or mounted as files. Mounting as files is more secure as the values aren't visible in process lists on any platform. The readOnly flag and restrictive permissions (0400) prevent accidental modification and enhance security.
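With the secure-app deployment above running, a quick check confirms the mount and environment injection:
# Secret files are mounted read-only with the mode from defaultMode
kubectl exec deploy/secure-app -- ls -l /etc/secrets
# Environment variables from secretKeyRef are visible to the process
kubectl exec deploy/secure-app -- printenv DB_USERNAME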
Persistent Volumes and Claims
# Create a PersistentVolume (Mac/Linux path)
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-mac
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  hostPath:
    path: /tmp/k8s-data
EOF
# Create a PersistentVolume (Windows path)
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-windows
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage-windows
  hostPath:
    path: /c/k8s-data # maps to C:\k8s-data in Docker Desktop
EOF
# Create a PersistentVolumeClaim (platform-agnostic)
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  storageClassName: local-storage
EOF
PersistentVolumes (PV) represent storage resources with platform-specific paths. On Windows with Docker Desktop, paths are mapped to the Linux VM. PersistentVolumeClaims (PVC) are platform-agnostic requests for storage, providing abstraction from underlying storage details.
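Before using the claim in a pod, confirm it bound to a volume:
# STATUS should read Bound once the PVC matches a PV
kubectl get pv
kubectl get pvc data-pvc
# If the claim stays Pending, the events shown by describe usually explain why
kubectl describe pvc data-pvc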
Using Persistent Storage in Pods
# pod-with-storage.yaml
apiVersion: v1
kind: Pod
metadata:
  name: data-pod
spec:
  containers:
  - name: data-container
    image: nginx:1.21
    volumeMounts:
    - name: data-volume
      mountPath: /usr/share/nginx/html
    - name: logs-volume
      mountPath: /var/log/nginx
    ports:
    - containerPort: 80
  volumes:
  - name: data-volume
    persistentVolumeClaim:
      claimName: data-pvc
  - name: logs-volume
    emptyDir: {}
# Test persistent storage (works on Mac/Windows)
kubectl apply -f pod-with-storage.yaml
# Create test content
kubectl exec -it data-pod -- bash -c "echo '<h1>Hello from persistent storage!</h1><p>Platform: Cross-platform</p>' > /usr/share/nginx/html/index.html"
# Test access via port forwarding
kubectl port-forward pod/data-pod 8080:80
# Open browser (Mac)
# open http://localhost:8080
# Open browser (Windows)
# start http://localhost:8080
# Delete and recreate pod to test persistence
kubectl delete pod data-pod
kubectl apply -f pod-with-storage.yaml
kubectl exec -it data-pod -- cat /usr/share/nginx/html/index.html
Persistent storage survives pod restarts and rescheduling on all platforms. This example demonstrates that data written to the persistent volume remains available even after the pod is deleted and recreated. EmptyDir volumes are temporary and platform-agnostic.
Namespaces and Resource Organization
# Create namespaces
kubectl create namespace development
kubectl create namespace production
kubectl create namespace testing
# List all namespaces
kubectl get namespaces
kubectl get ns
# Create resources in specific namespace
kubectl run test-pod --image=nginx:1.21 --namespace=development
# Set default namespace for kubectl context
kubectl config set-context --current --namespace=development
# View current context and namespace
kubectl config current-context
kubectl config view --minify --output 'jsonpath={..namespace}'
# List resources in all namespaces
kubectl get pods --all-namespaces
kubectl get pods -A
# Create namespace-specific resources
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: platform-demo
  labels:
    environment: demo
    platform: cross-platform
EOF
# Delete namespace (and all resources in it)
kubectl delete namespace testing
Namespaces provide resource isolation and organization across all platforms. They're essential for multi-tenant clusters and for separating environments. Context management works identically on Mac and Windows. Be careful when deleting a namespace: it removes every resource inside it.
Troubleshooting and Debugging
# Check cluster components ('componentstatuses' is deprecated in newer Kubernetes versions)
kubectl get componentstatuses
kubectl get nodes -o wide
# Describe resources for detailed information
kubectl describe pod problem-pod
kubectl describe node node-name
# View logs from pods (same on Mac/Windows)
kubectl logs pod-name
kubectl logs pod-name -c container-name # multi-container pods
kubectl logs pod-name --previous # logs from previous container instance
kubectl logs pod-name --follow # stream logs in real-time
# Get events (crucial for troubleshooting)
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl get events --field-selector involvedObject.name=pod-name
# Debug running pods
kubectl exec -it pod-name -- /bin/bash
kubectl exec -it pod-name -- sh # for alpine-based images
# Port forwarding for debugging (Mac/Windows)
kubectl port-forward pod/pod-name 8080:80
kubectl port-forward service/service-name 8080:80
kubectl port-forward deployment/deployment-name 8080:80
# Resource usage (requires metrics-server)
kubectl top nodes
kubectl top pods
kubectl top pods --all-namespaces
# Debug networking
kubectl run debug-pod --image=nicolaka/netshoot -it --rm -- bash
Effective troubleshooting commands work identically across platforms. Events provide a timeline of cluster activities. Port forwarding is invaluable for debugging services locally. The netshoot container provides networking tools for debugging connectivity issues in any environment.
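On clusters where ephemeral containers are available (enabled by default since Kubernetes 1.25), kubectl debug attaches tooling to a running pod without restarting it; the pod and node names below are placeholders:
# Attach a temporary debugging container to an existing pod
kubectl debug pod-name -it --image=nicolaka/netshoot -- bash
# Debug a node by launching a pod with access to the host filesystem
kubectl debug node/node-name -it --image=busybox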
Advanced kubectl Commands
# Apply all YAML files in a directory (Mac/Windows)
kubectl apply -f ./manifests/
# Windows: kubectl apply -f .\manifests\
# Watch resources in real-time (great in PowerShell/Terminal)
kubectl get pods -w
kubectl get events -w --sort-by=.metadata.creationTimestamp
# Output in different formats
kubectl get pods -o wide
kubectl get pods -o json
kubectl get pods -o yaml
# Use JSONPath for specific data extraction
kubectl get pods -o jsonpath='{.items[*].metadata.name}'
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
# Custom columns
kubectl get pods -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName
# Dry run to test configurations
kubectl apply -f deployment.yaml --dry-run=client -o yaml
kubectl apply -f deployment.yaml --dry-run=server
# Force delete stuck resources (use with caution)
kubectl delete pod stuck-pod --force --grace-period=0
# Patch resources dynamically
kubectl patch deployment web-app -p '{"spec":{"replicas":5}}'
# Scale resources
kubectl scale deployment web-app --replicas=3
kubectl scale statefulset database --replicas=2
These advanced commands are essential for production operations on any platform. Watch mode (-w) works excellently in both Mac Terminal and Windows PowerShell. JSONPath and custom columns help extract specific data from complex outputs. Dry runs validate configurations before applying them.
Resource Management and Limits
# Create a ResourceQuota for a namespace
kubectl apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: development
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    persistentvolumeclaims: "4"
    pods: "10"
    services: "5"
    configmaps: "10"
    secrets: "10"
EOF
# Create a LimitRange for default limits
kubectl apply -f - <<EOF
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range
  namespace: development
spec:
  limits:
  - default:
      memory: "512Mi"
      cpu: "500m"
    defaultRequest:
      memory: "256Mi"
      cpu: "250m"
    type: Container
  - max:
      memory: "1Gi"
      cpu: "1000m"
    min:
      memory: "64Mi"
      cpu: "100m"
    type: Container
EOF
# Check resource usage and quotas
kubectl describe quota compute-quota -n development
kubectl describe limitrange limit-range -n development
kubectl top pods -n development
kubectl get resourcequota -A
ResourceQuotas prevent any single namespace from consuming all cluster resources, crucial for stability on any platform. LimitRanges set default resource requests and limits for containers. These policies ensure fair resource distribution and prevent resource starvation in multi-tenant environments.
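To watch the LimitRange defaults being applied, create a pod with no resources specified and inspect what the API server filled in:
# Pod created without explicit resource requests or limits
kubectl run limits-test --image=nginx:1.21 -n development
# The container now carries the namespace defaults from the LimitRange
kubectl get pod limits-test -n development -o jsonpath='{.spec.containers[0].resources}'
kubectl delete pod limits-test -n development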
Platform-Specific Tips and Tools
# Mac-specific tools and tips
# Use kubectl with shell completion (bash)
brew install bash-completion
echo 'source <(kubectl completion bash)' >> ~/.bash_profile
# zsh (the default shell on modern macOS)
echo 'source <(kubectl completion zsh)' >> ~/.zshrc
# Install helpful tools
brew install k9s # Terminal-based Kubernetes UI
brew install kubectx # Switch between contexts easily
brew install helm # Package manager for Kubernetes
# Windows-specific tools and tips
# PowerShell completion
kubectl completion powershell | Out-String | Invoke-Expression
# Install helpful tools with Chocolatey
choco install k9s
choco install kubectx
choco install kubernetes-helm
# Or with winget (package IDs can vary; confirm with 'winget search k9s' and 'winget search helm')
winget install k9s
winget install helm
# Check platform architecture and compatibility
kubectl version --client --output=yaml
kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.architecture}'
# Multi-platform cluster info
kubectl cluster-info dump | grep -i platform
# Windows PowerShell: kubectl cluster-info dump | Select-String -Pattern platform
# Platform-agnostic aliases (add to shell profile)
# alias k=kubectl
# alias kgp='kubectl get pods'
# alias kgs='kubectl get svc'
# alias kgd='kubectl get deployment'
Platform-specific optimizations enhance the Kubernetes experience. Shell completion significantly improves productivity. Tools like k9s provide excellent terminal-based cluster management. Aliases and shortcuts work well on both platforms and can greatly speed up daily operations.
Pro Tips & Best Practices
Use 'docker system prune' regularly to clean up unused resources.
Always use specific image tags instead of 'latest' in production (see the example after this list).
Use '.dockerignore' files to exclude unnecessary files.
Multi-stage builds can significantly reduce image sizes.
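The image-tag tip applies equally to Kubernetes manifests; a quick sketch of pinning versions (the tag below matches the one used earlier in this guide):
# Avoid floating tags in production manifests
#   image: nginx:latest   # resolves to different builds over time
# Pin an explicit version instead
#   image: nginx:1.22
# Or move an existing deployment to a pinned tag
kubectl set image deployment/web-app nginx=nginx:1.22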