Google recently open-sourced Kubernetes, their container orchestration system, and I’ve been kicking the tires. After spending a few weeks with it, I’m convinced this is the future of how we’ll deploy and manage containerized applications.

Kubernetes (k8s for short—8 letters between ‘k’ and ‘s’) is complex and still rough around the edges, but it solves real problems that I’ve been struggling with using Docker alone.

Why Orchestration Matters

Running containers with docker run works fine for a few containers on one host. But when you need:

  • Dozens or hundreds of containers
  • Scheduling across multiple hosts
  • Automatic failover
  • Service discovery
  • Load balancing
  • Rolling updates

You need orchestration. I’ve been using Docker Swarm, but Kubernetes takes a different approach that feels more powerful.

Core Concepts

Kubernetes introduces several concepts that took me a while to wrap my head around.

Pods

A Pod is the smallest deployable unit in Kubernetes. It’s one or more containers that share networking and storage:

apiVersion: v1
kind: Pod
metadata:
  name: encryption-service
spec:
  containers:
  - name: app
    image: encryption-service:v1.0
    ports:
    - containerPort: 8443
  - name: log-shipper
    image: log-shipper:v1.0

The pattern of putting a main container plus helper containers (sidecars) in a Pod is powerful. I’m using it for logging, metrics collection, and TLS termination.

Services

Pods are ephemeral—they come and go. Services provide stable endpoints:

apiVersion: v1
kind: Service
metadata:
  name: encryption-service
spec:
  selector:
    app: encryption
  ports:
  - port: 443
    targetPort: 8443
  type: ClusterIP

The Service load balances across all Pods carrying the label app: encryption. As Pods are created and destroyed, its endpoints update automatically.
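
A quick way to see which Pods the Service is currently routing to is to inspect its endpoints. Inside the cluster, other Pods can also reach it by name through cluster DNS (assuming kube-dns is running; the -k is only there because my cert is self-signed):

# Which Pod IPs back the Service right now?
kubectl get endpoints encryption-service

# From another Pod in the same namespace, the Service is reachable by name
curl -k https://encryption-service/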

Deployments

Deployments describe desired state and Kubernetes makes it happen:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: encryption-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: encryption
  template:
    metadata:
      labels:
        app: encryption
    spec:
      containers:
      - name: encryption-service
        image: encryption-service:v1.0
        resources:
          limits:
            memory: "512Mi"
            cpu: "500m"

Kubernetes ensures 3 replicas are always running. If a Pod crashes, it’s automatically replaced. If a node dies, Pods are rescheduled on healthy nodes.

What Makes Kubernetes Different

Declarative Configuration

With Docker, I was writing scripts that imperatively create containers. With Kubernetes, I declare what I want and it figures out how to get there.

This is a mental shift. Instead of:

# Imperative
docker run -d --name app1 myapp
docker run -d --name app2 myapp
docker run -d --name app3 myapp

I write:

# Declarative
replicas: 3

Kubernetes reconciles reality with desired state continuously. If reality drifts (containers crash, nodes die), Kubernetes corrects it.
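
Scaling is just another change to desired state. A small example against the Deployment above:

# Change desired state from 3 to 5 replicas
kubectl scale deployment/encryption-deployment --replicas=5

# Watch Kubernetes reconcile: two new Pods appear
kubectl get pods -l app=encryption --watch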

Labels and Selectors

Everything in Kubernetes is organized using labels—arbitrary key-value pairs:

metadata:
  labels:
    app: encryption
    tier: backend
    environment: production
    version: v1.0

Services, deployments, and other resources use selectors to find what they manage:

selector:
  matchLabels:
    app: encryption
    environment: production

This is incredibly flexible. I can deploy multiple versions side-by-side and route traffic based on labels.
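
Labels also make slicing and dicing from the command line easy, using the labels from the example above:

# All production encryption Pods
kubectl get pods -l app=encryption,environment=production

# Only the v1.0 Pods
kubectl get pods -l app=encryption,version=v1.0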

Self-Healing

Kubernetes constantly monitors desired vs actual state and takes corrective action:

  • Pod crashes → Restart it
  • Node dies → Reschedule Pods on healthy nodes
  • Health check fails → Restart Pod
  • Too many failures → Stop sending traffic (but keep trying to restart)

This is huge for availability. I don’t need complex monitoring scripts that restart services. Kubernetes does it automatically.
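
The restart and traffic decisions are driven by liveness and readiness probes you define per container. A minimal sketch (the /healthz endpoint is something my app would have to expose; Kubernetes doesn't provide it):

containers:
- name: encryption-service
  image: encryption-service:v1.0
  livenessProbe:           # failing -> the container is restarted
    httpGet:
      path: /healthz
      port: 8443
      scheme: HTTPS
    initialDelaySeconds: 10
    periodSeconds: 5
  readinessProbe:          # failing -> the Pod is removed from Service endpoints
    httpGet:
      path: /healthz
      port: 8443
      scheme: HTTPS
    periodSeconds: 5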

Security Considerations

Kubernetes is powerful but introduces new security concerns.

API Server Security

Everything in Kubernetes goes through the API server. It’s the heart of the control plane, so securing it is critical.

Currently, I’m using:

  1. TLS for all API communication
  2. Authentication via client certificates
  3. Authorization via RBAC (still experimental, but coming)
  4. Network policies to restrict Pod-to-Pod communication

The API server is also where I store configuration, including secrets. More on that below.

Secrets Management

Kubernetes has a Secret resource type:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=  # base64 encoded
  password: cGFzc3dvcmQ=
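
Hand-encoding base64 is error-prone; kubectl can build the same Secret for you:

# Equivalent to the YAML above
kubectl create secret generic db-credentials \
    --from-literal=username=admin \
    --from-literal=password=password

# Or encode values yourself (-n avoids a trailing newline sneaking in)
echo -n 'admin' | base64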

Secrets are mounted into Pods as files or environment variables:

spec:
  containers:
  - name: app
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password

However, Kubernetes Secrets are only base64 encoded by default, not encrypted. They’re stored in etcd (Kubernetes’ backing store). If someone gains access to etcd, they get all secrets.

For production, I’m investigating:

  1. Encryption at rest for etcd (coming in future Kubernetes versions)
  2. External secrets management (Vault integration)
  3. Envelope encryption for secrets

This is an area where Kubernetes needs to mature.

Network Policies

By default, all Pods can communicate with all other Pods. For security services, this is unacceptable.

Network Policies let you restrict traffic:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: encryption-service-policy
spec:
  podSelector:
    matchLabels:
      app: encryption
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: backend
    ports:
    - port: 8443

Only Pods with label role: backend can access the encryption service. Everything else is blocked.

Network policies are still evolving, but they’re essential for multi-tenant security.
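
A pattern I like here is a default-deny policy in the namespace, with explicit allows like the one above layered on top. A sketch:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}      # selects every Pod in the namespace
  policyTypes:
  - Ingress            # no ingress rules listed -> all inbound traffic is denied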

Running Kubernetes

Setting up Kubernetes is not trivial. The documentation is improving, but there are many moving parts.

I’m currently running a small cluster:

  • 3 master nodes (for high availability)
  • 5 worker nodes
  • etcd cluster (3 nodes)

Installation Challenges

Some challenges I encountered:

  1. Networking is complex: Choosing between networking solutions (flannel, calico, weave) and getting it configured correctly
  2. Certificate management: Kubernetes uses certificates extensively. Managing them is a pain
  3. etcd: Setting up a reliable etcd cluster requires understanding distributed consensus
  4. Version compatibility: Kubernetes moves fast. Ensuring component versions are compatible is tricky

There are tools to help (kops, kubeadm), but they’re still young. I ended up doing a lot of manual configuration to understand how everything works.
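
For what it’s worth, kubeadm reduces the bootstrap to roughly two commands (a sketch; the real join command, including its bootstrap token, is printed by init):

# On the master
kubeadm init

# On each worker, paste the join command that init printed
kubeadm join --token <token> <master-ip>:6443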

Development Workflow

For development, I’m using minikube—a tool that runs a single-node Kubernetes cluster in a VM. It’s great for testing configurations locally before deploying to the real cluster.

# Start minikube
minikube start

# Deploy application
kubectl apply -f deployment.yaml

# Check status
kubectl get pods

# View logs
kubectl logs encryption-service-xyz

# Delete deployment
kubectl delete -f deployment.yaml

Real-World Usage

I’m currently running a few non-critical services on Kubernetes to gain experience. Here’s what I’ve learned:

Rolling Updates

Kubernetes makes rolling updates easy:

# Update image
kubectl set image deployment/encryption-deployment \
    encryption-service=encryption-service:v2.0

# Watch rollout
kubectl rollout status deployment/encryption-deployment

# Rollback if needed
kubectl rollout undo deployment/encryption-deployment

The deployment automatically:

  1. Creates Pods with new version
  2. Waits for them to be healthy
  3. Terminates old Pods
  4. Repeats until all replicas are updated

If the new Pods fail their health checks, the rollout stalls instead of replacing the remaining healthy Pods.
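
How aggressive the rollout is can be tuned on the Deployment. A sketch of a conservative setting:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # at most one extra Pod above the desired replica count
      maxUnavailable: 0     # never drop below the desired replica count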

Resource Management

Kubernetes lets you specify resource requests and limits:

resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"

The scheduler uses requests to decide which node to place a Pod on (cpu: "500m" means 500 millicores, half a core). Limits cap what a running Pod can actually consume.

This is essential for multi-tenant clusters. Without limits, one noisy neighbor can starve others.
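
Per-container settings can be backstopped at the namespace level. A LimitRange applies defaults to containers that don’t specify anything; a sketch mirroring the values above:

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
  - type: Container
    default:             # used as the limit when a container sets none
      memory: "512Mi"
      cpu: "500m"
    defaultRequest:      # used as the request when a container sets none
      memory: "256Mi"
      cpu: "250m"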

Monitoring and Debugging

Kubernetes provides good visibility:

# Pod status
kubectl get pods

# Detailed info
kubectl describe pod encryption-service-xyz

# Logs
kubectl logs encryption-service-xyz

# Execute command in Pod
kubectl exec -it encryption-service-xyz -- /bin/sh

# Port forward for debugging
kubectl port-forward encryption-service-xyz 8443:8443

I’m also integrating with monitoring systems (Prometheus, Grafana) to get metrics from Kubernetes and applications.
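
For Prometheus specifically, the common convention is to annotate Pods so the scrape config can discover them. These annotations are just a convention, not a Kubernetes or Prometheus built-in, and only work if the Prometheus configuration is set up to honor them; the port and path are assumptions about where my app would expose metrics:

metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"     # hypothetical metrics port
    prometheus.io/path: "/metrics"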

Current Limitations

Kubernetes is powerful but immature. Some pain points:

  1. Complexity: Steep learning curve. Lots of concepts to understand.
  2. Stability: APIs are still changing. Upgrades can be breaking.
  3. Documentation: Improving but still gaps and outdated information.
  4. Secrets management: Not production-ready for sensitive data yet.
  5. Stateful workloads: Kubernetes is designed for stateless apps. Stateful workloads (databases, etc.) are harder.
  6. Maturity: Still rough edges. Be prepared to troubleshoot.

Why I’m Excited

Despite the challenges, I’m bullish on Kubernetes:

  1. Google’s experience: It’s based on Google’s internal systems (Borg, Omega). They run at massive scale on these patterns.

  2. Community: Strong community forming. Lots of contributions. Rapid iteration.

  3. Extensibility: Kubernetes is designed to be extended. Custom resource types, schedulers, etc.

  4. Cloud-native: Designed for the cloud. Works on AWS, GCP, Azure, and on-premises.

  5. Declarative: The declarative model is powerful. Infrastructure as code.

Next Steps

I’m planning to:

  1. Move more workloads to Kubernetes: Gain operational experience
  2. Explore StatefulSets: For stateful workloads like databases
  3. Integrate secrets management: Likely Vault
  4. Set up CI/CD: Automate deployment to Kubernetes
  5. Implement monitoring: Prometheus and Grafana
  6. Study networking: Deep dive on network policies and service meshes

Should You Use Kubernetes?

For production? Probably not yet, unless you have significant operational expertise and can handle the complexity.

For experimentation and learning? Absolutely. Kubernetes represents the future of container orchestration.

I expect 2016 will see Kubernetes mature significantly. Companies are already building on it. Tools are improving. The ecosystem is growing.

If you’re invested in containers, start learning Kubernetes now. It’s going to be important.

Resources

Some resources I’ve found helpful:

  • Official Kubernetes documentation (improving constantly)
  • Kubernetes Slack channel (active, helpful community)
  • Kelsey Hightower’s tutorials and talks (excellent)
  • Cloud provider guides (GKE, EKS documentation)

Final Thoughts

Kubernetes is complex, immature, and sometimes frustrating. It’s also powerful, well-designed, and the best container orchestration system I’ve used.

The concepts—declarative configuration, self-healing, labels and selectors—are elegant and powerful. As the tooling matures and documentation improves, I expect Kubernetes to become the standard for container orchestration.

I’m excited to be learning it early. The skills will be valuable as Kubernetes adoption grows.

In future posts, I’ll dive deeper into Kubernetes security, monitoring, and operational patterns.

If you’re working with containers at scale, take a look at Kubernetes. It might just be the orchestration system you’ve been looking for.