Containers are everywhere now. We’ve been running containerized workloads in production for about a year, and the security landscape is completely different from traditional VM-based deployments.
Containers promise isolation, but they’re not VMs. They share the host kernel. A vulnerability in one container can potentially compromise the entire host. After several security incidents and countless hours hardening our container infrastructure, I’ve learned what works and what doesn’t.
Here’s what container security actually looks like in production.
The Container Security Model
First, understand what containers actually provide:
Containers give you:
- Process isolation via namespaces
- Resource limits via cgroups
- Filesystem isolation
- Network isolation
Containers don’t give you:
- Kernel-level isolation (they share the host kernel)
- Hardware virtualization boundaries
- Complete security by default
The kernel is the shared attack surface. If a container exploit can escape to the kernel, it can affect other containers or the host.
Layer 1: Image Security
Security starts with the container image. Most vulnerabilities come from what you put in the image.
Minimal Base Images
Don’t start with a full OS. Use minimal base images:
# Bad: Full Ubuntu image
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y \
    python3 \
    python3-pip \
    gcc \
    make \
    curl \
    wget \
    vim
# ... plus tons of packages you don't need
# Good: Minimal Alpine image
FROM alpine:3.4
RUN apk add --no-cache python3
The Ubuntu image has hundreds of packages. Each package is a potential vulnerability. Alpine Linux has a minimal package set—only what you need.
Our metrics:
- Ubuntu base image: ~650 MB, ~200 packages, ~50 CVEs
- Alpine base image: ~5 MB, ~15 packages, ~2 CVEs
The difference is dramatic.
Multi-Stage Builds
Don’t ship build tools in production images:
# Build stage
FROM golang:1.6-alpine AS builder
WORKDIR /app
COPY . .
RUN go build -o myapp
# Production stage
FROM alpine:3.4
RUN apk add --no-cache ca-certificates
COPY --from=builder /app/myapp /usr/local/bin/
USER nobody
ENTRYPOINT ["/usr/local/bin/myapp"]
The final image contains only the compiled binary and runtime dependencies. Build tools like compilers and package managers stay in the builder stage.
This reduces:
- Image size (faster deployments)
- Attack surface (fewer binaries to exploit)
- Vulnerability count (no build tool CVEs)
Non-Root Users
Never run containers as root:
FROM alpine:3.4
# Create a non-root user
RUN addgroup -g 1000 appgroup && \
    adduser -D -u 1000 -G appgroup appuser
# Install application
COPY myapp /usr/local/bin/
RUN chown appuser:appgroup /usr/local/bin/myapp
# Switch to non-root user
USER appuser
ENTRYPOINT ["/usr/local/bin/myapp"]
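As a belt-and-braces check, the application itself can refuse to start as root, so it fails fast if someone deploys the image without the USER directive. A minimal Go sketch (requireNonRoot is an illustrative helper, not part of the original setup):

```go
package main

import "fmt"

// requireNonRoot returns an error when the given effective UID is 0,
// so a process can fail fast instead of silently running as root.
func requireNonRoot(uid int) error {
	if uid == 0 {
		return fmt.Errorf("refusing to run as root; add a USER directive to the Dockerfile")
	}
	return nil
}

func main() {
	// In a real service you'd call requireNonRoot(os.Geteuid()) at startup
	// and os.Exit(1) on error.
	fmt.Println(requireNonRoot(0))    // error: uid 0 rejected
	fmt.Println(requireNonRoot(1000)) // <nil>: non-root is fine
}
```

This is defense in depth, not a substitute for the USER directive: it only catches the mistake, it doesn't prevent it.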
If a container is compromised, the attacker has only the privileges of the container’s user. Root in the container can potentially become root on the host.
We enforce this with admission controllers:
# Kubernetes PodSecurityPolicy
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - 'configMap'
  - 'emptyDir'
  - 'secret'
Pods that try to run as root are rejected automatically.
Image Scanning
Scan every image for vulnerabilities before deployment:
# Scan with clair-scanner (talks to a running Clair server)
clair-scanner --ip $LOCAL_IP --clair=http://clair:6060 myapp:latest
# Example output
High: CVE-2016-1234 in openssl-1.0.2g
Medium: CVE-2016-5678 in bash-4.3
We run Clair in our CI/CD pipeline. Images with high-severity vulnerabilities are blocked from production.
Our scanning workflow:
// Jenkins pipeline
stage('Security Scan') {
    steps {
        script {
            def scanResult = sh(
                script: "clair-scanner --ip \${LOCAL_IP} myapp:\${BUILD_TAG}",
                returnStatus: true
            )
            if (scanResult != 0) {
                error("Security vulnerabilities found")
            }
        }
    }
}
Don’t deploy vulnerable images. Fix the vulnerability or use a patched base image.
Image Signing
Sign images to prevent tampering:
# Enable Docker Content Trust
export DOCKER_CONTENT_TRUST=1
# Push signed image
docker push myregistry.com/myapp:v1.2.3
# Signing happens automatically; the signature
# is stored in the Notary server
Kubernetes doesn't verify signatures on its own; reference the signed image in your manifests:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myregistry.com/myapp:v1.2.3
    imagePullPolicy: Always
  imagePullSecrets:
  - name: registry-credentials
Combined with admission webhooks, you can reject unsigned images entirely.
Layer 2: Runtime Security
Images are static. Runtime is where attacks happen.
Read-Only Root Filesystem
Make the container filesystem read-only:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:latest
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
    - name: tmp
      mountPath: /tmp
    - name: cache
      mountPath: /var/cache
  volumes:
  - name: tmp
    emptyDir: {}
  - name: cache
    emptyDir: {}
Attackers can’t modify binaries or inject malicious files. Mount temporary directories as needed, but keep the root filesystem immutable.
Drop Linux Capabilities
Containers get Linux capabilities by default. Drop everything except what’s needed:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:latest
    securityContext:
      capabilities:
        drop:
        - ALL
        add:
        - NET_BIND_SERVICE  # Only if binding to port < 1024
Most applications don’t need any special capabilities. Drop them all and add back only what’s required.
AppArmor and Seccomp
Use AppArmor or Seccomp to restrict system calls:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  annotations:
    container.apparmor.security.beta.kubernetes.io/myapp: runtime/default
spec:
  containers:
  - name: myapp
    image: myapp:latest
    securityContext:
      seccompProfile:
        type: RuntimeDefault
This blocks dangerous system calls like ptrace, mount, reboot. If your application doesn’t need them (most don’t), block them.
Resource Limits
Prevent resource exhaustion attacks:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:latest
    resources:
      requests:
        memory: "128Mi"
        cpu: "100m"
      limits:
        memory: "256Mi"
        cpu: "500m"
A compromised container can’t consume all host resources and DOS other containers.
Network Policies
Restrict network access between containers:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-from-frontend
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
Default deny all traffic, then whitelist specific connections. This implements micro-segmentation—a compromised frontend can’t directly attack the database.
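The allow rule above only acts as micro-segmentation if everything else is denied first. The "default deny" half can be expressed as a policy with an empty pod selector, applied per namespace (the name here is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}   # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress
```

Once this is in place, pods accept traffic only from sources explicitly allowed by other policies.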
Layer 3: Host Security
Containers run on hosts. Secure the host.
Minimal Host OS
Use a container-optimized OS:
- CoreOS Container Linux
- RancherOS
- Google Container-Optimized OS
These OSs have:
- Minimal package set
- Read-only root filesystem
- Automatic security updates
- No SSH by default
Traditional Linux distributions have too many packages and services you don’t need for running containers.
Kernel Hardening
Enable kernel security features:
# Enable kernel address space layout randomization
sysctl -w kernel.randomize_va_space=2
# Restrict kernel pointers in /proc
sysctl -w kernel.kptr_restrict=2
# Disable kernel module loading after boot
sysctl -w kernel.modules_disabled=1
# Restrict access to kernel logs
sysctl -w kernel.dmesg_restrict=1
These make kernel exploits harder.
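Note that sysctl -w changes are runtime-only and are lost on reboot. To persist them, use a drop-in file (the filename below is illustrative); kernel.modules_disabled is a one-way switch, so apply it only after everything the host needs has been loaded:

```
# /etc/sysctl.d/99-container-hardening.conf
kernel.randomize_va_space = 2
kernel.kptr_restrict = 2
kernel.dmesg_restrict = 1
# One-way switch: cannot be unset until reboot; apply late in boot
kernel.modules_disabled = 1
```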
Docker Daemon Security
Secure the Docker daemon itself:
{
  "icc": false,
  "userns-remap": "default",
  "live-restore": true,
  "userland-proxy": false,
  "no-new-privileges": true
}
Key settings:
- icc: false - Disable inter-container communication by default
- userns-remap - Map container root to a non-root host user
- no-new-privileges - Prevent privilege escalation
User Namespace Remapping
Map container users to unprivileged host users:
# /etc/docker/daemon.json
{
"userns-remap": "default"
}
Root inside the container (UID 0) maps to an unprivileged user on the host (UID 100000). Even if an attacker escapes the container as root, they’re unprivileged on the host.
This is powerful but breaks some applications that expect specific UIDs. Test thoroughly.
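With "userns-remap": "default", Docker creates a dockremap user and takes its UID/GID range from /etc/subuid and /etc/subgid. The entries typically look like this, though the exact range is distribution-dependent:

```
# /etc/subuid (a matching entry lives in /etc/subgid)
dockremap:100000:65536
```

This is why container UID 0 shows up as UID 100000 on the host: the mapping starts at the first UID in the range.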
Layer 4: Secrets Management
Don’t put secrets in images.
Environment Variables vs Secret Stores
# Bad: Hardcoded secrets
ENV DB_PASSWORD=supersecret123
# Better: Environment variables (but visible in process list)
ENV DB_PASSWORD_FILE=/run/secrets/db_password
# Best: Secret store integration
# Application fetches secrets from Vault/AWS Secrets Manager
In Kubernetes, use Secrets:
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  password: c3VwZXJzZWNyZXQxMjM= # base64 encoded
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:latest
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
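One caveat worth making explicit: the base64 in a Secret manifest is encoding, not encryption. Anyone who can read the manifest can recover the plaintext, as this small Go sketch shows:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// decodeSecret reverses the base64 encoding used in Secret manifests.
// Base64 provides zero confidentiality; it only makes arbitrary bytes
// safe to embed in YAML.
func decodeSecret(encoded string) (string, error) {
	raw, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		return "", err
	}
	return string(raw), nil
}

func main() {
	plain, err := decodeSecret("c3VwZXJzZWNyZXQxMjM=")
	if err != nil {
		panic(err)
	}
	fmt.Println(plain) // supersecret123
}
```

Protect Secrets with RBAC and encryption at rest; don't treat the base64 form as a safeguard.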
Even better, integrate with a proper secret store:
// Application code
import (
    "fmt"

    "github.com/hashicorp/vault/api"
)

func getDBPassword() (string, error) {
    // The client picks up VAULT_TOKEN from the environment;
    // Address here overrides the default VAULT_ADDR.
    client, err := api.NewClient(&api.Config{
        Address: "https://vault.company.com",
    })
    if err != nil {
        return "", err
    }

    secret, err := client.Logical().Read("secret/database/password")
    if err != nil {
        return "", err
    }
    if secret == nil {
        return "", fmt.Errorf("secret not found")
    }

    password, ok := secret.Data["password"].(string)
    if !ok {
        return "", fmt.Errorf("password missing or not a string")
    }
    return password, nil
}
Secrets are centralized, audited, and rotatable.
Secret Rotation
Rotate secrets regularly:
type SecretRotator struct {
    vault            *api.Client
    secretPath       string
    rotationInterval time.Duration
}

func (sr *SecretRotator) Start() {
    ticker := time.NewTicker(sr.rotationInterval)
    defer ticker.Stop()

    for range ticker.C {
        if err := sr.rotateSecret(); err != nil {
            log.Error("secret rotation failed", "error", err)
        }
    }
}

func (sr *SecretRotator) rotateSecret() error {
    // Generate new secret
    newSecret := generateRandomPassword(32)

    // Update in Vault
    _, err := sr.vault.Logical().Write(sr.secretPath, map[string]interface{}{
        "password": newSecret,
    })
    return err
}
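The generateRandomPassword helper isn't shown above; one way to sketch it uses crypto/rand (the charset is an assumption). Avoid math/rand here, since its output is predictable:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// generateRandomPassword returns n characters drawn uniformly from a
// fixed alphanumeric charset using the crypto/rand CSPRNG.
func generateRandomPassword(n int) string {
	const charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
	out := make([]byte, n)
	for i := range out {
		// rand.Int avoids the modulo bias of idx := b % len(charset)
		idx, err := rand.Int(rand.Reader, big.NewInt(int64(len(charset))))
		if err != nil {
			panic(err) // a failing CSPRNG is unrecoverable
		}
		out[i] = charset[idx.Int64()]
	}
	return string(out)
}

func main() {
	fmt.Println(generateRandomPassword(32))
}
```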
Automated rotation limits the blast radius of compromised secrets.
Layer 5: Monitoring and Detection
Assume breaches will happen. Detect them quickly.
Container Audit Logging
Log all container events:
{
  "auditlog": {
    "enabled": true,
    "max-size": "100",
    "max-backups": "10"
  }
}
Monitor logs for suspicious events:
- Container started as root
- Privileged container created
- Volume mount from host
- Host network access
- Capability added
Runtime Behavior Monitoring
Tools like Falco detect anomalous runtime behavior:
# Falco rule
- rule: Unexpected outbound connection
  desc: Detect unexpected outbound network connection
  condition: >
    outbound and
    container and
    not fd.sip in (allowed_ips)
  output: >
    Unexpected outbound connection
    (connection=%fd.name user=%user.name container=%container.name)
  priority: WARNING
This catches:
- Reverse shells
- Data exfiltration
- Lateral movement
- Crypto mining
File Integrity Monitoring
Monitor critical files for changes:
type FileMonitor struct {
    checksums map[string]string
}

func (fm *FileMonitor) Watch(path string) {
    ticker := time.NewTicker(1 * time.Minute)
    defer ticker.Stop()

    for range ticker.C {
        current := fm.calculateChecksum(path)
        if current != fm.checksums[path] {
            log.Warn("file modified",
                "path", path,
                "old_checksum", fm.checksums[path],
                "new_checksum", current)
            // Alert security team
        }
        fm.checksums[path] = current
    }
}

func (fm *FileMonitor) calculateChecksum(path string) string {
    data, err := ioutil.ReadFile(path)
    if err != nil {
        // A vanished or unreadable file is itself suspicious
        log.Warn("checksum read failed", "path", path, "error", err)
        return ""
    }
    hash := sha256.Sum256(data)
    return hex.EncodeToString(hash[:])
}
If /usr/bin/curl suddenly changes, that’s suspicious.
Layer 6: Supply Chain Security
Your containers include third-party code. Trust but verify.
Base Image Provenance
Only use official images from trusted registries:
# Good: Official images
FROM nginx:1.11-alpine
FROM node:6-alpine
FROM python:3.5-alpine
# Risky: Random Docker Hub images
FROM randomuser/nginx-custom:latest
Official images are maintained, scanned, and updated by the upstream project.
Dependency Scanning
Scan application dependencies:
# For Python
safety check
# For Node.js
npm audit
# For Ruby
bundler-audit
# For Go
go list -json -m all | nancy sleuth
Integrate into CI/CD:
stage('Dependency Scan') {
    steps {
        sh 'safety check --json > safety-report.json'
        script {
            def report = readJSON file: 'safety-report.json'
            if (report.vulnerabilities.size() > 0) {
                error("Vulnerable dependencies found")
            }
        }
    }
}
Private Registry
Run your own container registry:
# Harbor registry with security scanning
apiVersion: v1
kind: Service
metadata:
name: harbor
spec:
type: LoadBalancer
ports:
- port: 443
targetPort: 8080
selector:
app: harbor
Benefits:
- Control over image scanning
- Image signing enforcement
- Access control and audit logs
- Vulnerability database updates
Real-World Incidents and Lessons
Incident 1: Crypto Mining Container
What happened: A compromised container started mining cryptocurrency.
Detection: CPU usage spiked to 100% for an API container that normally uses 5%.
Root cause: Image built from unverified base image containing a backdoor.
Fix:
- Restricted to official base images only
- Added CPU limits to all containers
- Implemented anomaly detection on resource usage
Incident 2: Container Escape
What happened: Attacker exploited kernel vulnerability from within container.
Impact: Gained root access on the host.
Root cause: Running privileged container with host PID namespace.
Fix:
- Banned privileged containers via PodSecurityPolicy
- Kernel upgraded to patched version
- Enabled AppArmor profiles
Incident 3: Exposed Secrets
What happened: AWS credentials found in container image.
Impact: Credentials used to access production S3 buckets.
Root cause: Developer committed .env file to git, which was copied into image.
Fix:
- Scanned all images for secrets with git-secrets
- Moved to dynamic secret injection from Vault
- Rotated all exposed credentials
- Added pre-commit hooks to prevent credential commits
Practical Security Checklist
Use this checklist for every container deployment:
Image Security:
- Minimal base image (Alpine, scratch, or distroless)
- Multi-stage build (no build tools in final image)
- Non-root user
- Vulnerability scan passed
- Image signed
- No secrets in image
Runtime Security:
- Read-only root filesystem
- Dropped all capabilities
- AppArmor/Seccomp profile applied
- Resource limits set
- Network policy defined
Host Security:
- Container-optimized OS
- User namespace remapping enabled
- Kernel hardening applied
- Docker daemon secured
Secrets:
- No hardcoded secrets
- Secret store integration
- Secret rotation configured
Monitoring:
- Audit logging enabled
- Runtime monitoring configured
- Alerting on anomalies
Tools and Resources
Image Scanning:
- Clair (open source)
- Trivy (fast and comprehensive)
- Anchore Engine (policy-based)
Runtime Security:
- Falco (runtime behavior monitoring)
- Sysdig Secure (commercial)
- Aqua Security (commercial)
Secrets Management:
- HashiCorp Vault
- AWS Secrets Manager
- Kubernetes Secrets
Policy Enforcement:
- Open Policy Agent (OPA)
- Kubernetes PodSecurityPolicies
- Admission Controllers
Conclusion
Container security requires defense in depth:
- Secure images before they run
- Restrict runtime permissions
- Harden the host
- Protect secrets
- Monitor for anomalies
- Verify the supply chain
The default container configuration is not secure. Every container should be hardened based on the principle of least privilege.
Start with the checklist above. Automate enforcement with policies. Monitor continuously.
Containers are powerful but require discipline. Security can’t be an afterthought.
In my next post, I’ll cover how DevOps culture and security practices intersect—building security into the development lifecycle rather than bolting it on later.
The future of infrastructure is containerized. Build secure containers from the start.