I’ve been following Docker’s rapid rise with great interest. Released in March 2013, it’s already generating significant buzz in the infrastructure community. As someone working on storage networking, I’ve been thinking about what containerization means for how we design and deploy storage systems.
Understanding Docker’s Value Proposition
Docker isn’t inventing containers; Linux containers (LXC) have existed for years. Docker’s innovation is making containers accessible and portable. The ability to package an application with its dependencies and run it anywhere is compelling.
I spent last week experimenting with Docker in our lab. The experience is remarkably smooth:
# Pull an image
docker pull ubuntu:12.04
# Run a container
docker run -it ubuntu:12.04 /bin/bash
# Create a custom image
docker build -t my-app .
# Ship it anywhere
docker push my-app
This simplicity is powerful. Developers can build once and run anywhere, operations teams get consistent deployments, and everyone benefits from efficient resource utilization.
Implications for Storage Networking
As I explored Docker, several storage networking implications became clear:
1. Storage as a Service Becomes Critical
Containers are ephemeral. They start, run, and die. But data is persistent. This creates an interesting tension: how do stateless containers interact with stateful storage?
The current Docker approach uses volume mounts:
docker run -v /host/data:/container/data my-database
But in distributed environments, this isn’t sufficient. Containers might run on any host in a cluster. Storage needs to follow the container, not be tied to a specific host.
This is where Fibre Channel and storage networking become relevant. We need:
- Dynamic volume provisioning: Create storage on-demand as containers start
- Storage mobility: Move storage between hosts as containers migrate
- Performance guarantees: Ensure containers get their required IOPS and bandwidth
- Multi-tenancy: Isolate storage between different containers and applications
2. Network-Attached Storage Over Host-Attached
Traditional storage architectures assume stable, long-lived hosts with locally-attached storage. Containers break this assumption.
Network-attached storage (NAS and SAN) fits the container model better:
Traditional:
  Host → Local Disk → Application
Container future:
  Container → Network → Storage Fabric → Storage Array
This architectural shift plays to Cisco’s strengths. Our FC and Ethernet fabrics are designed exactly for this: connecting ephemeral compute to persistent storage.
3. Performance Isolation Challenges
When multiple containers share a host and that host shares a storage fabric, performance isolation becomes critical. One noisy neighbor can impact everyone.
This is where our work on FC-Redirect flow tracking becomes relevant. By tracking and managing individual flows, we can provide performance guarantees even in multi-tenant environments.
I’ve been prototyping flow-level QoS mechanisms:
typedef struct container_flow_policy {
    container_id_t container_id;
    uint32_t min_iops;            // floor, reserved via admission control
    uint32_t max_iops;            // ceiling, enforced by rate limiting
    uint32_t min_bandwidth_mbps;
    uint32_t max_bandwidth_mbps;
    priority_t priority;
} container_flow_policy_t;

void apply_container_policy(flow_entry_t *flow,
                            container_flow_policy_t *policy) {
    // Token-bucket rate limiting: refill at max_iops per second
    flow->rate_limiter.tokens = policy->max_iops;
    flow->rate_limiter.refill_rate = policy->max_iops;

    // Bandwidth limit: megabits/s to bits/s (widen first to avoid overflow)
    flow->bandwidth_limiter.max_bps =
        (uint64_t)policy->max_bandwidth_mbps * 1000 * 1000;

    // Priority for fabric scheduling
    flow->priority = policy->priority;
}
This allows us to give each container its own performance envelope, preventing interference.
Building Docker-Aware Storage Infrastructure
I’ve been experimenting with integrating Docker and FC storage. Here’s a proof-of-concept that automatically provisions storage for containers:
Dynamic Volume Provisioning
#!/usr/bin/env python
import docker
import fc_storage_api

def provision_storage_for_container(container_id, size_gb):
    # Create a LUN on the storage array
    lun = fc_storage_api.create_lun(
        size_gb=size_gb,
        name=f"docker-{container_id}"
    )

    # Zone the LUN to the host
    host_wwpn = get_host_wwpn()
    fc_storage_api.create_zone(
        name=f"docker-{container_id}-zone",
        initiator=host_wwpn,
        target=lun.wwpn
    )

    # Discover and mount
    discover_lun(lun.wwpn)
    device = format_and_mount(lun)
    return device

# Docker event listener
client = docker.from_env()
for event in client.events(decode=True):
    if event['Type'] == 'container' and event['Action'] == 'start':
        container_id = event['id']
        # Check if the container requests storage via a label
        container = client.containers.get(container_id)
        if 'storage.size' in container.labels:
            size = int(container.labels['storage.size'])
            device = provision_storage_for_container(container_id, size)
            # Mount into container
            # (requires privileged container or bind mount)
This is rough, but it demonstrates the concept: storage provisioned automatically based on container metadata.
Storage Driver Architecture
I’ve been thinking about what a Docker storage driver for FC storage would look like:
Docker Engine
    ↓
Storage Driver Plugin
    ↓
FC Storage API
    ├→ MDS Fabric (zoning, FC-Redirect)
    └→ Storage Array (volume provisioning)
The driver would:
- Accept volume creation requests from Docker
- Provision LUNs on the storage array
- Configure fabric zoning
- Apply FC-Redirect policies for QoS
- Discover and format the LUN
- Return a mountable device to Docker
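To make this concrete, here is a minimal sketch of such a driver, reusing the imagined fc_storage_api and helper functions from the proof-of-concept above. Docker doesn't expose a storage plugin interface today, so the create/mount/remove contract here is my guess at the shape it might eventually take:

# Hypothetical driver skeleton; fc_storage_api, get_host_wwpn,
# discover_lun, and format_and_mount are the imagined helpers from the
# proof-of-concept above, and apply_flow_policy is an assumed call.
import fc_storage_api

class FCVolumeDriver:
    def create(self, name, size_gb, policy=None):
        # Provision a LUN and zone it to this host's initiator port
        lun = fc_storage_api.create_lun(size_gb=size_gb, name=name)
        fc_storage_api.create_zone(
            name=f"{name}-zone",
            initiator=get_host_wwpn(),
            target=lun.wwpn
        )
        if policy is not None:
            # Push an FC-Redirect QoS policy for this container's flow
            fc_storage_api.apply_flow_policy(lun, policy)
        return lun

    def mount(self, lun):
        # Rescan the bus, format on first use, hand Docker a mountpoint
        discover_lun(lun.wwpn)
        return format_and_mount(lun)

    def remove(self, name):
        # Tear down in reverse order: zone first, then the LUN itself
        lun = fc_storage_api.get_lun(name)
        fc_storage_api.delete_zone(f"{name}-zone")
        fc_storage_api.delete_lun(lun)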
Challenges Ahead
Several challenges need solving before containers and FC storage work seamlessly:
1. Latency of Provisioning
Current FC storage provisioning takes seconds to minutes. For containers that start in milliseconds, this is unacceptable. We need:
- Pre-provisioned storage pools
- Thin provisioning and rapid allocation
- Cached zone configurations
- Optimized discovery processes
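The first of these is the easiest to prototype: keep a pool of thin-provisioned LUNs warm and hand one out at container start, rather than creating it on demand. A rough sketch, again against the imagined fc_storage_api (the thin flag and rename_lun call are assumptions):

import queue
import fc_storage_api

POOL_SIZE = 20

class LunPool:
    """Keeps thin-provisioned LUNs warm so container start stays fast."""

    def __init__(self, size_gb):
        self.size_gb = size_gb
        self.free = queue.Queue()
        self.refill()

    def refill(self):
        # Thin provisioning: these cost almost nothing until first write
        while self.free.qsize() < POOL_SIZE:
            self.free.put(fc_storage_api.create_lun(
                size_gb=self.size_gb, thin=True))

    def acquire(self, container_id):
        lun = self.free.get()  # milliseconds, versus seconds to create
        fc_storage_api.rename_lun(lun, f"docker-{container_id}")
        self.refill()  # in practice, refill asynchronously
        return lun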
2. Metadata Management
Tracking which containers use which LUNs, which zones exist for which containers, and cleaning up after container deletion requires sophisticated metadata management.
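Even a minimal version needs to survive daemon restarts and drive cleanup from Docker's destroy events. A sketch of the bookkeeping using sqlite3 from the standard library (the table layout is my own, and the fc_storage_api calls remain hypothetical):

import sqlite3
import fc_storage_api

db = sqlite3.connect("/var/lib/docker-fc/metadata.db")
db.execute("""CREATE TABLE IF NOT EXISTS volumes (
                  container_id TEXT PRIMARY KEY,
                  lun_name     TEXT NOT NULL,
                  zone_name    TEXT NOT NULL)""")

def record(container_id, lun_name, zone_name):
    # Remember the mapping so teardown is possible later
    db.execute("INSERT INTO volumes VALUES (?, ?, ?)",
               (container_id, lun_name, zone_name))
    db.commit()

def cleanup(container_id):
    # Driven by the container 'destroy' event: unzone, delete, forget
    row = db.execute("SELECT lun_name, zone_name FROM volumes "
                     "WHERE container_id = ?", (container_id,)).fetchone()
    if row:
        fc_storage_api.delete_zone(row[1])
        fc_storage_api.delete_lun(fc_storage_api.get_lun(row[0]))
        db.execute("DELETE FROM volumes WHERE container_id = ?",
                   (container_id,))
        db.commit()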
3. Security and Isolation
In multi-tenant environments, containers must not access each other’s storage. FC zoning provides hardware-enforced isolation, but managing zones dynamically is complex.
4. Performance Predictability
Container density means many workloads competing for storage resources. Providing predictable performance requires:
- Fine-grained QoS at the flow level
- Admission control (don’t schedule more containers than storage can handle)
- Real-time monitoring and adjustment
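Admission control in particular can start out very simple: track the IOPS floor each container reserves and refuse placements that would oversubscribe the array. A toy sketch (the array capability number is illustrative):

ARRAY_MAX_IOPS = 200000  # measured capability of the backing array

reservations = {}  # container_id -> reserved min_iops

def can_admit(container_id, min_iops):
    """Refuse placements that would oversubscribe the IOPS budget."""
    if sum(reservations.values()) + min_iops > ARRAY_MAX_IOPS:
        return False
    reservations[container_id] = min_iops
    return True

def release(container_id):
    # Return the reservation when the container stops
    reservations.pop(container_id, None)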
The Convergence of Technologies
What excites me most is the convergence I’m seeing:
- Docker: Making compute ephemeral and portable
- FC/FCoE: Providing reliable, high-performance storage connectivity
- SDN: Enabling programmable network fabrics
- Cloud: Driving demand for elastic, on-demand infrastructure
These technologies complement each other. Docker creates the need for dynamic, scalable storage. FC and storage networking provide the performance and reliability. SDN enables the automation required to tie it together.
Practical Steps
While the full vision is years away, there are practical steps we can take now:
- Build APIs: Expose storage and fabric operations through RESTful APIs that orchestration tools can consume (see the sketch after this list).
- Optimize for speed: Reduce provisioning latency through pre-allocation and caching.
- Implement flow-level QoS: Give each container or application its own performance guarantees.
- Develop orchestration integrations: Build plugins for Docker, Kubernetes (when it emerges), and other orchestration platforms.
- Create reference architectures: Document how to build container infrastructure on FC storage.
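On the first point, even a thin REST wrapper over the array and fabric operations would let orchestration tools drive storage directly. A minimal sketch using Flask, against the same imagined fc_storage_api (the endpoint shapes are my own invention):

from flask import Flask, jsonify, request
import fc_storage_api

app = Flask(__name__)

@app.route("/volumes", methods=["POST"])
def create_volume():
    # Orchestrators POST {"name": ..., "size_gb": ...} and get back a LUN
    body = request.get_json()
    lun = fc_storage_api.create_lun(size_gb=body["size_gb"],
                                    name=body["name"])
    return jsonify({"name": body["name"], "wwpn": lun.wwpn}), 201

@app.route("/volumes/<name>", methods=["DELETE"])
def delete_volume(name):
    fc_storage_api.delete_lun(fc_storage_api.get_lun(name))
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)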
Looking Forward
Docker is less than five months old, but it’s clear that containers represent a fundamental shift in how we deploy and manage applications. Storage networking needs to evolve to support this new paradigm.
The good news is that the core technologies we've built (FC-Redirect, flow tracking, dynamic QoS) position us well for this future. The challenge is adapting them to the container world's speed and scale requirements.
I’m convinced that storage networking and containers will converge. The organizations that figure out how to make stateful storage work seamlessly with stateless containers will have a significant competitive advantage.
Over the coming months, I plan to continue exploring this space. I’ll be building prototypes, testing integration patterns, and working to ensure our storage networking solutions are ready for the container future.
The future of infrastructure is taking shape, and it’s an exciting time to be working on the storage networking foundations that will support it.