OpenStack is rapidly becoming the platform for private clouds. As customers deploy OpenStack, they want to leverage their existing Fibre Channel storage investments. I've been working on integrating FC-Redirect with OpenStack to enable cloud-native storage provisioning while maintaining FC performance and reliability.
The Challenge: Cloud Meets Storage
OpenStack operates on cloud principles:
- On-demand provisioning
- Self-service APIs
- Elastic scaling
- Metered usage
Traditional FC storage operates differently:
- Pre-provisioned LUNs
- Manual zoning
- Static allocation
- Fixed capacity
Bridging these worlds requires careful architecture.
OpenStack Storage Architecture
OpenStack has several storage-related projects:
┌─────────────────────────────────────┐
│           Nova (Compute)            │
│      (VMs need block storage)       │
└──────────────────┬──────────────────┘
                   │
┌──────────────────▼──────────────────┐
│       Cinder (Block Storage)        │
│ (Volume management and attachment)  │
└──────────────────┬──────────────────┘
                   │
          ┌────────┴────────┐
          │                 │
    ┌─────▼─────┐     ┌─────▼─────┐
    │  Storage  │     │  Storage  │
    │  Backend  │     │  Backend  │
    │   (FC)    │     │  (iSCSI)  │
    └───────────┘     └───────────┘
My goal: Make FC storage work seamlessly with Cinder.
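In practice, "seamless" means an operator should only have to enable a storage backend in cinder.conf and create a volume type that points at it. Here is a sketch of what that wiring might look like; enabled_backends, volume_driver, and volume_backend_name are standard Cinder options, while the driver module path and the fc_redirect_* options are placeholders that mirror the driver described in the next section:

    # cinder.conf (illustrative sketch)
    [DEFAULT]
    enabled_backends = fc_redirect

    [fc_redirect]
    volume_driver = cinder.volume.drivers.cisco.fc_redirect.FCRedirectDriver
    volume_backend_name = fc_redirect
    fc_redirect_host = 192.0.2.10
    fc_redirect_port = 8443
    fc_redirect_username = admin
    fc_redirect_password = secret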
Cinder Driver Architecture
I built a Cinder driver for FC-Redirect:
from cinder.volume import driver
from cinder.volume.drivers.cisco import fc_redirect_client


class FCRedirectDriver(driver.VolumeDriver):
    """Cinder driver for FC-Redirect managed storage"""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.client = fc_redirect_client.FCRedirectClient(
            host=self.configuration.fc_redirect_host,
            port=self.configuration.fc_redirect_port,
            username=self.configuration.fc_redirect_username,
            password=self.configuration.fc_redirect_password
        )

    def create_volume(self, volume):
        """Create a new volume"""
        # Allocate LUN on storage array
        lun = self.client.create_lun(
            size_gb=volume['size'],
            name=volume['display_name'],
            volume_type=volume['volume_type_id']
        )
        # Update volume with provider details
        return {
            'provider_location': lun['wwn'],
            'provider_auth': None
        }

    def delete_volume(self, volume):
        """Delete a volume"""
        wwn = volume['provider_location']
        self.client.delete_lun(wwn)

    def initialize_connection(self, volume, connector):
        """Attach volume to an instance"""
        wwn = volume['provider_location']
        initiator_wwpn = connector['wwpns'][0]

        # Create FC zone
        zone_info = self.client.create_zone(
            initiator_wwpn=initiator_wwpn,
            target_wwpn=wwn
        )

        # Set up FC-Redirect policy
        self.client.create_redirect_policy(
            src_wwpn=initiator_wwpn,
            dst_wwpn=wwn,
            qos_class='high',
            bandwidth_mbps=1000
        )

        return {
            'driver_volume_type': 'fibre_channel',
            'data': {
                'target_discovered': True,
                'target_wwn': [wwn],
                'initiator_target_map': {
                    initiator_wwpn: [wwn]
                }
            }
        }

    def terminate_connection(self, volume, connector):
        """Detach volume from instance"""
        wwn = volume['provider_location']
        initiator_wwpn = connector['wwpns'][0]

        # Remove FC-Redirect policy
        self.client.delete_redirect_policy(
            src_wwpn=initiator_wwpn,
            dst_wwpn=wwn
        )

        # Delete FC zone
        self.client.delete_zone(
            initiator_wwpn=initiator_wwpn,
            target_wwpn=wwn
        )

    def get_volume_stats(self, refresh=False):
        """Return volume statistics"""
        stats = self.client.get_storage_pool_stats()
        return {
            'volume_backend_name': 'fc_redirect',
            'vendor_name': 'Cisco',
            'driver_version': '1.0',
            'storage_protocol': 'FC',
            'total_capacity_gb': stats['total_capacity_gb'],
            'free_capacity_gb': stats['free_capacity_gb'],
            'reserved_percentage': 10,
            'QoS_support': True
        }
This driver handles the lifecycle of FC volumes within OpenStack.
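One piece the driver relies on but doesn't show is where self.configuration.fc_redirect_host and friends come from: Cinder drivers declare their options through oslo.config and attach them to the backend's configuration object. A minimal sketch, with option names matching the driver above and defaults and help text purely illustrative:

from oslo_config import cfg

# Options consumed by FCRedirectDriver through self.configuration.*
fc_redirect_opts = [
    cfg.StrOpt('fc_redirect_host',
               help='Hostname or IP address of the FC-Redirect API server'),
    cfg.PortOpt('fc_redirect_port', default=8443,
                help='TCP port of the FC-Redirect API server'),
    cfg.StrOpt('fc_redirect_username',
               help='Username for the FC-Redirect API'),
    cfg.StrOpt('fc_redirect_password', secret=True,
               help='Password for the FC-Redirect API (kept out of logs)'),
]

# Inside FCRedirectDriver.__init__, before constructing the client:
#     self.configuration.append_config_values(fc_redirect_opts)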
FC-Redirect API Server
To support the Cinder driver, I built a REST API for FC-Redirect:
// HTTP API server for FC-Redirect
typedef struct api_request {
    http_method_t method;      // GET, POST, DELETE, etc.
    char path[256];
    json_t *body;
    auth_token_t token;
} api_request_t;

typedef struct api_response {
    int status_code;
    json_t *body;
} api_response_t;

// API endpoint: Create redirect policy
api_response_t handle_create_policy(api_request_t *req) {
    // Authenticate request
    if (!validate_auth_token(&req->token)) {
        return error_response(401, "Unauthorized");
    }

    // Parse request body
    const char *src_wwpn = json_string_value(
        json_object_get(req->body, "src_wwpn"));
    const char *dst_wwpn = json_string_value(
        json_object_get(req->body, "dst_wwpn"));
    const char *qos_class = json_string_value(
        json_object_get(req->body, "qos_class"));

    if (!src_wwpn || !dst_wwpn) {
        return error_response(400, "Missing required fields");
    }

    // Create policy
    flow_policy_t policy = {
        .src_wwpn = parse_wwpn(src_wwpn),
        .dst_wwpn = parse_wwpn(dst_wwpn),
        .qos_class = parse_qos_class(qos_class),
        .action = ACTION_REDIRECT
    };
    policy_id_t policy_id = create_flow_policy(&policy);

    // Return response
    json_t *response_body = json_object();
    json_object_set_new(response_body, "policy_id",
                        json_integer(policy_id));
    json_object_set_new(response_body, "status",
                        json_string("created"));

    return success_response(201, response_body);
}

// API routing
api_response_t route_request(api_request_t *req) {
    if (strcmp(req->path, "/api/v1/policies") == 0) {
        if (req->method == HTTP_POST) {
            return handle_create_policy(req);
        } else if (req->method == HTTP_GET) {
            return handle_list_policies(req);
        }
    } else if (strncmp(req->path, "/api/v1/policies/", 17) == 0) {
        if (req->method == HTTP_DELETE) {
            return handle_delete_policy(req);
        } else if (req->method == HTTP_GET) {
            return handle_get_policy(req);
        }
    }
    return error_response(404, "Not found");
}
This RESTful API makes FC-Redirect consumable from cloud orchestration.
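To make the contract concrete, here is roughly what the Cinder driver's client does when it creates a policy. The endpoint and JSON fields come straight from handle_create_policy(); the host, WWPNs, and the X-Auth-Token header are placeholders, since the handler above doesn't pin down how the auth token is transported:

import requests

# Hypothetical host and credentials; the fields match handle_create_policy()
resp = requests.post(
    'http://fc-redirect.example.com:8443/api/v1/policies',
    headers={'X-Auth-Token': 'REPLACE_ME'},      # assumed auth header
    json={
        'src_wwpn': '21:00:00:24:ff:39:4a:10',   # initiator
        'dst_wwpn': '50:06:01:60:3b:20:1e:5d',   # target (LUN)
        'qos_class': 'high',
    },
    timeout=10,
)
resp.raise_for_status()                # expect 201 Created
policy_id = resp.json()['policy_id']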
Dynamic Zoning
OpenStack requires dynamic FC zoning. I implemented automatic zoning:
typedef struct zone_config {
    char name[64];
    wwpn_t initiator_wwpns[MAX_INITIATORS];
    int num_initiators;
    wwpn_t target_wwpns[MAX_TARGETS];
    int num_targets;
} zone_config_t;

// Create zone for instance attachment
bool create_zone_for_attachment(wwpn_t initiator, wwpn_t target) {
    zone_config_t zone = {0};

    // Generate zone name
    snprintf(zone.name, sizeof(zone.name),
             "openstack_%016lx_%016lx",
             initiator, target);

    // Add members
    zone.initiator_wwpns[0] = initiator;
    zone.num_initiators = 1;
    zone.target_wwpns[0] = target;
    zone.num_targets = 1;

    // Add zone to active zoneset
    if (!add_zone_to_active_zoneset(&zone)) {
        log_error("Failed to add zone %s", zone.name);
        return false;
    }

    // Activate zoneset
    if (!activate_zoneset()) {
        log_error("Failed to activate zoneset");
        remove_zone_from_zoneset(&zone);
        return false;
    }

    log_info("Created zone %s for initiator %016lx to target %016lx",
             zone.name, initiator, target);
    return true;
}

// Remove zone on detachment
bool remove_zone_for_detachment(wwpn_t initiator, wwpn_t target) {
    char zone_name[64];
    snprintf(zone_name, sizeof(zone_name),
             "openstack_%016lx_%016lx",
             initiator, target);

    zone_config_t *zone = find_zone_by_name(zone_name);
    if (zone == NULL) {
        log_warn("Zone %s not found", zone_name);
        return false;
    }

    // Remove from zoneset
    if (!remove_zone_from_zoneset(zone)) {
        log_error("Failed to remove zone %s", zone_name);
        return false;
    }

    // Activate zoneset
    if (!activate_zoneset()) {
        log_error("Failed to activate zoneset");
        return false;
    }

    log_info("Removed zone %s", zone_name);
    return true;
}
Automatic zoning enables cloud-style volume attachment.
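Note that both attach and detach derive the zone name purely from the two WWPNs, so the orchestration layer can recompute it later without storing any zoning state. A small sketch of the same convention from the Python side, assuming WWPNs arrive as hex strings (as in a Cinder connector's wwpns list):

def openstack_zone_name(initiator_wwpn, target_wwpn):
    """Mirror the switch-side "openstack_%016lx_%016lx" naming convention."""
    init = int(initiator_wwpn.replace(':', ''), 16)
    targ = int(target_wwpn.replace(':', ''), 16)
    return 'openstack_{:016x}_{:016x}'.format(init, targ)

# openstack_zone_name('21000024ff394a10', '5006016012345678')
# -> 'openstack_21000024ff394a10_5006016012345678'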
QoS Integration
OpenStack supports volume QoS specs. I integrated with FC-Redirect QoS:
# OpenStack QoS spec
qos_spec = {
    'name': 'high-performance',
    'consumer': 'back-end',
    'specs': {
        'maxIOPS': '10000',
        'maxBWS': '1000',   # MB/s
        'minIOPS': '1000',
        'latency': '2'      # milliseconds
    }
}

# Translate the 'specs' portion into an FC-Redirect policy
def translate_qos_to_policy(specs):
    return {
        'qos_class': 'high' if int(specs['maxIOPS']) > 5000 else 'normal',
        'max_iops': int(specs['maxIOPS']),
        'min_iops': int(specs['minIOPS']),
        'max_bandwidth_mbps': int(specs['maxBWS']),
        'max_latency_us': int(specs['latency']) * 1000
    }
This allows OpenStack admins to define QoS in familiar terms.
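As a quick sanity check, feeding the 'specs' portion of the spec above through the translation gives:

policy = translate_qos_to_policy(qos_spec['specs'])
# maxIOPS of 10000 is above the 5000 threshold, so this maps to the 'high' class:
# {'qos_class': 'high', 'max_iops': 10000, 'min_iops': 1000,
#  'max_bandwidth_mbps': 1000, 'max_latency_us': 2000}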
Monitoring and Metering
OpenStack Ceilometer collects usage metrics. I integrated FC-Redirect metrics:
from ceilometer import sample
from ceilometer.volume import driver


class FCRedirectMetricsDriver(driver.VolumeMetricsDriver):

    def get_samples(self, manager, cache, resources):
        """Return volume metrics"""
        for resource in resources:
            volume_id = resource['id']

            # Query FC-Redirect for volume statistics
            stats = self.client.get_volume_stats(volume_id)

            # Yield IOPS metric
            yield sample.Sample(
                name='volume.iops',
                type=sample.TYPE_GAUGE,
                unit='iops',
                volume=stats['current_iops'],
                user_id=resource['user_id'],
                project_id=resource['project_id'],
                resource_id=volume_id
            )

            # Yield bandwidth metric
            yield sample.Sample(
                name='volume.bandwidth',
                type=sample.TYPE_GAUGE,
                unit='MB/s',
                volume=stats['current_bandwidth_mbps'],
                user_id=resource['user_id'],
                project_id=resource['project_id'],
                resource_id=volume_id
            )

            # Yield latency metric
            yield sample.Sample(
                name='volume.latency',
                type=sample.TYPE_GAUGE,
                unit='ms',
                volume=stats['avg_latency_ms'],
                user_id=resource['user_id'],
                project_id=resource['project_id'],
                resource_id=volume_id
            )
This enables cloud-style usage-based billing for FC storage.
Multi-Tenancy
OpenStack is multi-tenant. FC-Redirect must isolate tenant traffic:
typedef struct tenant_context {
    tenant_id_t tenant_id;
    char tenant_name[64];

    // Resource quotas
    uint32_t max_volumes;
    uint32_t max_total_capacity_gb;
    uint32_t max_iops;
    uint32_t max_bandwidth_mbps;

    // Current usage
    _Atomic uint32_t volumes_used;
    _Atomic uint32_t capacity_used_gb;

    // Tenant-specific policies
    flow_policy_t *policies;
    int num_policies;
} tenant_context_t;

// Check tenant quota before creating volume
bool check_tenant_quota(tenant_context_t *tenant, uint32_t size_gb) {
    uint32_t current_volumes = atomic_load(&tenant->volumes_used);
    uint32_t current_capacity = atomic_load(&tenant->capacity_used_gb);

    if (current_volumes >= tenant->max_volumes) {
        log_warn("Tenant %s exceeded volume quota", tenant->tenant_name);
        return false;
    }

    if (current_capacity + size_gb > tenant->max_total_capacity_gb) {
        log_warn("Tenant %s exceeded capacity quota", tenant->tenant_name);
        return false;
    }

    return true;
}

// Enforce tenant bandwidth limits
void enforce_tenant_bandwidth_limit(tenant_context_t *tenant,
                                    flow_entry_t *flow) {
    // Calculate tenant's total bandwidth usage
    uint32_t total_bw = calculate_tenant_bandwidth(tenant);

    if (total_bw > tenant->max_bandwidth_mbps) {
        // Throttle this flow proportionally
        uint32_t throttled_bw = flow->bandwidth_mbps *
                                tenant->max_bandwidth_mbps / total_bw;
        apply_bandwidth_limit(flow, throttled_bw);
        log_info("Throttling tenant %s flow to %u MB/s",
                 tenant->tenant_name, throttled_bw);
    }
}
Tenant isolation ensures fair resource sharing. For example, if a tenant capped at 2,000 MB/s is driving 4,000 MB/s across all of its flows, a flow currently using 1,000 MB/s is throttled to 500 MB/s, keeping the tenant inside its quota without starving others.
Deployment Example
Here's a complete workflow:
import time

from cinderclient import client as cinder_client
from novaclient import client as nova_client

# keystone_session, ubuntu_image_id, and flavor_id are assumed to be set up
# elsewhere (Keystone authentication and image/flavor lookup omitted here)

# Initialize clients
cinder = cinder_client.Client('2', session=keystone_session)
nova = nova_client.Client('2', session=keystone_session)

# Create volume
volume = cinder.volumes.create(
    size=100,  # 100 GB
    name='database-volume',
    volume_type='fc-high-performance'
)

# Wait for volume to be available
while volume.status != 'available':
    time.sleep(1)
    volume = cinder.volumes.get(volume.id)

# Create instance
instance = nova.servers.create(
    name='database-server',
    image=ubuntu_image_id,
    flavor=flavor_id
)

# Wait for instance to be active
while instance.status != 'ACTIVE':
    time.sleep(1)
    instance = nova.servers.get(instance.id)

# Attach volume to instance
nova.volumes.create_server_volume(
    instance.id,
    volume.id,
    device='/dev/vdb'
)

# FC-Redirect automatically:
# 1. Creates an FC zone between the instance's host and the volume's target
# 2. Sets up a redirect policy with the requested QoS
# 3. Makes the volume visible to the instance

# The instance can now use /dev/vdb as FC-attached storage
From the instance's perspective, it's just a block device. Under the hood, FC-Redirect manages the complexity.
Performance Results
Testing with real OpenStack deployments:
Volume Operations:
- Volume create: <5 seconds
- Volume attach: <3 seconds
- Volume detach: <2 seconds
- Volume delete: <3 seconds
I/O Performance:
- IOPS: Same as native FC (no overhead)
- Bandwidth: Same as native FC
- Latency: +0.2 μs overhead (negligible)
Scalability:
- Tested: 1000 volumes across 100 tenants
- Volume operations: 100/second sustained
- No performance degradation
Lessons Learned
Integrating FC with OpenStack taught me:
1. APIs Enable Integration
RESTful APIs make FC-Redirect consumable from any orchestration platform, not just OpenStack.
2. Automation Is Essential
Manual zoning doesn't scale in cloud environments. Automation is mandatory.
3. Multi-Tenancy Requires Isolation
Cloud is inherently multi-tenant. Resource isolation must be first-class.
4. Monitoring Enables Metering
Detailed metrics enable usage-based billing and capacity planning.
5. Performance Can't Be Compromised
Cloud users still expect FC performance. The abstraction must be zero-overhead.
Looking Forward
This integration opens new possibilities:
- Hybrid clouds (OpenStack + AWS)
- Storage-as-a-Service offerings
- Automated disaster recovery
- Self-service provisioning for enterprise users
As more enterprises adopt private clouds, FC storage integration becomes critical. Our work positions Cisco well for this transition.
The cloud revolution doesn't mean abandoning proven technologies like Fibre Channel. It means making them cloud-native. That's what we've accomplished.
Traditional infrastructure meeting cloud practices: that's the future of enterprise IT.