Fibre Channel over Ethernet (FCoE) is one of the most talked-about technologies in data center networking right now. At Cisco, we’re seeing significant interest from customers who want to converge their LAN and SAN networks. But is FCoE ready for prime time? Let me share my perspective.
The Convergence Vision
The idea behind FCoE is compelling: run both Ethernet and Fibre Channel traffic over a single unified fabric. This promises reduced cabling, fewer adapters, lower power consumption, and simplified management. Instead of separate networks for storage and data traffic, you have one converged infrastructure.
This vision aligns perfectly with the broader data center consolidation trends we’re seeing. Customers are trying to do more with less—fewer cables, fewer switches, less power, less cooling. FCoE fits naturally into this picture.
How FCoE Works
FCoE encapsulates Fibre Channel frames directly into Ethernet frames. Note that I said “directly”: FCoE doesn’t use IP or TCP. Instead, it uses a dedicated EtherType (0x8906) to carry native FC frames.
This approach preserves the FC protocol stack from FC-2 upward, which means all the SCSI command semantics remain unchanged. From the storage array’s perspective, it’s still speaking Fibre Channel; only the transport underneath has changed.
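To make the encapsulation concrete, here’s a minimal Python sketch of the frame layout, assuming the FC-BB-5 framing of a version field, reserved bits, and SOF/EOF code points. The default SOF/EOF values are illustrative, not pulled from a real capture.

```python
import struct

FCOE_ETHERTYPE = 0x8906

def fcoe_encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes,
                     sof: int = 0x2E, eof: int = 0x41) -> bytes:
    """Wrap a complete FC frame (header, payload, and CRC) in an FCoE
    Ethernet frame following the FC-BB-5 layout. The default SOF/EOF
    code points are illustrative placeholders."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    # FCoE header: 4-bit version plus reserved bits (13 bytes), then 1-byte SOF
    fcoe_header = bytes(13) + bytes([sof])
    # FCoE trailer: 1-byte EOF followed by 3 reserved bytes
    fcoe_trailer = bytes([eof]) + bytes(3)
    return eth_header + fcoe_header + fc_frame + fcoe_trailer

# A full-size FC frame is 2,148 bytes, so the resulting Ethernet frame
# exceeds the standard 1,500-byte MTU: FCoE links need "baby jumbo" frames.
```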
The magic happens in the FCoE Initialization Protocol (FIP), which handles discovery and login. FIP runs over its own EtherType (0x8914) and is responsible for discovering FCoE-capable devices and establishing virtual FC links over the Ethernet fabric.
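Here’s a rough sketch of the sequence FIP walks through, with FCF selection boiled down to the advertisement priority field (lower wins, per FIP). The dictionary shape of the advertisements is a hypothetical stand-in, not a real API.

```python
from enum import Enum, auto

FIP_ETHERTYPE = 0x8914   # FIP uses its own EtherType, distinct from FCoE's 0x8906

class FipPhase(Enum):
    DISCOVERY = auto()    # ENode multicasts Solicitations; FCFs answer with Advertisements
    LOGIN = auto()        # FLOGI/FDISC carried inside FIP frames
    OPERATIONAL = auto()  # virtual FC link up; traffic now flows as FCoE (0x8906)

def select_fcf(advertisements):
    """Pick an FCF from collected Discovery Advertisements.
    Lower priority value wins, as in FIP; the dict shape is illustrative."""
    return min(advertisements, key=lambda adv: adv["priority"])

# Example: two FCFs answered our solicitation; choose the higher-priority one.
fcfs = [{"name": "fcf-a", "priority": 128}, {"name": "fcf-b", "priority": 64}]
print(select_fcf(fcfs)["name"])   # -> fcf-b
```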
Data Center Bridging
For FCoE to work, the underlying Ethernet network needs to be lossless, just like Fibre Channel. This is where Data Center Bridging (DCB) comes in. DCB is a set of enhancements to standard Ethernet that add the capabilities FCoE needs:
Priority Flow Control (PFC): Defined in IEEE 802.1Qbb, this provides per-priority flow control, allowing lossless delivery for FCoE traffic while other traffic classes remain lossy. It’s more sophisticated than the old 802.3x PAUSE mechanism, which can only halt all traffic on the link at once.
Enhanced Transmission Selection (ETS): Defined in IEEE 802.1Qaz, this provides bandwidth management across different traffic classes. You can guarantee a minimum bandwidth share for storage traffic while allowing data traffic to burst into unused capacity.
Data Center Bridging Exchange (DCBX): This is the configuration protocol, carried in LLDP frames, that allows DCB parameters to be negotiated between devices so both ends of a link agree on their PFC and ETS settings.
Without DCB, FCoE simply won’t work reliably. Fibre Channel assumes a transport that never drops frames; a lost frame means slow SCSI-level timeouts and recovery rather than a quick retransmit. DCB is what gives Ethernet that lossless behavior, and the sketch below shows how the pieces fit together.
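A toy model of a typical DCB policy on a 10GbE converged link. Carrying FCoE on CoS 3 is the common convention, but the 50/50 ETS split here is purely illustrative, not a recommendation.

```python
LINK_GBPS = 10
FCOE_PRIORITY = 3   # CoS 3 is the conventional priority for FCoE traffic

# PFC (IEEE 802.1Qbb): pause is signaled per 802.1p priority, so only the
# FCoE class is made lossless; the other classes stay drop-eligible.
pfc_lossless = {prio: (prio == FCOE_PRIORITY) for prio in range(8)}

# ETS (IEEE 802.1Qaz): each priority group is guaranteed a minimum share of
# the link, and idle bandwidth can be borrowed by busy groups. A 50/50
# FCoE/LAN split is a common starting point, shown here as an illustration.
ets_shares = {"fcoe": 0.5, "lan": 0.5}

for group, share in ets_shares.items():
    print(f"{group}: guaranteed {share * LINK_GBPS:.0f} Gbps minimum")
```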
Converged Network Adapters
One of the key enablers for FCoE is the Converged Network Adapter (CNA). These adapters combine the functions of a traditional NIC and HBA into a single card. The CNA presents both Ethernet and Fibre Channel interfaces to the operating system.
From a server administrator’s perspective, the storage traffic still looks like FC. The drivers, management tools, and applications don’t change. The convergence happens below the surface.
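Conceptually, a CNA is one physical port that the operating system enumerates as two independent functions. Here’s a toy model; the field names and addresses are purely illustrative, not any vendor’s API.

```python
from dataclasses import dataclass

@dataclass
class EthernetFunction:
    mac_address: str   # the NIC personality the OS sees

@dataclass
class FcFunction:
    wwpn: str          # the HBA personality: an ordinary FC port name

@dataclass
class ConvergedNetworkAdapter:
    """One 10GbE physical port presenting two logical functions."""
    nic: EthernetFunction
    hba: FcFunction

cna = ConvergedNetworkAdapter(
    nic=EthernetFunction(mac_address="00:25:b5:00:00:01"),
    hba=FcFunction(wwpn="20:00:00:25:b5:00:00:01"),
)
# Existing FC drivers and tools bind to cna.hba exactly as they would to a
# standalone HBA; the shared 10GbE link underneath is invisible to them.
```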
Implementation Challenges
While the vision is compelling, FCoE implementation has some real challenges. First, it requires new hardware: your existing Ethernet switches almost certainly don’t support DCB or FCoE, so adoption means a forklift upgrade at least at the access layer.
Second, the operational model is different. Managing a converged fabric requires expertise in both networking and storage, skills that typically live in separate teams in large organizations. You need people who understand both technologies deeply.
Third, troubleshooting becomes more complex. When you have separate networks, fault isolation is straightforward. With a converged network, you need more sophisticated diagnostic tools.
Where FCoE Makes Sense
FCoE isn’t a universal solution. It makes the most sense in specific scenarios:
New data center builds: If you’re building from scratch, FCoE can reduce initial infrastructure costs significantly.
Highly virtualized environments: Server virtualization creates dense connectivity requirements. FCoE can reduce the number of cables and adapters needed per server.
Single-site deployments: FCoE currently works only within a single data center, so it fits environments that don’t need distance extension. For campus or metro SANs, native FC is still required.
The Cisco Nexus Approach
Cisco’s approach with the Nexus platform is interesting. The Nexus 5000 series provides FCoE at the access layer, while the MDS switches continue to handle the core FC fabric. This allows incremental adoption—you can deploy FCoE where it makes sense while preserving your existing FC infrastructure.
The Nexus devices perform FCoE-to-FC conversion, so your storage arrays don’t need to change. This is a pragmatic approach that recognizes the huge installed base of FC equipment.
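To sketch what that conversion involves: the access switch acts as an FCoE Forwarder (FCF), stripping the Ethernet encapsulation and forwarding the inner FC frame toward the core based on the destination ID. The offsets below follow the frame layout sketched earlier; the uplink map is a hypothetical stand-in for the switch’s forwarding table.

```python
def fcf_forward(fcoe_frame: bytes, fc_uplinks: dict):
    """De-encapsulate an FCoE frame and choose a native FC uplink."""
    # Skip the 14-byte Ethernet header and 14-byte FCoE header
    # (version/reserved/SOF); drop the 4-byte FCoE trailer (EOF plus
    # reserved bytes), assuming the Ethernet FCS is already stripped.
    fc_frame = fcoe_frame[28:-4]
    # In the FC header, byte 0 is R_CTL and bytes 1-3 are the 24-bit D_ID;
    # the top byte of the D_ID is the destination domain.
    domain = fc_frame[1]
    return fc_uplinks[domain], fc_frame
```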
Performance Considerations
In theory, 10 Gigabit Ethernet provides more bandwidth than 8 Gbps FC. But bandwidth isn’t everything. Latency matters too, especially for storage traffic.
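Before getting to latency, it’s worth putting numbers on the bandwidth claim. The quick calculation below uses each link’s line rate and coding scheme: 8G FC signals at 8.5 GBd with 8b/10b encoding, while 10GbE signals at 10.3125 GBd with 64b/66b.

```python
# Usable payload bandwidth after line-coding overhead, in Gbps.
links = {
    "8G FC (8.5 GBd, 8b/10b)":      8.5 * 8 / 10,
    "10GbE (10.3125 GBd, 64b/66b)": 10.3125 * 64 / 66,
}
for name, gbps in links.items():
    print(f"{name}: ~{gbps:.1f} Gbps")
# 8G FC works out to about 6.8 Gbps; 10GbE to about 10.0 Gbps.
```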
Early FCoE implementations add some latency compared to native FC, primarily due to the FCoE processing in the adapters and switches. For most applications, this latency is acceptable, but for ultra-low-latency requirements, native FC may still be preferred.
The Road Ahead
FCoE is evolving rapidly. Standards are maturing, implementations are improving, and the ecosystem is growing. But adoption is happening more slowly than some predicted. Many customers are taking a wait-and-see approach.
I think FCoE has a strong future in specific use cases, but it won’t completely replace native Fibre Channel anytime soon. What we’re likely to see is a hybrid environment where FCoE is used for server connectivity while FC continues to be used for core storage networking.
Conclusion
FCoE represents an important evolution in data center networking. The promise of convergence is real, but so are the implementation challenges. Success with FCoE requires careful planning, the right use cases, and expertise in both networking and storage.
As we continue developing advanced features for both FC and FCoE at Cisco, I’m excited about the possibilities. The unified fabric vision is compelling, and we’re getting closer to making it a practical reality for enterprise data centers.
The key is understanding that FCoE isn’t about replacing FC—it’s about extending FC capabilities to a converged infrastructure. With that perspective, FCoE becomes a powerful tool in the data center architect’s toolkit.