While I spend most of my time working with Fibre Channel at Cisco, I have a healthy respect for iSCSI. It’s easy to be dismissive of iSCSI when you’re deep in the FC world, but that misses the point. iSCSI has fundamentally democratized storage networking, and that’s worth understanding.
The iSCSI Proposition
iSCSI’s value proposition is straightforward: use your existing IP network for storage traffic. No specialized switches, no special cables, no HBAs—just standard Ethernet NICs and TCP/IP. This dramatically lowers the barrier to entry for networked storage.
For small to medium businesses, this is transformative. FC infrastructure has significant upfront costs: FC switches, HBAs, specialized cables, and expertise to manage it all. iSCSI lets you leverage existing infrastructure and skills.
How iSCSI Works
iSCSI encapsulates SCSI commands in TCP/IP packets. The protocol defines initiators (clients) and targets (storage devices), just like in the FC world. But instead of FC addressing and zoning, you use IP addresses and standard network security.
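To make the encapsulation concrete: every iSCSI PDU begins with a 48-byte Basic Header Segment (BHS) carrying the opcode, data length, and task tag, and that PDU rides inside an ordinary TCP segment. Here is a minimal Python sketch of packing a simplified BHS (field layout per RFC 7143; the opcode, length, and tag values are illustrative, and opcode-specific fields are simply zeroed):

```python
import struct

def pack_bhs(opcode: int, data_len: int, task_tag: int, lun: int = 0) -> bytes:
    """Pack a simplified 48-byte iSCSI Basic Header Segment.

    Byte 0      : opcode
    Byte 4      : TotalAHSLength (0 here: no additional header segments)
    Bytes 5-7   : DataSegmentLength (24-bit, big-endian)
    Bytes 8-15  : LUN (an opcode-specific field)
    Bytes 16-19 : Initiator Task Tag
    Remaining bytes are opcode-specific and left zeroed in this sketch.
    """
    bhs = bytearray(48)
    bhs[0] = opcode
    bhs[5:8] = data_len.to_bytes(3, "big")   # 24-bit data segment length
    struct.pack_into(">Q", bhs, 8, lun)
    struct.pack_into(">I", bhs, 16, task_tag)
    return bytes(bhs)

# A SCSI Command PDU (opcode 0x01) carrying 4 KiB of data
hdr = pack_bhs(opcode=0x01, data_len=4096, task_tag=0xCAFE)
```

The point of the sketch is how little framing there is: a fixed 48-byte header, then data, all inside a normal TCP stream.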
The protocol includes discovery mechanisms (SendTargets, iSNS), authentication (CHAP), and session management. It’s designed to work across routed networks, which FC cannot do natively.
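The discovery traffic itself is refreshingly simple. A SendTargets response is just key=value text, so you can read it straight out of a packet capture. A small Python sketch of parsing a response into target records (format per RFC 7143; the IQNs and addresses below are made up):

```python
def parse_send_targets(response: str) -> list:
    """Parse a SendTargets text response into a list of targets.

    Each TargetName= line starts a new record; subsequent TargetAddress=
    lines (addr:port,portal-group-tag) belong to that target.
    """
    targets = []
    for line in response.strip().splitlines():
        key, _, value = line.partition("=")
        if key == "TargetName":
            targets.append({"name": value, "addresses": []})
        elif key == "TargetAddress" and targets:
            addr, _, tpgt = value.rpartition(",")
            targets[-1]["addresses"].append((addr, int(tpgt)))
    return targets

# Illustrative response from a discovery session
resp = (
    "TargetName=iqn.2001-04.com.example:storage.disk1\n"
    "TargetAddress=192.0.2.10:3260,1\n"
    "TargetName=iqn.2001-04.com.example:storage.disk2\n"
    "TargetAddress=192.0.2.11:3260,1\n"
)
```

Compare that with decoding FC fabric services traffic, and the "use your IP troubleshooting skills" argument becomes very tangible.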
What’s interesting about iSCSI is that it leverages decades of IP networking experience. All the tools and techniques you use for troubleshooting IP networks—packet captures, ping, traceroute—work with iSCSI.
Performance Considerations
The big question everyone asks: How does iSCSI performance compare to FC? The answer, as always, is “it depends.”
With 10 Gigabit Ethernet becoming more affordable, iSCSI can match or exceed 8 Gbps FC in raw throughput. The latency story is more nuanced: TCP/IP processing adds overhead compared to FC's lightweight framing, but with modern TCP offload engines (TOEs) and optimized drivers, the gap has narrowed significantly.
For many workloads, especially those that aren’t extremely latency-sensitive, iSCSI performance is entirely adequate. We’re talking about latencies in the low millisecond range for most implementations—perfectly fine for file servers, email, and many database workloads.
Network Design for iSCSI
The key to good iSCSI performance is proper network design. You cannot just throw iSCSI traffic onto your existing network and expect great results. Here are the critical considerations:
Dedicated VLANs: Separate storage traffic from general network traffic. This prevents congestion and improves security.
Jumbo Frames: Enable jumbo frames (MTU 9000) to reduce overhead. This can improve throughput by 15-20% in many scenarios.
Flow Control: While not as critical as with FCoE, enabling flow control on storage ports can help prevent buffer overruns.
Multipathing: Use multiple paths for redundancy and load balancing. iSCSI supports MPIO (Multipath I/O) just like FC.
QoS: Implement quality of service policies to prioritize storage traffic during congestion.
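The jumbo frame benefit is worth a back-of-envelope check, because the gain is not where people expect: wire efficiency only improves a few percent, and most of the real-world improvement comes from moving far fewer packets per unit of data, which cuts interrupt and protocol-processing load. A quick Python calculation (overhead figures assume plain Ethernet plus IPv4 and TCP with no options, ignoring iSCSI PDU headers):

```python
import math

ETH_OVERHEAD = 38   # preamble 8 + Ethernet header 14 + FCS 4 + inter-frame gap 12
IP_TCP_HDRS = 40    # IPv4 20 + TCP 20 (no options)

def wire_efficiency(mtu: int) -> float:
    """Fraction of on-wire bytes that are usable payload."""
    payload = mtu - IP_TCP_HDRS
    return payload / (mtu + ETH_OVERHEAD)

def frames_for(data_bytes: int, mtu: int) -> int:
    """Packets needed to move data_bytes of payload at a given MTU."""
    return math.ceil(data_bytes / (mtu - IP_TCP_HDRS))

print(round(wire_efficiency(1500), 3))   # 0.949
print(round(wire_efficiency(9000), 3))   # 0.991
print(frames_for(1 << 20, 1500))         # 719 frames per MiB at MTU 1500
print(frames_for(1 << 20, 9000))         # 118 frames per MiB at MTU 9000
```

Roughly six times fewer frames per megabyte is where the CPU savings come from, which is also why the benefit shows up most on busy hosts without offload hardware.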
Security Considerations
Security with iSCSI requires more attention than it does with FC. Because iSCSI runs on standard IP networks, it is exposed to a broader threat surface. Best practices include:
Authentication: Always use CHAP authentication, preferably mutual CHAP where both initiator and target authenticate each other.
Network Isolation: Use dedicated storage networks, not the general corporate LAN. If you must share infrastructure, use VLANs with access controls.
Encryption: For traffic crossing untrusted networks, use IPsec encryption. However, this adds latency and complexity.
Access Control: Implement strict ACLs limiting which initiators can reach which targets.
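The CHAP exchange behind the first recommendation is simple: the target sends an identifier and a random challenge, and the initiator replies with MD5(id || secret || challenge), so the shared secret never crosses the wire. A minimal sketch of the computation (per RFC 1994; the secret here is illustrative, and mutual CHAP simply runs the same exchange in the reverse direction as well):

```python
import hashlib
import os

def chap_response(chap_id: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP response: MD5 over id byte || shared secret || challenge."""
    return hashlib.md5(bytes([chap_id]) + secret + challenge).digest()

# Target side: issue a random challenge for this authentication round
chap_id = 1
challenge = os.urandom(16)
secret = b"example-chap-secret"   # shared out of band, never sent on the wire

# Initiator computes the response; target recomputes and compares
reply = chap_response(chap_id, secret, challenge)
assert reply == chap_response(chap_id, secret, challenge)
assert reply != chap_response(chap_id, b"wrong-secret", challenge)
```

Because the challenge is random per session, captured responses cannot be replayed, though CHAP does nothing to protect the data itself; that is what IPsec is for.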
iSCSI Offload Engines
One way to improve iSCSI performance is with specialized hardware. iSCSI HBAs handle both TCP/IP and iSCSI protocol processing in hardware, removing that load from the host CPU.
For busy database servers or virtualization hosts, iSCSI HBAs can make a significant difference. They provide performance closer to FC while maintaining the operational benefits of IP-based storage.
Software initiators have improved dramatically too. Modern operating systems have highly optimized iSCSI initiators that work well for many workloads without specialized hardware.
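To show how lightweight the software-initiator workflow is, here is roughly what discovery and login look like with the open-iscsi initiator on Linux (the portal address and IQN are placeholders, and exact flags may vary slightly by distribution):

```shell
# Discover targets on a portal via SendTargets
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260

# Log in to one of the discovered targets
iscsiadm -m node -T iqn.2001-04.com.example:storage.disk1 \
         -p 192.0.2.10:3260 --login

# List active sessions
iscsiadm -m session
```

Three commands and the LUN shows up as a local block device. There is no fabric login, no zoning change, no switch configuration in the path.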
When to Choose iSCSI Over FC
iSCSI makes sense in several scenarios:
Budget Constraints: When FC infrastructure costs are prohibitive, iSCSI provides a cost-effective alternative.
Existing IP Expertise: Organizations with strong IP networking skills but limited FC experience.
Distance Requirements: iSCSI works over routed networks, enabling storage connectivity across campus or WAN distances without expensive FC extension solutions.
Mixed Environments: When you need to connect diverse operating systems and platforms that may not have FC support.
Moderate Performance Requirements: For workloads that don’t require the absolute lowest latency.
The Virtualization Factor
Server virtualization has been a huge driver for both FC and iSCSI adoption. VMware, which dominates the virtualization market, supports both protocols equally well.
For virtualization, iSCSI has some advantages. It’s easier to provision, simpler to troubleshoot, and the IP-based addressing model fits naturally with virtualized infrastructure. Many VMware shops use iSCSI exclusively and are very happy with it.
Real-World Performance
I’ve seen iSCSI deployments supporting hundreds of virtual machines with good performance. The key is proper design and tuning. Common mistakes include:
- Using 1 Gbps Ethernet (too slow for production)
- Sharing NICs between storage and application traffic
- Not enabling jumbo frames
- Insufficient switch buffer space
- Poor multipathing configuration
When designed correctly, iSCSI can deliver excellent performance. I’ve seen iSCSI SANs sustaining hundreds of thousands of IOPS with sub-2ms latency.
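Numbers like that also show why link sizing and multipathing matter so much. A quick illustrative Python check of the bandwidth a given IOPS load implies (the workload figures are hypothetical, and protocol overhead is ignored):

```python
def gbps_needed(iops: int, block_bytes: int) -> float:
    """Line rate in Gbps implied by an IOPS load, ignoring protocol overhead."""
    return iops * block_bytes * 8 / 1e9

# 200k IOPS of 8 KiB I/O -- a plausibly busy virtualized SAN
rate = gbps_needed(200_000, 8192)
print(round(rate, 1))  # 13.1 Gbps: more than one 10GbE link can carry
```

A single 10GbE port cannot carry that load, so a design sustaining it needs multiple active paths, which is exactly where the MPIO configuration mistakes above start to hurt.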
The Hybrid Approach
Interestingly, many organizations use both FC and iSCSI. They might use FC for Tier 1 applications requiring the absolute best performance and iSCSI for less critical workloads. This hybrid approach leverages the strengths of both protocols.
Some storage arrays support both FC and iSCSI front-ends, allowing different hosts to connect via their preferred protocol. This flexibility is valuable in heterogeneous environments.
Looking Forward
iSCSI will continue to evolve alongside Ethernet. As 40 and 100 Gigabit Ethernet become more common, iSCSI’s performance ceiling rises accordingly. The protocol is well-positioned to take advantage of these improvements.
I expect to see continued convergence between iSCSI and Ethernet networking. Features like Data Center Bridging (DCB) that were developed for FCoE also benefit iSCSI. The line between "storage network" and "data network" continues to blur.
Conclusion
While FC remains the gold standard for enterprise storage networking, iSCSI has carved out an important role. It’s not about which protocol is “better”—it’s about choosing the right tool for your requirements, constraints, and expertise.
iSCSI has democratized storage networking, bringing SAN capabilities to organizations that couldn’t justify FC infrastructure. That’s a significant achievement. As I work on FC technology at Cisco, I appreciate that iSCSI serves a real need in the market.
The storage networking world is big enough for multiple protocols. Understanding when to use each one is a valuable skill for anyone working in this space.