As we start 2012, the storage industry is at an inflection point. Several major trends are converging that will reshape how we think about storage infrastructure. Based on what I’m seeing at Cisco and in the broader market, here are the trends I’m watching this year.
Flash Storage Goes Mainstream
Flash storage has moved from exotic to essential. SSDs are no longer just for laptops—they’re becoming critical components of enterprise storage infrastructure.
The economics are improving dramatically. While SSDs remain more expensive per GB than spinning disks, the $/IOPS metric tells a different story. A single SSD can deliver 10,000+ IOPS while a spinning disk tops out at a few hundred, so one SSD can replace 100+ traditional disks for IOPS-bound workloads.
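To make the $/GB versus $/IOPS contrast concrete, here's a back-of-envelope comparison in Python. Every figure is an illustrative assumption, not vendor pricing:

```python
# Rough $/GB vs. $/IOPS comparison. All figures are illustrative
# 2012-era assumptions, not vendor pricing.

ssd = {"price": 1200.0, "capacity_gb": 200,  "iops": 10_000}  # enterprise SSD
hdd = {"price": 300.0,  "capacity_gb": 2000, "iops": 100}     # 7.2K nearline disk

for name, drive in (("SSD", ssd), ("HDD", hdd)):
    print(f"{name}: ${drive['price'] / drive['capacity_gb']:.2f}/GB, "
          f"${drive['price'] / drive['iops']:.3f}/IOPS")

# For a workload that needs 10,000 random IOPS:
spindles = 10_000 / hdd["iops"]
print(f"{spindles:.0f} HDDs (${spindles * hdd['price']:,.0f}) "
      f"vs. one SSD (${ssd['price']:,.0f})")
```

With these assumed numbers the SSD costs 40x more per GB but 25x less per IOPS, which is exactly why placement, not raw capacity, drives the economics.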
We’re seeing several deployment models:
All-Flash Arrays: Pure SSD arrays for maximum performance. Still expensive but increasingly viable for Tier 1 applications.
Hybrid Arrays: Combining SSD and spinning disk with intelligent tiering. Hot data on SSD, cold data on disk.
Server-Side Flash: PCIe flash cards in servers for ultra-low latency. Useful for caching or local storage.
Array-Based Caching: Using SSD to extend array cache. More cost-effective than all-flash but still provides significant performance improvement.
The challenge is managing the economics. SSD is expensive, so you need intelligent software to ensure hot data lands on SSD while cold data stays on cheaper disk.
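Here's a minimal sketch of the placement decision a hybrid array makes, assuming a simple access-count heuristic; the threshold, window, and capacities are invented for illustration:

```python
# Toy hot/cold placement heuristic: blocks accessed frequently in the
# current window go to SSD; everything else stays on disk. Real arrays
# use decaying heat maps, sub-LUN granularity, and migration cost models.

from collections import Counter

HOT_THRESHOLD = 100          # accesses per window; an assumed tunable
SSD_CAPACITY_BLOCKS = 10_000

access_counts = Counter()    # block id -> accesses in the current window

def record_access(block_id):
    access_counts[block_id] += 1

def plan_migration():
    """Return (promote_to_ssd, demote_to_hdd) for the next cycle."""
    hot = {b for b, n in access_counts.most_common(SSD_CAPACITY_BLOCKS)
           if n >= HOT_THRESHOLD}
    cold = set(access_counts) - hot
    return hot, cold
```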
Software-Defined Storage Emergence
The software-defined everything trend is reaching storage. The idea: decouple storage intelligence from proprietary hardware, delivering sophisticated storage features as software running on commodity servers.
This mirrors what’s happening in networking with software-defined networking (SDN). By separating the control plane from the data plane, you can innovate rapidly in software while using cheap, standardized hardware.
Early products are appearing:
Scale-Out NAS: Software that creates distributed file systems across commodity servers.
Virtual SANs: Aggregating direct-attached storage from many servers into a shared pool.
Storage Virtualization: Presenting a virtual view of storage independent of physical arrays.
The promise is better economics and faster innovation. The challenge is reliability—can software-defined storage match the reliability of purpose-built arrays?
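To make the virtual SAN idea concrete, here's a toy sketch of pooling direct-attached capacity across servers. Node names and sizes are invented, and real products add replication, striping, and failure handling on top of this:

```python
# Toy virtual-SAN pooling: aggregate direct-attached capacity from many
# servers into one logical pool. Real systems add replication, striping,
# failure handling, and rebalancing; this only shows the aggregation idea.

servers = {"node1": 2000, "node2": 2000, "node3": 4000}  # free GB per node

def pool_capacity(nodes):
    return sum(nodes.values())

def allocate(nodes, size_gb):
    """Greedy placement of a volume's extents across nodes."""
    placement, remaining = [], size_gb
    for node, free in sorted(nodes.items(), key=lambda kv: -kv[1]):
        take = min(free, remaining)
        if take:
            placement.append((node, take))
            nodes[node] -= take
            remaining -= take
    if remaining:
        raise RuntimeError("pool exhausted")
    return placement

print(pool_capacity(servers))        # 8000 GB logical pool
print(allocate(servers, 3000))       # e.g. [('node3', 3000)]
```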
Cloud Storage Integration
Cloud storage is no longer just for consumers. Enterprises are integrating cloud storage into their infrastructure:
Cloud Tiering: Automatically moving cold data to cloud storage for cost savings.
Cloud Backup: Using cloud storage as a backup target instead of tape or disk.
Disaster Recovery: Replicating to cloud storage for geographic redundancy.
Hybrid Cloud: Seamlessly extending on-premises storage to cloud.
The economics are compelling for certain workloads. The challenges are around security, compliance, and performance for data in the cloud.
Vendors are responding with cloud-integrated arrays that make on-premises storage work seamlessly with cloud storage providers.
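As a sketch of the cloud-tiering idea from the list above, here's age-based selection of migration candidates. The `upload_to_cloud` callable is a hypothetical placeholder for whatever provider API a real array or gateway would use:

```python
# Age-based cloud tiering sketch: files untouched for N days become
# candidates for migration to cheaper cloud storage. The upload function
# is a hypothetical placeholder, not a real provider API.

import os
import time

TIER_AFTER_DAYS = 90  # an assumed policy threshold

def find_cold_files(root, days=TIER_AFTER_DAYS):
    cutoff = time.time() - days * 86400
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getatime(path) < cutoff:
                yield path

def tier_out(root, upload_to_cloud):
    for path in find_cold_files(root):
        upload_to_cloud(path)
        # a real implementation would leave a stub/pointer behind
        # so the file remains transparently accessible
```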
Big Data Storage Requirements
Big data is exploding, driven by analytics, machine learning, and data-driven business processes. This creates unique storage requirements:
Massive Scale: Petabyte-scale datasets are becoming common.
High Throughput: Analytics jobs need to read/write massive amounts of data quickly.
Cost Sensitivity: Big data storage needs to be economical; at petabyte scale, traditional enterprise arrays are cost-prohibitive.
Specialized Workloads: Sequential access patterns, append-heavy workloads.
Hadoop and its ecosystem are driving much of this. HDFS provides cost-effective scale-out storage for big data workloads.
Traditional enterprise storage vendors are adapting, offering products optimized for Hadoop workloads. The line between “storage” and “compute” is blurring—Hadoop co-locates them.
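A quick worked example shows why aggregate throughput, and therefore scale-out and co-location, matters so much here. The per-node figure is an illustrative assumption:

```python
# Time to scan a 1 PB dataset at a given aggregate read rate.
# The per-node throughput figure is an illustrative assumption.

DATASET_MB = 1000 * 1e6   # 1 PB expressed in MB
PER_NODE_MB_S = 400       # aggregate local-disk reads per node

for nodes in (10, 100, 1000):
    hours = DATASET_MB / (nodes * PER_NODE_MB_S) / 3600
    print(f"{nodes:4d} nodes: full scan in {hours:,.1f} h")
```

At 10 nodes a full scan takes almost three days; at 1,000 nodes, well under an hour. Pushing compute to the data instead of the data to compute is the whole point.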
Virtualization Deepens
Server virtualization continues to drive storage changes:
Higher Density: Blade servers running 30+ VMs generate enormous, highly randomized I/O as dozens of VM streams interleave on the same links and LUNs.
Storage APIs: VMware’s VAAI offloads storage operations to arrays, improving efficiency.
Virtual Volumes: Storage array awareness of individual VMs rather than just LUNs.
Storage DRS: Automated load balancing of VM storage across arrays.
The storage array increasingly needs to be virtualization-aware. The VM becomes the unit of management, not the LUN.
We’re also seeing storage virtualization merge with server virtualization. VMware’s vSphere Storage Appliance creates virtual SANs from local storage.
Convergence Continues
FCoE adoption continues, albeit more slowly than some predicted. The promise of unified fabric remains compelling:
Reduced Infrastructure: Fewer cables, adapters, and switches.
Simplified Management: One fabric instead of separate LAN and SAN.
Better Utilization: Share bandwidth between network and storage traffic.
But adoption faces challenges:
Complexity: Managing converged infrastructure requires broader skills.
Forklift Upgrades: Requires replacing existing infrastructure.
Conservative IT: Storage teams are (appropriately) conservative about changes to working infrastructure.
I expect gradual adoption where it makes sense—new data center builds and highly virtualized environments—rather than wholesale replacement of existing FC infrastructure.
Data Protection Evolution
Data protection is evolving beyond traditional backup:
Continuous Data Protection: Near-zero RPO through continuous replication.
Snapshot-Based Backup: Using array-based snapshots instead of backup software.
Dedupe and Compression: Dramatically reducing backup storage requirements.
Cloud Backup: Using cloud storage as backup target.
Recovery Focus: Shifting from "backup" to "recovery," optimizing for fast, reliable restores rather than just completing backup jobs.
The goal is reducing both RPO (how much data can be lost) and RTO (how long recovery takes). This requires rethinking traditional backup approaches.
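A quick worked example of how protection frequency bounds RPO; the intervals here are assumed, not measured:

```python
# Worst-case RPO is bounded by the gap between protection points.
# Intervals are illustrative assumptions.

schedules = {
    "nightly backup":         24 * 60,  # minutes between protection points
    "hourly snapshots":       60,
    "continuous replication": 0.1,      # bounded by replication lag
}

for method, interval_min in schedules.items():
    print(f"{method}: worst-case RPO ~ {interval_min:g} minutes of lost data")
```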
Storage Management Automation
The scale and complexity of modern storage demands automation:
Policy-Based Management: Define policies, let software handle implementation.
Automated Provisioning: Self-service storage allocation without manual intervention.
Automated Tiering: Move data between storage tiers based on access patterns.
Capacity Management: Automated monitoring and alerting for capacity issues.
Analytics: Using data analytics to optimize storage placement and predict failures.
Manual storage management doesn’t scale. Automation is essential for modern data centers.
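To make policy-based management concrete, here's a minimal sketch in which the requester names a service level and software resolves everything else. The policy names and attributes are invented for illustration:

```python
# Policy-based provisioning sketch: the requester names a policy,
# software resolves it to tier, protection, and capacity decisions.
# Policy names and attributes are invented for illustration.

POLICIES = {
    "gold":   {"tier": "ssd",    "replicas": 2, "snapshots_per_day": 24},
    "silver": {"tier": "hybrid", "replicas": 2, "snapshots_per_day": 4},
    "bronze": {"tier": "sata",   "replicas": 1, "snapshots_per_day": 1},
}

def provision(name, size_gb, policy):
    spec = POLICIES[policy]
    volume = {"name": name, "size_gb": size_gb, **spec}
    # a real system would now carve the volume on a matching array,
    # configure replication, and register the snapshot schedule
    return volume

print(provision("erp-db01", 500, "gold"))
```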
10 Gigabit Ethernet Mainstream
10 Gigabit Ethernet is becoming standard for server connectivity. This has major implications:
iSCSI Performance: 10 GbE makes iSCSI viable for performance-critical workloads.
FCoE Enablement: FCoE requires 10 GbE as baseline infrastructure.
Network Convergence: High bandwidth enables consolidating multiple networks.
Server Virtualization: High bandwidth supports dense VM deployments.
As 10 GbE costs come down, we’ll see even broader adoption. This improves the economics of IP-based storage protocols.
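For a sense of why 10 GbE changes the iSCSI story, some rough line-rate arithmetic; the overhead figure is a ballpark assumption:

```python
# Rough usable-throughput arithmetic for storage over Ethernet.
# The 10% combined TCP/IP + iSCSI overhead figure is a ballpark assumption.

OVERHEAD = 0.10

def usable_mb_s(link_gbps):
    return link_gbps * 1000 / 8 * (1 - OVERHEAD)

for gbps in (1, 10):
    print(f"{gbps:2d} GbE: ~{usable_mb_s(gbps):.0f} MB/s usable for iSCSI")
```

Roughly 112 MB/s on 1 GbE versus 1,125 MB/s on 10 GbE: a tenfold jump that puts iSCSI in the same bandwidth class as 8 Gb Fibre Channel.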
Data Center Architecture Evolution
Data center networks are evolving from hierarchical to more distributed architectures:
Leaf-Spine Topologies: Replacing traditional core-aggregation-access designs.
East-West Optimization: Optimizing for server-to-server traffic, not just north-south.
Scalability: Horizontal scaling by adding nodes rather than upgrading nodes.
This aligns with modern application architectures—distributed, scale-out applications rather than monolithic applications.
Storage architecture must adapt to these new data center designs.
Energy Efficiency
Power and cooling costs are significant data center expenses. Storage vendors are responding:
Efficient Hardware: Lower-power components, better power supplies.
Intelligent Spin-Down: Spinning down idle disks to save power.
Solid State Storage: SSDs use less power than spinning disks.
Consolidation: Higher density reduces overall power per TB.
Measurement: Better metrics (PUE, etc.) to track and optimize efficiency.
Environmental concerns and cost pressures are driving energy efficiency up the priority list.
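Some back-of-envelope arithmetic shows where each technology is efficient; all figures below are illustrative assumptions. SSDs win decisively on power per IOPS, while high-capacity disks still lead on power per TB stored:

```python
# Back-of-envelope power efficiency. Wattage, capacity, and IOPS figures
# are illustrative 2012-era assumptions, not measured values.

drives = {
    "15K SAS (600 GB)": {"watts": 10.0, "tb": 0.6, "iops": 175},
    "7.2K SATA (3 TB)": {"watts": 8.0,  "tb": 3.0, "iops": 80},
    "SSD (400 GB)":     {"watts": 4.0,  "tb": 0.4, "iops": 10_000},
}

for name, d in drives.items():
    print(f"{name}: {d['watts'] / d['tb']:5.1f} W/TB, "
          f"{1000 * d['watts'] / d['iops']:7.2f} W per 1000 IOPS")
```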
Multi-Tenancy and Service Delivery
As IT becomes more service-oriented, storage must support multi-tenancy:
Isolation: Segregate different tenants’ data and performance.
Chargeback: Metering and billing for storage consumption.
Self-Service: Allow tenants to provision and manage their own storage.
SLAs: Different service levels for different tenants or applications.
This is essential for cloud service providers but also valuable for enterprises with internal IT service delivery models.
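A minimal chargeback sketch: meter per-tenant consumption and price it by service level. Tenants, usage, and rates are invented for illustration:

```python
# Minimal chargeback sketch: meter per-tenant consumption and price it
# by service level. Rates and tenants are invented for illustration.

RATES_PER_GB_MONTH = {"gold": 0.90, "silver": 0.45, "bronze": 0.15}

usage = [  # (tenant, service level, GB provisioned)
    ("finance", "gold",   2_000),
    ("web",     "silver", 8_000),
    ("archive", "bronze", 50_000),
]

bills = {}
for tenant, level, gb in usage:
    bills[tenant] = bills.get(tenant, 0) + gb * RATES_PER_GB_MONTH[level]

for tenant, amount in bills.items():
    print(f"{tenant}: ${amount:,.2f}/month")
```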
Looking Ahead
These trends will continue evolving throughout 2012 and beyond. The common themes are:
- Adaptation to virtualization and cloud
- Software innovation over hardware innovation
- Automation and self-service
- Scale-out architectures
- Cost optimization
Working on FC-Redirect, I see how storage virtualization enables many of these trends. Decoupling logical storage from physical arrays provides the flexibility needed for modern data centers.
The storage industry is vibrant and innovative. While the fundamentals—reliably storing and retrieving data—don’t change, how we accomplish them is evolving rapidly.
Conclusion
2012 will be an exciting year for storage technology. Flash adoption, cloud integration, big data requirements, and deepening virtualization are reshaping the industry.
For storage professionals, this means continuous learning and adaptation. The skills that made you successful five years ago aren’t sufficient today. You need to understand not just storage but also networking, virtualization, cloud, and application architecture.
The good news is that storage remains critical to business operations. Despite all the changes, the need for reliable, performant, available storage only increases. Understanding these trends positions you to architect storage infrastructure for the next generation of applications.
I’m excited to see where these trends lead. Working at Cisco gives me a front-row seat to industry evolution. I’ll continue sharing insights as we navigate these changes together.