A continuing theme in the storage industry these days is Software-Defined Storage (SDS). Though Software-Defined-anything is often simply a marketing label applied to any product that includes software, in the storage industry it refers to the degree to which a product truly virtualizes underlying storage capacity, performance, operations and management.
Expanding Explosion of Big Data
The onslaught of data being produced on the planet grows larger every day; by most industry estimates, the amount of data collected doubles roughly every two years. Increasing analytical sophistication plus real-time datasets from the Internet of Things will add significantly to the flood. Despite continued advances in storage density and speed, SDS solutions are desperately needed to facilitate infrastructure build-out, increase utilization and vanquish storage management complexity.
Bridging Disparate Environments
Smart SDS must grapple with heterogeneous hardware and software configurations, including the challenge of spanning on-site and public clouds seamlessly. Consistent, less complex storage interfaces promise to converge the underlying disparities in devices, interconnects and even geographic location, which will in turn simplify the application of enterprise IT policies.
Virtual SANs
A significant step toward solving such problems, represented by VMware’s recent release of vSAN software and its upcoming VVOL release, is to change the underlying paradigm of how storage virtualization interacts with VMs.
Instead of the VM essentially inheriting low-level properties of the storage array, such as LUNs and NAS mount points, the VM allocates its own storage object that abstracts away those finer points. Via VMware’s vStorage APIs, VAAI and VASA, the VM interacts with the storage system directly. From the VM’s side, the unit of storage is not a LUN but a storage container complete with metadata, services and a data store.
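To make the storage-object model concrete, here is a minimal, purely illustrative Python sketch of that idea. All class and field names here are my own invention for illustration, not VMware’s actual API: a VM provisions a policy-tagged storage object from a container rather than being mapped to a raw LUN.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualVolume:
    """A per-VM storage object carrying its own policy metadata."""
    vm_name: str
    size_gb: int
    policy: dict

@dataclass
class StorageContainer:
    """Stands in for the VVol-style container: metadata, services
    and a data store, rather than a raw LUN or NAS mount point."""
    name: str
    capacity_gb: int
    volumes: list = field(default_factory=list)

    def provision(self, vm_name: str, size_gb: int, policy: dict) -> VirtualVolume:
        # Enforce container capacity before handing out a new object.
        used = sum(v.size_gb for v in self.volumes)
        if used + size_gb > self.capacity_gb:
            raise ValueError("container capacity exceeded")
        vol = VirtualVolume(vm_name, size_gb, policy)
        self.volumes.append(vol)
        return vol

# The VM asks the container for storage with the policy it needs;
# LUN layout never enters the picture.
container = StorageContainer("gold-tier", capacity_gb=1000)
vol = container.provision("web-vm-01", 100, {"replication": 2, "flash": True})
```

The point of the sketch is the inversion of control: policy travels with the per-VM object, so the array can apply services at that granularity instead of per-LUN.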
Hyperscale Storage
Another approach to improving the cost and performance of storage virtualization is the software-centric Open Compute Project, which aims to eliminate proprietary, all-in-one server technology whose cost scales poorly in massive data centers such as those run by Google, Facebook and Yahoo. The server hardware is disaggregated, simplified, standardized and controlled by license-free software, where most of the intelligence resides.
Rather than single units of storage being shared by multiple VMs, both bottom-tier conventional bulk storage and top-tier flash storage reside in the same server cluster and are shared amongst the micro-server components. The whole arrangement is held together, abstracted and managed by the top software layer for the rack.
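One way to picture how that top software layer might steer data between the cluster’s shared tiers is a simple placement policy. The sketch below is a hypothetical heuristic of my own (the names and threshold are assumptions, not Open Compute code): hot, small objects go to the shared flash tier, everything else to bulk storage.

```python
FLASH_TIER = "flash"
BULK_TIER = "bulk"

def place_object(size_gb: int, iops_hint: int, flash_free_gb: int,
                 hot_threshold_iops: int = 1000) -> str:
    """Pick a tier for a new storage object.

    Objects expected to be I/O-hot are placed on the rack's shared
    flash tier if it has room; everything else lands on conventional
    bulk storage. The threshold is an arbitrary illustrative value.
    """
    if iops_hint >= hot_threshold_iops and size_gb <= flash_free_gb:
        return FLASH_TIER
    return BULK_TIER
```

In a real rack-scale system this decision would also weigh replication, wear-leveling and locality, but the core idea is the same: tiering is a software policy applied over commodity hardware, not a property of any one box.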
Hard Technology Developments
Software-defined anything requires cheaper, faster, more intelligent hardware beneath it in order to drive advances higher in the stack. All-Flash Arrays and software-enhanced flash caches enable solid-state and hybrid storage for the highest performance tier.
These, along with inline data reduction capabilities, are boosting storage server performance to new levels while simplifying load management. Higher speed interconnects and remote direct memory access network protocols further support wide-area storage virtualization and advanced computational models.
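Inline data reduction of the kind mentioned above commonly relies on content-addressed deduplication. The following minimal sketch is a generic illustration, not any vendor’s implementation: each unique block is stored once under its content hash, and duplicate writes only bump a reference count.

```python
import hashlib

class DedupStore:
    """Toy inline block-level deduplication store."""

    def __init__(self):
        self.blocks = {}    # content hash -> block data (stored once)
        self.refcount = {}  # content hash -> number of logical writes

    def write(self, data: bytes) -> str:
        # Hash the block; identical content always hashes to the same key,
        # so a duplicate write costs only a reference-count update.
        h = hashlib.sha256(data).hexdigest()
        if h not in self.blocks:
            self.blocks[h] = data
        self.refcount[h] = self.refcount.get(h, 0) + 1
        return h

    def read(self, h: str) -> bytes:
        return self.blocks[h]
```

Doing this inline, on the write path, is what lets flash-tier capacity stretch further without a separate post-process pass, at the cost of hashing every incoming block.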
Poised on the Cusp of Change
All in all, the storage industry appears to be approaching a historic inflection point as true software-defined storage emerges and storage, compute and networking capabilities reach new heights in performance and scale.