Know Before You Go: Understanding the Software-Defined Storage Market
With SDS, organizations can make huge gains in storage capacity at a reasonable cost. However, as adoption of SDS grows, it is important for organizations to understand their current options. There are many potential pitfalls in the SDS market, including compatibility problems, consistency issues and missing features.
Any one of these, let alone a combination of them, can create problems. Below are recommendations for what to avoid and what to look for to deploy an SDS strategy that best serves your organization’s needs.
The Importance of Unified Storage
An approach to SDS that incorporates all storage types will help you get the most out of your choice. For instance, if your current file-based storage system also supports object storage, it saves you the hassle of managing and balancing several separate storage systems. First, this unified approach is easier to manage; second, it makes better and more efficient use of resources in terms of performance and capacity. It’s similar to virtualization, where you cut back on hardware resources that sit idle. By taking a unified approach, you use your resources more intelligently.
This approach is surprisingly difficult to find in today’s market, however. Some software-defined storage vendors claim to offer flexibility, to meet enterprise needs with object, block and file storage, to be both hyper-converged and hyperscale, and to support flash storage. Many, however, lack the features to back up those claims.
This means many of the options on the SDS market are narrowly focused on a single use case, such as:
- Hybrid cloud
- Scale-out file systems
- Object storage
- Archiving
- SAN
- Hyper-convergence
These narrow options do boast a lower cost – usually about one-third the price of more comprehensive solutions. But you get what you pay for: they also have one-third of the features. In addition, they are not designed for general-purpose NAS.
NAS and Consistency
Many enterprises would benefit from a general-purpose NAS that scales well. However, just as with SDS, not all NAS solutions are created equal. Many enterprises do not realize that consistency is critical in scale-out NAS. Some storage environments are only eventually consistent, meaning files written to one node are not immediately accessible from other nodes.
Even when the other nodes have been updated to reflect the change made on the original node, a delay of just fractions of a second can cause problems for the applications or users accessing the files. This can be caused by an improper implementation of the protocols, or by insufficiently tight integration with the virtual file system.
This is not the end of the NAS story, however. It’s also possible to have strict consistency. Being strictly consistent means files are accessible from all nodes at the same time: the view of the file system through each node is identical, so any modification on one node is instantly available from any other node. Make sure that your solution is consistent between protocols as well. That means a file written over SMB, for example, should be immediately visible over NFS too.
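The difference between the two models can be made concrete with a toy model. Below is a minimal Python sketch – an illustration, not any vendor’s implementation – of a two-node store. The eventually consistent version acknowledges a write before the second node has seen it, so a read from the other node can return stale (or no) data until replication catches up; the strictly consistent version updates every node before the write returns.

```python
class EventuallyConsistentStore:
    """Toy two-node store: writes land on one node and replicate later."""

    def __init__(self):
        self.nodes = [{}, {}]   # node 0 and node 1
        self.pending = []       # replication queue: (key, value) pairs

    def write(self, key, value):
        self.nodes[0][key] = value          # the write hits node 0 immediately
        self.pending.append((key, value))   # node 1 sees it only after replication

    def read(self, node, key):
        return self.nodes[node].get(key)    # returns None if the node has no copy

    def replicate(self):
        # Apply queued writes to the second node (the "eventual" part).
        for key, value in self.pending:
            self.nodes[1][key] = value
        self.pending.clear()


class StrictlyConsistentStore(EventuallyConsistentStore):
    """Writes are applied to every node before the call returns."""

    def write(self, key, value):
        for node in self.nodes:
            node[key] = value


ec = EventuallyConsistentStore()
ec.write("report.csv", "v1")
print(ec.read(1, "report.csv"))   # None -- node 1 has not replicated yet
ec.replicate()
print(ec.read(1, "report.csv"))   # v1

sc = StrictlyConsistentStore()
sc.write("report.csv", "v1")
print(sc.read(1, "report.csv"))   # v1 immediately, from any node
```

In a real scale-out NAS the replication window is fractions of a second rather than an explicit `replicate()` call, but the failure mode is the same: an application that writes through one node and reads through another can observe the gap.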
Putting the SDS Pieces Together
It is clear that you need a unified approach that encompasses all storage types, and that you need strict consistency. Here are the other elements of a comprehensive SDS option:
- File systems – With the majority of data being unstructured, you need file storage. Make sure your SDS setup includes crucial file features such as tiering, quotas, snapshots, encryption, antivirus, WORM and retention. It should also integrate with Microsoft Active Directory, support multiple authentication providers and enforce authorization checks. If your company is a large one, ensure that the solution supports multi-tenancy, where you can create multiple file systems in the same environment.
- Hybrid cloud – Exchanging data between your local presence and the cloud is important. For example, part of your local storage system may be exposed to virtual machines running in a public cloud like Amazon. That means your SDS file system needs to cover both environments so you can easily pass files between them.
- Hardware-agnostic – Standard commodity storage hardware and servers are a money-saving option with SDS. You can add additional hardware of your choice as needed to scale performance and capacity over time.
- Hyperconverged – Software-based architecture integrates compute, storage, networking and virtualization resources and other technologies on a commodity server.
- Disaster recovery – A distinct disaster recovery policy can be applied to each of your applications, and you can remain highly available if you choose an SDS solution whose storage cluster is straightforward to back up.
- Scalable and flexible – SDS makes scalability easy. You can start small and later rapidly add multiple virtual machines to the same cluster, eliminating the cost and hassle of building new clusters to accommodate scale-out. If a storage cluster is built on a symmetric architecture, linear scaling up to hundreds of petabytes and billions of files is possible, simply by adding more storage nodes to the cluster. Adding storage nodes and increasing capacity can be carried out during runtime and does not interrupt any ongoing operations in the cluster.
Making a Wise Choice
The growth of cloud-based infrastructure, the rise in the adoption of virtualization technologies and BYOD are some of the primary forces that have driven the growth of the SDS market. Enterprises are in serious need of greater storage capacity that is scalable, flexible and affordable. SDS meets all these criteria, but not all SDS options provide the same features. To avoid ending up with SDS that doesn’t meet all your needs, use the above criteria to find an approach that uses general-purpose NAS and offers compatibility and consistency.
About the Author: Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Stefan has designed and built numerous enterprise-scale data storage solutions designed to be cost-effective for storing huge data sets. From 2004 to 2010, Stefan worked in this field for Storegate, the wide-reaching Internet-based storage solution for consumer and business markets, with the highest possible availability and scalability requirements. Previously, Stefan worked on system and software architecture for several projects at Swedish giant Ericsson, the world-leading provider of telecommunications equipment and services to mobile and fixed network operators.