This is the second in a series of five posts on the market for hyperconverged infrastructure (HCI).
To be clear, as the previous blog post outlined, there are many options in this space, and the decision shouldn't be made lightly. Evaluate your needs, develop a complete understanding of what you hope to accomplish and why one of these solutions might get you there, and let that understanding guide your decision-making process.
Here’s a current list of industry players in the hyperconverged space:
- Stratoscale
- Pivot3
- DellEMC Vx series
- NetApp
- Huawei
- VMware
- Nutanix
- HPE SimpliVity
- HyperGrid
- Hitachi Data Systems (Vantara)
- Cisco
- Datrium
There are more, but these are the biggest names today.
Each of these technologies lands near the top of the solution set evaluated by the Gartner HCI Magic Quadrant (MQ). The real question is: which is the right one for you?
Questions to Ask When Choosing Hyperconvergence Vendors
Organizations should ask lots of questions to determine which vendor(s) to pursue. Those questions shouldn’t be based on a vendor’s placement in the Gartner MQ, but on your organization’s direction, its requirements, and what’s already in use.
You also shouldn’t ignore the knowledge base of your technical staff. For example, I wouldn’t want to hand a KVM-only hypervisor requirement to a historically VMware-only staff without understanding the learning curve and the potential for mistakes. Are you planning to run virtual machines or containers? Each brings its own considerations. What about a cloud element? Most architectures support cloud, but you should ask which cloud platform, and which applications, you’ll be using.
One of the biggest variables, and one that should always be considered, is backup, recovery, and DR. Do you have a plan in place? Will your existing environment support this vendor’s approach? Have you evaluated the full spectrum of how this will be done? What sets one platform apart is how its storage handles tasks like replication, deduplication, redundancy, fault tolerance, encryption, and compression. In my mind, how these are handled, and how well they integrate into your existing environment, must be part of the decision.
I’d also look closely at how the security regulations your organization faces are addressed in the architecture you choose. Will that affect the vendor you pick? It can, though for some organizations it may not be relevant at all.
I would also consider the company’s track record. We assume Cisco, NetApp, or HPE will be around, as they’ve been delivering support and solutions for decades. To be fair, longevity isn’t the only measure of a company, but it’s a very reasonable concern when it comes to supporting the environment, future technology breakthroughs, enhancements, and perhaps the next purchase, should it be appropriate.
Now, my goal here isn’t to make recommendations, but to urge readers to narrow down a daunting list and then evaluate the features and functions most relevant to their organization. If you undertake a true evaluation, my recommendation is to research the depth of your company’s needs and which of them can be met by placing one of these devices, or a series of them, in your environment.
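One way to make that narrowing-down concrete is a simple weighted scoring matrix: weight the questions above by how much each matters to your organization, score each shortlisted vendor against them, and rank the results. The sketch below is purely illustrative; the criteria, weights, vendor names, and scores are placeholders I've made up, not recommendations.

```python
# Hypothetical weighted scoring matrix for narrowing an HCI shortlist.
# All criteria, weights, and scores below are illustrative placeholders --
# substitute your organization's actual requirements and assessments.

# Weight each criterion by importance to your organization (weights sum to 1.0).
weights = {
    "hypervisor_fit": 0.30,   # matches your staff's existing skill set
    "backup_dr": 0.25,        # replication, dedupe, DR integration
    "security": 0.20,         # regulatory and compliance fit
    "track_record": 0.15,     # vendor longevity and support history
    "cloud_support": 0.10,    # works with your target cloud platform
}

# Score each shortlisted vendor 1-5 per criterion (made-up numbers).
vendors = {
    "Vendor A": {"hypervisor_fit": 5, "backup_dr": 3, "security": 4,
                 "track_record": 5, "cloud_support": 3},
    "Vendor B": {"hypervisor_fit": 3, "backup_dr": 5, "security": 4,
                 "track_record": 3, "cloud_support": 5},
}

def weighted_score(scores, weights):
    """Sum of score * weight across all criteria."""
    return sum(scores[c] * w for c, w in weights.items())

# Rank vendors from highest to lowest weighted score.
ranking = sorted(vendors, key=lambda v: weighted_score(vendors[v], weights),
                 reverse=True)
for v in ranking:
    print(f"{v}: {weighted_score(vendors[v], weights):.2f}")
```

The value of the exercise isn't the final number so much as the argument over the weights: forcing the team to agree on how much, say, staff skill set matters relative to DR integration surfaces the priorities long before a purchase order does.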
The decision can last years and change how your virtual servers exist in your environment, so it shouldn’t be undertaken lightly. That said, hyperconverged infrastructure has been one of the biggest shifts in the market over the last few years.