Scale Computing expert debunks four common HCI myths

Alan Conboy, Office of the CTO, Scale Computing, debunks the four most popular HCI myths

Although hyperconverged infrastructure (HCI) has gone mainstream in recent years, myths still circulate and cause misconceptions and confusion, even for those who already have HCI solutions deployed. Here, Alan Conboy, Office of the CTO, Scale Computing, debunks the four most popular HCI myths.

  1. HCI costs more than building your own virtual infrastructure

The upfront cost of HCI solutions varies between vendors, and often with the hypervisor used in the solution. And while it is often true that purchasing the individual parts needed to build a virtualisation infrastructure can be cheaper than purchasing an HCI solution, that is only part of the cost. The true, total cost of infrastructure goes far beyond the initial purchase.

HCI's most compelling virtue is that it makes virtualisation easier to deploy, manage and grow. That ease of use and simplicity delivers a dramatically lower total cost of ownership over time. From deploying in hours rather than days, to scaling out seamlessly without downtime, HCI eliminates many of the major headaches that come with traditional DIY virtualisation.

HCI handles many of the daily tasks associated with managing and maintaining virtualisation infrastructure through automation and machine intelligence. The ease of use and reduction in management time frees up resources for other tasks and projects. Savings can also include eliminating hypervisor software licensing, depending on the hypervisor deployed or supported by the HCI vendor. The savings vary by organisation, but the numbers nearly always show that a good HCI solution costs less over a three-to-five-year period, if not sooner.
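To put that three-to-five-year comparison in concrete terms, here is a minimal back-of-the-envelope sketch in Python of one way total cost of ownership might be framed. Every figure in it is a hypothetical placeholder rather than a real price or vendor quote; the point is only that lower ongoing licensing and administration costs can outweigh a higher upfront price over a few years.

# Hypothetical TCO comparison over a multi-year horizon.
# All figures are illustrative placeholders, not real prices or vendor quotes.

def total_cost(upfront, annual_licensing, admin_hours_per_year, hourly_rate, years):
    """Upfront spend plus recurring licensing and administration cost."""
    recurring = years * (annual_licensing + admin_hours_per_year * hourly_rate)
    return upfront + recurring

YEARS = 5
HOURLY_RATE = 75  # assumed cost of one hour of admin time

diy = total_cost(upfront=60_000, annual_licensing=25_000,
                 admin_hours_per_year=400, hourly_rate=HOURLY_RATE, years=YEARS)
hci = total_cost(upfront=90_000, annual_licensing=5_000,
                 admin_hours_per_year=100, hourly_rate=HOURLY_RATE, years=YEARS)

print(f"DIY virtualisation, {YEARS}-year cost: {diy:,.0f}")
print(f"HCI appliance, {YEARS}-year cost: {hci:,.0f}")

With these placeholder inputs the pricier appliance still comes out ahead over five years, purely because the recurring licensing and management lines shrink; the real calculation will of course depend on each organisation's own numbers.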

  2. HCI is just software-defined storage with a hypervisor

Ever since HCI went mainstream, software-defined storage (SDS) vendors have been coming out of the woodwork with questionable 'HCI' solutions that fit this description. But while true HCI solutions may include SDS, they are far more than just that.

The signature of HCI is its simplification of virtualisation. SDS solutions might simplify storage to a certain extent, but they often aren't much more than an emulated SAN/NAS. Many SDS solutions use virtual storage appliances (VSAs) to emulate a SAN for the hypervisors they support, which means they end up very similar to a SAN in overall complexity, defeating the point of making things simpler.

True HCI solutions automate much of the configuration and management work that makes traditional DIY virtualisation so complicated to run. That is why many HCI solutions are delivered as purpose-built appliances, where knowledge of the hardware enables even greater automation. Features such as automatic storage pooling, rolling updates and self-healing go far beyond what the simpler SDS solutions offer.

HCI solutions that directly integrate the hypervisor, rather than relying on a third-party hypervisor, are the best option, because that level of integration allows more efficient data pathing and resource utilisation. SDS solutions were supporting third-party hypervisors long before the term HCI was even coined, and that alone simply doesn't make the grade as HCI.

  3. HCI doesn't work across the spectrum from enterprise to Edge Computing

Many HCI vendors came out of the gate charging straight at enterprise computing, and the enterprise market is certainly the one in which to make a lot of noise and be noticed, for better or worse. But with the rise of Edge Computing, we now see a greater emphasis on HCI as a vehicle for Edge infrastructure, and some, but not all, HCI vendors have the right architecture to answer that call.

VSA-based HCI solutions can consume large amounts of resources, making them nearly impossible to use on the smaller form factor appliances that Edge Computing use cases demand. At the Edge, cost is key, and requiring resource-rich appliances just to run the storage layer and hypervisor drives up the cost of the solution at every Edge site.

Suppose you want to install HCI on appliances with a small resource footprint, say up to 64GB of RAM: a VSA-based solution could consume half of that RAM on every node, which is simply not cost-effective. HCI solutions with hypervisor-embedded storage, by contrast, use far fewer resources and can run efficiently on smaller appliances, making Edge Computing a cost-effective reality.
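To make that arithmetic concrete, the short Python sketch below compares usable RAM on a 64GB Edge node when a VSA takes roughly half of it versus a hypervisor-embedded storage layer with a small footprint. The 4GB embedded-storage overhead and the 50-site figure are assumptions chosen purely for illustration, not measurements from any particular product.

# Illustrative comparison of usable RAM per Edge node.
# Assumptions: 64 GB node (from the article), VSA consumes ~32 GB (half the node),
# embedded-storage overhead of ~4 GB is a placeholder chosen for illustration.

NODE_RAM_GB = 64

def usable_ram(node_ram_gb, storage_overhead_gb):
    """RAM left for guest workloads after the storage layer takes its share."""
    return node_ram_gb - storage_overhead_gb

vsa_usable = usable_ram(NODE_RAM_GB, NODE_RAM_GB / 2)  # VSA-based: ~32 GB left
embedded_usable = usable_ram(NODE_RAM_GB, 4)           # embedded storage: ~60 GB left (assumed)

print(f"VSA-based node: {vsa_usable} GB usable of {NODE_RAM_GB} GB")
print(f"Embedded storage: {embedded_usable} GB usable of {NODE_RAM_GB} GB")

# Across many Edge sites, the difference compounds: to deliver the same usable RAM,
# the VSA-based design needs roughly twice the physical memory per site.
sites = 50
extra_workload_ram = (embedded_usable - vsa_usable) * sites
print(f"Extra workload RAM across {sites} sites with embedded storage: {extra_workload_ram} GB")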

  4. HCI is a bad idea because it's a single-vendor solution

As the well-known saying goes, 'don't put all your eggs in one basket'. In the same spirit, some people don't like the idea of their entire infrastructure stack coming from one vendor. They might want to spread their infrastructure across several vendors, perhaps to hedge against a single vendor failing to live up to its promises. But while managing risk matters in running any organisation, business leaders may not have fully thought through the risk versus the reward.

HCI came into existence to overcome a variety of challenges facing traditional virtualisation infrastructure, challenges mostly caused by combining multiple vendors' solutions into a single stack. The most egregious of these, or at least the one IT pros feel most personally, is the finger-pointing between vendors when a customer calls for support. Vendors may spend days or longer debating who owns the problem while the customer is left without a resolution.

One of the biggest benefits of a single vendor owning the whole stack is the increased integration and automation that becomes possible. This is especially clear in HCI solutions that use third-party hypervisors, where system updates must be applied to the hypervisor separately from the rest of the system.

Handling these system updates separately isn't ideal, because any one vendor's update can cause issues with another vendor's components. That is why system updates across a multi-vendor stack have historically been arduous tasks, usually performed over long nights and weekends.

A properly integrated HCI solution really will fly in the face of these common myths, as it will enable IT administrators to focus on apps and workloads, rather than leaving them chained to simply managing infrastructure day-in, day-out.
