Prior to the emergence of virtualised servers, applications and platforms were deployed on what we now call bare metal servers. Each server ran a single operating system, hosted a fixed set of applications or services and had exclusive access to the server’s physical resources (CPU, RAM and storage). The resulting efficiency was low: on average, each of these servers used only 10% of its physical resources.
The first wave of virtualisation repurposed this type of server hardware to run multiple virtual machines, each with its own guest operating system, on top of a hypervisor (eg vSphere, XenServer, KVM, Hyper-V) that arbitrated the allocation of the shared physical resources. The resulting efficiency gains were substantial.
This move to virtualisation has given rise to new server architectures that have brought greater scale and efficiency to the hosting of virtual machines. The latest trends are converged and hyper-converged infrastructure.
Gartner defines converged and hyper-converged systems as “combinations of server, storage and network infrastructure, sold with management software that facilitates the provisioning and management of the combined unit. The market for [these] systems can be divided into three broad categories of integrated systems, some of which overlap. These categories are:
- Integrated stack system (ISS) — Server, storage and network hardware integrated with application software to provide appliance or appliance-like functionality. Examples include: IBM PureApplication System, Oracle Exadata Database Machine and Teradata.
- Integrated infrastructure system (IIS) — Server, storage and network hardware integrated to provide shared compute infrastructure. Examples include: VCE Vblock, HP ConvergedSystem, Cisco/NetApp FlexPod and Lenovo Converged System (formerly PureFlex).
- Hyper-converged integrated system (HCIS) — Tightly coupled compute, network and storage hardware that dispenses with the need for a regular storage area network (SAN). Storage management functions — plus optional capabilities like backup, recovery, replication, deduplication and compression — are delivered via the management software layer and/or hardware, together with compute provisioning. Examples include: Gridstore, Nimboxx, Nutanix, Pivot3, Scale Computing and SimpliVity.”
In this article we examine the difference between converged (IIS) and hyper-converged (HCIS) and the pros and cons of each.
In enterprise use cases, converged infrastructure is typically used to deliver general purpose services at scale:
- File and print
- Email server
- Web server
- Collaboration services
- Telephony
- ERP systems
- Databases
Converged infrastructure offers flexibility
Converged infrastructure offers users the flexibility to select all of the components from a single vendor (where one vendor offers them all) or to mix and match components from multiple vendors.
However, to successfully deploy converged infrastructure at scale, experts recommend the use of reference architectures such as Cisco Validated Designs for FlexPod, HPE’s Composable Infrastructure or EMC’s VSPEX (these are just some examples, not an exhaustive list). A special case is worth mentioning – VCE’s Vblock is a set of converged infrastructure products designed using components from leading vendors (typically Cisco, EMC and VMware), tested and assembled for the customer prior to delivery. The other reference architecture products are typically designed and documented by the vendor but built at a customer’s site by a certified partner.
Even when a reference architecture is used, converged infrastructure takes a building block approach. Systems can be designed using a range of vendor equipment for the compute, storage and network components, or a range of models from a single vendor. This flexibility in design allows the specific requirements of the user to be catered for. Additionally, converged architecture can scale both up and out: additional compute resources can be added if required, network fabric capacity can be expanded, and storage can be increased by adding disk or flash, additional controller power, or a mixture of both in a ratio determined by user needs and performance requirements.
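As a rough illustration of that building block approach, the sketch below models a converged stack in Python and grows only the storage layer, leaving compute and network untouched. The component names and quantities are hypothetical and not taken from any vendor’s bill of materials.

```python
# Illustrative only (hypothetical component names and counts): in a converged,
# building-block design each resource scales independently, so only the
# constrained resource needs to be purchased.

converged_stack = {
    "compute": {"blades": 8},          # add blades when CPU/RAM runs short
    "network": {"fabric_ports": 96},   # add switches/ports independently
    "storage": {"disk_shelves": 4, "flash_shelves": 1, "controllers": 2},
}

def scale_storage(stack, extra_disk_shelves=0, extra_flash_shelves=0, extra_controllers=0):
    """Grow capacity (shelves) and/or performance (controllers) without touching compute."""
    storage = stack["storage"]
    storage["disk_shelves"] += extra_disk_shelves
    storage["flash_shelves"] += extra_flash_shelves
    storage["controllers"] += extra_controllers
    return stack

scale_storage(converged_stack, extra_disk_shelves=2, extra_controllers=1)
print(converged_stack["storage"])   # storage grew; compute and network are unchanged
```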
Further choice is available in the networking technologies on offer. The general purpose network and the SAN/NAS can share a common Ethernet fabric or have their own, and can employ Fibre Channel (FC), FCoE, iSCSI or InfiniBand.
Hyper-converged infrastructure offers simplicity
Hyper-converged infrastructure on the other hand is a collapsed architecture. Compute and storage are combined in a single appliance which is connected to the data centre network fabric. The infrastructure subsystems are tightly integrated and many of the functions provided by hardware in converged infrastructure are delivered by software consuming the on-board compute resources. This can greatly simplify small, and even large, deployments.
When I first encountered hyper-converged appliances they appeared to me to be cleverly packaged and marketed rack-mount servers. After all, a classic server has compute and direct attached storage on board with a network connection. On closer examination, however, hyper-converged appliances reveal the clever features that differentiate them from classic servers. These appliances are built purely for virtualisation and to scale in clusters within a data centre. When there are two or more in a cluster the differences become startlingly clear. The management software and storage systems are federated across appliances, allowing resources to be pooled: VMs can be moved between appliances and storage capacity combined. The storage logic controller, which is normally part of SAN hardware, becomes a software service attached to each VM at the hypervisor level. The software-defined storage takes all of the local storage across the cluster and configures it as a single tiered storage pool. Data that needs to be kept local for the fastest response can be stored locally on flash, while data that is used less frequently can be stored on disk on one of the appliances that has spare capacity.
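To make that pooling and tiering behaviour concrete, here is a minimal Python model – purely illustrative, and not any vendor’s actual API – of a cluster-wide pool that keeps hot data on the local node’s flash and spills cold data onto whichever appliance has spare disk.

```python
# A minimal, illustrative model of hyper-converged software-defined storage:
# every node's local flash and disk is federated into one tiered pool, and
# placement is decided by access frequency rather than by a separate SAN.

class StorageNode:
    def __init__(self, name, flash_gb, disk_gb):
        self.name = name
        self.flash_free = flash_gb
        self.disk_free = disk_gb

class ClusterPool:
    """Federated view of every node's local storage as one tiered pool."""
    def __init__(self, nodes):
        self.nodes = nodes

    def place(self, size_gb, hot, local):
        # Hot data: prefer flash on the VM's local node for the fastest response.
        if hot and local.flash_free >= size_gb:
            local.flash_free -= size_gb
            return (local.name, "flash")
        # Cold (or overflow) data: any node in the cluster with spare disk will do.
        for node in sorted(self.nodes, key=lambda n: -n.disk_free):
            if node.disk_free >= size_gb:
                node.disk_free -= size_gb
                return (node.name, "disk")
        raise RuntimeError("cluster pool exhausted: add another appliance")

nodes = [StorageNode("node-1", flash_gb=800, disk_gb=8000),
         StorageNode("node-2", flash_gb=800, disk_gb=8000)]
pool = ClusterPool(nodes)
print(pool.place(200, hot=True, local=nodes[0]))    # hot data lands on local flash
print(pool.place(5000, hot=False, local=nodes[0]))  # cold data lands on whichever node has the most spare disk
```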
The collapsed architecture of hyper-converged infrastructure greatly simplifies deployment and management. Each node in a cluster is simply connected to an existing or new network, and the software-defined storage removes the complexity normally associated with setting up a SAN.
Hyper-converged infrastructure is not without its drawbacks
Along with the simplified collapsed architecture comes less flexibility and a more constrained path for scaling out. Each appliance has a maximum capacity for compute, memory and storage resources – once one of these reaches capacity within the node, there is no expansion option other than to add another node.
For example, if a use case sees storage requirements growing rapidly without any corresponding increase in requirements for RAM or CPU cores, increasing the storage capacity of the cluster results in underutilised additional RAM and CPU capacity. The result is that expansion of any single resource can become overly expensive unless CPU core, RAM and storage requirements remain in the ratio provided by the hyper-converged appliance. In a converged infrastructure, any single resource can be scaled up or out as required by adding just that resource to the stack.
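A back-of-the-envelope sketch makes the cost of that fixed ratio visible. The node specification and workload numbers below are hypothetical, not vendor figures; the point is that the storage requirement alone dictates the node count, and the extra CPU and RAM that come along with it sit idle.

```python
# Illustrative sketch (hypothetical node specs): estimate how much CPU and RAM
# sits idle when a hyper-converged cluster is expanded purely to add storage.

from dataclasses import dataclass
from math import ceil

@dataclass
class Node:
    cores: int       # CPU cores per appliance
    ram_gb: int      # RAM per appliance (GB)
    storage_tb: int  # usable storage per appliance (TB)

def nodes_needed(node: Node, need_cores: int, need_ram_gb: int, need_storage_tb: int) -> int:
    """The cluster must satisfy every resource, so size it on the tightest constraint."""
    return max(
        ceil(need_cores / node.cores),
        ceil(need_ram_gb / node.ram_gb),
        ceil(need_storage_tb / node.storage_tb),
    )

appliance = Node(cores=24, ram_gb=256, storage_tb=10)   # fixed ratio per node
# Workload: modest compute, but storage growing fast.
n = nodes_needed(appliance, need_cores=48, need_ram_gb=512, need_storage_tb=200)
print(f"Nodes required: {n}")                           # 20 nodes, driven entirely by storage
print(f"Idle cores: {n * appliance.cores - 48}")        # 432 cores unused
print(f"Idle RAM:   {n * appliance.ram_gb - 512} GB")   # 4608 GB unused
```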
The pre-configured nature of hyper-converged infrastructure is a double-edged sword. On the one hand it simplifies deployment and management, as discussed earlier, but it can also make change, trialling or proof-of-concept deployments difficult. With converged infrastructure, each component of the stack can function independently – compute can be re-deployed as compute elsewhere, network components still switch and route traffic, and storage still manages data. However, if a hyper-converged cluster is found to be unsuitable, the individual hardware and software components can’t be repurposed due to their tightly integrated nature. They would likely have to be replaced.
The same could also be said of some converged infrastructure solutions like Vblock where, although it is comprised of building block components from different vendors, VCE’s value proposition is that it guarantees the interoperation of them all and therefore places strict restrictions on user customisation in order to maintain that guarantee. This extends to software/firmware patches and upgrades. For some, this will be a comfort, knowing the vendor has greatly reduced their risk; for others, it will seem overly constraining.
In conclusion …
As with most decisions in ICT, the correct choice is not clear until the requirements are known. Both converged and hyper-converged infrastructure have applications that would recommend one over the other.
Broadly, the choice comes down to two factors. If scale and flexibility are required then converged infrastructure is likely the front runner, but it comes at the cost of increased complexity. If simplicity is required then hyper-converged could be the better solution, sacrificing the greater scalability and flexibility of converged infrastructure. It is clear that defining current and future requirements is key to making the best decision.
Of course, there’s always the option of letting someone else meet your infrastructure needs with cloud-based IaaS, as covered in a previous article, Cloud Computing: What’s All the Fuss About?