It’s one of the hype topics of the decade: if you believe everything that’s claimed, hyper-converged infrastructure (HCI) is the answer to pretty much every need in the data center of tomorrow – whether you’ve identified the problem yet or not. Industry analysts tend to agree that the sector is growing fast – albeit from a very small base – with the number of installations increasing by around 50 per cent, year on year. That’s quite impressive. There are clear benefits to combining storage, compute and network functionality in a single virtualized solution, especially outside the data center, for example in a branch office. But like many other hot technologies, HCI is not the infrastructure holy grail: it still has its limitations.
In the data center itself, a straightforward converged infrastructure is a far better solution. This is particularly the case when compute and storage requirements do not increase in tandem – something that often occurs with systems of record: the tried-and-tested systems that sit at the heart of any organization’s IT infrastructure. These systems may not be providing the latest hype technology, but they are the workhorses of the data center.
Another sting in the tail when considering HCI is the issue of expertise. While many businesses are already affected by an ongoing shortage of key IT skills and are forced to pay over the odds for staff with in-demand capabilities, plenty still employ deeply experienced IT specialists who can expertly navigate the notoriously challenging business of managing and maintaining storage area networks (SANs). In these cases, the adage ‘If it ain’t broke, don’t fix it’ applies – a transition to HCI would squander this hard-earned, valuable expertise.
No wonder that considering an HCI feels like a leap of faith, and a move away from the tried-and-tested systems that businesses depend on. It’s not only the additional expenditure and shortage of skills that should make enterprises pause for thought: it is also not always easy to anticipate whether an existing network can cope with increased data traffic as nodes are added. Being able to add compute and storage capacity easily is one thing, but because an HCI depends on all communication between servers happening across a network, it is not always easy or cost-effective to add the additional bandwidth required.
In some cases, complex software licensing rules out an HCI-based approach altogether. Due to its design, it’s almost impossible to know which cores or sockets an application will run on within an HCI system, and this means trouble – and a big, unexpected bill – when software licenses are calculated on the number of CPUs, sockets or cores the software may conceivably run on. In practice, this often makes the cost of HCI prohibitive.
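To see why per-core licensing stings in an HCI, consider a back-of-the-envelope sketch. All figures here are hypothetical and purely illustrative: the point is that a per-core license must cover every core the software *could* run on – in an HCI cluster that is every core in the cluster, whereas on dedicated servers in a converged setup it is only the cores of the hosts actually running the application.

```python
# Hypothetical per-core licensing comparison (illustrative numbers only).

PRICE_PER_CORE = 2000  # assumed license price per licensable core


def license_cost(nodes: int, cores_per_node: int) -> int:
    """Cost when every core the software could run on must be licensed."""
    return nodes * cores_per_node * PRICE_PER_CORE


# HCI: the application may be scheduled onto any of the 8 cluster nodes,
# so all 8 nodes x 16 cores must be licensed.
hci_cost = license_cost(nodes=8, cores_per_node=16)

# CI: the application is pinned to 2 dedicated servers, so only those
# 2 nodes x 16 cores need a license.
ci_cost = license_cost(nodes=2, cores_per_node=16)

print(hci_cost)  # 256000
print(ci_cost)   # 64000
```

Even with identical hardware per node, the HCI deployment licenses four times as many cores in this sketch, simply because the scheduler could place the workload anywhere.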
Even if software costs are not calculated in this way, they can still significantly affect the total cost of ownership. Back-of-the-envelope calculations may suggest that HCI is the most efficient option for managing capital expenditure (capex), because there is no longer any need to invest in external storage. However, a converged infrastructure can actually require lower capex than an HCI designed for exactly the same workloads: even though HCI hardware costs are lower, the difference is sometimes more than offset by the far higher cost of licensing the virtualization software – which a converged infrastructure does not require.
Finally, there is the issue of capacity: in a hyper-converged infrastructure it is limited. Granted, the limit is more than most organizations will need, but those dealing with significant volumes of data may well exceed it, because every HCI system’s storage capacity is governed by the number of compute nodes.
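The coupling of storage to node count can be sketched with some hypothetical figures (the per-node capacities below are assumptions, not specifications of any real system): because an HCI can only grow storage by adding whole nodes, a storage-heavy workload drags in compute it never needed.

```python
# Hypothetical sketch of HCI capacity coupling (illustrative figures only):
# storage grows only by adding whole nodes, each of which also adds compute.
import math

STORAGE_PER_NODE_TB = 20  # assumed usable storage per HCI node
CORES_PER_NODE = 32       # assumed CPU cores per HCI node


def nodes_for_storage(target_tb: float) -> int:
    """Whole nodes required to reach a storage target."""
    return math.ceil(target_tb / STORAGE_PER_NODE_TB)


def stranded_cores(target_tb: float, cores_needed: int) -> int:
    """Cores bought beyond what the workload needs, just to get the storage."""
    return max(0, nodes_for_storage(target_tb) * CORES_PER_NODE - cores_needed)


# A storage-heavy workload: 400 TB of data but only 128 cores of compute.
print(nodes_for_storage(400))    # 20 nodes
print(stranded_cores(400, 128))  # 512 cores paid for but never needed
```

In this sketch, hitting the 400 TB target forces the purchase of 20 nodes and 640 cores, five times the compute the workload actually requires – exactly the asymmetric-growth problem that external storage in a converged infrastructure avoids.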
I think you get the idea.
In such cases, a more appropriate solution is to ignore the “hype” in hyper-converged and instead choose a more straightforward converged infrastructure (CI) such as NFLEX, a joint solution from Fujitsu and NetApp. Based around external compute and storage, NFLEX brings together best-in-class Fujitsu and NetApp expertise, including FUJITSU PRIMERGY servers, NetApp AFF and FAS storage, switches from Extreme Networks and ready-to-run VMware vSphere software. Also included is Fujitsu Software ServerView® Infrastructure Manager (ISM), which automates and simplifies infrastructure operations across compute, storage and networking devices.
The integrated NFLEX provides a simplified experience – and the security and reliability that can only come from components that are factory-integrated and designed from the ground up to work together effectively, all with minimal installation time. A converged infrastructure also offers the flexibility of different configurations (for example, racked and non-racked) and various expansion packages.
Ultimately, the motivation for most IT and business managers is not only to avoid complexity where possible, but also to minimize the ongoing challenges usually associated with deploying and provisioning technology. In some cases, implementing a single virtualized solution in the form of a hyper-converged infrastructure delivers significant benefits in terms of agility, easy provisioning and low administration costs; in others, a converged infrastructure is the better fit. Fujitsu’s approach is to work closely alongside our customers to define the best possible architecture as part of co-creating a solution. And for businesses with asymmetric growth in compute and storage requirements, converged infrastructure really is a very attractive option.
For more information on NFLEX solutions, visit: Fujitsu.com/nflex