When you take a look at the technology news, you’ll often see figures relating to the ongoing rapid growth of Hybrid IT deployments.
Estimates vary, but most pundits agree on a compound annual growth rate (CAGR) of around 20 percent over the next few years. But that makes Hybrid IT sound like a single entity, like a mobile phone or washing machine, whereas in reality no two deployments are alike.
The shape of today’s Hybrid IT workloads
In the early days of Hybrid IT, we witnessed a significant number of ad-hoc cloud deployments within businesses, often driven by the needs of individual departments – a phenomenon known as shadow IT.
More recently, we’ve seen the balance swing back to IT-controlled deployments that are much more strategically implemented. As a result, we’re also seeing a change in the profile of IT workloads – for example, many businesses now deploy new prototypes in the cloud as a matter of course, leveraging its ability to scale easily or be turned off when no longer needed.
You might wonder why we need a Hybrid infrastructure at all – logically it would make sense to do this all with existing infrastructure, or by putting everything in the cloud. But there are real benefits to leveraging the best of both worlds. Hybrid IT is not a one-size-fits-all approach.
To design a Hybrid IT system that is the right fit for a business, we need to examine the requirements of individual workloads to decide where each should sit on the spectrum of increasing cloud-like capabilities to deliver an optimal cost/performance ratio.
For example, large enterprise ERP systems need to be extremely reliable and backed by high-performance storage.
On the other hand, a prototype for a new application, which might be shut down or scaled up in a matter of a few months, has very different requirements.
That’s why we operate with a sliding scale of options for each workload, depending on the level of cloud behaviors required, such as self-service characteristics, resource pooling or elasticity.
These are managed with software-defined technology that delivers greater levels of flexibility as more features are added: from converged infrastructure, where only the compute element is software-defined, to hyper-converged infrastructures that also include software-defined storage.
These can also leverage public clouds – generally dedicated resources offered by the hyperscalers. By adding software-defined networking, businesses create software-defined data centers, which can be on-premises or hosted.
The next steps are full cloud-based services – located on either shared or dedicated clouds.
With all this choice, how do you decide which workload goes where?
Unfortunately, there’s no easy answer. We use deployment scoring frameworks that take into account key drivers such as governance requirements, cost, security, and application performance, which vary greatly from one business or industry to another.
However, given the highly dynamic nature of clouds and many of the applications they host, and their changing characteristics such as usage patterns or maturity, often the only way to make a decision around deployment is trial and error.
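A deployment scoring framework of the kind described above can be pictured as a weighted scorecard: each candidate environment is rated against the key drivers, and the weighted totals are compared. The sketch below is a hypothetical illustration only – the drivers, weights, and scores are invented for the example and would differ for every business.

```python
# Hypothetical sketch of a workload deployment scoring framework.
# All weights and scores here are illustrative, not real benchmarks.

WEIGHTS = {"governance": 0.3, "cost": 0.25, "security": 0.25, "performance": 0.2}

# Per-driver scores (0-10) for one workload, as judged by the business,
# for each candidate deployment target.
candidates = {
    "on_premises":   {"governance": 9, "cost": 4, "security": 8, "performance": 9},
    "private_cloud": {"governance": 7, "cost": 6, "security": 7, "performance": 7},
    "public_cloud":  {"governance": 4, "cost": 9, "security": 6, "performance": 6},
}

def weighted_score(scores: dict) -> float:
    """Combine the per-driver scores into a single placement score."""
    return sum(WEIGHTS[driver] * value for driver, value in scores.items())

best = max(candidates, key=lambda target: weighted_score(candidates[target]))
for target, scores in candidates.items():
    print(f"{target}: {weighted_score(scores):.2f}")
print("Best fit:", best)
```

Even with such a scorecard, the scores themselves shift as usage patterns and application maturity change – which is why, in practice, trial and error still plays a part.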
For a quarter of those migrating from public to on-premises clouds, cost is a leading driver – in particular, the cost of data management and of on- and off-premises access.
Another factor to take into account is the cost savings that cloud workloads bring in terms of both CAPEX and OPEX.
Some other benefits emerge over time, such as increased agility and the ability to respond rapidly to changing conditions. Deploying to cloud models can also compress time to market for new app development, as there’s no need to acquire hardware up front.
This leads to different markets having very different cloud adoption characteristics.
For example, in markets such as high tech and business consumer services, agility delivers a very clear competitive advantage. Consequently, these are heavy adopters of the public cloud and are the most likely to have a cloud-first approach.
Industries such as manufacturing or utilities, where agility is not as important a driver, are more likely to lean towards on-premises deployments.
Regulation and compliance also drive the Hybrid profile for some verticals.
For example, in the highly regulated healthcare industry, the Health Insurance Portability and Accountability Act (HIPAA) doesn’t specify what can be hosted on which clouds, but the huge penalties resulting from insufficiently protected health information incline IT leaders to approach public cloud deployments in particular with caution.
Security is also a major factor for most businesses, which appreciate that it is almost impossible to match in-house the expertise of the major public cloud providers’ large security teams.
Computing workloads 101
Any discussion of workload placement and configuration in Hybrid IT must start with an understanding of the different types of workload an organization is likely to need.
Batch workloads involve huge volumes of data – mobile phone bills, for example. Processing them consumes a lot of compute resources, and businesses invariably want to make that processing faster and easier.
However, it usually isn’t time-sensitive, making batch workloads excellent candidates for automation in the public cloud, as processing can be scheduled regularly at convenient off-peak times, or even overnight.
That said, it still makes sense to keep them on-premises when spare resources are available and no additional on-premises capacity is required to handle them.
Transactional workloads, such as e-commerce, on the other hand, are more complex and are traditionally best kept in a business’s own data center, with expansions into the cloud enabled via SaaS or PaaS capabilities.
Organizations also want to leverage the cloud to make sense of their vast lakes of data. The emphasis for these analytic workloads is on the ability to analyze across all locations, which tends to need real-time compute capability.
In contrast, high-performance workloads, with their specialized processes and highly technical requirements, typically demand exceptional compute capabilities – making performance-optimized Hybrid clouds a good fit.
One of the most common types of workload is the database workload, which varies in scale from small and self-contained to huge. As a result, some are suitable for the cloud, while others – such as those requiring high-performance network storage with very fast access – are not.
In addition, many low latency legacy applications are not designed to run in distributed environments, and database clusters with high network throughput and millisecond response times are also not necessarily cloud-suitable. However, the final decision depends on the requirements and service levels each particular business demands.
Co-creating the optimum Hybrid IT model for your business
The bottom line is that defining a Hybrid IT environment is all about the fine balancing act of conquering complexity and achieving business value. And finding that balance requires smart decisions and continuous management.
Fujitsu takes an active role in helping customers find that balance – by evaluating current and future workloads, priorities, and constraints.
The first step is to arrange a Hybrid IT workshop with our experts, run in one of our digital transformation centers, to devise the approach for your organization, followed by workshops with your stakeholders to ensure buy-in and test workflows in target environments.
Are you ready to take the next step into Hybrid IT? Watch the webinar to find out how.
Complementary analyst report on this topic: Can integrated systems help build Hybrid IT? – There are things you need to know when planning your hybrid future.