Hybrid cloud strategy: 5 contrarian tips
Sometimes good advice flies just under the radar.
This holds true for hybrid cloud architecture and strategy: A little contrarian thinking might be worth your while. Even the term itself and how people define it might be worth tinkering with, but we’ll get back to that in a moment.
We asked cloud experts and IT leaders for their less-than-conventional wisdom on hybrid cloud and how to do it right. It’s a worthwhile question at a time when hybrid cloud adoption is growing and existing hybrid environments are maturing (and reaping the benefits of experience and lessons learned).
[ Learn the do’s and don’ts of hybrid cloud: Get the free eBook, Hybrid Cloud Strategy for Dummies. ]
Hybrid cloud strategy: 5 ways to shift your thinking
They shared a wealth of know-how that runs against some conventional thinking, such as the notion that hybrid by definition introduces inconsistency across environments. Let’s explore that topic – and four more ways to shift your thinking about hybrid cloud.
1. Ground your hybrid cloud strategy in business realities
It’s natural to think of hybrid cloud as a purely technical choice: It’s a matter of software, infrastructure, and data, right? It’s not a marketing or branding tactic, unless perhaps you’re a cloud service provider. It’s got nothing to do with M&A activity, right? Nor would we typically discuss hybrid cloud and HR strategy in the same conversation.
This may be too narrow a view. Hybrid cloud, done right, isn’t just a matter of saying, “We’ll use environment X for this and environment Y for that.”
Rather, multi-environment approaches – such as hybrid cloud and multi-cloud – should be fueled by business goals, talent, and technology, says Ryan Murphy, VP and North American cloud center of excellence leader at Capgemini.
“The focus needs to be on business objectives and approaches to achieve their vision,” Murphy says. “Is the company looking to do acquisitions, spin off a business, start a new business unit, or save money? Answering these key questions lays the foundation for making technology decisions that are unencumbered by legacy assets. Different objectives have different business attributes, and therefore different technology attributes are needed to support them.”
Taking a business-centric approach to major technical and architectural decisions (such as hybrid cloud) can also help seed the agility that today’s IT teams increasingly need from the start – rather than just demanding teams “be more agile” after the fact to keep up with significant changes to technology strategy.
“Every business is different and has different requirements, but start with the ‘art of the possible’ – what are the desired business outcomes? This lays the foundation for the applications, infrastructure, and personnel,” Murphy says. “This also enables the business to be more integrated and connected, which is what separates those who are fast, agile, responsive, and efficient from those who aren’t.”
(“The art of the possible” is part of a longer quote, commonly attributed to Otto von Bismarck: “Politics is the art of the possible, the attainable – the art of the next best.” Its meaning is interpreted in various contexts, including now in business and management circles. In today’s terms, it might best be thought of as a pragmatic “get things done” approach to a goal: Working within constraints to accomplish what is practical and attainable, rather than insisting on only the most desirable outcomes. Bismarck might have made a good CIO.)
2. Hybrid cloud can actually improve standardization
Complexity is a common knock against hybrid cloud and other heterogeneous approaches to infrastructure. “But now we have to manage more things,” the thinking goes – and there can be some truth to it.
But with the right tools and processes – not to mention cultural shifts like DevOps or DevSecOps – you can actually increase standardization and consistency. That’s one of the values Liberty Mutual Insurance has unearthed in its cloud strategy, according to senior architect Eric Drobisewski: building a consistent model for development, operations, and security that works across any infrastructure – a necessity born of Liberty Mutual’s hybrid environment. Containerization and orchestration are crucial to that effort.
“We are well underway with our cloud transformation, but we also have many workloads running in our internal data centers and private cloud platforms,” Drobisewski says. “This creates a critical need for technology that can help bridge the gap between public and private cloud. Kubernetes has become an integral part of creating a common fabric that we can deploy across our hybrid multi-cloud environment, enabling consistent models for developers to deploy modern cloud-native workloads as well as modernize existing applications.”
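To give a flavor of what that common fabric looks like in practice, here is a minimal declarative Kubernetes manifest (all names and the image reference are illustrative placeholders, not anything from Liberty Mutual): the same spec can be applied unchanged to any conformant cluster, whether it runs in an internal data center or on a public cloud.

```yaml
# Illustrative Deployment manifest; names and image are hypothetical.
# The same declarative spec works on any conformant Kubernetes cluster,
# on-prem or in a public cloud.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: claims-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: claims-api
  template:
    metadata:
      labels:
        app: claims-api
    spec:
      containers:
      - name: claims-api
        image: registry.example.com/claims-api:1.4.2   # placeholder image
        ports:
        - containerPort: 8080
```

Because the desired state lives in the manifest rather than in per-environment scripts, developers get the consistent deployment model Drobisewski describes.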
[ Read our deep dive for IT leaders: Kubernetes: Everything you need to know. ]
Kubernetes has also become the foundation for simplifying and standardizing critical work for operations and security pros.
“Engineers and operations teams are leveraging declarative provisioning and configuration automation that greatly simplifies how they interact with a variety of infrastructure providers and backend services,” Drobisewski says. “And we are able to improve our security controls and governance through consistent software-defined and policy-driven methods by leveraging admission controllers and Open Policy Agent.”
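As a sketch of the policy-driven approach that quote describes, here is a small Open Policy Agent rule (classic Rego syntax) of the kind commonly evaluated during Kubernetes admission review; the rule and message are illustrative, not Liberty Mutual’s actual policy:

```rego
package kubernetes.admission

# Illustrative admission policy: reject any Pod that requests a
# privileged container, regardless of which cluster it targets.
deny[msg] {
    input.request.kind.kind == "Pod"
    some i
    container := input.request.object.spec.containers[i]
    container.securityContext.privileged == true
    msg := sprintf("privileged container %q is not allowed", [container.name])
}
```

The same policy file can be enforced on every cluster in a hybrid estate, which is what makes the governance consistent rather than per-environment.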
Back to the developers: Rick Kilcoyne, CTO of CloudBolt, notes that many probably don’t much care what cloud they use. Rather, they care about consistency across environments, from their laptop to test to production, no matter where the latter might reside. Some “traditional” approaches, Kilcoyne says, require too much specific cloud expertise of developers, which is why configuration problems still plague some cloud deployments.
“Abstraction is key to agile, secure, and optimized access to multi-cloud and hybrid cloud environments,” Kilcoyne says. “Developers shouldn’t be required to be certified cloud experts in order to get access to development environments.”
[ Related read: Managing Kubernetes resources: 5 things to remember ]
3. Don’t worry about doctrine when defining “hybrid” cloud
Hybrid cloud definitions can become a bit cumbersome. As we wrote recently, the term usually refers to some mix of public cloud, private cloud, and/or on-premises (bare metal) servers – most often with some level of integration and/or orchestration between those environments.
You don’t need to be overzealous about your definition, though. In fact, hybrid cloud might not include on-premises servers at all – and that’s OK.
“A hybrid cloud does not need to include servers that are on-prem, whether literally in a facility that you own and operate or in a hosted or co-located environment,” says Gordon Haff, technology evangelist at Red Hat.
Haff shares a brief history lesson that explains the assumption that some on-premises servers exist in a hybrid cloud mix: It dates back to the original National Institute of Standards and Technology cloud computing definition, which was finalized in the olden days of 2011.
That definition took the view of cloud as a standardized compute utility, Haff says, similar to the electric grid. “Hybrid cloud” was based simply on the idea that you might want to “burst” from a private cloud to a public cloud to handle temporary load spikes. But a lot can change in a decade, and while many hybrid environments do include on-premises infrastructure, there’s no actual rule that says they must.
“While cloud computing in general retains many of the characteristics originally identified by NIST – self-service, flexibility, scalability, and so forth – it has expanded to encompass a much richer set of cloud-native services that can vary across providers,” Haff says. “Thus, while a hybrid cloud often does include some level of on-premises compute and storage, it doesn’t have to. Instead, it can refer to some combination of public cloud provider(s), software-as-a-service (SaaS) applications, content delivery networks (CDN), and other types of outsourced capacity and capability – typically integrated to a greater or lesser degree.”
Let’s delve into two more important areas: workload portability and surprise costs.
4. Not everything needs to run in a cloud
Terms like hybrid cloud and multi-cloud – or heck, just cloud – sometimes arrive with an implicit assumption that an organization will move most or even all of its workloads to “the cloud.” Again, there’s no such rule, just as migrating or building some applications to be run in containers doesn’t mean you need to containerize everything.
“The decision to choose between different cloud providers and stay on-prem should be made after an objective ‘fit to cloud’ assessment of all the workloads in your environment,” says Fahim Khan, VP of cloud transformation services at Brillio. “One must also weigh out the decision to force everything with one cloud vendor or distribute their workloads between different cloud vendors and/or keep some workloads on-prem and operate in a hybrid cloud environment.”
What that “fit to cloud” assessment looks like can vary, but Khan points out some of the fundamentals:
- Security and compliance
- Portability (of workloads and data)
- Costs (to run a workload in one environment versus another)
- Performance and latency considerations (for example, grouping interdependent workloads together in the same environment to reduce latency)
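The criteria above can be sketched as a simple weighted scorecard. This is a minimal illustration only – the weights, scores, and threshold are invented for demonstration and are not Brillio’s actual assessment methodology:

```python
# Illustrative "fit to cloud" scorecard. Weights, scores, and the
# threshold are made-up assumptions; a real assessment is far richer.

CRITERIA_WEIGHTS = {
    "security_compliance": 0.35,
    "portability": 0.20,
    "cost": 0.25,
    "performance_latency": 0.20,
}

def cloud_fit_score(scores: dict) -> float:
    """Weighted average of per-criterion scores (0 = poor fit, 10 = great fit)."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

def placement(scores: dict, threshold: float = 6.0) -> str:
    """Recommend public cloud only when the weighted fit score clears the threshold."""
    return "public cloud" if cloud_fit_score(scores) >= threshold else "on-prem / private cloud"

# Example: a latency-sensitive workload handling regulated data
workload = {
    "security_compliance": 3,   # regulated data, hard to certify in public cloud
    "portability": 7,
    "cost": 6,
    "performance_latency": 4,   # tightly coupled to on-prem systems
}
print(placement(workload))  # on-prem / private cloud
```

The point of the exercise is the per-workload decision: some workloads score their way into a public cloud, others stay put, and the mix is the hybrid environment.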
5. Cost and performance optimization isn’t automatic
Multi-environment strategies such as hybrid cloud are often associated with cost optimization: with more options for where workloads run, you can shop for the best price on infrastructure. That’s not a given, though, and surprises can derail that goal. Moving data into a cloud can be “cheap,” for example, but moving it out may produce a surprise bill.
“Data egress costs can mount if you don’t plan for them, whether that is between public cloud vendors or between private (on-prem) and public cloud,” says Lenley Hensarling, chief strategy officer at Aerospike. “Defining APIs on top of data services that optimize the amount of data transferred can make a massive difference in costs.”
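To make that concrete, here is a back-of-the-envelope comparison. The per-GB rate and monthly volumes are hypothetical placeholders, not any provider’s actual pricing:

```python
# Back-of-the-envelope egress cost comparison. The $0.09/GB rate and the
# volumes are hypothetical; check your provider's actual pricing.

EGRESS_RATE_PER_GB = 0.09

def monthly_egress_cost(gb_transferred: float) -> float:
    """Egress cost at a flat hypothetical per-GB rate."""
    return gb_transferred * EGRESS_RATE_PER_GB

# Naive API: ships raw records out of the cloud for client-side filtering.
raw_gb_per_month = 50_000       # 50 TB of raw data pulled out each month
# API that filters and aggregates server-side, as Hensarling suggests,
# so only the data callers actually need crosses the boundary.
filtered_gb_per_month = 2_000   # 2 TB after server-side filtering

print(f"naive:    ${monthly_egress_cost(raw_gb_per_month):,.2f}")     # naive:    $4,500.00
print(f"filtered: ${monthly_egress_cost(filtered_gb_per_month):,.2f}")  # filtered: $180.00
```

Even with invented numbers, the shape of the result holds: designing APIs to minimize data transferred is where the savings come from, not the per-GB rate itself.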
Planning for this kind of cost is important at the architecture and design phase, as is a big-picture vision (rather than, say, a micro-focus on the cost of running a single on-demand instance).
“One also has to design with the cost profiles of the overall solution in mind. [For example,] for stable workloads that have a lot of data, it can be better to have the elastic front end of microservices reach back to an on-prem data solution,” Hensarling says. “While public cloud providers offer elastic solutions, meaning they can grant you infrastructure as an application scales up and down, they do that at a cost that is often very high to you. Determining which workloads require that elasticity and what price you are willing to pay is vital in getting things right.”
Performance issues are another area where you want to avoid surprises – or at least not make overly broad assumptions that performance will be consistent and optimal, even within the same public cloud platform, according to Hensarling.
“Even within the same instance categories, the variance between one instance and another can be significant. Network bandwidth can also fluctuate significantly, so you have to design for that,” Hensarling says. “For workloads that require high performance at a consistent rate, you may be better off placing that app or key portions of that app in your data center or private cloud. There, you have more control over the infrastructure and its management.”
[ Learn more about hybrid and multi-cloud workload strategy: Get the free eBook, Multi-Cloud Portability for Dummies. ]