7 Multi-Cloud Challenges and How To Overcome Them


Michael Bathon, vice president and executive advisor, IT at Rimini Street, examines the inherent challenges that come along with adopting a multi-cloud approach and advises those considering such a strategy on how to navigate them.

It’s not hard to figure out why organizations continue to migrate from locally hosted data centers to the Big 3 cloud providers, namely Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). IT leaders understand that they need to focus on driving their own business forward using cloud-based apps, services, and infrastructure rather than spending their time and money maintaining physical data centers.

As IT leaders investigate which services might be best for their specific business, they soon realize that each has its own strengths and weaknesses. This, combined with the fact that each offers a pay-as-you-go model, has led many to embrace a multi-cloud approach that takes advantage of each service’s strengths while attempting to minimize the weaknesses.

Ways To Combat Multi-Cloud Challenges

For those considering a multi-cloud strategy, here are some of the challenges you’re likely to come across, along with guidance on how to work through them:

1. Multiple ‘languages’ from service to service

Here we’re not necessarily referring to programming languages, but rather to how terminology can vary from service to service. The underlying concepts behind any particular cloud provider’s services and features are largely the same; it’s the terminology and tools for storage, hardware, management, and so on that differ among the Big 3. The best way to overcome these nuances is to encourage IT team members to take the online certification courses that each provider offers. Investing 20 hours over a few months creates a win-win situation: teams become more knowledgeable and valuable to the business, and individuals improve their resumes with new education and certifications.
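
As a concrete illustration, a team ramping up on multiple providers might keep a small cross-provider glossary like the Python sketch below. The categories and the lookup helper are hypothetical choices made for illustration; the product names themselves are the providers’ own.

```python
# A small cross-provider "translation table" for common service categories.
# Categories and structure are illustrative; the product names are real.
SERVICE_GLOSSARY = {
    "virtual machines":     {"aws": "EC2",        "azure": "Virtual Machines", "gcp": "Compute Engine"},
    "object storage":       {"aws": "S3",         "azure": "Blob Storage",     "gcp": "Cloud Storage"},
    "managed kubernetes":   {"aws": "EKS",        "azure": "AKS",              "gcp": "GKE"},
    "serverless functions": {"aws": "Lambda",     "azure": "Functions",        "gcp": "Cloud Functions"},
    "monitoring":           {"aws": "CloudWatch", "azure": "Monitor",          "gcp": "Cloud Monitoring"},
}

def translate(category: str, provider: str) -> str:
    """Return what a given provider calls a common service category."""
    return SERVICE_GLOSSARY[category.lower()][provider.lower()]

print(translate("object storage", "gcp"))  # -> Cloud Storage
```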

2. Differences in hardware nomenclature

Building on the previous point, the public cloud service providers each employ various types of hardware, and you need to know if the providers you choose are using hardware that’s best suited for your specific needs. Each type of hardware has its own nomenclature, which, again, creates a learning curve. The good news is that the underlying hardware isn’t all that different in practice, but you want to ensure your teams have a general understanding of hardware and associated terminology that each provider uses. In addition, the Big 3 all provide online TCO calculators you can use to determine how costs would change depending upon the CPU and storage configuration.
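
The official calculators should remain the source of truth for prices, but the first-pass arithmetic they automate is simple. Here is a rough sketch with placeholder rates that are purely illustrative (not any provider’s actual pricing), showing how CPU and storage configuration drive the monthly bill.

```python
# Back-of-the-envelope monthly cost comparison. All rates below are
# illustrative placeholders -- pull real numbers from each provider's
# pricing/TCO calculator before making decisions.
HOURS_PER_MONTH = 730  # average hours in a calendar month

def monthly_compute(hourly_rate: float, instances: int) -> float:
    return hourly_rate * instances * HOURS_PER_MONTH

def monthly_storage(gb: float, price_per_gb_month: float) -> float:
    return gb * price_per_gb_month

# Hypothetical workload: 4 general-purpose VMs plus 2 TB of block storage.
estimates = {
    "provider_a": monthly_compute(0.10, 4) + monthly_storage(2048, 0.08),
    "provider_b": monthly_compute(0.09, 4) + monthly_storage(2048, 0.10),
}

for provider, cost in estimates.items():
    print(f"{provider}: ~${cost:,.2f}/month")
```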

See More: How SD-WAN Is Simplifying and Accelerating Multi-Cloud Adoption

3. Choosing the right operating system

Choosing an operating system (OS) isn’t the most overwhelming challenge, but you want to research the available platforms to decide what fits your organization best. Red Hat and other Linux distributions tend to be popular, while some organizations may prefer to work in a Microsoft Windows environment. My usual advice to businesses looking to move to the cloud is to use what your people know. If the IT team is looking to consolidate and standardize the operating systems it supports, a best practice is to complete the “lift & shift” of the application as-is into the cloud and then change out the underlying OS after the migration. Doing both at the same time can make troubleshooting very complicated, especially for those who are just learning the cloud nomenclature and rules.

4. Over-reliance on providers’ built-in security

Security is a major challenge for organizations considering the cloud, and it is particularly top of mind in the wake of the recent wave of high-profile attacks and breaches. The first thing for IT decision makers to understand is that while each of the public cloud providers offers a basic level of security, it’s nowhere near what you will need to truly secure your organization’s data. People often naively assume their data is protected simply because it’s in the cloud, but that is certainly not the case. Security needs to be managed for each cloud service you use, and even then it’s only the first line of defense. Remember, the Big 3 all provide best-in-class physical and network security, but the customer must still protect their data and manage their logical ingress/egress points.

5. Backup or restore management across multiple environments

No matter how strong your security layer is, there’s always a chance of a breach. In fact, many organizations now operate on the assumption that it’s not a matter of if but when their data will be compromised. Security may be the first line of defense, but the ability to back up and restore your data should a breach occur is equally vital. Managing this across multiple environments can be demanding, but there are services that let you monitor and manage all security and backups in a single unified view. Then, when a breach does occur, the goal is to revert to the most recent clean backup to minimize data loss and keep operations running as smoothly as possible. For many companies, even if they plan to move everything to the cloud, there will be a period when they have more than one backup/restore tool. Getting to that “one pane of glass” takes time, which is why backup/restore, along with security, are two areas where significant pre-planning is vital.
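
In its simplest form, the restore decision after a breach comes down to finding the most recent backup known to predate the compromise. Here is a minimal sketch of that logic in Python, with hypothetical timestamps and no particular backup product assumed.

```python
from datetime import datetime

# Hypothetical restore-point selection: given backup timestamps and the
# time a compromise is believed to have begun, pick the most recent
# backup taken *before* the compromise to minimize data loss.
def pick_restore_point(backups: list[datetime], compromise_time: datetime) -> datetime:
    clean = [b for b in backups if b < compromise_time]
    if not clean:
        raise ValueError("No backup predates the suspected compromise")
    return max(clean)

nightly_backups = [datetime(2023, 5, 1, 2), datetime(2023, 5, 2, 2), datetime(2023, 5, 3, 2)]
print(pick_restore_point(nightly_backups, datetime(2023, 5, 2, 14)))  # -> 2023-05-02 02:00:00
```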

6. Network monitoring tools and optimizing your resources

Once your multi-cloud infrastructure is up and running, there will be a lot of “noise” that requires monitoring: bandwidth, latency, errors, and so on. Depending on the size of your organization and architecture, it may make sense to invest in a monitoring tool that can automate the bulk of the “dirty work” and alert teams when a significant incident occurs. This gives teams the flexibility to focus on other high-value work while still acting quickly if and when an issue arises. Some first-time cloud users experience “sticker shock” when they receive their first monthly bill, typically because they didn’t plan correctly for I/O data transfer costs. Continuously streaming data between cloud providers, or between the cloud and locally deployed software, can be quite expensive, which is why moving applications and their databases as a group is usually the best approach.
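
To see why the data transfer line item surprises people, consider a rough estimate of inter-cloud egress costs. The per-GB rate below is an assumption for illustration only; actual egress pricing varies by provider, region, and volume tier.

```python
# Rough monthly estimate for inter-cloud data transfer (egress) costs.
# The default rate is an illustrative assumption, not a quoted price.
def monthly_egress_cost(gb_per_day: float, price_per_gb: float = 0.09) -> float:
    return gb_per_day * 30 * price_per_gb

# Hypothetical: an app in one cloud streams 500 GB/day to a database in another.
print(f"~${monthly_egress_cost(500):,.2f}/month")  # -> ~$1,350.00/month at $0.09/GB
```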

See More: Break the Bonds of Data Gravity With a True Multi-Cloud Strategy

7. Compliance and certification

In today’s climate, it’s imperative to know who is accessing your cloud data and where they are located in order to comply with government privacy regulations such as GDPR or CCPA. The same applies to specific industries, such as HIPAA compliance in healthcare or PCI DSS compliance for organizations that handle payment card data. Again, there are services available to help navigate these issues, and they will come at a cost. That said, paying a fee to manage this on the front end is far more cost-effective than paying the fines that come with falling out of compliance.

Closing Thoughts

Most organizations understand that effectively managing these challenges will come with additional costs above and beyond what you’re paying the various cloud service providers. Sticker shock is fairly common for multi-cloud newcomers, but the beauty of this approach is that you get what you pay for. The cloud providers’ pay-as-you-go model makes it easy to scale up and down based on actual usage, and the additional services are an upfront investment that helps prevent even costlier disasters, such as a major data breach or fines for non-compliance with government regulations.

The one common thread that helps IT teams navigate each of these challenges is planning and education. It’s always worth investing time and resources in educating yourself and your teams on the challenges a multi-cloud strategy could present to determine the approach that’s best for your organization and, ultimately, speed up the time-to-value. 

Finally, it’s imperative for IT teams to perform due diligence around a full TCO analysis. While it is easy to compare hardware, storage, and compute costs, the real challenge is gathering all the costs around support tools and labor. This is one area where bringing in a trusted partner to help is critical – the four- to six-week investment of time can easily be recovered in cloud savings during the first year.

Did you find this article helpful? Tell us what you think on LinkedIn, Twitter, or Facebook. We’d be thrilled to hear from you.