

So this was vendor lock-in! Some reflections on cloud market dynamics and reducing technology dependencies.

Written by Maria Vaquero

For those active in the cloud market, one thing is crystal clear: it seems life will never get boring. Most experts have given up hope of relying on forecasts with any reasonable degree of confidence – let alone of offering any trustworthy prediction themselves of what the future holds in store. It feels very much like a roller coaster ride: sitting in the train, watching the slope lead 50 metres up into the clouds (pun intended), but not knowing what comes after the tipping point. In a similar way, we see the cloud market rising (in volume, number of market stakeholders, complexity…) but cannot say much more than the simple prediction that it will keep rising.

The dynamics within the cloud industry are multifaceted. On the one hand, new technologies are being continuously developed and tested, and are finding their way into commercial infrastructures every day. For example, after over a year of monthly ALASCA Tech-Talks, one has to be impressed by the speed at which new approaches and new software tools for the cloud are being developed – and this is only a small (open-source) drop in an ocean of continuous developments. On the other hand, strategic decisions by market stakeholders disrupt the market every now and then, not always with positive consequences for other organizations. A very recent example is Broadcom’s decision to modify the license and price model for VMware products, with media reports claiming that costs will significantly increase for many VMware users [1].

The VMware example shows the real risks of becoming locked in with specific technologies or vendors, with often unpredictable consequences. The problem with strong dependencies usually remains latent and unseen until someone decides to exploit a position of power in the market or a company is driven out of business. As a cloud provider focused on an open-source technology stack and very active in open-source communities, we at Cloud&Heat are often very passionate about exposing the risks of vendor lock-in. It sometimes feels like we are part of a sect of open-source fanatics sending uncalled-for warnings at every chance, like the boy who cried wolf. Well, the last months have shown that the wolf sometimes does visit the village.

So what can cloud users do in a situation characterised by uncertainty? How do you stay up to date with the latest technologies without changing your infrastructure every month? How do you minimise the risks of vendor lock-in while benefiting from the relevant technologies?

The most obvious solution is to switch technologies and build software and processes on fully open-source, community-driven cloud technologies. For instance, the recent events in the VMware case have triggered an intense discussion about open-source alternatives like OpenStack or Proxmox, with each organization positioning itself for its preferred alternative according to how it assesses the benefits of OpenStack and Proxmox in relation to its own needs and business strategy. While these considerations are valid and offer a way out of lock-in effects through open, transparent, collaboratively developed technologies, the technical challenges, costs and implications of this decision must be thoroughly analyzed case by case.

Another possibility is to reduce exposure to unexpected market changes by introducing an additional software layer between the cloud infrastructure and the applications developed or deployed by the cloud users. This layer can be relatively streamlined, as well as based on a technology that is open and to some extent standardized. Cloud infrastructures are complex and require the integration of multiple layers that must be installed, operated, updated and monitored. From the perspective of the cloud user, however, the layer right below its services or applications is the most critical one. Firstly, because it is the foundation that ensures the software running on it operates smoothly. Secondly, because if this layer is reasonably standardized and compatible with multiple underlying cloud technologies, it significantly reduces the cost of migrating this last layer, plus the applications that run on it, to different infrastructures.

A de-facto standard for this approach is to use Kubernetes as an additional PaaS layer just below the applications developed and operated by cloud users. This can mean installing Kubernetes on an IaaS layer (for instance based on OpenStack, Proxmox or VMware) or even installing Kubernetes directly on bare metal. In addition to its intrinsic benefits, such as helping to automate the deployment, scaling and management of containerized applications, Kubernetes can also help reduce dependence on a specific provider. It allows applications to be decoupled from the underlying infrastructure (e.g. the cloud operating system), thereby improving their portability. This means that applications can run consistently across different environments, whether on-premises or in the cloud. As a result, cloud users are less tied to specific infrastructures and can operate their own Kubernetes clusters on different infrastructures or rely on one of the many providers of managed Kubernetes services. If a provider chooses to increase its prices drastically from one day to the next or stops delivering an adequate service, migration to another provider or to an on-premises setup is possible with comparatively low barriers. Portability can be even higher across Kubernetes services or distributions that are certified, for instance by the Cloud Native Computing Foundation, or that follow standards, such as those developed by the Sovereign Cloud Stack. Conformance with standards ensures interoperability, reducing the costs of migration across providers to a minimum. The performance of applications running on Kubernetes can also be improved through regular updates and compliance checks, as well as daily operations that ensure optimal configuration of, for example, networking, storage and monitoring.
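To make the portability argument concrete, here is a minimal sketch of what "decoupled from the underlying infrastructure" means in practice: a workload described only in terms of the stable Kubernetes `apps/v1` Deployment API contains nothing specific to OpenStack, Proxmox, VMware or any managed service, so the same manifest can be applied unchanged to any conformant cluster. The application name and container image below are hypothetical examples, not anything from this article.

```python
def deployment_manifest(name: str, image: str, replicas: int = 3) -> dict:
    """Build a plain Kubernetes Deployment manifest as a Python dict.

    Only the stable apps/v1 API is used, so the result references no
    provider-specific features and can be dumped to YAML and applied
    with `kubectl apply -f` on any conformant cluster.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector ties the Deployment to the pods it manages.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {"name": name, "image": image},
                    ]
                },
            },
        },
    }


# Hypothetical workload: two replicas of an nginx web server.
manifest = deployment_manifest("web", "nginx:1.25", replicas=2)
print(manifest["kind"], manifest["spec"]["replicas"])
```

The point is not the code itself but what it omits: because nothing in the manifest names the infrastructure underneath, switching providers means pointing `kubectl` at a different cluster, not rewriting the application description.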

While Kubernetes might not be adequate for every use case, it is certainly an option worth considering in light of rapidly changing markets. To close the article with a contradiction, one could attempt a forecast: evaluating the uptake of Kubernetes now will save users of cloud services some headaches in the future. Time will tell. In the meantime, sit back and enjoy the ride.

[1] https://www.computerweekly.com/news/366579593/Education-sector-facing-huge-VMware-cost-increases-after-Broadcom-ends-discounts

https://www.heise.de/news/Europaeische-Cloud-Provider-Broadcom-nimmt-uns-in-Geiselhaft-9659067.html



