This is the first in a series of blogs that will explore various aspects of Anthos from Google Cloud.
One of the key tenets of Anthos is the ability to build, deploy and manage your applications wherever you have a compute footprint. 

Google CEO Sundar Pichai announced last year that Anthos would enable developers to “write once and run anywhere” (WORA) with “the freedom to choose the right cloud partner for the job”.

WORA was originally coined back in 1995 to describe Java applications, so what does this mean in 2020 and in an Anthos context?


1. It Means Modernising in Place

Running anywhere means starting where you are, which for many organisations will be in their own data centres, using existing hardware and platforms such as vSphere from VMware.

Kubernetes, the container orchestration technology that underpins Anthos, is often challenging for teams to run and manage, and requires significant investment in training and headcount; there's a good reason one of the most popular tutorials is Kelsey Hightower's Kubernetes the Hard Way.

Running Google Kubernetes Engine (GKE) on Google Cloud Platform, however, makes Kubernetes far easier to adopt and manage, and lets teams enjoy the benefits of containerised application deployment, scaling and management.

Anthos now extends this principle to on-premises environments (via Anthos GKE on-prem) and to other clouds (initially with Anthos GKE on AWS), giving users the same ease of management, seamless upgrades and a single control plane.

This means that businesses can now modernise their existing applications wherever they currently sit, without having to modify them.


2. It Means Easier Management of a Hybrid / Multi-Cloud Strategy

According to Flexera (2020), 93 percent of enterprises now have a multi-cloud strategy, while 87 percent have a hybrid cloud strategy. In some instances, such as highly regulated industries, regulatory bodies can mandate multi-cloud policies. In others, an organisation may simply want to avoid having “all their eggs in one basket” and share its workloads across multiple clouds (including private clouds).

There aren't many viable options at the moment that allow developers to tool up, and operators to deploy, against such a variety of cloud offerings. Anthos, built from the ground up so that Kubernetes-based applications can be deployed anywhere, offers that solution wherever the Anthos runtime has been deployed.

That currently includes VMware (GKE on-prem), Google Cloud (GKE) and AWS (GKE on AWS), with support for Azure and bare-metal deployments (RHEL, CentOS and Ubuntu) expected soon.
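
For a flavour of what that looks like in practice, here is a rough sketch of registering an existing cluster with the Anthos control plane using Connect. The cluster, context and key-file names are placeholders, and the exact flags may differ between gcloud SDK versions:

```sh
# Register an existing (e.g. on-prem or AWS) Kubernetes cluster with Anthos
# via the Connect agent. Names and file paths below are illustrative only.
gcloud container hub memberships register my-onprem-cluster \
  --context=my-onprem-context \
  --kubeconfig=./kubeconfig \
  --service-account-key-file=./connect-sa-key.json
```

Once registered, the cluster appears alongside your GKE clusters in the Google Cloud console and can be managed from the same control plane.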


3. It Means Global Region Flexibility

Anthos opens up more regional possibilities than ever before.

Consider a global organisation with a footprint in China, where the choice of cloud service providers is limited. Conveniently, AWS offers two regions there, in Beijing and Ningxia, where a GKE cluster can be quickly deployed and joined to the Anthos platform and control plane.

Deploying your containerised application is then straightforward: use kubectl to target the new GKE on AWS cluster. The immediate benefit to your Chinese customers and employees is a low-latency, in-country connection to your application.
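
As a minimal sketch, assuming the new cluster's kubeconfig context is named gke-aws-beijing and your manifests already exist (both are assumptions for illustration), that deployment might look like this:

```sh
# Point kubectl at the newly created GKE on AWS cluster
# (the context name is illustrative; yours will differ)
kubectl config use-context gke-aws-beijing

# Deploy the containerised application and wait for the rollout to complete
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl rollout status deployment/my-app
```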

Google understands that organisations want choice and the freedom to move, and that choosing a particular technology shouldn't mean a wholesale move to a particular cloud. Anthos gives that flexibility and is built entirely on open-source components such as Kubernetes, Istio and Knative.


4. It Means Harmony Between Developers and Operators

Running anywhere with Anthos means your teams don't have to learn a different technology stack for every environment, which helps you make better use of your existing talent.

Your operators will have the Kubernetes and Istio controls they need to provide a highly available service, running in specific geographies and tuned to meet user demand.

Meanwhile, your developers can use their favourite container workflow, ideally adopting Knative (Cloud Run) for a consistent serverless abstraction, integrated with CI/CD for seamless deployments (canary, blue/green) and rollbacks when required.
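
To illustrate the kind of workflow this enables, the sketch below uses Knative Serving to canary a new revision of a hypothetical service, shifting 10 percent of traffic to it. The service name, revision names and image path are assumptions rather than a prescribed Anthos configuration:

```sh
# Canary a new revision of a Knative service by splitting traffic
# between the existing and the new revision (all names are illustrative).
kubectl apply -f - <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-app
spec:
  template:
    metadata:
      name: my-app-v2          # new revision built by your CI/CD pipeline
    spec:
      containers:
        - image: gcr.io/my-project/my-app:v2
  traffic:
    - revisionName: my-app-v1  # current stable revision keeps 90% of traffic
      percent: 90
    - revisionName: my-app-v2  # canary revision receives 10% of traffic
      percent: 10
EOF
```

Rolling back is then simply a matter of shifting the traffic percentages back to the stable revision.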

Anthos gives you the cloud-native tooling your teams need to adopt Google's Site Reliability Engineering (SRE) best practices for a fully integrated developer and operator experience, helping them deliver confident, frequent updates to your applications. I'll cover this topic later in this blog series.


Appsbroker - Your Guide for Your First Pilot

Appsbroker is one of a select group of Google Anthos launch partners in EMEA. We can offer an accelerated Anthos pilot, migrating a key workload (monolithic or containerised) to a location of your choice. Deployments of Anthos GKE on-prem onto existing VMware clusters are particularly popular right now.

If you'd like to understand more about Anthos, especially its enterprise-grade controls and security, we can offer time in our purpose-built Anthos laboratory, where we demonstrate an idealised hybrid deployment showcasing the following key benefits:

  • Highly available VMware cluster (4 nodes, DRS, vMotion) with vSAN
  • Latest (2nd generation) Intel Xeon Scalable processors with Optane DC Persistent Memory for high-performance, high-density containerisation
  • Google Cloud Dedicated Interconnect for high bandwidth on-prem to Google Cloud private networking
  • Google Cloud certified engineers to assist with your experiment

Get in touch below to learn more. Either way, this series will continue over the coming weeks.

