by Yevhen Beznosov

The Differences Between Azure Arc and Google Anthos

7 min read

With more and more companies opting to house their workloads in the cloud over the past few years, public cloud adoption has skyrocketed.

While many choose the hybrid infrastructure model, hosting services and resources both on-premises and in the cloud, others go for a multi-cloud approach, using multiple cloud providers’ services at once.

Managing your resources in different environments at the same time can be costly, time-consuming, and confusing. That’s why the rise of hybrid and multi-cloud use has led large cloud providers to focus on providing their users the means to manage all their workloads from one control center.

Both Microsoft and Google joined this trend, developing Azure Arc and Anthos, respectively, to solve these problems. Microsoft’s Azure Arc simplifies working across different environments by projecting infrastructure of any type, regardless of its location, as traditional Azure resources.

For customers used to Google Cloud, Anthos gives them the tools to manage their on-prem and cloud infrastructure using already familiar services based on Google Kubernetes Engine (GKE).

Hybrid Kubernetes management

When managing a hybrid or multi-cloud environment, it’s important to know how the applications are configured and handled, how secure they are, and how the client monitors the infrastructure’s health and productivity.

Both Azure Arc and Google Anthos have taken these elements into account, but comparing their core components highlights some essential differences.

Multi-cluster Management

While both Azure Arc and Anthos implement Kubernetes, its use is optional in Azure Arc. That’s because Azure Arc also supports edge computing environments, enabling deployment in any infrastructure.

However, in Anthos, Kubernetes is a core part of the tool. It even goes so far as to provide additional tools for converting virtual and bare-metal workloads to containers.

Azure Arc uses Kubernetes to deploy and manage container-based applications by attaching and configuring clusters. When a Kubernetes cluster is attached to Azure Arc, it appears as a separate resource and can be tagged like any other Azure resource. To connect a Kubernetes cluster located in any cloud or in on-premises infrastructure to Azure, the cluster administrator must deploy agents.

These agents are responsible for securely connecting to Azure, collecting the logs and metrics, and watching for configuration requests. That’s how managing multi-cloud environments becomes the more familiar management of Azure objects and resources.
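Connecting an existing cluster is done from a machine whose kubeconfig points at that cluster. A minimal sketch using the Azure CLI (the cluster, resource group, and region names below are placeholders):

```shell
# One-time setup: add the Azure Arc-enabled Kubernetes CLI extension.
az extension add --name connectedk8s

# Create a resource group to hold the projected cluster resource.
az group create --name arc-demo-rg --location eastus

# Deploy the Arc agents into the cluster in the current kubeconfig
# context and register it in Azure as a connected cluster.
az connectedk8s connect --name my-cluster --resource-group arc-demo-rg
```

Once connected, the cluster shows up in the Azure portal alongside native resources and can be tagged, queried, and governed like any of them.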

Google Anthos also relies on Kubernetes as its primary computing environment, with Google Kubernetes Engine (GKE) on Google Cloud, GKE on-prem, or GKE on AWS for managing hybrid Kubernetes environments.

Here, every object in your infrastructure is treated as a Kubernetes object and handled using the GitOps workflow. Similar to Azure Arc, Anthos lets you treat environment instances as Google Cloud-specific objects, meaning you can manage different environments with the same Google tools and processes you already know.

Application and Configuration Management

How you configure your environments determines the tools and services you can use in the future.

Anthos Config Management integrates with Anthos clusters—either on-premises or in the cloud—and lets you deploy and monitor configuration changes stored in a central Git repository. You can use GitOps workflows in any of the available environments: AWS, GCP, or on-premises.

With Git as the central source of truth for your configurations, you can change your environments with simple pull requests, keeping them consistent and aligned. In addition, any YAML or JSON that can be applied with kubectl commands can also be committed to the repository managed by Anthos Config Management and applied to your Kubernetes clusters.
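As a sketch, enrolling a cluster in Config Management amounts to applying a ConfigManagement object that points at the central repository; the repository URL, branch, and directory below are placeholders:

```yaml
# config-management.yaml — applied to each enrolled cluster
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  git:
    syncRepo: https://github.com/example-org/anthos-config  # placeholder repo
    syncBranch: main
    secretType: none      # use ssh or a token for private repositories
    policyDir: "config"   # directory in the repo holding the configs
```

From then on, the operator continuously reconciles each cluster against whatever is committed to that directory.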

Azure Arc-enabled Kubernetes deploys configurations and applications using the GitOps approach. It relies on a Kubernetes operator—in this case the Flux operator—to watch for changes made in a Git repository.

With Cluster-level GitOps Configuration, the goal is to have a baseline for the "horizontal" or "management" components deployed on your Kubernetes cluster and then used by your applications.

Conversely, the goal with Namespace-level GitOps Configuration is to have the Kubernetes resources deployed only in the namespace selected. Having such deployments as part of your GitOps Configuration will ensure your cluster and applications meet the baseline standards.
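Both scopes are created with the same CLI command, differing only in the `--scope` flag. A sketch with placeholder cluster, resource group, and repository names:

```shell
# Cluster-scoped configuration: baseline "horizontal" components
# (ingress, monitoring agents, etc.) for the whole cluster.
az k8s-configuration create \
  --name cluster-baseline \
  --cluster-name my-cluster \
  --resource-group arc-demo-rg \
  --cluster-type connectedClusters \
  --operator-namespace cluster-config \
  --repository-url https://github.com/example-org/cluster-config \
  --scope cluster

# Namespace-scoped configuration: resources deployed only
# into the selected namespace.
az k8s-configuration create \
  --name team-a-apps \
  --cluster-name my-cluster \
  --resource-group arc-demo-rg \
  --cluster-type connectedClusters \
  --operator-namespace team-a \
  --repository-url https://github.com/example-org/team-a-config \
  --scope namespace
```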

Security and Compliance

Security is of the utmost importance for any digital infrastructure, and while Arc and Anthos differ in how they protect you, both leave you with a secure infrastructure.

Azure Policy extends Gatekeeper v3, an admission controller webhook for Open Policy Agent (OPA). This lets you apply your security policies and safeguards at scale across all of your clusters in all of your environments. Azure Policy also makes it possible to manage and report on the compliance state of your Kubernetes clusters from a single place.
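Under the hood, Gatekeeper policies are expressed as constraint templates containing Rego rules. The canonical required-labels example gives a feel for the shape (this is an illustration, not an Azure-specific built-in):

```yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg}] {
          # Compare the labels on the incoming object against the
          # labels the constraint's parameters require.
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("missing required labels: %v", [missing])
        }
```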

Google Anthos Config Management includes the Policy Controller, which enforces custom business logic against every API request to Kubernetes. Depending on your business needs, common security and compliance rules can be enforced either with the built-in set of rules or by writing your own policies based on the open-source Open Policy Agent project.
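Because Policy Controller is also built on Gatekeeper, enforcing a rule from the built-in template library is just a matter of committing a constraint to the config repository. A sketch that requires an `owner` label on every namespace (the constraint name and label are placeholders):

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: namespaces-must-have-owner
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["owner"]
```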

Monitoring and Logging

Access to logs for applications and infrastructure components is critical for effectively running and maintaining a production infrastructure.

Azure Monitor for Containers provides numerous monitoring features that give insight into the health and performance of your Azure Arc clusters. It collects memory and processor metrics from controllers, nodes, and containers, exposed in Kubernetes through the Metrics API.

Metrics are then saved in the metrics store, and log data is written to the logs store associated with your Log Analytics workspace. With this integration, you get all of the Kubernetes-related performance metrics, telemetry, and metadata you need, including from any of the applications deployed on the clusters.
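Once the data lands in the Log Analytics workspace, it can be explored with Kusto queries. A sketch over the `Perf` table that Azure Monitor for Containers populates:

```kusto
// Average node CPU usage over the last hour, in 5-minute buckets.
Perf
| where ObjectName == "K8SNode" and CounterName == "cpuUsageNanoCores"
| where TimeGenerated > ago(1h)
| summarize AvgCpuNanoCores = avg(CounterValue) by Computer, bin(TimeGenerated, 5m)
| order by TimeGenerated desc
```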

For Anthos on Google Cloud, the workloads that run inside your clusters have logs that are automatically enriched with relevant labels, such as pod and cluster names.

Cloud Logging provides a unified place for you to store and analyze all of these logs. Cloud Audit Logs also allows you to capture and analyze the interactions between your applications and users. Kubernetes Engine Monitoring automatically stores your application's critical metrics for further use in debugging, alerting, and post-incident analysis.
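Those enriched labels make it straightforward to filter logs down to a single cluster or workload from the command line. A sketch with a placeholder cluster name:

```shell
# Read recent container logs for one cluster; the label filters come
# from the metadata Anthos attaches automatically.
gcloud logging read \
  'resource.type="k8s_container" AND resource.labels.cluster_name="my-cluster"' \
  --limit 20 \
  --format json
```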

Other Services

Although the cores of the two platforms are built using the same general services and practices, Google Anthos and Azure Arc each offer unique multi-cloud management solutions.

Google Anthos Managed Service Mesh

Anthos Service Mesh is a networking layer for managing and configuring communication between different environments and services. It is based on Istio, an open-source implementation of the service mesh infrastructure layer.

Anthos Service Mesh uses sidecar proxies to enhance network security, reliability, and visibility. Networking functions are moved out of the application and implemented in a common out-of-process proxy, which is delivered as a separate container in the same pod.
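With Istio-based meshes, sidecar injection is typically switched on per namespace. A sketch using the upstream Istio label (Anthos Service Mesh installs generally use a revision label such as `istio.io/rev=<revision>` instead; the namespace name is a placeholder):

```shell
# Mark the namespace so new pods get the proxy injected automatically.
kubectl label namespace demo istio-injection=enabled

# After redeploying, each pod lists the application container
# plus the injected istio-proxy container.
kubectl get pods -n demo \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'
```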

Google Anthos Serverless

Another Anthos-specific service is Cloud Run, which manages how your services run both in the cloud and on-premises. It also handles resource utilization, horizontal scaling, and integration with networking and Anthos Service Mesh. Powered by the Knative open-source project, Cloud Run lets you go from container to production in moments.
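Since Cloud Run for Anthos builds on Knative, deploying a service boils down to a single Knative Service manifest; Knative then handles routing and scale-to-zero. A sketch with a placeholder image:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/example-project/hello:latest  # placeholder image
          ports:
            - containerPort: 8080
          env:
            - name: TARGET
              value: "Anthos"
```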

Azure Arc for Servers

Alongside Kubernetes clusters, Azure Arc also manages other types of resources, such as Windows and Linux servers. Servers running outside of Azure—AWS EC2 instances, on-premises VMware VMs, physical machines, or devices in edge scenarios—can all be projected as Azure resources.

Using Azure Policy and resource tags, these resources can then be managed like native Azure virtual machines, with update management, change tracking, monitoring, and more.
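Projecting a server into Azure is done by installing the Connected Machine agent on it and running a single connect command. A sketch run on the server itself; the resource group, region, and IDs below are placeholders:

```shell
# After installing the azcmagent package on the machine,
# register it with Azure as an Arc-enabled server.
azcmagent connect \
  --resource-group arc-servers-rg \
  --location eastus \
  --tenant-id 00000000-0000-0000-0000-000000000000 \
  --subscription-id 11111111-1111-1111-1111-111111111111
```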

Azure Arc Data Services

Azure Arc also provides you with the ability to run Azure data services on-premises, in multi-cloud, and in edge environments using the Kubernetes-centered infrastructure of your choice.

Arc enables access to the latest Azure innovations, data workload performance optimization, and consistent management across database engines like Postgres and SQL. It also provides a unified view of your data services and underlying infrastructure, with a reliable cloud billing model.


Google Cloud Anthos and Microsoft Azure Arc share similar technical approaches. At the core of each are platforms allowing companies to host their workloads in hybrid and multi-cloud infrastructures. Both also leverage Kubernetes and containers to provide a seamless experience anywhere: on-premises, in their own public cloud platform, or in a competitor’s cloud.

Yet even though Microsoft and Google approached some problems from similar perspectives, both have also implemented a few unique solutions that are only available on their individual platforms.