# Why Multicloud Kubernetes Is No Longer Optional
The conversation has shifted. Running Kubernetes on a single cloud provider was once considered best practice: simpler networking, unified IAM, one support contract. But modern enterprise reality tells a different story.
Vendor lock-in risk, regional compliance mandates, cost arbitrage opportunities, and resilience requirements are pushing engineering teams to operate Kubernetes clusters across multiple clouds simultaneously. Among the most compelling combinations today is AWS (EKS) paired with Oracle Cloud Infrastructure (OCI/OKE): two providers with fundamentally different strengths that, when combined, can form a genuinely powerful platform.
This post walks through the architectural decisions, tooling choices, and operational patterns for running a production-grade multicloud Kubernetes setup spanning AWS EKS and OCI OKE.
## Understanding What Each Cloud Brings
Before designing a multicloud strategy, you need to be honest about why you’re using each provider, not just “for redundancy.”
AWS EKS is mature, battle-tested, and has the richest ecosystem of Kubernetes-native tooling. Its managed node groups, Karpenter autoscaler, and deep integration with IAM Roles for Service Accounts (IRSA) make it a natural fit for compute-heavy, stateless microservices. The tradeoff: cost can escalate fast at scale.
OCI OKE (Oracle Container Engine for Kubernetes) is increasingly competitive on price, particularly for compute and egress, and has genuine strengths in Oracle Database integrations, bare metal instances, and deterministic network performance via its RDMA fabric. For workloads that touch Oracle DB, Exadata, or need high-throughput interconnects, OKE is not just a fallback; it’s the right tool.
The insight that unlocks a real multicloud strategy: stop treating one cloud as primary and the other as DR. Design for active-active.
## The Core Architecture
A production multicloud Kubernetes setup across EKS and OKE requires solving four problems:
- Cluster federation or virtual cluster abstraction
- Cross-cloud networking
- Unified identity and secrets management
- Consistent GitOps delivery
Let’s break each down.
### 1. Cluster Federation: Choosing Your Control Plane Philosophy
There are two schools of thought:
**Option A: Independent clusters, unified GitOps (recommended).** Each cluster (EKS, OKE) is fully autonomous. A GitOps tool, typically Flux or Argo CD, manages both from a single source of truth. No shared control plane exists between clusters. Workloads are deployed to each cluster independently based on targeting labels or Kustomize overlays.
**Option B: Virtual Cluster Mesh (Liqo, Admiralty, or Karmada).** Tools like Karmada introduce a meta-control plane that federates multiple clusters. You submit workloads to the Karmada API server, and it distributes them across member clusters based on propagation policies.
For most teams, Option A is the right starting point. Karmada adds power but also operational complexity. The GitOps approach keeps the blast radius contained: a misconfiguration in one cluster doesn’t cascade.
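Under Option A with Flux, each cluster runs its own reconciler pointed at its own path in the shared repository. A minimal sketch of the Flux `Kustomization` on the EKS side; the repository name and path are hypothetical, not from the source:

```yaml
# Flux Kustomization running on the EKS cluster (names/paths hypothetical)
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: platform
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: platform-repo
  path: ./clusters/eks-us-east-1   # the OKE cluster points at its own path
  prune: true
```

The OKE cluster runs an identical object with `path: ./clusters/oke-us-ashburn-1`; neither cluster knows the other exists, which is exactly the point of Option A.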
### 2. Cross-Cloud Networking: The Hard Problem
Kubernetes pods in EKS can’t natively reach pods in OKE, and vice versa. You need a data plane that spans both clouds.
Recommended approach: WireGuard-based mesh with Cilium Cluster Mesh
Cilium’s Cluster Mesh feature allows pods across clusters to communicate using their native pod IPs, with WireGuard encryption in transit. The setup requires:
- Each cluster runs Cilium as its CNI (replacing the default VPC CNI on EKS and the flannel-based CNI on OKE)
- A `ClusterMesh` resource is created linking the two API servers
- Cross-cluster `ServiceExport` and `ServiceImport` resources (via the Kubernetes MCS API) expose services across the mesh
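To illustrate the MCS API path, exporting a service to the mesh is a single small object; the service and namespace names below are hypothetical. (Cilium also supports the older approach of annotating the Service itself with `service.cilium.io/global: "true"`.)

```yaml
# Export the 'checkout' service so the other cluster can import it
# (MCS API; names are hypothetical)
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: checkout
  namespace: shop
```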
On the infrastructure layer, you need an encrypted tunnel between your AWS VPC and OCI VCN. Options:
- Site-to-site VPN (quickest to set up; ~1.25 Gbps cap per tunnel)
- AWS Direct Connect + OCI FastConnect (for production: private, dedicated bandwidth)
- Overlay via Tailscale or Netbird (great for dev/staging multicloud setups, not production-grade for high-throughput)
```yaml
# Example: Cilium ClusterMesh config snippet
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: allow-cross-cluster-services
spec:
  endpointSelector: {}
  ingress:
    - fromEndpoints:
        - matchLabels:
            io.cilium.k8s.policy.cluster: oci-oke-prod
```
### 3. Unified Identity: IRSA on AWS, Workload Identity on OCI
This is where multicloud gets philosophically interesting. Each cloud has its own identity system, and they don’t speak the same language.
On AWS (EKS): Use IRSA (IAM Roles for Service Accounts). Your pod’s service account is annotated with an IAM role ARN. The Pod Identity Webhook injects environment variables that allow the AWS SDK to exchange a projected service account token for temporary AWS credentials.
On OCI (OKE): Use OCI Workload Identity, introduced in recent OKE versions. It works analogously to IRSA: a Kubernetes service account is bound to an OCI Dynamic Group and IAM policy, and the pod receives a workload identity token that can be exchanged for OCI API credentials.
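On the EKS side, the binding is a single annotation on the service account; on OKE the binding lives in OCI IAM policy rather than on the Kubernetes object. A sketch of the IRSA half, with a hypothetical account ID and role name:

```yaml
# EKS: service account bound to an IAM role via IRSA
# (account ID and role name are hypothetical)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  namespace: app
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/app-role
```

On OKE there is no equivalent annotation; the OCI IAM policy matches the workload by cluster, namespace, and service account name, so consult the OCI Workload Identity documentation for the exact policy syntax.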
The challenge: your application code should not need to know which cloud it’s running on. Use a secrets abstraction layer.
External Secrets Operator (ESO) elegantly solves this. Deploy ESO on both clusters. Point the EKS instance at AWS Secrets Manager; point the OKE instance at OCI Vault. Your application consumes a SecretStore resource with a consistent name. ESO handles the transparent fetching of backend-specific credentials.
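On the consuming side, an `ExternalSecret` references the store by name and produces an ordinary Kubernetes Secret. A minimal sketch, with hypothetical secret names and keys:

```yaml
# Identical on both clusters: only the SecretStore behind it differs
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-db-credentials
  namespace: app
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: app-secrets        # same store name on EKS and OKE
    kind: SecretStore
  target:
    name: app-db-credentials # Kubernetes Secret created by ESO
  data:
    - secretKey: password
      remoteRef:
        key: prod/app/db-password
```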
```yaml
# SecretStore on EKS (AWS Secrets Manager backend)
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: app-secrets
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa
---
# SecretStore on OKE (OCI Vault backend): same name, different spec
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: app-secrets
spec:
  provider:
    oracle:
      vault: ocid1.vault.oc1...
      region: us-ashburn-1
      auth:
        workloadIdentity: {}
```

Your application's `ExternalSecret` resources reference `app-secrets` in both environments; the YAML is identical.

### 4. GitOps: One Repository, Multiple Targets

Use **Argo CD ApplicationSets** or **Flux's `Kustomization` with cluster selectors** to manage both clusters from a monorepo.

A typical repo layout:

```
/clusters
  /eks-us-east-1
    kustomization.yaml   # EKS-specific patches
  /oke-us-ashburn-1
    kustomization.yaml   # OKE-specific patches
/base
  /apps
    deployment.yaml
    service.yaml
  /infra
    external-secrets.yaml
    cilium-config.yaml
```
Flux’s Kustomization resource lets you target specific clusters using the cluster’s kubeconfig context or label selectors. Argo CD’s ApplicationSet with a list generator can enumerate your clusters and deploy the same app with environment-specific values.
The key rule: the base layer must be cloud-agnostic. Patches in cluster-specific overlays handle anything that diverges: storage classes, ingress annotations, node selectors.
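As one concrete sketch, an Argo CD `ApplicationSet` with a list generator can fan the same repository out to both clusters. The repo URL and API server endpoints below are hypothetical placeholders:

```yaml
# ApplicationSet deploying one path per cluster (URLs hypothetical)
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: platform-apps
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - cluster: eks-us-east-1
            url: https://eks.example.internal
          - cluster: oke-us-ashburn-1
            url: https://oke.example.internal
  template:
    metadata:
      name: 'platform-{{cluster}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example/platform
        targetRevision: main
        path: 'clusters/{{cluster}}'
      destination:
        server: '{{url}}'
        namespace: apps
```

Each generated Application pulls the cluster-specific overlay, so cloud-specific patches stay in the repo rather than in the deployment tooling.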
## Observability Across Clouds
A multicloud cluster setup with no unified observability is an incident waiting to happen.
Recommended stack:
- **Prometheus + Thanos** for metrics: each cluster runs Prometheus; Thanos Sidecar ships blocks to object storage (S3 on AWS, OCI Object Storage on OCI); Thanos Querier federates across both
- **Grafana** with both Thanos endpoints as datasources: a single pane of glass
- **OpenTelemetry Collector** deployed as a DaemonSet on each cluster, shipping traces to a common backend (Grafana Tempo, Jaeger, or Honeycomb)
- **Loki** for logs, with agents on each cluster shipping to a common Loki instance
Label discipline is critical: ensure every metric, trace, and log carries `cluster`, `cloud_provider`, and `region` labels from the source. Without them, correlating across clouds during an incident becomes extremely difficult.
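For metrics, a common way to enforce this is Prometheus `external_labels`, stamped at the source so Thanos carries them through federation. The values below are hypothetical:

```yaml
# prometheus.yml fragment on the EKS cluster (values hypothetical);
# the OKE Prometheus sets cluster/cloud_provider/region accordingly
global:
  external_labels:
    cluster: eks-us-east-1
    cloud_provider: aws
    region: us-east-1
```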
## Cost Management: The Overlooked Dimension
Multicloud adds a new cost vector: egress. Data leaving AWS costs money. Data entering OCI is free. Cross-cloud service calls that seemed free in a single-cloud setup now carry per-GB charges.
Practical rules:
- Colocate tightly coupled services in the same cluster/cloud: don’t split microservices that call each other thousands of times per second across clouds
- Use Cilium’s network policy to audit cross-cluster traffic volume before enabling services in the mesh
- Consider OCI’s free egress to the internet for user-facing workloads where latency to OCI regions is acceptable
- Tag every namespace with cost center labels and use Kubecost or OpenCost deployed on each cluster with a shared object storage backend for unified cost attribution
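The labeling itself is trivial; what matters is that it is applied everywhere so Kubecost or OpenCost can aggregate by it. A sketch of a namespace carrying cost attribution labels (the label keys here are a convention, not a standard):

```yaml
# Namespace with cost attribution labels (keys/values hypothetical)
apiVersion: v1
kind: Namespace
metadata:
  name: checkout
  labels:
    cost-center: ecommerce
    team: payments
```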
## Operational Runbook Considerations
A few things that will bite you if not planned for:
**Clock skew:** mTLS certificates and OIDC token validation are sensitive to time drift. Ensure NTP is configured identically on all nodes across both clouds. A 5-minute clock skew will silently break IRSA on EKS and workload identity on OKE.
**DNS:** Use ExternalDNS on both clusters pointing to a shared DNS provider (Route 53, Cloudflare). Services that need cross-cloud discoverability get DNS entries automatically on deploy.
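For example, ExternalDNS picks up a hostname annotation on a Service and creates the record in the shared zone. The hostname and ports below are hypothetical:

```yaml
# Service exposed cross-cloud via ExternalDNS (hostname hypothetical)
apiVersion: v1
kind: Service
metadata:
  name: checkout
  annotations:
    external-dns.alpha.kubernetes.io/hostname: checkout.example.com
spec:
  type: LoadBalancer
  selector:
    app: checkout
  ports:
    - port: 443
      targetPort: 8443
```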
**Cluster upgrades:** EKS and OKE release Kubernetes versions on different schedules. Maintain a maximum one-minor-version skew between clusters. Use a canary upgrade pattern: upgrade your OKE cluster first (typically lower blast radius), validate for 48 hours, then upgrade EKS.
**Node image parity:** Your application containers are cloud-agnostic, but your node OS images are not. Use Bottlerocket on EKS and Oracle Linux 8 on OKE; both are minimal, hardened, and have predictable patching cycles.
## When NOT to Do This
Multicloud Kubernetes is a force multiplier, but only if your team has the operational maturity to support it.
Don’t pursue this architecture if:
- Your team is still stabilizing single-cluster Kubernetes operations
- Your workloads have no actual cross-cloud requirement (cost, compliance, or resilience)
- You lack dedicated platform engineering capacity to maintain the toolchain
- Your application isn’t designed for network partitioning tolerance
A well-run single-cloud EKS or OKE setup will outperform a poorly run multicloud one every time. Add complexity only when you’ve exhausted simpler options.
## Closing Thoughts
The multicloud Kubernetes story has matured considerably. Tools like Cilium Cluster Mesh, External Secrets Operator, Karmada, and OpenTelemetry have closed most of the operational gaps that made this approach impractical two years ago.
The AWS + OCI combination in particular is underrated. AWS brings ecosystem breadth; OCI brings pricing, Oracle database integration, and a network fabric that punches above its weight. For the right workloads, and with the right tooling discipline, the combination is genuinely compelling.
The architecture isn’t magic. It’s plumbing. But when it’s done right, it disappears and your developers ship to two clouds the same way they ship to one.
Have questions about multicloud Kubernetes design or EKS/OKE specifics? Reach out or leave a comment below.