If you’re using Kubernetes services from cloud providers, you probably manage traffic into the clusters using cloud-provided load-balancing services. After all, it’s the most common approach.

But if you want to run an application on multiple cloud providers, it can be hard to distribute traffic intelligently. Load balancers offered by most cloud vendors are tailored to a specific cloud infrastructure, like AWS Elastic Load Balancing (ELB) or Azure Load Balancer. A load balancer can also be a single point of failure, which can lead to app outages. Using multiple clouds to host your Kubernetes app adds resiliency.

In this blog post, I’ll show you how to use Citrix Application Delivery Controller (ADC), in conjunction with managed Kubernetes services in the cloud, to achieve the benefits of a multi-cloud configuration. You’ll learn how to configure a Citrix ADC in front of Kubernetes clusters running in different public clouds to provide high availability for your applications.

When multiple Kubernetes clusters are distributed among different cloud providers and across different geographies, you need a way to direct traffic toward each instance. Citrix ADC, with its multi-cluster support, can help by detecting failures and failing over to the cloud provider designated as backup.

In this example, we will use the two most popular public clouds, AWS and Azure, and their managed Kubernetes services: Amazon Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS). We will also expose a sample app using a service of type LoadBalancer from each cloud provider.

The figure below shows the setup with AWS, Azure, and Citrix ADC (VPX):

Citrix ADC is available in the AWS Marketplace and the Azure Marketplace and can be deployed and easily configured using a CloudFormation template in AWS or an Azure Resource Manager template in Azure.

We’re using Citrix ADC as an authoritative DNS (ADNS) server, so we will configure our preferred DNS provider to route traffic to the ADC. The ADC will have a public IP in each cloud (an Elastic IP (EIP) in AWS and a public IP address in Azure), which will be used to sync configuration across the clouds.

Preparing the Environment to Ensure High Availability

First, we will deploy EKS in the “us-west-2” region and AKS in the “Central India” region. On both EKS and AKS, we will install the multi-cluster controller in the Kubernetes cluster where the application will be deployed. For AWS, we will define the environment variable “LOCAL_REGION” as “us-west-2” and “LOCAL_CLUSTER” as “eks-cluster.” For Azure, we will define “LOCAL_REGION” as “central-india” and “LOCAL_CLUSTER” as “aks-cluster.” We also will configure access to the VPX instances deployed in both clouds with the appropriate credentials, using Kubernetes Secrets.
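
To give the controller access to the VPX instances, the ADC credentials are stored in a Kubernetes Secret that the controller deployment references. As a minimal sketch, such a Secret could look like the manifest below (the Secret name “nslogin” and the “username”/“password” keys are illustrative; use whatever names your controller deployment expects):

apiVersion: v1
kind: Secret
metadata:
  name: nslogin                # illustrative name; must match the reference in the controller deployment
  namespace: default
type: Opaque
stringData:
  username: "<ADC username>"   # placeholder for the Citrix ADC (VPX) management credentials
  password: "<ADC password>"   # do not commit real credentials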

EKS – Citrix multi-cluster config snippet

   env:
        - name: "LOCAL_REGION"
          value: "us-west-2"
        - name: "LOCAL_CLUSTER"
          value: "eks-cluster"
        - name: "SITENAMES"
          value: "azuresite,awssite"
        - name: "azuresite_ip"
          value: "<Management IP of ADC in Azure>"
        - name: "azuresite_region"
          value: "central-india"
        - name: "awssite_ip"
          value: "<Management IP of ADC in AWS>"
        - name: "awssite_region"
          value: "us-west-2"

AKS – Citrix multi-cluster config snippet

   env:
        - name: "LOCAL_REGION"
          value: "central-india"
        - name: "LOCAL_CLUSTER"
          value: "aks-multi-cluster"
        - name: "SITENAMES"
          value: "azuresite,awssite"
        - name: "azuresite_ip"
          value: "<Management IP of ADC in Azure>"
        - name: "azuresite_region"
          value: "central-india"
        - name: "awssite_ip"
          value: " Management IP of ADC in AWS"
        - name: "awssite_region"
          value: "us-west-2"

For more on the environment variables used here, check out our multi-cluster documentation.

Configuring Custom Resource Definitions

Now, we’ll configure the following custom resource definitions: Global Traffic Policy and Global Service Entry.

Global Traffic Policy will be used to define your deployment type (failover, canary, etc.), along with your host, target, and health monitoring methods.

Global Service Entry will automatically pick up the app’s external IP, which routes traffic into the cluster. If the external IP changes, Global Service Entry will pick up the newly assigned IP address and configure the ADC’s multi-cluster endpoints accordingly.
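
As a rough sketch, a Global Service Entry for the sample app in the EKS cluster could look like the manifest below. The field layout follows the globalserviceentry examples in the multi-cluster documentation, and the endpoint value uses the <servicename>.<namespace>.<region>.<cluster> naming convention described later in this post; verify both against the CRD version you install:

apiVersion: "citrix.com/v1beta1"
kind: globalserviceentry
metadata:
  name: gse-sample-app
  namespace: default
spec:
  endpoint:
    # Local endpoint of the sample app in the EKS cluster, in the form
    # <servicename>.<namespace>.<region>.<cluster>
    hostName: 'sample-app.default.us-west-2.eks-cluster'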

Deploying a Sample App

We’re ready to deploy a sample application. The sample application in this demo responds with whether it is deployed in EKS or AKS. We will expose the app using a service of type LoadBalancer, which will automatically configure the cloud load balancer of the respective cloud provider.

Snippet of sample app in EKS (Amazon)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - image: sample-eks-image
        name: sampleapp
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: sample-app
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: sample-app

Snippet of sample app in AKS (Azure)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - image: sample-aks-image
        name: sampleapp
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: sample-app
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: sample-app

Now, we can define the traffic policy for the sample app. We will use the failover option, where AWS EKS is the primary and Azure AKS is the backup. Here’s a snippet of the global traffic policy, which will be the same for both AWS and Azure.

apiVersion: "citrix.com/v1beta1"
kind: globaltrafficpolicy
metadata:
  name: gtp-sample-app
  namespace: default
spec:
  serviceType: 'HTTP'
  hosts:
  - host: 'demo.citrixns.com'
    policy:
      trafficPolicy: 'FAILOVER'
      secLbMethod: 'ROUNDROBIN'
      targets:
      - destination: 'sample-app.default.us-west-2.eks-cluster'
        weight: 1
      - destination: 'sample-app.default.central-india.aks-cluster'
        primary: false
        weight: 1
      monitor:
      - monType: http
        uri: ''
        respCode: 200

Please note that targets are defined in the following format:

<servicename>.<namespace>.<local_region>.<local_cluster>

In this example, I am monitoring app health using HTTP and response code 200. You can define different monitor options, like ping, TCP, HTTP, or HTTPS, depending on your application.
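
For example, to probe a dedicated health endpoint over HTTPS instead of the root path, the monitor section of the same policy could be adjusted as sketched below (the "healthz" path is an illustrative assumption, not something the sample app defines):

      monitor:
      - monType: https
        uri: 'healthz'     # illustrative health-check path; adjust to your application
        respCode: 200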

Now let’s access the application at demo.citrixns.com.

Initially, the application in EKS responds. We can simulate a failure by deleting the Kubernetes service on the EKS cluster, and traffic will fail over to the app in AKS.

In this blog post, I covered the failover deployment option. Citrix ADC also supports other deployment options, like canary and geo-based deployments, which you can learn about in our multi-cluster application delivery blog post.

Citrix’s cloud-native solutions provide the most comprehensive delivery for microservices-based apps and can be used to deliver resilient, secure applications on-prem, in the cloud, and in your hybrid-cloud environment. Learn more about Citrix ADC.