Customers are critical to driving innovation at Citrix. In this blog post, we’ll take a look at recent features shaped by customer input in our service of type LoadBalancer offering.

You can logically divide a Kubernetes cluster into zones. The Citrix Ingress Controllers in different zones all listen to the same Kubernetes API server because it’s the same Kubernetes cluster. When a service of type LoadBalancer is created, all Citrix ADCs (MPX/VPX/SDX) in the zones advertise the same IP to the BGP fabric.

Why is this approach inefficient? If a zone (for example, Rack 3) has no workloads (pods) for a given service, the ADC in that zone still advertises the IP address.

Here are three scenarios where we can improve efficiency:

  • A service of type LoadBalancer has two pods, one in Zone 1 and one in Zone 2. The MPX in Zone 3 should not advertise the VIP to the BGP fabric.
  • If the number of pods increases from two to three and the new pod is deployed in Zone 3, the MPX in that zone should advertise the VIP.
  • If the pods decrease from three to one and only Zone 1 has a pod, the MPXs in Zones 2 and 3 should recall their advertisements from the BGP fabric.

To support this functionality, we’ve added two new features to the Kubernetes service of type LoadBalancer in Citrix Ingress Controller:

  • Border Gateway Protocol (BGP) Support
  • Advertise and Recall VIP based on the availability of application pods in a zone

Support for Automatic Configuration of BGP RHI on Citrix ADC

Route health injection (RHI) enables Citrix ADC to advertise the availability of a VIP as a host route throughout a network that uses BGP. In the past, you had to configure Citrix ADC manually to support RHI. Now, with Citrix Ingress Controllers deployed in a Kubernetes environment, you can automate the configuration of Citrix ADCs to advertise VIPs.
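
For context, the manual setup typically involved two pieces: marking the VIP as a host route on the ADC and redistributing kernel routes into BGP from the ADC’s dynamic routing shell (VTYSH). The following is a minimal sketch of that manual configuration, not a verbatim recipe; the VIP is taken from the verification output later in this post, the AS number is a placeholder, and exact commands vary by ADC version:

 # On the ADC CLI: mark the VIP as a host route
 add ns ip 172.29.46.78 255.255.255.255 -type VIP -hostRoute ENABLED

 # In the ADC dynamic routing shell (AS 100 is a placeholder)
 > vtysh
 ns# configure terminal
 ns(config)# router bgp 100
 ns(config-router)# redistribute kernel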

When a service of type LoadBalancer is created, the Citrix Ingress Controller configures a VIP on the Citrix ADC for the service. If BGP RHI support is enabled for the Citrix Ingress Controller, it automatically configures Citrix ADC to advertise the VIP to the BGP fabric.

Advertise and Recall VIPs Based on the Availability of Pods in a Zone

In a topology like the one below, nodes in a Kubernetes cluster are distributed across three zones. Each zone has a Citrix ADC MPX as the Tier-1 ADC, as well as a Citrix Ingress Controller in the Kubernetes cluster to configure it. Citrix Ingress Controllers in all zones listen to the same Kubernetes API server. Whenever a service of type LoadBalancer is created, every Citrix Ingress Controller sees the event, and all Citrix ADCs in the cluster advertise the same IP address to the BGP fabric, even in zones with no workloads.

Sample topology

Citrix provides a solution to advertise or recall the VIP based on the availability of pods in a zone. You’ll need to label the nodes in each zone so the Citrix Ingress Controller can identify nodes belonging to the same zone. The Citrix Ingress Controller in each zone checks whether there are pods on nodes in that zone. If there are, it advertises the VIP. Otherwise, it recalls the VIP advertisement through the Citrix ADC in that zone.
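
You can approximate this check yourself with kubectl. A quick sketch, using the rack label and the web-frontend application from the walkthrough later in this post:

 # List the nodes that belong to a zone (labeled in the steps below):
 kubectl get nodes -l rack=rack-1

 # See which nodes the application pods landed on; if none are in a
 # given zone, the CIC there recalls the VIP advertisement:
 kubectl get pods -l app=web-frontend -o wide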

Benefits

  • Better fault tolerance and minimizing the blast radius.
  • Better load balancing based on locality.

Advertise Case

Here, we have pods of the service of type LoadBalancer on all four nodes, so both CICs have configured their respective MPXs. In turn, both MPXs have advertised the VIP to the BGP router.

Advertise case

Recall Case

Here, we have pods of the service of type LoadBalancer only on Node 1 and Node 2, which are in Rack 1/Zone 1. Because there are no pods on Node 3 and Node 4, CIC-24 removes the VIP configuration from the MPX at 10.217.212.24, which recalls the VIP advertisement from the BGP router.

Recall case
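
On the upstream router, the effect of a recall is that the BGP route for the VIP loses the path through the Zone 2 ADC. Illustratively (using the same addresses as the verification step later in this post), the output of show ip route bgp would then list a single next hop:

 # show ip route bgp
   B      172.29.46.78/32   [200/0] via 2.2.2.100, vlan20, 1d00h35m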

Configuring BGP RHI on Citrix ADCs Using the Citrix Ingress Controller

In this section, we’ll look at configuring BGP RHI on Citrix ADCs using the Citrix Ingress Controller, based on a sample topology. In this topology, nodes in a Kubernetes cluster are deployed across two zones. Each zone has a Citrix ADC VPX or MPX as the Tier-1 ADC and a Citrix Ingress Controller in the Kubernetes cluster for configuring it. The ADCs are peered with the upstream router using BGP.

BGP RHI configuration sample topology

Before you get started, you must configure Citrix ADC MPX or VPX as a BGP peer with the upstream routers. Then, perform the following steps to configure BGP RHI support based on the sample topology.
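
Peering is done in the ADC’s dynamic routing shell (VTYSH). A minimal sketch, assuming the ADC sits in AS 100 and the upstream router is reachable at 2.2.2.1 in AS 200 (all three values are placeholders for your environment):

 > vtysh
 ns# configure terminal
 ns(config)# router bgp 100
 ns(config-router)# neighbor 2.2.2.1 remote-as 200
 ns(config-router)# end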

1) Label the nodes in each zone using the following commands:

For zone 1:

kubectl label nodes node1 rack=rack-1
kubectl label nodes node2 rack=rack-1

For zone 2:

kubectl label nodes node3 rack=rack-2
kubectl label nodes node4 rack=rack-2
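
You can confirm the labels were applied with kubectl’s label-columns flag:

 kubectl get nodes -L rack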

2) Configure the following environment variables in the Citrix Ingress Controller configuration YAML file for each zone:

For zone 1:

- name: "NODE_LABELS"
value: "rack-1"
- name: "BGP_ADVERTISEMENT"
value: "True"

For zone 2:

- name: "NODE_LABELS"
value: "rack-2"
- name: "BGP_ADVERTISEMENT"
value: "True"

The following is a sample YAML file for deploying the Citrix Ingress Controller in Zone 1:

 apiVersion: v1
 kind: Pod
 metadata:
   name: cic-k8s-ingress-controller-1
   labels:
     app: cic-k8s-ingress-controller-1
 spec:
   serviceAccountName: cic-k8s-role
   containers:
   - name: cic-k8s-ingress-controller
     image: "quay.io/citrix/citrix-k8s-ingress-controller:1.4.392"
     env:
     # Set NetScaler NSIP/SNIP (SNIP in case of HA; management access has to be enabled)
     - name: "NS_IP"
       value: "10.217.212.24"
     # Set username for Nitro
     - name: "NS_USER"
       valueFrom:
         secretKeyRef:
           name: nslogin
           key: username
     # Set user password for Nitro
     - name: "NS_PASSWORD"
       valueFrom:
         secretKeyRef:
           name: nslogin
           key: password
     - name: "EULA"
       value: "yes"
     - name: "NODE_LABELS"
       value: "rack=rack-1"
     - name: "BGP_ADVERTISEMENT"
       value: "True"
     args:
     - --ipam
       citrix-ipam-controller
     imagePullPolicy: Always

3) Deploy the Citrix Ingress Controller using the following command. Note that you need to deploy a Citrix Ingress Controller in each zone (one per rack):

 kubectl create -f cic.yaml
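
Once both instances are deployed, you can check that each zone’s controller is running. The label below matches the Zone 1 sample YAML above; the Zone 2 pod would carry its own name and label:

 kubectl get pods -l app=cic-k8s-ingress-controller-1 -o wide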

4) Deploy a sample application using the web-frontend-lb.yaml file.

kubectl create -f web-frontend-lb.yaml

The content of the web-frontend-lb.yaml is as follows:

 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: web-frontend
 spec:
   replicas: 4
   selector:
     matchLabels:
       app: web-frontend
   template:
     metadata:
       labels:
         app: web-frontend
     spec:
       containers:
       - name: web-frontend
         image: 10.217.6.101:5000/web-test:latest
         ports:
         - containerPort: 80
         imagePullPolicy: Always

5) Create a service of type LoadBalancer to expose the application.

kubectl create -f web-frontend-lb-service.yaml

The content of the web-frontend-lb-service.yaml is as follows:

 apiVersion: v1
 kind: Service
 metadata:
   name: web-frontend
   labels:
     app: web-frontend
 spec:
   type: LoadBalancer
   ports:
   - port: 80
     protocol: TCP
     name: http
   selector:
     app: web-frontend
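
Once the service is created, the VIP allocated for it appears as the service’s external IP. The output would look something like this (illustrative values; the VIP and nodePort match the sample outputs below):

 kubectl get svc web-frontend
 NAME           TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
 web-frontend   LoadBalancer   10.102.33.109   172.29.46.78   80:30126/TCP   1m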

6) Verify the service group creation on Citrix ADCs using the following command.

show servicegroup <service-group-name>

The following is a sample output for the command.

#  show servicegroup k8s-web-frontend_default_80_svc_k8s-web-frontend_default_80_svc

 k8s-web-frontend_default_80_svc_k8s-web-frontend_default_80_svc - TCP
 State: ENABLED  Effective State: UP Monitor Threshold : 0
 Max Conn: 0 Max Req: 0  Max Bandwidth: 0 kbits
 Use Source IP: NO   
 Client Keepalive(CKA): NO
 TCP Buffering(TCPB): NO
 HTTP Compression(CMP): NO
 Idle timeout: Client: 9000 sec  Server: 9000 sec
 Client IP: DISABLED 
 Cacheable: NO
 SC: OFF
 SP: OFF
 Down state flush: ENABLED
 Monitor Connection Close : NONE
 Appflow logging: ENABLED
 ContentInspection profile name: ???
 Process Local: DISABLED
 Traffic Domain: 0

 1)   10.217.212.23:30126    State: UP   Server Name: 10.217.212.23  Server ID: None Weight: 1
   Last state change was at Wed Jan 22 23:35:11 2020 
   Time since last state change: 5 days, 00:45:09.760

   Monitor Name: tcp-default     State: UP   Passive: 0
   Probes: 86941 Failed [Total: 0 Current: 0]
   Last response: Success - TCP syn+ack received.
   Response Time: 0 millisec

 2)   10.217.212.22:30126    State: UP   Server Name: 10.217.212.22  Server ID: None Weight: 1
   Last state change was at Wed Jan 22 23:35:11 2020 
   Time since last state change: 5 days, 00:45:09.790

   Monitor Name: tcp-default     State: UP   Passive: 0
   Probes: 86941 Failed [Total: 0 Current: 0]
   Last response: Success - TCP syn+ack received.

7) Verify the VIP advertisement on the BGP router using the following command.

  >VTYSH
 # show ip route bgp
   B      172.29.46.78/32   [200/0] via 2.2.2.100, vlan20, 1d00h35m
                            [200/0] via 2.2.2.101, vlan20, 1d00h35m
   Gateway of last resort is not set

Conclusion

The enhancements we’ve made to the service of type LoadBalancer offering will help customers optimize traffic flow and improve load balancing based on locality. Learn more about Citrix ADC and service of type LoadBalancer, and check out our GitHub page.