Redundancy and fault tolerance are built into the DNS protocol. If a nameserver fails to respond to a DNS query, the DNS client gives up on that nameserver after exhausting its configured retry attempts and sends the request to another nameserver that’s advertised as authoritative for the domain name.
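To make that failover behavior concrete, here’s a minimal Python sketch using the dnspython library; the retry count, timeout, and nameserver addresses are illustrative and not the defaults of any particular resolver.

```python
# Minimal sketch of client-side failover across nameservers, using dnspython.
# Retry count, timeout, and addresses are illustrative, not a resolver's defaults.
import dns.exception
import dns.message
import dns.query

def resolve_with_failover(qname, nameservers, retries=2, timeout=2.0):
    query = dns.message.make_query(qname, "A")
    for ns in nameservers:                    # each advertised authoritative nameserver
        for _ in range(retries):              # retry before giving up on this one
            try:
                return dns.query.udp(query, ns, timeout=timeout)
            except dns.exception.Timeout:
                continue                      # no response: retry the same nameserver
        # retries exhausted: fall through to the next advertised nameserver
    raise RuntimeError(f"no nameserver answered for {qname}")

# Example: two GSLB appliances advertised as authoritative for the zone.
# answer = resolve_with_failover("www.example.com", ["203.0.113.10", "203.0.113.20"])
```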

DNS is a loosely coupled distributed system, and all nameservers should have a consistent view of the configuration so clients get consistent responses. GSLB extends DNS to adapt to dynamic workloads, which means the GSLB configuration must also be consistent across all participating nameservers. Fault tolerance is essential both during DNS request processing and when configuring nameservers for GSLB.

With the growth in dynamic workloads, changes to the GSLB or DNS subsystems happen often, increasing the workload for IT admins. Manually keeping the configuration consistent across GSLB appliances spread over multiple datacenters is error prone, and those errors can affect the end-user experience.

In this blog post, we’ll look at how Citrix ADC can provide a fault-tolerant distributed control plane for GSLB, making it easier for IT admins to manage GSLB appliances.

A Consistent GSLB View

If DNS goes down, users can’t access their apps. Having at least one level of redundancy is critical, and GSLB appliances should be configured in an active-active mode, where all the GSLB appliances actively serve DNS requests. Clients’ DNS requests can be served by any appliance in the group, so all appliances need to have a consistent view of:

  • GSLB configuration
  • Runtime information about the health and load of the apps and datacenters, which GSLB uses to minimize latency and give clients the best possible user experience

Citrix ADC’s Metric Exchange Protocol (MEP) delivers a consistent view of runtime information across GSLB appliances, and our real-time GSLB config synchronization feature offers a consistent view of the configuration.
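As a conceptual illustration of why that shared view matters, the sketch below shows the two pieces of state every appliance holds and a simple selection over them. The data model and selection logic are hypothetical; this is not MEP’s wire format or Citrix’s actual algorithm.

```python
# Conceptual sketch of the shared state: the configured services plus the
# exchanged health and load metrics. Illustration only; not MEP or Citrix's
# selection algorithm.
from dataclasses import dataclass

@dataclass
class GslbService:
    site: str
    ip: str
    healthy: bool = True
    load: int = 0                     # e.g. active connections reported by the site

def pick_service(services):
    """Answer a query with the least-loaded healthy service."""
    healthy = [s for s in services if s.healthy]
    if not healthy:
        raise RuntimeError("no healthy GSLB service available")
    return min(healthy, key=lambda s: s.load)

# Because every appliance holds the same configuration and the same metrics,
# a client gets a consistent answer no matter which appliance it asks.
services = [
    GslbService("dc-east", "198.51.100.10", load=120),
    GslbService("dc-west", "198.51.100.20", load=45),
]
print(pick_service(services).ip)      # -> 198.51.100.20
```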

GSLB Configuration Consistency

As shown below, the IT admin performs the configuration update on the GSLB node identified as the master. The admin then pushes the desired GSLB configuration state to all the appliances. Each appliance compares the desired state with the current state of its local GSLB configuration. If there’s a difference, a configuration patch is applied and a status is pushed back to the master.
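A minimal sketch of that desired-state flow might look like the following; the config keys and structures are hypothetical, not the appliance’s actual configuration format.

```python
# Minimal sketch of the desired-state flow: diff the pushed desired state
# against the local config, apply only the difference, and report a status
# back to the master. Config keys and structures here are hypothetical.
def compute_patch(desired, current):
    """Return what must be added/updated and what must be removed."""
    to_update = {k: v for k, v in desired.items() if current.get(k) != v}
    to_remove = [k for k in current if k not in desired]
    return to_update, to_remove

def apply_desired_state(local_config, desired):
    to_update, to_remove = compute_patch(desired, local_config)
    if not to_update and not to_remove:
        return {"status": "in-sync", "changes": 0}
    for key in to_remove:
        del local_config[key]
    local_config.update(to_update)
    return {"status": "patched", "changes": len(to_update) + len(to_remove)}

# The master pushes the same desired state to every appliance and collects statuses.
desired = {
    "gslb_vserver:app":      {"lb_method": "LEASTCONNECTION"},
    "gslb_service:app-east": {"ip": "198.51.100.10", "port": 443},
}
local = {"gslb_vserver:app": {"lb_method": "ROUNDROBIN"}}
print(apply_desired_state(local, desired))   # -> {'status': 'patched', 'changes': 2}
```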

If the master node goes down because of a failure or is brought down for maintenance, the admin can push the configuration to another node, which takes on the master role and pushes the latest desired state to the other nodes, as shown below. You can configure more than one GSLB node for the master role in an active-passive mode, so the configuration is pushed to only one of the master nodes at any point in time.

After the old master comes back up, if any configuration changes happened while it was down, the latest desired GSLB configuration state is pushed to it. The old master patches the configuration difference and is then ready to take up the master role again for future updates.
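The master-role handling can be sketched the same way; the candidate names and the reachability check below are hypothetical stand-ins for how the admin’s tooling picks a push target.

```python
# Sketch of the active-passive master arrangement: the admin's tooling pushes
# configuration to exactly one reachable master candidate at a time.
# Candidate names and the reachability check are hypothetical.
def choose_push_target(candidates, is_reachable):
    """Return the first reachable master candidate; only it receives pushes."""
    for node in candidates:
        if is_reachable(node):
            return node
    raise RuntimeError("no master candidate is reachable")

candidates = ["gslb-master-1", "gslb-master-2"]       # active-passive order
master = choose_push_target(candidates, lambda n: n != "gslb-master-1")
print(master)   # -> gslb-master-2 takes over while gslb-master-1 is down
# When gslb-master-1 returns, pushing it the latest desired state is enough to
# patch the difference and make it ready to resume the master role.
```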

GSLB Sync Status

You can view the latest GSLB sync status by using the ‘show gslb syncstatus’ command. If the GSLB-SYNC-STATUS-FLIP alarm is enabled, admins are notified of every GSLB sync failure and can take the appropriate action based on those notifications.
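As an illustration of the status-flip idea behind that alarm, a simple watcher might look like this. get_sync_status() and notify() are hypothetical stand-ins for parsing the command output or receiving an SNMP trap; this is not a Citrix API.

```python
# Illustrative watcher for the status-flip idea: notify only when the sync
# result changes, not on every poll. get_sync_status() and notify() are
# hypothetical stand-ins supplied by the admin's own tooling.
import time

def watch_sync_status(get_sync_status, notify, interval=60):
    last = None
    while True:
        current = get_sync_status()               # e.g. "SUCCESS" or "FAILURE"
        if last is not None and current != last:  # status flipped
            notify(f"GSLB sync status changed: {last} -> {current}")
        last = current
        time.sleep(interval)
```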

Conclusion

With dynamic compute workloads becoming the norm, fault-tolerant, distributed control planes are more important than ever for automating configuration management, and Citrix ADC’s GSLB sync feature enables exactly that. Learn more about Citrix ADC and global server load balancing in our product documentation and on Citrix Tech Zone.