Kubernetes is a great platform for automatically managing your workloads within a container cluster. However, when it comes to getting user traffic into a Kubernetes cluster, there are myriad ways to do it, including the NodePort and LoadBalancer service types and Kubernetes Ingress. Each approach has its advantages and drawbacks. In this post, we’ll examine each of them.

Service Type: NodePort

The NodePort service type opens the same port on each of the worker nodes (VMs). Application developers can specify this port number, which by default must fall in the 30000 to 32767 range. This is a quick and convenient way to bring external traffic into the cluster for temporary demo use cases, and it works on both on-prem and public/private cloud-based Kubernetes. The traffic flow diagram below explains how this works.

The following YAML file exposes front-end-svc through port number 30080 on each K8s worker node. If you don’t specify this port number, Kubernetes will automatically assign one in the 30000 to 32767 range. To access the service, use the following syntax: http://<node-ip>:<nodePort>, e.g. http://1.1.1.1:30080.

apiVersion: v1
kind: Service
metadata:
  name: front-end-svc
  labels:
    name: front-end
spec:
  selector:
    app: cold-drinks-front-end
  type: NodePort
  ports:
  - name: http
    port: 80
    nodePort: 30080
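As a variation on the manifest above, here is a minimal sketch of the same hypothetical service with nodePort omitted, so Kubernetes assigns a port automatically:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: front-end-svc
spec:
  type: NodePort
  selector:
    app: cold-drinks-front-end
  ports:
  - name: http
    port: 80   # nodePort omitted; Kubernetes picks a free port from 30000-32767
```

You can then read the assigned port back from the created Service object.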

There are, however, several limitations to this approach:

  • A specific port number is defined for each service.
  • It requires the manual management of port numbers.
  • When worker node IPs change, the service URL or DNS needs to be updated.

Due to these limitations, the NodePort service type is seldom used in production scenarios.

Service Type: LoadBalancer

The LoadBalancer service type spins up a native cloud load balancer, depending on your cloud provider (e.g. a Google Network Load Balancer on GCP). This approach works only on public/private cloud-based Kubernetes. The LoadBalancer service type is a quick and convenient way to expose your services, and it is a popular method among cloud users: according to a recent CNCF survey, 67 percent of respondents use this approach. Here is how it works:

The following YAML file exposes front-end-svc through a native cloud load balancer. To access this service, use the following syntax: http://<ip-provisioned-by-cloud>, such as http://12.12.12.12.

kind: Service
apiVersion: v1
metadata:
  name: front-end-svc
spec:
  selector:
    app: cold-drinks-front-end
  ports:
  - protocol: TCP
    port: 80
  type: LoadBalancer
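The same pattern covers non-HTTP traffic. As a sketch, here is a hypothetical UDP service (a DNS workload and its labels are assumed for illustration) exposed through a cloud load balancer:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-svc          # hypothetical service name
spec:
  type: LoadBalancer
  selector:
    app: dns             # assumed pod label
  ports:
  - name: dns
    protocol: UDP
    port: 53
```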

This is a convenient and popular way to expose services of any type, including HTTP(S), TCP, UDP, and WebSockets. However, it has its downsides.

First, the cloud provisions an individual static IP for each service, which incurs an hourly cost. As the number of services increases, so does the cost, and cloud providers also limit the number of public IPs they will allocate to an account. Second, this approach works only with public/private cloud providers. For on-premises deployments there isn’t yet a production-ready solution, though the open source MetalLB project is gaining traction. MetalLB takes an IP address from a specified pool and assigns it to each service exposed with type: LoadBalancer. However, it requires very particular datacenter configuration and routing and may not be suitable for most deployments.
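For reference, MetalLB’s address pool is configured declaratively. Here is a minimal sketch using its layer-2 mode and the ConfigMap format of older MetalLB releases (newer releases configure this via CRDs instead; the address range is illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # illustrative on-prem address pool
```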

Ingress

The Kubernetes Ingress-based approach is the most powerful and most highly recommended solution for production applications. Like an application delivery controller (ADC), an Ingress-based solution gives you full traffic management, such as domain-name-based routing, rewrite/responder policies, and TLS encryption offload. Here is a simple traffic flow diagram that shows the functionality:

This YAML file exposes front-end-svc-1 and front-end-svc-2 on the same IP address (12.12.12.12). Routing is based on the domain name used: service1.colddrinks.com for front-end-svc-1 and service2.colddrinks.com for front-end-svc-2.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-vpx
  annotations:
    kubernetes.io/ingress.class: "vpx"
    ingress.citrix.com/frontend-ip: "12.12.12.12"
    ingress.citrix.com/secure_backend: '{"front-end-svc-1": "True","front-end-svc-2": "True"}'
spec:
  tls:
  - secretName: secretfile
  rules:
  - host: service1.colddrinks.com
    http:
      paths:
      - path: /
        backend:
          serviceName: front-end-svc-1
          servicePort: 443
  - host: service2.colddrinks.com
    http:
      paths:
      - path: /
        backend:
          serviceName: front-end-svc-2
          servicePort: 443

This Ingress object is read by an Ingress controller, which configures an ADC (Citrix or others) so it can start taking end-user traffic. The ADC can be inside or outside the Kubernetes cluster (more on this later). The Ingress-based approach is superior to the LoadBalancer and NodePort service types for several reasons. First, you can expose multiple applications through the same static IP address. Second, any production-grade application needs a versatile solution for inbound traffic use cases such as URL rewrites, responder policies, content switching based on URL or headers, and non-HTTP traffic for TCP/UDP-based apps. Finally, the Ingress-based approach works on both on-prem and public/private cloud-based Kubernetes.

As discussed above, the Ingress controller configures the desired traffic management and load balancing rules on the ADC, which can sit inside or outside the K8s cluster. Let’s look at how end-user traffic reaches the destination application pods in both scenarios. First, if the ADC is outside the Kubernetes cluster (for example, a hardware load balancer appliance), it has its own public IP address on which user traffic can land; based on the ADC’s configured rules, that traffic is routed to the pods that make up the Kubernetes service. Second, if the ADC is containerized, like the Citrix ADC CPX, it can be deployed inside the Kubernetes cluster (often as a Kubernetes service itself). In that case it is up to you how you bring traffic to the ADC, whether through a NodePort service or other routing methods (such as overlay networks), which gives you flexibility in controlling how traffic reaches it.
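For the in-cluster case, one hedged sketch of how traffic could reach a containerized ADC such as the Citrix ADC CPX is a NodePort service in front of it (the name and label below are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cpx-ingress-svc     # hypothetical service name
spec:
  type: NodePort
  selector:
    app: cpx-ingress        # assumed label on the ADC pods
  ports:
  - name: http
    port: 80
    nodePort: 30080
  - name: https
    port: 443
    nodePort: 30443
```

An external load balancer or DNS record would then point at the worker nodes on these ports.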

The Citrix Ingress Controller is a feature-rich Ingress controller that enables you to effectively manage and route Ingress traffic for production use cases (load balancing multiple applications, managing HTTP and TCP/UDP-based apps, rewrite/responder rules, etc.). You can also leverage your existing Citrix ADCs to manage inbound traffic to a Kubernetes cluster using the Citrix Ingress Controller.

This was an introduction to the front doors of Kubernetes: NodePort, LoadBalancer, and Ingress. In our next post, we’ll detail the full functionality of Citrix’s enterprise-grade Ingress solutions, including the Citrix Ingress Controller and Citrix ADC CPX.