Cilium Gateway Demo

In this post I will set up a Cilium Gateway on my lab kubernetes cluster, use Cilium L2Announcement to make the gateway available on the LAN, and set up plain HTTP access to an application running on the cluster.

If you are interested in the setup and demo of HTTPS access using cert-manager, please have a look at my separate post.

Prerequisite

  • GitOps is set up using fluxcd and GitLab
  • Five flux kustomizations set up:
    • ./clusters/lab-hlv3 as flux-system, created during the bootstrap process
    • ./infrastructure/lab-hlv3/controllers as infra-controllers
    • ./infrastructure/lab-hlv3/configs as infra-configs
    • ./apps/lab-hlv3 as apps
    • ./sops/lab-hlv3 as sops
  • In short, this is a continuation of my SOPS Setup post

As you can see from the directory names, the kubernetes cluster is named lab-hlv3. The domain name I will be using is lab.blink-1x52.net.

Steps

  • prepare the namespace
  • create the traefik/whoami deployment and service in the testbed namespace
  • review the cilium requirements to use the cilium gateway
  • install the Gateway API CRDs
  • create a dedicated namespace for the gateway
  • create the gateway
  • create the IP pool
  • create the L2 announcement
  • test

Preparing namespace

In my previous post on setting up GitOps, I created the "testbed" namespace to run my utils pod, and here it is.

# ./clusters/lab-hlv3/namespaces/testbed.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: testbed

whoami service to test HTTP access

https://github.com/traefik/whoami

Tiny Go webserver that prints OS information and HTTP request to output.

I am going to spin this up as a deployment and create a service to access it.

I am omitting some parameters, but in essence this creates a traefik/whoami container plus a service listening on port 80 that forwards to the whoami pod on port 80.

# ./apps/base/testbed/whoami.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
  namespace: testbed
spec:
  ports:
    - name: http
      targetPort: 80
      port: 80
  selector:
    app: whoami
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  namespace: testbed
  labels:
    app: whoami
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      hostname: whoami
      containers:
        - name: whoami
          image: traefik/whoami:v1.10.3
          imagePullPolicy: IfNotPresent

Here is my apps ks (flux kustomization).

# ./apps/lab-hlv3/kustomization.yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # testbed
  - ../base/testbed

And the testbed kustomization.

# ./apps/base/testbed/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - tools-deployment.yaml
  - whoami.yaml

What's created

This is what you will see after the reconciliation.

$ flux tree ks apps
Kustomization/flux-system/apps
├── Service/testbed/whoami
├── Deployment/testbed/tools
├── Deployment/testbed/whoami

$ kubectl get svc whoami -n testbed
NAME     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
whoami   ClusterIP   10.96.133.198   <none>        80/TCP    5d5h

Service access from a cluster member node

The services on the kubernetes cluster are accessible from any member of the cluster.

# check the cluster IP of the service whoami
$ kubectl get svc -n testbed
NAME     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
whoami   ClusterIP   10.96.133.198   <none>        80/TCP    5d4h

# access the service
$ curl 10.96.133.198
Hostname: whoami
IP: 127.0.0.1
IP: ::1
IP: 10.0.3.170
IP: fe80::e834:eff:fe1f:44f3
RemoteAddr: 10.0.3.163:55572
GET / HTTP/1.1
Host: 10.96.133.198
User-Agent: curl/7.76.1
Accept: */*

Cilium Gateway Requirements

https://docs.cilium.io/en/stable/network/servicemesh/gateway-api/gateway-api/#prerequisites

  • either NodePort enabled or kube-proxy replacement enabled
  • l7 proxy enabled
  • Kubernetes Gateway API v1.2.0 CRDs installed
    • (optional) experimental TLSRoute CRD installed

The Gateway API CRDs have not been installed yet, so this will be done next. All the other requirements were taken care of during the initial kubernetes cluster setup and cilium installation. Please refer to my other post for details.
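
If you want to double-check an existing installation against these requirements, the cilium-config ConfigMap is one place to look. This is a rough sketch; the key names (kube-proxy-replacement, enable-l7-proxy, enable-gateway-api, enable-l2-announcements) are what I expect for this cilium version, so verify them against your own ConfigMap.

# check the gateway-related settings in the cilium ConfigMap
$ kubectl get configmap cilium-config -n kube-system -o yaml \
    | grep -E "kube-proxy-replacement|enable-l7-proxy|enable-gateway-api|enable-l2-announcements"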

Gateway API CRDs

Kubernetes Gateway API CRDs are not installed on the cluster by default. I have downloaded the CRD files and included them in the infra-controllers flux kustomization.

Here is the infra-controllers ks file.

# ./infrastructure/lab-hlv3/controllers/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - cm-placeholder.yaml
  ### crds
  # gateway api
  # version 1.2.0
  - crds/gateway-api/standard/standard-install-v1.2.0.yaml
  - crds/gateway-api/experimental/gateway.networking.k8s.io_tlsroutes-v1.2.0.yaml

I have downloaded the standard Gateway API CRDs file and the experimental TLSRoute CRDs file and placed them at the paths below (see the download sketch after the list).

  • ./infrastructure/lab-hlv3/controllers/crds/gateway-api/standard/standard-install-v1.2.0.yaml
  • ./infrastructure/lab-hlv3/controllers/crds/gateway-api/experimental/gateway.networking.k8s.io_tlsroutes-v1.2.0.yaml
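
For reference, downloading them could look something like this. The URLs are my assumption based on the kubernetes-sigs/gateway-api v1.2.0 release assets and repository layout, so double-check them before use.

# standard Gateway API CRDs from the v1.2.0 release
$ curl -Lo standard-install-v1.2.0.yaml \
    https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.0/standard-install.yaml

# experimental TLSRoute CRD from the repository at the v1.2.0 tag
$ curl -Lo gateway.networking.k8s.io_tlsroutes-v1.2.0.yaml \
    https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/experimental/gateway.networking.k8s.io_tlsroutes.yaml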

Post CRDs installation

Once successfully installed, you can confirm the changes in kubectl get crds and kubectl api-resources.

$ kubectl api-resources | grep gateway
gatewayclasses                      gc                                  gateway.networking.k8s.io/v1             false        GatewayClass
gateways                            gtw                                 gateway.networking.k8s.io/v1             true         Gateway
grpcroutes                                                              gateway.networking.k8s.io/v1             true         GRPCRoute
httproutes                                                              gateway.networking.k8s.io/v1             true         HTTPRoute
referencegrants                     refgrant                            gateway.networking.k8s.io/v1beta1        true         ReferenceGrant
tlsroutes                                                               gateway.networking.k8s.io/v1alpha2       true         TLSRoute
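
kubectl get crds should now list them as well; filtering for the API group is an easy way to check (names only, printed with awk):

$ kubectl get crds | grep gateway.networking.k8s.io | awk '{print $1}'
gatewayclasses.gateway.networking.k8s.io
gateways.gateway.networking.k8s.io
grpcroutes.gateway.networking.k8s.io
httproutes.gateway.networking.k8s.io
referencegrants.gateway.networking.k8s.io
tlsroutes.gateway.networking.k8s.io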

Cilium values file

Just to recap my cilium installation options, here is the list of changes made to the values file for cilium helm chart version 1.17.1 (the gateway-related options are also shown as a nested values snippet after the list). Note that I have changed the domain name from example.net to the actual domain name in order to work on cert-manager on this lab cluster later.

  • k8sServiceHost: lab-kube-endpoint.lab.example.net
  • k8sServicePort: "8443"
  • k8sClientRateLimit.qps: 33
  • k8sClientRateLimit.burst: 50
  • kubeProxyReplacement: "true"
  • kubeProxyReplacementHealthzBindAddr: "0.0.0.0:10256"
  • l2announcements.enabled: true
  • l2announcements.leaseDuration: 3s
  • l2announcements.leaseRenewDeadline: 1s
  • l2announcements.leaseRetryPeriod: 200ms
  • externalIPs.enabled: true
  • gatewayAPI.enabled: true
  • etcd.enabled: true
  • etcd.ssl: true
  • etcd.endpoints: ["https://192.0.2.5:2379", "https://192.0.2.6:2379", "https://192.0.2.7:2379"] # dummy ipaddr here
  • hubble.ui.enabled: true
  • hubble.relay.enabled: true
  • hubble.peerService.clusterDomain: lab.blink-1x52.net
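
For the gateway-related options, the equivalent helm values YAML would look roughly like this; it is just the list above rewritten as a nested snippet, not the full values file.

# excerpt of the cilium helm values relevant to this post
kubeProxyReplacement: "true"
gatewayAPI:
  enabled: true
externalIPs:
  enabled: true
l2announcements:
  enabled: true
  leaseDuration: 3s
  leaseRenewDeadline: 1s
  leaseRetryPeriod: 200ms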

Dedicated namespace for the cilium gateway

I'd like to create a dedicated namespace to run my cilium gateway.

# ./clusters/lab-hlv3/namespaces/gateway.yaml
---
kind: Namespace
apiVersion: v1
metadata:
  name: gateway
  labels:
    service: gateway
    type: infrastructure

Creating the gateway

Let me create a new gateway using cilium.

This gateway uses the GatewayClass "cilium", which is automatically created by cilium; you can confirm this by running kubectl describe gc (a quick check is sketched after the manifest). I have the dummy IP address 192.0.2.83 set here, and the plain HTTP listener for "whoami-kube.lab.blink-1x52.net" is only available to namespaces that have the "gateway: cilium" label set.

# ./infrastructure/lab-hlv3/configs/cilium/gateway.yaml
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: cilium-gateway
  namespace: gateway
spec:
  gatewayClassName: cilium
  addresses:
    - type: IPAddress
      value: 192.0.2.83
  listeners:
    - name: whoami-kube-http
      hostname: whoami-kube.lab.blink-1x52.net
      port: 80
      protocol: HTTP
      allowedRoutes:
        namespaces:
          from: Selector
          selector:
            matchLabels:
              gateway: cilium
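
As for the GatewayClass check mentioned above, it could look something like this; treat the output as an approximation, though the controller name is the one I would expect from cilium.

$ kubectl get gatewayclass
NAME     CONTROLLER                     ACCEPTED   AGE
cilium   io.cilium/gateway-controller   True       5d5h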

And so let me also update the existing testbed namespace like this. By adding the "gateway: cilium" label, I can set up web access to the services in this namespace using the gateway.

# ./clusters/lab-hlv3/namespaces/testbed.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: testbed
  labels:
    service: testbed
    type: app
    gateway: cilium

IP Pool and L2Announcement for the Cilium Gateway

Once the gateway has been successfully created, you will see both the gateway and a service created for it. Run kubectl get gtw -n gateway and kubectl get svc -n gateway to confirm.

Next, I want to make the gateway available on the LAN. I am going to do the same things I did to set up access to the Cilium Hubble UI (covered in a separate post).

  • create an IP pool to assign an IP address to the gateway
  • create an L2Announcement to make the gateway service available on the LAN

Here is the IP pool (./infrastructure/lab-hlv3/configs/cilium/ippools-cilium-gateway.yaml, with a dummy IP address). This makes the IP pool available to all the services in the gateway namespace.

---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "ippool-gateway"
spec:
  blocks:
    - start: "192.0.2.83"
      stop: "192.0.2.83"
  serviceSelector:
    matchLabels:
      "io.kubernetes.service.namespace": "gateway"

Here is the L2Announcement (./infrastructure/lab-hlv3/configs/cilium/l2announcement-cilium-gateway.yaml). I am using a slightly different service selector condition here: I checked the labels set on the gateway service and used the same labels in the selector (a quick way to check them is sketched after the manifest). I have included several interface patterns because I have different server models and OSes with different NIC names; it is fine to list just "eth0" if all the kubernetes cluster members only have an eth0 interface.

---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumL2AnnouncementPolicy
metadata:
  name: l2-cilium-gateway
spec:
  serviceSelector:
    matchLabels:
      io.cilium.gateway/owning-gateway: cilium-gateway
      gateway.networking.k8s.io/gateway-name: cilium-gateway
  interfaces:
    - ^eth[0-9]+
    - ^eno[0-9]+
    - ^enp[0-9]s[0-9]+
  loadBalancerIPs: true
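
To see which labels the gateway service actually carries, something along these lines works; the service name is the one cilium generated for my gateway (shown later in the output), and --show-labels is a standard kubectl flag.

# inspect the labels on the service cilium created for the gateway
$ kubectl get svc cilium-gateway-cilium-gateway -n gateway --show-labels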

infra-configs ks

The final infra-configs flux kustomization file looks like this.

# ./infrastructure/lab-hlv3/configs/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - cm-placeholder.yaml
  # cilium gateway
  - cilium/gateway.yaml
  - cilium/ippools-cilium-gateway.yaml
  - cilium/l2announcement-cilium-gateway.yaml

Gateway with IP address

Here is the resulting gateway (and svc) with the dummy IP address. When you check the ARP information (arp -a on Windows, for example), you will see this external IP address in the ARP table with the same MAC address as whichever kubernetes cluster member is currently responsible for the IP address. A Linux equivalent of the check is sketched after the output below.

$ kubectl get svc,gtw -n gateway
NAME                                    TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                      AGE
service/cilium-gateway-cilium-gateway   LoadBalancer   10.96.87.48   192.0.2.83   80:31710/TCP,443:30231/TCP   5d5h

NAME                                               CLASS    ADDRESS        PROGRAMMED   AGE
gateway.gateway.networking.k8s.io/cilium-gateway   cilium   192.0.2.83   True         5d18h
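
On a Linux host on the same LAN, an equivalent check with ip neigh could look like this (using the dummy IP address from this post):

# which MAC address is currently answering ARP for the gateway IP
$ ip neigh show | grep 192.0.2.83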

HTTPRoute to access whoami service

The gateway is created, and the whoami service is also created. Next I need to create an HTTPRoute that connects the gateway and the service.

I would like to add the HTTPRoute to the existing whoami.yaml file, but since I have multiple clusters using the same GitOps repository and some of them do not have a gateway implemented, I place it outside the common ./apps/base directory instead.

  • parentRefs pointing to the listener on the cilium-gateway created in the gateway namespace
  • backendRefs pointing to the service named "whoami" on port 80

# ./apps/lab-hlv3/httproutes/whoami-http.yaml
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: whoami-http
  namespace: testbed
spec:
  parentRefs:
    - name: cilium-gateway
      sectionName: whoami-kube-http
      namespace: gateway
  hostnames:
    - "whoami-kube.lab.blink-1x52.net"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: whoami
          port: 80

And I have it included in the apps kustomization like this.

# ./apps/lab-hlv3/kustomization.yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # testbed
  - ../base/testbed
  # http routes
  - httproutes/whoami-http.yaml

Accessing whoami service through cilium gateway

And here it is!

$ curl http://whoami-kube.lab.blink-1x52.net
Hostname: whoami
IP: 127.0.0.1
IP: ::1
IP: 10.0.3.170
IP: fe80::e834:eff:fe1f:44f3
RemoteAddr: 10.0.3.174:39409
GET / HTTP/1.1
Host: whoami-kube.lab.blink-1x52.net
User-Agent: curl/8.5.0
Accept: */*
X-Envoy-Internal: true
X-Forwarded-For: IP_ADDR_OF_HOST_EXECUTING_CURL
X-Forwarded-Proto: http
X-Request-Id: 96f39cec-ce39-4c26-97d3-216dae13b9d0

Once this is set up, adding new web access is just a matter of updating the gateway manifest with another listener and creating an HTTPRoute for the service. If the service is in a namespace other than testbed, remember to set the label "gateway: cilium" on that namespace so that the cilium-gateway in the gateway namespace accepts the HTTPRoute. A rough sketch of what that could look like follows.
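
Purely as an illustration, exposing a hypothetical service "another-app" in the testbed namespace would mean adding a listener to the gateway and a matching HTTPRoute roughly like this (the hostname, names, and file path are made up for this sketch):

# additional listener appended under spec.listeners in ./infrastructure/lab-hlv3/configs/cilium/gateway.yaml
    - name: another-app-http
      hostname: another-app.lab.blink-1x52.net
      port: 80
      protocol: HTTP
      allowedRoutes:
        namespaces:
          from: Selector
          selector:
            matchLabels:
              gateway: cilium

# hypothetical ./apps/lab-hlv3/httproutes/another-app-http.yaml
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: another-app-http
  namespace: testbed
spec:
  parentRefs:
    - name: cilium-gateway
      sectionName: another-app-http
      namespace: gateway
  hostnames:
    - "another-app.lab.blink-1x52.net"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: another-app
          port: 80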