Kubernetes GitOps using fluxcd

This post covers setting up GitOps with fluxcd to manage my homelab kubernetes cluster.

The version control system in this setup is my self-hosted GitLab, also running in my homelab environment.

I will do one demo: deploying an image used to run quick troubleshooting commands.

I will also introduce my GitOps repository directory structure.

ToC

  • create a project used to manage kubernetes cluster through GitOps
  • create a project access token with owner role and api scope
    • export necessary variables used in the flux bootstrap command
  • install flux cli on a host with access to the cluster using kubectl
  • execute flux bootstrap
  • observe the changes in the repository and the cluster
  • demo
  • introduction to my GitOps repository structure

Creating a project

I have a group named "gitops" and a project named "homelab".

Project access token

Navigate to "settings" > "access tokens" to create a new access token specifically for the project.

Do not worry about the expiration date. This token gets used just once when bootstrapping the cluster with the git repository.

  • give it a name
  • set owner role and api scope

https://YOUR_GITLAB_HOST/gitops/homelab/-/settings/access_tokens

Most recently I created new access tokens named "hlv3" and "lab-hlv3" with the owner role and api scope set. These are for my latest kubernetes clusters, built using my ansible project for an HA cluster with an external etcd cluster.

Now, once you have the token, export it in the session, on the host, where you will install flux and run the bootstrap.

export GITLAB_TOKEN={access_token_string_here}
export GITLAB_SERVER={YOUR_GITLAB_HOST}
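Before moving on, a quick sanity check helps, since the bootstrap will fail with a less obvious error if either variable is missing. Here is a bash sketch; the require_vars helper and the placeholder values are my own illustration, not part of flux:

```shell
# require_vars: fail fast when any named environment variable is unset or empty.
require_vars() {
  for v in "$@"; do
    if [ -z "$(printenv "$v")" ]; then
      echo "ERROR: $v is not set" >&2
      return 1
    fi
  done
  echo "all variables set: $*"
}

# Demo with placeholder values; in a real session these are your actual
# token string and GitLab hostname, exported as shown above.
export GITLAB_TOKEN="placeholder-token"
export GITLAB_SERVER="gitlab.example.net"
require_vars GITLAB_TOKEN GITLAB_SERVER
```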

Install flux

https://fluxcd.io/flux/installation/

Install flux on the host that will run the bootstrap command, typically the very host you usually use to run kubectl against the cluster.

curl -s https://fluxcd.io/install.sh | sudo bash

You may want to add flux completions to your shell. In my case I add this to ~/.bashrc.

. <(flux completion bash)

Bootstrap

https://fluxcd.io/flux/installation/bootstrap/gitlab/

Below is the command executed to bootstrap my lab kubernetes cluster with the "gitops/homelab" repository on my self-hosted GitLab. There are a few differences from what's described in the link above, so let me cover those.

  • --owner=gitops and --repository=homelab to use the "gitops/homelab" repository
  • together with --hostname="$GITLAB_SERVER", the resulting URL is "https://GITLAB_SERVER/gitops/homelab.git"
  • --branch=main so that flux treats the repository content in the main branch as the source of truth for the kubernetes cluster
  • --path=./clusters/lab-hlv3 makes this path in the repository the installation location
    • the flux components installed are themselves declarative; their manifests are written and pushed to the specified repository path
  • --cluster-domain=lab.example.net tells flux the cluster domain
    • by default, the kubernetes cluster domain is "cluster.local", and so is the flux bootstrap default

# do not forget to have these variables set/exported
#export GITLAB_TOKEN={token_string_here}
#export GITLAB_SERVER={YOUR_GITLAB_HOST}

flux bootstrap gitlab \
  --deploy-token-auth \
  --hostname="$GITLAB_SERVER" \
  --owner=gitops \
  --repository=homelab \
  --path=./clusters/lab-hlv3 \
  --branch=main \
  --cluster-domain=lab.example.net

# if you mess up anything and want to start over, run uninstall
#
# flux uninstall
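A note on --cluster-domain: flux only needs to be told the domain; the domain itself is configured when the cluster is built. As a rough sketch of where it lives (an illustration of the usual kubeadm/CoreDNS setup, not copied from my actual configs), a custom domain like lab.example.net typically appears in the kubelet configuration as clusterDomain and in the CoreDNS Corefile:

```yaml
# Abridged sketch of the kube-system/coredns ConfigMap for a cluster whose
# domain is lab.example.net; a real Corefile carries more plugins.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes lab.example.net in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
    }
```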

Observe the cluster and repository

cluster bootstrapped

Let's first look at the cluster.

Here is the list of things created.

$ kubectl get all -n flux-system
NAME                                           READY   STATUS    RESTARTS   AGE
pod/helm-controller-654c4c4c64-2wtbp           1/1     Running   0          19h
pod/kustomize-controller-55ff9444cd-kkkxm      1/1     Running   0          19h
pod/notification-controller-58ffd586f7-94vkp   1/1     Running   0          19h
pod/source-controller-5b6b6d555c-5dpv5         1/1     Running   0          19h

NAME                              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/notification-controller   ClusterIP   10.96.22.37    <none>        80/TCP    19h
service/source-controller         ClusterIP   10.96.166.23   <none>        80/TCP    19h
service/webhook-receiver          ClusterIP   10.96.197.23   <none>        80/TCP    19h

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/helm-controller           1/1     1            1           19h
deployment.apps/kustomize-controller      1/1     1            1           19h
deployment.apps/notification-controller   1/1     1            1           19h
deployment.apps/source-controller         1/1     1            1           19h

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/helm-controller-654c4c4c64           1         1         1       19h
replicaset.apps/kustomize-controller-55ff9444cd      1         1         1       19h
replicaset.apps/notification-controller-58ffd586f7   1         1         1       19h
replicaset.apps/source-controller-5b6b6d555c         1         1         1       19h

Here is the list of api-resources added.

$ kubectl api-resources --namespaced=true | grep flux
helmreleases                hr             helm.toolkit.fluxcd.io/v2                true         HelmRelease
kustomizations              ks             kustomize.toolkit.fluxcd.io/v1           true         Kustomization
alerts                                     notification.toolkit.fluxcd.io/v1beta3   true         Alert
providers                                  notification.toolkit.fluxcd.io/v1beta3   true         Provider
receivers                                  notification.toolkit.fluxcd.io/v1        true         Receiver
buckets                                    source.toolkit.fluxcd.io/v1              true         Bucket
gitrepositories             gitrepo        source.toolkit.fluxcd.io/v1              true         GitRepository
helmcharts                  hc             source.toolkit.fluxcd.io/v1              true         HelmChart
helmrepositories            helmrepo       source.toolkit.fluxcd.io/v1              true         HelmRepository
ocirepositories             ocirepo        source.toolkit.fluxcd.io/v1beta2         true         OCIRepository

On newer versions of flux, there is a preview command still under development, flux tree, which lists the resources flux has reconciled.

$ flux tree ks flux-system
Kustomization/flux-system/flux-system
├── CustomResourceDefinition/alerts.notification.toolkit.fluxcd.io
├── CustomResourceDefinition/buckets.source.toolkit.fluxcd.io
├── CustomResourceDefinition/gitrepositories.source.toolkit.fluxcd.io
├── CustomResourceDefinition/helmcharts.source.toolkit.fluxcd.io
├── CustomResourceDefinition/helmreleases.helm.toolkit.fluxcd.io
├── CustomResourceDefinition/helmrepositories.source.toolkit.fluxcd.io
├── CustomResourceDefinition/kustomizations.kustomize.toolkit.fluxcd.io
├── CustomResourceDefinition/ocirepositories.source.toolkit.fluxcd.io
├── CustomResourceDefinition/providers.notification.toolkit.fluxcd.io
├── CustomResourceDefinition/receivers.notification.toolkit.fluxcd.io
├── Namespace/flux-system
├── ResourceQuota/flux-system/critical-pods-flux-system
├── ServiceAccount/flux-system/helm-controller
├── ServiceAccount/flux-system/kustomize-controller
├── ServiceAccount/flux-system/notification-controller
├── ServiceAccount/flux-system/source-controller
├── ClusterRole/crd-controller-flux-system
├── ClusterRole/flux-edit-flux-system
├── ClusterRole/flux-view-flux-system
├── ClusterRoleBinding/cluster-reconciler-flux-system
├── ClusterRoleBinding/crd-controller-flux-system
├── Service/flux-system/notification-controller
├── Service/flux-system/source-controller
├── Service/flux-system/webhook-receiver
├── Deployment/flux-system/helm-controller
├── Deployment/flux-system/kustomize-controller
├── Deployment/flux-system/notification-controller
├── Deployment/flux-system/source-controller
├── NetworkPolicy/flux-system/allow-egress
├── NetworkPolicy/flux-system/allow-scraping
├── NetworkPolicy/flux-system/allow-webhooks
└── GitRepository/flux-system/flux-system

repository

You will notice the commits pushed by flux using the project access token with api scope.

$ find . -not -path "*/\.git/*" | sed -e "s/[^-][^\/]*\// |/g" -e "s/|\([^ ]\)/|-\1/"
|-clusters
 | |-lab-hlv3
 | | |-flux-system
 | | | |-kustomization.yaml
 | | | |-gotk-sync.yaml
 | | | |-gotk-components.yaml

Also, if you navigate to "settings" > "repository" > "deploy tokens", you will see a new deploy token created with no expiration date and the read_repository scope set. The installed flux components use this read-only deploy token to watch for changes pushed to the repository and apply updates to the kubernetes cluster; that is the GitOps loop.

Demo

As a demo, let me get a pod running.

Instead of just placing a pod manifest at ./clusters/lab-hlv3/demo-pod.yaml, I will be preparing directories per my preference. Follow along for now, and I will explain more in the next section.

namespace

mkdir ./clusters/lab-hlv3/namespaces

cat <<'EOF' > ./clusters/lab-hlv3/namespaces/testbed.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: testbed
EOF

# commit and push

Now if you commit and push, flux will pick up this change and create the namespace named testbed.
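The "# commit and push" step above is plain git. Here is a self-contained sketch of it in a throwaway repository so it is runnable anywhere; in your real checkout you would run the same add and commit, followed by a push to your GitLab remote:

```shell
# Create a throwaway repository and commit the namespace manifest, mirroring
# the commit-and-push step (the push is omitted since there is no remote here).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
mkdir -p clusters/lab-hlv3/namespaces
cat <<'EOF' > clusters/lab-hlv3/namespaces/testbed.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: testbed
EOF
git add clusters/lab-hlv3/namespaces/testbed.yaml
git -c user.name=demo -c user.email=demo@example.net commit -q -m "add testbed namespace"
# In the real repository, follow with: git push origin main
git log --oneline
```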

placeholders

Let us continue on.

cat <<'EOF' > ./clusters/lab-hlv3/namespaces/placeholder.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: placeholder
EOF


mkdir -p ./infrastructure/lab-hlv3/configs
mkdir -p ./infrastructure/lab-hlv3/controllers
mkdir -p ./apps/base/testbed
mkdir -p ./apps/lab-hlv3

cat <<'EOF' > infrastructure/lab-hlv3/controllers/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - cm-placeholder.yaml
EOF


cat <<'EOF' > infrastructure/lab-hlv3/controllers/cm-placeholder.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: controllers-placeholder
  namespace: placeholder
data:
  thisis: placeholder
EOF


cat <<'EOF' > infrastructure/lab-hlv3/configs/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - cm-placeholder.yaml
EOF


cat <<'EOF' > infrastructure/lab-hlv3/configs/cm-placeholder.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configs-placeholder
  namespace: placeholder
data:
  thisis: placeholder
EOF


cat <<'EOF' > apps/lab-hlv3/kustomization.yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base/testbed
EOF


cat <<'EOF' > apps/base/testbed/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deploy-tools.yaml
EOF


cat <<'EOF' > apps/base/testbed/deploy-tools.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tools
  namespace: testbed
  labels:
    app: tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tools
  template:
    metadata:
      labels:
        app: tools
    spec:
      hostname: tools
      containers:
        - name: tools
          image: registry.k8s.io/e2e-test-images/agnhost:2.39
          imagePullPolicy: IfNotPresent
          command: ["tail"]
          args: ["-f", "/dev/null"]
EOF

flux kustomization for infrastructure and apps

Here is the important part I will touch upon in the next section.

cat <<'EOF' > clusters/lab-hlv3/infrastructure.yaml
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: infra-controllers
  namespace: flux-system
spec:
  interval: 1m0s
  path: ./infrastructure/lab-hlv3/controllers
  prune: true
  wait: true
  sourceRef:
    kind: GitRepository
    name: flux-system
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: infra-configs
  namespace: flux-system
spec:
  dependsOn:
    - name: infra-controllers
  interval: 1h
  retryInterval: 1m
  timeout: 5m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./infrastructure/lab-hlv3/configs
  prune: true
EOF


cat <<'EOF' > clusters/lab-hlv3/apps.yaml
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m0s
  dependsOn:
    - name: infra-configs
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./apps/lab-hlv3
  prune: true
  wait: true
  timeout: 5m0s
EOF

demo using the tools pod in testbed namespace

If you are familiar with docker exec, you can see below that you can do the same with kubernetes.

Here is an example of running nslookup in a pod inside the kubernetes cluster.

I used the image from the dnsutils example in the kubernetes documentation, but you can run a different image equipped with troubleshooting tools.

$ kubectl exec deploy/tools -n testbed -- nslookup zenn.dev.
Server:         10.96.0.10
Address:        10.96.0.10#53

Non-authoritative answer:
Name:   zenn.dev
Address: 104.26.14.203
Name:   zenn.dev
Address: 172.67.72.220
Name:   zenn.dev
Address: 104.26.15.203
Name:   zenn.dev
Address: 2606:4700:20::681a:fcb
Name:   zenn.dev
Address: 2606:4700:20::681a:ecb
Name:   zenn.dev
Address: 2606:4700:20::ac43:48dc

$ kubectl exec deploy/tools -n testbed -- nslookup kubernetes.default.svc.lab.example.net.
;; Got recursion not available from 10.96.0.10
Server:         10.96.0.10
Address:        10.96.0.10#53

Name:   kubernetes.default.svc.lab.example.net
Address: 10.96.0.1
;; Got recursion not available from 10.96.0.10
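If you prefer an image with a fuller troubleshooting toolbox, only the image line of the tools deployment needs to change. A hypothetical drop-in for the containers section of apps/base/testbed/deploy-tools.yaml (nicolaka/netshoot is just one example image, not something this setup depends on):

```yaml
# containers section of the tools deployment with a different tools image
containers:
  - name: tools
    image: nicolaka/netshoot:latest  # example image; any tools image works
    imagePullPolicy: IfNotPresent
    command: ["tail"]
    args: ["-f", "/dev/null"]
```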

GitOps repository structure introduction

That's all for the demo. Let me explain the GitOps repository directory structure and what was set up in the steps above.

The root of the flux GitOps setup is the flux kustomization named "flux-system", which synchronizes the declarative cluster state stored at ./clusters/lab-hlv3. As mentioned earlier, I could place the pod manifest there directly, and add as many things as I want, but you can easily imagine things getting out of hand.

There is the fluxcd official guide introducing different ways of setting up your GitOps repository.

https://fluxcd.io/flux/guides/repository-structure/

Mine is no different from what's explained there. Following the reference guide, I have setup layers of flux kustomizations with dependencies.

  • apps ks (flux kustomization) watches the kustomization at ./apps/lab-hlv3
    • and it depends on the infra-configs ks
  • infra-configs ks watches the kustomization at ./infrastructure/lab-hlv3/configs
    • and it depends on the infra-controllers ks
  • infra-controllers ks watches the kustomization at ./infrastructure/lab-hlv3/controllers

$ flux get ks
NAME                    REVISION                SUSPENDED       READY   MESSAGE
apps                    main@sha1:b1a6b397      False           True    Applied revision: main@sha1:b1a6b397
flux-system             main@sha1:b1a6b397      False           True    Applied revision: main@sha1:b1a6b397
infra-configs           main@sha1:b1a6b397      False           True    Applied revision: main@sha1:b1a6b397
infra-controllers       main@sha1:b1a6b397      False           True    Applied revision: main@sha1:b1a6b397

So 1) you prepare the infrastructure components, 2) configure them, and then 3) spin up your applications or workloads on top.

example with a little bit more on top

At the end of this section is the directory listing after I implemented the cilium gateway (which I hope to cover in a separate post). I have not cleaned up the placeholders scattered everywhere, but that's okay...

flux-system

I have added a dedicated namespace "gateway" in ./clusters/lab-hlv3/namespaces/gateway.yaml. I have also made some modifications to the testbed namespace manifest to make things work; I'll save the details for the separate post on the cilium gateway implementation.

Commit those changes, and the gateway namespace will be created and the desired modifications will be applied to the testbed namespace as declared in the files.

infra-controllers

Though it is not processed by flux, I must first mention my cilium helm chart values file at ./infrastructure/lab-hlv3/controllers/values/cilium-values.yaml. Anything that changes the cluster should be tracked in this repository, so I have copied in the values file I used to install the cilium network add-on right after the cluster init.

If I install more infrastructure components using helm charts, their values files will be stored there, and the infra-controllers kustomization will gain helm repository and helm release resources.

Now, as for the cilium gateway implementation, I have placed the Gateway API CRDs v1.2.0 and edited the kustomization file to include the two files.

infra-configs

As with the cilium helm chart installation, I have stored the original and modified coredns configmaps at ./infrastructure/lab-hlv3/configs/coredns. They do not need to be added to the infra-configs ks, but if I want to make further changes, I'll place a file here and include it in the infra-configs kustomization file.

As for the cilium gateway implementation, the necessary manifests are at ./infrastructure/lab-hlv3/configs/cilium: the gateway itself, IPAM to allocate an IP address for the gateway, and an L2 announcement to announce that IP address on my LAN.

apps

A traefik/whoami deployment and service manifest is added under the testbed app to get the pod running and expose it through a service.

The HTTPRoute for whoami is placed at ./apps/lab-hlv3/httproutes/whoami.yaml to connect the cilium gateway to the whoami service.

And lastly, just to show a different way of operating, you can place a "watch this dedicated repository for app A" kind of manifest, so the owner of the application stays in control of how and when the application changes, independently of the infrastructure management.

The gitlab-report and news-logger are the apps developed and managed in separate repositories on my self-hosted GitLab.
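To illustrate that pattern, a repo.yaml like the following would point flux at a dedicated app repository. The URL, path, and intervals here are hypothetical, not copied from my actual manifests:

```yaml
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: gitlab-report
  namespace: flux-system
spec:
  interval: 5m
  url: https://YOUR_GITLAB_HOST/apps/gitlab-report.git
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: gitlab-report
  namespace: flux-system
spec:
  interval: 10m
  path: ./deploy
  prune: true
  sourceRef:
    kind: GitRepository
    name: gitlab-report
```

A private repository would additionally need a secretRef on the GitRepository pointing at credentials (for example a deploy token), which I leave out here.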

tree output

$ tree
.
 |-clusters
 | |-lab-hlv3
 | | |-flux-system
 | | | |-kustomization.yaml
 | | | |-gotk-sync.yaml
 | | | |-gotk-components.yaml
 | | |-apps.yaml
 | | |-namespaces
 | | | |-gateway.yaml
 | | | |-placeholder.yaml
 | | | |-testbed.yaml
 | | |-infrastructure.yaml
 |-readme.md
 |-sops
 |-.git
 |-infrastructure
 | |-lab-hlv3
 | | |-configs
 | | | |-kustomization.yaml
 | | | |-coredns
 | | | | |-cm-coredns.yaml
 | | | | |-coredns-modified.yaml
 | | | |-cm-placeholder.yaml
 | | | |-cilium
 | | | | |-ippools-cilium-gateway.yaml
 | | | | |-gateway.yaml
 | | | | |-l2announcement-cilium-gateway.yaml
 | | |-controllers
 | | | |-crds
 | | | | |-gateway-api
 | | | | | |-experimental
 | | | | | | |-gateway.networking.k8s.io_tlsroutes-v1.2.0.yaml
 | | | | | |-standard
 | | | | | | |-standard-install-v1.2.0.yaml
 | | | | | |-readme.md
 | | | |-kustomization.yaml
 | | | |-values
 | | | | |-cilium-values.yaml
 | | | |-cm-placeholder.yaml
 | | | |-default-values
 | | | | |-cilium-1.17.2-values.yaml
 |-apps
 | |-lab-hlv3
 | | |-httproutes
 | | | |-whoami.yaml
 | | |-kustomization.yaml
 | |-base
 | | |-testbed
 | | | |-kustomization.yaml
 | | | |-whoami.yaml
 | | | |-readme.md
 | | | |-tools-deployment.yaml
 | | |-news-logger
 | | | |-repo.yaml
 | | |-gitlab-report
 | | | |-repo.yaml