building homelab cluster part 4

In part 3 I installed the Kubernetes Gateway API v1.0.0 and NGINX Gateway Fabric as the controller behind the gateway.

In this part, I will set up DirectPV to provide persistent volumes, and use that disk space to run a MinIO S3 tenant. The S3 API will be made available through NGINX Gateway Fabric (in the next part, where I will set up cert-manager).

DirectPV

https://min.io/directpv

DirectPV is a CSI driver for Direct Attached Storage. In a simpler sense, it is a distributed persistent volume manager, and not a storage system like SAN or NAS.

directpv plugin installation

https://github.com/minio/directpv/blob/master/docs/installation.md#installation-of-release-binary

Krew has long had no way to pin versions: you can install, upgrade, and uninstall plugins with it, but you cannot specify which version to install.
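For reference, this is what managing the plugin with krew looks like; none of these commands accept a version pin.

kubectl krew install directpv
kubectl krew upgrade directpv
kubectl krew uninstall directpv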

Instead, I download the binary directly from the repository so that I have control over which version of directpv to install.

# find the latest release
DIRECTPV_RELEASE=$(curl -sfL "https://api.github.com/repos/minio/directpv/releases/latest" | awk '/tag_name/ { print substr($2, 3, length($2)-4) }')
curl -fLo kubectl-directpv https://github.com/minio/directpv/releases/download/v${DIRECTPV_RELEASE}/kubectl-directpv_${DIRECTPV_RELEASE}_linux_amd64
chmod a+x kubectl-directpv

sudo mv kubectl-directpv /usr/local/bin/.
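A quick sanity check that kubectl can find the freshly installed plugin:

kubectl plugin list | grep directpv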

node labels

Installation of directpv to the cluster can be done on selected nodes as described in the document.

https://github.com/minio/directpv/blob/master/docs/installation.md#installing-on-selected-nodes

I have 3 nodes with disks I'd like to use as directpv drives, and I already set the label on them in part 1.
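In case the label needs to be added by hand, the equivalent one-off command would look like this (node name from my cluster; adjust to yours):

kubectl label node livaz2 app.kubernetes.io/part-of=directpv

Listing the labels on one of the nodes confirms it is there: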

kubectl label --list nodes livaz2
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=livaz2
kubernetes.io/os=linux
kustomize.toolkit.fluxcd.io/name=flux-system
kustomize.toolkit.fluxcd.io/namespace=flux-system
app.kubernetes.io/part-of=directpv
beta.kubernetes.io/arch=amd64

directpv installation to the cluster

Now the installation could be done by simply running kubectl-directpv install --node-selector app.kubernetes.io/part-of=directpv, but instead I will generate the manifest and keep it in the GitOps repository.

cd {gitops repo}/infrastructure/homelab/controllers/crds

# generate the directpv installation manifest, v4.0.10 as of this writing
kubectl-directpv install --node-selector app.kubernetes.io/part-of=directpv -o yaml > directpv-v${DIRECTPV_RELEASE}.yaml

This manifest creates and installs a lot of objects. The node-selector setting I need ends up on the DaemonSet.

$ grep ^kind directpv-v4.0.10.yaml
kind: Namespace
kind: ServiceAccount
kind: ClusterRole
kind: ClusterRoleBinding
kind: Role
kind: RoleBinding
kind: CustomResourceDefinition
kind: CustomResourceDefinition
kind: CustomResourceDefinition
kind: CustomResourceDefinition
kind: CSIDriver
kind: StorageClass
kind: DaemonSet
kind: Deployment
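For reference, in the generated manifest the selector sits in the DaemonSet pod template, roughly like this excerpt sketch (not the full object):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-server
  namespace: directpv
spec:
  template:
    spec:
      nodeSelector:
        app.kubernetes.io/part-of: directpv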

Here is the diff from the original. The defaults are fine for, say, VMs on Hyper-V, but when you have Raspberry Pi or other arm64 nodes in your cluster, you may need to switch some of the images from the minio registry to the k8s sig-storage registry, which publishes arm64 builds.

ref) https://github.com/minio/directpv/issues/592#issuecomment-1134143827

<           # image: quay.io/minio/csi-node-driver-registrar@sha256:c805fdc166761218dc9478e7ac8e0ad0e42ad442269e75608823da3eb761e67e
<           # https://github.com/kubernetes-csi/node-driver-registrar
<           image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0
---
>           image: quay.io/minio/csi-node-driver-registrar@sha256:c805fdc166761218dc9478e7ac8e0ad0e42ad442269e75608823da3eb761e67e
1110,1112c1108
<           # image: quay.io/minio/livenessprobe@sha256:f3bc9a84f149cd7362e4bd0ae8cd90b26ad020c2591bfe19e63ff97aacf806c3
<           # https://github.com/kubernetes-csi/livenessprobe
<           image: registry.k8s.io/sig-storage/livenessprobe:v2.12.0
---
>           image: quay.io/minio/livenessprobe@sha256:f3bc9a84f149cd7362e4bd0ae8cd90b26ad020c2591bfe19e63ff97aacf806c3

And I update infra-controllers kustomization to include this directpv installation manifest.

diff --git a/infrastructure/homelab/controllers/kustomization.yaml b/infrastructure/homelab/controllers/kustomization.yaml
index 5b7303d..19e1fd8 100644
--- a/infrastructure/homelab/controllers/kustomization.yaml
+++ b/infrastructure/homelab/controllers/kustomization.yaml
@@ -3,6 +3,7 @@ kind: Kustomization
 resources:
   # CRDs
   - crds/gateway-v1.0.0.yaml
+  - crds/directpv-v4.0.10.yaml
   # infra-controllers
   - metallb.yaml

Here is the result.

$ kubectl get ns
NAME               STATUS   AGE
calico-apiserver   Active   6d19h
calico-system      Active   6d19h
default            Active   6d19h
directpv           Active   2s
flux-system        Active   5d3h
kube-node-lease    Active   6d19h
kube-public        Active   6d19h
kube-system        Active   6d19h
metallb            Active   43h
ngf                Active   3h26m
tigera-operator    Active   6d19h

$ kubectl get csidrivers
NAME              ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY   TOKENREQUESTS   REQUIRESREPUBLISH   MODES                  AGE
csi.tigera.io     true             true             false             <unset>         false               Ephemeral              6d19h
directpv-min-io   false            true             false             <unset>         false               Persistent,Ephemeral   47s

$ kubectl get ds,deploy,rs -n directpv
NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                        AGE
daemonset.apps/node-server   1         1         1       1            1           app.kubernetes.io/part-of=directpv   12m

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/controller   3/3     3            3           12m

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/controller-697fb86954   3         3         3       12m

directpv drive setup

Running kubectl-directpv discover discovers the available disk drives on the nodes labeled "app.kubernetes.io/part-of=directpv" and writes its findings to the drives.yaml file. It may pick up drives you do not want to use, so edit the file and remove them. Once only the desired drives are listed in drives.yaml, run kubectl-directpv init drives.yaml --dangerous to have directpv format them and repurpose them to serve disk space in response to PVCs.
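The flow, in short (the edit step is manual and up to you):

# discover drives on the labeled nodes; findings are written to drives.yaml
kubectl-directpv discover

# keep only the drives directpv should take over
vi drives.yaml

# format the listed drives and hand them over to directpv (destructive)
kubectl-directpv init drives.yaml --dangerous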

Here is the log from a run I did on the Hyper-V cluster.

$ kubectl-directpv init drives.yaml --dangerous

 ███████████████████████████████████████████████████████████████████████████ 100%

 Processed initialization request 'b490b74d-703f-4488-898e-21362626f7b9' for node 'vworker5' ✔

┌──────────────────────────────────────┬──────────┬───────┬─────────┐
│ REQUEST_ID                           │ NODE     │ DRIVE │ MESSAGE │
├──────────────────────────────────────┼──────────┼───────┼─────────┤
│ b490b74d-703f-4488-898e-21362626f7b9 │ vworker5 │ sdb   │ Success │
└──────────────────────────────────────┴──────────┴───────┴─────────┘

I am going to store this drives.yaml file as ./infrastructure/homelab/configs/directpv/drives.yaml just for my own reference. It is not a Kubernetes manifest file, so I will not have Flux process it.

usb ssd drive

By the way, the drives should be formatted (with ext4, for example) and must not be in use or mounted.

# confirm the device
sudo fdisk -l

# create a partition anew, assuming the device is at /dev/sdb
sudo fdisk /dev/sdb
d  # delete existing partitions if any; repeat until there are none
n  # to create a new partition
w  # to write and exit fdisk program

# mkfs
sudo mkfs.ext4 /dev/sdb1

# then directpv discover should be able to find it, and init should be able to reformat this drive
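After a successful init, the drive should show up in the drive listing:

kubectl-directpv list drives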

Minio

https://min.io/

MinIO is an object storage solution that provides an Amazon Web Services S3-compatible API and supports all core S3 features. MinIO is built to deploy anywhere - public or private cloud, baremetal infrastructure, orchestrated environments, and edge infrastructure.

minio operator deployment using helm

https://min.io/docs/minio/kubernetes/upstream/operations/install-deploy-manage/deploy-operator-helm.html

cd ./infrastructure/homelab/controllers

# get minio helm repository
helm repo add minio-operator https://operator.min.io

# check available charts and their version
helm search repo minio-operator

# create values file
# I'm skipping this as the default is fine to deploy minio-operator
### helm show values minio-operator/operator > minio-operator-values.yaml

I will again prepare a shell script to generate the minio-operator manifest, like I did for metallb and ngf. I am also creating both the minio-operator and minio-tenant namespaces here.

Note the additional "gateway-available" label on the minio-tenant namespace, since console access and S3 service access will go through NGF.
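The namespace manifest itself is tiny. A sketch of the minio-tenant one (the label value here is my assumption; only the key comes from the gateway setup):

./clusters/homelab/namespace/minio-tenant.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: minio-tenant
  labels:
    gateway-available: "true"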

minio-operator.sh
#!/bin/bash

# add flux helmrepo
flux create source helm minio \
        --url=https://operator.min.io \
        --interval=1h0m0s \
        --export >minio-operator.yaml

# add flux helm release
flux create helmrelease minio-operator \
        --interval=10m \
        --target-namespace=minio-operator \
        --source=HelmRepository/minio \
        --chart=operator \
        --chart-version=5.0.12 \
        --export >>minio-operator.yaml

Generate ./infrastructure/homelab/controllers/minio-operator.yaml by running the script, and then update the kustomization to include the minio-operator manifest.
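Running it is just a matter of executing the script from the controllers directory; it writes minio-operator.yaml next to itself:

cd ./infrastructure/homelab/controllers
bash ./minio-operator.sh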

diff --git a/infrastructure/homelab/controllers/kustomization.yaml b/infrastructure/homelab/controllers/kustomization.yaml
index 19e1fd8..a3d2af2 100644
--- a/infrastructure/homelab/controllers/kustomization.yaml
+++ b/infrastructure/homelab/controllers/kustomization.yaml
@@ -8,3 +8,4 @@ resources:
   - sops.yaml
   - metallb.yaml
   - ngf.yaml
+  - minio-operator.yaml

Here is the result.

$ flux get source helm minio
NAME    REVISION        SUSPENDED       READY   MESSAGE
minio   sha256:a85473b4 False           True    stored artifact: revision 'sha256:a85473b4'

$ flux get hr minio-operator
NAME            REVISION        SUSPENDED       READY   MESSAGE
minio-operator  5.0.12          False           True    Helm install succeeded for release minio-operator/minio-operator-minio-operator.v1 with chart operator@5.0.12

$ kubectl get all -n minio-operator
NAME                                 READY   STATUS    RESTARTS   AGE
pod/console-6d5fb84464-49764         1/1     Running   0          35s
pod/minio-operator-9d788785b-n244m   1/1     Running   0          35s
pod/minio-operator-9d788785b-zjhqs   1/1     Running   0          35s

NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
service/console    ClusterIP   10.96.150.5      <none>        9090/TCP,9443/TCP   35s
service/operator   ClusterIP   10.101.100.163   <none>        4221/TCP            35s
service/sts        ClusterIP   10.99.18.177     <none>        4223/TCP            35s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/console          1/1     1            1           35s
deployment.apps/minio-operator   2/2     2            2           35s

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/console-6d5fb84464         1         1         1       35s
replicaset.apps/minio-operator-9d788785b   2         2         2       35s

minio tenant deployment using helm

https://min.io/docs/minio/kubernetes/upstream/operations/install-deploy-manage/deploy-minio-tenant-helm.html

Since the chart is in the same Helm repository added in the previous step, I will go ahead and prepare and edit the values file.

helm show values minio-operator/tenant > minio-tenant-values.yaml
  • existingSecret is named minio-tenant-secret
    • I create this secret with accessKey and secretKey in the sops repo
  • configuration name is left as is, and I create a secret named myminio-env-configuration just like the example
  • changed server and volume count from 4 to 1
  • changed volume size from 10Gi to 180Gi
  • nodeSelector to select the node with the largest directpv disk
  • set storageClassName to "directpv-min-io", which was created earlier by the directpv installation
  • set requestAutoCert to false to run without TLS
diff minio-tenant-values.yaml default-values/minio-tenant-values.yaml
19,21c19,21
<   # name: myminio-env-configuration
<   # accessKey: minio
<   # secretKey: minio123
---
>   name: myminio-env-configuration
>   accessKey: minio
>   secretKey: minio123
35,36c35,36
<   existingSecret:
<     name: minio-tenant-secret
---
>   #existingSecret:
>   #  name: myminio-env-configuration
96c96
<     - servers: 1
---
>     - servers: 4
102c102
<       volumesPerServer: 1
---
>       volumesPerServer: 4
105c105
<       size: 180Gi
---
>       size: 10Gi
112c112
<       storageClassName: directpv-min-io
---
>       # storageClassName: standard
134,135c134
<       nodeSelector:
<         directpv.min.io/node: livaz2
---
>       nodeSelector: {}
225c224
<     requestAutoCert: false
---
>     requestAutoCert: true

And I am going to create a new secret with the accessKey and secretKey in the sops repository, encrypted with sops -i --encrypt minio-tenant-secret.yaml.

./clusters/homelab/minio-tenant/minio-tenant-secret.yaml
kubectl -n minio-tenant create secret generic minio-tenant-secret \
  --from-literal=accessKey=USERNAME_HERE \
  --from-literal=secretKey=PASSWORD_HERE \
  --dry-run=client \
  -o yaml > minio-tenant-secret.yaml
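Then encrypt it in place before committing. The --encrypted-regex flag is optional and just my way of limiting encryption to the secret data, assuming the repository's .sops.yaml rules do not already do so:

sops --encrypt --encrypted-regex '^(data|stringData)$' -i minio-tenant-secret.yaml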

I also create the env file as described in the values file.

./clusters/homelab/minio-tenant/myminio-env-configuration.yaml
apiVersion: v1
kind: Secret
metadata:
    name: myminio-env-configuration
    namespace: minio-tenant
type: Opaque
stringData:
    config.env: |-
        export MINIO_ROOT_USER=ROOTUSERNAME_HERE
        export MINIO_ROOT_PASSWORD=ROOTUSERPASSWORD_HERE

And back in the gitops/homelab repo, here is another script to generate the manifest for the MinIO tenant. This one only covers the minio-tenant HelmRelease manifest.

minio-tenant.sh
#!/bin/bash

# add flux helm release
flux create helmrelease minio-tenant \
    --interval=10m \
    --target-namespace=minio-tenant \
    --source=HelmRepository/minio \
    --chart=tenant \
    --chart-version=5.0.12 \
    --values=minio-tenant-values.yaml \
    --export >minio-tenant.yaml

And generate the minio-tenant manifest and include it in the kustomization.

diff --git a/infrastructure/homelab/controllers/kustomization.yaml b/infrastructure/homelab/controllers/kustomization.yaml
index a3d2af2..2cdb540 100644
--- a/infrastructure/homelab/controllers/kustomization.yaml
+++ b/infrastructure/homelab/controllers/kustomization.yaml
@@ -9,3 +9,4 @@ resources:
   - metallb.yaml
   - ngf.yaml
   - minio-operator.yaml
+  - minio-tenant.yaml

Here is the result.

$ kubectl get all -n minio-tenant
NAME                   READY   STATUS    RESTARTS   AGE
pod/myminio-pool-0-0   2/2     Running   0          16m

NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/minio             ClusterIP   10.97.120.17   <none>        80/TCP     16m
service/myminio-console   ClusterIP   10.105.76.58   <none>        9090/TCP   16m
service/myminio-hl        ClusterIP   None           <none>        9000/TCP   16m

NAME                              READY   AGE
statefulset.apps/myminio-pool-0   1/1     16m

Persistent disk space is available for the tenant, served by directpv.

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                 STORAGECLASS      VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-058876e7-9d30-4f88-a640-83d4b01fad38   180Gi      RWO            Delete           Bound    minio-tenant/data0-myminio-pool-0-0   directpv-min-io   <unset>                          45h

$ kubectl get pvc -n minio-tenant
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
data0-myminio-pool-0-0   Bound    pvc-058876e7-9d30-4f88-a640-83d4b01fad38   180Gi      RWO            directpv-min-io   <unset>                 45h

$ kubectl-directpv list volumes
┌──────────────────────────────────────────┬──────────┬────────┬───────┬──────────────────┬──────────────┬─────────┐
│ VOLUME                                   │ CAPACITY │ NODE   │ DRIVE │ PODNAME          │ PODNAMESPACE │ STATUS  │
├──────────────────────────────────────────┼──────────┼────────┼───────┼──────────────────┼──────────────┼─────────┤
│ pvc-058876e7-9d30-4f88-a640-83d4b01fad38 │ 180 GiB  │ livaz2 │ sdb2  │ myminio-pool-0-0 │ minio-tenant │ Bounded │
└──────────────────────────────────────────┴──────────┴────────┴───────┴──────────────────┴──────────────┴─────────┘

web access to the minio tenant console

Service "myminio-console" listening on port 9090 is the access to the minio tenant web console. I will use NGF setup in part 3 to make it accessible on LAN.

./infrastructure/homelab/configs/minio-tenant.yaml
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: minio-console
  namespace: minio-tenant
spec:
  parentRefs:
    - name: gateway
      sectionName: http
      namespace: gateway
  hostnames:
    - "tenant-mc.blink-1x52.net"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: myminio-console
          port: 9090
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: minio-service
  namespace: minio-tenant
spec:
  parentRefs:
    - name: gateway
      sectionName: http
      namespace: gateway
  hostnames:
    - "s3.blink-1x52.net"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: minio
          port: 80

Update the infra-config kustomization to include minio-tenant.yaml.

./infrastructure/homelab/configs/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - metallb-config.yaml
  - gateway.yaml
  - minio-tenant.yaml

And this will create httproutes.

$ kubectl get httproutes -n minio-tenant
NAME            HOSTNAMES                             AGE
minio-console   ["tenant-mc.blink-1x52.net"]   5m51s
minio-service   ["s3.blink-1x52.net"]          5m51s
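Until LAN DNS for these hostnames is in place, a quick reachability check is possible by pinning the hostname to the gateway address with curl (the IP below is a placeholder for the NGF gateway's LoadBalancer IP from metallb):

GATEWAY_IP=192.168.0.240   # placeholder: use your gateway LoadBalancer IP
curl -s -o /dev/null -w '%{http_code}\n' --resolve tenant-mc.blink-1x52.net:80:${GATEWAY_IP} http://tenant-mc.blink-1x52.net/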

access to console

Plain HTTP access to the console is available now. Here is a screenshot from my other lab.

minio-console-web-access-80
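The S3 route can be exercised the same way, for example with the MinIO client. The alias name and credentials below are placeholders, and this assumes s3.blink-1x52.net resolves to the gateway on the LAN:

mc alias set homelab http://s3.blink-1x52.net ACCESSKEY_HERE SECRETKEY_HERE
mc mb homelab/test-bucket
mc ls homelab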

repository structure so far

gitops/homelab repository
.
 |-clusters
 | |-homelab
 | | |-nodes                       # add node labels for directpv
 | | | |-node-livaz2-label.yaml
 | | | |-node-ak3v-label.yaml
 | | | |-node-nb5-label.yaml
 | | |-namespace
 | | | |-minio-operator.yaml       # namespace for minio-operator
 | | | |-minio-tenant.yaml         # namespace for minio-tenant
 |-infrastructure
 | |-homelab
 | | |-configs
 | | | |-kustomization.yaml        # include minio-tenant
 | | | |-minio-tenant.yaml         # add HTTPRoutes to setup web access to minio tenant console and s3 service on plain http
 | | | |-directpv
 | | | | |-drives.yaml             # directpv drives init configuration yaml file, not k8s resource manifest file
 | | |-controllers
 | | | |-kustomization.yaml        # include minio operator and tenant
 | | | |-minio-tenant-values.yaml  # values file for minio-tenant
 | | | |-minio-tenant.sh           # script to generate flux helmrelease for minio-tenant
 | | | |-minio-operator.yaml       # flux minio helmrepo and minio-operator helmrelease manifests
 | | | |-crds
 | | | | |-directpv-v4.0.10.yaml   # directpv installation
 | | | |-minio-tenant.yaml         # helmrelease for minio-tenant
 | | | |-minio-operator.sh         # script to generate minio flux helmrepo and minio-operator helmrelease manifests