longhorn
Cloud native distributed block storage for Kubernetes
testing on hyper-v kubernetes cluster
- create x4 debian VMs
- Debian 12.4 netinst, ssh headless
- 2G memory, dynamic
- x4 vcpu
- 32GB disk
- lvm, guided
- hostnames host1, host2, host3, and host4
- install things required for kubernetes
- install packages to use longhorn
apt install open-iscsi -y
systemctl enable iscsid --now
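The "install things required for kubernetes" step above includes the kernel prerequisites from the official kubeadm install guide; a sketch of the two config files involved (paths per the upstream docs):

```
# /etc/modules-load.d/k8s.conf — modules containerd/kubeadm need
overlay
br_netfilter

# /etc/sysctl.d/k8s.conf — required sysctl settings
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
```

Apply with modprobe overlay; modprobe br_netfilter; sysctl --system, and remember to disable swap (swapoff -a and comment the swap line in /etc/fstab) before kubeadm init.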
- kubeadm init to form a cluster
- install calico as network addon
- install metallb
- install nginx gateway fabric
- install longhorn
- add gateway and httproute in longhorn-system namespace for longhorn-frontend svc to access longhorn UI
- use ldns to create appropriate fqdn for the longhorn UI
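The gateway/httproute step can be sketched as the manifest below. The hostname longhorn.example.internal and the nginx gateway class name are assumptions; adjust them to match your nginx gateway fabric install and the fqdn created with ldns:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: longhorn-gateway
  namespace: longhorn-system
spec:
  gatewayClassName: nginx            # class installed by nginx gateway fabric
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: Same
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: longhorn-ui
  namespace: longhorn-system
spec:
  parentRefs:
  - name: longhorn-gateway
  hostnames:
  - longhorn.example.internal        # assumption: the fqdn created with ldns
  rules:
  - backendRefs:
    - name: longhorn-frontend        # the longhorn UI service
      port: 80
```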
- create x3 32GB disks and attach them to VMs host2, host3, and host4
- prep and mount additional disk
fdisk -l
to confirm the device name (/dev/sd?)
- assuming it's /dev/sdb...
- initialize by deleting any existing partitions using fdisk (sudo fdisk /dev/sdb, then d to delete partitions and w to write and exit)
- create an LVM physical volume
pvcreate /dev/sdb
- try sudo wipefs --all /dev/sdb if pvcreate fails
- create the volume group, logical volume, and ext4 filesystem
vgcreate vg2 /dev/sdb
lvcreate -n disk2 -l 100%FREE vg2
mkfs.ext4 /dev/vg2/disk2
- check the mapper
ls -1 /dev/mapper
- edit /etc/fstab and add an entry for the new volume
/dev/mapper/{mapper name for the new lvm} /mnt/disk2 ext4 errors=remount-ro 0 1
- create the mount point and mount it
mkdir -p /mnt/disk2
systemctl daemon-reload
mount -a
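Device-mapper exposes the logical volume as /dev/mapper/{vg}-{lv}, so with the vg2/disk2 names used above the fstab entry can be assembled as below (a sketch; note that hyphens inside a vg or lv name would be doubled in the mapper name):

```shell
# Assemble the fstab entry for the vg2/disk2 volume created above.
vg=vg2
lv=disk2
mnt=/mnt/disk2
# device-mapper names the node /dev/mapper/{vg}-{lv}
entry="/dev/mapper/${vg}-${lv} ${mnt} ext4 errors=remount-ro 0 1"
echo "$entry"
```

Compare the echoed line against ls -1 /dev/mapper before writing it into /etc/fstab.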
- access Longhorn UI
- navigate to "node" tab
- find "edit node and disks" under the operation column in the table for hosts 2, 3, and 4
- set scheduling to "disabled" for the default disk
- add a disk, specifying a storage reservation of 0Gi and the mount point /mnt/disk2, and enable it for scheduling
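The disk changes the UI applies land on Longhorn's nodes.longhorn.io custom resource, so the same edit can be sketched via kubectl. Field names follow Longhorn's Node CRD; the disk names shown (default-disk, disk2) are placeholders, and the real default disk name includes a generated suffix:

```yaml
# kubectl -n longhorn-system edit nodes.longhorn.io host2
spec:
  allowScheduling: true
  disks:
    default-disk:              # the original disk, scheduling disabled
      path: /var/lib/longhorn/
      allowScheduling: false
    disk2:                     # the added disk on the new mount
      path: /mnt/disk2
      allowScheduling: true
      storageReserved: 0
```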
- as for host1,
- navigate to "volume" tab
- create a new volume v1 with 8Gi
- attach to host2
- create PV/PVC from the operation menu of the volume
kubectl get pv and kubectl get pvc -A to confirm
- navigate to "node" tab
Test further by adding a pod on a different host that uses the PVC created.
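That further test could look like the pod below — assuming the UI created the PVC as v1 in the default namespace (check with kubectl get pvc), with a nodeSelector pinning the pod to another host:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: v1-test
spec:
  nodeSelector:
    kubernetes.io/hostname: host3   # schedule on a different host
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
    volumeMounts:
    - name: vol
      mountPath: /data
  volumes:
  - name: vol
    persistentVolumeClaim:
      claimName: v1                 # assumption: PVC created from volume v1
```

Note a volume attached through the UI must be detached first before a pod can use it.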
removing disk
https://longhorn.io/docs/1.8.0/maintenance/maintenance/#removing-a-disk
- on the frontend UI
- navigate to the node menu and open the "edit node and disks" menu
- disable scheduling on the disk to remove
- evict all replicas on the disk by setting "eviction requested" to true, then save and wait
- confirm that the replica count goes down to 0 for this disk
removing node
https://longhorn.io/docs/1.8.0/maintenance/maintenance/#removing-a-node
- on the frontend UI
- navigate to the node menu and open the "edit node and disks" menu
- set "node scheduling" to disabled and "eviction requested" to true, then save and wait
- confirm that the replica count goes down to 0 for this node
- all the workloads should be migrated to other nodes
- select the node and choose "delete" from the menu
- the "delete" button is grayed out, with hint text saying the node must be deleted from the kubernetes cluster first
- if you delete the node with kubectl delete node {node_to_delete}, it automatically goes away from the longhorn UI as well
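The linked maintenance docs also recommend draining the node before removing it from Kubernetes; a sketch, with host4 as an example node to remove:

```shell
# cordon the node and evict remaining workloads
kubectl drain host4 --ignore-daemonsets --delete-emptydir-data
# then delete it; the node disappears from the longhorn UI as well
kubectl delete node host4
```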