kubernetes basic operation


basic operation

kubectl on macos

https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/

cd ~/dnld
# back up the previously downloaded binary, if any
mv kubectl kubectl.bak
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl"
# curl -LO "https://dl.k8s.io/release/v1.30.0/bin/darwin/arm64/kubectl"

# checksum
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl.sha256"
echo "$(cat kubectl.sha256)  kubectl" | shasum -a 256 --check

chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
sudo chown root: /usr/local/bin/kubectl

kubectl version --client

pods not in RUNNING status

kubectl get pods --field-selector status.phase!=Running -A -o wide
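
To clean up pods that have already failed, the same field selector can be combined with delete. A minimal sketch, scoped to a single namespace ({namespace here} is a placeholder):

kubectl -n {namespace here} delete pods --field-selector=status.phase=Failed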

manually restart a deployment

You may want to redo the deployment of a service when multiple pods of the service are running on a single node (and, in addition, you may want to review and revise the deployment manifest so that the pods are not all assigned to a single node).

Before actually force-restarting the deployment, you want to understand how the deployment strategy and replica count are configured.

In this example on coredns, there are two replicas, and the strategy is set to allow a 25% surge and 1 unavailable instance. If you manually force-restart the coredns deployment, one pod will remain running until the new ones become available for service.

The service managed by a deployment may become temporarily unavailable if the deployment manifest is written in a way that allows the pod count to drop to zero during a rollout.
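
To check whether the coredns pods have actually landed on the same node before restarting, something like the following should work (assuming the standard k8s-app=kube-dns label used on kubeadm clusters):

kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide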

 kubectl get deploy coredns -n kube-system -o jsonpath='{.spec.replicas}'
2
 kubectl get deploy coredns -n kube-system -o jsonpath='{.spec.strategy}'
{"rollingUpdate":{"maxSurge":"25%","maxUnavailable":1},"type":"RollingUpdate"}

Here is an example command to restart the coredns deployment.

kubectl -n kube-system rollout restart deployment coredns
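
To follow the progress of the restart, the rollout status subcommand can be used:

kubectl -n kube-system rollout status deployment coredns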

flux reconciliation ready-status stuck in false

Run suspend and resume if there is no issue with the actual Helm installation.

flux suspend hr loki
flux resume hr loki
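
Checking the current status of the HelmReleases first should help confirm where reconciliation is stuck, for example:

flux get helmreleases -A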

delete pvc/pv

When volume resources cannot be deleted, patch the finalizers of the stuck PVC (or PV) to null.

# list of PVC
$ kubectl get pvc -A
NAMESPACE      NAME                                 STATUS        VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
memcached      data-memcached-chunk-memcached-0     Bound         pvc-6de9c3e0-b859-49f4-b5f0-2436d76d82eb   8Gi        RWO            directpv-min-io   <unset>                 98m
memcached      data-memcached-chunk-memcached-1     Bound         pvc-b84b7a37-c6aa-47e8-81a6-6f76987bf20e   8Gi        RWO            directpv-min-io   <unset>                 98m
memcached      data-memcached-chunk-memcached-2     Pending                                                                            directpv-min-io   <unset>                 61m
memcached      data-memcached-results-memcached-0   Pending                                                                            directpv-min-io   <unset>                 61m
memcached      data-memcached-results-memcached-1   Pending                                                                            directpv-min-io   <unset>                 61m
memcached      data-memcached-results-memcached-2   Terminating   pvc-a302e890-2bf1-4f62-866e-64f88525d001   8Gi        RWO            directpv-min-io   <unset>                 98m
minio-tenant   data0-myminio-pool-0-0               Bound         pvc-738c376c-62ef-4ff5-9172-1cffe190ecae   180Gi      RWO            directpv-min-io   <unset>                 44h
monitoring     prometheus-k8s-db-prometheus-k8s-0   Bound         pvc-eb0df6e9-941e-44b8-8b61-4078c2e5b253   80Gi       RWO            directpv-min-io   <unset>                 22h

# successful deletion of a PVC
$ kubectl -n memcached delete pvc data-memcached-results-memcached-1
persistentvolumeclaim "data-memcached-results-memcached-1" deleted

# successful deletion of a PVC
$ kubectl -n memcached delete pvc data-memcached-results-memcached-0
persistentvolumeclaim "data-memcached-results-memcached-0" deleted

# deletion stuck
$ kubectl -n memcached delete pvc data-memcached-results-memcached-2
persistentvolumeclaim "data-memcached-results-memcached-2" deleted


^C $
$ kubectl get pvc -A
NAMESPACE      NAME                                 STATUS        VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
memcached      data-memcached-chunk-memcached-0     Bound         pvc-6de9c3e0-b859-49f4-b5f0-2436d76d82eb   8Gi        RWO            directpv-min-io   <unset>                 99m
memcached      data-memcached-chunk-memcached-1     Bound         pvc-b84b7a37-c6aa-47e8-81a6-6f76987bf20e   8Gi        RWO            directpv-min-io   <unset>                 99m
memcached      data-memcached-chunk-memcached-2     Pending                                                                            directpv-min-io   <unset>                 62m
memcached      data-memcached-results-memcached-2   Terminating   pvc-a302e890-2bf1-4f62-866e-64f88525d001   8Gi        RWO            directpv-min-io   <unset>                 99m
minio-tenant   data0-myminio-pool-0-0               Bound         pvc-738c376c-62ef-4ff5-9172-1cffe190ecae   180Gi      RWO            directpv-min-io   <unset>                 44h
monitoring     prometheus-k8s-db-prometheus-k8s-0   Bound         pvc-eb0df6e9-941e-44b8-8b61-4078c2e5b253   80Gi       RWO            directpv-min-io   <unset>                 22h

# patch the finalizer
$ kubectl -n memcached patch pvc data-memcached-results-memcached-2 -p '{"metadata":{"finalizers":null}}'
persistentvolumeclaim/data-memcached-results-memcached-2 patched

# the pvc is gone
$ kubectl get pvc -A
NAMESPACE      NAME                                 STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
memcached      data-memcached-chunk-memcached-0     Bound     pvc-6de9c3e0-b859-49f4-b5f0-2436d76d82eb   8Gi        RWO            directpv-min-io   <unset>                 103m
memcached      data-memcached-chunk-memcached-1     Bound     pvc-b84b7a37-c6aa-47e8-81a6-6f76987bf20e   8Gi        RWO            directpv-min-io   <unset>                 103m
memcached      data-memcached-chunk-memcached-2     Pending                                                                        directpv-min-io   <unset>                 66m
minio-tenant   data0-myminio-pool-0-0               Bound     pvc-738c376c-62ef-4ff5-9172-1cffe190ecae   180Gi      RWO            directpv-min-io   <unset>                 44h
monitoring     prometheus-k8s-db-prometheus-k8s-0   Bound     pvc-eb0df6e9-941e-44b8-8b61-4078c2e5b253   80Gi       RWO            directpv-min-io   <unset>                 22h
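
If the backing PV itself remains stuck in Terminating, the same kind of finalizer patch should work against the PV, for example using the volume name from the stuck PVC above:

kubectl patch pv pvc-a302e890-2bf1-4f62-866e-64f88525d001 -p '{"metadata":{"finalizers":null}}'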

delete pods

kubectl -n memcached delete pods memcached-results-memcached-2 --grace-period=0 --force
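
If a pod remains stuck in Terminating even after a force delete, clearing its finalizers with the same kind of patch used for the PVC above is a last resort:

kubectl -n memcached patch pod memcached-results-memcached-2 -p '{"metadata":{"finalizers":null}}'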

remove node from cluster

https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/

# confirm list of nodes
kubectl get nodes -o wide

# confirm pods running on a specific node
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName={node name here}

# drain the node
kubectl drain --ignore-daemonsets {node name here}

# something like flux may raise a local-storage warning and ask you to set an additional argument
# the "--force" option could also be used as necessary
kubectl drain --ignore-daemonsets --delete-emptydir-data {node name here}

# delete the node
kubectl delete node {node name here}

# confirm
kubectl get nodes -o wide

Here are some clean-up commands to run on the node removed from the cluster.

sudo kubeadm reset
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
sudo ipvsadm -C
sudo rm -rf /etc/cni
rm -rf ~/.kube

removing a node with directpv

Remove the drive on the node, and then run the directpv remove-node.sh script.

https://github.com/minio/directpv/blob/master/docs/drive-management.md#remove-drives

kubectl-directpv remove --drives=sdb1 --nodes=nb5

https://github.com/minio/directpv/blob/master/docs/node-management.md#delete-node

Download the script and run, for example, ./remove-node.sh nb5.

upgrading kubernetes cluster

https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

  • [ ] upgrade primary control plane node
  • [ ] upgrade other control plane nodes if available
  • [ ] upgrade other worker nodes

# confirm available versions
sudo apt-cache madison kubeadm

# check which packages are currently on hold
sudo apt-mark showhold

# upgrade kubeadm to the target version (1.29.3 in this example)
sudo apt-mark unhold kubeadm && \
sudo apt-get update && sudo apt-get install -y kubeadm='1.29.3-1.1' && \
sudo apt-mark hold kubeadm

# verify the version
kubeadm version

# verify the upgrade plan on the primary control plane
sudo kubeadm upgrade plan

# upgrade on the primary control plane
# the upgrade command to execute will be given by the upgrade plan command
sudo kubeadm upgrade apply v1.29.3

# on the other control planes
sudo kubeadm upgrade node

# upgrade kubelet and kubectl
# replace 1.29.3-1.1 below with the target patch version as needed
sudo apt-mark unhold kubelet kubectl && \
sudo apt-get update && sudo apt-get install -y kubelet='1.29.3-1.1' kubectl='1.29.3-1.1' && \
sudo apt-mark hold kubelet kubectl

# and restart kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet

Log - note that the step to drain the node is not executed

# upgrade plan on the primary control plane
❯ sudo kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.29.2
[upgrade/versions] kubeadm version: v1.29.3
[upgrade/versions] Target version: v1.29.3
[upgrade/versions] Latest version in the v1.29 series: v1.29.3

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     5 x v1.29.2   v1.29.3

Upgrade to the latest version in the v1.29 series:

COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.29.2    v1.29.3
kube-controller-manager   v1.29.2    v1.29.3
kube-scheduler            v1.29.2    v1.29.3
kube-proxy                v1.29.2    v1.29.3
CoreDNS                   v1.11.1    v1.11.1
etcd                      3.5.10-0   3.5.12-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.29.3

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

# upgrade apply on the primary control plane
❯ sudo kubeadm upgrade apply v1.29.3
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.29.3"
[upgrade/versions] Cluster version: v1.29.2
[upgrade/versions] kubeadm version: v1.29.3
[upgrade] Are you sure you want to proceed? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.29.3" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-02-05-03-02/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 2 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests4235679551"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-02-05-03-02/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 2 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-02-05-03-02/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 2 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-02-05-03-02/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 2 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config1384036076/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[upgrade/addons] skip upgrade addons because control plane instances [livaz2] have not been upgraded

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.29.3". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

# on other control planes
❯ sudo apt-mark unhold kubeadm && \
sudo apt-get update && sudo apt-get install -y kubeadm='1.29.3-1.1' && \
sudo apt-mark hold kubeadm

Canceled hold on kubeadm.
Hit:1 http://deb.debian.org/debian bookworm InRelease
Hit:2 http://deb.debian.org/debian bookworm-updates InRelease
Hit:3 http://security.debian.org/debian-security bookworm-security InRelease
Hit:4 https://download.docker.com/linux/debian bookworm InRelease
Hit:5 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.29/deb  InRelease
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages will be upgraded:
  kubeadm
1 upgraded, 0 newly installed, 0 to remove and 20 not upgraded.
Need to get 10.1 MB of archives.
After this operation, 94.2 kB of additional disk space will be used.
Get:1 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.29/deb  kubeadm 1.29.3-1.1 [10.1 MB]
Fetched 10.1 MB in 1s (19.9 MB/s)
apt-listchanges: Reading changelogs...
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
        LANGUAGE = "en_US:en",
        LC_ALL = (unset),
        LC_CTYPE = "UTF-8",
        LC_TERMINAL = "iTerm2",
        LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("en_US.UTF-8").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
(Reading database ... 50378 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.29.3-1.1_amd64.deb ...
Unpacking kubeadm (1.29.3-1.1) over (1.29.2-1.1) ...
Setting up kubeadm (1.29.3-1.1) ...
kubeadm set on hold.
❯ kubeadm version

kubeadm version: &version.Info{Major:"1", Minor:"29", GitVersion:"v1.29.3", GitCommit:"6813625b7cd706db5bc7388921be03071e1a492d", GitTreeState:"clean", BuildDate:"2024-03-15T00:06:16Z", GoVersion:"go1.21.8", Compiler:"gc", Platform:"linux/amd64"}
❯ sudo kubeadm upgrade node

[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.29.3"...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-02-14-12-40/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 2 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests1300714583"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-02-14-12-40/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 2 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-02-14-12-40/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 2 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-04-02-14-12-40/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 2 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade] The control plane instance for this node was successfully updated!
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config887739548/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
❯ sudo apt-mark unhold kubelet kubectl && \
sudo apt-get update && sudo apt-get install -y kubelet='1.29.3-1.1' kubectl='1.29.3-1.1' && \
sudo apt-mark hold kubelet kubectl

kubelet was already not on hold.
kubectl was already not on hold.
Hit:1 http://deb.debian.org/debian bookworm InRelease
Hit:2 http://security.debian.org/debian-security bookworm-security InRelease
Hit:3 http://deb.debian.org/debian bookworm-updates InRelease
Hit:4 https://download.docker.com/linux/debian bookworm InRelease
Hit:5 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.29/deb  InRelease
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 18 not upgraded.
Need to get 30.3 MB of archives.
After this operation, 201 kB of additional disk space will be used.
Get:1 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.29/deb  kubectl 1.29.3-1.1 [10.5 MB]
Get:2 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.29/deb  kubelet 1.29.3-1.1 [19.8 MB]
Fetched 30.3 MB in 1s (34.7 MB/s)
apt-listchanges: Reading changelogs...
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
        LANGUAGE = "en_US:en",
        LC_ALL = (unset),
        LC_CTYPE = "UTF-8",
        LC_TERMINAL = "iTerm2",
        LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("en_US.UTF-8").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
(Reading database ... 50378 files and directories currently installed.)
Preparing to unpack .../kubectl_1.29.3-1.1_amd64.deb ...
Unpacking kubectl (1.29.3-1.1) over (1.29.2-1.1) ...
Preparing to unpack .../kubelet_1.29.3-1.1_amd64.deb ...
Unpacking kubelet (1.29.3-1.1) over (1.29.2-1.1) ...
Setting up kubectl (1.29.3-1.1) ...
Setting up kubelet (1.29.3-1.1) ...
kubelet set on hold.
kubectl set on hold.
❯ sudo systemctl daemon-reload
sudo systemctl restart kubelet

❯ kubectl get nodes
NAME     STATUS   ROLES           AGE   VERSION
ak3v     Ready    <none>          24d   v1.29.2
livaq2   Ready    <none>          24d   v1.29.2
livaz2   Ready    control-plane   24d   v1.29.3
nb5      Ready    <none>          24d   v1.29.2
rpi4bp   Ready    control-plane   24d   v1.29.3

upgrading other worker nodes

https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/

Note that I am skipping the step to drain the node again.

sudo apt update

# upgrade kubeadm to the same version as the control planes
sudo apt-mark unhold kubeadm && \
sudo apt-get update && sudo apt-get install -y kubeadm='1.29.3-1.1' && \
sudo apt-mark hold kubeadm

# run upgrade
sudo kubeadm upgrade node

# upgrade kubelet and kubectl
sudo apt-mark unhold kubelet kubectl && \
sudo apt-get update && sudo apt-get install -y kubelet='1.29.3-1.1' kubectl='1.29.3-1.1' && \
sudo apt-mark hold kubelet kubectl

# and restart kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
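
After the kubelet restarts, the node should report the new version, which can be confirmed from a control plane:

kubectl get nodes -o wide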

joining control plane

# on the first control plane
sudo cp -r /etc/kubernetes/pki ~/pki
sudo chown -R $USER:$USER ~/pki

# copy this directory to the second control plane
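# for example with scp (the user and host below are placeholders, not from the original setup):
#   scp -r ~/pki {user here}@{second control plane address here}:~/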

# on the second control plane to join
sudo mkdir -p /etc/kubernetes/pki/etcd
sudo cp pki/ca.crt /etc/kubernetes/pki/ca.crt
sudo cp pki/sa.key /etc/kubernetes/pki/sa.key
sudo cp pki/front-proxy-ca.crt /etc/kubernetes/pki/front-proxy-ca.crt
sudo cp pki/etcd/ca.crt /etc/kubernetes/pki/etcd/ca.crt
sudo cp pki/ca.key /etc/kubernetes/pki/ca.key
sudo cp pki/front-proxy-ca.key /etc/kubernetes/pki/front-proxy-ca.key
sudo cp pki/etcd/ca.key /etc/kubernetes/pki/etcd/ca.key
sudo cp pki/sa.pub /etc/kubernetes/pki/sa.pub

# then run the kubeadm join command
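# the join command typically takes the following form (endpoint, token, and hash are placeholders;
# they are printed by "kubeadm init" or by "kubeadm token create --print-join-command" on the first control plane):
#   sudo kubeadm join {control plane endpoint here}:6443 --token {token here} \
#     --discovery-token-ca-cert-hash sha256:{hash here} --control-plane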

check components

❯ /opt/cni/bin/dummy --version
CNI dummy plugin v1.2.0
CNI protocol versions supported: 0.1.0, 0.2.0, 0.3.0, 0.3.1, 0.4.0, 1.0.0

❯ containerd --version
containerd github.com/containerd/containerd v1.7.7 8c087663b0233f6e6e2f4515cee61d49f14746a8

❯ sudo runc --version
[sudo] password for osho:
runc version 1.1.9
commit: v1.1.9-0-gccaecfcb
spec: 1.0.2-dev
go: go1.20.3
libseccomp: 2.5.4
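
Other node components can be checked in a similar way, for example (assuming crictl is installed on the node):

kubelet --version
crictl --version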