Konubinix' opinionated web of thoughts

Kubernetes

Fleeting

invalid: metadata.resourceVersion: Invalid value: 0x0: must be specified for an update

  • External reference: https://gist.github.com/udhos/447a72e462737c423edc89636ba6addb

    Happens when you:
    • apply a resource
    • edit the resource -> k8s will add a resourceVersion value in the kubectl.kubernetes.io/last-applied-configuration annotation
    • apply the resource again: it will try to patch it, setting the value of resourceVersion to null
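
A quick way to check whether the annotation indeed carries a resourceVersion (a sketch with a hypothetical deployment named my-app; jq is only there for readability):

kubectl get deployment my-app \
  -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}' \
  | jq '.metadata.resourceVersion'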

Quick correction

Remove the annotation
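
For example, with a hypothetical deployment named my-app (the trailing dash asks kubectl annotate to drop the key):

kubectl annotate deployment my-app kubectl.kubernetes.io/last-applied-configuration-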

apply with --force (it will delete the resource and then recreate it)
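
For example, with a hypothetical manifest file my-app.yaml:

kubectl apply --force -f my-app.yaml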

Correction

Only modify the resource by applying the file (not by editing it in place)

finalizers

  • External reference: https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers/

    When you tell Kubernetes to delete an object that has finalizers specified for it, the Kubernetes API marks the object for deletion by populating .metadata.deletionTimestamp, and returns a 202 status code (HTTP “Accepted”). The target object remains in a terminating state while the control plane, or other components, take the actions defined by the finalizers. After these actions are complete, the controller removes the relevant finalizers from the target object. When the metadata.finalizers field is empty, Kubernetes considers the deletion complete and deletes the object.

    https://kubernetes.io/docs/concepts/overview/working-with-objects/finalizers/ ([2025-06-11 Wed])

    therefore, forcing a deletion can be done with:

    kubectl patch some-resource/some-name \
        --type json \
        --patch='[ { "op": "remove", "path": "/metadata/finalizers" } ]'

    https://martinheinz.dev/blog/74 ([2025-06-11 Wed])
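
    Before patching, the current finalizers can be inspected with the same placeholder resource and name:

    kubectl get some-resource/some-name -o jsonpath='{.metadata.finalizers}'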

Stop Messing with Kubernetes Finalizers, how to stop a resource's cleanup

namespace

controller

operator

run locally (for development purposes)

k0s

k3d

Tool from Rancher to ease playing with k3s inside Docker.

It uses flannel to deal with networking. Flannel does not handle NetworkPolicy.

Flannel doesn’t control how containers are networked to the host, only how the traffic is transported between hosts and doesn’t implement network policy controller. For network policy, other projects such as Calico can be used.

https://banzaicloud.com/docs/pipeline/security/network-policy/network-plugins/

It seems like there is a minimal implementation of NetworkPolicy inside k3s, but it does not appear to handle even basic use cases.
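
To leave room for another CNI and network policy controller, the k3s defaults can be disabled at cluster creation time (a sketch assuming k3d v5's --k3s-arg syntax; foo is the cluster name reused in the commands below):

k3d cluster create foo \
  --k3s-arg "--flannel-backend=none@server:*" \
  --k3s-arg "--disable-network-policy@server:*"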

running Cilium in K3D

We cannot use a cilium init-container in the k3d/k3s combo, because it tries to run a script in the context of the node containers using the /bin/bash interpreter (but the k3s images are based on busybox, and only have a /bin/sh interpreter available).

https://sandstorm.de/de/blog/post/running-cilium-in-k3s-and-k3d-lightweight-kubernetes-on-mac-os-for-development.html

docker exec -it k3d-foo-agent-0 mount bpffs /sys/fs/bpf -t bpf
docker exec -it k3d-foo-agent-0 mount --make-shared /sys/fs/bpf

docker exec -it k3d-foo-server-0 mount bpffs /sys/fs/bpf -t bpf
docker exec -it k3d-foo-server-0 mount --make-shared /sys/fs/bpf

https://sandstorm.de/de/blog/post/running-cilium-in-k3s-and-k3d-lightweight-kubernetes-on-mac-os-for-development.html

helm repo add cilium https://helm.cilium.io/

helm install cilium cilium/cilium --version 1.9.1 \
  --namespace kube-system \
  --set kubeProxyReplacement=partial \
  --set hostServices.enabled=false \
  --set externalIPs.enabled=true \
  --set nodePort.enabled=true \
  --set hostPort.enabled=true \
  --set bpf.masquerade=false \
  --set image.pullPolicy=IfNotPresent \
  --set ipam.mode=kubernetes

helm upgrade cilium cilium/cilium --version 1.9.1 \
  --namespace kube-system \
  --reuse-values \
  --set hubble.listenAddress=":4244" \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true

https://sandstorm.de/de/blog/post/running-cilium-in-k3s-and-k3d-lightweight-kubernetes-on-mac-os-for-development.html
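
A quick sanity check after the install or upgrade (k8s-app=cilium is the standard label on Cilium agent pods, not something specific to k3d):

kubectl -n kube-system get pods -l k8s-app=cilium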

k3d with cgroup v2

k3d: unable to start built container

install a custom cni with k3d

install calico alongside flannel

Installing Calico for policy and flannel (aka Canal) for networking

https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel
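
A sketch of the apply step, assuming the canal.yaml manifest has been downloaded from the page linked above:

kubectl apply -f canal.yaml
kubectl -n kube-system get pods -w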

kind

kind, a local k8s

known issues

too many open files
  • External reference: https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files

    Pod errors due to “too many open files”

    This may be caused by running out of inotify resources. Resource limits are defined by fs.inotify.max_user_watches and fs.inotify.max_user_instances system variables. For example, in Ubuntu these default to 8192 and 128 respectively, which is not enough to create a cluster with many nodes.

    To increase these limits temporarily run the following commands on the host:

$ sudo sysctl fs.inotify.max_user_watches=524288
$ sudo sysctl fs.inotify.max_user_instances=512

To make the changes persistent, edit the file /etc/sysctl.conf and add these lines:

fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512

https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files
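
To load the values from /etc/sysctl.conf without rebooting (a standard sysctl invocation, nothing kind-specific):

$ sudo sysctl -p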

ingress

ingress class

default

You can mark a particular IngressClass as default for your cluster. Setting the ingressclass.kubernetes.io/is-default-class annotation to true on an IngressClass resource will ensure that new Ingresses without an ingressClassName field specified will be assigned this default IngressClass.

https://kubernetes.io/docs/concepts/services-networking/ingress/

There are some ingress controllers, that work without the definition of a default IngressClass. For example, the Ingress-NGINX controller can be configured with a flag --watch-ingress-without-class. It is recommended though, to specify the default IngressClass:

https://kubernetes.io/docs/concepts/services-networking/ingress/
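
As a sketch, the default can also be set from the command line; nginx below stands for whatever IngressClass already exists in the cluster:

kubectl annotate ingressclass nginx ingressclass.kubernetes.io/is-default-class=true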

cronjob

trigger manually

kubectl create job --from=cronjob/pgdump pgdump-manual-001

https://www.craftypenguins.net/blog/how-to-trigger-a-kubernetes-cronjob-manually/
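
The manually created job can then be followed like any other job (the name matches the one passed to kubectl create job above):

kubectl get job pgdump-manual-001
kubectl logs job/pgdump-manual-001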

taint

remove

if we want to taint a node we use kubectl taint as follows

https://pet2cattle.com/2021/09/k8s-node-untaint
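
The taint command itself is not reproduced in the excerpt above; judging from the untaint example further down, it would be something like:

kubectl taint nodes minikube application=example:NoSchedule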

We can use kubectl taint but adding a hyphen at the end to remove the taint (untaint the node)

https://pet2cattle.com/2021/09/k8s-node-untaint

$ kubectl taint nodes minikube application=example:NoSchedule-
node/minikube untainted

https://pet2cattle.com/2021/09/k8s-node-untaint

If we don’t know the command used to taint the node we can use kubectl describe node to get the exact taint we’ll need to use to untaint the node

https://pet2cattle.com/2021/09/k8s-node-untaint
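
For example (minikube being the node name from the example above):

kubectl describe node minikube | grep -i taints
kubectl get node minikube -o jsonpath='{.spec.taints}'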
