Kubernetes
Fleeting
- External reference: https://kubernetes.io/docs/concepts/services-networking/ingress/
run locally (for development purposes)
k0s
k3d
A tool that makes it easy to play with k3s inside Docker, made by Rancher.
It uses flannel for networking. Flannel does not handle NetworkPolicy.
Flannel doesn’t control how containers are networked to the host, only how the traffic is transported between hosts and doesn’t implement network policy controller. For network policy, other projects such as Calico can be used.
— https://banzaicloud.com/docs/pipeline/security/network-policy/network-plugins/
k3s seems to ship a minimal NetworkPolicy implementation, but it does not appear to handle even basic use cases.
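A quick way to check whether NetworkPolicy is enforced at all is to apply a deny-all-ingress policy and see whether traffic still flows; a minimal sketch (namespace and name are arbitrary). With plain flannel the policy is silently ignored, with Calico or Cilium it blocks all ingress to pods in the namespace:
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
EOF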
running Cilium in k3d
- External reference: https://sandstorm.de/de/blog/post/running-cilium-in-k3s-and-k3d-lightweight-kubernetes-on-mac-os-for-development.html
The important k3s arguments here are --disable-network-policy and --flannel-backend=none.
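A sketch of creating such a cluster with k3d (the cluster name foo matches the node names used below; --agents 1 gives the agent-0 node; the --k3s-arg syntax is the k3d v5 one, older versions used --k3s-server-arg instead):
k3d cluster create foo --agents 1 \
  --k3s-arg '--disable-network-policy@server:*' \
  --k3s-arg '--flannel-backend=none@server:*'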
You cannot use the cilium init-container in the k3d/k3s combo, because it tries to run a script inside the node containers using the /bin/bash interpreter, but the k3s images are based on busybox and only provide /bin/sh. Instead, mount the BPF filesystem manually in each node container:
docker exec -it k3d-foo-agent-0 mount bpffs /sys/fs/bpf -t bpf
docker exec -it k3d-foo-agent-0 mount --make-shared /sys/fs/bpf
docker exec -it k3d-foo-server-0 mount bpffs /sys/fs/bpf -t bpf
docker exec -it k3d-foo-server-0 mount --make-shared /sys/fs/bpf
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.9.1 \
  --namespace kube-system \
  --set kubeProxyReplacement=partial \
  --set hostServices.enabled=false \
  --set externalIPs.enabled=true \
  --set nodePort.enabled=true \
  --set hostPort.enabled=true \
  --set bpf.masquerade=false \
  --set image.pullPolicy=IfNotPresent \
  --set ipam.mode=kubernetes

helm upgrade cilium cilium/cilium --version 1.9.1 \
  --namespace kube-system \
  --reuse-values \
  --set hubble.listenAddress=":4244" \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true
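To check that the install converged (assuming the default workload names created by the cilium chart):
kubectl -n kube-system rollout status daemonset/cilium
kubectl -n kube-system rollout status deployment/hubble-relay
kubectl -n kube-system rollout status deployment/hubble-ui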
k3d with cgroup v2
- External reference: https://github.com/rancher/k3d/issues/493
I think I’m seeing the same or similar issue. When I rollback to rancher/k3s:v1.19.7-k3s1, the cluster starts fine.
k3d cluster create --verbose --trace --image rancher/k3s:v1.19.8-k3s1 wsop
Doesn't work on Fedora 33, as there is a different error:
> docker logs --follow k3d-wsop-server-0 2>&1
… time="2021-03-03T11:52:51.432307543Z" level=fatal
Hi @fr33ky, thanks for your input. I guess when you run docker info, you see that Cgroup Version is 2? I'm on the same docker version on Ubuntu, but using cgroup v1… no problems here
Using Debian Sid, in the meantime, I personally switched back to cgroup v1. I added systemd.unified_cgroup_hierarchy=0 to my GRUB_CMDLINE_LINUX_DEFAULT (/etc/default/grub) and then ran update-grub
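A sketch of that change, assuming a stock Debian/Ubuntu-style GRUB config (editing /etc/default/grub by hand works just as well):
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&systemd.unified_cgroup_hierarchy=0 /' /etc/default/grub
sudo update-grub
sudo reboot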
k3d: unable to start built container
- External reference: https://github.com/vmware-tanzu/buildkit-cli-for-kubectl/issues/46
Basically, k3d does not seem to be a good environment for kubectl-buildkit; the only option is to use a registry, as described by @pdevine, although that defeats one of the use cases (avoiding a registry and not transferring bytes to and from it).
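If the registry route is acceptable, k3d can create one and wire it into a cluster; a sketch (registry name, port and cluster name are arbitrary):
k3d registry create registry.localhost --port 5000
k3d cluster create foo --registry-use k3d-registry.localhost:5000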
install a custom cni with k3d
- External reference: https://rancher.com/docs/k3s/latest/en/installation/network-options/
Custom CNI
Run K3s with --flannel-backend=none and install your CNI of choice
— https://rancher.com/docs/k3s/latest/en/installation/network-options/
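For example, a sketch of a bare k3s install with Calico as the CNI (the Calico manifest URL is the documented one at the time of writing and may have moved since):
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='--flannel-backend=none --disable-network-policy' sh -
kubectl apply -f https://projectcalico.docs.tigera.io/manifests/calico.yaml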
install calico alongside flannel
- External reference: https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel
Installing Calico for policy and flannel (aka Canal) for networking
— https://projectcalico.docs.tigera.io/getting-started/kubernetes/flannel/flannel
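The Canal variant boils down to applying their combined manifest; the path below is an assumption based on the Calico docs layout and may have moved:
kubectl apply -f https://projectcalico.docs.tigera.io/manifests/canal.yaml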
kind
kind, a local k8s
known issues
too many open files
- External reference: https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files
Pod errors due to "too many open files"
This may be caused by running out of inotify resources. Resource limits are defined by fs.inotify.max_user_watches and fs.inotify.max_user_instances system variables. For example, in Ubuntu these default to 8192 and 128 respectively, which is not enough to create a cluster with many nodes.
To increase these limits temporarily run the following commands on the host:
$ sudo sysctl fs.inotify.max_user_watches=524288
$ sudo sysctl fs.inotify.max_user_instances=512
To make the changes persistent, edit the file /etc/sysctl.conf and add these lines:
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
— https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files
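To check the current values, and to reload /etc/sysctl.conf without rebooting:
sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances
sudo sysctl -p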
ingress
- External reference: https://kubernetes.io/docs/concepts/services-networking/ingress/
The HTTP(S) entry point into Kubernetes.
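A minimal Ingress sketch (host, service name and port are hypothetical), routing HTTP traffic for foo.example.com to a service named foo:
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo
spec:
  rules:
    - host: foo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: foo
                port:
                  number: 80
EOF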
ingress class
default
You can mark a particular IngressClass as default for your cluster. Setting the ingressclass.kubernetes.io/is-default-class annotation to true on an IngressClass resource will ensure that new Ingresses without an ingressClassName field specified will be assigned this default IngressClass.
— https://kubernetes.io/docs/concepts/services-networking/ingress/
There are some ingress controllers that work without the definition of a default IngressClass. For example, the Ingress-NGINX controller can be configured with a flag --watch-ingress-without-class. It is recommended though, to specify the default IngressClass:
— https://kubernetes.io/docs/concepts/services-networking/ingress/
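A sketch of an IngressClass marked as default for the cluster (the controller string is the one used by ingress-nginx):
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx
EOF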
cronjob
trigger manually
kubectl create job --from=cronjob/pgdump pgdump-manual-001
— https://www.craftypenguins.net/blog/how-to-trigger-a-kubernetes-cronjob-manually/
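To follow what the manually triggered job does (job name as above):
kubectl get job pgdump-manual-001
kubectl logs job/pgdump-manual-001 --follow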
taint
remove
- External reference: https://pet2cattle.com/2021/09/k8s-node-untaint
If we want to taint a node, we use kubectl taint. To remove the taint (untaint the node), we use kubectl taint again but with a hyphen appended at the end:
$ kubectl taint nodes minikube application=example:NoSchedule-
node/minikube untainted
If we don't know the command that was used to taint the node, we can use kubectl describe node to find the exact taint we need to remove.
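Putting it together, a sketch of the full round trip with the node and taint from the example above:
kubectl taint nodes minikube application=example:NoSchedule
kubectl describe node minikube | grep -A 3 Taints
kubectl taint nodes minikube application=example:NoSchedule-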
Notes linking here
- 0/2 nodes are available: 2 Too many pods. preemption: 0/2 nodes are
- 503 in Kubernetes NGINX Ingress
- actions-runner-controller
- admission controllers
- alexellis/arkade: Open Source Kubernetes Marketplace
- attach handlers to container lifecycle events
- avionix
- AvitalTamir/cyphernetes: A Kubernetes Query Language
- AWS
- beware the resolver of nginx
- bitnami
- bitnami ne se configure pas bien
- blue-green deployed
- calico
- clk k8s
- clk k8s and earthly in a local dev env (blog)
- clk k8s: accessing the host from the pod on linux and mac (blog)
- clk parameters
- clk provides a bunch of everyday usage lib
- Cloud Controller Manager
- Cloud Native Live: Crossplane - GitOps-based Infrastructure as Code through Kubernetes API - YouTube
- cognitive resistance to devops mindset
- Comment lister les images installés dans k3d
- configure Liveness, Readiness and Startup Probes
- container network interface
- Container Storage Interface
- custom resource
- debug containers
- develop an android application using keycloak and localhost redirection
- devops
- docker entrypoint vs kubernetes command
- docker looses the dns configuration
- dynamic provisioning and storage classes in kubernetes
- Electro Monkeys
- etcd
- ExternalIP
- helm
- high availability peer-to-peer system from the point of view of the public network
- hoot about proxies, ingress and API gateways
- how I debug MTU issues in k3d in docker in earthly in docker in k8s (blog)
- how I debug my k8s tests not running
- how to debug a typescript program running on k8s using dap in emacs?
- how to organise the inter subchart networkpolicies?
- how to share data using a local kubernetes cluster (blog)
- ingress vs api gateway
- InternalIP
- investigate too many open files
- istio
- k3s
- k8s en est à l’âge de pierre de l’ihm
- k8s job
- k8s namespace
- k8s nginx ingress controller
- k8s templating solutions
- kind configure local registry with kube-public local-registry-hosting
- kubectl
- kubectl build
- kubectl-buildkit
- kubernetes - what k8s-app label represent?
- kubernetes best practices
- kubernetes cheatsheet
- Kubernetes in Dev Mode. The microservice architecture is on 26 | by Mahdi Chihaoui | FAUN | Medium
- kubernetes operator
- kubernetes resources examples
- kubernetes/metrics
- kubernetes: debug running pods
- kubeval
- kustomize
- lens
- lifecycle of persistent volumes
- loadbalancer
- local docker registry
- making k3d work again in debian testing (blog)
- networkpolicy
- persistent local volumes with k3d kubernetes
- persistent volume
- persistent volume claim
- playing with gce persistent disk in k8s
- pod
- port 10250 in kubernetes == self
- Ports and Protocols | Kubernetes
- postStart handler
- preStop handler
- recommended Labels
- service accounts - List of Kubernetes RBAC rule verbs
- service accounts for pod
- several flavors of testing one’s code
- statefulsets
- storage class
- telepresence
- tilt
- toolchains behind successful kubernetes development workflows
- traefik
- traefik et maesh : de l ingress au service mesh avec Michael Matur
- usage ordinaire du mot devops
- use dockerhub from inside kubernetes
- use kind and k3d with tilt and clk k8s
- using helm and kustomize to build more declarative kubernetes workloads
- visual guide on troubleshooting Kubernetes deployments
- volumes in kubernetes
- what is v1beta1.metrics.k8s.io and what does False (MissingEndpoints) means
- where to store acme.json when using traefik
- with docker desktop (blog)
- with kubernetes
- world’s simplest Kubernetes dashboard: k1s
- évolution des ihm