Making K3d Work Again in Debian Testing

k3d does not work anymore…

[2021-04-21 Wed 11:30]

k3d cluster create
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default' (78c4061be477b82f4d7caa90c3574c04080bb47373bab83260472bf369c23dbd)
INFO[0000] Created volume 'k3d-k3s-default-images'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0002] Pulling image 'docker.io/rancher/k3s:v1.20.6-k3s1'
INFO[0009] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0011] Pulling image 'docker.io/rancher/k3d-proxy:v4.4.2'
INFO[0016] Starting cluster 'k3s-default'
INFO[0016] Starting servers...
INFO[0016] Starting Node 'k3d-k3s-default-server-0'
INFO[0020] Starting agents...
INFO[0020] Starting helpers...
INFO[0020] Starting Node 'k3d-k3s-default-serverlb'
INFO[0020] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
WARN[0023] Failed to patch CoreDNS ConfigMap to include entry '172.19.0.1 host.k3d.internal': Exec process in node 'k3d-k3s-default-server-0' failed with exit code '137'
INFO[0023] Successfully added host record to /etc/hosts in 2/2 nodes
INFO[0023] Cluster 'k3s-default' created successfully!
INFO[0023] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0024] You can now use it like this:
kubectl config use-context k3d-k3s-default
kubectl cluster-info
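
For reference, a minimal sanity check right after creation is to point kubectl at the new context and list the nodes and system pods (plain kubectl, nothing k3d-specific assumed):

kubectl config use-context k3d-k3s-default
kubectl get nodes
kubectl get pods -A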

Then I tried running a single pod.

kta test/k8s/pod.yaml
error: error when creating "test/k8s/pod.yaml": Post "https://0.0.0.0:43107/api/v1/namespaces/default/pods?fieldManager=kubectl-client-side-apply": EOF
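
kta and ktl in these snippets are personal aliases; assuming they wrap kubectl apply -f and plain kubectl respectively, the unaliased equivalent of the command above would be:

kubectl apply -f test/k8s/pod.yaml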

Then, retrying, I get

kta test/k8s/pod.yaml
Error from server (Forbidden): error when creating "test/k8s/pod.yaml": pods "slo-pod" is forbidden: error looking up service account default/default: serviceaccount "default" not found
ktl cluster-info

It hung: I waited a long time before killing it.

[2021-04-21 Wed 11:35] trying to reinstall docker
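
The exact commands are not recorded here; assuming Docker came from Debian's docker.io package, the reinstall would have looked roughly like:

sudo apt install --reinstall docker.io
sudo systemctl restart docker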

same result

[2021-04-21 Wed 11:41] checking the logs

dk logs k3d-k3s-default-server-0
time="2021-04-21T09:38:00.994062481Z" level=info msg="Starting k3s v1.20.6+k3s1 (8d043282)"
time="2021-04-21T09:38:01.006942642Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2021-04-21T09:38:01.006967151Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2021-04-21T09:38:01.012144877Z" level=info msg="Database tables and indexes are up to date"
time="2021-04-21T09:38:01.012879672Z" level=info msg="Kine listening on unix://kine.sock"
time="2021-04-21T09:38:01.021148060Z" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1618997881: notBefore=2021-04-21 09:38:01 +0000 UTC notAfter=2022-04-21 09:38:01 +0000 UTC"
time="2021-04-21T09:38:01.021550953Z" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1618997881: notBefore=2021-04-21 09:38:01 +0000 UTC notAfter=2022-04-21 09:38:01 +0000 UTC"
time="2021-04-21T09:38:01.021867740Z" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1618997881: notBefore=2021-04-21 09:38:01 +0000 UTC notAfter=2022-04-21 09:38:01 +0000 UTC"
time="2021-04-21T09:38:01.022212681Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-client-ca@1618997881: notBefore=2021-04-21 09:38:01 +0000 UTC notAfter=2022-04-21 09:38:01 +0000 UTC"
time="2021-04-21T09:38:01.022549039Z" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1618997881: notBefore=2021-04-21 09:38:01 +0000 UTC notAfter=2022-04-21 09:38:01 +0000 UTC"
time="2021-04-21T09:38:01.022845948Z" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1618997881: notBefore=2021-04-21 09:38:01 +0000 UTC notAfter=2022-04-21 09:38:01 +0000 UTC"
time="2021-04-21T09:38:01.023169006Z" level=info msg="certificate CN=cloud-controller-manager signed by CN=k3s-client-ca@1618997881: notBefore=2021-04-21 09:38:01 +0000 UTC notAfter=2022-04-21 09:38:01 +0000 UTC"
time="2021-04-21T09:38:01.023692786Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1618997881: notBefore=2021-04-21 09:38:01 +0000 UTC notAfter=2022-04-21 09:38:01 +0000 UTC"
time="2021-04-21T09:38:01.024188330Z" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1618997881: notBefore=2021-04-21 09:38:01 +0000 UTC notAfter=2022-04-21 09:38:01 +0000 UTC"
time="2021-04-21T09:38:01.024681857Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1618997881: notBefore=2021-04-21 09:38:01 +0000 UTC notAfter=2022-04-21 09:38:01 +0000 UTC"
time="2021-04-21T09:38:01.024979982Z" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1618997881: notBefore=2021-04-21 09:38:01 +0000 UTC notAfter=2022-04-21 09:38:01 +0000 UTC"
time="2021-04-21T09:38:01.025525515Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1618997881: notBefore=2021-04-21 09:38:01 +0000 UTC notAfter=2022-04-21 09:38:01 +0000 UTC"
time="2021-04-21T09:38:01.174199662Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1618997881: notBefore=2021-04-21 09:38:01 +0000 UTC notAfter=2022-04-21 09:38:01 +0000 UTC"
time="2021-04-21T09:38:01.174508183Z" level=info msg="Active TLS secret  (ver=) (count 8): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.19.0.2:172.19.0.2 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=7E838A160CCE0861C9DBDC7B1357D388A3011666]"
time="2021-04-21T09:38:01.178062456Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --feature-gates=ServiceAccountIssuerDiscovery=false --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I0421 09:38:01.178803       7 server.go:659] external host was not specified, using 172.19.0.2
I0421 09:38:01.178969       7 server.go:196] Version: v1.20.6+k3s1
I0421 09:38:01.496145       7 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I0421 09:38:01.496774       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0421 09:38:01.496814       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0421 09:38:01.497495       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0421 09:38:01.497504       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0421 09:38:01.511130       7 instance.go:289] Using reconciler: lease
I0421 09:38:01.575800       7 rest.go:131] the default service ipfamily for this cluster is: IPv4
W0421 09:38:01.755482       7 genericapiserver.go:425] Skipping API batch/v2alpha1 because it has no resources.
W0421 09:38:01.764294       7 genericapiserver.go:425] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0421 09:38:01.769886       7 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0421 09:38:01.774542       7 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0421 09:38:01.776698       7 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0421 09:38:01.779785       7 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0421 09:38:01.781355       7 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W0421 09:38:01.784116       7 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.
W0421 09:38:01.784125       7 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.
I0421 09:38:01.788850       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0421 09:38:01.788861       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
time="2021-04-21T09:38:01.796989821Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"
time="2021-04-21T09:38:01.797036808Z" level=info msg="Waiting for API server to become available"
time="2021-04-21T09:38:01.797442578Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
time="2021-04-21T09:38:01.798422602Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
time="2021-04-21T09:38:01.798458348Z" level=info msg="To join node to cluster: k3s agent -s https://172.19.0.2:6443 -t ${NODE_TOKEN}"
time="2021-04-21T09:38:01.799351827Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
time="2021-04-21T09:38:01.799472186Z" level=info msg="Run: k3s kubectl"
time="2021-04-21T09:38:01.799541906Z" level=info msg="Module overlay was already loaded"
time="2021-04-21T09:38:01.799553096Z" level=info msg="Module nf_conntrack was already loaded"
time="2021-04-21T09:38:01.799562953Z" level=info msg="Module br_netfilter was already loaded"
time="2021-04-21T09:38:01.799866296Z" level=warning msg="Failed to start iptable_nat module"
time="2021-04-21T09:38:01.817893717Z" level=info msg="Cluster-Http-Server 2021/04/21 09:38:01 http: TLS handshake error from 127.0.0.1:48816: remote error: tls: bad certificate"
time="2021-04-21T09:38:01.820634600Z" level=info msg="Cluster-Http-Server 2021/04/21 09:38:01 http: TLS handshake error from 127.0.0.1:48822: remote error: tls: bad certificate"
time="2021-04-21T09:38:01.826331745Z" level=info msg="certificate CN=k3d-k3s-default-server-0 signed by CN=k3s-server-ca@1618997881: notBefore=2021-04-21 09:38:01 +0000 UTC notAfter=2022-04-21 09:38:01 +0000 UTC"
time="2021-04-21T09:38:01.827665672Z" level=info msg="certificate CN=system:node:k3d-k3s-default-server-0,O=system:nodes signed by CN=k3s-client-ca@1618997881: notBefore=2021-04-21 09:38:01 +0000 UTC notAfter=2022-04-21 09:38:01 +0000 UTC"
time="2021-04-21T09:38:01.851522334Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2021-04-21T09:38:01.851640466Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
time="2021-04-21T09:38:02.852609164Z" level=info msg="Containerd is now running"
time="2021-04-21T09:38:02.857600436Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2021-04-21T09:38:02.858880734Z" level=info msg="Handling backend connection request [k3d-k3s-default-server-0]"
time="2021-04-21T09:38:02.859262239Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
time="2021-04-21T09:38:02.859685152Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
W0421 09:38:02.859911       7 server.go:226] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I0421 09:38:02.860176       7 server.go:412] Version: v1.20.6+k3s1
W0421 09:38:02.860363       7 proxier.go:651] Failed to read file /lib/modules/5.10.0-6-amd64/modules.builtin with error open /lib/modules/5.10.0-6-amd64/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0421 09:38:02.860744       7 proxier.go:661] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0421 09:38:02.860965       7 proxier.go:661] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0421 09:38:02.861197       7 proxier.go:661] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0421 09:38:02.861350       7 proxier.go:661] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0421 09:38:02.861548       7 proxier.go:661] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
time="2021-04-21T09:38:02.864672880Z" level=info msg="Waiting for node k3d-k3s-default-server-0: nodes \"k3d-k3s-default-server-0\" not found"
E0421 09:38:02.865531       7 node.go:161] Failed to retrieve node info: nodes "k3d-k3s-default-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
W0421 09:38:02.873489       7 manager.go:159] Cannot detect current cgroup on cgroup v2
I0421 09:38:02.873504       7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
I0421 09:38:02.938337       7 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
I0421 09:38:02.938358       7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
I0421 09:38:02.938461       7 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
I0421 09:38:02.938721       7 secure_serving.go:197] Serving securely on 127.0.0.1:6444
I0421 09:38:02.938823       7 autoregister_controller.go:141] Starting autoregister controller
I0421 09:38:02.938884       7 cache.go:32] Waiting for caches to sync for autoregister controller
I0421 09:38:02.938931       7 tlsconfig.go:240] Starting DynamicServingCertificateController
I0421 09:38:02.939010       7 controller.go:83] Starting OpenAPI AggregationController
I0421 09:38:02.939050       7 apf_controller.go:261] Starting API Priority and Fairness config controller
I0421 09:38:02.939086       7 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0421 09:38:02.939101       7 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0421 09:38:02.939478       7 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0421 09:38:02.939553       7 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0421 09:38:02.939604       7 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
I0421 09:38:02.939666       7 customresource_discovery_controller.go:209] Starting DiscoveryController
I0421 09:38:02.939688       7 controller.go:86] Starting OpenAPI controller
I0421 09:38:02.939701       7 naming_controller.go:291] Starting NamingConditionController
I0421 09:38:02.939735       7 establishing_controller.go:76] Starting EstablishingController
I0421 09:38:02.939754       7 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0421 09:38:02.939778       7 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
I0421 09:38:02.939780       7 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0421 09:38:02.939495       7 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key
I0421 09:38:02.939504       7 available_controller.go:475] Starting AvailableConditionController
I0421 09:38:02.940014       7 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0421 09:38:02.939477       7 crdregistration_controller.go:111] Starting crd-autoregister controller
I0421 09:38:02.940362       7 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I0421 09:38:02.939789       7 crd_finalizer.go:266] Starting CRDFinalizer
time="2021-04-21T09:38:02.943920160Z" level=info msg="Waiting for cloudcontroller rbac role to be created"
I0421 09:38:02.952229       7 controller.go:609] quota admission added evaluator for: namespaces
E0421 09:38:02.956188       7 controller.go:151] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.43.0.1"}: failed to allocated ip:10.43.0.1 with error:cannot allocate resources of type serviceipallocations at this time
E0421 09:38:02.956546       7 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.19.0.2, ResourceVersion: 0, AdditionalErrorMsg:
I0421 09:38:02.996265       7 shared_informer.go:247] Caches are synced for node_authorizer
I0421 09:38:03.039067       7 cache.go:39] Caches are synced for autoregister controller
I0421 09:38:03.039141       7 apf_controller.go:266] Running API Priority and Fairness config worker
I0421 09:38:03.039150       7 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0421 09:38:03.039640       7 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I0421 09:38:03.040134       7 cache.go:39] Caches are synced for AvailableConditionController controller
I0421 09:38:03.040423       7 shared_informer.go:247] Caches are synced for crd-autoregister
I0421 09:38:03.938316       7 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0421 09:38:03.938525       7 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0421 09:38:03.942873       7 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0421 09:38:03.944593       7 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0421 09:38:03.944603       7 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
E0421 09:38:04.022894       7 node.go:161] Failed to retrieve node info: nodes "k3d-k3s-default-server-0" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
I0421 09:38:04.121879       7 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0421 09:38:04.135987       7 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0421 09:38:04.168173       7 lease.go:233] Resetting endpoints for master service "kubernetes" to [172.19.0.2]
I0421 09:38:04.168682       7 controller.go:609] quota admission added evaluator for: endpoints
I0421 09:38:04.170356       7 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
time="2021-04-21T09:38:04.868121785Z" level=info msg="Waiting for node k3d-k3s-default-server-0: nodes \"k3d-k3s-default-server-0\" not found"
time="2021-04-21T09:38:04.944510841Z" level=info msg="Kube API server is now running"
time="2021-04-21T09:38:04.944539269Z" level=info msg="k3s is up and running"
Flag --address has been deprecated, see --bind-address instead.
I0421 09:38:04.946905       7 controllermanager.go:176] Version: v1.20.6+k3s1
I0421 09:38:04.947111       7 deprecated_insecure_serving.go:53] Serving insecurely on 127.0.0.1:10252
time="2021-04-21T09:38:04.949003229Z" level=info msg="Creating CRD addons.k3s.cattle.io"
W0421 09:38:04.952439       7 authorization.go:47] Authorization is disabled
W0421 09:38:04.952451       7 authentication.go:40] Authentication is disabled
I0421 09:38:04.952460       7 deprecated_insecure_serving.go:51] Serving healthz insecurely on 127.0.0.1:10251
time="2021-04-21T09:38:04.953928229Z" level=info msg="Creating CRD helmcharts.helm.cattle.io"
time="2021-04-21T09:38:04.958931378Z" level=info msg="Creating CRD helmchartconfigs.helm.cattle.io"
time="2021-04-21T09:38:04.962896296Z" level=info msg="Waiting for CRD addons.k3s.cattle.io to become available"
I0421 09:38:05.410229       7 shared_informer.go:240] Waiting for caches to sync for tokens
I0421 09:38:05.415172       7 controller.go:609] quota admission added evaluator for: serviceaccounts
I0421 09:38:05.416386       7 controllermanager.go:554] Started "daemonset"
I0421 09:38:05.416457       7 daemon_controller.go:285] Starting daemon sets controller
I0421 09:38:05.416463       7 shared_informer.go:240] Waiting for caches to sync for daemon sets
I0421 09:38:05.419864       7 controllermanager.go:554] Started "ttl"
I0421 09:38:05.419919       7 ttl_controller.go:121] Starting TTL controller
I0421 09:38:05.419924       7 shared_informer.go:240] Waiting for caches to sync for TTL
I0421 09:38:05.423358       7 controllermanager.go:554] Started "endpoint"
I0421 09:38:05.423393       7 endpoints_controller.go:184] Starting endpoint controller
I0421 09:38:05.423400       7 shared_informer.go:240] Waiting for caches to sync for endpoint
time="2021-04-21T09:38:05.465033619Z" level=info msg="Done waiting for CRD addons.k3s.cattle.io to become available"
time="2021-04-21T09:38:05.465050985Z" level=info msg="Waiting for CRD helmcharts.helm.cattle.io to become available"
I0421 09:38:05.510406       7 shared_informer.go:247] Caches are synced for tokens
time="2021-04-21T09:38:05.966874659Z" level=info msg="Done waiting for CRD helmcharts.helm.cattle.io to become available"
time="2021-04-21T09:38:05.966891068Z" level=info msg="Waiting for CRD helmchartconfigs.helm.cattle.io to become available"
I0421 09:38:05.979106       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for limitranges
I0421 09:38:05.979126       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpoints
I0421 09:38:05.979141       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
I0421 09:38:05.979169       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
I0421 09:38:05.979184       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for deployments.apps
I0421 09:38:05.979196       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for jobs.batch
I0421 09:38:05.979225       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for helmchartconfigs.helm.cattle.io
I0421 09:38:05.979250       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for events.events.k8s.io
I0421 09:38:05.979262       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
I0421 09:38:05.979274       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
I0421 09:38:05.979289       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for addons.k3s.cattle.io
I0421 09:38:05.979314       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for controllerrevisions.apps
I0421 09:38:05.979329       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for daemonsets.apps
I0421 09:38:05.979346       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for replicasets.apps
I0421 09:38:05.979362       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for statefulsets.apps
I0421 09:38:05.979386       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for podtemplates
I0421 09:38:05.979430       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.extensions
I0421 09:38:05.979446       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
I0421 09:38:05.979463       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
I0421 09:38:05.979483       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I0421 09:38:05.979499       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
W0421 09:38:05.979511       7 shared_informer.go:494] resyncPeriod 12h45m12.476734148s is smaller than resyncCheckPeriod 15h0m27.744941522s and the informer has already started. Changing it to 15h0m27.744941522s
I0421 09:38:05.979532       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for serviceaccounts
I0421 09:38:05.979562       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for cronjobs.batch
I0421 09:38:05.979578       7 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io
I0421 09:38:05.979587       7 controllermanager.go:554] Started "resourcequota"
I0421 09:38:05.979598       7 resource_quota_controller.go:273] Starting resource quota controller
I0421 09:38:05.979608       7 shared_informer.go:240] Waiting for caches to sync for resource quota
I0421 09:38:05.979621       7 resource_quota_monitor.go:304] QuotaMonitor running
I0421 09:38:05.983742       7 controllermanager.go:554] Started "deployment"
I0421 09:38:05.983814       7 deployment_controller.go:153] Starting deployment controller
I0421 09:38:05.983821       7 shared_informer.go:240] Waiting for caches to sync for deployment
I0421 09:38:05.987885       7 controllermanager.go:554] Started "statefulset"
I0421 09:38:05.987963       7 stateful_set.go:146] Starting stateful set controller
I0421 09:38:05.987987       7 shared_informer.go:240] Waiting for caches to sync for stateful set
I0421 09:38:05.992690       7 controllermanager.go:554] Started "persistentvolume-expander"
I0421 09:38:05.992783       7 expand_controller.go:310] Starting expand controller
I0421 09:38:05.992809       7 shared_informer.go:240] Waiting for caches to sync for expand
I0421 09:38:05.996914       7 controllermanager.go:554] Started "pvc-protection"
I0421 09:38:05.996982       7 pvc_protection_controller.go:110] Starting PVC protection controller
I0421 09:38:05.996989       7 shared_informer.go:240] Waiting for caches to sync for PVC protection
I0421 09:38:06.001089       7 controllermanager.go:554] Started "endpointslice"
W0421 09:38:06.001102       7 controllermanager.go:533] "tokencleaner" is disabled
I0421 09:38:06.001170       7 endpointslice_controller.go:237] Starting endpoint slice controller
I0421 09:38:06.001178       7 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
I0421 09:38:06.005284       7 controllermanager.go:554] Started "replicaset"
I0421 09:38:06.005358       7 replica_set.go:182] Starting replicaset controller
I0421 09:38:06.005365       7 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
I0421 09:38:06.009186       7 node_ipam_controller.go:91] Sending events to api server.
E0421 09:38:06.373013       7 node.go:161] Failed to retrieve node info: nodes "k3d-k3s-default-server-0" not found
time="2021-04-21T09:38:06.470426993Z" level=info msg="Done waiting for CRD helmchartconfigs.helm.cattle.io to become available"
time="2021-04-21T09:38:06.474559134Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.81.0.tgz"
time="2021-04-21T09:38:06.474718647Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"
time="2021-04-21T09:38:06.474796138Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml"
time="2021-04-21T09:38:06.474868391Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
time="2021-04-21T09:38:06.474911583Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
time="2021-04-21T09:38:06.474958448Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
time="2021-04-21T09:38:06.475002610Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
time="2021-04-21T09:38:06.475074838Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
time="2021-04-21T09:38:06.475119939Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
time="2021-04-21T09:38:06.475161044Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
time="2021-04-21T09:38:06.475207475Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
time="2021-04-21T09:38:06.475252524Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
time="2021-04-21T09:38:06.475297581Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
time="2021-04-21T09:38:06.575745726Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
time="2021-04-21T09:38:06.578650327Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
time="2021-04-21T09:38:06.584280965Z" level=info msg="Cluster dns configmap has been set successfully"
I0421 09:38:06.595197       7 controller.go:609] quota admission added evaluator for: addons.k3s.cattle.io
I0421 09:38:06.658736       7 controller.go:609] quota admission added evaluator for: deployments.apps
time="2021-04-21T09:38:06.677208499Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
time="2021-04-21T09:38:06.677239289Z" level=info msg="Starting /v1, Kind=Pod controller"
time="2021-04-21T09:38:06.677226427Z" level=info msg="Starting /v1, Kind=Node controller"
time="2021-04-21T09:38:06.677243499Z" level=info msg="Starting /v1, Kind=Endpoints controller"
time="2021-04-21T09:38:06.677227961Z" level=info msg="Starting /v1, Kind=Service controller"
time="2021-04-21T09:38:06.677231824Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChartConfig controller"
time="2021-04-21T09:38:06.677237181Z" level=info msg="Starting batch/v1, Kind=Job controller"
time="2021-04-21T09:38:06.874278788Z" level=info msg="Waiting for node k3d-k3s-default-server-0: nodes \"k3d-k3s-default-server-0\" not found"
I0421 09:38:06.882737       7 controller.go:609] quota admission added evaluator for: helmcharts.helm.cattle.io
I0421 09:38:06.899239       7 controller.go:609] quota admission added evaluator for: jobs.batch
I0421 09:38:07.127519       7 request.go:655] Throttling request took 1.048559583s, request: GET:https://127.0.0.1:6444/apis/flowcontrol.apiserver.k8s.io/v1beta1?timeout=32s
time="2021-04-21T09:38:07.381620001Z" level=info msg="Starting /v1, Kind=Secret controller"
time="2021-04-21T09:38:07.579310566Z" level=info msg="Active TLS secret k3s-serving (ver=300) (count 8): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.19.0.2:172.19.0.2 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=7E838A160CCE0861C9DBDC7B1357D388A3011666]"
time="2021-04-21T09:38:07.777394002Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
time="2021-04-21T09:38:07.859265959Z" level=info msg="Stopped tunnel to 127.0.0.1:6443"
time="2021-04-21T09:38:07.859295737Z" level=info msg="Connecting to proxy" url="wss://172.19.0.2:6443/v1-k3s/connect"
time="2021-04-21T09:38:07.859314977Z" level=info msg="Proxy done" err="context canceled" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2021-04-21T09:38:07.859378006Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
time="2021-04-21T09:38:07.860843371Z" level=info msg="Handling backend connection request [k3d-k3s-default-server-0]"
W0421 09:38:07.878160       7 info.go:53] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
I0421 09:38:07.878356       7 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
I0421 09:38:07.878443       7 container_manager_linux.go:287] container manager verified user specified cgroup-root exists: []
I0421 09:38:07.878451       7 container_manager_linux.go:292] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none Rootless:false}
I0421 09:38:07.878497       7 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope
I0421 09:38:07.878502       7 container_manager_linux.go:323] [topologymanager] Initializing Topology Manager with none policy and container-level scope
I0421 09:38:07.878505       7 container_manager_linux.go:328] Creating device plugin manager: true
I0421 09:38:07.878632       7 kubelet.go:265] Adding pod path: /var/lib/rancher/k3s/agent/pod-manifests
I0421 09:38:07.878646       7 kubelet.go:276] Watching apiserver
I0421 09:38:07.878695       7 kubelet.go:453] Kubelet client is not nil
I0421 09:38:07.879059       7 kuberuntime_manager.go:216] Container runtime containerd initialized, version: v1.4.4-k3s1, apiVersion: v1alpha2
W0421 09:38:07.879154       7 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
I0421 09:38:07.879429       7 server.go:1177] Started kubelet
I0421 09:38:07.879494       7 server.go:148] Starting to listen on 0.0.0.0:10250
I0421 09:38:07.880184       7 server.go:410] Adding debug handlers to kubelet server.
I0421 09:38:07.880520       7 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
E0421 09:38:07.880950       7 cri_stats_provider.go:376] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
E0421 09:38:07.880985       7 kubelet.go:1296] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
I0421 09:38:07.881131       7 volume_manager.go:271] Starting Kubelet Volume Manager
I0421 09:38:07.881241       7 desired_state_of_world_populator.go:142] Desired state populator starts to run
E0421 09:38:07.886904       7 nodelease.go:49] failed to get node "k3d-k3s-default-server-0" when trying to set owner ref to the node lease: nodes "k3d-k3s-default-server-0" not found
I0421 09:38:07.887341       7 cpu_manager.go:193] [cpumanager] starting with none policy
I0421 09:38:07.887350       7 cpu_manager.go:194] [cpumanager] reconciling every 10s
I0421 09:38:07.887373       7 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0421 09:38:07.892192       7 policy_none.go:43] [cpumanager] none policy: Start
I0421 09:38:07.899429       7 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
I0421 09:38:07.899456       7 status_manager.go:158] Starting to sync pod status with apiserver
I0421 09:38:07.899469       7 kubelet.go:1833] Starting kubelet main sync loop.
E0421 09:38:07.899512       7 kubelet.go:1857] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
E0421 09:38:07.923615       7 node_container_manager_linux.go:57] Failed to create ["kubepods"] cgroup
W0421 09:38:07.923615       7 watcher.go:95] Error while processing event ("/sys/fs/cgroup/kubepods": 0x40000100 == IN_CREATE|IN_ISDIR): readdirent: no such file or directory
F0421 09:38:07.923633       7 kubelet.go:1372] Failed to start ContainerManager cannot enter cgroupv2 "/sys/fs/cgroup/kubepods" with domain controllers -- it is in an invalid state

Followed by a large dump of goroutine stack traces (200K):

goroutine 16895 [running]:
github.com/rancher/k3s/vendor/k8s.io/klog/v2.stacks(0xc0001a0001, 0xc009a0a360, 0xb5, 0x10c)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
github.com/rancher/k3s/vendor/k8s.io/klog/v2.(*loggingT).output(0x7174820, 0xc000000003, 0x0, 0x0, 0xc007927570, 0x6e065da, 0xa, 0x55c, 0x0)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
github.com/rancher/k3s/vendor/k8s.io/klog/v2.(*loggingT).printf(0x7174820, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x47120a7, 0x23, 0xc00c9689a0, 0x1, ...)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/v2/klog.go:750 +0x191
github.com/rancher/k3s/vendor/k8s.io/klog/v2.Fatalf(...)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/v2/klog.go:1502
github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).initializeRuntimeDependentModules(0xc009015500)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1372 +0x446
sync.(*Once).doSlow(0xc009015d50, 0xc008e41de8)
	/usr/local/go/src/sync/once.go:66 +0xec
sync.(*Once).Do(...)
	/usr/local/go/src/sync/once.go:57
github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).updateRuntimeUp(0xc009015500)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:2204 +0x659
github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00bedbd80)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00bedbd80, 0x4d9b280, 0xc008ebbb00, 0xc0064a0401, 0xc000182660)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00bedbd80, 0x12a05f200, 0x0, 0x1, 0xc000182660)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc00bedbd80, 0x12a05f200, 0xc000182660)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).Run
	/go/src/github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1425 +0x16a

goroutine 1 [sleep]:
time.Sleep(0x77359400)
	/usr/local/go/src/runtime/time.go:188 +0xbf
github.com/rancher/k3s/pkg/agent/flannel.Run(0x4e56240, 0xc0006ae5c0, 0xc006ae7500, 0x4e94100, 0xc00b01d940, 0x0, 0x0)
	/go/src/github.com/rancher/k3s/pkg/agent/flannel/setup.go:93 +0x65
github.com/rancher/k3s/pkg/agent.run(0x4e56240, 0xc0006ae5c0, 0xc008231ce0, 0x5e, 0x0, 0x0, 0x0, 0x0, 0xc002d6eec0, 0x16, ...)
	/go/src/github.com/rancher/k3s/pkg/agent/run.go:102 +0x3d9
github.com/rancher/k3s/pkg/agent.Run(0x4e56240, 0xc0006ae5c0, 0xc008231ce0, 0x5e, 0x0, 0x0, 0x0, 0x0, 0xc002d6eec0, 0x16, ...)
	/go/src/github.com/rancher/k3s/pkg/agent/run.go:167 +0x438
github.com/rancher/k3s/pkg/cli/server.run(0xc0002c3b80, 0x7175700, 0xc000faed58, 0x0, 0x0, 0xc000faed58, 0x0, 0x0, 0x0, 0x0)
	/go/src/github.com/rancher/k3s/pkg/cli/server/server.go:352 +0x1cf8
github.com/rancher/k3s/pkg/cli/server.Run(0xc0002c3b80, 0x0, 0x0)
	/go/src/github.com/rancher/k3s/pkg/cli/server/server.go:44 +0x85
github.com/rancher/k3s/vendor/github.com/urfave/cli.HandleAction(0x3f353c0, 0x48a2bb8, 0xc0002c3b80, 0xc0002c3b80, 0x0)
	/go/src/github.com/rancher/k3s/vendor/github.com/urfave/cli/app.go:523 +0xfd
github.com/rancher/k3s/vendor/github.com/urfave/cli.Command.Run(0x468e118, 0x6, 0x0, 0x0, 0x0, 0x0, 0x0, 0x46c36a1, 0x15, 0xc00120c660, ...)
	/go/src/github.com/rancher/k3s/vendor/github.com/urfave/cli/command.go:174 +0x58e
github.com/rancher/k3s/vendor/github.com/urfave/cli.(*App).Run(0xc0009dac40, 0xc0003ad000, 0x4, 0x4, 0x0, 0x0)
	/go/src/github.com/rancher/k3s/vendor/github.com/urfave/cli/app.go:276 +0x7d4
main.main()
	/go/src/github.com/rancher/k3s/cmd/server/main.go:49 +0x69a
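
The interesting line is easy to lose in that much output; filtering the container logs for the klog warning/error/fatal lines narrows it down (plain docker CLI here, dk above is presumably an alias for it):

docker logs k3d-k3s-default-server-0 2>&1 | grep -E '^[WEF][0-9]{4}'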

[2021-04-21 Wed 11:50] cgroups v2 incompatibility?

I tried rolling back k3s, following this suggestion:

I think I’m seeing the same or similar issue. When I rollback to rancher/k3s:v1.19.7-k3s1, the cluster starts fine.

https://github.com/rancher/k3d/issues/493

k3d cluster create --image rancher/k3s:v1.19.7-k3s1
time="2021-04-21T09:52:33.671627786Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
time="2021-04-21T09:52:33.672720650Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
time="2021-04-21T09:52:33.672745721Z" level=info msg="To join node to cluster: k3s agent -s https://172.20.0.2:6443 -t ${NODE_TOKEN}"
time="2021-04-21T09:52:33.673670801Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
time="2021-04-21T09:52:33.673734280Z" level=info msg="Run: k3s kubectl"
time="2021-04-21T09:52:33.673783710Z" level=warning msg="Failed to find cpuset cgroup, you may need to add \"cgroup_enable=cpuset\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
time="2021-04-21T09:52:33.673793774Z" level=error msg="Failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"
time="2021-04-21T09:52:33.673805914Z" level=fatal msg="failed to find memory cgroup, you may need to add \"cgroup_memory=1 cgroup_enable=memory\" to your linux cmdline (/boot/cmdline.txt on a Raspberry Pi)"

Not the same error: this older k3s looks for the cgroup v1 cpuset and memory hierarchies, which are not mounted on a cgroup v2-only host. Either way, it still does not work.

I am indeed using cgroup v2:

mount|gi cgroup
docker info | grep -i cgroup
 Cgroup Driver: cgroupfs
 Cgroup Version: 2
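
Another quick way to check, independent of Docker, is to look at the filesystem type of /sys/fs/cgroup (cgroup2fs means a unified, v2-only hierarchy) and at the controllers it exposes:

stat -fc %T /sys/fs/cgroup/
cat /sys/fs/cgroup/cgroup.controllers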

[2021-04-21 Wed 12:00] trying other versions of k3s

k3d cluster create --image rancher/k3s:v1.20.4-k3s1
E0421 09:59:44.576016       7 kubelet.go:1853] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
E0421 09:59:44.607749       7 node_container_manager_linux.go:57] Failed to create ["kubepods"] cgroup
W0421 09:59:44.607761       7 watcher.go:95] Error while processing event ("/sys/fs/cgroup/kubepods": 0x40000100 == IN_CREATE|IN_ISDIR): readdirent: no such file or directory
F0421 09:59:44.607787       7 kubelet.go:1368] Failed to start ContainerManager cannot enter cgroupv2 "/sys/fs/cgroup/kubepods" with domain controllers -- it is in an invalid state

[2021-04-21 Wed 14:06] back to cgroups v1

I did what is advised here.

Using Debian Sid, in the meantime, I personally switched back to cgroup v1. I added systemd.unified_cgroup_hierarchy=0 to my GRUB_CMDLINE_LINUX_DEFAULT (/etc/default/grub) and then ran update-grub

https://github.com/rancher/k3d/issues/493
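
For the record, the change boils down to something like this (a sketch; the existing contents of GRUB_CMDLINE_LINUX_DEFAULT will differ, the parameter just needs to be appended):

# in /etc/default/grub, append the parameter to the existing options, e.g.:
GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"
# then regenerate the grub config and reboot
sudo update-grub
sudo reboot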

mount|gi cgroup
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,size=4096k,nr_inodes=1024,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)

I think it means I’m back to having cgroup v1.

docker info | grep -i cgroup
 Cgroup Driver: cgroupfs
 Cgroup Version: 1

And then,

k3d cluster create
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default' (30942feb6381247a01b17eb491ec0ea4e0e3a483732e81eb5e4324e2e6e1d19e)
INFO[0000] Created volume 'k3d-k3s-default-images'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0004] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0005] Starting cluster 'k3s-default'
INFO[0005] Starting servers...
INFO[0005] Starting Node 'k3d-k3s-default-server-0'
INFO[0011] Starting agents...
INFO[0011] Starting helpers...
INFO[0011] Starting Node 'k3d-k3s-default-serverlb'
INFO[0011] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0015] Successfully added host record to /etc/hosts in 2/2 nodes and to the CoreDNS ConfigMap
INFO[0015] Cluster 'k3s-default' created successfully!
INFO[0015] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0016] You can now use it like this:
kubectl config use-context k3d-k3s-default
kubectl cluster-info

And the logs seem fine.

I could create a pod. Everything appears to be back to normal.
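
For the record, a quick smoke test along these lines (hypothetical pod name, any small image works) is enough to confirm that pods get scheduled and run again:

kubectl run smoke-test --image=nginx --restart=Never
kubectl get pod smoke-test -w
kubectl delete pod smoke-test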