How to share data with a local Kubernetes cluster, using a custom PersistentVolume.
I will use the clk k8s extension to easily bootstrap a k3d cluster.
clk parameter set k8s --distribution k3d
clk parameter set k8s.create-cluster --volume /tmp/k3d:/tmp/k3d
clk k8s flow
This basically passes the parameter --volume /tmp/k3d:/tmp/k3d to the clk k8s create-cluster command, which in turn provides it to k3d cluster create.
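If you prefer not to go through clk, the equivalent invocation would be something along these lines (a sketch, assuming the default cluster name):

k3d cluster create --volume /tmp/k3d:/tmp/k3d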
Then, say that you are debugging a deployment that contains a PersistentVolumeClaim named my-claim in the default namespace.
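For reference, such a claim might look like the following (a minimal sketch: the actual claim comes from the deployment you are debugging, and only its name and namespace matter for the binding below):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-path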
The PersistentVolume to create is then simply the following:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-volume
spec:
  accessModes:
    - ReadWriteOnce
  claimRef:
    name: my-claim
    namespace: default
  capacity:
    storage: 1Gi
  hostPath:
    path: /tmp/k3d/myvolume/
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k3d-k3s-default-server-0
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-path
  volumeMode: Filesystem
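Assuming you saved this manifest as shared-volume.yaml (the file name is arbitrary), create the volume with:

kubectl apply -f shared-volume.yaml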
The nodeAffinity is not strictly needed, as the standard k3d cluster has only one node, but it might help you remember that you explicitly put this persistent volume on that node.
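If your cluster or node is named differently, adjust the value accordingly; in a default k3d setup the node name matches its kubernetes.io/hostname label, so you can list it with:

kubectl get nodes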
Then, appreciate the fact that the claim is correctly bound.
kubectl get pvc my-claim
NAME | STATUS | VOLUME | CAPACITY | ACCESS MODES | STORAGECLASS | AGE
---|---|---|---|---|---|---
my-claim | Bound | shared-volume | 1Gi | RWO | local-path | 2m31s
Then, everything that uses this volume can be seen and played with from the host under /tmp/k3d (more precisely, in /tmp/k3d/myvolume/, the hostPath of the volume).
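For instance, assuming the deployment is named my-deployment and mounts the claim at /data (both names are hypothetical here), you can check that a file written on the host shows up inside the pod:

echo hello > /tmp/k3d/myvolume/hello.txt
kubectl exec deploy/my-deployment -- cat /data/hello.txt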