Playing With GCE Persistent Disk in K8s
Find out the zone of your cluster
gcloud container clusters list |sed -r 's/ +/ /g'|cut -f2 -d ' '
LOCATION
europe-west1-d
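As an aside, gcloud can extract that column by itself; assuming a reasonably recent gcloud, something like this should print only the location:
gcloud container clusters list --format='value(location)'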
Then, create the disk in this zone.
gcloud compute disks create --size=10GB --zone=europe-west1-d slo-test-disk
NAME ZONE SIZE_GB TYPE STATUS
slo-test-disk europe-west1-d 10 pd-standard READY
GKE does not want to hear about disks under 10GB…
Then, create the associated PersistentVolume.
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
name: "slo-pv"
spec:
capacity:
storage: 10Gi
accessModes:
- "ReadWriteOnce"
gcePersistentDisk:
fsType: "ext4"
pdName: "slo-test-disk"
Then, add it to k8s
kta ~/test/k8s/pv.yaml
persistentvolume/slo-pv created
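The kta, ktl, ktd and gi commands used throughout are shell aliases, roughly something like:
alias ktl='kubectl'          # (assumed definitions)
alias kta='kubectl apply -f'
alias ktd='kubectl delete -f'
alias gi='grep -i'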
Check that the PV is correctly associated with the GCE persistent disk
ktl describe pv slo-pv
Name: slo-pv
Labels: failure-domain.beta.kubernetes.io/region=europe-west1
failure-domain.beta.kubernetes.io/zone=europe-west1-d
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Bound
Claim: default/slo-pvc
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 10Gi
Node Affinity:
Required Terms:
Term 0: failure-domain.beta.kubernetes.io/zone in [europe-west1-d]
failure-domain.beta.kubernetes.io/region in [europe-west1]
Message:
Source:
Type: GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
PDName: slo-test-disk
FSType: ext4
Partition: 0
ReadOnly: false
Events: <none>
Great. Now, create the PVC.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: slo-pvc
spec:
  storageClassName: "" # The empty string must be set explicitly, otherwise the default StorageClass would be used
  volumeName: slo-pv
  resources:
    requests:
      storage: 100Mi
  accessModes:
    - ReadWriteOnce
kta ~/test/k8s/pvc.yaml
persistentvolumeclaim/slo-pvc created
Is it correctly bound to the PV?
ktl describe pvc slo-pvc
Name: slo-pvc
Namespace: default
StorageClass:
Status: Bound
Volume: slo-pv
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 10Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: slo-pod
Events: <none>
Great. Now, let’s try to write something into this volume with a simple pod.
apiVersion: v1
kind: Pod
metadata:
  name: slo-pod
spec:
  containers:
    - name: slo
      image: busybox
      command: ["sh"]
      args: ["-c", "echo test > /volume/test && sleep 3600"]
      volumeMounts:
        - mountPath: "/volume"
          name: slo-volume
  volumes:
    - name: slo-volume
      persistentVolumeClaim:
        claimName: slo-pvc
kta ~/test/k8s/writer.yaml
pod/slo-pod created
Check that the pod is correctly bound
ktl describe pod slo-pod |gi slo-pv
ClaimName: slo-pvc
Normal SuccessfulAttachVolume 52s attachdetach-controller AttachVolume.Attach succeeded for volume "slo-pv"
Is it running?
ktl get pod slo-pod
NAME READY STATUS RESTARTS AGE
slo-pod 1/1 Running 0 68s
Nice, let’s take a look at the content of the volume.
ktl exec slo-pod ls /volume
lost+found
test
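We could also cat the file to check its content, not just its presence:
ktl exec slo-pod -- cat /volume/test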
Ok, now let’s take a look at the GCE disk itself, just to make sure the content is really there.
Hmm, it looks like the content of a GCE persistent disk cannot be inspected directly from here.
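One way to inspect the disk outside of Kubernetes would be to attach it read-only to a throwaway VM once nothing in the cluster uses it anymore (a disk attached read-write to a node cannot be attached to another instance at the same time). A rough sketch, with some-vm standing in for any existing VM in the zone:
gcloud compute instances attach-disk some-vm --disk=slo-test-disk --device-name=slo-test-disk --mode=ro --zone=europe-west1-d
# then, from inside some-vm:
sudo mount -o ro /dev/disk/by-id/google-slo-test-disk /mnt
ls /mnt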
For now, let’s just start another pod to read the content and check that the file we wrote is there.
apiVersion: v1
kind: Pod
metadata:
  name: slo-reader
spec:
  containers:
    - name: slo
      image: busybox
      command: ["sh"]
      args: ["-c", "ls /volume/ && sleep 3600"]
      volumeMounts:
        - mountPath: "/volume"
          name: slo-volume
  volumes:
    - name: slo-volume
      persistentVolumeClaim:
        claimName: slo-pvc
kta ~/test/k8s/reader.yaml
pod/slo-reader created
Then, look at its output
ktl logs slo-reader
lost+found
test
Great, the content is there.
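Side note: the PV is ReadWriteOnce, yet the writer and the reader mount it at the same time; RWO restricts attachment to a single node, not a single pod, so this only works because both pods landed on the same node. Easy to check:
ktl get pods -o wide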
Now, let’s delete the pods to see whether the content actually persists.
ktd ~/test/k8s/writer.yaml
ktd ~/test/k8s/reader.yaml
pod "slo-writer" deleted
pod "slo-reader" deleted
Now, let’s spawn the reader again.
kta ~/test/k8s/reader.yaml
pod/slo-reader created
Then, look at its output
ktl logs slo-reader
lost+found
test
Ok, just like we expected.
Just to be sure, let’s write another file before concluding.
apiVersion: v1
kind: Pod
metadata:
  name: slo-writer2
spec:
  containers:
    - name: slo
      image: busybox
      command: ["sh"]
      args: ["-c", "echo test > /volume/test2 && sleep 3600"]
      volumeMounts:
        - mountPath: "/volume"
          name: slo-volume
  volumes:
    - name: slo-volume
      persistentVolumeClaim:
        claimName: slo-pvc
kta ~/test/k8s/writer2.yaml
pod/slo-writer2 created
kta ~/test/k8s/reader.yaml
pod/slo-reader created
ktl logs slo-reader
lost+found
test
test2
Ok, now I feel like this is working.
Let’s clean up a bit.
ktd ~/test/k8s/ -R
gcloud compute disks delete --zone=europe-west1-d slo-test-disk
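The disk delete may complain if the disk is still attached to a node; it can take a moment after the pods are gone for it to be detached. To double-check that nothing is left behind:
ktl get pv,pvc,pods
gcloud compute disks list --filter='name=slo-test-disk'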