Storage
concepts
1. Persistent Volumes
2. Persistent Volume Claims
3. Configure applications with persistent storage
4. Access modes for Volumes
5. Kubernetes Storage Object
storage_in_docker
Docker Storage:
--------------------------->
1. Storage Drivers --> AUFS, ZFS, BTRFS, DEVICE MAPPER, OVERLAY, OVERLAY2
2. Volume Driver Plugins: Local, Azure File Storage, Convoy, gce-docker, GlusterFS, VMware vSphere Storage, rexray
The local driver is the default.
docker run -it \
  --name mysql \
  --volume-driver rexray/ebs \
  --mount src=ebs-vol,target=/var/lib/mysql \
  mysql
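Note: rexray/ebs ships as a Docker managed plugin, so it has to be installed before the command above works. A minimal sketch (EBS_ACCESSKEY/EBS_SECRETKEY are the plugin's AWS credential settings; the values here are placeholders):
docker plugin install rexray/ebs \
  EBS_ACCESSKEY=<aws_access_key> EBS_SECRETKEY=<aws_secret_key>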
Docker Storage Drivers and Filesystems:
-------------------------------------------------->
on local FS:
------------->
/var/lib/docker
Docker's layered architecture
------------------------------->
reuse cached image layers
"image layer" --> Read Only
-------------------------------------------------->
Layer5: update entrypoint with "flask" command
Layer4: Source code
Layer3: changes in pip packages
Layer2: Changes in apt packages
Layer1: Base Ubuntu layer
"container layer" --> Read Write
----------------------------------------------------->
docker build -t bharath/webapp .
docker run bharath/webapp
when we run a container from the image ...
Docker creates the container on top of the read-only image layers and adds a new writable layer (the "container layer") on top of them.
This writable layer stores data created by the container, such as log files written by the application, temporary files, or any files modified by the user inside the container; it lives only as long as the container does.
The same read-only image layers are shared by all containers created from this image.
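To inspect the cached, read-only layers of the image we just built, docker history lists them newest-first:
docker history bharath/webapp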
Volume mounts: volumes created and managed by Docker (volume mount)
------------------------------------------------------->
docker volume create data_volume
docker run -v data_volume:/var/lib/mysql mysql
docker run -v data_volume2:/var/lib/mysql mysql   # docker automatically creates data_volume2 (same as: docker volume create data_volume2)
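Named volumes live under /var/lib/docker/volumes on the host; to verify what was created (the inspect output includes the Mountpoint):
docker volume ls
docker volume inspect data_volume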
To mount an existing host directory (because the data is already present there), we need to use the full path, as below (bind mount):
----------------------------------------------------------------------------------------------------------------------->
docker run -v /data/mysql:/var/lib/mysql mysql
docker run --mount type=bind,source=/data/mysql,target=/var/lib/mysql mysql
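--mount is the newer, more explicit syntax and takes extra options; for example, the same bind mount made read-only (readonly is a standard --mount option):
docker run --mount type=bind,source=/data/mysql,target=/var/lib/mysql,readonly mysql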
container_runtime
rkt    |
docker |---> CRI (Container Runtime Interface) ---> Kubernetes
cri-o  |
In the same way, the CNI (Container Network Interface) was developed: any networking solution (Weaveworks, Flannel, Cilium) follows the CNI guidelines to build a network plugin for k8s.
Similarly, the CSI (Container Storage Interface) was developed: any storage solution (Portworx, Amazon EBS, Dell EMC, GlusterFS) follows the CSI guidelines to build a storage plugin for k8s.
CSI is defined as a set of RPCs (Remote Procedure Calls): for example, Kubernetes calls the driver's CreateVolume RPC when a new volume is needed and DeleteVolume when it is removed, and the driver implements those calls against the vendor's storage.
k8svolumes
volumes in kubernetes
To persist data processed by containers, we attach volumes to the containers when they are created.
attach volumes to pods in k8s:
--------------------------------------->
spec:
  containers:
  - image: alpine
    name: alpine
    command: ["/bin/sh", "-c"]
    args: ["shuf -i 0-100 -n 1 >> /opt/number.out"]   # write into the mounted volume, not /tmp
    volumeMounts:
    - mountPath: /opt
      name: data-volume
  volumes:
  - name: data-volume
    hostPath:
      path: /Data
      type: Directory
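A quick check that the number survives outside the container (assuming the pod is named random-number-generator, as in the full manifest below, and is scheduled on this node):
kubectl exec random-number-generator -- cat /opt/number.out
cat /Data/number.out   # same file, read directly from the host directory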
Volume storage options: hostPath works on a single node, but in a multi-node cluster the /Data directory would have to exist (with identical data) on every host, which is not practical.
For that we can use NFS, EFS, EBS, Google Persistent Disk, GlusterFS, etc.
For example, to use AWS EBS as the volume storage type:
volumes:
- name: data-volume
  awsElasticBlockStore:
    volumeID: <volume_id>
    fsType: ext4
podvolumes
apiVersion: v1
kind: Pod
metadata:
  name: random-number-generator
spec:
  containers:
  - image: alpine
    name: alpine
    command: ["/bin/sh", "-c"]
    args: ["shuf -i 0-100 -n 1 >> /opt/number.out"]
    volumeMounts:
    - mountPath: /opt
      name: data-volume
  volumes:
  - name: data-volume
    hostPath:
      path: /Data
      type: Directory
  - name: cloud-volume
    awsElasticBlockStore:
      volumeID: <volume-ID>
      fsType: ext4
PVS_in_k8s
volumes:
- name: data-volume
  awsElasticBlockStore:
    volumeID: <volume_id>
    fsType: ext4
--------------------------------------------------------------------------------------------------->
A "PersistentVolumes(PVs)" is a cluster wide pool of storage volumes configured by an administrator, to be used in application deployed by users on the cluster.
users can now select storage from this PersistentVolumes using an object called "PersistentVolumeClaims(PVCs)"
"accessModes:"
--------------------->
ReadOnlyMany  (ROX) - mounted read-only by many nodes
ReadWriteOnce (RWO) - mounted read-write by a single node
ReadWriteMany (RWX) - mounted read-write by many nodes
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/data
  # or, instead of hostPath (a PV takes exactly one volume source):
  # awsElasticBlockStore:
  #   volumeID: <volume_id>
  #   fsType: ext4
"other options:"
------------------------->
  storageClassName: slow
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2
pv
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-vol
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  # hostPath:
  #   path: /tmp/data
  awsElasticBlockStore:
    volumeID: <volume_id>
    fsType: ext4
demopv
PersistentVolumeClaims have a one-to-one relationship with PersistentVolumes: once a claim binds to a PV, no other claim can bind to that PV, and if the claim is smaller than the volume, the remaining capacity is unusable by other claims.
pvc
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-vol
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
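To apply the two objects above and watch the claim bind (a sketch, assuming they are saved together as pv-and-pvc.yaml):
kubectl apply -f pv-and-pvc.yaml
kubectl get pv,pvc    # the claim shows Bound once it is matched to pv-vol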
pvcs_in_pods
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
demopvc
root@controlplane:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
webapp 1/1 Running 0 14s
root@controlplane:~#
root@controlplane:~# kubectl describe pod webapp
Name: webapp
Namespace: default
Priority: 0
Node: controlplane/10.51.220.9
Start Time: Sat, 25 Sep 2021 00:57:24 +0000
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.244.0.4
IPs:
IP: 10.244.0.4
Containers:
event-simulator:
Container ID: docker://6b01ef3409ea8fa1044717f94d8fc3c5a85fd468ac4049b67cd981b1453a4ab1
Image: kodekloud/event-simulator
Image ID: docker-pullable://kodekloud/event-simulator@sha256:1e3e9c72136bbc76c96dd98f29c04f298c3ae241c7d44e2bf70bcc209b030bf9
Port: <none>
Host Port: <none>
State: Running
Started: Sat, 25 Sep 2021 00:57:34 +0000
Ready: True
Restart Count: 0
Environment:
LOG_HANDLERS: file
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-rqj8k (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-rqj8k:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-rqj8k
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 49s default-scheduler Successfully assigned default/webapp to controlplane
Normal Pulling 47s kubelet Pulling image "kodekloud/event-simulator"
Normal Pulled 40s kubelet Successfully pulled image "kodekloud/event-simulator" in 7.353654278s
Normal Created 39s kubelet Created container event-simulator
Normal Started 39s kubelet Started container event-simulator
root@controlplane:~#
root@controlplane:~# kubectl exec webapp -- cat /log/app.log
[2021-09-25 00:57:34,911] INFO in event-simulator: USER2 logged in
[2021-09-25 00:57:35,911] INFO in event-simulator: USER1 logged out
[2021-09-25 00:57:36,913] INFO in event-simulator: USER1 logged in
[2021-09-25 00:57:37,913] INFO in event-simulator: USER1 is viewing page1
[2021-09-25 00:57:38,915] INFO in event-simulator: USER3 logged out
[2021-09-25 00:57:39,916] WARNING in event-simulator: USER5 Failed to Login as the account is locked due to MANY FAILED ATTEMPTS.
[2021-09-25 00:57:39,916] INFO in event-simulator: USER1 logged in
[2021-09-25 00:57:40,917] INFO in event-simulator: USER2 is viewing page1
[2021-09-25 00:57:41,918] INFO in event-simulator: USER2 is viewing page2
[2021-09-25 00:57:42,919] WARNING in event-simulator: USER7 Order failed as the item is OUT OF STOCK.
[2021-09-25 00:57:42,920] INFO in event-simulator: USER3 logged out
[2021-09-25 00:57:43,920] INFO in event-simulator: USER1 is viewing page2
[2021-09-25 00:57:44,922] WARNING in event-simulator: USER5 Failed to Login as the account is locked due to MANY FAILED ATTEMPTS.
[2021-09-25 00:57:44,922] INFO in event-simulator: USER2 is viewing page1
[2021-09-25 00:57:45,922] INFO in event-simulator: USER3 is viewing page2
[2021-09-25 00:57:46,924] INFO in event-simulator: USER4 logged out
[2021-09-25 00:57:47,925] INFO in event-simulator: USER3 logged out
[2021-09-25 00:57:48,925] INFO in event-simulator: USER2 is viewing page3
[2021-09-25 00:57:49,926] WARNING in event-simulator: USER5 Failed to Login as the account is locked due to MANY FAILED ATTEMPTS.
[2021-09-25 00:57:49,927] INFO in event-simulator: USER2 is viewing page2
[2021-09-25 00:57:50,928] WARNING in event-simulator: USER7 Order failed as the item is OUT OF STOCK.
[2021-09-25 00:57:50,928] INFO in event-simulator: USER2 is viewing page3
[2021-09-25 00:57:51,929] INFO in event-simulator: USER4 logged in
[2021-09-25 00:57:52,930] INFO in event-simulator: USER3 is viewing page1
[2021-09-25 00:57:53,932] INFO in event-simulator: USER4 is viewing page3
root@controlplane:~#
Configure a volume to store these logs at /var/log/webapp on the host.
#===============================================================================
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: event-simulator
    image: kodekloud/event-simulator
    env:
    - name: LOG_HANDLERS
      value: file
    volumeMounts:
    - mountPath: /log
      name: log-volume
  volumes:
  - name: log-volume
    hostPath:
      # directory location on host
      path: /var/log/webapp
      # this field is optional
      type: Directory
#===============================================================================
kubectl edit cannot change a running pod's volumes, so the edit is rejected; delete the pod and re-create it from the saved copy:
root@controlplane:~# kubectl edit pod webapp
error: pods "webapp" is invalid
A copy of your changes has been stored to "/tmp/kubectl-edit-6ic2t.yaml"
error: Edit cancelled, no valid changes were saved.
root@controlplane:~# kubectl delete pod webapp
pod "webapp" deleted
root@controlplane:/var/log/webapp# pwd
/var/log/webapp
root@controlplane:/var/log/webapp# ls -rtlh
total 8.0K
-rw-r--r-- 1 root root 4.1K Sep 25 01:08 app.log
root@controlplane:/var/log/webapp# tail -f app.log
[2021-09-25 01:08:48,417] INFO in event-simulator: USER1 is viewing page2
[2021-09-25 01:08:49,417] INFO in event-simulator: USER1 logged in
[2021-09-25 01:08:50,419] INFO in event-simulator: USER1 is viewing page1
[2021-09-25 01:08:51,420] INFO in event-simulator: USER3 logged in
[2021-09-25 01:08:52,421] WARNING in event-simulator: USER5 Failed to Login as the account is locked due to MANY FAILED ATTEMPTS.
[2021-09-25 01:08:52,421] WARNING in event-simulator: USER7 Order failed as the item is OUT OF STOCK.
[2021-09-25 01:08:52,421] INFO in event-simulator: USER1 logged out
[2021-09-25 01:08:53,422] INFO in event-simulator: USER4 is viewing page1
[2021-09-25 01:08:54,423] INFO in event-simulator: USER4 is viewing page1
[2021-09-25 01:08:55,424] INFO in event-simulator: USER4 is viewing page2
[2021-09-25 01:08:56,426] INFO in event-simulator: USER2 is viewing page2
[2021-09-25 01:08:57,426] WARNING in event-simulator: USER5 Failed to Login as the account is locked due to MANY FAILED ATTEMPTS.
[2021-09-25 01:08:57,427] INFO in event-simulator: USER1 is viewing page3
[2021-09-25 01:08:58,428] INFO in event-simulator: USER2 logged in
[2021-09-25 01:08:59,429] INFO in event-simulator: USER3 is viewing page3
[2021-09-25 01:09:00,430] WARNING in event-simulator: USER7 Order failed as the item is OUT OF STOCK.
[2021-09-25 01:09:00,430] INFO in event-simulator: USER2 logged out
[2021-09-25 01:09:01,432] INFO in event-simulator: USER1 is viewing page3
[2021-09-25 01:09:02,433] WARNING in event-simulator: USER5 Failed to Login as the account is locked due to MANY FAILED ATTEMPTS.
[2021-09-25 01:09:02,433] INFO in event-simulator: USER1 is viewing page1
[2021-09-25 01:09:03,434] INFO in event-simulator: USER3 is viewing page1
^C
root@controlplane:/var/log/webapp#
Create a Persistent Volume with the given specification.
Volume Name: pv-log
Storage: 100Mi
Access Modes: ReadWriteMany
Host Path: /pv/log
Reclaim Policy: Retain
root@controlplane:~# vim pv.yaml
root@controlplane:~# kubectl apply -f pv.yaml
persistentvolume/pv-log created
root@controlplane:~#
Let us claim some of that storage for our application. Create a Persistent Volume Claim with the given specification.
Volume Name: claim-log-1
Storage Request: 50Mi
Access Modes: ReadWriteOnce
root@controlplane:~# kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pv-log 100Mi RWX Retain Available 6m33s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/claim-log-1 Pending 26s
root@controlplane:~#
The PV must support the access mode requested by the PVC: pv-log only offers ReadWriteMany, so a claim asking for ReadWriteOnce stays Pending. Delete the claim and re-create it with ReadWriteMany:
root@controlplane:~# kubectl delete pvc claim-log-1
persistentvolumeclaim "claim-log-1" deleted
root@controlplane:~# kubectl apply -f pvc.yaml
persistentvolumeclaim/claim-log-1 created
root@controlplane:~# kubectl get pv,pvc -o wide
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE VOLUMEMODE
persistentvolume/pv-log 100Mi RWX Retain Bound default/claim-log-1 12m Filesystem
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/claim-log-1 Bound pv-log 100Mi RWX 18s Filesystem
root@controlplane:~#
root@controlplane:~# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-log 100Mi RWX Retain Released default/claim-log-1 25m
root@controlplane:~#
After the claim is deleted, the PV becomes Released rather than Available: the Retain reclaim policy keeps the data, and an administrator must manually reclaim the volume before it can be bound again.
add_volumes
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: event-simulator
    image: kodekloud/event-simulator
    env:
    - name: LOG_HANDLERS
      value: file
    volumeMounts:
    - mountPath: /log
      name: log-volume
  volumes:
  - name: log-volume
    hostPath:
      # directory location on host
      path: /var/log/webapp
      # this field is optional
      type: Directory
create_pv
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-log
spec:
  capacity:
    storage: 100Mi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /pv/log
create_pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-log-1
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50Mi
pvcs_in_pods
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: event-simulator
    image: kodekloud/event-simulator
    env:
    - name: LOG_HANDLERS
      value: file
    volumeMounts:
    - mountPath: /log
      name: log-volume
  volumes:
  - name: log-volume
    persistentVolumeClaim:
      claimName: claim-log-1
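With claim-log-1 bound to pv-log, the application's logs now land in the PV's hostPath; a quick check on the node (sketch):
kubectl exec webapp -- cat /log/app.log   # the log as seen inside the pod
tail -n 2 /pv/log/app.log                 # the same log, read from the PV's host path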
storage_classes
pv-definition.yaml:
-------------------------->
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-vol1
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 500Mi
  gcePersistentDisk:
    pdName: pd-disk
    fsType: ext4
The problem with the PV definition above is that the persistent disk must first be created manually in GCP, like below:
gcloud beta compute disks create pd-disk --size 1GB --region asia-southeast1
With storage classes, Kubernetes can instead provision the disk in GCP automatically, on demand (dynamic provisioning).
dynamically provision disks in GCP using storage classes:
--------------------------------------------------------------->
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: google-storage
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: none
With a StorageClass defined, we no longer need to create PersistentVolumes manually: the provisioner creates a PV automatically when a claim requests it.
In the PersistentVolumeClaim definition, specify the StorageClass as below:
------------------------------------------------------------------------------->
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: google-storage
  resources:
    requests:
      storage: 500Mi
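Once this claim is applied, the provisioner creates the GCP disk and a matching PV behind the scenes; a way to watch it happen (sketch, assuming the claim is saved as pvc.yaml):
kubectl apply -f pvc.yaml
kubectl get pvc myclaim    # goes Bound once the disk is provisioned
kubectl get pv             # shows the PV created automatically by the provisioner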
use PVC inside pod definition file:
------------------------------------------>
apiVersion: v1
kind: Pod
metadata:
  name: random-number-generator
spec:
  containers:
  - image: alpine
    name: alpine
    command: ["/bin/sh", "-c"]
    args: ["shuf -i 0-100 -n 1 >> /opt/number.out"]
    volumeMounts:
    - mountPath: /opt
      name: data-volume
  volumes:
  - name: data-volume
    persistentVolumeClaim:   # note: lowercase "p"; the field name is case-sensitive
      claimName: myclaim
demo_storage_classes
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: google-storage
provisioner: kubernetes.io/gce-pd
specify_storage_classes
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-log-1
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: google-storage
  resources:
    requests:
      storage: 50Mi
pvc_storage_classes
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: google-storage
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: none
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: google-storage
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: random-number-generator
spec:
  containers:
  - image: alpine
    name: alpine
    command: ["/bin/sh", "-c"]
    args: ["shuf -i 0-100 -n 1 >> /opt/number.out"]
    volumeMounts:
    - mountPath: /opt
      name: data-volume
  volumes:
  - name: data-volume
    persistentVolumeClaim:
      claimName: myclaim
labs_storage_classes
root@controlplane:~# kubectl get sc --all-namespaces
No resources found
root@controlplane:~#
root@controlplane:~# kubectl get sc --all-namespaces
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-storage kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 26s
portworx-io-priority-high kubernetes.io/portworx-volume Delete Immediate false 26s
root@controlplane:~# kubectl describe sc local-storage
Name: local-storage
IsDefaultClass: No
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"local-storage"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"WaitForFirstConsumer"}
Provisioner: kubernetes.io/no-provisioner
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: WaitForFirstConsumer
Events: <none>
root@controlplane:~# kubectl describe sc portworx-io-priority-high
Name: portworx-io-priority-high
IsDefaultClass: No
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"portworx-io-priority-high"},"parameters":{"priority_io":"high","repl":"1","snap_interval":"70"},"provisioner":"kubernetes.io/portworx-volume"}
Provisioner: kubernetes.io/portworx-volume
Parameters: priority_io=high,repl=1,snap_interval=70
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
root@controlplane:~#
root@controlplane:~# kubectl get pv --all-namespaces
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
local-pv 500Mi RWO Retain Available local-storage 11m
root@controlplane:~#
root@controlplane:~# kubectl describe pv local-pv
Name: local-pv
Labels: <none>
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass: local-storage
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 500Mi
Node Affinity:
Required Terms:
Term 0: kubernetes.io/hostname in [controlplane]
Message:
Source:
Type: LocalVolume (a persistent volume backed by local storage on a node)
Path: /opt/vol1
Events: <none>
root@controlplane:~#
Create a new PersistentVolumeClaim by the name of local-pvc that should bind to the volume local-pv.
Inspect the pv local-pv for the specs.
PVC: local-pvc
Correct Access Mode?
Correct StorageClass Used?
PVC requests volume size = 500Mi?
------------------------------------------------->
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  volumeName: local-pv
  resources:
    requests:
      storage: 500Mi
------------------------------------------------->
Note: the claim stays Pending at first because local-storage uses volumeBindingMode: WaitForFirstConsumer; it binds only once a pod that uses the claim is scheduled.
root@controlplane:~# kubectl describe pvc local-pvc
Name: local-pvc
Namespace: default
StorageClass: local-storage
Status: Bound
Volume: local-pv
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 500Mi
Access Modes: RWO
VolumeMode: Filesystem
Used By: <none>
Events: <none>
root@controlplane:~#
MacBook-Pro:7.Storage bharathdasaraju$ kubectl run nginx --image=nginx --dry-run=client -o yaml > 22.pod_uses_local_storagePVC.yaml
MacBook-Pro:7.Storage bharathdasaraju$
root@controlplane:~# vim 22.pod_uses_local_storagePVC.yaml
root@controlplane:~# kubectl apply -f 22.pod_uses_local_storagePVC.yaml
pod/nginx created
root@controlplane:~#
root@controlplane:~# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
local-pvc Bound local-pv 500Mi RWO local-storage 11m
root@controlplane:~#
Create a new Storage Class called delayed-volume-sc that makes use of the below specs:
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: delayed-volume-sc
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
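A quick verification after applying it (sketch, assuming the manifest is saved as sc.yaml):
kubectl apply -f sc.yaml
kubectl get sc delayed-volume-sc    # no-provisioner class with WaitForFirstConsumer binding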
PV_to_PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  volumeName: local-pv
  resources:
    requests:
      storage: 500Mi
pod_localstorage
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - name: local-persistent-storage
      mountPath: /var/www/html
  volumes:
  - name: local-persistent-storage
    persistentVolumeClaim:
      claimName: local-pvc
storageclass_noprovisioner
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: delayed-volume-sc
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer