With the distributed and dynamic nature of containers, managing and configuring storage statically has become a challenge on Kubernetes, with workloads now able to move from one worker node to another in a matter of seconds. To address this, Kubernetes manages volumes with a system of PersistentVolumes (PVs), API objects that represent a storage configuration or provisioned volume, and PersistentVolumeClaims (PVCs), requests for storage to be satisfied by a PersistentVolume. Additionally, Container Storage Interface (CSI) drivers help automate the handling and provisioning of storage for containerized workloads. These drivers are responsible for provisioning, mounting, unmounting, removing, and snapshotting volumes.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod: Pods consume node resources, while PVCs consume PV resources. Just as Pods can request specific levels of resources (CPU and memory), claims can request a specific size and access modes.
A PersistentVolume can be mounted on a worker node in any way supported by the resource provider. Different providers have different capabilities and each PV’s access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV’s capabilities.
The access modes are:
ReadWriteOnce: the volume can be mounted as read-write by a single node. ReadWriteOnce can still allow multiple pods to access the volume when those pods are running on the same node.
ReadOnlyMany: the volume can be mounted as read-only by many nodes.
ReadWriteMany: the volume can be mounted as read-write by many nodes.
ReadWriteOncePod: the volume can be mounted as read-write by a single Pod. Use the ReadWriteOncePod access mode if you want to ensure that only one pod across the whole cluster can read from or write to the PVC. This is only supported for CSI volumes and Kubernetes version 1.22+.
Using the VMware CNS driver (the built-in VMware CSI driver for vSphere with Tanzu), it is not possible to create ReadWriteMany volumes on VMFS, vVol, or NFS datastores.
However, this capability can be achieved using the external PowerScale CSI driver by creating the PVs directly on PowerScale. This functionality is also supported by PowerStore and Unity storage arrays.
ReadWriteMany volumes are volumes that can be mounted in a read/write fashion simultaneously into a number of pods. This is particularly useful for web and app servers that serve the same files, but also for CI systems like Jenkins, which can use a shared volume for artifact storage rather than unnecessarily duplicating data and impacting CI performance.
The PowerScale CSI driver integrates a Kubernetes cluster with the Dell EMC PowerScale storage product. A developer can use it to dynamically provision ReadWriteOnce volumes for containerized applications in Kubernetes; however, applications can sometimes require data to be persisted and shared across multiple pods, which is exactly what ReadWriteMany volumes provide.
The PowerScale CSI driver supports Persistent Volumes with the ReadWriteMany (RWX) access mode. An RWX PVC can be used simultaneously by many Pods in the same Kubernetes namespace for read and write operations.
To create a ReadWriteMany (RWX) volume with PowerScale, create a Persistent Volume Claim (PVC) with an access mode of ReadWriteMany.
The following YAML manifest files provide an example:
A 10Gi PVC using the powerscale storageClassName and a ReadWriteMany accessMode:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: external-nfs-pvc
spec:
  storageClassName: powerscale
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
By navigating to PowerScale OneFS and opening the File System Explorer, we can see that the new PV has been created successfully.
To demonstrate how we can read and write data simultaneously from multiple pods using the PowerScale CSI driver, I deployed two NGINX pods and specified the same claim name in each Pod's persistentVolumeClaim section.
apiVersion: v1
kind: Pod
metadata:
  name: external-nfs-pod-1
spec:
  volumes:
    - name: external-nfs-storage
      persistentVolumeClaim:
        claimName: external-nfs-pvc
  containers:
    - name: external-nfs-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: external-nfs-storage
---
apiVersion: v1
kind: Pod
metadata:
  name: external-nfs-pod-2
spec:
  volumes:
    - name: external-nfs-storage
      persistentVolumeClaim:
        claimName: external-nfs-pvc
  containers:
    - name: external-nfs-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: external-nfs-storage
We can see below that files we create and update in one pod are available in the other, and changes we make in either pod are reflected in both. This makes sense, as the pods share the same volume, and it proves that everything is working as planned.
Volume Snapshot Feature for vSphere with Tanzu
Another very useful CSI feature is volume snapshots: the ability to create and manage snapshots of existing persistent volumes. This feature is not supported by the CNS driver (the built-in VMware CSI driver), so we will use the PowerScale CSI driver to show how we can leverage the native PowerScale snapshot mechanism for vSphere with Tanzu workloads. This functionality is also supported by PowerStore and Unity storage arrays.
The CSI PowerScale driver version 2.0 and later supports managing v1 snapshots.
In order to use Volume Snapshots, ensure the following components have been deployed to your cluster:
Kubernetes Volume Snapshot CRDs
Volume Snapshot Controller
Volume Snapshot Class
No default Volume Snapshot Class is created during the installation of CSI PowerScale driver version 2.0. The following is a sample manifest for a Volume Snapshot Class:
VolumeSnapshotClass
# For Kubernetes 1.20 and above (v1 snapshots)
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: "powerscale-snapclass"
driver: csi-isilon.dellemc.com
# The deletionPolicy of a volume snapshot class can be either Retain or Delete.
# If the deletionPolicy is Delete, the underlying storage snapshot is deleted along with the VolumeSnapshotContent object.
# If the deletionPolicy is Retain, both the underlying snapshot and the VolumeSnapshotContent object remain.
deletionPolicy: Delete
parameters:
  # IsiPath should match the IsiPath of the corresponding storageClass
  IsiPath: "/ifs/data/csi"
The following is a sample manifest for creating a Volume Snapshot using the v1 snapshot APIs. The snippet assumes that the persistent volume claim name is testvolume.
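A minimal sketch of such a manifest, using the powerscale-snapclass Volume Snapshot Class defined above and the testvolume PVC; the snapshot name testvolume-snapshot is illustrative:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: testvolume-snapshot   # illustrative name
spec:
  # Volume Snapshot Class defined earlier in this post
  volumeSnapshotClassName: powerscale-snapclass
  source:
    # existing PVC to snapshot
    persistentVolumeClaimName: testvolume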
Once the VolumeSnapshot has been successfully created by the CSI PowerScale driver, a VolumeSnapshotContent object is automatically created. Once the status of the VolumeSnapshot object has the readyToUse field set to true, it is available for use.
Creating PVCs with Volume Snapshots as Source
The following is a sample manifest for creating a PVC with a VolumeSnapshot as a source:
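A minimal sketch, assuming the testvolume-snapshot VolumeSnapshot from the previous step and the powerscale storage class; the PVC name restored-pvc is illustrative:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc   # illustrative name
spec:
  storageClassName: powerscale
  # restore from the snapshot rather than provisioning an empty volume
  dataSource:
    name: testvolume-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      # must be at least the size of the source volume
      storage: 10Gi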
Here you can see a video showing how to create ReadWriteMany volumes and volume snapshots on TKG clusters using Dell EMC PowerScale with vSphere with Tanzu.