The PowerStore CSI Driver by Dell EMC implements an interface between a CSI-enabled Container Orchestrator (CO) and Dell EMC storage arrays. It is a plug-in that is installed into Kubernetes to provide persistent storage using the Dell EMC PowerStore storage system.
The CSI Driver for Dell EMC PowerStore supports the following new features / changes:
Added support for OpenShift 4.5/4.6 with RHEL and CoreOS worker nodes
Added support for Red Hat Enterprise Linux (RHEL) 7.9
Added support for Ubuntu 20.04
Added support for Docker EE 3.1
Added support for Controller high availability (multiple-controllers)
Version 1.2 of CSI PowerStore introduces the controller HA feature:
• Controller pods are deployed as a Deployment instead of a StatefulSet
• The user can adjust the number of replicas and node tolerations by editing values.yaml
• When multiple replicas of the controller pod are in the cluster, each sidecar (attacher, provisioner, resizer, snapshotter) tries to acquire a lease so that only one instance of each sidecar is active in the cluster at a time
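As a rough sketch, the values.yaml changes amount to setting a replica count and, optionally, tolerations for the controller Deployment. The exact key names below are assumptions and may differ between chart versions; consult the chart's own values.yaml for the authoritative names.

```yaml
# Illustrative values.yaml excerpt (key names are assumptions)
controllerCount: 2                          # number of controller pod replicas
# Tolerations applied to the controller Deployment pods
tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
```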
Added support for Topology
Starting from version 1.2, the CSI PowerStore driver supports Topology, which forces volumes to be placed on worker nodes that have connectivity to the backend storage:
• This covers use cases where users have chosen to restrict the nodes on which the CSI driver is deployed
• The driver doesn't support customer-defined topology; users cannot create their own labels for nodes. They should use whatever labels are returned by the driver and applied automatically by Kubernetes to its nodes
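In practice this is consumed through a StorageClass that restricts provisioning to nodes carrying the driver's topology labels. The sketch below is illustrative: the label key and array IP are assumptions, and you should substitute the labels the driver actually applies to your nodes (visible via `kubectl get nodes --show-labels`).

```yaml
# Illustrative StorageClass using driver-applied topology labels
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powerstore-topology
provisioner: csi-powerstore.dellemc.com
# Delay binding until a pod is scheduled, so topology can be honored
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      # Assumed label format; use the labels present on your nodes
      - key: csi-powerstore.dellemc.com/192.0.2.10-iscsi
        values: ["true"]
```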
Added support for ephemeral volumes
Since v1.2, the CSI PowerStore driver supports the ephemeral volume workload:
• Ephemeral volumes are specified inside the pod manifest and follow the pod's lifecycle
• To use this feature, the CSIDriver object must exist in the cluster, and support for the ephemeral workload must be specified inside it
An ephemeral volume is not a generic CSI volume and can only be used as temporary storage for a pod. Ephemeral volumes don't support snapshotting or restoring from a snapshot, since they exist only for the lifetime of the pod.
The CSI Driver for Dell EMC PowerStore and Kubernetes communicate using the Container Storage Interface (CSI) protocol v1.1. The CSI Driver for Dell EMC PowerStore is compatible with Kubernetes versions 1.17, 1.18, and 1.19, OpenShift 4.3 and 4.4, and Docker EE 3.1.
The CSI Driver for Dell EMC PowerStore supports Red Hat Enterprise Linux (RHEL) 7.6, 7.7, and 7.8, and CentOS 7.6, 7.7, and 7.8. The CSI Driver for Dell EMC PowerStore supports Dell EMC PowerStore version 1.0.x.
The CSI Driver for Dell EMC PowerStore can be deployed by using the provided Helm v3 charts and installation scripts on both Kubernetes and OpenShift platforms. For more detailed information on the installation scripts, review the script documentation.
The controller section of the Helm chart installs the following components in a Deployment in the PowerStore namespace:
CSI Driver for Dell EMC PowerStore
Kubernetes External Provisioner, which provisions the volumes
Kubernetes External Attacher, which attaches the volumes to the containers
Kubernetes External Snapshotter, which provides snapshot support
Kubernetes External Resizer, which resizes the volume
To automate and simplify CSI driver installations on OpenShift clusters, we have developed the Dell CSI Operator. This is a Kubernetes-native application that helps install and manage the CSI drivers provided by Dell EMC for its various storage platforms. The Dell CSI Operator uses Kubernetes Custom Resource Definitions to define a manifest that describes the deployment specification for each driver to be deployed. Multiple CSI drivers provided by Dell EMC, and multiple instances of each driver, can be deployed by the operator by defining a manifest for each deployment.
PowerStore CSI Features
ReadWriteMany NFS Volumes
PowerStore offers a native file solution that is designed for the modern data center. The file system architecture is highly scalable, efficient, performance-focused, and flexible. PowerStore also includes a rich supporting feature set, enabling the ability to support a wide array of use cases such as departmental shares or home directories. These file capabilities are integrated, so no extra hardware, software, or licenses are required. File management, monitoring, and provisioning capabilities are handled through the simple and intuitive HTML5-based PowerStore Manager.
This new CSI feature allows us to create Kubernetes ReadWriteMany (RWX) volumes, which are shared volumes backed by PowerStore NFS shares.
Shared volumes are useful when you want multiple pods to access the same PVC (volume) at the same time. They can use the same volume even if they are running on different hosts.
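A shared volume request boils down to a PVC with the ReadWriteMany access mode against an NFS-backed storage class. The class name below is an assumption; use whatever NFS storage class your installation defines.

```yaml
# Illustrative RWX PVC backed by a PowerStore NFS share
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany            # multiple pods, possibly on different nodes
  resources:
    requests:
      storage: 10Gi
  storageClassName: powerstore-nfs   # assumed NFS storage class name
```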
Starting with version 1.2, the CSI PowerStore driver supports running multiple replicas of the controller pod. At any time, only one controller pod is active (the leader), and the rest are on standby. In case of a failure, one of the standby pods becomes active and takes the position of leader. This is achieved using native leader-election mechanisms built on Kubernetes leases. Additionally, by leveraging pod anti-affinity, no two controller pods are ever scheduled on the same node.
In addition to the controller pods, node pods run on each of the worker nodes.
Volume Snapshot Feature
The CSI PowerStore driver supports beta snapshots. Driver versions prior to version 1.2 supported alpha snapshots.
The Volume Snapshots feature in Kubernetes has moved to beta in Kubernetes version 1.17. It was an alpha feature in earlier releases (1.13 onwards). The snapshot API version has changed from v1alpha1 to v1beta1 with this migration.
In order to use Volume Snapshots, ensure the following components have been deployed to your cluster:
Kubernetes Volume Snapshot CRDs
Volume Snapshot Controller
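With those components in place, taking a snapshot is a matter of creating a VolumeSnapshot object against an existing PVC. The snapshot class and PVC names below are illustrative assumptions.

```yaml
# Illustrative v1beta1 VolumeSnapshot of an existing PVC
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: demo-snapshot
spec:
  volumeSnapshotClassName: powerstore-snapclass   # assumed snapshot class
  source:
    persistentVolumeClaimName: demo-pvc           # PVC to snapshot
```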
Volume Cloning
Starting from version 1.2, the CSI PowerStore driver supports cloning of persistent volumes: the ability to create a new PVC using another PVC as its source.
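Cloning is requested by setting the dataSource of a new PVC to an existing PVC. The names and sizes below are illustrative; the source PVC must exist in the same namespace and the requested size must be at least that of the source.

```yaml
# Illustrative clone: a new PVC created from an existing PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc
spec:
  storageClassName: powerstore       # assumed storage class name
  dataSource:
    name: source-pvc                 # existing PVC used as the clone source
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                  # must be >= the source PVC's size
```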
Online Volume Expansion
Starting with v1.2, the CSI PowerStore driver supports expansion of Persistent Volumes (PVs). This expansion is done online, that is, when the PVC is attached to any node.
To expand a volume, we edit the PVC (for example, with the oc command), change the requested volume size, and save.
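The edit itself amounts to increasing spec.resources.requests.storage on the PVC. The names and sizes below are illustrative; note that the storage class must have allowVolumeExpansion enabled for this to work.

```yaml
# Illustrative PVC after editing: storage raised from 10Gi to 20Gi
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi                # increased value triggers online expansion
  storageClassName: powerstore     # class must set allowVolumeExpansion: true
```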
As you can see, the volume size changes on the backend, as well as in the PVC and PV objects, while the volume is still mounted on the running pod.
Raw Block Device Support
There are some specialized applications that require direct access to a block device because, for example, the file system layer introduces unneeded overhead. The most common case is databases, which prefer to organize their data directly on the underlying storage.
Starting with v1.2, the CSI PowerStore driver supports raw block volumes. Raw block volumes are presented to the pod as a block device by using a bind mount to a block device in the node's file system.
Raw block volumes are created using the volumeDevices list in the pod template spec, with each entry referencing a persistent volume claim (or volumeClaimTemplate) that specifies volumeMode: Block. An example configuration is outlined here:
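The following sketch shows the two pieces together: a PVC with volumeMode: Block and a pod that consumes it through volumeDevices. All names, the image, and the device path are illustrative assumptions.

```yaml
# Illustrative raw block PVC plus a pod consuming it as a device
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Block              # request a raw block device, not a filesystem
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: block-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeDevices:
        - name: data
          devicePath: /dev/xvda  # device node exposed inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: block-pvc
```

The container then addresses /dev/xvda directly, with no file system layer in between, which is the use case databases typically want.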
The CSI PowerStore driver Version 1.2.0 adds support for unidirectional Challenge Handshake Authentication Protocol (CHAP) for iSCSI protocol.
To enable CHAP authentication:
Create the secret powerstore-creds with the keys chapuser and chapsecret set to base64-encoded values. chapsecret must be between 12 and 60 characters. If the secret already exists, delete and re-create it with these newly added keys.
Set the parameter connection.enableCHAP in my-powerstore-settings.yaml to true.
The driver uses the provided chapsecret to configure the iSCSI node database on each node with iSCSI access.
When creating a new host on the PowerStore array, the driver will populate the host CHAP credentials with the provided values. When reusing already existing hosts, be sure to check that the credentials provided in powerstore-creds match the previously configured host credentials.
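A sketch of the secret is shown below. The namespace and all credential values are illustrative assumptions (the base64 strings decode to placeholder text); the additional array-credential keys shown may differ by driver version.

```yaml
# Illustrative powerstore-creds secret with CHAP keys added
apiVersion: v1
kind: Secret
metadata:
  name: powerstore-creds
  namespace: powerstore                       # assumed driver namespace
type: Opaque
data:
  username: YWRtaW4=                          # base64("admin") - placeholder
  password: cGFzc3dvcmQ=                      # base64("password") - placeholder
  chapuser: Y2hhcHVzZXI=                      # base64("chapuser")
  chapsecret: Y2hhcHNlY3JldDEyMzQ1Ng==        # base64("chapsecret123456"), 12-60 chars
```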