CSI Driver for DELL EMC PowerFlex v1.5

Previously, we released the CSI driver v1.3 for the PowerFlex array, which you can read all about here, followed by the PowerFlex CSI driver v1.4, which you can also read all about here.

CSI Driver for Dell EMC PowerFlex v1.5 is aligned with CSI Spec 1.3 and supports the following new features:

  • Added support for PowerFlex 3.6
  • Added support for Kubernetes v1.21
  • Added support for OpenShift 4.6 EUS and 4.7 with RHEL and CoreOS worker nodes
  • Added support for Red Hat Enterprise Linux (RHEL) 8.4
  • Added support for Mirantis Kubernetes Engine (formerly Docker EE) 3.4
  • Added support for Rancher RKE v1.2.8
  • Added support for mkfs format options
  • Added support for dynamic logging configuration
  • Added config.yaml support
  • Added the CSM Volume Group Snapshotter

The following describes the support strategy for CSI Driver for Dell EMC PowerFlex:

  • The CSI Driver for Dell EMC PowerFlex image, which is the built driver code, is available on
    https://hub.docker.com/r/dellemc/csi-vxflexos and is officially supported by Dell EMC.

  • The source code, available on GitHub at https://github.com/dell/csi-powerflex,
    is unsupported and provided solely under the terms of the license attached to the source code. For clarity, Dell EMC does not provide support for any source code modifications.

  • A Dell EMC Storage Automation and Developer Resources page is available and includes developer resources, case studies, forums, and other materials. Members of this community are an excellent resource for addressing product issues and concerns, so we recommend that customers consult it first for product questions before contacting Dell EMC Support. For any setup or configuration issues, questions, or feedback, join the Dell EMC Container community at
    Dell EMC Storage Automation and Developer Resources.

Documentation and Downloads

CSI Driver for Dell EMC PowerFlex v1.5 downloads and documentation are available on:

https://github.com/dell/csi-powerflex

https://dell.github.io/storage-plugin-docs/docs/features/powerflex/

New Features in v1.5

Volume Snapshot Feature

Many stateful Kubernetes applications use several persistent volumes to store data. To create recoverable snapshots for these applications, you must be able to take consistent snapshots across all of the application's volumes at the same time.
Dell CSM Volume Group Snapshotter is an operator that extends the Kubernetes API to support crash-consistent snapshots of groups of volumes. This operator consists of the VolumeGroupSnapshot CRD and the csi-volumegroupsnapshotter controller. The csi-volumegroupsnapshotter is a sidecar container that runs in the controller pod of the CSI driver. It uses a CSI extension, implemented by Dell EMC CSI drivers, to manage volume group snapshots on backend arrays.

CSM Volume Group Snapshotter is currently in a Technical Preview phase and should be considered alpha software. We are actively seeking feedback from users about its features; please share any feedback with us. We will combine that input with the results of our own extensive testing to incrementally improve the software. We do not recommend or support it for production use at this time.

The Volume Snapshot feature was introduced as alpha (v1alpha1) in Kubernetes 1.13, moved to beta (v1beta1) in Kubernetes 1.17, and became generally available (v1) in Kubernetes 1.20.

The CSI PowerFlex driver version 1.5 supports v1beta1 snapshots on Kubernetes 1.19 and v1 snapshots on Kubernetes 1.20 and 1.21.
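As an illustration of the v1 snapshot API on Kubernetes 1.20/1.21, a VolumeSnapshot request might look like the following sketch. The PVC name pvol0 is a placeholder for an existing claim backed by the PowerFlex driver; vxflexos-snapclass is the snapshot class name used by the driver's samples:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: pvol0-snap
  namespace: default
spec:
  volumeSnapshotClassName: vxflexos-snapclass
  source:
    persistentVolumeClaimName: pvol0   # placeholder: an existing PVC provisioned by the PowerFlex driver
```

On Kubernetes 1.19, the same manifest would use apiVersion snapshot.storage.k8s.io/v1beta1 instead.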

In order to use Volume Snapshots, ensure the following components are deployed to your cluster:

  • Kubernetes Volume Snapshot CRDs
  • Volume Snapshot Controller

Before PowerFlex driver v1.5, the driver installation created a default instance of VolumeSnapshotClass. The API version of this instance was chosen based on the Kubernetes version, as shown in the following manifest for the VolumeSnapshotClass created during installation prior to PowerFlex driver v1.5:

{{- if eq .Values.kubeversion "v1.20" }}
apiVersion: snapshot.storage.k8s.io/v1
{{- else }}
apiVersion: snapshot.storage.k8s.io/v1beta1
{{- end}}
kind: VolumeSnapshotClass
metadata:
  name: vxflexos-snapclass
driver: csi-vxflexos.dellemc.com
deletionPolicy: Delete

Note: Installation of PowerFlex driver v1.5 does not create a VolumeSnapshotClass. You can find samples of the default VolumeSnapshotClass instances in the helm/samples/volumesnapshotclass directory: one for the v1beta1 snapshot version and the other for the v1 snapshot version. If needed, install the appropriate sample, based on the version of the snapshot CRDs in your cluster.
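For reference, the v1 sample in that directory corresponds to a manifest like the one below, with the same field values as the pre-v1.5 default shown above:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: vxflexos-snapclass
driver: csi-vxflexos.dellemc.com
deletionPolicy: Delete
```

The v1beta1 sample is identical apart from the apiVersion line.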

Create Consistent Snapshot of Group of Volumes

This feature extends the CSI specification to add the capability to create crash-consistent snapshots of a group of volumes. The PowerFlex driver implements this extension in v1.5. This feature is currently in Technical Preview. To use it, deploy the csi-volumegroupsnapshotter sidecar as part of the PowerFlex driver.

In this release, Volume Group Snapshotter support is available only for Helm-based installations.
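As a hedged sketch only, a volume group snapshot request that selects PVCs by label could look like the following. The API group, kind, and field names here are assumptions based on the operator's design, not a confirmed schema; since this feature is in Technical Preview, consult the csi-volumegroupsnapshotter documentation for the authoritative CRD definition:

```yaml
apiVersion: volumegroup.storage.dell.com/v1alpha1   # assumed API group/version
kind: DellCsiVolumeGroupSnapshot                    # assumed kind name
metadata:
  name: demo-vgs
  namespace: default
spec:
  driverName: csi-vxflexos.dellemc.com
  volumesnapshotclass: vxflexos-snapclass   # snapshot class used for member snapshots
  pvcLabel: vgs-snap-label                  # assumed field: selects the PVCs carrying this label
```

The controller would then snapshot all matching volumes on the backend array at the same point in time, producing a crash-consistent group.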

Custom File System Format Options

The CSI PowerFlex driver version 1.5 supports additional mkfs format options. You can specify additional format options as needed for the driver. Format options are specified in the storage class yaml under mkfsFormatOption, as in the following example:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vxflexos
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi-vxflexos.dellemc.com
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  storagepool: <STORAGE_POOL> # Insert Storage pool
  systemID: <SYSTEM_ID> # Insert System ID
  mkfsFormatOption: "<mkfs_format_option>" # Insert file system format option
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: csi-vxflexos.dellemc.com/<SYSTEM_ID> # Insert System ID
        values:
          - csi-vxflexos.dellemc.com

Multiarray Support

The CSI PowerFlex driver version 1.5 adds support for managing multiple PowerFlex arrays from a single driver instance. This feature is enabled by default and is used even for single-array installations.

To manage multiple arrays you need to create an array connection configuration that lists multiple arrays.

Creating array configuration

There is a sample yaml file named config.yaml under the top-level directory, with the following content:

- username: "admin"              # username for connecting to the API
  password: "password"           # password for connecting to the API
  systemID: "ID1"                # system ID of the array
  endpoint: "https://127.0.0.1"  # full URL path to the PowerFlex API
  skipCertificateValidation: true # whether to skip array certificate validation
  isDefault: true                # treat this array as the default (used by storage classes without an arrayIP parameter)
  mdm: "10.0.0.1,10.0.0.2"       # MDM IPs for the system
- username: "admin"
  password: "Password123"
  systemID: "ID2"
  endpoint: "https://127.0.0.2"
  skipCertificateValidation: true
  mdm: "10.0.0.3,10.0.0.4"

Here we specify that we want the CSI driver to manage two arrays: one with an IP 127.0.0.1 and the other with an IP 127.0.0.2.

To use this config we need to create a Kubernetes secret from it. To do so, run the following command:

kubectl create secret generic vxflexos-config -n vxflexos --from-file=config=config.yaml

Creating storage classes

To be able to provision Kubernetes volumes using a specific array we need to create corresponding storage classes.

Find the sample yaml files under helm/samples/storageclass. Use storageclass.yaml if you want an ext4 filesystem, or storageclass-xfs.yaml if you want an xfs filesystem. Replace <STORAGE_POOL> with your storage pool and <SYSTEM_ID> with your system ID.

Then we need to apply storage classes to Kubernetes using kubectl:

kubectl create -f storageclass.yaml

After that, you can use the storage class for the corresponding array.
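For example, a PersistentVolumeClaim that provisions a volume from the vxflexos storage class defined earlier might look like the following sketch; the claim name and requested size are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvol0        # placeholder claim name
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi   # placeholder size
  storageClassName: vxflexos   # selects the array and pool configured in the storage class
```

Because the storage class sets volumeBindingMode: WaitForFirstConsumer, the volume is not created on the array until a pod that uses the claim is scheduled.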

Dynamic Logging Configuration

Introduced in version 1.5, the CSI Driver for PowerFlex now supports dynamic logging configuration.

To accomplish this, we utilize two fields in logConfig.yaml: LOG_LEVEL and LOG_FORMAT.

LOG_LEVEL: minimum level that the driver will log
LOG_FORMAT: format the driver should log in, either text or JSON

If the configmap does not exist yet, simply edit logConfig.yaml, changing the values for LOG_LEVEL and LOG_FORMAT as you see fit.

If the configmap already exists, you can use this command to edit the configmap:
kubectl edit configmap -n vxflexos driver-config

or you could edit logConfig.yaml, and use this command:
kubectl apply -f logConfig.yaml

and then make the necessary adjustments for LOG_LEVEL and LOG_FORMAT.
If LOG_LEVEL or LOG_FORMAT is set to a value outside of what is supported, the driver falls back to the default values of "info" and "text".
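As an illustration of the two fields, a ConfigMap enabling debug-level JSON logging could look like the following sketch. The ConfigMap name and namespace match the kubectl edit command above; the exact key layout shipped in the driver's logConfig.yaml may differ slightly:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: driver-config   # name used by the kubectl edit command above
  namespace: vxflexos
data:
  LOG_LEVEL: "debug"    # minimum level the driver will log
  LOG_FORMAT: "json"    # log output format: "text" or "json"
```

After applying the change, the driver picks up the new values without requiring a restart, which is what makes the configuration dynamic.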

Here you can see a demo of the new CSM Volume Group Snapshotter feature:

A guest post by Tomer Nahumi
