Dell EMC PowerFlex is an extremely scalable, software-defined storage platform for the most demanding workloads, from enterprise databases to analytics, AI and ML, as well as modern containerized applications. The CSI plugin for PowerFlex offers developers an enterprise-grade storage platform for applications that require storage at extreme scale with a software-defined architecture. The CSI plugin is part of the Dell EMC CSI Kubernetes Operator, which can be deployed on vanilla Kubernetes as well as platforms like OpenShift. If you are new to PowerFlex (the new name for VxFlex OS, which was the new name for ScaleIO…), I highly suggest you start by reading about it here: Dell Technologies PowerFlex 3.5 – Part 1, The New User Interface | Itzikr’s Blog (volumes.blog). In a nutshell, PowerFlex is our software-defined storage that is well suited for container-based workloads, and we see more and more customers adopting it for these very workloads (and others), so it was time to improve its container storage functionality even further!

What’s new in PowerFlex CSI 1.3?

New Features/Changes

  • Added support for OpenShift 4.5/4.6 with RHEL and CoreOS worker nodes
  • Added automatic SDC deployment on OpenShift CoreOS nodes – this is a BIG one; it means we now deploy the SDC component for you as part of the CSI plugin installation!
  • Added volume cloning support
  • Added support for Red Hat Enterprise Linux (RHEL) 7.9
  • Added support for Ubuntu 20.04
  • Added support for Controller high availability (multiple-controllers)
  • Changed driver base image to UBI 8.x

STORAGE DATA CLIENTS (SDC)

The Storage Data Client (SDC) allows an operating system or hypervisor to access data served by PowerFlex clusters. The SDC is a client-side software component that can run natively on Windows®, Linux, IBM AIX®, or ESX®. It is analogous to a software initiator, but is optimized to use multiple networks and endpoints in parallel. The SDC provides the operating system or hypervisor running it with access to logical block devices called “volumes”. A volume is analogous to a LUN in a traditional SAN. Each logical block device provides raw storage for a database or a file system.
The SDC knows which Storage Data Server (SDS) endpoints to contact based on block locations in a volume. The SDC consumes distributed storage resources directly from other systems running VxFlex OS. SDCs do not share a single protocol target or network end point with other SDCs. SDCs distribute load evenly and autonomously.
The SDC is extremely lightweight. SDC to SDS communication is inherently multi-pathed across SDS storage servers, in contrast to approaches like iSCSI, where multiple clients target a single protocol endpoint. This enables much better performance scalability.

PowerFlex fundamentally consists of three types of software components: the Storage Data Server (SDS), the Storage Data Client (SDC), and the Meta Data manager (MDM).

The Storage Data Server is installed on every node that will contribute its storage to the system. It owns the contributing drives and – together with the other SDSes – forms a protected mesh from which storage pools are created. The MDM hands out instructions to each SDC and SDS about its role and how to play it, giving each component all the information it needs. The Storage Data Client (SDC) allows an operating system or hypervisor to access data served by PowerFlex clusters. The SDC is a client-side software component that can run natively on multiple OSs and is optimized to use multiple networks and endpoints in parallel. By default, OpenShift runs on CoreOS, a fully immutable, container-optimized Linux host OS that is delivered and installed as a component of OpenShift. As a result, until now it wasn’t possible to install the PowerFlex SDC component on CoreOS worker nodes, and RHEL compute nodes were required to deploy and use the PowerFlex CSI driver.
Starting with OpenShift 4.6, the strategy of bringing your own Red Hat Enterprise Linux (RHEL) 7 compute machines is deprecated. Support for using RHEL compute machines is planned for removal in a future release of OpenShift 4.

https://docs.openshift.com/container-platform/4.6/release_notes/ocp-4-6-release-notes.html

One of the highlights of this CSI release is the SDC InitContainer. We’ve made an architectural change in the driver: during the CSI driver installation, it automatically deploys the SDC component as a container on each CoreOS compute node, so users can now use CoreOS compute nodes to consume storage from PowerFlex using the new CSI driver.

Volume Cloning

Starting from version 1.3, the CSI PowerFlex driver supports volume cloning. This allows specifying existing PVCs in the dataSource field to indicate a user would like to clone a Volume.

A Clone is defined as a duplicate of an existing Kubernetes Volume that can be consumed as any standard Volume would be. The only difference is that upon provisioning, rather than creating a “new” empty Volume, PowerFlex creates a read-write snapshot that is an exact duplicate of the specified Volume.

The source PVC must be bound and available. Source and destination PVC must be in the same namespace and have the same Storage Class.

To clone a volume, you need to specify the name of the source volume’s PVC under the dataSource field; of course, the size and the storage class name must be the same as the original volume’s.

To clone a volume, you should first have an existing PVC, e.g., pvol0:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvol0
  namespace: helmtest-vxflexos
spec:
  storageClassName: vxflexos
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi
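A clone request is then simply a new PVC that references pvol0 in its dataSource field. A minimal sketch of such a manifest follows; the clone’s name is hypothetical, while the namespace, storage class and size must match the source, as noted above:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: clonedvol0                 # hypothetical name for the clone
  namespace: helmtest-vxflexos     # must be the same namespace as the source PVC
spec:
  storageClassName: vxflexos       # must be the same storage class as the source PVC
  dataSource:
    kind: PersistentVolumeClaim
    name: pvol0                    # the existing, bound source PVC to clone
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi                 # must match the size of the source PVC
```

Once this PVC is bound, the new volume is a read-write duplicate of pvol0 and can be consumed like any other volume.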

SDC Container Run (docker run)

The SDC container has three run modes: configuration, monitoring, and command. Configuration vs. monitoring mode is determined via the MODE environment parameter; command mode is used once a command is passed to the container.

Configuration Mode

When running in configuration mode, the container performs the configuration command and exits with a status code: zero for success, non-zero for error. Supported configuration operations:

  • Install SDC on a host
  • Upgrade & downgrade existing SDC on a host
  • Uninstall existing SDC on a host
  • Update SDC configuration on a host

During the run step, the container performs sequential operations: it detects the host platform, detects the SDC service state on the host, and then parses the ENV variables. Based on these, it determines which operation to perform.

Monitoring Mode

When running in monitoring mode, the container runs as a ‘live’ container and performs passive monitoring tasks such as logs (dmesg) and metrics (network, CPU, RAM metrics) collection. Metrics collection will be done using the diag_coll service worker, similarly to a conventional setup. Logs and metrics output will be stored in the /storage mapping. When the host’s distribution is unsupported, the container in monitoring mode will run but do nothing.

Command Mode

When running a command, the SDC container will not perform any configuration or monitoring step; it will only execute the provided command. This mode is useful when you want to run an operation using SDC support tools such as the get_info scripts, sdbg, or drv_cfg.

Added automatic SDC deployment on CoreOS nodes
• SDC is installed on CoreOS nodes via an init container
• The init container doesn’t work on RHEL, CentOS or Ubuntu nodes, and the SDC installation process has not changed in those configurations.

To check the init container logs after an install:
kubectl logs -n <namespace> <node pod> sdc

Automated SDC Deployment for CoreOS – Helm Example
Specified in helm/csi-vxflexos/driver-image.yaml:
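The original screenshot of the file is not reproduced here. As a hedged sketch, the file points the installer at the 1.3 driver image tag, along roughly these lines (the key name and registry path below are assumptions; consult the driver-image.yaml shipped with the release for the exact contents):

```yaml
# Hypothetical sketch of helm/csi-vxflexos/driver-image.yaml - key names may differ
images:
  driverimage: "dellemc/csi-vxflexos:v1.3.0"   # assumed image name and tag
```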


Then driver is installed as usual:
./csi-install.sh --namespace vxflexos --values values.yaml --skip-verify
Check logs of init container after install on CoreOS:
kubectl logs -n vxflexos vxflexos-node-5xfjj sdc


SDC Monitor Container
Parameters in values.yaml:
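The screenshot of the parameters is not reproduced here. As a hedged sketch, the monitor container is toggled via a flag in values.yaml, roughly of this shape (the key name below is an assumption; check the values.yaml shipped with the chart for the exact key):

```yaml
# Hypothetical sketch - consult the shipped values.yaml for the exact key name
enableSdcMonitor: true   # assumed flag that deploys the sdc-monitor container alongside the SDC
```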


To use:
kubectl exec -it -n vxflexos <node pod> --container sdc-monitor /bin/bash
Then run /opt/emc/scaleio/sdc/diag/get_info.sh to gather results:
kubectl cp -n <namespace> --container sdc-monitor <node pod>:/tmp/scaleiogetinfo/getInfoDump.tgz sample.tgz

Controller HA
Multiple instances of the controller pods are deployed; the number is controlled in the values file for Helm, or in the sample yaml file for the Operator:
controllerCount: 2
If the controller count is greater than the number of available nodes, the excess pods will get stuck in “pending” during deployment.


• Node anti-affinity prevents two controller pods from being deployed to the same node
• This ensures the driver can continue when a node running a controller pod crashes
• Leases dictate which driver instance is active (native Kubernetes leader election)
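Under the hood, this is standard Kubernetes scheduling. A podAntiAffinity rule of roughly the following shape, rendered into the controller deployment, keeps two controller pods off the same node (the pod label below is an assumption for illustration, not the driver’s actual label):

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - vxflexos-controller   # hypothetical label on the controller pods
        topologyKey: kubernetes.io/hostname   # at most one controller pod per node
```

With a hard ("required") rule like this, a second controller pod simply cannot land on a node that already runs one, which is also why excess pods stay pending when controllerCount exceeds the node count.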


Controller HA – controller pods on Master nodes
• Controller HA related parameters in values.yaml
• For operator, these parameters are in the driver sample yaml
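The screenshot of these parameters is not reproduced here. As a hedged sketch, the relevant values.yaml section looks roughly like this (the toleration block and its key names are assumptions, shown only to illustrate how controller pods could be allowed onto master nodes; consult the shipped values.yaml for the exact parameters):

```yaml
controllerCount: 2   # number of controller pod replicas (from the values file)
# Hypothetical sketch - a toleration like this would permit scheduling on masters
tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
```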


Driver Installation – Methods
Full details are in the CSI Driver documentation, the following is a summary.
The CSI Driver for Dell EMC PowerFlex can be installed in the following ways:
• Using Helm 3 charts and installation scripts
• Using the Dell CSI Operator

Driver Installation Prerequisites – Upstream Kubernetes
Full details are in the CSI Driver documentation but here is a summary of pre-reqs for the driver.
• Upstream Kubernetes 1.17.x/1.18.x/1.19.x running on a supported host OS
• Container runtime with mount propagation enabled
• Snapshot CRDs and snapshot controller installed in the cluster
• Helm 3 installed in the Kubernetes cluster (if using Helm for install or test examples)
• A namespace “vxflexos” should be created prior to the installation; the driver pods and secrets are created in this namespace
• SDC must be installed and set up on all worker nodes, if the worker nodes are not CoreOS; if CoreOS, the driver will take care of this step

Driver Installation Prerequisites – OpenShift
Full details are in the Installation Guide but here is a summary of pre-reqs for the driver.
• OpenShift 4.5/4.6 cluster running CoreOS or RHEL worker nodes
• A namespace where the driver, secret and pods will be created
• SDC must be installed and set up on all worker nodes, if the worker nodes are not CoreOS; if CoreOS, the driver will take care of this step

Installation using Helm based installer
Full details are in the Driver Documentation.
1. Clone the repository from the URL – github.com/dell/csi-vxflexos (release 1.3)
2. Create a Kubernetes secret – “vxflexos-creds” – with your username and password for the VxFlex instance, in the vxflexos namespace.
3. Create your myvalues.yaml file from the values.yaml file and edit the parameters for your installation per the documentation.
4. Go to the dell-csi-helm-installer/ directory
5. Install the driver using the csi-install.sh bash script by running:
./csi-install.sh --namespace vxflexos --values <path to your myvalues.yaml>
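The secret in step 2 can be created from a plain Kubernetes Secret manifest; a minimal sketch follows (the username and password values are placeholders, and the key names are assumptions to be checked against the driver documentation):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: vxflexos-creds       # the secret name the driver expects
  namespace: vxflexos        # the driver namespace
type: Opaque
stringData:
  username: "admin"          # placeholder - your PowerFlex gateway username
  password: "changeme"       # placeholder - your PowerFlex gateway password
```

Applying this manifest with kubectl before running the installer satisfies the prerequisite.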

Installation using CSI Operator
Full details are in the Driver Documentation.
The CSI Operator can be deployed either manually or using OLM (Operator Lifecycle Manager).
For OLM based deployment, review details in the Operator documentation
For manual deployment, use the following procedure:
1. Clone the repository from the URL – github.com/dell/csi-operator to pull the latest release
2. Deploy operator by filling in operator.yaml and running scripts/install.sh
3. Create a Kubernetes secret – “vxflexos-creds” – with your VxFlex username and password.
Create secret in test-vxflexos namespace
4. Use the specific vxflex_ manifest for your environment to create the driver manifest. In case of a manual install of the CSI Operator, it will be present in the csi-operator/deploy/crds/ folder.
5. Edit the mandatory parameters in the driver manifest; the process and variables are the same as in the documentation.

6. Deploy the driver using the manifest in the chosen namespace. In case of a manual install, run: kubectl create -f <manifest.yaml>
7. The driver manifest can also be used to deploy storage classes and snapshot classes.
8. Once the driver is deployed successfully, you will see the same statefulset and daemonset containing the driver pods.

Driver upgrade
Upgrade from 1.2 to 1.3
For a Helm-based installation: run the csi-install.sh script with the same required arguments, along with an --upgrade argument:

cd ../dell-csi-helm-installer && ./csi-install.sh --namespace vxflexos --values ../helm/myvalues.yaml --upgrade

If using Operator: Update the driver image tag (and any environment variable if required).
NOTE: You can get the latest CSI Driver for Dell EMC PowerFlex image on Dell EMC Docker Hub
Upgrade from 1.1.5 or earlier to 1.3
Delete any existing VolumeSnapshotClass and alpha snapshot CRDs
Uninstall driver
Checkout latest code (1.3) and install driver (be sure to update image in myvalues/driver-image.yaml)
For Operator, update the driver image tag, config version and anything else you want to add

Installation Verification
• To verify installation after an Operator install, check that the driver pods are running in the namespace you created for the driver.

To verify installation after a Helm install:
kubectl get pods -n vxflexos

You can see a demo of how it all works below

You can view the bin image by clicking the screenshot below

And the github page, by clicking the screenshot below

There is also a new landing page for documentation; the PowerFlex one can be found at this URL ( PowerFlex | Dell Technologies )
