So far, we have covered the following aspects of Dell Technologies PowerStore:
High Level Overview, Hardware, AppsON, vVols, File Capabilities, User Interface, importing external storage, how PowerStore local protection works, remote replication, VMware SRM integration and the resource balancer.
If you are new to Kubernetes, I highly suggest you start by reading a post I wrote here: https://volumes.blog/2018/12/13/tech-previewing-our-upcoming-xtremio-integration-with-kubernetes-csi-plugin/ . It will explain everything you need to know about Kubernetes and persistent storage.
Read it? Awesome!
Kubernetes (K8s) is one of the most widely used container orchestration systems.
- It provides facilities for automating the deployment, scaling, and management of containerized applications. It is designed to scale to systems running billions of containers a week.
- It is an open source project that builds upon 15 years of experience running production workloads at Google, along with best-of-breed ideas and practices from the community.
- “Upstream K8s” refers to the open source effort being run by the Cloud Native Computing Foundation.
- Other variants exist, such as RedHat OpenShift and VMware PKS. These derive from upstream but must meet certain standards to be called a Kubernetes distribution.
The CSI Driver for Dell EMC PowerStore is a “plug-in” that is designed to be installed into Kubernetes to provide persistent storage using the Dell EMC PowerStore Storage Array.
- K8S and the driver communicate using a protocol called the “Container Storage Interface” (CSI). It is documented here: https://github.com/container-storage-interface/spec.
The driver can be installed, upgraded, and uninstalled independently of the Kubernetes installation.
In order to effectively deploy and use the CSI Driver, you should be familiar with the following K8S concepts (see Documentation/Concepts on kubernetes.io):
- StatefulSets and DaemonSet
- Volumes, Persistent Volumes, Persistent Volume Claims, and Storage Classes
- Kubernetes Secrets
- Kubernetes Service Accounts
- Custom Resource Definitions (CRDs)
- You should be familiar with the various kubectl commands
- The driver implements v1.1 of the CSI specification (https://github.com/container-storage-interface/spec).
CSI is being adopted by other platforms as well, for example:
- Cloud Foundry
- The transport layer for CSI is gRPC over Unix domain sockets
The CSI specification requires Storage Providers to implement two plugins:
- Node Plugin -> This must be run on each node where a provisioned volume will be mounted
- Controller Plugin -> This can be run anywhere
- There are three services defined as part of the spec:
- Identity service – implemented by both the Node and Controller plugins
- Controller service – implemented by the Controller Plugin
- Node service – implemented by the Node Plugin
CSI Driver for Dell EMC PowerStore Capabilities
- Persistent volumes creation, deletion, mounting, unmounting, listing.
- Dynamic volume creation using only a Persistent Volume Claim
- Static volume creation using a Persistent Volume
- Volume mount as ext4 and xfs
- Support for FC or iSCSI connection to storage
- Native multipath support
- Tested with Kubernetes v1.14 and v1.16 with both Ubuntu 18.04 and RHEL 7.6
- Automatic Kubernetes version detection
- Filesystem and Block volumeMode (CSI type Mount) is supported.
- The driver does not support topology (VOLUME_ACCESSIBILITY_CONSTRAINTS).
- The driver doesn’t support snapshots or clones
- The driver doesn’t support volume expansion
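To illustrate dynamic provisioning, a claim like the one below is all that is needed; the csi-driver storage class name is the one the driver installation creates, while the PVC name and size here are arbitrary examples.

```shell
# Write a minimal PVC manifest; applying it triggers dynamic provisioning
# through the PowerStore CSI driver (PVC name and size are arbitrary examples).
cat > pvc-example.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: powerstore-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-driver
  resources:
    requests:
      storage: 8Gi
EOF
# Then: kubectl create -f pvc-example.yaml
```

Once applied, the driver creates a matching volume on the array and binds it to the claim.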
Installation prerequisites:
- Upstream Kubernetes version 1.14 or 1.16
- Docker daemon running and configured with MountFlags=shared on all K8s masters/nodes
- Helm v2.16 and Tiller installed in the cluster
If using iSCSI
- iscsi-initiator-utils package installed on all the Kubernetes nodes
- Make sure that the iscsid service is started
- Make sure that the iscsi IQNs (initiators) from the Kubernetes nodes are not part of any existing Hosts on the array(s)
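A quick pre-flight sketch for these checks might look like the following (assumes RHEL-style tooling; it only reports status and changes nothing):

```shell
# Report the state of the iSCSI prerequisites on a node (read-only checks).
iscsi_preflight() {
  if command -v iscsiadm >/dev/null 2>&1; then
    echo "iscsi-initiator-utils: installed"
  else
    echo "iscsi-initiator-utils: MISSING (install with: yum install iscsi-initiator-utils)"
  fi
  if systemctl is-active iscsid >/dev/null 2>&1; then
    echo "iscsid: running"
  else
    echo "iscsid: not running (start with: systemctl enable --now iscsid)"
  fi
  # This IQN must not already belong to a Host object on the array:
  cat /etc/iscsi/initiatorname.iscsi 2>/dev/null || echo "initiator IQN: no initiator name file found"
}
iscsi_preflight
```

The IQN it prints is what you should search for on the array to confirm no existing Host object already claims it.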
If using FC
- Make sure an FC card is attached to each node and connected to the storage array
- Zoning of the Host Bus Adapters (HBAs) to the fibre channel port director must be done
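To find the WWPNs you need for zoning, you can read them from sysfs on each node; a small sketch:

```shell
# List the WWPNs of the node's FC HBAs; these are the identifiers you zone
# to the array's FC ports on the switch.
list_wwpns() {
  found=0
  for host in /sys/class/fc_host/host*; do
    [ -e "$host" ] && { found=1; echo "$(basename "$host"): $(cat "$host/port_name")"; }
  done
  [ "$found" -eq 1 ] || echo "no FC HBAs detected on this node"
}
list_wwpns
```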
If using multipath
- Enable multipathing with mpathconf --enable --with_multipathd y
- Enable user_friendly_names and find_multipaths in the multipath.conf file
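A minimal defaults section with those two settings might look like this (written to a temp file here for illustration; on a real node the content belongs in /etc/multipath.conf, followed by a multipathd restart and a check with multipath -ll):

```shell
# Example multipath.conf defaults enabling user_friendly_names and
# find_multipaths; on a real node this content goes in /etc/multipath.conf.
cat > /tmp/multipath.conf.example <<'EOF'
defaults {
    user_friendly_names yes
    find_multipaths yes
}
EOF
```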
- Clone the driver repository from GitHub
Create your myvalues.yaml file from the supplied values.yaml file and edit the parameters for your installation. These include:
- The API URL (must include the port number as well)
- API credential (user, password)
- transportProtocol – defines which transport protocol to use (FC, ISCSI or Auto)
- nodeIDPath – path to file with unique identifier identifying the node in Kubernetes cluster
- volumeNamePrefix – defines a string prepended to each volume created by the CSI driver
- nodeNamePrefix – defines a string prepended to each node registered by the CSI driver
- (optional) A set of values for the default storage class
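As a sketch, a myvalues.yaml covering those parameters might look like the following; the exact key names must be taken from the values.yaml shipped in the repository, and every value below (address, credentials, paths, prefixes) is a placeholder:

```shell
# Hypothetical myvalues.yaml; key names are illustrative and must match the
# chart's own values.yaml, and all values below are placeholders.
cat > myvalues.yaml <<'EOF'
restAPI: "https://powerstore.example.com:443/api/rest"  # API URL, including the port
username: "csi-user"
password: "csi-password"
transportProtocol: "ISCSI"        # FC, ISCSI or Auto
nodeIDPath: "/etc/machine-id"     # file with a unique per-node identifier
volumeNamePrefix: "csivol"        # prepended to each volume the driver creates
nodeNamePrefix: "csi-node"        # prepended to each node the driver registers
EOF
```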
- Run the “install.sh” shell script; it checks your cluster with the verify.sh script and then installs the driver using Helm
A correct installation should display output similar to the following:
- 3 containers running in the controller pod
- 2 containers running in each node pod
- Storage classes csi-driver and csi-driver-xfs created
- You can check that the driver works correctly by running the simple tests from the repository.
From the root directory of the repository, run
kubectl create -f ./tests/simple/
Check that the pod is created and reaches the Ready and Running states by running
kubectl get all -n testdriver
After that, you can uninstall the testing PVCs and StatefulSet:
kubectl delete -f ./tests/simple/
Here are some installation failures that might be encountered and how to mitigate them.
- “kubectl describe pods csi-driver-controller-0 -n csi-driver” indicates the driver image could not be loaded. You may need to add an insecure-registries entry to /etc/docker/daemon.json or log in to the Docker registry.
- “kubectl logs csi-driver-controller-0 -n csi-driver” shows the driver cannot authenticate (check your values for username and password).
Using the Driver
- You can test the driver with any chart from the public Helm repository that uses persistent volumes, for example stable/postgresql.
- helm install stable/postgresql -n postgresql
Follow the Helm chart’s notes to validate the installation.
Kubernetes upstream repo: https://github.com/kubernetes/kubernetes
CSI 1.1 spec: https://github.com/container-storage-interface/spec
Since there are many ways to work with CSI, either via upstream K8s or via a higher-level orchestrator that leverages it, we created the demos below.
CSI with upstream Kubernetes
RedHat OpenShift is one of the more popular distributions out there and it is supported by our CSI plugin as well.
Below, you can see a demo of how it works with RedHat OpenShift.
Visit the Dell EMC CSI GitHub page for the CSI Driver download and documentation.