So far, we have covered the following aspects of Dell Technologies PowerStore: High-Level Overview, Hardware, AppsON, vVols, File Capabilities, User Interface, Importing External Storage, Local Protection, Remote Replication, and VMware SRM Integration. Next up: the Container Storage Interface (CSI) driver for PowerStore.
Kubernetes (K8s) is one of the most widely used container orchestration systems. It automates the deployment, scaling, and management of containerized applications and is designed to scale to systems running billions of containers a week.
It is an open source project that builds upon 15 years of experience running production workloads at Google, along with best-of-breed ideas and practices from the community.
“Upstream K8s” refers to the open source project run by the Cloud Native Computing Foundation (CNCF). Other variants exist, such as Red Hat OpenShift and VMware PKS; these derive from upstream but have to meet certain standards to be called a Kubernetes distribution.
The CSI Driver for Dell EMC PowerStore is a plug-in that is installed into Kubernetes to provide persistent storage backed by a Dell EMC PowerStore storage array.
The driver can be installed, upgraded, and uninstalled independently of the Kubernetes installation itself.
In order to effectively deploy and use the CSI driver, you should be familiar with the following K8s concepts (see Documentation/Concepts on kubernetes.io):
StatefulSets and DaemonSets
Volumes, Persistent Volumes, Persistent Volume Claims, and Storage Classes
Kubernetes Secrets
Kubernetes Service Accounts
Custom Resource Definitions (CRDs)
You should also be familiar with the various kubectl commands.
The current version of the driver provides the following capabilities:
Dynamic volume creation using only a Persistent Volume Claim (see the sketch after this list)
Static volume creation using a Persistent Volume
Volume mounts formatted as ext4 or xfs
Support for FC or iSCSI connectivity to the storage array
Native multipath support
Tested with Kubernetes v1.14 and v1.16 with both Ubuntu 18.04 and RHEL 7.6
Automatic Kubernetes version detection
Filesystem (CSI access type Mount) and Block volumeModes are supported.
The driver does not support topology (VOLUME_ACCESSIBILITY_CONSTRAINTS).
The driver does not support snapshots or clones.
The driver does not support volume expansion.
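To illustrate dynamic provisioning, here is a minimal sketch of a PersistentVolumeClaim that uses the csi-driver storage class created by the installation described below; the claim name and size are hypothetical:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: pvol0                      # hypothetical claim name
  spec:
    accessModes:
      - ReadWriteOnce
    storageClassName: csi-driver     # storage class created by the driver installation
    resources:
      requests:
        storage: 8Gi                 # hypothetical size

Creating this claim is all that is needed; the driver provisions a matching volume on the PowerStore array without any pre-created PersistentVolume.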
Installation
Before installing the driver, make sure the following prerequisites are met:
Upstream Kubernetes version 1.14 or 1.16
Docker daemon running and configured with MountFlags=shared on all K8s masters/nodes
Helm v2.16 and Tiller installed in the cluster
If using iSCSI
iscsi-initiator-utils package installed on all the Kubernetes nodes
Make sure that the iscsid service is started
Make sure that the iSCSI IQNs (initiators) of the Kubernetes nodes are not already part of any existing Hosts on the array(s)
If using FC
Make sure that an FC HBA is installed in each node and connected to the storage array
Zoning of the Host Bus Adapters (HBAs) to the Fibre Channel ports of the array must be completed
If using multipath
Enable multipath with mpathconf --enable --with_multipathd y
Enable user_friendly_names and find_multipaths in the multipath.conf file (see the combined host-setup sketch after this list)
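A rough sketch of the host-side preparation, assuming systemd-managed Docker, iscsid, and device-mapper multipath on RHEL (package names and paths may differ on Ubuntu):

  # Docker: run with shared mount propagation on every master/node
  mkdir -p /etc/systemd/system/docker.service.d
  printf '[Service]\nMountFlags=shared\n' > /etc/systemd/system/docker.service.d/mount-flags.conf
  systemctl daemon-reload && systemctl restart docker

  # iSCSI: install the initiator utilities and start iscsid
  yum install -y iscsi-initiator-utils        # on Ubuntu, the package is open-iscsi
  systemctl enable --now iscsid

  # Multipath: enable it with the options mentioned above
  mpathconf --enable --with_multipathd y
  # verify that /etc/multipath.conf contains:
  #   defaults {
  #       user_friendly_names yes
  #       find_multipaths     yes
  #   }
  systemctl restart multipathd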
Clone the driver repository from GitHub
Create your myvalues.yaml file from the supplied values.yaml file and edit the parameters for your installation (a hypothetical example follows this list). These include:
The API URL (must include the port number as well)
API credentials (user, password)
transportProtocol – defines which transport protocol to use (FC, ISCSI or Auto)
nodeIDPath – path to a file containing a unique identifier for the node in the Kubernetes cluster
volumeNamePrefix – defines a string prepended to each volume created by the CSI driver
nodeNamePrefix – defines a string prepended to each node registered by the CSI driver
(optional) A set of values for the default storage class
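For reference, a hypothetical myvalues.yaml might look like the sketch below. The key names follow the parameters listed above but are assumptions; check the values.yaml shipped with the driver for the exact keys and defaults:

  # myvalues.yaml – illustrative only, verify key names against values.yaml
  apiUrl: "https://<PowerStore-mgmt-IP>:443/api/rest"   # API URL, including the port
  username: "csi-user"                                  # hypothetical API credentials
  password: "********"
  transportProtocol: "ISCSI"                            # FC, ISCSI or Auto
  nodeIDPath: "/etc/machine-id"                         # file with a unique per-node identifier
  volumeNamePrefix: "k8s"                               # prepended to every volume the driver creates
  nodeNamePrefix: "k8s-node"                            # prepended to every node the driver registers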
Run the “install.sh” shell script; it will check your cluster with the verify.sh script and then install the driver using Helm.
A correct installation should result in the following (verification commands are sketched after this list):
3 containers running in the controller pod
2 containers running in each node pod
Storage classes csi-driver and csi-driver-xfs created
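One way to confirm those results, assuming the driver was installed into the csi-driver namespace (the namespace used by the troubleshooting commands further down):

  kubectl get pods -n csi-driver
  # expect the controller pod to report 3/3 containers Ready
  # and every node pod to report 2/2 containers Ready

  kubectl get storageclass
  # expect csi-driver and csi-driver-xfs in the output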
You can check that the driver works correctly by running the simple tests included in the repository. From the root directory of the repository, run:
kubectl create -f ./tests/simple/
Check that the pod is created and reaches the Ready and Running states by running:
kubectl get all -n testdriver
After that, you can remove the test PVCs and StatefulSet:
kubectl delete -f ./tests/simple/
Troubleshooting
Here are some installation failures that might be encountered and how to mitigate them.
“kubectl describe pods csi-driver-controller-0 -n csi-driver” indicates the driver image could not be loaded. You may need to add an insecure-registries entry to /etc/docker/daemon.json or log in to the Docker registry.
“kubectl logs csi-driver-controller-0 -n csi-driver” shows that the driver cannot authenticate (check your values for username and password).
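If the image cannot be pulled because the registry is not trusted, one common fix is an insecure-registries entry in /etc/docker/daemon.json (the registry address below is a placeholder):

  {
    "insecure-registries": ["my-registry.example.com:5000"]
  }

Then restart the Docker daemon (systemctl restart docker) on the affected nodes.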
Using the Driver
You can test the driver with any Helm chart from the public Helm repository that uses persistent volumes, for example stable/postgresql:
helm install stable/postgresql -n postgresql
Follow the chart’s notes to validate the installation.
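Alternatively, as a quick manual check, you can mount a volume provisioned by the driver into a pod of your own. The manifest below is a minimal sketch (the pod name is hypothetical) that consumes the PersistentVolumeClaim from the earlier sketch:

  apiVersion: v1
  kind: Pod
  metadata:
    name: csi-test-pod               # hypothetical name
  spec:
    containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
          - name: data
            mountPath: /data         # the PowerStore volume appears here
    volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvol0           # the claim from the earlier PVC sketch

If the pod reaches the Running state and /data is writable, the driver is serving storage end to end.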