
Dell Technologies PowerFlex integration with VMware Tanzu Kubernetes Grid (TKG)

A guest post by Tomer Nahumi

Dell EMC PowerFlex is a software-based storage area network (SAN) that converges storage and compute resources to form a hyper-converged, enterprise-grade storage product. PowerFlex is elastic and delivers linearly scalable performance. Its scale-out server SAN architecture can grow from a few to thousands of servers.

If you are new to PowerFlex (the new name for VxFlex OS), I highly suggest you start by reading about our 3.5 software release here and here.

PowerFlex (v3.0.1 and later) supports VMware vVols 1.0 through VASA 2.0 implementation.

  • A PowerFlex cluster may contain a single VASA provider or three for a highly resilient configuration using replica sets.
  • The PowerFlex SDC on each host exposes a Protocol Endpoint (PE) for each VASA provider.

The PowerFlex VASA implementation exposes four capability profiles that can be used to implement VM storage policies (SPBM), based on the underlying storage configuration and capabilities:

  • Bronze – An HDD based storage pool
  • Silver – An accelerated HDD storage pool (RFCache)
  • Gold – All-SSD storage pool
  • Compression – A Fine Granularity (FG) storage pool (SSD based)


vVols are VMDK granular storage entities exported by storage arrays. vVols are exported to the ESXi host through a small set of protocol endpoints (PE). Protocol Endpoints are part of the physical storage fabric, and they establish a data path from virtual machines to their respective vVols on demand. Storage systems enable data services on vVols. The results of these data services are new vVols. Data services, configuration, and management of virtual volume systems are exclusively done out-of-band with respect to the data path. vVols can be grouped into logical entities called storage containers (SC) for management purposes. The existence of storage containers is limited to the out-of-band management channel.

Capability profiles are populated through the VMware vSphere API for Storage Awareness (VASA) protocol from the storage system into vSphere or vCenter. These capability profiles map to VMware vVol storage policy profiles. When a storage policy is selected in vSphere or vCenter, only those vVol datastores compatible with these policies will appear as eligible storage containers for the virtual volume.

PowerFlex has implemented VMware’s vSphere API for Storage Awareness (VASA). PowerFlex’s VASA enables control of PowerFlex storage to be handled by the vCenter administrator. PowerFlex’s VASA uses VMware vSphere Virtual Volumes (vVols) to enable VMs hosted on ESXi hosts in the vCenter to have their storage mapped directly to storage in PowerFlex. PowerFlex’s implementation uses VASA 2.0 and vVols 1.0. There can be one or three instances of the VASA provider.

The following are the different components of PowerFlex’s VASA:

  • Virtual Volumes (vVols): A VMware object type that allows the vSphere Admin to provision VMs without depending on the Storage Admin.
  • vVol datastores: A type of VMware datastore, in addition to VMFS and NFS datastores, which allows vVols to map directly to the storage system. vVol datastores are more granular than VMFS and NFS datastores, enabling VMs or virtual disks to be managed independently. You can create vVol datastores based on one or more underlying storage pools and then allocate the pool to be used for the vVol datastore and its associated vVols.
  • Storage containers: A logical abstraction onto which vVols are mapped and stored. In PowerFlex’s VASA, the storage container is mapped to a PowerFlex Storage Pool. vSphere maps storage containers to vVol datastores and provides the applicable datastore-level functionality.
  • Storage policies: Storage policies are used to ensure that VMs are placed on storage that guarantees a specific level of performance.
  • VASA Storage Virtual Machine (SVM): The virtual machine that runs the VASA.
  • VASA-database: A database created to store the vVol metadata.
  • Protocol Endpoint (PE): PEs are used to establish an I/O data path between the ESXi hosts and the PowerFlex system. For every VASA provider implemented, the SDC exposes a single PE on the ESXi host for that VASA provider to use.

VASA allows for automatic mapping of storage. When a new VM is created, a request for storage passes through the VASA. The VASA sends the request to the MDM to create the volume. The MDM creates the volume and passes the volume ID back through the VASA to the ESXi server. Mapping volume requests also pass through the VASA as the MDM maps the volume the VM runs on to the ESXi. For every VM that is created, a minimum of two vVols are created. One is for the VM metadata and one is for the VM disk. When the VM is powered on, another vVol, called the swap vVol, is created. The swap vVol is removed when the VM is powered off. The VASA is used only for storage management; no I/Os are sent through the VASA.

In order to use vVols with PowerFlex, you must install the VASA provider. In Linux-based systems, you can choose to install the VASA provider during installation of PowerFlex using the PowerFlex Installer. At that point, select whether to use one or three VASA providers. If you want your VASA support to be highly available, you should install three VASA providers. In the case of an existing PowerFlex system, you can add VASA support by extending the existing cluster with one or three VASA providers; this is also done by creating a CSV file and using the PowerFlex Installer. In VMware ESXi-based systems, install the VASA provider after deployment of the PowerFlex system by creating a CSV file and using the PowerFlex Installer to extend the system.

Once the installation is completed, the VASA provider can be added to vCenter.


Then, a new vVol datastore can be created.


Dell EMC PowerFlex vVols integration is a key component of this solution, as it offers a flexible and granular approach to using vVols with Kubernetes: with VMware CNS (Cloud Native Storage), each vVol represents a Kubernetes persistent volume.

vVols adoption continues to grow and is accelerating in 2020, and it’s easy to see why: vVols eliminates LUN management, accelerates deployments, simplifies operations, and enables utilization of all of the array’s functionality.

With vSphere 7.0 and the CSI 2.0 driver for vSphere, VMware has introduced support for vVols as a storage mechanism for Cloud Native Storage. Now, vSphere vVols are supported as backing storage for Kubernetes PersistentVolumes that are provisioned through the VMware CSI driver.

vVols are preferred as the storage of choice for container volumes because of their scaling ability and day-2 operations. vVols also allow for a much larger scale than traditional LUNs, up to 64K vVols per ESXi host, and you don’t have to manage LUNs or volumes, which would be a nightmare at scale.

64K vVols per host is a huge number, and because this storage type is SPBM based, with native primitives, vVols can be adjusted after provisioning to change the SLA or other storage parameters. This can’t be done with VMFS and NFS, as tag-based SPBM placement only allows for the initial placement of volumes, not reconfiguration.

The VMware vSphere 7 release has numerous added features, including native integration of Tanzu Kubernetes Grid (TKG) to drive adoption of Kubernetes through familiar tools.

A Tanzu Kubernetes Grid (TKG) cluster is a Kubernetes (K8s) cluster that runs inside virtual machines on the Supervisor layer, not as vSphere Pods. It is enabled via the Tanzu Kubernetes Grid Service for vSphere. Since a TKG cluster is fully upstream-compliant with open-source Kubernetes, it is guaranteed to work with all your K8s applications and tools. That alone is a big advantage.

A Tanzu Kubernetes cluster is a full distribution of the open-source Kubernetes container orchestration platform that is built, signed, and supported by VMware. You can provision and operate Tanzu Kubernetes clusters on the Supervisor Cluster by using the Tanzu Kubernetes Grid Service. A Supervisor Cluster is a vSphere cluster that is enabled with vSphere with Kubernetes. A Tanzu Kubernetes cluster that is provisioned by the Tanzu Kubernetes Grid Service has the following key characteristics (a sketch of such a cluster definition follows the list):

    • Opinionated Installation of Kubernetes
    • Integrated with the vSphere Infrastructure
    • Production Ready
    • Fully Supported by VMware
    • Managed by Kubernetes
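To give an idea of how such a cluster is requested, here is a minimal sketch of a TanzuKubernetesCluster manifest applied to the Supervisor namespace. The cluster name, namespace, VM class, storage class, and Kubernetes version below are placeholders for my environment; the storage class has to match one that was assigned to the namespace.

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01          # placeholder cluster name
  namespace: powerflex-demo     # placeholder Supervisor namespace
spec:
  distribution:
    version: v1.17              # Kubernetes release shipped with the TKG Service
  topology:
    controlPlane:
      count: 1                  # matches the single control plane node in this demo
      class: best-effort-small  # VM class defined on the Supervisor Cluster
      storageClass: powerflex-vvol   # assumed class name backed by the PowerFlex vVol policy
    workers:
      count: 6                  # matches the six worker nodes in this demo
      class: best-effort-small
      storageClass: powerflex-vvol
```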

Now that you have read about PowerFlex vVols, it’s time for a demonstration.

The first step is to assign a PowerFlex storage policy that TKG can utilize by attaching it to the namespace. A new Kubernetes vVol-based storage class is then created and mapped to the namespace.
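With vSphere with Tanzu, assigning the storage policy to the namespace is what surfaces it as a Kubernetes storage class, so in practice you usually don’t author it by hand. The manifest below is only an illustrative sketch of what a vVol-backed class for the vSphere CSI provisioner looks like; the class and policy names are assumptions.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powerflex-vvol                  # assumed name, derived from the storage policy
provisioner: csi.vsphere.vmware.com     # VMware CSI / Cloud Native Storage provisioner
parameters:
  storagepolicyname: "PowerFlex-VVOL"   # SPBM policy built on the PowerFlex vVol datastore
```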



The next step is logging in to the TKG cluster by using the kubectl command and specifying my vSphere user, the TKG cluster, and the Kubernetes namespace.
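The login itself goes through the vSphere plugin for kubectl; the server address, user, cluster name, and namespace below are placeholders for my lab.

```sh
# Log in to the Supervisor and the TKG cluster through the vSphere plugin for kubectl.
kubectl vsphere login --server=192.168.1.100 \
  --vsphere-username administrator@vsphere.local \
  --tanzu-kubernetes-cluster-namespace powerflex-demo \
  --tanzu-kubernetes-cluster-name tkg-cluster-01 \
  --insecure-skip-tls-verify
```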


We can take a look at the nodes to make sure we are in the right context; here we can see the single control plane node and six worker nodes.
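Something like the following, where the context name matches the placeholder cluster name used at login:

```sh
# Switch to the context that the login created for the TKG cluster, then list the nodes.
kubectl config use-context tkg-cluster-01
kubectl get nodes -o wide
```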


For the purpose of this demo, a MySQL instance and a WordPress application that uses the MySQL database will be deployed. Each application requires a persistent volume, which represents a single PowerFlex vVol.

I set the storage class for each application to the PowerFlex-VVOL storage policy; this allows dynamic provisioning of vVol-based persistent volumes for my TKG Kubernetes workloads.
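As a sketch, the persistent volume claim for the MySQL database could look like this; the claim name and requested size are just examples, and the storageClassName must match the class exposed for the PowerFlex-VVOL policy.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim              # example claim name, referenced by the MySQL deployment
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: powerflex-vvol  # assumed class backed by the PowerFlex-VVOL storage policy
  resources:
    requests:
      storage: 20Gi                 # example size
```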


Now let’s deploy the two applications by running the kubectl create command.
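For example (the manifest file names are placeholders):

```sh
# Deploy both applications from their manifests.
kubectl create -f mysql-deployment.yaml
kubectl create -f wordpress-deployment.yaml
```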

You can see that the persistent volumes are created and bound to the pods; in the background, these are basically PowerFlex vVols, which are handled by the VMware Cloud Native Storage plugin.
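A quick check of the claims and volumes confirms the binding:

```sh
# Each claim should show as Bound; every bound PV corresponds to a PowerFlex vVol
# provisioned through the vSphere CSI / Cloud Native Storage plugin.
kubectl get pvc,pv
```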


Within a few seconds, we can access the application UI by specifying the external IP of the service.
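For example, assuming the service name from the standard WordPress example manifests:

```sh
# Look up the external IP of the WordPress service.
kubectl get svc wordpress
```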


Now, let’s connect to the WordPress site and configure my first website.


Dell EMC PowerFlex vVols integration is a key component of this solution, as it offers the full flexibility and granularity required by your containerized cloud-native applications.

Below, you can see a demo of how it all works.
