VMware has released vSphere 7 Update 1, introducing VMware vSphere with Tanzu.
vSphere 7.0 Update 1 brings a new possibility to run containers and Tanzu Kubernetes Grid (TKG) without the need to go via VMware Cloud Foundation (VCF). This means that admins will be able to set up and use Kubernetes and containers within their organizations with vSphere alone. The good news is that NSX and vSAN are not required, so admins don't need to completely revamp their infrastructure to accommodate the vSphere container workload platform, and they can use any block or file storage.
Cloud Native Storage (CNS) provides comprehensive data management for stateful, containerized apps, enabling apps to survive restarts and outages. Stateful containers can use vSphere storage primitives such as standard volume, persistent volume, and dynamic provisioning, independent of VM and container lifecycle. CNS works with VMFS and NFS datastores using tag-based SPBM policies.
PowerScale NFS datastores can be added to the vSphere Kubernetes namespace as SPBM policies by clicking on the edit storage button under the namespace view.
Once added, the storage classes automatically appear in the Kubernetes environment, and can be used. If a vSphere administrator assigns multiple storage policies to the Supervisor Namespace, a separate storage class is created for each storage policy.
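Under the hood, each storage policy surfaces as a Kubernetes StorageClass backed by the vSphere CSI provisioner. A sketch of what such a generated class can look like (the class name `powerscale-nfs` and the policy name are illustrative examples, not values from this environment):

```yaml
# Illustrative StorageClass, as auto-generated from an SPBM policy.
# The metadata.name and storagepolicyname values are examples.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powerscale-nfs
provisioner: csi.vsphere.vmware.com   # the vSphere CSI driver
parameters:
  storagepolicyname: "PowerScale NFS" # matches the SPBM policy assigned to the namespace
```

The vSphere administrator never writes this YAML by hand; assigning the policy to the Supervisor Namespace is what causes the class to appear.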
If you use the Tanzu Kubernetes Grid Service to provision Tanzu Kubernetes clusters, each Tanzu Kubernetes cluster inherits storage classes from the Supervisor Namespace in which the cluster is provisioned.
These storage classes can be used to create TKG clusters as well as PVs (persistent volumes).
As you can see in the example below, the PowerScale storage class is used in the manifest file of my TKG cluster; this storage class is used for both the control plane and worker nodes.
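A minimal TanzuKubernetesCluster manifest along those lines might look like the following sketch (the cluster name, namespace, VM class, and Kubernetes version are illustrative; `powerscale-nfs` stands in for the storage class created from the PowerScale policy):

```yaml
# Hypothetical TKG cluster manifest; all names and versions are examples.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-demo            # example cluster name
  namespace: demo-namespace # the Supervisor Namespace it is provisioned in
spec:
  distribution:
    version: v1.18          # example Kubernetes release
  topology:
    controlPlane:
      count: 1
      class: best-effort-small
      storageClass: powerscale-nfs   # class derived from the PowerScale policy
    workers:
      count: 3
      class: best-effort-small
      storageClass: powerscale-nfs   # same class for the worker nodes
```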
Once the guest TKG cluster is up and running in vSphere, applications and PVs can be created in it.
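For example, a persistent volume claim against that class might look like this sketch (the claim name and size are illustrative):

```yaml
# Hypothetical PVC using the PowerScale-backed storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                     # example claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: powerscale-nfs   # class inherited from the Supervisor Namespace
  resources:
    requests:
      storage: 5Gi                   # example size
```

Because the class is set as dynamic provisioning, creating the claim is enough; no PV has to be pre-created.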
If we navigate to the PowerScale UI and click on the file system explorer, we can see the tkg export, which is actually our NFS datastore.
In this directory we can find the files for the TKG master and worker nodes, but if we go to the FCD directory, we can find the application persistent volume files.
Persistent storage is presented through the VMware CSI driver together with CNS (Cloud Native Storage). It uses First Class Disks (FCDs) instead of standard virtual disks.
FCDs are just virtual disks, but in the API they are first-class objects: they can be created and exist independently of a VM, which makes sense for something backing a container.
FCDs can be created, resized, and so on just like a virtual disk, but without a VM to own them, which is exactly what a Kubernetes persistent volume claim needs.
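That mapping is visible on the Kubernetes side: a CSI-provisioned PV records the FCD's ID as its volume handle. A trimmed, illustrative fragment of what such a PV spec can look like (the name and ID below are made-up examples):

```yaml
# Illustrative fragment of a dynamically provisioned PV; values are examples.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-example
spec:
  csi:
    driver: csi.vsphere.vmware.com
    volumeHandle: 6e5f0d42-0000-0000-0000-000000000000  # the FCD ID backing this volume
```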
One of the main advantages of running K8s on top of vSphere is the insight we get via CNS. Rather than having to switch back and forth between array views and datastore content views, we can see all the information relevant to persistent volumes consuming vSphere storage in one place. Using NFS for PVs makes no difference here. Here is the Container Volumes view, showing our PV in the vSphere UI:
We can see the PV name, we see it is on the PowerScale datastore which is compliant with the storage policy. We can also see the health status and the capacity.
If I click on the Details icon, under the Basics view there is more information, such as the volume type, storage policy, and volume ID.
We also see more information about the Kubernetes objects, such as the different labels, the name of the pod, and the namespace.
Below, you can see a demo of how it all works.