
Dell EMC PowerStore and VMware vSphere 7 Update 1 with VMware Tanzu


VMware has announced vSphere 7 Update 1, introducing VMware vSphere with VMware Tanzu.

vSphere 7.0 Update 1 makes it possible to run containers and Tanzu Kubernetes Grid (TKG) without going through VMware Cloud Foundation (VCF). This means that admins can set up and use Kubernetes and containers within their organizations with vSphere alone. The good news is that neither NSX nor vSAN is required, so admins don't need to completely revamp their infrastructure to accommodate the vSphere container workload platform; they can use any block storage and, more specifically, vVols.

What makes vSphere 7U1 one of the most anticipated vSphere releases to date is that customers can quickly modernize the 70 million+ workloads running on vSphere today. vSphere with Tanzu is the fastest way to get started with Kubernetes workloads on a developer-ready infrastructure.

vSphere with Tanzu offers the ability to configure an enterprise-grade Kubernetes infrastructure leveraging your existing networking and storage in as little as an hour:

Tanzu Kubernetes Grid (TKG) service: The TKG service allows IT admins to deploy and manage consistent, compliant, and conformant Kubernetes runtime environments, while giving developers simple, fast, self-service provisioning of Tanzu Kubernetes clusters in just a few minutes. Because TKG is conformant with upstream Kubernetes, container-based applications and development environments can be migrated with low friction and without refactoring.

Bring Your Own Networking: Customers now have a choice of networks to use for Tanzu Kubernetes clusters. One can use existing networking infrastructure with vSphere Distributed Switch (vDS) centralized interfaces to configure, monitor, and administer switching access for VMs and Kubernetes workloads. Networking for Tanzu Kubernetes clusters is now more approachable than ever with the option of using vDS, yet this does not preclude customers from using NSX if they choose to do so. In addition, Antrea delivers performance, portability, and operations for the internal networking of the Tanzu Kubernetes Grid service.

Bring Your Own Storage: VMware has worked on deep integrations for storage in Kubernetes to provide persistent storage volume management. The best way to manage storage in vSphere with Tanzu is to use Storage Policy Based Management, a framework native to storage systems such as vSAN and vVols, but for customers not using these capabilities the storage service can provide volumes using any existing block or file-based storage infrastructure.

Bring Your Own Load Balancer: Choose your own L4 load balancing for Tanzu Kubernetes clusters. The first load balancing partner for VMware is HAProxy. It is packaged in an OVA format for easy deployment and configuration, and commercial support will be offered directly by HAProxy. This new support for different networking options builds a framework through which more partners will be made available in the future, offering unprecedented choice in network options.

Application-focused management: In the same fashion that IT admins use vCenter Server tools to manage VMs, they can now use vCenter to operate modern applications using namespaces as a unit of management. This is referred to as 'application-focused management'. Using application-focused management, IT admins can use vCenter Server to observe and troubleshoot Tanzu Kubernetes clusters alongside VMs, implement role-based access, and allocate capacity to developer teams.

For more details on vSphere with Tanzu, please return to the vSphere blog often; we will have a series of blogs and demos showcasing its use. In the meantime, you can run your own trial of vSphere with Tanzu. It is an inherent part of vSphere, allowing you to start your trial from any vSphere 7 Update 1 installation in record time.


One of the new features that was added in vSphere 7.0 is the ability to provision Virtual Volumes (vVols) to back Kubernetes Persistent Volumes (PVs) via the updated version of the vSphere Container Storage Interface (CSI) driver.

PowerStore vVols leverage Storage Policy Based Management (SPBM) to ensure VMs have the appropriate storage capabilities through their entire life cycle. VM storage policies can be optionally created after the storage provider is registered. These policies are used to determine the desired storage capabilities when a VM is being provisioned.

vVol datastores can be added to the vSphere Kubernetes namespace as SPBM policies by clicking the Edit Storage button.
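Once a policy is assigned to the namespace, it surfaces inside the cluster as a Kubernetes StorageClass. As a rough illustration, a StorageClass backed by an SPBM policy through the vSphere CSI driver looks like the sketch below; the class name and policy name are hypothetical, and in vSphere with Tanzu this object is generated automatically when the policy is assigned to the namespace, so you would not normally write it by hand:

# Illustrative sketch only - the class name and SPBM policy name are hypothetical.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powerstore-vvol
provisioner: csi.vsphere.vmware.com           # vSphere CSI driver
parameters:
  storagepolicyname: "PowerStore vVol Policy" # SPBM policy that selects the vVol datastore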


In the vSphere Hosts and Clusters view, we can find the TKG cluster I deployed in advance, which consists of one master node and five worker nodes. Under the Related Objects tab, we can see that this TKG cluster is running on the vVol datastore.
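For context, a Tanzu Kubernetes cluster with that shape is requested declaratively from the Supervisor cluster. The manifest below is a hypothetical sketch, not the exact one I used; the cluster name, namespace, VM class, storage class, and Kubernetes version would all need to match what is available in your environment:

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-demo                     # hypothetical cluster name
  namespace: powerstore-demo         # hypothetical vSphere namespace
spec:
  distribution:
    version: v1.18                   # hypothetical Tanzu Kubernetes release version
  topology:
    controlPlane:
      count: 1                       # one master node, as in the cluster above
      class: best-effort-small       # hypothetical VM class
      storageClass: powerstore-vvol  # vVol-backed storage policy exposed as a storage class
    workers:
      count: 5                       # five worker nodes
      class: best-effort-small
      storageClass: powerstore-vvol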


By navigating to the PowerStore UI and clicking on the Compute tab, we can see that PowerStore detects the TKG virtual machines, and we can find really useful information not only about the storage but also about the VMs' performance metrics.


By clicking on the vVols tab we can see the different virtual volumes for the config, swap, and data files.


The type of vVol provisioned depends on the type of data that is being stored:

•    Config: Stores standard VM configuration data such as .vmx files, logs, and NVRAM

•    Data: Stores data such as VMDKs, snapshots, full clones, and fast clones

•    Swap: Stores a copy of the VM memory pages when the VM is powered on. Swap vVols are automatically created and deleted when VMs are powered on and off.

In addition to the TKG VM files, every time we create a new Kubernetes application with a persistent volume on this TKG cluster, a new dedicated vVol is created, representing the Kubernetes PersistentVolume object.
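Requesting such a volume only takes a standard PersistentVolumeClaim against the vVol-backed storage class. The claim below is a minimal, hypothetical sketch; the name, storage class, and size are illustrative and simply show the pattern:

# Hypothetical PVC sketch - name, storage class, and size are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data                    # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce                   # vVol-backed block volumes are mounted read-write by a single node
  storageClassName: powerstore-vvol   # the SPBM-backed storage class shown earlier
  resources:
    requests:
      storage: 5Gi                    # this request sets the size of the resulting Data vVol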

One of the main advantages of running Kubernetes on top of vSphere is the insight we get via Cloud Native Storage (CNS). Rather than having to keep switching between array views and datastore content views, we can view all the information relevant to persistent volumes consuming vSphere storage in one place, and using vVols for PVs is no different. Here is the Container Volumes view, showing our PV in the vSphere UI:

We can see the PV name and that it is on the PowerStore-VVol datastore, which is compliant with the storage policy. We can also see the health status and the capacity.


If I click on the Details icon, under the Basics view there is more information, such as the volume type, the storage policy, and the volume ID.


We also see more information about the Kubernetes objects, such as the different labels, the name of the pod, and the namespace.

With Dell EMC PowerStore, we took this integration one step further. If we navigate to the PowerStore UI and open the Compute view, we can find the new MySQL application I have just created among the compute objects, with full visibility into the capacity, compute, and storage performance of that specific pod. Because each persistent volume is a vVol, the guest OS file system is the native file system on the vVol itself; it is not a VMDK sitting on top of another file system such as VMFS or NFS. Each vVol is a first-class citizen, meaning it is independent of other vVols, LUNs, or volumes. As a result, vVols match very well with First Class Disks (FCDs), which are disks that do not require an associated VM. vVols also allow for a much larger scale than traditional LUNs, up to 64K per host, and you don't have to manage LUNs or volumes.
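For completeness, a MySQL application consuming such a claim could be deployed with a manifest along these lines. This is a minimal, hypothetical sketch (the image tag, secret name, and labels are made up), and the pod it creates is the kind of object that shows up in the CNS and PowerStore views described above:

# Minimal, hypothetical MySQL Deployment consuming the vVol-backed PVC sketched earlier.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql                    # label surfaced in the Container Volumes view
    spec:
      containers:
        - name: mysql
          image: mysql:8.0            # hypothetical image tag
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret  # hypothetical Secret holding the root password
                  key: password
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: mysql-data     # the PVC from the earlier sketch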


If we look at the PowerStore array, we see that four virtual volumes have been created:

There is one Config vVol representing the catalog directory. This catalog contains the metadata that tracks the FCDs on a datastore; we will see one catalog created per datastore when FCDs are provisioned on it.

There is one Config vVol representing the fcd directory.

Then we have two additional vVols: one Data and one Other.

The Data vVol is the VMDK itself, and the Other vVol is the .vmfd “sidecar” metadata file that we see on the datastore.

The size of the Data vVol matches the request we made in the PVC manifest.


Because each PV is a vVol, we can be even more specific and get the performance metrics for each and every vVol.


You can see a demo of how it all works below.

 
