So far, we have covered the following aspects of Dell Technologies PowerStore:
High-Level Overview, Hardware, AppsON, vVols, File Capabilities, User Interface, Importing external storage, how PowerStore local protection works, remote replication, VMware SRM integration, the resource balancer, and the integration with an upstream Kubernetes and/or Red Hat OpenShift.
Calling vSphere 7 a major release is an understatement! This latest version of vSphere has numerous added features, including native integration of the Tanzu Kubernetes Grid (TKG) to drive adoption of Kubernetes through familiar tools. As a result, vSphere 7 provides the platform needed to convert a data center into a self-contained cloud capable of supporting multiple service offerings that go above and beyond infrastructure provisioning services.
Dell EMC PowerStore vVols integration is a key component of this solution, as it offers a flexible and granular approach to using vVols with Kubernetes: with VMware CNS (Cloud Native Storage), each vVol represents a Kubernetes persistent volume.
vVols adoption continues to grow and is accelerating in 2020 and it’s easy to see why. vVols eliminates LUN management, accelerates deployments, simplifies operations, and enables utilization of all of the array’s functionality.
vSphere 7 with Kubernetes brings two major innovations.

The first is the embedded Kubernetes cluster, which has all the necessary components to handle the various Kubernetes resource objects and APIs. The embedded Kubernetes cluster runs each container enclosed in a compact VM on the hypervisor, also known as the Pod VM.
In the second integration method, we leverage Kubernetes embedded within vSphere to build a service known as Tanzu Kubernetes Grid (TKG) service, which will help deploy multiple Kubernetes clusters on top of vSphere. These clusters are all upstream-compliant; they run containers on Linux hosts and do not embed containers in a pod VM.
VMware Tanzu™ Kubernetes Grid™ is an enterprise-ready Kubernetes runtime that streamlines operations across a multi-cloud infrastructure.
Engineered to simplify installation and Day 2 operations, Tanzu Kubernetes Grid packages together key open source technologies and automation tooling to help you get up and running quickly with a scalable, multi-cluster Kubernetes environment.
VMware Tanzu™ Kubernetes Grid™ provides organizations with a consistent, upstream-compatible, regional Kubernetes substrate across software-defined datacenters (SDDC) and public cloud environments that is ready for end-user workloads and ecosystem integrations. Tanzu Kubernetes Grid builds on trusted upstream and community projects and delivers a Kubernetes platform that is engineered and supported by VMware, so that you do not have to build your Kubernetes environment by yourself.
Tanzu Kubernetes Grid is available in different configurations and implementations:
Standalone Tanzu Kubernetes Grid, a multi-cloud Kubernetes footprint on-prem and in the public cloud. This standalone, multi-cloud Tanzu Kubernetes Grid experience is the focus of this documentation.
VMware Tanzu™ Kubernetes Grid™ service for vSphere, a tightly integrated Kubernetes experience available with VMware Cloud Foundation 4.0.
VMware Tanzu™ Mission Control™, a hosted Tanzu Kubernetes Grid implementation for public cloud environments.
In this blog post, we will focus on VMware Tanzu Kubernetes Grid with vSphere 7.
Key characteristics of Tanzu Kubernetes clusters provisioned by the Tanzu Kubernetes Grid Service include the following:
A Tanzu Kubernetes cluster is an opinionated installation of Kubernetes.
The Tanzu Kubernetes Grid Service provides well-thought-out defaults optimized for vSphere to provision Tanzu Kubernetes clusters. The Tanzu Kubernetes Grid Service reduces the amount of time and work typically required to get an enterprise-grade Kubernetes cluster up and running. For more information, see Provisioning Tanzu Kubernetes Clusters Using the Tanzu Kubernetes Grid Service.
A Tanzu Kubernetes cluster is integrated with the underlying infrastructure.
A Tanzu Kubernetes cluster is integrated with the vSphere SDDC, including storage, networking, and authentication. In addition, a Tanzu Kubernetes cluster is built on a Supervisor Cluster that maps to a vCenter Server cluster. Because of the tight integration, running a Tanzu Kubernetes cluster is a unified product experience. For more information, see Architecture and Components of the Tanzu Kubernetes Grid Service.
A Tanzu Kubernetes cluster is tuned for running production workloads.
The Tanzu Kubernetes Grid Service provisions production-ready Tanzu Kubernetes clusters. You can run production workloads without the need to perform any additional configuration. In addition, you can ensure availability and allow for rolling Kubernetes software upgrades and run different versions of Kubernetes in separate clusters.
A Tanzu Kubernetes cluster is supported by VMware.
Tanzu Kubernetes clusters use the open-source Photon OS from VMware, are deployed on vSphere infrastructure, and run on ESXi hosts. If you experience problems with any layer of the stack, from the hypervisor to the Kubernetes cluster, VMware is the only vendor you need to contact.
A Tanzu Kubernetes cluster is managed by Kubernetes.
Tanzu Kubernetes clusters are built on top of the Supervisor Cluster, which is itself a Kubernetes cluster. A Tanzu Kubernetes cluster is defined in the Supervisor Namespace using a custom resource. You provision Tanzu Kubernetes clusters in a self-service way using familiar kubectl commands. There is consistency across the toolchain, whether you are provisioning a cluster or deploying workloads, you use the same commands, familiar YAML, and common workflows.
Since there is a lot of confusion around VMware’s different offerings, I want to highlight the differences between the VMware modern-app products.
- VMware Tanzu Kubernetes Grid Integrated (TKGI) – FKA Enterprise PKS 1.7 https://tanzu.vmware.com/content/blog/vmware-tanzu-kubernetes-grid-integrated-edition-1-7
- VMware Tanzu Kubernetes Grid standalone – available today, doesn’t require VCF.
VMware Tanzu Kubernetes Grid, informally known as TKG, is a multi-cloud Kubernetes footprint that you can run both on-premises in vSphere and in the public cloud. In addition to Kubernetes binaries that are tested, signed, and supported by VMware, Tanzu Kubernetes Grid includes signed and supported versions of open source applications to provide the networking, authentication, ingress control, and logging services that a production Kubernetes environment requires ( https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/index.html )
vSphere with Kubernetes – FKA Project Pacific – is only available with VCF 4.0 at this time and includes:
- VMware Tanzu Kubernetes Grid – the ability to deploy virtual Photon OS-based Kubernetes clusters.
- VMware vSphere Pod Service – the ability to run pods directly on the ESXi hosts.
The differences between the two:
vSphere with Kubernetes for vSphere 7.0 requires a special license in addition to the regular vSphere license. It is available as part of VMware Cloud Foundation 4.0 and later.
Tanzu Kubernetes Grid with vSphere 7 is currently available via VMware Cloud Foundation 4.0.
VMware Tanzu Kubernetes Grid standalone doesn’t require NSX-T, as the TKG cluster network settings are manually specified during the deployment.
In VMware Tanzu Kubernetes Grid as part of vSphere with Kubernetes, the internal communication between the pods is done using Calico, while external access to the virtual Kubernetes cluster is provided via NSX-T virtual tunnels.
Cloud Native Storage (CNS) and the vSphere CSI Driver – CNS provides comprehensive data management for stateful, containerized apps, enabling apps to survive restarts and outages. Stateful containers can use vSphere storage primitives such as standard volumes, persistent volumes, and dynamic provisioning, independent of the VM and container lifecycle.
CNS can be manually installed on native Kubernetes clusters, and it is built into TKG.
The vSphere CSI driver (CSI 1.0.2 & 2.0) is available on the following platforms:
- Native K8s on vSphere 6.7U3
- Native K8s on vSphere 7.0
- vSphere with Kubernetes – Supervisor Cluster
- vSphere with Kubernetes – TKG ‘Guest’ Cluster
- Enterprise PKS 1.7 (TKGI) on vSphere 6.7U3
Dell Technologies PowerStore can integrate with TKG using either VMFS or vVols. If you are new to vVols, I suggest you first start by reading about the PowerStore vVols architecture here: https://volumes.blog/2020/05/06/what-is-powerstore-part-4-vvols/
After you read about PowerStore vVols, the next step is to create a storage policy that TKG can utilize: use Policies and Profiles -> VM Storage Policies -> Create VM Storage Policy as shown below.
Next, we need to build a storage policy in vSphere that can be referenced by the Kubernetes StorageClass manifest. These rules are created automatically using the vVols VASA registration.
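To illustrate how the policy ends up being consumed from Kubernetes: on a Supervisor Cluster the storage classes are generated automatically from the VM Storage Policies you assign to a namespace, but on a native Kubernetes cluster running the vSphere CSI driver you would reference the policy by name in a StorageClass. A minimal sketch, assuming a policy called "PowerStore-vVols-Policy" (the policy and class names here are hypothetical):

```yaml
# Hypothetical StorageClass referencing a vSphere SPBM policy by name.
# On vSphere with Kubernetes (Supervisor Cluster) an equivalent class is
# created for you automatically when the policy is assigned to the namespace.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powerstore-vvols
provisioner: csi.vsphere.vmware.com        # the vSphere CSI driver
parameters:
  storagepolicyname: "PowerStore-vVols-Policy"   # name of the VM Storage Policy created above
```

Any PersistentVolumeClaim that names this class will be satisfied by a volume carved out under that SPBM policy, which on PowerStore means a vVol.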
Since the PowerStore vVols / VASA policy also supports array-based QoS, you can select it here as well, as shown below. One use case I can think of: you have a new project and you are not sure how much IOPS / bandwidth will be consumed by the running containers, so you can limit them to “Medium” or “Low”.
PowerStore vVols leverage Storage Policy Based Management (SPBM) to ensure VMs have the appropriate storage capabilities through their entire life cycle. VM storage policies can be optionally created after the storage provider is registered. These policies are used to determine the desired storage capabilities when a VM is being provisioned.
We are pretty much done; we can review the settings and click Next -> Finish.
We are now ready to create a new namespace: click the Workload Management -> Namespaces tab -> New, select the vSphere cluster that is configured for vSphere with Kubernetes, and give it a name.
We can now see that the namespace has been created successfully and the Kubernetes status is now Active.
Now it’s time to add storage for the namespace to use. Since we are dealing with vVols here, let’s click Storage -> Add Storage and select the storage policy we created earlier.
After logging in to the control plane cluster with kubectl, we need to run the ‘kubectl config use-context’ command to switch context to the newly created namespace.
We can validate that the storage policy is available for this namespace with ‘kubectl get sc’.
As you can see, the storage class is specified in the manifest file of the TKG cluster; this storage class is used for both the control plane and the worker nodes.
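For reference, a TKG cluster manifest along these lines is what gets applied with kubectl. This is a minimal sketch using the v1alpha1 TanzuKubernetesCluster API; the cluster name, namespace, VM class, storage class name, and version are assumptions for illustration:

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-cluster-01            # hypothetical cluster name
  namespace: powerstore-ns        # the vSphere namespace created earlier (name assumed)
spec:
  distribution:
    version: v1.16                # Kubernetes release shorthand; resolved to a full build
  topology:
    controlPlane:
      count: 1
      class: best-effort-small    # VM class for the control plane node
      storageClass: powerstore-vvols   # storage class backed by our vVols policy
    workers:
      count: 3
      class: best-effort-small
      storageClass: powerstore-vvols   # same class for the worker node disks
```

Applying this manifest (‘kubectl apply -f’) from the Supervisor Cluster context is what triggers the provisioning described next.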
Now, let’s go ahead and create the TKG cluster.
We can now see that the control plane VM has been created, but it’s not up and running yet. There is nothing special you need to do but wait a little bit…
Ok, everything is up and running!
We can now navigate to the NSX-T console (which is an integrated part of the solution) and see that a Tier-1 gateway has been created, as well as a load balancer.
It’s now time to navigate to the PowerStore console, where you should see multiple vVols that have been created, each belonging to a particular TKG VM.
The type of vVol provisioned depends on the type of data that is being stored:
- Config vVols: Store standard VM configuration data such as .vmx files, logs, and NVRAM
- Data vVols: Store data such as VMDKs, snapshots, full clones, and fast clones
- Swap vVols: Store a copy of the VM memory pages when the VM is powered on. Swap vVols are automatically created and deleted when VMs are powered on and off.
We can now log in to the guest TKG cluster we created by specifying the tanzu-kubernetes-cluster-name and namespace.
We can verify we are in the right namespace by running the ‘kubectl get nodes’ command.
Ok, time to create an app! We are going to deploy a database called YugabyteDB, a truly cloud-native, scale-out DB. This app consists of three master nodes and three worker nodes.
Each has its own persistent volume, so we can see them being created.
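Behind each of those volumes is a PersistentVolumeClaim that names the storage class, and CNS answers the claim with a PowerStore data vVol. A hedged sketch of what such a claim looks like; the claim name, class name, and size here are assumptions, not taken from the actual YugabyteDB chart:

```yaml
# Hypothetical PVC similar to what each database pod requests.
# CNS provisions a PowerStore data vVol behind each bound claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-yb-master-0          # name assumed for illustration
spec:
  accessModes:
    - ReadWriteOnce                  # a vVol-backed disk attaches to one node at a time
  storageClassName: powerstore-vvols # the class exposed from our storage policy (name assumed)
  resources:
    requests:
      storage: 10Gi
```

Once the claim is bound, ‘kubectl get pvc’ shows it against the storage class, and the matching vVol appears in the PowerStore console.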
We can access the application UI by specifying the external IP of the service and the port it uses.
There you go, easy!
Kubernetes continues to grow in adoption, and VMware is at the forefront. One of Kubernetes’ requirements is persistent storage, and until now that meant vSAN, NFS, or VMFS. The thing is, vVols couldn’t be more suited for K8s storage, because each vVol is its own entity. Deploy a First Class Disk (FCD) as a vVol and you’ve got a first-class disk as a first-class citizen, with additional benefits like mobility and CSI-to-SPBM policy mapping.
We can now see that all the vVols were created using the VMware CNS (Cloud Native Storage) interface.
vSphere storage backs the volumes, and you can set a storage policy directly on the volumes. After you create the volumes, you can use the vSphere client to review the volumes and their backing virtual disks, and monitor their storage policy compliance.
And because PowerStore is deeply integrated with VMware vVols, you can also see storage performance metrics for each and every persistent volume, as each vVol represents a Kubernetes persistent volume that belongs to a single pod.
More information about VMware TKG is available here –
You can also see a demo of how it all works with PowerStore here –
What about other platforms?
For PowerMax, I encourage you to read the post Drew Tonnesen wrote here; it is really good and walks you through the VASA elements (amongst many others): https://drewtonnesen.wordpress.com/2020/05/15/cns-vvols/
Jason Boche also had a good write-up here: http://www.boche.net/blog/2020/05/17/vsphere-with-kubernetes/