–08/04/2019 update–

The XtremIO plugin is now available; you can read about it here:

https://volumes.blog/2019/03/03/the-xtremio-kubernetes-csi-plugin-is-now-available/

The VxFlex OS plugin is also available; you can read about it here:

https://volumes.blog/2019/04/12/the-vxflex-os-3-0-kubernetes-csi-plugin-is-now-available/

The PowerMax plugin is also available; you can read about it here:

https://volumes.blog/2019/07/24/dell-emc-powermax-is-now-supporting-kubernetes-csi/

This blog post should really have been called “where are we as Dell EMC when it comes to integration with Kubernetes?”, and really, we are already here.

From a Dell Technologies perspective, you can use the Pivotal Container Service (PKS), and from a VMware perspective, you can use vSphere Integrated Containers.

Right, but you want to use VxFlex OS / PowerMax / XtremIO / Unity etc., and that is why this post exists. I wanted to start by looking at my personal experience with containers, as my journey with Docker started in 2015.

(Credit: Laurel Duermaël)

Docker was the hottest thing and the biggest promise in the industry; everyone was talking about the way they would monetize the containers landscape by providing a technology that is simple to use and “just works”. But you see that bottle of milk in the image above? It is there for a reason: as an infant company, they ran into many challenges with the way they perceived the usage of containers.

  • The first one: while “next gen apps” supposedly shouldn’t need a persistent storage layer, because the fundamental design of the container runtime is that a container gets deleted every time it is powered off, Docker found out that many customers care, and they care a LOT, about the persistency of their data. Data is the most important aspect of everything in the data center, so customers wanted to run either their current apps in a container with a persistent data layer, or even next gen apps where the app running inside the container may get shut down and deleted but the data shouldn’t! Fast forward to the KubeCon 2018 conference:

Yes, that’s Google saying that more than 40% of the clusters are running stateful apps!

  • The second issue, related to the first one, is how a small company such as Docker could develop all of these plugins, even if they already agreed that this is a real need.

And like clockwork, you know that if there is a gap in a product that is supposed to be the next big thing, there will be startups that try to fill that gap. Enter ClusterHQ, a startup with one mission in mind: to provide a persistent storage layer between storage arrays and Docker. So back in June 2015 we (XtremIO and ScaleIO) already had a working plugin for Docker.

But then 2017 happened and ClusterHQ shut down, so the problem resurfaced again. The difference now was that the entire container industry was more mature, with many more customers asking for a persistent storage layer.

The other change was the rise of the container orchestrators. It wasn’t about the Docker runtime anymore; it was all about how you spin up clusters of containers and how you really manage the hundreds or thousands of containers you may run. In the category of orchestrators, we first saw the rise of Mesos, then Docker Swarm and Kubernetes, and after a year or two, when the dust settled, I think it’s OK to say that Kubernetes has won as the orchestrator. That doesn’t mean the other two no longer exist, but at least from my conversations with customers, they ask for support of Kubernetes as the orchestrator, possibly with an upper abstraction layer on top such as those offered by Pivotal or Red Hat.

So, if Kubernetes is the leading orchestrator, shouldn’t it come with a solution for the persistent storage layer?

Enter the Container Storage Interface (‘CSI’ in short). This was a multi-year effort led by Google, with supporters from other companies, to provide a true, common, open API for connecting storage arrays to container orchestrators. The good thing about CSI was that it didn’t try to accommodate every protocol or unique storage feature; they focused on file support first, then block, starting with the basic features, and continued to release versions incrementally, gaining support from the storage industry.

But lessons had to be learned.

The first implementation of volume plugins was “In-Tree” (https://kubernetes.io/docs/concepts/storage/volumes/), which meant that they were linked, compiled, built, and shipped with the core Kubernetes binaries. Adding a new storage system to Kubernetes (a volume plugin) required checking code into the core Kubernetes code repository. This is undesirable for many reasons, including:

  • Volume plugin development is tightly coupled and dependent on Kubernetes releases.
  • Kubernetes developers/community are responsible for testing and maintaining all volume plugins, instead of just testing and maintaining a stable plugin API.
  • Bugs in volume plugins can crash critical Kubernetes components, instead of just the plugin.
  • Volume plugins get the full privileges of Kubernetes components (kubelet and kube-controller-manager).
  • Plugin developers are forced to make plugin source code available, and cannot choose to release just a binary.

So if In-Tree volume plugins aren’t working for the above reasons, they had to come up with a better model, which you may have already guessed:

Yes, it’s “Out-of-Tree” volume plugins.

  • The Out-of-Tree volume plugins include the Container Storage Interface (CSI) and FlexVolume. They enable storage vendors to create custom storage plugins without adding them to the Kubernetes repository.
  • Before the introduction of CSI and FlexVolume, all volume plugins were “in-tree”, as described above: built, linked, compiled, and shipped with the core Kubernetes binaries, so adding a new storage system to Kubernetes (a volume plugin) required checking code into the core Kubernetes code repository.
  • Both CSI and FlexVolume allow volume plugins to be developed independently of the Kubernetes code base and deployed (installed) on Kubernetes clusters as extensions (a sketch of how such an out-of-tree driver is consumed appears right after this list).
  • OK, so if both CSI and FlexVolume are suitable approaches, why choose CSI over FlexVolume?
  • FlexVolume
    • A legacy attempt at out-of-tree plugins
    • Exec based
    • Difficult to deploy, and remember, it’s all about ease of use
    • Doesn’t support clusters with no master access, which is a no-go for the container landscape
  • So it’s CSI then; what’s the status of CSI? I’m glad you asked, because as of October 28th, it has finally reached 1.0 GA status, which includes block support.
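To make the “Out-of-Tree” idea concrete, here is a minimal sketch of how a CSI driver is consumed once it is installed: a StorageClass simply points at the driver by its provisioner name and a claim references that class, with none of the driver code living in the Kubernetes repository. The provisioner string and the object names below are illustrative placeholders, not the actual XtremIO driver identifiers.

```yaml
# Illustrative sketch only: the provisioner name and object names are
# placeholders, not the real XtremIO CSI driver identifiers.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-csi-sc
provisioner: csi.example.vendor.com   # the name the CSI driver registers with
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: example-csi-sc
  resources:
    requests:
      storage: 10Gi
```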

You can read the entire spec here: https://github.com/container-storage-interface

What about Kubernetes support for CSI? As of December 3rd, Kubernetes 1.13 fully supports CSI 1.0.

What about the other orchestrators? Well, in the case of Red Hat OpenShift, based on this public roadmap (and roadmaps may change), it looks like OpenShift 4.1, which comes out in 03/2019, will support Kubernetes 1.13, which of course supports CSI 1.0:

https://blog.openshift.com/wp-content/uploads/OpenShift-Commons-Whats-New-in-OpenShift-Container-Platform-3.11.pdf

While this was a brief history of time, I really wanted to share with you my thoughts on where we were, where the container industry is now, and why it now has a true, mature API that we can integrate with, knowing it will be supported for years to come. With that, I want to introduce you to our upcoming XtremIO CSI plugin with Kubernetes 1.13; more platforms from our portfolio will follow suit in 2019. The demo that you can see below is a tech preview, and we are accepting beta nominations for it now, so please ask your local SE if you want to participate.

What do you see in the demo above?

The first part shows the three pods which are part of the XtremIO CSI plug-in:

The external CSI attacher container translates attach and detach calls from Kubernetes to the CSI plugin.

The external CSI provisioner container translates provision and delete calls from Kubernetes into the respective CreateVolume and DeleteVolume calls to the CSI driver.
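These sidecars follow the standard CSI deployment pattern: they watch the Kubernetes API and forward the relevant calls to the vendor driver over a shared UNIX socket. Below is a simplified, hedged sketch of what such a controller pod can look like; in the demo the components run as separate pods, and the images, names, and driver container here are placeholders rather than the actual XtremIO plugin manifests.

```yaml
# Simplified sketch of a CSI controller pod: the provisioner and attacher
# sidecars sit next to the vendor driver and talk to it over a shared socket.
# Images, names, and the driver container are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: example-csi-controller
spec:
  containers:
    - name: csi-provisioner          # PVC create/delete -> CreateVolume/DeleteVolume
      image: quay.io/k8scsi/csi-provisioner:v1.0.1
      args: ["--csi-address=/csi/csi.sock"]
      volumeMounts:
        - name: socket-dir
          mountPath: /csi
    - name: csi-attacher             # attach/detach -> ControllerPublishVolume/ControllerUnpublishVolume
      image: quay.io/k8scsi/csi-attacher:v1.0.1
      args: ["--csi-address=/csi/csi.sock"]
      volumeMounts:
        - name: socket-dir
          mountPath: /csi
    - name: csi-driver               # the storage vendor's CSI driver (placeholder image)
      image: registry.example.com/example-csi-driver:latest
      volumeMounts:
        - name: socket-dir
          mountPath: /csi
  volumes:
    - name: socket-dir
      emptyDir: {}
```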

The next part shows the deployment of a YugaByte DB stateful set, which consists of 3 master nodes and 3 worker nodes; each one requires a persistent volume claim from the XtremIO storage class upon deployment.

YugaByte DB is an open source database for high-performance applications that require ACID transactions and planet-scale data distribution, and it supports both scale up and scale down.

By creating the stateful set, 6 containers have been deployed, and 6 XtremIO volumes have been created, mapped, formatted, and mounted (one per container) using the XtremIO CSI plugin.
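For reference, here is a hedged sketch of what such a stateful set with a volumeClaimTemplates section might look like; the names, image, and storage class are placeholders rather than the exact YugaByte DB or XtremIO manifests from the demo. Each replica gets its own claim, which is why the masters and workers together result in 6 pods and 6 array volumes.

```yaml
# Illustrative sketch of one stateful set (the demo uses two: masters and
# workers). Names, image, and storage class are placeholders.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-db-workers
spec:
  serviceName: example-db-workers
  replicas: 3
  selector:
    matchLabels:
      app: example-db-workers
  template:
    metadata:
      labels:
        app: example-db-workers
    spec:
      containers:
        - name: db
          image: registry.example.com/example-db:latest
          volumeMounts:
            - name: data
              mountPath: /var/lib/db
  volumeClaimTemplates:               # one PVC, and one array volume, per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: example-csi-sc
        resources:
          requests:
            storage: 50Gi
```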

The next part shows a DB workload running against all the different worker nodes; we can see the load from the XMS as well.

The next part shows the scale-up capability of the DB cluster from 3 to 6 workers; 3 additional XtremIO volumes have been created, mapped, formatted, and mounted (one per new container) using the XtremIO CSI plugin.
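Under the hood, that scale-up is just a change to the stateful set’s replica count; the volumeClaimTemplates take care of requesting a new claim, and therefore a new XtremIO volume, for every added replica. A minimal, hypothetical patch against the worker stateful set sketched above (applied with kubectl) would look like this:

```yaml
# Hypothetical patch fragment: bump the worker stateful set from 3 to 6
# replicas; the CSI plugin provisions one new volume per added replica.
spec:
  replicas: 6
```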

When running the workload generator again, we can see that the load is redistributed across all available worker nodes and persistent volumes (XtremIO LUNs).
