Hi,
We have a lot of customers using Citrix XenDesktop, and many of them are still using PVS, so I wanted to revisit PVS 7.1 running on EMC XtremIO and touch on some of the best practices around it.
I also wanted to thank Gadi Feldman ( @gadi_fe ) from Citrix for providing me with some insights.
I would first advise you to read the first post I wrote here:
Recommended Practices When Using CITRIX XD7 PVS With EMC XtremIO
Many customers are running very large VDI workloads. This is not surprising, since the array can scale out and host tens of thousands of VDI VMs; my lab can cope with at least 2,500 VDI VMs (I'm bounded by the compute side..). While I found the PXE services to be reliable, I also found that the PXE & TFTP processes don't always cope well with the load of a boot storm. One big advantage of PVS is the ability to use a BDM disk, which in essence saves a lot of time (and network traffic) by embedding the required bootstrap files into each of the VMs. The BDM drive is very small, and regardless, XtremIO can dedupe it all, right? So there is no downside.
As you can see, the size of the BDM drive is 8MB, and I highly recommend using it!
This is how the boot of a VM looks with the BDM.. less network traffic!
Number of PVS Servers
For 2,500 VDI workloads, I would recommend 1 PVS server per 500 VDI VMs (so five PVS servers here). You can host more VMs on each PVS server, but I want to make sure my PVS servers are not the bottleneck: 4 x 4 vCPUs per PVS server and around 16GB of RAM.
PVS Server tweaks
This is kind of a grey area; there are many documented tweaks that can help you in certain cases. Keep in mind that storage is no longer your bottleneck, so I went a bit wild, first by expanding the port range used for server communication.
Secondly, I tweaked the advanced server properties “Server” tab to its limits.
Third, I changed the I/O burst size to 32K bytes.
Fourth, I changed the boot pause to 3 seconds (note that I left the maximum devices booting at 500). A scripted sketch of these tweaks follows below.
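All of these are settings in the PVS console, but if you'd rather script them, here is a rough sketch using the PVS MCLI PowerShell snap-in. The field names (firstPort, lastPort, ioBurstSize, bootPauseSeconds, maxBootDevicesAllowed) are my assumptions about what the server record exposes, so dump the record with Mcli-Get Server first and adjust to what you actually see.

# Rough sketch only: scripting the "Server" tab tweaks with the PVS MCLI snap-in.
# The field names below are assumptions - verify them against "Mcli-Get Server".
Add-PSSnapin McliPSSnapIn

$serverName = "PVS01"   # hypothetical PVS server name

# Dump the current server record to confirm the real field names and defaults
Mcli-Get Server -p "serverName=$serverName"

# Widen the UDP port range, set the I/O burst size to 32K, drop the boot pause
# to 3 seconds and leave the maximum devices booting at 500
Mcli-Set Server -p "serverName=$serverName" `
    -r firstPort=6910, lastPort=6968, ioBurstSize=32768, `
       bootPauseSeconds=3, maxBootDevicesAllowed=500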
Optimizing the PVS target device
I used a VMXNET3 vNIC and applied the following PowerShell script to it before converting the image:
http://www.ingmarverheij.com/citrix-pvs-optimize-endpoint-with-powershell/
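To give a flavour of the kind of change that script applies, here is one minimal, hand-rolled example of a classic PVS target tweak, disabling TCP task offload in the registry of the master target device (my own illustration, not a piece of Ingmar's script):

# Example of one well-known PVS target device tweak: disable TCP task offload
# so the streamed vNIC doesn't offload checksums. Run inside the master target
# VM before imaging; reboot for it to take effect.
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" `
    -Name "DisableTaskOffload" -Type DWord -Value 1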
Volume Sizes:
One of the most common questions I get regarding XtremIO and VDI is about LUN sizes and how many VMs per datastore. In this test I used one volume to provision all the PVS-based VMs into (a known issue with PVS is that it cannot provision to more than one datastore at the same time), and after the provisioning was done, I started to Storage vMotion them in batches of 250 VMs per datastore; each datastore was 6TB in size.
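If you want to script that redistribution, a quick PowerCLI sketch is below. The VM and datastore names are made up for illustration, and I'm assuming the target datastores don't include the volume used for provisioning.

# PowerCLI sketch: Storage vMotion the PVS-provisioned VMs in batches of 250
# per datastore (VM/datastore names below are hypothetical)
Connect-VIServer -Server "vcenter.lab.local"

$vms        = Get-VM -Name "PVS-VDI-*" | Sort-Object Name
$datastores = Get-Datastore -Name "XtremIO-VDI-*" | Sort-Object Name
$batchSize  = 250

for ($i = 0; $i -lt $datastores.Count; $i++) {
    # Take the next slice of 250 VMs and move it to the next 6TB datastore
    $batch = $vms | Select-Object -Skip ($i * $batchSize) -First $batchSize
    if ($batch) { $batch | Move-VM -Datastore $datastores[$i] -Confirm:$false }
}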
Enabling Multipathing At The ESX Level
You can use the one-liner script above to enable both Round Robin and IOPS=1. You just need to run this command once per ESX host BEFORE it sees the XtremIO LUNs, and it will automatically apply to every new volume mapped to it.
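The one-liner itself is in the post linked above, so I won't claim to reproduce it exactly here. As a sketch of the same idea in PowerCLI, you can push a SATP claim rule to each host so that any new XtremIO volume is automatically claimed with Round Robin and iops=1. The vendor/model strings (XtremIO / XtremApp) and claim options follow the EMC host configuration guidance, so verify them, and the argument names returned by CreateArgs(), against your own environment before running it.

# PowerCLI sketch: add a SATP claim rule on every host BEFORE the XtremIO LUNs
# are presented, so new volumes get Round Robin with IOPS=1 automatically.
# Argument names come from CreateArgs() - inspect them on your PowerCLI version.
foreach ($vmhost in Get-VMHost) {
    $esxcli = Get-EsxCli -VMHost $vmhost -V2
    $ruleArgs             = $esxcli.storage.nmp.satp.rule.add.CreateArgs()
    $ruleArgs.satp        = "VMW_SATP_DEFAULT_AA"
    $ruleArgs.vendor      = "XtremIO"
    $ruleArgs.model       = "XtremApp"
    $ruleArgs.psp         = "VMW_PSP_RR"
    $ruleArgs.pspoption   = "iops=1"
    $ruleArgs.claimoption = "tpgs_off"
    $ruleArgs.description = "XtremIO Active/Active"
    $esxcli.storage.nmp.satp.rule.add.Invoke($ruleArgs)
}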