Hi,

Our solutions group recently released a new Reference Architecture for those who are interested in running their VDI with Citrix XenDesktop 5 as the "broker" and VMware vSphere 4.1 as the "hypervisor".

The solution is based on 1,000 Windows 7 clients running at 100% concurrency.

This FC-based RA can be downloaded from:

http://www.emc.com/collateral/hardware/technical-documentation/h8243-virtualdesktop-vnx-citrix-xendesktop5-vsphere-ra.pdf

An NFS-based RA will follow later on.

Working as a vSpecialist for EMC, I get to interact with a LOT of customers, and VDI is a very hot topic in 2011. Some customers prefer XenDesktop, some prefer VMware View, and those who choose XenDesktop all prefer running it on VMware vSphere 4.1, as it's the industry standard for virtualization.

Some interesting facts about the XenDesktop 5 Hosted Desktop architecture: it lets you use either PVS (Provisioning Services), which streams your OS over the network and is therefore a heavily network-dependent solution, or, new to XenDesktop 5, MCS (Machine Creation Services), which relies on your storage backend and is therefore a storage-dependent solution.

MCS is fairly new and works in a very similar way to VMware linked clones: there is a master image replica that all the VMs read from in very small chunks (16 KB), while each VM writes to its own area, which grows until the next desktop "recompose". One big difference between VMware View and XenDesktop is that VMware View allows you to isolate the replica from the linked clone datastore(s), whereas in the Citrix model the replica must reside in the same datastore as the linked clones. This must be taken into consideration when designing a XenDesktop 5 environment based on MCS.
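
To make that read/write split a bit more concrete, here is a minimal Python sketch of the copy-on-write idea behind MCS and linked clones. It is purely illustrative; the class names, the in-memory "delta" dictionary, and the usage at the end are my own assumptions, not Citrix or VMware code. Every desktop reads unchanged chunks from the shared replica and writes its changes to its own, growing delta area.

```python
# Illustrative copy-on-write model of an MCS / linked-clone desktop.
# Assumption: 16 KB chunks, as mentioned above; all names are hypothetical.

CHUNK_SIZE = 16 * 1024  # 16 KB

class ReplicaImage:
    """Read-only master image shared by all desktops in the datastore."""
    def __init__(self, data: bytes):
        self.data = data

    def read_chunk(self, index: int) -> bytes:
        start = index * CHUNK_SIZE
        return self.data[start:start + CHUNK_SIZE]

class LinkedCloneDesktop:
    """A desktop that reads from the replica and writes to its own delta."""
    def __init__(self, replica: ReplicaImage):
        self.replica = replica
        self.delta = {}  # chunk index -> modified chunk (grows over time)

    def read_chunk(self, index: int) -> bytes:
        # Modified chunks come from this desktop's delta; everything else
        # is serviced by the shared replica.
        return self.delta.get(index, self.replica.read_chunk(index))

    def write_chunk(self, index: int, data: bytes) -> None:
        self.delta[index] = data  # the delta area grows with every new chunk

    def recompose(self, new_replica: ReplicaImage) -> None:
        # A "recompose" points the desktop at a new replica and discards
        # the accumulated delta.
        self.replica = new_replica
        self.delta = {}

# Usage: two desktops share one replica; each keeps its own changes.
replica = ReplicaImage(b"\x00" * (CHUNK_SIZE * 4))
vm1, vm2 = LinkedCloneDesktop(replica), LinkedCloneDesktop(replica)
vm1.write_chunk(1, b"\xff" * CHUNK_SIZE)
assert vm1.read_chunk(1) != vm2.read_chunk(1)  # vm2 still sees the replica
```

The design consequence noted above is that with MCS the replica and the deltas live in the same datastore, so that datastore has to handle both the shared read traffic and every desktop's write growth.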

Other than this, MCS seems to be really catching on, and I expect it to gain more and more momentum in XenDesktop deployments. It's definitely not limited to small or PoC deployments and can scale to thousands of VDI seats, provided that you have the right storage solution (hint: one that is based on read AND write caching...).

EMC VNX series storage architecture

The EMC VNX series is a dedicated network server optimized for file and block access that delivers high-end features in a scalable and easy-to-use package.

The VNX series delivers a single-box block and file solution, which offers a centralized point of management for distributed environments. This makes it possible to dynamically grow, share, and cost-effectively manage multiprotocol file systems and provide multiprotocol block access. Administrators can take advantage of simultaneous support for NFS and CIFS by allowing Windows and Linux/UNIX clients to share files using the sophisticated file-locking mechanisms of VNX for File, and use VNX for Block for high-bandwidth or latency-sensitive applications.

This solution uses both block-based and file-based storage to leverage the benefits that each of the following provides:

Block-based storage over Fibre Channel (FC) is used to store the VMDK files for all virtual desktops. This has the following benefits:

* Block storage leverages the VAAI APIs (introduced in vSphere 4.1), which include a hardware-accelerated copy to improve performance and granular locking of VMFS to increase scaling.

* The Unified Storage Management plug-in provides seamless integration with VMware vSphere to simplify the provisioning of datastores and VMs.

* PowerPath Virtual Edition (PowerPath/VE) allows better performance and scalability compared to the native multipathing options.

File-based storage is provided by a CIFS share. This has the following benefits:

* Redirection of user data and roaming profiles to a central location for easy backup and administration.

* Single instancing and compression of unstructured user data to provide the highest storage utilization and efficiency.

This section explains the configuration of the storage that was provided over FC to the ESX cluster to store the VMDK images, and the storage that was provided over CIFS to redirect user data and roaming profiles.

The following diagram shows the storage layout of the disks:

The following storage configuration was used in the solution:

* Four SAS disks (0_0 to 0_3) are used for the VNX OE.

* Disks 0_4, 1_10, and 2_13 are hot spares. These disks are denoted as hot spares in the storage layout diagram.

* Twenty SAS disks (0_5 to 0_14 and 1_0 to 1_9) in RAID 5 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool. Eight LUNs of 500 GB each are carved out of the pool and presented to the ESX servers (see the quick capacity sketch after this list).

* Two Flash drives (1_11 and 1_12) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.

* Five SAS disks (2_0 to 2_4) in RAID 5 storage pool 2 are used to store the infrastructure virtual machines. Two LUNs of 500 GB each are carved out of the pool and presented to the ESX servers.

* Eight NL-SAS disks (2_5 to 2_12) in the RAID 6 (6+2) group are used to store user data and roaming profiles. Two VNX file systems are created from the NL-SAS storage: a 2 TB file system for profiles and a 4 TB file system for user data.

* Disks 1_13, 1_14, and 2_14 are unbound. They were not used in testing this solution.
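
As a quick sanity check on the layout above, here is a back-of-the-envelope capacity calculation in Python. The drive sizes are my own assumptions for illustration (300 GB SAS and 2 TB NL-SAS drives are typical for this class of configuration; the RA itself documents the exact models), so treat the numbers as a sketch rather than the reference figures.

```python
# Rough usable-capacity sketch for the disk layout described above.
# Assumed drive sizes (illustrative only): 300 GB SAS, 2 TB NL-SAS.

SAS_GB = 300
NL_SAS_GB = 2000

# Pool 1: 20 SAS disks in RAID 5 -> roughly (n - parity) disks usable,
# ignoring vendor-specific pool overhead and drive right-sizing.
pool1_usable_gb = (20 - 20 // 5) * SAS_GB            # assuming 4+1 RAID 5 groups
desktop_luns_gb = 8 * 500                            # eight 500 GB LUNs
print(f"pool 1 usable ~{pool1_usable_gb} GB, LUNs carved: {desktop_luns_gb} GB")

# Pool 2: 5 SAS disks in RAID 5 (4+1) for the infrastructure VMs.
pool2_usable_gb = 4 * SAS_GB
infra_luns_gb = 2 * 500
print(f"pool 2 usable ~{pool2_usable_gb} GB, LUNs carved: {infra_luns_gb} GB")

# RAID 6 (6+2) NL-SAS group for user data and roaming profiles.
nl_sas_usable_gb = 6 * NL_SAS_GB
filesystems_gb = 2000 + 4000                         # 2 TB profiles + 4 TB user data
print(f"NL-SAS usable ~{nl_sas_usable_gb} GB, file systems: {filesystems_gb} GB")
```

The point of the exercise is simply that each tier has headroom beyond the LUNs and file systems carved from it, which is what you want before FAST Cache and snapshots enter the picture.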

Note that the beauty of this design is that it uses the right disk types for the right user requirements. We could theoretically put everything on SAS disks, but wouldn't that be a waste?

EMC VNX FAST Cache

VNX FAST Cache, a part of the VNX FAST Suite, enables Flash drives to be used as an expanded cache layer for the array. The VNX5300 is configured with two 100 GB Flash drives in a RAID 1 configuration for a 93 GB read/write-capable cache. This is the minimum amount of FAST Cache. Larger configurations are supported for scaling beyond 1,000 desktops.

FAST Cache is an array-wide feature available for both file and block storage. FAST Cache works by examining 64 KB chunks of data in FAST Cache-enabled objects on the array. Frequently accessed data is copied to the FAST Cache, and subsequent accesses to that data chunk are serviced by FAST Cache. This enables immediate promotion of very active data to Flash drives, which dramatically improves response times for the active data and reduces the data hot spots that can occur within a LUN.

FAST Cache is an extended read/write cache that enables XenDesktop to deliver consistent performance at Flash drive speeds by absorbing read-heavy activities such as boot storms and antivirus scans, and write-heavy workloads such as operating system patches and application updates. This extended read/write cache is an ideal caching mechanism for MCS in XenDesktop 5 because the base desktop image and other active user data are accessed frequently enough that the data is serviced directly from the Flash drives, without having to access slower drives at a lower storage tier.
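
For readers who want a mental model of the promotion behavior, here is a tiny Python sketch of a "promote after repeated access" cache. The chunk size and the ~93 GB usable figure come from the text above; the three-hit promotion threshold and everything else are illustrative assumptions of mine, not the actual VNX algorithm.

```python
# Minimal sketch of FAST Cache style promotion: 64 KB chunks that are
# touched repeatedly get copied to Flash and served from there afterwards.
# Threshold and bookkeeping are illustrative assumptions, not EMC's algorithm.

from collections import defaultdict

CHUNK_KB = 64
CACHE_CAPACITY_KB = 93 * 1024 * 1024      # ~93 GB usable (2 x 100 GB in RAID 1)
PROMOTION_THRESHOLD = 3                   # assumed number of hits before promotion

class FastCacheModel:
    def __init__(self):
        self.hit_counts = defaultdict(int)  # chunk id -> accesses seen so far
        self.cached = set()                 # chunk ids currently on Flash

    def access(self, chunk_id: int) -> str:
        if chunk_id in self.cached:
            return "served from Flash"      # boot storms / AV scans land here
        self.hit_counts[chunk_id] += 1
        if (self.hit_counts[chunk_id] >= PROMOTION_THRESHOLD
                and len(self.cached) * CHUNK_KB < CACHE_CAPACITY_KB):
            self.cached.add(chunk_id)       # copy the hot chunk to Flash
            return "promoted to Flash"
        return "served from spinning disk"

# The MCS replica chunks are read by every desktop, so they cross the
# threshold almost immediately and then stay on Flash.
cache = FastCacheModel()
for desktop in range(5):
    print(desktop, cache.access(chunk_id=42))
```

This is exactly why the shared base image in an MCS deployment benefits so much: every desktop hammers the same chunks, so they are promoted early and stay hot.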

EMC VNX VAAI support

Hardware acceleration with the VMware vStorage API for Array Integration (VAAI) is a storage enhancement in vSphere 4.1 whereby ESX can offload specific storage operations to compliant storage hardware such as the EMC VNX series. With storage hardware assistance, ESX performs these operations faster and consumes less CPU, memory, and storage fabric bandwidth.
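
To illustrate what "offload" means in practice, here is a schematic Python comparison of a host-based clone versus a VAAI-style full copy handed off to the array. The function names and the byte accounting are my own simplifications, not the actual VMkernel or VNX interfaces; the point is that with the offload the host issues one small command instead of pushing every block across the fabric itself.

```python
# Schematic comparison: host-based copy vs. VAAI-style hardware-accelerated copy.
# Names and numbers are illustrative only.

VMDK_SIZE_GB = 20
BLOCK_MB = 1

def host_based_clone(size_gb: int) -> int:
    """Host reads every block from the array and writes it back: all the
    data crosses the storage fabric twice and burns host CPU cycles."""
    blocks = size_gb * 1024 // BLOCK_MB
    bytes_over_fabric = blocks * BLOCK_MB * 1024 * 1024 * 2  # read + write
    return bytes_over_fabric

def vaai_full_copy(size_gb: int) -> int:
    """Host sends a single copy request; the array moves the data
    internally, so only the command itself crosses the fabric."""
    command_bytes = 512  # roughly one SCSI command's worth, illustrative
    return command_bytes

print("host-based clone, bytes over fabric:", host_based_clone(VMDK_SIZE_GB))
print("VAAI full copy,   bytes over fabric:", vaai_full_copy(VMDK_SIZE_GB))
```

The granular VMFS locking mentioned earlier follows the same idea: instead of reserving the whole LUN, the host asks the array to atomically update just the metadata region it needs, which is what lets many more desktops safely share a datastore.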
