CITRIX XenDesktop 5 on EMC VNX – Match made in Heaven (Part2)
Hi,
In the last blog post, I focused on the storage design and gave a bit of an introduction to the EMC FAST Cache effect in the VDI use case scenario.
Before I dig into it, let's establish something here: a VDI workload requires a LOT of writes in addition to reads. It is not rare at all to see a 50/50 split, or even 60% writes and 40% reads. This is important because FAST Cache is built on EFDs that serve both reads AND writes. You probably want me to show you the money by now...
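To see why that write mix matters, here is a minimal sizing sketch (my illustration, using common rule-of-thumb RAID write penalties, not figures from the validated solution): every front-end write costs several back-end disk operations, so a 60% write mix inflates the spindle count quickly unless something like FAST Cache absorbs those writes.

```python
# A rough sketch (not from EMC's reference architecture) of why a
# write-heavy VDI mix matters: RAID write penalties multiply every
# front-end write into several back-end disk operations.

RAID_WRITE_PENALTY = {"RAID5": 4, "RAID6": 6, "RAID10": 2}  # common rules of thumb

def backend_iops(frontend_iops: float, write_ratio: float, raid: str = "RAID5") -> float:
    """Translate front-end host IOPS into back-end disk IOPS."""
    reads = frontend_iops * (1 - write_ratio)
    writes = frontend_iops * write_ratio
    return reads + writes * RAID_WRITE_PENALTY[raid]

# 1,000 desktops at 6 IOPS steady state with a 60/40 write/read mix:
print(backend_iops(6_000, 0.6))  # 2,400 reads + 3,600 * 4 writes = 16,800 back-end IOPS
```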
ENTER EMC FAST CACHE
Profile characteristics
The solution was validated with the following environment profile.

| Profile characteristic | Value |
| --- | --- |
| Number of virtual desktops | 1,000 |
| Virtual desktop OS | Windows 7 Enterprise (32-bit) |
| CPU per virtual desktop | 1 vCPU |
| Number of virtual desktops per CPU core | 7.8125 |
| RAM per virtual desktop | 1 GB |
| Desktop provisioning method | Machine Creation Services (MCS) |
| Average storage available for each virtual desktop | 4 GB (vmdk and vswap) |
| Average IOPS per virtual desktop at steady state | 6 IOPS |
| Average peak IOPS per virtual desktop during boot storm | 84 IOPS |
| Number of datastores to store virtual desktops | 8 |
| Number of virtual desktops per datastore | 125 |
| Disk and RAID type for datastores | RAID 5, 300 GB, 15k rpm, 3.5" SAS disks |
| Disk and RAID type for CIFS shares (roaming user profiles and home directories) | RAID 6, 2 TB, 7,200 rpm, 3.5" NL-SAS disks |
| Number of VMware clusters | 2 |
| Number of ESX servers per cluster | 8 |
| Number of VMs per cluster | 500 |
Use Cases
Four common use cases were executed to validate that the solution performed as expected under heavy load.
The tested use cases are listed below:
- Simultaneous boot of all desktops
- Full antivirus scan of all desktops
- Installation of a security update using SCCM on all desktops
- Login and steady state user load simulated using the Login VSI medium workload
In each use case, a number of key metrics are presented to show the overall performance of the solution.
Simultaneous boot of all desktops
FAST Cache IOPS
The following graph shows the IOPS serviced from FAST Cache during the boot storm test.
At peak load, FAST Cache serviced over 80,000 IOPS from the datastores. The FAST Cache hits include IOPS serviced by Flash drives and SP memory cache. If memory cache hits are excluded, the pair of Flash drives alone serviced over 19,000 IOPS at peak load. A sizing exercise using EMC’s standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that it would take roughly 105 SAS drives to achieve the same level of performance. However, EMC does not recommend using a 105:2 ratio for SAS to SSD replacement. EMC’s recommended ratio is 20:1 because workloads may vary.
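This sizing exercise recurs in every use case below, so here is a minimal sketch of the arithmetic (mine, not EMC's sizing tooling): divide the Flash-drive-only IOPS by the standard 180 IOPS per-drive estimate for 15k rpm SAS.

```python
# Sketch of the recurring SAS-equivalence arithmetic used in this post.
SAS_15K_IOPS = 180  # EMC's standard performance estimate for a 15k rpm SAS drive

flash_iops_by_test = {      # Flash-drive-only peak IOPS figures cited in this post
    "boot storm": 19_000,
    "antivirus scan": 24_000,
    "patch install": 5_000,
    "login storm": 6_500,
}

for test, iops in flash_iops_by_test.items():
    print(f"{test}: ~{iops / SAS_15K_IOPS:.1f} SAS drive equivalents")
# ~105.6, ~133.3, ~27.8, ~36.1 -- the "roughly 105/133/28/36" drive counts cited below
```

Keep in mind the caveat that follows each of these numbers: EMC recommends a 20:1 replacement ratio in practice, because workloads vary.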
Pool LUN Load
The following graph shows the LUN IOPS and response time from one of the datastores. Because the statistics from all the LUNs were similar, only a single LUN is reported for clarity and readability of the graph.
During peak load, the LUN response time remained within 22 ms, and the datastore serviced over 5500 IOPS. The majority of the read I/O was served by the FAST Cache and not by the pool LUN.
Full antivirus scan of all desktops
FAST Cache IOPS
The following graph shows the IOPS serviced from FAST Cache during the test.
At peak load, FAST Cache serviced over 24,000 IOPS from the datastores. The FAST Cache hits include IOPS serviced by Flash drives and SP memory cache. If memory cache hits are excluded, the pair of Flash drives alone serviced almost all of the 24,000 IOPS at peak load. A sizing exercise using EMC’s standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that it would take roughly 133 SAS drives to achieve the same level of performance. However, EMC does not recommend using a 133:2 ratio for SAS to SSD replacement. EMC’s recommended ratio is 20:1 because workloads may vary.
Patch Install Results
This test was performed by pushing a security update to all desktops using Microsoft System Center Configuration Manager (SCCM). The desktops were divided into five collections of 200 desktops each. The collections were configured to install updates on a staggered schedule, one minute apart, starting an hour after the patch was downloaded. This caused all patches to be installed within 7 minutes.
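As a quick sanity check on that 7-minute window, here is a small model of the staggered rollout (the per-collection install time is my assumption; it is not stated in the test):

```python
# Staggered SCCM rollout window: five collections start one minute apart,
# so the last one kicks off at minute 4; with an assumed ~3-minute install
# per collection, the whole patch storm finishes in about 7 minutes.
from datetime import timedelta

collections = 5
stagger = timedelta(minutes=1)
install_time = timedelta(minutes=3)  # assumption, not a figure from the test

last_start = (collections - 1) * stagger  # minute 4
print(last_start + install_time)          # 0:07:00 -- matches the observed window
```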
The following graph shows the IOPS serviced from FAST Cache during the test.
At peak load, FAST Cache serviced over 8,500 IOPS from the datastores. The FAST Cache hits include IOPS serviced by Flash drives and SP memory cache. If memory cache hits are excluded, the pair of Flash drives alone serviced over 5,000 IOPS at peak load. A sizing exercise using EMC’s standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that it would take roughly 28 SAS drives to achieve the same level of performance.
Login storm timing
To simulate a login storm, 1,000 desktops are powered up initially into steady state by setting the idle desktop count to 1,000. The login time of each session is then measured by starting a LoginVSI test that establishes the sessions with a custom interval of three seconds. The 1,000 sessions are logged in within 56 minutes, a period that models a burst of login activity that takes place in the opening hour of a production environment.
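A back-of-the-envelope model of that schedule (mine, not LoginVSI output) shows where the 56 minutes comes from: the scheduled launches alone span 50 minutes, and launcher and session setup overhead accounts for the rest.

```python
# Login storm schedule: 1,000 sessions launched at a fixed 3-second interval.
sessions = 1_000
interval_s = 3

launch_window_min = sessions * interval_s / 60
print(launch_window_min)  # 50.0 minutes scheduled vs. 56 minutes observed end to end
```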
The LoginVSI tool has a built-in login timer that measures from the start of the logon script defined in the Active Directory group policy to the start of the LoginVSI workload for each session. Although it does not measure the total login time from an end-to-end user’s perspective, the measurement gives a good indication of how sessions will be affected in a login storm scenario.
The following figure shows the trend of the login time in seconds as sessions are started in rapid succession. The average login time for 1,000 sessions is approximately 3.5 seconds. The maximum login time is recorded at 8.6 seconds, while the minimum login time is 1.6 seconds.
The following graph shows the IOPS serviced from FAST Cache during the test.
At peak load, FAST Cache serviced over 8,000 IOPS from the datastores. The FAST Cache hits include IOPS serviced by Flash drives and SP memory cache. If memory cache hits are excluded, the pair of Flash drives alone serviced over 6,500 IOPS at peak load. A sizing exercise using EMC’s standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that it would take roughly 36 SAS drives to achieve the same level of performance.
To illustrate the benefits of enabling FAST Cache in a desktop virtualization environment, a study was conducted to compare performance with and without FAST Cache. The non-FAST Cache configuration called for 50 SAS drives in a storage pool, as opposed to the baseline of 20 SAS drives backed by a FAST Cache of 2 Flash drives. The 2 Flash drives effectively displaced 30 SAS drives, a 15:1 ratio of drive savings. The summary graphs below demonstrate how FAST Cache benefits are realized in each use case.
The following graph shows that the host peak response time without FAST Cache during the boot storm is roughly three times higher than that with FAST Cache.
The following graph shows that the antivirus scan completed in 77 minutes with FAST Cache enabled, compared with 101 minutes in the case of the non-FAST Cache configuration. The overall scan time was reduced by roughly 24%.
The graph below shows the host peak response time without FAST Cache during patch storm is roughly two and a half times higher than that with FAST Cache.
Summary
EMC VNX FAST Cache provides measurable benefits in a desktop virtualization environment, as shown above. It not only reduces response time for both read and write workloads, it also effectively supports more users on fewer drives, resulting in lower power, cooling, and rack space consumption, and greater IOPS density with fewer drives.