Enginuity 5875 carries forward the systematic feature development of previous Symmetrix generations. All of the reliability, availability, and serviceability features, the interoperability and host operating system coverage, and the application software capabilities developed by EMC and its partners continue to work seamlessly even as the underlying technology is refreshed.

The following section describes the major feature enhancements that are made available with the Enginuity 5875 operating environment on Symmetrix VMAX storage arrays.

FAST VP

Fully Automated Storage Tiering with Virtual Pools (FAST VP) maximizes the benefits of in-the-box tiered storage by automatically balancing cost against performance, placing the right thin data extents on the right tier at the right time. A storage administrator decides how much SATA, Fibre Channel, and Flash capacity is given to a particular application, and FAST VP then automatically places the busiest thin data extents on the appropriate performance tier and the least busy extents on a capacity tier. These administrator-defined criteria are assembled into FAST policies, and FAST VP uses the policy information to move extent data across two or three disk tiers within the VMAX array. Because the unit of analysis and movement is the thin extent, this sub-LUN optimization is extremely powerful and efficient. FAST VP is an evolution of the existing FAST technology.
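To make the policy idea concrete, the following Python sketch models a policy that caps how much of an application's capacity may sit on each tier and then assigns the busiest extents to the fastest tier the policy allows. It is a simplified illustration only; the tier names, percentages, and activity scores are assumptions, not the actual Enginuity FAST VP algorithm.

# Illustrative sketch only -- this is not the Enginuity FAST VP algorithm.
from dataclasses import dataclass

@dataclass
class Extent:
    extent_id: int
    io_per_hour: float          # recent activity score for this thin extent

# Hypothetical policy: up to 10% of the extents on Flash (EFD), 40% on
# Fibre Channel, and the remainder on SATA.
POLICY = [("EFD", 0.10), ("FC", 0.40), ("SATA", 0.50)]

def place_extents(extents, policy):
    """Rank extents by activity and fill each tier up to its policy share,
    hottest extents first."""
    ranked = sorted(extents, key=lambda e: e.io_per_hour, reverse=True)
    placement, start, total = {}, 0, len(ranked)
    for i, (tier, share) in enumerate(policy):
        count = total - start if i == len(policy) - 1 else int(share * total)
        placement[tier] = ranked[start:start + count]
        start += count
    return placement

if __name__ == "__main__":
    activity = [950, 12, 3, 400, 88, 0, 61, 720, 5, 15]
    extents = [Extent(i, io) for i, io in enumerate(activity)]
    for tier, members in place_extents(extents, POLICY).items():
        print(tier, [e.extent_id for e in members])

The sketch only captures the ranking-and-capping idea; the product performs this analysis continuously and within administrator-defined windows.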

Virtual LUN VP Mobility

Virtual LUN technology offers manual control of data mobility between storage tiers within a VMAX array. Virtual LUN data movement can nondisruptively change the drive type (capacity, rotational speed) and the protection method (RAID scheme) of Symmetrix logical volumes. Enginuity 5875 enhances this feature further by allowing a thin device to be moved from one thin pool to another.

Virtual Provisioning enhancements

Symmetrix Virtual Provisioning™ continues to evolve toward providing thin devices with the same feature set as traditional Symmetrix devices, and then extending it further. The Virtual Provisioning enhancements in Enginuity 5875 are:

· The ability to rename thin pools.

· Virtual Provisioning rebalancing was initially released with a fixed pool variance of ±10 percent. This variance is now user-definable from 1 percent to 50 percent, with a default of 1 percent.

· The maximum number of devices participating concurrently in a rebalance can now be set, from two devices up to the entire pool.

· The T10 SBC-3 committee has finalized standards for two new SCSI commands for thin devices. The UNMAP command advises a target device that a range of blocks is no longer needed. If the range covers a full Virtual Provisioning extent, Enginuity 5875 returns that extent to the pool. If the UNMAP range covers only some tracks in an extent, those tracks are marked Never Written by Host (NWBH); the extent is not returned to the pool, but those tracks no longer have to be read from disk to return all zeros, do not have to be copied for snaps, clones, or rebuilds, and do not consume SRDF® bandwidth. (A sketch of this accounting follows the list.)

· The WRITE SAME command instructs a VMAX to write the same block of data to a specified number of sequential logical blocks. Hypervisors such as VMware ESX use this command to write zeros to a Symmetrix LUN. Without it, the host would have to issue many separate write requests to the LUN; with WRITE SAME support in Enginuity 5875, the host can accomplish the same format with a single WRITE SAME command.
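The UNMAP behavior described in the list above can be illustrated with a short Python sketch that splits an unmapped block range into fully covered extents (returned to the pool) and partially covered extents (tracks only marked NWBH). The geometry constants (512-byte blocks, 64 KB tracks, 12-track extents) are assumptions used only to make the arithmetic concrete, not values taken from the array.

# Illustrative sketch of the UNMAP accounting described above.
# Geometry values are assumptions used to make the arithmetic concrete.

BLOCK = 512                            # bytes per SCSI logical block
TRACK_BLOCKS = 64 * 1024 // BLOCK      # 128 blocks per track (assumed)
EXTENT_TRACKS = 12                     # tracks per thin extent (assumed)
EXTENT_BLOCKS = TRACK_BLOCKS * EXTENT_TRACKS

def classify_unmap(start_lba, num_blocks):
    """Split an UNMAP range into extents that can be returned to the pool
    and partially covered extents whose tracks are only marked NWBH."""
    end_lba = start_lba + num_blocks
    first_ext = start_lba // EXTENT_BLOCKS
    last_ext = (end_lba - 1) // EXTENT_BLOCKS
    reclaimed, nwbh_only = [], []
    for ext in range(first_ext, last_ext + 1):
        ext_start = ext * EXTENT_BLOCKS
        ext_end = ext_start + EXTENT_BLOCKS
        if start_lba <= ext_start and end_lba >= ext_end:
            reclaimed.append(ext)       # whole extent freed back to the pool
        else:
            nwbh_only.append(ext)       # partial coverage: tracks marked NWBH
    return reclaimed, nwbh_only

print(classify_unmap(start_lba=700, num_blocks=4000))

With these assumed sizes, an UNMAP of 4,000 blocks starting at LBA 700 frees extents 1 and 2 back to the pool, while extents 0 and 3 stay allocated with their unmapped tracks marked NWBH.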

VAAI primitives

The vStorage APIs for Array Integration (VAAI) allow storage partners to offload certain functions to the storage array, greatly enhancing the performance of those functions. These APIs are fully supported by EMC Symmetrix VMAX running Enginuity 5875 or later. In the vSphere 4.1 release, the array offload capability supports three primitives: hardware-accelerated Full Copy, hardware-accelerated Block Zero, and hardware-assisted locking.

Hardware-accelerated Full Copy

The time it takes to deploy or migrate a virtual machine will be greatly reduced by use of the Full Copy primitive, as the process is entirely executed on the storage array and not on the ESX server. The host simply initiates the process and reports on the progress of the operation on the array. This greatly reduces overall traffic on the ESX server. In addition to deploying new virtual machines from a template or through cloning, Full Copy is also utilized when doing a Storage vMotion. When a virtual machine is migrated between datastores on the same array the live copy is performed entirely on the array.

Not only does Full Copy save time, but it also saves significant server CPU cycles, memory, IP and SAN network bandwidth, and storage front-end controller I/O.

Figure 1. Hardware-accelerated Full Copy
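The contrast illustrated in Figure 1 can be sketched in Python. The array object and its methods below are hypothetical stand-ins for the SCSI traffic involved, not a real vSphere or Symmetrix API; the point is simply that the host-based path moves every block through the server twice, while the offloaded path sends one request and merely polls for completion.

# Hypothetical sketch -- `array` and its methods stand in for the SCSI
# traffic; this is not a real vSphere or Symmetrix API.

CHUNK_BLOCKS = 8192   # 4 MB of 512-byte blocks per host I/O, an arbitrary example size

def host_based_copy(array, src_lba, dst_lba, num_blocks):
    """Without Full Copy: every block is read into the ESX server and
    written back out, consuming SAN bandwidth, host CPU, and memory."""
    done = 0
    while done < num_blocks:
        n = min(CHUNK_BLOCKS, num_blocks - done)
        data = array.read(src_lba + done, n)
        array.write(dst_lba + done, data)
        done += n

def full_copy_offload(array, src_lba, dst_lba, num_blocks):
    """With Full Copy: the host issues one offload request and only polls
    for progress; the data never crosses the SAN or the ESX server."""
    token = array.start_copy(src_lba, dst_lba, num_blocks)
    while not array.copy_done(token):
        pass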

Hardware-accelerated Block Zero

Having the array zero out a disk is far more efficient and much faster than traditional software block zeroing. A typical use case for Block Zeroing is creating virtual disks in eagerzeroedthick format. Without the Block Zeroing primitive, the ESX server must complete all the zero writes for the entire disk before it reports that the disk zeroing is complete; for a very large disk this is time-consuming. With the Block Zeroing primitive, also referred to as "write same," the disk array returns the cursor to the requesting service as though the zeros had already been written, and then finishes zeroing those blocks without holding the cursor until the job is done, as software zeroing must.

Figure 2. Hardware-accelerated Block Zero
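The same idea for zeroing, shown in Figure 2, can be sketched in similar fashion. Again the array object and its methods are illustrative placeholders; with the offload, the host hands the zero pattern and the block range to the array in a single WRITE SAME-style request instead of issuing one write per block.

# Hypothetical sketch -- `array` and its methods are illustrative only.

BLOCK_SIZE = 512
ZERO_BLOCK = bytes(BLOCK_SIZE)

def software_zero(array, start_lba, num_blocks):
    """Without Block Zero: the host writes every zero block itself and
    cannot report completion until the last write returns."""
    for lba in range(start_lba, start_lba + num_blocks):
        array.write(lba, ZERO_BLOCK)

def block_zero_offload(array, start_lba, num_blocks):
    """With Block Zero: a single WRITE SAME-style request hands the zero
    pattern and the whole range to the array, which acknowledges quickly
    and completes the zeroing on its own."""
    array.write_same(start_lba, num_blocks, ZERO_BLOCK)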

Hardware-assisted locking

As a clustered shared-storage manager, VMware's Virtual Machine File System (VMFS) needs to coordinate access by multiple ESX server hosts to portions of the logical units they share. VMFS allocates portions of the available storage for the data describing virtual machines and their configurations, as well as for the virtual disks they access. Within a cluster of ESX servers, the virtual machines contained in a VMFS datastore can be loaded and run on any of the ESX instances. They can also be moved between instances for load balancing and high availability.

VMware has implemented locking structures within VMFS datastores that prevent any virtual machine from being run on, or modified by, more than one ESX server at a time. The initial implementation of mutual exclusion for updates to these locking structures was built on SCSI RESERVE and RELEASE commands. This protocol claims sole access to an entire logical unit for the reserving host until it issues a subsequent release. Under the protection of a SCSI RESERVE, a server node could update metadata records on the device to reflect its usage of portions of the device without the risk of interference from any other host that might also wish to claim the same portion of the device. This approach, shown in Figure 3, has a significant impact on overall cluster performance, since all other access to any portion of the device is prevented while the SCSI RESERVE is in effect. As ESX clusters have grown in size, and in how frequently they modify the virtual machines they run, the performance degradation from SCSI RESERVE and RELEASE has become unacceptable.

Figure 3. Traditional VMFS locking before vSphere 4.1 and hardware-assisted locking

This led to the development of the third VAAI primitive in the vSphere 4.1 release, hardware-assisted locking. This primitive provides a more granular means of protecting the VMFS metadata than SCSI reservations. Hardware-assisted locking leverages the storage array's atomic test-and-set capability to enable a fine-grained, block-level locking mechanism, as shown in Figure 4. First, hardware-assisted locking replaces the sequence of RESERVE, READ, WRITE, and RELEASE SCSI commands with a single SCSI request for an atomic read-modify-write operation, conditional on the presumed availability of the target lock. Second, this new request requires exclusion only of other accesses to the targeted lock block, rather than to the entire VMFS volume containing the lock. VMware uses this locking metadata update operation whenever a virtual machine's state changes, for example when it is powered on or off, has its configuration modified, or is migrated from one ESX server host to another with vMotion or the Distributed Resource Scheduler.

Figure 4. VMFS locking with vSphere 4.1 and hardware-assisted locking

Although the non-hardware-assisted SCSI reservation locking mechanism does not often result in performance degradation, hardware-assisted locking provides a much more efficient way to avoid retries for a lock when many ESX servers share the same datastore. It offloads the lock mechanism to the array, which then performs the locking at a very granular level. This permits significant scalability in a VMware cluster sharing a datastore without compromising the integrity of the VMFS shared storage-pool metadata.
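The difference between the two locking sequences can be summarized in a short hypothetical sketch. The lun object and its methods simply mirror the command sequences described above (RESERVE/READ/WRITE/RELEASE versus a single atomic test-and-set); they are not a real SCSI or VMFS interface.

# Hypothetical sketch -- `lun` and its methods only mirror the command
# sequences being compared; this is not a real SCSI or VMFS interface.

def acquire_lock_with_reserve(lun, lock_lba, host_id):
    """Pre-vSphere 4.1: the whole LUN is reserved around the metadata
    update, blocking all other I/O to the device in the meantime."""
    lun.scsi_reserve()
    try:
        record = lun.read(lock_lba)
        if record.owner is None:            # lock is free
            lun.write(lock_lba, record.claim(host_id))
            return True
        return False                        # lock held elsewhere; retry later
    finally:
        lun.scsi_release()

def acquire_lock_with_ats(lun, lock_lba, expected, desired):
    """Hardware-assisted locking: one atomic test-and-set request that
    excludes other access only to the lock block itself."""
    return lun.atomic_test_and_set(lock_lba, expected, desired)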

Enabling the vStorage APIs for Array Integration

The VAAI primitives are enabled by default both on a Symmetrix VMAX running Enginuity 5875 or later and on a properly licensed ESX 4.1 server, and should not require any user intervention. All three primitives, however, can be disabled through the ESX server if desired. Using the vSphere Client, Full Copy and Block Zero can be disabled or enabled by altering the respective settings, DataMover.HardwareAcceleratedMove and DataMover.HardwareAcceleratedInit, in the ESX server advanced settings under DataMover, as shown in Figure 5. Hardware-assisted locking can be disabled or enabled by changing the setting VMFS3.HardwareAcceleratedLocking in the ESX server advanced settings under VMFS3, as shown in Figure 6.

Figure 5. Enabling hardware-accelerated Full Copy or Block Zero on ESX 4.1

Figure 6. Enabling hardware-assisted locking on ESX 4.1
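For administrators who script these changes, the three advanced settings can be treated as a simple on/off map keyed by the names shown above. The helper below is a hypothetical placeholder for whatever management interface is actually used (vSphere Client, vCLI, or an SDK); only the setting names come from the product.

# Only the setting names below come from the product; the helper is a
# hypothetical placeholder for your actual management tooling.

VAAI_SETTINGS = {
    "DataMover.HardwareAcceleratedMove": 1,   # hardware-accelerated Full Copy
    "DataMover.HardwareAcceleratedInit": 1,   # hardware-accelerated Block Zero
    "VMFS3.HardwareAcceleratedLocking": 1,    # hardware-assisted locking
}

def set_esx_advanced_setting(host, key, value):
    """Hypothetical placeholder: substitute the interface you actually use
    to change ESX advanced settings on the given host."""
    raise NotImplementedError

def set_vaai(host, enabled):
    """Enable (1) or disable (0) all three VAAI primitives on one host."""
    for key in VAAI_SETTINGS:
        set_esx_advanced_setting(host, key, 1 if enabled else 0)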
