Connecting EMC XtremIO To A Heterogeneous Storage Environment

Hi,

A topic that comes up every once in a while is what you should do when multiple storage arrays (VNX, VMAX, etc.) are connected to the same vSphere cluster as your XtremIO array.

This is in fact a two-part question.

Question number one is specific to the VAAI ATS primitive. In some specific VMAX / VNX software revisions there was a recommendation to disable ATS because of bugs; these bugs have since been resolved, and I ALWAYS encourage you to check with your VNX/VMAX team whether a recommendation was made in the past to disable ATS/XCOPY. But what happens if your ESXi host(s) are set to ATS off, you have just connected an XtremIO array and mapped its volumes, and now you recall that, hey, we (XtremIO) actually always recommend enabling ATS? Well:

If the VAAI setting is enabled after a datastore was created on XtremIO storage, the setting does not automatically propagate to the corresponding XtremIO volumes. It must be configured manually to avoid data unavailability to the datastore. Perform the following procedure on all datastores that were created on XtremIO storage before VAAI was enabled on the ESX host.

To manually set the VAAI setting on a VMFS-5 datastore created on XtremIO storage while VAAI was disabled on the host:
1. Confirm that the VAAI Hardware Accelerator Locking is enabled on this host.
2. Using the following vmkfstools command, confirm that the datastore is configured as “public ATS-only”: # vmkfstools -Ph -v1 <path to datastore> | grep public
• Example: a datastore volume configured as “public” (see the shell check after this procedure).
• Example: a datastore volume configured as “public ATS-only”.
3. If the datastore was found with mode “public”, change it to “public ATS-only” by executing the following steps:
a. Unmount the datastore from all ESX hosts on which it is mounted (except one ESX host).
b. Access the ESX host on which the datastore is still mounted.
c. Run the following vmkfstools command to enable ATS on the datastore: # vmkfstools --configATSOnly 1 <path to datastore>
d. Enter 0 when prompted to continue with ATS capability.
e. Repeat step 2 to confirm that ATS is set on the datastore.
f. Unmount datastore from the last ESX host.
g. Mount datastore on all ESX hosts.
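
For reference, here is a minimal sketch of the checks in steps 1 and 2 as run from the ESXi shell. The datastore name XtremIO_DS_01 is just an illustrative placeholder, not something from the original procedure:

# esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking
(an Int Value of 1 means ATS / hardware accelerated locking is enabled on the host)

# vmkfstools -Ph -v1 /vmfs/volumes/XtremIO_DS_01 | grep public
(the Mode line should read “public ATS-only”; if it shows just “public”, run through step 3 above)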

Question number two is a more generic one: you have a VNX/VMAX and an XtremIO all connected to the same vSphere cluster and you want to apply the ESXi best practices, for example the XCOPY chunk size. What can you do when some of these best practices vary from one platform to the other? It's easy when a best practice can be applied per specific storage array, but as in the example I just used, XCOPY transfer size is a system parameter that applies to the entire ESXi host.
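
As a quick illustration of that host-wide scope, you can check (and set) the XCOPY transfer size from the ESXi shell; the value is in KB, so 4096 means 4MB. This is only a sketch of where the setting lives, the recommended values per platform are in the table below:

# esxcli system settings advanced list -o /DataMover/MaxHWTransferSize
# esxcli system settings advanced set -o /DataMover/MaxHWTransferSize -i 4096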

Below is the table we have come up with. As always, things may change, so consult with your SE before the actual deployment.

| Parameter Name | Scope / Granularity | VMAX¹ | VNX | XtremIO | Multi-Array Resolution (vSphere 5.5) | Multi-Array Resolution (vSphere 6) |
|---|---|---|---|---|---|---|
| FC Adapter Policy IO Throttle Count | per vHBA | 256 (default) | 256 (default) | 1024 | 256² (or per vHBA) | same as 5.5 |
| fnic_max_qdepth | Global | 32 (default) | 32 (default) | 128 | 32 | same as 5.5 |
| Disk.SchedNumReqOutstanding | LUN | 32 (default) | 32 (default) | 256 | Set per LUN³ | same as 5.5 |
| Disk.SchedQuantum | Global | 8 (default) | 8 (default) | 64 | 8 | same as 5.5 |
| Disk.DiskMaxIOSize | Global | 32MB (default) | 32MB (default) | 4MB | 4MB | same as 5.5 |
| XCOPY (/DataMover/MaxHWTransferSize) | Global | 16MB | 16MB | 256KB | 4MB | VAAI Filters with VMAX |

Notes:

  1. Unless otherwise noted, the term VMAX refers to VMAX and VMAX3 platforms.
  2. The FC Adapter Policy IO Throttle Count can be set to the value specific to the individual storage array type if connections are segregated. If the storage arrays are connected using the same vHBAs, use the multi-array setting in the table.
  3. The value for Disk.SchedNumReqOutstanding can be set on individual LUNs, so the value used should be specific to the underlying storage array type.

Parameter Details

 

The sections that follow describe each parameter separately.

 

FC Adapter Policy IO Throttle Count

 

Parameter

FC Adapter Policy IO Throttle Count

Scope

UCS Fabric Interconnect Level

Description

The total number of I/O requests that can be outstanding on a per-virtual-host-bus-adapter (vHBA) basis.


This is a “hardware” level queue.

Default UCS Setting

2048

EMC Recommendations

EMC recommends setting it to 1024 for vHBAs connecting to XtremIO systems only.

EMC recommends leaving it at the default of 256 for vHBAs connecting to VNX/VMAX systems only.

EMC recommends setting it to 256 for vHBAs connecting to both XtremIO and VNX/VMAX systems.

 

fnic_max_qdepth

 

Parameter

fnic_max_qdepth

Scope

Global

Description

Driver level setting that manages the total number of I/O requests that can be outstanding on a per-LUN basis.
This is a Cisco driver level option.

Mitigation Plan
vSphere 5.5

There are options to reduce the queue size on a per-LUN basis:

 

Disk Queue Depth:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1008113

esxcli storage core device set --device device_name --queue-full-threshold Q --queue-full-sample-size S

 


 

Mitigation Plan
vSphere 6

There are options to reduce the queue size on a per-LUN basis:

 

Disk Queue Depth:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1008113

esxcli storage core device set --device device_name --queue-full-threshold Q --queue-full-sample-size S


 

EMC Recommendations

There are options to reduce the queue size on a per-LUN basis:

 

Disk Queue Depth:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1008113

esxcli storage core device set --device device_name --queue-full-threshold Q --queue-full-sample-size S

 

EMC will set fnic_max_qdepth to 128 for systems with XtremIO only.

VCE will leave at default of 32 for VNX/VMAX systems adding XtremIO.

VCE will set to 32 for XtremIO systems adding VNX/VMAX.
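
Since fnic_max_qdepth is a Cisco fnic driver module parameter, it is changed at the module level and takes effect after a host reboot. A minimal sketch of checking and setting it (128 is the XtremIO-only value from the table above):

# esxcli system module parameters list -m fnic | grep fnic_max_qdepth
# esxcli system module parameters set -m fnic -p "fnic_max_qdepth=128"

Reboot the host afterwards for the new queue depth to apply.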

 

Disk.SchedNumReqOutstanding

 

Parameter

Disk.SchedNumReqOutstanding

Scope

LUN

Description

When two or more virtual machines share a LUN (logical unit number), this parameter controls the total number of outstanding commands permitted from all virtual machines collectively on the host to that LUN (this setting is not per virtual machine).

Mitigation Plan
vSphere 5.5

vSphere 5.5 permits per-device application of this setting. Use the value in the table that corresponds to the underlying storage system presenting the LUN.

Mitigation Plan
vSphere 6

vSphere 6.0 permits per-device application of this setting. Use the value in the table that corresponds to the underlying storage system presenting the LUN.
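
A minimal sketch of applying the per-device value from the ESXi shell, with naa.xxx standing in for the device identifier of the LUN in question (use 256 for XtremIO devices and 32, the default, for VNX/VMAX devices):

# esxcli storage core device set -d naa.xxx -O 256
# esxcli storage core device list -d naa.xxx | grep -i outstanding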

 

Disk.SchedQuantum

 

Parameter

Disk.SchedQuantum

Description

The maximum number of consecutive “sequential” I/Os allowed from one VM before forcing a switch to another VM (unless this is the only VM on the LUN). Disk.SchedQuantum is set to a default value of 8.

Scope

Global

EMC Recommendations

EMC recommends setting it to 64 for systems with XtremIO only.

EMC recommends leaving the default of 8 for VNX/VMAX systems adding XtremIO.

EMC recommends setting it to 8 for XtremIO systems adding VNX/VMAX.
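
Disk.SchedQuantum is a host-wide advanced setting, so one value has to serve every array behind the host. A minimal sketch of checking and changing it (64 is the XtremIO-only value; keep 8 in mixed environments per the table):

# esxcli system settings advanced list -o /Disk/SchedQuantum
# esxcli system settings advanced set -o /Disk/SchedQuantum -i 64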

 

Disk.DiskMaxIOSize

 

Parameter

Disk.DiskMaxIOSize

Scope

Global

Description

ESX can pass I/O requests as large as 32767 KB directly to the storage device. I/O requests larger than this are split into several, smaller-sized I/O requests. Some storage devices, however, have been found to exhibit reduced performance when passed large I/O requests (above 128KB, 256KB, or 512KB, depending on the array and configuration). As a fix for this, you can lower the maximum I/O size ESX allows before splitting I/O requests.

EMC Recommends

EMC recommends setting it to 4096 for systems connected only to XtremIO.

EMC recommends leaving the default of 32768 for systems connected only to VNX or VMAX.

EMC recommends setting it to 4096 for systems with VMAX + XtremIO.

EMC recommends setting it to 4096 for XtremIO systems adding VNX.
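
Like Disk.SchedQuantum, this is a host-wide advanced setting and the value is expressed in KB (4096 = 4MB). A minimal sketch of checking and changing it:

# esxcli system settings advanced list -o /Disk/DiskMaxIOSize
# esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 4096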

 

XCOPY (/DataMover/MaxHWTransferSize)

 

Parameter

XCOPY (/DataMover/MaxHWTransferSize)

Scope

Global

Description

Maximum number of blocks used for XCOPY operations.

EMC Recommends

vSphere 5.5:

EMC recommends setting it to 256 for systems connected only to XtremIO.

EMC recommends setting it to 16384 for systems connected only to VNX or VMAX.

EMC recommends leaving the default of 4096 for systems with VMAX or VNX adding XtremIO.

EMC recommends leaving the default of 4096 for XtremIO systems adding VNX or VMAX.

 

vSphere 6:

EMC recommends enabling the VAAI claim rule for systems connected to VMAX, overriding the system setting and setting the transfer size to 240MB.

EMC recommends setting it to 256KB for systems connected only to XtremIO.
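
On vSphere 5.5 the change is the host-wide /DataMover/MaxHWTransferSize value (in KB) shown earlier, for example 256 on an XtremIO-only host:

# esxcli system settings advanced set -o /DataMover/MaxHWTransferSize -i 256

On vSphere 6 the VMAX recommendation is delivered through a VAAI claim rule instead of the global setting. The line below is an illustrative sketch of such a claim rule; treat the rule number and options as assumptions and confirm the exact syntax against current EMC documentation before using it:

# esxcli storage core claimrule add -r 914 -t vendor -V EMC -M SYMMETRIX -P VMW_VAAIP_SYMM -c VAAI -a -s -m 240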

 

vCenter Concurrent Full Clones

 

Parameter

config.vpxd.ResourceManager.maxCostPerHost

Scope

vCenter

Description

Determines the maximum number of concurrent full clone operations allowed (the default value is 8)

EMC Recommendations

EMC recommends setting it to 8 per X-Brick (up to 48) for systems connected only to XtremIO.

EMC recommends leaving the default for systems connected only to VNX or VMAX.

EMC recommends setting it to 8 for systems with VMAX + XtremIO.

EMC recommends setting it to 8 for systems with VNX + XtremIO.
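
config.vpxd.ResourceManager.maxCostPerHost is a vCenter advanced setting rather than an ESXi host setting, so it is changed in the vCenter Server Advanced Settings as a key/value pair (the exact navigation path depends on your vCenter version). As an example, 48 corresponds to the 8 per X-Brick recommendation on a six-X-Brick cluster:

config.vpxd.ResourceManager.maxCostPerHost = 48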

 

 
