vSphere 4.1 – The First Bug Found (and How to Resolve It)
Hi,
So I was having fun testing our new EMC VAAI integration with vSphere 4.1, and started doing a full clone of a VM, 10 times over.
What is VAAI, you may ask? Well, CLARiiON, Celerra, and VMAX will all support the VMware vStorage APIs for Array Integration. There are some functions that your storage system can handle more efficiently than your virtual servers and their hosts can; VAAI moves these tasks from the host to the storage system, which performs them faster — and frees up resources for your virtual servers.
The APIs that were implemented here include:
* Hardware-Accelerated Locking = 10-100x better metadata scaling
– Replace LUN locking with extent-based locks for better granularity.
– Reduce the number of “lock” operations required by using one efficient SCSI command to perform the pre-lock, lock, and post-lock operations.
– Increase locking efficiency by an order of magnitude.
* Hardware-Accelerated Zero = 2-10x fewer I/O operations
– Eliminate redundant and repetitive host-based write commands with optimized internal array commands.
* Hardware-Accelerated Copy = 2-10x better data movement
– Leverage the array’s native copy capability to move blocks.
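If you want to see which of your devices actually advertise this offload support, here is a minimal sketch using pyVmomi (VMware’s Python SDK; the vCenter address and credentials below are placeholders you would replace with your own) that prints the vStorage/VAAI support status each host reports per SCSI disk:

```python
# A minimal sketch, assuming pyVmomi is installed; the host, user, and
# password are placeholders, not real values from this post.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only: skips cert checks
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        # Each SCSI disk reports whether the array behind it supports
        # the vStorage (VAAI) primitives (field added in the 4.1 API).
        for lun in host.config.storageDevice.scsiLun:
            if isinstance(lun, vim.host.ScsiDisk):
                print("  %s -> %s" % (lun.canonicalName, lun.vStorageSupport))
    view.Destroy()
finally:
    Disconnect(si)
```

Disks that show vStorageSupported are the candidates for the offloads listed above.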
Anyway,
To my surprise, I found that as opposed to the small amount of disk I/O observed in the early betas using esxtop, this build generated a LOT of IOPS..
I started scratching my head.. what could it be? So I started eliminating possibilities.
The first thing I did was to clone the VM within the same datastore.. that worked just fine, as observed here:
That led me to think something was wrong with the source datastore where the template resides.. I checked the source datastore, and then it hit me:
The “template” datastore was formatted with a 1MB VMFS block size, and the target datastore with a 2MB block size..
And I HAVE seen this bug before (4.0 – 4.0.2), so I reformatted the source datastore with a 2MB block size, and that fixed the problem..
So, an old problem that was fixed in vSphere 4.0.2 came back from the dead
🙂
Update 1: it turned out that this is by design,
linked to Chad’s blog at:
The source and destination VMFS volumes have different block sizes (a colleague, Itzik Reich, already ran into this one at a customer, here – not quite a bug, but it does make it clear – “consistent block sizes” is a “good hygiene” move)
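In the spirit of that “good hygiene” advice, here is a minimal sketch, again assuming pyVmomi with placeholder connection details, that groups your datastores by VMFS block size so mismatches like mine stand out immediately:

```python
# A minimal sketch, assuming pyVmomi; connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only: skips cert checks
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    by_block_size = {}
    for ds in view.view:
        # Only VMFS datastores carry a block size; NFS datastores do not.
        if isinstance(ds.info, vim.host.VmfsDatastoreInfo):
            by_block_size.setdefault(ds.info.vmfs.blockSizeMb, []).append(ds.name)
    for size_mb, names in sorted(by_block_size.items()):
        print("%dMB block size: %s" % (size_mb, ", ".join(names)))
    if len(by_block_size) > 1:
        print("WARNING: mixed block sizes - clones between these datastores "
              "will fall back to host-based data movement.")
    view.Destroy()
finally:
    Disconnect(si)
```

Run it before bulk clone operations: if it prints more than one block size, clones between those datastores will take the host-based path instead of the hardware-accelerated one.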