Hi,

Here at EMC, we try to listen very hard to our customers’ requests; a strong one that I have often been asked about is the ability to add automatic Failback to SRM 4 / 4.1 when using EMC SRDF (we already offer automatic Failback for the Celerra IP Replicator, CLARiiON MirrorView, and RecoverPoint), so here it is:

SRM Automatic Failback for DMX / VMAX SRDF

How does the Automatic Failback work? Glad you’ve asked.

VMware Site Recovery Manager does not offer a built-in Failback mechanism; Failback requires manual intervention to recreate Protection Groups and Recovery Plans.

The SRDF SRA makes Failback easier by automatically swapping the R1/R2 personalities, and therefore the direction of replication, after Failover.
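For those curious about what happens under the covers, here is a minimal sketch (Python driving SYMCLI via subprocess) of the manual sequence the SRA automates. The device group name srm_dg is a placeholder, and the exact symrdf flags should be verified against your Solutions Enabler documentation:

```python
# A minimal sketch, assuming Solutions Enabler is installed and "srm_dg"
# is a placeholder RDF1 device group; verify flags against the SE docs.
import subprocess

DG = "srm_dg"  # placeholder device group name

def symrdf(*args):
    """Run one symrdf action against the device group, failing loudly."""
    subprocess.run(["symrdf", "-g", DG, "-noprompt", *args], check=True)

# After a failover has been run at the recovery site:
symrdf("swap")       # swap the R1/R2 personalities, reversing the roles
symrdf("establish")  # resume replication in the new (reversed) direction
```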

VSI 4.0 takes this a step further by adding a wizard that allows automated Failback.

No more laborious recreations of Protection Groups or Recovery Plans!

Post-Failback Operations:

Any “inaccessible” VMs that are being failed back are removed from the inventory.

Each RDM mapping file is deleted so that it can be re-added with the appropriate references.

Each VM is reconfigured with the user-selected network mappings and virtual device information.

Datastores are automatically renamed to remove the SNAPXXXX prefix (sketched right after this list).

For each VM, the mappings to any spanned datastores are fixed up appropriately.
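VSI drives all of this through the vSphere API. Just to make the datastore-rename step concrete, here is a minimal pyVmomi sketch, assuming a "snap-&lt;hex&gt;-" prefix convention; the vCenter address, credentials, and pattern are placeholders, not VSI’s actual code:

```python
# A hedged sketch: strip the "snap-..." prefix SRM leaves on resignatured
# datastores. Host, credentials, and the prefix pattern are placeholders.
import re
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use real certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        # resignatured volumes show up as e.g. "snap-1a2b3c4d-<original name>"
        match = re.match(r"^snap-[0-9a-f]+-(.+)$", ds.name, re.IGNORECASE)
        if match:
            print("Renaming", ds.name, "->", match.group(1))
            ds.RenameDatastore(match.group(1))
    view.DestroyView()
finally:
    Disconnect(si)
```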

Inaccessible VM Clean-Up

The ESX hosts on which the failed-over VMs were running are identified; this can be determined from the consistency group definitions.

The VMs that are attached to these failed-back datastores are marked “inaccessible”.

These “inaccessible” VMs are determined by comparing against the already-calculated list of VMs at the failback site.

The matching VMs that are marked “inaccessible” are then unregistered, that is, removed from the inventory (see the sketch below).
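Purely as an illustration (this is not VSI’s actual implementation), a pyVmomi sketch of that unregister step could look like the following; the vCenter address and credentials are placeholders as before:

```python
# A hedged sketch: unregister VMs whose connectionState is "inaccessible".
# Connection details are placeholders; VSI performs this logic internally.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use real certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.runtime.connectionState == \
                vim.VirtualMachine.ConnectionState.inaccessible:
            print("Unregistering inaccessible VM:", vm.name)
            vm.UnregisterVM()  # removes it from inventory; files stay on disk
    view.DestroyView()
finally:
    Disconnect(si)
```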

Recreating RDM Files
When a VM is failed over or failed back along with its RDM mappings, the mappings become invalid at the new site because the pointers still reference the pass-through devices at the old site.

Since the “.vmdk” and “-rdm(p).vmdk” files contain just references to the actual devices, these files are deleted.

New RDM mappings are then created on the VM with references to the respective replicated devices at the new site.
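To make the mechanics concrete, here is a hedged pyVmomi sketch of removing one stale RDM and attaching a replacement; the recreate_rdm helper, the device path, and the disk settings are illustrative assumptions, not the actual SRA/VSI code:

```python
# A hedged sketch: destroy a stale RDM pointer file and re-add the RDM
# backed by the replicated device at the new site.
from pyVmomi import vim

def recreate_rdm(vm, old_disk, new_device_path):
    """Replace a stale RDM disk on a VM (all arguments are placeholders).

    vm              -- a vim.VirtualMachine already looked up in inventory
    old_disk        -- the vim.vm.device.VirtualDisk with the stale mapping
    new_device_path -- the replicated LUN as seen at the failback site,
                       e.g. "/vmfs/devices/disks/naa.6006..."
    """
    # 1. Remove the old disk; FileOperation.destroy deletes the pointer
    #    files ("*.vmdk" / "*-rdmp.vmdk") from the datastore.
    remove = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.remove,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.destroy,
        device=old_disk)

    # 2. Add a fresh RDM on the same controller/unit, backed by the new LUN;
    #    leaving fileName unset lets vSphere create the new pointer file.
    backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo(
        deviceName=new_device_path,
        compatibilityMode="physicalMode",  # or "virtualMode"
        diskMode="independent_persistent")
    add = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=vim.vm.device.VirtualDisk(
            backing=backing,
            controllerKey=old_disk.controllerKey,
            unitNumber=old_disk.unitNumber))

    # Apply both changes in a single reconfigure task.
    return vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[remove, add]))
```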

Some other very important features are:

Provides the ability to suspend SRDF replication, allowing virtual machines (VMs) to be mounted and tested directly on the R2 copies.

Optionally, you can choose to run the testFailover command on the R2 devices directly, without using TimeFinder devices. The SRA first splits the RDF links, mounts the datastores on the recovery site, and brings up the VMs. When finished, testFailover discards any changes to the R2 devices and re-establishes the RDF links so that data replication continues.

(Is this feature cool or what? Huge space savings!)
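Under the covers this maps to plain RDF link actions. Here is an illustrative sketch of that split / test / re-establish flow, with the same srm_dg placeholder as before; the SRA’s real orchestration involves far more than these two calls:

```python
# Illustrative only; "srm_dg" is a placeholder device group name.
import subprocess

DG = "srm_dg"

def symrdf(*args):
    subprocess.run(["symrdf", "-g", DG, "-noprompt", *args], check=True)

symrdf("split")      # split the RDF links; the R2 devices become read/write
# ... the SRA then mounts the R2 datastores at the recovery site and powers
# on the test VMs, so the test runs directly on the R2 copies ...
symrdf("establish")  # incremental re-establish: discards the R2 changes and
                     # resumes R1 -> R2 replication
```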

Provides the ability to take space-saving TimeFinder/Snap copies off an SRDF/A target R2 copy. This feature requires Enginuity 5875 and Solutions Enabler V7.2.

Supports Group Name Services (GNS) definitions; support is limited to device groups (DGs) only. Allows for the creation of DGs and for adding remote clones/snaps/mirrors (see the sketch below).
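For reference, creating a DG and adding a device by hand looks roughly like this; the group name, Symmetrix ID, and device number are placeholders, and the exact symdg / symld syntax should be checked against the Solutions Enabler documentation:

```python
# A hedged sketch; group name, Symmetrix ID, and device number are all
# placeholders, and the symdg/symld syntax should be double-checked.
import subprocess

subprocess.run(["symdg", "create", "srm_dg", "-type", "RDF1"], check=True)
subprocess.run(["symld", "-g", "srm_dg", "-sid", "000194900123",
                "add", "dev", "0123"], check=True)
```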

New global options:

The TestFailoverWithoutLocalSnapshots option runs the test failover operation directly off the R2 devices, without the need for local TimeFinder replica devices.

Here’s the compatibility matrix for it:

Update: I updated this post with a new one that contains a DEMO of the Failover / Failback at

 http://volumes.blog/2011/01/10/srm-automatic-failback-using-emc-symmetrix-vmax/
