Today is a special day: we have finally released RP4VMs 5.1 SP1. Calling it a service pack would be unfair; I think it's the biggest release, or at least the biggest architectural change, we have made to RP4VMs since its "1.0" version (or should I say, its 4.x version).

So what’s new?

First, RP4VMs is based on the trusted, robust, physical RecoverPoint. We have two product lines: the classic RecoverPoint, which is used to replicate data to and from physical Dell EMC and non-EMC arrays (for example, from XtremIO to Unity), and the relatively new RecoverPoint for VMs, which is used to replicate VMs and doesn't rely on or require a specific storage array brand.

The beauty of it is that it really works at the hypervisor level, meaning you can set your RPO at the VM level, as opposed to the LUN level, where a single LUN can host multiple VMs and you can only select one RPO per LUN.
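To make the granularity difference concrete, here is a toy sketch; every name and number in it is mine for illustration, not an RP4VMs object:

```python
# Toy model contrasting LUN-level vs VM-level RPO granularity.
# All names and values are illustrative placeholders.

# LUN-level: one RPO shared by every VM that happens to live on the LUN.
lun_replication = {
    "lun01": {"rpo_seconds": 300, "vms": ["web01", "db01", "batch01"]},
}

# VM-level (the RP4VMs model): each VM carries its own RPO.
vm_replication = {
    "web01": {"rpo_seconds": 30},     # latency-sensitive front end
    "db01": {"rpo_seconds": 60},      # transactional database
    "batch01": {"rpo_seconds": 900},  # nightly batch worker
}

for vm, policy in vm_replication.items():
    print(f"{vm}: RPO {policy['rpo_seconds']}s")
```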

The use cases are:

  • Fail-Over: your primary site is down and you want to recover your VMs at the remote site
  • Test a Copy: you want to test your VMs at the remote site in an isolated network while your production VMs (and replication) keep running
  • Recover Production: you want to use the remote, replicated data to recover your VMs on the production site itself, meaning your production site is up but the content of one or more of its VMs is corrupted

From a very high-level architecture view, you deploy a vRPA (an OVA appliance) at the ESXi level; you probably want to deploy at least two for performance and HA. You then do the same at the "remote" cluster, which can be a remote site with its own vCenter, or just a different local ESXi cluster if you want to protect your data locally. Each copy in each of these cases should reside on different storage, which can be a "real" array or just DAS, vSAN, etc.
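As a rough illustration of the OVA deployment step, here is a minimal Python sketch that drives VMware's ovftool; every path, name, and credential below is a placeholder, and in practice the RP4VMs Deployment Manager walks you through this:

```python
import subprocess

# Minimal sketch: push a vRPA OVA to a cluster with VMware's ovftool.
# All names, paths, and credentials are placeholders; the RP4VMs
# Deployment Manager drives the real workflow for you.
OVA_PATH = "/isos/vRPA-5.1.SP1.ova"
TARGET = "vi://administrator@vsphere.local@vcenter.example.com/DC1/host/ProdCluster"

cmd = [
    "ovftool",
    "--acceptAllEulas",
    "--name=vRPA-1",
    "--datastore=DS1",
    "--net:VM Network=RP-Management",  # map the OVA's network to a portgroup
    "--powerOn",
    OVA_PATH,
    TARGET,
]

subprocess.run(cmd, check=True)
```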

So far, I haven't described anything new in this release, so why do I think it's the biggest release ever?

Because we no longer require the internal iSCSI stack to communicate between the vRPA and the ESXi kernel; all internal and external communication is now done over IP. That means you don't need to mess about with ESXi software iSCSI adapters, multipathing them, and so on. What a godsend! As someone who works very closely with the product, I can't even describe how much easier the IP splitter is to work with compared to the iSCSI one, and I am very excited to see it finally GA.
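One practical upshot: since everything now rides over plain IP, checking the "storage plumbing" from a troubleshooting standpoint is just ordinary TCP reachability. A trivial sketch (the address and port are made-up placeholders):

```python
import socket

# Trivial reachability sketch: with the IP splitter there is no iSCSI
# vmkernel plumbing to verify, just ordinary TCP connectivity.
# Address and port below are illustrative placeholders.
VRPA_MGMT = ("192.168.10.21", 443)

with socket.create_connection(VRPA_MGMT, timeout=5):
    print(f"vRPA reachable at {VRPA_MGMT[0]}:{VRPA_MGMT[1]}")
```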

Some other components remain just like in the past: you have your Deployment Manager (web based) and your vCenter plugin to manage your VMs. As for the ESXi splitter:

  • In iSCSI communication mode, the splitter can use either vSCSI or the VAIO filter
  • In IP communication mode, the vSCSI splitter is used in this release

Some scalability numbers, which have also improved over the versions (a quick sanity-check sketch follows this list):

  • Up to 50 vRPAs per vCenter
  • Up to 8,000 protected VMs per vCenter
  • Up to 256 Consistency Groups per vRPA cluster
  • Up to 1,000 protected VMs per vRPA cluster
  • Up to 5 vCenters and ESXi clusters registered with each vRPA cluster
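These limits are easy to sanity-check before a rollout; here is a small sketch that validates a planned topology against the numbers above (the plan itself is invented):

```python
# Sanity-check a planned RP4VMs topology against the 5.1 SP1 limits above.
LIMITS = {
    "vrpas_per_vcenter": 50,
    "protected_vms_per_vcenter": 8000,
    "cgs_per_vrpa_cluster": 256,
    "protected_vms_per_vrpa_cluster": 1000,
    "vcenters_per_vrpa_cluster": 5,
}

# An illustrative plan, not a real deployment.
plan = {
    "vrpas_per_vcenter": 8,
    "protected_vms_per_vcenter": 1200,
    "cgs_per_vrpa_cluster": 120,
    "protected_vms_per_vrpa_cluster": 600,
    "vcenters_per_vrpa_cluster": 2,
}

for key, limit in LIMITS.items():
    status = "OK" if plan[key] <= limit else "OVER LIMIT"
    print(f"{key}: {plan[key]}/{limit} -> {status}")
```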

Just like in the past, you can set each CG's boot-up priority and the power-up priority of the VMs within the CG.
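Conceptually, recovery ordering is just a two-level sort: first by CG priority, then by VM priority within each CG. A toy sketch (all names and priorities are invented):

```python
# Toy model of recovery ordering: CG boot-up priority first,
# then VM power-up priority within the CG. Lower number = earlier.
vms = [
    {"name": "db01",  "cg": "cg-db",  "cg_priority": 1, "vm_priority": 1},
    {"name": "db02",  "cg": "cg-db",  "cg_priority": 1, "vm_priority": 2},
    {"name": "app01", "cg": "cg-app", "cg_priority": 2, "vm_priority": 1},
    {"name": "web01", "cg": "cg-app", "cg_priority": 2, "vm_priority": 2},
]

boot_order = sorted(vms, key=lambda v: (v["cg_priority"], v["vm_priority"]))
print([v["name"] for v in boot_order])  # ['db01', 'db02', 'app01', 'web01']
```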

  • Starting with 5.0.1, MAC replication to remote copies is enabled by default
  • By default, MAC replication to local copies is disabled
  • During the Protect VM wizard, the user has the option to enable MAC replication for copies residing on the same vCenter (local copies, and remote copies when RP4VMs clusters share the same vCenter)
    • This can create a MAC conflict if the VM is protected back within the same vCenter/network
    • It is intended for a different network and/or vCenter hosting the local copy
When enabled, the production VM's network configuration is also preserved (so there's no need to configure Re-IP).
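The conflict rule in the bullets above is simple enough to capture in a sketch: enabling MAC replication risks a conflict exactly when the source and the copy share both a vCenter and a network. The function and fields here are mine, purely for illustration:

```python
# Illustrative rule-of-thumb from the bullets above: replicating the MAC
# is safe when the copy lives on a different network and/or vCenter,
# and risks a MAC conflict when both are shared.
def mac_replication_conflict_risk(source: dict, copy: dict) -> bool:
    """Return True when enabling MAC replication could cause a conflict."""
    same_vcenter = source["vcenter"] == copy["vcenter"]
    same_network = source["network"] == copy["network"]
    return same_vcenter and same_network

prod = {"vcenter": "vc-prod", "network": "VLAN100"}
local_copy = {"vcenter": "vc-prod", "network": "VLAN100"}
remote_copy = {"vcenter": "vc-dr", "network": "VLAN100"}

print(mac_replication_conflict_risk(prod, local_copy))   # True  -> leave disabled
print(mac_replication_conflict_risk(prod, remote_copy))  # False -> safe to enable
```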

    New design for Re-IP (5.0.1)

    • Manage IP settings through UI
    • No scripts required
    • No manual operations
    • Single click to retrieve protected VM settings
    • Supports Microsoft Windows and Linux
    • Automatic MAC replication for remote copies

You can either assign a new IP to your recovered, failed-over VM or, in the case of a stretched L2 network, keep the source IP for the failed-over VM.
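Since RP4VMs also exposes a RESTful interface, Re-IP settings can in principle be managed programmatically as well as through the UI. The sketch below is hypothetical: the endpoint path and payload fields are placeholders I made up, not the documented API, so check the RP4VMs REST API guide for the real resource names:

```python
import requests

# Hypothetical sketch of driving Re-IP settings over the RP4VMs REST
# interface. The endpoint path and payload fields are placeholders,
# NOT the documented API; consult the RP4VMs REST API guide.
BASE = "https://vrpa-cluster.example.com/fapi/rest/5_1"  # assumed base URL
AUTH = ("admin", "admin")  # placeholder credentials

payload = {
    "vmName": "web01",
    "copy": "DR",
    "ipConfiguration": {
        "address": "10.20.30.40",   # new IP for the failed-over VM
        "netmask": "255.255.255.0",
        "gateway": "10.20.30.1",
    },
}

resp = requests.put(
    f"{BASE}/settings/re_ip",  # hypothetical resource
    json=payload,
    auth=AUTH,
    verify=False,  # lab-only: skip certificate validation
)
resp.raise_for_status()
```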

    Pre-Defined Failover Network Configuration

Define the failover network per CG:

  • During the Protect VM wizard
  • For already-protected VMs
  • During the Failover wizard

Testing a point-in-time: the user can choose the pre-defined failover network or a dedicated isolated network.

Promote-to-failover flow: the user can continue with the current test network or switch to the pre-defined failover network.
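The selection logic boils down to a small decision table; here is a sketch of it (purely illustrative, not product code, and all network names are invented):

```python
# Illustrative decision table for which network a recovered VM lands on,
# following the choices described above. Not product code.
def pick_network(operation: str, choice: str, networks: dict) -> str:
    if operation == "test":
        # Testing a point-in-time: pre-defined failover network,
        # or a dedicated isolated network.
        return networks["failover"] if choice == "failover" else networks["isolated"]
    if operation == "promote":
        # Promote-to-failover: keep the current test network,
        # or switch to the pre-defined failover network.
        return networks["test"] if choice == "keep_test" else networks["failover"]
    raise ValueError(f"unknown operation: {operation}")

nets = {"failover": "DR-Prod-VLAN", "isolated": "DR-Bubble", "test": "DR-Bubble"}
print(pick_network("test", "isolated", nets))     # DR-Bubble
print(pick_network("promote", "failover", nets))  # DR-Prod-VLAN
```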

    Recovery Operations – Status Reporting

    Expand Or Reduce CG Without Journal Loss

    • Provides flexibility for CG configuration (a hypothetical API sketch follows this list):
      • Add a new VM to the same CG
      • Add a VMDK to a protected VM
      • Remove a VM from an existing CG
      • Remove a VMDK from a protected VM
    • No impact to journal history
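As with Re-IP, these CG changes could presumably also be driven through the RESTful interface. The sketch below is hypothetical (the endpoint and fields are placeholders, not the documented API); it just shows the shape of adding a VM to an existing CG while the journal history is kept:

```python
import requests

# Hypothetical sketch: add a VM to an existing consistency group via the
# RP4VMs REST interface. Endpoint path and payload fields are placeholders,
# NOT the documented API; the point is the journal history is preserved.
BASE = "https://vrpa-cluster.example.com/fapi/rest/5_1"  # assumed base URL
AUTH = ("admin", "admin")  # placeholder credentials

resp = requests.post(
    f"{BASE}/groups/cg-app/vms",  # hypothetical resource
    json={"vmName": "web02", "replicateAllDisks": True},
    auth=AUTH,
    verify=False,  # lab-only: skip certificate validation
)
resp.raise_for_status()
print("web02 added to cg-app; existing journal history preserved")
```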

You can watch a video of the new IP-based Deployment Manager here. Thanks, Idan Kentor, for producing it!

As always, you can download the new version from support.emc.com.

Comments

  1. Do you happen to know if this new version has RBAC controls for managing CGs in VMware? We're finding that when we set up RBAC in VMware, we can't limit which sites control which CGs.

  2. Hi Mike,
    Unfortunately, the answer is no. We do have concrete plans to enhance our integration with vCenter RBAC in upcoming releases. There are a few ways to work around this issue by leveraging the RP4VMs RESTful interface, as well as by restricting RO access to vCenter.

    Hope that helps,
    Idan Kentor
    RecoverPoint Corporate Systems Engineering
    @IdanKentor
