RecoverPoint for VMs (RP4VMs) 5.1 SP1 Is Out
Today is a special day: we have finally released RP4VMs 5.1 SP1. To call it a service pack would be unfair; I think it's the biggest release, or at least the biggest architectural change we have made to RP4VMs since its "1.0" version (or should I say, its 4.x version).
So what’s new?
First, some background. RP4VMs is based on the trusted, robust, physical RecoverPoint. We have two product lines: the classic RecoverPoint, which is used to replicate data to and from physical Dell EMC and non-EMC arrays (for example, from XtremIO to Unity), and the relatively new RecoverPoint for VMs, which is used to replicate VMs and doesn't rely on, or require, a specific storage array brand.
The beauty of it is that it works at the hypervisor level, meaning you can set your RPO at the VM level, as opposed to the LUN level, where a LUN can host multiple VMs and you can only select one RPO per LUN.
The use cases are:
From a very high-level architecture perspective, you deploy a vRPA (an OVA appliance) at the ESXi level; you will probably want to deploy at least two for performance and HA. You then do the same at the "remote" cluster, which can be a remote site with its own vCenter, or simply a different local ESXi cluster if you want to protect your data locally. In each of these cases, each copy should reside on different storage, which can be a "real" array or just DAS, vSAN, etc.
So far, I haven’t described anything new to this release so why do I think it’s the biggest release ever?
Because we no longer require the internal iSCSI stack to communicate between the vRPA and the ESXi kernel; all internal and external communication is now done over IP. That means you don't need to mess about with ESXi software iSCSI adapters, multipathing them, and so on. What a godsend! As someone who works very closely with the product, I can't even describe how much easier it is to work with the IP splitter vs. the iSCSI one, and I am very excited to see it finally GA.
Some other components are just like in the past: you have your deployment manager (web based) and your vCenter plugin to manage your VMs, plus the ESXi splitter.
Some scalability numbers have also improved across the versions:
Just like in the past, you can set the CGs' boot-up priorities and the power-up priority of the VMs within each CG.
When enabled, the production VM network configuration is also preserved (so there's no need to configure Re-IP).
New design for Re-IP (5.0.1)
You can either assign a new IP to your recovered, failed-over VM, or, in the case of an L2 network, keep the source IP for the failed-over VM.
Pre-Defined Failover Network Configuration
Define failover network per CG:
During Protect VM wizard
For already protected VMs
During Failover wizard
Testing Point-in-Time:
Users can choose the pre-defined failover network or a dedicated isolated network
Promote to failover flow:
Users can continue with the current test network or use the pre-defined failover network
Recovery Operations – Status Reporting
Expand Or Reduce CG Without Journal Loss
You can watch a video of the new IP-based deployment manager here. Thanks, Idan Kentor, for producing it!
As always, you can download the new version from support.emc.com.
Good stuff! Is there an easy way to convert from iSCSI to the native write splitter when upgrading an existing RP4VM deployment?
Yes. I believe it's part of the user guide; please work with your Dell EMC SE.
Do you happen to know if this new version has RBAC controls for managing CGs in VMware? We're finding that when we set up RBAC in VMware, we can't limit which sites control which CGs.
Hi Mike,
Unfortunately, the answer is no. We do have concrete plans to enhance our integration with vCenter RBAC in upcoming releases. There are a few ways to work around this issue by leveraging the RP4VMs RESTful interface, as well as by restricting users to read-only access in vCenter.
Hope that helps,
Idan Kentor
RecoverPoint Corporate Systems Engineering
@IdanKentor
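As a rough illustration of the REST-based workaround Idan mentions, the sketch below filters consistency groups by a per-site naming convention so each team only works with "its" CGs. The vRPA address, endpoint path, and response shape here are my assumptions for illustration, not the documented RP4VMs API; check the RP4VMs REST API reference for the real endpoints.

```python
# Hypothetical sketch of scoping CG visibility by naming convention,
# on top of the RP4VMs RESTful interface. The host, port, endpoint
# path, and JSON shape below are ASSUMPTIONS, not the documented API.
import base64
import json
import urllib.request

VRPA_URL = "https://vrpa.example.com"            # hypothetical vRPA address
CG_ENDPOINT = "/fapi/rest/5_1/groups/settings"   # assumed endpoint path


def fetch_cg_names(user, password):
    """GET the consistency-group list from a vRPA (assumed endpoint/shape)."""
    req = urllib.request.Request(VRPA_URL + CG_ENDPOINT)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Assumed response shape: {"innerSet": [{"name": "..."}, ...]}
    return [cg["name"] for cg in body.get("innerSet", [])]


def cgs_for_team(cg_names, team_prefix):
    """Poor man's RBAC: only expose CGs whose name starts with the
    team's agreed prefix, e.g. 'siteA-'. Enforced by your tooling,
    not by vCenter itself."""
    return [name for name in cg_names if name.startswith(team_prefix)]
```

For example, given CGs named `siteA-sql`, `siteB-web`, and `siteA-exch`, `cgs_for_team([...], "siteA-")` returns only the two `siteA-` groups. This is a convention enforced in your own scripts, so it complements, rather than replaces, restricting vCenter access to read-only.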
Thanks for the response. Is there any way I can get a link to the documentation for those RESTful APIs?