Carrying on from today’s announcements about RecoverPoint ( https://volumes.blog/2016/10/10/recoverpoint-5-0-the-xtremio-enhancements/ ), I’m also super pumped to see that my brothers & sisters from the CTD group have just released RP4VMs 5.0! Why do I like this version so much? In one word, “simplicity”; or in two words, “large scale”. These are the two topics I will try to cover in this post.
The RP4VM 5.0 release is centered on the main topics of Ease of use, Scale, and Total Cost of Ownership (TCO). Let’s take a quick look at the enhanced scalability of RP4VM 5.0. One enhancement is that it can now support an unlimited amount of vCenter inventory. It also supports the protection of up to 5,000 virtual machines (VMs) when using 50 RP clusters, 2,500 VMs with 10 RP clusters, and now up to 128 Consistency Groups (CGs) per cluster.
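Just to put those scale numbers side by side (the figures below are simply the ones quoted above; the quick per-cluster division is my own back-of-the-envelope math, nothing official):

```python
# RP4VMs 5.0 scale points as quoted above: RP clusters -> maximum protected VMs.
scale_points = {
    50: 5000,   # 50 RP clusters support up to 5,000 VMs
    10: 2500,   # 10 RP clusters support up to 2,500 VMs
}
MAX_CGS_PER_CLUSTER = 128  # consistency groups per cluster

for clusters, vms in scale_points.items():
    print(f"{clusters} clusters -> {vms} VMs (~{vms // clusters} VMs per cluster)")
```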
The major focus of this post is on the “Easily deploy” and “Save on network” themes, as we discuss the simplification of configuring the vRPAs using from 1 to 4 IP addresses during cluster creation and the use of DHCP when configuring IP addresses.
RP4VMs 5.0 has enhanced the pre-deployment steps by allowing all the network connections to be placed on a single vSwitch if desired. The RecoverPoint Deployer used to install and configure the cluster has also been enhanced, reducing the number of IP addresses required. And lastly, improvements have been made in the RecoverPoint for VMs plug-in to improve the user experience.
With the release of RecoverPoint for VMs 5.0, the network configuration requirements have been simplified in several ways. The first enhancement is that all the RP4VM vRPA connections on each ESXi host can now be configured to run through a single vSwitch. This makes the pre-deployment steps quicker and easier for the vStorage admin to accomplish and consumes fewer resources. Everything can also be run through a single vmnic, saving resources on the hosts.
With the release of RP4VM 5.0, all of the network connections can be combined onto a single vSwitch. Here you can see that the WAN, LAN, and iSCSI ports are all on vSwitch0.
While two VMkernel ports are shown, only a single port is required for a successful implementation. Now for a look at the properties page of the vRPA we just created. You can see here that the four network adapters needed for vRPA communication are all successfully connected to the VM Network port group. It should be noted that while RP4VM 5.0 allows you to save on networking resources, you still need to configure the same underlying infrastructure for the iSCSI connections as before.
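If you want to verify for yourself which vSwitch (and VLAN) each of the vRPA port groups ended up on, a small read-only pyVmomi sketch like the one below will list them per host. Treat it purely as an illustration: the vCenter address and credentials are placeholders I made up, not anything RP4VMs requires.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- substitute your own vCenter and credentials.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(f"Host: {host.name}")
        for pg in host.config.network.portgroup:
            # Each port group records the vSwitch it lives on and its VLAN ID.
            print(f"  portgroup={pg.spec.name:<25} "
                  f"vswitch={pg.spec.vswitchName:<10} vlan={pg.spec.vlanId}")
finally:
    Disconnect(si)
```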
A major goal of RP4VM 5.0 is to reduce the number of IPs per vRPA down to as few as one. Achieving this allows us to reduce the required number of NICs and ports per vRPA. It also allows the iSCSI connections to be funneled through a single port. Because of this, the IQN names for the ports are reduced to one, and the name represents the actual NIC being used, as shown above. The reduced topology is available during the actual deployment, either when running Deployer (DGUI) or when selecting the box property in boxmgmt. This will be demonstrated later.
Releases before 5.0 supported selecting a different VLAN for each network during OVF deployment. The RecoverPoint for Virtual Machines OVA package in Release 5.0 requires that only the management VLAN be selected during deployment. Configuring this management VLAN in the OVF template enables you to subsequently run the Deployer wizards to further configure the network adapter topology using 1-4 vNICs as desired.
RP4VMs 5.0 supports a choice of 5 different IP configuration options during deployment. All vRPAs in a cluster require the same configuration for it to work. The 5 options are:
• Option 1 – a single IP address, with all the traffic flowing through eth0.
• Option 2A – 2 IP addresses, with the WAN and LAN on one and the iSCSI ports on the other.
• Option 2B – 2 IP addresses, with the WAN on its own and the LAN and iSCSI connections on the other.
• Option 3 – 3 IP addresses: one for WAN, one for LAN, and one for the two iSCSI ports.
• Option 4 – every connection on its own IP. Use this option when performance and High Availability (HA) are critical; it is the recommended practice whenever the resources are available.
It should be noted that physical RPAs only use options 1 and 2B, without iSCSI, as iSCSI is not yet supported on a physical appliance.
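Purely as a memory aid, the same five layouts can be jotted down as a small lookup table; the grouping notation is my own shorthand and is not anything Deployer consumes:

```python
# Illustrative summary of the five RP4VMs 5.0 IP configuration options listed above.
# Each tuple groups the interfaces that share a single IP address on a vRPA.
ip_options = {
    "1":  [("WAN", "LAN", "iSCSI1", "iSCSI2")],
    "2A": [("WAN", "LAN"), ("iSCSI1", "iSCSI2")],
    "2B": [("WAN",), ("LAN", "iSCSI1", "iSCSI2")],
    "3":  [("WAN",), ("LAN",), ("iSCSI1", "iSCSI2")],
    "4":  [("WAN",), ("LAN",), ("iSCSI1",), ("iSCSI2",)],
}

for name, groups in ip_options.items():
    layout = " | ".join("+".join(g) for g in groups)
    print(f"Option {name}: {len(groups)} IP(s) per vRPA -> {layout}")
```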
Observe these recommendations when making your selection:
1) In low-traffic or non-production deployments, all virtual network cards may be placed on the same virtual network (on a single vSwitch).
2) Where high availability and performance are desired, separate the LAN and WAN traffic from the Data (iSCSI) traffic. For even better performance, place each network (LAN, WAN, Data1, and Data2) on a separate virtual switch.
3) For high-availability deployments in which clients have redundant physical switches, route each Data (iSCSI) card to a different physical switch (best practice) by creating one VMkernel port on each vSwitch dedicated to Data (iSCSI).
4) Since the vRPA relies on virtual networks, the bandwidth that you expose to the vRPA iSCSI vSwitches will determine the performance of the vRPA. You can configure vSphere hosts with quad-port NICs and present them to the vRPAs as single or dual iSCSI networks, or implement VLAN tagging to logically divide the network traffic among multiple vNICs.
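If you go down the VLAN-tagging route, the tag is applied to the port group itself. As a rough sketch only (the port group name "RP4VM_Data1", the VLAN ID, and the credentials are assumptions of mine, not values the product mandates), this is roughly how it could be done with pyVmomi:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder values -- substitute your own vCenter, host, and port group details.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]                      # first ESXi host, purely for illustration
    net_sys = host.configManager.networkSystem

    # Re-tag a hypothetical iSCSI port group with VLAN 100 so the vRPA Data traffic
    # is logically separated even when it shares physical uplinks.
    spec = vim.host.PortGroup.Specification(
        name="RP4VM_Data1",                  # assumed port group name
        vswitchName="vSwitch0",
        vlanId=100,
        policy=vim.host.NetworkPolicy(),
    )
    net_sys.UpdatePortGroup(pgName="RP4VM_Data1", portgrp=spec)
finally:
    Disconnect(si)
```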
The management network always runs through eth0. When deploying the OVF template, you need to know which configuration option you will be using in Deployer and set the port accordingly. If you do not set the management traffic to the correct destination network, you may not be able to reach the vRPA to run Deployer.
To start the deployment process, enter the IP address of one of the vRPAs, which has been configured in your vCenter, into a supported browser in the following format: https://<vRPA_IP_address>/. This opens the RecoverPoint for Virtual Machines home page, where you can get documentation or start the deployment process. To start Deployer, click on the EMC RecoverPoint for VMs Deployer link.
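If you just want to confirm the vRPA is answering before you open a browser, a tiny probe like this will do (the IP is a placeholder, and certificate verification is skipped because the appliance ships with a self-signed certificate):

```python
import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

vrpa_ip = "192.168.1.50"   # placeholder: the vRPA IP you configured in vCenter
url = f"https://{vrpa_ip}/"

# The vRPA uses a self-signed certificate, so skip verification for this probe only.
resp = requests.get(url, verify=False, timeout=10)
print(resp.status_code, "->", url)
```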
After proceeding through the standard steps from previous releases, you will come to the Connectivity Settings screen. In our first example, we will set up the system to have all the networking go through a single interface. The first item to enter is the IP address which will be used to manage the RP system. Next, you choose what kind of network infrastructure will be used for the vRPAs in the Network Adapters Configuration section. The first part is to choose the Topology for WAN and LAN in the dropdown. When selected, you will see 2 options to choose from: WAN and LAN on the same adapter, and WAN and LAN on separate adapters. In this first example we will choose WAN and LAN on same network adapter.
Depending on the option chosen, the number of fields available to configure will change, as will the choices in the Topology for Data (iSCSI) dropdown. To use a single interface, we select the Data (iSCSI) on the same network adapter as WAN and LAN option. Because we are using a single interface, the Network Mapping dropdown is grayed out and the choice we made when deploying the OVF file for the vRPA has been chosen for us. The next available field to set is the Default Gateway, which we entered to match the Cluster Management IP. Under the vRPAs Settings section there are only two IP fields. The first is for the WAN/LAN/DATA IP, which was already set as the IP used to start Deployer. The second IP is for all the connections of the second vRPA that we are creating the cluster with; it will be used for LAN/WAN and DATA on that vRPA. So once completed, there is a management IP and a single IP for each vRPA.
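To make the single-interface case concrete, here is how I would jot down the handful of values Deployer asks for in this option; the field names and sample addresses are my own shorthand, not Deployer’s:

```python
from dataclasses import dataclass

# Illustrative record of the "WAN and LAN on same adapter" +
# "Data (iSCSI) on same adapter" case. All names and addresses are placeholders.

@dataclass
class VrpaAddresses:
    combined_ip: str                 # one IP carrying WAN, LAN, and Data traffic

@dataclass
class ClusterConnectivity:
    cluster_mgmt_ip: str
    default_gateway: str
    vrpas: list

single_interface = ClusterConnectivity(
    cluster_mgmt_ip="192.168.1.40",
    default_gateway="192.168.1.1",       # entered to match the management network
    vrpas=[
        VrpaAddresses("192.168.1.50"),   # vRPA1: the IP used to launch Deployer
        VrpaAddresses("192.168.1.51"),   # vRPA2: its single combined IP
    ],
)
print(single_interface)
```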
Our second example is for the Data on separate connection from the WAN/LAN option, which we have selected in the Network Adapters Configuration dropdown lists by choosing WAN and LAN on same adapter and Data (iSCSI) on separate network adapter from WAN and LAN. Next, we have to select a network connection to use for the Data traffic from a dropdown of configured interfaces. Because we are using multiple IP addresses, we have to supply a netmask for each one, unlike the previous option where it was already determined by the first IP address we entered to start Deployer. Here we enter one for WAN/LAN and another for the Data IP address. Under the vRPAs Settings section, which is lower down on the vRPA Cluster Settings page, we have to provide an IP for the Data connection of the first vRPA, and also the two required for the second vRPA’s connections.
Our third example is for the WAN is separated from LAN and Data connection option, which we have selected in the Network Adapters Configuration dropdown lists by choosing WAN and LAN on separate adapters and Data (iSCSI) on same network adapter as LAN. Once that option is selected, the fields below will change accordingly. Next, we have to select a network connection to use for the WAN traffic from a dropdown of configured interfaces, since the LAN and Data are using the connection chosen when deploying the OVF template for the vRPA. We once again need to supply two netmasks, but this time the first is for the LAN/Data connection and the second is for the WAN connection alone. Under the vRPAs Settings section, which is lower down on the vRPA Cluster Settings page, you supply an IP for the WAN alone on the first vRPA and two IPs for the second vRPA, one for the WAN and one for the LAN/Data connection.
The fourth example is for the WAN and LAN and Data all being separated option, which we have selected in the Network Adapters Configuration dropdown lists by choosing WAN and LAN on separate adapters and Data (iSCSI) on separate network adapters from WAN and LAN. Once that option is selected, the fields below will change accordingly, displaying the new configuration screens shown here. Next, we have to select a network connection to use for the WAN and the Data traffic from a dropdown of configured interfaces, since the LAN is using the connection chosen when deploying the OVF template for the vRPA. There are now three netmasks which need to be entered: one for LAN, one for WAN, and a single one for the Data connections. In the vRPAs Settings section, which is lower down on the Cluster Settings page, you now input a WAN IP address and a Data IP address for the first vRPA, and then an IP for each of the LAN, WAN, and Data connections individually on vRPA2.
The last available option is the All are separated with dual data NICs option, which we have selected in the Network Adapters Configuration dropdown lists by choosing WAN and LAN on separate adapters and Data (iSCSI) on two dedicated network adapters. This is used for the best available performance and is recommended as a best practice. Once those options are selected, the fields below will change to display the new configuration screens shown. Next, we have to select network interfaces to use for the WAN and the two Data connections from a dropdown of configured interfaces, since the LAN is using the connection chosen when deploying the OVF template for the vRPA. This option requires 4 netmasks to be entered, one each for the WAN, LAN, Data 1, and Data 2 IPs, as all have their own connection links. Under the vRPAs Settings section, which is lower down on the Cluster Settings page, we can see that we now need to provide the full set of IPs per vRPA.
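Having walked through all five layouts, the address arithmetic is easy to lose track of, so here is a small sanity-check helper: one cluster management IP plus the per-vRPA addresses described above (the option labels follow the earlier list; nothing here comes from the product itself):

```python
# Per-vRPA IP counts for the five topology options walked through above.
IPS_PER_VRPA = {"1": 1, "2A": 2, "2B": 2, "3": 3, "4": 4}

def ips_to_reserve(option: str, num_vrpas: int = 2) -> int:
    """Cluster management IP plus one set of addresses per vRPA."""
    return 1 + IPS_PER_VRPA[option] * num_vrpas

for opt in IPS_PER_VRPA:
    print(f"Option {opt}: reserve {ips_to_reserve(opt)} IPs for a 2-vRPA cluster")
```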
With the release of RP4VM 5.0, DHCP is supported for the WAN, LAN, and iSCSI interfaces, but the cluster management and iSCSI VMkernel addresses must remain static. Support is also added for dynamic changes to all interfaces, unlike previous versions of the software. RP4VM 5.0 also offers full stack support for IPv6 on all interfaces except iSCSI. Another enhancement is a reduction in the amount of configuration data which is shared across the clusters; with 5.0, only the WAN addresses of all vRPAs, the LAN addresses of vRPAs 1 and 2, the MTUs, the cluster name, and the cluster management IPs of all clusters will be shared.
Note that the boxmgmt option to retrieve settings from a remote RPA is unsupported as of 5.0. When IP addresses are provided by DHCP and an RPA reboots, the RPA will acquire an IP address from the DHCP server. If the DHCP server is not available, the RPA will not be able to return to the operational state; therefore it is recommended to supply redundant, highly available DHCP servers in the network when using the DHCP option.
Shown here on the Cluster Settings page of the Connectivity step of Deployer, we can see the option to select DHCP for individual interfaces. Notice that the DHCP icon does not appear for the Cluster Management IP address; this address must remain static in 5.0. If any of the other interfaces have their DHCP checkbox selected, the IP address and netmask fields are removed and DHCP is entered in their place. When you look at the vRPAs Settings window, you can see that the LAN is a static address while the WAN and two Data addresses are now using DHCP. Another item to note here is that IPv6 is also available for all interfaces except for iSCSI, which currently only supports IPv4. Another important consideration is that adapter changes are only supported offline using boxmgmt: a vRPA would have to be detached from the cluster, the changes made, and then reattached to the cluster.
Let us take a closer look at the main page of Deployer. In the top center we see the vCenter IP address as well as the version of the RP4VM plugin which is installed on it. Connected to that is the RP4VMs cluster, which displays the system name and the management IP address. If the + sign is clicked, the display expands to show the vRPAs which are part of the cluster.
In the wizards ribbon along the bottom of the page we find all the functions that can be performed on a cluster. On the far right of the ribbon are the wizards for the 5.0 release of Deployer, which include wizards to perform vRPA cluster network modifications, replace a vRPA, add and remove vRPAs from a cluster, and remove a vRPA cluster from a system.
Up to the release of RecoverPoint for VMs 5.0, clusters were limited to a minimum of two appliances and a maximum of eight. While such a topology makes RP4VMs clusters more robust and provides high availability, additional use cases exist where redundancy and availability are traded for cost reduction. RP4VMs 5.0 introduces support for a single-vRPA cluster to enable, for example, product evaluation of basic RecoverPoint for VMs functionality and operations, and to provide cloud service providers the ability to support a topology with a single-vRPA cluster per tenant. This scale-out model enables starting with a low-scale single-vRPA cluster and provides a simple scale-out process. This makes RP4VMs a low-footprint protection tool: it protects a small number of VMs with a minimal required footprint to reduce Disaster Recovery (DR) costs. The RecoverPoint for VMs environment allows scaling out in order to meet sudden growth in DR needs.
RP4VMs systems can contain up to five vRPA clusters. They can be in a star, partially connected, or fully connected formation, protecting VMs locally or remotely. All clusters in an RP4VMs system need to have the same number of vRPAs. RP4VMs 5.0 single-vRPA cluster deployments reduce the footprint for network, compute, and storage overhead in small to medium deployments, offering a Total Cost of Ownership (TCO) reduction.
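Because the two system-level constraints (at most five vRPA clusters, and the same vRPA count in every cluster) come up again below, here is a throwaway check that captures them; the cluster names are made up:

```python
# Illustrative check of the RP4VMs system constraints mentioned above:
# no more than five vRPA clusters, and every cluster with the same number of vRPAs.
def validate_system(clusters: dict) -> str:
    if len(clusters) > 5:
        raise ValueError("An RP4VMs system supports at most five vRPA clusters")
    if len(set(clusters.values())) > 1:
        raise ValueError(f"All clusters must have the same number of vRPAs: {clusters}")
    return "valid"

for system in ({"NY": 1, "London": 1},      # two single-vRPA clusters: fine
               {"NY": 2, "London": 1}):     # mismatched vRPA counts: rejected
    try:
        print(system, "->", validate_system(system))
    except ValueError as err:
        print(system, "->", err)
```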
Note: The single-vRPA cluster is only supported in RecoverPoint for VMs implementations.
The RP4VMs Deployer can be used for connecting up to five clusters to the RP4VMs system. All clusters in an RP4VMs system must have the same number of vRPAs. Software upgrades can be run from the Deployer. Non-disruptive upgrades are supported for clusters with two or more vRPAs. For a single-vRPA cluster, a warning shows that the upgrade will be disruptive: the replication tasks managed by this vRPA will be stopped until the upgrade is completed. The single-vRPA cluster upgrade occurs without a full sweep or journal loss. During the vRPA reboot, the Upgrade Progress report may not update and Deployer may become unavailable. When the vRPA completes its reboot, the user can log in to Deployer and observe the upgrade process to its completion. Deployer also allows vRPA cluster network modifications, such as cluster name, time zones and so on, for single-vRPA clusters. To change network adapter settings, use advanced tools such as the Deployer API or the boxmgmt interface.
The vRPA Cluster wizard in Deployer is used to connect clusters. When adding an additional cluster to an existing system, the cluster must be clean, meaning that no configuration changes, including license installations, have been made to the new cluster. Repeat the connect cluster procedure to connect additional vRPA clusters.
Note: RP4VMs only supports clusters with the same number of vRPAs in one RP4VMs system.
New Dashboard tabs in RP4VM 5.0 provide users with an overview of system health and Consistency Group status. The new tabs allow the administrator quick access to the overall RP4VM system.
To access the Dashboard, in the vSphere Web Client home page, click on the RecoverPoint for VMs icon.
New Tabs include:
• Overall Health – provides a summary of the overall system health including CG transfer status and system alerts
• Recovery Activities – displays recovery activities for copies and group sets, provides recovery-related search functions, and enables users to select appropriate next actions
• Components – displays the status of system components and a history of incoming writes or throughput for each vRPA cluster or vRPA
• Events Log – displays and allows users to filter the system events
The new Dashboard for RP4VMs includes a Recovery Activities Tab. This will allow the monitoring of any active recovery actions such as Failover, Test a Copy and Recover Production. This tab allows the user to monitor and control all ongoing recovery operations.
The RecoverPoint for VMs Dashboard includes a Components tab for viewing the status of all clusters and vRPAs managed by the vCenter Server instance. For each component selected from the System Component window on the left, relevant statistics and information will be displayed in the right window.
Beginning with RecoverPoint for VMs 5.0 there is now an automatic RP4VM Uninstaller. Running the Uninstaller removes vRPA clusters and all of their configuration entities from vCenter Servers.
For more information on downloading and running the tool, see Appendix: Uninstaller tool in the RecoverPoint for Virtual Machines Installation and Deployment User Guide.
RecoverPoint for VMs 5.0 allows the removal of a Replication set from a Consistency Group without journal loss. Removing a Replication set does not impact the ability to perform a Failover or Recover Production to a point in time before the Rset was removed. The deleted Rset will not be restored as part of that image.
The RecoverPoint system will automatically generate a bookmark indicating the Rset removal. A point to remember is that the only Replication Set of a Consistency Group cannot be removed; in other words, a CG must always retain at least one Rset.
Here we see a Consistency Group that is protecting 2 VMs. Each VM has a Local copy. Starting with RP4VMs 5.0, a user can now remove a protected VM from the Consistency Group without causing a loss of Journal history. Also, after a VM removal, the Consistency Group is still fully capable of returning to an image that contained the removed VM, using Recover Production or Failover.
Displayed is a view of the Journal Volume for the copies of a Consistency Group. There are both system-made Snapshots and user-generated Bookmarks. Notice that after a Replication Set is deleted, a Bookmark is created automatically. All the snapshots created from that point on will not include the volumes from the removed Virtual Machine.