EMC ViPR Controller 2.4 Is Out, Now With XtremIO Support

Hi,

We have just released version 2.4 of the ViPR Controller. This release adds a lot of goodness across many scenarios, and one of them is support for XtremIO XIOS 4.0/4.0.1.

I think of ViPR as a manager of managers: it allows customers with EMC and non-EMC arrays to provision volumes, create snapshots, map volumes, build DR workflows and much more, all from a true self-service portal (cloud, anyone?) with proper RBAC (role-based access control).

There’s a good high-level overview of why ViPR is such a critical component in today’s diverse data center, which you can view here:

and another good deep dive into ViPR (from an older version) which you can view here:

Now, let’s recap the changes in ViPR 2.4. Please note that I haven’t covered everything, as it’s a monster release; instead, I’ve focused on the areas that involve XtremIO (either directly or via another product that integrates with XtremIO).


Block: enhancements have been made to support ScaleIO via the REST API starting with version 1.32, multiple remote image servers with Vblock, and the management of multiple clusters with XtremIO 4.0
Object: Elastic Cloud Storage (ECS) Appliance support has been added through the Object Storage Services.


File: enhancements have been made to add the ingestion of file system subdirectories and shares, along with the discovery of Virtual Data Movers (vNAS) on the VNX, to intelligently place newly created file systems on the vNAS servers that provide better performance.


Data Protection: Cinder-discovered storage arrays are supported as VPLEX backend storage, and an administrator can increase the capacity of a RecoverPoint journal.


Product: enhancements have been made to empower the administrator to customize node names, add different volumes to a consistency group, and improve security through better password handling.

Some enhancements have been made to:
VCE Vblock: support has been added for the integration of multiple remote image servers; this reduces network latency, which benefits the installation of operating systems
ScaleIO: while the supported functionalities remain the same, ViPR Controller is able to communicate with ScaleIO version 1.32 using the REST API
XtremIO: along with support for the software version 4.0, ViPR Controller manages multiple clusters through a single XtremIO Management Service (XMS)

The following enhancements have been made relating to data protection:
VPLEX: Cinder-discovered storage arrays are now usable as VPLEX backend storage, which enables ViPR Controller to orchestrate virtual volumes from non-EMC storage arrays behind VPLEX systems. Additional enhancements are the ingestion of backend volumes and the management of migration services using VPLEX.
RecoverPoint: this release enables the administrator to optionally increase the size of a journal volume, ensuring the volumes can continue to collect logs.


Security while accessing the ViPR Controller has been improved. By default, entering the wrong password ten consecutive times causes the system to lock that station out for 10 minutes, and an administrator with REST API and/or CLI access can manage this feature (a sketch follows below). An administrator also has the capability to customize the ViPR Controller node names to meet data center specifications, which is a change from the usual “vipr1”, “vipr2” and “vipr3” naming convention.
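Since the lockout setting can be managed over the REST API, here is a minimal Python sketch of what that could look like. The GET /login call returning an X-SDS-AUTH-TOKEN header is the standard ViPR Controller authentication flow, but the property key used for the lockout value below is purely illustrative; check the ViPR Controller 2.4 REST API reference for the real key.

```python
# Minimal sketch: authenticate to ViPR Controller and update a system property.
# The "auth_login_attempts" key is a placeholder, not a documented property name.
import requests

VIPR = "https://vipr.example.local:4443"   # assumed ViPR Controller virtual IP and API port

session = requests.Session()
session.verify = False                      # lab only; use a proper CA bundle in production

# ViPR returns the auth token in the X-SDS-AUTH-TOKEN response header.
resp = session.get(f"{VIPR}/login", auth=("root", "ChangeMe"))
session.headers.update({"X-SDS-AUTH-TOKEN": resp.headers["X-SDS-AUTH-TOKEN"],
                        "Accept": "application/json"})

# Inspect the current configuration properties, then push the (hypothetical) lockout value.
current = session.get(f"{VIPR}/config/properties").json()
session.put(f"{VIPR}/config/properties",
            json={"properties": {"auth_login_attempts": "10"}})   # placeholder property key
```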
The ViPR Controller Consistency Group has been enhanced to support VPLEX, XtremIO, and other volumes. This also includes the ability to add multiple volumes to a consistency group to ensure these volumes remain at a consistent level.
The method that ViPR Controller treats existing zones is different. When an order has been placed, the infrastructure checks the fabric manager to determine whether a zone with the appropriate WWNs already exists. If yes, that zone is leveraged to process the order. If a zone does not already exist, a new zone is created to process the order. This feature makes certain that the ViPR Controller creates zones only when necessary. This enhancement is available for all installations of ViPR Controller 2.4, but it must be enabled on upgrades.
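To make that behavior concrete, here is a small illustrative Python function (not ViPR source code) that expresses the reuse-or-create decision described above.

```python
# Illustrative only: reuse an existing zone if it already contains both WWNs,
# otherwise return a definition for a new zone to be created on the fabric.
def resolve_zone(fabric_zones, initiator_wwn, target_wwn):
    for zone in fabric_zones:                          # zones reported by the fabric manager
        if {initiator_wwn, target_wwn} <= set(zone["members"]):
            return zone                                # appropriate zone exists; leverage it
    # No matching zone: create a new one for this order (the name format is made up).
    suffix = (initiator_wwn[-5:] + "_" + target_wwn[-5:]).replace(":", "")
    return {"name": f"viprzone_{suffix}",
            "members": [initiator_wwn, target_wwn]}
```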


Starting with ViPR Controller 2.4, support for XtremIO 4.0 is added, along with the management of multiple clusters through a single XtremIO Management Service (XMS). The Storage Provider page discovers the XMS along with its clusters. A user with administrative privileges is required for the ViPR Controller to integrate and manage these clusters. Additionally, XtremIO-based volumes can now be part of a Consistency Group in ViPR Controller, an operation that was unavailable to XtremIO volumes before this release.

After upgrading to ViPR Controller 2.4, ViPR Controller will create a storage provider entry for each XtremIO system that was previously registered.
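For a new deployment, registering an XMS as a storage provider through the REST API could look like the hedged sketch below. It reuses the authenticated `session` and `VIPR` values from the login sketch earlier; the endpoint, field names and the "xtremio" interface type follow the general ViPR/CoprHD storage-provider model and should be verified against the 2.4 REST API reference.

```python
# Register an XMS (and therefore all the clusters it manages) as a storage provider.
xms_provider = {
    "name": "xms-01",
    "ip_address": "xms-01.example.local",   # assumed XMS address
    "port": 443,
    "use_ssl": True,
    "user_name": "admin",                   # XMS user with administrative privileges
    "password": "changeme",
    "interface_type": "xtremio",            # assumed provider type for an XMS
}
task = session.post(f"{VIPR}/vdc/storage-providers", json=xms_provider).json()
print(task)   # discovery of the XMS and its clusters runs as an asynchronous task
```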


ViPR Controller also adds support for XtremIO 4.0 snapshots (a hedged REST sketch follows the list below). The specific supported operations are:
Read-Only: XtremIO snapshots are regular volumes and are created as writable snapshots. In order to satisfy the need for local backup and immutable copies, there is an option to create a read-only snapshot. A read-only snapshot can be mapped to an external host such as a backup application, but it is not possible to write to it.
Restore: Using a single command, it is possible to restore a production volume or a CG from one of its descendant Snapshot Sets.
Refresh: The refresh command is a powerful tool for test and development environments and for the offline processing use case. With a single command, a snapshot of the production volume or CG is taken. This allows the test and development application to work on current data without the need to copy data or to rescan.
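For reference, here is a hedged sketch of the read-only snapshot and restore operations driven through the REST API, again reusing the authenticated `session` from earlier. The URI layout follows the ViPR/CoprHD block service and the `read_only` flag name is an assumption, so confirm both against the 2.4 REST API reference.

```python
# Placeholders: substitute the real URNs returned by your own ViPR Controller.
volume_id = "urn:storageos:Volume:00000000-0000-0000-0000-000000000000:vdc1"
snapshot_id = "urn:storageos:BlockSnapshot:00000000-0000-0000-0000-000000000000:vdc1"

# Create a read-only snapshot: mappable to a backup host, but not writable.
session.post(f"{VIPR}/block/volumes/{volume_id}/protection/snapshots",
             json={"name": "nightly-backup", "read_only": True})   # flag name assumed

# Restore the production volume from one of its snapshots with a single call.
session.post(f"{VIPR}/block/snapshots/{snapshot_id}/restore")
```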



VPLEX: this section covers the use of Cinder-discovered storage arrays as VPLEX backend storage, the ingestion of backend volumes and the management of VPLEX data migration speed.
RecoverPoint: this section covers the enhanced capability of adding more capacity to a journal.
Let us first take a look at VPLEX.

ViPR Controller 2.0 started supporting a broader set of third-party block storage arrays by leveraging OpenStack’s Cinder interface and its existing drivers, and ViPR Controller 2.2 added FC multipathing support. With ViPR Controller 2.4, Cinder-discovered storage arrays can be used as VPLEX backend storage, as long as the Fibre Channel ports of both the VPLEX Local and the third-party storage array are connected to the same fabric. Most importantly, the third-party storage array must also be a supported VPLEX backend.
Check the ViPR Controller Support Matrix for the list of supported fabric managers, OpenStack operating systems and supported VPLEX backends.


Several steps are necessary to provision a virtual volume using the ViPR Controller. Adding an FC storage port (step 3 in that flow) is required because of a Cinder limitation: when ViPR Controller discovers any storage array behind Cinder, Cinder only provides ViPR Controller with one link to communicate with that array. It is recommended to add additional ports for the storage array to ensure that at least two storage ports are connected to each VPLEX director. This step only needs to be performed the first time a Cinder-discovered storage array is used; thereafter, it can be skipped (a sketch of this step follows below). Only the steps that are technically different are covered here. Let us take a look at how storage ports are added and virtual pools are created.
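Here is a hedged sketch of what that port-addition step could look like over the REST API, reusing the authenticated `session` from earlier. The endpoint and field names mirror the ViPR/CoprHD storage-port model and should be verified for your release; the WWPN and URN values are placeholders.

```python
# Add an extra FC storage port to a Cinder-discovered array so that at least two
# ports reach each VPLEX director.
array_id = "urn:storageos:StorageSystem:00000000-0000-0000-0000-000000000000:vdc1"
extra_port = {
    "name": "SPA-FC1",
    "transport_type": "FC",
    "port_network_id": "50:00:09:73:00:18:95:19",   # WWPN of the additional array port
}
session.post(f"{VIPR}/vdc/storage-systems/{array_id}/storage-ports", json=extra_port)
# Repeat once per additional port; this is only needed the first time the array is used.
```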


First, the OpenStack server must be added as a Storage Provider; the process to do this is the same as before. In this example there are three storage providers: two VPLEX systems and the OpenStack host, identified as a Third-Party Block Storage Provider. When the OpenStack host is added to ViPR Controller for southbound integration, any storage arrays configured inside the Cinder configuration are automatically identified; in this case, five storage arrays.
Due to the Cinder limitation, only one storage port is identified per storage array; however, there are ways within the ViPR Controller to add more storage ports.


Beginning with ViPR Controller 2.4, a Migration Services option, which leverages VPLEX, is introduced in the Service Catalog. The two tasks that can be performed within Migration Services are data migration and migration from VPLEX Local to VPLEX Metro.
In order to leverage data migration, all volumes must already have been created through the VPLEX cluster and the ViPR Controller. If a volume was created through the VPLEX cluster but not through ViPR Controller, it must first be ingested into ViPR Controller for management before it can be migrated.
With migration from VPLEX Local to VPLEX Metro, the virtual volume is simply converted from a local volume to a distributed volume, thus improving its availability across two VPLEX clusters instead of one.


In the VPLEX Data Migration page, the options are project, virtual pool, operation, target virtual pool and volume. Two options play a key role in the data migration task: Operation and Target Virtual Pool. Operation specifies the type of data migration while Target
Virtual Pool specifies the destination of the volume being migrated.


The speed of the migration can be configured under Controller Configurations within ViPR Controller > VPLEX > Data Migration Speed. The value can be set to one of the following: lowest, low, medium, high and highest. The transfer size can be 128 KB, 2 MB, 8 MB, 16 MB or 32 MB.
Note: If the migration value is changed during a migration operation, the newly-changed value will take effect on future migration operations. The current operation is not impacted.
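The five speed settings presumably map one-to-one onto the five transfer sizes; the pairing below is my assumption from the two lists above rather than a documented table, so confirm it in the VPLEX/ViPR documentation.

```python
# Assumed pairing of Data Migration Speed setting to VPLEX transfer size.
DATA_MIGRATION_SPEED = {
    "lowest":  "128 KB",
    "low":     "2 MB",
    "medium":  "8 MB",
    "high":    "16 MB",
    "highest": "32 MB",
}
```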


Prior to ViPR Controller 2.4, VPLEX volume ingestion was only being performed for the virtual volume, not for the other components including the storage from the backend
storage arrays. With ViPR Controller 2.4, this framework is improved by adding ingestion for backend volumes, clones/full copies and mirrors/continuous copies of unmanaged VPLEX volumes. With this improvement, the volumes become ViPR Controller-managed volumes along with their associated snapshots and copies.
Note: For the most up-to-date information on supported VPLEX backend arrays inside of the ViPR Controller, please refer to the ViPR Controller Support Matrix in EMC Support Zone.


Now let us take a look at the RecoverPoint related enhancements in this release.


Prior to ViPR Controller 2.4, a RecoverPoint journal created within the ViPR Controller was a single volume, with no way to increase the journal size from within the ViPR Controller. 80% of the journal volume is used to keep track of changes, so in a busy environment the journal could fill quickly.
ViPR Controller 2.4 now gives the ViPR Controller administrator the option to increase the journal capacity. Using the Add Journal Capacity option within the Block Protection Services category, an administrator can increase the journal by selecting the appropriate project, consistency group, copy name (the RecoverPoint volume name), virtual array and virtual pool. The new capacity can either be based on the pre-defined calculations detailed in the RecoverPoint Administration Guide or be defined by the data center administrator.
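As a rough illustration of why journals fill up and how much capacity you might need, here is a back-of-the-envelope Python calculation based on the 80% figure above. The real sizing formula, with its additional reserves, is in the RecoverPoint Administration Guide; treat this only as a sketch with made-up input numbers.

```python
# Rough journal sizing sketch: only ~80% of the journal holds the change log,
# so the raw journal capacity must be the required log space divided by 0.8.
change_rate_mb_s = 20        # assumed sustained write rate of the protected volumes
rollback_window_h = 24       # assumed required protection (rollback) window

log_space_gb = change_rate_mb_s * rollback_window_h * 3600 / 1024
journal_gb = log_space_gb / 0.8
print(f"~{journal_gb:.0f} GB of journal capacity needed")   # ~2109 GB in this example
```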


ViPR Controller 2.4 enhances the way a Consistency Group operates. For VPLEX systems, to ensure that all virtual volumes reach and remain at a consistent level, volumes from different backend storage arrays can be part of the same consistency group. For RecoverPoint, ViPR Controller is able to process multiple provisioning requests against the same Consistency Group at the same time. For XtremIO, with the support of version 4.0, XtremIO volumes can be added to or deleted from a Consistency Group, and snapshots can be taken of, or deleted from, a Consistency Group.
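A hedged sketch of those XtremIO consistency group operations over the REST API follows, reusing the authenticated `session` from earlier. The update payload shape is an assumption based on the ViPR/CoprHD block service, so verify it against the 2.4 REST API reference.

```python
# Placeholders: use the real URNs from your environment.
cg_id = "urn:storageos:BlockConsistencyGroup:00000000-0000-0000-0000-000000000000:vdc1"
vol_id = "urn:storageos:Volume:00000000-0000-0000-0000-000000000000:vdc1"

# Add an XtremIO volume to the consistency group (payload shape assumed).
session.put(f"{VIPR}/block/consistency-groups/{cg_id}",
            json={"add_volumes": {"volume": [vol_id]}})

# Take a snapshot of every volume in the group with a single call.
session.post(f"{VIPR}/block/consistency-groups/{cg_id}/protection/snapshots",
             json={"name": "cg-snap-001"})
```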


As part of this release, enhancements have been made to the plug-ins that ViPR Controller works with. The enhancements relate to the vRO workflow, while there were no changes to Microsoft’s SCVMM and VMware’s vROps/vCOps plug-ins. Let us look into how the workflow has been enhanced.


Prior to ViPR Controller 2.4, the vRO configurator was used whenever the vRO administrator wanted to add ViPR Controller configuration. While this was convenient, it also meant that every time something was updated in the ViPR Controller, the service needed to be restarted, which impacted the availability of the plug-in during the restart. With ViPR Controller 2.4, the ViPR Controller configuration, along with tenant and project configuration, is moved from the vRO configurator to a vRO workflow, so there is no longer any need to restart the service. Both of these enhancements make the EMC ViPR Plug-in for vRO more efficient to use and minimize the need to restart the service.

For vRO users who have upgraded to ViPR Controller 2.4, a message appears when the user accesses the VMware vRealize Orchestrator Configuration, indicating that the ViPR Controller configuration has been moved to the vRO workflow. In the vRealize Orchestrator management interface, the configuration folder is selected to show the vRO workflow. Before vRO can be used, the administrator must decide to either proceed with the existing ViPR Controller configuration details or update the ViPR Controller configuration.
– By selecting to proceed with the existing ViPR Controller configuration, the plug-in continues to work; however, tenants, projects and virtual arrays will need to be added manually.
– By choosing to update the ViPR Controller configuration, the user is presented with a series of screens to input the ViPR Controller details before being able to use vRO.

Here’s a demo of provisioning and decommissioning a volume from XtremIO 4.X using ViPR Controller 2.4:
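For readers who prefer code to video, the same provision-and-decommission flow expressed against the REST API could look roughly like this self-contained sketch. All URNs, credentials and hostnames are placeholders, and the endpoints follow the ViPR/CoprHD block service, so double-check them against the official 2.4 REST API reference before relying on them.

```python
# End-to-end sketch: log in, create an XtremIO-backed volume, then decommission it.
import requests

VIPR = "https://vipr.example.local:4443"      # assumed ViPR Controller virtual IP and API port

session = requests.Session()
session.verify = False                         # lab only
resp = session.get(f"{VIPR}/login", auth=("root", "ChangeMe"))
session.headers.update({"X-SDS-AUTH-TOKEN": resp.headers["X-SDS-AUTH-TOKEN"],
                        "Accept": "application/json"})

# Provision: the vpool URN should point at a virtual pool backed by the XtremIO cluster.
create_req = {
    "name": "xtremio-demo-vol",
    "size": "10GB",
    "count": 1,
    "project": "urn:storageos:Project:00000000-0000-0000-0000-000000000000:vdc1",
    "varray": "urn:storageos:VirtualArray:00000000-0000-0000-0000-000000000000:vdc1",
    "vpool": "urn:storageos:VirtualPool:00000000-0000-0000-0000-000000000000:vdc1",
}
task = session.post(f"{VIPR}/block/volumes", json=create_req).json()

# ... export the volume to a host, run the workload, then unexport it ...

# Decommission: deactivate (delete) the volume once it is no longer exported.
volume_id = "urn:storageos:Volume:00000000-0000-0000-0000-000000000000:vdc1"
session.post(f"{VIPR}/block/volumes/{volume_id}/deactivate")
```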

 
