Dell EMC AppSync 3.1 Is Out, Here’s What’s New!

This post was written by Marco Abela and Itzik Reich

The AppSync 3.1 release (GA on 12/14/16) includes major new features for enabling iCDM use cases with XtremIO. Let’s take a look at what is new for XtremIO:


Crash consistent SQL copies


-Protects the database without using VSS or VDI, relying on the array to create crash-consistent copies


The AppSync 3.0 release introduced a “Non VDI” feature, which creates copies of SQL Server using the Microsoft VSS freeze/thaw framework at the filesystem level. With AppSync 3.1, selecting “Crash-Consistent” skips the application-level freeze/thaw entirely, reducing the overhead of creating the copy of your SQL Server by using only XVC. This is equivalent to taking a snapshot of the relevant volumes from the XMS.

Restrictions/Limitations:
If VNX/VNXe/Unity, the subscribed SQL databases must all be part of the same CG or LUN Group
Mount with recovery will work only with the “Attach Database” Recovery Type option
When restoring, only the “No Recovery” type is supported
assqlrestore.exe is not supported for “crash-consistent” SQL copies
Transaction log backup is not supported
Not supported on ViPR

SQL Cluster Mount


Concern Statement: AppSync does not provide the ability to copy SQL clusters using mount points, nor to mount to alternate mount paths – only to the same path as the original source.
Solution: Ability to mount to alternate paths, including mounting back to the same production cluster as a cluster resource under an alternate clustered SQL instance, and also to mount multiple copies simultaneously as clustered resources on a single mount host.
Restrictions/Limitations:
For mount points, the root drive must be present and must already be a clustered physical disk resource
Cannot use the default mount path (i.e. C:\AppSyncMounts), as C:\ is the system drive


FileSystem Enhancements:

Repurposing support for File Systems:
This new version introduces the repurpose workflow for File Systems compatible with AppSync. This is very exciting for supporting iCDM use cases for applications for which AppSync does not have direct application support, allowing you to create copies for test/dev use cases using a combination of filesystem freeze/thaw and scripts (if needed) as part of the AppSync repurpose flow.

Nested Mount Points, Path Mapping, and FileSystem with PowerHA are also key new FileSystem enhancements. For a summary of what these mean, see below:


Repurpose Workflows for File Systems
Concern Statement: The repurposing workflow does not support file systems, only Oracle and SQL. Epic and other file system users need to be able to utilize the repurposing workflow.
Solution: Enable the repurpose flow (wizard) for File Systems on all AppSync supported operating systems and storage, including RecoverPoint Local and Remote.

Unlike SQL and Oracle repurposing, multiple file systems can be repurposed together – as seen in the screenshot.


Repurpose Workflows RULES for File Systems
When a copy is refreshed after performing an on-demand mount, AppSync unmounts the mounted copy, refreshes it (creates a new copy and, only on successful creation of the new one, expires the old copy), and mounts the copy back to the mount host with the same mount options
Applicable to all storage types except for XtremIO volumes that are in a consistency group (see point below)
Not applicable for RecoverPoint repurposing
RecoverPoint repurposing on-demand mounts are not re-mounted
With VNX/RP-VNX, you cannot repurpose from a mounted copy
With VMAX2/RP-VMAX2, you cannot repurpose from a gen1 copy if the gen1 copy created is a snap
When using XtremIO CGs, the copy is unmounted (application unmount only – no storage cleanup), the storage is refreshed, and the copy is mounted (application mount only) back to the mount host with the same mount options
Repurposing NFS file systems and/or Unity environments is not supported.

Repurpose Workflows RULES for File Systems (continued)
File system repurposing workflows support the same rules as SQL and Oracle, such as:
Gen 1 copies are the only copies considered application consistent
Restores from Gen 1 through RecoverPoint (intermediary copy) or SRDF are not supported
Restores from Gen 2 are not supported
Callout scripts use the Label field
Freeze and thaw callouts are only supported for Gen 1 copies
Unmount callouts are supported for Gen 1 and Gen 2 copies
Example:
appsync_[freeze/thaw/unmount]_filesystem_<label_name>
If the number of file systems exceeds the allowable storage units (12 by default, defined in server settings for each storage type), the operation will fail
“max.vmax.block.affinity.storage.units” for VMAX arrays
“max.vnx.block.affinity.storage.units” for VNX arrays
“max.common.block.affinity.storage.units” for VPLEX, Unity, and XtremIO arrays
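To make the callout naming convention concrete, here is a minimal sketch of how a script name could be derived for a given phase and copy label (only the appsync_<phase>_filesystem_<label> convention comes from AppSync; the helper itself is purely illustrative):

```python
# Illustrative only: build the callout script name AppSync looks for,
# following the appsync_<phase>_filesystem_<label> convention described above.
VALID_PHASES = ("freeze", "thaw", "unmount")

def callout_script_name(phase: str, label: str) -> str:
    """Return e.g. 'appsync_freeze_filesystem_gold' for phase='freeze', label='gold'."""
    if phase not in VALID_PHASES:
        raise ValueError(f"unknown callout phase: {phase!r}")
    return f"appsync_{phase}_filesystem_{label}"

print(callout_script_name("freeze", "gold"))  # appsync_freeze_filesystem_gold
```

Remember that freeze/thaw callouts apply only to Gen 1 copies, while unmount callouts apply to both Gen 1 and Gen 2.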


Persistent Filesystem Mount
Concern Statement: File systems mounted with AppSync are not persistent upon host reboot.
Solution: Offer persistent filesystem mount options so that filesystems mounted by AppSync are automatically remounted after a host reboot.
Applies to all file systems, including those which support Oracle DB copies
For AIX, ensure the mount setting in /etc/filesystems on the source host is set to TRUE (AppSync uses the same settings as on the source)
For Linux, AppSync modifies the /etc/fstab file:
Entries include the comment “# line added by AppSync Agent”
Unmounting within AppSync removes the entry
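For example, a persistent mount on Linux might leave an /etc/fstab entry along these lines (the device path and mount point here are hypothetical, and the exact layout may differ; only the marker comment text comes from AppSync):

```
# line added by AppSync Agent
/dev/mapper/mpathcopy01  /appsync-copies/prod_fs1  ext4  defaults  0 0
```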

Feature: Nested mount points
Benefit: Ability to copy and correctly mount, refresh, and restore nested-mount production environments, eliminating the current restriction.

Feature: Path Mapping support
Benefit: Ability to mount specific application copies to a location specified by the user. This eliminates the restriction of allowing mounts on the original path or default path only.

Feature: FS Plan with PowerHA cluster
Benefit: AppSync becomes aware of node failover within a PowerHA cluster, so that all supported use cases work seamlessly in a failover scenario. Previously, when a failover happened on one cluster node and the file system was activated on another, AppSync did not follow the file system; for that reason this configuration was not supported.

Feature: Repurpose Flow with FS Plan
Benefit: The popular Repurpose Wizard becomes available with the file system plan. This is supported on all AppSync supported OS and storage, including RecoverPoint.

The combination of all these new FS enhancements enables iCDM use cases for…

Epic

That’s right… As XtremIO has become the second product worldwide to be distinguished as “Very Common”, with more than 20% of Epic customers and counting, we have worked with the AppSync team to enable iCDM use cases for Epic. The filesystem enhancements above help enable these use cases, and further demonstrate the robustness XtremIO provides for Epic software.

Staggering Volume Deletes:
In order to avoid possible temporary latency increases, which can be caused by the massive deletion of multiple volumes/snapshots with a high change rate, AppSync introduces logic to delete XtremIO volumes at a rate of one volume every 60 seconds. This logic is disabled by default, and should be enabled only in the rare circumstance where this increased latency is observed.

The cleanup thread is triggered every 30th minute of the hour (by default, once an hour)
The cleanup thread’s starting hour, minute, and time delay can all be configured

In order to enable this, access the AppSync server settings by going to http://APPSYNC_SERVER_IP:8085/appsync/?manageserversettings=true, then Settings → AppSync Server Settings → Manage All Settings, and change the value of “maint.xtremio.snapcleanup.enable” from “false” to “true”.
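The throttling itself is internal to AppSync, but conceptually the staggered deletion behaves like this minimal sketch (the function names are hypothetical placeholders, not AppSync code):

```python
import time

# Conceptual sketch of staggered snapshot deletion: expire one volume,
# wait, then expire the next, instead of deleting everything at once.
DELETE_INTERVAL_SECONDS = 60  # one volume every 60 seconds, per the post

def staggered_delete(volumes, delete_volume, sleep=time.sleep):
    """delete_volume stands in for the array-side delete call."""
    for i, vol in enumerate(volumes):
        delete_volume(vol)
        if i < len(volumes) - 1:  # no delay needed after the last delete
            sleep(DELETE_INTERVAL_SECONDS)
```

Spreading the deletes out like this keeps the array from absorbing a burst of metadata cleanup work in one shot, which is exactly the latency spike the feature is designed to avoid.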

Limitations:
All file systems from a single VG must be mounted and unmounted together (applies to both nested and non-nested mount points)

XtremIO CG support for repurpose workflow:

The repurpose flow is now aware of applications laid out on XtremIO using Consistency Groups:

– For Windows applications (SQL, Exchange, filesystems), all application components (e.g. database files, log files, etc.) must be part of the same CG – one and only one, not spanning multiple CGs – for AppSync to use CG-based API calls.


– For Oracle, all database and control files must be part of one CG, and the archive logs must be part of another CG.

What is the benefit of this? Using XtremIO Virtual Copies (XVC) to their full potential for the quickest operation time. This reduces application freeze time, as well as the overall length of the copy creation and the later refresh process. During the refresh operation, the status screen will tell you whether AppSync was able to use a CG-based refresh:

 

With CG:


You will notice the status screen mentioning that the refresh was done “..using the CG..”

To analyze this a little further, let’s look at the REST calls issued to the XMS:

The snap operation was done with a single API call specifying to snap the CG:

2016-12-13 18:49:32,189 – RestLogger – INFO – rest_server::log_request:96 – REST call: <POST /v2/types/snapshots HTTP/1.1> with args {} and content {u'snap-suffix': u'.snap.20161213_090905.g1', u'cluster-id': u'xbricksc306', u'consistency-group-id': u'MarcoFS_CG'}

And the refresh, specifying a refresh from the Consistency Group to the snapshot set through a single API call:

2016-12-13 18:50:23,426 – RestLogger – INFO – rest_server::log_request:96 – REST call: <POST /v2/types/snapshots HTTP/1.1> with args {} and content {u'from-consistency-group-id': u'MarcoFS_CG', u'cluster-id': u'xbricksc306', u'to-snapshot-set-id': u'SnapshotSet.1481647772251'}
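Reconstructed from the two log lines above, the equivalent calls could be scripted roughly like this. The XMS hostname is hypothetical and authentication/TLS handling is omitted; only the endpoint path and payload keys are taken from the logged requests:

```python
import json
import urllib.request

XMS_URL = "https://xms.example.com/v2/types/snapshots"  # hypothetical XMS host

def snap_cg_body(cluster_id: str, cg_id: str, snap_suffix: str) -> bytes:
    """Payload to snap an entire consistency group in one call."""
    return json.dumps({
        "cluster-id": cluster_id,
        "consistency-group-id": cg_id,
        "snap-suffix": snap_suffix,
    }).encode()

def refresh_from_cg_body(cluster_id: str, cg_id: str, snapshot_set_id: str) -> bytes:
    """Payload to refresh an existing snapshot set from the consistency group."""
    return json.dumps({
        "cluster-id": cluster_id,
        "from-consistency-group-id": cg_id,
        "to-snapshot-set-id": snapshot_set_id,
    }).encode()

def build_request(body: bytes) -> urllib.request.Request:
    req = urllib.request.Request(XMS_URL, data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    return req  # in practice, send with urllib.request.urlopen(req) plus credentials
```

The point to notice is that both operations target the whole CG with a single POST, rather than issuing one call per volume.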

Without CG:

You will receive a message in the status screen stating that the volumes have not been laid out in a CG, or not laid out as specified in the prerequisites above.

Windows Server 2016 support


Both as AppSync server & agent
Clustering support still pending qualification (check with ESM at time of GA)
Microsoft Edge as a browser is not supported for AppSync GUI


vSphere 6.5 tolerance support (ESXi 6.5) – no new functionality.


Path Mapping


Concern Statement: AppSync currently does not allow users to modify the root path of a Mount operation – a limitation that Replication Manager does not have.
Solution: Specify a complete Mount Path for the application copy being mounted. Change the root path for a Mount Host (specify a substitution), so that the layout is replicated using the substitution.
Unix Example:
/prd01 can become /sup01
Windows Examples:
E:\ can become H:\
F:\ can become G:\FolderName

Limitations:

Mounting Windows file system examples:
When E:\ is configured to become H:\
Ensure that the H:\ IS NOT already mounted
When E:\ is configured to become H:\SomeFolder (e.g. H:\myCopy)
Ensure that the H: drive is already available, as AppSync relies on the root mount drive letter existing in order to mount the myCopy directory (in this example), which holds what was originally the E:\ drive and its contents
When no target mount path is configured, AppSync mounts as “Same as original path”
In this case, the job fails if Mount on Server is set to: Original Host (E:\ cannot mount back as E:\)
Currently supported on File System Service Plans and Repurposing file systems only
Path mapping is not supported with Oracle, Exchange or SQL


Progress Dialog Monitor

Concern Statement: When the progress dialog is closed, users have to go to the event window to look for updates, manually refreshing the window.
Solution: Allow users to view and monitor the Service Plan run progress through the GUI by launching the progress dialog window, which updates automatically.


Oracle Database Restart


Concern Statement: After a mount host reboot, the Oracle database does not automatically restart
Solution: Offers the ability to bring an AppSync-recovered Oracle database on a mount host back to the same state as before the reboot
Unlike the conventional /etc/oratab entry, AppSync creates a startup script
/etc/asdbora on AIX
/etc/init.d/asdbora on Linux
A symlink to this script, named S99asdbora, is found under /etc/rc.d
Configurable through the AppSync UI (disabled by default)
Not available for RMAN or “Mount on standalone server and prepare scripts for manual database recovery”


File System Service Plan with AIX PowerHA


Concern Statement: Currently AIX PowerHA clustered environments support Oracle only – no support for clustered file systems. Epic, and other file system plan users, need support for file system clustering.
When a failover happens, AppSync does not follow the file system
Solution: Support PowerHA environments utilizing file system Service Plans and Repurposing workflows
AppSync will become aware of the node failover within PowerHA cluster so that all supported use cases work seamlessly in a failover scenario
Setup Details:
1. Add/register all nodes of the PowerHA cluster to AppSync before registering the virtual IP
2. The virtual IP (Service label IP/name) resource must be configured in the resource group for the clustered application, as well as added/registered to AppSync (as if it were another physical node)
Each resource group must have a unique Service label IP
3. Protect the file systems belonging to that particular resource group, rather than protecting the file systems by navigating the nodes
Note: Volume groups are not concurrent after a restore; you must manually make them concurrent.

Oracle Protection on PowerHA Changes
Previously: Setting up Oracle with PowerHA only involved adding both nodes
There was no need for a servicelabelIP/Name, as Oracle told AppSync which active node to manage
Application mapping is configured through the Application (Oracle) Service Plan, and not through the node, as a file system service plan would be configured
AppSync 3.1: Now requires registering the servicelabelIP/Name of the Oracle DB resource group
Similar to configuring AppSync to support file system service plans under PowerHA
Add all physical nodes, and then register the servicelabelIP/Name
Configure the Oracle service plan using the servicelabelIP/Name
Repurposing uses the servicelabelIP/Name
If the servicelabelIP/Name is not registered, AppSync will not discover the Oracle databases
If upgrading from a previous version to 3.1, the servicelabelIP/Name must be registered, otherwise the job fails with an appropriate message – no reconfiguration is required, simply register the servicelabelIP/Name

