AppSync is software that enables Integrated Copy Data Management (iCDM) with Dell EMC’s primary storage systems.
AppSync simplifies and automates the process of generating and consuming copies of production data. By abstracting the underlying storage and replication technologies, and through deep application integration, AppSync empowers application owners to satisfy copy demand for operational recovery and data repurposing on their own. In turn, storage administrators need only be concerned with initial setup and policy management, resulting in an agile, frictionless environment.
AppSync automatically discovers application databases, learns the database structure, and maps it through the Virtualization Layer to the underlying storage LUN. It then orchestrates all the activities required from copy creation and validation through mounting at the target host and launching or recovering the application. Supported workflows also include refresh, expire, and restore production.
AppSync supports the following applications and storage arrays:
· Applications — Oracle, Microsoft SQL Server, Microsoft Exchange, VMware VMFS datastores, VMware NFS datastores, and NFS File systems.
· Storage — VMAX, VMAX 3, VMAX All Flash, PowerMax, VNX (Block and File), VNXe (Block and File), XtremIO, ViPR Controller, VPLEX, Dell SC, and Unity (Block and File).
AppSync 3.9 includes the following new features and enhancements:
XtremIO Quality of Service (QoS)
· Applying or modifying a QoS policy on the target device created by AppSync as part of copy creation.
· Applying or modifying a QoS policy during AppSync on-demand/on job mounts.
AppSync 3.9 now has the ability to associate mounted copies with an XtremIO QoS policy
XtremIO arrays allow users to enable user-defined service levels for IOPS on array resources such as LUNs
– This allows the end user more control of the array network and storage resources
You can define a template or QoS policy on the array with the following attributes:
– The maximum IOPS limit, or (maximum I/O size + maximum number of I/Os), on the array
– Burst percentage
– Type (fixed or adaptive)
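The policy attributes above can be modeled roughly as follows. This is a hypothetical sketch in Python, not AppSync or XtremIO code; the class and field names are illustrative only:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QosPolicy:
    """Illustrative model of an XtremIO QoS policy as described above."""
    name: str
    policy_type: str                      # "fixed" or "adaptive"
    burst_percentage: int                 # allowed burst above the limit
    max_iops: Optional[int] = None        # either a flat IOPS cap...
    max_io_size_kb: Optional[int] = None  # ...or a size + count pair
    max_num_ios: Optional[int] = None

    def __post_init__(self):
        if self.policy_type not in ("fixed", "adaptive"):
            raise ValueError("type must be 'fixed' or 'adaptive'")
        pair = self.max_io_size_kb is not None and self.max_num_ios is not None
        # Exactly one form of limit must be given: IOPS cap, or size + count.
        if (self.max_iops is not None) == pair:
            raise ValueError("set either max_iops or (max_io_size_kb + max_num_ios)")
```

The validation mirrors the "either/or" wording above: a policy carries one limit form, plus the burst percentage and a fixed or adaptive type.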
Once a QoS policy is configured on the array, AppSync can leverage it
AppSync sets QoS only on the target devices it creates; it cannot alter production LUN QoS policies
AppSync cannot leverage QoS policies for CGs (consistency groups) or IGs (initiator groups)
AppSync can only apply or change the QoS policy on fresh mounts. To modify it, unmount and remount with the new desired policy
QoS policies are updated in AppSync when the array is rediscovered
– This includes newly created and deleted policies
The QoS policy is applied after adding the target LUNs to the initiator group
If the QoS operation fails, the mount operation will complete with a warning
QoS is always applied at the snapshot volume level
– For Silver, the QoS policy is applied on the target CG of the replication session
If the QoS policy is renamed on the array, it doesn’t affect AppSync, because AppSync persists the policy by UUID, not by name
QoS objects will be removed from the AppSync database when an XtremIO array is removed from AppSync
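Because AppSync persists policies by UUID rather than display name, a rename on the array is transparent to existing associations. A minimal sketch of that idea (hypothetical, not AppSync code):

```python
import uuid

# Persist policies keyed by UUID, not by display name.
policies = {}
policy_id = str(uuid.uuid4())
policies[policy_id] = {"name": "gold", "max_iops": 10000}

# A rename on the array changes only the display name; the UUID key is
# stable, so existing copy-to-policy associations still resolve.
policies[policy_id]["name"] = "gold-renamed"
```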
XtremIO Native Replication Restore
· Support for restoring source volumes with remote XtremIO copies created using native replication. Supported only on XtremIO 6.2 and later.
Native replication remote restore is supported on XtremIO XMS 6.2+
In protection workflows (service plans), only the read-only copies are restored
– Any changes made to a mounted copy (the RW copy) will be discarded
– This is different from the local copy workflow
In repurposing workflows, either read-only or modified copies are supported
– The user can select either option
– There is no difference between the local and remote copy workflows
All prior XtremIO native bookmarks will be deleted as part of the remote restore process
The process consists of four phases: Validation, Unmount, Restore, and Mount
Validation Phase
NR restore fails if the session is in test-copy mode, error, or inconsistent state
NR restore fails if direction isn’t source-to-target
NR restore fails if any of the remote bookmarks captured together is not intact
– Example: Bookmarks of Oracle data CG and log CG
If restore is from a remote repurpose RW copy, AppSync validates that the linked CG copy is valid
Unmount Phase
Source application is unmounted from the source host and storage is detached
Source LUNs will not be unmapped from the production host IG
Restore Phase
Check the state of the replication session and fail if it is invalid; otherwise maintain its current state
Failover using the target bookmark, after which the direction is target-to-source
Wait until failover completes
When restoring from the RW repurposed copy, AppSync first refreshes the target CG using the remote linked CG
Failback; afterward, the replication session is returned to its original direction
Wait until failback completes
Mount Phase
Re-attach the LUNs on the production host
Mount and recover the application
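The four phases above can be sketched as a simple orchestration sequence. This is hypothetical Python for illustration, including the Validation-phase checks listed earlier; none of these functions are real AppSync APIs:

```python
def validate(session):
    """Validation phase: fail fast on the conditions listed above."""
    if session["state"] in ("test-copy", "error", "inconsistent"):
        raise RuntimeError("invalid session state: " + session["state"])
    if session["direction"] != "source-to-target":
        raise RuntimeError("replication direction must be source-to-target")
    if not all(bm["intact"] for bm in session["bookmarks"]):
        raise RuntimeError("all remote bookmarks captured together must be intact")

def nr_restore(session):
    """Run the four phases in order; return the steps performed."""
    steps = []
    validate(session)
    steps.append("unmount")    # unmount app, detach storage; LUNs stay mapped
    steps.append("failover")   # failover using the target bookmark
    session["direction"] = "target-to-source"
    steps.append("failback")   # failback returns the original direction
    session["direction"] = "source-to-target"
    steps.append("mount")      # re-attach LUNs, mount and recover the app
    return steps
```

The direction flips model the failover/failback behavior described above: target-to-source during the restore, back to source-to-target once failback completes.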
Oracle enhancements:
· Support for Oracle Container Databases (CDBs).
Supports discovering, mapping, protecting, and repurposing Oracle CDBs
– Same workflows supported as standalone databases
– RecoverPoint environments are not currently qualified/supported
PDBs are viewed by selecting the CDB, then clicking the Show PDBs button
Requirements:
– AppSync requires that all PDBs of a CDB reside on the same storage array
– All PDBs within a CDB must be in Read-Write mode for hot backup
This includes all PDB instances on all RAC nodes; otherwise, the hot backup option cannot be used
PDB$SEED is the exception and is always protected – mandatory for recovery purposes
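The hot-backup requirement can be verified with a query like `SELECT name, open_mode FROM v$pdbs` on each RAC node. A sketch of the resulting rule, with PDB$SEED excepted as noted (hypothetical helper, not AppSync code):

```python
def hot_backup_eligible(pdb_modes):
    """pdb_modes: {pdb_name: open_mode}, e.g. as reported by v$pdbs on a node.

    Hot backup requires every PDB to be READ WRITE; PDB$SEED is the
    exception (it is normally READ ONLY and is always protected anyway).
    """
    return all(mode == "READ WRITE"
               for name, mode in pdb_modes.items()
               if name != "PDB$SEED")
```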
Affected entities:
– AppSync detects affected entities at the CDB level, for example:
If pdb1 on CDB1 shares a filesystem with pdb2 on CDB2, then when AppSync attempts to restore CDB1, it detects the second CDB as an affected entity as well, requiring that it be shut down manually before a restore
It is best not to share filesystems between different PDBs/CDBs
Limitations:
– Workflows at the PDB granularity level are not supported – everything happens at the CDB layer
· Support for Oracle Flex ASM-aware node selection.
AppSync now has the ability to discover Flex ASM instances
AppSync requires at least one available RAC node of an ASM instance in order to place the database in hot backup mode
– If an ASM instance is down on one node, AppSync will detect another node for a running ASM instance, and then set it for further processing
Limitations:
– AppSync only supports creating copies on nodes where both the database and ASM instances are available
– Mount and recovery works only if the ASM instance is up and running
If standalone, ASM must be up and running on at least that one node
If using Grid, an ASM instance is able to run on any one of the RAC nodes, so long as cluster services are running on all nodes
Deselect any node during mount that does not have both database and ASM instances running
Note: Only nodes with live database and ASM instances are supported.
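Node selection under Flex ASM therefore reduces to intersecting the set of nodes running a database instance with the set running an ASM instance. An illustrative helper (not AppSync internals):

```python
def eligible_nodes(db_up, asm_up):
    """Return nodes where both a database instance and an ASM instance run.

    db_up, asm_up: iterables of node names with the respective instance up.
    Only such nodes may be used for copy creation, mount, and recovery;
    any other node should be deselected during mount.
    """
    return sorted(set(db_up) & set(asm_up))
```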
· Support for ASM 4KB sector disk drives.
AppSync supports logical volumes with an underlying 4K sector configuration, with the following limitations:
– Supported on Linux with Oracle ASM (filesystems are not supported)
– Supported on Dell SC when creating 4K sectors in emulation mode
– Supported on XtremIO when application hosts are physical, or iSCSI connected using native mode
– Supported only with Oracle 12c R2+
There is a new event message warning of a potential mount failure in case of an unsupported environment, such as when running Oracle 12.1 or 11gR2:
“AppSync detected ASM diskgroup sector size as 4096 for Oracle version {VersionNumber}. Mount of the diskgroup might fail with inconsistent sector size error”
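The warning corresponds to a check along these lines: the diskgroup sector size can be read from `v$asm_diskgroup.sector_size`, and 4K sectors are only supported from Oracle 12.2 onward. A hypothetical sketch of that check (the function and message formatting are illustrative):

```python
def check_sector_size(oracle_version, sector_size):
    """Return a warning string for 4K diskgroups on pre-12.2 Oracle, else None.

    oracle_version: (major, minor) tuple, e.g. (12, 1).
    sector_size: bytes, e.g. from v$asm_diskgroup.sector_size.
    """
    if sector_size == 4096 and oracle_version < (12, 2):
        return ("AppSync detected ASM diskgroup sector size as 4096 for Oracle "
                "version {}.{}. Mount of the diskgroup might fail with "
                "inconsistent sector size error".format(*oracle_version))
    return None
```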
TCE enhancements:
· Provides an option to unlink a VMAX target device when a SnapVX snapshot is unmounted.
· Provides an option to consolidate or delete old Postgres datastore.log files.
Delete old datastore.log files (SER:6630)
– AppSync now removes datastore.log files when they are over a month old
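The cleanup behavior amounts to deleting log files older than roughly 30 days. A generic sketch of the same idea with `pathlib` (illustrative only, not the AppSync implementation; the directory and age threshold are assumptions):

```python
import time
from pathlib import Path

def delete_old_logs(log_dir, max_age_days=30):
    """Delete datastore.log* files older than max_age_days; return their names."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for f in Path(log_dir).glob("datastore.log*"):
        if f.stat().st_mtime < cutoff:
            f.unlink()
            removed.append(f.name)
    return removed
```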
Removed “A reboot of Exchange may be required after installation of the host plugin” from UI and documentation (SER:6620)
– AppSync stops services that use certain dll files when installing/removing the AppSync plug-in on MS Exchange hosts
Option to unlink VMAX target device when a SnapVX snapshot is unmounted – Service Plan Settings
Previously, the unlink operation occurred only during copy expiration
Only supported when copy is linked in no-copy mode (snapshot)
– Not applicable when copy is linked in copy mode (clone)
Option is presented in on-demand, service plan, and second gen repurpose
Not supported for 1st Gen
Does not allow restore of modified target devices
– A warning is displayed
Option to unlink VMAX target device when a SnapVX snapshot is unmounted – Repurposing 2nd Gen
The AppSync User and Administration Guide provides more information on the new features and enhancements.