How to use “ndmpcopy” in clustered Data ONTAP 8.2.x

Introduction

“ndmpcopy” in clustered Data ONTAP has two modes:

  1. node-scope-mode : you need to track the volume’s location if a volume move is performed
  2. vserver-scope-mode : no issues, even if the volume is moved to a different node

In this scenario I’ll use vserver-scope-mode to perform an “ndmpcopy” within the same cluster and the same SVM.
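
You can check which mode the cluster is in before starting (a quick sketch; the exact status wording may vary by release):

snowy-mgmt::*> system services ndmp node-scope-mode status
NDMP node-scope-mode is disabled.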

In my test I copied a 1GB file to a new folder under the same volume.

Log in to the cluster

snowy-mgmt::> set diag -rows 0
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y

List of volumes on SVM “snowy”

snowy-mgmt::*> vol show -vserver snowy
(volume show)
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
snowy     SNOWY62_vol001 snowy01_hybdata_01 online RW 1TB    83.20GB   91%
snowy     SNOWY62_vol001_sv snowy02_hybdata_01 online DP 1TB 84.91GB   91%
snowy     HRauhome01   snowy01_hybdfc_01 online RW       100GB    94.96GB    5%
snowy     rootvol      snowy01_hybdfc_01 online RW        20MB    18.88MB    5%
4 entries were displayed.

snowy-mgmt::*> df -g SNOWY62_vol001
Filesystem               total       used      avail capacity  Mounted on                 Vserver
/vol/SNOWY62_vol001/ 972GB       3GB       83GB      91%  /SNOWY62_vol001       snowy
/vol/SNOWY62_vol001/.snapshot 51GB 0GB     51GB       0%  /SNOWY62_vol001/.snapshot  snowy
2 entries were displayed.

snowy-mgmt::*> vol show -vserver snowy -fields volume,junction-path
(volume show)
vserver volume              junction-path
------- ------------------- --------------------
snowy   SNOWY62_vol001 /SNOWY62_vol001
snowy   SNOWY62_vol001_sv -
snowy   HRauhome01          /hrauhome01
snowy   rootvol             /
4 entries were displayed.

Create an “ndmpuser” with the role “backup”

snowy-mgmt::*> security login show
Vserver: snowy
Authentication                  Acct
UserName         Application Method         Role Name        Locked
---------------- ----------- -------------- ---------------- ------
vsadmin          ontapi      password       vsadmin          yes
vsadmin          ssh         password       vsadmin          yes

Vserver: snowy-mgmt
Authentication                  Acct
UserName         Application Method         Role Name        Locked
---------------- ----------- -------------- ---------------- ------
admin            console     password       admin            no
admin            http        password       admin            no
admin            ontapi      password       admin            no
admin            service-processor password admin            no
admin            ssh         password       admin            no
autosupport      console     password       autosupport      yes
8 entries were displayed.

snowy-mgmt::*> security login create -username ndmpuser -application ssh -authmethod password -role backup -vserver snowy-mgmt
Please enter a password for user 'ndmpuser':
Please enter it again:

snowy-mgmt::*> vserver services ndmp generate-password -vserver snowy-mgmt -user ndmpuser
Vserver: snowy-mgmt
User: ndmpuser
Password: Ip3gRJchR0FGPLA7

Turn on the “ndmp” service on the cluster management SVM

snowy-mgmt::*> vserver services ndmp on -vserver snowy-mgmt
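
To confirm the service is enabled (a quick check; the output below is illustrative):

snowy-mgmt::*> vserver services ndmp show -vserver snowy-mgmt
Vserver       Enabled   Authentication type
------------- --------- -------------------
snowy-mgmt    true      challenge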

In the nodeshell, initiate “ndmpcopy”

snowy-mgmt::*> node run -node snowy-01
Type 'exit' or 'Ctrl-D' to return to the CLI

snowy-01> ndmpcopy
usage:
ndmpcopy [<options>] <source> <destination>
<source> and <destination> are of the form [<filer>:]<path>
If an IPv6 address is specified, it must be enclosed in square brackets

options:
[-sa <username>:<password>]
[-da <username>:<password>]
    source/destination filer authentication
[-st { text | md5 }]
[-dt { text | md5 }]
    source/destination filer authentication type
    default is md5
[-l { 0 | 1 | 2 }]
    incremental level
    default is 0
[-d]
    debug mode
[-f]
    force flag, to copy system files
[-mcs { inet | inet6 }]
    force specified address mode for source control connection
[-mcd { inet | inet6 }]
    force specified address mode for destination control connection
[-md { inet | inet6 }]
    force specified address mode for data connection
[-h]
    display this message
[-p]
    accept the password interactively
[-exclude <value>]
    exclude the files/dirs from backup path

snowy-01>
snowy-01> ndmpcopy -sa ndmpuser:Ip3gRJchR0FGPLA7 -da ndmpuser:Ip3gRJchR0FGPLA7 10.10.2.72:/snowy/SNOWY62_vol001/TestFil2_002 10.10.2.72:/snowy/SNOWY62_vol001/destination
Ndmpcopy: Starting copy [ 14 ] ...
Ndmpcopy: 10.10.2.72: Notify: Connection established
Ndmpcopy: 10.10.2.72: Notify: Connection established
Ndmpcopy: 10.10.2.72: Connect: Authentication successful
Ndmpcopy: 10.10.2.72: Connect: Authentication successful
Ndmpcopy: 10.10.2.72: Log: DUMP: creating "/snowy/SNOWY62_vol001/../snapshot_for_backup.7" snapshot.
Ndmpcopy: 10.10.2.72: Log: DUMP: Using Partial Volume Dump of selected subtrees
Ndmpcopy: 10.10.2.72: Log: DUMP: Using snapshot_for_backup.7 snapshot
Ndmpcopy: 10.10.2.72: Log: DUMP: Date of this level 0 dump: Tue Apr  5 08:27:43 2016.
Ndmpcopy: 10.10.2.72: Log: DUMP: Date of last level 0 dump: the epoch.
Ndmpcopy: 10.10.2.72: Log: DUMP: Dumping /snowy/SNOWY62_vol001/TestFil2_002 to NDMP connection
Ndmpcopy: 10.10.2.72: Log: DUMP: mapping (Pass I)[regular files]
Ndmpcopy: 10.10.2.72: Log: DUMP: Reading file names from NDMP.
Ndmpcopy: 10.10.2.72: Log: DUMP: mapping (Pass II)[directories]
Ndmpcopy: 10.10.2.72: Log: DUMP: estimated 1050638 KB.
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass III) [directories]
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass IV) [regular files]
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:27:52 2016: Begin level 0 restore
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:27:54 2016: Reading directories from the backup
Ndmpcopy: 10.10.2.72: Log: RESTORE: Warning: /vol/SNOWY62_vol001/destination/ will not be restored as a qtree: exists as a normal subdirectory.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Could not create qtree `/vol/SNOWY62_vol001/destination/'. Creating a regular directory instead.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:27:55 2016: Creating files and directories.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:27:55 2016: Writing data to files.
Ndmpcopy: 10.10.2.72: Log: ACL_START is '1075865600'
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass V) [ACLs]
Ndmpcopy: 10.10.2.72: Log: DUMP: 1050657 KB
Ndmpcopy: 10.10.2.72: Log: DUMP: DUMP IS DONE
Ndmpcopy: 10.10.2.72: Log: RESTORE: RESTORE IS DONE
Ndmpcopy: 10.10.2.72: Notify: restore successful
Ndmpcopy: 10.10.2.72: Log: DUMP: Deleting "/snowy/SNOWY62_vol001/../snapshot_for_backup.7" snapshot.
Ndmpcopy: 10.10.2.72: Log: DUMP_DATE is '5754776159'
Ndmpcopy: 10.10.2.72: Notify: dump successful
Ndmpcopy: Transfer successful [ 0 hours, 1 minutes, 26 seconds ]
Ndmpcopy: Done

Although I used the cluster-mgmt LIF in the ndmpcopy syntax, I didn’t see any traffic flowing on that LIF.

snowy-mgmt::*> statistics show-periodic -node cluster:summary -object lif:vserver -instance snowy-mgmt -counter instance_name|recv_data|sent_data -interval 1
snowy-mgmt: lif:vserver.snowy-mgmt: 4/5/2016 08:27:20
instance     recv     sent   Complete     Number of
name     data     data  Aggregation  Constituents
-------- -------- -------- ----------- -------------
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1

Another “ndmpcopy” job with a different statistics command:

snowy-01> ndmpcopy -sa ndmpuser:Ip3gRJchR0FGPLA7 -da ndmpuser:Ip3gRJchR0FGPLA7 10.10.2.72:/snowy/SNOWY62_vol001/TestFil1_001 10.10.2.72:/snowy/SNOWY62_vol001/destination
Ndmpcopy: Starting copy [ 15 ] ...
Ndmpcopy: 10.10.2.72: Notify: Connection established
Ndmpcopy: 10.10.2.72: Notify: Connection established
Ndmpcopy: 10.10.2.72: Connect: Authentication successful
Ndmpcopy: 10.10.2.72: Connect: Authentication successful
Ndmpcopy: 10.10.2.72: Log: DUMP: creating "/snowy/SNOWY62_vol001/../snapshot_for_backup.9" snapshot.
Ndmpcopy: 10.10.2.72: Log: DUMP: Using Partial Volume Dump of selected subtrees
Ndmpcopy: 10.10.2.72: Log: DUMP: Using snapshot_for_backup.9 snapshot
Ndmpcopy: 10.10.2.72: Log: DUMP: Date of this level 0 dump: Tue Apr  5 08:30:40 2016.
Ndmpcopy: 10.10.2.72: Log: DUMP: Date of last level 0 dump: the epoch.
Ndmpcopy: 10.10.2.72: Log: DUMP: Dumping /snowy/SNOWY62_vol001/TestFil1_001 to NDMP connection
Ndmpcopy: 10.10.2.72: Log: DUMP: mapping (Pass I)[regular files]
Ndmpcopy: 10.10.2.72: Log: DUMP: Reading file names from NDMP.
Ndmpcopy: 10.10.2.72: Log: DUMP: mapping (Pass II)[directories]
Ndmpcopy: 10.10.2.72: Log: DUMP: estimated 1050638 KB.
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass III) [directories]
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass IV) [regular files]
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:30:47 2016: Begin level 0 restore
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:30:49 2016: Reading directories from the backup
Ndmpcopy: 10.10.2.72: Log: RESTORE: Warning: /vol/SNOWY62_vol001/destination/ will not be restored as a qtree: exists as a normal subdirectory.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Could not create qtree `/vol/SNOWY62_vol001/destination/'. Creating a regular directory instead.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:30:50 2016: Creating files and directories.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:30:50 2016: Writing data to files.
Ndmpcopy: 10.10.2.72: Log: ACL_START is '1075865600'
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass V) [ACLs]
Ndmpcopy: 10.10.2.72: Log: DUMP: 1050657 KB
Ndmpcopy: 10.10.2.72: Log: DUMP: DUMP IS DONE
Ndmpcopy: 10.10.2.72: Log: DUMP: Deleting "/snowy/SNOWY62_vol001/../snapshot_for_backup.9" snapshot.
Ndmpcopy: 10.10.2.72: Log: RESTORE: RESTORE IS DONE
Ndmpcopy: 10.10.2.72: Notify: restore successful
Ndmpcopy: 10.10.2.72: Log: DUMP_DATE is '5754776336'
Ndmpcopy: 10.10.2.72: Notify: dump successful
Ndmpcopy: Transfer successful [ 0 hours, 1 minutes, 20 seconds ]
Ndmpcopy: Done
snowy-01>

snowy-mgmt::*> statistics show-periodic -object lif -instance snowy-mgmt:cluster_mgmt -counter instance_name|recv_data|sent_data -interval 1
snowy-mgmt: lif.snowy-mgmt:cluster_mgmt: 4/5/2016 08:30:10
instance     recv     sent   Complete     Number of
name     data     data  Aggregation  Constituents
-------- -------- -------- ----------- -------------
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a

Data ONTAP 8.3 New Features

Unsupported features

Support for 7-Mode
32-bit data
FlexCache volumes
Volume guarantee of file
SSLv2 no longer supported for web services in the cluster
Deprecated core dump commands
Deprecated vserver commands
Deprecated system services ndmp commands
Deprecation of health dashboard

New platform and hardware support

Support for 6-TB capacity disks
Support for Storage Encryption on SSDs

Manageability enhancements

Support for server CA and client types of digital certificates
Enhancements for the SP
Enhancements for managing the cluster time
Enhancements for feature license management – system feature-usage show-summary, system feature-usage show-history
OnCommand System Manager is now included with Data ONTAP – https://cluster-mgmt-LIF
Access of nodeshell commands and options in the clustershell
Systemshell access now requires diagnostic privilege level
FIPS 140-2 support for HTTPS
Audit log for the nodeshell now included in AutoSupport messages
Changes to core dump management
Changes to management of the “autosupport” role and account
Support for automated nondisruptive upgrades
Active Directory groups can be created for cluster and SVM administrator user accounts
Changes to cluster setup
Enhancements to AutoSupport payload support
New and updated commands for performance data
Enhancements for Storage QoS – Data ONTAP now supports cache monitoring and autovolume workloads, and it provides an improved system interface to define workloads and control them. Starting with Data ONTAP 8.3, the system automatically assigns an autovolume workload to each volume, and the volume statistics are available in the configuration data. New qos statistics volume commands enable you to monitor Storage QoS at the volume level instead of the aggregate level (a command sketch follows this section).
Enhancements to continuous segment cleaning
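
A minimal sketch of the volume-level QoS monitoring mentioned above (volume and SVM names are hypothetical):

cluster::> qos statistics volume performance show -volume vol001 -vserver svm1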

MetroCluster enhancements

Support for MetroCluster configurations

Networking and security protocol enhancements

Default port roles no longer automatically defined – Prior to Data ONTAP 8.3, default roles were assigned to each network port during configuration of the cluster. Port roles are now defined when configuring the LIF that is hosted on the port.
Support for multi-tenancy – Beginning with Data ONTAP 8.3, you can create IPspaces to enable a single storage system to support multiple tenants. That is, you can configure the system to be accessed by clients from more than one disconnected network, even if those clients are using the same IP addresses. An IPspace defines a distinct IP address space in which Storage Virtual Machines (SVMs) reside. Ports and IP addresses defined for an IPspace are applicable only within that IPspace. A distinct routing table is maintained for each SVM within an IPspace; therefore, no cross-SVM or cross-IPspace traffic routing occurs. A default IPspace is created when the initial cluster is created. You create additional IPspaces when you need SVMs to have their own storage, administration, and routing.
IPv6 support enhancements
New networking features – Beginning with Data ONTAP 8.3, new networking features have been added to simplify the configuration of IPspaces and to manage pools of IP addresses (a command sketch follows below).
The network ipspace commands enable you to create and manage additional IPspaces on your cluster. Creating an IPspace is an initial step in creating the infrastructure required for setting up multi-tenancy.
The network port broadcast-domain commands enable you to configure and manage ports in a broadcast domain. When you create a broadcast domain, a failover group is automatically created for the ports in the broadcast domain.
The network subnet commands enable you to allocate specific blocks, or pools, of IP addresses for your Data ONTAP network configuration. When creating LIFs, an address can be assigned from the subnet, and a default route to a gateway is automatically added to the associated SVM.
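
A minimal sketch of these commands (all names and addresses are hypothetical):

cluster::> network ipspace create -ipspace ips_tenant1
cluster::> network port broadcast-domain create -ipspace ips_tenant1 -broadcast-domain bd_tenant1 -mtu 1500 -ports node1:e0c,node2:e0c
cluster::> network subnet create -ipspace ips_tenant1 -subnet-name sn_tenant1 -broadcast-domain bd_tenant1 -subnet 192.168.10.0/24 -gateway 192.168.10.1 -ip-ranges 192.168.10.50-192.168.10.80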

Storage resource management enhancements

Performance increase for random read operations
Root-data partitioning available for entry-level and AFF platforms
Support for Flash Pool SSD partitioning
Support for Storage Encryption on SSDs
New disk name format – Starting in Data ONTAP 8.3, disk names have a new format that is based on the physical location of the disk. The disk name no longer changes depending on the node from which it is accessed.
Flash Pool caching of compressed data for read operations
Increased maximum cache limits
Change to the default action taken when a volume move cutover occurs
Support for caching policies on Flash Pool aggregates
Infinite Volume enhancements
Inline detection on deduplication-enabled volumes to identify blocks of zeros
FlexClone file and FlexClone LUN enhancements

FlexArray Virtualization (V-Series) enhancements

Support for MetroCluster configuration with array LUNs
Support for Data ONTAP installation on a system that uses only array LUNs
Support for load distribution over multiple paths to an array LUN
Support for two FC initiator ports connecting to a single target port

File access protocol enhancements

Changes to the way clustered Data ONTAP advertises DFS capabilities
Support for netgroup-by-host searches for NIS and LDAP
Capability to configure the number of group IDs allowed for NFS users
Capability to configure the ports used by some NFSv3 services
New commands for managing protocols for SVMs
LDAP support for RFC2307bis
New limits for local UNIX users, groups, and group members
LDAP client configurations now support multiple DNs
Support for qtree exports
New commands for configuring and troubleshooting name services
Enhancements for Kerberos 5
Capability for NFS clients to view the NFS exports list using “showmount -e” (illustrated after this section)
Support for Storage-Level Access Guard
Support for FPolicy passthrough-read for offline data
Support for auditing CIFS logon and logoff events
Support for additional group policy objects
Support for Dynamic Access Control and central access policies
Support for auditing central access policy staging events
Support for NetBIOS aliases for CIFS servers
Support for AES encryption security for Kerberos-based communication
Support for setting the minimum authentication security level
Capability to determine whether SMB sessions are signed
Support for scheduling CIFS servers for automatic computer account password changes
Support for configuring character mapping for SMB file name translation
Support for additional share parameters and new share properties
Support for configuring bypass traverse checking for SMB users
Enhancements for managing home directories
Support for closing open files and SMB sessions
Support for managing shares using Microsoft Management Console
Support for additional CIFS server options
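
As an illustration of the showmount item above, an NFS client can now list an SVM’s exports against one of its data LIFs (the address and output are illustrative):

# showmount -e 192.168.10.50
Export list for 192.168.10.50:
/        (everyone)
/vol001  (everyone)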

SAN enhancements

Support for IPspaces in SAN
Enhancement to reduce the number of paths from host to LUNs
Support for moving LUNs across volumes

Data protection enhancements

MetroCluster support
Support for network compression for SnapMirror and SnapVault
Capability to reset certain SEDs to factory settings
Support for greater number of cluster peers
Support for restoring files or LUNs from a SnapVault secondary
Storage Encryption support for IPv6
Support for version-flexible replication
Support for cluster peer authentication
Support for single sign-on (SSO) authentication
Support for secure NDMP
Support for SMTape
Support for KMIP 1.1

Hybrid cloud enhancements

Introducing Cloud ONTAP for Amazon Web Services (AWS)

New and changed features in OnCommand System Manager 8.3

Accessing System Manager 8.3
Storage pools
Enhanced aggregate management
Enhancements in the Disks window
Networking simplicity
Broadcast domains – You can create broadcast domains to provide a logical division of a computer network. Broadcast domains enable you to group network ports that belong to the same datalink layer. The ports in the group can then be used to create network interfaces for data traffic or management traffic.
Subnets – You can create a subnet to provide a logical subdivision of an IP network to pre-allocate the IP addresses. A subnet enables you to create network interfaces more easily by specifying a subnet instead of an IP address and network mask values for each new interface.
IPv6 support – You can use IPv6 addresses on your cluster for various operations such as configuring LIFs, cluster peering, configuring DNS, configuring NIS, and configuring Kerberos.
Enhanced Storage Virtual Machine (SVM) setup wizard
BranchCache configuration
LUN move
Authenticated cluster peering
Qtree exports
Service Processors
Version-flexible mirror relationship – You can create a mirror relationship that is independent of the Data ONTAP version running on the source and destination clusters.
Snapshot policies – You can create Snapshot policies at the SVM level, which enables you to create Snapshot policies for a specific SVM.

How to clean Snapmirrors in clustered Data ONTAP

This blog post lists the procedure used to clean up SnapMirror relationships in clustered Data ONTAP. In this scenario, the destination cluster has a large number of volumes that are SnapMirror targets.


Source Cluster: SrcCluster
Destination Cluster: DstCluster
Source SVM: SrcVsv
Destination SVM: DstVsv

Use the snapmirror show command to display source and destination paths

    DstCluster::*> snapmirror show -source-cluster SrcCluster
    Source Destination Mirror Relationship Total Last
    Path Type Path State Status Progress Healthy Updated
    ----------- ---- ------------ ------- -------------- --------- ------- --------
    SrcVsv:src_vol_c0165 DP DstVsv:SrcCluster_src_vol_c0165r Snapmirrored Idle - false -
    SrcVsv:src_vol_c0180 DP DstVsv:SrcCluster_src_vol_c0180r Snapmirrored Idle - false -
    SrcVsv:src_vol_c0282 DP DstVsv:SrcCluster_src_vol_c0282r Snapmirrored Idle - false -
    SrcVsv:src_vol_c0284 DP DstVsv:SrcCluster_src_vol_c0284r Snapmirrored Idle - false -
    SrcVsv:src_vol_c0286 DP DstVsv:SrcCluster_src_vol_c0286r Snapmirrored Idle - true -
    SrcVsv:src_vol_c0313 DP DstVsv:SrcCluster_src_vol_c0313r Snapmirrored Idle - true -

Use the snapmirror quiesce command to quiesce the snapmirror relations

    DstCluster::*> snapmirror quiesce -source-cluster SrcCluster -destination-path DstVsv:*
    Operation succeeded: snapmirror quiesce for destination "DstVsv:SrcCluster_src_vol_c0165r".
    Operation succeeded: snapmirror quiesce for destination "DstVsv:SrcCluster_src_vol_c0180r".
    Operation succeeded: snapmirror quiesce for destination "DstVsv:SrcCluster_src_vol_c0282r".
    Operation succeeded: snapmirror quiesce for destination "DstVsv:SrcCluster_src_vol_c0284r".
    Operation succeeded: snapmirror quiesce for destination "DstVsv:SrcCluster_src_vol_c0286r".
    Operation succeeded: snapmirror quiesce for destination "DstVsv:SrcCluster_src_vol_c0313r".

Break the snapmirror relations

    DstCluster::*> snapmirror break -source-cluster SrcCluster -destination-path DstVsv:*
    [Job 18594] Job succeeded: SnapMirror Break Succeeded
    [Job 18595] Job succeeded: SnapMirror Break Succeeded
    [Job 18596] Job succeeded: SnapMirror Break Succeeded
    [Job 18597] Job succeeded: SnapMirror Break Succeeded
    [Job 18598] Job succeeded: SnapMirror Break Succeeded
    [Job 18599] Job succeeded: SnapMirror Break Succeeded
    [Job 18600] Job succeeded: SnapMirror Break Succeeded

[ ON THE SOURCE CLUSTER ] : Use the snapmirror list-destinations command to check the valid snapmirror relations

    SrcCluster::> snapmirror list-destinations -destination-vserver DstVsv
    Source Destination Transfer Last Relationship
    Path Type Path Status Progress Updated Id
    ----------- ----- ------------ ------- --------- ------------ ---------------
    SrcVsv:src_vol_c0165 DP DstVsv:SrcCluster_src_vol_c0165r - - - a58a0def-2c61-11e4-b66e-123478563412
    SrcVsv:src_vol_c0282 DP DstVsv:SrcCluster_src_vol_c0282r - - - a5cbb158-2c61-11e4-8fb5-123478563412
    SrcVsv:src_vol_c0284 DP DstVsv:SrcCluster_src_vol_c0284r Idle - - a908f9f0-2c61-11e4-a8c1-123478563412
    SrcVsv:src_vol_c0286 DP DstVsv:SrcCluster_src_vol_c0286r Idle - - a773c018-2c61-11e4-8fb5-123478563412
    SrcVsv:src_vol_c0313 DP DstVsv:SrcCluster_src_vol_c0313r Idle - - aaaa528e-2c61-11e4-8fb5-123478563412

Issue the snapmirror release command from the source cluster

    SrcCluster::> snapmirror release -destination-vserver DstVsv -destination-volume *
    [Job 74850] Job succeeded: SnapMirror Release Succeeded
    [Job 74851] Job succeeded: SnapMirror Release Succeeded
    [Job 74852] Job succeeded: SnapMirror Release Succeeded
    [Job 74853] Job succeeded: SnapMirror Release Succeeded
    [Job 74854] Job succeeded: SnapMirror Release Succeeded
    [Job 74855] Job succeeded: SnapMirror Release Succeeded

Delete the snapmirror relations from the destination cluster

    DstCluster::*> snapmirror delete -source-vserver SrcVsv -destination-path DstVsv:*
    Operation succeeded: snapmirror delete for the relationship with destination "DstVsv:SrcCluster_src_vol_0043r".
    Operation succeeded: snapmirror delete for the relationship with destination "DstVsv:SrcCluster_src_vol_0048r".
    Operation succeeded: snapmirror delete for the relationship with destination "DstVsv:SrcCluster_src_vol_0070r".
    Operation succeeded: snapmirror delete for the relationship with destination "DstVsv:SrcCluster_src_vol_0082r".

The NetApp PowerShell Toolkit can be very handy to automate a large number of tasks

    Import the Data ONTAP module in a PowerShell command window:
    Import-Module DataONTAP
    Connect to the source cluster:
    Connect-NcController SrcCluster -Credential admin
    Save a list of source volumes in a file called srcvolumes.txt. Import the volumes into the variable $volumes and loop through each volume, deleting snapshots whose names match "snapmirror*":
    $volumes = Get-Content C:\scripts\srcvolumes.txt
    foreach ($vol in $volumes) {Get-NcSnapshot $vol snapmi* | Remove-NcSnapshot -IgnoreOwners -Confirm:$false}
    Save a list of destination volumes in a file called dstvolumes.txt. Import the volumes into the variable $volumes, then unmount, offline, and destroy each destination volume:
    $volumes = Get-Content C:\scripts\dstvolumes.txt
    foreach ($vol in $volumes) {Dismount-NcVol $vol -VserverContext DstVsv -Confirm:$false}
    foreach ($vol in $volumes) {Set-NcVol $vol -Offline -VserverContext DstVsv -Confirm:$false}
    foreach ($vol in $volumes) {Remove-NcVol $vol -VserverContext DstVsv -Confirm:$false}
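
    Finally, you can confirm from PowerShell that no relationships remain (a quick sketch; the properties shown are illustrative):
    Connect-NcController DstCluster -Credential admin
    Get-NcSnapmirror | Select-Object SourceLocation,DestinationLocation,MirrorState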

NetApp FAS storage Head Swap Procedure

INTRODUCTION

This document contains the verification checklists for the head swap procedure.

Current Serial Numbers:
700000293005 – WALLS1
700000293017 – WALLS2

New Serial Numbers:
700002090378 – new_WALLS1
700002090366 – new_WALLS2

Current SYSID
0151745322 – WALLS1
0151745252 – WALLS2

New SYSID
2016870400 – new_WALLS1
2016870518 – new_WALLS2

Current WALLS1
slot 1 OK: X2065A: PMC PM8001; PCI-E quad-port SAS (PM8003)
slot 2 OK: X2065A: PMC PM8001; PCI-E quad-port SAS (PM8003)
slot 3 OK: X1938A: Flash Cache 512 GB
slot 4 OK: X1107A: Chelsio S320E 2x10G NIC

Current WALLS2
sysconfig: slot 1 OK: X2065A: PMC PM8001; PCI-E quad-port SAS (PM8003)
sysconfig: slot 2 OK: X2065A: PMC PM8001; PCI-E quad-port SAS (PM8003)
sysconfig: slot 3 OK: X1938A: Flash Cache 512 GB
sysconfig: slot 4 OK: X1107A: Chelsio S320E 2x10G NIC

Current WALLS1
RLM Status
IP Address:         10.43.6.88
Netmask:            255.255.255.0
Gateway:            10.43.6.1

Current WALLS2
RLM Status
IP Address:         10.43.6.120
Netmask:            255.255.255.0
Gateway:            10.43.6.1

New Licenses
Keep a copy of the new licenses

PRE UPGRADE PROCEDURE

Make a note of the following before performing the head swap activity (a command sketch follows this list):

  1. Old controllers' serial numbers
  2. Old controllers' system IDs
  3. New controllers' serial numbers
  4. New controllers' system IDs
  5. Location of the expansion (PCI, PCIe) cards on the old controllers
  6. Label the cables attached to the old controller heads
  7. Make a note of the Remote LAN Module (RLM) or Service Processor (SP) IP addresses
  8. Set up a serial connection to the controllers to view console messages
  9. Make a note of the licenses for the new systems
  10. Make sure the network adapters on the new controllers match the locations on the old controllers. If not, you have to modify the /etc/rc file to make the new ports active.
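
Most of this information can be captured from the old controllers before the swap (a quick sketch of 7-Mode commands; log the console session):

WALLS1> sysconfig -a      (serial number, system ID, expansion card slots)
WALLS1> sysconfig -r      (disk and RAID layout)
WALLS1> rlm status        (RLM IP address, netmask, gateway)
WALLS1> license           (installed licenses)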

STEPS FOR HEAD SWAP NETAPP FAS STORAGE SYSTEM

Tools required before and after head-swap

  1. Grounding strap
  2. #2 Phillips screwdriver

New FAS storage system Installation and Setup

  1. Power on the new heads with a console attached; check the ONTAP version on the new controllers. It should match the current version on the old controllers; depending on that, you can downgrade or upgrade the ONTAP version.
  2. The steps below are followed to upgrade ONTAP on the new controllers to match the old controllers.
  3. Download the ONTAP version from:

http://www.now.netapp.com

  4. Take a backup of the system files from the old controller (a capture sketch follows the list):

/etc/hosts
/etc/rc
/etc/cifs_homedir.cfg
/etc/exports
/etc/snapmirror.conf (only on destination filers)
/etc/resolv.conf
/etc/hosts.equiv
/etc/nsswitch.conf
/etc/quotas
/etc/usermap.cfg
/etc/tapeconfig
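
These files can be captured from a logged console session; for example, rdfile prints a file's contents (a minimal sketch):

WALLS1> rdfile /etc/rc
WALLS1> rdfile /etc/hosts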

  5. Trigger an autosupport:
     options autosupport.doit "Pre Head Swap"

  6. Disable autosupport:
     options autosupport.enable off

  7. Disable the cluster:
     cf disable

  8. Keep the ONTAP software in the /etc/software directory and install it:
     software install <ontap_software>
     Note: don't run the download command yet.
     Make sure the network adapters on the new controllers match the locations on the old controller. If not, you have to modify the /etc/rc file to make the new ports active.

  9. If SnapMirror or SnapVault relations exist on this system, update them on all the volumes before system shutdown (a sketch of the commands follows this step).

  10. Halt the system:
      halt
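
A minimal 7-Mode sketch of those updates, run on the SnapMirror destination and SnapVault secondary (all names are hypothetical):

dstfiler> snapmirror update dstfiler:dst_vol001
secfiler> snapvault update /vol/sv_vol001/qt001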

  11. Power down the controller head and then all the disk shelves, one at a time.
  12. Remove the power cables, network cables, and SAS cables from the old controller.
  13. Remove the old controller from the rack unit and replace it with the new controller.
      Mount and screw as required on the rack.

  14. Attach the SAS cables to the disk shelves, the network cables to the network ports, and the power cords to the PSUs.
  15. Power on the disk shelves one at a time (wait about 1 minute until all the disks have spun up properly and the green LEDs are stable).
  16. Power up the new controller.
  17. Hit Ctrl-C when asked for an option to enter the boot menu.
  18. Select option 5 (maintenance mode).
  19. Check whether all the disks are visible to the system:
      disk show -v

  20. Assign the disks to this controller:
      disk show -v (keep a copy of all the disks for future reference)
      disk reassign -s <old_sysid> -d <new_sysid> or disk assign all (change the system IDs)
      disk show -v

  21. Clear the mailboxes by entering the following commands:
      mailbox destroy local
      mailbox destroy partner

  22. Halt the system:
      halt

  23. At the loader prompt, verify date -u against another controller in production.

  24. Boot the system in normal mode:
      bye or boot_ontap

  25. Once the system has booted, install ONTAP:
      download

  26. Reboot the system:
      reboot

  27. Enable the cluster:
      cf enable

  28. Verify the HA pair is set up correctly:
      disk show -a (storage show disk -p will also tell you if MPHA is enabled)

  29. Add the licenses for the new controller (a sketch follows this step). Licenses required: a-sis, nearstore_option, sv_ontap_sec.
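
In 7-Mode, license codes are added one at a time (the code below is a placeholder, not a real key):

WALLS1> license add ABCDEFG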

  30. Enable and trigger an autosupport:
      options autosupport.enable on
      options autosupport.doit "Post Head Swap"

  31. Perform SP setup:
      sp setup

Testing Plan:

  1. Connect to the /etc directory on the new controller via NFS or CIFS and browse the contents
  2. Connect to other CIFS or NFS shares on the new controller
  3. Run the snapvault/snapmirror updates once the head swap is completed
  4. Restore a test file from a volume/snapshot
  5. Check network connectivity
  6. Check the ONTAP version and shelf firmware versions

Backout procedure:

  1. Power down the new controller and attach the old controller; recable the old controller back to the way it was
  2. Bring the old controller down to its CFE or loader prompt
  3. Swap the PCI/PCIe cards back
  4. Reboot the system