Category Archives: 2015

Zoning on Cisco FC Switches


* A group of ports or WWNs on a Cisco FC switch is referred to as a ‘zone’
* A collection of multiple zones is called a ‘zoneset’
* Multiple zonesets can exist in a switch at any time; however, only one zoneset can be active


1. Change to config mode.

    fabsw1# config
    Enter configuration commands, one per line. End with CNTL/Z.

2. Create a new zone.

    fabsw1(config)# zone name vs_v3070_7b vsan 1

3. Add ports to this zone, and exit zone configuration.

    fabsw1(config-zone)# member pwwn 50:00:1f:e1:50:01:81:e8
    fabsw1(config-zone)# exit

4. Switch to zoneset configuration mode. Here, the VSAN is the unique ID associated with the zoneset.

    fabsw1(config)# zoneset name ZONESET_V1 vsan 1

5. Add new zone to the zone set.

    fabsw1(config-zoneset)# member vs_v3070_7b

6. Exit to normal mode and check if the new zone is added to zoneset.

    fabsw1(config-zoneset)# end
    fabsw1# show zoneset
    zoneset name ZONESET_V1 vsan 1

7. From config mode, execute the following command to activate the new zoneset:

    fabsw1(config)# zoneset activate name ZONESET_V1 vsan 1

8. Save the running configuration as startup configuration.

    fabsw1# copy running-config startup-config
    [########################################] 100%

9. Use the following command to verify:

    fabsw1# show running-config
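For reference, the whole sequence above can be emitted as one paste-ready block. A minimal shell sketch (not a Cisco tool, just a generator), using the example zone, zoneset, and pWWN from the steps:

```shell
# Sketch: emit the NX-OS zoning sequence from the steps above as one
# paste-ready block. The names and pWWN are the examples used in this post.
ZONE=vs_v3070_7b
ZONESET=ZONESET_V1
VSAN=1
PWWN=50:00:1f:e1:50:01:81:e8

CMDS=$(cat <<EOF
config
zone name $ZONE vsan $VSAN
member pwwn $PWWN
exit
zoneset name $ZONESET vsan $VSAN
member $ZONE
exit
zoneset activate name $ZONESET vsan $VSAN
end
copy running-config startup-config
EOF
)
echo "$CMDS"
```

The output can be pasted into the switch session, or fed to the switch over a scripted SSH session.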

Zoning on Brocade FC Switches

1. Telnet/SSH to the Brocade switches and log in as user admin

switch 1 – fabric Prd
switch 2 – fabric Prd

2. Create aliases for the host’s HBAs on each switch

    switch_1> alicreate "srv16_HBA_0","10:00:00:c9:2f:a1:7a"
    switch_2> alicreate "srv16_HBA_1","10:00:00:c9:2f:a1:7b"

3. Create the storage zones using the aliases on each switch

    switch_1> zonecreate "Z_srv16_A", "srv16_HBA_0; storage port1"
    switch_2> zonecreate "Z_srv16_B", "srv16_HBA_1; storage port1"

Check the configuration

    switch_1> zoneshow Z_srv16_A
    switch_2> zoneshow Z_srv16_B

4. Add the zones to the fabrics (active configuration) on each switch

    switch_1> cfgshow FAB_A
    switch_1> cfgadd "FAB_A","Z_srv16_A"
    switch_1> cfgsave
    switch_1> cfgenable FAB_A

Check the configuration

    switch_1> zoneshow

    switch_2> cfgshow FAB_B
    switch_2> cfgadd "FAB_B","Z_srv16_B"
    switch_2> cfgsave
    switch_2> cfgenable FAB_B

Check the configuration

    switch_2> zoneshow
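As with the Cisco example, the per-fabric Brocade sequence can be emitted as one block. A minimal sketch for fabric A, using the alias, zone, and config names from the steps above:

```shell
# Sketch: emit the Brocade FOS zoning sequence for fabric A as one block.
# Alias/zone/config names and the WWN are the examples from this post.
ALIAS=srv16_HBA_0
WWN=10:00:00:c9:2f:a1:7a
ZONE=Z_srv16_A
CFG=FAB_A

CMDS=$(cat <<EOF
alicreate "$ALIAS","$WWN"
zonecreate "$ZONE", "$ALIAS; storage port1"
cfgadd "$CFG","$ZONE"
cfgsave
cfgenable $CFG
zoneshow
EOF
)
echo "$CMDS"
```

Swap in the HBA_1/Z_srv16_B/FAB_B names to generate the same block for fabric B on switch_2.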

Data ONTAP 8.3 New Features

Unsupported features
Support for 7-Mode
32-bit data
FlexCache volumes
Volume guarantee of file not supported
SSLv2 no longer supported for web services in the cluster
Deprecated core dump commands
Deprecated vserver commands
Deprecated system services ndmp commands
Deprecation of health dashboard
New platform and hardware support
Support for 6-TB capacity disks
Support for Storage Encryption on SSDs
Manageability enhancements
Support for server CA and client types of digital certificates
Enhancements for the SP
Enhancements for managing the cluster time
Enhancements for feature license management – system feature-usage show-summary, system feature-usage show-history
OnCommand System Manager is now included with Data ONTAP – https://cluster-mgmt-LIF
Access of nodeshell commands and options in the clustershell
Systemshell access now requires diagnostic privilege level
FIPS 140-2 support for HTTPS
Audit log for the nodeshell now included in AutoSupport messages
Changes to core dump management
Changes to management of the “autosupport” role and account
Support for automated nondisruptive upgrades
Active directory groups can be created for cluster and SVM administrator user accounts
Changes to cluster setup
Enhancements to AutoSupport payload support
New and updated commands for performance data
Enhancements for Storage QoS – Data ONTAP now supports cache monitoring and autovolume workloads, and it provides an improved system interface to define workloads and control them. Starting with Data ONTAP 8.3, the system automatically assigns an autovolume workload to each volume, and the volume statistics are available in the configuration data. New qos statistics volume commands enable you to monitor Storage QoS at the volume level instead of the aggregate level.
Enhancements to continuous segment cleaning
MetroCluster enhancements
Support for MetroCluster configurations
Networking and security protocol enhancements
Default port roles no longer automatically defined – Prior to Data ONTAP 8.3, default roles were assigned to each network port during configuration of the cluster. Port roles are now defined when configuring the LIF that is hosted on the port.
Support for multi-tenancy – Beginning with Data ONTAP 8.3, you can create IPspaces to enable a single storage system to support multiple tenants. That is, you can configure the system to be accessed by clients from more than one disconnected network, even if those clients are using the same IP address. An IPspace defines a distinct IP address space in which Storage Virtual Machines (SVMs) reside. Ports and IP addresses defined for an IPspace are applicable only within that IPspace. A distinct routing table is maintained for each SVM within an IPspace; therefore, no cross-SVM or cross-IPspace traffic routing occurs. A default IPspace is created when the initial cluster is created. You create additional IPspaces when you need SVMs to have their own storage, administration, and routing.
IPv6 support enhancements
New networking features – Beginning with Data ONTAP 8.3, new networking features have been added to simplify the configuration of IPspaces and to manage pools of IP addresses.
The network IPspace commands enable you to create and manage additional IPspaces on your cluster. Creating an IPspace is an initial step in creating the infrastructure required for setting up multi-tenancy.
The network broadcast-domain commands enable you to configure and manage ports in a broadcast domain. When you create a broadcast domain, a failover group is automatically created for the ports in the broadcast domain.
The network subnet commands enable you to allocate specific blocks, or pools, of IP addresses for your Data ONTAP network configuration. When creating LIFs, an address can be assigned from the subnet and a default route to a gateway is automatically added to the associated SVM.
Storage resource management enhancements
Performance increase for random read operations
Root-data partitioning available for entry-level and AFF platforms
Support for Flash Pool SSD partitioning
Support for Storage Encryption on SSDs
New disk name format – Starting in Data ONTAP 8.3, disk names have a new format that is based on the physical location of the disk. The disk name no longer changes depending on the node from which it is accessed.
Flash Pool caching of compressed data for read operations
Increased maximum cache limits
Change to the default action taken when a volume move cutover occurs
Support for caching policies on Flash Pool aggregates
Infinite Volume enhancements
Inline detection on deduplication-enabled volumes to identify blocks of zeros
FlexClone file and FlexClone LUN enhancements
FlexArray Virtualization (V-Series) enhancements
Support for MetroCluster configuration with array LUNs
Support for Data ONTAP installation on a system that uses only array LUNs
Support for load distribution over multiple paths to an array LUN
Support for two FC initiator ports connecting to a single target port
File access protocol enhancements
Changes to the way clustered Data ONTAP advertises DFS capabilities
Support for netgroup-by-host searches for NIS and LDAP
Capability to configure the number of group IDs allowed for NFS users
Capability to configure the ports used by some NFSv3 services
New commands for managing protocols for SVMs
LDAP support for RFC2307bis
New limits for local UNIX users, groups, and group members
LDAP client configurations now support multiple DNs
Support for qtree exports
New commands for configuring and troubleshooting name services
Enhancements for Kerberos 5
Capability for NFS clients to view the NFS exports list using "showmount -e"
Support for Storage-Level Access Guard
Support for FPolicy passthrough-read for offline data
Support for auditing CIFS logon and logoff events
Support for additional group policy objects
Support for Dynamic Access Control and central access policies
Support for auditing central access policy staging events
Support for NetBIOS aliases for CIFS servers
Support for AES encryption security for Kerberos-based communication
Support for setting the minimum authentication security level
Capability to determine whether SMB sessions are signed
Support for scheduling CIFS servers for automatic computer account password changes
Support for configuring character mapping for SMB file name translation
Support for additional share parameters and new share properties
Support for configuring bypass traverse checking for SMB users
Enhancements for managing home directories
Support for closing open files and SMB sessions
Support for managing shares using Microsoft Management Console
Support for additional CIFS server options
SAN enhancements
Support for IPspaces in SAN
Enhancement to reduce the number of paths from host to LUNs
Support for moving LUNs across volumes
Data protection enhancements
MetroCluster support
Support for network compression for SnapMirror and SnapVault
Capability to reset certain SEDs to factory settings
Support for greater number of cluster peers
Support for restoring files or LUNs from a SnapVault secondary
Storage Encryption support for IPv6
Support for version-flexible replication
Support for cluster peer authentication
Support for single sign-on (SSO) authentication
Support for secure NDMP
Support for SMTape
Support for KMIP 1.1
Hybrid cloud enhancements
Introducing Cloud ONTAP for Amazon Web Services (AWS)
New and changed features in OnCommand System Manager 8.3
Accessing System Manager 8.3
Storage pools
Enhanced aggregate management
Enhancements in the Disks window
Networking simplicity
Broadcast domains – You can create broadcast domains to provide a logical division of a computer network. Broadcast domains enable you to group network ports that belong to the same datalink layer. The ports in the group can then be used to create network interfaces for data traffic or management traffic.
Subnets – You can create a subnet to provide a logical subdivision of an IP network to pre-allocate the IP addresses. A subnet enables you to create network interfaces more easily by specifying a subnet instead of an IP address and network mask values for each new interface.
IPv6 support – You can use IPv6 address on your cluster for various operations such as configuring LIFs, cluster peering, configuring DNS, configuring NIS, and configuring Kerberos.
Enhanced Storage Virtual Machine (SVM) setup wizard
BranchCache configuration
LUN move
Authenticated cluster peering
Qtree exports
Service Processors
Version-flexible mirror relationship – You can create a mirror relationship that is independent of the Data ONTAP version running on the source and destination clusters.
Snapshot policies – You can create Snapshot policies at the SVM level, which enables you to create Snapshot policies for a specific SVM.
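To make the new networking command families described above (IPspaces, broadcast domains, subnets) concrete, here is a sketch of carving out a tenant with them. All object names (ips_tenant1, bd_tenant1, sub_tenant1, the ports and addresses) are hypothetical, and the flags are from the 8.3 command set as I recall it rather than copied from the docs:

```shell
# Sketch: emit an assumed 8.3 sequence tying the three command families
# together. All names, ports, and addresses below are hypothetical.
CMDS=$(cat <<EOF
network ipspace create -ipspace ips_tenant1
network port broadcast-domain create -broadcast-domain bd_tenant1 -mtu 1500 -ipspace ips_tenant1 -ports node1:e0d,node2:e0d
network subnet create -subnet-name sub_tenant1 -broadcast-domain bd_tenant1 -ipspace ips_tenant1 -subnet -gateway -ip-ranges ""
EOF
)
echo "$CMDS"
```

Creating the broadcast domain also creates a failover group for its ports, so LIFs created from the subnet pick up sensible failover behavior automatically.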

How to clean Snapmirrors in clustered Data ONTAP

This blog post lists the procedure used to clean up SnapMirror relationships in clustered Data ONTAP. In this scenario the destination cluster has a large number of volumes that are SnapMirror targets.


Source Cluster: SrcCluster
Destination Cluster: DstCluster
Source SVM: SrcVsv
Destination SVM: DstVsv

Use the snapmirror show command to display the source and destination paths

    DstCluster::*> snapmirror show -source-cluster SrcCluster
    Source Destination Mirror Relationship Total Last
    Path Type Path State Status Progress Healthy Updated
    ----------- ---- ------------ ------- -------------- --------- ------- --------
    SrcVsv:src_vol_c0165 DP DstVsv:SrcCluster_src_vol_c0165r Snapmirrored Idle - false -
    SrcVsv:src_vol_c0180 DP DstVsv:SrcCluster_src_vol_c0180r Snapmirrored Idle - false -
    SrcVsv:src_vol_c0282 DP DstVsv:SrcCluster_src_vol_c0282r Snapmirrored Idle - false -
    SrcVsv:src_vol_c0284 DP DstVsv:SrcCluster_src_vol_c0284r Snapmirrored Idle - false -
    SrcVsv:src_vol_c0286 DP DstVsv:SrcCluster_src_vol_c0286r Snapmirrored Idle - true -
    SrcVsv:src_vol_c0313 DP DstVsv:SrcCluster_src_vol_c0313r Snapmirrored Idle - true -

Use the snapmirror quiesce command to quiesce the SnapMirror relationships

    DstCluster::*> snapmirror quiesce -source-cluster SrcCluster -destination-path DstVsv:*
    Operation succeeded: snapmirror quiesce for destination “DstVsv:SrcCluster_src_vol_c0165r”.
    Operation succeeded: snapmirror quiesce for destination “DstVsv:SrcCluster_src_vol_c0180r”.
    Operation succeeded: snapmirror quiesce for destination “DstVsv:SrcCluster_src_vol_c0282r”.
    Operation succeeded: snapmirror quiesce for destination “DstVsv:SrcCluster_src_vol_c0284r”.
    Operation succeeded: snapmirror quiesce for destination “DstVsv:SrcCluster_src_vol_c0286r”.
    Operation succeeded: snapmirror quiesce for destination “DstVsv:SrcCluster_src_vol_c0313r”.

Break the SnapMirror relationships

    DstCluster::*> snapmirror break -source-cluster SrcCluster -destination-path DstVsv:*
    [Job 18594] Job succeeded: SnapMirror Break Succeeded
    [Job 18595] Job succeeded: SnapMirror Break Succeeded
    [Job 18596] Job succeeded: SnapMirror Break Succeeded
    [Job 18597] Job succeeded: SnapMirror Break Succeeded
    [Job 18598] Job succeeded: SnapMirror Break Succeeded
    [Job 18599] Job succeeded: SnapMirror Break Succeeded
    [Job 18600] Job succeeded: SnapMirror Break Succeeded

[ ON THE SOURCE CLUSTER ]: Use the snapmirror list-destinations command to check which SnapMirror relationships are still registered on the source

    SrcCluster::> snapmirror list-destinations -destination-vserver DstVsv
    Source Destination Transfer Last Relationship
    Path Type Path Status Progress Updated Id
    ----------- ----- ------------ ------- --------- ------------ ---------------
    SrcVsv:src_vol_c0165 DP DstVsv:SrcCluster_src_vol_c0165r - - - a58a0def-2c61-11e4-b66e-123478563412
    SrcVsv:src_vol_c0282 DP DstVsv:SrcCluster_src_vol_c0282r - - - a5cbb158-2c61-11e4-8fb5-123478563412
    SrcVsv:src_vol_c0284 DP DstVsv:SrcCluster_src_vol_c0284r Idle - - a908f9f0-2c61-11e4-a8c1-123478563412
    SrcVsv:src_vol_c0286 DP DstVsv:SrcCluster_src_vol_c0286r Idle - - a773c018-2c61-11e4-8fb5-123478563412
    SrcVsv:src_vol_c0313 DP DstVsv:SrcCluster_src_vol_c0313r Idle - - aaaa528e-2c61-11e4-8fb5-123478563412

Issue the snapmirror release command from the source cluster

    SrcCluster::> snapmirror release -destination-vserver DstVsv -destination-volume *
    [Job 74850] Job succeeded: SnapMirror Release Succeeded
    [Job 74851] Job succeeded: SnapMirror Release Succeeded
    [Job 74852] Job succeeded: SnapMirror Release Succeeded
    [Job 74853] Job succeeded: SnapMirror Release Succeeded
    [Job 74854] Job succeeded: SnapMirror Release Succeeded
    [Job 74855] Job succeeded: SnapMirror Release Succeeded

Delete the SnapMirror relationships from the destination cluster

    DstCluster::*> snapmirror delete -source-vserver SrcVsv -destination-path DstVsv:*
    Operation succeeded: snapmirror delete for the relationship with destination “DstVsv:SrcCluster_src_vol_0043r”.
    Operation succeeded: snapmirror delete for the relationship with destination “DstVsv:SrcCluster_src_vol_0048r”.
    Operation succeeded: snapmirror delete for the relationship with destination “DstVsv:SrcCluster_src_vol_0070r”.
    Operation succeeded: snapmirror delete for the relationship with destination “DstVsv:SrcCluster_src_vol_0082r”.

The NetApp PowerShell Toolkit can be very handy for automating a large number of tasks

    # Import the Data ONTAP module in a PowerShell window
    Import-Module DataONTAP
    # Connect to the source cluster
    Connect-NcController SrcCluster -Credential admin
    # Save a list of source volumes in a file called srcvolumes.txt. Import the
    # volumes into $volumes and delete the "snapmirror*" snapshots from each volume.
    $volumes = Get-Content C:\scripts\srcvolumes.txt
    foreach ($vol in $volumes) {Get-NcSnapshot $vol snapmi* | Remove-NcSnapshot -IgnoreOwners -Confirm:$false}
    # Save a list of destination volumes in a file called dstvolumes.txt. Import the
    # volumes into $volumes, then unmount, offline, and destroy each destination volume.
    $volumes = Get-Content C:\scripts\dstvolumes.txt
    foreach ($vol in $volumes) {Dismount-NcVol $vol -VserverContext DstVsv -Confirm:$false}
    foreach ($vol in $volumes) {Set-NcVol $vol -Offline -VserverContext DstVsv -Confirm:$false}
    foreach ($vol in $volumes) {Remove-NcVol $vol -VserverContext DstVsv -Confirm:$false}
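If the PowerShell Toolkit is not available, the same volume cleanup can be driven from the clustershell. A minimal sketch that turns a list of destination volumes into unmount/offline/destroy commands (file path and volume names are illustrative; the SVM name is the one from the example above):

```shell
# Sketch: generate clustershell commands to unmount, offline, and destroy
# each destination volume listed in dstvolumes.txt (one name per line).
# The two volume names below are just sample input for the sketch.
printf '%s\n' SrcCluster_src_vol_c0165r SrcCluster_src_vol_c0180r > /tmp/dstvolumes.txt

while read -r vol; do
  echo "volume unmount -vserver DstVsv -volume $vol"
  echo "volume offline -vserver DstVsv -volume $vol"
  echo "volume destroy -vserver DstVsv -volume $vol"
done < /tmp/dstvolumes.txt > /tmp/cleanup_cmds.txt
cat /tmp/cleanup_cmds.txt
```

The generated file can then be reviewed before pasting it into an SSH session to the destination cluster.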

NetApp FAS storage Head Swap Procedure


This document contains the verification checklists and the head swap procedure.

Current Serial Numbers:
700000293005 – WALLS1
700000293017 – WALLS2

New Serial Numbers:
700002090378 – new_WALLS1
700002090366 – new_WALLS2

Current SYSID
0151745322 – WALLS1
0151745252 – WALLS2

2016870400 – new_WALLS1
2016870518 – new_WALLS2

Current WALLS1
slot 1 OK: X2065A: PMC PM8001; PCI-E quad-port SAS (PM8003)
slot 2 OK: X2065A: PMC PM8001; PCI-E quad-port SAS (PM8003)
slot 3 OK: X1938A: Flash Cache 512 GB
slot 4 OK: X1107A: Chelsio S320E 2x10G NIC

Current WALLS2
sysconfig: slot 1 OK: X2065A: PMC PM8001; PCI-E quad-port SAS (PM8003)
sysconfig: slot 2 OK: X2065A: PMC PM8001; PCI-E quad-port SAS (PM8003)
sysconfig: slot 3 OK: X1938A: Flash Cache 512 GB
sysconfig: slot 4 OK: X1107A: Chelsio S320E 2x10G NIC

Current WALLS1
RLM Status
IP Address:

Current WALLS2
RLM Status
IP Address:

New Licenses
Keep a copy of the new licenses


Make a note of the following before performing Head swap activity

  1. Old Controllers Serial Numbers
  2. Old Controllers System ID
  3. New Controllers Serial Numbers
  4. New Controllers System ID
  5. Location of expansion modules (PCI, PCIe) cards on the old controllers
  6. Label the cables attached to the Old controller Heads
  7. Make a note of the Remote LAN Mode (RLM) or Service Processor (SP) IP addresses
  8. Serial connection with the controllers to view console messages
  9. Make a note of the licenses for the new systems
  10. Make sure the network adapter locations on the new controllers match the old controllers. If not, you have to modify the /etc/rc file to make the new ports active.


Tools required before and after head-swap

  1. Grounding strap
  2. #2 Phillips screwdriver

New FAS storage system Installation and Setup

  1. Power on the new heads with a console attached and check the ONTAP version on the new controllers; it should match the current version on the old controllers. Depending on the versions, we can downgrade or upgrade ONTAP.
  2. Follow the steps below to upgrade ONTAP on the new controller to match the old controller.
  3. Download the ONTAP version from the URL:

  4. Take a backup of system files from the old controller:

    /etc/snapmirror.conf (only on destination filers)

  5. Trigger an AutoSupport:

    options autosupport.doit "Pre Head Swap"

  6. Disable AutoSupport:

    options autosupport.enable off

  7. Disable the cluster:

    cf disable

  8. Keep the ONTAP software in the /etc/software directory and install it:

    software install <ontap_software>

    Note: do not run the download command yet.
    Make sure the network adapter locations on the new controllers match the old controllers. If not, you have to modify the /etc/rc file to make the new ports active.

  9. If SnapMirror and SnapVault relationships exist on this system, update them on all the volumes before system shutdown.
  10. Halt the system:

    halt

  11. Power down the controller head and then all the disk shelves, one at a time.
  12. Remove the power cables, network cables, and SAS cables from the old controller.
  13. Remove the old controller from the rack unit and replace it with the new controller. Mount and screw as required on the rack.
  14. Attach the SAS cables to the disk shelves, network cables to the network ports, and power cords to the PSUs.
  15. Power on the disk shelves one at a time (wait about 1 minute until all the disks have spun up and the green LEDs are stable).
  16. Power on the new controller.
  17. Hit Ctrl-C when offered the option to enter the boot menu, and select option 5 (Maintenance mode boot).
  18. Check whether all the disks are visible to the system:

    disk show -v

  19. Assign the disks to this controller (keep a copy of the disk list for future reference):

    disk show -v
    disk reassign -s <old_sysid> -d <new_sysid>   (or disk assign all, to change the system IDs)
    disk show -v

  20. Clear the mailboxes by entering the following commands:

    mailbox destroy local
    mailbox destroy partner

  21. Halt the system:

    halt

  22. At the loader prompt, verify date -u against another controller in production.
  23. Boot the system in normal mode:

    bye (or boot_ontap)

  24. Once the system has booted, install ONTAP:

    download

  25. Reboot the system:

    reboot

  26. Enable the cluster:

    cf enable

  27. Verify that the HA pair is set up correctly:

    disk show -a   (storage show disk -p will also tell whether MPHA is enabled)

  28. Add the licenses for the new controller. Licenses required: a-sis, nearstore_option, sv_ontap_sec.
  29. Enable and trigger an AutoSupport:

    options autosupport.enable on
    options autosupport.doit "Post Head Swap"

  30. Perform SP setup:

    sp setup
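As a worked example of the maintenance-mode disk reassignment, using the system IDs recorded at the top of this post (WALLS1 becomes new_WALLS1; run the equivalent with the WALLS2 IDs on the partner), the command sequence can be sketched as:

```shell
# Sketch: maintenance-mode command sequence for WALLS1, using the
# system IDs noted in the tables at the top of this post.
OLD_SYSID=0151745322   # WALLS1
NEW_SYSID=2016870400   # new_WALLS1

CMDS=$(cat <<EOF
disk show -v
disk reassign -s $OLD_SYSID -d $NEW_SYSID
disk show -v
mailbox destroy local
mailbox destroy partner
halt
EOF
)
echo "$CMDS"
```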

Testing Plan:

  1. Connect to the etc directory on new controller via NFS or CIFS and browse the contents
  2. Connect to other CIFS or NFS shares on the new controller
  3. Run the snapvault/snapmirror update once the head swap is completed
  4. Restore a test file from volume/snapshot
  5. Check network connectivity
  6. Check the ontap version; shelf firmware version

Backout procedure:

  1. Power down the new controller, reattach the old controller, and recable it back to the way it was
  2. Bring down the old controller to its cfe or loader prompt
  3. Swap the PCI/PCIe cards back
  4. Reboot the system


Unjoin Nodes from a Cluster (cDOT)

  1. Move or delete all volumes and member volumes from aggregates owned by the node to be unjoined
    • nitish-mgmt::> volume move target-aggr show -vserver nitish -volume nitish_test9
    • nitish-mgmt::> volume move start -perform-validation-only true -vserver nitish -volume nitish_test9 -destination-aggregate nitish01_sas_1
    • nitish-mgmt::> volume move start -vserver nitish -volume nitish_test9 -destination-aggregate nitish01_sas_1
    • nitish-mgmt::> vol move show
  2. Quiesce and Break LS Snapmirrors with destinations on aggregates that are owned by the node/s being removed
    • nitish-mgmt::> snapmirror show -type LS
    • nitish-mgmt::> snapmirror quiesce -destination-path <destination-path>
    • nitish-mgmt::> snapmirror break -destination-path <destination-path>
    • nitish-mgmt::> snapmirror delete -destination-path <destination-path>
  3. Offline and delete those snapmirror destination volumes
    • nitish-mgmt::> vol offline -vserver nitish -volume nitish_test9
    • nitish-mgmt::> vol destroy -vserver nitish -volume nitish_test9
  4. Move or delete all aggregates (except for the mroot aggregate) owned by the node to be unjoined
    • nitish-mgmt::> aggr offline <aggr-name>
    • nitish-mgmt::> aggr delete <aggr-name>
  5. Delete or re-home all data LIFs from the node to be unjoined to other nodes in the cluster
    • nitish-mgmt::> network interface delete
    • nitish-mgmt::> network interface migrate
    • nitish-mgmt::> network interface modify
  6. Modify all LIF failover rules to remove ports on the node to be unjoined
    • nitish-mgmt::> failover-groups delete -failover-group data -node nitish-01 -port e0e
    • nitish-mgmt::> failover-groups delete -failover-group data -node nitish-02 -port e0e
  7. Disable SFO on the node to be unjoined
    • nitish-mgmt::> storage failover modify -node nitish-01 -enabled false
  8. Move epsilon to a node other than the node to be unjoined
    • nitish-mgmt::*> cluster show
    • nitish-mgmt::*> cluster ring show
    • nitish-mgmt::*> cluster modify -node nitish-01 -epsilon false
    • nitish-mgmt::*> cluster modify -node nitish-03 -epsilon true
    • nitish-mgmt::*> cluster show
    • nitish-mgmt::*> cluster ring show
  9. Delete all VLANs on the node to be unjoined
    • nitish-mgmt::> vlan delete
  10. Trigger an autosupport from the node to be unjoined
    • nitish-mgmt::> system node autosupport invoke -type all -node nitish-01 -message "pre_unjoin"
  11. Run the cluster unjoin command from a different node in the cluster besides the node that is to be unjoined
    • nitish-mgmt::*> cluster unjoin -node nitish-01
    • nitish-mgmt::*> cluster unjoin -node nitish-02
  • Warning: This command will unjoin node "nitish-01" from the cluster. You must unjoin the failover partner as well. After the node is successfully unjoined, erase its configuration and initialize all disks by using the "Clean configuration and initialize all disks (4)" option from the boot menu. Do you want to continue? {y|n}: y

    [Job 32] Cleaning cluster database
    [Job 32] Job succeeded: Cluster unjoin succeeded

    *******************************
    *                             *
    * Press Ctrl-C for Boot Menu. *
    *                             *
    *******************************

    This node was removed from a cluster. Before booting, use
    option (4) to initialize all disks and setup a new system.
    Normal Boot is prohibited.

    Please choose one of the following:
    (1) Normal Boot.
    (2) Boot without /etc/rc.
    (3) Change password.
    (4) Clean configuration and initialize all disks.
    (5) Maintenance mode boot.
    (6) Update flash from backup config.
    (7) Install new software first.
    (8) Reboot node.

    Selection (1-8)?

    Once Option 4 (or ‘wipeconfig’ and ‘init’) is run, the node is considered to be a ‘fresh’ node.

After the node is unjoined from the cluster, it cannot be re-joined to this or any other cluster until the wipeclean process is performed.
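The final per-node commands from the steps above can be collected into a small checklist generator. A minimal sketch for one node (node name from the example; the earlier volume, LIF, and aggregate cleanup must already be done, and the commands run at the appropriate privilege levels):

```shell
# Sketch: emit the closing per-node command checklist for an unjoin,
# using the node name from the example in this post.
NODE=nitish-01

CMDS=$(cat <<EOF
storage failover modify -node $NODE -enabled false
cluster modify -node $NODE -epsilon false
system node autosupport invoke -type all -node $NODE -message "pre_unjoin"
cluster unjoin -node $NODE
EOF
)
echo "$CMDS"
```

Remember that the cluster unjoin command itself must be issued from a different node than the one being removed.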

InterCluster Replication setup in clustered Data ONTAP 8.2

Clusters must be joined in a peer relationship before replication between different clusters is possible.
Cluster peering is a one-time operation that must be performed by the cluster administrators.
An intercluster LIF must be created on an intercluster-capable port, which is a port assigned the role of intercluster or a port assigned the role of data.

NOTE: Ports that are used for the cluster interconnect may not be used for intercluster replication.

Cluster peer requirements include the following:

  • The time on the clusters must be in sync within 300 seconds (five minutes) for peering to be successful. Cluster peers can be in different time zones.

  • At least one intercluster LIF must be created on every node in the cluster.
  • Every intercluster LIF requires an IP address dedicated for intercluster replication.
  • The correct maximum transmission unit (MTU) value must be used on the network ports that are used for replication.
  • All paths on a node used for intercluster replication should have equal performance characteristics.
  • The intercluster network must provide connectivity among all intercluster LIFs on all nodes in the cluster peers.
  • Every intercluster LIF on every node in a cluster must be able to connect to every intercluster LIF on every node in the peer cluster.
  1. Check the role of the ports in the cluster.

       cluster01::> network port show

  2. Change the role of the port used on each node to intercluster.

       cluster01::> network port modify -node cluster01-01 -port e0e -role intercluster

  3. Create an intercluster LIF on each node in cluster01. This example uses the LIF naming convention <nodename>_icl# for intercluster LIFs.

       cluster01::> network interface create -vserver cluster01-01 -lif cluster01-01_icl01 -role intercluster -home-node cluster01-01 -home-port e0e -address -netmask

  4. Repeat the above steps on the destination cluster, cluster02.
  5. Configure the cluster peers.

       cluster01::> cluster peer create -peer-addrs -username admin

       Password: *********

  6. Display the newly created cluster peer relationship.

       cluster01::> cluster peer show -instance

  7. Check the health of the cluster peer relationship.

       cluster01::> cluster peer health show

  8. Create the SVM peer relationship.

       cluster01::> vserver peer create -vserver -peer-vserver -applications snapmirror -peer-cluster cluster02

  9. Verify the SVM peer relationship status.

       cluster01::> vserver peer show-all

Complete the following requirements before creating an intercluster SnapMirror relationship:

  • Configure the source and destination nodes for intercluster networking.
  • Configure the source and destination clusters in a peer relationship.
  • Configure source and destination SVMs in a peer relationship.
  • Create a destination NetApp SVM; volumes cannot exist in Cluster-Mode without an SVM.
  • Verify that the source and destination SVMs have the same language type.
  • Create a destination volume with a type of data protection (DP), with a size equal to or greater than that of the source volume.
  • Assign a schedule to the SnapMirror relationship to perform periodic updates.
  1. Create a SnapMirror schedule on the destination cluster.

       cluster02::> job schedule cron create -name Hourly_SnapMirror -minute 0

  2. Create a SnapMirror relationship of type DP and assign the schedule created in the previous step (vs1 and vs5 are the SVMs).

       cluster02::> snapmirror create -source-path vs1:vol1 -destination-path vs5:vol1 -type DP -schedule Hourly_SnapMirror

  3. Review the SnapMirror relationship.

       cluster02::> snapmirror show

  4. Initialize the SnapMirror relationship from the destination cluster.

       cluster02::> snapmirror initialize -destination-path vs5:vol1

  5. Verify the progress of the replication.

       cluster02::> snapmirror show

SnapMirror relationships can be failed over using the snapmirror break command and resynchronized in either direction using the snapmirror resync command.

In order for NAS clients to access data in the destination volumes, CIFS shares and NFS export policies must be created in the destination SVM and assigned to the volumes.
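Putting the mirroring steps together for one volume (vs1:vol1 to vs5:vol1, as in the example), the end-to-end sequence run on the destination cluster can be sketched as:

```shell
# Sketch: the end-to-end SnapMirror setup for one volume, collected from
# the steps above (vs1:vol1 -> vs5:vol1, schedule Hourly_SnapMirror).
SRC=vs1:vol1
DST=vs5:vol1

CMDS=$(cat <<EOF
job schedule cron create -name Hourly_SnapMirror -minute 0
snapmirror create -source-path $SRC -destination-path $DST -type DP -schedule Hourly_SnapMirror
snapmirror initialize -destination-path $DST
snapmirror show
EOF
)
echo "$CMDS"
```

This assumes the cluster and SVM peering from the earlier steps is already in place and the destination DP volume already exists.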

Perl Script to collect Perfstats from NetApp Storage Systems

I have written a Perl script that is used to run perfstats on NetApp storage systems. This script runs on a Linux host.

The script checks for any running instance of perfstat on the storage system. If another perfstat instance is running on the filer, this script stops and logs an error message. The script can be scheduled to run as a cron job.



#!/usr/bin/perl
use strict;
use warnings;

# NOTE: the perfstat script's file name was stripped by the blog software;
# "" below is my assumption for NetApp's perfstat script name,
# and "" is the assumed name of this wrapper.

my $logdir = "/tmp/";
my $projects_directory = "/prj/perfstats"; # directory that receives the output
my $datetime = date();

my $args = $#ARGV + 1;
if ($args < 2) {
    print "insufficient arguments, exiting the script\n";
    logit("insufficient arguments, exiting the script\n");
    print_usage();
    exit 1;
}

my $perfcmd_controller = shift;
my $perfcmd_directory  = shift; # directory that contains the perfstat script
# To use RSH, use the command below
my $perfcmd_command = "/ -f $perfcmd_controller -l root: -F -I -i 10 -t 5";
# To use SSH, use the command below instead
# my $perfcmd_command = "/ -f $perfcmd_controller -S -l nitish -F -I -i 30 -t 4";
my $perfcmd_final_cmd = "$perfcmd_directory$perfcmd_command";
my $perfcmd_output_suffix = "/$perfcmd_controller.perfcmd.$datetime.out";
my $perfcmd_output = "$projects_directory$perfcmd_output_suffix";

# Exit if another perfstat instance is already running against this controller
my @perf_processes = `ps aux | grep perfstat | grep -v grep`;
foreach my $process (@perf_processes) {
    if ($process =~ /\Q$perfcmd_controller\E/) {
        logit("exiting because another instance of perfstat is running for $perfcmd_controller\n");
        exit 1;
    }
}
logit("No previous iteration of perfstat is running for $perfcmd_controller\n");

# Run perfstat and send its output to the output file
system("$perfcmd_final_cmd > $perfcmd_output") == 0
    or die "Error running perfstat";

exit 0;

######### Functions ###########
sub date {
    my @date = localtime();
    my $year  = $date[5] + 1900;
    my $month = sprintf("%02d", $date[4] + 1);
    my $day   = $date[3];
    my $hour  = $date[2];
    my $min   = $date[1];
    my $sec   = $date[0];
    return join('_', $year, $month, $day, $hour, $min, $sec);
}

sub logit {
    my $s = shift;
    my ($logsec,$logmin,$loghour,$logmday,$logmon,$logyear) = localtime(time);
    my $l_name = "perfcmd";
    my $logtimestamp = sprintf("[%4d-%02d-%02d %02d:%02d:%02d]",
        $logyear+1900, $logmon+1, $logmday, $loghour, $logmin, $logsec);
    my $logfile = "$logdir$l_name-$logmon-$logmday-logfile.log";
    open(my $fh, '>>', $logfile) or die "$logfile: $!";
    print $fh "$logtimestamp $s\n";
    close($fh);
}

sub print_usage {
    print "################################################################################\n";
    print "#\tusage:\n";
    print "#\t./ <filer name or IP address> <location of the perfstat script>\n";
    print "#\te.g.\n";
    print "#\t./ ntap_filer /usr/nitish/perfcmd\n";
    print "################################################################################\n";
}

Use the following procedure to schedule the script as a cron job on the Linux host:

export EDITOR=vi
crontab -e
10,15,20,25 6-14 * * * /usr/nitish/perfstats/ "ntap_filer" "/usr2/nitish/perfstats"
crontab -l

7 Mode Data ONTAP (8.2.1) Upgrade Procedure

I have been working more on NetApp clustered Data ONTAP systems lately. Recently I was asked to upgrade some 7-Mode systems that did not have the /etc directory shared via CIFS, i.e. CIFS was not running on the base filer vfiler0. The web server is configured on another 7-Mode storage system.

    1. Download the latest copy of the Data ONTAP Upgrade/Downgrade guide from NetApp Support
    2. Copy the downloaded image to the web server's shared directory
    3. For international sites (where copying the Data ONTAP image may take longer), follow Step 15 and copy the ONTAP image onto the filer. Do not install (update) the image yet
    4. Login to the 7 mode system and send autosupports
      • options autosupport.doit pre_NDU_upgrade
      • options autosupport.enable off
      • options snmp.enable off
    5. Take a backup of the vfiler configuration on the system
      • vfiler status -r
      • ifconfig -a
      • rdfile /etc/rc
    6. Check for SnapMirror/SnapVault relationships on the system. You must upgrade the destination system first.
    7. Verify all failed drives are replaced (vol status -f)
    8. Delete old core files from /etc/crash directory
    9. Verify no deduplication processes are active
      • sis status
      • sis stop (if dedupe is active on any volume)
    10. Confirm all the disks are multipathed
      • storage show disk -p
    11. Verify all aggregates are online
      • aggr status
    12. Make sure all aggregates have at least 5-6% free capacity
      • df -Ag
    13. Disable autogiveback
      • options off
    14. Turn off SnapMirror and SnapVault
      • snapmirror off
      • options snapvault.enable off
    15. For international sites, copy the image to the controller and install it using the following commands:
      • software get http://<web-server>/data_ontap/821_q_image.tgz
      • software list (lists the files in /etc/software directory)
      • software update 821_q_image.tgz -r (install files without rebooting)
      • version -b (verify the new image is installed)
    16. For local sites
      • software update http://<web-server>/data_ontap/821_q_image.tgz -r
      • version -b (verify the new image is installed)
    17. Perform cf takeover from the partner
      • cf takeover (this will reboot the partner)
    18. Perform cf giveback from the partner once local system shows “waiting for giveback”
      • cf giveback -f
    19. Perform cf takeover from the partner
      • cf takeover -n (this will reboot the partner)
    20. Perform cf giveback from the partner once local system shows “waiting for giveback”
      • cf giveback -f
    21. Perform the same steps on both the nodes and verify both have the new code
      • version -b
    22. Turn on SnapMirror and SnapVault
      • snapmirror on
      • options snapvault.enable on
    23. Turn on autogiveback
      • options on
    24. Turn on autosupport
      • options autosupport.enable on
      • options autosupport.doit post_NDU_upgrade
      • options snmp.enable on
    25. Upon completion of the upgrade, run the commands below to check for any issues
      • sysconfig -a
      • vol status -f
      • vol status
      • aggr status
      • vol status -s
    26. Update SP Firmware on the filers (system node service-processor image update-progress show)
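Several of the pre-checks above (steps 7 and 9-12) can be batched over rsh from the Linux host; a minimal sketch, assuming a placeholder filer name ntap_filer and rsh access configured as in the rest of this post:

```shell
#!/bin/sh
# Run a set of NDU pre-check commands against one filer over rsh and label
# each block of output. "ntap_filer" is a placeholder name; unreachable
# filers are reported rather than aborting the loop.
filer="ntap_filer"
for cmd in "vol status -f" "sis status" "storage show disk -p" "aggr status" "df -Ag"; do
    echo "== $filer: $cmd =="
    sudo rsh "$filer" "$cmd" 2>/dev/null || echo "(could not reach $filer)"
done
```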


If you are upgrading multiple 7-Mode filers at a time, you can use a "for" loop from a Linux/UNIX shell to run the same command on all of them.

for i in filer1 filer2 filer3 filer4; do echo ""; echo $i; sudo rsh $i "priv set -q diag; sis status"; echo ""; done