Category Archives: NetApp

NFS Storage: Root squashed to anon user on Linux Host

This scenario is based on a problem where the "root" user is squashed to "anon" and all files created on an NFS-exported volume show ownership as "nfsnobody".

On an NFS volume exported from a clustered Data ONTAP (cDOT) system running ONTAP 8.2.1 and mounted on a Linux host, the root user reports that when he creates a file on the mounted NFS storage, the file ownership changes to "nfsnobody".

Investigating with the "ls -l" and "ls -ln" commands on the Linux host, the file ownership is reported as "nfsnobody" (UID 65534):

[Screenshot: root_squashed_1]

Checking the export-policy rule on the NetApp storage system, I see:
"User ID To Which Anonymous Users Are Mapped: 65534"
"Superuser Security Types: any"

[Screenshot: orignal_export_policy_rule]

[Screenshot: unix-user-group-show]

This means all "anonymous" users will be mapped to UID 65534, which is "nfsnobody" on Linux, "nobody" on UNIX, and "pcuser" on Windows. The remote "root" user on the Linux host therefore has restricted permissions after connecting to the NFS server. This is a security feature implemented on shared NFS storage.
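
To confirm the mapping on the Linux side, you can check which UID "nfsnobody" resolves to locally. A quick check (the host name is a placeholder, and the output shown is typical of a RHEL/CentOS host; other distributions may differ):

    [root@linuxhost ~]# id nfsnobody
    uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)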

To fix this issue, we change the Superuser security type to "sys". This means the user is authenticated at the client (operating system) and comes in as an identified user. This way the root user is not squashed to "anonymous"/"anon" and retains its permissions.

[Screenshot: superuser-sys]
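
On clustered Data ONTAP this is done by modifying the export-policy rule that matches the client. A minimal sketch (the SVM name, policy name, and rule index below are placeholders for your environment):

    ::> vserver export-policy rule modify -vserver <svm_name> -policyname <policy_name> -ruleindex 1 -superuser sys
    ::> vserver export-policy rule show -vserver <svm_name> -policyname <policy_name> -instance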

Now if I create files as the "root" user on the Linux host, the files retain "root" ownership and are not squashed to "anon".

[Screenshot: root-as-root]

This fixes the problem.

Integrate OnCommand Performance Manager with OnCommand Unified Manager

OnCommand Performance Manager is the performance management component of Unified Manager. Both products are self-contained; however, they can be integrated so that all performance events can be viewed from the Unified Manager dashboard.

In this post we deploy a new instance of the Performance Manager vApp on an ESXi host and integrate it with a running instance of Unified Manager.

Software components used:

  • OnCommand Unified Manager Version 6.2P1
  • OnCommand Performance Manager Version 2.0.0RC1

Setup Performance Manager

Import the OnCommandPerformanceManager-netapp-2.0.0RC1.ova file.

[Screenshot: Deploy_OPM_OVA]

After the OVA file is imported, you may face issues powering on the vApp.

[Screenshot: Poweron_issues_OVA]

This is because the vApp has CPU and memory reservations set. In a lab environment with limited resources, we can remove the reservations so the vApp can boot.

[Screenshot: OVA_Booting]

After the vApp boots up, you need to install VMware Tools to proceed further.

As the installation progresses, the setup wizard runs automatically to configure the time zone and networking (static/dynamic), create the maintenance user, generate the SSL certificate, and start the Performance Manager services. Once complete, log in to the Performance Manager console and verify the settings (network, DNS, time zone).

[Screenshot: OPM_Console]

Now log in to the Performance Manager web UI and complete the setup wizard. Do not enable AutoSupport for a vApp deployed in a lab environment.

[Screenshot: Login_OPM]

Open the Administration tab (top right corner).

[Screenshot: OPM_Administration]

Setup Connection with Unified Manager

To view performance events in the Unified Manager dashboard, a connection between Performance Manager and Unified Manager must be made.

Setting up a connection includes creating a specialized Event Publisher user in the Unified Manager web UI and enabling the Unified Manager server connection in the maintenance console of the Performance Manager server.

  • Click Administration -> Manage Users
  • In the Manage Users page, click Add
  • In the Add User dialog box, select Local User for type and Event Publisher for role and enter the other required information
  • click Add

[Screenshot: eventpublisher_user_OUM]

Connect Performance Manager to Unified Manager from vApp console

[Screenshot: OPM_Connection_screen1]

[Screenshot: OPM_Connection_Registered]

This completes the integration with Unified Manager. You can integrate multiple Performance Manager servers with a single Unified Manager server. When Performance Manager generates performance events, they are passed to the Unified Manager server and can be viewed on the Unified Manager dashboard, so the admin monitors one window instead of logging in to multiple Performance Manager web UIs.

Data ONTAP 8.3 New Features

Unsupported features

  • Support for 7-Mode
  • 32-bit data
  • FlexCache volumes
  • Volume guarantee of type "file" not supported
  • SSLv2 no longer supported for web services in the cluster
  • Deprecated core dump commands
  • Deprecated vserver commands
  • Deprecated system services ndmp commands
  • Deprecation of the health dashboard

New platform and hardware support

  • Support for 6-TB capacity disks
  • Support for Storage Encryption on SSDs

Manageability enhancements

  • Support of server CA and client types of digital certificates
  • Enhancements for the SP
  • Enhancements for managing the cluster time
  • Enhancements for feature license management (system feature-usage show-summary, system feature-usage show-history)
  • OnCommand System Manager is now included with Data ONTAP (https://cluster-mgmt-LIF)
  • Access to nodeshell commands and options in the clustershell
  • Systemshell access now requires diagnostic privilege level
  • FIPS 140-2 support for HTTPS
  • Audit log for the nodeshell now included in AutoSupport messages
  • Changes to core dump management
  • Changes to management of the "autosupport" role and account
  • Support for automated nondisruptive upgrades
  • Active Directory groups can be created for cluster and SVM administrator user accounts
  • Changes to cluster setup
  • Enhancements to AutoSupport payload support
  • New and updated commands for performance data
  • Enhancements for Storage QoS: Data ONTAP now supports cache monitoring and autovolume workloads, and it provides an improved system interface to define workloads and control them. Starting with Data ONTAP 8.3, the system automatically assigns an autovolume workload to each volume, and the volume statistics are available in the configuration data. New qos statistics volume commands enable you to monitor Storage QoS at the volume level instead of the aggregate level (see the sketch after this list).
  • Enhancements to continuous segment cleaning
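
As a quick illustration of the volume-level QoS monitoring mentioned above, a command along these lines can be run on an 8.3 cluster (the output columns and available filters may vary by release):

    ::> qos statistics volume performance show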

MetroCluster enhancements

  • Support for MetroCluster configurations

Networking and security protocol enhancements

  • Default port roles no longer automatically defined: prior to Data ONTAP 8.3, default roles were assigned to each network port during configuration of the cluster. Port roles are now defined when configuring the LIF that is hosted on the port.
  • Support for multi-tenancy: beginning with Data ONTAP 8.3, you can create IPspaces to enable a single storage system to support multiple tenants. That is, you can configure the system to be accessed by clients from more than one disconnected network, even if those clients are using the same IP address. An IPspace defines a distinct IP address space in which Storage Virtual Machines (SVMs) reside. Ports and IP addresses defined for an IPspace are applicable only within that IPspace. A distinct routing table is maintained for each SVM within an IPspace; therefore, no cross-SVM or cross-IPspace traffic routing occurs. A default IPspace is created when the initial cluster is created. You create additional IPspaces when you need SVMs to have their own storage, administration, and routing.
  • IPv6 support enhancements
  • New networking features: beginning with Data ONTAP 8.3, new networking features have been added to simplify the configuration of IPspaces and to manage pools of IP addresses (see the sketch after this list).
    The network IPspace commands enable you to create and manage additional IPspaces on your cluster. Creating an IPspace is an initial step in creating the infrastructure required for setting up multi-tenancy.
    The network broadcast-domain commands enable you to configure and manage ports in a broadcast domain. When you create a broadcast domain, a failover group is automatically created for the ports in the broadcast domain.
    The network subnet commands enable you to allocate specific blocks, or pools, of IP addresses for your Data ONTAP network configuration. When creating LIFs, an address can be assigned from the subnet and a default route to a gateway is automatically added to the associated SVM.
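
A minimal sketch of how these commands fit together, assuming a node named cluster01-01, port e0d, an example address range, and an SVM svm_tenant1 already created in the new IPspace (all names here are illustrative, not from the original post):

    ::> network ipspace create -ipspace ips_tenant1
    ::> network port broadcast-domain create -broadcast-domain bd_tenant1 -mtu 1500 -ipspace ips_tenant1 -ports cluster01-01:e0d
    ::> network subnet create -subnet-name sub_tenant1 -broadcast-domain bd_tenant1 -ipspace ips_tenant1 -subnet 192.168.10.0/24 -gateway 192.168.10.1 -ip-ranges 192.168.10.10-192.168.10.20
    ::> network interface create -vserver svm_tenant1 -lif lif_tenant1 -role data -home-node cluster01-01 -home-port e0d -subnet-name sub_tenant1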

Storage resource management enhancements

  • Performance increase for random read operations
  • Root-data partitioning available for entry-level and AFF platforms
  • Support for Flash Pool SSD partitioning
  • Support for Storage Encryption on SSDs
  • New disk name format: starting in Data ONTAP 8.3, disk names have a new format that is based on the physical location of the disk. The disk name no longer changes depending on the node from which it is accessed.
  • Flash Pool caching of compressed data for read operations
  • Increased maximum cache limits
  • Change to the default action taken when a volume move cutover occurs
  • Support for caching policies on Flash Pool aggregates
  • Infinite Volume enhancements
  • Inline detection on deduplication-enabled volumes to identify blocks of zeros
  • FlexClone file and FlexClone LUN enhancements

FlexArray Virtualization (V-Series) enhancements

  • Support for MetroCluster configuration with array LUNs
  • Support for Data ONTAP installation on a system that uses only array LUNs
  • Support for load distribution over multiple paths to an array LUN
  • Support for two FC initiator ports connecting to a single target port

File access protocol enhancements

  • Changes to the way clustered Data ONTAP advertises DFS capabilities
  • Support for netgroup-by-host searches for NIS and LDAP
  • Capability to configure the number of group IDs allowed for NFS users
  • Capability to configure the ports used by some NFSv3 services
  • New commands for managing protocols for SVMs
  • LDAP support for RFC2307bis
  • New limits for local UNIX users, groups, and group members
  • LDAP client configurations now support multiple DNs
  • Support for qtree exports
  • New commands for configuring and troubleshooting name services
  • Enhancements for Kerberos 5
  • Capability for NFS clients to view the NFS exports list using "showmount -e"
  • Support for Storage-Level Access Guard
  • Support for FPolicy passthrough-read for offline data
  • Support for auditing CIFS logon and logoff events
  • Support for additional group policy objects
  • Support for Dynamic Access Control and central access policies
  • Support for auditing central access policy staging events
  • Support for NetBIOS aliases for CIFS servers
  • Support for AES encryption security for Kerberos-based communication
  • Support for setting the minimum authentication security level
  • Capability to determine whether SMB sessions are signed
  • Support for scheduling CIFS servers for automatic computer account password changes
  • Support for configuring character mapping for SMB file name translation
  • Support for additional share parameters and new share properties
  • Support for configuring bypass traverse checking for SMB users
  • Enhancements for managing home directories
  • Support for closing open files and SMB sessions
  • Support for managing shares using Microsoft Management Console
  • Support for additional CIFS server options

SAN enhancements

  • Support for IPspaces in SAN
  • Enhancement to reduce the number of paths from host to LUNs
  • Support for moving LUNs across volumes

Data protection enhancements

  • MetroCluster support
  • Support for network compression for SnapMirror and SnapVault
  • Capability to reset certain SEDs to factory settings
  • Support for greater number of cluster peers
  • Support for restoring files or LUNs from a SnapVault secondary
  • Storage Encryption support for IPv6
  • Support for version-flexible replication
  • Support for cluster peer authentication
  • Support for single sign-on (SSO) authentication
  • Support for secure NDMP
  • Support for SMTape
  • Support for KMIP 1.1

Hybrid cloud enhancements

  • Introducing Cloud ONTAP for Amazon Web Services (AWS)

New and changed features in OnCommand System Manager 8.3

  • Accessing System Manager 8.3
  • Storage pools
  • Enhanced aggregate management
  • Enhancements in the Disks window
  • Networking simplicity
  • Broadcast domains: you can create broadcast domains to provide a logical division of a computer network. Broadcast domains enable you to group network ports that belong to the same data link layer. The ports in the group can then be used to create network interfaces for data traffic or management traffic.
  • Subnets: you can create a subnet to provide a logical subdivision of an IP network to pre-allocate the IP addresses. A subnet enables you to create network interfaces more easily by specifying a subnet instead of an IP address and network mask values for each new interface.
  • IPv6 support: you can use IPv6 addresses on your cluster for various operations such as configuring LIFs, cluster peering, configuring DNS, configuring NIS, and configuring Kerberos.
  • Enhanced Storage Virtual Machine (SVM) setup wizard
  • BranchCache configuration
  • LUN move
  • Authenticated cluster peering
  • Qtree exports
  • Service Processors
  • Version-flexible mirror relationships: you can create a mirror relationship that is independent of the Data ONTAP version running on the source and destination clusters.
  • Snapshot policies: you can create Snapshot policies at the SVM level, which enables you to create Snapshot policies for a specific SVM.

How to clean up SnapMirror relationships in clustered Data ONTAP

This blog post lists the procedure used to clean up SnapMirror relationships in clustered Data ONTAP. In this scenario the destination cluster has a large number of volumes that are SnapMirror targets.

[Diagram: Drawing8]

Source Cluster: SrcCluster
Destination Cluster: DstCluster
Source SVM: SrcVsv
Destination SVM: DstVsv

Use the snapmirror show command to display the source and destination paths:

    DstCluster::*> snapmirror show -source-cluster SrcCluster
    Source Destination Mirror Relationship Total Last
    Path Type Path State Status Progress Healthy Updated
    ———– —- ———— ——- ————– ——— ——- ——–
    SrcVsv:src_vol_c0165 DP DstVsv:SrcCluster_src_vol_c0165r Snapmirrored Idle – false –
    SrcVsv:src_vol_c0180 DP DstVsv:SrcCluster_src_vol_c0180r Snapmirrored Idle – false –
    SrcVsv:src_vol_c0282 DP DstVsv:SrcCluster_src_vol_c0282r Snapmirrored Idle – false –
    SrcVsv:src_vol_c0284 DP DstVsv:SrcCluster_src_vol_c0284r Snapmirrored Idle – false –
    SrcVsv:src_vol_c0286 DP DstVsv:SrcCluster_src_vol_c0286r Snapmirrored Idle – true –
    SrcVsv:src_vol_c0313 DP DstVsv:SrcCluster_src_vol_c0313r Snapmirrored Idle – true –
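
If you only want to work on the relationships reported as unhealthy, the output can be filtered on the Healthy field; a hedged sketch:

    DstCluster::*> snapmirror show -source-cluster SrcCluster -healthy false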

Use the snapmirror quiesce command to quiesce the SnapMirror relationships:

    DstCluster::*> snapmirror quiesce -source-cluster SrcCluster -destination-path DstVsv:*
    Operation succeeded: snapmirror quiesce for destination “DstVsv:SrcCluster_src_vol_c0165r”.
    Operation succeeded: snapmirror quiesce for destination “DstVsv:SrcCluster_src_vol_c0180r”.
    Operation succeeded: snapmirror quiesce for destination “DstVsv:SrcCluster_src_vol_c0282r”.
    Operation succeeded: snapmirror quiesce for destination “DstVsv:SrcCluster_src_vol_c0284r”.
    Operation succeeded: snapmirror quiesce for destination “DstVsv:SrcCluster_src_vol_c0286r”.
    Operation succeeded: snapmirror quiesce for destination “DstVsv:SrcCluster_src_vol_c0313r”.

Break the SnapMirror relationships:

    DstCluster::*> snapmirror break -source-cluster SrcCluster -destination-path DstVsv:*
    [Job 18594] Job succeeded: SnapMirror Break Succeeded
    [Job 18595] Job succeeded: SnapMirror Break Succeeded
    [Job 18596] Job succeeded: SnapMirror Break Succeeded
    [Job 18597] Job succeeded: SnapMirror Break Succeeded
    [Job 18598] Job succeeded: SnapMirror Break Succeeded
    [Job 18599] Job succeeded: SnapMirror Break Succeeded
    [Job 18600] Job succeeded: SnapMirror Break Succeeded

[ ON THE SOURCE CLUSTER ]: Use the snapmirror list-destinations command to check the valid SnapMirror relationships:

    SrcCluster::> snapmirror list-destinations -destination-vserver DstVsv
    Source Destination Transfer Last Relationship
    Path Type Path Status Progress Updated Id
    ———– —– ———— ——- ——— ———— —————
    SrcVsv:src_vol_c0165 DP DstVsv:SrcCluster_src_vol_c0165r – – – a58a0def-2c61-11e4-b66e-123478563412
    SrcVsv:src_vol_c0282 DP DstVsv:SrcCluster_src_vol_c0282r – – – a5cbb158-2c61-11e4-8fb5-123478563412
    SrcVsv:src_vol_c0284 DP DstVsv:SrcCluster_src_vol_c0284r Idle – – a908f9f0-2c61-11e4-a8c1-123478563412
    SrcVsv:src_vol_c0286 DP DstVsv:SrcCluster_src_vol_c0286r Idle – – a773c018-2c61-11e4-8fb5-123478563412
    SrcVsv:src_vol_c0313 DP DstVsv:SrcCluster_src_vol_c0313r Idle – – aaaa528e-2c61-11e4-8fb5-123478563412

Issue the snapmirror release command from the source cluster:

    SrcCluster::> snapmirror release -destination-vserver DstVsv -destination-volume *
    [Job 74850] Job succeeded: SnapMirror Release Succeeded
    [Job 74851] Job succeeded: SnapMirror Release Succeeded
    [Job 74852] Job succeeded: SnapMirror Release Succeeded
    [Job 74853] Job succeeded: SnapMirror Release Succeeded
    [Job 74854] Job succeeded: SnapMirror Release Succeeded
    [Job 74855] Job succeeded: SnapMirror Release Succeeded

Delete the SnapMirror relationships from the destination cluster:

    DstCluster::*> snapmirror delete -source-vserver SrcVsv -destination-path DstVsv:*
    Operation succeeded: snapmirror delete for the relationship with destination “DstVsv:SrcCluster_src_vol_0043r”.
    Operation succeeded: snapmirror delete for the relationship with destination “DstVsv:SrcCluster_src_vol_0048r”.
    Operation succeeded: snapmirror delete for the relationship with destination “DstVsv:SrcCluster_src_vol_0070r”.
    Operation succeeded: snapmirror delete for the relationship with destination “DstVsv:SrcCluster_src_vol_0082r”.

The NetApp PowerShell Toolkit can be very handy for automating a large number of tasks:

    # Import the Data ONTAP module into the PowerShell session
    Import-Module DataONTAP
    # Connect to the source cluster
    Connect-NcController SrcCluster -Credential admin
    # Save a list of source volumes in a file called srcvolumes.txt. Import the volumes into the
    # variable $volumes and loop through each volume, deleting snapshots named "snapmirror*"
    $volumes = Get-Content C:\scripts\srcvolumes.txt
    foreach ($vol in $volumes) {Get-NcSnapshot $vol snapmi* | Remove-NcSnapshot -IgnoreOwners -Confirm:$false}
    # Connect to the destination cluster before cleaning up the destination volumes
    Connect-NcController DstCluster -Credential admin
    # Save a list of destination volumes in a file called dstvolumes.txt. Import the volumes into
    # the variable $volumes, then unmount, offline, and destroy each destination volume
    $volumes = Get-Content C:\scripts\dstvolumes.txt
    foreach ($vol in $volumes) {Dismount-NcVol $vol -VserverContext DstVsv -Confirm:$false}
    foreach ($vol in $volumes) {Set-NcVol $vol -Offline -VserverContext DstVsv -Confirm:$false}
    foreach ($vol in $volumes) {Remove-NcVol $vol -VserverContext DstVsv -Confirm:$false}

NetApp FAS storage Head Swap Procedure

INTRODUCTION

This document contains the verification checklists for the head swap procedure.

Current Serial Numbers:
700000293005 – WALLS1
700000293017 – WALLS2

New Serial Numbers:
700002090378 – new_WALLS1
700002090366 – new_WALLS2

Current SYSID
0151745322 – WALLS1
0151745252 – WALLS2

New SYSID
2016870400 – new_WALLS1
2016870518 – new_WALLS2

Current WALLS1
slot 1 OK: X2065A: PMC PM8001; PCI-E quad-port SAS (PM8003)
slot 2 OK: X2065A: PMC PM8001; PCI-E quad-port SAS (PM8003)
slot 3 OK: X1938A: Flash Cache 512 GB
slot 4 OK: X1107A: Chelsio S320E 2x10G NIC

Current WALLS2
sysconfig: slot 1 OK: X2065A: PMC PM8001; PCI-E quad-port SAS (PM8003)
sysconfig: slot 2 OK: X2065A: PMC PM8001; PCI-E quad-port SAS (PM8003)
sysconfig: slot 3 OK: X1938A: Flash Cache 512 GB
sysconfig: slot 4 OK: X1107A: Chelsio S320E 2x10G NIC

Current WALLS1
RLM Status
IP Address:         10.43.6.88
Netmask:            255.255.255.0
Gateway:            10.43.6.1

Current WALLS2
RLM Status
IP Address:         10.43.6.120
Netmask:            255.255.255.0
Gateway:            10.43.6.1

New Licenses
Keep a copy of the new licenses.

PRE UPGRADE PROCEDURE

Make a note of the following before performing the head swap activity:

  1. Old Controllers Serial Numbers
  2. Old Controllers System ID
  3. New Controllers Serial Numbers
  4. New Controllers System ID
  5. Location of expansion modules (PCI, PCIe) cards on the old controllers
  6. Label the cables attached to the Old controller Heads
  7. Make a note of the Remote LAN Mode (RLM) or Service Processor (SP) IP addresses
  8. Serial connection with the controllers to view console messages
  9. Make a note of the licenses for the new systems
  10. Make sure the network adapters on the new controllers match the locations on the old controllers. If not, you have to modify the /etc/rc file to make the new ports active.

STEPS FOR HEAD SWAP NETAPP FAS STORAGE SYSTEM

Tools required before and after head-swap

  1. Grounding strap
  2. #2 Phillips screw driver

New FAS storage system Installation and Setup

  1. Power on the new heads with a console and check the Data ONTAP version on the new controllers; it should match the current version on the old controllers. Depending on that, we can downgrade or upgrade the Data ONTAP version on the new heads.
  2. Follow the steps below to upgrade Data ONTAP on the new controllers to match the old controllers.
  3. Download the required Data ONTAP version from:

http://www.now.netapp.com

  4. Take a backup of the system files from the old controller (see the sketch after this list for one way to copy them off):

/etc/hosts
/etc/rc
/etc/cifs_homedir.cfg
/etc/exports
/etc/snapmirror.conf (only on destination filers)
/etc/resolv.conf
/etc/hosts.equiv
/etc/nsswitch.conf
/etc/quotas
/etc/usermap.cfg
/etc/tapeconfig
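
One way to preserve these files, sketched here under the assumption that the root volume (vol0) of the old controller is exported over NFS and a Linux admin host is available (host names and paths are placeholders; files that do not exist on your controller can simply be skipped):

    # on the Linux admin host
    mkdir -p /mnt/filer1_root ~/filer1_etc_backup
    mount filer1:/vol/vol0 /mnt/filer1_root
    cp /mnt/filer1_root/etc/{hosts,rc,cifs_homedir.cfg,exports,snapmirror.conf,resolv.conf,hosts.equiv,nsswitch.conf,quotas,usermap.cfg,tapeconfig} ~/filer1_etc_backup/
    umount /mnt/filer1_root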

  5. Trigger an autosupport
    • options autosupport.doit "Pre Head Swap"
  6. Disable autosupport
    • options autosupport.enable off
  7. Disable the cluster (controller failover)
    • cf disable
  8. Copy the Data ONTAP software package to the /etc/software directory and install it
    • software install <ontap_software>
    Note: do not run the download command yet.
    Make sure the network adapters on the new controllers match the locations on the old controllers. If not, you have to modify the /etc/rc file to make the new ports active.

  9. If SnapMirror or SnapVault relationships exist on this system, update them on all the volumes before system shutdown (see the sketch below for the commands).
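
A hedged sketch of those updates, run from the destination/secondary side of each relationship; the volume and qtree names are placeholders and the exact paths depend on your snapmirror.conf and SnapVault configuration:

    dst_filer> snapmirror update -w <dst_volume>
    dst_filer> snapvault update /vol/<sv_volume>/<qtree_name>
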
  10. Halt the system
    • halt

  11. Power down the controller head and then all the disk shelves, one at a time.
  12. Remove the power cables, network cables, and SAS cables from the old controller.
  13. Remove the old controller from the rack unit and replace it with the new controller. Mount and screw it into the rack as required.
  14. Attach the SAS cables to the disk shelves, the network cables to the network ports, and the power cords to the PSUs.
  15. Power on the disk shelves one at a time (wait about 1 minute until all the disks have spun up and the green LEDs are stable).
  16. Power up the new controller.
  17. Press Ctrl-C when prompted for boot options.
  18. Select option 5 to enter maintenance mode.
  19. Check whether all the disks are visible to the system:
    • disk show -v
  20. Assign the disks to this controller:
    • disk show -v (keep a copy of all the disks for future reference)
    • disk reassign -s <old_sysid> -d <new_sysid> or disk assign all (change the system IDs)
    • disk show -v
  21. Clear the mailboxes by entering the following commands:
    • mailbox destroy local
    • mailbox destroy partner
  22. Halt the system
    • halt
  23. At the LOADER prompt, verify date -u against another controller in production.
  24. Boot the system in normal mode
    • bye or boot_ontap
  25. Once the system is booted, install Data ONTAP
    • download
  26. Reboot the system
    • reboot
  27. Enable the cluster (controller failover)
    • cf enable
  28. Verify the HA pair is set up correctly
    • disk show -a (storage show disk -p will also tell whether MPHA is enabled)
  29. Add the licenses for the new controller
    • Licenses required: a-sis, nearstore_option, sv_ontap_sec
  30. Enable and trigger an autosupport
    • options autosupport.enable on
    • options autosupport.doit "Post Head Swap"
  31. Perform SP setup
    • sp setup

Testing Plan:

  1. Connect to the /etc directory on the new controller via NFS or CIFS and browse the contents
  2. Connect to other CIFS or NFS shares on the new controller
  3. Run a SnapVault/SnapMirror update once the head swap is completed
  4. Restore a test file from a volume Snapshot copy
  5. Check the network connectivity
  6. Check the Data ONTAP version and shelf firmware version

Backout procedure:

  1. Power down the new controller and reattach the old controller; recable the old controller back to the way it was
  2. Bring the old controller down to its CFE or LOADER prompt
  3. Swap the PCI/PCIe cards back
  4. Reboot the system

 

Unjoin Nodes from a Cluster (cDOT)

  1. Move or delete all volumes and member volumes from aggregates owned by the node to be unjoined
    • nitish-mgmt::> volume move target-aggr show -vserver nitish -volume nitish_test9
    • nitish-mgmt::> volume move start -perform-validation-only true -vserver nitish -volume nitish_test9 -destination-aggregate nitish01_sas_1
    • nitish-mgmt::> volume move start -vserver nitish -volume nitish_test9 -destination-aggregate nitish01_sas_1
    • nitish-mgmt::> vol move show
  2. Quiesce and break LS SnapMirror relationships with destinations on aggregates owned by the node(s) being removed
    • nitish-mgmt::> snapmirror show -type LS
    • nitish-mgmt::> snapmirror quiesce -destination-path <destination-path>
    • nitish-mgmt::> snapmirror break -destination-path <destination-path>
    • nitish-mgmt::> snapmirror delete -destination-path <destination-path>
  3. Offline and delete those SnapMirror destination volumes
    • nitish-mgmt::> vol offline -vserver nitish -volume nitish_test9
    • nitish-mgmt::> vol destroy -vserver nitish -volume nitish_test9
  4. Move or delete all aggregates (except for the mroot aggregate) owned by the node to be unjoined
    • nitish-mgmt::> aggr offline <aggr-name>
    • nitish-mgmt::> aggr delete <aggr-name>
  5. Delete or re-home all data LIFs from the node to be unjoined to other nodes in the cluster
    • nitish-mgmt::> network interface delete
    • nitish-mgmt::> network interface migrate
    • nitish-mgmt::> network interface modify
  6. Modify all LIF failover rules to remove ports on the node to be unjoined
    • nitish-mgmt::> failover-groups delete -failover-group data -node nitish-01 -port e0e
    • nitish-mgmt::> failover-groups delete -failover-group data -node nitish-02 -port e0e
  7. Disable SFO on the node to be unjoined
    • nitish-mgmt::> storage failover modify -node nitish-01 -enabled false
  8. Move epsilon to a node other than the node to be unjoined
    • nitish-mgmt::*> cluster show
    • nitish-mgmt::*> cluster ring show
    • nitish-mgmt::*> cluster modify -node nitish-01 -epsilon false
    • nitish-mgmt::*> cluster modify -node nitish-03 -epsilon true
    • nitish-mgmt::*> cluster show
    • nitish-mgmt::*> cluster ring show
  9. Delete all VLANs on the node to be unjoined
    • nitish-mgmt::> vlan delete
  10. Trigger an autosupport from the node to be unjoined
    • nitish-mgmt::> system node autosupport invoke -type all -node nitish-01 -message "pre_unjoin"
  11. Run the cluster unjoin command from a different node in the cluster besides the node that is to be unjoined
    • nitish-mgmt::*> cluster unjoin -node nitish-01
    • nitish-mgmt::*> cluster unjoin -node nitish-02
  • Warning: This command will unjoin node "nitish-01" from the cluster. You must unjoin the failover partner as well. After the node is successfully unjoined, erase its configuration and initialize all disks by using the "Clean configuration and initialize all disks (4)" option from the boot menu. Do you want to continue? {y|n}: y

    [Job 32] Cleaning cluster database [Job 32] Job succeeded: Cluster unjoin succeeded

    *******************************

    * *

    * Press Ctrl-C for Boot Menu. *

    * *

    *******************************

    This node was removed from a cluster. Before booting, use

    option (4) to initialize all disks and setup a new system.

    Normal Boot is prohibited.

    Please choose one of the following:

    (1) Normal Boot.

    (2) Boot without /etc/rc.

    (3) Change password.

    (4) Clean configuration and initialize all disks.

    (5) Maintenance mode boot.

    (6) Update flash from backup config.

    (7) Install new software first.

    (8) Reboot node.

    Selection (1-8)?

    Once Option 4 (or ‘wipeconfig’ and ‘init’) is run, the node is considered to be a ‘fresh’ node.

After the node is unjoined from the cluster, it cannot be re-joined to this or any other cluster until the wipeclean process is performed.

InterCluster Replication setup in clustered Data ONTAP 8.2

Clusters must be joined in a peer relationship before replication between different clusters is possible.
Cluster peering is a one-time operation that must be performed by the cluster administrators.
An intercluster LIF must be created on an intercluster-capable port, which is a port assigned the role of intercluster or a port assigned the role of data.

 
NOTE: Ports that are used for the intracluster cluster interconnect may not be used for intercluster replication.
 

Cluster peer requirements include the following:

  • The time on the clusters must be in sync within 300 seconds (five minutes) for peering to be successful. Cluster peers can be in different time zones.

  • At least one intercluster LIF must be created on every node in the cluster.
  • Every intercluster LIF requires an IP address dedicated for intercluster replication.
  • The correct maximum transmission unit (MTU) value must be used on the network ports that are used for replication.
  • All paths on a node used for intercluster replication should have equal performance characteristics.
  • The intercluster network must provide connectivity among all intercluster LIFs on all nodes in the cluster peers.
  • Every intercluster LIF on every node in a cluster must be able to connect to every intercluster LIF on every node in the peer cluster.
  1. Check the role of the ports in the cluster.
    • cluster01::> network port show
  2. Change the role of the port used on each node to intercluster.
    • cluster01::> network port modify -node cluster01-01 -port e0e -role intercluster
  3. Create an intercluster LIF on each node in cluster01. This example uses the LIF naming convention <nodename>_icl# for intercluster LIFs.
    • cluster01::> network int create -vserver cluster01-01 -lif cluster01-01_icl01 -role intercluster -home-node cluster01-01 -home-port e0e -address 192.168.1.201 -netmask 255.255.255.0
  4. Repeat the above steps on the destination cluster, cluster02.
  5. Configure the cluster peers.
    • cluster01::> cluster peer create -peer-addrs 192.168.2.203,192.168.2.204 -username admin
      Password: *********
  6. Display the newly created cluster peer relationship.
    • cluster01::> cluster peer show -instance
  7. Preview the health of the cluster peer relationship.
    • cluster01::> cluster peer health show
  8. Create the SVM peer relationship.
    • cluster01::> vserver peer create -vserver vs1.example0.com -peer-vserver vs5.example0.com -applications snapmirror -peer-cluster cluster02
  9. Verify the SVM peer relationship status.
    • cluster01::> vserver peer show-all

Complete the following requirements before creating an intercluster SnapMirror relationship:

  • Configure the source and destination nodes for intercluster networking.
  • Configure the source and destination clusters in a peer relationship.
  • Configure source and destination SVMs in a peer relationship.
  • Create a destination NetApp SVM; volumes cannot exist in Cluster-Mode without an SVM.
  • Verify that the source and destination SVMs have the same language type.
  • Create a destination volume with a type of data protection (DP), with a size equal to or greater than that of the source volume.
  • Assign a schedule to the SnapMirror relationship to perform periodic updates.
  1. Create a SnapMirror schedule on the destination cluster.
    • cluster02::> job schedule cron create -name Hourly_SnapMirror -minute 0
  2. Create a SnapMirror relationship of type DP and assign the schedule created in the previous step (vs1 and vs5 are the SVMs).
    • cluster02::> snapmirror create -source-path vs1:vol1 -destination-path vs5:vol1 -type DP -schedule Hourly_SnapMirror
  3. Review the SnapMirror relationship.
    • cluster02::> snapmirror show
  4. Initialize the SnapMirror relationship from the destination cluster.
    • cluster02::> snapmirror initialize -destination-path vs5:vol1
  5. Verify the progress of the replication.
    • cluster02::> snapmirror show

SnapMirror relationships can be failed over using the snapmirror break command and resynchronized in either direction using the snapmirror resync command.

In order for NAS clients to access data in the destination volumes, CIFS shares and NFS export policies must be created in the destination SVM and assigned to the volumes.
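
For example, for NFS access to the destination volume after a failover, something along these lines would be needed on the destination SVM; the policy name, client match, and junction path are placeholders and the exact rules depend on your environment:

    cluster02::> vserver export-policy create -vserver vs5 -policyname dr_policy
    cluster02::> vserver export-policy rule create -vserver vs5 -policyname dr_policy -ruleindex 1 -clientmatch 192.168.0.0/24 -rorule sys -rwrule sys
    cluster02::> volume mount -vserver vs5 -volume vol1 -junction-path /vol1
    cluster02::> volume modify -vserver vs5 -volume vol1 -policy dr_policy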

7 Mode Data ONTAP (8.2.1) Upgrade Procedure

I have been working more on NetApp clustered Data ONTAP systems lately. Recently I was asked to upgrade some 7-Mode systems which didn't have the /etc directory shared via CIFS, i.e. CIFS was not running on the base filer "vfiler0". The web server is configured on another 7-Mode storage system.

    1. Download the latest copy of Data ONTAP Upgrade/Downgrade guide from NetApp Support
    2. Copy the downloaded image to web server shared directory
    3. Follow Step 15 for international sites (where copying the Data ONTAP image may take longer) and copy the ONTAP images onto the filer. Do not install (update) the image yet
    4. Login to the 7 mode system and send autosupports
      • options autosupport.doit pre_NDU_upgrade
      • options autosupport.enable off
      • options snmp.enable off
    5. Take a backup of the vfiler configuration on the system
      • vfiler status -r
      • ifconfig -a
      • rdfile /etc/rc
    6. Check for SnapMirror/SnapVault relationships on the system. You must upgrade the destination system first.
    7. Verify all failed drives are replaced (vol status -f)
    8. Delete old core files from /etc/crash directory
    9. Verify no deduplication processes are active
      • sis status
      • sis stop (if dedupe is active on any volume)
    10. Confirm all the disks are multipathed
      • storage show disk -p
    11. Verify all aggregates are online
      • aggr status
    12. Make sure all aggregates have at least 5-6% free capacity
      • df -Ag
    13. Disable autogiveback
      • options cf.giveback.auto.enable off
    14. Turn off snapmirror and snapvault
      • snapmirror off
      • options snapvault.enable off
    15. For International sites copy the image to the controller and install using following commands:
      • software get http://<web-server>/data_ontap/821_q_image.tgz
      • software list (lists the files in /etc/software directory)
      • software update 821_q_image.tgz -r (install files without rebooting)
      • version -b (verify the new image is installed)
    16. For local sites
      • software update http://<web-server>/data_ontap/821_q_image.tgz -r
      • version -b (verify the new image is installed)
    17. Perform cf takeover from the partner
      • cf takeover (this will reboot the partner)
    18. Perform cf giveback from the partner once local system shows “waiting for giveback”
      • cf giveback -f
    19. Perform cf takeover from the partner
      • cf takeover -n (this will reboot the partner)
    20. Perform cf giveback from the partner once local system shows “waiting for giveback”
      • cf giveback -f
    21. Perform the same steps on both the nodes and verify both have the new code
      • version -b
    22. Turn on SnapMirror and SnapVault
      • snapmirror on
      • options snapvault.enable on
    23. Turn on autogiveback
      • options cf.giveback.auto.enable on
    24. Turn on autosupport
      • options autosupport.enable on
      • options autosupport.doit post_NDU_upgrade
      • options snmp.enable on
    25. Upon completion of the upgrade process, invoke the commands below to check for any issues
      • sysconfig -a
      • vol status -f
      • vol status
      • aggr status
      • vol status -s
    26. Update SP Firmware on the filers (system node service-processor image update-progress show)

 

If you are upgrading multiple 7-Mode filers at a time, you may use a "for" loop from a Linux/UNIX shell to run the same command on multiple filers.

for i in filer1 filer2 filer3 filer4; do echo ""; echo $i ; sudo rsh $i "priv set -q diag; sis status"; echo ""; done

 

 

clustered Data ONTAP 8.2.2P1 Upgrade Procedure

Here is the procedure to upgrade an 8-node cluster to clustered Data ONTAP 8.2.2P1.

Upgrade Prerequisites

  1. Replace any failed disks

Pre-upgrade Checklist

    1. Update shelf firmware on all nodes (use latest shelf firmware files available on NetApp Support)
    2. Update disk firmware on all nodes (Using the all.zip file available on NetApp Support)
    3. Send autosupport from all the nodes
      • system node autosupport invoke -type all -node * -message "Upgrading to 8.2.2P1"
    4. Verify Cluster Health
      • ::> cluster show
    5. Verify Cluster is in RDB
      • ::> set advanced
      • ::> cluster ring show -unitname vldb
      • ::> cluster ring show -unitname mgmt
      • ::> cluster ring show -unitname vifmgr
    6. Verify vserver health
      • ::> storage aggregate show -state !online
      • ::> volume show -state !online
      • ::> network interface show -status-oper down
      • ::> network interface show -is-home false
    7. Verify LIF failover configuration (data LIFs)
      • ::> network interface failover show

Start Upgrade

    1. Determine the current image & Download the new image
      • ::> system node image show
        New image file – 822P1_q_image.tgz
    2. Verify no jobs are running
      • ::> job show
    3. Delete any running or queued aggregate, volume, SnapMirror copy, or Snapshot job
      • ::> job delete –id <job-id>
      • ::> system node image update -node * -package
        http://<web-server>/data_ontap/8.2.2P1_q_image.tgz -setdefault true
    4. Verify software is installed
      • ::> system node image show
    5. Determine the “Epsilon” server
      • ::> set adv
      • ::*> cluster show

Reboot the epsilon server first and wait for it to come up; then move on to other nodes

      • ::> storage failover show
      • ::> storage failover modify -node * -auto-giveback false
      •  ::> network interface migrate-all -node clusternode-02
      • ::> storage failover takeover -bynode clusternode-01
      • ::> storage failover giveback -fromnode clusternode-01 -override-vetoes true
      • ::> storage failover show (keep verifying aggr show for aggrs to return back)

Verify the node booted up with 8.2.2P1 image

      • ::> system node image show

      Once the aggregates are home, verify the LIFs; if they are not home:

      • ::> network interface revert *

 

Repeat the following steps in the order below for all the nodes

      • ::> network interface migrate-all -node clusternode-01
      • ::> storage failover takeover -bynode clusternode-02
      • ::> storage failover giveback -fromnode clusternode-02 -override-vetoes true
      • ::> storage failover show (keep verifying aggr show for aggrs to return back)

Once the aggregates are home, verify the LIFs; if they are not home:

      • ::> network interface revert *

Ensure that the cluster is in quorum and that services are running before upgrading the next pair of nodes:

    • ::> cluster show
    • ::> cluster ring show

Reboot the nodes in the order:

  1. clusternode-01 (once this is up)
  2. clusternode-02, clusternode-04, clusternode-06 (once these are up then)
  3. clusternode-03, clusternode-05, clusternode-07 (once these are up then)
  4. clusternode-08

Enable Autogiveback for all the nodes

  • ::> storage failover modify -node nodename -auto-giveback true

Verify Post-upgrade cluster is healthy

  • ::> set advanced
  • ::> system node upgrade-revert show

The status for each node should be listed as complete.

Verify Cluster Health

  • ::> cluster show

Verify Cluster is in RDB

  • ::> set advanced
  • ::> cluster ring show -unitname vldb
  • ::> cluster ring show -unitname mgmt
  • ::> cluster ring show -unitname vifmgr

Verify vserver health

  • ::> storage aggregate show -state !online
  • ::> volume show -state !online
  • ::> network interface show -status-oper down
  • ::> network interface show -is-home false

Verify LIF failover configuration (data LIFs)

  • ::> network interface failover show

Backout Plan

  1. Verify that the Data ONTAP 8.2.2P1 Cluster-Mode software is installed:
    • system node image show
  2. Trigger autosupport
    • ::> system node autosupport invoke -type all -node <nodename> -message “Reverting to 8.2.2P1 Cluster-Mode”
  3. Check revert to settings
    • ::> system node revert-to -node <nodename> -check-only true -version 8.2.2P1
  4. Revert the node to 8.2.2P1
    • ::> system node revert-to -node <nodename> -version 8.2.2P1