Category Archives: March

Convert Snapmirror DP relation to XDP in clustered Data ONTAP

Clustered Data ONTAP 8.2 introduced the SnapVault (XDP) feature, along with the ability to convert existing SnapMirror (DP) relationships to SnapVault (XDP).

I had tested this feature a long time ago but never used it in a production environment. Recently I got a chance to implement it when a production volume with a high change rate (snapshots involved) grew to 60 TB, 20 TB of which was used by snapshots. Because the cluster contains FAS3250 controllers, the maximum volume size is 70 TB. After discussions with the customer it was decided to create a local SnapVault copy of the production volume that would hold all existing snapshots and accumulate more in the coming days until a new SnapVault cluster is set up. The data in the volume is highly compressible, so the SnapVault destination would consume less space.

Overview of this process (a condensed command sketch follows the list):

  1. Create a snapmirror DP relation
  2. Initialize the snapmirror DP relation
  3. Quiesce/Break/Delete the DP relation
  4. Resync the relation as snapmirror XDP
  5. Continue with vault updates
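
For reference, the whole conversion condenses to the command sequence below. This is only a sketch pulled from the steps that follow; <svm>, <src_vol> and <dst_vol> are placeholders, and the full console output for each step is shown further down.

cluster::> snapmirror create -source-path <svm>:<src_vol> -destination-path <svm>:<dst_vol> -type DP
cluster::> snapmirror initialize -destination-path <svm>:<dst_vol>
cluster::> snapmirror quiesce -destination-path <svm>:<dst_vol>
cluster::> snapmirror break -destination-path <svm>:<dst_vol>
cluster::> snapmirror delete -destination-path <svm>:<dst_vol>
cluster::> snapmirror resync -source-path <svm>:<src_vol> -destination-path <svm>:<dst_vol> -type XDP
cluster::> snapmirror update -destination-path <svm>:<dst_vol>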

CREATE SOURCE VOLUME
snowy-mgmt::*> vol create -volume AU2004NP0062_vol001 -aggregate snowy01_hybdata_01 -space-guarantee none -size 1tb -junction-path /AU2004NP0066_vol001 -state online -junction-active true
  (volume create)
[Job 85] Job succeeded: Successful

snowy-mgmt::*> vol show -volume AU2004NP0062_vol001 -fields security-style
  (volume show)
vserver volume              security-style
------- ------------------- --------------
snowy   AU2004NP0062_vol001 ntfs

CREATE DESTINATION VOLUME
snowy-mgmt::*> vol create -volume AU2004NP0062_vol001_sv -aggregate snowy02_hybdata_01 -space-guarantee none -size 3tb -type DP -state online
  (volume create)
[Job 86] Job succeeded: Successful

snowy-mgmt::*> vol show -volume AU2004NP0062_vol001*
  (volume show)
Vserver   Volume                  Aggregate           State    Type  Size  Available  Used%
--------- ----------------------- ------------------- -------- ----- ----- ---------- -----
snowy     AU2004NP0062_vol001     snowy01_hybdata_01  online   RW    1TB   87.01GB    91%
snowy     AU2004NP0062_vol001_sv  snowy02_hybdata_01  online   DP    3TB   87.01GB    97%
2 entries were displayed.

CREATE SNAPMIRROR RELATION
snowy-mgmt::*> snapmirror create -source-path snowy:AU2004NP0062_vol001 -destination-path snowy:AU2004NP0062_vol001_sv -type DP -vserver snowy
Operation succeeded: snapmirror create for the relationship with destination "snowy:AU2004NP0062_vol001_sv".

snowy-mgmt::*> snapmirror show -type DP
Source Path                Type  Destination Path              Mirror State    Relationship Status  Total Progress  Healthy  Last Updated
-------------------------- ----- ----------------------------- --------------- -------------------- --------------- -------- ------------
snowy:AU2004NP0062_vol001  DP    snowy:AU2004NP0062_vol001_sv  Uninitialized   Idle                 -               true     -

INITIALIZE SNAPMIRROR
snowy-mgmt::*> snapmirror initialize -destination-path snowy:AU2004NP0062_vol001_sv
Operation is queued: snapmirror initialize of destination "snowy:AU2004NP0062_vol001_sv".

CREATE SNAPSHOTS ON SOURCE VOLUME
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_01
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_02
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_03
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_04
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_05
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_06
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_07
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_08
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_09
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_00

snowy-mgmt::*> snapshot show -volume AU2004NP0062_vol001
                                                                                        ---Blocks---
Vserver  Volume               Snapshot                                                  State  Size   Total%  Used%
-------- -------------------- --------------------------------------------------------- ------ ------ ------  -----
snowy    AU2004NP0062_vol001  hourly.2016-03-24_1005                                    valid  60KB   0%      27%
                              snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100529  valid  80KB  0%  33%
                              sas_snap_01                                               valid  60KB   0%      27%
                              sas_snap_02                                               valid  64KB   0%      29%
                              sas_snap_03                                               valid  76KB   0%      32%
                              sas_snap_04                                               valid  60KB   0%      27%
                              sas_snap_05                                               valid  64KB   0%      29%
                              sas_snap_06                                               valid  64KB   0%      29%
                              sas_snap_07                                               valid  64KB   0%      29%
                              sas_snap_08                                               valid  64KB   0%      29%
                              sas_snap_09                                               valid  76KB   0%      32%
                              sas_snap_00                                               valid  56KB   0%      26%
12 entries were displayed.

snowy-mgmt::*> snapmirror show
Source Path                Type  Destination Path              Mirror State    Relationship Status  Total Progress  Healthy  Last Updated
-------------------------- ----- ----------------------------- --------------- -------------------- --------------- -------- ------------
snowy:AU2004NP0062_vol001  DP    snowy:AU2004NP0062_vol001_sv  Snapmirrored    Idle                 -               true     -

UPDATE SNAPMIRROR TO TRANSFER ALL SNAPSHOTS TO DESTINATION VOLUME
snowy-mgmt::*> snapmirror update -destination-path snowy:*
Operation is queued: snapmirror update of destination "snowy:AU2004NP0062_vol001_sv".
1 entry was acted on.

SNAPSHOTS REACHED THE DESTINATION VOLUME
snowy-mgmt::*> snapshot show -volume AU2004NP0062_vol001_sv
                                                                                        ---Blocks---
Vserver  Volume                  Snapshot                                               State  Size   Total%  Used%
-------- ----------------------- ------------------------------------------------------ ------ ------ ------  -----
snowy    AU2004NP0062_vol001_sv  hourly.2016-03-24_1005                                 valid  60KB   0%      28%
                                 snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100529  valid  80KB  0%  34%
                                 sas_snap_01                                            valid  60KB   0%      28%
                                 sas_snap_02                                            valid  64KB   0%      29%
                                 sas_snap_03                                            valid  76KB   0%      33%
                                 sas_snap_04                                            valid  60KB   0%      28%
                                 sas_snap_05                                            valid  64KB   0%      29%
                                 sas_snap_06                                            valid  64KB   0%      29%
                                 sas_snap_07                                            valid  64KB   0%      29%
                                 sas_snap_08                                            valid  64KB   0%      29%
                                 sas_snap_09                                            valid  76KB   0%      33%
                                 sas_snap_00                                            valid  72KB   0%      32%
                                 snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100707  valid  0B    0%  0%
13 entries were displayed.

QUIESCE, BREAK AND DELETE SNAPMIRRORS
snowy-mgmt::*> snapmirror quiesce -destination-path snowy:AU2004NP0062_vol001_sv
Operation succeeded: snapmirror quiesce for destination "snowy:AU2004NP0062_vol001_sv".

snowy-mgmt::*> snapmirror break -destination-path snowy:AU2004NP0062_vol001_sv
[Job 87] Job succeeded: SnapMirror Break Succeeded

snowy-mgmt::*> snapmirror delete -destination-path snowy:AU2004NP0062_vol001_sv
Operation succeeded: snapmirror delete for the relationship with destination "snowy:AU2004NP0062_vol001_sv".

snowy-mgmt::*> snapmirror show
This table is currently empty.

RESYNC SNAPMIRROR AS XDP RELATION
snowy-mgmt::*> snapmirror resync -source-path snowy:AU2004NP0062_vol001 -destination-path snowy:AU2004NP0062_vol001_sv -type XDP

Warning: All data newer than Snapshot copy snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100707 on volume snowy:AU2004NP0062_vol001_sv will be deleted.
         Verify there is no XDP relationship whose source volume is "snowy:AU2004NP0062_vol001_sv". If such a relationship exists then you are creating an unsupported XDP to XDP cascade.
Do you want to continue? {y|n}: y
[Job 88] Job succeeded: SnapMirror Resync Transfer Queued

snowy-mgmt::*> snapmirror show
Source Path                Type  Destination Path              Mirror State    Relationship Status  Total Progress  Healthy  Last Updated
-------------------------- ----- ----------------------------- --------------- -------------------- --------------- -------- ------------
snowy:AU2004NP0062_vol001  XDP   snowy:AU2004NP0062_vol001_sv  Snapmirrored    Idle                 -               true     -

SNAPSHOTS EXIST ON BOTH SOURCE AND DESTINATION VOLUME AFTER RESYNC
snowy-mgmt::*> snapshot show -volume AU2004NP0062_vol001
                                                                                        ---Blocks---
Vserver  Volume               Snapshot                                                  State  Size   Total%  Used%
-------- -------------------- --------------------------------------------------------- ------ ------ ------  -----
snowy    AU2004NP0062_vol001  hourly.2016-03-24_1005                                    valid  68KB   0%      29%
                              sas_snap_01                                               valid  60KB   0%      27%
                              sas_snap_02                                               valid  64KB   0%      28%
                              sas_snap_03                                               valid  76KB   0%      32%
                              sas_snap_04                                               valid  60KB   0%      27%
                              sas_snap_05                                               valid  64KB   0%      28%
                              sas_snap_06                                               valid  64KB   0%      28%
                              sas_snap_07                                               valid  64KB   0%      28%
                              sas_snap_08                                               valid  64KB   0%      28%
                              sas_snap_09                                               valid  76KB   0%      32%
                              sas_snap_00                                               valid  72KB   0%      31%
                              snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100707  valid  72KB  0%  31%
12 entries were displayed.

snowy-mgmt::*> snapshot show -volume AU2004NP0062_vol001_sv
                                                                                        ---Blocks---
Vserver  Volume                  Snapshot                                               State  Size   Total%  Used%
-------- ----------------------- ------------------------------------------------------ ------ ------ ------  -----
snowy    AU2004NP0062_vol001_sv  hourly.2016-03-24_1005                                 valid  60KB   0%      28%
                                 snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100529  valid  80KB  0%  34%
                                 sas_snap_01                                            valid  60KB   0%      28%
                                 sas_snap_02                                            valid  64KB   0%      29%
                                 sas_snap_03                                            valid  76KB   0%      33%
                                 sas_snap_04                                            valid  60KB   0%      28%
                                 sas_snap_05                                            valid  64KB   0%      29%
                                 sas_snap_06                                            valid  64KB   0%      29%
                                 sas_snap_07                                            valid  64KB   0%      29%
                                 sas_snap_08                                            valid  64KB   0%      29%
                                 sas_snap_09                                            valid  76KB   0%      33%
                                 sas_snap_00                                            valid  72KB   0%      32%
                                 snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100707  valid  76KB  0%  33%
13 entries were displayed.

TURN ON VOLUME EFFICIENCY - DESTINATION VOLUME
snowy-mgmt::*> vol efficiency on -volume AU2004NP0062_vol001_sv
  (volume efficiency on)
Efficiency for volume "AU2004NP0062_vol001_sv" of Vserver "snowy" is enabled.
Already existing data could be processed by running "volume efficiency start -vserver snowy -volume AU2004NP0062_vol001_sv -scan-old-data true".

CREATE A CIFS SHARE ON SOURCE VOLUME AND COPY SOME DATA
snowy-mgmt::*> cifs share create -share-name sas_vol -path /AU2004NP0062_vol001 -share-properties oplocks,browsable,changenotify

snowy-mgmt::*> vol show -volume AU2004NP0062_vol001 -fields used
  (volume show)
vserver volume              used
------- ------------------- ------
snowy   AU2004NP0062_vol001 2.01GB

CREATE SNAPSHOT AND SNAPMIRROR POLICIES WITH THE SAME SNAPMIRROR LABELS
snowy-mgmt::*> cron show
  (job schedule cron show)
Name             Description
---------------- -----------------------------------------------------
5min             @:00,:05,:10,:15,:20,:25,:30,:35,:40,:45,:50,:55
8hour            @2:15,10:15,18:15
daily            @0:10
hourly           @:05
weekly           Sun@0:15
5 entries were displayed.

snowy-mgmt::*> snapshot policy create -policy keep_more_snaps -enabled true -schedule1 5min -count1 5 -prefix1 sv -snapmirror-label1 mins -vserver snowy

snowy-mgmt::*> snapmirror policy create -vserver snowy -policy XDP_POL

snowy-mgmt::*> snapmirror policy add-rule -vserver snowy -policy XDP_POL -snapmirror-label mins -keep 50

APPLY SNAPSHOT POLICY TO SOURCE VOLUME
snowy-mgmt::*> volume modify -volume AU2004NP0062_vol001 -snapshot-policy keep_more_snaps

Warning: You are changing the Snapshot policy on volume AU2004NP0062_vol001 to keep_more_snaps. Any Snapshot copies on this volume from the previous policy will not be deleted by this new Snapshot policy.
Do you want to continue? {y|n}: y
Volume modify successful on volume: AU2004NP0062_vol001

APPLY SNAPMIRROR POLICY TO DESTINATION VOLUME
snowy-mgmt::*> snapmirror modify -destination-path snowy:AU2004NP0062_vol001_sv -policy XDP_POL
Operation succeeded: snapmirror modify for the relationship with destination "snowy:AU2004NP0062_vol001_sv".

snowy-mgmt::*> vol show -volume AU2004NP0062_vol001 -fields snapshot-policy
  (volume show)
vserver volume              snapshot-policy
------- ------------------- ---------------
snowy   AU2004NP0062_vol001 keep_more_snaps

snowy-mgmt::*> snapshot policy show keep_more_snaps -instance
                      Vserver: snowy
         Snapshot Policy Name: keep_more_snaps
      Snapshot Policy Enabled: true
                 Policy Owner: vserver-admin
                      Comment: -
    Total Number of Schedules: 1
Schedule                Count Prefix                SnapMirror Label
----------------------- ----- --------------------- -------------------
5min                    5     sv                    mins
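
As a quick sanity check (not shown in the output above), the vault policy and its retention rule can also be confirmed directly; the exact output fields vary by release:

snowy-mgmt::*> snapmirror policy show -vserver snowy -policy XDP_POL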

UPDATE SNAPMIRROR RELATIONSHIP (SNAPVAULT)
snowy-mgmt::*> snapmirror update -destination-path snowy:AU2004NP0062_vol001_sv
Operation is queued: snapmirror update of destination "snowy:AU2004NP0062_vol001_sv".

snowy-mgmt::*> snapmirror show
Source Path                Type  Destination Path              Mirror State    Relationship Status  Total Progress  Healthy  Last Updated
-------------------------- ----- ----------------------------- --------------- -------------------- --------------- -------- --------------
snowy:AU2004NP0062_vol001  XDP   snowy:AU2004NP0062_vol001_sv  Snapmirrored    Transferring         0B              true     03/24 10:56:29

THE USED SIZE OF SOURCE AND DESTINATION VOLUMES IS ABOUT THE SAME
snowy-mgmt::*> vol show -volume AU* -fields used
  (volume show)
vserver volume                 used
------- ---------------------- ------
snowy   AU2004NP0062_vol001    2.01GB
snowy   AU2004NP0062_vol001_sv 2.06GB
2 entries were displayed.

DEDUPE JOB IS RUNNING
snowy-mgmt::*> sis status
Vserver    Volume                  State    Status   Progress              Policy
---------- ----------------------- -------- -------- --------------------- ------
snowy      AU2004NP0062_vol001_sv  Enabled  Active   539904 KB (25%) Done  -

SIZE OF DESTINATION VOLUME AFTER DEDUPE JOB COMPLETED
snowy-mgmt::*> sis status
Vserver    Volume                  State    Status   Progress              Policy
---------- ----------------------- -------- -------- --------------------- ------
snowy      AU2004NP0062_vol001_sv  Enabled  Idle     Idle for 00:00:05     -

snowy-mgmt::*> vol show -volume AU* -fields used
  (volume show)
vserver volume                 used
------- ---------------------- -------
snowy   AU2004NP0062_vol001    2.01GB
snowy   AU2004NP0062_vol001_sv 274.9MB
2 entries were displayed.

START COMPRESSION JOB ON DESTINATION VOLUME BY SCANNING EXISTING DATA
snowy-mgmt::*> vol efficiency start -volume AU2004NP0062_vol001_sv -scan-old-data
  (volume efficiency start)

Warning: This operation scans all of the data in volume "AU2004NP0062_vol001_sv" of Vserver "snowy". It may take a significant time, and may degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "AU2004NP0062_vol001_sv" of Vserver "snowy" has started.

SIZE OF DESTINATION VOLUME AFTER COMPRESSION JOB COMPLETED
snowy-mgmt::*> vol show -volume AU* -fields used
  (volume show)
vserver volume                 used
------- ---------------------- -------
snowy   AU2004NP0062_vol001    2.01GB
snowy   AU2004NP0062_vol001_sv 49.76MB
2 entries were displayed.
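
In this setup the vault updates were triggered manually with snapmirror update. If you would rather have ONTAP drive them, a cron schedule can also be attached to the XDP relationship. This is only a sketch; the 5min schedule is reused here as an assumption and was not applied in the steps above:

snowy-mgmt::*> snapmirror modify -destination-path snowy:AU2004NP0062_vol001_sv -schedule 5min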

Unjoin Nodes from a Cluster (cDOT)

  1. Move or delete all volumes and member volumes from aggregates owned by the node to be unjoined
    • nitish-mgmt::> volume move target-aggr show -vserver nitish -volume nitish_test9
    • nitish-mgmt::> volume move start -perform-validation-only true -vserver nitish -volume nitish_test9 -destination-aggregate nitish01_sas_1
    • nitish-mgmt::> volume move start -vserver nitish -volume nitish_test9 -destination-aggregate nitish01_sas_1
    • nitish-mgmt::> vol move show
  2. Quiesce and Break LS Snapmirrors with destinations on aggregates that are owned by the node/s being removed
    • nitish-mgmt::> snapmirror show -type LS
    • nitish-mgmt::> snapmirror quiesce -destination-path <destination-path>
    • nitish-mgmt::> snapmirror break -destination-path <destination-path>
    • nitish-mgmt::> snapmirror delete -destination-path <destination-path>
  3. Offline and delete those snapmirror destination volumes
    • nitish-mgmt::> vol offline -vserver nitish -volume nitish_test9
    • nitish-mgmt::> vol destroy -vserver nitish -volume nitish_test9
  4. Move or delete all aggregates (except for the mroot aggregate) owned by the node to be unjoined
    • nitish-mgmt::> aggr offline <aggr-name>
    • nitish-mgmt::> aggr delete <aggr-name>
  5. Delete or re-home all data LIFs from the node to be unjoined to other nodes in the cluster
    • nitish-mgmt::> network interface delete
    • nitish-mgmt::> network interface migrate
    • nitish-mgmt::> network interface modify
  6. Modify all LIF failover rules to remove ports on the node to be unjoined
    • nitish-mgmt::> failover-groups delete -failover-group data -node nitish-01 -port e0e
    • nitish-mgmt::> failover-groups delete -failover-group data -node nitish-02 -port e0e
  7. Disable SFO on the node to be unjoined
    • nitish-mgmt::> storage failover modify -node nitish-01 -enabled false
  8. Move epsilon to a node other than the node to be unjoined
    • nitish-mgmt::*> cluster show
    • nitish-mgmt::*> cluster ring show
    • nitish-mgmt::*> cluster modify -node nitish-01 -epsilon false
    • nitish-mgmt::*> cluster modify -node nitish-03 -epsilon true
    • nitish-mgmt::*> cluster show
    • nitish-mgmt::*> cluster ring show
  9. Delete all VLANs on the node to be unjoined
    • nitish-mgmt::> vlan delete
  10. Trigger an autosupport from the node to be unjoined
    • nitish-mgmt::> system node autosupport invoke -type all -node nitish-01 -message "pre_unjoin"
  11. Run the cluster unjoin command from a different node in the cluster besides the node that is to be unjoined
    • nitish-mgmt::*> cluster unjoin -node nitish-01
    • nitish-mgmt::*> cluster unjoin -node nitish-02
  • Warning: This command will unjoin node "nitish-01" from the cluster. You must unjoin the failover partner as well. After the node is successfully unjoined, erase its configuration and initialize all disks by using the "Clean configuration and initialize all disks (4)" option from the boot menu. Do you want to continue? {y|n}: y

    [Job 32] Cleaning cluster database
    [Job 32] Job succeeded: Cluster unjoin succeeded

    *******************************
    *                             *
    * Press Ctrl-C for Boot Menu. *
    *                             *
    *******************************

    This node was removed from a cluster. Before booting, use
    option (4) to initialize all disks and setup a new system.
    Normal Boot is prohibited.

    Please choose one of the following:

    (1) Normal Boot.
    (2) Boot without /etc/rc.
    (3) Change password.
    (4) Clean configuration and initialize all disks.
    (5) Maintenance mode boot.
    (6) Update flash from backup config.
    (7) Install new software first.
    (8) Reboot node.

    Selection (1-8)?

    Once Option 4 (or ‘wipeconfig’ and ‘init’) is run, the node is considered to be a ‘fresh’ node.

After the node is unjoined from the cluster, it cannot be re-joined to this or any other cluster until the wipeclean process is performed.
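
As a final check, I would confirm from one of the remaining nodes that the unjoined nodes are gone and the cluster is still healthy, using the same commands as in step 8 above:

nitish-mgmt::*> cluster show
nitish-mgmt::*> cluster ring show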

InterCluster Replication setup in clustered Data ONTAP 8.2

Clusters must be joined in a peer relationship before replication between different clusters is possible.
Cluster peering is a one-time operation that must be performed by the cluster administrators.
An intercluster LIF must be created on an intercluster-capable port, which is a port assigned the role of intercluster or a port assigned the role of data.

 
NOTE: Ports that are used for the cluster interconnect (intracluster traffic) may not be used for intercluster replication.

Cluster peer requirements include the following:

  • The time on the clusters must be in sync within 300 seconds (five minutes) for peering to be successful. Cluster peers can be in different time zones.
  • At least one intercluster LIF must be created on every node in the cluster.
  • Every intercluster LIF requires an IP address dedicated for intercluster replication.
  • The correct maximum transmission unit (MTU) value must be used on the network ports that are used for replication.
  • All paths on a node used for intercluster replication should have equal performance characteristics.
  • The intercluster network must provide connectivity among all intercluster LIFs on all nodes in the cluster peers.
  • Every intercluster LIF on every node in a cluster must be able to connect to every intercluster LIF on every node in the peer cluster.
  1. Check the role of the ports in the cluster.
        cluster01::> network port show
  2. Change the role of the port used on each node to intercluster.
        cluster01::> network port modify -node cluster01-01 -port e0e -role intercluster
  3. Create an intercluster LIF on each node in cluster01. This example uses the LIF naming convention <nodename>_icl# for intercluster LIFs.
        cluster01::> network int create -vserver cluster01-01 -lif cluster01-01_icl01 -role intercluster -home-node cluster01-01 -home-port e0e -address 192.168.1.201 -netmask 255.255.255.0
  4. Repeat the above steps on the destination cluster, cluster02.
  5. Configure the cluster peers.
        cluster01::> cluster peer create -peer-addrs 192.168.2.203,192.168.2.204 -username admin
        Password: *********
  6. Display the newly created cluster peer relationship.
        cluster01::> cluster peer show -instance
  7. Check the health of the cluster peer relationship.
        cluster01::> cluster peer health show
  8. Create the SVM peer relationship.
        cluster01::> vserver peer create -vserver vs1.example0.com -peer-vserver vs5.example0.com -applications snapmirror -peer-cluster cluster02
  9. Verify the SVM peer relationship status.
        cluster01::> vserver peer show-all

Complete the following requirements before creating an intercluster SnapMirror relationship:

  • Configure the source and destination nodes for intercluster networking.
  • Configure the source and destination clusters in a peer relationship.
  • Configure source and destination SVMs in a peer relationship.
  • Create a destination NetApp SVM; volumes cannot exist in Cluster-Mode without an SVM.
  • Verify that the source and destination SVMs have the same language type.
  • Create a destination volume of type data protection (DP), with a size equal to or greater than that of the source volume (see the sketch after this list).
  • Assign a schedule to the SnapMirror relationship to perform periodic updates.
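
Creating the DP destination volume itself is not shown in the steps below; it looks roughly like the sketch here (the aggregate and size are placeholders, and the size must be at least that of the source volume):

cluster02::> volume create -vserver vs5 -volume vol1 -aggregate <aggr_name> -size <size> -type DP
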
  1. Create a SnapMirror schedule on the destination cluster.
        cluster02::> job schedule cron create -name Hourly_SnapMirror -minute 0
  2. Create a SnapMirror relationship with -type DP and assign the schedule created in the previous step (vs1 and vs5 are the SVMs).
        cluster02::> snapmirror create -source-path vs1:vol1 -destination-path vs5:vol1 -type DP -schedule Hourly_SnapMirror
  3. Review the SnapMirror relationship.
        cluster02::> snapmirror show
  4. Initialize the SnapMirror relationship from the destination cluster.
        cluster02::> snapmirror initialize -destination-path vs5:vol1
  5. Verify the progress of the replication.
        cluster02::> snapmirror show

SnapMirror relationships can be failed over using the snapmirror break command and resynchronized in either direction using the snapmirror resync command.
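
For example, a failover to the destination and a later resynchronization in the original direction would look roughly like this, using the paths from the example above (a sketch, not a complete failover runbook):

cluster02::> snapmirror quiesce -destination-path vs5:vol1
cluster02::> snapmirror break -destination-path vs5:vol1
(clients now access vs5:vol1 read-write)
cluster02::> snapmirror resync -destination-path vs5:vol1
(re-establishes the original vs1:vol1 to vs5:vol1 direction and discards the changes made on vs5:vol1)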

In order for NAS clients to access data in the destination volumes, CIFS shares and NFS export policies must be created in the destination SVM and assigned to the volumes.

Perl Script to collect Perfstats from NetApp Storage Systems

I have written a Perl script that is used to run perfstats on NetApp storage systems. This script runs on a Linux host.

The script checks for any running instance of perfstats on the storage system. If another perfstats instance is running on the filer, this script stops and logs an error message. The script can be scheduled to run as cronjob.

#!/usr/bin/perl

print_usage();

$datetime = &date;
$logdir = "/tmp/";
$projects_directory = "/prj/perfstats"; # directory that contains perfstats.sh script

my $args = $#ARGV + 1;
if ($args < 2) {
print "insufficient arguments exitting the script\n";
logit("insufficient arguments exitting the script\n");
exit 1;
}
logit("Starting perfcmd.pl\n");

$perfcmd_controller = shift;
$perfcmd_directory = shift; # perfcmd directory contains the perl script perfcmd.pl and perfstat.sh script.
#########################################################
# To use RSH use below command
$perfcmd_command = "/perfstat7_20130425.sh -f $perfcmd_controller -l root: -F -I -i 10 -t 5";
#########################################################
# To use SSH use below command
# $perfcmd_command    = "/perfstat7_20130425.sh -f $perfcmd_controller -S -l nitish -F -I -i 30 -t 4";
#########################################################
$perfcmd_final_cmd = "$perfcmd_directory$perfcmd_command";
$perfcmd_output_suffix = "/$perfcmd_controller.perfcmd.$datetime.out";
$perfcmd_output = "$projects_directory$perfcmd_output_suffix";

print "\n";

my @perf_processes = `ps aux | grep perfstat`;
my $process;
my $count = 0;

foreach $process (@perf_processes){
if ($process =~ /$perfcmd_controller/) {
logit("first instance of perfstats running for $perfcmd_controlle\n");
$count++;
}
}
if ($count >1) {
logit("exiting because another instance of perfstat is running for $perfcmd_controller\n");
exit 1;
}

logit("No previous iteration of perfstat is running for $perfcmd_controller");
# Run the perfstat and send its output to the output file; check the shell exit status
$output = `$perfcmd_final_cmd > $perfcmd_output 2>&1`;
if ($? != 0) {
logit("error running perfstat for $perfcmd_controller\n");
exit 1;
}

logit("Completed perfcmd.pl\n");

exit 0;

###############################
######### Functions ###########
###############################
sub date {
# build a timestamp of the form YYYY_MM_DD_HH_MM_SS with zero-padded fields
my @date = localtime();
my $out = sprintf("%04d_%02d_%02d_%02d_%02d_%02d",
$date[5] + 1900, $date[4] + 1, $date[3], $date[2], $date[1], $date[0]);
return $out;
}
sub logit {
my $s = shift;
my ($logsec,$logmin,$loghour,$logmday,$logmon,$logyear,$logwday,$logyday,$logisdst)=localtime(time);
my $prefix = "[";
my $suffix = "]";
my $l_name = "perfcmd";
my $logtimestamp = sprintf("%4d-%02d-%02d %02d:%02d:%02d",$logyear+1900,$logmon+1,$logmday,$loghour,$logmin,$logsec);
my $logtimestamp_final = "$prefix$logtimestamp$suffix";
$logmon++;
my $logfile="$logdir$l_name-$logmon-$logmday-logfile.log";
my $fh;
open($fh, '>>', "$logfile") or die "$logfile: $!";
print $fh "$logtimestamp_final $s\n";
close($fh);
}
sub print_usage() {
print "################################################################################\n";
print "#\tusage:\n";
print "#\t./perfcmd <filer name or IP Address> <location of perfstat.sh>\n";
print "#\te.g.\n";
print "#\t./perfcmd.pl ntap_filer /usr/nitish/perfcmd\n";
print "#\t./perfcmd.pl  /usr/nitish/perfcmd\n";
print "################################################################################\n";
}

__END__
Use the procedure below to schedule the script as a cron job on the Linux host:

bash
export EDITOR=vi
crontab -e
/usr/nitish/perfstats/perfcmd.pl
10,15,20,25 6-14 * * * /usr/nitish/perfstats/perfcmd.pl "ntap_filer" "/usr2/nitish/perfstats"
crontab -l

7 Mode Data ONTAP (8.2.1) Upgrade Procedure

I have been working more on NetApp clustered Data ONTAP systems lately. Recently I was asked to upgrade some 7-mode systems that did not have the /etc directory shared via CIFS, i.e. CIFS was not running on the base filer "vfiler0". The web server is configured on another 7-mode storage system.

    1. Download the latest copy of Data ONTAP Upgrade/Downgrade guide from NetApp Support
    2. Copy the downloaded image to web server shared directory
    3. Follow step 15 for international sites (where copying the Data ONTAP image may take longer) and copy the ONTAP image onto the filer. Do not install (update) the image yet
    4. Login to the 7 mode system and send autosupports
      • options autosupport.doit pre_NDU_upgrade
      • options autosupport.enable off
      • options snmp.enable off
    5. Take a backup of the vfiler configuration on the system
      • vfiler status -r
      • ifconfig -a
      • rdfile /etc/rc
    6. Check for Snapmirror/Snapvault relations on the system. You must upgrade the destination system first.
    7. Verify all failed drives are replaced (vol status -f)
    8. Delete old core files from /etc/crash directory
    9. Verify no deduplication processes are active
      • sis status
      • sis stop (if dedupe is active on any volume)
    10. Confirm all the disks are multipathed
      • storage show disk -p
    11. Verify all aggregates are online
      • aggr status
    12. Make sure all aggregates have at least 5-6% free capacity
      • df -Ag
    13. Disable autogiveback
      • options cf.giveback.auto.enable off
    14. Turn off snapmirror and snapvault
      • snapmirror off
      • options snapvault.enable off
    15. For international sites, copy the image to the controller and install it using the following commands:
      • software get http://<web-server>/data_ontap/821_q_image.tgz
      • software list (lists the files in /etc/software directory)
      • software update 821_q_image.tgz -r (install files without rebooting)
      • version -b (verify the new image is installed)
    16. For local sites
      • software update http://<web-server>/data_ontap/821_q_image.tgz -r
      • version -b (verify the new image is installed)
    17. Perform cf takeover from the partner
      • cf takeover (this will reboot the partner)
    18. Perform cf giveback from the partner once local system shows “waiting for giveback”
      • cf giveback -f
    19. Perform cf takeover from the partner
      • cf takeover -n (this will reboot the partner)
    20. Perform cf giveback from the partner once local system shows “waiting for giveback”
      • cf giveback -f
    21. Perform the same steps on both the nodes and verify both have the new code
      • version -b
    22. Turn on snap mirror and snapvault
      • snapmirror on
      • options snapvault.enable on
    23. Turn on autogiveback
      • options cf.giveback.auto.enable on
    24. Turn on autosupport
      • options autosupport.enable on
      • options autosupport.doit post_NDU_upgrade
      • options snmp.enable on
    25. Upon completion of the upgrade process invoke below commands to see if any issues
      • sysconfig -a
      • vol status -f
      • vol status
      • aggr status
      • vol status -s
    26. Update SP Firmware on the filers (system node service-processor image update-progress show)

 

If you are upgrading multiple 7-mode filers at a time, you can use a "for" loop from the Linux/UNIX shell to run the same command on multiple filers.

for i in filer1 filer2 filer3 filer4; do echo ""; echo $i; sudo rsh $i "priv set -q diag; sis status"; echo ""; done

 

 

clustered Data ONTAP 8.2.2P1 Upgrade Procedure

Here is the procedure to upgrade an 8-node cluster to clustered Data ONTAP 8.2.2P1.

Upgrade Prerequisites

  1. Replace any failed disks

Pre-upgrade Checklist

    1. Update shelf firmware on all nodes (use latest shelf firmware files available on NetApp Support)
    2. Update disk firmware on all nodes (Using the all.zip file available on NetApp Support)
    3. Send autosupport from all the nodes
      • system node autosupport invoke -type all -node * -message "Upgrading to 8.2.2P1"
    4. Verify Cluster Health
      • ::> cluster show
    5. Verify the cluster is in RDB quorum
      • ::> set advanced
      • ::> cluster ring show -unitname vldb
      • ::> cluster ring show -unitname mgmt
      • ::> cluster ring show -unitname vifmgr
    6. Verify vserver health
      • ::> storage aggregate show -state !online
      • ::> volume show -state !online
      • ::> network interface show -status-oper down
      • ::> network interface show -is-home false
    7. Verify LIF failover configuration (data LIFs)
      • ::> network interface failover show

Start Upgrade

    1. Determine the current image & Download the new image
      • ::> system node image show
        New image file – 822P1_q_image.tgz
    2. Verify no jobs are running
      • ::> job show
    3. Delete any running or queued aggregate, volume, SnapMirror copy, or Snapshot job
      • ::> job delete –id <job-id>
      • ::> system node image update -node * -package
        http://<web-server>/data_ontap/8.2.2P1_q_image.tgz -setdefault true
    4. Verify software is installed
      • ::> system node image show
    5. Determine the “Epsilon” server
      • ::> set adv
      • ::*> cluster show

Reboot the epsilon server first and wait for it to come up; then move on to other nodes

      • ::> storage failover show
      • ::> storage failover modify -node * -auto-giveback false
      • ::> network interface migrate-all -node clusternode-02
      • ::> storage failover takeover -bynode clusternode-01
      • ::> storage failover giveback -fromnode clusternode-01 -override-vetoes true
      • ::> storage failover show (keep checking aggr show until the aggregates are back home)

Verify the node booted up with 8.2.2P1 image

      • ::> system node image show

      Once the aggregates are home, verify the LIFs; if any are not home:

      • ::> network interface revert *

 

Repeat the following steps in the order below for all the nodes

      • ::> network interface migrate-all -node clusternode-01
      • ::> storage failover takeover -bynode clusternode-02
      • ::> storage failover giveback -fromnode clusternode-02 -override-vetoes true
      • ::> storage failover show (keep checking aggr show until the aggregates are back home)

Once the aggregates are home, verify the LIFs; if any are not home:

      • ::> network interface revert *

Ensure that the cluster is in quorum and that services are running before upgrading the next pair of nodes:

    • ::> cluster show
    • ::> cluster ring show

Reboot the nodes in the order:

  1. clusternode-01 (once this is up)
  2. clusternode-02, clusternode-04, clusternode-06 (once these are up then)
  3. clusternode-03, clusternode-05, clusternode-07 (once these are up then)
  4. clusternode-08

Enable Autogiveback for all the nodes

  • ::> storage failover modify -node nodename -auto-giveback true

Verify Post-upgrade cluster is healthy

  • ::> set advanced
  • ::> system node upgrade-revert show

The status for each node should be listed as complete.

Verify Cluster Health

  • ::> cluster show

Verify the cluster is in RDB quorum

  • ::> set advanced
  • ::> cluster ring show -unitname vldb
  • ::> cluster ring show -unitname mgmt
  • ::> cluster ring show -unitname vifmgr

Verify vserver health

  • ::> storage aggregate show -state !online
  • ::> volume show -state !online
  • ::> network interface show -status-oper down
  • ::> network interface show -is-home false

Verify LIF failover configuration (data LIFs)

  • ::> network interface failover show

Backout Plan

  1. Verify that the Data ONTAP 8.2.2P1 Cluster-Mode software is installed:
    • system node image show
  2. Trigger autosupport
    • ::> system node autosupport invoke -type all -node <nodename> -message "Reverting to 8.2.2P1 Cluster-Mode"
  3. Check revert to settings
    • ::> system node revert-to -node <nodename> -check-only true -version 8.2.2P1
  4. Revert the node to 8.2.2P1
    • ::> system node revert-to -node <nodename> -version 8.2.2P1