
How to use “ndmpcopy” in clustered Data ONTAP 8.2.x

Introduction

“ndmpcopy” in clustered Data ONTAP has two modes:

  1. node-scope-mode : you need to track the volume location yourself if a volume move is performed
  2. vserver-scope-mode : the copy keeps working even if the volume is moved to a different node
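
You can check which mode a cluster is running in, and switch it, from the clustershell. The commands below are a minimal sketch (command names as documented for cDOT 8.2; prompt and output are illustrative, not taken from the capture in this post):

::> system services ndmp node-scope-mode status
NDMP node-scope-mode is enabled.

::> system services ndmp node-scope-mode off
NDMP node-scope-mode is disabled.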

In this scenario I’ll use vserver-scope-mode to perform an “ndmpcopy” within the same cluster and the same SVM.

In my test I copied a 1 GB file to a new folder under the same volume.

Log in to the cluster

snowy-mgmt::> set diag -rows 0
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y

List of volumes on SVM “snowy”

snowy-mgmt::*> vol show -vserver snowy
(volume show)
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
snowy     SNOWY62_vol001 snowy01_hybdata_01 online RW 1TB    83.20GB   91%
snowy     SNOWY62_vol001_sv snowy02_hybdata_01 online DP 1TB 84.91GB   91%
snowy     HRauhome01   snowy01_hybdfc_01 online RW       100GB    94.96GB    5%
snowy     rootvol      snowy01_hybdfc_01 online RW        20MB    18.88MB    5%
4 entries were displayed.

snowy-mgmt::*> df -g SNOWY62_vol001
Filesystem               total       used      avail capacity  Mounted on                 Vserver
/vol/SNOWY62_vol001/ 972GB       3GB       83GB      91%  /SNOWY62_vol001       snowy
/vol/SNOWY62_vol001/.snapshot 51GB 0GB     51GB       0%  /SNOWY62_vol001/.snapshot  snowy
2 entries were displayed.

snowy-mgmt::*> vol show -vserver snowy -fields volume,junction-path
(volume show)
vserver volume              junction-path
------- ------------------- --------------------
snowy   SNOWY62_vol001 /SNOWY62_vol001
snowy   SNOWY62_vol001_sv -
snowy   HRauhome01          /hrauhome01
snowy   rootvol             /
4 entries were displayed.

Create an “ndmpuser” with the role “backup”

snowy-mgmt::*> security login show
Vserver: snowy
Authentication                  Acct
UserName         Application Method         Role Name        Locked
---------------- ----------- -------------- ---------------- ------
vsadmin          ontapi      password       vsadmin          yes
vsadmin          ssh         password       vsadmin          yes

Vserver: snowy-mgmt
Authentication                  Acct
UserName         Application Method         Role Name        Locked
---------------- ----------- -------------- ---------------- ------
admin            console     password       admin            no
admin            http        password       admin            no
admin            ontapi      password       admin            no
admin            service-processor password admin            no
admin            ssh         password       admin            no
autosupport      console     password       autosupport      yes
8 entries were displayed.

snowy-mgmt::*> security login create -username ndmpuser -application ssh -authmethod password -role backup -vserver snowy-mgmt
Please enter a password for user 'ndmpuser':
Please enter it again:

snowy-mgmt::*> vserver services ndmp generate-password -vserver snowy-mgmt -user ndmpuser
Vserver: snowy-mgmt
User: ndmpuser
Password: Ip3gRJchR0FGPLA7

Turn on the “ndmp” service on the cluster-mgmt SVM

snowy-mgmt::*> vserver services ndmp on -vserver snowy-mgmt

From the nodeshell, initiate “ndmpcopy”

snowy-mgmt::*> node run -node snowy-01
Type 'exit' or 'Ctrl-D' to return to the CLI

snowy-01> ndmpcopy
usage:
ndmpcopy [<options>] <source> <destination>
<source> and <destination> are of the form [<filer>:]<path>
If an IPv6 address is specified, it must be enclosed in square brackets

options:
[-sa <username>:<password>]
[-da <username>:<password>]
    source/destination filer authentication
[-st { text | md5 }]
[-dt { text | md5 }]
    source/destination filer authentication type
    default is md5
[-l { 0 | 1 | 2 }]
    incremental level
    default is 0
[-d]
    debug mode
[-f]
    force flag, to copy system files
[-mcs { inet | inet6 }]
    force specified address mode for source control connection
[-mcd { inet | inet6 }]
    force specified address mode for destination control connection
[-md { inet | inet6 }]
    force specified address mode for data connection
[-h]
    display this message
[-p]
    accept the password interactively
[-exclude <value>]
    exclude the files/dirs from backup path

snowy-01>
snowy-01> ndmpcopy -sa ndmpuser:Ip3gRJchR0FGPLA7 -da ndmpuser:Ip3gRJchR0FGPLA7 10.10.2.72:/snowy/SNOWY62_vol001/TestFil2_002 10.10.2.72:/snowy/SNOWY62_vol001/destination
Ndmpcopy: Starting copy [ 14 ] ...
Ndmpcopy: 10.10.2.72: Notify: Connection established
Ndmpcopy: 10.10.2.72: Notify: Connection established
Ndmpcopy: 10.10.2.72: Connect: Authentication successful
Ndmpcopy: 10.10.2.72: Connect: Authentication successful
Ndmpcopy: 10.10.2.72: Log: DUMP: creating "/snowy/SNOWY62_vol001/../snapshot_for_backup.7" snapshot.
Ndmpcopy: 10.10.2.72: Log: DUMP: Using Partial Volume Dump of selected subtrees
Ndmpcopy: 10.10.2.72: Log: DUMP: Using snapshot_for_backup.7 snapshot
Ndmpcopy: 10.10.2.72: Log: DUMP: Date of this level 0 dump: Tue Apr  5 08:27:43 2016.
Ndmpcopy: 10.10.2.72: Log: DUMP: Date of last level 0 dump: the epoch.
Ndmpcopy: 10.10.2.72: Log: DUMP: Dumping /snowy/SNOWY62_vol001/TestFil2_002 to NDMP connection
Ndmpcopy: 10.10.2.72: Log: DUMP: mapping (Pass I)[regular files]
Ndmpcopy: 10.10.2.72: Log: DUMP: Reading file names from NDMP.
Ndmpcopy: 10.10.2.72: Log: DUMP: mapping (Pass II)[directories]
Ndmpcopy: 10.10.2.72: Log: DUMP: estimated 1050638 KB.
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass III) [directories]
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass IV) [regular files]
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:27:52 2016: Begin level 0 restore
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:27:54 2016: Reading directories from the backup
Ndmpcopy: 10.10.2.72: Log: RESTORE: Warning: /vol/SNOWY62_vol001/destination/ will not be restored as a qtree: exists as a normal subdirectory.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Could not create qtree `/vol/SNOWY62_vol001/destination/'. Creating a regular directory instead.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:27:55 2016: Creating files and directories.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:27:55 2016: Writing data to files.
Ndmpcopy: 10.10.2.72: Log: ACL_START is '1075865600'
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass V) [ACLs]
Ndmpcopy: 10.10.2.72: Log: DUMP: 1050657 KB
Ndmpcopy: 10.10.2.72: Log: DUMP: DUMP IS DONE
Ndmpcopy: 10.10.2.72: Log: RESTORE: RESTORE IS DONE
Ndmpcopy: 10.10.2.72: Notify: restore successful
Ndmpcopy: 10.10.2.72: Log: DUMP: Deleting "/snowy/SNOWY62_vol001/../snapshot_for_backup.7" snapshot.
Ndmpcopy: 10.10.2.72: Log: DUMP_DATE is '5754776159'
Ndmpcopy: 10.10.2.72: Notify: dump successful
Ndmpcopy: Transfer successful [ 0 hours, 1 minutes, 26 seconds ]
Ndmpcopy: Done

Although I used the cluster-mgmt LIF in the ndmpcopy syntax, I didn’t see any traffic flowing on that LIF:

snowy-mgmt::*> statistics show-periodic -node cluster:summary -object lif:vserver -instance snowy-mgmt -counter instance_name|recv_data|sent_data -interval 1
snowy-mgmt: lif:vserver.snowy-mgmt: 4/5/2016 08:27:20
instance     recv     sent   Complete     Number of
name     data     data  Aggregation  Constituents
-------- -------- -------- ----------- -------------
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1

Another “ndmpcopy” job, monitored with a different statistics command:
snowy-01> ndmpcopy -sa ndmpuser:Ip3gRJchR0FGPLA7 -da ndmpuser:Ip3gRJchR0FGPLA7 10.10.2.72:/snowy/SNOWY62_vol001/TestFil1_001 10.10.2.72:/snowy/SNOWY62_vol001/destination
Ndmpcopy: Starting copy [ 15 ] ...
Ndmpcopy: 10.10.2.72: Notify: Connection established
Ndmpcopy: 10.10.2.72: Notify: Connection established
Ndmpcopy: 10.10.2.72: Connect: Authentication successful
Ndmpcopy: 10.10.2.72: Connect: Authentication successful
Ndmpcopy: 10.10.2.72: Log: DUMP: creating "/snowy/SNOWY62_vol001/../snapshot_for_backup.9" snapshot.
Ndmpcopy: 10.10.2.72: Log: DUMP: Using Partial Volume Dump of selected subtrees
Ndmpcopy: 10.10.2.72: Log: DUMP: Using snapshot_for_backup.9 snapshot
Ndmpcopy: 10.10.2.72: Log: DUMP: Date of this level 0 dump: Tue Apr  5 08:30:40 2016.
Ndmpcopy: 10.10.2.72: Log: DUMP: Date of last level 0 dump: the epoch.
Ndmpcopy: 10.10.2.72: Log: DUMP: Dumping /snowy/SNOWY62_vol001/TestFil1_001 to NDMP connection
Ndmpcopy: 10.10.2.72: Log: DUMP: mapping (Pass I)[regular files]
Ndmpcopy: 10.10.2.72: Log: DUMP: Reading file names from NDMP.
Ndmpcopy: 10.10.2.72: Log: DUMP: mapping (Pass II)[directories]
Ndmpcopy: 10.10.2.72: Log: DUMP: estimated 1050638 KB.
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass III) [directories]
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass IV) [regular files]
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:30:47 2016: Begin level 0 restore
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:30:49 2016: Reading directories from the backup
Ndmpcopy: 10.10.2.72: Log: RESTORE: Warning: /vol/SNOWY62_vol001/destination/ will not be restored as a qtree: exists as a normal subdirectory.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Could not create qtree `/vol/SNOWY62_vol001/destination/'. Creating a regular directory instead.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:30:50 2016: Creating files and directories.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:30:50 2016: Writing data to files.
Ndmpcopy: 10.10.2.72: Log: ACL_START is '1075865600'
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass V) [ACLs]
Ndmpcopy: 10.10.2.72: Log: DUMP: 1050657 KB
Ndmpcopy: 10.10.2.72: Log: DUMP: DUMP IS DONE
Ndmpcopy: 10.10.2.72: Log: DUMP: Deleting "/snowy/SNOWY62_vol001/../snapshot_for_backup.9" snapshot.
Ndmpcopy: 10.10.2.72: Log: RESTORE: RESTORE IS DONE
Ndmpcopy: 10.10.2.72: Notify: restore successful
Ndmpcopy: 10.10.2.72: Log: DUMP_DATE is '5754776336'
Ndmpcopy: 10.10.2.72: Notify: dump successful
Ndmpcopy: Transfer successful [ 0 hours, 1 minutes, 20 seconds ]
Ndmpcopy: Done
snowy-01>

snowy-mgmt::*> statistics show-periodic -object lif -instance snowy-mgmt:cluster_mgmt -counter instance_name|recv_data|sent_data -interval 1
snowy-mgmt: lif.snowy-mgmt:cluster_mgmt: 4/5/2016 08:30:10
instance     recv     sent   Complete     Number of
name     data     data  Aggregation  Constituents
-------- -------- -------- ----------- -------------
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a

Convert Snapmirror DP relation to XDP in clustered Data ONTAP

Clustered Data ONTAP 8.2 introduced the SnapVault (XDP) feature, along with the ability to convert existing SnapMirror (DP) relationships to SnapVault (XDP).

I had tested this feature a long time ago but never used it in a production environment. Recently I got a chance to implement it when a production volume with a high change rate (snapshots involved) grew to 60 TB (20 TB used by snapshots). Because the cluster contains FAS3250 controllers, the maximum volume size is 70 TB. After discussions with the customer it was decided to create a local SnapVault copy of the production volume that would contain all existing snapshots and accumulate more in the coming days until a new SnapVault cluster is set up. The data in the volume is highly compressible, so the SnapVault destination would consume less space.

Overview of this process:

  1. Create a snapmirror DP relation
  2. Initialize the snapmirror DP relation
  3. Quiesce/Break/Delete the DP relation
  4. Resync the relation as snapmirror XDP
  5. Continue with vault updates
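
Condensed, the conversion itself (steps 3–5) boils down to the following command sequence; the paths are placeholders, and the full captured output for each step follows below:

::*> snapmirror quiesce -destination-path <svm>:<dst_vol>
::*> snapmirror break -destination-path <svm>:<dst_vol>
::*> snapmirror delete -destination-path <svm>:<dst_vol>
::*> snapmirror resync -source-path <svm>:<src_vol> -destination-path <svm>:<dst_vol> -type XDP
::*> snapmirror update -destination-path <svm>:<dst_vol>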

CREATE SOURCE VOLUME
snowy-mgmt::*> vol create -volume AU2004NP0062_vol001 -aggregate snowy01_hybdata_01 -space-guarantee none -size 1tb -junction-path /AU2004NP0066_vol001 -state online -junction-active true
(volume create)
[Job 85] Job succeeded: Successful

snowy-mgmt::*> vol show -volume AU2004NP0062_vol001 -fields security-style
(volume show)
vserver volume              security-style
------- ------------------- --------------
snowy   AU2004NP0062_vol001 ntfs

CREATE DESTINATION VOLUME
snowy-mgmt::*> vol create -volume AU2004NP0062_vol001_sv -aggregate snowy02_hybdata_01 -space-guarantee none -size 3tb -type DP -state online
(volume create)
[Job 86] Job succeeded: Successful

snowy-mgmt::*> vol show -volume AU2004NP0062_vol001*
(volume show)
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
snowy     AU2004NP0062_vol001 snowy01_hybdata_01 online RW 1TB    87.01GB   91%
snowy     AU2004NP0062_vol001_sv snowy02_hybdata_01 online DP 3TB 87.01GB   97%
2 entries were displayed.

CREATE SNAPMIRROR RELATION
snowy-mgmt::*> snapmirror create -source-path snowy:AU2004NP0062_vol001 -destination-path snowy:AU2004NP0062_vol001_sv -type DP -vserver snowy
Operation succeeded: snapmirror create for the relationship with destination "snowy:AU2004NP0062_vol001_sv".

snowy-mgmt::*> snapmirror show -type DP
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
snowy:AU2004NP0062_vol001
            DP   snowy:AU2004NP0062_vol001_sv
                              Uninitialized
                                      Idle           -         true    -

INITIALIZE SNAPMIRROR
snowy-mgmt::*> snapmirror initialize -destination-path snowy:AU2004NP0062_vol001_sv
Operation is queued: snapmirror initialize of destination "snowy:AU2004NP0062_vol001_sv".

CREATE SNAPSHOTS ON SOURCE VOLUME
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_01
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_02
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_03
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_04
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_05
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_06
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_07
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_08
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_09
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_00

snowy-mgmt::*> snapshot show -volume AU2004NP0062_vol001
                                                           ---Blocks---
Vserver  Volume  Snapshot                        State        Size Total% Used%
-------- ------- ------------------------------- -------- -------- ------ -----
snowy    AU2004NP0062_vol001
                 hourly.2016-03-24_1005          valid        60KB     0%   27%
                 snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100529
                                                 valid        80KB     0%   33%
                 sas_snap_01                     valid        60KB     0%   27%
                 sas_snap_02                     valid        64KB     0%   29%
                 sas_snap_03                     valid        76KB     0%   32%
                 sas_snap_04                     valid        60KB     0%   27%
                 sas_snap_05                     valid        64KB     0%   29%
                 sas_snap_06                     valid        64KB     0%   29%
                 sas_snap_07                     valid        64KB     0%   29%
                 sas_snap_08                     valid        64KB     0%   29%
                 sas_snap_09                     valid        76KB     0%   32%
                 sas_snap_00                     valid        56KB     0%   26%
12 entries were displayed.

snowy-mgmt::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
snowy:AU2004NP0062_vol001
            DP   snowy:AU2004NP0062_vol001_sv
                              Snapmirrored
                                      Idle           -         true    -

UPDATE SNAPMIRROR TO TRANSFER ALL SNAPSHOTS TO DESTINATION VOLUME
snowy-mgmt::*> snapmirror update -destination-path snowy:*
Operation is queued: snapmirror update of destination "snowy:AU2004NP0062_vol001_sv".
1 entry was acted on.

SNAPSHOTS REACHED THE DESTINATION VOLUME
snowy-mgmt::*> snapshot show -volume AU2004NP0062_vol001_sv
                                                           ---Blocks---
Vserver  Volume  Snapshot                        State        Size Total% Used%
-------- ------- ------------------------------- -------- -------- ------ -----
snowy    AU2004NP0062_vol001_sv
                 hourly.2016-03-24_1005          valid        60KB     0%   28%
                 snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100529
                                                 valid        80KB     0%   34%
                 sas_snap_01                     valid        60KB     0%   28%
                 sas_snap_02                     valid        64KB     0%   29%
                 sas_snap_03                     valid        76KB     0%   33%
                 sas_snap_04                     valid        60KB     0%   28%
                 sas_snap_05                     valid        64KB     0%   29%
                 sas_snap_06                     valid        64KB     0%   29%
                 sas_snap_07                     valid        64KB     0%   29%
                 sas_snap_08                     valid        64KB     0%   29%
                 sas_snap_09                     valid        76KB     0%   33%
                 sas_snap_00                     valid        72KB     0%   32%
                 snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100707
                                                 valid          0B     0%    0%
13 entries were displayed.

QUIESCE, BREAK AND DELETE SNAPMIRRORS
snowy-mgmt::*> snapmirror quiesce -destination-path snowy:AU2004NP0062_vol001_sv
Operation succeeded: snapmirror quiesce for destination "snowy:AU2004NP0062_vol001_sv".

snowy-mgmt::*> snapmirror break -destination-path snowy:AU2004NP0062_vol001_sv
[Job 87] Job succeeded: SnapMirror Break Succeeded

snowy-mgmt::*> snapmirror delete -destination-path snowy:AU2004NP0062_vol001_sv
Operation succeeded: snapmirror delete for the relationship with destination "snowy:AU2004NP0062_vol001_sv".

snowy-mgmt::*> snapmirror show
This table is currently empty.

RESYNC SNAPMIRROR AS XDP RELATION
snowy-mgmt::*> snapmirror resync -source-path snowy:AU2004NP0062_vol001 -destination-path snowy:AU2004NP0062_vol001_sv -type XDP

Warning: All data newer than Snapshot copy snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100707 on volume snowy:AU2004NP0062_vol001_sv will be deleted.
         Verify there is no XDP relationship whose source volume is "snowy:AU2004NP0062_vol001_sv". If such a relationship exists then you are creating an unsupported XDP to XDP cascade.
Do you want to continue? {y|n}: y
[Job 88] Job succeeded: SnapMirror Resync Transfer Queued

snowy-mgmt::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
snowy:AU2004NP0062_vol001
            XDP  snowy:AU2004NP0062_vol001_sv
                              Snapmirrored
                                      Idle           -         true    -

SNAPSHOTS EXIST ON BOTH SOURCE AND DESTINATION VOLUME AFTER RESYNC
snowy-mgmt::*> snapshot show -volume AU2004NP0062_vol001
                                                           ---Blocks---
Vserver  Volume  Snapshot                        State        Size Total% Used%
-------- ------- ------------------------------- -------- -------- ------ -----
snowy    AU2004NP0062_vol001
                 hourly.2016-03-24_1005          valid        68KB     0%   29%
                 sas_snap_01                     valid        60KB     0%   27%
                 sas_snap_02                     valid        64KB     0%   28%
                 sas_snap_03                     valid        76KB     0%   32%
                 sas_snap_04                     valid        60KB     0%   27%
                 sas_snap_05                     valid        64KB     0%   28%
                 sas_snap_06                     valid        64KB     0%   28%
                 sas_snap_07                     valid        64KB     0%   28%
                 sas_snap_08                     valid        64KB     0%   28%
                 sas_snap_09                     valid        76KB     0%   32%
                 sas_snap_00                     valid        72KB     0%   31%
                 snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100707
                                                 valid        72KB     0%   31%
12 entries were displayed.

snowy-mgmt::*> snapshot show -volume AU2004NP0062_vol001_sv
                                                           ---Blocks---
Vserver  Volume  Snapshot                        State        Size Total% Used%
-------- ------- ------------------------------- -------- -------- ------ -----
snowy    AU2004NP0062_vol001_sv
                 hourly.2016-03-24_1005          valid        60KB     0%   28%
                 snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100529
                                                 valid        80KB     0%   34%
                 sas_snap_01                     valid        60KB     0%   28%
                 sas_snap_02                     valid        64KB     0%   29%
                 sas_snap_03                     valid        76KB     0%   33%
                 sas_snap_04                     valid        60KB     0%   28%
                 sas_snap_05                     valid        64KB     0%   29%
                 sas_snap_06                     valid        64KB     0%   29%
                 sas_snap_07                     valid        64KB     0%   29%
                 sas_snap_08                     valid        64KB     0%   29%
                 sas_snap_09                     valid        76KB     0%   33%
                 sas_snap_00                     valid        72KB     0%   32%
                 snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100707
                                                 valid        76KB     0%   33%
13 entries were displayed.

TURN ON VOLUME EFFICIENCY - DESTINATION VOLUME
snowy-mgmt::*> vol efficiency on -volume AU2004NP0062_vol001_sv
(volume efficiency on)
Efficiency for volume "AU2004NP0062_vol001_sv" of Vserver "snowy" is enabled.
Already existing data could be processed by running "volume efficiency start -vserver snowy -volume AU2004NP0062_vol001_sv -scan-old-data true".

CREATE A CIFS SHARE ON SOURCE VOLUME AND COPY SOME DATA
snowy-mgmt::*> cifs share create -share-name sas_vol -path /AU2004NP0062_vol001 -share-properties oplocks,browsable,changenotify

snowy-mgmt::*> vol show -volume AU2004NP0062_vol001 -fields used
(volume show)
vserver volume              used
------- ------------------- ------
snowy   AU2004NP0062_vol001 2.01GB

CREATE SNAPSHOT AND SNAPMIRROR POLICIES WITH THE SAME SNAPMIRROR LABELS
snowy-mgmt::*> cron show
(job schedule cron show)
Name                Description
----------------    -----------------------------------------------------
5min                @:00,:05,:10,:15,:20,:25,:30,:35,:40,:45,:50,:55
8hour               @2:15,10:15,18:15
daily               @0:10
hourly              @:05
weekly              Sun@0:15
5 entries were displayed.

snowy-mgmt::*> snapshot policy create -policy keep_more_snaps -enabled true -schedule1 5min -count1 5 -prefix1 sv -snapmirror-label1 mins -vserver snowy

snowy-mgmt::*> snapmirror policy create -vserver snowy -policy XDP_POL

snowy-mgmt::*> snapmirror policy add-rule -vserver snowy -policy XDP_POL -snapmirror-label mins -keep 50

APPLY SNAPSHOT POLICY TO SOURCE VOLUME
snowy-mgmt::*> volume modify -volume AU2004NP0062_vol001 -snapshot-policy keep_more_snaps

Warning: You are changing the Snapshot policy on volume AU2004NP0062_vol001 to keep_more_snaps. Any Snapshot copies on this volume from the previous policy will not be deleted by this new Snapshot policy.
Do you want to continue? {y|n}: y
Volume modify successful on volume: AU2004NP0062_vol001

APPLY SNAPMIRROR POLICY TO DESTINATION VOLUME
snowy-mgmt::*> snapmirror modify -destination-path snowy:AU2004NP0062_vol001_sv -policy XDP_POL
Operation succeeded: snapmirror modify for the relationship with destination "snowy:AU2004NP0062_vol001_sv".

snowy-mgmt::*> vol show -volume AU2004NP0062_vol001 -fields snapshot-policy
(volume show)
vserver volume              snapshot-policy
------- ------------------- ---------------
snowy   AU2004NP0062_vol001 keep_more_snaps

snowy-mgmt::*> snapshot policy show keep_more_snaps -instance
                  Vserver: snowy
     Snapshot Policy Name: keep_more_snaps
  Snapshot Policy Enabled: true
             Policy Owner: vserver-admin
                  Comment: -
Total Number of Schedules: 1
Schedule               Count Prefix                SnapMirror Label
---------------------- ----- --------------------- -------------------
5min                   5     sv                    mins

UPDATE SNAPMIRROR RELATIONSHIP (SNAPVAULT)
snowy-mgmt::*> snapmirror update -destination-path snowy:AU2004NP0062_vol001_sv
Operation is queued: snapmirror update of destination "snowy:AU2004NP0062_vol001_sv".

snowy-mgmt::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
snowy:AU2004NP0062_vol001
            XDP  snowy:AU2004NP0062_vol001_sv
                              Snapmirrored
                                      Transferring   0B        true    03/24 10:56:29

THE USED SIZE OF SOURCE AND DESTINATION VOLUMES IS THE SAME
snowy-mgmt::*> vol show -volume AU* -fields used
(volume show)
vserver volume                 used
------- ---------------------- ------
snowy   AU2004NP0062_vol001    2.01GB
snowy   AU2004NP0062_vol001_sv 2.06GB
2 entries were displayed.

DEDUPE JOB IS RUNNING
snowy-mgmt::*> sis status
Vserver    Volume           State    Status       Progress           Policy
---------- ---------------- -------- ------------ ------------------ ----------
snowy      AU2004NP0062_vol001_sv
                            Enabled  Active       539904 KB (25%) Done -

SIZE OF DESTINATION VOLUME AFTER DEDUPE JOB COMPLETED
snowy-mgmt::*> sis status
Vserver    Volume           State    Status       Progress           Policy
---------- ---------------- -------- ------------ ------------------ ----------
snowy      AU2004NP0062_vol001_sv
                            Enabled  Idle         Idle for 00:00:05  -

snowy-mgmt::*> vol show -volume AU* -fields used
(volume show)
vserver volume                 used
------- ---------------------- -------
snowy   AU2004NP0062_vol001    2.01GB
snowy   AU2004NP0062_vol001_sv 274.9MB
2 entries were displayed.

START COMPRESSION JOB ON DESTINATION VOLUME BY SCANNING EXISTING DATA
snowy-mgmt::*> vol efficiency start -volume AU2004NP0062_vol001_sv -scan-old-data
(volume efficiency start)

Warning: This operation scans all of the data in volume "AU2004NP0062_vol001_sv" of Vserver "snowy". It may take a significant time, and may degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "AU2004NP0062_vol001_sv" of Vserver "snowy" has started.

SIZE OF DESTINATION VOLUME AFTER COMPRESSION JOB COMPLETED
snowy-mgmt::*> vol show -volume AU* -fields used
(volume show)
vserver volume                 used
------- ---------------------- -------
snowy   AU2004NP0062_vol001    2.01GB
snowy   AU2004NP0062_vol001_sv 49.76MB
2 entries were displayed.

Querying the OCUM Database using PowerShell

OnCommand Unified Manager (OCUM) is the software used to monitor and troubleshoot cluster or SVM issues relating to data storage capacity, availability, performance and protection. OCUM polls the clustered Data ONTAP storage systems and stores all inventory information in a MySQL database. Using PowerShell we can query the MySQL database and retrieve information to create reports.

All we need is the MySQL .NET connector to query the OCUM database and retrieve information from the various tables. Another helpful tool is the “HeidiSQL” client for MySQL. You can connect to the OCUM database using HeidiSQL and view all the tables and columns within the database.
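
Before building reports, it is worth a quick check that the connector loads and the report user can actually log in. This is a minimal sketch using the same connector path and connection details as the script further below (adjust them to your environment):

# Quick connectivity test against the OCUM reporting database
[void][System.Reflection.Assembly]::LoadFrom("E:\ssh\MySql.Data.dll")   # MySQL .NET connector

$conn = New-Object MySql.Data.MySqlClient.MySqlConnection
$conn.ConnectionString = "server=192.168.0.71;port=3306;uid=reportuser;pwd=Netapp123;database=ocum_report"

try {
    $conn.Open()
    $cmd = New-Object MySql.Data.MySqlClient.MySqlCommand("SELECT version()", $conn)
    Write-Host "Connected. MySQL version:" $cmd.ExecuteScalar()
}
catch {
    Write-Host "Connection failed: $($_.Exception.Message)"
}
finally {
    $conn.Close()
}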

Download and use version 2.0 MySQL Connector with OCUM 6.2

Download link to HeidiSQL

NetApp Communities Post

First of all, you’ll need to create a “Database User” with the role “Report Schema” (OCUM GUI -> Administration -> Manage Users -> Add)

Use HeidiSQL to connect to OCUM database

Connect-OCUM

OCUM

Ocum_report

Sample PowerShell code to connect to the OCUM database and retrieve information

# Get-cDOTAggrVolReport.ps1
# Date : 2016_03_10 12:12 PM
# This script uses the MySQL .NET connector at E:\ssh\MySql.Data.dll to query the OCUM 6.2 database

# Function MySQL queries OCUM database
# usage: MySQL -Query <sql-query>
function MySQL {
Param(
[Parameter(
Mandatory = $true,
ParameterSetName = '',
ValueFromPipeline = $true)]
[string]$Query
)

$MySQLAdminUserName = 'reportuser'
$MySQLAdminPassword = 'Netapp123'
$MySQLDatabase = 'ocum_report'
$MySQLHost = '192.168.0.71'
$ConnectionString = "server=" + $MySQLHost + ";port=3306;Integrated Security=False;uid=" + $MySQLAdminUserName + ";pwd=" + $MySQLAdminPassword + ";database="+$MySQLDatabase

Try {
[void][System.Reflection.Assembly]::LoadFrom("E:\ssh\MySql.Data.dll")
$Connection = New-Object MySql.Data.MySqlClient.MySqlConnection
$Connection.ConnectionString = $ConnectionString
$Connection.Open()

$Command = New-Object MySql.Data.MySqlClient.MySqlCommand($Query, $Connection)
$DataAdapter = New-Object MySql.Data.MySqlClient.MySqlDataAdapter($Command)
$DataSet = New-Object System.Data.DataSet
$RecordCount = $dataAdapter.Fill($dataSet, "data")
$DataSet.Tables[0]
}

Catch {
Write-Host "ERROR : Unable to run query : $query `n$Error[0]"
}

Finally {
$Connection.Close()
}
}

# Define disk location to store aggregate and volume size reports retrieved from OCUM
$rptdir = "E:\ssh\aggr-vol-space"
$rpt = "E:\ssh\aggr-vol-space"
$filedate = (Get-Date).ToString('yyyyMMdd')
$aggrrptFilename = "aggrSize`_$filedate.csv"
$aggrrptFile = Join-Path $rpt $aggrrptFilename
$volrptFilename = "volSize`_$filedate.csv"
$volrptFile = Join-Path $rpt $volrptFilename

# verify Report directory exists
if ( -not (Test-Path $rptDir) ) {
write-host "Error: Report directory $rptDir does not exist."
exit
}

# Produce aggregate report from OCUM
#$aggrs = MySQL -Query "select aggregate.name as 'Aggregate', aggregate.sizeTotal as 'TotalSize KB', aggregate.sizeUsed as 'UsedSize KB', aggregate.sizeUsedPercent as 'Used %', aggregate.sizeAvail as 'Available KB', aggregate.hasLocalRoot as 'HasRootVolume' from aggregate"
$aggrs = MySQL -Query "select aggregate.name as 'Aggregate', round(aggregate.sizeTotal/Power(1024,3),1) as 'TotalSize GB', round(aggregate.sizeUsed/Power(1024,3),1) as 'UsedSize GB', aggregate.sizeUsedPercent as 'Used %', round(aggregate.sizeAvail/Power(1024,3),1) as 'Available GB', aggregate.hasLocalRoot as 'HasRootVolume' from aggregate"
$aggrs | where {$_.HasRootVolume -eq $False} | export-csv -NoTypeInformation $aggrrptFile

# Produce volume report from OCUM
$vols = MySQL -Query "select volume.name as 'Volume', clusternode.name as 'Nodename', aggregate.name as 'Aggregate', round(volume.size/Power(1024,3),1) as 'TotalSize GB', round(volume.sizeUsed/Power(1024,3),1) as 'UsedSize GB', volume.sizeUsedPercent as 'Used %', round(volume.sizeAvail/Power(1024,3),1) as 'AvaliableSize GB', volume.isSvmRoot as 'isSvmRoot', volume.isLoadSharingMirror as 'isLSMirror' from volume,clusternode,aggregate where clusternode.id = volume.nodeId AND volume.aggregateId = aggregate.id"
$vols | where {$_.isSvmRoot -eq $False -and $_.isLSMirror -eq $False -and $_.Volume -notmatch "vol0$"} | export-csv -NoTypeInformation $volrptFile
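
To produce the CSV reports, run the script from a PowerShell session on the host that holds the connector DLL and inspect the files it writes to the report directory. A usage sketch (the date in the file names is only an example of the yyyyMMdd stamp built above):

PS E:\ssh> .\Get-cDOTAggrVolReport.ps1
PS E:\ssh> Import-Csv .\aggr-vol-space\aggrSize_20160310.csv | Format-Table -AutoSize
PS E:\ssh> Import-Csv .\aggr-vol-space\volSize_20160310.csv | Format-Table -AutoSize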

Update ONTAP Image on cDOT by copying files locally

  • Download ONTAP image on your computer
  • Access a CIFS share / NFS mount (cDOT volume) and copy the image to this volume
  • Log in to Systemshell of each node using diag user
  • sudo cp -rf /clus/<vserver>/volume /mroot/etc/software
  • exit systemshell
  • system node image package show
  • system node image update -node <node-name> -package file://localhost/mroot/etc/software/831_q_image.tgz
  • system node image show
login as: admin 
Password:
 
cluster1::> set diag -rows 0
Warning: These diagnostic commands are for use by NetApp personnel only.
 Do you want to continue? {y|n}: y

cluster1::*> security login show -user-or-group-name diag
 Vserver: cluster1
 Authentication                  Acct
 User/Group Name  Application Method         Role Name        Locked
 ---------------- ----------- -------------- ---------------- ------
 diag             console     password       admin            no

cluster1::*> security login unlock -username diag
 
cluster1::*> security login password -username diag
 Enter a new password:
 Enter it again:

cluster1::*> version
 NetApp Release 8.3.1RC1: Fri Jun 12 21:46:00 UTC 2015

cluster1::*> system node image package show
 This table is currently empty.

cluster1::*> system node image package show -node cluster1-01 -package file://localhost/mroot/etc/software
 There are no entries matching your query.

cluster1::*> systemshell -node cluster1-01
 (system node systemshell)
 Data ONTAP/amd64 (cluster1-01) (pts/2)
 login: diag
 Password:
 Last login: Mon Jul 27 16:54:10 from localhost

cluster1-01% sudo mkdir /mroot/etc/software
 
cluster1-01% sudo ls /clus/NAS/ntfs
 .snapshot       831_q_image.tgz BGInfo          nfs-on-ntfs     no-user-access  ntfs-cifs.txt

cluster1-01% sudo cp -rf /clus/NAS/ntfs/831_q_image.tgz /mroot/etc/software
 
cluster1-01% sudo ls /mroot/etc/software
 831_q_image.tgz

cluster1-01% exit
 logout

cluster1::*> system node image package show
 Package
 Node         Repository     Package File Name
 ------------ -------------- -----------------
 cluster1-01
 mroot
 831_q_image.tgz

cluster1::*> system node image update -node cluster1-01 -package file://localhost/mroot/etc/software/831_q_image.tgz

Software update started on node cluster1-01. Updating image2 with package file://localhost/mroot/etc/software/831_q_image.tgz.
 Listing package contents.
 Decompressing package contents.
 Invoking script (install phase). This may take up to 60 minutes.
 Mode of operation is UPDATE
 Current image is image1
 Alternate image is image2
 Package MD5 checksums pass
 Versions are compatible
 Available space on boot device is 1372 MB
 Required  space on boot device is 438 MB
 Kernel binary matches install machine type
 LIF checker script is invoked.
 NO CONFIGURATIONS WILL BE CHANGED DURING THIS TEST.
 Checking ALL Vservers for sufficiency LIFs.
 Running in upgrade mode.
 Running in report mode.
 Enabling Script Optimizations.
 No need to do upgrade check of external servers for this installed version.
 LIF checker script has validated configuration.
 NFS netgroup check script is invoked.
 NFS netgroup check script has run successfully.
 NFS exports DNS check script is invoked.
 netapp_nfs_exports_dns_check script begin
 netapp_nfs_exports_dns_check script end
 NFS exports DNS check script has completed.
 Getting ready to install image
 Directory /cfcard/x86_64/freebsd/image2 created
 Syncing device...
 Extracting to /cfcard/x86_64/freebsd/image2...
 x CHECKSUM
 x VERSION
 x COMPAT.TXT
 x BUILD
 x netapp_nfs_netgroup_check
 x metadata.xml
 x netapp_nfs_exports_dns_check
 x INSTALL
 x netapp_sufficiency_lif_checker
 x cap.xml
 x platform.ko
 x kernel
 x fw.tgz
 x platfs.img
 x rootfs.img
 Installed MD5 checksums pass
 Installing diagnostic and firmware files
 Firmware MD5 checksums pass
 Installation complete. image2 updated on node cluster1-01.
 
cluster1::*>

cluster1::*> system node image show
 Is      Is                                Install
 Node     Image   Default Current Version                   Date
 -------- ------- ------- ------- ------------------------- -------------------
 cluster1-01
 image1  true    true    8.3.1RC1                  -
 image2  false   false   8.3.1                     2/10/2016 05:32:50
 2 entries were displayed.
 

Evernote Search Tips

evernote-searches

any (converts AND to OR)
e.g. any: nitish sumit kirti

tag
+tag:apple
-tag:apple
tag:apple
tag:apple tag:microsoft (AND)
any: tag:apple tag:microsoft (OR)

notebook
notebook:netapp

inTitle
intitle:"Health Check"

created (YYYYMMDD)
created:20151224 (created on or after this date)
created:20151224 -created:20151225 (give me notes created exactly on December 24 2015)

created:day-1 (notes created yesterday)
created:week-1 (notes created 1 week ago)
created:month-1
created:year-1
created:month-3 -created:month-2 (notes created between three and two months ago)

updated
updated:20151224

todo
todo:true (done)
todo:false (not done)
todo:*

sourceUrl
sourceUrl:http://thetecharch.com

resource
resource:image/png (png, gif, jpg, jpeg)
resource:application/pdf
resource:application/vnd.evernote.ink (Matches notes with one or more ink resources)
-resource:application/msword
-resource:application/ms-excel
-resource:application/ms-powerpoint

source
source:app.ms.word (notes created within MS Word)

Use + or – to Include or Exclude Certain Words
+microsoft -google

Use Wildcards
"*" -> search everything
"netapp" -> search for the exact word

NFS Storage : Root squashed to anon user on Linux Host

This scenario is based on a problem where the “root” user is squashed to “anon” and all files created on an NFS-exported volume show ownership as “nfsnobody”.

On an NFS volume exported from a clustered Data ONTAP (cDOT) system running ONTAP 8.2.1 and mounted on a Linux host, the root user reports that when he creates a file on the mounted NFS storage, its ownership is changed to “nfsnobody”.

Checking with “ls -l” and “ls -ln” on the Linux host, the file ownership is reported as “nfsnobody” / “65534”:

root_squashed_1

Checking the export-policy rule on the NetApp storage system, I see:
User ID To Which Anonymous Users Are Mapped: 65534
Superuser Security Types: any

orignal_export_policy_rule

unix-user-group-show

This means all “anonymous” users will be mapped to UID “65534”, which is “nfsnobody” on Linux, “nobody” on Unix and “pcuser” on Windows. The remote “root” user on the Linux host therefore has restricted permissions after connecting to the NFS server. This is a security feature implemented on shared NFS storage.

To fix this issue, we change the superuser security type to “sys”. The user is then authenticated at the client (operating system) and comes in as an identified user, so root is not squashed to “anonymous / anon” and retains its permissions.

superuser-sys
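
On the storage system this is a one-line change to the export-policy rule, followed by a check of the field. A minimal sketch, assuming the SVM name, policy name and rule index from your own “export-policy rule show” output:

::> vserver export-policy rule modify -vserver <svm> -policyname <policy> -ruleindex 1 -superuser sys
::> vserver export-policy rule show -vserver <svm> -policyname <policy> -ruleindex 1 -fields superuser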

Now if I create some files as the “root” user on the Linux host, the files retain “root” ownership instead of being mapped to “anon”.

root-as-root

This fixes the problem.

Integrate OnCommand Performance Manager with OnCommand Unified Manager

OnCommand Performance Manager is the performance-management component of Unified Manager. Both products are self-contained; however, they can be integrated so that all performance events can be viewed from the Unified Manager dashboard.

In this post we deploy a new instance of the Performance Manager vApp on an ESXi host and integrate it with a running instance of Unified Manager.

Software components used:

  • OnCommand Unified Manager Version 6.2P1
  • OnCommand Performance Manager Version 2.0.0RC1

Setup Performance Manager

Import OnCommandPerformanceManager-netapp-2.0.0RC1.ova file

Deploy_OPM_OVA

After the OVA file is imported, you may face issues powering it on.

Poweron_issues_OVA

This is because the vApp has CPU and memory reservations set. In a lab environment with limited resources we can remove the reservations to boot up the vApp.

OVA_Booting

After the vApp boots up, you need to install VMware Tools to proceed further.

As the installation progresses, the setup wizard automatically runs to configure the time zone and networking (static/dynamic), create the maintenance user, generate the SSL certificate and start the Performance Manager services. Once complete, log in to the Performance Manager console and verify the settings (network, DNS, time zone).

OPM_Console

Now log in to the Performance Manager web UI and complete the setup wizard. Do not enable AutoSupport for a vApp deployed in a lab environment.

Login_OPM

Open the Administration tab (top right corner).

OPM_Administration

Setup Connection with Unified Manager

To view performance events in the Unified Manager dashboard, a connection between Performance Manager and Unified Manager must be established.

Setting up the connection involves creating a specialized Event Publisher user in the Unified Manager web UI and enabling the Unified Manager server connection in the maintenance console of the Performance Manager server.

  • Click Administration -> Manage Users
  • In the Manage Users page, click Add
  • In the Add User dialog box, select Local User for the type and Event Publisher for the role, and enter the other required information
  • Click Add

eventpublisher_user_OUM

Connect Performance Manager to Unified Manager from vApp console

OPM_Connection_screen1

OPM_Connection_Registered

This completes the integration with Unified Manager. You can integrate multiple Performance Manager servers with a single Unified Manager server. When Performance Manager generates performance events, they are passed on to the Unified Manager server and can be viewed on the Unified Manager dashboard, so the admin monitors one window instead of logging in to multiple Performance Manager web UIs.

DataONTAP 8.x Simulator Golden Image Setup Procedure

Download NetApp DataONTAP 8.x clustered ONTAP simulator from
https://support.netapp.com

  1. Modify the DataONTAP.vmx file with:
  • Hostname
  • Network Ports = 8
  • Add the pciBridge parameters to the .vmx file:

        pciBridge0.present = "TRUE"
        pciBridge4.present = "TRUE"
        pciBridge4.virtualDev = "pcieRootPort"
        pciBridge4.functions = "8"
        pciBridge5.present = "TRUE"
        pciBridge5.virtualDev = "pcieRootPort"
        pciBridge5.functions = "8"
        pciBridge6.present = "TRUE"
        pciBridge6.virtualDev = "pcieRootPort"
        pciBridge6.functions = "8"
        pciBridge7.present = "TRUE"
        pciBridge7.virtualDev = "pcieRootPort"
        pciBridge7.functions = "8"

  2. Open the Virtual Machine in VMware Fusion/Workstation
  3. In VMware Fusion/Workstation, add 4 new Network Adapters to the NetApp simulator appliance
  4. Check the following:
  • network ports 0 & 1 in the “Private” network
  • network ports 2 to 6 in the “NAT” network
  5. Power on the Virtual Machine – the NetApp simulator appliance
  6. Break the boot sequence and verify the Serial Number (in case of node 2, change the serial number)
  • Press the space bar when the appliance starts
  • VLOADER> setenv SYS_SERIAL_NUM 4034389-06-2
  • VLOADER> setenv bootarg.nvram.sysid 4034389062
  • VLOADER> printenv SYS_SERIAL_NUM
  • VLOADER> printenv bootarg.nvram.sysid
  7. Boot up
  • VLOADER> boot
  8. Create the routing-group, route and LIF for the node-mgmt LIF on port e0c
  9. Create the admin user SSH login

  ::> set advanced

  ::*> security login create -user-or-group-name admin -application ssh -authmethod password -role admin

  10. Create a password for the admin user

  ::*> security login password -username admin -vserver Default

  11. Unlock diag user and create a password for the diag user

  ::> security login unlock -username diag
  ::> security login password -username diag
  Please enter a new password: *********
  Please enter it again: *********

  12. Create new disks in the systemshell for adapters 2 and 3

From advanced mode, log in to the systemshell using the diag user account by entering the system node systemshell command:
  ::> storage disk show
  ::> set -privilege advanced
  ::*> system node systemshell -node localhost
  login: diag
  Password: *********
 
  % setenv PATH "${PATH}:/usr/sbin"
  % echo $PATH
  /sbin:/bin:/usr/sbin:/usr/bin:/usr/games:/usr/local/sbin:/usr/local/bin:/var/home/diag/bin:/usr/sbin

  % cd /sim/dev
  % ls ,disks
  ,reservations
  Shelf:DiskShelf14
  v0.16:NETAPP__:VD-1000MB-FZ-520:11944700:2104448
 
  % vsim_makedisks -h
  % sudo vsim_makedisks -n 14 -t 35 -a 2
  Creating ,disks/v2.16:NETAPP__:VD-500MB-SS-520_:19143800:1080448 …
  Creating ,disks/v2.32:NETAPP__:VD-500MB-SS-520_:19143813:1080448
  Shelf file Shelf:DiskShelf14 updated
 
  % sudo vsim_makedisks -n 14 -t 37 -a 3
Verify the disks you added by entering this ls command:
  % ls ,disks/
 
Reboot the simulator to make the new disks available by entering these commands:
  % exit

  13. Reboot the node

  ::*> system node reboot -node localhost
Warning: Are you sure you want to reboot the node? {y|n}: y

  14. Make sure the new disks are detected

  ::> storage disk show

  15. Replace the existing root aggregate disks with new bigger disks

      disk replace start -f <old_disk> <new_disk>

  16. Verify the root aggregate has new disks and old disks appear as spares
  17. Options disk.auto_assign off
  18. disk remove_ownership v0* and disk remove_ownership v1*

  
    > disk assign v0.16 -s unowned -f
    > disk assign v0.17 -s unowned -f
    > disk assign v0.18 -s unowned -f

  19. Delete disks v0* and v1* from systemshell

  
    systemshell
    login: diag
    Password: *********
    setenv PATH "${PATH}:/usr/sbin"
    cd /sim/dev/,disks
    ls
    sudo rm v0*
    sudo rm v1*
    sudo rm ,reservations
    cd /sim/dev
    sudo vsim_makedisks -n 14 -t 31 -a 0
    sudo vsim_makedisks -n 14 -t 31 -a 1
    exit
    reboot
     

  20. Verify all the disks are the bigger disks and that there are 8 network ports in total
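
A quick way to confirm both points from the clustershell once the node is back up (output omitted; commands as in clustered Data ONTAP 8.x):

  ::> storage disk show
  ::> network port show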

Zoning on Cisco FC Switches

Fundamentals:

* On Cisco switches, a combination of multiple ports is referred to as a ‘zone’
* A combination of multiple zones is called a ‘zoneset’
* At any time, multiple zonesets can exist in a switch; however, only one zoneset can be active

Cisco_Zoning

1. Change to config mode.

    fabsw1# config
    Enter configuration commands, one per line. End with CNTL/Z.
    fabsw1(config)#

2. Create a new zone.

    fabsw1(config)# zone name vs_v3070_7b vsan 1
    fabsw1(config-zone)#

3. Add ports to this zone, and exit zone configuration.

    fabsw1(config-zone)# member pwwn 50:00:1f:e1:50:01:81:e8
    fabsw1(config-zone)# exit
    fabsw1(config)#

4. Switch to zoneset-config, here vsan is a unique ID associated with each zoneset.

    fabsw1(config)# zoneset name ZONESET_V1 vsan 1
    fabsw1(config-zoneset)#

5. Add new zone to the zone set.

    fabsw1(config-zoneset)# member vs_v3070_7b
    fabsw1(config-zoneset)#

6. Exit to normal mode and check if the new zone is added to zoneset.

    fabsw1(config-zoneset)# end
    fabsw1#
    fabsw1# show zoneset
    zoneset name ZONESET_V1 vsan 1
    fabsw1#

7. Execute the following command from config mode to activate the new zoneset:

    fabsw1# zoneset activate name ZONESET_V1 vsan 1

8. Save the running configuration as startup configuration.

    fabsw1# copy running-config startup-config
    [########################################] 100%
    fabsw1#

9. Use the following command to verify the configuration:

    fabsw1# show running-config
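
As a double-check that the intended zoneset is the one in force, the active zoneset can also be displayed directly. For example (standard MDS/NX-OS syntax; not captured from the switch above):

    fabsw1# show zoneset active vsan 1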