Category Archives: 2016

Setup VPN to Access Home Lab

I have recently been working on a personal project to access my home lab from outside home. My home lab has a Mac mini Server 2012 where I have bare-metal ESXi running instead of OS X.
I have an ADSL2 connection at my home where the modem and router functionality is handled by the same device. The modem/router does not have built-in VPN functionality. My ISP is Telstra here in Australia.
Modem/Router Model : Technicolor TG799vac (Telstra Gateway MAX)

I bought an ASUS RT-68U router; the stock firmware (ASUSWRT) on this router has an OpenVPN server built in. After some research, I formulated a plan to configure my current Telstra modem/router in bridge mode and use the ASUS RT-68U as the primary Wi-Fi device for my home network. Here is the plan:

  • Telstra modem has a LAN IP address of 10.0.0.138
    • Log in to http://10.0.0.138 and take screenshots of all tabs that might be useful
    • Record your ISP username and figure out the password. You’ll need these while setting up the ASUS RT-68U router
  • Power on the ASUS RT-68U router and check that it boots up, i.e. make sure it’s not faulty
  • Configure Telstra Modem/Router in Bridge Mode
    • Have a laptop directly connected with ethernet cable to Telstra router
    • Go to http://10.0.0.138
    • Go to Advanced
      • Turn off Wi-Fi for 2.4 and 5.0 GHz (also guest Wi-Fi)
    • Go to Local Network
      • Scroll to Bridge Mode
      • Confirm
    • The modem/router restarts into bridge mode
    • The router will show red LEDs; don’t worry
    • As a second measure, turn off Wi-Fi with the button on the front of the modem/router
    • After the reboot, the router is still accessible via http://10.0.0.138; however, it no longer shows all the tabs previously available
  • Now verify that the internet is working by setting up a PPPoE connection
  • Connect ASUS RT-68U to Telstra modem
    • Plug the LAN port of the Telstra modem into the WAN port of the ASUS RT-68U router
    • Restart the ASUS RT-68U
    • Plug your laptop into one of the LAN ports on the ASUS RT-68U
    • The ASUS router web page opens automatically; otherwise, use the default IP address from the setup sheet that comes with the ASUS RT-68U
    • Set up the ASUS RT-68U with your Telstra username and password and configure Wi-Fi
    • Connect your devices to the new Wi-Fi and verify everything works
  • For VPN, go to Advanced Settings -> VPN -> OpenVPN
    • Set up using Advanced Settings and use TUN instead of TAP
    • The OpenVPN server on the ASUS RT-68U assigns client IP addresses in the range 10.8.0.0/24, which is different from the local LAN IP range. With such an address you cannot ping any of the local devices. You will need to push a route to VPN clients in the OpenVPN settings page to get this working:
      • push “route <local-LAN-network> <subnet-mask>”
      • e.g. push “route 192.168.2.0 255.255.255.0”
      • OpenVPN-Route
  • Connect to the home VPN from outside on a Mac

Get-NodePerfData – PowerShell Script to query NetApp OnCommand Performance Manager (OPM)

<#
script : Get-NodePerfData.ps1
Example:
Get-NodePerfData.ps1

This script queries the OPM (version 2.1/7.0) server and extracts the following performance counters for each node in the monitored clusters:
    Date, Time, avgProcessorBusy, cpuBusy, cifsOps, nfsOps, avgLatency

All data is saved into the "thismonth" directory, e.g. 1608 (YYMM).

#>
Function Get-TzDateTime{
   Return (Get-TzDate) + " " + (Get-TzTime)
}
Function Get-TzDate{
   Return Get-Date -uformat "%Y-%m-%d"
}
Function Get-TzTime{
   Return Get-Date -uformat "%H:%M:%S"
}
Function Log-Msg{
   <#
   .SYNOPSIS
   This function appends a message to log file based on the message type.
   .DESCRIPTION
   Appends a message to a log file.
   .PARAMETER
   Accepts an integer representing the log file extension type
   .PARAMETER
   Accepts a string value containing the message to append to the log file.
   .EXAMPLE
   Log-Msg -logType 0 -message "Command completed successfully"
   .EXAMPLE
   Log-Msg -logType 2 -message "Application is not installed"
   #>
   [CmdletBinding()]
   Param(
      [Parameter(Position=0,
         Mandatory=$True,
         ValueFromPipeLine=$True,
         ValueFromPipeLineByPropertyName=$True)]
      [Int]$logType,
      [Parameter(Position=1,
         Mandatory=$True,
         ValueFromPipeLine=$True,
         ValueFromPipeLineByPropertyName=$True)]
      [String]$message
   )
   Switch($logType){
      0 {$extension = "log"; break}
      1 {$extension = "err"; break}
      2 {$extension = "err"; break}
      3 {$extension = "csv"; break}
      default {$extension = "log"}
   }
   If($logType -eq 1){
      $message = ("Error " + $error[0] + " " + $message)
   }
   $prefix = Get-TzDateTime
   ($prefix + "," + $message) | Out-File -filePath `
   ($scriptLogPath + "." + $extension) -encoding ASCII -append
}
function MySQLOPM {
    Param(
      [Parameter(
      Mandatory = $true,
      ParameterSetName = '',
      ValueFromPipeline = $true)]
      [string]$Switch,
      [string]$Query
      )

    if($switch -match 'performance') {
        $MySQLDatabase = 'netapp_performance'
    }
    elseif($switch -match 'model'){
        $MySQLDatabase = 'netapp_model_view'    
    }
    $MySQLAdminUserName = 'report'
    $MySQLAdminPassword = 'password123'
    $MySQLHost = 'opm-server'
    $ConnectionString = "server=" + $MySQLHost + ";port=3306;Integrated Security=False;uid=" + $MySQLAdminUserName + ";pwd=" + $MySQLAdminPassword + ";database="+$MySQLDatabase

    Try {
      [void][System.Reflection.Assembly]::LoadFrom("E:\ssh\L080898\MySql.Data.dll")
      $Connection = New-Object MySql.Data.MySqlClient.MySqlConnection
      $Connection.ConnectionString = $ConnectionString
      $Connection.Open()

      $Command = New-Object MySql.Data.MySqlClient.MySqlCommand($Query, $Connection)
      $DataAdapter = New-Object MySql.Data.MySqlClient.MySqlDataAdapter($Command)
      $DataSet = New-Object System.Data.DataSet
      $RecordCount = $dataAdapter.Fill($dataSet, "data")
      $DataSet.Tables[0]
      }

    Catch {
      Write-Host "ERROR : Unable to run query : $query `n$($Error[0])"
     }

    Finally {
      $Connection.Close()
    }
}
#'------------------------------------------------------------------------------
#'Initialization Section. Define Global Variables.
#'------------------------------------------------------------------------------
##'Set Date and Time Variables
[String]$lastmonth      = (Get-Date).AddMonths(-1).ToString('yyMM')
[String]$thismonth      = (Get-Date).ToString('yyMM')
[String]$yesterday      = (Get-Date).AddDays(-1).ToString('yyMMdd')
[String]$today          = (Get-Date).ToString('yyMMdd')
[String]$fileTime       = (Get-Date).ToString('HHmm')
[String]$workDay        = (Get-Date).AddDays(-1).DayOfWeek
[String]$DOM            = (Get-Date).ToString('dd')
[String]$filedate       = (Get-Date).ToString('yyyyMMdd')
##'Set Path Variables
[String]$scriptPath     = Split-Path($MyInvocation.MyCommand.Path)
[String]$scriptSpec     = $MyInvocation.MyCommand.Definition
[String]$scriptBaseName = (Get-Item $scriptSpec).BaseName
[String]$scriptName     = (Get-Item $scriptSpec).Name
[String]$scriptLogPath  = $scriptPath + "\Logs\" + (Get-TzDate) + "-" + $scriptBaseName
[System.Object]$fso     = New-Object -ComObject "Scripting.FileSystemObject"
[String]$outputPath     = $scriptPath + "\Reports\" + $thismonth
[string]$logPath        = $scriptPath+ "\Logs"

# MySQL Query to get objectid, name of all nodes
$nodes = MySQLOPM -Switch model -Query "select objid,name from node"

# Create hash of nodename and objid
$hash =@{}

foreach ($line in $nodes) {
    $hash.add($line.name, $line.objid)
}
# Create Log Directory
if ( -not (Test-Path $logPath) ) { 
       Try{
          New-Item -Type directory -Path $logPath -ErrorAction Stop | Out-Null
          Log-Msg 0 "Created Folder ""$logPath"""
       }
       Catch{
          Log-Msg 0 ("Failed creating folder ""$logPath"". Error " + $_.Exception.Message)
          Exit -1;
       }
    }

# Check hash is not empty, then query OPM server to extract counters
if ($hash.count -gt 0) {

    # If Report directory does not exist then create
    if ( -not (Test-Path $outputPath) ) { 
       Try{
          New-Item -Type directory -Path $outputPath -ErrorAction Stop | Out-Null
          Log-Msg 0 "Created Folder ""$outputPath"""
       }
       Catch{
          Log-Msg 0 ("Failed creating folder ""$outputPath"". Error " + $_.Exception.Message)
          Exit -1;
       }
    }
    # foreach node
    foreach ($h in $hash.GetEnumerator()) {
    
        $nodeperffilename  = "$($h.name)`_$filedate.csv"
        $nodePerfFile = Join-Path $outputPath $nodeperffilename

        # MySQL query to extract counters for each node and save the data to this month's report directory
        MySQLOPM -Switch performance -Query "select objid,Date_Format(FROM_UNIXTIME(time/1000), '%Y:%m:%d') AS Date ,Date_Format(FROM_UNIXTIME(time/1000), '%H:%i') AS Time, round(avgProcessorBusy,1) AS cpuBusy,round(cifsOps,1) AS cifsOps,round(nfsOps,1) AS nfsOps,round((avgLatency/1000),1) As avgLatency from sample_node where objid=$($h.value)" | Export-Csv -Path $nodePerfFile -NoTypeInformation
        Log-Msg 0 "Exported Performance Logs for $($h.name)"
    }
} 
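
Once the script has run, the exported per-node CSV files can be consumed with standard PowerShell cmdlets. A minimal sketch (the node name "snowy-01" and the date in the file name are hypothetical, but follow the naming pattern used above):

# Read one of the exported reports and summarise the CPU counter
$csv = Import-Csv ".\Reports\1608\snowy-01_20160805.csv"
$csv | Measure-Object -Property cpuBusy -Average -Maximum | Select-Object Average, Maximum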

Volume Clone Split Extremely Slow in clustered Data ONTAP

Problem

My colleague had been dealing with growth on an extremely large volume (60 TB) for some time. After discussing it with the business groups, it was agreed to split the volume into two separate volumes. The largest directory identified was 20 TB, which could be moved to its own volume. Discussions started on the best possible solution to get this job completed quickly.

Possible Solutions

  • robocopy / securecopy the directory to another volume. Past experience says this could be a lot more time consuming.
  • ndmpcopy the large directory to a new volume. The ndmpcopy session needs to be kept open, and if the job fails during the transfer, we have to restart from the beginning. Also, there are no progress updates available.
  • clone the volume, delete the data that is not required, then split the clone. This seemed like a nice solution.
  • vol move. We didn’t want to copy the entire 60 TB volume and then delete data, so we didn’t consider this solution.

So we agreed on the third solution (clone, delete, split).

What actually happened

snowy-mgmt::> volume clone split start -vserver snowy -flexclone snowy_vol_001_clone
Warning: Are you sure you want to split clone volume snowy_vol_001_clone in Vserver snowy ?
{y|n}: y
[Job 3325] Job is queued: Split snowy_vol_001_clone.
 
Several hours later:
snowy-mgmt::> volume clone split show
                                     Inodes                 Blocks
                              --------------------- ---------------------
Vserver   FlexClone           Processed      Total    Scanned    Updated % Complete
--------- ------------------- ---------- ---------- ---------- ---------- ----------
snowy     snowy_vol_001_clone         55      65562    1532838    1531276          0
 
Two Days later:
snowy-mgmt::> volume clone split show
                                     Inodes                 Blocks
                              --------------------- ---------------------
Vserver   FlexClone           Processed      Total    Scanned    Updated % Complete
--------- ------------------- ---------- ---------- ---------- ---------- ----------
snowy     snowy_vol_001_clone        440      65562 1395338437 1217762917          0

This is a huge problem. The split operation will never complete in time.

What we found

We found the problem was with the way a clone split works. Data ONTAP uses a background scanner to copy the shared data from the parent volume to the FlexClone volume. The scanner has only one active message at any time, processing a single inode, so the split tends to be faster on a volume with fewer inodes. Also, the background scanner runs at a low priority and can take a considerable amount of time to complete. This means that for a large volume with millions of inodes, the split operation takes a huge amount of time.

Workaround

“volume move a clone”

snowy-mgmt::*> vol move start -vserver snowy -volume snowy_vol_001_clone -destination-aggregate snowy01_aggr_01
  (volume move start)
 
Warning: Volume will no longer be a clone volume after the move and any associated space efficiency savings will be lost. Do you want to proceed? {y|n}: y

Benefits of vol move a FlexClone:

  • Faster than FlexClone split.
  • Data can be moved to a different aggregate or node.

Reference

FAQ – FlexClone split

Error Handling in PowerShell Scripts

Introduction

I have been writing PowerShell scripts to address various problems as efficiently as possible. I have always incorporated error handling in my scripts; however, I recently refreshed my knowledge and am sharing it with fellow IT professionals. While running PowerShell cmdlets, you encounter two kinds of errors (terminating and non-terminating):

  • Terminating : these halt the function or operation, e.g. a syntax error or running out of memory. They can be caught and handled.

Terminating-Error

  • Non-terminating : these allow the function or operation to continue, e.g. file not found or permission issues; the operation simply continues to the next piece of code. They are difficult to capture. A short illustration follows below.
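
As a quick demonstration of a non-terminating error (the path is hypothetical), the cmdlet writes the error to the error stream but the next statement still runs:

Get-Item 'C:\temp\missing-file.txt'    # non-terminating: the error is displayed
"Execution continues past the error"   # this line still executes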

So how do you capture non-terminating errors in a function?

PowerShell provides various variables and parameters to handle errors and exceptions:

  • $ErrorActionPreference : a preference variable that applies to all cmdlets in the shell or the script
  • -ErrorAction : applies only to the specific cmdlet on which it is used
  • $Error : whenever an exception occurs, it is added to the $Error variable. By default the variable holds the last 256 errors. $Error is an array where the first element is the most recent exception; as new exceptions occur, they push the older ones down the list.
  • -ErrorVariable : accepts the name of a variable; if the command generates an error, it will be placed in that variable.
  • Try .. Catch constructs : the Try block contains the command or commands that you think might cause an error. You have to set their -ErrorAction to Stop in order to catch the error. The Catch block runs if an error occurs within the Try block.

-ErrorAction : use the ErrorAction parameter to treat non-terminating errors as terminating. Every PowerShell cmdlet supports ErrorAction. PowerShell halts execution on terminating errors; for non-terminating errors we have the option to tell PowerShell how to handle the situation.

Available Choices

  • SilentlyContinue : error messages are suppressed and execution continues
  • Stop : forces execution to stop, behaves like a terminating error
  • Continue : default option. Errors will display and execution will continue
  • Inquire : prompt the user for input to see if we should proceed
  • Ignore : error is ignored and not logged to the error stream
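
For instance (with a hypothetical path), the same failing cmdlet behaves differently depending on the chosen action:

Get-Item 'C:\temp\missing-file.txt' -ErrorAction SilentlyContinue   # no error shown, execution continues
Get-Item 'C:\temp\missing-file.txt' -ErrorAction Stop               # raises a terminating, catchable error

The function below applies the same idea, treating an Invoke-NcSsh failure as terminating so it can be caught: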

function Invoke-SshCmd ($cmd){
    try {
        Invoke-NcSsh $cmd -ErrorAction Stop | Out-Null
        "The command completed successfully"
    }
    catch {
        Write-ErrMsg "The command did not complete successfully"
    }
}

$ErrorActionPreference : it is also possible to treat all errors as terminating using the $ErrorActionPreference variable. You can do this either for the script you are working with or for the whole PowerShell session.
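
A minimal sketch of the session-wide approach (the path is hypothetical); every error in scope now behaves as terminating and can be caught:

$ErrorActionPreference = 'Stop'
try {
    Get-Item 'C:\temp\missing-file.txt'
}
catch {
    Write-Host "Caught: $($_.Exception.Message)"
}
finally {
    $ErrorActionPreference = 'Continue'   # restore the default
}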

-ErrorVariable : the example below captures the error in the variable “$x”

function Invoke-SshCmd ($cmd){
    try {
        Invoke-NcSsh $cmd -ErrorVariable x -ErrorAction SilentlyContinue | Out-Null
        "The command completed successfully"
    }
    catch {
        Write-ErrMsg "The command did not complete successfully : $($x.Exception)"
    }
}

$x.InvocationInfo : provides details about the context in which the command was executed
$x.Exception : holds the error message string
If there is a further underlying problem, it is captured in $x.Exception.InnerException
The error message can be further broken into:
$x.Exception.Message
and $x.Exception.ItemName
$($x.Exception.Message) is another way of accessing the error message. These properties are shown together in the sketch below.
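
A small sketch assuming the Invoke-SshCmd example above has populated $x (the -ErrorVariable target is a collection, so the first record is indexed):

if ($x) {
    $x[0].Exception.Message          # the error message string
    $x[0].Exception.InnerException   # underlying problem, if any
    $x[0].InvocationInfo.Line        # the command line that raised the error
}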

$Error : the example below captures the error from the default $Error variable

function Invoke-SshCmd ($cmd){
    try {
        Invoke-NcSsh $cmd -ErrorAction Stop | Out-Null
        "The command completed successfully"
    }
    catch {
        Write-ErrMsg "The command did not complete successfully : $($Error[0].Exception)"
    }
}

Query OnCommand Performance Manager (OPM) Database using PowerShell

Introduction

OnCommand Performance Manager (OPM) provides performance monitoring and event root-cause analysis for systems running clustered Data ONTAP software. It is the performance management part of OnCommand Unified Manager. OPM 2.1 is well integrated with Unified Manager 6.4. You can view and analyze events in the Performance Manager UI or view them in the Unified Manager Dashboard.

Performance Manager collects current performance data from all monitored clusters every five minutes (on the 5, 10, 15 ... minute marks). It analyzes this data to identify performance events and potential issues. It retains 30 days of five-minute historical performance data and 390 days of one-hour historical performance data. This enables you to view very granular performance details for the current month, and general performance trends for up to a year.

Accessing the Database

Using PowerShell you can query the MySQL database and retrieve information to create performance charts in Microsoft Excel or other tools. In order to access the OPM database you’ll need a user created with the “Database User” role.

OPM-User

The following databases are available in OPM 2.1:

  • information_schema
  • netapp_model
  • netapp_model_view
  • netapp_performance
  • opm

Of these, the two databases with the most relevant information are “netapp_model_view” and “netapp_performance”. The “netapp_model_view” database has tables that define the objects, and the relationships among the objects, for which performance data is collected, such as aggregates, SVMs, clusters, volumes, etc. The “netapp_performance” database has tables which contain the raw data collected as well as periodic rollups used to quickly generate the graphs OPM presents through its GUI.

Refer to the MySQL function in my previous post on Querying OCUM Database using PowerShell to connect to the OPM database.

Understanding Database

OPM assigns each object (node, cluster, LIF, port, aggregate, volume, etc.) a unique ID. These IDs are independent of the IDs in the OCUM database. The IDs are stored in tables in the “netapp_model_view” database. You can join the various tables through the object IDs.

The actual performance data is collected and stored in tables in the “netapp_performance” database. These tables are prefixed with “sample_”. Each table row contains the OPM object ID of the object (node, cluster, LIF, port, aggregate, volume, etc.), the timestamp of the collection and the raw data.

Few useful Database queries

The examples below query the database to retrieve performance counters for a node.

Connect to the “netapp_model_view” database and list the objid and name from the node table

MySQL -Query "select objid,name from node" | Format-Table -AutoSize

Connect to the “netapp_performance” database and export cpuBusy, cifsOps and avgLatency from the sample_node table

MySQL -Query "select objid,Date_Format(FROM_UNIXTIME(time/1000), '%Y:%m:%d %H:%i') AS Time,cpuBusy,cifsOps,avgLatency from sample_node where objid=2" | Export-Csv -Path E:\snowy-01.csv -NoTypeInformation

How to use “ndmpcopy” in clustered Data ONTAP 8.2.x

Introduction

“ndmpcopy” in clustered Data ONTAP has two modes

  1. node-scope-mode : you need to track the volume location if a volume move is performed
  2. vserver-scope-mode : no issues, even if the volume is moved to a different node. 

In this scenario I’ll use vserver-scope-mode to perform an “ndmpcopy” within the same cluster and the same SVM.

In my test I copied a 1 GB file to a new folder under the same volume.

Log in to the cluster

snowy-mgmt::> set diag -rows 0
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y

List of volumes on SVM “snowy”

snowy-mgmt::*> vol show -vserver snowy
(volume show)
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
snowy     SNOWY62_vol001 snowy01_hybdata_01 online RW 1TB    83.20GB   91%
snowy     SNOWY62_vol001_sv snowy02_hybdata_01 online DP 1TB 84.91GB   91%
snowy     HRauhome01   snowy01_hybdfc_01 online RW       100GB    94.96GB    5%
snowy     rootvol      snowy01_hybdfc_01 online RW        20MB    18.88MB    5%
4 entries were displayed.

snowy-mgmt::*> df -g SNOWY62_vol001
Filesystem               total       used      avail capacity  Mounted on                 Vserver
/vol/SNOWY62_vol001/ 972GB       3GB       83GB      91%  /SNOWY62_vol001       snowy
/vol/SNOWY62_vol001/.snapshot 51GB 0GB     51GB       0%  /SNOWY62_vol001/.snapshot  snowy
2 entries were displayed.

snowy-mgmt::*> vol show -vserver snowy -fields volume,junction-path
(volume show)
vserver volume              junction-path
------- ------------------- --------------------
snowy   SNOWY62_vol001 /SNOWY62_vol001
snowy   SNOWY62_vol001_sv -
snowy   HRauhome01          /hrauhome01
snowy   rootvol             /
4 entries were displayed.

Create an “ndmpuser” with the role “backup”

snowy-mgmt::*> security login show
Vserver: snowy
Authentication                  Acct
UserName         Application Method         Role Name        Locked
---------------- ----------- -------------- ---------------- ------
vsadmin          ontapi      password       vsadmin          yes
vsadmin          ssh         password       vsadmin          yes

Vserver: snowy-mgmt
Authentication                  Acct
UserName         Application Method         Role Name        Locked
---------------- ----------- -------------- ---------------- ------
admin            console     password       admin            no
admin            http        password       admin            no
admin            ontapi      password       admin            no
admin            service-processor password admin            no
admin            ssh         password       admin            no
autosupport      console     password       autosupport      yes
8 entries were displayed.

snowy-mgmt::*> security login create -username ndmpuser -application ssh -authmethod password -role backup -vserver snowy-mgmt
Please enter a password for user 'ndmpuser':
Please enter it again:

snowy-mgmt::*> vserver services ndmp generate-password -vserver snowy-mgmt -user ndmpuser
Vserver: snowy-mgmt
User: ndmpuser
Password: Ip3gRJchR0FGPLA7

Turn on “ndmp” service on the cluster mgmt. SVM

snowy-mgmt::*> vserver services ndmp on -vserver snowy-mgmt

In the nodeshell, initiate “ndmpcopy”

snowy-mgmt::*> node run -node snowy-01
Type 'exit' or 'Ctrl-D' to return to the CLI

snowy-01> ndmpcopy
usage:
ndmpcopy [<options>] <source> <destination>
<source> and <destination> are of the form [<filer>:]<path>
If an IPv6 address is specified, it must be enclosed in square brackets

options:
[-sa <username>:<password>]
[-da <username>:<password>]
    source/destination filer authentication
[-st { text | md5 }]
[-dt { text | md5 }]
    source/destination filer authentication type
    default is md5
[-l { 0 | 1 | 2 }]
    incremental level
    default is 0
[-d]
    debug mode
[-f]
    force flag, to copy system files
[-mcs { inet | inet6 }]
    force specified address mode for source control connection
[-mcd { inet | inet6 }]
    force specified address mode for destination control connection
[-md { inet | inet6 }]
    force specified address mode for data connection
[-h]
    display this message
[-p]
    accept the password interactively
[-exclude <value>]
    exclude the files/dirs from backup path

snowy-01>
snowy-01> ndmpcopy -sa ndmpuser:Ip3gRJchR0FGPLA7 -da ndmpuser:Ip3gRJchR0FGPLA7 10.10.2.72:/snowy/SNOWY62_vol001/TestFil2_002 10.10.2.72:/snowy/SNOWY62_vol001/destination
Ndmpcopy: Starting copy [ 14 ] ...
Ndmpcopy: 10.10.2.72: Notify: Connection established
Ndmpcopy: 10.10.2.72: Notify: Connection established
Ndmpcopy: 10.10.2.72: Connect: Authentication successful
Ndmpcopy: 10.10.2.72: Connect: Authentication successful
Ndmpcopy: 10.10.2.72: Log: DUMP: creating "/snowy/SNOWY62_vol001/../snapshot_for_backup.7" snapshot.
Ndmpcopy: 10.10.2.72: Log: DUMP: Using Partial Volume Dump of selected subtrees
Ndmpcopy: 10.10.2.72: Log: DUMP: Using snapshot_for_backup.7 snapshot
Ndmpcopy: 10.10.2.72: Log: DUMP: Date of this level 0 dump: Tue Apr  5 08:27:43 2016.
Ndmpcopy: 10.10.2.72: Log: DUMP: Date of last level 0 dump: the epoch.
Ndmpcopy: 10.10.2.72: Log: DUMP: Dumping /snowy/SNOWY62_vol001/TestFil2_002 to NDMP connection
Ndmpcopy: 10.10.2.72: Log: DUMP: mapping (Pass I)[regular files]
Ndmpcopy: 10.10.2.72: Log: DUMP: Reading file names from NDMP.
Ndmpcopy: 10.10.2.72: Log: DUMP: mapping (Pass II)[directories]
Ndmpcopy: 10.10.2.72: Log: DUMP: estimated 1050638 KB.
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass III) [directories]
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass IV) [regular files]
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:27:52 2016: Begin level 0 restore
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:27:54 2016: Reading directories from the backup
Ndmpcopy: 10.10.2.72: Log: RESTORE: Warning: /vol/SNOWY62_vol001/destination/ will not be restored as a qtree: exists as a normal subdirectory.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Could not create qtree `/vol/SNOWY62_vol001/destination/'. Creating a regular directory instead.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:27:55 2016: Creating files and directories.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:27:55 2016: Writing data to files.
Ndmpcopy: 10.10.2.72: Log: ACL_START is '1075865600'
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass V) [ACLs]
Ndmpcopy: 10.10.2.72: Log: DUMP: 1050657 KB
Ndmpcopy: 10.10.2.72: Log: DUMP: DUMP IS DONE
Ndmpcopy: 10.10.2.72: Log: RESTORE: RESTORE IS DONE
Ndmpcopy: 10.10.2.72: Notify: restore successful
Ndmpcopy: 10.10.2.72: Log: DUMP: Deleting "/snowy/SNOWY62_vol001/../snapshot_for_backup.7" snapshot.
Ndmpcopy: 10.10.2.72: Log: DUMP_DATE is '5754776159'
Ndmpcopy: 10.10.2.72: Notify: dump successful
Ndmpcopy: Transfer successful [ 0 hours, 1 minutes, 26 seconds ]
Ndmpcopy: Done

Although I used the cluster-mgmt LIF in the ndmpcopy syntax, I didn’t see any traffic flowing on the LIF.

snowy-mgmt::*> statistics show-periodic -node cluster:summary -object lif:vserver -instance snowy-mgmt -counter instance_name|recv_data|sent_data -interval 1
snowy-mgmt: lif:vserver.snowy-mgmt: 4/5/2016 08:27:20
instance     recv     sent   Complete     Number of
name     data     data  Aggregation  Constituents
-------- -------- -------- ----------- -------------
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1

Another “ndmpcopy” job with a different statistics command:
snowy-01> ndmpcopy -sa ndmpuser:Ip3gRJchR0FGPLA7 -da ndmpuser:Ip3gRJchR0FGPLA7 10.10.2.72:/snowy/SNOWY62_vol001/TestFil1_001 10.10.2.72:/snowy/SNOWY62_vol001/destination
Ndmpcopy: Starting copy [ 15 ] ...
Ndmpcopy: 10.10.2.72: Notify: Connection established
Ndmpcopy: 10.10.2.72: Notify: Connection established
Ndmpcopy: 10.10.2.72: Connect: Authentication successful
Ndmpcopy: 10.10.2.72: Connect: Authentication successful
Ndmpcopy: 10.10.2.72: Log: DUMP: creating "/snowy/SNOWY62_vol001/../snapshot_for_backup.9" snapshot.
Ndmpcopy: 10.10.2.72: Log: DUMP: Using Partial Volume Dump of selected subtrees
Ndmpcopy: 10.10.2.72: Log: DUMP: Using snapshot_for_backup.9 snapshot
Ndmpcopy: 10.10.2.72: Log: DUMP: Date of this level 0 dump: Tue Apr  5 08:30:40 2016.
Ndmpcopy: 10.10.2.72: Log: DUMP: Date of last level 0 dump: the epoch.
Ndmpcopy: 10.10.2.72: Log: DUMP: Dumping /snowy/SNOWY62_vol001/TestFil1_001 to NDMP connection
Ndmpcopy: 10.10.2.72: Log: DUMP: mapping (Pass I)[regular files]
Ndmpcopy: 10.10.2.72: Log: DUMP: Reading file names from NDMP.
Ndmpcopy: 10.10.2.72: Log: DUMP: mapping (Pass II)[directories]
Ndmpcopy: 10.10.2.72: Log: DUMP: estimated 1050638 KB.
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass III) [directories]
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass IV) [regular files]
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:30:47 2016: Begin level 0 restore
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:30:49 2016: Reading directories from the backup
Ndmpcopy: 10.10.2.72: Log: RESTORE: Warning: /vol/SNOWY62_vol001/destination/ will not be restored as a qtree: exists as a normal subdirectory.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Could not create qtree `/vol/SNOWY62_vol001/destination/'. Creating a regular directory instead.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:30:50 2016: Creating files and directories.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:30:50 2016: Writing data to files.
Ndmpcopy: 10.10.2.72: Log: ACL_START is '1075865600'
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass V) [ACLs]
Ndmpcopy: 10.10.2.72: Log: DUMP: 1050657 KB
Ndmpcopy: 10.10.2.72: Log: DUMP: DUMP IS DONE
Ndmpcopy: 10.10.2.72: Log: DUMP: Deleting "/snowy/SNOWY62_vol001/../snapshot_for_backup.9" snapshot.
Ndmpcopy: 10.10.2.72: Log: RESTORE: RESTORE IS DONE
Ndmpcopy: 10.10.2.72: Notify: restore successful
Ndmpcopy: 10.10.2.72: Log: DUMP_DATE is '5754776336'
Ndmpcopy: 10.10.2.72: Notify: dump successful
Ndmpcopy: Transfer successful [ 0 hours, 1 minutes, 20 seconds ]
Ndmpcopy: Done
snowy-01>

snowy-mgmt::*> statistics show-periodic -object lif -instance snowy-mgmt:cluster_mgmt -counter instance_name|recv_data|sent_data -interval 1
snowy-mgmt: lif.snowy-mgmt:cluster_mgmt: 4/5/2016 08:30:10
instance     recv     sent   Complete     Number of
name     data     data  Aggregation  Constituents
-------- -------- -------- ----------- -------------
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a

Convert SnapMirror DP relation to XDP in clustered Data ONTAP

Clustered Data ONTAP 8.2 introduced the SnapVault (XDP) feature and, with it, the ability to convert existing SnapMirror (DP) relationships to SnapVault (XDP).

I had tested this feature a long time ago but had never used it in a production environment. Recently I got a chance to implement it when a production volume with a high change rate (snapshots involved) grew to 60 TB (20 TB used by snapshots). Because of the FAS3250 controllers in the cluster, the maximum volume size is 70 TB. After discussions with the customer, it was decided to create a local SnapVault copy of the production volume that would contain all existing snapshots and accumulate more in the coming days until a new SnapVault cluster is set up. The data in the volume is highly compressible, so the SnapVault destination would consume less space.

Overview of this process:

  1. Create a snapmirror DP relation
  2. Initialize the snapmirror DP relation
  3. Quiesce/Break/Delete the DP relation
  4. Resync the relation as snapmirror XDP
  5. Continue with vault updates

CREATE SOURCE VOLUME
snowy-mgmt::*> vol create -volume AU2004NP0062_vol001 -aggregate snowy01_hybdata_01 -space-guarantee none -size 1tb -junction-path /AU2004NP0066_vol001 -state online -junction-active true
  (volume create)
[Job 85] Job succeeded: Successful

snowy-mgmt::*> vol show -volume AU2004NP0062_vol001 -fields security-style
  (volume show)
vserver volume              security-style
------- ------------------- --------------
snowy   AU2004NP0062_vol001 ntfs

CREATE DESTINATION VOLUME
snowy-mgmt::*> vol create -volume AU2004NP0062_vol001_sv -aggregate snowy02_hybdata_01 -space-guarantee none -size 3tb -type DP -state online
  (volume create)
[Job 86] Job succeeded: Successful

snowy-mgmt::*> vol show -volume AU2004NP0062_vol001*
  (volume show)
Vserver   Volume                 Aggregate          State  Type Size Available Used%
--------- ---------------------- ------------------ ------ ---- ---- --------- -----
snowy     AU2004NP0062_vol001    snowy01_hybdata_01 online RW   1TB  87.01GB   91%
snowy     AU2004NP0062_vol001_sv snowy02_hybdata_01 online DP   3TB  87.01GB   97%
2 entries were displayed.

CREATE SNAPMIRROR RELATION
snowy-mgmt::*> snapmirror create -source-path snowy:AU2004NP0062_vol001 -destination-path snowy:AU2004NP0062_vol001_sv -type DP -vserver snowy
Operation succeeded: snapmirror create for the relationship with destination "snowy:AU2004NP0062_vol001_sv".

snowy-mgmt::*> snapmirror show -type DP
                                                                       Progress
Source            Destination  Mirror  Relationship   Total           Last
Path        Type  Path         State   Status         Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
snowy:AU2004NP0062_vol001 DP snowy:AU2004NP0062_vol001_sv Uninitialized Idle - true -

INITIALIZE SNAPMIRROR
snowy-mgmt::*> snapmirror initialize -destination-path snowy:AU2004NP0062_vol001_sv
Operation is queued: snapmirror initialize of destination "snowy:AU2004NP0062_vol001_sv".

CREATE SNAPSHOTS ON SOURCE VOLUME
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_01
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_02
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_03
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_04
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_05
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_06
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_07
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_08
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_09
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_00

snowy-mgmt::*> snapshot show -volume AU2004NP0062_vol001
                                                                 ---Blocks---
Vserver  Volume              Snapshot                  State       Size Total% Used%
-------- ------------------- ------------------------- -------- ------- ------ -----
snowy    AU2004NP0062_vol001 hourly.2016-03-24_1005    valid       60KB     0%   27%
                             snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100529
                                                       valid       80KB     0%   33%
                             sas_snap_01               valid       60KB     0%   27%
                             sas_snap_02               valid       64KB     0%   29%
                             sas_snap_03               valid       76KB     0%   32%
                             sas_snap_04               valid       60KB     0%   27%
                             sas_snap_05               valid       64KB     0%   29%
                             sas_snap_06               valid       64KB     0%   29%
                             sas_snap_07               valid       64KB     0%   29%
                             sas_snap_08               valid       64KB     0%   29%
                             sas_snap_09               valid       76KB     0%   32%
                             sas_snap_00               valid       56KB     0%   26%
12 entries were displayed.

snowy-mgmt::*> snapmirror show
                                                                       Progress
Source            Destination  Mirror  Relationship   Total           Last
Path        Type  Path         State   Status         Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
snowy:AU2004NP0062_vol001 DP snowy:AU2004NP0062_vol001_sv Snapmirrored Idle - true -

UPDATE SNAPMIRROR TO TRANSFER ALL SNAPSHOTS TO DESTINATION VOLUME
snowy-mgmt::*> snapmirror update -destination-path snowy:*
Operation is queued: snapmirror update of destination "snowy:AU2004NP0062_vol001_sv".
1 entry was acted on.

SNAPSHOTS REACHED THE DESTINATION VOLUME
snowy-mgmt::*> snapshot show -volume AU2004NP0062_vol001_sv
                                                                    ---Blocks---
Vserver  Volume                 Snapshot                  State       Size Total% Used%
-------- ---------------------- ------------------------- -------- ------- ------ -----
snowy    AU2004NP0062_vol001_sv hourly.2016-03-24_1005    valid       60KB     0%   28%
                                snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100529
                                                          valid       80KB     0%   34%
                                sas_snap_01               valid       60KB     0%   28%
                                sas_snap_02               valid       64KB     0%   29%
                                sas_snap_03               valid       76KB     0%   33%
                                sas_snap_04               valid       60KB     0%   28%
                                sas_snap_05               valid       64KB     0%   29%
                                sas_snap_06               valid       64KB     0%   29%
                                sas_snap_07               valid       64KB     0%   29%
                                sas_snap_08               valid       64KB     0%   29%
                                sas_snap_09               valid       76KB     0%   33%
                                sas_snap_00               valid       72KB     0%   32%
                                snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100707
                                                          valid         0B     0%    0%
13 entries were displayed.

QUIESCE, BREAK AND DELETE SNAPMIRRORS
snowy-mgmt::*> snapmirror quiesce -destination-path snowy:AU2004NP0062_vol001_sv
Operation succeeded: snapmirror quiesce for destination "snowy:AU2004NP0062_vol001_sv".

snowy-mgmt::*> snapmirror break -destination-path snowy:AU2004NP0062_vol001_sv
[Job 87] Job succeeded: SnapMirror Break Succeeded

snowy-mgmt::*> snapmirror delete -destination-path snowy:AU2004NP0062_vol001_sv
Operation succeeded: snapmirror delete for the relationship with destination "snowy:AU2004NP0062_vol001_sv".

snowy-mgmt::*> snapmirror show
This table is currently empty.

RESYNC SNAPMIRROR AS XDP RELATION
snowy-mgmt::*> snapmirror resync -source-path snowy:AU2004NP0062_vol001 -destination-path snowy:AU2004NP0062_vol001_sv -type XDP

Warning: All data newer than Snapshot copy snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100707 on volume snowy:AU2004NP0062_vol001_sv will be deleted.
         Verify there is no XDP relationship whose source volume is "snowy:AU2004NP0062_vol001_sv". If such a relationship exists then you are creating an unsupported XDP to XDP cascade.
Do you want to continue? {y|n}: y
[Job 88] Job succeeded: SnapMirror Resync Transfer Queued

snowy-mgmt::*> snapmirror show
                                                                       Progress
Source            Destination  Mirror  Relationship   Total           Last
Path        Type  Path         State   Status         Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
snowy:AU2004NP0062_vol001 XDP snowy:AU2004NP0062_vol001_sv Snapmirrored Idle - true -

SNAPSHOTS EXIST ON BOTH SOURCE AND DESTINATION VOLUME AFTER RESYNC
snowy-mgmt::*> snapshot show -volume AU2004NP0062_vol001
                                                                 ---Blocks---
Vserver  Volume              Snapshot                  State       Size Total% Used%
-------- ------------------- ------------------------- -------- ------- ------ -----
snowy    AU2004NP0062_vol001 hourly.2016-03-24_1005    valid       68KB     0%   29%
                             sas_snap_01               valid       60KB     0%   27%
                             sas_snap_02               valid       64KB     0%   28%
                             sas_snap_03               valid       76KB     0%   32%
                             sas_snap_04               valid       60KB     0%   27%
                             sas_snap_05               valid       64KB     0%   28%
                             sas_snap_06               valid       64KB     0%   28%
                             sas_snap_07               valid       64KB     0%   28%
                             sas_snap_08               valid       64KB     0%   28%
                             sas_snap_09               valid       76KB     0%   32%
                             sas_snap_00               valid       72KB     0%   31%
                             snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100707
                                                       valid       72KB     0%   31%
12 entries were displayed.

snowy-mgmt::*> snapshot show -volume AU2004NP0062_vol001_sv
                                                                    ---Blocks---
Vserver  Volume                 Snapshot                  State       Size Total% Used%
-------- ---------------------- ------------------------- -------- ------- ------ -----
snowy    AU2004NP0062_vol001_sv hourly.2016-03-24_1005    valid       60KB     0%   28%
                                snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100529
                                                          valid       80KB     0%   34%
                                sas_snap_01               valid       60KB     0%   28%
                                sas_snap_02               valid       64KB     0%   29%
                                sas_snap_03               valid       76KB     0%   33%
                                sas_snap_04               valid       60KB     0%   28%
                                sas_snap_05               valid       64KB     0%   29%
                                sas_snap_06               valid       64KB     0%   29%
                                sas_snap_07               valid       64KB     0%   29%
                                sas_snap_08               valid       64KB     0%   29%
                                sas_snap_09               valid       76KB     0%   33%
                                sas_snap_00               valid       72KB     0%   32%
                                snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100707
                                                          valid       76KB     0%   33%
13 entries were displayed.

TURN ON VOLUME EFFICIENCY - DESTINATION VOLUME
snowy-mgmt::*> vol efficiency on -volume AU2004NP0062_vol001_sv
  (volume efficiency on)
Efficiency for volume "AU2004NP0062_vol001_sv" of Vserver "snowy" is enabled.
Already existing data could be processed by running "volume efficiency start -vserver snowy -volume AU2004NP0062_vol001_sv -scan-old-data true".

CREATE A CIFS SHARE ON SOURCE VOLUME AND COPY SOME DATA
snowy-mgmt::*> cifs share create -share-name sas_vol -path /AU2004NP0062_vol001 -share-properties oplocks,browsable,changenotify

snowy-mgmt::*> vol show -volume AU2004NP0062_vol001 -fields used
  (volume show)
vserver volume              used
------- ------------------- ------
snowy   AU2004NP0062_vol001 2.01GB

CREATE SNAPSHOT AND SNAPMIRROR POLICIES WITH THE SAME SNAPMIRROR LABELS
snowy-mgmt::*> cron show
  (job schedule cron show)
Name             Description
---------------- -----------------------------------------------------
5min             @:00,:05,:10,:15,:20,:25,:30,:35,:40,:45,:50,:55
8hour            @2:15,10:15,18:15
daily            @0:10
hourly           @:05
weekly           Sun@0:15
5 entries were displayed.

snowy-mgmt::*> snapshot policy create -policy keep_more_snaps -enabled true -schedule1 5min -count1 5 -prefix1 sv -snapmirror-label1 mins -vserver snowy

snowy-mgmt::*> snapmirror policy create -vserver snowy -policy XDP_POL

snowy-mgmt::*> snapmirror policy add-rule -vserver snowy -policy XDP_POL -snapmirror-label mins -keep 50

APPLY SNAPSHOT POLICY TO SOURCE VOLUME
snowy-mgmt::*> volume modify -volume AU2004NP0062_vol001 -snapshot-policy keep_more_snaps

Warning: You are changing the Snapshot policy on volume AU2004NP0062_vol001 to keep_more_snaps. Any Snapshot copies on this volume from the previous policy will not be deleted by this new Snapshot policy.
Do you want to continue? {y|n}: y
Volume modify successful on volume: AU2004NP0062_vol001

APPLY SNAPMIRROR POLICY TO DESTINATION VOLUME
snowy-mgmt::*> snapmirror modify -destination-path snowy:AU2004NP0062_vol001_sv -policy XDP_POL
Operation succeeded: snapmirror modify for the relationship with destination "snowy:AU2004NP0062_vol001_sv".

snowy-mgmt::*> vol show -volume AU2004NP0062_vol001 -fields snapshot-policy
  (volume show)
vserver volume              snapshot-policy
------- ------------------- ---------------
snowy   AU2004NP0062_vol001 keep_more_snaps

snowy-mgmt::*> snapshot policy show keep_more_snaps -instance
                             Vserver: snowy
                Snapshot Policy Name: keep_more_snaps
             Snapshot Policy Enabled: true
                        Policy Owner: vserver-admin
                             Comment: -
           Total Number of Schedules: 1
Schedule               Count Prefix                SnapMirror Label
---------------------- ----- --------------------- -------------------
5min                   5     sv                    mins

UPDATE SNAPMIRROR RELATIONSHIP (SNAPVAULT)
snowy-mgmt::*> snapmirror update -destination-path snowy:AU2004NP0062_vol001_sv
Operation is queued: snapmirror update of destination "snowy:AU2004NP0062_vol001_sv".

snowy-mgmt::*> snapmirror show
                                                                       Progress
Source            Destination  Mirror  Relationship   Total           Last
Path        Type  Path         State   Status         Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
snowy:AU2004NP0062_vol001 XDP snowy:AU2004NP0062_vol001_sv Snapmirrored Transferring 0B true 03/24 10:56:29

THE SIZE OF BOTH SOURCE AND DESTINATION VOLUMES IS THE SAME
snowy-mgmt::*> vol show -volume AU* -fields used
  (volume show)
vserver volume                 used
------- ---------------------- ------
snowy   AU2004NP0062_vol001    2.01GB
snowy   AU2004NP0062_vol001_sv 2.06GB
2 entries were displayed.

DEDUPE JOB IS RUNNING
snowy-mgmt::*> sis status
Vserver    Volume                 State    Status       Progress             Policy
---------- ---------------------- -------- ------------ -------------------- ----------
snowy      AU2004NP0062_vol001_sv Enabled  Active       539904 KB (25%) Done -

SIZE OF DESTINATION VOLUME AFTER DEDUPE JOB COMPLETED
snowy-mgmt::*> sis status
Vserver    Volume                 State    Status       Progress             Policy
---------- ---------------------- -------- ------------ -------------------- ----------
snowy      AU2004NP0062_vol001_sv Enabled  Idle         Idle for 00:00:05    -

snowy-mgmt::*> vol show -volume AU* -fields used
  (volume show)
vserver volume                 used
------- ---------------------- -------
snowy   AU2004NP0062_vol001    2.01GB
snowy   AU2004NP0062_vol001_sv 274.9MB
2 entries were displayed.

START COMPRESSION JOB ON DESTINATION VOLUME BY SCANNING EXISTING DATA
snowy-mgmt::*> vol efficiency start -volume AU2004NP0062_vol001_sv -scan-old-data
  (volume efficiency start)

Warning: This operation scans all of the data in volume "AU2004NP0062_vol001_sv" of Vserver "snowy". It may take a significant time, and may degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "AU2004NP0062_vol001_sv" of Vserver "snowy" has started.

SIZE OF DESTINATION VOLUME AFTER COMPRESSION JOB COMPLETED
snowy-mgmt::*> vol show -volume AU* -fields used
  (volume show)
vserver volume                 used
------- ---------------------- -------
snowy   AU2004NP0062_vol001    2.01GB
snowy   AU2004NP0062_vol001_sv 49.76MB
2 entries were displayed.

Querying OCUM Database using PowerShell

OnCommand Unified Manager (OCUM) is the software to monitor and troubleshoot cluster or SVM issues relating to data storage capacity, availability, performance and protection. OCUM polls the clustered Data ONTAP storage systems and stores all inventory information in a MySQL database. Using PowerShell we can query the MySQL database and retrieve the information to create reports.

All we need is the MySQL .NET connector to query the OCUM database and retrieve information from the various tables. Another helpful tool is “HeidiSQL”, a client for MySQL. You can connect to the OCUM database using HeidiSQL and view all the tables and columns within the database.

Download and use version 2.0 MySQL Connector with OCUM 6.2

Download link to HeidiSQL

NetApp Communities Post

First of all, you’ll need to create a “Database User” with the role “Report Schema” (OCUM GUI -> Administration -> ManagerUsers -> Add)

Use HeidiSQL to connect to OCUM database

Connect-OCUM

OCUM

Ocum_report

Sample PowerShell code to connect to the OCUM database and retrieve information

# Get-cDOTAggrVolReport.ps1
# Date : 2016_03_10 12:12 PM
# This script uses the MySQL .NET connector at E:\ssh\MySql.Data.dll to query the OCUM 6.2 database

# Function MySQL queries OCUM database
# usage: MySQL -Query <sql-query>
function MySQL {
Param(
[Parameter(
Mandatory = $true,
ParameterSetName = '',
ValueFromPipeline = $true)]
[string]$Query
)

$MySQLAdminUserName = 'reportuser'
$MySQLAdminPassword = 'Netapp123'
$MySQLDatabase = 'ocum_report'
$MySQLHost = '192.168.0.71'
$ConnectionString = "server=" + $MySQLHost + ";port=3306;Integrated Security=False;uid=" + $MySQLAdminUserName + ";pwd=" + $MySQLAdminPassword + ";database="+$MySQLDatabase

Try {
[void][System.Reflection.Assembly]::LoadFrom("E:\ssh\MySql.Data.dll")
$Connection = New-Object MySql.Data.MySqlClient.MySqlConnection
$Connection.ConnectionString = $ConnectionString
$Connection.Open()

$Command = New-Object MySql.Data.MySqlClient.MySqlCommand($Query, $Connection)
$DataAdapter = New-Object MySql.Data.MySqlClient.MySqlDataAdapter($Command)
$DataSet = New-Object System.Data.DataSet
$RecordCount = $dataAdapter.Fill($dataSet, "data")
$DataSet.Tables[0]
}

Catch {
Write-Host "ERROR : Unable to run query : $query `n$($Error[0])"
}

Finally {
$Connection.Close()
}
}

# Define disk location to store aggregate and volume size reports retrieved from OCUM
$rptdir = "E:\ssh\aggr-vol-space"
$rpt = "E:\ssh\aggr-vol-space"
$filedate = (Get-Date).ToString('yyyyMMdd')
$aggrrptFilename = "aggrSize`_$filedate.csv"
$aggrrptFile = Join-Path $rpt $aggrrptFilename
$volrptFilename = "volSize`_$filedate.csv"
$volrptFile = Join-Path $rpt $volrptFilename

# verify Report directory exists
if ( -not (Test-Path $rptDir) ) {
write-host "Error: Report directory $rptDir does not exist."
exit
}

# Produce aggregate report from OCUM
#$aggrs = MySQL -Query "select aggregate.name as 'Aggregate', aggregate.sizeTotal as 'TotalSize KB', aggregate.sizeUsed as 'UsedSize KB', aggregate.sizeUsedPercent as 'Used %', aggregate.sizeAvail as 'Available KB', aggregate.hasLocalRoot as 'HasRootVolume' from aggregate"
$aggrs = MySQL -Query "select aggregate.name as 'Aggregate', round(aggregate.sizeTotal/Power(1024,3),1) as 'TotalSize GB', round(aggregate.sizeUsed/Power(1024,3),1) as 'UsedSize GB', aggregate.sizeUsedPercent as 'Used %', round(aggregate.sizeAvail/Power(1024,3),1) as 'Available GB', aggregate.hasLocalRoot as 'HasRootVolume' from aggregate"
$aggrs | where {$_.HasRootVolume -eq $False} | export-csv -NoTypeInformation $aggrrptFile

# Produce volume report from OCUM
$vols = MySQL -Query "select volume.name as 'Volume', clusternode.name as 'Nodename', aggregate.name as 'Aggregate', round(volume.size/Power(1024,3),1) as 'TotalSize GB', round(volume.sizeUsed/Power(1024,3),1) as 'UsedSize GB', volume.sizeUsedPercent as 'Used %', round(volume.sizeAvail/Power(1024,3),1) as 'AvailableSize GB', volume.isSvmRoot as 'isSvmRoot', volume.isLoadSharingMirror as 'isLSMirror' from volume,clusternode,aggregate where clusternode.id = volume.nodeId AND volume.aggregateId = aggregate.id"
$vols | where {$_.isSvmRoot -eq $False -and $_.isLSMirror -eq $False -and $_.Volume -notmatch "vol0$"} | export-csv -NoTypeInformation $volrptFile
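
With the MySQL function above loaded (dot-source the script or paste the function into the session), ad-hoc queries are easy to run interactively. A small sketch using only columns already referenced in the report queries above:

# List the five fullest aggregates by used percentage
MySQL -Query "select aggregate.name as 'Aggregate', aggregate.sizeUsedPercent as 'Used %' from aggregate" |
    Sort-Object { [int]$_.'Used %' } -Descending |
    Select-Object -First 5 |
    Format-Table -AutoSize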

Update ONTAP Image on cDOT by copying files locally

  • Download ONTAP image on your computer
  • Access a CIFS share / NFS mount (cDOT volume) and copy the image to this volume
  • Log in to the systemshell of each node using the diag user
  • sudo cp -rf /clus/<vserver>/<volume>/<image-file> /mroot/etc/software
  • exit systemshell
  • system node image package show
  • system node image update -node <node-name> -package file://localhost/mroot/etc/software/831_q_image.tgz
  • system node image show
login as: admin 
Password:
 
cluster1::> set diag -rows 0
Warning: These diagnostic commands are for use by NetApp personnel only.
 Do you want to continue? {y|n}: y

cluster1::*> security login show -user-or-group-name diag
 Vserver: cluster1
 Authentication                  Acct
 User/Group Name  Application Method         Role Name        Locked
 ---------------- ----------- -------------- ---------------- ------
 diag             console     password       admin            no

cluster1::*> security login unlock -username diag
 
cluster1::*> security login password -username diag
 Enter a new password:
 Enter it again:

cluster1::*> version
 NetApp Release 8.3.1RC1: Fri Jun 12 21:46:00 UTC 2015

cluster1::*> system node image package show
 This table is currently empty.

cluster1::*> system node image package show -node cluster1-01 -package file://localhost/mroot/etc/software
 There are no entries matching your query.

cluster1::*> systemshell -node cluster1-01
 (system node systemshell)
 Data ONTAP/amd64 (cluster1-01) (pts/2)
 login: diag
 Password:
 Last login: Mon Jul 27 16:54:10 from localhost

cluster1-01% sudo mkdir /mroot/etc/software
 
cluster1-01% sudo ls /clus/NAS/ntfs
 .snapshot       831_q_image.tgz BGInfo          nfs-on-ntfs     no-user-access  ntfs-cifs.txt

cluster1-01% sudo cp -rf /clus/NAS/ntfs/831_q_image.tgz /mroot/etc/software
 
cluster1-01% sudo ls /mroot/etc/software
 831_q_image.tgz

cluster1-01% exit
 logout

cluster1::*> system node image package show
 Package
 Node         Repository     Package File Name
 ------------ -------------- -----------------
 cluster1-01
 mroot
 831_q_image.tgz

cluster1::*> system node image update -node cluster1-01 -package file://localhost/mroot/etc/software/image.tgz

cluster1::*> system node image package show
 Package
 Node         Repository     Package File Name
 ------------ -------------- -----------------
 cluster1-01
 mroot
 831_q_image.tgz

cluster1::*> system node image update -node cluster1-01 -package 
file://localhost/mroot/etc/software/831_q_image.tgz

Software update started on node cluster1-01. Updating image2 with package file://localhost/mroot/etc/software/831_q_image.tgz.
 Listing package contents.
 Decompressing package contents.
 Invoking script (install phase). This may take up to 60 minutes.
 Mode of operation is UPDATE
 Current image is image1
 Alternate image is image2
 Package MD5 checksums pass
 Versions are compatible
 Available space on boot device is 1372 MB
 Required  space on boot device is 438 MB
 Kernel binary matches install machine type
 LIF checker script is invoked.
 NO CONFIGURATIONS WILL BE CHANGED DURING THIS TEST.
 Checking ALL Vservers for sufficiency LIFs.
 Running in upgrade mode.
 Running in report mode.
 Enabling Script Optimizations.
 No need to do upgrade check of external servers for this installed version.
 LIF checker script has validated configuration.
 NFS netgroup check script is invoked.
 NFS netgroup check script has run successfully.
 NFS exports DNS check script is invoked.
 netapp_nfs_exports_dns_check script begin
 netapp_nfs_exports_dns_check script end
 NFS exports DNS check script has completed.
 Getting ready to install image
 Directory /cfcard/x86_64/freebsd/image2 created
 Syncing device...
 Extracting to /cfcard/x86_64/freebsd/image2...
 x CHECKSUM
 x VERSION
 x COMPAT.TXT
 x BUILD
 x netapp_nfs_netgroup_check
 x metadata.xml
 x netapp_nfs_exports_dns_check
 x INSTALL
 x netapp_sufficiency_lif_checker
 x cap.xml
 x platform.ko
 x kernel
 x fw.tgz
 x platfs.img
 x rootfs.img
 Installed MD5 checksums pass
 Installing diagnostic and firmware files
 Firmware MD5 checksums pass
 Installation complete. image2 updated on node cluster1-01.
 
cluster1::*>

cluster1::*> system node image show
 Is      Is                                Install
 Node     Image   Default Current Version                   Date
 -------- ------- ------- ------- ------------------------- -------------------
 cluster1-01
 image1  true    true    8.3.1RC1                  -
 image2  false   false   8.3.1                     2/10/2016 05:32:50
 2 entries were displayed.