Category Archives: NetApp

Rendezvous with NetApp AFF A700 All Flash Array

A few days back I got an opportunity to visit the data centre to set up a NetApp AFF A700 All Flash Array. I was very excited; it had been a long wait to get hands on with the shiny new flash arrays from NetApp.
Although this storage system looks small, when it comes to performance it's a beast.
We got this system for a POC (proof of concept): an A700 storage controller and a disk shelf with 3.8TB SSD drives.

I clicked some pictures while setting up the storage array.

A700 Front

A700 Rear

Controller

Racked

Hiccups after Powering on the Storage Controller
After powering on the disk shelf and the storage controller, the console on my Mac showed garbled ASCII characters. I immediately figured out that this was a baud rate mismatch; the usual baud rate of 9600 was not going to work. After trying multiple values, I used the baud rate I use to set up NetApp E-Series systems and it worked fine. It took about an hour to figure out the correct value, as the documentation provided by NetApp didn't list the proper baud rate for connecting to the storage console.

If any of you reading this post are going to be working with NetApp AFF A-series systems, please use baud rate 115200 when connecting to the storage array console.
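For example, on a Mac you can open the console session with the built-in screen utility (the serial device path below is an assumption; check /dev/ for the name your USB-to-serial adapter registers):

screen /dev/tty.usbserial 115200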

Powershell Script to Setup new NetApp clustered Data ONTAP system

Setting up a new NetApp clustered Data ONTAP system involves a number of steps. I have tried to automate these steps using the NetApp PowerShell Toolkit. This saves time and reduces human error while configuring new systems.
This script assumes that the hardware is installed, "cluster setup" has been run and all nodes have joined. I use a suffix of "-mgmt" with the cluster name when I configure a new clustered Data ONTAP system. This script has been tested to work with ONTAP 8.3 (simulators).

At present I have automated the following tasks:
1.  Rename Nodes
2.  Rename root aggregates
3.  Create failover groups for cluster-mgmt and node-mgmt interfaces
4.  Add feature licenses
5.  Configure Storage Failover
6.  Unlock diag user
7.  Setup diag user password
8.  Create admin user for access to logs through http
9.  Setup Timezone and NTP server
10. Remove 10 GbE ports from Default broadcast domain
11. Create ifgroups and add ports to ifgroups
12. Enable Cisco Discovery Protocol (cdpd) on all of the nodes
13. Setup disk auto assignment
14. Setup flexscale options
15. Disable flowcontrol on all the ports
16. Create data aggregates

The script displays the results on the PowerShell console as it iterates through the setup tasks. A transcript is also saved as a text file.


Source Code

 

<#
.SYNOPSIS
    Automate setup of a new NetApp clustered Data ONTAP cluster install.
.DESCRIPTION
    The script assumes the basic cluster setup is completed and all nodes have joined.
    The script automates the following tasks:
    1.  Rename Nodes
    2.  Rename root aggregates
    3.  Create failover groups for cluster-mgmt and node-mgmt interfaces
    4.  Add feature licenses
    5.  Configure Storage Failover
    6.  Unlock diag user
    7.  Setup diag user password
    8.  Create admin user for access to logs through http
    9.  Setup Timezone and NTP server
    10. Remove 10 GbE ports from Default broadcast domain
    11. Create ifgroups and add ports to ifgrps
    12. Enable Cisco Discovery Protocol (cdpd) on all of the nodes
    13. Setup disk auto assignment
    14. Setup flexscale options
    15. Disable flowcontrol on all the ports
    16. Create data aggregates
.PARAMETER settingsFilePath
    Location of the File with User defined Parameters.
.EXAMPLE
    PS C:\Users\vadmin\Documents\pshell-scripts> .\cluster_config_v1.5.ps1
#>
#####################
# Declare Variables
#####################
$ClusterName             = "ntapclu1-mgmt"
$mgmtIP                  = "aa.bb.cc.dd"
$mgmtSubnet              = "aaa.bbb.ccc.ddd"
$mgmtGateway             = "aa.bb.cc.xx"
$ntpServer               = "ntp-server1"
$ClusterNameMgmtPort     = "e0d"
$NodeMgmtPort            = "e0c"
$timezone                = "Australia/Sydney"
[int]$maxraidsize        = 17 #raid group size for creating an aggregate
[int]$diskCount          = 51 
$TranscriptPath          = "c:\temp\cluster_setup_transcript_$(get-date -format "yyyyMMdd_hhmmtt").txt"
$licensesPath            = "c:\temp\licenses.txt" 
$ifgrp_a0a_port1         = "e0e"
$ifgrp_a0a_port2         = "e0f"

###########################
# Declare the functions
###########################
function Write-ErrMsg ($msg) {
    $fg_color = "White"
    $bg_color = "Red"
    Write-host " "
    Write-host $msg -ForegroundColor $fg_color -BackgroundColor $bg_color
    Write-host " "
}
#'------------------------------------------------------------------------------
function Write-Msg ($msg) {
    $color = "yellow"
    Write-host " "
    Write-host $msg -foregroundcolor $color
    Write-host " "
}
#'------------------------------------------------------------------------------
function Invoke-SshCmd ($cmd){
    try {
        Invoke-NcSsh $cmd -ErrorAction stop | out-null
        "The command completed successfully"
    }
    catch {
       Write-ErrMsg "The command did not complete successfully"
    }
}
#'------------------------------------------------------------------------------
function Check-LoadedModule {
  Param( 
    [parameter(Mandatory = $true)]
    [string]$ModuleName
  )
  # Import the module only if it is not already loaded
  if (-not (Get-Module -Name $ModuleName)) {
    try {
        Import-Module -Name $ModuleName -ErrorAction Stop
        Write-Msg ("The module $ModuleName is imported")
    }
    catch {
        Write-ErrMsg ("Could not find the module $ModuleName on this system. Please download it from NetApp Support")
        stop-transcript
        exit 
    }
  }
}
#'------------------------------------------------------------------------------
##############################
# Begin Cluster Setup Process
##############################
#'------------------------------------------------------------------------------
## Load Data ONTAP Module
start-transcript -path $TranscriptPath
#'------------------------------------------------------------------------------
Write-Msg  "##### Beginning Cluster Setup #####"
Check-LoadedModule -ModuleName DataONTAP
try {
    Connect-nccontroller $ClusterName -ErrorAction Stop | Out-Null   
    "connected to " + $ClusterName
    }
catch {
    Write-ErrMsg ("Failed connecting to Cluster " + $ClusterName + " : $_.")
    stop-transcript
    exit
}
#'------------------------------------------------------------------------------
## Get the nodes in the cluster
$nodes = (get-ncnode).node
#'------------------------------------------------------------------------------
## Rename the nodes (remove "-mgmt" string)
Write-Msg  "+++ Renaming Node SVMs +++"
foreach ($node in $nodes) { 
    Rename-NcNode -node $node -newname ($node -replace "-mgmt") -Confirm:$false |Out-Null
} 
Get-NcNode |select Node,NodeModel,IsEpsilonNode | Format-Table -AutoSize
$nodes = (get-ncnode).node
#'------------------------------------------------------------------------------
## Rename root aggregates
Write-Msg  "+++ Renaming root aggregates +++"
# get each of the nodes
Get-NcNode | %{ 
    $nodeName = $_.Node
    # determine the current root aggregate name
    $currentAggrName = (
        Get-NcAggr | ?{ 
             $_.AggrOwnershipAttributes.HomeName -eq $nodeName `
               -and $_.AggrRaidAttributes.HasLocalRoot -eq $true 
        }).Name
    # no dashes
    $newAggrName = $nodeName -replace "-", "_"
    # can't start with numbers
    $newAggrName = $newAggrName -replace "^\d+", " "
    # append the root identifier
    $newAggrName = "$($newAggrName)_root"
    if ($currentAggrName -ne $newAggrName) {
        Rename-NcAggr -Name $currentAggrName -NewName $newAggrName | Out-Null 
    }
    sleep -s 5
    Write-Host "Renamed aggregates containing node root volumes"
    (Get-NcAggr | ?{ $_.AggrOwnershipAttributes.HomeName -eq $nodeName -and $_.AggrRaidAttributes.HasLocalRoot -eq $true }).Name 
}
#'------------------------------------------------------------------------------
## Create failover groups for cluster-mgmt and node-mgmt interfaces
Write-Msg  "+++ Create failover groups for cluster-mgmt and node-mgmt interfaces +++"
# get admin vserver name
$adminSVMTemplate = Get-NcVserver -Template
Initialize-NcObjectProperty -Object $adminSVMTemplate -Name VserverType | Out-Null
$adminSVMTemplate.VserverType = "admin"
$adminSVM         = (Get-NcVserver -Query $adminSVMTemplate).Vserver
# create cluster-mgmt failover group 
$clusterPorts     = ((get-ncnode).Node | % { $_,$ClusterNameMgmtPort -join ":" })
$nodePorts        = ((get-ncnode).Node | % { $_,$NodeMgmtPort -join ":" })
$firstClusterPort = $clusterPorts[0]
$allClusterPorts  = $clusterPorts[1..($clusterPorts.Length-1)]
New-NcNetFailoverGroup -Name cluster_mgmt -Vserver $adminSVM -Target $firstClusterPort | Out-Null
foreach ($cPort in $allClusterPorts) {
    Add-NcNetFailoverGroupTarget -Name cluster_mgmt -Vserver $adminSVM -Target $cPort | Out-Null
}
Set-NcNetInterface -Name cluster_mgmt -Vserver $adminSVM -FailoverPolicy broadcast_domain_wide -FailoverGroup cluster_mgmt | Out-Null
Write-Host "Created cluster-mgmt failover group"
Get-NcNetInterface -Name cluster_mgmt  | select InterfaceName,FailoverGroup,FailoverPolicy
# create node-mgmt failover-group for each node
foreach ($node in $nodes) {
    $prt1 = ($node,$NodeMgmtPort -join ":")
    $prt2 = ($node,$ClusterNameMgmtPort -join ":")
    New-NcNetFailoverGroup -Name $node"_mgmt" -Vserver $adminSVM -Target $prt1 | Out-Null
    Add-NcNetFailoverGroupTarget -Name $node"_mgmt" -Vserver $adminSVM -Target $prt2 | Out-Null
    $nodeMgmtLif = (Get-NcNetInterface -Role node-mgmt | Where-Object {$_.HomeNode -match "$node"}).InterfaceName
    Set-NcNetInterface -Name $nodeMgmtLif -Vserver $adminSVM -FailoverPolicy local-only -FailoverGroup $node"_mgmt" | Out-Null
    sleep -s 5
    Write-Host "Created node-mgmt failover group for node "$node
    Get-NcNetInterface -Role node-mgmt | Where-Object {$_.HomeNode -match "$node"} | select InterfaceName,FailoverGroup,FailoverPolicy
}
sleep -s 15
#'------------------------------------------------------------------------------
## Add licenses to cluster
Write-Msg "+++ Adding licenses +++"
$test_lic_path = Test-Path -Path $licensesPath
if ($test_lic_path -eq "True") {
    $count_licenses = (get-content $licensesPath).count
    if ($count_licenses -ne 0) {
        Get-Content $licensesPath |  foreach { Add-NcLicense -license $_ }
        Write-Host "Licenses successfully added"
        Write-Host " "
    }
    else {
        Write-ErrMsg ("License file is empty. Please add the licenses manually")
    }
}
else {
    Write-ErrMsg ("License file does not exist. Please add the licenses manually")       
}
sleep -s 15
#'------------------------------------------------------------------------------
## Configure storage failover
Write-Msg  "+++ Configure SFO +++"
Write-Host "SFO Does not work with Simulators"
if ($nodes.count -gt 2) {
    foreach ($node in $nodes) {
        $sfo_enabled = Invoke-NcSsh "storage failover modify -node " $node " -enabled true"
        if (($sfo_enabled.Value.ToString().Contains("Error")) -or ($sfo_enabled.Value.ToString().Contains("error"))) {
            Write-ErrMsg ($sfo_enabled.Value)
        }
        else {
            Write-Host ("Storage Failover is enabled on node " + $node)
        }

	    $sfo_autogive = Invoke-NcSsh "storage failover modify -node " $node " -auto-giveback true"
        if (($sfo_autogive.Value.ToString().Contains("Error")) -or ($sfo_autogive.Value.ToString().Contains("error"))) {
                Write-ErrMsg ($sfo_autogive.Value)
        }
        else {
            Write-Host ("Storage Failover option auto giveback is enabled on node " + $node)
            Write-Host " "
        }
        sleep -s 2
    }
}
elseif ($nodes.count -eq 2) {
    foreach ($node in $nodes) {
        $sfo_enabled = Invoke-NcSsh "cluster ha modify -configured true"
        if (($sfo_enabled.Value.ToString().Contains("Error")) -or ($sfo_enabled.Value.ToString().Contains("error"))) {
            Write-ErrMsg ($sfo_enabled.Value)
        }
        else {
            Write-Host ("Cluster ha is enabled on node " + $node)
            Write-Host
        }  
    }
}
else {
    Write-Host "No HA required for single node cluster. Continuing with the setup"
    Write-Host " "
}
sleep -s 15
#'------------------------------------------------------------------------------
## Unlock the diag user
Write-Msg "+++ Unlock the diag user +++"
try {
    Unlock-NcUser -username diag -vserver $ClusterName -ErrorAction stop |Out-Null
    Write-Host "Diag user is unlocked"
}
catch {
    Write-ErrMsg "Diag user is either unlocked or script could not unlock the diag user"
}
#'------------------------------------------------------------------------------
## Setup diag user password
Set-Ncuserpassword -UserName diag -password netapp123! -vserver $ClusterName | Out-Null
Write-Host "created diag user password"
sleep -s 15
#'------------------------------------------------------------------------------
## Create admin user for access to logs through http
Write-Msg "+++ create web log user +++"
Set-NcUser -UserName admin -Vserver $ClusterName -Application http -role admin -AuthMethod password | Out-Null
Write-Host "created admin user access for http log collection"
sleep -s 15
#'------------------------------------------------------------------------------
## Set Date and NTP on each node
Write-Msg  "+++ setting Timezones/NTP/Datetime +++"
foreach ($node in $nodes) {
    Set-NcTime -Node $node -Timezone $timeZone | Out-Null
    Set-NcTime -Node $node -DateTime (Get-Date) | Out-Null
}
New-NcNtpServer -ServerName $ntpServer -IsPreferred | Out-Null
Write-Host "NTP Sever setup complete"
sleep -s 15
#'------------------------------------------------------------------------------
## Remove 10 Gbe ports from Default broadcast domain
Write-Msg  "+++ Rmoving 10Gbe Ports from Default broadcast domain +++"
# remove ports from Default broadcast domain
$broadCastTemplate = Get-NcNetPortBroadcastDomain -Template
Initialize-NcObjectProperty -Object $broadCastTemplate -Name Ports | Out-Null
$broadCastTemplate.BroadcastDomain = "Default"
$defaultBroadCastPorts = ((Get-NcNetPortBroadcastDomain -Query $broadCastTemplate).Ports).Port
foreach ($bPort in $defaultBroadCastPorts) {
	if (($bPort -notlike "*$ClusterNameMgmtPort") -and ($bPort -notlike "*$NodeMgmtPort")) {
		Write-Host "Removing Port: " $bPort
		Set-NcNetPortBroadcastDomain -Name Default -RemovePort $bPort | Out-Null
	}	
}
sleep -s 15
#'------------------------------------------------------------------------------
## Create ifgroups and add ports to ifgrps
Write-Msg  "+++ starting ifgroup creation +++"
foreach ($node in $nodes) {
    try {
        New-NcNetPortIfgrp -Name a0a -Node $node -DistributionFunction port -Mode multimode_lacp -ErrorAction Stop | Out-Null
        Add-NcNetPortIfgrpPort -name a0a -node $node -port $ifgrp_a0a_port1 -ErrorAction Continue | Out-Null
        Add-NcNetPortIfgrpPort -name a0a -node $node -port $ifgrp_a0a_port2 -ErrorAction Continue | Out-Null
        Write-Host ("Successfully created ifgrp a0a on node " + $node)
    }
    catch {
        Write-ErrMsg ("Error exception in ifgrp a0a " + $node + " : $_.")
    }
}
sleep -s 15
#'------------------------------------------------------------------------------
## Enable cdpd on all of the nodes
Write-Msg  "+++ enable cdpd on nodes +++"
foreach ($node in $nodes) {
    $cdpd_cmd = Invoke-NcSsh "node run -node " $node " -command options cdpd.enable on"
    if (($cdpd_cmd.Value.ToString().Contains("Error")) -or ($cdpd_cmd.Value.ToString().Contains("error"))) {
        Write-ErrMsg ($cdpd_cmd.Value)
    }
    else {
        Write-Host ("Successfully modified cdpd options for " + $node)
    }
}
sleep -s 15
#'------------------------------------------------------------------------------
## Set option disk.auto_assign on
Write-Msg  "+++ Setting disk autoassign +++"
foreach ($node in $nodes) {
    $set_disk_auto = Invoke-NcSsh "node run -node " $node " -command options disk.auto_assign on"
    if (($set_disk_auto.Value.ToString().Contains("Error")) -or ($set_disk_auto.Value.ToString().Contains("error"))) {
        Write-ErrMsg ($set_disk_auto.Value)
    }
    else {
        Write-Host ("Successfully modified disk autoassign option on node " + $node)
    }   
}
sleep -s 15
#'------------------------------------------------------------------------------
## Set flexscale options
Write-Msg  "+++ Setting flexscale options +++"
foreach ($node in $nodes) {
	$flexscale_enable = Invoke-NcSsh "node run -node " $node " -command options flexscale.enable on" 
    if (($flexscale_enable.Value.ToString().Contains("Error")) -or ($flexscale_enable.Value.ToString().Contains("error"))) {
        Write-ErrMsg ($flexscale_enable.Value)
    }
    else {
        Write-Host ("options flexscale.enable set to on for node " + $node)
    } 

	$flexscale_lopri = Invoke-NcSsh "node run -node " $node " -command options flexscale.lopri_blocks on"
    if (($flexscale_lopri.Value.ToString().Contains("Error")) -or ($flexscale_lopri.Value.ToString().Contains("error"))) {
        Write-ErrMsg ($flexscale_lopri.Value)
    }
    else {
        Write-Host ("options flexscale.lopri_blocks set to on for node " + $node)
    } 

	$flexscale_data = Invoke-NcSsh "node run -node " $node " -command options flexscale.normal_data_blocks on"
    if (($flexscale_data.Value.ToString().Contains("Error")) -or ($flexscale_data.Value.ToString().Contains("error"))) {
        Write-ErrMsg ($flexscale_data.Value)
    }
    else {
        Write-Host ("options flexscale.normal_data_blocks set to on for node " + $node)
        Write-Host " "
    } 

}
sleep -s 15
#'------------------------------------------------------------------------------
## Disable flowcontrol on all of the ports
Write-Msg  "+++ Setting flowcontrol +++"
foreach ($node in $nodes) {
    try {
        Write-Host "Setting flowcontrol for ports on node: " $node
        get-ncnetport -Node $node | Where-Object {$_.Port -notlike "a0*"} | select-object -Property name, node | set-ncnetport -flowcontrol none -ErrorAction Stop | Out-Null
        sleep -s 15
        Get-NcNetPort -Node $node | Select-Object -Property Name,AdministrativeFlowcontrol | Format-Table -AutoSize
    }
    catch {
        Write-ErrMsg ("Error setting flowcontrol on node " + $node + ": $_.")
    }
}
sleep -s 15
#'------------------------------------------------------------------------------
## Create data aggregates
Write-Msg  "+++ Creating Data Aggregates +++"
# get each of the nodes
Get-NcNode | %{ 
    $nodeName = $_.Node
    # no dashes
    $newAggrName = $nodeName -replace "-", "_"
    # can't start with numbers
    $newAggrName = $newAggrName -replace "^\d+", ""
    # append the data aggregate identifier
    $newAggrName = "$($newAggrName)_data_01"
    # create an aggregate
    $aggrProps = @{
        'Name' = $newAggrName;
        'Node' = $nodeName;
        'DiskCount' = $diskCount;
        'RaidSize' = $maxraidsize;
        'RaidType' = "raid_dp";
    }
    New-NcAggr @aggrProps | Out-Null
#
    sleep -s 15
    # enable free space reallocation
    Get-NcAggr $newAggrName | Set-NcAggrOption -Key free_space_realloc -Value on
}
#'------------------------------------------------------------------------------
Write-Host " "
Write-Host " "
stop-transcript
#'------------------------------------------------------------------------------

Get-NodePerfData – Powershell Script to query NetApp Oncommand Performance Manager (OPM)

<#
script : Get-NodePerfData.ps1
Example:
Get-NodePerfData.ps1

This script queries the OPM (version 2.1/7.0) server and extracts the following performance counters for each node in the clusters:
    Date, Time, avgProcessorBusy, cpuBusy, cifsOps, nfsOps, avgLatency

All data is saved into the "thismonth" directory, e.g. 1608 (YYMM).

#>
Function Get-TzDateTime{
   Return (Get-TzDate) + " " + (Get-TzTime)
}
Function Get-TzDate{
   Return Get-Date -uformat "%Y-%m-%d"
}
Function Get-TzTime{
   Return Get-Date -uformat "%H:%M:%S"
}
Function Log-Msg{
   <#
   .SYNOPSIS
   This function appends a message to log file based on the message type.
   .DESCRIPTION
   Appends a message to a log a file.
   .PARAMETER
   Accepts an integer representing the log file extension type
   .PARAMETER
   Accepts a string value containing the message to append to the log file.
   .EXAMPLE
   Log-Msg -logType 0 -message "Command completed successfully"
   .EXAMPLE
   Log-Msg -logType 2 -message "Application is not installed"
   #>
   [CmdletBinding()]
   Param(
      [Parameter(Position=0,
         Mandatory=$True,
         ValueFromPipeLine=$True,
         ValueFromPipeLineByPropertyName=$True)]
      [Int]$logType,
      [Parameter(Position=1,
         Mandatory=$True,
         ValueFromPipeLine=$True,
         ValueFromPipeLineByPropertyName=$True)]
      [String]$message
   )
   Switch($logType){
      0 {$extension = "log"; break}
      1 {$extension = "err"; break}
      2 {$extension = "err"; break}
      3 {$extension = "csv"; break}
      default {$extension = "log"}
   }
   If($logType -eq 1){
      $message = ("Error " + $error[0] + " " + $message)
   }
   $prefix = Get-TzDateTime
   ($prefix + "," + $message) | Out-File -filePath `
   ($scriptLogPath + "." + $extension) -encoding ASCII -append
}
function MySQLOPM {
    Param(
      [Parameter(
      Mandatory = $true,
      ParameterSetName = '',
      ValueFromPipeline = $true)]
      [string]$Switch,
      [string]$Query
      )

    if($switch -match 'performance') {
        $MySQLDatabase = 'netapp_performance'
    }
    elseif($switch -match 'model'){
        $MySQLDatabase = 'netapp_model_view'    
    }
    $MySQLAdminUserName = 'report'
    $MySQLAdminPassword = 'password123'
    $MySQLHost = 'opm-server'
    $ConnectionString = "server=" + $MySQLHost + ";port=3306;Integrated Security=False;uid=" + $MySQLAdminUserName + ";pwd=" + $MySQLAdminPassword + ";database="+$MySQLDatabase

    Try {
      [void][System.Reflection.Assembly]::LoadFrom("E:\ssh\L080898\MySql.Data.dll")
      $Connection = New-Object MySql.Data.MySqlClient.MySqlConnection
      $Connection.ConnectionString = $ConnectionString
      $Connection.Open()

      $Command = New-Object MySql.Data.MySqlClient.MySqlCommand($Query, $Connection)
      $DataAdapter = New-Object MySql.Data.MySqlClient.MySqlDataAdapter($Command)
      $DataSet = New-Object System.Data.DataSet
      $RecordCount = $dataAdapter.Fill($dataSet, "data")
      $DataSet.Tables[0]
      }

    Catch {
      Write-Host "ERROR : Unable to run query : $query `n$Error[0]"
     }

    Finally {
      $Connection.Close()
    }
}
#'------------------------------------------------------------------------------
#'Initialization Section. Define Global Variables.
#'------------------------------------------------------------------------------
##'Set Date and Time Variables
[String]$lastmonth      = (Get-Date).AddMonths(-1).ToString('yyMM')
[String]$thismonth      = (Get-Date).ToString('yyMM')
[String]$yesterday      = (Get-Date).AddDays(-1).ToString('yyMMdd')
[String]$today          = (Get-Date).ToString('yyMMdd')
[String]$fileTime       = (Get-Date).ToString('HHmm')
[String]$workDay        = (Get-Date).AddDays(-1).DayOfWeek
[String]$DOM            = (Get-Date).ToString('dd')
[String]$filedate       = (Get-Date).ToString('yyyyMMdd')
##'Set Path Variables
[String]$scriptPath     = Split-Path($MyInvocation.MyCommand.Path)
[String]$scriptSpec     = $MyInvocation.MyCommand.Definition
[String]$scriptBaseName = (Get-Item $scriptSpec).BaseName
[String]$scriptName     = (Get-Item $scriptSpec).Name
[String]$scriptLogPath  = $scriptPath + "\Logs\" + (Get-TzDate) + "-" + $scriptBaseName
[System.Object]$fso     = New-Object -ComObject "Scripting.FileSystemObject"
[String]$outputPath     = $scriptPath + "\Reports\" + $thismonth
[string]$logPath        = $scriptPath+ "\Logs"

# MySQL Query to get objectid, name of all nodes
$nodes = MySQLOPM -Switch model -Query "select objid,name from node"

# Create hash of nodename and objid
$hash =@{}

foreach ($line in $nodes) {
    $hash.add($line.name, $line.objid)
}
# Create Log Directory
if ( -not (Test-Path $logPath) ) { 
       Try{
          New-Item -Type directory -Path $logPath -ErrorAction Stop | Out-Null
          Log-Msg 0 "Created Folder ""logPath"""
       }
       Catch{
          Log-Msg 0 "Failed creating folder ""$logPath"" . Error " + $_.Exception.Message
          Exit -1;
       }
    }

# Check hash is not empty, then query OPM server to extract counters
if ($hash.count -gt 0) {

    # If Report directory does not exist then create
    if ( -not (Test-Path $outputPath) ) { 
       Try{
          New-Item -Type directory -Path $outputPath -ErrorAction Stop | Out-Null
          Log-Msg 0 "Created Folder ""$outputPath"""
       }
       Catch{
          Log-Msg 0 "Failed creating folder ""$outputPath"" . Error " + $_.Exception.Message
          Exit -1;
       }
    }
    # foreach node
    foreach ($h in $hash.GetEnumerator()) {
    
        $nodeperffilename  = "$($h.name)`_$filedate.csv"
        $nodePerfFile = Join-Path $outputPath $nodeperffilename

        # MySQL Query to query each object and save data to lastmonth directory
        MySQLOPM -Switch performance -Query "select objid,Date_Format(FROM_UNIXTIME(time/1000), '%Y:%m:%d') AS Date ,Date_Format(FROM_UNIXTIME(time/1000), '%H:%i') AS Time, round(avgProcessorBusy,1) AS cpuBusy,round(cifsOps,1) AS cifsOps,round(nfsOps,1) AS nfsOps,round((avgLatency/1000),1) As avgLatency from sample_node where objid=$($h.value)" | Export-Csv -Path $nodePerfFile -NoTypeInformation
        Log-Msg 0 "Exported Performance Logs for $($h.name)"
    }
} 
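Since the output files are named per node and per day, the script lends itself to a daily scheduled run. A minimal sketch using the Windows Task Scheduler CLI (the script path, task name and start time are assumptions; adjust them to your environment):

schtasks /Create /SC DAILY /ST 01:00 /TN "Get-NodePerfData" /TR "powershell.exe -NoProfile -File C:\scripts\Get-NodePerfData.ps1"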

Volume Clone Split Extremely Slow in clustered Data ONTAP

Problem

My colleague had been dealing with growth on an extremely large volume (60 TB) for some time. After discussing with the business groups, it was agreed to split the volume into two separate volumes. The largest directory identified was 20 TB and could be moved to its own volume. Discussions started on the best possible solution to get this job completed quickly.

Possible Solutions

  • robocopy / securecopy the directory to another volume. Past experience says this could be a lot more time consuming.
  • ndmpcopy the large directory to a new volume. The ndmpcopy session needs to be kept open, and if the job fails during the transfer, we have to restart from the beginning. Also, there are no progress updates available.
  • clone the volume, delete the data that is not required, split the clone. This seemed like a nice solution.
  • vol move. We didn't want to copy the entire 60 TB volume and then delete data, so we didn't consider this solution.

So, we agreed on the third solution (clone, delete, split).

What actually happened

snowy-mgmt::> volume clone split start -vserver snowy -flexclone snowy_vol_001_clone
Warning: Are you sure you want to split clone volume snowy_vol_001_clone in Vserver snowy ?
{y|n}: y
[Job 3325] Job is queued: Split snowy_vol_001_clone.
 
Several hours later:
snowy-mgmt::> volume clone split show
                                  Inodes                Blocks
                          --------------------- ---------------------
Vserver   FlexClone       Processed      Total    Scanned    Updated % Complete
--------- --------------- ---------- ---------- ---------- ---------- ----------
snowy     snowy_vol_001_clone     55      65562    1532838    1531276          0
 
Two Days later:
snowy-mgmt::> volume clone split show
                                  Inodes                Blocks
                          --------------------- ---------------------
Vserver   FlexClone       Processed      Total    Scanned    Updated % Complete
--------- --------------- ---------- ---------- ---------- ---------- ----------
snowy     snowy_vol_001_clone    440      65562 1395338437 1217762917          0

This is a huge problem. The split operation will never complete in time.

What we found

We found the problem was with the way clone split works. Data ONTAP uses a background scanner to copy the shared data from the parent volume to the FlexClone volume. The scanner has one active message at any time, processing only one inode at a time, so the split tends to be faster on a volume with fewer inodes. Also, the background scanner runs at a low priority and can take a considerable amount of time to complete. This means that for a large volume with millions of inodes, the split operation takes a huge amount of time.

Workaround

“volume move a clone”

snowy-mgmt::*> vol move start -vserver snowy -volume snowy_vol_001_clone -destination-aggregate snowy01_aggr_01
  (volume move start)
 
Warning: Volume will no longer be a clone volume after the move and any associated space efficiency savings will be lost. Do you want to proceed? {y|n}: y

Benefits of vol move a FlexClone:

  • Faster than FlexClone split.
  • Data can be moved to a different aggregate or node.

Reference

FAQ – FlexClone split

Query Oncommand Performance Manager (OPM) Database using Powershell

Introduction

OnCommand Performance Manager (OPM) provides performance monitoring and event root-cause analysis for systems running clustered Data ONTAP software. It is the performance management part of OnCommand Unified Manager. OPM 2.1 is well integrated with Unified Manager 6.4. You can view and analyze events in the Performance Manager UI or view them in the Unified Manager Dashboard.

Performance Manager collects current performance data from all monitored clusters every five minutes. It analyzes this data to identify performance events and potential issues. It retains 30 days of five-minute historical performance data and 390 days of one-hour historical performance data. This enables you to view very granular performance details for the current month, and general performance trends for up to a year.

Accessing the Database

Using PowerShell you can query the MySQL database and retrieve information to create performance charts in Microsoft Excel or other tools. In order to access the OPM database you'll need a user created with the "Database User" role.


The following databases are available in OPM 2.1:

  • information_schema
  • netapp_model
  • netapp_model_view
  • netapp_performance
  • opm

Of the above, the two databases that hold the most relevant information are “netapp_model_view” and “netapp_performance”. Database “netapp_model_view” has tables that define the objects, and the relationships among the objects, for which performance data is collected, such as aggregates, SVMs, clusters, volumes, etc. Database “netapp_performance” has tables which contain the raw data collected, as well as periodic rollups used to quickly generate the graphs OPM presents through its GUI.

Refer to the MySQL function in my previous post on Querying OCUM Database using Powershell to connect to the OPM database.

Understanding Database

OPM assigns each object (node, cluster, LIF, port, aggregate, volume, etc.) a unique ID. These IDs are independent of the IDs in the OCUM database. The IDs are stored in tables in the “netapp_model_view” database. You can perform joins across various tables through the object IDs.

Actual performance data is collected and stored in tables in the “netapp_performance” database. All table names carry the prefix “sample_”. Each table row contains the OPM object ID for the object (node, cluster, LIF, port, aggregate, volume, etc.), the timestamp of the collection and the raw data.
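Because the object IDs line up across the two databases, you can join a model table to its sample table in one query. A minimal sketch using the same MySQL helper function (this assumes the database user can read both databases over a single connection):

MySQL -Query "select n.name, Date_Format(FROM_UNIXTIME(s.time/1000), '%Y:%m:%d %H:%i') AS Time, round(s.avgLatency/1000,1) AS avgLatency from netapp_model_view.node n join netapp_performance.sample_node s on s.objid = n.objid" | Format-Table -AutoSize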

A Few Useful Database Queries

The example below queries the database to retrieve the performance counters of a node.

Connect to the “netapp_model_view” database and list the objid and name from the “node” table

"MySQL -Query ""select objid,name from node"" | Format-Table -AutoSize"

Connect to the “netapp_performance” database and export cpuBusy, cifsOps and avgLatency from the “sample_node” table

"MySQL -Query ""select objid,Date_Format(FROM_UNIXTIME(time/1000), '%Y:%m:%d %H:%i') AS Time,cpuBusy,cifsOps,avgLatency from sample_node where objid=2"" | Export-Csv -Path E:\snowy-01.csv -NoTypeInformation"

How to use “ndmpcopy” in clustered Data ONTAP 8.2.x

Introduction

“ndmpcopy” in clustered Data ONTAP has two modes

  1. node-scope-mode : you need to track the volume location if a volume move is performed
  2. vserver-scope-mode : no issues, even if the volume is moved to a different node. 
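The mode is a cluster-wide NDMP setting. As far as I recall (verify on your ONTAP version), you can check and change it from the cluster shell:

snowy-mgmt::> system services ndmp node-scope-mode status
snowy-mgmt::> system services ndmp node-scope-mode off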

In this scenario I’ll use vserver-scope-mode to perform an “ndmpcopy” within the same cluster and the same SVM.

In my test I copied a 1 GB file to a new folder under the same volume.

Login to the cluster

snowy-mgmt::> set diag -rows 0
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y

List of volumes on SVM “snowy”

snowy-mgmt::*> vol show -vserver snowy
(volume show)
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
snowy     SNOWY62_vol001 snowy01_hybdata_01 online RW 1TB    83.20GB   91%
snowy     SNOWY62_vol001_sv snowy02_hybdata_01 online DP 1TB 84.91GB   91%
snowy     HRauhome01   snowy01_hybdfc_01 online RW       100GB    94.96GB    5%
snowy     rootvol      snowy01_hybdfc_01 online RW        20MB    18.88MB    5%
4 entries were displayed.

snowy-mgmt::*> df -g SNOWY62_vol001
Filesystem               total       used      avail capacity  Mounted on                 Vserver
/vol/SNOWY62_vol001/ 972GB       3GB       83GB      91%  /SNOWY62_vol001       snowy
/vol/SNOWY62_vol001/.snapshot 51GB 0GB     51GB       0%  /SNOWY62_vol001/.snapshot  snowy
2 entries were displayed.

snowy-mgmt::*> vol show -vserver snowy -fields volume,junction-path
(volume show)
vserver volume              junction-path
------- ------------------- --------------------
snowy   SNOWY62_vol001 /SNOWY62_vol001
snowy   SNOWY62_vol001_sv -
snowy   HRauhome01          /hrauhome01
snowy   rootvol             /
4 entries were displayed.

Create a “ndmpuser” with role “backup”

snowy-mgmt::*> security login show
Vserver: snowy
Authentication                  Acct
UserName         Application Method         Role Name        Locked
---------------- ----------- -------------- ---------------- ------
vsadmin          ontapi      password       vsadmin          yes
vsadmin          ssh         password       vsadmin          yes

Vserver: snowy-mgmt
Authentication                  Acct
UserName         Application Method         Role Name        Locked
---------------- ----------- -------------- ---------------- ------
admin            console     password       admin            no
admin            http        password       admin            no
admin            ontapi      password       admin            no
admin            service-processor password admin            no
admin            ssh         password       admin            no
autosupport      console     password       autosupport      yes
8 entries were displayed.

snowy-mgmt::*> security login create -username ndmpuser -application ssh -authmethod password -role backup -vserver snowy-mgmt
Please enter a password for user 'ndmpuser':
Please enter it again:

snowy-mgmt::*> vserver services ndmp generate-password -vserver snowy-mgmt -user ndmpuser
Vserver: snowy-mgmt
User: ndmpuser
Password: Ip3gRJchR0FGPLA7

Turn on the “ndmp” service on the cluster-mgmt SVM

snowy-mgmt::*> vserver services ndmp on -vserver snowy-mgmt

In the Nodeshell initiate “ndmpcopy”

snowy-mgmt::*> node run -node snowy-01
Type 'exit' or 'Ctrl-D' to return to the CLI

snowy-01> ndmpcopy
usage:
ndmpcopy [<options>] <source> <destination>
<source> and <destination> are of the form [<filer>:]<path>
If an IPv6 address is specified, it must be enclosed in square brackets

options:
[-sa <username>:<password>]
[-da <username>:<password>]
    source/destination filer authentication
[-st { text | md5 }]
[-dt { text | md5 }]
    source/destination filer authentication type
    default is md5
[-l { 0 | 1 | 2 }]
    incremental level
    default is 0
[-d]
    debug mode
[-f]
    force flag, to copy system files
[-mcs { inet | inet6 }]
    force specified address mode for source control connection
[-mcd { inet | inet6 }]
    force specified address mode for destination control connection
[-md { inet | inet6 }]
    force specified address mode for data connection
[-h]
    display this message
[-p]
    accept the password interactively
[-exclude <value>]
    exclude the files/dirs from backup path

snowy-01>
snowy-01> ndmpcopy -sa ndmpuser:Ip3gRJchR0FGPLA7 -da ndmpuser:Ip3gRJchR0FGPLA7 10.10.2.72:/snowy/SNOWY62_vol001/TestFil2_002 10.10.2.72:/snowy/SNOWY62_vol001/destination
Ndmpcopy: Starting copy [ 14 ] ...
Ndmpcopy: 10.10.2.72: Notify: Connection established
Ndmpcopy: 10.10.2.72: Notify: Connection established
Ndmpcopy: 10.10.2.72: Connect: Authentication successful
Ndmpcopy: 10.10.2.72: Connect: Authentication successful
Ndmpcopy: 10.10.2.72: Log: DUMP: creating "/snowy/SNOWY62_vol001/../snapshot_for_backup.7" snapshot.
Ndmpcopy: 10.10.2.72: Log: DUMP: Using Partial Volume Dump of selected subtrees
Ndmpcopy: 10.10.2.72: Log: DUMP: Using snapshot_for_backup.7 snapshot
Ndmpcopy: 10.10.2.72: Log: DUMP: Date of this level 0 dump: Tue Apr  5 08:27:43 2016.
Ndmpcopy: 10.10.2.72: Log: DUMP: Date of last level 0 dump: the epoch.
Ndmpcopy: 10.10.2.72: Log: DUMP: Dumping /snowy/SNOWY62_vol001/TestFil2_002 to NDMP connection
Ndmpcopy: 10.10.2.72: Log: DUMP: mapping (Pass I)[regular files]
Ndmpcopy: 10.10.2.72: Log: DUMP: Reading file names from NDMP.
Ndmpcopy: 10.10.2.72: Log: DUMP: mapping (Pass II)[directories]
Ndmpcopy: 10.10.2.72: Log: DUMP: estimated 1050638 KB.
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass III) [directories]
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass IV) [regular files]
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:27:52 2016: Begin level 0 restore
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:27:54 2016: Reading directories from the backup
Ndmpcopy: 10.10.2.72: Log: RESTORE: Warning: /vol/SNOWY62_vol001/destination/ will not be restored as a qtree: exists as a normal subdirectory.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Could not create qtree `/vol/SNOWY62_vol001/destination/'. Creating a regular directory instead.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:27:55 2016: Creating files and directories.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:27:55 2016: Writing data to files.
Ndmpcopy: 10.10.2.72: Log: ACL_START is '1075865600'
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass V) [ACLs]
Ndmpcopy: 10.10.2.72: Log: DUMP: 1050657 KB
Ndmpcopy: 10.10.2.72: Log: DUMP: DUMP IS DONE
Ndmpcopy: 10.10.2.72: Log: RESTORE: RESTORE IS DONE
Ndmpcopy: 10.10.2.72: Notify: restore successful
Ndmpcopy: 10.10.2.72: Log: DUMP: Deleting "/snowy/SNOWY62_vol001/../snapshot_for_backup.7" snapshot.
Ndmpcopy: 10.10.2.72: Log: DUMP_DATE is '5754776159'
Ndmpcopy: 10.10.2.72: Notify: dump successful
Ndmpcopy: Transfer successful [ 0 hours, 1 minutes, 26 seconds ]
Ndmpcopy: Done

Although I used the cluster-mgmt LIF in the ndmpcopy syntax, I didn’t see any traffic flowing on the LIF:

snowy-mgmt::*> statistics show-periodic -node cluster:summary -object lif:vserver -instance snowy-mgmt -counter instance_name|recv_data|sent_data -interval 1
snowy-mgmt: lif:vserver.snowy-mgmt: 4/5/2016 08:27:20
instance     recv     sent   Complete     Number of
name     data     data  Aggregation  Constituents
-------- -------- -------- ----------- -------------
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1
snowy-mgmt     0B       0B      No      1

Another “ndmpcopy” job with a different statistics command:
snowy-01> ndmpcopy -sa ndmpuser:Ip3gRJchR0FGPLA7 -da ndmpuser:Ip3gRJchR0FGPLA7 10.10.2.72:/snowy/SNOWY62_vol001/TestFil1_001 10.10.2.72:/snowy/SNOWY62_vol001/destination
Ndmpcopy: Starting copy [ 15 ] ...
Ndmpcopy: 10.10.2.72: Notify: Connection established
Ndmpcopy: 10.10.2.72: Notify: Connection established
Ndmpcopy: 10.10.2.72: Connect: Authentication successful
Ndmpcopy: 10.10.2.72: Connect: Authentication successful
Ndmpcopy: 10.10.2.72: Log: DUMP: creating "/snowy/SNOWY62_vol001/../snapshot_for_backup.9" snapshot.
Ndmpcopy: 10.10.2.72: Log: DUMP: Using Partial Volume Dump of selected subtrees
Ndmpcopy: 10.10.2.72: Log: DUMP: Using snapshot_for_backup.9 snapshot
Ndmpcopy: 10.10.2.72: Log: DUMP: Date of this level 0 dump: Tue Apr  5 08:30:40 2016.
Ndmpcopy: 10.10.2.72: Log: DUMP: Date of last level 0 dump: the epoch.
Ndmpcopy: 10.10.2.72: Log: DUMP: Dumping /snowy/SNOWY62_vol001/TestFil1_001 to NDMP connection
Ndmpcopy: 10.10.2.72: Log: DUMP: mapping (Pass I)[regular files]
Ndmpcopy: 10.10.2.72: Log: DUMP: Reading file names from NDMP.
Ndmpcopy: 10.10.2.72: Log: DUMP: mapping (Pass II)[directories]
Ndmpcopy: 10.10.2.72: Log: DUMP: estimated 1050638 KB.
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass III) [directories]
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass IV) [regular files]
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:30:47 2016: Begin level 0 restore
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:30:49 2016: Reading directories from the backup
Ndmpcopy: 10.10.2.72: Log: RESTORE: Warning: /vol/SNOWY62_vol001/destination/ will not be restored as a qtree: exists as a normal subdirectory.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Could not create qtree `/vol/SNOWY62_vol001/destination/'. Creating a regular directory instead.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:30:50 2016: Creating files and directories.
Ndmpcopy: 10.10.2.72: Log: RESTORE: Tue Apr  5 08:30:50 2016: Writing data to files.
Ndmpcopy: 10.10.2.72: Log: ACL_START is '1075865600'
Ndmpcopy: 10.10.2.72: Log: DUMP: dumping (Pass V) [ACLs]
Ndmpcopy: 10.10.2.72: Log: DUMP: 1050657 KB
Ndmpcopy: 10.10.2.72: Log: DUMP: DUMP IS DONE
Ndmpcopy: 10.10.2.72: Log: DUMP: Deleting "/snowy/SNOWY62_vol001/../snapshot_for_backup.9" snapshot.
Ndmpcopy: 10.10.2.72: Log: RESTORE: RESTORE IS DONE
Ndmpcopy: 10.10.2.72: Notify: restore successful
Ndmpcopy: 10.10.2.72: Log: DUMP_DATE is '5754776336'
Ndmpcopy: 10.10.2.72: Notify: dump successful
Ndmpcopy: Transfer successful [ 0 hours, 1 minutes, 20 seconds ]
Ndmpcopy: Done
snowy-01>

snowy-mgmt::*> statistics show-periodic -object lif -instance snowy-mgmt:cluster_mgmt -counter instance_name|recv_data|sent_data -interval 1
snowy-mgmt: lif.snowy-mgmt:cluster_mgmt: 4/5/2016 08:30:10
instance     recv     sent   Complete     Number of
name     data     data  Aggregation  Constituents
-------- -------- -------- ----------- -------------
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a
snowy-mgmt:cluster_mgmt 0B 0B      n/a      n/a

Convert Snapmirror DP relation to XDP in clustered Data ONTAP

Clustered Data ONTAP 8.2 introduced the snapvault (XDP) feature and the ability to convert existing snapmirror (DP) relationships to snapvault (XDP).

I had tested this feature a long time ago but never used it in a production environment. Recently I got a chance to implement it when a production volume with a high change rate (snapshots involved) grew to 60 TB (20 TB used by snapshots). Because of the FAS3250s in the cluster, the maximum volume size is 70 TB. After discussions with the customer it was decided to create a local snapvault copy of the production volume that would contain all existing snapshots and accumulate more in the coming days until a new snapvault cluster is set up. The data in the volume is highly compressible, so the snapvault destination would consume less space.

Overview of this process:

  1. Create a snapmirror DP relation
  2. Initialize the snapmirror DP relation
  3. Quiesce/Break/Delete the DP relation
  4. Resync the relation as snapmirror XDP
  5. Continue with vault updates

CREATE SOURCE VOLUME
snowy-mgmt::*> vol create -volume AU2004NP0062_vol001 -aggregate snowy01_hybdata_01 -space-guarantee none -size 1tb -junction-path /AU2004NP0066_vol001 -state online -junction-active true
  (volume create)
[Job 85] Job succeeded: Successful

snowy-mgmt::*> vol show -volume AU2004NP0062_vol001 -fields security-style
  (volume show)
vserver volume              security-style
------- ------------------- --------------
snowy   AU2004NP0062_vol001 ntfs

CREATE DESTINATION VOLUME
snowy-mgmt::*> vol create -volume AU2004NP0062_vol001_sv -aggregate snowy02_hybdata_01 -space-guarantee none -size 3tb -type DP -state online
  (volume create)
[Job 86] Job succeeded: Successful

snowy-mgmt::*> vol show -volume AU2004NP0062_vol001*
  (volume show)
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
snowy     AU2004NP0062_vol001 snowy01_hybdata_01 online RW 1TB 87.01GB 91%
snowy     AU2004NP0062_vol001_sv snowy02_hybdata_01 online DP 3TB 87.01GB 97%
2 entries were displayed.

CREATE SNAPMIRROR RELATION
snowy-mgmt::*> snapmirror create -source-path snowy:AU2004NP0062_vol001 -destination-path snowy:AU2004NP0062_vol001_sv -type DP -vserver snowy
Operation succeeded: snapmirror create for the relationship with destination "snowy:AU2004NP0062_vol001_sv".

snowy-mgmt::*> snapmirror show -type DP
                                                                       Progress
Source            Destination  Mirror  Relationship   Total            Last
Path        Type  Path         State   Status         Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
snowy:AU2004NP0062_vol001 DP snowy:AU2004NP0062_vol001_sv Uninitialized Idle - true -

INITIALIZE SNAPMIRROR
snowy-mgmt::*> snapmirror initialize -destination-path snowy:AU2004NP0062_vol001_sv
Operation is queued: snapmirror initialize of destination "snowy:AU2004NP0062_vol001_sv".

CREATE SNAPSHOTS ON SOURCE VOLUME
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_01
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_02
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_03
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_04
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_05
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_06
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_07
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_08
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_09
snowy-mgmt::*> snap create -volume AU2004NP0062_vol001 -snapshot sas_snap_00

snowy-mgmt::*> snapshot show -volume AU2004NP0062_vol001
                                                               ---Blocks---
Vserver  Volume   Snapshot                          State     Size Total% Used%
-------- ------- --------------------------------- -------- ------ ------ -----
snowy    AU2004NP0062_vol001
                  hourly.2016-03-24_1005            valid     60KB     0%   27%
                  snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100529
                                                    valid     80KB     0%   33%
                  sas_snap_01                       valid     60KB     0%   27%
                  sas_snap_02                       valid     64KB     0%   29%
                  sas_snap_03                       valid     76KB     0%   32%
                  sas_snap_04                       valid     60KB     0%   27%
                  sas_snap_05                       valid     64KB     0%   29%
                  sas_snap_06                       valid     64KB     0%   29%
                  sas_snap_07                       valid     64KB     0%   29%
                  sas_snap_08                       valid     64KB     0%   29%
                  sas_snap_09                       valid     76KB     0%   32%
                  sas_snap_00                       valid     56KB     0%   26%
12 entries were displayed.

snowy-mgmt::*> snapmirror show
                                                                       Progress
Source            Destination  Mirror  Relationship   Total            Last
Path        Type  Path         State   Status         Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
snowy:AU2004NP0062_vol001 DP snowy:AU2004NP0062_vol001_sv Snapmirrored Idle - true -

UPDATE SNAPMIRROR TO TRANSFER ALL SNAPSHOTS TO DESTINATION VOLUME
snowy-mgmt::*> snapmirror update -destination-path snowy:*
Operation is queued: snapmirror update of destination "snowy:AU2004NP0062_vol001_sv".
1 entry was acted on.

SNAPSHOTS REACHED THE DESTINATION VOLUME
snowy-mgmt::*> snapshot show -volume AU2004NP0062_vol001_sv
                                                               ---Blocks---
Vserver  Volume   Snapshot                          State     Size Total% Used%
-------- ------- --------------------------------- -------- ------ ------ -----
snowy    AU2004NP0062_vol001_sv
                  hourly.2016-03-24_1005            valid     60KB     0%   28%
                  snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100529
                                                    valid     80KB     0%   34%
                  sas_snap_01                       valid     60KB     0%   28%
                  sas_snap_02                       valid     64KB     0%   29%
                  sas_snap_03                       valid     76KB     0%   33%
                  sas_snap_04                       valid     60KB     0%   28%
                  sas_snap_05                       valid     64KB     0%   29%
                  sas_snap_06                       valid     64KB     0%   29%
                  sas_snap_07                       valid     64KB     0%   29%
                  sas_snap_08                       valid     64KB     0%   29%
                  sas_snap_09                       valid     76KB     0%   33%
                  sas_snap_00                       valid     72KB     0%   32%
                  snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100707
                                                    valid       0B     0%    0%
13 entries were displayed.

QUIESCE, BREAK AND DELETE SNAPMIRRORS
snowy-mgmt::*> snapmirror quiesce -destination-path snowy:AU2004NP0062_vol001_sv
Operation succeeded: snapmirror quiesce for destination "snowy:AU2004NP0062_vol001_sv".

snowy-mgmt::*> snapmirror break -destination-path snowy:AU2004NP0062_vol001_sv
[Job 87] Job succeeded: SnapMirror Break Succeeded

snowy-mgmt::*> snapmirror delete -destination-path snowy:AU2004NP0062_vol001_sv
Operation succeeded: snapmirror delete for the relationship with destination "snowy:AU2004NP0062_vol001_sv".

snowy-mgmt::*> snapmirror show
This table is currently empty.

RESYNC SNAPMIRROR AS XDP RELATION
snowy-mgmt::*> snapmirror resync -source-path snowy:AU2004NP0062_vol001 -destination-path snowy:AU2004NP0062_vol001_sv -type XDP

Warning: All data newer than Snapshot copy snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100707 on volume snowy:AU2004NP0062_vol001_sv will be deleted.
         Verify there is no XDP relationship whose source volume is "snowy:AU2004NP0062_vol001_sv". If such a relationship exists then you are creating an unsupported XDP to XDP cascade.
Do you want to continue? {y|n}: y
[Job 88] Job succeeded: SnapMirror Resync Transfer Queued

snowy-mgmt::*> snapmirror show
                                                                       Progress
Source            Destination  Mirror  Relationship   Total            Last
Path        Type  Path         State   Status         Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
snowy:AU2004NP0062_vol001 XDP snowy:AU2004NP0062_vol001_sv Snapmirrored Idle - true -

SNAPSHOTS EXIST ON BOTH SOURCE AND DESTINATION VOLUME AFTER RESYNC
snowy-mgmt::*> snapshot show -volume AU2004NP0062_vol001
                                                               ---Blocks---
Vserver  Volume   Snapshot                          State     Size Total% Used%
-------- ------- --------------------------------- -------- ------ ------ -----
snowy    AU2004NP0062_vol001
                  hourly.2016-03-24_1005            valid     68KB     0%   29%
                  sas_snap_01                       valid     60KB     0%   27%
                  sas_snap_02                       valid     64KB     0%   28%
                  sas_snap_03                       valid     76KB     0%   32%
                  sas_snap_04                       valid     60KB     0%   27%
                  sas_snap_05                       valid     64KB     0%   28%
                  sas_snap_06                       valid     64KB     0%   28%
                  sas_snap_07                       valid     64KB     0%   28%
                  sas_snap_08                       valid     64KB     0%   28%
                  sas_snap_09                       valid     76KB     0%   32%
                  sas_snap_00                       valid     72KB     0%   31%
                  snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100707
                                                    valid     72KB     0%   31%
12 entries were displayed.

snowy-mgmt::*> snapshot show -volume AU2004NP0062_vol001_sv
                                                               ---Blocks---
Vserver  Volume   Snapshot                          State     Size Total% Used%
-------- ------- --------------------------------- -------- ------ ------ -----
snowy    AU2004NP0062_vol001_sv
                  hourly.2016-03-24_1005            valid     60KB     0%   28%
                  snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100529
                                                    valid     80KB     0%   34%
                  sas_snap_01                       valid     60KB     0%   28%
                  sas_snap_02                       valid     64KB     0%   29%
                  sas_snap_03                       valid     76KB     0%   33%
                  sas_snap_04                       valid     60KB     0%   28%
                  sas_snap_05                       valid     64KB     0%   29%
                  sas_snap_06                       valid     64KB     0%   29%
                  sas_snap_07                       valid     64KB     0%   29%
                  sas_snap_08                       valid     64KB     0%   29%
                  sas_snap_09                       valid     76KB     0%   33%
                  sas_snap_00                       valid     72KB     0%   32%
                  snapmirror.6f937c3b-8f54-11e5-bd3f-123478563412_2147484677.2016-03-24_100707
                                                    valid     76KB     0%   33%
13 entries were displayed.

TURN ON VOLUME EFFICIENCY - DESTINATION VOLUME
snowy-mgmt::*> vol efficiency on -volume AU2004NP0062_vol001_sv
  (volume efficiency on)
Efficiency for volume "AU2004NP0062_vol001_sv" of Vserver "snowy" is enabled.
Already existing data could be processed by running "volume efficiency start -vserver snowy -volume AU2004NP0062_vol001_sv -scan-old-data true".

CREATE A CIFS SHARE ON SOURCE VOLUME AND COPY SOME DATA
snowy-mgmt::*> cifs share create -share-name sas_vol -path /AU2004NP0062_vol001 -share-properties oplocks,browsable,changenotify

snowy-mgmt::*> vol show -volume AU2004NP0062_vol001 -fields used
  (volume show)
vserver volume              used
------- ------------------- ------
snowy   AU2004NP0062_vol001 2.01GB

CREATE SNAPSHOT AND SNAPMIRROR POLICIES WITH THE SAME SNAPMIRROR LABELS
snowy-mgmt::*> cron show
  (job schedule cron show)
Name             Description
---------------- -----------------------------------------------------
5min             @:00,:05,:10,:15,:20,:25,:30,:35,:40,:45,:50,:55
8hour            @2:15,10:15,18:15
daily            @0:10
hourly           @:05
weekly           Sun@0:15
5 entries were displayed.

snowy-mgmt::*> snapshot policy create -policy keep_more_snaps -enabled true -schedule1 5min -count1 5 -prefix1 sv -snapmirror-label1 mins -vserver snowy

snowy-mgmt::*> snapmirror policy create -vserver snowy -policy XDP_POL

snowy-mgmt::*> snapmirror policy add-rule -vserver snowy -policy XDP_POL -snapmirror-label mins -keep 50

APPLY SNAPSHOT POLICY TO SOURCE VOLUME
snowy-mgmt::*> volume modify -volume AU2004NP0062_vol001 -snapshot-policy keep_more_snaps

Warning: You are changing the Snapshot policy on volume AU2004NP0062_vol001 to keep_more_snaps. Any Snapshot copies on this volume from the previous policy will not be deleted by this new Snapshot policy.
Do you want to continue? {y|n}: y
Volume modify successful on volume: AU2004NP0062_vol001

APPLY SNAPMIRROR POLICY TO DESTINATION VOLUME
snowy-mgmt::*> snapmirror modify -destination-path snowy:AU2004NP0062_vol001_sv -policy XDP_POL
Operation succeeded: snapmirror modify for the relationship with destination "snowy:AU2004NP0062_vol001_sv".

snowy-mgmt::*> vol show -volume AU2004NP0062_vol001 -fields snapshot-policy
  (volume show)
vserver volume              snapshot-policy
------- ------------------- ---------------
snowy   AU2004NP0062_vol001 keep_more_snaps

snowy-mgmt::*> snapshot policy show keep_more_snaps -instance
                   Vserver: snowy
      Snapshot Policy Name: keep_more_snaps
   Snapshot Policy Enabled: true
              Policy Owner: vserver-admin
                   Comment: -
Total Number of Schedules: 1
Schedule   Count Prefix SnapMirror Label
---------- ----- ------ ----------------
5min       5     sv     mins

UPDATE SNAPMIRROR RELATIONSHIP (SNAPVAULT)
snowy-mgmt::*> snapmirror update -destination-path snowy:AU2004NP0062_vol001_sv
Operation is queued: snapmirror update of destination "snowy:AU2004NP0062_vol001_sv".

snowy-mgmt::*> snapmirror show
                                                                       Progress
Source            Destination  Mirror  Relationship   Total            Last
Path        Type  Path         State   Status         Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
snowy:AU2004NP0062_vol001 XDP snowy:AU2004NP0062_vol001_sv Snapmirrored Transferring 0B true 03/24 10:56:29

THE SIZE OF BOTH SOURCE AND DESTINATION VOLUMES IS THE SAME
snowy-mgmt::*> vol show -volume AU* -fields used
  (volume show)
vserver volume                 used
------- ---------------------- ------
snowy   AU2004NP0062_vol001    2.01GB
snowy   AU2004NP0062_vol001_sv 2.06GB
2 entries were displayed.

DEDUPE JOB IS RUNNING
snowy-mgmt::*> sis status
Vserver    Volume                 State    Status  Progress              Policy
---------- ---------------------- -------- ------- --------------------- ------
snowy      AU2004NP0062_vol001_sv Enabled  Active  539904 KB (25%) Done  -

SIZE OF DESTINATION VOLUME AFTER DEDUPE JOB COMPLETED
snowy-mgmt::*> sis status
Vserver    Volume                 State    Status   Progress              Policy
---------- ---------------------- -------- -------- --------------------- ------
snowy      AU2004NP0062_vol001_sv Enabled  Idle     Idle for 00:00:05     -

snowy-mgmt::*> vol show -volume AU* -fields used
(volume show)
vserver volume                 used
------- ---------------------- -------
snowy   AU2004NP0062_vol001    2.01GB
snowy   AU2004NP0062_vol001_sv 274.9MB
2 entries were displayed.

START COMPRESSION JOB ON DESTINATION VOLUME BY SCANNING EXISTING DATA
snowy-mgmt::*> vol efficiency start -volume AU2004NP0062_vol001_sv -scan-old-data
(volume efficiency start)

Warning: This operation scans all of the data in volume "AU2004NP0062_vol001_sv" of Vserver "snowy". It may take a significant time, and may degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "AU2004NP0062_vol001_sv" of Vserver "snowy" has started.

SIZE OF DESTINATION VOLUME AFTER COMPRESSION JOB COMPLETED
snowy-mgmt::*> vol show -volume AU* -fields used
(volume show)
vserver volume                 used
------- ---------------------- -------
snowy   AU2004NP0062_vol001    2.01GB
snowy   AU2004NP0062_vol001_sv 49.76MB
2 entries were displayed.
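The same comparison can be pulled via Get-NcVol if you want to capture the savings in a script. A sketch; I'm reading the used size from VolumeSpaceAttributes.SizeUsed, which may be named differently in other toolkit versions:

# Compare used space on the source and the SnapVault destination after dedupe and compression
Get-NcVol -Name AU2004NP0062_vol001,AU2004NP0062_vol001_sv |
    Select-Object Name, @{N="UsedMB";E={[math]::Round($_.VolumeSpaceAttributes.SizeUsed/1MB,1)}}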

Querying OCUM Database using Powershell

OnCommand Unified Manager (OCUM) is the software used to monitor and troubleshoot cluster and SVM issues relating to data storage capacity, availability, performance and protection. OCUM polls the clustered Data ONTAP storage systems and stores all inventory information in a MySQL database. Using PowerShell, we can query the MySQL database and retrieve this information to create reports.

All we need is the MySQL .NET connector to query the OCUM database and retrieve information from its various tables. Another helpful tool is the "HeidiSQL" client for MySQL. You can connect to the OCUM database using HeidiSQL and view all the tables and columns within the database.

Download and use version 2.0 of the MySQL Connector with OCUM 6.2

Download link to HeidiSQL

NetApp Communities Post

First of all, you'll need to create a "Database User" with the Role "Report Schema" (OCUM GUI -> Administration -> Manage Users -> Add)

Use HeidiSQL to connect to OCUM database

Connect-OCUM

OCUM

Ocum_report

Sample PowerShell code to connect to the OCUM database and retrieve information

# Get-cDOTAggrVolReport.ps1
# Date : 2016_03_10 12:12 PM
# This script uses the MySQL .NET connector at E:\ssh\MySql.Data.dll to query the OCUM 6.2 database

# The MySQL function runs a query against the OCUM database
# Usage: MySQL -Query <sql-query>
function MySQL {
    Param(
        [Parameter(
            Mandatory = $true,
            ParameterSetName = '',
            ValueFromPipeline = $true)]
        [string]$Query
    )

    $MySQLAdminUserName = 'reportuser'
    $MySQLAdminPassword = 'Netapp123'
    $MySQLDatabase = 'ocum_report'
    $MySQLHost = '192.168.0.71'
    $ConnectionString = "server=" + $MySQLHost + ";port=3306;Integrated Security=False;uid=" + $MySQLAdminUserName + ";pwd=" + $MySQLAdminPassword + ";database=" + $MySQLDatabase

    Try {
        # Load the MySQL .NET connector assembly and open a connection
        [void][System.Reflection.Assembly]::LoadFrom("E:\ssh\MySql.Data.dll")
        $Connection = New-Object MySql.Data.MySqlClient.MySqlConnection
        $Connection.ConnectionString = $ConnectionString
        $Connection.Open()

        # Run the query and return the first result table
        $Command = New-Object MySql.Data.MySqlClient.MySqlCommand($Query, $Connection)
        $DataAdapter = New-Object MySql.Data.MySqlClient.MySqlDataAdapter($Command)
        $DataSet = New-Object System.Data.DataSet
        $RecordCount = $DataAdapter.Fill($DataSet, "data")
        $DataSet.Tables[0]
    }

    Catch {
        # $($Error[0]) is required here; "$Error[0]" would expand $Error and append [0] literally
        Write-Host "ERROR : Unable to run query : $Query `n$($Error[0])"
    }

    Finally {
        if ($Connection) { $Connection.Close() }
    }
}

# Define the disk location to store aggregate and volume size reports retrieved from OCUM
$rptDir = "E:\ssh\aggr-vol-space"
$filedate = (Get-Date).ToString('yyyyMMdd')
$aggrrptFilename = "aggrSize`_$filedate.csv"
$aggrrptFile = Join-Path $rptDir $aggrrptFilename
$volrptFilename = "volSize`_$filedate.csv"
$volrptFile = Join-Path $rptDir $volrptFilename

# Verify the report directory exists
if ( -not (Test-Path $rptDir) ) {
    Write-Host "Error: Report directory $rptDir does not exist."
    exit
}

# Produce the aggregate report from OCUM (the raw-KB version is kept for reference)
#$aggrs = MySQL -Query "select aggregate.name as 'Aggregate', aggregate.sizeTotal as 'TotalSize KB', aggregate.sizeUsed as 'UsedSize KB', aggregate.sizeUsedPercent as 'Used %', aggregate.sizeAvail as 'Available KB', aggregate.hasLocalRoot as 'HasRootVolume' from aggregate"
$aggrs = MySQL -Query "select aggregate.name as 'Aggregate', round(aggregate.sizeTotal/Power(1024,3),1) as 'TotalSize GB', round(aggregate.sizeUsed/Power(1024,3),1) as 'UsedSize GB', aggregate.sizeUsedPercent as 'Used %', round(aggregate.sizeAvail/Power(1024,3),1) as 'Available GB', aggregate.hasLocalRoot as 'HasRootVolume' from aggregate"
$aggrs | Where-Object {$_.HasRootVolume -eq $False} | Export-Csv -NoTypeInformation $aggrrptFile

# Produce the volume report from OCUM
$vols = MySQL -Query "select volume.name as 'Volume', clusternode.name as 'Nodename', aggregate.name as 'Aggregate', round(volume.size/Power(1024,3),1) as 'TotalSize GB', round(volume.sizeUsed/Power(1024,3),1) as 'UsedSize GB', volume.sizeUsedPercent as 'Used %', round(volume.sizeAvail/Power(1024,3),1) as 'AvailableSize GB', volume.isSvmRoot as 'isSvmRoot', volume.isLoadSharingMirror as 'isLSMirror' from volume,clusternode,aggregate where clusternode.id = volume.nodeId AND volume.aggregateId = aggregate.id"
$vols | Where-Object {$_.isSvmRoot -eq $False -and $_.isLSMirror -eq $False -and $_.Volume -notmatch "vol0$"} | Export-Csv -NoTypeInformation $volrptFile
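Because the MySQL function is self-contained, you can also paste it into a session and fire ad-hoc queries at any other table in the ocum_report schema. For example, listing the cluster nodes that the volume report joins against:

# Ad-hoc query against the ocum_report schema
MySQL -Query "select clusternode.name as 'Node' from clusternode" | Format-Table -AutoSize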

Update ONTAP Image on cDOT by copying files locally

  • Download the ONTAP image to your computer
  • Access a CIFS share / NFS mount (cDOT volume) and copy the image to this volume
  • Log in to the Systemshell of each node as the diag user
  • sudo cp -rf /clus/<vserver>/volume /mroot/etc/software
  • exit systemshell
  • system node image package show
  • system node image update -node <node-name> -package file://localhost/mroot/etc/software/831_q_image.tgz
  • system node image show
login as: admin 
Password:
 
cluster1::> set diag -rows 0
Warning: These diagnostic commands are for use by NetApp personnel only.
 Do you want to continue? {y|n}: y

cluster1::*> security login show -user-or-group-name diag
 Vserver: cluster1
 Authentication                  Acct
 User/Group Name  Application Method         Role Name        Locked
 ---------------- ----------- -------------- ---------------- ------
 diag             console     password       admin            no

cluster1::*> security login unlock -username diag
 
cluster1::*> security login password -username diag
 Enter a new password:
 Enter it again:

cluster1::*> version
 NetApp Release 8.3.1RC1: Fri Jun 12 21:46:00 UTC 2015

cluster1::*> system node image package show
 This table is currently empty.

cluster1::*> system node image package show -node cluster1-01 -package file://localhost/mroot/etc/software
 There are no entries matching your query.

cluster1::*> systemshell -node cluster1-01
 (system node systemshell)
 Data ONTAP/amd64 (cluster1-01) (pts/2)
 login: diag
 Password:
 Last login: Mon Jul 27 16:54:10 from localhost

cluster1-01% sudo mkdir /mroot/etc/software
 
cluster1-01% sudo ls /clus/NAS/ntfs
 .snapshot       831_q_image.tgz BGInfo          nfs-on-ntfs     no-user-access  ntfs-cifs.txt

cluster1-01% sudo cp -rf /clus/NAS/ntfs/831_q_image.tgz /mroot/etc/software
 
cluster1-01% sudo ls /mroot/etc/software
 831_q_image.tgz

cluster1-01% exit
 logout

cluster1::*> system node image package show
 Package
 Node         Repository     Package File Name
 ------------ -------------- -----------------
 cluster1-01
 mroot
 831_q_image.tgz

cluster1::*> system node image update -node cluster1-01 -package file://localhost/mroot/etc/software/831_q_image.tgz

Software update started on node cluster1-01. Updating image2 with package file://localhost/mroot/etc/software/831_q_image.tgz.
 Listing package contents.
 Decompressing package contents.
 Invoking script (install phase). This may take up to 60 minutes.
 Mode of operation is UPDATE
 Current image is image1
 Alternate image is image2
 Package MD5 checksums pass
 Versions are compatible
 Available space on boot device is 1372 MB
 Required  space on boot device is 438 MB
 Kernel binary matches install machine type
 LIF checker script is invoked.
 NO CONFIGURATIONS WILL BE CHANGED DURING THIS TEST.
 Checking ALL Vservers for sufficiency LIFs.
 Running in upgrade mode.
 Running in report mode.
 Enabling Script Optimizations.
 No need to do upgrade check of external servers for this installed version.
 LIF checker script has validated configuration.
 NFS netgroup check script is invoked.
 NFS netgroup check script has run successfully.
 NFS exports DNS check script is invoked.
 netapp_nfs_exports_dns_check script begin
 netapp_nfs_exports_dns_check script end
 NFS exports DNS check script has completed.
 Getting ready to install image
 Directory /cfcard/x86_64/freebsd/image2 created
 Syncing device...
 Extracting to /cfcard/x86_64/freebsd/image2...
 x CHECKSUM
 x VERSION
 x COMPAT.TXT
 x BUILD
 x netapp_nfs_netgroup_check
 x metadata.xml
 x netapp_nfs_exports_dns_check
 x INSTALL
 x netapp_sufficiency_lif_checker
 x cap.xml
 x platform.ko
 x kernel
 x fw.tgz
 x platfs.img
 x rootfs.img
 Installed MD5 checksums pass
 Installing diagnostic and firmware files
 Firmware MD5 checksums pass
 Installation complete. image2 updated on node cluster1-01.
 
cluster1::*>

cluster1::*> system node image show
 Is      Is                                Install
 Node     Image   Default Current Version                   Date
 -------- ------- ------- ------- ------------------------- -------------------
 cluster1-01
 image1  true    true    8.3.1RC1                  -
 image2  false   false   8.3.1                     2/10/2016 05:32:50
 2 entries were displayed.
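On a multi-node cluster you end up repeating the update and verification for every node. A quick way to script the repetitive part is to loop over the nodes and push the same CLI through Invoke-NcSsh (a sketch; the node list is hypothetical and I'm assuming Invoke-NcSsh can reuse an existing Connect-NcController session):

# Kick off the image update on each node, then confirm both images afterwards
$nodes = "cluster1-01","cluster1-02"   # hypothetical node list: adjust for your cluster
foreach ($node in $nodes) {
    Invoke-NcSsh "system node image update -node $node -package file://localhost/mroot/etc/software/831_q_image.tgz"
}
Invoke-NcSsh "system node image show"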