Clustered Data ONTAP Upgrade Procedure: 8.2 to 8.3.2P5

Upgrade Prerequisites

Pre-upgrade Checklist
 • Send an AutoSupport message from all the nodes
 system node autosupport invoke -type all -node * -message "Upgrading to 8.3.2P5"
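 If AutoSupport is configured to send messages, you can confirm they were generated and transmitted (output wording varies by release):
 system node autosupport history show -node *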

• Verify Cluster Health
 cluster show
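 Every node should report true for both Health and Eligibility; illustrative output (node names will differ):
 Node                  Health  Eligibility
 --------------------- ------- ------------
 snowy001              true    true
 snowy002              true    true
 (remaining nodes omitted; all should read true/true)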

• Verify the cluster is in RDB quorum
 set advanced
 cluster ring show -unitname vldb
 cluster ring show -unitname mgmt
 cluster ring show -unitname vifmgr
 cluster ring show -unitname bcomd
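 For each ring, every node should report the same epoch and DB epoch, agree on a single master, and be online; illustrative output for one ring (epochs and transaction counts will differ):
 Node      UnitName Epoch    DB Epoch DB Trnxs Master    Online
 --------- -------- -------- -------- -------- --------- ---------
 snowy001  vldb     5        5        1204     snowy001  master
 snowy002  vldb     5        5        1204     snowy001  secondary
 (remaining nodes omitted; all should be secondary to the same master)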

 • Verify Vserver health (each of these commands should return no entries)
 storage aggregate show -state !online
 volume show -state !online
 network interface show -status-oper down
 network interface show -is-home false
 storage disk show -state broken
 storage disk show -state maintenance|pending|reconstructing

 • Revert any LIFs that are not home, then check overall system health
 network interface revert -vserver <vserver-name> -lif <lif-name>
 system health status show
 dashboard alarm show

 • Verify LIF failover configuration (data LIFs)
 network interface failover show

• Move all LS mirror source volumes to an aggregate on the node that will be upgraded last (snowy008)
vol move start -vserver svm1 -volume rootvol -destination-aggregate snowy008_aggr01_sas
vol move start -vserver svm2 -volume rootvol -destination-aggregate snowy008_aggr01_sas

• Modify the existing failover groups so that LIFs can fail over only to ports on the first two nodes to be upgraded (snowy001 and snowy002)
 snowy_CIFS_fg_a0a-101
failover-groups delete -failover-group snowy_CIFS_fg_a0a-101 -node snowy003 -port a0a-101
failover-groups delete -failover-group snowy_CIFS_fg_a0a-101 -node snowy004 -port a0a-101
failover-groups delete -failover-group snowy_CIFS_fg_a0a-101 -node snowy005 -port a0a-101
failover-groups delete -failover-group snowy_CIFS_fg_a0a-101 -node snowy006 -port a0a-101
failover-groups delete -failover-group snowy_CIFS_fg_a0a-101 -node snowy007 -port a0a-101
failover-groups delete -failover-group snowy_CIFS_fg_a0a-101 -node snowy008 -port a0a-101

snowy::> failover-groups show -failover-group snowy_CIFS_fg_a0a-101

snowy_NFS_fg_a0a-102
failover-groups delete -failover-group snowy_NFS_fg_a0a-102 -node snowy003 -port a0a-102
failover-groups delete -failover-group snowy_NFS_fg_a0a-102 -node snowy004 -port a0a-102
failover-groups delete -failover-group snowy_NFS_fg_a0a-102 -node snowy005 -port a0a-102
failover-groups delete -failover-group snowy_NFS_fg_a0a-102 -node snowy006 -port a0a-102
failover-groups delete -failover-group snowy_NFS_fg_a0a-102 -node snowy007 -port a0a-102
failover-groups delete -failover-group snowy_NFS_fg_a0a-102 -node snowy008 -port a0a-102

snowy::> failover-groups show -failover-group snowy_NFS_fg_a0a-102
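After the deletions, each failover group should list only the ports on snowy001 and snowy002; illustrative output (column layout abbreviated, assuming the group originally held the a0a-102 port on all eight nodes):
Failover Group           Node       Port
------------------------ ---------- --------
snowy_NFS_fg_a0a-102     snowy001   a0a-102
snowy_NFS_fg_a0a-102     snowy002   a0a-102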

• Modify one LIF on each SVM so that its home node is one of the first two nodes
network interface modify -vserver svm1 -lif svm1_CIFS_01 -home-node snowy001 -auto-revert true
network interface modify -vserver svm2 -lif svm2_CIFS_01 -home-node snowy001 -auto-revert true

• Revert LIFs that are not home
network interface revert *
###############################################################################
UPGRADE

Determine the current image and download the new image on each node
 • Current Image
 system node image show

• Verify no jobs are running
 job show

• Install Data ONTAP on all the nodes from each Service Processor SSH console
 system node image update -node snowy001 -package https://webserver/832P5_q_image.tgz -replace-package true
 system node image update -node snowy002 -package https://webserver/832P5_q_image.tgz -replace-package true
 system node image update -node snowy003 -package https://webserver/832P5_q_image.tgz -replace-package true
 system node image update -node snowy004 -package https://webserver/832P5_q_image.tgz -replace-package true
 system node image update -node snowy005 -package https://webserver/832P5_q_image.tgz -replace-package true
 system node image update -node snowy006 -package https://webserver/832P5_q_image.tgz -replace-package true
 system node image update -node snowy007 -package https://webserver/832P5_q_image.tgz -replace-package true
 system node image update -node snowy008 -package https://webserver/832P5_q_image.tgz -replace-package true
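 The download and installation progress on each node can be monitored while the updates run (output wording varies by release):
 system node image show-update-progress -node *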

• Disable 32-bit aggregate support (Data ONTAP 8.3 does not support 32-bit aggregates)
 storage aggregate 64bit-upgrade 32bit-disable
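 Before disabling 32-bit support, confirm that no 32-bit aggregates remain; a quick check using the aggregate block-type field (all aggregates should report 64-bit):
 storage aggregate show -fields block-type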

• Verify software is installed
 system node image show

• Set the 8.3.2P5 image as the default image on every node
 system image modify {-node snowy001 -iscurrent false} -isdefault true
 system image modify {-node snowy002 -iscurrent false} -isdefault true
 system image modify {-node snowy003 -iscurrent false} -isdefault true
 system image modify {-node snowy004 -iscurrent false} -isdefault true
 system image modify {-node snowy005 -iscurrent false} -isdefault true
 system image modify {-node snowy006 -iscurrent false} -isdefault true
 system image modify {-node snowy007 -iscurrent false} -isdefault true
 system image modify {-node snowy008 -iscurrent false} -isdefault true
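 Confirm that the new image is now the default, but not yet the current, image on every node:
 system node image show -fields iscurrent,isdefault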
#################
###  REBOOT NODES 1 and 2
#################

Reboot the first two nodes in the cluster

• Delete any running or queued aggregate, volume, SnapMirror copy, or Snapshot job
snowy::> job delete -id <job-id>

STEP 1: Reboot node snowy002 first and wait for it to come up, then move on to its partner node

storage failover show
storage failover takeover -bynode snowy001
snowy002 reboots; verify that snowy002 is in the "Waiting for Giveback" state
Allow approximately 15 minutes before initiating giveback, so that services are returned cleanly to the original owner
storage failover giveback -fromnode snowy001 -override-vetoes true
storage failover show-giveback (keep checking storage aggregate show until the aggregates return home)
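While snowy002 waits for giveback, storage failover show should look roughly like this (illustrative; exact state descriptions vary by release):
Node      Partner   Possible State Description
--------- --------- -------- -------------------------------------
snowy001  snowy002  false    In takeover
snowy002  snowy001  -        Waiting for giveback (HA mailboxes)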

• Verify the node booted up with the 8.3.2P5 image
 system node image show
 system node upgrade-revert show -node snowy002
 Once the aggregates are home, check the LIFs; if any are not home, revert them
 network interface revert *

• Verify that node’s ports and LIFs are up and operational
 network port show -node snowy002
 network interface show -data-protocol nfs|cifs -role data -curr-node snowy002

• Verifying the networking configuration after a major upgrade
 After completing a major upgrade to Data ONTAP 8.3.2P5, you should verify that the LIFs required for external server connectivity, failover groups, and broadcast domains are configured correctly for your environment.

1. Verify the broadcast domains:
network port broadcast-domain show
During the upgrade to the Data ONTAP 8.3 release family, Data ONTAP automatically creates broadcast domains based on the failover groups in the cluster.
For each layer 2 network, verify that a broadcast domain exists and that it includes all of the ports that belong to the network. If you need to make any changes, use the network port broadcast-domain commands (an illustrative listing follows this list).

2. If necessary, use the network interface modify command to change the LIFs that you configured for external server connectivity.
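As referenced in step 1, the broadcast domain listing should resemble the following (illustrative; IPspace names, MTUs, and port lists depend on your configuration):
network port broadcast-domain show
IPspace Broadcast                                  Update
Name    Domain Name   MTU  Port List               Status Details
------- ------------- ---- ----------------------- --------------
Default a0a-101       1500 snowy001:a0a-101        complete
                           snowy002:a0a-101        complete
Default a0a-102       1500 snowy001:a0a-102        complete
                           snowy002:a0a-102        complete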

###########
STEP 2: Reboot the partner node (snowy001)

Move all the data LIFs from snowy001 to snowy002

• Takeover node
storage failover takeover -bynode snowy002 -option allow-version-mismatch
snowy001 reboots; verify that snowy001 is in the "Waiting for Giveback" state
Allow approximately 15 minutes before initiating giveback, so that services are returned cleanly to the original owner
storage failover giveback -fromnode snowy002 -override-vetoes true
storage failover show (keep checking storage aggregate show until the aggregates return home)
system node upgrade-revert show -node snowy001

• Once the aggregates are home, check the LIFs; if any are not home, revert them
 network interface revert *

• Verify that node’s ports and LIFs are up and operational
 network port show -node snowy001
 network interface show -data-protocol nfs|cifs -role data -curr-node snowy001

• Verifying the networking configuration after a major upgrade
 After completing a major upgrade to Data ONTAP 8.3.2P5, you should verify that the LIFs required for external server connectivity, failover groups, and broadcast domains are configured correctly for your environment.

1. Verify the broadcast domains:
 network port broadcast-domain show
 During the upgrade to the Data ONTAP 8.3 release family, Data ONTAP automatically creates broadcast domains based on the failover groups in the cluster.
 For each layer 2 network, verify that a broadcast domain exists and that it includes all of the ports that belong to the network. If you need to make any changes, use the network port broadcast-domain commands.

2. If necessary, use the network interface modify command to change the LIFs that you configured for external server connectivity.

• Ensure that the cluster is in quorum and that services are running before upgrading the next pair of nodes:
 cluster show
 cluster ring show
#############
# NODES 3 and 4
#############
Follow the steps above and reboot the following nodes in this order:
 • snowy004
 • snowy003

#############
# NODES 5 and 6
#############
Follow the steps above and reboot the following nodes in this order:
 • snowy006
 • snowy005

#############
# NODES 7 and 8
#############
Follow the steps above and reboot the following nodes in this order:
 • snowy007
 • snowy008 (the last node to be upgraded and rebooted; it hosts all LS mirror source volumes)
###############################################################################
snowy::*> vol show -volume rootvol -fields aggregate
 (volume show)
 vserver      volume  aggregate
 ------------ ------- --------------------------
 svm1         rootvol snowy008_aggr01_sas
 svm2         rootvol snowy008_aggr01_sas

• Move all LS mirror source volumes back to their original nodes
 vol move start -vserver svm1 -volume rootvol -destination-aggregate snowy007_aggr01_sas
 vol move start -vserver svm2 -volume rootvol -destination-aggregate snowy005_aggr01_sas

• Move all the LIFs back to their original home nodes
network interface modify -vserver svm1 -lif svm1_CIFS_01 -home-node snowy007
network interface modify -vserver svm2 -lif svm2_CIFS_01 -home-node snowy008

• Revert LIFs back to their home nodes
 network interface revert *

• Modify the failover groups and add back the ports from nodes snowy003 through snowy008
 snowy_CIFS_fg_a0a-101
failover-groups create -failover-group snowy_CIFS_fg_a0a-101 -node snowy003 -port a0a-101
failover-groups create -failover-group snowy_CIFS_fg_a0a-101 -node snowy004 -port a0a-101
failover-groups create -failover-group snowy_CIFS_fg_a0a-101 -node snowy005 -port a0a-101
failover-groups create -failover-group snowy_CIFS_fg_a0a-101 -node snowy006 -port a0a-101
failover-groups create -failover-group snowy_CIFS_fg_a0a-101 -node snowy007 -port a0a-101
failover-groups create -failover-group snowy_CIFS_fg_a0a-101 -node snowy008 -port a0a-101

snowy_NFS_fg_a0a-102
failover-groups create -failover-group snowy_NFS_fg_a0a-102 -node snowy003 -port a0a-102
failover-groups create -failover-group snowy_NFS_fg_a0a-102 -node snowy004 -port a0a-102
failover-groups create -failover-group snowy_NFS_fg_a0a-102 -node snowy005 -port a0a-102
failover-groups create -failover-group snowy_NFS_fg_a0a-102 -node snowy006 -port a0a-102
failover-groups create -failover-group snowy_NFS_fg_a0a-102 -node snowy007 -port a0a-102
failover-groups create -failover-group snowy_NFS_fg_a0a-102 -node snowy008 -port a0a-102
##############################################################################
##########         ADD ABOVE PORTS TO BROADCAST DOMAINS     ##################
##############################################################################
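A minimal sketch of adding the restored ports back to their broadcast domains, assuming the broadcast domains created during the upgrade are named after the VLAN ports (a0a-101 and a0a-102) and live in the Default IPspace; substitute your actual IPspace and domain names:
network port broadcast-domain add-ports -ipspace Default -broadcast-domain a0a-101 -ports snowy003:a0a-101,snowy004:a0a-101,snowy005:a0a-101,snowy006:a0a-101,snowy007:a0a-101,snowy008:a0a-101
network port broadcast-domain add-ports -ipspace Default -broadcast-domain a0a-102 -ports snowy003:a0a-102,snowy004:a0a-102,snowy005:a0a-102,snowy006:a0a-102,snowy007:a0a-102,snowy008:a0a-102
network port broadcast-domain show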

Verify the cluster is healthy post-upgrade

• Check Data ONTAP version on the cluster and each node
 set advanced
 version (This should report as 8.3.2P5)
 system node image show -fields iscurrent,isdefault
 system node upgrade-revert show
 The status for each node should be listed as complete.

• Check Cluster Health
 cluster show

• Check the cluster is in RDB quorum
 set advanced
 cluster ring show -unitname vldb
 cluster ring show -unitname mgmt
 cluster ring show -unitname vifmgr
 cluster ring show -unitname bcomd
 cluster ring show -unitname crs

• Check Vserver health (the first four commands should return no entries)
 storage aggregate show -state !online
 volume show -state !online
 network interface show -status-oper down
 network interface show -is-home false
 failover-groups show

• Check LIF failover configuration (data LIFs)
 network interface failover show

• Check for Network Broadcast Domains
 network port broadcast-domain show

• Check that the AV servers are scanning files
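 Assuming off-box antivirus (Vscan) is in use, the scanner connection state can be checked per SVM (command availability depends on your AV integration):
 vserver vscan connection-status show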

• Check DNS connectivity
 dns show -state disabled

• Check connectivity to Domain Controller
 cifs domain discovered-servers show -vserver svm1 -status ok
 (repeat for all production Vservers)

• Verify CIFS is working on all the SVMs
 Browse the shares on the following SVMs from a Windows host
 \\svm1
 \\svm2

• Verify NFS is working
 Check connectivity from the Unix/Linux hosts where NFS exports are mounted; for example:
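 From a Linux host (hypothetical export path and mount point; substitute your own):
 showmount -e svm2
 mount -t nfs svm2:/vol1 /mnt/test && ls /mnt/test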
