clustered Data ONTAP 8.2.2P1 Upgrade Procedure

Here is the procedure to upgrade an 8-node cluster to clustered Data ONTAP 8.2.2P1.

Upgrade Prerequisites

  1. Replace any failed disks

Pre-upgrade Checklist

    1. Update shelf firmware on all nodes (use latest shelf firmware files available on NetApp Support)
    2. Update disk firmware on all nodes (using the all.zip file available on NetApp Support)
    3. Send autosupport from all the nodes
      • ::> system node autosupport invoke -type all -node * -message "Upgrading to 8.2.2P1"
    4. Verify Cluster Health
      • ::> cluster show
    5. Verify Cluster is in RDB
      • ::> set advanced
      • ::> cluster ring show -unitname vldb
      • ::> cluster ring show -unitname mgmt
      • ::> cluster ring show -unitname vifmgr
    6. Verify vserver health
      • ::> storage aggregate show -state !online
      • ::> volume show -state !online
      • ::> network interface show -status-oper down
      • ::> network interface show -is-home false
    7. Verify LIF failover configuration (data LIFs)
      • ::> network interface failover show
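
For reference, a healthy RDB ring (checklist item 5) looks something like the following. This is illustrative output for the first two nodes only, and the exact column layout varies by release; the point to check is that every node reports the same DB epoch and all nodes agree on a single master:

```
::*> cluster ring show -unitname vldb
Node            UnitName Epoch  DB Epoch DB Trnxs Master          Online
--------------- -------- ------ -------- -------- --------------- ---------
clusternode-01  vldb     42     42       112      clusternode-01  master
clusternode-02  vldb     42     42       112      clusternode-01  secondary
```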

Start Upgrade

    1. Determine the current image and download the new image
      • ::> system node image show
        New image file: 822P1_q_image.tgz
    2. Verify no jobs are running
      • ::> job show
    3. Delete any running or queued aggregate, volume, SnapMirror copy, or Snapshot job
      • ::> job delete -id <job-id>
    4. Install the new image on all nodes and set it as the default boot image
      • ::> system node image update -node * -package
        http://<web-server>/data_ontap/822P1_q_image.tgz -setdefault true
    5. Verify the software is installed
      • ::> system node image show
    6. Determine the "Epsilon" node
      • ::> set advanced
      • ::*> cluster show
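
In advanced privilege, cluster show adds an Epsilon column; the node showing true holds epsilon. Illustrative output (first two nodes only):

```
::*> cluster show
Node                 Health  Eligibility   Epsilon
-------------------- ------- ------------  ------------
clusternode-01       true    true          true
clusternode-02       true    true          false
```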

Reboot the node that holds epsilon first and wait for it to come back up, then move on to the other nodes:

      • ::> storage failover show
      • ::> storage failover modify -node * -auto-giveback false
      • ::> network interface migrate-all -node clusternode-02
      • ::> storage failover takeover -bynode clusternode-01
      • ::> storage failover giveback -fromnode clusternode-01 -override-vetoes true
      • ::> storage failover show (run storage aggregate show periodically until the aggregates return home)

Verify that the node booted with the 8.2.2P1 image

      • ::> system node image show
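
On the rebooted node, the new image should show as both current and default. Illustrative output (the old 8.2.1 version and the blank install dates are placeholders):

```
::> system node image show -node clusternode-02
Node            Image   Is Default Is Current Version   Install Date
--------------- ------- ---------- ---------- --------- ------------
clusternode-02  image1  false      false      8.2.1     -
clusternode-02  image2  true       true       8.2.2P1   -
```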

      Once the aggregates are home, verify the LIFs; revert any that are not home:

      • ::> network interface revert *


Repeat the following steps, in the same order, for each of the remaining nodes

      • ::> network interface migrate-all -node clusternode-01
      • ::> storage failover takeover -bynode clusternode-02
      • ::> storage failover giveback -fromnode clusternode-02 -override-vetoes true
      • ::> storage failover show (run storage aggregate show periodically until the aggregates return home)

Once the aggregates are home, verify the LIFs; revert any that are not home:

      • ::> network interface revert *

Ensure that the cluster is in quorum and that services are running before upgrading the next pair of nodes:

    • ::> cluster show
    • ::> cluster ring show

Reboot the nodes in this order:

  1. clusternode-01
  2. clusternode-02, clusternode-04, clusternode-06 (once clusternode-01 is up)
  3. clusternode-03, clusternode-05, clusternode-07 (once the previous group is up)
  4. clusternode-08 (once the previous group is up)
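
The takeover/giveback sequence is identical for every node, so it can help to generate the command list per node up front. A minimal sketch in Python, assuming the odd/even HA pairing (01/02, 03/04, ...) and the clusternode-NN naming used in this example cluster; adjust for your actual pairing:

```python
def upgrade_commands(node):
    """Return the clustershell commands that reboot `node` into the
    new image via its HA partner.

    Assumes odd/even HA pairing (clusternode-01/-02, -03/-04, ...),
    as in the example cluster above -- an assumption, not a rule.
    """
    prefix, num = node.rsplit("-", 1)
    n = int(num)
    # The partner performs the takeover while `node` reboots.
    partner = "%s-%02d" % (prefix, n + 1 if n % 2 == 1 else n - 1)
    return [
        "network interface migrate-all -node %s" % node,
        "storage failover takeover -bynode %s" % partner,
        "storage failover giveback -fromnode %s -override-vetoes true" % partner,
        "storage failover show",
    ]


if __name__ == "__main__":
    for cmd in upgrade_commands("clusternode-02"):
        print(cmd)
```

This only prints the commands for review; run them manually from the clustershell, verifying cluster health between nodes as described above.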

Enable auto-giveback for all the nodes

  • ::> storage failover modify -node <nodename> -auto-giveback true

Verify the cluster is healthy post-upgrade

  • ::> set advanced
  • ::> system node upgrade-revert show

The status for each node should be listed as complete.

Verify Cluster Health

  • ::> cluster show

Verify Cluster is in RDB

  • ::> set advanced
  • ::> cluster ring show -unitname vldb
  • ::> cluster ring show -unitname mgmt
  • ::> cluster ring show -unitname vifmgr

Verify vserver health

  • ::> storage aggregate show -state !online
  • ::> volume show -state !online
  • ::> network interface show -status-oper down
  • ::> network interface show -is-home false
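
Each of the four checks above should return no rows. If you script these checks (for example, over SSH to the cluster management LIF), a small helper can flag any non-empty result. A sketch in Python, assuming the clustershell's usual "There are no entries matching your query." message for empty results:

```python
def check_is_clear(output):
    """Return True if a show-command's output indicates no matching
    entries (nothing offline, down, or off its home port).

    Assumes the clustershell's standard empty-result message; any
    other non-empty output is treated as a failure so it gets
    reviewed by a human.
    """
    text = output.strip()
    return text == "" or "There are no entries matching your query." in text
```
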

Verify LIF failover configuration (data LIFs)

  • ::> network interface failover show

Backout Plan

  1. Verify that the Data ONTAP 8.2.2P1 Cluster-Mode software is installed:
    • ::> system node image show
  2. Trigger autosupport
    • ::> system node autosupport invoke -type all -node <nodename> -message "Reverting to 8.2.2P1 Cluster-Mode"
  3. Check revert to settings
    • ::> system node revert-to -node <nodename> -check-only true -version 8.2.2P1
  4. Revert the node to 8.2.2P1
    • ::> system node revert-to -node <nodename> -version 8.2.2P1
