Oracle 11g RAC: Adding and Deleting a Node in a Cluster


 

Adding a New Node to an Existing Cluster (11gR2)

  1. Before installing the Grid Infrastructure and database software, ensure the network on the new node is configured properly, then compare the new node against an existing node:

            [grid@rac1 ~]$ cluvfy comp peer -n rac2 -refnode rac1 -r 11gR2
            Verifying peer compatibility
            Checking peer compatibility…
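
            Before relying on cluvfy, a quick manual sanity check of user
            equivalence and name resolution does no harm (hostnames below
            follow this example; rac2-priv is an assumed alias for the
            private interconnect):

            [grid@rac1 ~]$ ssh rac2 date        # must return a date with no password prompt
            [grid@rac1 ~]$ ping -c 1 rac2-priv  # private interconnect must resolve and answer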

  2. Validate that the node can be added:

            [grid@rac1 ~]$ cluvfy stage -pre nodeadd -n rac2 -fixup -verbose
            Performing pre-checks for node addition
            Checking node reachability…
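
            If any pre-check fails with a correctable issue, the -fixup flag
            makes cluvfy generate a fixup script to be run as root on the
            affected node (the path below is the typical 11.2 pattern, not a
            fixed location):

            [root@rac2 ~]# /tmp/CVU_11.2.0.3.0_grid/runfixup.sh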

  3. GRID_HOME

            Install the Grid binaries using the addNode.sh script:
            [grid@rac1 bin]$ /u01/app/11.2.0/grid/oui/bin/addNode.sh -silent \
            "CLUSTER_NEW_NODES={rac2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2-vip}"
            Starting Oracle Universal Installer…
            …………Performing tests to see whether nodes rac2 are available
            ……………………………………………………… 100% Done

  4. Run root.sh on the new node

            [root@rac2 ~]# /u01/app/11.2.0/grid/root.sh
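
            Once root.sh completes, the clusterware stack should be up on the
            new node; expected output of a quick check looks like this:

            [grid@rac2 ~]$ /u01/app/11.2.0/grid/bin/crsctl check crs
            CRS-4638: Oracle High Availability Services is online
            CRS-4537: Cluster Ready Services is online
            CRS-4529: Cluster Synchronization Services is online
            CRS-4533: Event Manager is online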

  5. Verify the node addition

            [grid@rac2 ~]$ cluvfy stage -post nodeadd -n rac2
            [oracle@rac2 ~]$ cluvfy stage -pre dbinst -n rac2 -r 11gR2
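
            Cluster membership can also be confirmed directly from any node:

            [grid@rac1 ~]$ olsnodes -n -s
            rac1    1       Active
            rac2    2       Active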

  6. ORACLE_HOME

            Install the Oracle (RDBMS) binaries using the addNode.sh script.
            Run addNode.sh from the RDBMS home as the oracle user, then run root.sh on the new node:
            [oracle@rac1 ~]$ /u01/app/oracle/product/11.2.0/dbhome_1/oui/bin/addNode.sh -silent \
            "CLUSTER_NEW_NODES={rac2}"
            [root@rac2 ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh

  7. With the database software installed, add a database instance on the new node.

            On any existing node, run DBCA ($ORACLE_HOME/bin/dbca) to add the new instance:

            [oracle@rac1 ~]$ /u01/app/oracle/product/11.2.0/dbhome_1/bin/dbca -silent \
            -addInstance -nodeList rac2 -gdbName racdb -instanceName racdb2 \
            -sysDBAUserName sys -sysDBAPassword oracle
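
            Afterwards, confirm that the new instance is registered and
            running (instance names follow the racdb1/racdb2 pattern used in
            the deletion section below):

            [oracle@rac1 ~]$ srvctl status database -d racdb
            Instance racdb1 is running on node rac1
            Instance racdb2 is running on node rac2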
 

Deleting a Node from an Existing Cluster (11gR2)

            Node removal in a two-node RAC can be divided into six high-level tasks:

  • Remove the Database Instance
  • Update the inventories for RDBMS
  • Remove the Database Binary
  • Deconfigure Grid Cluster Daemons and Services
  • Remove the Node from the Grid Infrastructure layer
  • Update the inventories for GRID Infrastructure

            The detailed steps:

  • Ensure that none of the defined services are active on the instance you want to remove:

            [oracle@rac1 ~]$ srvctl status service -d racdb
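
            If a service is still running on the instance being removed,
            relocate it to a surviving instance first (the service name
            oltp_svc is illustrative):

            [oracle@rac1 ~]$ srvctl relocate service -d racdb -s oltp_svc -i racdb2 -t racdb1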

  • Remove the instance using DBCA; this can be done through the GUI, in silent mode, or via Enterprise Manager. Run your chosen method from a node that will not be removed.

            [oracle@rac1 ~]$ dbca -silent -deleteInstance -nodeList rac2 -gdbName racdb \
            -instanceName racdb2 -sysDBAUserName sys -sysDBAPassword xxxxx
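
            Once DBCA finishes, confirm that only the surviving instance is
            still registered; the instance line of the configuration should
            now read:

            [oracle@rac1 ~]$ srvctl config database -d racdb | grep -i instances
            Database instances: racdb1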

  • Once the database instance is removed, remove the listener from the host: stop and disable the listener on the node being removed.

            [oracle@rac1 ~]$ srvctl status listener
            Listener LISTENER is enabled
            Listener LISTENER is running on node(s): rac1,rac2
            [oracle@rac1 ~]$ srvctl stop listener -n rac2
            [oracle@rac1 ~]$ srvctl disable listener -n rac2
            [oracle@rac1 ~]$ srvctl status listener
            Listener LISTENER is enabled
            Listener LISTENER is running on node(s): rac1

  • Update the inventory so the ORACLE_HOME on the departing node references only itself. On the node being removed, execute the following:

            [oracle@rac2 ~]$ /u01/app/oracle/product/11.2.0/dbhome_1/oui/bin/runInstaller \
            -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac2}" -local
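
            The recorded node list can be double-checked in the central
            inventory (the path assumes the usual /u01 oraInventory layout):

            [oracle@rac2 ~]$ grep -A 3 "dbhome_1" /u01/app/oraInventory/ContentsXML/inventory.xml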

  • Next, deinstall the ORACLE_HOME from the node.

            [oracle@rac2 ~]$ /u01/app/oracle/product/11.2.0/dbhome_1/deinstall/deinstall -local

  • Last step for the RDBMS home is to update the inventory on the remaining nodes.

            [oracle@rac1 ~]$ /u01/app/oracle/product/11.2.0/dbhome_1/oui/bin/runInstaller \
            -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1}"

  • Now move to the Grid Infrastructure layer to remove the Clusterware and ASM components.

            From the GRID Infrastructure home as root user:

            [root@rac2 ~]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force
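
            Note that the later crsctl delete node step requires the node to
            be unpinned; check from a surviving node and unpin only if
            necessary:

            [grid@rac1 ~]$ olsnodes -s -t
            rac1    Active  Unpinned
            rac2    Active  Unpinned
            [root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl unpin css -n rac2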

  • Update the inventory as the grid owner on the node being removed.

            [grid@rac2 ~]$ /u01/app/11.2.0/grid/oui/bin/runInstaller -updateNodeList \
            ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac2}" CRS=TRUE -local

  • Now remove the Grid Infrastructure binaries.

            [grid@rac2 ~]$ /u01/app/11.2.0/grid/deinstall/deinstall -local

  • Once everything is removed from the departing node, update the inventory on the remaining nodes:

            [grid@rac1 ~]$ /u01/app/11.2.0/grid/oui/bin/runInstaller -updateNodeList \
            ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={rac1}" CRS=TRUE

  • To remove all remaining entries from the clusterware layer, execute as root:

            [root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl delete node -n rac2
            CRS-4661: Node rac2 successfully deleted.
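
            The cluster node list should now contain only the surviving node:

            [grid@rac1 ~]$ olsnodes -s
            rac1    Active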

  • Optionally, use cluvfy to verify that the node removal was successful.

            [grid@rac1 ~]$ cluvfy stage -post nodedel -n rac2
            Performing post-checks for node removal
            Checking CRS integrity…
            CRS integrity check passed
            Node removal check passed
            Post-check for node removal was successful.
