Convert Standard to Flex Cluster


Converting the cluster mode from Standard Cluster to Flex Cluster

To determine the current mode of the cluster:

[grid@rac1 addnode]$ crsctl get cluster mode status
Cluster is running in "standard" mode

[grid@rac1 addnode]$ asmcmd showclustermode
ASM cluster : Flex mode enabled
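The asmcmd output above shows that Flex ASM is already enabled, which is a prerequisite for a Flex Cluster. As an optional cross-check (illustrative only; the exact output depends on your configuration), the ASM configuration can also be inspected with srvctl:

[grid@rac1 ~]$ srvctl config asm
[grid@rac1 ~]$ srvctl status asm -detail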

Creating GNS with a GNS VIP

A Flex Cluster requires GNS with a fixed VIP, so configure it before changing the cluster mode.

[root@rac1 Desktop]# cd /u01/app/12.1.0/grid/bin/
[root@rac1 bin]# ./srvctl add gns -vip GNS-vip
[root@rac1 bin]# ./srvctl config gns
GNS is enabled.
GNS VIP addresses: 192.168.1.114
Domain served by GNS: N_FWD
To start GNS, run the following command as root:

[root@rac1 bin]# ./srvctl start gns
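Once GNS has been started, a quick status check can confirm it is running (optional; output format varies by version):

[root@rac1 bin]# ./srvctl status gns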

Change the cluster mode to Flex as the root user:

[root@rac1 bin]# ./crsctl set cluster mode flex
CRS-4933: Cluster mode set to "flex"; restart Oracle High Availability Services on all nodes for cluster to run in "flex" mode.

The change of cluster mode takes effect only after Oracle High Availability Services is restarted on all nodes.

[root@rac1 bin]# ./crsctl stop crs

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'

.
.
.
.
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.

[root@rac1 bin]# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
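The restart has to be performed on every node of the cluster for the new mode to take effect, not just on rac1; for example, the same sequence would be run as root on rac2 (assuming the same Grid home path):

[root@rac2 bin]# ./crsctl stop crs
[root@rac2 bin]# ./crsctl start crs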

[grid@rac1 ~]$ crsctl get cluster mode status
Cluster is running in "flex" mode

[grid@rac1 ~]$ crsctl get node role config -all
Node 'rac1' configured role is 'hub'
Node 'rac2' configured role is 'hub'

[grid@rac1 ~]$ crsctl get node role status -all
Node 'rac1' active role is 'hub'
Node 'rac2' active role is 'hub'

[grid@rac1 ~]$ cluvfy stage -pre nodeadd -n rac3 -fixup -verbose

Performing pre-checks for node addition

Checking node reachability…

Check: Node reachability from node "rac1"

  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  rac3                                  yes
.
.
.
GNS VIP resource configuration check passed.

GNS integrity check passed

Checking Flex Cluster node role configuration…
Flex Cluster node role configuration check passed

Pre-check for node addition was successful.
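GNS can also be validated on its own, outside the nodeadd stage, with the cluvfy GNS component check (an optional, illustrative extra step):

[grid@rac1 ~]$ cluvfy comp gns -postcrsinst -verbose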

[root@rac1 bin]# xhost +
access control disabled, clients can connect from any host
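Because addnode.sh launches the graphical OUI, the DISPLAY variable must also point at the X server in the session that runs it (shown here for a local console; adjust for your setup):

[grid@rac1 ~]$ export DISPLAY=:0.0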

[grid@rac1 grid]$ cd /u01/app/12.1.0/grid/addnode/
[grid@rac1 addnode]$ ./addnode.sh

[Screenshot p1]

In the OUI, click Add and specify the new node's name and its node role (Hub or Leaf).
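As an alternative to the GUI, the node addition can also be scripted by running addnode.sh in silent mode (a sketch only; with GNS configured, no VIP needs to be supplied for a leaf node, but verify the parameters against your environment before use):

[grid@rac1 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_NODE_ROLES={leaf}"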

[Screenshot p2]

[Screenshot p3]

[Screenshot p4]
This warning appears because we reduced the RAM allocated to the virtual machines; it can be ignored, and we can proceed.

[Screenshot p5]

[Screenshot p6]

[Screenshot p7]

[root@rac1 Desktop]# cd /u01/app/oraInventory/
[root@rac1 oraInventory]# ./orainstRoot.sh

Changing permissions of /u01/app/oraInventory.
Adding read, write permissions for group.
Removing read, write, and execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

[root@rac1 oraInventory]# cd /u01/app/12.1.0/grid/
[root@rac1 grid]# ./root.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/12.1.0/grid
.
.
.
.
.

2016/07/08 21:50:14 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

2016/07/08 21:50:17 CLSRSC-456: The Oracle Grid Infrastructure has already been configured.

Execute the same scripts on node rac3:

[root@rac3 12.1.0]# cd /u01/app/oraInventory/
[root@rac3 oraInventory]# ./orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read, write permissions for group.
Removing read, write, execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

[root@rac3 oraInventory]# cd /u01/app/12.1.0/grid/
[root@rac3 grid]# ./root.sh

.
.
.
CRS-2676: Start of 'ora.crsd' on 'rac3' succeeded
CRS-2883: Resource 'ora.crsd' failed during Clusterware stack start.
CRS-4406: Oracle High Availability Services synchronous start failed.
CRS-4000: Command Start failed, or completed with errors.
2016/07/17 13:36:38 CLSRSC-117: Failed to start Oracle Clusterware stack

Died at /u01/app/12.1.0/grid/crs/install/crsinstall.pm line 914.
The command '/u01/app/12.1.0/grid/perl/bin/perl -I/u01/app/12.1.0/grid/perl/lib -I/u01/app/12.1.0/grid/crs/install /u01/app/12.1.0/grid/crs/install/rootcrs.pl' execution failed

Workaround: first take a backup of rootconfig.sh:
[root@rac3 config]# cp /u01/app/12.1.0/grid/crs/config/rootconfig.sh /u01/app/12.1.0/grid/crs/config/bak_rootconfig.sh

[Screenshot p8]

Then edit /u01/app/12.1.0/grid/crs/config/rootconfig.sh and uncomment the following lines:

#if [ "$ADDNODE" = "true" ]; then
#  SW_ONLY=false
#  HA_CONFIG=false
#fi

so that they read:

if [ "$ADDNODE" = "true" ]; then
  SW_ONLY=false
  HA_CONFIG=false
fi
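The same edit can also be made non-interactively, for example with sed (a sketch that assumes the commented block appears exactly as above and that the backup shown earlier has already been taken; verify the resulting file before re-running root.sh):

[root@rac3 config]# sed -i '/^#if \[ "$ADDNODE" = "true" \]/,/^#fi/ s/^#//' /u01/app/12.1.0/grid/crs/config/rootconfig.sh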

[root@rac3 config]# cd /u01/app/12.1.0/grid/crs/install/

[root@rac3 install]# ./rootcrs.pl -verbose -deconfig -force
.
.
.
2016/07/17 14:11:46 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.

2016/07/17 14:11:47 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node

Now re-run root.sh:

[root@rac3 grid]# cd /u01/app/12.1.0/grid

[root@rac3 grid]# ./root.sh
.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation…
cvuqdisk-1.0.9-1
2016/07/17 14:17:15 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster … succeeded
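Before returning to the installer, it is worth confirming that the Clusterware stack really is up on rac3 (optional check):

[root@rac3 grid]# ./bin/crsctl check crs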

Now click OK in the installer.

[Screenshot p9]

Post verification

[grid@rac1 addnode]$ olsnodes -s -t
rac1 Active Unpinned
rac2 Active Unpinned
rac3 Active Unpinned

[grid@rac1 addnode]$ oifcfg getif
eth0 192.168.1.0 global public
eth1 192.168.2.0 global cluster_interconnect,asm

[grid@rac1 addnode]$ asmcmd showclusterstate
Normal

[grid@rac1 addnode]$ crsctl get node role config -all
Node 'rac1' configured role is 'hub'
Node 'rac2' configured role is 'hub'
Node 'rac3' configured role is 'leaf'

[grid@rac1 addnode]$ crsctl get node role status -all
Node 'rac1' active role is 'hub'
Node 'rac2' active role is 'hub'
Node 'rac3' active role is 'leaf'

On the leaf node:

[grid@rac3 bin]$ cd /u01/app/12.1.0/grid/bin/
[grid@rac3 bin]$ ./crsctl get node role config -all
Node 'rac1' configured role is 'hub'
Node 'rac2' configured role is 'hub'
Node 'rac3' configured role is 'leaf'

[grid@rac3 bin]$ ./crsctl get node role status -all
Node 'rac1' active role is 'hub'
Node 'rac2' active role is 'hub'
Node 'rac3' active role is 'leaf'
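For completeness: a node's role is not fixed at addition time. It could later be changed with crsctl, run as root on the node in question and followed by a restart of the stack for the change to take effect (illustrative only; not a step performed in this walkthrough):

[root@rac3 bin]# ./crsctl set node role hub
[root@rac3 bin]# ./crsctl stop crs
[root@rac3 bin]# ./crsctl start crs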

Verifying with cluvfy on node rac1

[grid@rac1 addnode]$ cluvfy stage -post nodeadd -n rac3 -verbose


.

Result: Clock synchronization check using Network Time Protocol (NTP) passed

Oracle Cluster Time Synchronization Services check passed

Post-check for node addition was successful.
