- Kiran Dalvi
- 08 Nov, 2023
OLR ISSUES AND RESTORATION
1) Let us look at the OLR location, when it is created, and the directory it lives in.
Find the OLR location:
[root@rac1 ~]# cat /etc/oracle/olr.loc
olrconfig_loc=/u01/app/11.2.0/grid/cdata/rac1.olr
crs_home=/u01/app/11.2.0/grid

[root@rac1 ~]# ls -lt /u01/app/11.2.0/grid/cdata/rac1.olr
-rw------- 1 root oinstall 272756736 Jul 15 19:54 /u01/app/11.2.0/grid/cdata/rac1.olr

[root@rac1 ~]# date
Sat Jul 15 19:55:10 IST 2017

[root@rac1 ~]# ls -lt /etc/oracle/olr.loc
-rw-r--r-- 1 root oinstall 80 Jul 3 2014 /etc/oracle/olr.loc
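While everything is still healthy, it is worth verifying the OLR and checking what backups exist before we start breaking things. A minimal sanity check, run as root (output omitted here; both commands are used later in this post as well):

[root@rac1 ~]# ocrcheck -local              <<-- verifies the OLR named in olr.loc while the stack is up
[root@rac1 ~]# ocrconfig -local -showbackup <<-- lists available OLR backups before experimenting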
2) The cluster is running; however, we moved the olr.loc file.
[root@rac1 ~]# mv /etc/oracle/olr.loc /etc/oracle/olr.loc.bkp
[root@rac1 ~]# ls -lt /etc/oracle/olr.loc
ls: /etc/oracle/olr.loc: No such file or directory
[root@rac1 ~]#
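Because olr.loc is only read when OHASD initializes the OLR, the running stack is not affected by the move. A quick check to confirm that (same command used later in this post):

[root@rac1 ~]# crsctl check crs             <<-- stack still reports online; olr.loc is read only at OHASD startup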
3) Stop the cluster. During shutdown no errors were reported in the node alert log or in ohasd.log.
[root@rac1 ~]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
4) Starting the cluster failed; let us take a closer look at ohasd.log.
OHASD tries to initialize the OLR but is unable to find the OLR location and fails; before giving up it retried around 21 times.

[root@rac1 ~]# crsctl start crs
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.

[root@rac1 ~]# ps -ef|grep d.bin
root 8859 5140 0 20:14 pts/1 00:00:00 grep d.bin

# tail -50f ohasd.log
::
2017-07-15 20:05:46.766: [ AGFW][2988137360] {0:7:4} Shutdown request received from /u01/app/11.2.0/grid/bin/cssdagent_root
2017-07-15 20:05:46.766: [ AGFW][2988137360] {0:7:4} Agfw Proxy Server replying to the message: AGENT_SHUTDOWN_REQUEST[Proxy] ID 20486:24
2017-07-15 20:06:46.989: [ default][3040839376] OHASD Daemon Starting. Command string :reboot
2017-07-15 20:06:47.013: [ default][3040839376] Initializing OLR
2017-07-15 20:06:47.015: [ OCRRAW][3040839376]proprioo: cannot get current configuration (33)
2017-07-15 20:06:47.015: [ OCRRAW][3040839376]proprinit: Could not open raw device
2017-07-15 20:06:47.015: [ OCRAPI][3040839376]a_init:16!: Backend init unsuccessful : [33]
2017-07-15 20:06:47.015: [ CRSOCR][3040839376] OCR context init failure. Error: PROCL-33: Oracle Local Registry is not configured Storage layer error [Error opening olr.loc file. No such file or directory] [2]
2017-07-15 20:06:47.015: [ CRSOCR][3040839376][PANIC] OCR Context is NULL(File: caaocr.cpp, line: 145)
2017-07-15 20:06:47.015: [ default][3040839376][PANIC] CRSD Exiting. OCR Failed
2017-07-15 20:06:47.015: [ default][3040839376] Done.
2017-07-15 20:06:47.239: [ default][3041687248] OHASD Daemon Starting. Command string :restart
2017-07-15 20:06:47.239: [ default][3041687248] Initializing OLR
2017-07-15 20:06:47.240: [ OCRRAW][3041687248]proprioo: cannot get current configuration (33)
2017-07-15 20:06:47.240: [ OCRRAW][3041687248]proprinit: Could not open raw device
2017-07-15 20:06:47.240: [ OCRAPI][3041687248]a_init:16!: Backend init unsuccessful : [33]
2017-07-15 20:06:47.240: [ CRSOCR][3041687248] OCR context init failure. Error: PROCL-33: Oracle Local Registry is not configured Storage layer error [Error opening olr.loc file. No such file or directory] [2] <<----
2017-07-15 20:06:47.240: [ CRSOCR][3041687248][PANIC] OCR Context is NULL(File: caaocr.cpp, line: 145)
2017-07-15 20:06:47.240: [ default][3041687248][PANIC] CRSD Exiting. OCR Failed
2017-07-15 20:06:47.240: [ default][3041687248] Done.

SOLUTION: Move or rename the olr.loc file back to its original name and location (/etc/oracle/olr.loc).
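A minimal sketch of that fix, using the backup name created in step 2 (run as root; ownership and permissions should match what we saw in step 1, i.e. root:oinstall, mode 644):

[root@rac1 ~]# mv /etc/oracle/olr.loc.bkp /etc/oracle/olr.loc   <<-- put the pointer file back
[root@rac1 ~]# ls -l /etc/oracle/olr.loc                        <<-- confirm owner/permissions are unchanged
[root@rac1 ~]# crsctl start crs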
5) Let us modify the location of the OLR file inside olr.loc.
I changed olrconfig_loc to point to a different (non-existent) directory:

[root@rac1 oracle]# more olr.loc
olrconfig_loc=/u01/app/12.2.0/grid/cdata/rac1.olr <<--------
crs_home=/u01/app/11.2.0/grid

2017-07-15 20:29:57.310: [ default][3041003216] OHASD Daemon Starting. Command string :restart
2017-07-15 20:29:57.311: [ default][3041003216] Initializing OLR
2017-07-15 20:29:57.312: [ OCROSD][3041003216]utopen:6m': failed in stat OCR file/disk /u01/app/12.2.0/grid/cdata/rac1.olr, errno=2, os err string=No such file or directory
2017-07-15 20:29:57.312: [ OCROSD][3041003216]utopen:7: failed to open any OCR file/disk, errno=2, os err string=No such file or directory
2017-07-15 20:29:57.312: [ OCRRAW][3041003216]proprinit: Could not open raw device
2017-07-15 20:29:57.312: [ OCRAPI][3041003216]a_init:16!: Backend init unsuccessful : [26]
2017-07-15 20:29:57.312: [ CRSOCR][3041003216] OCR context init failure. Error: PROCL-26: Error while accessing the physical storage Operating System error [No such file or directory] [2] <<---------
2017-07-15 20:29:57.312: [ CRSOCR][3041003216][PANIC] OCR Context is NULL(File: caaocr.cpp, line: 145)
2017-07-15 20:29:57.312: [ default][3041003216][PANIC] CRSD Exiting. OCR Failed
2017-07-15 20:29:57.312: [ default][3041003216] Done.

SOLUTION: The cluster failed to start. To fix it, revert the change in the olr.loc file so that olrconfig_loc points to the correct OLR file.
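A sketch of the revert, restoring the olrconfig_loc value recorded in step 1 and restarting the stack (edit the file as root):

[root@rac1 oracle]# vi /etc/oracle/olr.loc      <<-- set olrconfig_loc back to /u01/app/11.2.0/grid/cdata/rac1.olr
[root@rac1 oracle]# cat /etc/oracle/olr.loc
olrconfig_loc=/u01/app/11.2.0/grid/cdata/rac1.olr
crs_home=/u01/app/11.2.0/grid
[root@rac1 oracle]# crsctl start crs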
6) This time, let us modify the crs_home location.
[root@rac1 oracle]# more olr.loc
olrconfig_loc=/u01/app/11.2.0/grid/cdata/rac1.olr
crs_home=/u01/app/12.2.0/grid

Note: This time the cluster started even though crs_home pointed to a wrong home, so changing crs_home in olr.loc has no effect on startup.
7) What happens when the OLR file itself is removed?
[root@rac1 cdata]# mv rac1.olr rac1.olr.bkp

2017-07-15 21:17:25.740: [ default][3041593040] OHASD Daemon Starting. Command string :restart
2017-07-15 21:17:25.740: [ default][3041593040] Initializing OLR
2017-07-15 21:17:25.741: [ OCROSD][3041593040]utread:3: Problem reading buffer a787000 buflen 4096 retval 0 phy_offset 102400 retry 0
2017-07-15 21:17:25.741: [ OCROSD][3041593040]utread:3: Problem reading buffer a787000 buflen 4096 retval 0 phy_offset 102400 retry 1
2017-07-15 21:17:25.741: [ OCROSD][3041593040]utread:3: Problem reading buffer a787000 buflen 4096 retval 0 phy_offset 102400 retry 2
2017-07-15 21:17:25.741: [ OCROSD][3041593040]utread:3: Problem reading buffer a787000 buflen 4096 retval 0 phy_offset 102400 retry 3
2017-07-15 21:17:25.741: [ OCROSD][3041593040]utread:3: Problem reading buffer a787000 buflen 4096 retval 0 phy_offset 102400 retry 4
2017-07-15 21:17:25.741: [ OCROSD][3041593040]utread:3: Problem reading buffer a787000 buflen 4096 retval 0 phy_offset 102400 retry 5
2017-07-15 21:17:25.741: [ OCRRAW][3041593040]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
2017-07-15 21:17:25.741: [ OCRRAW][3041593040]proprioini: all disks are not OCR/OLR formatted
2017-07-15 21:17:25.741: [ OCRRAW][3041593040]proprinit: Could not open raw device
2017-07-15 21:17:25.741: [ OCRAPI][3041593040]a_init:16!: Backend init unsuccessful : [26]
2017-07-15 21:17:25.741: [ CRSOCR][3041593040] OCR context init failure. Error: PROCL-26: Error while accessing the physical storage
2017-07-15 21:17:25.741: [ CRSOCR][3041593040][PANIC] OCR Context is NULL(File: caaocr.cpp, line: 145) <<---------------------
2017-07-15 21:17:25.741: [ default][3041593040][PANIC] CRSD Exiting. OCR Failed
2017-07-15 21:17:25.741: [ default][3041593040] Done.
8) Check the OLR integrity
[root@rac1 cdata]# ocrcheck -local
PROTL-602: Failed to retrieve data from the local registry
PROCL-26: Error while accessing the physical storage Operating System error [No such file or directory] [2]
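At this point it helps to distinguish "olr.loc points to the wrong place" from "the OLR file itself is missing or corrupt". A hedged sketch of that check (the -config option is available in 11.2; exact output is omitted here):

[root@rac1 cdata]# ocrcheck -config -local                        <<-- reports the OLR location as configured in olr.loc
[root@rac1 cdata]# cat /etc/oracle/olr.loc                        <<-- compare the pointer file...
[root@rac1 cdata]# ls -l /u01/app/11.2.0/grid/cdata/rac1.olr      <<-- ...against what is actually on disk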
9) Unfortunately, no backup was listed with -showbackup.
[root@rac1 cdata]# ocrconfig -local -showbackup
10) Generally, the OLR backup is located under $GRID_HOME/cdata/<hostname>/backup_<timestamp>.olr. I have an old backup.
[root@rac1 cdata]# ls -lt /u01/app/11.2.0/grid/cdata/rac1/
total 6580
-rw------- 1 root root 6725632 Jul 3 2014 backup_20140703_015127.olr
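The OLR is backed up automatically only when it is (re)configured (for example by root.sh), which is why -showbackup had nothing newer to list. Going forward, a manual backup can be taken at any time; a minimal sketch, run as root:

[root@rac1 cdata]# ocrconfig -local -manualbackup    <<-- writes a new backup under $GRID_HOME/cdata/<hostname>/
[root@rac1 cdata]# ocrconfig -local -showbackup      <<-- should now list the manual backup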
11) I tried to restore it, but the restore failed.
[root@rac1 cdata]# ocrconfig -local -restore /u01/app/11.2.0/grid/cdata/rac1/backup_20140703_015127.olr
Segmentation fault
12) Create an empty (zero-byte) file and try to start the cluster, which also failed.
[root@rac1 cdata]# touch rac1.olr
[root@rac1 cdata]# chown root:oinstall rac1.olr
[root@rac1 cdata]# chmod 600 rac1.olr
[root@rac1 cdata]# ls -lt
total 2912
-rw------- 1 root oinstall 0 Jul 15 21:13 rac1.olr

2017-07-15 21:17:25.740: [ default][3041593040] OHASD Daemon Starting. Command string :restart
2017-07-15 21:17:25.740: [ default][3041593040] Initializing OLR
2017-07-15 21:17:25.741: [ OCROSD][3041593040]utread:3: Problem reading buffer a787000 buflen 4096 retval 0 phy_offset 102400 retry 0
2017-07-15 21:17:25.741: [ OCROSD][3041593040]utread:3: Problem reading buffer a787000 buflen 4096 retval 0 phy_offset 102400 retry 1
2017-07-15 21:17:25.741: [ OCROSD][3041593040]utread:3: Problem reading buffer a787000 buflen 4096 retval 0 phy_offset 102400 retry 2
2017-07-15 21:17:25.741: [ OCROSD][3041593040]utread:3: Problem reading buffer a787000 buflen 4096 retval 0 phy_offset 102400 retry 3
2017-07-15 21:17:25.741: [ OCROSD][3041593040]utread:3: Problem reading buffer a787000 buflen 4096 retval 0 phy_offset 102400 retry 4
2017-07-15 21:17:25.741: [ OCROSD][3041593040]utread:3: Problem reading buffer a787000 buflen 4096 retval 0 phy_offset 102400 retry 5
2017-07-15 21:17:25.741: [ OCRRAW][3041593040]propriogid:1_1: Failed to read the whole bootblock. Assumes invalid format.
2017-07-15 21:17:25.741: [ OCRRAW][3041593040]proprioini: all disks are not OCR/OLR formatted
2017-07-15 21:17:25.741: [ OCRRAW][3041593040]proprinit: Could not open raw device
2017-07-15 21:17:25.741: [ OCRAPI][3041593040]a_init:16!: Backend init unsuccessful : [26]
2017-07-15 21:17:25.741: [ CRSOCR][3041593040] OCR context init failure. Error: PROCL-26: Error while accessing the physical storage
2017-07-15 21:17:25.741: [ CRSOCR][3041593040][PANIC] OCR Context is NULL(File: caaocr.cpp, line: 145 <<---------------------
2017-07-15 21:17:25.741: [ default][3041593040][PANIC] CRSD Exiting. OCR Failed
2017-07-15 21:17:25.741: [ default][3041593040] Done.
13) Finally, in order to restore the OLR you need to create a dummy (empty) file first and then restore into it. This looks like some kind of bug.
[root@rac1 cdata]# touch rac1.olr
[root@rac1 cdata]# ocrconfig -local -restore /u01/app/11.2.0/grid/cdata/rac1/backup_20140703_015127.olr
[root@rac1 cdata]# ls -lt
total 9500
-rw-r--r-- 1 root oinstall 272756736 Jul 15 21:43 rac1.olr
-rw------- 1 root oinstall 272756736 Jul 15 20:50 rac1.olr.bkp
drwxrwxr-x 2 oracle oinstall 4096 Jul 15 17:15 rac-scan
drwxr-xr-x 2 oracle oinstall 4096 Jul 3 2014 rac1
drwxr-xr-x 2 oracle oinstall 4096 Jul 3 2014 localhost

[root@rac1 cdata]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

File: alert.log
2017-07-15 21:43:37.744
[client(12968)]CRS-2102:The OLR was restored from file /u01/app/11.2.0/grid/cdata/rac1/backup_20140703_015127.olr.
2017-07-15 21:44:56.834
[ohasd(13001)]CRS-2112:The OLR service started on node rac1.
2017-07-15 21:44:56.844
[ohasd(13001)]CRS-1301:Oracle High Availability Service started on node rac1.
2017-07-15 21:44:56.876
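Putting the pieces together, the restore sequence that worked here was the following, run as root with the paths from this environment (the dummy-file step is the workaround for the segmentation fault seen in step 11):

[root@rac1 cdata]# crsctl stop crs -f                                       <<-- make sure the stack is fully down
[root@rac1 cdata]# touch /u01/app/11.2.0/grid/cdata/rac1.olr                <<-- dummy target file for the restore
[root@rac1 cdata]# chown root:oinstall /u01/app/11.2.0/grid/cdata/rac1.olr
[root@rac1 cdata]# chmod 600 /u01/app/11.2.0/grid/cdata/rac1.olr
[root@rac1 cdata]# ocrconfig -local -restore /u01/app/11.2.0/grid/cdata/rac1/backup_20140703_015127.olr
[root@rac1 cdata]# ocrcheck -local                                          <<-- verify the OLR before starting
[root@rac1 cdata]# crsctl start crs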
14) The final leg of OLR restoration: what to do when there is no OLR backup?
1. Deconfigure the existing Clusterware on the problematic node only. The other nodes should have Clusterware up and running. This command only deconfigures the Clusterware configuration for the local node; it should not touch or change the OCR.
# $GRID_HOME/crs/install/rootcrs.pl -deconfig -force
2. Rerun root.sh
[root@rac1 install]# locate rootcrs.pl
/u01/app/11.2.0/grid/crs/install/rootcrs.pl
/u01/app/oracle/product/11.2.0/db_1/crs/install/rootcrs.pl

[root@rac1 install]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd
ACFS-9200: Supported
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4544: Unable to connect to OHAS
CRS-4000: Command Stop failed, or completed with errors.
Successfully deconfigured Oracle clusterware stack on this node
[root@rac1 install]#

Log File : /u01/app/11.2.0/grid/cfgtoollogs/crsconfig/crsdelete_rac1.log

[root@rac1 install]# /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root script...

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac2, number 2, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
PRKC-1024 : Invalid netmask: eth0
add nodeapps on node=rac1 ... failed
Failed to perform new node configuration at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 8232.
/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed
[root@rac1 install]#

Although root.sh reported a failure, the cluster started without any issue and all the resources are up and running.
[root@rac1 crsconfig]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

[root@rac1 crsconfig]# srvctl config vip -n rac1
VIP exists: /192.168.10.200/192.168.10.200/192.168.10.0/255.255.255.0/eth0, hosting node rac1

[root@rac1 crsconfig]# srvctl status vip -n rac1
VIP 192.168.10.200 is enabled
VIP 192.168.10.200 is running on node: rac1

[root@rac1 crsconfig]# srvctl status nodeapps
VIP 192.168.10.200 is enabled
VIP 192.168.10.200 is running on node: rac1
VIP 192.168.10.201 is enabled
VIP 192.168.10.201 is running on node: rac2
VIP rac3-vip is enabled
VIP rac3-vip is running on node: rac3
Network is enabled
Network is running on node: rac1
Network is running on node: rac2
Network is running on node: rac3
GSD is disabled
GSD is not running on node: rac1
GSD is not running on node: rac2
GSD is not running on node: rac3
ONS is enabled
ONS daemon is running on node: rac1
ONS daemon is running on node: rac2
ONS daemon is running on node: rac3
[root@rac1 crsconfig]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:18:DB:D8
          inet addr:192.168.10.100  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe18:dbd8/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:48762 errors:0 dropped:0 overruns:0 frame:0
          TX packets:52526 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5742275 (5.4 MiB)  TX bytes:10620231 (10.1 MiB)
          Interrupt:75 Base address:0x2424

eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:18:DB:D8
          inet addr:192.168.10.106  Bcast:192.168.10.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:75 Base address:0x2424

eth0:2    Link encap:Ethernet  HWaddr 00:0C:29:18:DB:D8
          inet addr:192.168.10.200  Bcast:192.168.10.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:75 Base address:0x2424

Further debugging of the error: now let us look at the log file. For some strange reason it is reporting an error for the VIP netmask.

Log file : /u01/app/11.2.0/grid/cfgtoollogs/crsconfig/rootcrs_rac1.log

2017-07-15 22:42:05: IP list=#@=result[0]: res_name={ora.net1.network
2017-07-15 22:42:05: IP list=id={1
2017-07-15 22:42:05: IP list=subnet={192.168.10.0
2017-07-15 22:42:05: IP list=netmask={255.255.255.0
2017-07-15 22:42:05: IP list=interfaces={eth0
2017-07-15 22:42:05: IP list=dhcpservertype={static
2017-07-15 22:42:05: IP list=
2017-07-15 22:42:05: netmask = 255.255.255.0
2017-07-15 22:42:05: interface = eth0
2017-07-15 22:42:05: hostVIP = /255.255.255.0/eth0
2017-07-15 22:42:05: Invoking "/u01/app/11.2.0/grid/bin/srvctl add nodeapps -n rac1 -A "/255.255.255.0/eth0" "
2017-07-15 22:42:05: trace file=/u01/app/11.2.0/grid/cfgtoollogs/crsconfig/srvmcfg0.log
2017-07-15 22:42:05: Executing /u01/app/11.2.0/grid/bin/srvctl add nodeapps -n rac1 -A "/255.255.255.0/eth0"
2017-07-15 22:42:05: Executing cmd: /u01/app/11.2.0/grid/bin/srvctl add nodeapps -n rac1 -A "/255.255.255.0/eth0"
2017-07-15 22:42:06: Command output:
> PRKC-1024 : Invalid netmask: eth0
>End Command output
2017-07-15 22:42:06: "/u01/app/11.2.0/grid/bin/srvctl add nodeapps -n rac1 -A "/255.255.255.0/eth0" " failed with status 1.
2017-07-15 22:42:06: add nodeapps on node=rac1 ... failed
2017-07-15 22:42:06: Running as user oracle: /u01/app/11.2.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_NODECONFIG -state FAIL
2017-07-15 22:42:06: s_run_as_user2: Running /bin/su oracle -c ' /u01/app/11.2.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_NODECONFIG -state FAIL '
2017-07-15 22:42:06: Removing file /tmp/filebsciVe
2017-07-15 22:42:06: Successfully removed file: /tmp/filebsciVe
2017-07-15 22:42:06: /bin/su successfully executed
2017-07-15 22:42:06: Succeeded in writing the checkpoint:'ROOTCRS_NODECONFIG' with status:FAIL
2017-07-15 22:42:06: ###### Begin DIE Stack Trace ######
2017-07-15 22:42:06:    Package         File                 Line Calling
2017-07-15 22:42:06:    --------------- -------------------- ---- ----------
2017-07-15 22:42:06: 1: main            rootcrs.pl            322 crsconfig_lib::dietrap
2017-07-15 22:42:06: 2: crsconfig_lib   crsconfig_lib.pm     8232 main::__ANON__
2017-07-15 22:42:06: 3: crsconfig_lib   crsconfig_lib.pm     8192 crsconfig_lib::configNode
2017-07-15 22:42:06: 4: main            rootcrs.pl            746 crsconfig_lib::perform_configNode
2017-07-15 22:42:06: ####### End DIE Stack Trace #######
2017-07-15 22:42:06: 'ROOTCRS_NODECONFIG' checkpoint has failed

Log file : /u01/app/11.2.0/grid/cfgtoollogs/crsconfig/srvmcfg0.log

[main] [ 2017-07-15 22:42:06.489 IST ] [CommandLineParser.isValidNodeName:3857] nodename=rac1 hostname: rac1
[main] [ 2017-07-15 22:42:06.489 IST ] [CommandLineParser.buildVIPAddress:4181] vipAddr # of fields is 2
[main] [ 2017-07-15 22:42:06.489 IST ] [CommandLineParser.buildVIPAddress:4232] vipAddr =255.255.255.0
[main] [ 2017-07-15 22:42:06.491 IST ] [VIPAddress.<init>:138] vipName is null <<---------------- Maybe this is the reason?
[main] [ 2017-07-15 22:42:06.491 IST ] [VIPAddress.<init>:139] viporname is 255.255.255.0
[main] [ 2017-07-15 22:42:06.505 IST ] [DottedDecimalBinaryConverter.<init>:51] #tokens = 1
[main] [ 2017-07-15 22:42:06.506 IST ] [VIPAddress.<init>:211] VIPAddress: netmask eth0 is not valid
[main] [ 2017-07-15 22:42:06.506 IST ] [VIPAddress.<init>:214] exception generated java.lang.NumberFormatException: For input string: "eth0
As seen earlier, the VIPs for rac1 and rac2 are registered by IP address only and have no VIP name; it looks like this is why the error was reported.

[root@rac1 crsconfig]# srvctl status nodeapps
VIP 192.168.10.200 is enabled
VIP 192.168.10.200 is running on node: rac1
VIP 192.168.10.201 is enabled
VIP 192.168.10.201 is running on node: rac2
VIP rac3-vip is enabled
VIP rac3-vip is running on node: rac3
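If you want the rac1/rac2 VIPs to carry a name the way rac3-vip does, one possible clean-up is to re-register the nodeapps VIP with a name. The sketch below is illustrative only: the names rac1-vip/rac2-vip are assumptions and must resolve (in DNS or /etc/hosts) to the existing VIP addresses, and the modification requires a brief VIP outage, so plan it for a quiet window.

[root@rac1 ~]# srvctl stop vip -n rac1 -f                                      <<-- brief outage for the node VIP
[root@rac1 ~]# srvctl modify nodeapps -n rac1 -A rac1-vip/255.255.255.0/eth0   <<-- rac1-vip assumed to resolve to 192.168.10.200
[root@rac1 ~]# srvctl start vip -n rac1
[root@rac1 ~]# srvctl config vip -n rac1                                       <<-- confirm the VIP now shows a name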