This was a strange issue I ran into while installing the database software on a two-node RAC system.
Oracle Grid Infrastructure was installed on both nodes, the voting disks and OCR were accessible from both nodes, and the ASM instances were up and running on both nodes.
But strangely, the DB software installation failed with the error "[INS-35354] The system on which you are attempting to install Oracle RAC is not part of a valid cluster."
[oracle@dr-node1 database]$ ./runInstaller -silent -responseFile /u02/db_install.rsp
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 9250 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 10210 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-07-15_07-45-50PM. Please wait ...
[oracle@dr-node1 database]$
[FATAL] [INS-35354] The system on which you are attempting to install Oracle RAC is not part of a valid cluster.
CAUSE: Before you can install Oracle RAC, you must install Oracle Grid Infrastructure on all servers (Oracle Clusterware and Oracle ASM) to create a cluster.
ACTION: Oracle Grid Infrastructure is not installed. Install it either from the separate installation media included in your media pack, or install it by downloading it from Electronic Product Delivery (EPD) or the Oracle Technology Network (OTN). Oracle Grid Infrastructure normally is installed by a different operating system user than the one used for Oracle Database. It may need to be installed by your system administrator. See the installation guide for more details.
On checking the inventory file, I could see that the grid home entry was missing the CRS="true" flag on both nodes, which is what was preventing the DB software from being installed with the "Cluster" option. The inventory needs to be updated with CRS="true", and this can be done using the OUI runInstaller utility.
[oracle@dr-node1 database]$ cat /etc/oraInst.loc
inventory_loc=/u01/app/OraInventory
inst_group=oinstall
[oracle@dr-node1 oracle]$ cd /u01/app/OraInventory
[oracle@dr-node1 OraInventory]$ ls -lrt
total 28
-rw-rw---- 1 oracle oinstall  293 Jul 12 11:55 oraInstaller.properties
drwxrwx--- 2 oracle oinstall 4096 Jul 12 11:55 oui
-rw-rw---- 1 oracle oinstall   37 Jul 12 11:58 install.platform
drwxrwx--- 2 oracle oinstall 4096 Jul 12 11:58 ContentsXML
-rwxrwx--- 1 oracle oinstall 1623 Jul 12 12:03 orainstRoot.sh
-rw-rw---- 1 oracle oinstall   56 Jul 12 12:03 oraInst.loc
drwxrwx--- 2 oracle oinstall 4096 Jul 15 19:46 logs
[oracle@dr-node1 OraInventory]$ cd ContentsXML/
[oracle@dr-node1 ContentsXML]$ ls -lrt
total 12
-rw-rw---- 1 oracle oinstall 503 Jul 12 11:58 inventory.xml
-rw-rw---- 1 oracle oinstall 270 Jul 12 11:58 libs.xml
-rw-rw---- 1 oracle oinstall 307 Jul 12 11:58 comps.xml
[oracle@dr-node1 ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2010, Oracle. All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>11.2.0.2.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0.2/grid" TYPE="O" IDX="1">
   <NODE_LIST>
      <NODE NAME="dr-node1"/>
      <NODE NAME="dr-node2"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
</INVENTORY>
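This check can also be done programmatically rather than by eyeballing the file. A minimal Python sketch (the function and the idea of scripting this are mine, not part of the original post) that flags grid homes whose inventory entry lacks CRS="true":

```python
import xml.etree.ElementTree as ET

def homes_missing_crs(inventory_xml: str):
    """Return names of HOME entries in inventory.xml that do not
    carry the CRS="true" attribute on the <HOME> element."""
    root = ET.fromstring(inventory_xml)
    missing = []
    for home in root.findall("./HOME_LIST/HOME"):
        if home.get("CRS", "").lower() != "true":
            missing.append(home.get("NAME"))
    return missing
```

Feed it the contents of /u01/app/OraInventory/ContentsXML/inventory.xml on each node; any home name it returns is a candidate for the -updateNodeList fix described below.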
It's clear that CRS="true" is missing from the grid home entry in the inventory file. Let's update the inventory through the OUI runInstaller utility with the -updateNodeList option.
[oracle@dr-node1 ~]$ /u01/app/11.2.0.2/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME="/u01/app/11.2.0.2/grid" CRS=true
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 10208 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/OraInventory
'UpdateNodeList' was successful.
[oracle@dr-node1 ~]$ cd /u01/app/OraInventory/
[oracle@dr-node1 OraInventory]$ ls -lrt
total 32
-rw-rw---- 1 oracle oinstall  293 Jul 12 11:55 oraInstaller.properties
drwxrwx--- 2 oracle oinstall 4096 Jul 12 11:55 oui
-rwxrwx--- 1 oracle oinstall 1623 Jul 12 12:03 orainstRoot.sh
-rw-rw---- 1 oracle oinstall   56 Jul 12 12:03 oraInst.loc
drwxrwx--- 2 oracle oinstall 4096 Jul 15 19:49 ContentsXML
drwxrwx--- 3 oracle oinstall 4096 Jul 15 19:50 backup
-rw-rw---- 1 oracle oinstall   37 Jul 15 19:50 install.platform
drwxrwx--- 2 oracle oinstall 4096 Jul 15 19:51 logs
[oracle@dr-node1 OraInventory]$ cd ContentsXML/
[oracle@dr-node1 ContentsXML]$ ls -lrt
total 12
-rw-rw---- 1 oracle oinstall 514 Jul 15 19:50 inventory.xml
-rw-rw---- 1 oracle oinstall 270 Jul 15 19:50 libs.xml
-rw-rw---- 1 oracle oinstall 307 Jul 15 19:50 comps.xml
[oracle@dr-node1 ContentsXML]$
[oracle@dr-node1 ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2010, Oracle. All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>11.2.0.2.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0.2/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="dr-node1"/>
      <NODE NAME="dr-node2"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
</INVENTORY>
[oracle@dr-node1 ContentsXML]$
The inventory file now carries CRS="true" for the grid home. Run the same runInstaller -updateNodeList command on the remaining nodes as well, since each node maintains its own copy of the inventory.
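Since -updateNodeList fixes only the inventory on the node where it is run, the command has to be repeated on every node. A hypothetical helper that builds the per-node invocations (wrapping the remote runs in ssh is my assumption; the original session simply runs the command locally on each node):

```python
import shlex

def update_nodelist_cmds(grid_home: str, nodes):
    """Build the ssh command string to run OUI's -updateNodeList
    against grid_home on each node in the cluster."""
    cmd = (f'{grid_home}/oui/bin/runInstaller -updateNodeList '
           f'ORACLE_HOME="{grid_home}" CRS=true')
    # shlex.quote keeps the embedded double quotes intact on the remote side
    return {node: f"ssh {node} {shlex.quote(cmd)}" for node in nodes}
```

This only assembles the command strings; actually executing them (and verifying 'UpdateNodeList' was successful on each node) is still up to the operator.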
With that in place, the DB software installation with the Cluster option runs through successfully.
[oracle@dr-node1 database]$ ./runInstaller -silent -responseFile /u02/db_install.rsp -ignorePrereq -ignoreSysPrereqs
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 9043 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 10147 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-07-15_07-58-21PM. Please wait ...
[oracle@dr-node1 database]$ You can find the log of this install session at:
/u01/app/OraInventory/logs/installActions2015-07-15_07-58-21PM.log
As a root user, execute the following script(s):
        1. /u01/app/oracle/product/11.2.0.2/db_1/root.sh

Successfully Setup Software.
Here we go !!
COPYRIGHT
© Shivananda Rao P, 2012 to 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Shivananda Rao and http://www.shivanandarao-oracle.com with appropriate and specific direction to the original content.
DISCLAIMER
The views expressed here are my own and do not necessarily reflect the views of any other individual, business entity, or organization. The views expressed by visitors on this blog are theirs solely and may not reflect mine.
This article demonstrates how to add a node to a RAC cluster.
Environment:
OS: Oracle Enterprise Linux (OEL) 5.5
Existing Nodes : 10gnode1, 10gnode2
Node to be added: 10gnode3
Database Name: PRIMT
DB Version: 11.2.0.3
Here are the details of the public, private, and virtual IP addresses of all the nodes in the cluster environment. The /etc/hosts file is the same across all three nodes.
[oracle@10gnode1 ~]$ cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6

# PUBLIC
192.168.2.101   10gnode1.mydomain 10gnode1
192.168.2.102   10gnode2.mydomain 10gnode2
192.168.2.103   10gnode3.mydomain 10gnode3

# PRIVATE
192.168.0.101   10gnode1-priv.mydomain 10gnode1-priv
192.168.0.102   10gnode2-priv.mydomain 10gnode2-priv
192.168.0.103   10gnode3-priv.mydomain 10gnode3-priv

# VIRTUAL
192.168.2.111   10gnode1-vip.mydomain 10gnode1-vip
192.168.2.112   10gnode2-vip.mydomain 10gnode2-vip
192.168.2.113   10gnode3-vip.mydomain 10gnode3-vip

# SCAN
192.168.2.203   node-scan.mydomain node-scan
192.168.2.204   node-scan.mydomain node-scan
192.168.2.205   node-scan.mydomain node-scan
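A quick way to catch gaps before cluvfy does is to confirm that every node has its public, -priv, and -vip names present in the hosts file. A small illustrative Python sketch (not part of the original procedure; the naming convention is taken from the hosts file above):

```python
def missing_host_entries(hosts_text: str, nodes):
    """For each node, report which of the expected name forms
    (public, <node>-priv, <node>-vip) have no entry in hosts_text."""
    names = set()
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0]      # drop comments
        parts = line.split()
        names.update(parts[1:])           # every alias after the IP
    gaps = {}
    for node in nodes:
        absent = [n for n in (node, f"{node}-priv", f"{node}-vip")
                  if n not in names]
        if absent:
            gaps[node] = absent
    return gaps
```

An empty dict means all three address classes are defined for every node; anything else names exactly which entries still need to be added before the new node joins.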
The next step is to verify and establish passwordless SSH connectivity between the running nodes and the new node.
[oracle@10gnode1 ~]$ ssh 10gnode3 date
The authenticity of host '10gnode3 (192.168.2.103)' can't be established.
RSA key fingerprint is d2:e4:b0:df:11:11:cb:1d:16:15:25:79:f7:be:96:04.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10gnode3,192.168.2.103' (RSA) to the list of known hosts.
Sat Jan 24 17:43:37 IST 2015
[oracle@10gnode1 ~]$ ssh 10gnode3 hostname
10gnode3.mydomain

[oracle@10gnode2 ~]$ ssh 10gnode3 date
The authenticity of host '10gnode3 (192.168.2.103)' can't be established.
RSA key fingerprint is d2:e4:b0:df:11:11:cb:1d:16:15:25:79:f7:be:96:04.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10gnode3,192.168.2.103' (RSA) to the list of known hosts.
Sat Jan 24 17:44:01 IST 2015
[oracle@10gnode2 ~]$ ssh 10gnode3 hostname
10gnode3.mydomain

[oracle@10gnode3 ~]$ ssh 10gnode1 date
Sat Jan 24 17:44:17 IST 2015
[oracle@10gnode3 ~]$ ssh 10gnode1 hostname
10gnode1.mydomain
[oracle@10gnode3 ~]$ ssh 10gnode2 date
Sat Jan 24 17:44:27 IST 2015
[oracle@10gnode3 ~]$ ssh 10gnode2 hostname
10gnode2.mydomain
[oracle@10gnode3 ~]$
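The full verification is really a matrix: every node must reach every other node, in both directions, without a password prompt. A sketch (my own addition, not from the original post) that generates the probe commands; ssh's BatchMode=yes option makes a connection fail outright instead of prompting, which is convenient for spotting missing key setup:

```python
def ssh_probe_matrix(nodes):
    """For each source node, the list of ssh probes that must succeed
    without a password prompt for user equivalence to hold."""
    return {src: [f"ssh -o BatchMode=yes {dst} hostname"
                  for dst in nodes if dst != src]
            for src in nodes}
```

Each command list is meant to be run while logged in as oracle on the corresponding source node; any non-zero exit points at a missing or mismatched key.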
Using CLUVFY, let's run the pre-requisite checks for the new node. This is done by running the following command from one of the existing nodes.
cluvfy stage -pre nodeadd -n <new-node-to-be-added> -verbose
[oracle@10gnode1 ~]$ cd $ORA_CRS_HOME/bin [oracle@10gnode1 bin]$ pwd /u01/app/11.2.0/grid3/bin [oracle@10gnode1 bin]$ [oracle@10gnode1 bin]$ cluvfy stage -pre nodeadd -n 10gnode3 -verbose [oracle@10gnode1 bin]$ ./cluvfy stage -pre nodeadd -n 10gnode3 -verbose Performing pre-checks for node addition Checking node reachability... Check: Node reachability from node "10gnode1" Destination Node Reachable? ------------------------------------ ------------------------ 10gnode3 yes Result: Node reachability check passed from node "10gnode1" Checking user equivalence... Check: User equivalence for user "oracle" Node Name Status ------------------------------------ ------------------------ 10gnode3 passed Result: User equivalence check passed for user "oracle" Checking node connectivity... Checking hosts config file... Node Name Status ------------------------------------ ------------------------ 10gnode1 passed 10gnode2 passed 10gnode3 passed Verification of the hosts config file successful Interface information for node "10gnode1" Name IP Address Subnet Gateway Def. Gateway HW Address MTU ------ --------------- --------------- --------------- --------------- ----------------- ------ eth0 192.168.2.101 192.168.2.0 0.0.0.0 UNKNOWN 08:00:27:AF:22:42 1500 eth0 192.168.2.203 192.168.2.0 0.0.0.0 UNKNOWN 08:00:27:AF:22:42 1500 eth0 192.168.2.111 192.168.2.0 0.0.0.0 UNKNOWN 08:00:27:AF:22:42 1500 eth1 192.168.0.101 192.168.0.0 0.0.0.0 UNKNOWN 08:00:27:0E:E3:05 1500 eth1 169.254.125.10 169.254.0.0 0.0.0.0 UNKNOWN 08:00:27:0E:E3:05 1500 eth2 192.168.75.103 192.168.75.0 0.0.0.0 UNKNOWN 08:00:27:73:78:FF 1500 Interface information for node "10gnode2" Name IP Address Subnet Gateway Def. 
Gateway HW Address MTU ------ --------------- --------------- --------------- --------------- ----------------- ------ eth0 192.168.2.102 192.168.2.0 0.0.0.0 192.168.2.1 08:00:27:8D:58:4D 1500 eth0 192.168.2.112 192.168.2.0 0.0.0.0 192.168.2.1 08:00:27:8D:58:4D 1500 eth1 192.168.0.102 192.168.0.0 0.0.0.0 192.168.2.1 08:00:27:32:B8:15 1500 eth1 169.254.60.182 169.254.0.0 0.0.0.0 192.168.2.1 08:00:27:32:B8:15 1500 eth2 192.168.75.102 192.168.75.0 0.0.0.0 192.168.2.1 08:00:27:76:C0:28 1500 Interface information for node "10gnode3" Name IP Address Subnet Gateway Def. Gateway HW Address MTU ------ --------------- --------------- --------------- --------------- ----------------- ------ eth0 192.168.2.103 192.168.2.0 0.0.0.0 192.168.2.1 08:00:27:53:19:C6 1500 eth1 192.168.0.103 192.168.0.0 0.0.0.0 192.168.2.1 08:00:27:BA:22:46 1500 eth2 192.168.75.104 192.168.75.0 0.0.0.0 192.168.2.1 08:00:27:22:DE:35 1500 Check: Node connectivity for interface "eth0" Source Destination Connected? ------------------------------ ------------------------------ ---------------- 10gnode1[192.168.2.101] 10gnode1[192.168.2.203] yes 10gnode1[192.168.2.101] 10gnode1[192.168.2.111] yes 10gnode1[192.168.2.101] 10gnode2[192.168.2.102] yes 10gnode1[192.168.2.101] 10gnode2[192.168.2.112] yes 10gnode1[192.168.2.101] 10gnode3[192.168.2.103] yes 10gnode1[192.168.2.203] 10gnode1[192.168.2.111] yes 10gnode1[192.168.2.203] 10gnode2[192.168.2.102] yes 10gnode1[192.168.2.203] 10gnode2[192.168.2.112] yes 10gnode1[192.168.2.203] 10gnode3[192.168.2.103] yes 10gnode1[192.168.2.111] 10gnode2[192.168.2.102] yes 10gnode1[192.168.2.111] 10gnode2[192.168.2.112] yes 10gnode1[192.168.2.111] 10gnode3[192.168.2.103] yes 10gnode2[192.168.2.102] 10gnode2[192.168.2.112] yes 10gnode2[192.168.2.102] 10gnode3[192.168.2.103] yes 10gnode2[192.168.2.112] 10gnode3[192.168.2.103] yes Result: Node connectivity passed for interface "eth0" Check: TCP connectivity of subnet "192.168.2.0" Source Destination Connected? 
------------------------------ ------------------------------ ---------------- 10gnode1:192.168.2.101 10gnode1:192.168.2.203 passed 10gnode1:192.168.2.101 10gnode1:192.168.2.111 passed 10gnode1:192.168.2.101 10gnode2:192.168.2.102 passed 10gnode1:192.168.2.101 10gnode2:192.168.2.112 passed 10gnode1:192.168.2.101 10gnode3:192.168.2.103 passed Result: TCP connectivity check passed for subnet "192.168.2.0" Checking subnet mask consistency... Subnet mask consistency check passed for subnet "192.168.2.0". Subnet mask consistency check passed. Result: Node connectivity check passed Checking multicast communication... Checking subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0"... Check of subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0" passed. Check of multicast communication passed. Checking CRS integrity... Clusterware version consistency passed The Oracle Clusterware is healthy on node "10gnode1" The Oracle Clusterware is healthy on node "10gnode2" CRS integrity check passed Checking shared resources... Checking CRS home location... PRVG-1013 : The path "/u01/app/11.2.0/grid3" does not exist or cannot be created on the nodes to be added Result: Shared resources check for node addition failed Checking node connectivity... Checking hosts config file... Node Name Status ------------------------------------ ------------------------ 10gnode1 passed 10gnode2 passed 10gnode3 passed Verification of the hosts config file successful Interface information for node "10gnode1" Name IP Address Subnet Gateway Def. 
Gateway HW Address MTU ------ --------------- --------------- --------------- --------------- ----------------- ------ eth0 192.168.2.101 192.168.2.0 0.0.0.0 UNKNOWN 08:00:27:AF:22:42 1500 eth0 192.168.2.203 192.168.2.0 0.0.0.0 UNKNOWN 08:00:27:AF:22:42 1500 eth0 192.168.2.111 192.168.2.0 0.0.0.0 UNKNOWN 08:00:27:AF:22:42 1500 eth1 192.168.0.101 192.168.0.0 0.0.0.0 UNKNOWN 08:00:27:0E:E3:05 1500 eth1 169.254.125.10 169.254.0.0 0.0.0.0 UNKNOWN 08:00:27:0E:E3:05 1500 eth2 192.168.75.103 192.168.75.0 0.0.0.0 UNKNOWN 08:00:27:73:78:FF 1500 Interface information for node "10gnode2" Name IP Address Subnet Gateway Def. Gateway HW Address MTU ------ --------------- --------------- --------------- --------------- ----------------- ------ eth0 192.168.2.102 192.168.2.0 0.0.0.0 192.168.2.1 08:00:27:8D:58:4D 1500 eth0 192.168.2.112 192.168.2.0 0.0.0.0 192.168.2.1 08:00:27:8D:58:4D 1500 eth1 192.168.0.102 192.168.0.0 0.0.0.0 192.168.2.1 08:00:27:32:B8:15 1500 eth1 169.254.60.182 169.254.0.0 0.0.0.0 192.168.2.1 08:00:27:32:B8:15 1500 eth2 192.168.75.102 192.168.75.0 0.0.0.0 192.168.2.1 08:00:27:76:C0:28 1500 Interface information for node "10gnode3" Name IP Address Subnet Gateway Def. Gateway HW Address MTU ------ --------------- --------------- --------------- --------------- ----------------- ------ eth0 192.168.2.103 192.168.2.0 0.0.0.0 192.168.2.1 08:00:27:53:19:C6 1500 eth1 192.168.0.103 192.168.0.0 0.0.0.0 192.168.2.1 08:00:27:BA:22:46 1500 eth2 192.168.75.104 192.168.75.0 0.0.0.0 192.168.2.1 08:00:27:22:DE:35 1500 Check: Node connectivity for interface "eth0" Source Destination Connected? 
------------------------------ ------------------------------ ---------------- 10gnode1[192.168.2.101] 10gnode1[192.168.2.203] yes 10gnode1[192.168.2.101] 10gnode1[192.168.2.111] yes 10gnode1[192.168.2.101] 10gnode2[192.168.2.102] yes 10gnode1[192.168.2.101] 10gnode2[192.168.2.112] yes 10gnode1[192.168.2.101] 10gnode3[192.168.2.103] yes 10gnode1[192.168.2.203] 10gnode1[192.168.2.111] yes 10gnode1[192.168.2.203] 10gnode2[192.168.2.102] yes 10gnode1[192.168.2.203] 10gnode2[192.168.2.112] yes 10gnode1[192.168.2.203] 10gnode3[192.168.2.103] yes 10gnode1[192.168.2.111] 10gnode2[192.168.2.102] yes 10gnode1[192.168.2.111] 10gnode2[192.168.2.112] yes 10gnode1[192.168.2.111] 10gnode3[192.168.2.103] yes 10gnode2[192.168.2.102] 10gnode2[192.168.2.112] yes 10gnode2[192.168.2.102] 10gnode3[192.168.2.103] yes 10gnode2[192.168.2.112] 10gnode3[192.168.2.103] yes Result: Node connectivity passed for interface "eth0" Check: TCP connectivity of subnet "192.168.2.0" Source Destination Connected? ------------------------------ ------------------------------ ---------------- 10gnode1:192.168.2.101 10gnode1:192.168.2.203 passed 10gnode1:192.168.2.101 10gnode1:192.168.2.111 passed 10gnode1:192.168.2.101 10gnode2:192.168.2.102 passed 10gnode1:192.168.2.101 10gnode2:192.168.2.112 passed 10gnode1:192.168.2.101 10gnode3:192.168.2.103 passed Result: TCP connectivity check passed for subnet "192.168.2.0" Check: Node connectivity for interface "eth1" Source Destination Connected? ------------------------------ ------------------------------ ---------------- 10gnode1[192.168.0.101] 10gnode2[192.168.0.102] yes 10gnode1[192.168.0.101] 10gnode3[192.168.0.103] yes 10gnode2[192.168.0.102] 10gnode3[192.168.0.103] yes Result: Node connectivity passed for interface "eth1" Check: TCP connectivity of subnet "192.168.0.0" Source Destination Connected? 
------------------------------ ------------------------------ ---------------- 10gnode1:192.168.0.101 10gnode2:192.168.0.102 passed 10gnode1:192.168.0.101 10gnode3:192.168.0.103 passed Result: TCP connectivity check passed for subnet "192.168.0.0" Checking subnet mask consistency... Subnet mask consistency check passed for subnet "192.168.2.0". Subnet mask consistency check passed for subnet "192.168.0.0". Subnet mask consistency check passed. Result: Node connectivity check passed Checking multicast communication... Checking subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0"... Check of subnet "192.168.2.0" for multicast communication with multicast group "230.0.1.0" passed. Checking subnet "192.168.0.0" for multicast communication with multicast group "230.0.1.0"... Check of subnet "192.168.0.0" for multicast communication with multicast group "230.0.1.0" passed. Check of multicast communication passed. Check: Total memory Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 2.9718GB (3116120.0KB) 1.5GB (1572864.0KB) passed 10gnode1 2.9718GB (3116120.0KB) 1.5GB (1572864.0KB) passed Result: Total memory check passed Check: Available memory Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 2.9093GB (3050644.0KB) 50MB (51200.0KB) passed 10gnode1 1.9784GB (2074528.0KB) 50MB (51200.0KB) passed Result: Available memory check passed Check: Swap space Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 9.767GB (1.0241396E7KB) 2.9718GB (3116120.0KB) passed 10gnode1 9.767GB (1.0241396E7KB) 2.9718GB (3116120.0KB) passed Result: Swap space check passed Check: Free disk space for "10gnode3:/u01/app/11.2.0/grid3" Path Node Name Mount point Available Required Status ---------------- ------------ ------------ ------------ ------------ ------------ 
/u01/app/11.2.0/grid3 10gnode3 /u01 18.6406GB 5.5GB passed Result: Free disk space check passed for "10gnode3:/u01/app/11.2.0/grid3" Check: Free disk space for "10gnode1:/u01/app/11.2.0/grid3" Path Node Name Mount point Available Required Status ---------------- ------------ ------------ ------------ ------------ ------------ /u01/app/11.2.0/grid3 10gnode1 /u01 9.3252GB 5.5GB passed Result: Free disk space check passed for "10gnode1:/u01/app/11.2.0/grid3" Check: Free disk space for "10gnode3:/tmp" Path Node Name Mount point Available Required Status ---------------- ------------ ------------ ------------ ------------ ------------ /tmp 10gnode3 / 26.9141GB 1GB passed Result: Free disk space check passed for "10gnode3:/tmp" Check: Free disk space for "10gnode1:/tmp" Path Node Name Mount point Available Required Status ---------------- ------------ ------------ ------------ ------------ ------------ /tmp 10gnode1 / 26.917GB 1GB passed Result: Free disk space check passed for "10gnode1:/tmp" Check: User existence for "oracle" Node Name Status Comment ------------ ------------------------ ------------------------ 10gnode3 passed exists(500) 10gnode1 passed exists(500) Checking for multiple users with UID value 500 Result: Check for multiple users with UID value 500 passed Result: User existence check passed for "oracle" Check: Run level Node Name run level Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 5 3,5 passed 10gnode1 5 3,5 passed Result: Run level check passed Check: Hard limits for "maximum open file descriptors" Node Name Type Available Required Status ---------------- ------------ ------------ ------------ ---------------- 10gnode1 hard 65536 65536 passed 10gnode3 hard 65536 65536 passed Result: Hard limits check passed for "maximum open file descriptors" Check: Soft limits for "maximum open file descriptors" Node Name Type Available Required Status ---------------- ------------ ------------ ------------ 
---------------- 10gnode1 soft 1024 1024 passed 10gnode3 soft 1024 1024 passed Result: Soft limits check passed for "maximum open file descriptors" Check: Hard limits for "maximum user processes" Node Name Type Available Required Status ---------------- ------------ ------------ ------------ ---------------- 10gnode1 hard 16384 16384 passed 10gnode3 hard 16384 16384 passed Result: Hard limits check passed for "maximum user processes" Check: Soft limits for "maximum user processes" Node Name Type Available Required Status ---------------- ------------ ------------ ------------ ---------------- 10gnode1 soft 2047 2047 passed 10gnode3 soft 2047 2047 passed Result: Soft limits check passed for "maximum user processes" Check: System architecture Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 i686 x86 passed 10gnode1 i686 x86 passed Result: System architecture check passed Check: Kernel version Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 2.6.18-8.el5 2.6.18 passed 10gnode1 2.6.18-8.el5 2.6.18 passed Result: Kernel version check passed Check: Kernel parameter for "semmsl" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ 10gnode1 250 250 250 passed 10gnode3 250 250 250 passed Result: Kernel parameter check passed for "semmsl" Check: Kernel parameter for "semmns" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ 10gnode1 32000 32000 32000 passed 10gnode3 32000 32000 32000 passed Result: Kernel parameter check passed for "semmns" Check: Kernel parameter for "semopm" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ 10gnode1 100 100 100 passed 10gnode3 100 100 100 passed Result: 
Kernel parameter check passed for "semopm" Check: Kernel parameter for "semmni" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ 10gnode1 128 128 128 passed 10gnode3 128 128 128 passed Result: Kernel parameter check passed for "semmni" Check: Kernel parameter for "shmmax" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ 10gnode1 2147483648 2147483648 1595453440 passed 10gnode3 2147483648 2147483648 1595453440 passed Result: Kernel parameter check passed for "shmmax" Check: Kernel parameter for "shmmni" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ 10gnode1 4096 4096 4096 passed 10gnode3 4096 4096 4096 passed Result: Kernel parameter check passed for "shmmni" Check: Kernel parameter for "shmall" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ 10gnode1 2097152 2097152 2097152 passed 10gnode3 2097152 2097152 2097152 passed Result: Kernel parameter check passed for "shmall" Check: Kernel parameter for "file-max" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ 10gnode1 6815744 6815744 6815744 passed 10gnode3 6815744 6815744 6815744 passed Result: Kernel parameter check passed for "file-max" Check: Kernel parameter for "ip_local_port_range" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ 10gnode1 between 9000.0 & 65500.0 between 9000.0 & 65500.0 between 9000.0 & 65500.0 passed 10gnode3 between 9000.0 & 65500.0 between 9000.0 & 65500.0 between 9000.0 & 65500.0 passed Result: Kernel parameter check passed for "ip_local_port_range" Check: Kernel parameter for 
"rmem_default" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ 10gnode1 1048576 1048576 262144 passed 10gnode3 1048576 1048576 262144 passed Result: Kernel parameter check passed for "rmem_default" Check: Kernel parameter for "rmem_max" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ 10gnode1 4194304 4194304 4194304 passed 10gnode3 4194304 4194304 4194304 passed Result: Kernel parameter check passed for "rmem_max" Check: Kernel parameter for "wmem_default" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ 10gnode1 262144 262144 262144 passed 10gnode3 262144 262144 262144 passed Result: Kernel parameter check passed for "wmem_default" Check: Kernel parameter for "wmem_max" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ 10gnode1 1048576 1048576 1048576 passed 10gnode3 1048576 1048576 1048576 passed Result: Kernel parameter check passed for "wmem_max" Check: Kernel parameter for "aio-max-nr" Node Name Current Configured Required Status Comment ---------------- ------------ ------------ ------------ ------------ ------------ 10gnode1 1048576 1048576 1048576 passed 10gnode3 1048576 1048576 1048576 passed Result: Kernel parameter check passed for "aio-max-nr" Check: Package existence for "ocfs2-tools" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 ocfs2-tools-1.2.7-1.el5 ocfs2-tools-1.2.7 passed 10gnode1 ocfs2-tools-1.2.7-1.el5 ocfs2-tools-1.2.7 passed Result: Package existence check passed for "ocfs2-tools" Check: Package existence for "make" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 
make-3.81-1.1 make-3.81 passed 10gnode1 make-3.81-1.1 make-3.81 passed Result: Package existence check passed for "make" Check: Package existence for "binutils" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 binutils-2.17.50.0.6-2.el5 binutils-2.17.50.0.6 passed 10gnode1 binutils-2.17.50.0.6-2.el5 binutils-2.17.50.0.6 passed Result: Package existence check passed for "binutils" Check: Package existence for "gcc" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 gcc-4.1.2-48.el5 gcc-4.1.2 passed 10gnode1 gcc-4.1.2-48.el5 gcc-4.1.2 passed Result: Package existence check passed for "gcc" Check: Package existence for "gcc-c++" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 gcc-c++-4.1.2-48.el5 gcc-c++-4.1.2 passed 10gnode1 gcc-c++-4.1.2-48.el5 gcc-c++-4.1.2 passed Result: Package existence check passed for "gcc-c++" Check: Package existence for "libgomp" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 libgomp-4.4.0-6.el5 libgomp-4.1.2 passed 10gnode1 libgomp-4.4.0-6.el5 libgomp-4.1.2 passed Result: Package existence check passed for "libgomp" Check: Package existence for "libaio" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 libaio-0.3.106-5 libaio-0.3.106 passed 10gnode1 libaio-0.3.106-5 libaio-0.3.106 passed Result: Package existence check passed for "libaio" Check: Package existence for "glibc" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 glibc-2.5-49 glibc-2.5-24 passed 10gnode1 glibc-2.5-49 glibc-2.5-24 passed Result: Package existence check passed for "glibc" Check: Package existence for "compat-libstdc++-33" Node Name Available Required 
Status ------------ ------------------------ ------------------------ ---------- 10gnode3 compat-libstdc++-33-3.2.3-61 compat-libstdc++-33-3.2.3 passed 10gnode1 compat-libstdc++-33-3.2.3-61 compat-libstdc++-33-3.2.3 passed Result: Package existence check passed for "compat-libstdc++-33" Check: Package existence for "elfutils-libelf" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 elfutils-libelf-0.125-3.el5 elfutils-libelf-0.125 passed 10gnode1 elfutils-libelf-0.125-3.el5 elfutils-libelf-0.125 passed Result: Package existence check passed for "elfutils-libelf" Check: Package existence for "elfutils-libelf-devel" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 elfutils-libelf-devel-0.125-3.el5 elfutils-libelf-devel-0.125 passed 10gnode1 elfutils-libelf-devel-0.125-3.el5 elfutils-libelf-devel-0.125 passed Result: Package existence check passed for "elfutils-libelf-devel" Check: Package existence for "glibc-common" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 glibc-common-2.5-49 glibc-common-2.5 passed 10gnode1 glibc-common-2.5-49 glibc-common-2.5 passed Result: Package existence check passed for "glibc-common" Check: Package existence for "glibc-devel" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 glibc-devel-2.5-12 glibc-devel-2.5 passed 10gnode1 glibc-devel-2.5-12 glibc-devel-2.5 passed Result: Package existence check passed for "glibc-devel" Check: Package existence for "glibc-headers" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 glibc-headers-2.5-12 glibc-headers-2.5 passed 10gnode1 glibc-headers-2.5-12 glibc-headers-2.5 passed Result: Package existence check passed for "glibc-headers" Check: Package 
existence for "libaio-devel" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 libaio-devel-0.3.106-5 libaio-devel-0.3.106 passed 10gnode1 libaio-devel-0.3.106-5 libaio-devel-0.3.106 passed Result: Package existence check passed for "libaio-devel" Check: Package existence for "libgcc" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 libgcc-4.1.2-48.el5 libgcc-4.1.2 passed 10gnode1 libgcc-4.1.2-48.el5 libgcc-4.1.2 passed Result: Package existence check passed for "libgcc" Check: Package existence for "libstdc++" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 libstdc++-4.1.2-48.el5 libstdc++-4.1.2 passed 10gnode1 libstdc++-4.1.2-48.el5 libstdc++-4.1.2 passed Result: Package existence check passed for "libstdc++" Check: Package existence for "libstdc++-devel" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 libstdc++-devel-4.1.2-48.el5 libstdc++-devel-4.1.2 passed 10gnode1 libstdc++-devel-4.1.2-48.el5 libstdc++-devel-4.1.2 passed Result: Package existence check passed for "libstdc++-devel" Check: Package existence for "sysstat" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 sysstat-7.0.2-3.el5 sysstat-7.0.2 passed 10gnode1 sysstat-7.0.2-3.el5 sysstat-7.0.2 passed Result: Package existence check passed for "sysstat" Check: Package existence for "ksh" Node Name Available Required Status ------------ ------------------------ ------------------------ ---------- 10gnode3 ksh-20060214-1.4 ksh-20060214 passed 10gnode1 ksh-20060214-1.4 ksh-20060214 passed Result: Package existence check passed for "ksh" Checking for multiple users with UID value 0 Result: Check for multiple users with UID value 0 passed Check: Current group ID 
Result: Current group ID check passed

Starting check for consistency of primary group of root user
  Node Name                             Status
  ------------------------------------  ------------------------
  10gnode3                              passed
  10gnode1                              passed
Check for consistency of root user's primary group passed

Checking OCR integrity...
OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed

Check: Time zone consistency
Result: Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP Configuration file check passed
No NTP Daemons or Services were found to be running
PRVF-5507 : NTP daemon or service is not running on any node but NTP configuration file exists on the following node(s): 10gnode3,10gnode1
Result: Clock synchronization check using Network Time Protocol(NTP) failed

Checking to make sure user "oracle" is not in "root" group
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  10gnode3      passed                    does not exist
  10gnode1      passed                    does not exist
Result: User "oracle" is not part of "root" group. Check passed

Checking consistency of file "/etc/resolv.conf" across nodes
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
PRVF-5622 : search entry does not exist in file "/etc/resolv.conf" on nodes: "10gnode3"
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
All nodes have one search entry defined in file "/etc/resolv.conf"

Checking DNS response time for an unreachable node
  Node Name                             Status
  ------------------------------------  ------------------------
  10gnode1                              failed
  10gnode3                              failed
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: 10gnode1,10gnode3
File "/etc/resolv.conf" is not consistent across nodes

Pre-check for node addition was unsuccessful on all the nodes.
From the above output, I see that all checks passed except the “DNS response time” check. I’m ignoring this failure in my case as DNS is neither configured nor used in this environment.
Now, let’s add the new node by running “addNode.sh” in silent mode. The parameters passed here are “CLUSTER_NEW_NODES”, the new node that needs to be added, and “CLUSTER_NEW_VIRTUAL_HOSTNAMES”, the VIP alias of that new node.
addNode.sh is located under “GRID_HOME/oui/bin” and must be run from one of the existing running nodes. I’m running it here from 10gnode1.
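The invocation used below can also be assembled from variables; a minimal sketch (the paths and node names are this post’s values, so substitute your own):

```shell
# Sketch: build the addNode.sh silent-mode invocation for a new node.
# GRID_HOME, NEW_NODE and NEW_VIP are the values used in this post.
GRID_HOME=/u01/app/11.2.0/grid3
NEW_NODE=10gnode3
NEW_VIP=10gnode3-vip

build_addnode_cmd() {
  # addNode.sh lives under $GRID_HOME/oui/bin and is run from an existing node
  echo "$GRID_HOME/oui/bin/addNode.sh \"CLUSTER_NEW_NODES={$NEW_NODE}\" \"CLUSTER_NEW_VIRTUAL_HOSTNAMES={$NEW_VIP}\" -silent"
}

build_addnode_cmd
```

The command is only echoed here; run it as the grid software owner from an existing cluster node.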
[oracle@10gnode1 ~]$ cd /u01/app/11.2.0/grid3/oui/bin
[oracle@10gnode1 bin]$
[oracle@10gnode1 bin]$ ./addNode.sh "CLUSTER_NEW_NODES={10gnode3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={10gnode3-vip}" -silent
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 10001 MB Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.
Performing tests to see whether nodes 10gnode2,10gnode3 are available
............................................................... 100% Done.
.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
Source: /u01/app/11.2.0/grid3
New Nodes
Space Requirements
New Nodes
10gnode3
/u01: Required 5.16GB : Available 17.36GB
Installed Products
Product Names
Oracle Grid Infrastructure 11.2.0.3.0
Sun JDK 1.5.0.30.03
Installer SDK Component 11.2.0.3.0
Oracle One-Off Patch Installer 11.2.0.1.7
Oracle Universal Installer 11.2.0.3.0
Oracle USM Deconfiguration 11.2.0.3.0
Oracle Configuration Manager Deconfiguration 10.3.1.0.0
Enterprise Manager Common Core Files 10.2.0.4.4
Oracle DBCA Deconfiguration 11.2.0.3.0
Oracle RAC Deconfiguration 11.2.0.3.0
Oracle Quality of Service Management (Server) 11.2.0.3.0
Installation Plugin Files 11.2.0.3.0
Universal Storage Manager Files 11.2.0.3.0
Oracle Text Required Support Files 11.2.0.3.0
Automatic Storage Management Assistant 11.2.0.3.0
Oracle Database 11g Multimedia Files 11.2.0.3.0
Oracle Multimedia Java Advanced Imaging 11.2.0.3.0
Oracle Globalization Support 11.2.0.3.0
Oracle Multimedia Locator RDBMS Files 11.2.0.3.0
Oracle Core Required Support Files 11.2.0.3.0
Bali Share 1.1.18.0.0
Oracle Database Deconfiguration 11.2.0.3.0
Oracle Quality of Service Management (Client) 11.2.0.3.0
Expat libraries 2.0.1.0.1
Oracle Containers for Java 11.2.0.3.0
Perl Modules 5.10.0.0.1
Secure Socket Layer 11.2.0.3.0
Oracle JDBC/OCI Instant Client 11.2.0.3.0
Oracle Multimedia Client Option 11.2.0.3.0
LDAP Required Support Files 11.2.0.3.0
Character Set Migration Utility 11.2.0.3.0
Perl Interpreter 5.10.0.0.1
PL/SQL Embedded Gateway 11.2.0.3.0
OLAP SQL Scripts 11.2.0.3.0
Database SQL Scripts 11.2.0.3.0
Oracle Extended Windowing Toolkit 3.4.47.0.0
SSL Required Support Files for InstantClient 11.2.0.3.0
SQL*Plus Files for Instant Client 11.2.0.3.0
Oracle Net Required Support Files 11.2.0.3.0
Oracle Database User Interface 2.2.13.0.0
RDBMS Required Support Files for Instant Client 11.2.0.3.0
RDBMS Required Support Files Runtime 11.2.0.3.0
XML Parser for Java 11.2.0.3.0
Oracle Security Developer Tools 11.2.0.3.0
Oracle Wallet Manager 11.2.0.3.0
Enterprise Manager plugin Common Files 11.2.0.3.0
Platform Required Support Files 11.2.0.3.0
Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
RDBMS Required Support Files 11.2.0.3.0
Oracle Ice Browser 5.2.3.6.0
Oracle Help For Java 4.2.9.0.0
Enterprise Manager Common Files 10.2.0.4.3
Deinstallation Tool 11.2.0.3.0
Oracle Java Client 11.2.0.3.0
Cluster Verification Utility Files 11.2.0.3.0
Oracle Notification Service (eONS) 11.2.0.3.0
Oracle LDAP administration 11.2.0.3.0
Cluster Verification Utility Common Files 11.2.0.3.0
Oracle Clusterware RDBMS Files 11.2.0.3.0
Oracle Locale Builder 11.2.0.3.0
Oracle Globalization Support 11.2.0.3.0
Buildtools Common Files 11.2.0.3.0
Oracle RAC Required Support Files-HAS 11.2.0.3.0
SQL*Plus Required Support Files 11.2.0.3.0
XDK Required Support Files 11.2.0.3.0
Agent Required Support Files 10.2.0.4.3
Parser Generator Required Support Files 11.2.0.3.0
Precompiler Required Support Files 11.2.0.3.0
Installation Common Files 11.2.0.3.0
Required Support Files 11.2.0.3.0
Oracle JDBC/THIN Interfaces 11.2.0.3.0
Oracle Multimedia Locator 11.2.0.3.0
Oracle Multimedia 11.2.0.3.0
HAS Common Files 11.2.0.3.0
Assistant Common Files 11.2.0.3.0
PL/SQL 11.2.0.3.0
HAS Files for DB 11.2.0.3.0
Oracle Recovery Manager 11.2.0.3.0
Oracle Database Utilities 11.2.0.3.0
Oracle Notification Service 11.2.0.3.0
SQL*Plus 11.2.0.3.0
Oracle Netca Client 11.2.0.3.0
Oracle Net 11.2.0.3.0
Oracle JVM 11.2.0.3.0
Oracle Internet Directory Client 11.2.0.3.0
Oracle Net Listener 11.2.0.3.0
Cluster Ready Services Files 11.2.0.3.0
Oracle Database 11g 11.2.0.3.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Saturday, January 24, 2015 7:13:26 PM IST)
. 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Saturday, January 24, 2015 7:13:29 PM IST)
............................................................................................... 96% Done.
Home copied to new nodes
Saving inventory on nodes (Saturday, January 24, 2015 7:22:28 PM IST)
. 100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/11.2.0/grid3/root.sh #On nodes 10gnode3
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/app/11.2.0/grid3 was successful.
Please check '/tmp/silentInstall.log' for more details.
We need to run the root.sh script on the new node (10gnode3) as the root user.
[root@10gnode3 ~]# /u01/app/11.2.0/grid3/root.sh
Check /u01/app/11.2.0/grid3/install/root_10gnode3.mydomain_2015-01-24_19-25-31.log for the output of root script
[root@10gnode3 ~]#
Let’s check the status of the resources on the newly added node.
[root@10gnode3 bin]# ./crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.asm
               ONLINE  ONLINE       10gnode1                 Started
               ONLINE  ONLINE       10gnode2                 Started
               ONLINE  ONLINE       10gnode3                 Started
ora.gsd
               OFFLINE OFFLINE      10gnode1
               OFFLINE OFFLINE      10gnode2
               OFFLINE OFFLINE      10gnode3
ora.net1.network
               ONLINE  ONLINE       10gnode1
               ONLINE  ONLINE       10gnode2
               ONLINE  ONLINE       10gnode3
ora.ons
               ONLINE  ONLINE       10gnode1
               ONLINE  ONLINE       10gnode2
               ONLINE  ONLINE       10gnode3
ora.registry.acfs
               ONLINE  ONLINE       10gnode1
               ONLINE  ONLINE       10gnode2
               ONLINE  ONLINE       10gnode3
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.10gnode1.vip
      1        ONLINE  ONLINE       10gnode1
ora.10gnode2.vip
      1        ONLINE  ONLINE       10gnode2
ora.10gnode3.vip
      1        ONLINE  ONLINE       10gnode3
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       10gnode2
ora.cvu
      1        ONLINE  ONLINE       10gnode3
ora.oc4j
      1        ONLINE  ONLINE       10gnode2
ora.primt.db
      1        ONLINE  ONLINE       10gnode1                 Open
      2        ONLINE  ONLINE       10gnode2                 Open
ora.scan1.vip
      1        ONLINE  ONLINE       10gnode2
Run the CLUVFY utility to verify everything is fine after adding the new node to the cluster.
[oracle@10gnode1 ~]$ cd /u01/app/11.2.0/grid3/bin
[oracle@10gnode1 bin]$ ./cluvfy stage -post nodeadd -n 10gnode3
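For reference, the cluvfy verification pair that brackets a node addition can be sketched as below (paths and the node name follow this post; the -verbose flag is optional):

```shell
# Sketch: cluvfy verification before and after adding a node.
# GRID_HOME and NEW_NODE are this post's values; adjust for your cluster.
GRID_HOME=/u01/app/11.2.0/grid3
NEW_NODE=10gnode3

cluvfy_pre_nodeadd()  { echo "$GRID_HOME/bin/cluvfy stage -pre nodeadd -n $NEW_NODE -verbose"; }
cluvfy_post_nodeadd() { echo "$GRID_HOME/bin/cluvfy stage -post nodeadd -n $NEW_NODE -verbose"; }

# The commands are only echoed here; run them as the grid owner from an
# existing node, -pre before addNode.sh and -post after root.sh completes.
cluvfy_pre_nodeadd
cluvfy_post_nodeadd
```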
============================DB=============================
Now it’s time to extend the DB home to the new node; in other words, the new node needs to be added at the database software layer as well.
[oracle@10gnode1 ~]$ cd /u01/app/oracle/product/11.2.0/db3/oui/bin/
[oracle@10gnode1 bin]$ ./addNode.sh "CLUSTER_NEW_NODES={10gnode3}" -silent
Performing pre-checks for node addition
Checking node reachability...
Node reachability check passed from node "10gnode1"
Checking user equivalence...
User equivalence check passed for user "oracle"
WARNING:
Node "10gnode3" already appears to be part of cluster
Pre-check for node addition was successful.
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 10001 MB Passed
Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.
Performing tests to see whether nodes 10gnode2,10gnode3 are available
............................................................... 100% Done.
.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
Source: /u01/app/oracle/product/11.2.0/db3
New Nodes
Space Requirements
New Nodes
10gnode3
/u01: Required 4.14GB : Available 13.02GB
Installed Products
Product Names
Oracle Database 11g 11.2.0.3.0
Sun JDK 1.5.0.30.03
Installer SDK Component 11.2.0.3.0
Oracle One-Off Patch Installer 11.2.0.1.7
Oracle Universal Installer 11.2.0.3.0
Oracle USM Deconfiguration 11.2.0.3.0
Oracle Configuration Manager Deconfiguration 10.3.1.0.0
Oracle DBCA Deconfiguration 11.2.0.3.0
Oracle RAC Deconfiguration 11.2.0.3.0
Oracle Database Deconfiguration 11.2.0.3.0
Oracle Configuration Manager Client 10.3.2.1.0
Oracle Configuration Manager 10.3.5.0.1
Oracle ODBC Driverfor Instant Client 11.2.0.3.0
LDAP Required Support Files 11.2.0.3.0
SSL Required Support Files for InstantClient 11.2.0.3.0
Bali Share 1.1.18.0.0
Oracle Extended Windowing Toolkit 3.4.47.0.0
Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
Oracle Real Application Testing 11.2.0.3.0
Oracle Database Vault J2EE Application 11.2.0.3.0
Oracle Label Security 11.2.0.3.0
Oracle Data Mining RDBMS Files 11.2.0.3.0
Oracle OLAP RDBMS Files 11.2.0.3.0
Oracle OLAP API 11.2.0.3.0
Platform Required Support Files 11.2.0.3.0
Oracle Database Vault option 11.2.0.3.0
Oracle RAC Required Support Files-HAS 11.2.0.3.0
SQL*Plus Required Support Files 11.2.0.3.0
Oracle Display Fonts 9.0.2.0.0
Oracle Ice Browser 5.2.3.6.0
Oracle JDBC Server Support Package 11.2.0.3.0
Oracle SQL Developer 11.2.0.3.0
Oracle Application Express 11.2.0.3.0
XDK Required Support Files 11.2.0.3.0
RDBMS Required Support Files for Instant Client 11.2.0.3.0
SQLJ Runtime 11.2.0.3.0
Database Workspace Manager 11.2.0.3.0
RDBMS Required Support Files Runtime 11.2.0.3.0
Oracle Globalization Support 11.2.0.3.0
Exadata Storage Server 11.2.0.1.0
Provisioning Advisor Framework 10.2.0.4.3
Enterprise Manager Database Plugin -- Repository Support 11.2.0.3.0
Enterprise Manager Repository Core Files 10.2.0.4.4
Enterprise Manager Database Plugin -- Agent Support 11.2.0.3.0
Enterprise Manager Grid Control Core Files 10.2.0.4.4
Enterprise Manager Common Core Files 10.2.0.4.4
Enterprise Manager Agent Core Files 10.2.0.4.4
RDBMS Required Support Files 11.2.0.3.0
regexp 2.1.9.0.0
Agent Required Support Files 10.2.0.4.3
Oracle 11g Warehouse Builder Required Files 11.2.0.3.0
Oracle Notification Service (eONS) 11.2.0.3.0
Oracle Text Required Support Files 11.2.0.3.0
Parser Generator Required Support Files 11.2.0.3.0
Oracle Database 11g Multimedia Files 11.2.0.3.0
Oracle Multimedia Java Advanced Imaging 11.2.0.3.0
Oracle Multimedia Annotator 11.2.0.3.0
Oracle JDBC/OCI Instant Client 11.2.0.3.0
Oracle Multimedia Locator RDBMS Files 11.2.0.3.0
Precompiler Required Support Files 11.2.0.3.0
Oracle Core Required Support Files 11.2.0.3.0
Sample Schema Data 11.2.0.3.0
Oracle Starter Database 11.2.0.3.0
Oracle Message Gateway Common Files 11.2.0.3.0
Oracle XML Query 11.2.0.3.0
XML Parser for Oracle JVM 11.2.0.3.0
Oracle Help For Java 4.2.9.0.0
Installation Plugin Files 11.2.0.3.0
Enterprise Manager Common Files 10.2.0.4.3
Expat libraries 2.0.1.0.1
Deinstallation Tool 11.2.0.3.0
Oracle Quality of Service Management (Client) 11.2.0.3.0
Perl Modules 5.10.0.0.1
JAccelerator (COMPANION) 11.2.0.3.0
Oracle Containers for Java 11.2.0.3.0
Perl Interpreter 5.10.0.0.1
Oracle Net Required Support Files 11.2.0.3.0
Secure Socket Layer 11.2.0.3.0
Oracle Universal Connection Pool 11.2.0.3.0
Oracle JDBC/THIN Interfaces 11.2.0.3.0
Oracle Multimedia Client Option 11.2.0.3.0
Oracle Java Client 11.2.0.3.0
Character Set Migration Utility 11.2.0.3.0
Oracle Code Editor 1.2.1.0.0I
PL/SQL Embedded Gateway 11.2.0.3.0
OLAP SQL Scripts 11.2.0.3.0
Database SQL Scripts 11.2.0.3.0
Oracle Locale Builder 11.2.0.3.0
Oracle Globalization Support 11.2.0.3.0
SQL*Plus Files for Instant Client 11.2.0.3.0
Required Support Files 11.2.0.3.0
Oracle Database User Interface 2.2.13.0.0
Oracle ODBC Driver 11.2.0.3.0
Oracle Notification Service 11.2.0.3.0
XML Parser for Java 11.2.0.3.0
Oracle Security Developer Tools 11.2.0.3.0
Oracle Wallet Manager 11.2.0.3.0
Cluster Verification Utility Common Files 11.2.0.3.0
Oracle Clusterware RDBMS Files 11.2.0.3.0
Oracle UIX 2.2.24.6.0
Enterprise Manager plugin Common Files 11.2.0.3.0
HAS Common Files 11.2.0.3.0
Precompiler Common Files 11.2.0.3.0
Installation Common Files 11.2.0.3.0
Oracle Help for the Web 2.0.14.0.0
Oracle LDAP administration 11.2.0.3.0
Buildtools Common Files 11.2.0.3.0
Assistant Common Files 11.2.0.3.0
Oracle Recovery Manager 11.2.0.3.0
PL/SQL 11.2.0.3.0
Generic Connectivity Common Files 11.2.0.3.0
Oracle Database Gateway for ODBC 11.2.0.3.0
Oracle Programmer 11.2.0.3.0
Oracle Database Utilities 11.2.0.3.0
Enterprise Manager Agent 10.2.0.4.3
SQL*Plus 11.2.0.3.0
Oracle Netca Client 11.2.0.3.0
Oracle Multimedia Locator 11.2.0.3.0
Oracle Call Interface (OCI) 11.2.0.3.0
Oracle Multimedia 11.2.0.3.0
Oracle Net 11.2.0.3.0
Oracle XML Development Kit 11.2.0.3.0
Database Configuration and Upgrade Assistants 11.2.0.3.0
Oracle JVM 11.2.0.3.0
Oracle Advanced Security 11.2.0.3.0
Oracle Internet Directory Client 11.2.0.3.0
Oracle Enterprise Manager Console DB 11.2.0.3.0
HAS Files for DB 11.2.0.3.0
Oracle Net Listener 11.2.0.3.0
Oracle Text 11.2.0.3.0
Oracle Net Services 11.2.0.3.0
Oracle Database 11g 11.2.0.3.0
Oracle OLAP 11.2.0.3.0
Oracle Spatial 11.2.0.3.0
Oracle Partitioning 11.2.0.3.0
Enterprise Edition Options 11.2.0.3.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Sunday, January 25, 2015 7:13:56 PM IST)
. 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Sunday, January 25, 2015 7:14:04 PM IST)
............................................................................................... 96% Done.
Home copied to new nodes
Saving inventory on nodes (Sunday, January 25, 2015 7:30:04 PM IST)
. 100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oracle/product/11.2.0/db3/root.sh #On nodes 10gnode3
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/app/oracle/product/11.2.0/db3 was successful.
Please check '/tmp/silentInstall.log' for more details.
[oracle@10gnode1 bin]$
Now it’s time to run the root.sh script from the DB home on the newly added node 10gnode3, as the root user.
[root@10gnode3 ~]# /u01/app/oracle/product/11.2.0/db3/root.sh
Check /u01/app/oracle/product/11.2.0/db3/install/root_10gnode3.mydomain_2015-01-25_19-37-11.log for the output of root script
[root@10gnode3 ~]#
Here is the output of the root.sh script that was run on 10gnode3.
[root@10gnode3 ~]# cat /u01/app/oracle/product/11.2.0/db3/install/root_10gnode3.mydomain_2015-01-25_19-37-11.log
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/db3

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@10gnode3 ~]#
We now need to add a new instance “primt3”, running on 10gnode3, to the database “primt”. But before we do that, a few instance-specific parameters have to be set up for it, for example:
instance_number
thread
Online redo log groups for the new thread
undo_tablespace
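The preparation steps for the third instance can be collected into a small generator; a sketch, where the instance number, SID, undo tablespace name, redo group numbers and sizes follow this post and should be adjusted for your environment:

```shell
# Sketch: emit the SQL needed to prepare the database for a new instance.
# Group numbers (5, 6) and sizes (100M redo, 400M undo) mirror this post.
gen_instance_prep_sql() {
  inst=$1       # new instance/thread number (3 here)
  sid=$2        # new instance SID (primt3)
  undo=$3       # new undo tablespace name (UNDOTBS3)
  cat <<EOF
alter database add logfile thread $inst group 5 size 100M;
alter database add logfile thread $inst group 6 size 100M;
create undo tablespace $undo datafile size 400M;
alter system set undo_tablespace=$undo scope=spfile sid='$sid';
alter system set instance_number=$inst scope=spfile sid='$sid';
alter system set thread=$inst scope=spfile sid='$sid';
EOF
}

gen_instance_prep_sql 3 primt3 UNDOTBS3
```

The SQL is only printed here; run the statements from any open instance as SYSDBA, as done step by step below.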
[oracle@10gnode1 ~]$ . oraenv
ORACLE_SID = [oracle] ? primt1
The Oracle base has been set to /u01/app/oracle
[oracle@10gnode1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Thu Feb 26 19:38:41 2015
Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select group#,thread#,bytes/1024/1024,members from v$log;

    GROUP#    THREAD# BYTES/1024/1024    MEMBERS
---------- ---------- --------------- ----------
         1          1             100          2
         2          1             100          2
         3          2             100          2
         4          2             100          2

    GROUP# MEMBER
---------- --------------------------------------------------
         1 +DATA/primt/onlinelog/group_1.281.854648695
         1 +FRA/primt/onlinelog/group_1.269.854648697
         2 +DATA/primt/onlinelog/group_2.282.854648701
         2 +FRA/primt/onlinelog/group_2.270.854648703
         3 +DATA/primt/onlinelog/group_3.289.854738095
         3 +FRA/primt/onlinelog/group_3.271.854738099
         4 +DATA/primt/onlinelog/group_4.290.854738101
         4 +FRA/primt/onlinelog/group_4.272.854738107

8 rows selected.

SQL> alter database add logfile thread 3 group 5 size 100M;
Database altered.

SQL> alter database add logfile thread 3 group 6 size 100M;
Database altered.

SQL> select group#,thread#,bytes/1024/1024,members from v$log;

    GROUP#    THREAD# BYTES/1024/1024    MEMBERS
---------- ---------- --------------- ----------
         1          1             100          2
         2          1             100          2
         3          2             100          2
         4          2             100          2
         5          3             100          2
         6          3             100          2

6 rows selected.
Redo log files have been added for the new thread 3 by creating new groups 5 and 6.
Now, let’s move on to the undo_tablespace parameter. We shall create a new undo tablespace called “UNDOTBS3” and assign it as the default undo tablespace for instance 3.
SQL> select inst_id,name,value from gv$parameter where name like 'undo_tablespa%';

   INST_ID NAME                      VALUE
---------- ------------------------- -------------------------
         1 undo_tablespace           UNDOTBS1
         2 undo_tablespace           UNDOTBS2

SQL> create undo tablespace UNDOTBS3 datafile size 400M;
Tablespace created.

SQL> alter system set undo_tablespace=UNDOTBS3 scope=spfile sid='primt3';
System altered.
Assign an instance number to the instance that will be running on the newly added node.
SQL> ALTER SYSTEM SET instance_number=3 SCOPE=SPFILE SID='primt3';
System altered.
Similarly, assign thread number 3 to the new instance.
SQL> alter system set thread=3 scope=spfile sid='primt3';
System altered.
Shut down the database (all instances).
[oracle@10gnode1 ~]$ srvctl stop database -d primt
[oracle@10gnode1 ~]$ srvctl status database -d primt
Instance primt1 is not running on node 10gnode1
Instance primt2 is not running on node 10gnode2
Current DB configuration is as follows:
[oracle@10gnode1 ~]$ srvctl config database -d primt
Database unique name: primt
Database name:
Oracle home: /u01/app/oracle/product/11.2.0/db3
Oracle user: oracle
Spfile:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: primt
Database instances: primt1,primt2
Disk Groups:
Mount point paths:
Services:
Type: RAC
Database is administrator managed
Now it’s time to add the instance that will be running on the newly added node 10gnode3.
[oracle@10gnode1 ~]$ srvctl add instance -d primt -i primt3 -n 10gnode3
[oracle@10gnode1 ~]$ srvctl status database -d primt -v -f
Instance primt1 is not running on node 10gnode1
Instance primt2 is not running on node 10gnode2
Instance primt3 is not running on node 10gnode3
Copy the pfile from one of the running nodes to 10gnode3. The pfile contains only the location of the spfile, which is hosted on ASM.
[oracle@10gnode1 ~]$ cd $ORACLE_HOME/dbs
[oracle@10gnode1 dbs]$ ls -lrt initprimt1*
-rw-r--r-- 1 oracle oinstall 38 Aug 19  2014 initprimt1.ora
[oracle@10gnode1 dbs]$ cat initprimt1.ora
SPFILE='+data/primt/spfileprimt.ora'
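Since the pfile is a single line, it can alternatively be created directly on the new node instead of copied. A sketch, using a temp directory to stay self-contained (in real use the file goes to $ORACLE_HOME/dbs on 10gnode3):

```shell
# Sketch: create the new instance's pfile, which only points at the shared
# spfile in ASM. The spfile path is the one shown in this post.
DBS_DIR=$(mktemp -d)    # stand-in for $ORACLE_HOME/dbs on the new node

echo "SPFILE='+data/primt/spfileprimt.ora'" > "$DBS_DIR/initprimt3.ora"

cat "$DBS_DIR/initprimt3.ora"
```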
[oracle@10gnode1 dbs]$ scp initprimt1.ora oracle@10gnode3:/u01/app/oracle/product/11.2.0/db3/dbs/initprimt3.ora
initprimt1.ora                                    100%   38     0.0KB/s   00:00
[oracle@10gnode1 dbs]$

[oracle@10gnode3 ~]$ cd $ORACLE_HOME/dbs
[oracle@10gnode3 dbs]$ ls -lrt init*
-rw-r--r-- 1 oracle oinstall 2851 Jan 25 19:25 init.ora
-rw-r--r-- 1 oracle oinstall   38 Feb 24 20:02 initprimt3.ora

Start the database using srvctl:

[oracle@10gnode1 ~]$ srvctl start database -d primt
PRCR-1079 : Failed to start resource ora.primt.db
CRS-5017: The resource action "ora.primt.db start" encountered the following error:
ORA-01618: redo thread 3 is not enabled - cannot mount
. For details refer to "(:CLSN00107:)" in "/u01/app/11.2.0/grid3/log/10gnode3/agent/crsd/oraagent_oracle/oraagent_oracle.log".
CRS-2674: Start of 'ora.primt.db' on '10gnode3' failed
CRS-2632: There are no more servers to try to place resource 'ora.primt.db' on that would satisfy its placement policy
[oracle@10gnode1 ~]$ srvctl status database -d primt -v -f
Instance primt1 is running on node 10gnode1. Instance status: Open.
Instance primt2 is running on node 10gnode2. Instance status: Open.
Instance primt3 is not running on node 10gnode3
From the above error, it can be seen that thread 3 was not enabled, which is why instance primt3 failed to start.
From any one of the running instances, enable thread 3 as shown below.
[oracle@10gnode1 ~]$ echo $ORACLE_SID
primt1
[oracle@10gnode1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Thu Feb 26 21:02:31 2015
Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> alter database enable public thread 3;
Database altered.
Now, let’s start the instance primt3 on 10gnode3.
[oracle@10gnode1 ~]$ srvctl start instance -i primt3 -d primt
[oracle@10gnode1 ~]$ srvctl status database -d primt -v -f
Instance primt1 is running on node 10gnode1. Instance status: Open.
Instance primt2 is running on node 10gnode2. Instance status: Open.
Instance primt3 is running on node 10gnode3. Instance status: Open.
Finally, set the local_listener parameter for instance “primt3” based on the listener configuration running on the new node 10gnode3.
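The post does not show the statement itself; a typical form is sketched below. The VIP host name and port 1521 are assumptions based on this post's naming, so verify them against the listener actually running on 10gnode3 (for example with lsnrctl status) before applying:

```shell
# Sketch: emit the local_listener setting for the new instance.
# HOST (the node VIP) and PORT are assumed values; confirm against the
# listener configuration on 10gnode3 before running this in SQL*Plus.
gen_local_listener_sql() {
  echo "alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=10gnode3-vip)(PORT=1521))' scope=spfile sid='primt3';"
}

gen_local_listener_sql
```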
Here we go !!
COPYRIGHT
© Shivananda Rao P, 2012 to 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Shivananda Rao and http://www.shivanandarao-oracle.com with appropriate and specific direction to the original content.
DISCLAIMER
The views expressed here are my own and do not necessarily reflect the views of any other individual, business entity, or organization. The views expressed by visitors on this blog are theirs solely and may not reflect mine.
While running root.sh in a RAC environment, its execution might fail with the error message “Invalid interface. There are more than one interface, but there is no private interface specified”. This normally happens when a previous CRS installation was deconfigured and the clusterware is being configured afresh.
[oracle@10gnode1 install]$ more root_10gnode1.mydomain_2014-11-14_19-54-35.log
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid3

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid3/crs/install/crsconfig_params
User ignored Prerequisites during installation
Invalid interface. There are more than one interface, but there is no private interface specified
/u01/app/11.2.0/grid3/perl/bin/perl -I/u01/app/11.2.0/grid3/perl/lib -I/u01/app/11.2.0/grid3/crs/install /u01/app/11.2.0/grid3/crs/install/rootcrs.pl execution failed
[oracle@10gnode1 install]$
On checking the crsconfig_params file, I found that no values were specified for the CRS_NODEVIPS and NETWORKS parameters.
Fill in values appropriate to your environment for CRS_NODEVIPS (VIP, subnet mask and interface details of each node of the cluster) and NETWORKS (public and private network subnet details).
[oracle@10gnode1 ~]$ vi /u01/app/11.2.0/grid3/crs/install/crsconfig_params
CRS_NODEVIPS='10gnode1-vip/255.255.255.0/eth0,10gnode2-vip/255.255.255.0/eth0'
NETWORKS="eth0"/192.168.2.0:public,"eth1"/192.168.0.0:cluster_interconnect
The crsconfig_params file can be found under GRID_HOME/crs/install.
Once the file is filled in with the appropriate values, re-run the root.sh script; this should fix the problem.
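A quick sanity check for empty parameter values can be scripted before re-running root.sh. A sketch, using a temp file so it is self-contained (point PARAMS at GRID_HOME/crs/install/crsconfig_params in real use):

```shell
# Sketch: flag crsconfig_params entries whose value is missing or empty.
PARAMS=$(mktemp)        # stand-in for $GRID_HOME/crs/install/crsconfig_params
printf "CRS_NODEVIPS=''\nNETWORKS=\n" > "$PARAMS"   # simulate the broken state

param_is_set() {
  # strip the NAME= prefix and any quotes; succeed only if a value remains
  val=$(sed -n "s/^$1=//p" "$PARAMS" | tr -d "'\"")
  [ -n "$val" ]
}

for p in CRS_NODEVIPS NETWORKS; do
  if param_is_set "$p"; then
    echo "$p: set"
  else
    echo "$p: EMPTY -- fill in before re-running root.sh"
  fi
done
```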
In this article, I’m walking through a scenario wherein the OCR and voting disk are lost and there are no backups available to restore them from.
Environment:
Database Version: 11.2.0.3
2 node RAC– nodes: 10gnode1, 10gnode2
Database : PRIMT
Instance PRIMT1 on 10gnode1
Instance PRIMT2 on 10gnode2
ORA_CRS_HOME: /u01/app/11.2.0/grid3 (grid infrastructure home)
Below are the details of the database instances running on nodes 10gnode1 and 10gnode2.
10gnode1:
[oracle@10gnode1 ~]$ ps -ef | grep pmon
oracle    4159     1  0 18:15 ?        00:00:00 asm_pmon_+ASM1
oracle    4616     1  0 18:16 ?        00:00:00 ora_pmon_primt1
oracle    5069  5030  0 18:28 pts/0    00:00:00 grep pmon
10gnode2:
[oracle@10gnode2 ~]$ ps -ef | grep pmon
oracle    4248     1  0 18:14 ?        00:00:00 asm_pmon_+ASM2
oracle    4811     1  0 18:15 ?        00:00:00 ora_pmon_primt2
oracle    5822  5793  0 18:28 pts/0    00:00:00 grep pmon
CRS Status:
[oracle@10gnode1 ~]$ cd $ORA_CRS_HOME/bin
[oracle@10gnode1 bin]$ ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[oracle@10gnode1 bin]$
Details of the registered resources:
[oracle@10gnode1 bin]$ ./crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
ONLINE ONLINE 10gnode1
ONLINE ONLINE 10gnode2
ora.DATA.dg
ONLINE ONLINE 10gnode1
ONLINE ONLINE 10gnode2
ora.FRA.dg
ONLINE ONLINE 10gnode1
ONLINE ONLINE 10gnode2
ora.LISTENER.lsnr
ONLINE ONLINE 10gnode1
ONLINE ONLINE 10gnode2
ora.asm
ONLINE ONLINE 10gnode1 Started
ONLINE ONLINE 10gnode2 Started
ora.eons
ONLINE OFFLINE 10gnode1
ONLINE OFFLINE 10gnode2
ora.gsd
OFFLINE OFFLINE 10gnode1
OFFLINE OFFLINE 10gnode2
ora.net1.network
ONLINE ONLINE 10gnode1
ONLINE ONLINE 10gnode2
ora.ons
ONLINE ONLINE 10gnode1
ONLINE ONLINE 10gnode2
ora.registry.acfs
ONLINE OFFLINE 10gnode1
ONLINE ONLINE 10gnode2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.10gnode1.vip
1 ONLINE ONLINE 10gnode1
ora.10gnode2.vip
1 ONLINE ONLINE 10gnode2
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE 10gnode2
ora.oc4j
1 OFFLINE OFFLINE
ora.primt.db
1 ONLINE ONLINE 10gnode1 Open
2 ONLINE ONLINE 10gnode2 Open
ora.primt.primt_appl.svc
1 ONLINE ONLINE 10gnode1
ora.scan1.vip
1 ONLINE ONLINE 10gnode2
[oracle@10gnode1 bin]$
Now let’s check the OCR and voting disk details.
Voting Disk Details:
[oracle@10gnode1 bin]$ ./crsctl query css votedisk ## STATE File Universal Id File Name Disk group -- ----- ----------------- --------- --------- 1. ONLINE 823a909459594fcebfeab0588e3f9db2 (/dev/oracleasm/disks/DSK1) [CRS] Located 1 voting disk(s). [oracle@10gnode1 bin]$
OCR details:
[oracle@10gnode1 bin]$ ./ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 3284
Available space (kbytes) : 258836
ID : 1842344055
Device/File Name : +CRS
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
Let’s check what OCR backups are available.
[oracle@10gnode1 ~]$ cd $ORA_CRS_HOME/bin [oracle@10gnode1 bin]$ ./ocrconfig -showbackup PROT-24: Auto backups for the Oracle Cluster Registry are not available 10gnode2 2014/11/14 18:33:54 /u01/app/11.2.0/grid3/cdata/node-scan/backup_20141114_183354.ocr
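As an aside, this situation is avoidable: manual OCR backups and exports can be taken ahead of time with ocrconfig. A sketch of the commands (run as root from the Grid home’s bin directory; the export path is illustrative):

```
./ocrconfig -manualbackup                   # writes a backup under <GRID_HOME>/cdata/<cluster-name>/
./ocrconfig -export /backup/ocr_export.dmp  # logical export to a path of your choice
```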
Now, let me collect the details of the ASM diskgroup hosting the OCR and voting disk.
[oracle@10gnode1 ~]$ . oraenv ORACLE_SID = [oracle] ? +ASM1 The Oracle base has been set to /u01/app/oracle [oracle@10gnode1 ~]$ sqlplus / as sysasm SQL*Plus: Release 11.2.0.3.0 Production on Fri Nov 14 18:36:27 2014 Copyright (c) 1982, 2011, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production With the Real Application Clusters and Automatic Storage Management options SQL> select name,state from v$asm_diskgroup; NAME STATE ------------------------------ ----------- CRS MOUNTED DATA MOUNTED FRA MOUNTED
SQL> select a.name,a.path,a.group_number from v$asm_disk a,v$asm_diskgroup b where a.group_number=b.group_number; NAME PATH GROUP_NUMBER ------------------------------ -------------------------------------------------- ------------ FRA_0001 /dev/oracleasm/disks/DSK5 3 FRA_0000 /dev/oracleasm/disks/DSK4 3 DATA_0001 /dev/oracleasm/disks/DSK3 2 DATA_0000 /dev/oracleasm/disks/DSK2 2 CRS_0000 /dev/oracleasm/disks/DSK1 1
The disk hosting the OCR and voting disks is “/dev/oracleasm/disks/DSK1”. Let me collect its details and then corrupt it to simulate the loss of the voting disk and OCR.
We cannot directly delete the OCR file from ASM, as it is currently in use by CRS. If we attempt to delete it, the following error is thrown.
ASMCMD> rm -rf +crs/node-scan/ocrfile/REGISTRY.255.854106767 ORA-15032: not all alterations performed ORA-15028: ASM file '+crs/node-scan/ocrfile/REGISTRY.255.854106767' not dropped; currently being accessed (DBD ERROR: OCIStmtExecute)
From the output below, it’s clear that the OS-level device associated with the CRS ASM disk is “/dev/sdc1”.
[root@10gnode1 ~]# /etc/init.d/oracleasm querydisk -p DSK1 ASM disk name contains an invalid character: "-" Disk "DSK1" is a valid ASM disk on device [8, 33] [root@10gnode1 ~]# [root@10gnode1 ~]# ls -lrt /dev/sd* | grep 33 brw-r----- 1 root disk 8, 33 Nov 14 18:12 /dev/sdc1 [root@10gnode1 ~]#
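The device lookup above can be scripted rather than eyeballed from “ls -lrt | grep”. A minimal sketch using GNU stat (run here against /dev/null as a stand-in, since the cluster’s /dev/sdc1 does not exist elsewhere):

```shell
# Print the major/minor numbers of a device so they can be matched to the
# "[8, 33]" pair reported by "oracleasm querydisk -p".
# /dev/null (char device 1,3) stands in for the cluster's /dev/sdc1 here.
dev=/dev/null
maj=$((0x$(stat -c '%t' "$dev")))   # %t = major device number, printed in hex
min=$((0x$(stat -c '%T' "$dev")))   # %T = minor device number, printed in hex
echo "$dev -> major=$maj minor=$min"
```

On the cluster itself, the same two stat calls against /dev/sdc1 should yield major 8, minor 33, matching the querydisk output.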
Let me try to corrupt the diskgroup.
[root@10gnode1 ~]# dd if=/dev/zero of=/dev/sdc1 bs=8192 654645+0 records in 654644+0 records out 5362850304 bytes (5.4 GB) copied, 51.6696 seconds, 104 MB/s
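The same dd pattern can be rehearsed safely on a scratch file before ever pointing it at a real device; the ORCLDISK marker below is just an illustrative stand-in for an ASM disk header:

```shell
# Safe re-creation of the corruption step on a scratch file instead of a LUN:
# write a recognizable header, zero it with dd, confirm it is gone.
scratch=$(mktemp)
printf 'ORCLDISKDSK1' > "$scratch"                              # pretend ASM header
dd if=/dev/zero of="$scratch" bs=8192 count=1 conv=notrunc 2>/dev/null
grep -q ORCLDISK "$scratch" && echo "header intact" || echo "header destroyed"
rm -f "$scratch"
```

Note the count=1 added here: the original command deliberately omits it so dd keeps writing zeros until the device is exhausted.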
Done. With its storage gone, CRS now goes down automatically.
[oracle@10gnode1 ~]$ cd $ORA_CRS_HOME/bin [oracle@10gnode1 bin]$ ./ocrcheck PROT-602: Failed to retrieve data from the cluster registry PROC-26: Error while accessing the physical storage
[root@10gnode1 ~]# cd /u01/app/11.2.0/grid3/bin [root@10gnode1 bin]# ./crsctl stop crs CRS-2796: The command may not proceed when Cluster Ready Services is not running CRS-4687: Shutdown command has completed with errors. CRS-4000: Command Stop failed, or completed with errors.
Let’s check for the crsd.bin process. From the output below, it’s clear that the CRSD process is no longer running.
[root@10gnode1 bin]# ps -ef | grep d.bin root 3394 1 0 18:12 ? 00:00:11 /u01/app/11.2.0/grid3/bin/ohasd.bin reboot oracle 3832 1 0 18:13 ? 00:00:00 /u01/app/11.2.0/grid3/bin/mdnsd.bin oracle 3843 1 0 18:13 ? 00:00:01 /u01/app/11.2.0/grid3/bin/gpnpd.bin oracle 3856 1 0 18:13 ? 00:00:07 /u01/app/11.2.0/grid3/bin/gipcd.bin root 3869 1 0 18:13 ? 00:00:13 /u01/app/11.2.0/grid3/bin/osysmond.bin oracle 5831 1 0 19:05 ? 00:00:00 /u01/app/11.2.0/grid3/bin/evmd.bin oracle 5889 1 0 19:05 ? 00:00:00 /u01/app/11.2.0/grid3/bin/ocssd.bin root 5939 5891 0 19:06 pts/0 00:00:00 grep d.bin
Confirming the same on 10gnode2 as well.
[oracle@10gnode2 node-scan]$ ps -ef | grep d.bin root 3385 1 0 18:13 ? 00:00:13 /u01/app/11.2.0/grid3/bin/ohasd.bin reboot oracle 3824 1 0 18:13 ? 00:00:00 /u01/app/11.2.0/grid3/bin/mdnsd.bin oracle 3835 1 0 18:13 ? 00:00:03 /u01/app/11.2.0/grid3/bin/gpnpd.bin oracle 3847 1 0 18:13 ? 00:00:11 /u01/app/11.2.0/grid3/bin/gipcd.bin root 3861 1 0 18:13 ? 00:00:18 /u01/app/11.2.0/grid3/bin/osysmond.bin oracle 7215 5793 0 19:27 pts/0 00:00:00 grep d.bin oracle 23920 1 0 19:25 ? 00:00:00 /u01/app/11.2.0/grid3/bin/ocssd.bin [oracle@10gnode2 node-scan]$
Now let me stop the remaining clusterware processes forcibly.
10gnode1:
[root@10gnode1 bin]# ./crsctl stop crs -f CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on '10gnode1' CRS-2673: Attempting to stop 'ora.mdnsd' on '10gnode1' CRS-2677: Stop of 'ora.mdnsd' on '10gnode1' succeeded CRS-2673: Attempting to stop 'ora.crf' on '10gnode1' CRS-2677: Stop of 'ora.crf' on '10gnode1' succeeded CRS-2673: Attempting to stop 'ora.gipcd' on '10gnode1' CRS-2677: Stop of 'ora.gipcd' on '10gnode1' succeeded CRS-2673: Attempting to stop 'ora.gpnpd' on '10gnode1' CRS-2677: Stop of 'ora.gpnpd' on '10gnode1' succeeded CRS-2793: Shutdown of Oracle High Availability Services-managed resources on '10gnode1' has completed CRS-4133: Oracle High Availability Services has been stopped.
10gnode2:
[root@10gnode2 ~]# cd /u01/app/11.2.0/grid3/bin [root@10gnode2 bin]# ./crsctl stop crs -f CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on '10gnode2' CRS-2673: Attempting to stop 'ora.drivers.acfs' on '10gnode2' CRS-2673: Attempting to stop 'ora.mdnsd' on '10gnode2' CRS-2677: Stop of 'ora.drivers.acfs' on '10gnode2' succeeded CRS-2677: Stop of 'ora.mdnsd' on '10gnode2' succeeded CRS-2673: Attempting to stop 'ora.crf' on '10gnode2' CRS-2677: Stop of 'ora.crf' on '10gnode2' succeeded CRS-2673: Attempting to stop 'ora.gipcd' on '10gnode2' CRS-2677: Stop of 'ora.gipcd' on '10gnode2' succeeded CRS-2673: Attempting to stop 'ora.gpnpd' on '10gnode2' CRS-2677: Stop of 'ora.gpnpd' on '10gnode2' succeeded CRS-2793: Shutdown of Oracle High Availability Services-managed resources on '10gnode2' has completed CRS-4133: Oracle High Availability Services has been stopped.
To simulate the scenario fully, let me remove the backup that we have.
[root@10gnode2 ~]# ls -lrt /u01/app/11.2.0/grid3/cdata/node-scan/backup_20141114_183354.ocr -rw------- 1 root root 7610368 Nov 14 18:33 /u01/app/11.2.0/grid3/cdata/node-scan/backup_20141114_183354.ocr [root@10gnode2 ~]# [root@10gnode2 ~]# rm -rf /u01/app/11.2.0/grid3/cdata/node-scan/backup_20141114_183354.ocr [root@10gnode2 ~]#
All done. Now let’s recreate the disk to be used at the ASM level.
[root@10gnode1 bin]# /etc/init.d/oracleasm deletedisk DSK1 Removing ASM disk "DSK1": [ OK ] [root@10gnode1 bin]# [root@10gnode1 bin]# /etc/init.d/oracleasm createdisk DSK1 /dev/sdc1 Marking disk "/dev/sdc1" as an ASM disk: [ OK ] [root@10gnode1 bin]# [root@10gnode1 bin]#
If a backup of the OCR and voting disk were available, we could simply create the diskgroup and restore the OCR and voting disk from the backup into it. But in our case (no backup), we’ll have to deconfigure the cluster and rerun root.sh to reconfigure it.
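For reference, had the backup on 10gnode2 survived, the restore path in 11.2 would look roughly like the outline below, run as root on one node after recreating the CRS diskgroup. Treat this as a sketch rather than a tested runbook, since it was not exercised in this scenario:

```
crsctl start crs -excl -nocrs      # start the stack in exclusive mode, without crsd
ocrconfig -restore /u01/app/11.2.0/grid3/cdata/node-scan/backup_20141114_183354.ocr
crsctl replace votedisk +CRS       # recreate the voting file in the diskgroup
crsctl stop crs -f
crsctl start crs                   # restart the stack normally, then on the other nodes
```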
Deconfiguring the cluster is done through the “rootcrs.pl” script located under the crs/install directory of the Grid home.
[root@10gnode1 ~]# cd /u01/app/11.2.0/grid3/crs/install/ [root@10gnode1 install]# ls -lrt rootcrs* -rwxr-xr-x 1 root oinstall 35639 Oct 12 2012 rootcrs.pl [root@10gnode1 install]# [root@10gnode1 install]# [root@10gnode1 install]# ./rootcrs.pl -deconfig -force Using configuration parameter file: ./crsconfig_params PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type PRCR-1068 : Failed to query resources Cannot communicate with crsd PRCR-1070 : Failed to check if resource ora.gsd is registered Cannot communicate with crsd PRCR-1070 : Failed to check if resource ora.ons is registered Cannot communicate with crsd CRS-4535: Cannot communicate with Cluster Ready Services CRS-4000: Command Stop failed, or completed with errors. CRS-4544: Unable to connect to OHAS CRS-4000: Command Stop failed, or completed with errors. Successfully deconfigured Oracle clusterware stack on this node
Repeat the same deconfiguration step on 10gnode2 as well.
10gnode2:
[root@10gnode2 install]# ./rootcrs.pl -deconfig -force Using configuration parameter file: ./crsconfig_params PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type PRCR-1068 : Failed to query resources Cannot communicate with crsd PRCR-1070 : Failed to check if resource ora.gsd is registered Cannot communicate with crsd PRCR-1070 : Failed to check if resource ora.ons is registered Cannot communicate with crsd CRS-4535: Cannot communicate with Cluster Ready Services CRS-4000: Command Stop failed, or completed with errors. CRS-4544: Unable to connect to OHAS CRS-4000: Command Stop failed, or completed with errors. Successfully deconfigured Oracle clusterware stack on this node [root@10gnode2 install]#
Now, let me run the root.sh script on both nodes, starting with 10gnode1.
[root@10gnode1 ~]# cd /u01/app/11.2.0/grid3/ [root@10gnode1 grid3]# ./root.sh Check /u01/app/11.2.0/grid3/install/root_10gnode1.mydomain_2014-11-14_21-53-08.log for the output of root script [root@10gnode1 grid3]#
Here is the outcome of the above execution on 10gnode1.
[oracle@10gnode1 ~]$ cat /u01/app/11.2.0/grid3/install/root_10gnode1.mydomain_2014-11-14_21-53-08.log
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/11.2.0/grid3
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid3/crs/install/crsconfig_params
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on '10gnode1'
CRS-2676: Start of 'ora.mdnsd' on '10gnode1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on '10gnode1'
CRS-2676: Start of 'ora.gpnpd' on '10gnode1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on '10gnode1'
CRS-2672: Attempting to start 'ora.gipcd' on '10gnode1'
CRS-2676: Start of 'ora.cssdmonitor' on '10gnode1' succeeded
CRS-2676: Start of 'ora.gipcd' on '10gnode1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on '10gnode1'
CRS-2672: Attempting to start 'ora.diskmon' on '10gnode1'
CRS-2676: Start of 'ora.diskmon' on '10gnode1' succeeded
CRS-2676: Start of 'ora.cssd' on '10gnode1' succeeded
ASM created and started successfully.
Disk Group CRS created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Successful addition of voting disk 067b68c8ffc34fd9bf3bb529b7737ecb.
Successfully replaced voting disk group with +CRS.
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 067b68c8ffc34fd9bf3bb529b7737ecb (/dev/oracleasm/disks/DSK1) [CRS]
Located 1 voting disk(s).
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[oracle@10gnode1 ~]$
From the above log, it’s clear that the CRS diskgroup was created with the disk ‘/dev/oracleasm/disks/DSK1’, and that the OCR and voting disks were created in that diskgroup.
To have the ASM diskgroup created with the required disk, and the OCR and voting disks created under it, we need to pass the right values in the CRSCONFIG_PARAMS file. In my case it’s under “/u01/app/11.2.0/grid3/crs/install/”.
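For illustration, these are the kind of entries in crsconfig_params that drive where root.sh places ASM, the OCR and the voting files. The parameter names are from a typical 11.2 install, and the values shown are what this environment would need, so treat this as an assumed excerpt rather than the actual file:

```
# Assumed excerpt of /u01/app/11.2.0/grid3/crs/install/crsconfig_params
ASM_DISK_GROUP=CRS
ASM_DISCOVERY_STRING=/dev/oracleasm/disks
ASM_DISKS=/dev/oracleasm/disks/DSK1
ASM_REDUNDANCY=EXTERNAL
```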
Once the script is executed, let’s verify that everything is fine on node 10gnode1.
10gnode1:
[oracle@10gnode1 ~]$ ps -ef | grep d.bin root 7714 1 0 21:53 ? 00:00:04 /u01/app/11.2.0/grid3/bin/ohasd.bin reboot oracle 9876 1 0 21:56 ? 00:00:00 /u01/app/11.2.0/grid3/bin/mdnsd.bin oracle 9890 1 0 21:56 ? 00:00:00 /u01/app/11.2.0/grid3/bin/gpnpd.bin oracle 9909 1 0 21:56 ? 00:00:01 /u01/app/11.2.0/grid3/bin/gipcd.bin oracle 9944 1 0 21:56 ? 00:00:02 /u01/app/11.2.0/grid3/bin/ocssd.bin root 10028 1 0 21:56 ? 00:00:00 /u01/app/11.2.0/grid3/bin/octssd.bin root 10049 1 0 21:56 ? 00:00:02 /u01/app/11.2.0/grid3/bin/osysmond.bin root 10193 1 1 21:57 ? 00:00:06 /u01/app/11.2.0/grid3/bin/crsd.bin reboot oracle 10210 1 0 21:57 ? 00:00:00 /u01/app/11.2.0/grid3/bin/evmd.bin oracle 11421 4263 0 22:02 pts/1 00:00:00 grep d.bin
[oracle@10gnode1 ~]$ cd /u01/app/11.2.0/grid3/bin [oracle@10gnode1 bin]$ ./crsctl check crs CRS-4638: Oracle High Availability Services is online CRS-4537: Cluster Ready Services is online CRS-4529: Cluster Synchronization Services is online CRS-4533: Event Manager is online
[oracle@10gnode1 bin]$ ps -ef | grep pmon oracle 10133 1 0 21:57 ? 00:00:00 asm_pmon_+ASM1 oracle 11477 4263 0 22:03 pts/1 00:00:00 grep pmon
It can be seen that all is fine on node 10gnode1. Now, let’s proceed with node 10gnode2 by running the root.sh script there.
10gnode2:
[root@10gnode2 ~]# cd /u01/app/11.2.0/grid3 [root@10gnode2 grid3]# ./root.sh Check /u01/app/11.2.0/grid3/install/root_10gnode2.mydomain_2014-11-14_22-05-58.log for the output of root script [root@10gnode2 grid3]#
Here is the outcome of the above execution on 10gnode2.
[oracle@10gnode2 ~]$ cat /u01/app/11.2.0/grid3/install/root_10gnode2.mydomain_2014-11-14_22-05-58.log
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/11.2.0/grid3
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid3/crs/install/crsconfig_params
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node 10gnode1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[oracle@10gnode2 ~]$
[oracle@10gnode2 ~]$ ps -ef | grep d.bin root 5121 1 0 22:06 ? 00:00:05 /u01/app/11.2.0/grid3/bin/ohasd.bin reboot oracle 6547 1 0 22:08 ? 00:00:00 /u01/app/11.2.0/grid3/bin/mdnsd.bin oracle 6561 1 0 22:08 ? 00:00:00 /u01/app/11.2.0/grid3/bin/gpnpd.bin oracle 6580 1 0 22:08 ? 00:00:01 /u01/app/11.2.0/grid3/bin/gipcd.bin oracle 6629 1 0 22:08 ? 00:00:04 /u01/app/11.2.0/grid3/bin/ocssd.bin root 6695 1 0 22:08 ? 00:00:01 /u01/app/11.2.0/grid3/bin/octssd.bin root 6717 1 0 22:08 ? 00:00:02 /u01/app/11.2.0/grid3/bin/osysmond.bin root 6899 1 1 22:09 ? 00:00:05 /u01/app/11.2.0/grid3/bin/crsd.bin reboot oracle 6915 1 0 22:09 ? 00:00:01 /u01/app/11.2.0/grid3/bin/evmd.bin oracle 8136 7522 0 22:16 pts/2 00:00:00 grep d.bin
[oracle@10gnode2 ~]$ cd /u01/app/11.2.0/grid3/bin/ [oracle@10gnode2 bin]$ ./crsctl check crs CRS-4638: Oracle High Availability Services is online CRS-4537: Cluster Ready Services is online CRS-4529: Cluster Synchronization Services is online CRS-4533: Event Manager is online
[oracle@10gnode2 bin]$ ps -ef | grep pmon oracle 6841 1 0 22:09 ? 00:00:00 asm_pmon_+ASM2 oracle 8807 7522 0 22:22 pts/2 00:00:00 grep pmon
It’s clear that all is fine on 10gnode2 as well. Let me mount the remaining diskgroups in the ASM instance and start the database instances.
Note that the loss of the voting disks and OCR has no impact on the other diskgroups, so we do not need to recreate, delete, or otherwise touch them.
[oracle@10gnode1 ~]$ . oraenv ORACLE_SID = [oracle] ? +ASM1 The Oracle base has been set to /u01/app/oracle [oracle@10gnode1 ~]$ sqlplus / as sysasm SQL*Plus: Release 11.2.0.3.0 Production on Fri Nov 14 22:22:41 2014 Copyright (c) 1982, 2011, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production With the Real Application Clusters and Automatic Storage Management options SQL> select name,state from v$asm_diskgroup; NAME STATE ------------------------------ ----------- CRS MOUNTED FRA DISMOUNTED DATA DISMOUNTED
Let me mount the DATA and FRA diskgroups.
SQL> alter diskgroup FRA mount; Diskgroup altered. SQL> alter diskgroup DATA mount; Diskgroup altered. SQL> select name,state from v$asm_diskgroup; NAME STATE ------------------------------ ----------- CRS MOUNTED FRA MOUNTED DATA MOUNTED
All done. Let’s check the same on the other node.
10gnode2:
[oracle@10gnode2 ~]$ . oraenv ORACLE_SID = [+ASM2] ? The Oracle base remains unchanged with value /u01/app/oracle [oracle@10gnode2 ~]$ [oracle@10gnode2 ~]$ sqlplus / as sysasm SQL*Plus: Release 11.2.0.3.0 Production on Fri Nov 14 22:24:28 2014 Copyright (c) 1982, 2011, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production With the Real Application Clusters and Automatic Storage Management options SQL> select name,state from v$asm_diskgroup; NAME STATE ------------------------------ ----------- CRS MOUNTED FRA MOUNTED DATA MOUNTED
Everything is fine. Let me check the same using SRVCTL.
[oracle@10gnode1 ~]$ srvctl status asm ASM is running on 10gnode1,10gnode2
Now it’s time to add the resources back to the cluster. First let me add the database, and then its instances.
[oracle@10gnode1 bin]$ srvctl status database -d primt -v -f PRCD-1120 : The resource for database primt could not be found. PRCR-1001 : Resource ora.primt.db does not exist
As expected from the above, the database is not yet registered. Let me register it.
[oracle@10gnode1 bin]$ srvctl add database -d primt -o /u01/app/oracle/product/11.2.0/db3 [oracle@10gnode1 bin]$ srvctl config database -d primt Database unique name: primt Database name: Oracle home: /u01/app/oracle/product/11.2.0/db3 Oracle user: oracle Spfile: Domain: Start options: open Stop options: immediate Database role: PRIMARY Management policy: AUTOMATIC Server pools: primt Database instances: primt1,primt2 Disk Groups: Mount point paths: Services: Type: RAC Database is administrator managed [oracle@10gnode1 bin]$
[oracle@10gnode1 bin]$ srvctl status database -d primt Database is not running.
Now it’s time to add the instances.
[oracle@10gnode1 bin]$ srvctl add instance -i primt1 -d primt -n 10gnode1 [oracle@10gnode1 bin]$ srvctl add instance -i primt2 -d primt -n 10gnode2
Start the database using the “srvctl start database -d <db_unique_name>” command and check its status.
[oracle@10gnode1 bin]$ srvctl status database -d primt -v -f Instance primt1 is running on node 10gnode1. Instance status: Open. Instance primt2 is running on node 10gnode2. Instance status: Open.
If there were services configured for the database, then those too need to be added using “srvctl add service” command.
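A sketch of what that would look like for the PRIMT_APPL service seen earlier; the preferred/available instance split here is an assumption based on the service having run on 10gnode1 before the loss:

```
srvctl add service -d primt -s primt_appl -r primt1 -a primt2
srvctl start service -d primt -s primt_appl
srvctl status service -d primt -s primt_appl
```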
Here we go !!
In this post, I’m demonstrating the strategy for applying a PSU patch to the Grid home and the Database home in a rolling fashion.
Environment:
2-node RAC: 10gnode1 and 10gnode2
Database & Grid version: 11.2.0.3
PSU patch: 11.2.0.3.4 (14275572)
Database running: PRIMT
DB service: PRIMT_APPL
On node 10gnode1:
List of patches that have been applied on the GRID Home:
[oracle@10gnode1 ~]$ opatch lsinventory -oh /u01/app/11.2.0/grid3 Invoking OPatch 11.2.0.1.7 Oracle Interim Patch Installer version 11.2.0.1.7 Copyright (c) 2011, Oracle Corporation. All rights reserved. Oracle Home : /u01/app/11.2.0/grid3 Central Inventory : /u01/app/oraInventory from : /etc/oraInst.loc OPatch version : 11.2.0.1.7 OUI version : 11.2.0.3.0 Log file location : /u01/app/11.2.0/grid3/cfgtoollogs/opatch/opatch2014-08-24_08-46-26AM.log Lsinventory Output file location : /u01/app/11.2.0/grid3/cfgtoollogs/opatch/lsinv/lsinventory2014-08-24_08-46-26AM.txt -------------------------------------------------------------------------------- Installed Top-level Products (1): Oracle Grid Infrastructure 11.2.0.3.0 There are 1 products installed in this Oracle Home. There are no Interim patches installed in this Oracle Home. Rac system comprising of multiple nodes Local node = 10gnode1 Remote node = 10gnode2 -------------------------------------------------------------------------------- OPatch succeeded. [oracle@10gnode1 ~]$
List of Patches applied on DB Home:
[oracle@10gnode1 ~]$ opatch lsinventory -oh /u01/app/oracle/product/11.2.0/db3 Invoking OPatch 11.2.0.1.7 Oracle Interim Patch Installer version 11.2.0.1.7 Copyright (c) 2011, Oracle Corporation. All rights reserved. Oracle Home : /u01/app/oracle/product/11.2.0/db3 Central Inventory : /u01/app/oraInventory from : /etc/oraInst.loc OPatch version : 11.2.0.1.7 OUI version : 11.2.0.3.0 Log file location : /u01/app/oracle/product/11.2.0/db3/cfgtoollogs/opatch/opatch2014-08-24_08-46-40AM.log Lsinventory Output file location : /u01/app/oracle/product/11.2.0/db3/cfgtoollogs/opatch/lsinv/lsinventory2014-08-24_08-46-40AM.txt -------------------------------------------------------------------------------- Installed Top-level Products (1): Oracle Database 11g 11.2.0.3.0 There are 1 products installed in this Oracle Home. There are no Interim patches installed in this Oracle Home. Rac system comprising of multiple nodes Local node = 10gnode1 Remote node = 10gnode2 --------------------------------------------------------------------------------
On node 10gnode2:
List of patches applied on GRID Home:
[oracle@10gnode2 ~]$ opatch lsinventory -oh /u01/app/11.2.0/grid3 Invoking OPatch 11.2.0.1.7 Oracle Interim Patch Installer version 11.2.0.1.7 Copyright (c) 2011, Oracle Corporation. All rights reserved. Oracle Home : /u01/app/11.2.0/grid3 Central Inventory : /u01/app/oraInventory from : /etc/oraInst.loc OPatch version : 11.2.0.1.7 OUI version : 11.2.0.3.0 Log file location : /u01/app/11.2.0/grid3/cfgtoollogs/opatch/opatch2014-08-24_08-50-37AM.log Lsinventory Output file location : /u01/app/11.2.0/grid3/cfgtoollogs/opatch/lsinv/lsinventory2014-08-24_08-50-37AM.txt -------------------------------------------------------------------------------- Installed Top-level Products (1): Oracle Grid Infrastructure 11.2.0.3.0 There are 1 products installed in this Oracle Home. There are no Interim patches installed in this Oracle Home. Rac system comprising of multiple nodes Local node = 10gnode2 Remote node = 10gnode1 -------------------------------------------------------------------------------- OPatch succeeded.
List of Patches applied on DB Home:
[oracle@10gnode2 ~]$ opatch lsinventory -oh /u01/app/oracle/product/11.2.0/db3 Invoking OPatch 11.2.0.1.7 Oracle Interim Patch Installer version 11.2.0.1.7 Copyright (c) 2011, Oracle Corporation. All rights reserved. Oracle Home : /u01/app/oracle/product/11.2.0/db3 Central Inventory : /u01/app/oraInventory from : /etc/oraInst.loc OPatch version : 11.2.0.1.7 OUI version : 11.2.0.3.0 Log file location : /u01/app/oracle/product/11.2.0/db3/cfgtoollogs/opatch/opatch2014-08-24_08-50-54AM.log Lsinventory Output file location : /u01/app/oracle/product/11.2.0/db3/cfgtoollogs/opatch/lsinv/lsinventory2014-08-24_08-50-54AM.txt -------------------------------------------------------------------------------- Installed Top-level Products (1): Oracle Database 11g 11.2.0.3.0 There are 1 products installed in this Oracle Home. There are no Interim patches installed in this Oracle Home. Rac system comprising of multiple nodes Local node = 10gnode2 Remote node = 10gnode1 -------------------------------------------------------------------------------- OPatch succeeded.
It can be seen that no patches are currently applied to either the Grid or the DB home on either node.
Let’s check the DB instances running on these nodes:
[oracle@10gnode1 ~]$ srvctl status database -d primt -v -f Instance primt1 is running on node 10gnode1 with online services PRIMT_APPL. Instance status: Open. Instance primt2 is running on node 10gnode2. Instance status: Open.
Check whether any conflicts arise from applying the 11.2.0.3.4 PSU patch to the Grid and DB homes.
[oracle@10gnode1 ~]$ opatch prereq CheckConflictAgainstOHWithDetail -ph /opt/112034_GI_PSU/14275605/14275605 -oh /u01/app/11.2.0/grid3 Invoking OPatch 11.2.0.1.7 Oracle Interim Patch Installer version 11.2.0.1.7 Copyright (c) 2011, Oracle Corporation. All rights reserved. PREREQ session Oracle Home : /u01/app/11.2.0/grid3 Central Inventory : /u01/app/oraInventory from : /etc/oraInst.loc OPatch version : 11.2.0.1.7 OUI version : 11.2.0.3.0 Log file location : /u01/app/11.2.0/grid3/cfgtoollogs/opatch/opatch2014-08-24_09-54-37AM.log Invoking prereq "checkconflictagainstohwithdetail" Prereq "checkConflictAgainstOHWithDetail" passed. OPatch succeeded. [oracle@10gnode1 ~]$ [oracle@10gnode1 ~]$
[oracle@10gnode1 ~]$ opatch prereq CheckConflictAgainstOHWithDetail -ph /opt/112034_GI_PSU/14275605/14275605 -oh /u01/app/oracle/product/11.2.0/db3 Invoking OPatch 11.2.0.1.7 Oracle Interim Patch Installer version 11.2.0.1.7 Copyright (c) 2011, Oracle Corporation. All rights reserved. PREREQ session Oracle Home : /u01/app/oracle/product/11.2.0/db3 Central Inventory : /u01/app/oraInventory from : /etc/oraInst.loc OPatch version : 11.2.0.1.7 OUI version : 11.2.0.3.0 Log file location : /u01/app/oracle/product/11.2.0/db3/cfgtoollogs/opatch/opatch2014-08-24_09-54-54AM.log Invoking prereq "checkconflictagainstohwithdetail" Prereq "checkConflictAgainstOHWithDetail" passed. OPatch succeeded. [oracle@10gnode1 ~]$
[oracle@10gnode1 ~]$ opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /opt/112034_GI_PSU/14275572 -oh /u01/app/11.2.0/grid3 Invoking OPatch 11.2.0.1.7 Oracle Interim Patch Installer version 11.2.0.1.7 Copyright (c) 2011, Oracle Corporation. All rights reserved. PREREQ session Oracle Home : /u01/app/11.2.0/grid3 Central Inventory : /u01/app/oraInventory from : /etc/oraInst.loc OPatch version : 11.2.0.1.7 OUI version : 11.2.0.3.0 Log file location : /u01/app/11.2.0/grid3/cfgtoollogs/opatch/opatch2014-08-24_09-55-30AM.log Invoking prereq "checkconflictagainstohwithdetail" Prereq "checkConflictAgainstOHWithDetail" passed. OPatch succeeded. [oracle@10gnode1 ~]$
[oracle@10gnode1 ~]$ opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /opt/112034_GI_PSU/14275572/custom/server/14275572 -oh /u01/app/oracle/product/11.2.0/db3 Invoking OPatch 11.2.0.1.7 Oracle Interim Patch Installer version 11.2.0.1.7 Copyright (c) 2011, Oracle Corporation. All rights reserved. PREREQ session Oracle Home : /u01/app/oracle/product/11.2.0/db3 Central Inventory : /u01/app/oraInventory from : /etc/oraInst.loc OPatch version : 11.2.0.1.7 OUI version : 11.2.0.3.0 Log file location : /u01/app/oracle/product/11.2.0/db3/cfgtoollogs/opatch/opatch2014-08-24_09-55-55AM.log Invoking prereq "checkconflictagainstohwithdetail" Prereq "checkConflictAgainstOHWithDetail" passed. OPatch succeeded.
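The opatch auto run below passes an OCM response file (/home/oracle/ocm.rsp). If one is not already on hand, it can be generated beforehand with the emocmrsp utility shipped under OPatch; the flags shown are the ones I’d expect for this OPatch release, so verify them against your version:

```
cd /u01/app/11.2.0/grid3/OPatch/ocm/bin
./emocmrsp -no_banner -output /home/oracle/ocm.rsp
```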
Now, as the root user, apply the GI PSU patch to both the Grid home and the DB home on node 10gnode1 only, using the “opatch auto” command.
[root@10gnode1 ~]# opatch auto /opt/112034_GI_PSU/ -ocmrf /home/oracle/ocm.rsp Executing /u01/app/11.2.0/grid3/perl/bin/perl /u01/app/11.2.0/grid3/OPatch/crs/patch11203.pl -patchdir /opt -patchn 112034_GI_PSU -ocmrf /home/oracle/ocm.rsp -paramfile /u01/app/11.2.0/grid3/crs/install/crsconfig_params This is the main log file: /u01/app/11.2.0/grid3/cfgtoollogs/opatchauto2014-08-24_11-42-53.log This file will show your detected configuration and all the steps that opatchauto attempted to do on your system: /u01/app/11.2.0/grid3/cfgtoollogs/opatchauto2014-08-24_11-42-53.report.log 2014-08-24 11:42:53: Starting Clusterware Patch Setup Using configuration parameter file: /u01/app/11.2.0/grid3/crs/install/crsconfig_params Stopping RAC /u01/app/oracle/product/11.2.0/db3 ... Stopped RAC /u01/app/oracle/product/11.2.0/db3 successfully patch /opt/112034_GI_PSU/14275572/custom/server/14275572 apply successful for home /u01/app/oracle/product/11.2.0/db3 patch /opt/112034_GI_PSU/14275605 apply successful for home /u01/app/oracle/product/11.2.0/db3 Stopping CRS... Stopped CRS successfully patch /opt/112034_GI_PSU/14275572 apply successful for home /u01/app/11.2.0/grid3 patch /opt/112034_GI_PSU/14275605 apply successful for home /u01/app/11.2.0/grid3 Starting CRS... CRS-4123: Oracle High Availability Services has been started. Starting RAC /u01/app/oracle/product/11.2.0/db3 ... Started RAC /u01/app/oracle/product/11.2.0/db3 successfully opatch auto succeeded. [root@10gnode1 ~]#
From the above log, it’s clear that the patch has been applied successfully to both the Grid and DB homes on node 10gnode1.
Let’s verify the same.
[oracle@10gnode1 grid3]$ opatch lsinventory -oh /u01/app/11.2.0/grid3
Oracle Interim Patch Installer version 11.2.0.3.6
Copyright (c) 2013, Oracle Corporation. All rights reserved.
Oracle Home : /u01/app/11.2.0/grid3
Central Inventory : /u01/app/oraInventory
from : /u01/app/11.2.0/grid3/oraInst.loc
OPatch version : 11.2.0.3.6
OUI version : 11.2.0.3.0
Log file location : /u01/app/11.2.0/grid3/cfgtoollogs/opatch/opatch2014-08-24_12-08-16PM_1.log
Lsinventory Output file location : /u01/app/11.2.0/grid3/cfgtoollogs/opatch/lsinv/lsinventory2014-08-24_12-08-16PM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Grid Infrastructure 11.2.0.3.0
There are 1 product(s) installed in this Oracle Home.
Interim patches (2) :
Patch 14275605 : applied on Sun Aug 24 12:01:58 IST 2014
Unique Patch ID: 15398795
Patch description: "Database Patch Set Update : 11.2.0.3.4 (14275605)"
Created on 3 Oct 2012, 23:24:57 hrs PST8PDT
Sub-patch 13923374; "Database Patch Set Update : 11.2.0.3.3 (13923374)"
Sub-patch 13696216; "Database Patch Set Update : 11.2.0.3.2 (13696216)"
Sub-patch 13343438; "Database Patch Set Update : 11.2.0.3.1 (13343438)"
Bugs fixed:
14480676, 13566938, 13419660, 10350832, 13632717, 14063281, 12919564
13624984, 13430938, 13467683, 13588248, 13420224, 14548763, 13080778
12646784, 13804294, 12861463, 12834027, 13377816, 13036331, 12880299
14664355, 13499128, 14409183, 12998795, 12829021, 13492735, 12794305
13503598, 10133521, 12718090, 13742433, 12905058, 12401111, 13742434
13257247, 12849688, 13362079, 12950644, 13742435, 13464002, 12917230
13923374, 12879027, 14613900, 12585543, 12535346, 14480675, 12588744
11877623, 14480674, 13916709, 12847466, 13773133, 14076523, 13649031
13340388, 13366202, 13528551, 13981051, 12894807, 13343438, 12582664
12748240, 12797765, 13385346, 12923168, 13384182, 13612575, 13466801
13484963, 12971775, 11063191, 13772618, 13070939, 12797420, 13035804
13041324, 12976376, 11708510, 13742437, 13737746, 14062795, 13035360
12693626, 13742438, 13326736, 13332439, 14038787, 14062796, 12913474
13001379, 14390252, 13099577, 13370330, 13059165, 14062797, 14275605
9873405, 13742436, 9858539, 14062794, 13358781, 12960925, 13699124
12662040, 9703627, 12617123, 13338048, 12938841, 12658411, 12620823
12845115, 12656535, 14062793, 12678920, 12764337, 13354082, 13397104
14062792, 13250244, 12594032, 9761357, 12612118, 13742464, 13550185
13457582, 13527323, 12780983, 12583611, 13502183, 12780098, 13705338
13696216, 13476583, 11840910, 13903046, 13572659, 13718279, 13554409
13657605, 13103913, 14063280
Patch 14275572 : applied on Sun Aug 24 11:58:24 IST 2014
Unique Patch ID: 15398795
Patch description: "Grid Infrastructure Patch Set Update : 11.2.0.3.4 (14275572)"
Created on 12 Oct 2012, 05:40:32 hrs PST8PDT
Bugs fixed:
14275572, 13919095, 13696251, 13348650, 12659561, 13039908, 13825231
13036424, 12794268, 13011520, 13569812, 12758736, 13000491, 13498267
13077654, 13001901, 13550689, 13430715, 13806545, 11675721, 14082976
12771830, 12538907, 13947200, 13066371, 13483672, 12594616, 13540563
12897651, 12897902, 13241779, 12896850, 12726222, 12829429, 12728585
13079948, 12876314, 13090686, 12925041, 12995950, 13251796, 12650672
12398492, 12848480, 13582411, 13652088, 12990582, 13857364, 12975811
12917897, 13082238, 12947871, 13037709, 13371153, 12878750, 10114953
11772838, 13058611, 13001955, 11836951, 12965049, 13440962, 12765467
13727853, 13425727, 12885323, 13965075, 13339443, 12784559, 13332363
13074261, 12971251, 13811209, 12709476, 13460353, 13523527, 12857064
13719731, 13396284, 12899169, 13111013, 13323698, 12867511, 12639013
12959140, 13085732, 12829917, 10317921, 13843080, 12934171, 12849377
12349553, 13924431, 13869978, 12680491, 12914824, 13789135, 12730342
13334158, 12950823, 10418841, 13355963, 13531373, 13776758, 12720728
13620816, 13002015, 13023609, 13024624, 12791719, 13886023, 13255295
13821454, 12782756, 14625969, 14152875, 14186070, 12873909, 14214257
12914722, 13243172, 12842804, 13045518, 12765868, 12772345, 12663376
13345868, 14059576, 13683090, 12932852, 13889047, 12695029, 13146560
13038806, 14251904, 14070200, 13820621, 14304758, 13396356, 13697828
13258062, 12834777, 12996572, 13941934, 13657366, 13019958, 12810890
13888719, 13502441, 13726162, 13880925, 14153867, 13506114, 12820045
13604057, 12823838, 13877508, 12823042, 14494305, 13582706, 13617861
12825835, 13263435, 13025879, 13853089, 14009845, 13410987, 13570879
13637590, 12827493, 13247273, 13068077
Rac system comprising of multiple nodes
Local node = 10gnode1
Remote node = 10gnode2
--------------------------------------------------------------------------------
OPatch succeeded.
[oracle@10gnode1 grid3]$
[oracle@10gnode1 grid3]$ opatch lsinventory -oh /u01/app/oracle/product/11.2.0/db3
Oracle Interim Patch Installer version 11.2.0.3.6
Copyright (c) 2013, Oracle Corporation. All rights reserved.
Oracle Home : /u01/app/oracle/product/11.2.0/db3
Central Inventory : /u01/app/oraInventory
from : /u01/app/oracle/product/11.2.0/db3/oraInst.loc
OPatch version : 11.2.0.3.6
OUI version : 11.2.0.3.0
Log file location : /u01/app/oracle/product/11.2.0/db3/cfgtoollogs/opatch/opatch2014-08-24_12-08-26PM_1.log
Lsinventory Output file location : /u01/app/oracle/product/11.2.0/db3/cfgtoollogs/opatch/lsinv/lsinventory2014-08-24_12-08-26PM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Database 11g 11.2.0.3.0
There are 1 product(s) installed in this Oracle Home.
Interim patches (2) :
Patch 14275605 : applied on Sun Aug 24 11:49:55 IST 2014
Unique Patch ID: 15398795
Patch description: "Database Patch Set Update : 11.2.0.3.4 (14275605)"
Created on 3 Oct 2012, 23:24:57 hrs PST8PDT
Sub-patch 13923374; "Database Patch Set Update : 11.2.0.3.3 (13923374)"
Sub-patch 13696216; "Database Patch Set Update : 11.2.0.3.2 (13696216)"
Sub-patch 13343438; "Database Patch Set Update : 11.2.0.3.1 (13343438)"
Bugs fixed:
14480676, 13566938, 13419660, 10350832, 13632717, 14063281, 12919564
13624984, 13430938, 13467683, 13588248, 13420224, 14548763, 13080778
12646784, 13804294, 12861463, 12834027, 13377816, 13036331, 12880299
14664355, 13499128, 14409183, 12998795, 12829021, 13492735, 12794305
13503598, 10133521, 12718090, 13742433, 12905058, 12401111, 13742434
13257247, 12849688, 13362079, 12950644, 13742435, 13464002, 12917230
13923374, 12879027, 14613900, 12585543, 12535346, 14480675, 12588744
11877623, 14480674, 13916709, 12847466, 13773133, 14076523, 13649031
13340388, 13366202, 13528551, 13981051, 12894807, 13343438, 12582664
12748240, 12797765, 13385346, 12923168, 13384182, 13612575, 13466801
13484963, 12971775, 11063191, 13772618, 13070939, 12797420, 13035804
13041324, 12976376, 11708510, 13742437, 13737746, 14062795, 13035360
12693626, 13742438, 13326736, 13332439, 14038787, 14062796, 12913474
13001379, 14390252, 13099577, 13370330, 13059165, 14062797, 14275605
9873405, 13742436, 9858539, 14062794, 13358781, 12960925, 13699124
12662040, 9703627, 12617123, 13338048, 12938841, 12658411, 12620823
12845115, 12656535, 14062793, 12678920, 12764337, 13354082, 13397104
14062792, 13250244, 12594032, 9761357, 12612118, 13742464, 13550185
13457582, 13527323, 12780983, 12583611, 13502183, 12780098, 13705338
13696216, 13476583, 11840910, 13903046, 13572659, 13718279, 13554409
13657605, 13103913, 14063280
Patch 14275572 : applied on Sun Aug 24 11:45:00 IST 2014
Unique Patch ID: 15398795
Patch description: "Grid Infrastructure Patch Set Update : 11.2.0.3.4 (14275572)"
Created on 12 Oct 2012, 05:40:32 hrs PST8PDT
Bugs fixed:
14275572, 13919095, 13696251, 13348650, 12659561, 13039908, 13825231
13036424, 12794268, 13011520, 13569812, 12758736, 13000491, 13498267
13077654, 13001901, 13550689, 13430715, 13806545, 11675721, 14082976
12771830, 12538907, 13947200, 13066371, 13483672, 12594616, 13540563
12897651, 12897902, 13241779, 12896850, 12726222, 12829429, 12728585
13079948, 12876314, 13090686, 12925041, 12995950, 13251796, 12650672
12398492, 12848480, 13582411, 13652088, 12990582, 13857364, 12975811
12917897, 13082238, 12947871, 13037709, 13371153, 12878750, 10114953
11772838, 13058611, 13001955, 11836951, 12965049, 13440962, 12765467
13727853, 13425727, 12885323, 13965075, 13339443, 12784559, 13332363
13074261, 12971251, 13811209, 12709476, 13460353, 13523527, 12857064
13719731, 13396284, 12899169, 13111013, 13323698, 12867511, 12639013
12959140, 13085732, 12829917, 10317921, 13843080, 12934171, 12849377
12349553, 13924431, 13869978, 12680491, 12914824, 13789135, 12730342
13334158, 12950823, 10418841, 13355963, 13531373, 13776758, 12720728
13620816, 13002015, 13023609, 13024624, 12791719
Rac system comprising of multiple nodes
Local node = 10gnode1
Remote node = 10gnode2
--------------------------------------------------------------------------------
OPatch succeeded.
[oracle@10gnode1 grid3]$
Now repeat the same step on node 10gnode2: apply the PSU to the Grid and database homes by running the "opatch auto" command as the root user.
[root@10gnode2 ~]# opatch auto /opt/112034_GI_PSU/ -ocmrf /home/oracle/ocm.rsp
Executing /u01/app/11.2.0/grid3/perl/bin/perl /u01/app/11.2.0/grid3/OPatch/crs/patch11203.pl -patchdir /opt -patchn 112034_GI_PSU -ocmrf /home/oracle/ocm.rsp -paramfile /u01/app/11.2.0/grid3/crs/install/crsconfig_params
This is the main log file: /u01/app/11.2.0/grid3/cfgtoollogs/opatchauto2014-08-24_12-26-02.log
This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:
/u01/app/11.2.0/grid3/cfgtoollogs/opatchauto2014-08-24_12-26-02.report.log
2014-08-24 12:26:02: Starting Clusterware Patch Setup
Using configuration parameter file: /u01/app/11.2.0/grid3/crs/install/crsconfig_params
Stopping RAC /u01/app/oracle/product/11.2.0/db3 ...
Stopped RAC /u01/app/oracle/product/11.2.0/db3 successfully
patch /opt/112034_GI_PSU/14275572/custom/server/14275572 apply successful for home /u01/app/oracle/product/11.2.0/db3
patch /opt/112034_GI_PSU/14275605 apply successful for home /u01/app/oracle/product/11.2.0/db3
Stopping CRS...
Stopped CRS successfully
patch /opt/112034_GI_PSU/14275572 apply successful for home /u01/app/11.2.0/grid3
patch /opt/112034_GI_PSU/14275605 apply successful for home /u01/app/11.2.0/grid3
Starting CRS...
CRS-4123: Oracle High Availability Services has been started.
Starting RAC /u01/app/oracle/product/11.2.0/db3 ...
Started RAC /u01/app/oracle/product/11.2.0/db3 successfully
opatch auto succeeded.
As the log shows, the patch has now been applied successfully to both homes on both nodes.
Let's cross-check with "opatch lsinventory" against each home.
[oracle@10gnode2 ~]$ /u01/app/11.2.0/grid3/OPatch/opatch lsinventory -oh /u01/app/11.2.0/grid3
Oracle Interim Patch Installer version 11.2.0.3.6
Copyright (c) 2013, Oracle Corporation. All rights reserved.
Oracle Home : /u01/app/11.2.0/grid3
Central Inventory : /u01/app/oraInventory
from : /u01/app/11.2.0/grid3/oraInst.loc
OPatch version : 11.2.0.3.6
OUI version : 11.2.0.3.0
Log file location : /u01/app/11.2.0/grid3/cfgtoollogs/opatch/opatch2014-08-24_12-48-00PM_1.log
Lsinventory Output file location : /u01/app/11.2.0/grid3/cfgtoollogs/opatch/lsinv/lsinventory2014-08-24_12-48-00PM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Grid Infrastructure 11.2.0.3.0
There are 1 product(s) installed in this Oracle Home.
Interim patches (2) :
Patch 14275605 : applied on Sun Aug 24 12:43:06 IST 2014
Unique Patch ID: 15398795
Patch description: "Database Patch Set Update : 11.2.0.3.4 (14275605)"
Created on 3 Oct 2012, 23:24:57 hrs PST8PDT
Sub-patch 13923374; "Database Patch Set Update : 11.2.0.3.3 (13923374)"
Sub-patch 13696216; "Database Patch Set Update : 11.2.0.3.2 (13696216)"
Sub-patch 13343438; "Database Patch Set Update : 11.2.0.3.1 (13343438)"
Bugs fixed:
14480676, 13566938, 13419660, 10350832, 13632717, 14063281, 12919564
13624984, 13430938, 13467683, 13588248, 13420224, 14548763, 13080778
12646784, 13804294, 12861463, 12834027, 13377816, 13036331, 12880299
14664355, 13499128, 14409183, 12998795, 12829021, 13492735, 12794305
13503598, 10133521, 12718090, 13742433, 12905058, 12401111, 13742434
13257247, 12849688, 13362079, 12950644, 13742435, 13464002, 12917230
13923374, 12879027, 14613900, 12585543, 12535346, 14480675, 12588744
11877623, 14480674, 13916709, 12847466, 13773133, 14076523, 13649031
13340388, 13366202, 13528551, 13981051, 12894807, 13343438, 12582664
12748240, 12797765, 13385346, 12923168, 13384182, 13612575, 13466801
13484963, 12971775, 11063191, 13772618, 13070939, 12797420, 13035804
13041324, 12976376, 11708510, 13742437, 13737746, 14062795, 13035360
12693626, 13742438, 13326736, 13332439, 14038787, 14062796, 12913474
13001379, 14390252, 13099577, 13370330, 13059165, 14062797, 14275605
9873405, 13742436, 9858539, 14062794, 13358781, 12960925, 13699124
12662040, 9703627, 12617123, 13338048, 12938841, 12658411, 12620823
12845115, 12656535, 14062793, 12678920, 12764337, 13354082, 13397104
14062792, 13250244, 12594032, 9761357, 12612118, 13742464, 13550185
13457582, 13527323, 12780983, 12583611, 13502183, 12780098, 13705338
13696216, 13476583, 11840910, 13903046, 13572659, 13718279, 13554409
13657605, 13103913, 14063280
Patch 14275572 : applied on Sun Aug 24 12:39:47 IST 2014
Unique Patch ID: 15398795
Patch description: "Grid Infrastructure Patch Set Update : 11.2.0.3.4 (14275572)"
Created on 12 Oct 2012, 05:40:32 hrs PST8PDT
Bugs fixed:
14275572, 13919095, 13696251, 13348650, 12659561, 13039908, 13825231
13036424, 12794268, 13011520, 13569812, 12758736, 13000491, 13498267
13077654, 13001901, 13550689, 13430715, 13806545, 11675721, 14082976
12771830, 12538907, 13947200, 13066371, 13483672, 12594616, 13540563
12897651, 12897902, 13241779, 12896850, 12726222, 12829429, 12728585
13079948, 12876314, 13090686, 12925041, 12995950, 13251796, 12650672
12398492, 12848480, 13582411, 13652088, 12990582, 13857364, 12975811
12917897, 13082238, 12947871, 13037709, 13371153, 12878750, 10114953
11772838, 13058611, 13001955, 11836951, 12965049, 13440962, 12765467
13727853, 13425727, 12885323, 13965075, 13339443, 12784559, 13332363
13074261, 12971251, 13811209, 12709476, 13460353, 13523527, 12857064
13719731, 13396284, 12899169, 13111013, 13323698, 12867511, 12639013
12959140, 13085732, 12829917, 10317921, 13843080, 12934171, 12849377
12349553, 13924431, 13869978, 12680491, 12914824, 13789135, 12730342
13334158, 12950823, 10418841, 13355963, 13531373, 13776758, 12720728
13620816, 13002015, 13023609, 13024624, 12791719, 13886023, 13255295
13821454, 12782756, 14625969, 14152875, 14186070, 12873909, 14214257
12914722, 13243172, 12842804, 13045518, 12765868, 12772345, 12663376
13345868, 14059576, 13683090, 12932852, 13889047, 12695029, 13146560
13038806, 14251904, 14070200, 13820621, 14304758, 13396356, 13697828
13258062, 12834777, 12996572, 13941934, 13657366, 13019958, 12810890
13888719, 13502441, 13726162, 13880925, 14153867, 13506114, 12820045
13604057, 12823838, 13877508, 12823042, 14494305, 13582706, 13617861
12825835, 13263435, 13025879, 13853089, 14009845, 13410987, 13570879
13637590, 12827493, 13247273, 13068077
Rac system comprising of multiple nodes
Local node = 10gnode2
Remote node = 10gnode1
--------------------------------------------------------------------------------
OPatch succeeded.
[oracle@10gnode2 ~]$
[oracle@10gnode2 ~]$ /u01/app/11.2.0/grid3/OPatch/opatch lsinventory -oh /u01/app/oracle/product/11.2.0/db3
Oracle Interim Patch Installer version 11.2.0.3.6
Copyright (c) 2013, Oracle Corporation. All rights reserved.
Oracle Home : /u01/app/oracle/product/11.2.0/db3
Central Inventory : /u01/app/oraInventory
from : /u01/app/oracle/product/11.2.0/db3/oraInst.loc
OPatch version : 11.2.0.3.6
OUI version : 11.2.0.3.0
Log file location : /u01/app/oracle/product/11.2.0/db3/cfgtoollogs/opatch/opatch2014-08-24_12-48-10PM_1.log
Lsinventory Output file location : /u01/app/oracle/product/11.2.0/db3/cfgtoollogs/opatch/lsinv/lsinventory2014-08-24_12-48-10PM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Database 11g 11.2.0.3.0
There are 1 product(s) installed in this Oracle Home.
Interim patches (2) :
Patch 14275605 : applied on Sun Aug 24 12:31:43 IST 2014
Unique Patch ID: 15398795
Patch description: "Database Patch Set Update : 11.2.0.3.4 (14275605)"
Created on 3 Oct 2012, 23:24:57 hrs PST8PDT
Sub-patch 13923374; "Database Patch Set Update : 11.2.0.3.3 (13923374)"
Sub-patch 13696216; "Database Patch Set Update : 11.2.0.3.2 (13696216)"
Sub-patch 13343438; "Database Patch Set Update : 11.2.0.3.1 (13343438)"
Bugs fixed:
14480676, 13566938, 13419660, 10350832, 13632717, 14063281, 12919564
13624984, 13430938, 13467683, 13588248, 13420224, 14548763, 13080778
12646784, 13804294, 12861463, 12834027, 13377816, 13036331, 12880299
14664355, 13499128, 14409183, 12998795, 12829021, 13492735, 12794305
13503598, 10133521, 12718090, 13742433, 12905058, 12401111, 13742434
13257247, 12849688, 13362079, 12950644, 13742435, 13464002, 12917230
13923374, 12879027, 14613900, 12585543, 12535346, 14480675, 12588744
11877623, 14480674, 13916709, 12847466, 13773133, 14076523, 13649031
13340388, 13366202, 13528551, 13981051, 12894807, 13343438, 12582664
12748240, 12797765, 13385346, 12923168, 13384182, 13612575, 13466801
13484963, 12971775, 11063191, 13772618, 13070939, 12797420, 13035804
13041324, 12976376, 11708510, 13742437, 13737746, 14062795, 13035360
12693626, 13742438, 13326736, 13332439, 14038787, 14062796, 12913474
13001379, 14390252, 13099577, 13370330, 13059165, 14062797, 14275605
9873405, 13742436, 9858539, 14062794, 13358781, 12960925, 13699124
12662040, 9703627, 12617123, 13338048, 12938841, 12658411, 12620823
12845115, 12656535, 14062793, 12678920, 12764337, 13354082, 13397104
14062792, 13250244, 12594032, 9761357, 12612118, 13742464, 13550185
13457582, 13527323, 12780983, 12583611, 13502183, 12780098, 13705338
13696216, 13476583, 11840910, 13903046, 13572659, 13718279, 13554409
13657605, 13103913, 14063280
Patch 14275572 : applied on Sun Aug 24 12:27:58 IST 2014
Unique Patch ID: 15398795
Patch description: "Grid Infrastructure Patch Set Update : 11.2.0.3.4 (14275572)"
Created on 12 Oct 2012, 05:40:32 hrs PST8PDT
Bugs fixed:
14275572, 13919095, 13696251, 13348650, 12659561, 13039908, 13825231
13036424, 12794268, 13011520, 13569812, 12758736, 13000491, 13498267
13077654, 13001901, 13550689, 13430715, 13806545, 11675721, 14082976
12771830, 12538907, 13947200, 13066371, 13483672, 12594616, 13540563
12897651, 12897902, 13241779, 12896850, 12726222, 12829429, 12728585
13079948, 12876314, 13090686, 12925041, 12995950, 13251796, 12650672
12398492, 12848480, 13582411, 13652088, 12990582, 13857364, 12975811
12917897, 13082238, 12947871, 13037709, 13371153, 12878750, 10114953
11772838, 13058611, 13001955, 11836951, 12965049, 13440962, 12765467
13727853, 13425727, 12885323, 13965075, 13339443, 12784559, 13332363
13074261, 12971251, 13811209, 12709476, 13460353, 13523527, 12857064
13719731, 13396284, 12899169, 13111013, 13323698, 12867511, 12639013
12959140, 13085732, 12829917, 10317921, 13843080, 12934171, 12849377
12349553, 13924431, 13869978, 12680491, 12914824, 13789135, 12730342
13334158, 12950823, 10418841, 13355963, 13531373, 13776758, 12720728
13620816, 13002015, 13023609, 13024624, 12791719
Rac system comprising of multiple nodes
Local node = 10gnode2
Remote node = 10gnode1
--------------------------------------------------------------------------------
OPatch succeeded.
[oracle@10gnode2 ~]$
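With two homes on each of two nodes, eyeballing four full lsinventory listings is error-prone, so the check is worth scripting. A minimal sketch of such a check; the file name is a hypothetical, and the lsinventory lines are simulated here so the snippet is self-contained (in a real run the file would be populated by "opatch lsinventory -oh <home>"):

```shell
# Hypothetical post-patch check. In a real run the file would come from:
#   opatch lsinventory -oh /u01/app/11.2.0/grid3 > "$lsinv_file"
# The relevant output lines are simulated below.
lsinv_file=/tmp/lsinv_grid3.txt
printf 'Patch 14275605 : applied on Sun Aug 24 12:01:58 IST 2014\n' >  "$lsinv_file"
printf 'Patch 14275572 : applied on Sun Aug 24 11:58:24 IST 2014\n' >> "$lsinv_file"

missing=0
for p in 14275605 14275572; do   # DB PSU and GI PSU patch numbers
  if grep -q "Patch $p " "$lsinv_file"; then
    echo "patch $p: applied"
  else
    echo "patch $p: NOT FOUND"
    missing=1
  fi
done
echo "missing=$missing"
```

The same loop can be pointed at the database home (or run on the other node) by regenerating the lsinventory file for that home.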
Here we go !!
COPYRIGHT
© Shivananda Rao P, 2012 to 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Shivananda Rao and http://www.shivanandarao-oracle.com with appropriate and specific direction to the original content.
DISCLAIMER
The views expressed here are my own and do not necessarily reflect the views of any other individual, business entity, or organization. The views expressed by visitors on this blog are theirs solely and may not reflect mine.
In Oracle 12c, after creating a CDB, I was facing a strange error: I was able to describe some of the dictionary views, but was unable to query them.
Container Database Name: TSTCDB
Database Version : 12.1.0.1
DB Server: ora12c
After creating the CDB and running all the mandatory scripts (catalog.sql and catproc.sql), I was still unable to query the dictionary views. Here is the scenario.
I am connecting to the CDB and querying the view “CDB_DATA_FILES”.
[oracle@ora12c u02]$ sqlplus sys/oracle@tstcdb as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Sat Aug 30 17:23:55 2014
Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> desc cdb_data_files
 Name                 Null?    Type
 -------------------- -------- --------------------
 FILE_NAME                     VARCHAR2(513)
 FILE_ID                       NUMBER
 TABLESPACE_NAME               VARCHAR2(30)
 BYTES                         NUMBER
 BLOCKS                        NUMBER
 STATUS                        VARCHAR2(9)
 RELATIVE_FNO                  NUMBER
 AUTOEXTENSIBLE                VARCHAR2(3)
 MAXBYTES                      NUMBER
 MAXBLOCKS                     NUMBER
 INCREMENT_BY                  NUMBER
 USER_BYTES                    NUMBER
 USER_BLOCKS                   NUMBER
 ONLINE_STATUS                 VARCHAR2(7)
 CON_ID                        NUMBER
Notice above that describing the view worked fine. But when I tried to query the same view, I hit an error message reading "ORA-00942: table or view does not exist".
SQL> select file_name,con_id from cdb_data_files order by con_id;
select file_name,con_id from cdb_data_files order by con_id
*
ERROR at line 1:
ORA-12801: error signaled in parallel query server P003
ORA-00942: table or view does not exist
Upon investigating, I found that these errors appear when the catalog.sql and catproc.sql scripts were not executed through the catcon.pl script provided by Oracle.
This means the data dictionary views were created only in the root container (CDB) and not in the seed database, which leads to the above errors.
In Oracle 12c, Oracle introduced the multitenant architecture: multiple pluggable databases (PDBs) inside one container database (CDB). For maintenance, Oracle's recommended way to run SQL scripts is through catcon.pl. This allows a script to be run in the root (CDB) as well as in the required PDBs, and it also generates log files that can be reviewed to confirm that the SQL script or SQL statement did not raise unexpected errors.
Here is the usage of the catcon.pl script.
[oracle@ora12c ~]$ . oraenv
ORACLE_SID = [tstcdb] ? tstcdb
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ora12c ~]$
[oracle@ora12c ~]$ cd $ORACLE_HOME/rdbms/admin
[oracle@ora12c admin]$ perl catcon.pl
Usage: catcon [-u username[/password]] [-U username[/password]]
[-d directory] [-l directory]
[{-c|-C} container] [-p degree-of-parallelism]
[-e] [-s]
[-E { ON | errorlogging-table-other-than-SPERRORLOG } ]
[-g]
-b log-file-name-base
--
{ sqlplus-script [arguments] | --x<SQL-statement> } ...
Optional:
-u username (optional /password; otherwise prompts for password)
used to connect to the database to run user-supplied scripts or
SQL statements
defaults to "/ as sysdba"
-U username (optional /password; otherwise prompts for password)
used to connect to the database to perform internal tasks
defaults to "/ as sysdba"
-d directory containing the file to be run
-l directory to use for spool log files
-c container(s) in which to run sqlplus scripts, i.e. skip all
Containers not named here; for example,
-c 'PDB1 PDB2',
-C container(s) in which NOT to run sqlplus scripts, i.e. skip all
Containers named here; for example,
-C 'CDB PDB3'
NOTE: -c and -C are mutually exclusive
-p expected number of concurrent invocations of this script on a
given host
NOTE: this parameter rarely needs to be specified
-e sets echo on while running sqlplus scripts
-s output of running every script will be spooled into a file
whose name will be
<log-file-name-base>_<script_name_without_extension>_[<container_name_if_any>].<default_extension>
-E sets errorlogging on; if ON is specified, default error logging table will be used, otherwise, specified error logging table
(which must have been created in every Container) will be used
-g turns on production of debugging info while running this script
Mandatory:
-b base name (e.g. catcon_test) for log and spool file names
sqlplus-script - sqlplus script to run OR
SQL-statement - a statement to execute
NOTES:
- if --x<SQL-statement> is the first non-option string, it needs to be preceeded with -- to avoid confusing module parsing
options into assuming that '-' is an option which that module is not expecting and about which it will complain
- command line parameters to SQL scripts can be introduced using --p
interactive (or secret) parameters to SQL scripts can be
introduced using --P
For example, perl catcon.pl ... x.sql '--pJohn' '--PEnter Password for John:' ...
Now, let's run the catalog.sql and catproc.sql scripts using the catcon.pl script.
[oracle@ora12c ~]$ . oraenv
ORACLE_SID = [oracle] ? tstcdb
The Oracle base has been set to /u01/app/oracle
[oracle@ora12c ~]$
[oracle@ora12c ~]$ PERL5LIB=$ORACLE_HOME/rdbms/admin:$PERL5LIB; export PERL5LIB
[oracle@ora12c ~]$ cd $ORACLE_HOME/rdbms/admin
[oracle@ora12c admin]$ perl catcon.pl -l /home/oracle -b catalog /u01/app/oracle/product/12.1.0.1/db1/rdbms/admin/catalog.sql
[oracle@ora12c admin]$ perl catcon.pl -l /home/oracle -b catproc /u01/app/oracle/product/12.1.0.1/db1/rdbms/admin/catproc.sql
The log files are placed under the directory (/home/oracle) specified with the "-l" option, with the base names "catalog" and "catproc" specified with the "-b" option.
[oracle@ora12c admin]$ cd /home/oracle
[oracle@ora12c ~]$
[oracle@ora12c ~]$ ls -lrt cat*
-rw-r--r-- 1 oracle oinstall   445152 Aug 31 11:34 catalog0.log
-rw-r--r-- 1 oracle oinstall 24709417 Aug 31 12:05 catproc0.log
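Since catcon.pl spools the output of every script, it is worth scanning those logs for ORA- messages before trusting the run. A self-contained sketch of such a check; the directory and the sample log content below are simulated, and in the real case you would point it at the catalog*/catproc* logs under /home/oracle:

```shell
# Hypothetical log check for a catcon.pl run. The directory and the sample
# log content are simulated; really you would scan /home/oracle/cat*.log.
log_dir=/tmp/catcon_demo
mkdir -p "$log_dir"
printf 'Session altered.\nPackage created.\nNo errors.\n' > "$log_dir/catalog0.log"

# Count ORA- occurrences across all catcon spool logs
ora_errors=$(cat "$log_dir"/cat*.log | grep -c 'ORA-')
echo "ORA- lines found: $ora_errors"
```

A non-zero count means some statement failed in at least one container and the corresponding log should be read in full.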
Now, after executing these scripts, let's try querying the dictionary views on TSTCDB.
[oracle@ora12c ~]$ . oraenv
ORACLE_SID = [tstcdb] ?
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ora12c ~]$
[oracle@ora12c ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Sun Aug 31 12:09:03 2014
Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> set linesize 300
SQL> col file_name for a55
SQL> select file_name,con_id from cdb_data_files order by con_id;

FILE_NAME                                                   CON_ID
------------------------------------------------------- ----------
/u02/oradata/tstcdb/system01.dbf                                 1
/u02/oradata/tstcdb/myts01.dbf                                   1
/u02/oradata/tstcdb/undotbs01.dbf                                1
/u02/oradata/tstcdb/sysaux01.dbf                                 1
/u02/oradata/pdbseed/system01.dbf                                2
/u02/oradata/pdbseed/sysaux01.dbf                                2
/u02/oradata/pdbseed/myts01.dbf                                  2

7 rows selected.
Now, it works !!
Here we go !!
This article covers creating a RAC physical standby database for a RAC primary database.
Both the primary and the standby databases are 2-node RAC.
Primary Database : PRIM
DBNAME : PRIM
DB UNIQUE NAME : PRIM
Instances : PRIM1 on node1
PRIM2 on node2
Standby Database : SRPSTB
DBNAME : PRIM
DB UNIQUE NAME : SRPSTB
Instances : SRPSTB1 on drnode1
SRPSTB2 on drnode2
Database version: Oracle 11.2.0.1
Below are the steps to create a RAC standby database for a RAC primary database.
Step 1: Add the following standby parameters on the primary database.
log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=prim'
log_archive_dest_2='service=srpstb valid_for=(online_logfiles,primary_role) db_unique_name=srpstb'
fal_client='prim'      # Oracle Net service name of the primary database
fal_server='srpstb'    # Oracle Net service name of the standby database
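On a running primary these parameters can also be set dynamically rather than by editing the parameter file; a minimal sketch, assuming the primary runs on an spfile and the "prim"/"srpstb" TNS aliases already resolve from every node:

```sql
-- Sketch only: assumes the primary uses an spfile and the TNS aliases
-- used below ("prim", "srpstb") are resolvable on all nodes.
ALTER SYSTEM SET log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=prim' SCOPE=BOTH SID='*';
ALTER SYSTEM SET log_archive_dest_2='service=srpstb valid_for=(online_logfiles,primary_role) db_unique_name=srpstb' SCOPE=BOTH SID='*';
ALTER SYSTEM SET fal_client='prim'   SCOPE=BOTH SID='*';
ALTER SYSTEM SET fal_server='srpstb' SCOPE=BOTH SID='*';
```

SID='*' applies the change to both RAC instances in one statement.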
Step 2: Create a pfile of the primary database and copy this file to the standby server.
The contents of the primary database pfile are as below. (This is the pfile taken from instance prim1 of the primary DB.)
[oracle@node1 u02]$ cat initprim1.ora
prim1.__db_cache_size=385875968
prim2.__db_cache_size=436207616
prim1.__java_pool_size=16777216
prim2.__java_pool_size=16777216
prim1.__large_pool_size=16777216
prim2.__large_pool_size=16777216
prim1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
prim1.__pga_aggregate_target=520093696
prim2.__pga_aggregate_target=520093696
prim1.__sga_target=754974720
prim2.__sga_target=754974720
prim1.__shared_io_pool_size=0
prim2.__shared_io_pool_size=0
prim1.__shared_pool_size=318767104
prim2.__shared_pool_size=268435456
prim1.__streams_pool_size=0
prim2.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/prim/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='11.2.0.0.0'
*.control_files='+DATA/prim/controlfile/current.260.826037247'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='prim'
*.db_unique_name='prim'
*.db_recovery_file_dest_size=2147483648
*.db_recovery_file_dest='+FRA'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=primXDB)'
*.fal_client='prim'
*.fal_server='srpstb'
prim1.instance_number=1
prim2.instance_number=2
prim1.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip)(PORT=1521))))'
prim2.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node2-vip)(PORT=1521))))'
*.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=prim'
*.log_archive_dest_2='service=srpstb valid_for=(online_logfiles,primary_role) db_unique_name=srpstb'
*.memory_target=1264582656
*.open_cursors=300
*.processes=150
*.remote_listener='node-scan:1521'
*.remote_login_passwordfile='exclusive'
prim2.thread=2
prim1.thread=1
prim2.undo_tablespace='UNDOTBS2'
prim1.undo_tablespace='UNDOTBS1'
Step 3: Listener file contents on the primary database server.
[oracle@node1 admin]$ cat listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))    # line added by Agent
LISTENER_SCAN1=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1))))    # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1=ON    # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON    # line added by Agent
SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=
      (ORACLE_HOME=/u01/app/oracle/product/11.2.0/db1)
      (SID_NAME=prim1)
    )
  )
Step 4: TNS entries on primary database server.
### PRIMARY ENTRIES ###
PRIM =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node-scan)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = prim)
)
)
PRIM1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT=1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = prim)
(INSTANCE_NAME = prim1)
)
)
PRIM2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT=1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = prim)
(INSTANCE_NAME = prim2)
)
)
### Standby TNS ENTRIES ###
SRPSTB =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = drnode-scan)(PORT = 1521) )
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = srpstb)
(UR = A)
)
)
SRPSTB2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = drnode2-vip)(PORT = 1521) )
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = srpstb)
(UR = A)
(INSTANCE_NAME = srpstb2)
)
)
SRPSTB1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = drnode1-vip)(PORT = 1521) )
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = srpstb)
(UR = A)
(INSTANCE_NAME = srpstb1)
)
)
Step 5: Copy the above TNS entries to standby database server node drnode1.
Step 6: Copy the password file of either primary instance (“orapwprim1” or “orapwprim2”), located under “$ORACLE_HOME/dbs”, to the “$ORACLE_HOME/dbs” location on the standby database server node drnode1 and rename it to “orapwsrpstb1”.
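Step 6 is just a copy and rename, but the destination name matters: the file must carry the standby SID. A minimal sketch, simulated locally with temp directories standing in for the two $ORACLE_HOME/dbs directories (the real transfer between nodes would use scp, shown in the comment):

```shell
# Local simulation of step 6; mktemp dirs stand in for the primary's
# and standby's $ORACLE_HOME/dbs (assumption: real copy uses scp).
src=$(mktemp -d)   # primary's $ORACLE_HOME/dbs
dst=$(mktemp -d)   # standby's $ORACLE_HOME/dbs
printf 'password-file-bytes' > "$src/orapwprim1"
# Real command would be:
#   scp $ORACLE_HOME/dbs/orapwprim1 oracle@drnode1:$ORACLE_HOME/dbs/orapwsrpstb1
cp "$src/orapwprim1" "$dst/orapwsrpstb1"
ls "$dst"
# prints: orapwsrpstb1
```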
Step 7: On the standby database server, prepare the pfile for the srpstb1 instance as initsrpstb1.ora .
Contents of initsrpstb1.ora File:
[oracle@drnode1 dbs]$ cat initsrpstb1.ora
srpstb1.__db_cache_size=419430400
srpstb1.__java_pool_size=16777216
srpstb1.__large_pool_size=16777216
srpstb1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
srpstb1.__pga_aggregate_target=520093696
srpstb1.__sga_target=754974720
srpstb1.__shared_io_pool_size=0
srpstb1.__shared_pool_size=285212672
srpstb1.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/srpstb/adump'
*.audit_trail='db'
*.cluster_database=false
*.compatible='11.2.0.0.0'
*.control_files='+DATA','+FRA'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_name='prim'
*.db_unique_name='srpstb'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=srpstbXDB)'
*.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=srpstb'
*.log_archive_dest_2='service=prim valid_for=(online_logfiles,primary_role) db_unique_name=prim'
*.db_recovery_file_dest='+FRA'
*.db_recovery_file_dest_size=2G
*.memory_target=1264582656
*.open_cursors=300
*.processes=150
*.remote_listener='drnode-scan:1521'
*.remote_login_passwordfile='EXCLUSIVE'
srpstb1.undo_tablespace='UNDOTBS1'
srpstb1.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=drnode1-vip)(PORT=1521))))'
srpstb1.fal_client='srpstb1'
*.fal_server='prim'
Step 8: Set up the listener.ora file on the standby database server drnode1.
Contents of Listener.ora file on DRNODE1:
[oracle@drnode1 admin]$ cat listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))        # line added by Agent
LISTENER_SCAN1=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1))))        # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1=ON        # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON        # line added by Agent
SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=
      (ORACLE_HOME=/u01/app/oracle/product/11.2.0.1/db1)
      (SID_NAME=srpstb1)
    )
  )
Step 9: Start the standby instance srpstb1 in NOMOUNT mode using the above pfile “initsrpstb1.ora”.
Step 10: Now connect through RMAN to the primary database as the target and the standby instance as the auxiliary. Make sure the primary database is open.
[oracle@drnode1 ~]$ rman target sys/oracle@prim auxiliary sys/oracle@srpstb1
Recovery Manager: Release 11.2.0.1.0 - Production on Fri Oct 25 19:54:07 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
connected to target database: PRIM (DBID=4118338878)
connected to auxiliary database: PRIM (not mounted)
RMAN> duplicate target database for standby from active database;
Starting Duplicate Db at 25-OCT-13
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=28 device type=DISK
contents of Memory Script:
{
backup as copy reuse
targetfile '/u01/app/oracle/product/11.2.0/db1/dbs/orapwprim2' auxiliary format
'/u01/app/oracle/product/11.2.0.1/db1/dbs/orapwsrpstb1' ;
}
executing Memory Script
Starting backup at 25-OCT-13
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=37 instance=prim2 device type=DISK
Finished backup at 25-OCT-13
contents of Memory Script:
{
backup as copy current controlfile for standby auxiliary format '+DATA/srpstb/controlfile/current.269.829771129';
restore clone controlfile to '+FRA/srpstb/controlfile/current.261.829771129' from
'+DATA/srpstb/controlfile/current.269.829771129';
sql clone "create spfile from memory";
shutdown clone immediate;
startup clone nomount;
sql clone "alter system set control_files =
''+DATA/srpstb/controlfile/current.269.829771129'', ''+FRA/srpstb/controlfile/current.261.829771129'' comment=
''Set by RMAN'' scope=spfile";
shutdown clone immediate;
startup clone nomount;
}
executing Memory Script
Starting backup at 25-OCT-13
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
copying standby control file
output file name=/u01/app/oracle/product/11.2.0/db1/dbs/snapcf_prim2.f tag=TAG20131025T195848 RECID=2 STAMP=829771151
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:46
Finished backup at 25-OCT-13
Starting restore at 25-OCT-13
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: copied control file copy
Finished restore at 25-OCT-13
sql statement: create spfile from memory
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area 1269366784 bytes
Fixed Size 2212976 bytes
Variable Size 855640976 bytes
Database Buffers 402653184 bytes
Redo Buffers 8859648 bytes
sql statement: alter system set control_files = ''+DATA/srpstb/controlfile/current.269.829771129'',
''+FRA/srpstb/controlfile/current.261.829771129'' comment= ''Set by RMAN'' scope=spfile
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area 1269366784 bytes
Fixed Size 2212976 bytes
Variable Size 855640976 bytes
Database Buffers 402653184 bytes
Redo Buffers 8859648 bytes
contents of Memory Script:
{
sql clone 'alter database mount standby database';
}
executing Memory Script
sql statement: alter database mount standby database
contents of Memory Script:
{
set newname for clone tempfile 1 to new;
switch clone tempfile all;
set newname for clone datafile 1 to new;
set newname for clone datafile 2 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 4 to new;
set newname for clone datafile 5 to new;
set newname for clone datafile 6 to new;
backup as copy reuse
datafile 1 auxiliary format new
datafile 2 auxiliary format new
datafile 3 auxiliary format new
datafile 4 auxiliary format new
datafile 5 auxiliary format new
datafile 6 auxiliary format new
;
sql 'alter system archive log current';
}
executing Memory Script
executing command: SET NEWNAME
renamed tempfile 1 to +DATA in control file
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
Starting backup at 25-OCT-13
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=+DATA/prim/datafile/system.256.826037011
output file name=+DATA/srpstb/datafile/system.266.829771513 tag=TAG20131025T200453
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:45
channel ORA_DISK_1: starting datafile copy
input datafile file number=00002 name=+DATA/prim/datafile/sysaux.257.826037013
output file name=+DATA/srpstb/datafile/sysaux.277.829771637 tag=TAG20131025T200453
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:55
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=+DATA/prim/datafile/undotbs1.258.826037015
output file name=+DATA/srpstb/datafile/undotbs1.276.829771735 tag=TAG20131025T200453
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=+DATA/prim/datafile/example.264.826037271
output file name=+DATA/srpstb/datafile/example.275.829771771 tag=TAG20131025T200453
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting datafile copy
input datafile file number=00006 name=+DATA/prim/datafile/undotbs2.265.826037561
output file name=+DATA/srpstb/datafile/undotbs2.274.829771805 tag=TAG20131025T200453
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=+DATA/prim/datafile/users.259.826037015
output file name=+DATA/srpstb/datafile/users.273.829771831 tag=TAG20131025T200453
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
Finished backup at 25-OCT-13
sql statement: alter system archive log current
contents of Memory Script:
{
switch clone datafile all;
}
executing Memory Script
datafile 1 switched to datafile copy
input datafile copy RECID=2 STAMP=829771843 file name=+DATA/srpstb/datafile/system.266.829771513
datafile 2 switched to datafile copy
input datafile copy RECID=3 STAMP=829771843 file name=+DATA/srpstb/datafile/sysaux.277.829771637
datafile 3 switched to datafile copy
input datafile copy RECID=4 STAMP=829771843 file name=+DATA/srpstb/datafile/undotbs1.276.829771735
datafile 4 switched to datafile copy
input datafile copy RECID=5 STAMP=829771843 file name=+DATA/srpstb/datafile/users.273.829771831
datafile 5 switched to datafile copy
input datafile copy RECID=6 STAMP=829771843 file name=+DATA/srpstb/datafile/example.275.829771771
datafile 6 switched to datafile copy
input datafile copy RECID=7 STAMP=829771843 file name=+DATA/srpstb/datafile/undotbs2.274.829771805
Finished Duplicate Db at 25-OCT-13
Step 11: Once the duplication is completed, close the RMAN prompt and connect to the standby database through SQL.
sqlplus sys/<password>@srpstb1 as sysdba
Check the status of the standby database, making sure it is in the MOUNT stage.
SQL> select status,instance_name,database_role from v$instance,v$database;
Step 12: Now start the managed recovery process on the standby database.
SQL> alter database recover managed standby database disconnect from session;
Step 13: Now check if the managed recovery process (MRP) has been started on the standby database or not.
SQL> select process,status,sequence#,thread# from v$managed_standby;

PROCESS  STATUS        SEQUENCE#  THREAD#
-------  ------------  ---------  -------
ARCH     CONNECTED             0        0
ARCH     CONNECTED             0        0
ARCH     CONNECTED             0        0
ARCH     CONNECTED             0        0
MRP0     WAIT_FOR_LOG         65        2
RFS      IDLE                  0        0
RFS      IDLE                  0        0
RFS      IDLE                  0        0
RFS      IDLE                  0        0
RFS      IDLE                 75        1
RFS      IDLE                  0        0
RFS      IDLE                  0        0
RFS      IDLE                  0        0
RFS      IDLE                  0        0

14 rows selected.
Here, MRP has been started and is waiting for log sequence# 65 on thread 2. If MRP were not started, the above query would not show MRP0 under the PROCESS column.
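This check is easy to script: spool the query output to a file (or capture it in a variable) and grep for an MRP process. The sample text below stands in for real v$managed_standby rows; it is not live output.

```shell
# Scriptable MRP check; sample lines stand in for spooled
# v$managed_standby output (assumption: one row per line).
out='ARCH CONNECTED 0 0
MRP0 WAIT_FOR_LOG 65 2
RFS  IDLE 0 0'
if printf '%s\n' "$out" | grep -q '^MRP'; then
  echo "MRP running"
else
  echo "MRP not running"
fi
# prints: MRP running
```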
Step 14: On the primary database, perform a few log switches and check if the logs are applied to the standby database.
Primary Database Archive Sequence:
sqlplus sys/<password>@prim1 as sysdba

SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

THREAD#  MAX(SEQUENCE#)
-------  --------------
      1              76
      2              67
Standby database SRPSTB1 archive sequence being applied:
sqlplus sys/<password>@srpstb1 as sysdba

SQL> select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;

THREAD#  MAX(SEQUENCE#)
-------  --------------
      1              75
      2              67
Here, the maximum sequence# generated on the primary database is 76 for thread 1 (prim1) and 67 for thread 2 (prim2), while the maximum sequence# applied on the standby is 75 for thread 1 and 67 for thread 2. The standby database is therefore in sync with the primary; the one-sequence difference on thread 1 is the current log, which has not yet been archived and shipped.
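The sync rule above can be checked mechanically: for each thread, the gap between the primary's max sequence# and the standby's max applied sequence# should be 0 or 1 (the in-flight log). A toy shell version with hard-coded thread:sequence pairs taken from the queries above, not live database output:

```shell
# Per-thread apply-gap check; thread:seq pairs are sample values.
primary="1:76 2:67"   # from v$archived_log on the primary
standby="1:75 2:67"   # applied='YES' rows on the standby
for p in $primary; do
  t=${p%%:*}; pseq=${p##*:}
  for s in $standby; do
    if [ "${s%%:*}" = "$t" ]; then
      echo "thread $t gap: $((pseq - ${s##*:}))"
    fi
  done
done
# prints:
# thread 1 gap: 1
# thread 2 gap: 0
```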
Now let's add the second instance srpstb2 to the standby database srpstb.
Step 15: Create a pfile from the standby instance srpstb1 and add the below cluster parameters.
cluster_database=TRUE
srpstb1.undo_tablespace='UNDOTBS1'
srpstb2.undo_tablespace='UNDOTBS2'
srpstb2.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=drnode2-vip)(PORT=1521))))'
srpstb1.instance_number=1
srpstb2.instance_number=2
srpstb1.thread=1
srpstb2.thread=2
srpstb1.fal_client='srpstb1'
srpstb2.fal_client='srpstb2'
The new pfile required for the standby database cluster would look like the one below.
[oracle@drnode1 dbs]$ cat initsrpstb1.ora
srpstb1.__db_cache_size=402653184
srpstb1.__java_pool_size=16777216
srpstb1.__large_pool_size=16777216
srpstb1.__pga_aggregate_target=520093696
srpstb1.__sga_target=754974720
srpstb1.__shared_io_pool_size=0
srpstb1.__shared_pool_size=285212672
srpstb1.__streams_pool_size=0
srpstb2.__db_cache_size=402653184
srpstb2.__java_pool_size=16777216
srpstb2.__large_pool_size=16777216
srpstb2.__pga_aggregate_target=520093696
srpstb2.__sga_target=754974720
srpstb2.__shared_io_pool_size=0
srpstb2.__shared_pool_size=285212672
srpstb2.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/srpstb/adump'
*.audit_trail='DB'
*.cluster_database=TRUE
*.compatible='11.2.0.0.0'
*.control_files='+DATA/srpstb/controlfile/current.269.829771129','+FRA/srpstb/controlfile/current.261.829771129'#Set by RMAN
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_name='prim'
*.db_recovery_file_dest='+FRA'
*.db_recovery_file_dest_size=2G
*.db_unique_name='srpstb'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=srpstbXDB)'
*.fal_client='srpstb1'
*.fal_server='prim'
srpstb1.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=drnode1-vip)(PORT=1521))))'
srpstb2.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=drnode2-vip)(PORT=1521))))'
*.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=srpstb'
*.log_archive_dest_2='service=prim valid_for=(online_logfiles,primary_role) db_unique_name=prim'
*.log_buffer=8491008# log buffer update
*.memory_target=1216M
*.open_cursors=300
*.processes=150
*.query_rewrite_enabled='TRUE'
*.remote_login_passwordfile='EXCLUSIVE'
srpstb1.undo_tablespace='UNDOTBS1'
srpstb2.undo_tablespace='UNDOTBS2'
srpstb1.instance_number=1
srpstb2.instance_number=2
srpstb1.thread=1
srpstb2.thread=2
srpstb1.fal_client='srpstb1'
srpstb2.fal_client='srpstb2'
*.fal_server='prim'
Step 16: Shut down the instance srpstb1, mount it using the new pfile above, and create an spfile in a shared location (an ASM diskgroup), as it will be shared by both instances srpstb1 and srpstb2.
SQL> create spfile='+DATA/SRPSTB/spfilesrpstb.ora' from pfile;

File created.
Step 17: Create a new pfile initsrpstb1.ora in drnode1 located at $ORACLE_HOME/dbs with just one entry to point to the spfile location.
[oracle@drnode1 dbs]$ cat initsrpstb1.ora
SPFILE='+DATA/SRPSTB/spfilesrpstb.ora'
Step 18: Copy the password file of srpstb1 (orapwsrpstb1) to the “$ORACLE_HOME/dbs” location on drnode2 and rename it as orapwsrpstb2.
Step 19: Copy the newly created pfile (initsrpstb1.ora) to the “$ORACLE_HOME/dbs” location on drnode2 and rename it as initsrpstb2.ora.
Step 20: Mount both the srpstb1 and srpstb2 instances.
Step 21: Start MRP on any one instance using the below statement.
SQL> alter database recover managed standby database disconnect from session;
Step 22: Check the max archive sequence generated on the primary and compare it with the max archive sequence applied on the standby.
Primary:
SQL> select thread#,max(sequence#) from v$archived_log group by thread#;
Standby:
SQL> select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;
Step 23: The next step is to add the standby database SRPSTB to the cluster.
[oracle@drnode1 ~]$ srvctl add database -d srpstb -o /u01/app/oracle/product/11.2.0.1/db1 -r PHYSICAL_STANDBY -s MOUNT
Step 24: Now add the instance entries for the standby database.
[oracle@drnode1 ~]$ srvctl add instance -d srpstb -i srpstb1 -n drnode1
[oracle@drnode1 ~]$ srvctl add instance -d srpstb -i srpstb2 -n drnode2
Step 25: Start the standby database and check its status using srvctl.
[oracle@drnode1 ~]$ srvctl start database -d srpstb
[oracle@drnode1 ~]$ srvctl status database -d srpstb
Instance srpstb1 is running on node drnode1
Instance srpstb2 is running on node drnode2
Here we go !!
COPYRIGHT
© Shivananda Rao P, 2012 to 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Shivananda Rao and http://www.shivanandarao-oracle.com with appropriate and specific direction to the original content.
DISCLAIMER
The views expressed here are my own and do not necessarily reflect the views of any other individual, business entity, or organization. The views expressed by visitors on this blog are theirs solely and may not reflect mine.
Here are the detailed steps for configuring CRS on a 2 node RAC machine, version 11.2.0.1, in silent mode.
Environment:
2 node RAC: 10gnode1 and 10gnode2
OS: Linux 32 bit
Attached is the response file (CRS_INSTALL.RSP) that is found in the Grid Infrastructure zip file. Unzip the GI software; the response file is located under the “grid/response” directory. The response file needs to be filled in with all the relevant values: ORACLE_HOME path, group that owns the grid, cluster name, SCAN configuration details, network configuration details of the nodes, storage type (file system / ASM), and if ASM, the diskgroup name, disk path, and disk string. Please refer to the file CRS_INSTALL.RSP for the values set.
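Before a silent run, it is worth scanning the response file for required keys that were left empty, since OUI only complains about them late in the run. A hypothetical pre-flight check — the key names below are illustrative samples, not the exact CRS_INSTALL.RSP keys:

```shell
# Flag response-file keys with empty values. The sample file is inline;
# in a real run, point rsp at grid/response/crs_install.rsp instead.
rsp=$(mktemp)
cat > "$rsp" <<'EOF'
ORACLE_HOME=/u01/app/11.2.0/grid1
oracle.install.crs.config.clusterName=
EOF
grep -n '=$' "$rsp" || echo "all keys have values"
# prints: 2:oracle.install.crs.config.clusterName=
```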
As the oracle user (grid owner), run the below command. Here I’ve used the “ignore prereqs” options because the NTP prerequisite check failed, which could be ignored in my case. A better approach would be to first run the command without the “ignore” options, check the prerequisites, and fix them before proceeding.
cd <GRID_SOFTWARE_Directory>
./runInstaller -silent -ignoreSysPrereqs -responseFile /u02/11201_soft/crs_install.rsp
[oracle@10gnode1 grid]$ ./runInstaller -silent -ignoreSysPrereqs -responseFile /u02/11201_soft/crs_install.rsp
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 80 MB. Actual 24921 MB Passed
Checking swap space: must be greater than 150 MB. Actual 10001 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-07-28_10-58-44AM. Please wait ...[oracle@10gnode1 grid]$
You can find the log of this install session at:
/u01/app/oraInventory/logs/installActions2014-07-28_10-58-44AM.log (Refer the attachment for detailed view).

The following configuration scripts need to be executed as the "root" user.
#!/bin/sh
#Root scripts to run
/u01/app/11.2.0/grid1/root.sh
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts
4. Return to this window and hit "Enter" key to continue

Configuration assistants have not been run. This can happen for following reasons - either root.sh is to be run before config or Oracle Universal Installer was invoked with the -noConfig option. "/u01/app/11.2.0/grid1/cfgtoollogs/configToolAllCommands" script contains all commands to be executed by the configuration assistants. This file may be used to run the configuration assistants outside of OUI. Note that you may have to update this script with passwords (if any) before executing the same.

Successfully Setup Software.
From the above log, it can be noticed that the install log can be found at “/u01/app/oraInventory/logs/installActions2014-07-28_10-58-44AM.log”. I’ve uploaded this log here installActions2014-07-28_10-58-44AM.log
As the root user, run the root.sh script on node1 (10gnode1) followed by node2 (10gnode2). Refer to log “root_10gnode1.mydomain_2014-07-28_11-50-00.log” for the root.sh output on 10gnode1 and log “root_10gnode2.mydomain_2014-07-28_11-57-05.log” for the root.sh output on 10gnode2.
Root.sh on 10gnode1: root_10gnode1.mydomain_2014-07-28_11-50-00.log
[root@10gnode1 ~]# /u01/app/11.2.0/grid1/root.sh
Check /u01/app/11.2.0/grid1/install/root_10gnode1.mydomain_2014-07-28_11-50-00.log for the output of root script
[root@10gnode1 ~]#
Root.sh on 10gnode2: root_10gnode2.mydomain_2014-07-28_11-57-05.log
[root@10gnode2 ~]# /u01/app/11.2.0/grid1/root.sh
Check /u01/app/11.2.0/grid1/install/root_10gnode2.mydomain_2014-07-28_11-57-05.log for the output of root script
[root@10gnode2 ~]#
Verify if CRS is online on both the nodes.
[oracle@10gnode1 ~]$ /u01/app/11.2.0/grid1/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[oracle@10gnode2 ~]$ /u01/app/11.2.0/grid1/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[oracle@10gnode2 ~]$
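With more than a couple of nodes, the crsctl output is easier to check in a script than by eye: every line should end in “is online”. The sample text below stands in for real crsctl output captured to a variable or file:

```shell
# Count crsctl lines that do not report "is online"; 0 means healthy.
# Sample text stands in for `crsctl check crs` output.
out='CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online'
bad=$(printf '%s\n' "$out" | grep -vc 'is online' || true)
echo "services not online: $bad"
# prints: services not online: 0
```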
Here we go !!
In this article, I’m demonstrating a scenario where one of the datafiles of the primary database is corrupted, and showing how it can be restored from its standby database. The same steps can also be followed when a datafile of the primary database is accidentally deleted from the file system or an ASM diskgroup.
Primary database: srpstb
Standby database: srprim
Primary database server: ora1-2
Standby database server: ora1-1
Primary database:
From the below output, it can be noted that datafile 9 of tablespace MYTS is corrupted on the primary database.
SQL> select status,instance_name,database_role,open_mode from v$database,v$instance;

STATUS  INSTANCE_NAME  DATABASE_ROLE  OPEN_MODE
------  -------------  -------------  -----------
OPEN    srpstb         PRIMARY        READ WRITE

SQL> select * from v$database_block_corruption;

FILE#  BLOCK#  BLOCKS  CORRUPTION_CHANGE#  CORRUPTIO
-----  ------  ------  ------------------  ----------
    9     128       1                   0  CORRUPT
    9     138       1                   0  CORRUPT

SQL> select file_id,file_name,tablespace_name from dba_data_files where file_id=9;

FILE_ID  FILE_NAME                                     TABLESPACE_NAME
-------  --------------------------------------------  ---------------
      9  +DATA_NEW/srpstb/datafile/myts.273.833748265  MYTS
Make sure that the standby database is in sync with the primary database.
Primary database:
SQL> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)
--------------
            59
Standby database:
SQL> select status,instance_name,database_role,open_mode from v$database,v$instance;

STATUS  INSTANCE_NAME  DATABASE_ROLE     OPEN_MODE
------  -------------  ----------------  -------------------------
OPEN    srprim         PHYSICAL STANDBY  READ ONLY WITH APPLY

SQL> select max(sequence#) from v$archived_log where applied='YES';

MAX(SEQUENCE#)
--------------
            59
It can be seen that the standby database is in sync with the primary, the latest archive sequence applied being 59.
Take a backup of datafile 9 from the standby database. Here, I’ve taken an image copy backup.
[oracle@ora1-1 ~]$ rman target sys/oracle@srprim

Recovery Manager: Release 11.2.0.3.0 - Production on Wed Apr 30 19:11:04 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
connected to target database: SRPRIM (DBID=216679430)

RMAN> backup as copy datafile 9 format '/u02/bkp/MYTS_09.dbf';

Starting backup at 30-APR-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00009 name=+DATA/srprim/datafile/myts.291.839878711
output file name=/u02/bkp/MYTS_09.dbf tag=TAG20140430T191204 RECID=22 STAMP=846270726
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 30-APR-14

Copy the above backup piece from the standby DB server to the primary database server.

[oracle@ora1-1 ~]$ scp /u02/bkp/MYTS_09.dbf oracle@ora1-2:/u02/bkp/MYTS_09.dbf
oracle@ora1-2's password:
MYTS_09.dbf 100% 10MB 10.0MB/s 00:00
[oracle@ora1-1 ~]$
Connect to the primary database and take the corrupted datafile offline.
Primary Database:
[oracle@ora1-2 ~]$ sqlplus sys/oracle@srpstb as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Wed Apr 30 19:13:39 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

SQL> alter database datafile 9 offline;

Database altered.
Catalog the copied backup piece with the primary database so that the primary’s controlfile gets updated with this backup copy.
[oracle@ora1-2 ~]$ rman target sys/oracle@srpstb

Recovery Manager: Release 11.2.0.3.0 - Production on Wed Apr 30 19:15:02 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
connected to target database: SRPRIM (DBID=216679430)

RMAN> catalog datafilecopy '/u02/bkp/MYTS_09.dbf';

using target database control file instead of recovery catalog
cataloged datafile copy
datafile copy file name=/u02/bkp/MYTS_09.dbf RECID=14 STAMP=846271256
Restore and recover the datafile from the backup.
RMAN> restore datafile 9;

Starting restore at 30-APR-14
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=49 device type=DISK
channel ORA_DISK_1: restoring datafile 00009
input datafile copy RECID=14 STAMP=846271256 file name=/u02/bkp/MYTS_09.dbf
destination for restore of datafile 00009: +DATA_NEW/srpstb/datafile/myts.273.833748265
channel ORA_DISK_1: copied datafile copy of datafile 00009
output file name=+DATA_NEW/srpstb/datafile/myts.273.833748265 RECID=0 STAMP=0
Finished restore at 30-APR-14
RMAN> recover datafile 9;

Starting recover at 30-APR-14
using channel ORA_DISK_1
starting media recovery
media recovery complete, elapsed time: 00:00:01
Finished recover at 30-APR-14
Once the recovery is done, bring datafile 9 back online.
RMAN> sql 'alter database datafile 9 online';

sql statement: alter database datafile 9 online
Now validate the datafile and check whether the corruption still exists.
RMAN> validate check logical datafile 9;

Starting validate at 30-APR-14
using channel ORA_DISK_1
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
input datafile file number=00009 name=+DATA_NEW/srpstb/datafile/myts.273.833748265
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01

List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
9    OK     0              1            1281            2835804
File Name: +DATA_NEW/srpstb/datafile/myts.273.833748265
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data       0              0
Index      0              0
Other      0              1279

Finished validate at 30-APR-14

RMAN> exit
Recovery Manager complete.
It can be seen that no blocks are marked as corrupt.
Connect to the primary and standby databases and verify that the standby is in sync with the primary.
Primary database:
[oracle@ora1-2 ~]$ sqlplus sys/oracle@srpstb as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Wed Apr 30 19:33:06 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

SQL> select * from v$database_block_corruption;

no rows selected

SQL> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)
--------------
            68
Standby database:
SQL> select max(sequence#) from v$archived_log where applied='YES';

MAX(SEQUENCE#)
--------------
            68
Here we go !!
Here is a scenario where a table is dropped from a database and we know the time the table was dropped. The table needs to be recovered to just before the time it was dropped. In my previous post, I demonstrated this using “Tablespace Point In Time Recovery”, but I also mentioned the disadvantage of that method.
In this post, I’m demonstrating how to recover a dropped table or a schema using the traditional method, which involves the below steps.
1. Create a dummy database from the original database and recover it until the time just before the table was dropped.
2. Export the table from the dummy database.
3. Import the table into the original database.
Database name: SRPRIM
RAC : YES
NODES: 10gnode1 (instance : SRPRIM1), 10gnode2 (instance : SRPRIM2)
Table Name: TEST
Table Owner: SHIVU
Dummy Database name: TMP
In this scenario, I knew the time the table was dropped: 11:17 AM on 5th April 2014.
11:17:28 SQL> drop table shivu.test;

Table dropped.

Elapsed: 00:00:01.65
Let me check when the last full backup completed successfully.
TO_CHAR(START_TIME,'  TO_CHAR(END_TIME,'DD  STATUS     INPUT_TYPE
--------------------  --------------------  ---------  -------------
05-APR-2014 10:47:11  05-APR-2014 10:49:03  COMPLETED  DB INCR
From the above output, we can see that a DB incremental level 0 backup completed successfully on 5th April at 10:49 AM.
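This ordering is what makes the recovery possible: the backup must predate the drop, so we can restore from it and roll forward to just before 11:17 AM. A quick sanity check of the two timestamps (values taken from the outputs above; `date -d` is the GNU coreutils form):

```shell
# Sanity check: the level 0 backup end time must be earlier than the
# drop time, otherwise there is nothing to roll forward from.
backup_end="2014-04-05 10:49:03"   # backup END_TIME from the listing above
drop_time="2014-04-05 11:17:28"    # time of the drop table
b=$(date -d "$backup_end" +%s)
d=$(date -d "$drop_time" +%s)
if [ "$d" -gt "$b" ]; then
  echo "backup usable for until-time recovery"
fi
# prints: backup usable for until-time recovery
```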
Let’s check the status of the RAC database and the services running on it.
[oracle@10gnode1 ~]$ srvctl status database -d srprim -v -f
Instance srprim1 is running on node 10gnode1 with online services newsrv. Instance status: Open.
Instance srprim2 is running on node 10gnode2 with online services newsrv. Instance status: Open.
Create a pfile from one of the instances. This pfile would be used to create a new database called “TMP”.
Here I’m using the node 10gnode1 for my demonstrations.
[oracle@10gnode1 ~]$ sqlplus sys/oracle@srprim1 as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Mon Apr 7 11:38:38 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> create pfile='/u02/bkp/initsrprim.ora' from spfile;

File created.
The pfile is created under location “/u02/bkp/” in the node 10gnode1. Remove the below cluster parameters and create a dummy pfile called inittmp.ora:
*.cluster_database=TRUE
srprim2.instance_number=2
srprim1.instance_number=1
srprim2.thread=2
srprim1.thread=1
srprim1.undo_tablespace='UNDOTBS1'
srprim2.undo_tablespace='UNDOTBS2'
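Rather than hand-editing, the RAC-only lines can be stripped with a one-line filter: drop cluster_database and any srprim1/srprim2-prefixed parameter. A self-contained sketch with a small sample pfile fragment standing in for the real /u02/bkp/initsrprim.ora:

```shell
# Filter out RAC-only parameters from a sample pfile fragment.
# In a real run, read /u02/bkp/initsrprim.ora and redirect to inittmp.ora.
pfile=$(mktemp)
cat > "$pfile" <<'EOF'
*.cluster_database=TRUE
srprim1.instance_number=1
srprim2.thread=2
*.db_block_size=8192
srprim1.undo_tablespace='UNDOTBS1'
EOF
grep -Ev '^\*\.cluster_database=|^srprim[12]\.' "$pfile"
# prints: *.db_block_size=8192
```

Note that the remaining srprim-prefixed memory parameters still need their SID prefix changed to tmp by hand (or with a similar sed), as in the pfile shown below in the post.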
Here is the PFILE that I’m using to create the dummy database.
[oracle@10gnode1 bkp]$ cat inittmp.ora
tmp.__db_cache_size=436207616
tmp.__java_pool_size=16777216
tmp.__large_pool_size=16777216
tmp.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
tmp.__pga_aggregate_target=520093696
tmp.__sga_target=754974720
tmp.__shared_io_pool_size=0
tmp.__shared_pool_size=268435456
tmp.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/tmp/adump'
*.audit_trail='db'
*.cluster_database=FALSE
*.compatible='11.2.0.0.0'
*.control_files='+DATA','+FRA'
#*.control_files='+DATA/srprim/controlfile/current.260.835353389','+FRA/srprim/controlfile/current.256.835353391'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='tmp'
*.db_recovery_file_dest='+FRA'
*.db_recovery_file_dest_size=4039114752
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=srprimXDB)'
#srprim2.instance_number=2
#srprim1.instance_number=1
*.memory_target=1275068416
*.open_cursors=300
*.processes=150
#*.remote_listener='node-scan:1521'
*.remote_login_passwordfile='exclusive'
#srprim2.thread=2
#srprim1.thread=1
#srprim1.undo_tablespace='UNDOTBS1'
tmp.undo_tablespace='UNDOTBS1'
Add the below TNS entry for the tmp database in the TNSNAMES.ora file of 10gnode1. Note the (UR=A) clause: it allows the listener to hand off connections even while the instance is in NOMOUNT state, which the RMAN auxiliary connection requires.
TMP =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10gnode1)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = tmp) (UR = A)
)
)
Add a static entry in the listener.ora file of the 10gnode1 server as shown below and reload the listener. RMAN bounces the auxiliary instance several times during the duplicate, so a static registration keeps the tmp instance reachable throughout.
SID_LIST_LISTENER=
(SID_LIST =
(SID_DESC=
(ORACLE_HOME=/u01/app/oracle/product/11.2.0/db3)
(SID_NAME=tmp)
)
)
Create a password file for the temporary database.
[oracle@10gnode1 dbs]$ orapwd file=$ORACLE_HOME/dbs/orapwtmp password=oracle
Start up the TMP database in NOMOUNT mode using the password file created in the previous step.
[oracle@10gnode1 bkp]$ sqlplus sys/oracle@tmp as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Mon Apr 7 13:27:30 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile='/u02/bkp/inittmp.ora';
ORACLE instance started.

Total System Global Area 1272213504 bytes
Fixed Size                  1344680 bytes
Variable Size             822086488 bytes
Database Buffers          436207616 bytes
Redo Buffers               12574720 bytes
Connect to RMAN with the RAC database “SRPRIM” as the target and the dummy database “TMP” as the auxiliary. Set the time until which the TMP database needs to be recovered; this should be a point in time just before the table was dropped.
Here I’m using the RMAN duplicate database method to create the TMP database rather than the traditional “restore/recover database until time” approach. The reason is that a database named SRPRIM is already running on nodes 10gnode1 and 10gnode2; with the traditional method I would have to restore the database on the same node under the same name, SRPRIM, and rename it afterwards, which is not possible while the original SRPRIM database is up.
With the RMAN duplicate method, by contrast, I can create the dummy database under a name of my own choosing on the same node (10gnode1) on the fly, with no need to restore or recover under the original name.
Here, the backups of the SRPRIM database are stored under “/u02/bkp” on node 10gnode1.
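Before launching the duplicate, it is worth confirming that those backup pieces are actually present and readable on 10gnode1, since the auxiliary channels restore from them. A small sketch; `check_backups` is a hypothetical helper, and the `SRPRIM_*.bak` pattern is taken from the backup piece names seen later in this run:

```shell
# Sanity-check that the RMAN backup pieces are readable on this node
# before starting the duplicate. Prints OK/MISSING per matching file.
check_backups() {
  for f in "$1"/SRPRIM_*.bak; do
    if [ -r "$f" ]; then
      echo "OK: $f"
    else
      echo "MISSING: $f"
    fi
  done
}

# Backup location used in this walkthrough:
# check_backups /u02/bkp
```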
[oracle@10gnode1 ~]$ rman target sys/oracle@srprim auxiliary sys/oracle@tmp
Recovery Manager: Release 11.2.0.3.0 - Production on Mon Apr 7 13:54:13 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
connected to target database: SRPRIM (DBID=251294829)
connected to auxiliary database: TMP (not mounted)
RMAN> run
2> {
3> set until time "to_date('05-04-2014 11:15:00','DD-MM-YYYY HH24:MI:SS')";
4> allocate auxiliary channel dup1 device type disk;
5> allocate auxiliary channel dup2 device type disk;
6> duplicate target database to TMP nofilenamecheck;
7> }
executing command: SET until clause
using target database control file instead of recovery catalog
allocated channel: dup1
channel dup1: SID=32 device type=DISK
allocated channel: dup2
channel dup2: SID=33 device type=DISK
Starting Duplicate Db at 07-APR-14
contents of Memory Script:
{
sql clone "create spfile from memory";
}
executing Memory Script
sql statement: create spfile from memory
contents of Memory Script:
{
shutdown clone immediate;
startup clone nomount;
}
executing Memory Script
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area 1272213504 bytes
Fixed Size 1344680 bytes
Variable Size 838863704 bytes
Database Buffers 419430400 bytes
Redo Buffers 12574720 bytes
allocated channel: dup1
channel dup1: SID=29 device type=DISK
allocated channel: dup2
channel dup2: SID=30 device type=DISK
contents of Memory Script:
{
set until scn 2139313;
sql clone "alter system set control_files =
''+DATA/tmp/controlfile/current.272.844264497'', ''+FRA/tmp/controlfile/current.324.844264497'' comment=
''Set by RMAN'' scope=spfile";
sql clone "alter system set db_name =
''SRPRIM'' comment=
''Modified by RMAN duplicate'' scope=spfile";
sql clone "alter system set db_unique_name =
''TMP'' comment=
''Modified by RMAN duplicate'' scope=spfile";
shutdown clone immediate;
startup clone force nomount
restore clone primary controlfile;
alter clone database mount;
}
executing Memory Script
executing command: SET until clause
sql statement: alter system set control_files = ''+DATA/tmp/controlfile/current.272.844264497'',
''+FRA/tmp/controlfile/current.324.844264497'' comment= ''Set by RMAN'' scope=spfile
sql statement: alter system set db_name = ''SRPRIM'' comment= ''Modified by RMAN duplicate'' scope=spfile
sql statement: alter system set db_unique_name = ''TMP'' comment= ''Modified by RMAN duplicate'' scope=spfile
Oracle instance shut down
Oracle instance started
Total System Global Area 1272213504 bytes
Fixed Size 1344680 bytes
Variable Size 838863704 bytes
Database Buffers 419430400 bytes
Redo Buffers 12574720 bytes
allocated channel: dup1
channel dup1: SID=29 device type=DISK
allocated channel: dup2
channel dup2: SID=30 device type=DISK
Starting restore at 07-APR-14
channel dup1: starting datafile backup set restore
channel dup1: restoring control file
channel dup1: reading from backup piece +FRA/srprim/backupset/2014_04_05/ncnnf0_tag20140405t104856_0.333.844080539
channel dup1: piece handle=+FRA/srprim/backupset/2014_04_05/ncnnf0_tag20140405t104856_0.333.844080539 tag=TAG20140405T104856
channel dup1: restored backup piece 1
channel dup1: restore complete, elapsed time: 00:00:08
output file name=+DATA/tmp/controlfile/current.272.844264497
output file name=+FRA/tmp/controlfile/current.324.844264497
Finished restore at 07-APR-14
database mounted
contents of Memory Script:
{
set until scn 2139313;
set newname for clone datafile 1 to new;
set newname for clone datafile 2 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 4 to new;
set newname for clone datafile 5 to new;
set newname for clone datafile 6 to new;
set newname for clone datafile 7 to new;
set newname for clone datafile 8 to new;
restore
clone database
;
}
executing Memory Script
executing command: SET until clause
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
Starting restore at 07-APR-14
channel dup1: starting datafile backup set restore
channel dup1: specifying datafile(s) to restore from backup set
channel dup1: restoring datafile 00002 to +DATA
channel dup1: restoring datafile 00003 to +DATA
channel dup1: restoring datafile 00004 to +DATA
channel dup1: restoring datafile 00006 to +DATA
channel dup1: reading from backup piece /u02/bkp/SRPRIM_inc0_02p4v99g_1_1.bak
channel dup2: starting datafile backup set restore
channel dup2: specifying datafile(s) to restore from backup set
channel dup2: restoring datafile 00001 to +DATA
channel dup2: restoring datafile 00005 to +DATA
channel dup2: restoring datafile 00007 to +DATA
channel dup2: restoring datafile 00008 to +DATA
channel dup2: reading from backup piece /u02/bkp/SRPRIM_inc0_03p4v99g_1_1.bak
channel dup1: piece handle=/u02/bkp/SRPRIM_inc0_02p4v99g_1_1.bak tag=TAG20140405T104711
channel dup1: restored backup piece 1
channel dup1: restore complete, elapsed time: 00:02:25
channel dup2: piece handle=/u02/bkp/SRPRIM_inc0_03p4v99g_1_1.bak tag=TAG20140405T104711
channel dup2: restored backup piece 1
channel dup2: restore complete, elapsed time: 00:02:45
Finished restore at 07-APR-14
contents of Memory Script:
{
switch clone datafile all;
}
executing Memory Script
datafile 1 switched to datafile copy
input datafile copy RECID=9 STAMP=844264708 file name=+DATA/tmp/datafile/system.279.844264543
datafile 2 switched to datafile copy
input datafile copy RECID=10 STAMP=844264708 file name=+DATA/tmp/datafile/sysaux.280.844264543
datafile 3 switched to datafile copy
input datafile copy RECID=11 STAMP=844264708 file name=+DATA/tmp/datafile/undotbs1.278.844264543
datafile 4 switched to datafile copy
input datafile copy RECID=12 STAMP=844264708 file name=+DATA/tmp/datafile/users.275.844264543
datafile 5 switched to datafile copy
input datafile copy RECID=13 STAMP=844264708 file name=+DATA/tmp/datafile/example.276.844264543
datafile 6 switched to datafile copy
input datafile copy RECID=14 STAMP=844264708 file name=+DATA/tmp/datafile/undotbs2.277.844264543
datafile 7 switched to datafile copy
input datafile copy RECID=15 STAMP=844264708 file name=+DATA/tmp/datafile/users.274.844264543
datafile 8 switched to datafile copy
input datafile copy RECID=16 STAMP=844264708 file name=+DATA/tmp/datafile/mydata.273.844264543
contents of Memory Script:
{
set until time "to_date('APR 05 2014 11:15:00', 'MON DD YYYY HH24:MI:SS')";
recover
clone database
delete archivelog
;
}
executing Memory Script
executing command: SET until clause
Starting recover at 07-APR-14
starting media recovery
archived log for thread 1 with sequence 103 is already on disk as file
+FRA/srprim/archivelog/2014_04_05/thread_1_seq_103.334.844082183
archived log for thread 2 with sequence 50 is already on disk as file
+FRA/srprim/archivelog/2014_04_05/thread_2_seq_50.331.844082191
channel dup1: starting archived log restore to default destination
channel dup1: restoring archived log
archived log thread=1 sequence=102
channel dup1: restoring archived log
archived log thread=2 sequence=49
channel dup1: reading from backup piece /u02/bkp/SRPRIM_arch_07p4v9ce_1_1.bak
channel dup1: piece handle=/u02/bkp/SRPRIM_arch_07p4v9ce_1_1.bak tag=TAG20140405T104844
channel dup1: restored backup piece 1
channel dup1: restore complete, elapsed time: 00:00:15
archived log file name=+FRA/tmp/archivelog/2014_04_07/thread_1_seq_102.322.844264715 thread=1 sequence=102
archived log file name=+FRA/tmp/archivelog/2014_04_07/thread_2_seq_49.321.844264715 thread=2 sequence=49
channel clone_default: deleting archived log(s)
archived log file name=+FRA/tmp/archivelog/2014_04_07/thread_2_seq_49.321.844264715 RECID=22 STAMP=844264720
archived log file name=+FRA/srprim/archivelog/2014_04_05/thread_2_seq_50.331.844082191 thread=2 sequence=50
channel clone_default: deleting archived log(s)
archived log file name=+FRA/tmp/archivelog/2014_04_07/thread_1_seq_102.322.844264715 RECID=21 STAMP=844264716
archived log file name=+FRA/srprim/archivelog/2014_04_05/thread_1_seq_103.334.844082183 thread=1 sequence=103
media recovery complete, elapsed time: 00:00:03
Finished recover at 07-APR-14
Oracle instance started
Total System Global Area 1272213504 bytes
Fixed Size 1344680 bytes
Variable Size 838863704 bytes
Database Buffers 419430400 bytes
Redo Buffers 12574720 bytes
contents of Memory Script:
{
sql clone "alter system set db_name =
''TMP'' comment=
''Reset to original value by RMAN'' scope=spfile";
sql clone "alter system reset db_unique_name scope=spfile";
shutdown clone immediate;
startup clone nomount;
}
executing Memory Script
sql statement: alter system set db_name = ''TMP'' comment= ''Reset to original value by RMAN'' scope=spfile
sql statement: alter system reset db_unique_name scope=spfile
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area 1272213504 bytes
Fixed Size 1344680 bytes
Variable Size 838863704 bytes
Database Buffers 419430400 bytes
Redo Buffers 12574720 bytes
allocated channel: dup1
channel dup1: SID=29 device type=DISK
allocated channel: dup2
channel dup2: SID=30 device type=DISK
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "TMP" RESETLOGS ARCHIVELOG
MAXLOGFILES 192
MAXLOGMEMBERS 3
MAXDATAFILES 1024
MAXINSTANCES 32
MAXLOGHISTORY 292
LOGFILE
GROUP 1 SIZE 50 M ,
GROUP 2 SIZE 50 M
DATAFILE
'+DATA/tmp/datafile/system.279.844264543'
CHARACTER SET WE8MSWIN1252
sql statement: ALTER DATABASE ADD LOGFILE
INSTANCE 'i2'
GROUP 3 SIZE 50 M ,
GROUP 4 SIZE 50 M
contents of Memory Script:
{
set newname for clone tempfile 1 to new;
switch clone tempfile all;
catalog clone datafilecopy "+DATA/tmp/datafile/sysaux.280.844264543",
"+DATA/tmp/datafile/undotbs1.278.844264543",
"+DATA/tmp/datafile/users.275.844264543",
"+DATA/tmp/datafile/example.276.844264543",
"+DATA/tmp/datafile/undotbs2.277.844264543",
"+DATA/tmp/datafile/users.274.844264543",
"+DATA/tmp/datafile/mydata.273.844264543";
switch clone datafile all;
}
executing Memory Script
executing command: SET NEWNAME
renamed tempfile 1 to +DATA in control file
cataloged datafile copy
datafile copy file name=+DATA/tmp/datafile/sysaux.280.844264543 RECID=1 STAMP=844264790
cataloged datafile copy
datafile copy file name=+DATA/tmp/datafile/undotbs1.278.844264543 RECID=2 STAMP=844264790
cataloged datafile copy
datafile copy file name=+DATA/tmp/datafile/users.275.844264543 RECID=3 STAMP=844264791
cataloged datafile copy
datafile copy file name=+DATA/tmp/datafile/example.276.844264543 RECID=4 STAMP=844264791
cataloged datafile copy
datafile copy file name=+DATA/tmp/datafile/undotbs2.277.844264543 RECID=5 STAMP=844264791
cataloged datafile copy
datafile copy file name=+DATA/tmp/datafile/users.274.844264543 RECID=6 STAMP=844264791
cataloged datafile copy
datafile copy file name=+DATA/tmp/datafile/mydata.273.844264543 RECID=7 STAMP=844264791
datafile 2 switched to datafile copy
input datafile copy RECID=1 STAMP=844264790 file name=+DATA/tmp/datafile/sysaux.280.844264543
datafile 3 switched to datafile copy
input datafile copy RECID=2 STAMP=844264790 file name=+DATA/tmp/datafile/undotbs1.278.844264543
datafile 4 switched to datafile copy
input datafile copy RECID=3 STAMP=844264791 file name=+DATA/tmp/datafile/users.275.844264543
datafile 5 switched to datafile copy
input datafile copy RECID=4 STAMP=844264791 file name=+DATA/tmp/datafile/example.276.844264543
datafile 6 switched to datafile copy
input datafile copy RECID=5 STAMP=844264791 file name=+DATA/tmp/datafile/undotbs2.277.844264543
datafile 7 switched to datafile copy
input datafile copy RECID=6 STAMP=844264791 file name=+DATA/tmp/datafile/users.274.844264543
datafile 8 switched to datafile copy
input datafile copy RECID=7 STAMP=844264791 file name=+DATA/tmp/datafile/mydata.273.844264543
contents of Memory Script:
{
Alter clone database open resetlogs;
}
executing Memory Script
database opened
Finished Duplicate Db at 07-APR-14
released channel: dup1
released channel: dup2
RMAN>
Now that the duplicate until time has completed and the TMP database is open, let’s check that the user SHIVU exists and that the table TEST is present with its contents.
[oracle@10gnode1 ~]$ sqlplus sys/oracle@tmp as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Mon Apr 7 14:02:07 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select username,account_status from dba_users where username='SHIVU';

USERNAME   ACCOUNT_STATUS
--------   ---------------
SHIVU      OPEN

SQL> conn shivu/oracle@tmp
Connected.
SQL> select * from tab;

TNAME   TABTYPE   CLUSTERID
-----   -------   ----------
TEST    TABLE
TEST1   TABLE

SQL> select * from test;

      CODE
----------
       100
       101
       102
       103
       104
       105
       106
       107
       108
       109
       110

      CODE
----------
       111
       112
       113
       114

15 rows selected.

SQL> create directory DP_DIR as '/u01/app/oracle/diag/rdbms/tmp/tmp/dpdump';

Directory created.

SQL> grant read,write on directory DP_DIR to shivu;

Grant succeeded.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
We can see that the table TEST exists with all its rows. This table now needs to be exported from TMP and imported into the original database SRPRIM. I’m using the Data Pump option for the export and import.
I’m creating a database directory called DP_DIR pointing to “/u01/app/oracle/diag/rdbms/tmp/tmp/dpdump”, which is used to hold the dumpfile and logfile.
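One detail worth noting: a DIRECTORY object only stores a path string, so the OS directory itself must exist and be writable by the oracle OS user before expdp writes to it. A small sketch; `ensure_dp_dir` is a hypothetical helper:

```shell
# Make sure the OS path behind a Data Pump DIRECTORY object exists
# and is writable, creating it if needed.
ensure_dp_dir() {
  mkdir -p "$1" && [ -w "$1" ] && echo "ready: $1"
}

# Path used for the TMP database in this walkthrough:
# ensure_dp_dir /u01/app/oracle/diag/rdbms/tmp/tmp/dpdump
```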
[oracle@10gnode1 ~]$ export ORACLE_SID=tmp
[oracle@10gnode1 ~]$ expdp shivu/oracle directory=DP_DIR dumpfile=test_table.dmp logfile=test_table.log tables=test

Export: Release 11.2.0.3.0 - Production on Mon Apr 7 14:14:45 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SHIVU"."SYS_EXPORT_TABLE_01": shivu/******** directory=DP_DIR dumpfile=test_table.dmp logfile=test_table.log tables=test
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "SHIVU"."TEST"    5.117 KB    15 rows
Master table "SHIVU"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SHIVU.SYS_EXPORT_TABLE_01 is:
  /u01/app/oracle/diag/rdbms/tmp/tmp/dpdump/test_table.dmp
Job "SHIVU"."SYS_EXPORT_TABLE_01" successfully completed at 14:15:14
The export of table TEST is taken from the TMP database, and the dumpfile and logfile are placed under “/u01/app/oracle/diag/rdbms/tmp/tmp/dpdump/”.
Similarly, I’m creating a database directory called DP_DIR with the path “/u01/app/oracle/diag/rdbms/srprim/srprim1/dpdump” on node 10gnode1 for the SRPRIM database.
[oracle@10gnode1 ~]$ sqlplus sys/oracle@srprim1 as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Mon Apr 7 14:16:58 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> select status,instance_name from gv$instance;
STATUS INSTANCE_NAME
------ ---------------------
OPEN srprim1
OPEN srprim2
SQL> create directory DP_DIR as '/u01/app/oracle/diag/rdbms/srprim/srprim1/dpdump';
Directory created.
SQL> grant read,write on directory DP_DIR to shivu;
Grant succeeded.
SQL> select * from dba_directories;
OWNER  DIRECTORY_NAME         DIRECTORY_PATH
-----  --------------------   ---------------------------------------------
SYS    DATA_PUMP_DIR          /u01/app/oracle/product/11.2.0/db3/rdbms/log/
SYS    SUBDIR                 /u01/app/oracle/product/11.2.0/db_1/demo/schema/order_entry//2002/Sep
SYS    SS_OE_XMLDIR           /u01/app/oracle/product/11.2.0/db_1/demo/schema/order_entry/
SYS    LOG_FILE_DIR           /u01/app/oracle/product/11.2.0/db_1/demo/schema/log/
SYS    DATA_FILE_DIR          /u01/app/oracle/product/11.2.0/db_1/demo/schema/sales_history/
SYS    XMLDIR                 /ade/b/1191423112/oracle/rdbms/xml
SYS    MEDIA_DIR              /u01/app/oracle/product/11.2.0/db_1/demo/schema/product_media/
SYS    DP_DIR                 /u01/app/oracle/diag/rdbms/srprim/srprim1/dpdump
SYS    ORACLE_OCM_CONFIG_DIR  /u01/app/oracle/product/11.2.0/db3/ccr/state
9 rows selected.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Move the dumpfile “test_table.dmp” from “/u01/app/oracle/diag/rdbms/tmp/tmp/dpdump/” to “/u01/app/oracle/diag/rdbms/srprim/srprim1/dpdump/”.
[oracle@10gnode1 dpdump]$ mv /u01/app/oracle/diag/rdbms/tmp/tmp/dpdump/test_table.dmp /u01/app/oracle/diag/rdbms/srprim/srprim1/dpdump/
[oracle@10gnode1 dpdump]$ cd /u01/app/oracle/diag/rdbms/srprim/srprim1/dpdump/
[oracle@10gnode1 dpdump]$ ls -lrt
total 92
-rw-r----- 1 oracle oinstall 90112 Apr  7 14:15 test_table.dmp
Now, let’s proceed with the import of TEST table from the dumpfile to the SRPRIM database.
[oracle@10gnode1 dpdump]$ export ORACLE_SID=srprim1
[oracle@10gnode1 dpdump]$ impdp shivu/oracle directory=DP_DIR dumpfile=test_table.dmp logfile=test_table_imp.log tables=TEST

Import: Release 11.2.0.3.0 - Production on Mon Apr 7 15:18:02 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Master table "SHIVU"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "SHIVU"."SYS_IMPORT_TABLE_01": shivu/******** directory=DP_DIR dumpfile=test_table.dmp logfile=test_table_imp.log tables=TEST
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SHIVU"."TEST"    5.117 KB    15 rows
Job "SHIVU"."SYS_IMPORT_TABLE_01" successfully completed at 15:18:51
Let’s check if the table is available with all the rows in the SRPRIM database.
[oracle@10gnode1 dpdump]$ sqlplus shivu/oracle@srprim

SQL*Plus: Release 11.2.0.3.0 Production on Mon Apr 7 15:22:45 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> show user
USER is "SHIVU"
SQL> select * from tab;

TNAME   TABTYPE   CLUSTERID
-----   -------   -----------
TEST    TABLE
TEST1   TABLE

SQL> select * from test;

      CODE
----------
       100
       101
       102
       103
       104
       105
       106
       107
       108
       109
       110

      CODE
----------
       111
       112
       113
       114

15 rows selected.

SQL> select owner,object_name,to_char(CREATED,'DD-MON-YYYY HH24:MI:SS') from dba_objects where object_name='TEST' and owner='SHIVU';

OWNER   OBJECT_NAME   TO_CHAR(CREATED,'DD-
-----   -----------   --------------------
SHIVU   TEST          07-APR-2014 15:18:47

SQL> !date
Mon Apr 7 15:39:25 IST 2014
We can see that all the rows of the TEST table have been imported. We can also cross-check against the “dba_objects” view to confirm when the table was created (that is, right after the import).
Here we go !!
COPYRIGHT
© Shivananda Rao P, 2012 to 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Shivananda Rao and http://www.shivanandarao-oracle.com with appropriate and specific direction to the original content.
DISCLAIMER
The views expressed here are my own and do not necessarily reflect the views of any other individual, business entity, or organization. The views expressed by visitors on this blog are theirs solely and may not reflect mine.
