RAC Node addition in Oracle 12c
This article demonstrates how to add a node to a RAC cluster in Oracle 12c. The environment used here has ASM as the storage for the OCR/voting disks and the database's physical files.
Environment:
OS               : OEL 6
Existing nodes   : 12cnode1, 12cnode2
Node to be added : 12cnode3
Database name    : srprim
DB version       : 12.1.0.2
It is assumed that the new node to be added is already configured to access the ASM disks that are shared between the existing nodes. OS installation, Oracle/Grid user creation and network configuration steps are not outlined in this document.
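As a quick hedged check of that assumption (the device names /dev/sdb and /dev/sdc are placeholders, not from the original setup), the SCSI IDs of the shared disks can be compared between an existing node and the new node; matching IDs indicate that both see the same storage:

# Run on 12cnode1 and again on 12cnode3; the IDs printed should match pairwise.
/sbin/scsi_id -g -u -d /dev/sdb
/sbin/scsi_id -g -u -d /dev/sdc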
Let's move on with the node addition.
Configure the public and private interface IPs (eth0 and eth1) on the new node. I have assigned the IPs as follows:
Public interface    : 192.168.0.120
Private interface   : 192.168.1.109
Virtual IP interface: 192.168.0.121
Update the “/etc/hosts” file on all the nodes of the cluster with the above IP addresses of the new node.
Below is a sample of the “/etc/hosts” file on the new node. The same entries exist on the other nodes of the cluster.
[root@12cnode3 u02]# cat /etc/hosts
127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
################ PUBLIC ###################
192.168.0.115   12cnode1.mydomain       12cnode1
192.168.0.116   12cnode2.mydomain       12cnode2
192.168.0.120   12cnode3.mydomain       12cnode3
############### PRIVATE ###################
192.168.1.107   12cnode1-priv.mydomain  12cnode1-priv
192.168.1.108   12cnode2-priv.mydomain  12cnode2-priv
192.168.1.109   12cnode3-priv.mydomain  12cnode3-priv
############### VIP #######################
192.168.0.117   12cnode1-vip.mydomain   12cnode1-vip
192.168.0.118   12cnode2-vip.mydomain   12cnode2-vip
192.168.0.121   12cnode3-vip.mydomain   12cnode3-vip
############## SCAN #######################
192.168.0.119   node12c-scan.mydomain   node12c-scan
#################################################################
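A quick sanity check at this point is to confirm that the new entries resolve and respond from every node. A minimal sketch (run it on each node; the host list matches the /etc/hosts entries above, and the new node's VIP is deliberately excluded since it is brought online by Clusterware only later):

for host in 12cnode1 12cnode2 12cnode3 \
            12cnode1-priv 12cnode2-priv 12cnode3-priv; do
    echo "--- $host ---"
    ping -c 2 "$host"        # every public/private name should reply
done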
As a prerequisite, install the “cvuqdisk” and “nfs-utils” RPMs on the new node.
[root@12cnode3 u02]# rpm -Uivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
Using default group oinstall to install package
   1:cvuqdisk               ########################################### [100%]
[root@12cnode3 u02]# rpm -Uivh nfs-utils-1.2.3-15.el6.x86_64.rpm --force --nodeps
warning: nfs-utils-1.2.3-15.el6.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 192a7d7d: NOKEY
Preparing...                ########################################### [100%]
   1:nfs-utils              ########################################### [100%]
[root@12cnode3 u02]#
Update the /etc/sysctl.conf file on the new node with the required kernel parameters. The values can be obtained from the existing nodes.
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
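After editing the file, the kernel parameters can be loaded without a reboot using the standard sysctl call:

[root@12cnode3 ~]# sysctl -p     # re-reads /etc/sysctl.conf and applies the values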
Update /etc/security/limits.conf file on the new node. The values can be obtained from the existing nodes.
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768
oracle soft memlock unlimited
oracle hard memlock unlimited
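To confirm the limits are effective, open a fresh session as the oracle user and check them; a quick hedged verification:

[root@12cnode3 ~]# su - oracle -c "ulimit -Sn; ulimit -Hn; ulimit -Su; ulimit -Hu"
# Expected: 1024, 65536, 2047 and 16384 respectively, per the entries above.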
Disable the firewall between the nodes.
[root@12cnode3 u02]# service iptables status
Table: filter
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination
1    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED
2    ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0
3    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
4    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:22
5    REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
num  target     prot opt source               destination
1    REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination
[root@12cnode3 u02]# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@12cnode3 u02]#
[root@12cnode3 u02]# chkconfig iptables off
[root@12cnode3 u02]# service iptables status
iptables: Firewall is not running.
Check whether the avahi-daemon service is running and, if so, stop and disable it.
[root@12cnode3 u02]# chkconfig --list | grep avahi
avahi-daemon    0:off   1:off   2:off   3:on    4:on    5:on    6:off
[root@12cnode3 u02]#
[root@12cnode3 u02]# service avahi-daemon stop
Shutting down Avahi daemon:                                [  OK  ]
[root@12cnode3 u02]# chkconfig avahi-daemon off
[root@12cnode3 u02]# chkconfig --list | grep avahi
avahi-daemon    0:off   1:off   2:off   3:off   4:off   5:off   6:off
Set the parameter NOZEROCONF=yes in the /etc/sysconfig/network file.
[root@12cnode3 u02]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=12cnode3.mydomain
NOZEROCONF=yes
My ASM disks are configured using UDEV, so I configure the same UDEV rules on the new node as defined on the other nodes of the cluster.
This step can be skipped if UDEV is not being used to configure the disks.
[root@12cnode3 dev]# cat /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL =="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB1cd37fc0-cc66bbc6", NAME="DSK1", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL =="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBc62708e8-ee54011f", NAME="DSK2", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL =="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB668ac5ba-619c34bb", NAME="DSK3", OWNER="oracle", GROUP="oinstall", MODE="0660"
KERNEL =="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB1dfcc4bc-fc73fb86", NAME="DSK4", OWNER="oracle", GROUP="oinstall", MODE="0660"
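To activate the new rules without a reboot, reload and trigger UDEV and then confirm that the devices appear with the expected ownership. A minimal sketch (on OEL 6, running start_udev is an equivalent alternative):

[root@12cnode3 ~]# udevadm control --reload-rules
[root@12cnode3 ~]# udevadm trigger
[root@12cnode3 ~]# ls -l /dev/DSK*      # expect oracle:oinstall, mode 0660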
Set up passwordless SSH between the existing nodes of the cluster and the new node. This can be done by running the “sshUserSetup.sh” script that ships with the GI software.
[root@12cnode3 ~]# cd /u03
[root@12cnode3 u03]# ls -lrt
total 52
drwxrwxrwx. 2 oracle oinstall 16384 Mar  7 19:26 lost+found
-rwxr-xr-x. 1 oracle oinstall 32334 Mar 16 20:10 sshUserSetup.sh
[root@12cnode3 u03]#
[root@12cnode3 u03]# ./sshUserSetup.sh -user oracle -hosts "12cnode1 12cnode2 12cnode3" -noPromptPassphrase -confirm -advanced
The output of this script is also logged into /tmp/sshUserSetup_2016-03-16-20-13-42.log
Hosts are 12cnode1 12cnode2 12cnode3
user is oracle
Platform:- Linux
Checking if the remote hosts are reachable
PING 12cnode1.mydomain (192.168.0.115) 56(84) bytes of data.
64 bytes from 12cnode1.mydomain (192.168.0.115): icmp_seq=1 ttl=64 time=1.33 ms
64 bytes from 12cnode1.mydomain (192.168.0.115): icmp_seq=2 ttl=64 time=0.338 ms
64 bytes from 12cnode1.mydomain (192.168.0.115): icmp_seq=3 ttl=64 time=0.237 ms
64 bytes from 12cnode1.mydomain (192.168.0.115): icmp_seq=4 ttl=64 time=0.360 ms
64 bytes from 12cnode1.mydomain (192.168.0.115): icmp_seq=5 ttl=64 time=0.243 ms
--- 12cnode1.mydomain ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4002ms
rtt min/avg/max/mdev = 0.237/0.502/1.332/0.417 ms
PING 12cnode2.mydomain (192.168.0.116) 56(84) bytes of data.
64 bytes from 12cnode2.mydomain (192.168.0.116): icmp_seq=1 ttl=64 time=0.787 ms
64 bytes from 12cnode2.mydomain (192.168.0.116): icmp_seq=2 ttl=64 time=0.250 ms
64 bytes from 12cnode2.mydomain (192.168.0.116): icmp_seq=3 ttl=64 time=0.267 ms
64 bytes from 12cnode2.mydomain (192.168.0.116): icmp_seq=4 ttl=64 time=0.260 ms
64 bytes from 12cnode2.mydomain (192.168.0.116): icmp_seq=5 ttl=64 time=0.229 ms
--- 12cnode2.mydomain ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4001ms
rtt min/avg/max/mdev = 0.229/0.358/0.787/0.215 ms
PING 12cnode3.mydomain (192.168.56.113) 56(84) bytes of data.
64 bytes from 12cnode3.mydomain (192.168.56.113): icmp_seq=1 ttl=64 time=0.010 ms
64 bytes from 12cnode3.mydomain (192.168.56.113): icmp_seq=2 ttl=64 time=0.018 ms
64 bytes from 12cnode3.mydomain (192.168.56.113): icmp_seq=3 ttl=64 time=0.068 ms
64 bytes from 12cnode3.mydomain (192.168.56.113): icmp_seq=4 ttl=64 time=0.058 ms
64 bytes from 12cnode3.mydomain (192.168.56.113): icmp_seq=5 ttl=64 time=0.069 ms
--- 12cnode3.mydomain ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.010/0.044/0.069/0.026 ms
Remote host reachability check succeeded.
The following hosts are reachable: 12cnode1 12cnode2 12cnode3.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost 12cnode1
numhosts 3
The script will setup SSH connectivity from the host 12cnode3.mydomain to all
the remote hosts. After the script is executed, the user can use SSH to run
commands on the remote hosts or copy files between this host 12cnode3.mydomain
and the remote hosts without being prompted for passwords or confirmations.
NOTE 1:
As part of the setup procedure, this script will use ssh and scp to copy
files between the local host and the remote hosts. Since the script does not
store passwords, you may be prompted for the passwords during the execution of
the script whenever ssh or scp is invoked.
NOTE 2:
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY AND
THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE directories.
Do you want to continue and let the script make the above mentioned changes (yes/no)?
Confirmation provided on the command line
The user chose yes
User chose to skip passphrase related questions.
Creating .ssh directory on local host, if not present already
Creating authorized_keys file on local host
Changing permissions on authorized_keys to 644 on local host
Creating known_hosts file on local host
Changing permissions on known_hosts to 644 on local host
Creating config file on local host
If a config file exists already at /root/.ssh/config, it would be backed up to /root/.ssh/config.backup.
Removing old private/public keys on local host
Running SSH keygen on local host with empty passphrase
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
a4:83:da:75:8e:99:a2:57:63:a7:2b:b0:e2:df:4f:09 root@12cnode3.mydomain
The key's randomart image is:
+--[ RSA 1024]----+
|                 |
|                 |
|   .             |
|  . o            |
| .E+ S           |
| .o .=Bo         |
| .ooo+*.         |
|. ..+.o          |
|.ooo ooo         |
+-----------------+
Creating .ssh directory and setting permissions on remote host 12cnode1
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oracle. THIS IS AN SSH REQUIREMENT.
The script would create ~oracle/.ssh/config file on remote host 12cnode1. If a config file exists already at ~oracle/.ssh/config, it would be backed up to ~oracle/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host 12cnode1.
Warning: Permanently added '12cnode1,192.168.0.115' (RSA) to the list of known hosts.
oracle@12cnode1's password:
Done with creating .ssh directory and setting permissions on remote host 12cnode1.
Creating .ssh directory and setting permissions on remote host 12cnode2
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oracle. THIS IS AN SSH REQUIREMENT.
The script would create ~oracle/.ssh/config file on remote host 12cnode2. If a config file exists already at ~oracle/.ssh/config, it would be backed up to ~oracle/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host 12cnode2.
Warning: Permanently added '12cnode2,192.168.0.116' (RSA) to the list of known hosts.
oracle@12cnode2's password:
Done with creating .ssh directory and setting permissions on remote host 12cnode2.
Creating .ssh directory and setting permissions on remote host 12cnode3
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oracle. THIS IS AN SSH REQUIREMENT.
The script would create ~oracle/.ssh/config file on remote host 12cnode3. If a config file exists already at ~oracle/.ssh/config, it would be backed up to ~oracle/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host 12cnode3.
Warning: Permanently added '12cnode3' (RSA) to the list of known hosts.
oracle@12cnode3's password:
Done with creating .ssh directory and setting permissions on remote host 12cnode3.
Copying local host public key to the remote host 12cnode1
The user may be prompted for a password or passphrase here since the script would be using SCP for host 12cnode1.
oracle@12cnode1's password:
Permission denied, please try again.
oracle@12cnode1's password:
Done copying local host public key to the remote host 12cnode1
Copying local host public key to the remote host 12cnode2
The user may be prompted for a password or passphrase here since the script would be using SCP for host 12cnode2.
oracle@12cnode2's password:
Done copying local host public key to the remote host 12cnode2
Copying local host public key to the remote host 12cnode3
The user may be prompted for a password or passphrase here since the script would be using SCP for host 12cnode3.
oracle@12cnode3's password:
Done copying local host public key to the remote host 12cnode3
Creating keys on remote host 12cnode1 if they do not exist already. This is required to setup SSH on host 12cnode1.
Creating keys on remote host 12cnode2 if they do not exist already. This is required to setup SSH on host 12cnode2.
Creating keys on remote host 12cnode3 if they do not exist already. This is required to setup SSH on host 12cnode3.
Generating public/private rsa key pair.
Your identification has been saved in .ssh/id_rsa.
Your public key has been saved in .ssh/id_rsa.pub.
The key fingerprint is:
de:87:40:a9:83:7d:92:22:7e:01:cf:95:4d:25:7b:52 oracle@12cnode3.mydomain
The key's randomart image is:
+--[ RSA 1024]----+
|          o.E    |
|           + =   |
|        . o * .  |
|       + + + o   |
|      . * * S    |
|     . . o = o . |
|      . . . o .  |
|           . .   |
|                 |
+-----------------+
Updating authorized_keys file on remote host 12cnode1
Updating known_hosts file on remote host 12cnode1
Updating authorized_keys file on remote host 12cnode2
Updating known_hosts file on remote host 12cnode2
Updating authorized_keys file on remote host 12cnode3
Updating known_hosts file on remote host 12cnode3
SSH setup is complete.

------------------------------------------------------------------------
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh
to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP,
THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR
PASSWORDS. If you see any output other than date or are prompted for the
password, ssh is not setup correctly and you will need to resolve the
issue and set up ssh again.
The possible causes for failure could be:
1. The server settings in /etc/ssh/sshd_config file do not allow ssh for user oracle.
2. The server may have disabled public key based authentication.
3. The client public key on the server may be outdated.
4. ~oracle or ~oracle/.ssh on the remote host may not be owned by oracle.
5. User may not have passed -shared option for shared remote users or may be passing the -shared option for non-shared remote users.
6. If there is output in addition to the date, but no password is asked, it may be a security alert shown as part of company policy. Append the additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file.
------------------------------------------------------------------------
--12cnode1:--
Running /usr/bin/ssh -x -l oracle 12cnode1 date to verify SSH connectivity has been setup from local host to 12cnode1.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Wed Mar 16 20:14:26 IST 2016
------------------------------------------------------------------------
--12cnode2:--
Running /usr/bin/ssh -x -l oracle 12cnode2 date to verify SSH connectivity has been setup from local host to 12cnode2.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Wed Mar 16 20:14:26 IST 2016
------------------------------------------------------------------------
--12cnode3:--
Running /usr/bin/ssh -x -l oracle 12cnode3 date to verify SSH connectivity has been setup from local host to 12cnode3.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Wed Mar 16 20:14:27 IST 2016
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from 12cnode1 to 12cnode1
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Wed Mar 16 20:14:28 IST 2016
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from 12cnode1 to 12cnode2
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Wed Mar 16 20:14:27 IST 2016
------------------------------------------------------------------------
------------------------------------------------------------------------
Verifying SSH connectivity has been setup from 12cnode1 to 12cnode3
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL.
Warning: Permanently added the RSA host key for IP address '192.168.0.120' to the list of known hosts.
Wed Mar 16 20:14:28 IST 2016
------------------------------------------------------------------------
-Verification from complete-
SSH verification complete.
[root@12cnode3 u03]#
Though the script reports that passwordless SSH has been verified, I would like to confirm it with a small test of my own.
[oracle@12cnode1 ~]$ ssh 12cnode2 date
Wed Mar 16 20:15:53 IST 2016
[oracle@12cnode1 ~]$ ssh 12cnode3 date
Wed Mar 16 20:15:59 IST 2016
[oracle@12cnode1 ~]$ ssh 12cnode1 date
Wed Mar 16 20:16:05 IST 2016
[oracle@12cnode1 ~]$
[oracle@12cnode2 ~]$ ssh 12cnode1 date
Wed Mar 16 20:16:15 IST 2016
[oracle@12cnode2 ~]$ ssh 12cnode3 date
Warning: Permanently added the RSA host key for IP address '192.168.0.120' to the list of known hosts.
Wed Mar 16 20:16:19 IST 2016
[oracle@12cnode2 ~]$ ssh 12cnode2 date
Wed Mar 16 20:16:25 IST 2016
[oracle@12cnode3 dev]$ ssh 12cnode1 date
Wed Mar 16 20:16:37 IST 2016
[oracle@12cnode3 dev]$ ssh 12cnode2 date
Wed Mar 16 20:16:42 IST 2016
[oracle@12cnode3 dev]$ ssh 12cnode3 date
Wed Mar 16 20:16:48 IST 2016
Run the CLUVFY utility with the “stage -pre nodeadd” option to check the prerequisites on the new node. This must be run from a node that is already part of the cluster.
Here, it’s being run from node “12cnode1”.
[oracle@12cnode1 bin]$ cd /u01/app/12.1.0.2/grid/bin
[oracle@12cnode1 bin]$ ./cluvfy stage -pre nodeadd -n 12cnode3 -verbose > /u02/12cnode3prenodeadd.txt
[oracle@12cnode1 bin]$
Let me check which conditions were not met.
[oracle@12cnode1 bin]$ grep -i "failed" /u02/12cnode3prenodeadd.txt
  12cnode3      2.6.32-71.el6.x86_64    2.6.39    failed
  12cnode1      2.6.32-71.el6.x86_64    2.6.39    failed
Result: Kernel version check failed
Result: Clock synchronization check using Network Time Protocol(NTP) failed
  12cnode1      failed
  12cnode3      failed
Check for integrity of file "/etc/resolv.conf" failed
The above checks have failed. The kernel version check can be ignored as long as the kernel is at 2.6.32 or above. The integrity check of the “/etc/resolv.conf” file has also failed, and I am ignoring it as I am not using any DNS in this environment.
Let's move on with the node addition. In 12c, the location of “addnode.sh” differs from that in 11gR2: in 11gR2 it was located under GRID_HOME/oui/bin, while in 12c it is under GRID_HOME/addnode. Run the “addnode.sh” script from a node that is already part of the cluster. Here, it is run from 12cnode1 in silent mode (not through the GUI), specifying the new node name and its VIP name. Since the prerequisite checks above have failed, I am running “addnode.sh” with the “-ignorePrereq” option.
[oracle@12cnode1 ~]$ cd /u01/app/12.1.0.2/grid/addnode/
[oracle@12cnode1 addnode]$ ./addnode.sh "CLUSTER_NEW_NODES={12cnode3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={12cnode3-vip}" -silent -ignorePrereq
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 44743 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 8138 MB     Passed

Prepare Configuration in progress.
Prepare Configuration successful.
..................................................   8% Done.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/addNodeActions2016-03-23_05-18-17PM.log

Instantiate files in progress.
Instantiate files successful.
..................................................   14% Done.

Copying files to node in progress.
Copying files to node successful.
..................................................   73% Done.

Saving cluster inventory in progress.
..................................................   80% Done.
Saving cluster inventory successful.
The Cluster Node Addition of /u01/app/12.1.0.2/grid was successful.
Please check '/tmp/silentInstall.log' for more details.

Setup Oracle Base in progress.
Setup Oracle Base successful.
..................................................   88% Done.

As a root user, execute the following script(s):
        1. /u01/app/oraInventory/orainstRoot.sh
        2. /u01/app/12.1.0.2/grid/root.sh

Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes:
[12cnode3]
Execute /u01/app/12.1.0.2/grid/root.sh on the following nodes:
[12cnode3]

The scripts can be executed in parallel on all the nodes.

..........
Update Inventory in progress.
..................................................   100% Done.

Update Inventory successful.
Successfully Setup Software.
[oracle@12cnode1 addnode]$
As the root user, run the scripts “/u01/app/oraInventory/orainstRoot.sh” and “/u01/app/12.1.0.2/grid/root.sh” on the new node, 12cnode3.
Here is the execution of the “orainstRoot.sh” script:
[root@12cnode3 grid]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
Execution of “root.sh” is as follows:
[root@12cnode3 grid]# /u01/app/12.1.0.2/grid/root.sh
Check /u01/app/12.1.0.2/grid/install/root_12cnode3.mydomain_2016-03-23_18-49-49.log for the output of root script
[root@12cnode3 grid]#
[root@12cnode3 ~]# cat /u01/app/12.1.0.2/grid/install/root_12cnode3.mydomain_2016-03-23_18-49-49.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0.2/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2016/03/23 18:49:51 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2016/03/23 18:49:52 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2016/03/23 18:49:56 CLSRSC-363: User ignored prerequisites during installation
OLR initialization - successful
2016/03/23 18:52:28 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on '12cnode3'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on '12cnode3'
CRS-2677: Stop of 'ora.drivers.acfs' on '12cnode3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on '12cnode3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on '12cnode3'
CRS-2672: Attempting to start 'ora.evmd' on '12cnode3'
CRS-2676: Start of 'ora.mdnsd' on '12cnode3' succeeded
CRS-2676: Start of 'ora.evmd' on '12cnode3' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on '12cnode3'
CRS-2676: Start of 'ora.gpnpd' on '12cnode3' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on '12cnode3'
CRS-2676: Start of 'ora.gipcd' on '12cnode3' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on '12cnode3'
CRS-2676: Start of 'ora.cssdmonitor' on '12cnode3' succeeded
CRS-2672: Attempting to start 'ora.cssd' on '12cnode3'
CRS-2672: Attempting to start 'ora.diskmon' on '12cnode3'
CRS-2676: Start of 'ora.diskmon' on '12cnode3' succeeded
CRS-2676: Start of 'ora.cssd' on '12cnode3' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on '12cnode3'
CRS-2672: Attempting to start 'ora.ctssd' on '12cnode3'
CRS-2676: Start of 'ora.ctssd' on '12cnode3' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on '12cnode3' succeeded
CRS-2672: Attempting to start 'ora.asm' on '12cnode3'
CRS-2676: Start of 'ora.asm' on '12cnode3' succeeded
CRS-2672: Attempting to start 'ora.storage' on '12cnode3'
CRS-2676: Start of 'ora.storage' on '12cnode3' succeeded
CRS-2672: Attempting to start 'ora.crf' on '12cnode3'
CRS-2676: Start of 'ora.crf' on '12cnode3' succeeded
CRS-2672: Attempting to start 'ora.crsd' on '12cnode3'
CRS-2676: Start of 'ora.crsd' on '12cnode3' succeeded
CRS-6017: Processing resource auto-start for servers: 12cnode3
CRS-2672: Attempting to start 'ora.net1.network' on '12cnode3'
CRS-2672: Attempting to start 'ora.FRA.dg' on '12cnode3'
CRS-2676: Start of 'ora.net1.network' on '12cnode3' succeeded
CRS-2672: Attempting to start 'ora.ons' on '12cnode3'
CRS-2676: Start of 'ora.ons' on '12cnode3' succeeded
CRS-2676: Start of 'ora.FRA.dg' on '12cnode3' succeeded
CRS-6016: Resource auto-start has completed for server 12cnode3
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2016/03/23 19:01:12 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2016/03/23 19:02:21 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Let’s check the CRS status and the resources on the new node.
[root@12cnode3 ~]# cd /u01/app/12.1.0.2/grid/bin/
[root@12cnode3 bin]# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@12cnode3 bin]# ./crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       12cnode1                 STABLE
               ONLINE  ONLINE       12cnode2                 STABLE
               ONLINE  ONLINE       12cnode3                 STABLE
ora.FRA.dg
               ONLINE  ONLINE       12cnode1                 STABLE
               ONLINE  ONLINE       12cnode2                 STABLE
               ONLINE  ONLINE       12cnode3                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       12cnode1                 STABLE
               ONLINE  ONLINE       12cnode2                 STABLE
               ONLINE  ONLINE       12cnode3                 STABLE
ora.asm
               ONLINE  ONLINE       12cnode1                 Started,STABLE
               ONLINE  ONLINE       12cnode2                 Started,STABLE
               ONLINE  ONLINE       12cnode3                 Started,STABLE
ora.net1.network
               ONLINE  ONLINE       12cnode1                 STABLE
               ONLINE  ONLINE       12cnode2                 STABLE
               ONLINE  ONLINE       12cnode3                 STABLE
ora.ons
               ONLINE  ONLINE       12cnode1                 STABLE
               ONLINE  ONLINE       12cnode2                 STABLE
               ONLINE  ONLINE       12cnode3                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.12cnode1.vip
      1        ONLINE  ONLINE       12cnode1                 STABLE
ora.12cnode2.vip
      1        ONLINE  ONLINE       12cnode2                 STABLE
ora.12cnode3.vip
      1        ONLINE  ONLINE       12cnode3                 STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       12cnode2                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       12cnode1                 169.254.3.189 192.16
                                                             8.1.107,STABLE
ora.cvu
      1        ONLINE  ONLINE       12cnode1                 STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       12cnode1                 Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       12cnode2                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       12cnode2                 STABLE
ora.srprim.db
      1        ONLINE  ONLINE       12cnode1                 Open,STABLE
      2        ONLINE  ONLINE       12cnode2                 Open,STABLE
--------------------------------------------------------------------------------
[root@12cnode3 bin]#
Cluster status on all the nodes:
[oracle@12cnode1 ~]$ cd /u01/app/12.1.0.2/grid/bin/
[oracle@12cnode1 bin]$ ./crsctl check cluster -all
**************************************************************
12cnode1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
12cnode2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
12cnode3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[oracle@12cnode1 bin]$
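Optionally, the node addition can also be validated from an existing node with the post-nodeadd stage of CLUVFY; the same checks that failed during the pre-stage (such as “/etc/resolv.conf”) may resurface here and can be judged the same way:

[oracle@12cnode1 bin]$ ./cluvfy stage -post nodeadd -n 12cnode3 -verbose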
Now it's time to add the RDBMS home to the new node. This is done the same way as above, by running the “addnode.sh” script from the RDBMS home on an existing node and not from the new node. (Of course, it cannot be run from the new node, as the RDBMS home does not exist there yet 🙂 )
The script is located under $ORACLE_HOME/addnode, and here it is being run from 12cnode1. Here too, I am ignoring the prerequisite checks, since the “/etc/resolv.conf” check is not met.
[oracle@12cnode1 ~]$ cd /u01/app/oracle/product/12.1.0.2/db_1/addnode/
[oracle@12cnode1 addnode]$ ./addnode.sh "CLUSTER_NEW_NODES={12cnode3}" -silent -ignorePrereq
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 44736 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 8189 MB     Passed

Prepare Configuration in progress.
Prepare Configuration successful.
..................................................   8% Done.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/addNodeActions2016-03-24_11-18-10AM.log

Instantiate files in progress.
Instantiate files successful.
..................................................   14% Done.

Copying files to node in progress.
Copying files to node successful.
..................................................   73% Done.

Saving cluster inventory in progress.
..................................................   80% Done.
Saving cluster inventory successful.
The Cluster Node Addition of /u01/app/oracle/product/12.1.0.2/db_1 was successful.
Please check '/tmp/silentInstall.log' for more details.

Setup Oracle Base in progress.
Setup Oracle Base successful.
..................................................   88% Done.

As a root user, execute the following script(s):
        1. /u01/app/oracle/product/12.1.0.2/db_1/root.sh

Execute /u01/app/oracle/product/12.1.0.2/db_1/root.sh on the following nodes:
[12cnode3]

..........
Update Inventory in progress.
..................................................   100% Done.

Update Inventory successful.
Successfully Setup Software.
[oracle@12cnode1 addnode]$
Now run the “root.sh” script on the new node as the root user. Here is the execution of the script:
[root@12cnode3 ~]# /u01/app/oracle/product/12.1.0.2/db_1/root.sh
Check /u01/app/oracle/product/12.1.0.2/db_1/install/root_12cnode3.mydomain_2016-03-24_12-01-16.log for the output of root script
[root@12cnode3 ~]# cat /u01/app/oracle/product/12.1.0.2/db_1/install/root_12cnode3.mydomain_2016-03-24_12-01-16.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/12.1.0.2/db_1
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
[root@12cnode3 ~]#
We now need to add a new instance, “srprim3”, to the cluster database “srprim” so that it runs on the new node, 12cnode3. But before doing that, a few instance-specific parameters and objects need to be set up:
- instance_number
- thread
- online redo log groups for the new thread
- undo_tablespace
Let's get the current log groups and their members' details, based on which we can add new groups for the new thread.
SYS@srprim1>select group#,thread#,bytes/1024/1024,members from v$log;

    GROUP#    THREAD# BYTES/1024/1024    MEMBERS
---------- ---------- --------------- ----------
         1          1              50          2
         2          1              50          2
         3          2              50          2
         4          2              50          2

SYS@srprim1>select group#,member from v$logfile order by group#;

    GROUP# MEMBER
---------- -------------------------------------------------------
         1 +FRA/SRPRIM/ONLINELOG/group_1.257.904395571
         1 +DATA/SRPRIM/ONLINELOG/group_1.282.904395571
         2 +FRA/SRPRIM/ONLINELOG/group_2.258.904395577
         2 +DATA/SRPRIM/ONLINELOG/group_2.283.904395575
         3 +DATA/SRPRIM/ONLINELOG/group_3.289.904396373
         3 +FRA/SRPRIM/ONLINELOG/group_3.259.904396375
         4 +DATA/SRPRIM/ONLINELOG/group_4.290.904396377
         4 +FRA/SRPRIM/ONLINELOG/group_4.260.904396379
Let me add new log groups for the new thread 3.
SYS@srprim1>alter database add logfile thread 3 group 5 size 50M;

Database altered.

SYS@srprim1>alter database add logfile thread 3 group 6 size 50M;

Database altered.

SYS@srprim1>select group#,thread#,bytes/1024/1024,members from v$log;

    GROUP#    THREAD# BYTES/1024/1024    MEMBERS
---------- ---------- --------------- ----------
         1          1              50          2
         2          1              50          2
         3          2              50          2
         4          2              50          2
         5          3              50          2
         6          3              50          2

6 rows selected.

SYS@srprim1>select group#,member from v$logfile order by group#;

    GROUP# MEMBER
---------- -------------------------------------------------------
         1 +DATA/SRPRIM/ONLINELOG/group_1.282.904395571
         1 +FRA/SRPRIM/ONLINELOG/group_1.257.904395571
         2 +DATA/SRPRIM/ONLINELOG/group_2.283.904395575
         2 +FRA/SRPRIM/ONLINELOG/group_2.258.904395577
         3 +DATA/SRPRIM/ONLINELOG/group_3.289.904396373
         3 +FRA/SRPRIM/ONLINELOG/group_3.259.904396375
         4 +DATA/SRPRIM/ONLINELOG/group_4.290.904396377
         4 +FRA/SRPRIM/ONLINELOG/group_4.260.904396379
         5 +DATA/SRPRIM/ONLINELOG/group_5.296.907330775
         5 +FRA/SRPRIM/ONLINELOG/group_5.261.907330777
         6 +DATA/SRPRIM/ONLINELOG/group_6.297.907330877

    GROUP# MEMBER
---------- -------------------------------------------------------
         6 +FRA/SRPRIM/ONLINELOG/group_6.262.907330879

12 rows selected.
Get the undo tablespace details to see which undo tablespaces are assigned to the existing instances of the database.
SYS@srprim1>select inst_id,name,value from gv$parameter where name like 'undo_tablespa%';

   INST_ID NAME                                     VALUE
---------- ---------------------------------------- ----------------------------------------
         2 undo_tablespace                          UNDOTBS2
         1 undo_tablespace                          UNDOTBS1
Now let me create a new undo tablespace called “UNDOTBS3” and assign it to the new instance “srprim3”.
SYS@srprim1>create undo tablespace UNDOTBS3 datafile size 400M;

Tablespace created.

SYS@srprim1>alter system set undo_tablespace=UNDOTBS3 scope=spfile sid='srprim3';

System altered.
Assign an instance number to the new instance.
SYS@srprim1>alter system set instance_number=3 SCOPE=SPFILE SID='srprim3';

System altered.
Assign the newly created Redo Log thread to the new instance.
SYS@srprim1>alter system set thread=3 scope=spfile sid='srprim3';

System altered.
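With all three SID-specific settings in place, they can be cross-checked straight from the SPFILE. A small sanity query (v$spparameter lists SPFILE entries per SID):

SYS@srprim1>select sid,name,value from v$spparameter where sid='srprim3' and isspecified='TRUE';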
The current configuration of the “srprim” database is as follows:
[oracle@12cnode1 admin]$ srvctl config database -db srprim
Database unique name: srprim
Database name: srprim
Oracle home: /u01/app/oracle/product/12.1.0.2/db_1
Oracle user: oracle
Spfile: +DATA/SRPRIM/PARAMETERFILE/spfile.291.904396381
Password file: +DATA/SRPRIM/PASSWORD/pwdsrprim.276.904395317
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: FRA,DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: oinstall
OSOPER group: oinstall
Database instances: srprim1,srprim2
Configured nodes: 12cnode1,12cnode2
Database is administrator managed
[oracle@12cnode1 admin]$
It's time to add the instance “srprim3” to the database “srprim” using the srvctl utility.
[oracle@12cnode1 admin]$ srvctl add instance -db srprim -instance srprim3 -node 12cnode3
Now, check the updated database configuration.
[oracle@12cnode1 admin]$ srvctl config database -db srprim
Database unique name: srprim
Database name: srprim
Oracle home: /u01/app/oracle/product/12.1.0.2/db_1
Oracle user: oracle
Spfile: +DATA/SRPRIM/PARAMETERFILE/spfile.291.904396381
Password file: +DATA/SRPRIM/PASSWORD/pwdsrprim.276.904395317
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: FRA,DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: oinstall
OSOPER group: oinstall
Database instances: srprim1,srprim2,srprim3
Configured nodes: 12cnode1,12cnode2,12cnode3
Database is administrator managed
[oracle@12cnode1 admin]$
Since the PFILE of instance srprim1 contains nothing but the location of the SPFILE, copy it to the new node and rename it to match the new instance name (initsrprim3.ora).
[oracle@12cnode1 ~]$ cd $ORACLE_HOME/dbs
[oracle@12cnode1 dbs]$ ls -lrt initsrprim*
-rw-r-----. 1 oracle oinstall 39 Feb 21 13:15 initsrprim1.ora
[oracle@12cnode1 dbs]$ cat initsrprim1.ora
SPFILE='+DATA/srprim/spfilesrprim.ora'
[oracle@12cnode1 dbs]$ scp initsrprim1.ora oracle@12cnode3:/u01/app/oracle/product/12.1.0.2/db_1/dbs/initsrprim3.ora
initsrprim1.ora                              100%   39     0.0KB/s   00:00
[oracle@12cnode1 dbs]$
[oracle@12cnode3 ~]$ cd /u01/app/oracle/product/12.1.0.2/db_1/dbs
[oracle@12cnode3 dbs]$ ls -lrt initsrprim*
-rw-r-----. 1 oracle oinstall 39 Mar 24 12:38 initsrprim3.ora
[oracle@12cnode3 dbs]$ cat initsrprim3.ora
SPFILE='+DATA/srprim/spfilesrprim.ora'
[oracle@12cnode3 dbs]$
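One thing to watch before the first startup: if the SPFILE points audit_file_dest at a node-local path (conventionally $ORACLE_BASE/admin/<db_name>/adump), that directory must also exist on the new node or the instance will fail to start. A hedged precaution, assuming the conventional path:

[oracle@12cnode3 ~]$ mkdir -p /u01/app/oracle/admin/srprim/adump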
Finally, before starting the instance, enable the newly created Redo Log thread 3 that was assigned to instance srprim3.
SYS@srprim1>alter database enable public thread 3;

Database altered.

SYS@srprim>
Start the instance srprim3 using the srvctl utility.
[oracle@12cnode1 ~]$ srvctl status database -db srprim
Instance srprim1 is running on node 12cnode1
Instance srprim2 is running on node 12cnode2
Instance srprim3 is not running on node 12cnode3
[oracle@12cnode1 ~]$ srvctl start instance -db srprim -instance srprim3
[oracle@12cnode1 ~]$ srvctl status database -db srprim
Instance srprim1 is running on node 12cnode1
Instance srprim2 is running on node 12cnode2
Instance srprim3 is running on node 12cnode3
Below is a list of the database instances and the listeners running on the new node, 12cnode3.
[oracle@12cnode3 ~]$ ps -ef | grep pmon
oracle    4266     1  0 10:24 ?        00:00:00 asm_pmon_+ASM3
oracle    7749     1  0 10:28 ?        00:00:00 mdb_pmon_-MGMTDB
oracle   23632     1  0 12:42 ?        00:00:00 ora_pmon_srprim3
oracle   25350  3534  0 12:47 pts/0    00:00:00 grep pmon
[oracle@12cnode3 ~]$ ps -ef | grep tns
root         9     2  0 10:15 ?        00:00:00 [netns]
oracle    5774     1  0 10:26 ?        00:00:00 /u01/app/12.1.0.2/grid/bin/tnslsnr MGMTLSNR -no_crs_notify -inherit
oracle    5853     1  0 10:26 ?        00:00:00 /u01/app/12.1.0.2/grid/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
oracle    5856     1  0 10:26 ?        00:00:00 /u01/app/12.1.0.2/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
oracle   25406  3534  0 12:47 pts/0    00:00:00 grep tns
[oracle@12cnode3 ~]$
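As a final sanity check, the cluster-wide instance view can be queried from any node to confirm that all three instances are open:

SYS@srprim1>select inst_id,instance_name,host_name,status from gv$instance order by inst_id;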
If addnode.sh is run in GUI mode (without the “-silent” option), the instance addition is also handled by the GUI tool, and the manual steps above are not required.