Manually Upgrading RAC database from 10.2.0.1 to 10.2.0.4
After upgrading the CRS from 10.2.0.1 to 10.2.0.4 (link -> https://shivanandarao-oracle.com/2013/03/04/upgrade-crs-from-10-2-0-1-to-10-2-0-4/), let's now move on to upgrading the database from 10.2.0.1 to 10.2.0.4.
Step 1:
Let's stop the database running on both nodes, 10gnode1 and 10gnode2. Once the database is stopped, we shall stop the ASM instances and the nodeapps running on all the nodes.
[oracle@10gnode1 ~]$ srvctl stop database -d srprim
[oracle@10gnode1 ~]$ srvctl stop asm -n 10gnode1
[oracle@10gnode2 ~]$ srvctl stop asm -n 10gnode2
[oracle@10gnode1 ~]$ srvctl stop nodeapps -n 10gnode1
[oracle@10gnode2 ~]$ srvctl stop nodeapps -n 10gnode2
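Before moving on, it is a good idea to confirm that nothing is still running. As an optional check (not part of the original steps), the status commands below can be used; crs_stat -t assumes the CRS home's bin directory is in the PATH.

[oracle@10gnode1 ~]$ srvctl status database -d srprim
[oracle@10gnode1 ~]$ srvctl status asm -n 10gnode1
[oracle@10gnode2 ~]$ srvctl status asm -n 10gnode2
[oracle@10gnode1 ~]$ srvctl status nodeapps -n 10gnode1
[oracle@10gnode1 ~]$ crs_stat -t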
Step 2:
From the unzipped patchset, run the runInstaller executable on 10gnode1.
Select the 10g Database home that needs to be upgraded and verify the home path.
Once the patch has been applied, the installer prompts you to run the root.sh script from the 10g database home as the ROOT user. Run the script on both nodes.
[root@10gnode1 ~]# /u01/app/oracle/product/10.2.0/db_1/root.sh
[root@10gnode2 ~]# /u01/app/oracle/product/10.2.0/db_1/root.sh
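As an optional sanity check (not shown in the original output), the binaries in the upgraded home should now report 10.2.0.4.0; for example:

[oracle@10gnode1 ~]$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
[oracle@10gnode1 ~]$ $ORACLE_HOME/bin/sqlplus -V

The expected output is along the lines of "SQL*Plus: Release 10.2.0.4.0 - Production".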
Step 3:
Now, let's start the nodeapps and the ASM instances on both nodes.
[oracle@10gnode1 ~]$ srvctl start nodeapps -n 10gnode1
[oracle@10gnode2 ~]$ srvctl start nodeapps -n 10gnode2
[oracle@10gnode1 ~]$ srvctl start asm -n 10gnode1
[oracle@10gnode2 ~]$ srvctl start asm -n 10gnode2
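Optionally, confirm that the nodeapps and ASM instances are back online before proceeding with the database upgrade:

[oracle@10gnode1 ~]$ srvctl status nodeapps -n 10gnode1
[oracle@10gnode2 ~]$ srvctl status nodeapps -n 10gnode2
[oracle@10gnode1 ~]$ srvctl status asm -n 10gnode1
[oracle@10gnode2 ~]$ srvctl status asm -n 10gnode2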
Step 4:
Database Upgrade.
On 10gnode1, start the instance “srprim1”. Since we are upgrading the database, we are starting up only one instance.
[oracle@10gnode1 dbs]$ export ORACLE_SID=srprim1
[oracle@10gnode1 dbs]$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
[oracle@10gnode1 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Feb 8 13:02:55 2013

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> startup nomount
ORACLE instance started.

Total System Global Area  926941184 bytes
Fixed Size                  1270748 bytes
Variable Size             247467044 bytes
Database Buffers          675282944 bytes
Redo Buffers                2920448 bytes
SQL>
Step 5:
Create a PFILE from the SPFILE so that the cluster parameters mentioned below can be removed.
SQL> create pfile='/u01/initsrprim1.ora' from spfile;

File created.

SQL> shut immediate
ORA-01507: database not mounted
ORACLE instance shut down.
Step 6:
Now, remove or comment out the cluster parameters mentioned below in the newly created PFILE.
vi /u01/initsrprim1.ora

#REMOVE THESE PARAMETERS#
#*.cluster_database_instances=2
#*.cluster_database=true
#srprim2.instance_number=2
#srprim1.instance_number=1
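A quick way to verify the edits (an optional check, not in the original write-up) is to grep the PFILE for the affected parameters; every matching line should now start with "#":

[oracle@10gnode1 dbs]$ grep -i cluster_database /u01/initsrprim1.ora
[oracle@10gnode1 dbs]$ grep -i instance_number /u01/initsrprim1.ora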
Step 7:
Start up the instance “srprim1” in upgrade mode using the modified PFILE.
[oracle@10gnode1 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Feb 8 13:09:21 2013

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> startup upgrade pfile='/u01/initsrprim1.ora';
ORACLE instance started.

Total System Global Area  926941184 bytes
Fixed Size                  1270748 bytes
Variable Size             247467044 bytes
Database Buffers          675282944 bytes
Redo Buffers                2920448 bytes
Database mounted.
Database opened.
Step 8:
Once the database is started in UPGRADE mode, run the catupgrd.sql script to carry out the upgrade process.
SQL> spool '/u01/10204upgrd.txt';
SQL> @/u01/app/oracle/product/10.2.0/db_1/rdbms/admin/catupgrd.sql
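Once catupgrd.sql completes, it is worth confirming (an optional check, not shown in the original output) that the registry components have been upgraded to 10.2.0.4 and are VALID:

SQL> spool off
SQL> select comp_name, version, status from dba_registry;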
Step 9:
Once the script has executed successfully, shut the database down and open it normally using the above PFILE. Then run the utlrp.sql script from the rdbms/admin directory of the 10.2.0 Oracle database home to recompile the invalid objects.
[oracle@10gnode1 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Feb 8 13:33:08 2013

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Connected to:
Oracle Database 10g Release 10.2.0.4.0 - Production
With the Real Application Clusters option

SQL> select status,instance_name from v$instance;

STATUS       INSTANCE_NAME
------------ ----------------
OPEN MIGRATE srprim1

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 10g Release 10.2.0.4.0 - Production
With the Real Application Clusters option

[oracle@10gnode1 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Feb 8 13:33:53 2013

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> startup pfile=/u01/initsrprim1.ora
ORACLE instance started.

Total System Global Area  926941184 bytes
Fixed Size                  1270748 bytes
Variable Size             247467044 bytes
Database Buffers          675282944 bytes
Redo Buffers                2920448 bytes
Database mounted.
Database opened.
SQL>
SQL> @/u01/app/oracle/product/10.2.0/db_1/rdbms/admin/utlrp.sql

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
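After utlrp.sql finishes (and before the shutdown shown above, or equally after the final startup in Step 10), a query such as the one below can be used as an optional check that no invalid objects remain:

SQL> select count(*) from dba_objects where status = 'INVALID';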
Step 10:
Now, let's start the instance “srprim1” using the global SPFILE rather than the temporary PFILE (“/u01/initsrprim1.ora”).
SQL> startup
ORACLE instance started.

Total System Global Area  926941184 bytes
Fixed Size                  1270748 bytes
Variable Size             247467044 bytes
Database Buffers          675282944 bytes
Redo Buffers                2920448 bytes
Database mounted.
Database opened.
SQL>
SQL> select status,instance_name from v$instance;

STATUS INSTANCE_NAME
------ ---------------------
OPEN   srprim1

SQL> select * from v$version;

BANNER
---------------------------------------------------------------
Oracle Database 10g Release 10.2.0.4.0 - Production
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
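As an additional optional check, confirming that the instance really is running off the SPFILE (rather than a PFILE) is as simple as looking at the spfile parameter, which should point to the shared SPFILE location:

SQL> show parameter spfile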
Step 11:
Now that the SRPRIM database has been upgraded to 10.2.0.4, let's check the status of the instances using SRVCTL.
[oracle@10gnode1 dbs]$ srvctl status database -d srprim
Instance srprim1 is running on node 10gnode1
Instance srprim2 is not running on node 10gnode2
[oracle@10gnode1 dbs]$
In Step 6 through Step 9, you can see that the upgrade was carried out using a temporary PFILE located at “/u01/initsrprim1.ora” and that no changes were made to the original, globally located PFILE or SPFILE.
Hence, after the upgrade completed successfully, there was no need to uncomment the cluster parameters in the temporary PFILE. The instance “srprim1” was started using the global SPFILE, as can be seen in Step 10.
You can see above that the instance “srprim2” is not started. Let's start it using the “srvctl start instance” command.
[oracle@10gnode1 dbs]$ srvctl start instance -d srprim -i srprim2
Let's cross-verify the status of all the instances once again.
[oracle@10gnode1 dbs]$ srvctl status database -d srprim
Instance srprim1 is running on node 10gnode1
Instance srprim2 is running on node 10gnode2
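As an additional optional cross-check from SQL*Plus, GV$INSTANCE should now report both instances as OPEN:

SQL> select inst_id, instance_name, status from gv$instance;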
Here you go !!