In this post, I'll demonstrate the steps involved in upgrading a RAC database from version 11.2.0.1 to 11.2.0.3.
RAC nodes: 10gnode1, 10gnode2
Current Database version: 11.2.0.1 Enterprise Edition
11.2.0.1 Oracle HOME Path: /u01/app/oracle/product/11.2.0/db_1
As per Oracle's recommendation, the database is upgraded using the out-of-place upgrade method, so I'm installing the 11.2.0.3 database software
into a new path (/u01/app/oracle/product/11.2.0/db3).
Below are the steps involved in installing the Oracle Database Software version 11.2.0.3 on a new home.
On the first node (10gnode1):
1. Run the runInstaller from the unzipped 11.2.0.3 database software.
2. Select the Skip software updates option.
3. Select install database software only option.
4. Select all the cluster nodes on which the software needs to be installed.
5. Select the language as English and the Edition as per your requirements. Since I'm upgrading an 11.2.0.1 Enterprise Edition database to
11.2.0.3 Enterprise Edition, I'm selecting Enterprise Edition here.
6. Provide the path where the new installation needs to be installed.
7. Proceed with the pre-requisite checks. If all checks succeed, proceed with the installation; otherwise, fix the errors reported by the
pre-requisite check first and then proceed with the installation.
8. As root, run the root.sh on node1 followed by node2.
[root@10gnode1 ~]# /u01/app/oracle/product/11.2.0/db3/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/db3

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@10gnode1 ~]#
Run the root.sh script on 2nd node (10gnode2):
[root@10gnode2 ~]# /u01/app/oracle/product/11.2.0/db3/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/db3

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
Now let’s perform the pre-upgrade prerequisite checks. Copy the utlu112i.sql script from the newly installed 11.2.0.3 home to a temporary location. Here, I’ve copied the script to location “/u03”.
Execute this pre-upgrade script (utlu112i.sql) from the 11.2.0.1 database.
[oracle@10gnode1 ~]$ sqlplus sys/oracle@srprim1 as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Tue Dec 31 09:49:32 2013
Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> create pfile='/u03/upgrd11201_to_11203/initsrprim1.ora' from spfile;

File created.

SQL> spool '/u03/preupgd.txt'
SQL> @/u01/app/oracle/product/11.2.0/db3/rdbms/admin/utlu112i.sql
Oracle Database 11.2 Pre-Upgrade Information Tool 12-31-2013 09:55:26
Script Version: 11.2.0.3.0 Build: 001
.
*************************************************************
Database:
*************************************************************
--> name:          SRPRIM
--> version:       11.2.0.1.0
--> compatible:    11.2.0.0.0
--> blocksize:     8192
--> platform:      Linux IA (32-bit)
--> timezone file: V11
.
*************************************************************
Tablespaces: [make adjustments in the current environment]
*************************************************************
--> SYSTEM tablespace is adequate for the upgrade.
.... minimum required size: 685 MB
--> SYSAUX tablespace is adequate for the upgrade.
.... minimum required size: 450 MB
--> UNDOTBS1 tablespace is adequate for the upgrade.
.... minimum required size: 400 MB
--> TEMP tablespace is adequate for the upgrade.
.... minimum required size: 60 MB
--> EXAMPLE tablespace is adequate for the upgrade.
.... minimum required size: 78 MB
.
*************************************************************
Flashback: OFF
*************************************************************
*************************************************************
Update Parameters: [Update Oracle Database 11.2 init.ora or spfile]
Note: Pre-upgrade tool was run on a lower version 32-bit database.
*************************************************************
--> If Target Oracle is 32-Bit, refer here for Update Parameters:
-- No update parameter changes are required.
.
--> If Target Oracle is 64-Bit, refer here for Update Parameters:
-- No update parameter changes are required.
.
*************************************************************
Renamed Parameters: [Update Oracle Database 11.2 init.ora or spfile]
*************************************************************
-- No renamed parameters found. No changes are required.
.
*************************************************************
Obsolete/Deprecated Parameters: [Update Oracle Database 11.2 init.ora or spfile]
*************************************************************
-- No obsolete parameters found. No changes are required
.
*************************************************************
Components: [The following database components will be upgraded or installed]
*************************************************************
--> Oracle Catalog Views         [upgrade]  VALID
--> Oracle Packages and Types    [upgrade]  VALID
--> JServer JAVA Virtual Machine [upgrade]  VALID
--> Oracle XDK for Java          [upgrade]  VALID
--> Real Application Clusters    [upgrade]  VALID
--> Oracle Workspace Manager     [upgrade]  VALID
--> OLAP Analytic Workspace      [upgrade]  VALID
--> OLAP Catalog                 [upgrade]  VALID
--> EM Repository                [upgrade]  VALID
--> Oracle Text                  [upgrade]  VALID
--> Oracle XML Database          [upgrade]  VALID
--> Oracle Java Packages         [upgrade]  VALID
--> Oracle interMedia            [upgrade]  VALID
--> Spatial                      [upgrade]  VALID
--> Expression Filter            [upgrade]  VALID
--> Rule Manager                 [upgrade]  VALID
--> Oracle Application Express   [upgrade]  VALID
... APEX will only be upgraded if the version of APEX in
... the target Oracle home is higher than the current one.
--> Oracle OLAP API              [upgrade]  VALID
.
*************************************************************
Miscellaneous Warnings
*************************************************************
WARNING: --> The "cluster_database" parameter is currently "TRUE"
.... and must be set to "FALSE" prior to running a manual upgrade.
WARNING: --> Database is using a timezone file older than version 14.
.... After the release migration, it is recommended that DBMS_DST package
.... be used to upgrade the 11.2.0.1.0 database timezone version
.... to the latest version which comes with the new release.
WARNING: --> Your recycle bin is turned on and currently contains no objects.
.... Because it is REQUIRED that the recycle bin be empty prior to upgrading
.... and your recycle bin is turned on, you may need to execute the command:
        PURGE DBA_RECYCLEBIN
.... prior to executing your upgrade to confirm the recycle bin is empty.
WARNING: --> Database contains schemas with objects dependent on DBMS_LDAP package.
.... Refer to the 11g Upgrade Guide for instructions to configure Network ACLs.
.... USER APEX_030200 has dependent objects.
.
*************************************************************
Recommendations
*************************************************************
Oracle recommends gathering dictionary statistics prior to
upgrading the database.
To gather dictionary statistics execute the following command
while connected as SYSDBA:

    EXECUTE dbms_stats.gather_dictionary_stats;

*************************************************************
Oracle recommends reviewing any defined events prior to upgrading.
To view existing non-default events execute the following commands
while connected AS SYSDBA:

  Events:
    SELECT (translate(value,chr(13)||chr(10),' ')) FROM sys.v$parameter2
    WHERE UPPER(name) ='EVENT' AND isdefault='FALSE'

  Trace Events:
    SELECT (translate(value,chr(13)||chr(10),' ')) from sys.v$parameter2
    WHERE UPPER(name) = '_TRACE_EVENTS' AND isdefault='FALSE'

Changes will need to be made in the init.ora or spfile.
*************************************************************
Perform the recommendations reported by the pre-upgrade (utlu112i.sql) script.
SQL> PURGE DBA_RECYCLEBIN;
SQL> EXECUTE dbms_stats.gather_dictionary_stats;
Also, edit the pfile created earlier in the temporary location to remove the cluster parameters. The following cluster parameters need to be commented out before proceeding with the database upgrade.
cluster_database=true
srprim2.instance_number=2
srprim1.instance_number=1
srprim2.thread=2
srprim1.thread=1
srprim2.undo_tablespace='UNDOTBS2'
Stop the database running on all the nodes.
[oracle@10gnode1 ~]$ srvctl stop database -d srprim
Copy the password file from the 11.2.0.1 RDBMS_HOME/dbs (/u01/app/oracle/product/11.2.0/db_1/dbs) to the newly installed 11.2.0.3
RDBMS_HOME/dbs (/u01/app/oracle/product/11.2.0/db3/dbs) location. Also copy the pfile created earlier in the temporary location (/u03/upgrd11201_to_11203/initsrprim1.ora) to the newly installed 11.2.0.3 RDBMS_HOME/dbs location.
[oracle@10gnode1 ~]$ cp /u01/app/oracle/product/11.2.0/db_1/dbs/orapwsrprim1 /u01/app/oracle/product/11.2.0/db3/dbs/
[oracle@10gnode1 ~]$ cp /u03/upgrd11201_to_11203/initsrprim1.ora /u01/app/oracle/product/11.2.0/db3/dbs/
Now, comment out the cluster parameters in the PFILE copied to the 11.2.0.3 RDBMS_HOME/dbs location.
[oracle@10gnode1 ~]$ vi /u01/app/oracle/product/11.2.0/db3/dbs/initsrprim1.ora
Comment out the following cluster parameters.
#*.cluster_database=true
#srprim2.instance_number=2
#srprim1.instance_number=1
#srprim2.thread=2
#srprim1.thread=1
#srprim2.undo_tablespace='UNDOTBS2'
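If you'd rather script this edit than do it in vi, a sed one-liner can comment the cluster parameters out. This is only a sketch run against a throwaway copy of the pfile with illustrative contents (the parameter names mirror the list above; the db_name line is a stand-in for the rest of the file):

```shell
# Work on a scratch copy so a mistake never touches the real pfile
tmpdir=$(mktemp -d)
cat > "$tmpdir/initsrprim1.ora" <<'EOF'
*.db_name='srprim'
*.cluster_database=true
srprim2.instance_number=2
srprim1.instance_number=1
srprim2.thread=2
srprim1.thread=1
srprim2.undo_tablespace='UNDOTBS2'
EOF

# Prefix each cluster-related parameter line with '#'
sed -i -E 's/^([^#]*(cluster_database|instance_number|\.thread|undo_tablespace))/#\1/' \
    "$tmpdir/initsrprim1.ora"

cat "$tmpdir/initsrprim1.ora"
```

Non-cluster parameters such as db_name are left untouched; review the result with cat before copying it anywhere.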
Start up only one instance (srprim1) in upgrade mode on 10gnode1, using the modified pfile placed in the 11.2.0.3 RDBMS_HOME/dbs location.
[oracle@10gnode1 ~]$ export ORACLE_SID=srprim1
[oracle@10gnode1 ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db3
[oracle@10gnode1 ~]$ cd /u01/app/oracle/product/11.2.0/db3/bin
[oracle@10gnode1 bin]$ ./sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Dec 31 10:26:53 2013
Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup upgrade pfile=$ORACLE_HOME/dbs/initsrprim1.ora
ORACLE instance started.

Total System Global Area 1272213504 bytes
Fixed Size                  1344680 bytes
Variable Size             822086488 bytes
Database Buffers          436207616 bytes
Redo Buffers               12574720 bytes
Database mounted.
Database opened.
Execute the catupgrd.sql script to upgrade the data dictionary.
SQL> spool '/u03/upgrd11201_to_11203/catupgrade.txt'
SQL> @/u01/app/oracle/product/11.2.0/db3/rdbms/admin/catupgrd.sql
Once the above script completes, the database is shut down automatically. Cross-check the spool file for any errors reported during the
execution of the script.
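A quick way to scan the spool file for problems is to grep it for error codes. The sketch below demonstrates this on a dummy spool file with made-up contents (the real file in this walkthrough is /u03/upgrd11201_to_11203/catupgrade.txt):

```shell
tmpdir=$(mktemp -d)
# Dummy spool content standing in for catupgrade.txt; the ORA-01917
# line is fabricated purely to give grep something to find
cat > "$tmpdir/catupgrade.txt" <<'EOF'
Package created.
ORA-01917: user or role 'FOO' does not exist
Synonym created.
EOF

# List any ORA- or PLS- errors logged during the upgrade, with line numbers
grep -nE 'ORA-|PLS-' "$tmpdir/catupgrade.txt"
# prints: 2:ORA-01917: user or role 'FOO' does not exist
```

On the real spool file you would investigate each hit; some ORA- messages inside catupgrd.sql output are expected and documented as ignorable, so check them against the upgrade guide rather than assuming every one is fatal.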
Now start the instance in normal mode using the same PFILE.
[oracle@10gnode1 bin]$ export ORACLE_SID=srprim1
[oracle@10gnode1 bin]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db3
[oracle@10gnode1 bin]$ cd $ORACLE_HOME/bin
[oracle@10gnode1 bin]$ ./sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Dec 31 11:07:25 2013
Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup pfile=$ORACLE_HOME/dbs/initsrprim1.ora
ORACLE instance started.

Total System Global Area 1272213504 bytes
Fixed Size                  1344680 bytes
Variable Size             822086488 bytes
Database Buffers          436207616 bytes
Redo Buffers               12574720 bytes
Database mounted.
Database opened.

SQL> spool '/u03/upgrd11201_to_11203/catuppst.sql'
SQL> @/u01/app/oracle/product/11.2.0/db3/rdbms/admin/catuppst.sql
The catuppst.sql script, executed above, performs the remaining upgrade actions that do not require the database to be in UPGRADE mode.
Check if the components have been upgraded and also the status.
SQL> select comp_id,comp_name,version,status from dba_registry;

COMP_ID  COMP_NAME                           VERSION      STATUS
-------  ----------------------------------  -----------  ------
OWB      OWB                                 11.2.0.1.0   VALID
APEX     Oracle Application Express          3.2.1.00.10  VALID
EM       Oracle Enterprise Manager           11.2.0.3.0   VALID
AMD      OLAP Catalog                        11.2.0.3.0   VALID
SDO      Spatial                             11.2.0.3.0   VALID
ORDIM    Oracle Multimedia                   11.2.0.3.0   VALID
XDB      Oracle XML Database                 11.2.0.3.0   VALID
CONTEXT  Oracle Text                         11.2.0.3.0   VALID
EXF      Oracle Expression Filter            11.2.0.3.0   VALID
RUL      Oracle Rules Manager                11.2.0.3.0   VALID
OWM      Oracle Workspace Manager            11.2.0.3.0   VALID
CATALOG  Oracle Database Catalog Views       11.2.0.3.0   VALID
CATPROC  Oracle Database Packages and Types  11.2.0.3.0   VALID
JAVAVM   JServer JAVA Virtual Machine        11.2.0.3.0   VALID
XML      Oracle XDK                          11.2.0.3.0   VALID
CATJAVA  Oracle Database Java Packages       11.2.0.3.0   VALID
APS      OLAP Analytic Workspace             11.2.0.3.0   VALID
XOQ      Oracle OLAP API                     11.2.0.3.0   VALID
RAC      Oracle Real Application Clusters    11.2.0.3.0   VALID

19 rows selected.
Check for INVALID objects and recompile them by executing the utlrp.sql script.
SQL> select count(*) from dba_objects where status='INVALID';

  COUNT(*)
----------
      6189

SQL> @/u01/app/oracle/product/11.2.0/db3/rdbms/admin/utlrp.sql

SQL> select count(*) from dba_objects where status='INVALID';

  COUNT(*)
----------
         0
Now that the database is upgraded, we need to add the other instance (srprim2) back into the cluster, so the cluster changes made to
the PFILE must be reverted. Shut down the srprim1 instance and uncomment the cluster parameters in the initsrprim1.ora PFILE that were commented out earlier.
SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
*.cluster_database=true
srprim2.instance_number=2
srprim1.instance_number=1
srprim2.thread=2
srprim1.thread=1
srprim2.undo_tablespace='UNDOTBS2'
Also, create a global SPFILE on shared storage so that it can be accessed by all the instances in the cluster.
[oracle@10gnode1 bin]$ ./sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Dec 31 11:25:16 2013
Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> create spfile='+DATA/srprim/spfilesrprim.ora' from pfile;

File created.
Back up the pfile initsrprim1.ora in the 11.2.0.3 DB home by renaming it to initsrprim1.ora.bak. Then create a new pfile "initsrprim1.ora" in the
same location containing only one entry:
SPFILE='+DATA/srprim/spfilesrprim.ora'
We do this so that both instances (srprim1 and srprim2) use the global SPFILE rather than local PFILEs.
[oracle@10gnode1 bin]$ cd $ORACLE_HOME/dbs
[oracle@10gnode1 dbs]$ mv initsrprim1.ora initsrprim1.ora.bak
[oracle@10gnode1 dbs]$ vi initsrprim1.ora
SPFILE='+DATA/srprim/spfilesrprim.ora'
[oracle@10gnode1 dbs]$ ls -lrt
total 20
-rw-r--r-- 1 oracle oinstall 2851 May 15  2009 init.ora
-rw-r----- 1 oracle oinstall 1536 Dec 31 10:09 orapwsrprim1
-rw-rw---- 1 oracle oinstall 1544 Dec 31 10:27 hc_srprim1.dat
-rwxr-xr-x 1 oracle oinstall 1467 Dec 31 11:24 initsrprim1.ora.bak
-rw-r--r-- 1 oracle oinstall   39 Dec 31 11:27 initsrprim1.ora
[oracle@10gnode1 dbs]$
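The manual backup-and-replace above can also be scripted. This sketch works in a scratch directory with a dummy pfile rather than the real $ORACLE_HOME/dbs:

```shell
tmpdir=$(mktemp -d)
printf 'dummy 11.2.0.1 pfile contents\n' > "$tmpdir/initsrprim1.ora"

# Keep the old pfile as a backup, then replace it with a one-line
# pointer to the shared SPFILE stored in ASM
mv "$tmpdir/initsrprim1.ora" "$tmpdir/initsrprim1.ora.bak"
printf "SPFILE='+DATA/srprim/spfilesrprim.ora'\n" > "$tmpdir/initsrprim1.ora"

cat "$tmpdir/initsrprim1.ora"
# prints: SPFILE='+DATA/srprim/spfilesrprim.ora'
```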
Copy the PFILE on 10gnode1 to 10gnode2 and rename it to initsrprim2.ora
[oracle@10gnode1 dbs]$ scp initsrprim1.ora oracle@10gnode2:/u01/app/oracle/product/11.2.0/db3/dbs/initsrprim2.ora
initsrprim1.ora                              100%   39     0.0KB/s   00:00
[oracle@10gnode1 dbs]$
Remove the SRPRIM database configuration from 11.2.0.1 RDBMS HOME and configure it on 11.2.0.3 RDBMS HOME.
[oracle@10gnode1 ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
[oracle@10gnode1 ~]$ cd /u01/app/oracle/product/11.2.0/db_1/bin
[oracle@10gnode1 bin]$ ./srvctl remove database -d srprim
Remove the database srprim? (y/[n]) y

Configuring the SRPRIM database on the 11.2.0.3 RDBMS HOME:

[oracle@10gnode1 ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db3
[oracle@10gnode1 ~]$ cd /u01/app/oracle/product/11.2.0/db3/bin
[oracle@10gnode1 bin]$ ./srvctl add database -d srprim -o /u01/app/oracle/product/11.2.0/db3
[oracle@10gnode1 bin]$ ./srvctl add instance -i srprim1 -n 10gnode1 -d srprim
[oracle@10gnode1 bin]$ ./srvctl add instance -i srprim2 -n 10gnode2 -d srprim
[oracle@10gnode1 bin]$ ./srvctl config database -d srprim
Database unique name: srprim
Database name:
Oracle home: /u01/app/oracle/product/11.2.0/db3
Oracle user: oracle
Spfile:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: srprim
Database instances: srprim1,srprim2
Disk Groups:
Mount point paths:
Services:
Type: RAC
Database is administrator managed
A simpler, and recommended, method is to upgrade the database configuration and its services in one step using the "srvctl upgrade database -d db_unique_name -o
new_ORACLE_HOME" syntax, rather than the remove/add approach shown above. The remove/add method does not upgrade the services; they would have to be recreated manually on the new Oracle home.
[oracle@10gnode1 ~]$ srvctl upgrade database -d srprim -o /u01/app/oracle/product/11.2.0/db3
Run srvctl from the database Home to which the database configuration needs to be upgraded.
Start up both instances and cross-check the functionality.
[oracle@10gnode1 ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/db3
[oracle@10gnode1 ~]$ echo $ORACLE_SID
srprim1
[oracle@10gnode1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Dec 31 11:36:39 2013
Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1272213504 bytes
Fixed Size                  1344680 bytes
Variable Size             822086488 bytes
Database Buffers          436207616 bytes
Redo Buffers               12574720 bytes
Database mounted.
Database opened.

SQL> show parameter spfile

NAME    TYPE    VALUE
------  ------  -----------------------------
spfile  string  +DATA/srprim/spfilesrprim.ora

SQL> show parameter cluster_

NAME                        TYPE     VALUE
--------------------------  -------  -----
cluster_database            boolean  TRUE
cluster_database_instances  integer  2
cluster_interconnects       string
[oracle@10gnode1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Dec 31 11:59:14 2013
Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select * from v$version;

BANNER
-------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
PL/SQL Release 11.2.0.3.0 - Production
CORE    11.2.0.3.0      Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
[oracle@10gnode1 ~]$ srvctl status database -d srprim -v -f
Instance srprim1 is running on node 10gnode1. Instance status: Open.
Instance srprim2 is not running on node 10gnode2
[oracle@10gnode1 ~]$ srvctl start instance -i srprim2 -d srprim
[oracle@10gnode1 ~]$ srvctl status database -d srprim -v -f
Instance srprim1 is running on node 10gnode1. Instance status: Open.
Instance srprim2 is running on node 10gnode2. Instance status: Open.
[oracle@10gnode1 ~]$
Note that there were services running on both the instances before upgrading. We need to start them as well after the database is upgraded.
[oracle@10gnode1 ~]$ srvctl config database -d srprim
Database unique name: srprim
Database name:
Oracle home: /u01/app/oracle/product/11.2.0/db3
Oracle user: oracle
Spfile:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: srprim
Database instances: srprim1,srprim2
Disk Groups: DATA,FRA
Mount point paths:
Services: newsrv
Type: RAC
Database is administrator managed
[oracle@10gnode1 ~]$ srvctl config service -s newsrv -d srprim
Service name: newsrv
Service is enabled
Server pool: srprim_newsrv
Cardinality: 2
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: BASIC
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Preferred instances: srprim1,srprim2
Available instances:
Now that the database has been upgraded, we can remove the 11.2.0.1 DB home. Before removing it, make sure you have copied the
tnsnames.ora file from the 11.2.0.1 DB home to the 11.2.0.3 home. Also, update the environment settings in the profile files to point to the
new 11.2.0.3 ORACLE_HOME path.
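The profile change can be done with sed. This sketch operates on a sample profile in a scratch directory; the variable names in it are typical but your actual ~/.bash_profile may differ:

```shell
tmpdir=$(mktemp -d)
# Sample profile standing in for the oracle user's ~/.bash_profile
cat > "$tmpdir/.bash_profile" <<'EOF'
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH
EOF

# Repoint ORACLE_HOME from the old 11.2.0.1 home to the new 11.2.0.3 home
sed -i 's|product/11.2.0/db_1|product/11.2.0/db3|' "$tmpdir/.bash_profile"

grep '^export ORACLE_HOME' "$tmpdir/.bash_profile"
# prints: export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db3
```

Remember to re-source the profile (or log out and back in) on both nodes so running sessions pick up the new path.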
Here we go !!
COPYRIGHT
© Shivananda Rao P, 2012 to 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Shivananda Rao and http://www.shivanandarao-oracle.com with appropriate and specific direction to the original content.
DISCLAIMER
The views expressed here are my own and do not necessarily reflect the views of any other individual, business entity, or organization. The views expressed by visitors on this blog are theirs solely and may not reflect mine.
Before upgrading the CRS to version 11.2.0.3, prerequisite patch 9413827 needs to be applied.
Once the patch is applied, unset the ORACLE_HOME, ORACLE_BASE and CRS Home path values and start the GI installation of 11.2.0.3 from the unzipped directory.
[oracle@10gnode1 ~]$ unset ORA_CRS_HOME
[oracle@10gnode1 ~]$ unset ORACLE_HOME
[oracle@10gnode1 ~]$ unset ORACLE_BASE
Run the runInstaller from the unzipped 11.2.0.3 grid software on 10gnode1.
1. Select the Skip software updates option and proceed with Next.
2. Select the Upgrade Grid Infrastructure or Oracle Automatic Storage Management option.
3. Select the preferred language.
4. Select the nodes where the GI needs to be upgraded.
5. Select the appropriate OS groups.
6. Select the software location (home path) where the new 11.2.0.3 GI needs to
be installed.
7. Review the prerequisite checks and work on any failed ones. If
there are no failed checks, proceed with the installation.
8. Run the rootupgrade.sh scripts on each node separately as root user.
[root@10gnode1 ~]# /u01/app/11.2.0/grid3/rootupgrade.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid3

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: n
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: n
Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid3/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
ASM upgrade has started on first node.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on '10gnode1'
CRS-2673: Attempting to stop 'ora.crsd' on '10gnode1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on '10gnode1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on '10gnode1'
CRS-2673: Attempting to stop 'ora.srprim.newsrv.svc' on '10gnode1'
CRS-2677: Stop of 'ora.registry.acfs' on '10gnode1' succeeded
CRS-2677: Stop of 'ora.srprim.newsrv.svc' on '10gnode1' succeeded
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on '10gnode1'
CRS-2673: Attempting to stop 'ora.srprim.db' on '10gnode1'
CRS-2677: Stop of 'ora.srprim.db' on '10gnode1' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on '10gnode1'
CRS-2673: Attempting to stop 'ora.FRA.dg' on '10gnode1'
CRS-2677: Stop of 'ora.DATA.dg' on '10gnode1' succeeded
CRS-2677: Stop of 'ora.FRA.dg' on '10gnode1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on '10gnode1'
CRS-2677: Stop of 'ora.asm' on '10gnode1' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on '10gnode1' succeeded
CRS-2673: Attempting to stop 'ora.10gnode1.vip' on '10gnode1'
CRS-2677: Stop of 'ora.10gnode1.vip' on '10gnode1' succeeded
CRS-2673: Attempting to stop 'ora.eons' on '10gnode1'
CRS-2673: Attempting to stop 'ora.ons' on '10gnode1'
CRS-2677: Stop of 'ora.ons' on '10gnode1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on '10gnode1'
CRS-2677: Stop of 'ora.net1.network' on '10gnode1' succeeded
CRS-2677: Stop of 'ora.eons' on '10gnode1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on '10gnode1' has completed
CRS-2677: Stop of 'ora.crsd' on '10gnode1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on '10gnode1'
CRS-2673: Attempting to stop 'ora.asm' on '10gnode1'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on '10gnode1'
CRS-2673: Attempting to stop 'ora.ctssd' on '10gnode1'
CRS-2673: Attempting to stop 'ora.evmd' on '10gnode1'
CRS-2673: Attempting to stop 'ora.mdnsd' on '10gnode1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on '10gnode1'
CRS-2677: Stop of 'ora.cssdmonitor' on '10gnode1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on '10gnode1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on '10gnode1' succeeded
CRS-2677: Stop of 'ora.evmd' on '10gnode1' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on '10gnode1' succeeded
CRS-2677: Stop of 'ora.ctssd' on '10gnode1' succeeded
CRS-2677: Stop of 'ora.asm' on '10gnode1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on '10gnode1'
CRS-2677: Stop of 'ora.cssd' on '10gnode1' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on '10gnode1'
CRS-2673: Attempting to stop 'ora.gipcd' on '10gnode1'
CRS-2677: Stop of 'ora.gipcd' on '10gnode1' succeeded
CRS-2677: Stop of 'ora.diskmon' on '10gnode1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on '10gnode1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
OLR initialization - successful
Replacing Clusterware entries in inittab
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@10gnode1 ~]#
Once the rootupgrade.sh is run on 10gnode1, run the same on 10gnode2. This would upgrade the ASM as well.
[oracle@10gnode1 bin]$ ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[oracle@10gnode1 bin]$ ./crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE 10gnode1
ONLINE ONLINE 10gnode2
ora.FRA.dg
ONLINE ONLINE 10gnode1
ONLINE ONLINE 10gnode2
ora.LISTENER.lsnr
ONLINE ONLINE 10gnode1
ONLINE ONLINE 10gnode2
ora.asm
ONLINE ONLINE 10gnode1 Started
ONLINE ONLINE 10gnode2 Started
ora.eons
ONLINE OFFLINE 10gnode1
ONLINE OFFLINE 10gnode2
ora.gsd
OFFLINE OFFLINE 10gnode1
OFFLINE OFFLINE 10gnode2
ora.net1.network
ONLINE ONLINE 10gnode1
ONLINE ONLINE 10gnode2
ora.ons
ONLINE ONLINE 10gnode1
ONLINE ONLINE 10gnode2
ora.registry.acfs
ONLINE ONLINE 10gnode1
ONLINE ONLINE 10gnode2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.10gnode1.vip
1 ONLINE ONLINE 10gnode1
ora.10gnode2.vip
1 ONLINE ONLINE 10gnode2
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE 10gnode2
ora.oc4j
1 OFFLINE OFFLINE
ora.scan1.vip
1 ONLINE ONLINE 10gnode2
ora.srprim.db
1 ONLINE ONLINE 10gnode1 Open
2 ONLINE ONLINE 10gnode2 Open
ora.srprim.newsrv.svc
1 ONLINE ONLINE 10gnode1
2 ONLINE ONLINE 10gnode2
[oracle@10gnode1 upgrd11201_to_11203]$ cd /u01/app/11.2.0/grid3/bin/
[oracle@10gnode1 bin]$ ./crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]
[oracle@10gnode1 bin]$
[oracle@10gnode1 bin]$ ./crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.3.0]
[oracle@10gnode1 bin]$
[oracle@10gnode1 bin]$ ./crsctl query crs softwareversion
Oracle Clusterware version on node [10gnode1] is [11.2.0.3.0]
[oracle@10gnode1 bin]$ srvctl status database -d srprim -v -f
Instance srprim1 is running on node 10gnode1 with online services newsrv
Instance srprim2 is running on node 10gnode2 with online services newsrv
[oracle@10gnode1 bin]$
[oracle@10gnode1 ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid3 is /u01/app/oracle
[oracle@10gnode1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.3.0 Production on Mon Dec 30 10:40:09 2013
Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
PL/SQL Release 11.2.0.3.0 - Production
CORE    11.2.0.3.0      Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production

SQL> select status,instance_name from v$instance;

STATUS   INSTANCE_NAME
-------- ----------------
STARTED  +ASM1

SQL> exit
[oracle@10gnode1 ~]$ . oraenv
ORACLE_SID = [srprim1] ?
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1 is /u01/app/oracle
[oracle@10gnode1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Mon Dec 30 10:41:42 2013
Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select status,instance_name,host_name from gv$instance;

STATUS  INSTANCE_NAME  HOST_NAME
------  -------------  -----------------
OPEN    srprim1        10gnode1.mydomain
OPEN    srprim2        10gnode2.mydomain

SQL> select * from v$version;

BANNER
------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
Here we go !!
COPYRIGHT
© Shivananda Rao P, 2012 to 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Shivananda Rao and http://www.shivanandarao-oracle.com with appropriate and specific direction to the original content.
DISCLAIMER
The views expressed here are my own and do not necessarily reflect the views of any other individual, business entity, or organization. The views expressed by visitors on this blog are theirs solely and may not reflect mine.
While trying to create an SPFILE for the ASM instance on a diskgroup, Oracle threw ORA-15221: ASM operation requires compatible.asm of 11.2.0.0.0 or higher.
COMPATIBLE.ASM is an attribute associated with every diskgroup. It describes the minimum software version required for an ASM instance to access the diskgroup.
Here, the ASM instance version in use was 11.2.0.1.0, and the error indicated that the compatible.asm attribute must be set to a minimum of 11.2.0.0.0.
[oracle@10gnode1 bin]$ ./sqlplus / as sysasm

SQL*Plus: Release 11.2.0.1.0 Production on Fri Dec 27 11:54:13 2013
Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> create spfile='+DATA/ASM/spfileASM.ora' from pfile;
create spfile='+DATA/ASM/spfileASM.ora' from pfile
*
ERROR at line 1:
ORA-17502: ksfdcre:4 Failed to create file +DATA/ASM/spfileASM.ora
ORA-15221: ASM operation requires compatible.asm of 11.2.0.0.0 or higher

SQL> select group_number, name, compatibility, database_compatibility from v$asm_diskgroup;

GROUP_NUMBER NAME  COMPATIBILITY  DATABASE_COMPATIBILITY
------------ ----- -------------  ----------------------
           1 DATA  10.1.0.0.0     10.1.0.0.0
           2 FRA   10.1.0.0.0     10.1.0.0.0
Here you can see that the compatibility was set to 10.1.0.0.0, which is why the ASM instance was unable to create the spfile on the diskgroup. After changing COMPATIBLE.ASM to 11.2.0.0.0, the issue was resolved.
SQL> alter diskgroup DATA set attribute 'compatible.asm'='11.2.0.0.0';
Diskgroup altered.

SQL> alter diskgroup FRA set attribute 'compatible.asm'='11.2.0.0.0';
Diskgroup altered.
SQL> select group_number, name, compatibility, database_compatibility from v$asm_diskgroup;

GROUP_NUMBER NAME  COMPATIBILITY  DATABASE_COMPATIBILITY
------------ ----- -------------  ----------------------
           1 DATA  11.2.0.0.0     10.1.0.0.0
           2 FRA   11.2.0.0.0     10.1.0.0.0
There is also another attribute called "compatible.rdbms", which determines the minimum COMPATIBLE database initialization parameter setting required for a database instance to access the diskgroup. Here, in my case, the attribute was set to 10.1.0.0.0, but the compatibility of the database instance that had to access the diskgroup was 11.2.0.1.0.
SQL> alter diskgroup DATA set attribute 'compatible.rdbms'='11.2.0.0.0';
Diskgroup altered.

SQL> alter diskgroup FRA set attribute 'compatible.rdbms'='11.2.0.0.0';
Diskgroup altered.
SQL> select group_number, name, compatibility, database_compatibility from v$asm_diskgroup;

GROUP_NUMBER NAME  COMPATIBILITY  DATABASE_COMPATIBILITY
------------ ----- -------------  ----------------------
           1 DATA  11.2.0.0.0     11.2.0.0.0
           2 FRA   11.2.0.0.0     11.2.0.0.0
After setting the "compatible.asm" attribute to 11.2.0.0.0, the ASM instance was able to access the diskgroup and create the spfile on it.
[oracle@10gnode1 bin]$ ./sqlplus / as sysasm

SQL*Plus: Release 11.2.0.1.0 Production on Fri Dec 27 12:05:07 2013
Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> create spfile='+DATA/ASM/spfileASM.ora' from pfile;

File created.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Real Application Clusters and Automatic Storage Management options
[oracle@10gnode1 bin]$ ./asmcmd
ASMCMD> cd DATA/ASM
ASMCMD> ls
spfileASM.ora
Here we go !!
This article describes the steps to create a RAC physical standby database for a RAC primary database.
Both the primary and standby databases are 2-node RAC.
Primary Database : PRIM
DBNAME : PRIM
DB UNIQUE NAME : PRIM
Instances : PRIM1 on node1
PRIM2 on node2
Standby Database : SRPSTB
DBNAME : PRIM
DB UNIQUE NAME : SRPSTB
Instances : SRPSTB1 on drnode1
SRPSTB2 on drnode2
Database version: Oracle 11.2.0.1
Below are the steps to create a RAC standby database for a RAC primary database.
Step 1: Add the following standby parameters on the primary database.
log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=prim'
log_archive_dest_2='service=srpstb valid_for=(online_logfiles,primary_role) db_unique_name=srpstb'
fal_client='prim'    # Oracle Net service name of the primary database
fal_server='srpstb'  # Oracle Net service name of the standby database
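These parameters can also be set on the running primary with ALTER SYSTEM instead of editing a pfile. A sketch, assuming the service names prim and srpstb resolve through tnsnames.ora on all nodes:

```sql
-- run on any primary instance; sid='*' applies the change to both RAC instances
alter system set log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=prim' scope=both sid='*';
alter system set log_archive_dest_2='service=srpstb valid_for=(online_logfiles,primary_role) db_unique_name=srpstb' scope=both sid='*';
alter system set fal_client='prim' scope=both sid='*';
alter system set fal_server='srpstb' scope=both sid='*';
```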
Step 2: Create a pfile of the primary database and copy this file to the standby server.
The contents of the pfile of the primary database are as below. (This is the pfile taken from instance prim1 of the primary DB.)
[oracle@node1 u02]$ cat initprim1.ora
prim1.__db_cache_size=385875968
prim2.__db_cache_size=436207616
prim1.__java_pool_size=16777216
prim2.__java_pool_size=16777216
prim1.__large_pool_size=16777216
prim2.__large_pool_size=16777216
prim1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
prim1.__pga_aggregate_target=520093696
prim2.__pga_aggregate_target=520093696
prim1.__sga_target=754974720
prim2.__sga_target=754974720
prim1.__shared_io_pool_size=0
prim2.__shared_io_pool_size=0
prim1.__shared_pool_size=318767104
prim2.__shared_pool_size=268435456
prim1.__streams_pool_size=0
prim2.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/prim/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='11.2.0.0.0'
*.control_files='+DATA/prim/controlfile/current.260.826037247'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='prim'
*.db_unique_name='prim'
*.db_recovery_file_dest_size=2147483648
*.db_recovery_file_dest='+FRA'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=primXDB)'
*.fal_client='prim'
*.fal_server='srpstb'
prim1.instance_number=1
prim2.instance_number=2
prim1.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node1-vip)(PORT=1521))))'
prim2.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=node2-vip)(PORT=1521))))'
*.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=prim'
*.log_archive_dest_2='service=srpstb valid_for=(online_logfiles,primary_role) db_unique_name=srpstb'
*.memory_target=1264582656
*.open_cursors=300
*.processes=150
*.remote_listener='node-scan:1521'
*.remote_login_passwordfile='exclusive'
prim2.thread=2
prim1.thread=1
prim2.undo_tablespace='UNDOTBS2'
prim1.undo_tablespace='UNDOTBS1'
Step 3. Listener File contents on Primary Database server.
[oracle@node1 admin]$ cat listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))               # line added by Agent
LISTENER_SCAN1=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1))))   # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1=ON   # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON         # line added by Agent
SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=
      (ORACLE_HOME=/u01/app/oracle/product/11.2.0/db1)
      (SID_NAME=prim1)
    )
  )
Step 4: TNS entries on primary database server.
### PRIMARY ENTRIES ###
PRIM =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node-scan)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = prim)
)
)
PRIM1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT=1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = prim)
(INSTANCE_NAME = prim1)
)
)
PRIM2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT=1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = prim)
(INSTANCE_NAME = prim2)
)
)
### Standby TNS ENTRIES ###
SRPSTB =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = drnode-scan)(PORT = 1521) )
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = srpstb)
(UR = A)
)
)
SRPSTB2 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = drnode2-vip)(PORT = 1521) )
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = srpstb)
(UR = A)
(INSTANCE_NAME = srpstb2)
)
)
SRPSTB1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = drnode1-vip)(PORT = 1521) )
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = srpstb)
(UR = A)
(INSTANCE_NAME = srpstb1)
)
)
Step 5: Copy the above TNS entries to standby database server node drnode1.
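Assuming passwordless SSH between the sites (an assumption; any file-transfer method works, and the standby home path shown is taken from the listener.ora later in this post), the copy could look like:

```
[oracle@node1 admin]$ scp $ORACLE_HOME/network/admin/tnsnames.ora oracle@drnode1:/u01/app/oracle/product/11.2.0.1/db1/network/admin/
```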
Step 6: Copy the password file of any instance (“orapwprim1” or “orapwprim2”) of primary database located at “$ORACLE_HOME/dbs” location to
the standby database server node drnode1 location “$ORACLE_HOME/dbs” and rename it to “orapwsrpstb1”.
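The copy and rename can be done in one step with scp, for example (a sketch; paths assumed from the steps above):

```
[oracle@node1 dbs]$ scp $ORACLE_HOME/dbs/orapwprim1 oracle@drnode1:/u01/app/oracle/product/11.2.0.1/db1/dbs/orapwsrpstb1
```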
Step 7: On the standby database server, prepare the pfile for the srpstb1 instance as initsrpstb1.ora .
Contents of initsrpstb1.ora File:
[oracle@drnode1 dbs]$ cat initsrpstb1.ora
srpstb1.__db_cache_size=419430400
srpstb1.__java_pool_size=16777216
srpstb1.__large_pool_size=16777216
srpstb1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
srpstb1.__pga_aggregate_target=520093696
srpstb1.__sga_target=754974720
srpstb1.__shared_io_pool_size=0
srpstb1.__shared_pool_size=285212672
srpstb1.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/srpstb/adump'
*.audit_trail='db'
*.cluster_database=false
*.compatible='11.2.0.0.0'
*.control_files='+DATA','+FRA'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_name='prim'
*.db_unique_name='srpstb'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=srpstbXDB)'
*.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=srpstb'
*.log_archive_dest_2='service=prim valid_for=(online_logfiles,primary_role) db_unique_name=prim'
*.db_recovery_file_dest='+FRA'
*.db_recovery_file_dest_size=2G
*.memory_target=1264582656
*.open_cursors=300
*.processes=150
*.remote_listener='drnode-scan:1521'
*.remote_login_passwordfile='EXCLUSIVE'
srpstb1.undo_tablespace='UNDOTBS1'
srpstb1.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=drnode1-vip)(PORT=1521))))'
srpstb1.fal_client='srpstb1'
*.fal_server='prim'
Step 8: Set up the listener.ora file on the standby database server drnode1.
Contents of Listener.ora file on DRNODE1:
[oracle@drnode1 admin]$ cat listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))               # line added by Agent
LISTENER_SCAN1=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1))))   # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1=ON   # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON         # line added by Agent
SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=
      (ORACLE_HOME=/u01/app/oracle/product/11.2.0.1/db1)
      (SID_NAME=srpstb1)
    )
  )
Step 9: Nomount the standby instance srpstb1 using the above pfile “initsrpstb1.ora”.
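Starting the instance in NOMOUNT could look like this (a sketch; the pfile path is assumed from the steps above):

```
[oracle@drnode1 ~]$ export ORACLE_SID=srpstb1
[oracle@drnode1 ~]$ sqlplus / as sysdba
SQL> startup nomount pfile='/u01/app/oracle/product/11.2.0.1/db1/dbs/initsrpstb1.ora';
```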
Step 10: Now connect through RMAN to the primary database as the target and to the standby instance as the auxiliary. Make sure that the primary database is open.
[oracle@drnode1 ~]$ rman target sys/oracle@prim auxiliary sys/oracle@srpstb1
Recovery Manager: Release 11.2.0.1.0 - Production on Fri Oct 25 19:54:07 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
connected to target database: PRIM (DBID=4118338878)
connected to auxiliary database: PRIM (not mounted)
RMAN> duplicate target database for standby from active database;
Starting Duplicate Db at 25-OCT-13
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=28 device type=DISK
contents of Memory Script:
{
backup as copy reuse
targetfile '/u01/app/oracle/product/11.2.0/db1/dbs/orapwprim2' auxiliary format
'/u01/app/oracle/product/11.2.0.1/db1/dbs/orapwsrpstb1' ;
}
executing Memory Script
Starting backup at 25-OCT-13
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=37 instance=prim2 device type=DISK
Finished backup at 25-OCT-13
contents of Memory Script:
{
backup as copy current controlfile for standby auxiliary format '+DATA/srpstb/controlfile/current.269.829771129';
restore clone controlfile to '+FRA/srpstb/controlfile/current.261.829771129' from
'+DATA/srpstb/controlfile/current.269.829771129';
sql clone "create spfile from memory";
shutdown clone immediate;
startup clone nomount;
sql clone "alter system set control_files =
''+DATA/srpstb/controlfile/current.269.829771129'', ''+FRA/srpstb/controlfile/current.261.829771129'' comment=
''Set by RMAN'' scope=spfile";
shutdown clone immediate;
startup clone nomount;
}
executing Memory Script
Starting backup at 25-OCT-13
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
copying standby control file
output file name=/u01/app/oracle/product/11.2.0/db1/dbs/snapcf_prim2.f tag=TAG20131025T195848 RECID=2 STAMP=829771151
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:46
Finished backup at 25-OCT-13
Starting restore at 25-OCT-13
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: copied control file copy
Finished restore at 25-OCT-13
sql statement: create spfile from memory
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area 1269366784 bytes
Fixed Size 2212976 bytes
Variable Size 855640976 bytes
Database Buffers 402653184 bytes
Redo Buffers 8859648 bytes
sql statement: alter system set control_files = ''+DATA/srpstb/controlfile/current.269.829771129'',
''+FRA/srpstb/controlfile/current.261.829771129'' comment= ''Set by RMAN'' scope=spfile
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area 1269366784 bytes
Fixed Size 2212976 bytes
Variable Size 855640976 bytes
Database Buffers 402653184 bytes
Redo Buffers 8859648 bytes
contents of Memory Script:
{
sql clone 'alter database mount standby database';
}
executing Memory Script
sql statement: alter database mount standby database
contents of Memory Script:
{
set newname for clone tempfile 1 to new;
switch clone tempfile all;
set newname for clone datafile 1 to new;
set newname for clone datafile 2 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 4 to new;
set newname for clone datafile 5 to new;
set newname for clone datafile 6 to new;
backup as copy reuse
datafile 1 auxiliary format new
datafile 2 auxiliary format new
datafile 3 auxiliary format new
datafile 4 auxiliary format new
datafile 5 auxiliary format new
datafile 6 auxiliary format new
;
sql 'alter system archive log current';
}
executing Memory Script
executing command: SET NEWNAME
renamed tempfile 1 to +DATA in control file
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
Starting backup at 25-OCT-13
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=+DATA/prim/datafile/system.256.826037011
output file name=+DATA/srpstb/datafile/system.266.829771513 tag=TAG20131025T200453
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:45
channel ORA_DISK_1: starting datafile copy
input datafile file number=00002 name=+DATA/prim/datafile/sysaux.257.826037013
output file name=+DATA/srpstb/datafile/sysaux.277.829771637 tag=TAG20131025T200453
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:55
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=+DATA/prim/datafile/undotbs1.258.826037015
output file name=+DATA/srpstb/datafile/undotbs1.276.829771735 tag=TAG20131025T200453
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=+DATA/prim/datafile/example.264.826037271
output file name=+DATA/srpstb/datafile/example.275.829771771 tag=TAG20131025T200453
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting datafile copy
input datafile file number=00006 name=+DATA/prim/datafile/undotbs2.265.826037561
output file name=+DATA/srpstb/datafile/undotbs2.274.829771805 tag=TAG20131025T200453
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=+DATA/prim/datafile/users.259.826037015
output file name=+DATA/srpstb/datafile/users.273.829771831 tag=TAG20131025T200453
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
Finished backup at 25-OCT-13
sql statement: alter system archive log current
contents of Memory Script:
{
switch clone datafile all;
}
executing Memory Script
datafile 1 switched to datafile copy
input datafile copy RECID=2 STAMP=829771843 file name=+DATA/srpstb/datafile/system.266.829771513
datafile 2 switched to datafile copy
input datafile copy RECID=3 STAMP=829771843 file name=+DATA/srpstb/datafile/sysaux.277.829771637
datafile 3 switched to datafile copy
input datafile copy RECID=4 STAMP=829771843 file name=+DATA/srpstb/datafile/undotbs1.276.829771735
datafile 4 switched to datafile copy
input datafile copy RECID=5 STAMP=829771843 file name=+DATA/srpstb/datafile/users.273.829771831
datafile 5 switched to datafile copy
input datafile copy RECID=6 STAMP=829771843 file name=+DATA/srpstb/datafile/example.275.829771771
datafile 6 switched to datafile copy
input datafile copy RECID=7 STAMP=829771843 file name=+DATA/srpstb/datafile/undotbs2.274.829771805
Finished Duplicate Db at 25-OCT-13
Step 11: Once the duplication is completed, exit the RMAN prompt and connect to the standby database through SQL*Plus.
sqlplus sys/<password>@srpstb1 as sysdba
Check the status of the standby database, making sure it is in the MOUNT stage.
SQL> select status,instance_name,database_role from v$instance,v$database;
Step 12: Now start the managed recovery process on the standby database.
SQL> alter database recover managed standby database disconnect from session;
Step 13: Now check whether the managed recovery process (MRP) has started on the standby database.
SQL> select process,status,sequence#,thread# from v$managed_standby;

PROCESS   STATUS         SEQUENCE#   THREAD#
-------   ------------   ---------   -------
ARCH      CONNECTED              0         0
ARCH      CONNECTED              0         0
ARCH      CONNECTED              0         0
ARCH      CONNECTED              0         0
MRP0      WAIT_FOR_LOG          65         2
RFS       IDLE                   0         0
RFS       IDLE                   0         0
RFS       IDLE                   0         0
RFS       IDLE                   0         0
RFS       IDLE                  75         1
RFS       IDLE                   0         0
RFS       IDLE                   0         0
RFS       IDLE                   0         0
RFS       IDLE                   0         0

14 rows selected.
Here, MRP has been started and is waiting for log sequence# 65 of thread 2. If MRP were not running, the above query would not show MRP0 under the PROCESS column.
Step 14: On the primary database, perform a few log switches and check if the logs are applied to the standby database.
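A couple of log switches can be forced from any primary instance like this. On RAC, ALTER SYSTEM ARCHIVE LOG CURRENT archives the current online log of every enabled thread, which is why it is preferred over a plain SWITCH LOGFILE here:

```sql
SQL> alter system archive log current;
SQL> alter system archive log current;
```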
Primary Database Archive Sequence:
sqlplus sys/<password>@prim1 as sysdba

SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

THREAD#  MAX(SEQUENCE#)
-------  --------------
      1              76
      2              67
Standby database SRPSTB1 archive sequence being applied:
sqlplus sys/<password>@srpstb1 as sysdba

SQL> select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;

THREAD#  MAX(SEQUENCE#)
-------  --------------
      1              75
      2              67
Here, the maximum sequence# generated on the primary database is 76 for thread 1 (prim1) and 67 for thread 2 (prim2). The maximum sequence# applied on the standby database is 75 for thread 1 and 67 for thread 2, which means the standby database is in sync with the primary database.
Now let's add the second instance, srpstb2, to the standby database srpstb.
Step 15: Create a pfile from the standby instance srpstb1 and add the following cluster parameters to it.
cluster_database=TRUE
srpstb1.undo_tablespace='UNDOTBS1'
srpstb2.undo_tablespace='UNDOTBS2'
srpstb2.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=drnode2-vip)(PORT=1521))))'
srpstb1.instance_number=1
srpstb2.instance_number=2
srpstb1.thread=1
srpstb2.thread=2
srpstb1.fal_client='srpstb1'
srpstb2.fal_client='srpstb2'
The new pfile required for the standby database cluster would look as below.
[oracle@drnode1 dbs]$ cat initsrpstb1.ora
srpstb1.__db_cache_size=402653184
srpstb1.__java_pool_size=16777216
srpstb1.__large_pool_size=16777216
srpstb1.__pga_aggregate_target=520093696
srpstb1.__sga_target=754974720
srpstb1.__shared_io_pool_size=0
srpstb1.__shared_pool_size=285212672
srpstb1.__streams_pool_size=0
srpstb2.__db_cache_size=402653184
srpstb2.__java_pool_size=16777216
srpstb2.__large_pool_size=16777216
srpstb2.__pga_aggregate_target=520093696
srpstb2.__sga_target=754974720
srpstb2.__shared_io_pool_size=0
srpstb2.__shared_pool_size=285212672
srpstb2.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/srpstb/adump'
*.audit_trail='DB'
*.cluster_database=TRUE
*.compatible='11.2.0.0.0'
*.control_files='+DATA/srpstb/controlfile/current.269.829771129','+FRA/srpstb/controlfile/current.261.829771129'#Set by RMAN
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_name='prim'
*.db_recovery_file_dest='+FRA'
*.db_recovery_file_dest_size=2G
*.db_unique_name='srpstb'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=srpstbXDB)'
*.fal_client='srpstb1'
*.fal_server='prim'
srpstb1.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=drnode1-vip)(PORT=1521))))'
srpstb2.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=drnode2-vip)(PORT=1521))))'
*.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=srpstb'
*.log_archive_dest_2='service=prim valid_for=(online_logfiles,primary_role) db_unique_name=prim'
*.log_buffer=8491008# log buffer update
*.memory_target=1216M
*.open_cursors=300
*.processes=150
*.query_rewrite_enabled='TRUE'
*.remote_login_passwordfile='EXCLUSIVE'
srpstb1.undo_tablespace='UNDOTBS1'
srpstb2.undo_tablespace='UNDOTBS2'
srpstb1.instance_number=1
srpstb2.instance_number=2
srpstb1.thread=1
srpstb2.thread=2
srpstb1.fal_client='srpstb1'
srpstb2.fal_client='srpstb2'
*.fal_server='prim'
Step 16: Shut down the instance srpstb1, mount it using the new pfile above, and create an spfile in a shared location (an ASM diskgroup), as it is shared by both instances srpstb1 and srpstb2.
SQL> create spfile='+DATA/SRPSTB/spfilesrpstb.ora' from pfile;

File created.
Step 17: Create a new pfile initsrpstb1.ora on drnode1 at $ORACLE_HOME/dbs with just one entry pointing to the spfile location.
[oracle@drnode1 dbs]$ cat initsrpstb1.ora
SPFILE='+DATA/SRPSTB/spfilesrpstb.ora'
Step 18: Copy the password file of srpstb1 (orapwsrpstb1) to the "$ORACLE_HOME/dbs" location on drnode2 and rename it to orapwsrpstb2.
Step 19: Copy the newly created pfile (initsrpstb1.ora) to the "$ORACLE_HOME/dbs" location on drnode2 and rename it to initsrpstb2.ora.
Step 20: Mount both the srpstb1 and srpstb2 instances.
Step 21: Start MRP on any one instance using the below statement.
SQL> alter database recover managed standby database disconnect from session;
Step 22: Check the maximum archive sequence generated on the primary and compare it with the maximum archive sequence applied on the standby.
Primary:
SQL> select thread#,max(sequence#) from v$archived_log group by thread#;
Standby:
SQL> select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;
Step 23: The next step is to add the standby database SRPSTB to the cluster.
[oracle@drnode1 ~]$ srvctl add database -d srpstb -o /u01/app/oracle/product/11.2.0.1/db1 -r PHYSICAL_STANDBY -s MOUNT
Step 24: Now add the instance entries for the standby database.
[oracle@drnode1 ~]$ srvctl add instance -d srpstb -i srpstb1 -n drnode1
[oracle@drnode1 ~]$ srvctl add instance -d srpstb -i srpstb2 -n drnode2
Step 25: Start the standby database and check its status using srvctl.
[oracle@drnode1 ~]$ srvctl start database -d srpstb
[oracle@drnode1 ~]$ srvctl status database -d srpstb
Instance srpstb1 is running on node drnode1
Instance srpstb2 is running on node drnode2
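The registration can also be cross-checked with srvctl config, which should report the role and start option set earlier (a sketch; the exact output format varies slightly between versions):

```
[oracle@drnode1 ~]$ srvctl config database -d srpstb
```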
Here we go !!
There were certain blocks corrupted in a datafile, which prevented me from querying the tables associated with the corrupted blocks.
SQL> select * from update7sp7.test1;
select * from update7sp7.test1
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 8, block # 138)
ORA-01110: data file 8: '+DATA_NEW/srpstb/datafile/myts.262.819379005'
SQL> select * from update7sp7.test2;
select * from update7sp7.test2
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 8, block # 146)
ORA-01110: data file 8: '+DATA_NEW/srpstb/datafile/myts.262.819379005'
The following errors were reported in alert log file:
Errors in file /u01/app/oracle/diag/rdbms/srpstb/srpstb/trace/srpstb_m000_3839.trc (incident=43378):
ORA-01578: ORACLE data block corrupted (file # 8, block # 138)
ORA-01110: data file 8: '+DATA_NEW/srpstb/datafile/myts.262.819379005'
Errors in file /u01/app/oracle/diag/rdbms/srpstb/srpstb/trace/srpstb_m000_3839.trc (incident=43379):
ORA-01578: ORACLE data block corrupted (file # 8, block # 146)
ORA-01110: data file 8: '+DATA_NEW/srpstb/datafile/myts.262.819379005'
Checker run found 3 new persistent data failures
Fri Oct 04 18:29:25 2013
Now, let me check the same using the DBV utility.
[oracle@ora1-2 trace]$ dbv file='+DATA_NEW/srpstb/datafile/myts.262.819379005' userid=sys/oracle

DBVERIFY: Release 11.2.0.3.0 - Production on Fri Oct 4 18:38:34 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

DBVERIFY - Verification starting : FILE = +DATA_NEW/srpstb/datafile/myts.262.819379005
DBV-00111: OCI failure (4157) (ORA-00600: internal error code, arguments: [kudbvslfw1], [], [], [], [], [], [], [], [], [], [], []
ORA-06512: at "SYS.DBMS_DBVERIFY", line 24
ORA-06512: at line 1
Oops!! DBV gave an internal error. Upon checking with Oracle Support, I learned that this is a known bug:
Bug 13811172 - DBMS_DBVERIFY may fail with DBV-111: OCI failure (4157) ORA-600 [kudbvslfw1]
So, let me check for the corruption using RMAN “validate database”.
RMAN> validate database;

Starting validate at 04-OCT-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=50 device type=DISK
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
input datafile file number=00001 name=+DATA_NEW/srpstb/datafile/system.261.819379005
input datafile file number=00002 name=+DATA_NEW/srpstb/datafile/sysaux.257.819378991
input datafile file number=00005 name=+DATA_NEW/srpstb/datafile/example.258.819378991
input datafile file number=00008 name=+DATA_NEW/srpstb/datafile/myts.262.819379005
input datafile file number=00003 name=+DATA_NEW/srpstb/datafile/undotbs1.263.819379005
input datafile file number=00006 name=+DATA_NEW/srpstb/datafile/myts.264.819379005
input datafile file number=00007 name=+DATA_NEW/srpstb/datafile/tbs.259.819378991
input datafile file number=00004 name=+DATA_NEW/srpstb/datafile/users.260.819378991
channel ORA_DISK_1: validation complete, elapsed time: 00:00:35
List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
1    OK     0              14671        92203           2505790
  File Name: +DATA_NEW/srpstb/datafile/system.261.819379005
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              60610
  Index      0              12734
  Other      0              4145

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
2    OK     0              18749        74365           2505788
  File Name: +DATA_NEW/srpstb/datafile/sysaux.257.819378991
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              12822
  Index      0              9345
  Other      0              33324

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
3    OK     0              1            6403            2505790
  File Name: +DATA_NEW/srpstb/datafile/undotbs1.263.819379005
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              0
  Index      0              0
  Other      0              6399

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
4    OK     0              18           669             1786990
  File Name: +DATA_NEW/srpstb/datafile/users.260.819378991
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              106
  Index      0              39
  Other      0              477

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
5    OK     0              31391        44242           2397708
  File Name: +DATA_NEW/srpstb/datafile/example.258.819378991
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              4509
  Index      0              1148
  Other      0              7192

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
6    OK     0              1            6401            1370949
  File Name: +DATA_NEW/srpstb/datafile/myts.264.819379005
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              0
  Index      0              0
  Other      0              6399

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
7    OK     0              1            1281            1348023
  File Name: +DATA_NEW/srpstb/datafile/tbs.259.819378991
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              0
  Index      0              0
  Other      0              1279

File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
8    OK     3              1            12809           2396687
  File Name: +DATA_NEW/srpstb/datafile/myts.262.819379005
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              26
  Index      0              9
  Other      0              12764

channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
including current control file for validation
including current SPFILE in backup set
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
List of Control File and SPFILE
===============================
File Type    Status Blocks Failing Blocks Examined
------------ ------ -------------- ---------------
SPFILE       OK     0              2
Control File OK     0              642
Finished validate at 04-OCT-13
We can also run the command “validate datafile 8;” in RMAN to check for corruption when we know the exact datafile ID.
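As a quick sketch (datafile number 8 is just the one from this example; adjust it to your own file ID), the single-datafile checks could look like this. Note that plain VALIDATE checks for physical corruption only; the CHECK LOGICAL option extends it to logical corruption:

```sql
-- Validate a single datafile for physical corruption
RMAN> validate datafile 8;

-- Include logical corruption checks as well
RMAN> validate check logical datafile 8;
```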
From the above output, it is clear that datafile ID 8 (+DATA_NEW/srpstb/datafile/myts.262.819379005) has 3 corrupted blocks. Let me get the corrupted block numbers from the v$database_block_corruption view.
SQL> select * from v$database_block_corruption;

FILE# BLOCK# BLOCKS CORRUPTION_CHANGE# CORRUPTIO
----- ------ ------ ------------------ ---------
    8    137      2                  0 CORRUPT
    8    146      1                  0 CORRUPT
From the above output, it is clear that 2 blocks starting from block# 137 (blocks 137 and 138) of datafile 8 are corrupted, and that block 146 of datafile 8 is corrupted as well.
Let’s recover them using RMAN (Block Recovery).
[oracle@ora1-2 trace]$ rman target /

Recovery Manager: Release 11.2.0.3.0 - Production on Fri Oct 4 18:41:21 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: SRPRIM (DBID=216679430)

RMAN> recover datafile 8 block 137 to 138,146;

Starting recover at 04-OCT-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=59 device type=DISK

searching flashback logs for block images until SCN 2501389
finished flashback log search, restored 0 blocks

channel ORA_DISK_1: restoring block(s) from datafile copy /u02/bkp/myts_8
failover to previous backup

channel ORA_DISK_1: restoring block(s)
channel ORA_DISK_1: specifying block(s) to restore from backupset
restoring blocks of datafile 00008
channel ORA_DISK_1: reading from backup piece /u02/bkp/srprim_inc0_37olibpj_1_1.bak
channel ORA_DISK_1: piece handle=/u02/bkp/srprim_inc0_37olibpj_1_1.bak tag=TAG20131004T120611
channel ORA_DISK_1: restored block(s) from backup piece 1
channel ORA_DISK_1: block restore complete, elapsed time: 00:00:01

starting media recovery
media recovery complete, elapsed time: 00:00:03

Finished recover at 04-OCT-13
Recovery has been completed successfully. Let’s verify the same from the view “V$DATABASE_BLOCK_CORRUPTION”.
[oracle@ora1-2 trace]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Fri Oct 4 20:01:48 2013
Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

SQL> select status,instance_name,database_role from v$instance,v$database;

STATUS INSTANCE_NAME DATABASE_ROLE
------ ------------- -------------
OPEN   srpstb        PRIMARY

SQL> select * from v$database_block_corruption;

no rows selected
No corruption is reported by the view, and we should now be able to query the tables that were queried initially.
SQL> select * from update7sp7.test1;

CODE NAME
---- ------
100  rama
101  raja
102  soma
103  ravi
104  shivu
105  adsf
106  ijasdf

7 rows selected.

SQL> select * from update7sp7.test2;

      CODE
----------
       200
       201
       202
       203
       204
Here we go !!
COPYRIGHT
© Shivananda Rao P, 2012 to 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Shivananda Rao and http://www.shivanandarao-oracle.com with appropriate and specific direction to the original content.
DISCLAIMER
The views expressed here are my own and do not necessarily reflect the views of any other individual, business entity, or organization. The views expressed by visitors on this blog are theirs solely and may not reflect mine.
Here is an example on creating Data Guard Broker configuration using DGMGRL.
Primary and standby database server details:
Primary Server: ora1-1.mydomain Database: SRPRIM
Standby Server: ora1-2.mydomain Database: SRPSTB
Primary Database Details:
SQL> select status,instance_name,database_role,protection_mode from v$database,v$instance;

STATUS INSTANCE_NAME DATABASE_ROLE PROTECTION_MODE
------ ------------- ------------- --------------------
OPEN   srprim        PRIMARY       MAXIMUM AVAILABILITY

SQL> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)
--------------
           350
Standby Database Details:
SQL> select status,instance_name,database_role,protection_mode from v$database,v$instance;

STATUS  INSTANCE_NAME DATABASE_ROLE    PROTECTION_MODE
------- ------------- ---------------- --------------------
MOUNTED srpstb        PHYSICAL STANDBY MAXIMUM AVAILABILITY

SQL> select max(sequence#) from v$archived_log where applied='YES';

MAX(SEQUENCE#)
--------------
           350
By default, the DG Broker configuration files are located in “$ORACLE_HOME/dbs”. If you need to change the location, you can do so using “alter system set dg_broker_config_file1=<required location>”, and similarly for “dg_broker_config_file2”.
Remember that, on a RAC system, the DG Broker configuration files need to be placed in a shared location that can be accessed by all the nodes.
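For example, the relocation could be sketched as below (the ASM paths are illustrative assumptions, not the paths used in this setup; pick a shared disk group on RAC):

```sql
-- Relocate both broker configuration files to shared storage (illustrative paths)
SQL> alter system set dg_broker_config_file1='+DATA/srprim/dr1srprim.dat' scope=both sid='*';
SQL> alter system set dg_broker_config_file2='+DATA/srprim/dr2srprim.dat' scope=both sid='*';
```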
On both the primary and standby databases, enable the Data Guard broker process by setting the parameter “DG_BROKER_START” to TRUE.
SQL> alter system set dg_broker_start=true;
Add a static entry for the DGMGRL in the listener.ora file on both the primary and standby servers.
On Primary:
(SID_DESC=
  (GLOBAL_DBNAME=srprim_dgmgrl)
  (ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/db1)
  (SID_NAME=srprim)
)
On Standby:
(SID_DESC=
  (SID_NAME=srpstb)
  (GLOBAL_DBNAME=srpstb_dgmgrl)
  (ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/db1)
)
Listener.ora file contents on primary server:
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = ora1-1.mydomain)(PORT = 1521))
    )
  )

SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=
      (GLOBAL_DBNAME=srprim_dgmgrl)
      (ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/db1)
      (SID_NAME=srprim)
    )
    (SID_DESC=
      (ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/db1)
      (SID_NAME=krat)
    )
  )

ADR_BASE_LISTENER = /u01/app/oracle
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON   # line added by Agent
Listener.ora file contents on standby server:
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1522))
      (ADDRESS = (PROTOCOL = TCP)(HOST = ora1-2.mydomain)(PORT = 1522))
    )
  )

SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=
      (SID_NAME=srpstb)
      (GLOBAL_DBNAME=srpstb_dgmgrl)
      (ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/db1)
    )
    (SID_DESC=
      (SID_NAME=srpstb)
      (ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/db1)
    )
  )

ADR_BASE_LISTENER = /u01/app/oracle
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON   # line added by Agent
On the primary server, create the Dataguard Broker configuration.
[oracle@ora1-1 ~]$ dgmgrl
DGMGRL for Linux: Version 11.2.0.3.0 - 64bit Production
Copyright (c) 2000, 2009, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/oracle@srprim
Connected.
DGMGRL> create configuration 'dgtest'
> as primary database is 'srprim'
> connect identifier is srprim;
Configuration "dgtest" created with primary database "srprim"
DGMGRL> show configuration;

Configuration - dgtest

  Protection Mode: MaxAvailability
  Databases:
    srprim - Primary database

Fast-Start Failover: DISABLED

Configuration Status:
DISABLED
Add the standby database “SRPSTB” to the configuration created above.
DGMGRL> add database 'srpstb'
> as connect identifier is srpstb
> maintained as physical;
Database "srpstb" added
DGMGRL> show configuration;

Configuration - dgtest

  Protection Mode: MaxAvailability
  Databases:
    srprim - Primary database
    srpstb - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
DISABLED
The configuration that has been added now needs to be enabled.
DGMGRL> enable configuration;
Enabled.
DGMGRL> show configuration

Configuration - dgtest

  Protection Mode: MaxAvailability
  Databases:
    srprim - Primary database
    srpstb - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS
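Once the configuration is enabled, its health can be inspected in more detail. A sketch of the usual follow-up checks (the database name 'srpstb' is the one from this configuration):

```sql
-- Detailed view of the configuration and of an individual database,
-- including transport/apply status reported by the broker
DGMGRL> show configuration verbose;
DGMGRL> show database verbose 'srpstb';
```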
Here we go !!
Primary Database : SRPRIM
Standby Database : SRPSTB
Database version : 11gR2
The database is running under Maximum Performance mode.
SQL> select status,instance_name,database_role,protection_mode from v$database,v$instance;

STATUS INSTANCE_NAME DATABASE_ROLE PROTECTION_MODE
------ ------------- ------------- -------------------
OPEN   srprim        PRIMARY       MAXIMUM PERFORMANCE
In order to change the protection mode to either MAXIMUM AVAILABILITY or MAXIMUM PROTECTION, we need to have standby redo logs configured on the standby database. Also, the redo shipment parameter (log_archive_dest_2) on the primary database should be configured to use synchronous (“SYNC”) mode.
Let’s check the number of online redo logs and their size on the primary database.
SQL> select group#,bytes/1024/1024 from v$log;

GROUP# BYTES/1024/1024
------ ---------------
     1             100
     2             100
     3             100
It can be noticed from the output below that there are no standby redo log groups configured on the primary database.
SQL> select group#,bytes/1024/1024 from v$standby_log;

no rows selected
Add standby redo log groups on the primary database with the same size as that of the online redo log groups.
SQL> alter database add standby logfile group 4 size 100M;

Database altered.

SQL> alter database add standby logfile group 5 size 100M;

Database altered.

SQL> alter database add standby logfile group 6 size 100M;

Database altered.

SQL> alter database add standby logfile group 7 size 100M;

Database altered.
We can now see that 4 standby redo log groups have been added with the same size as the online redo logs.
These standby redo logs will not be used while this database runs as primary; they come into play only after a role transition such as a switchover.
SQL> select group#,bytes/1024/1024 from v$standby_log;

GROUP# BYTES/1024/1024
------ ---------------
     4             100
     5             100
     6             100
     7             100

SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

THREAD# MAX(SEQUENCE#)
------- --------------
      1            311
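The state of each standby redo log group can also be checked from v$standby_log; a quick sketch (on a primary, the groups are expected to show UNASSIGNED since they are unused until a role transition):

```sql
-- Inspect standby redo log group assignment
SQL> select group#, thread#, status from v$standby_log;
```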
Standby:
SQL> select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;

THREAD# MAX(SEQUENCE#)
------- --------------
      1            311
Standby database is in sync with the primary database.
Standby database details:
SQL> select status,instance_name,database_role from v$database,v$Instance;

STATUS  INSTANCE_NAME DATABASE_ROLE
------- ------------- ----------------
MOUNTED srpstb        PHYSICAL STANDBY
On the standby database, there are 3 online redo log groups with the size 100M and there are no standby redo log groups.
SQL> select group#,bytes/1024/1024 from v$log;

GROUP# BYTES/1024/1024
------ ---------------
     1             100
     3             100
     2             100

SQL> select group#,bytes/1024/1024 from v$standby_log;

no rows selected
Let’s add standby redo log groups on the standby database. Before that, we need to check whether the MRP (Managed Recovery Process) is running on the standby database and, if it is, cancel it.
SQL> select process,status,sequence# from v$managed_standby;

PROCESS STATUS       SEQUENCE#
------- ------------ ---------
ARCH    CONNECTED            0
ARCH    CONNECTED            0
ARCH    CONNECTED            0
ARCH    CONNECTED            0
RFS     IDLE                 0
RFS     IDLE                 0
RFS     IDLE                 0
MRP0    WAIT_FOR_LOG       312

8 rows selected.
Cancel the MRP process on the standby database:
SQL> alter database recover managed standby database cancel;

Database altered.
Add 4 Standby Redo Log (SRL) groups of size same as online redo log groups (100M) on the standby database:
SQL> alter database add standby logfile group 4 size 100M;

Database altered.

SQL> alter database add standby logfile group 5 size 100M;

Database altered.

SQL> alter database add standby logfile group 6 size 100M;

Database altered.

SQL> alter database add standby logfile group 7 size 100M;

Database altered.

SQL> select group#,bytes/1024/1024 from v$standby_log;

GROUP# BYTES/1024/1024
------ ---------------
     4             100
     5             100
     6             100
     7             100
Once the SRLs are added, start the MRP on the standby database.
SQL> alter database recover managed standby database disconnect from session;

Database altered.

SQL> select process,status,sequence# from v$managed_standby;

PROCESS STATUS       SEQUENCE#
------- ------------ ---------
ARCH    CONNECTED            0
ARCH    CONNECTED            0
ARCH    CONNECTED            0
ARCH    CONNECTED            0
RFS     IDLE                 0
RFS     IDLE                 0
RFS     IDLE                 0
MRP0    WAIT_FOR_LOG       312

8 rows selected.
As said earlier, configure the redo shipment parameter (log_archive_dest_2) on the primary database to use synchronous mode.
Primary database:
SQL> show parameter log_archive_dest_2
NAME TYPE VALUE
------------------ ------ -----------------------------------
log_archive_dest_2 string service=srpstb valid_for=(online_logfiles,primary_role) db_unique_name=srpstb
SQL> alter system set log_archive_dest_2='service=srpstb LGWR AFFIRM SYNC valid_for=(online_logfiles,primary_role) db_unique_name=srpstb';
System altered.
SQL> show parameter dest_2
NAME TYPE VALUE
--------------------------- ------ ----------------------------
db_create_online_log_dest_2 string
log_archive_dest_2 string service=srpstb LGWR AFFIRM SYNC valid_for=(online_logfiles,primary_role) db_unique_name=srpstb
Shut down the primary database and mount it.
SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area  208769024 bytes
Fixed Size                  2226936 bytes
Variable Size             180356360 bytes
Database Buffers           20971520 bytes
Redo Buffers                5214208 bytes
Database mounted.
SQL>
Now change the protection mode on the primary database according to the requirement using the below command
“alter database set standby database to maximize {AVAILABILITY | PROTECTION| PERFORMANCE}”
Here, I am changing the protection mode from MAXIMUM PERFORMANCE to MAXIMUM AVAILABILITY
SQL> alter database set standby database to maximize availability;

Database altered.
Once the mode is changed, open the primary database and verify the same.
SQL> alter database open;

Database altered.

SQL> select status,instance_name,database_role,protection_mode from v$database,v$instance;

STATUS INSTANCE_NAME DATABASE_ROLE PROTECTION_MODE
------ ------------- ------------- --------------------
OPEN   srprim        PRIMARY       MAXIMUM AVAILABILITY
Check if the standby database is in sync with the primary database
On primary:
SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

THREAD# MAX(SEQUENCE#)
------- --------------
      1            316
Standby:
SQL> select status,instance_name,database_role,protection_mode from v$database,v$instance;

STATUS  INSTANCE_NAME DATABASE_ROLE    PROTECTION_MODE
------- ------------- ---------------- --------------------
MOUNTED srpstb        PHYSICAL STANDBY MAXIMUM AVAILABILITY

SQL> select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;

THREAD# MAX(SEQUENCE#)
------- --------------
      1            316
The standby database is in sync with the primary, and the protection mode has been changed to MAXIMUM AVAILABILITY.
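Besides comparing archived log sequences, the lag can be sketched directly from the standby using the v$dataguard_stats view (run on the standby; the two metric names shown are the standard ones in 11g):

```sql
-- Transport and apply lag as reported by the standby
SQL> select name, value from v$dataguard_stats
     where name in ('transport lag','apply lag');
```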
Here we go !!
Database Name : SRPRIM
Tablespace Name : MYTS
A table associated with the tablespace MYTS was accidentally dropped on one of our test environments. Flashback was not enabled on this database and the recycle bin had been purged, so the table could not be recovered using the flashback method. The other methods to recover the dropped table would be:
1. Duplicate/clone the database in which the table was dropped, until a time before the table was dropped, to a temporary database; export the table from the temporary database and import it into the main database.
–This method is the preferable one. I shall demonstrate it in my coming posts.
2. Use Tablespace Point In Time Recovery (TSPITR). Here, the tablespace associated with the dropped table is recovered until a point in time just before the drop. The main disadvantage of this method is that any other successful transactions carried out on this tablespace after the time the table was dropped are lost when you perform the point in time recovery.
As this was a test environment and due to lack of time, I had to opt for this method, but yes, “only upon confirmation from the management side.” 🙂
Schema name: Shivu
Table name : TEST
SQL> select * from test;

      CODE
----------
       100
       200
       300
       400
       500
       600
       700
       800
       900
      1000

10 rows selected.
Make sure that you have a valid RMAN level 0 backup to proceed.
SQL> select to_char(sysdate,'DD-MON-YYYY HH24:MI:SS') from dual;

TO_CHAR(SYSDATE,'DD-
--------------------
04-MAY-2013 11:38:12
Table TEST has been dropped and also the recyclebin has been purged.
SQL> drop table shivu.test;

Table dropped.

SQL> purge recyclebin;

Recyclebin purged.

SQL> select count(*) from shivu.test;
select count(*) from shivu.test
                           *
ERROR at line 1:
ORA-00942: table or view does not exist
Now, let’s perform the tablespace point in time recovery until the time before the table was dropped.
For TSPITR, we need to specify the auxiliary destination where RMAN creates a temporary database by restoring the controlfile, the SYSTEM, UNDO and SYSAUX tablespaces, and finally the tablespace that needs to be recovered.
Once the temporary database is restored and recovered, RMAN automatically exports the contents of the tablespace being recovered from the temporary database to a dump file and imports this dump into the target (main) database where the table was dropped.
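Before running a TSPITR, it can be worth checking which objects would be lost because they were created after the target time; Oracle exposes this through the TS_PITR_OBJECTS_TO_BE_DROPPED view. A sketch for this scenario (tablespace name and timestamp taken from this example):

```sql
-- Objects in MYTS created after the recovery target time; these would be lost
SQL> select owner, name, creation_time
     from ts_pitr_objects_to_be_dropped
     where tablespace_name = 'MYTS'
       and creation_time > to_date('04-MAY-2013 11:38:00','DD-MON-YYYY HH24:MI:SS');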
[oracle@ora1-1 dbs]$ rman target /
Recovery Manager: Release 11.2.0.3.0 - Production on Sat May 4 14:07:23 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
connected to target database: SRPRIM (DBID=216679430)
RMAN> run
2> {
3> recover tablespace MYTS until time "to_date('2013-05-04:11:38:00','YYYY-MM:DD:HH24:MI:SS')" auxiliary destination '+FRA';
4> }
Starting recover at 04-MAY-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=51 device type=DISK
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified point-in-time
List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1
Creating automatic instance, with SID='snya'
initialization parameters used for automatic instance:
db_name=SRPRIM
db_unique_name=snya_tspitr_SRPRIM
compatible=11.2.0.0.0
db_block_size=8192
db_files=200
sga_target=280M
processes=50
db_create_file_dest=+FRA
log_archive_dest_1='location=+FRA'
#No auxiliary parameter file used
starting up automatic instance SRPRIM
Oracle instance started
Total System Global Area 292278272 bytes
Fixed Size 2227744 bytes
Variable Size 100663776 bytes
Database Buffers 184549376 bytes
Redo Buffers 4837376 bytes
Automatic instance created
List of tablespaces that have been dropped from the target database:
Tablespace MYTS
contents of Memory Script:
{
# set requested point in time
set until time "to_date('2013-05-04:11:38:00','YYYY-MM:DD:HH24:MI:SS')";
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone 'alter database mount clone database';
# archive current online log
sql 'alter system archive log current';
# avoid unnecessary autobackups for structural changes during TSPITR
sql 'begin dbms_backup_restore.AutoBackupFlag(FALSE); end;';
}
executing Memory Script
executing command: SET until clause
Starting restore at 04-MAY-13
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=23 device type=DISK
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u01/autobackup/c-216679430-20130503-00
channel ORA_AUX_DISK_1: piece handle=/u01/autobackup/c-216679430-20130503-00 tag=TAG20130503T202409
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:07
output file name=+FRA/srprim/controlfile/current.446.814543667
Finished restore at 04-MAY-13
sql statement: alter database mount clone database
sql statement: alter system archive log current
sql statement: begin dbms_backup_restore.AutoBackupFlag(FALSE); end;
contents of Memory Script:
{
# set requested point in time
set until time "to_date('2013-05-04:11:38:00','YYYY-MM:DD:HH24:MI:SS')";
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile 1 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 2 to new;
set newname for clone tempfile 1 to new;
set newname for datafile 6 to new;
set newname for datafile 8 to new;
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile 1, 3, 2, 6, 8;
switch clone datafile all;
}
executing Memory Script
executing command: SET until clause
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
renamed tempfile 1 to +FRA in control file
Starting restore at 04-MAY-13
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00002 to +FRA
channel ORA_AUX_DISK_1: reading from backup piece +FRA/srprim/backupset/2013_05_03/nnndn0_tag20130503t202042_0.425.814479643
channel ORA_AUX_DISK_1: piece handle=+FRA/srprim/backupset/2013_05_03/nnndn0_tag20130503t202042_0.425.814479643 tag=TAG20130503T202042
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:46
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to +FRA
channel ORA_AUX_DISK_1: restoring datafile 00003 to +FRA
channel ORA_AUX_DISK_1: restoring datafile 00006 to +DATA
channel ORA_AUX_DISK_1: restoring datafile 00008 to +DATA
channel ORA_AUX_DISK_1: reading from backup piece +FRA/srprim/backupset/2013_05_03/nnndn0_tag20130503t202042_0.426.814479645
channel ORA_AUX_DISK_1: piece handle=+FRA/srprim/backupset/2013_05_03/nnndn0_tag20130503t202042_0.426.814479645 tag=TAG20130503T202042
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:01:35
Finished restore at 04-MAY-13
datafile 6 switched to datafile copy
input datafile copy RECID=13 STAMP=814543825 file name=+DATA/srprim/datafile/myts.275.814543731
datafile 8 switched to datafile copy
input datafile copy RECID=14 STAMP=814543825 file name=+DATA/srprim/datafile/myts.276.814543733
datafile 1 switched to datafile copy
input datafile copy RECID=15 STAMP=814543825 file name=+FRA/srprim/datafile/system.450.814543729
datafile 3 switched to datafile copy
input datafile copy RECID=16 STAMP=814543825 file name=+FRA/srprim/datafile/undotbs1.452.814543733
datafile 2 switched to datafile copy
input datafile copy RECID=17 STAMP=814543825 file name=+FRA/srprim/datafile/sysaux.448.814543685
contents of Memory Script:
{
# set requested point in time
set until time "to_date('2013-05-04:11:38:00','YYYY-MM:DD:HH24:MI:SS')";
# online the datafiles restored or switched
sql clone "alter database datafile 1 online";
sql clone "alter database datafile 3 online";
sql clone "alter database datafile 2 online";
sql clone "alter database datafile 6 online";
sql clone "alter database datafile 8 online";
# recover and open resetlogs
recover clone database tablespace "MYTS", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script
executing command: SET until clause
sql statement: alter database datafile 1 online
sql statement: alter database datafile 3 online
sql statement: alter database datafile 2 online
sql statement: alter database datafile 6 online
sql statement: alter database datafile 8 online
Starting recover at 04-MAY-13
using channel ORA_AUX_DISK_1
starting media recovery
archived log for thread 1 with sequence 283 is already on disk as file +FRA/srprim/archivelog/2013_05_03/thread_1_seq_283.427.814479821
archived log for thread 1 with sequence 284 is already on disk as file +FRA/srprim/archivelog/2013_05_03/thread_1_seq_284.429.814479833
archived log for thread 1 with sequence 285 is already on disk as file +FRA/srprim/archivelog/2013_05_04/thread_1_seq_285.433.814534647
archived log for thread 1 with sequence 286 is already on disk as file +FRA/srprim/archivelog/2013_05_04/thread_1_seq_286.434.814534651
archived log for thread 1 with sequence 287 is already on disk as file +FRA/srprim/archivelog/2013_05_04/thread_1_seq_287.435.814534887
archived log file name=+FRA/srprim/archivelog/2013_05_03/thread_1_seq_283.427.814479821 thread=1 sequence=283
archived log file name=+FRA/srprim/archivelog/2013_05_03/thread_1_seq_284.429.814479833 thread=1 sequence=284
archived log file name=+FRA/srprim/archivelog/2013_05_04/thread_1_seq_285.433.814534647 thread=1 sequence=285
archived log file name=+FRA/srprim/archivelog/2013_05_04/thread_1_seq_286.434.814534651 thread=1 sequence=286
archived log file name=+FRA/srprim/archivelog/2013_05_04/thread_1_seq_287.435.814534887 thread=1 sequence=287
media recovery complete, elapsed time: 00:00:08
Finished recover at 04-MAY-13
database opened
contents of Memory Script:
{
# make read only the tablespace that will be exported
sql clone 'alter tablespace MYTS read only';
# create directory for datapump import
sql "create or replace directory TSPITR_DIROBJ_DPDIR as ''
+FRA''";
# create directory for datapump export
sql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''
+FRA''";
}
executing Memory Script
sql statement: alter tablespace MYTS read only
sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''+FRA''
sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''+FRA''
Performing export of metadata...
EXPDP> Starting "SYS"."TSPITR_EXP_snya":
EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
EXPDP> Master table "SYS"."TSPITR_EXP_snya" successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TSPITR_EXP_snya is:
EXPDP> +FRA/tspitr_snya_51808.dmp
EXPDP> ******************************************************************************
EXPDP> Datafiles required for transportable tablespace MYTS:
EXPDP> +DATA/srprim/datafile/myts.275.814543731
EXPDP> +DATA/srprim/datafile/myts.276.814543733
EXPDP> Job "SYS"."TSPITR_EXP_snya" successfully completed at 14:13:10
Export completed
contents of Memory Script:
{
# shutdown clone before import
shutdown clone immediate
}
executing Memory Script
database closed
database dismounted
Oracle instance shut down
Performing import of metadata...
IMPDP> Master table "SYS"."TSPITR_IMP_snya" successfully loaded/unloaded
IMPDP> Starting "SYS"."TSPITR_IMP_snya":
IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
IMPDP> Job "SYS"."TSPITR_IMP_snya" successfully completed at 14:13:46
Import completed
contents of Memory Script:
{
# make read write and offline the imported tablespaces
sql 'alter tablespace MYTS read write';
sql 'alter tablespace MYTS offline';
# enable autobackups after TSPITR is finished
sql 'begin dbms_backup_restore.AutoBackupFlag(TRUE); end;';
}
executing Memory Script
sql statement: alter tablespace MYTS read write
sql statement: alter tablespace MYTS offline
sql statement: begin dbms_backup_restore.AutoBackupFlag(TRUE); end;
Removing automatic instance
Automatic instance removed
auxiliary instance file +FRA/srprim/tempfile/temp.456.814543869 deleted
auxiliary instance file +FRA/srprim/onlinelog/group_3.455.814543857 deleted
auxiliary instance file +FRA/srprim/onlinelog/group_2.454.814543849 deleted
auxiliary instance file +FRA/srprim/onlinelog/group_1.453.814543837 deleted
auxiliary instance file +FRA/srprim/datafile/sysaux.448.814543685 deleted
auxiliary instance file +FRA/srprim/datafile/undotbs1.452.814543733 deleted
auxiliary instance file +FRA/srprim/datafile/system.450.814543729 deleted
auxiliary instance file +FRA/srprim/controlfile/current.446.814543667 deleted
Finished recover at 04-MAY-13
RMAN> exit
Once the import is done successfully, RMAN automatically deletes the temporary database that it had created earlier.
Now, let’s connect to the main database and check whether we are able to access the dropped table. Before that, the tablespace needs to be brought online.
This can be noticed here.
[oracle@ora1-1 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Sat May 4 14:16:25 2013
Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

SQL> set linesize 300
SQL> col name for a60
SQL> select name,status from v$datafile;

NAME                                         STATUS
-------------------------------------------- -------
+DATA/srprim/datafile/system.257.800915677   SYSTEM
+DATA/srprim/datafile/sysaux.258.800915679   ONLINE
+DATA/srprim/datafile/undotbs1.265.800915683 ONLINE
+DATA/srprim/datafile/users.256.800915683    ONLINE
+DATA/srprim/datafile/example.259.800915681  ONLINE
+DATA/srprim/datafile/myts.276.814543733     OFFLINE
+DATA/srprim/datafile/tbs.268.804023681      ONLINE
+DATA/srprim/datafile/myts.275.814543731     OFFLINE

8 rows selected.

SQL> select tablespace_name,status from dba_tablespaces;

TABLESPACE_NAME STATUS
--------------- ---------
SYSTEM          ONLINE
SYSAUX          ONLINE
UNDOTBS1        ONLINE
TEMP            ONLINE
USERS           ONLINE
EXAMPLE         ONLINE
MYTS            OFFLINE
TBS             ONLINE

8 rows selected.
You can notice that the tablespace MYTS is offline. Before proceeding, this tablespace needs to be made online.
SQL> alter tablespace MYTS online;

Tablespace altered.

SQL> select tablespace_name,status from dba_tablespaces;

TABLESPACE_NAME STATUS
--------------- ---------
SYSTEM          ONLINE
SYSAUX          ONLINE
UNDOTBS1        ONLINE
TEMP            ONLINE
USERS           ONLINE
EXAMPLE         ONLINE
MYTS            ONLINE
TBS             ONLINE

8 rows selected.
Now let’s check the contents of the table “TEST” of schema “SHIVU”.
SQL> select count(*) from shivu.test;

  COUNT(*)
----------
        10

SQL> select * from shivu.test;

      CODE
----------
       100
       200
       300
       400
       500
       600
       700
       800
       900
      1000

10 rows selected.
We are able to get back the contents of the TEST table.
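One follow-up worth noting: Oracle recommends taking a fresh backup of a tablespace immediately after TSPITR, since backups taken before the TSPITR cannot be used to recover it to a point after the operation. A minimal sketch:

```sql
-- Back up the recovered tablespace so it is protected going forward
RMAN> backup tablespace MYTS;
```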
Here we go !!
After upgrading the CRS from 10.2.0.1 to 10.2.0.4 (link -> https://shivanandarao-oracle.com/2013/03/04/upgrade-crs-from-10-2-0-1-to-10-2-0-4/), let's now move on to upgrading the database from 10.2.0.1 to 10.2.0.4.
Step 1:
Let's stop the database running on both the nodes 10gnode1 and 10gnode2. Once the database is stopped, stop the ASM instances and the nodeapps running on all the nodes.
[oracle@10gnode1 ~]$ srvctl stop database -d srprim
[oracle@10gnode1 ~]$ srvctl stop asm -n 10gnode1
[oracle@10gnode2 ~]$ srvctl stop asm -n 10gnode2
[oracle@10gnode1 ~]$ srvctl stop nodeapps -n 10gnode1
[oracle@10gnode2 ~]$ srvctl stop nodeapps -n 10gnode2
Step 2:
From the unzipped patch, run the runInstaller file on 10gnode1.
Select the 10g Database home that needs to be upgraded and verify the home path.
Once the patch has been applied, the installer prompts you to run the root.sh script from the 10g database home as the root user. Run the script on both the nodes.
[root@10gnode1 ~]# /u01/app/oracle/product/10.2.0/db_1/root.sh [root@10gnode2 ~]# /u01/app/oracle/product/10.2.0/db_1/root.sh
Step 3:
Now, let's start the nodeapps and the ASM instances on both the nodes.
[oracle@10gnode1 ~]$ srvctl start nodeapps -n 10gnode1
[oracle@10gnode2 ~]$ srvctl start nodeapps -n 10gnode2
[oracle@10gnode1 ~]$ srvctl start asm -n 10gnode1
[oracle@10gnode2 ~]$ srvctl start asm -n 10gnode2
Step 4:
Database Upgrade.
On 10gnode1, start the instance “srprim1”. Since we are upgrading the database, we start up only one instance.
[oracle@10gnode1 dbs]$ export ORACLE_SID=srprim1
[oracle@10gnode1 dbs]$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
[oracle@10gnode1 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Feb 8 13:02:55 2013

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

Connected to an idle instance.

SQL> startup nomount
ORACLE instance started.

Total System Global Area  926941184 bytes
Fixed Size                  1270748 bytes
Variable Size             247467044 bytes
Database Buffers          675282944 bytes
Redo Buffers                2920448 bytes
SQL>
Step 5:
Create a PFILE from the SPFILE so that the cluster parameters mentioned below can be removed.
SQL> create pfile='/u01/initsrprim1.ora' from spfile;

File created.

SQL> shut immediate
ORA-01507: database not mounted
ORACLE instance shut down.
Step 6:
Now, remove or comment out the cluster parameters shown below in the newly created PFILE.
vi /u01/initsrprim1.ora

#REMOVE THESE PARAMETERS#
#*.cluster_database_instances=2
#*.cluster_database=true
#srprim2.instance_number=2
#srprim1.instance_number=1
Step 7:
Start up the instance “srprim1” in UPGRADE mode using the modified PFILE.
[oracle@10gnode1 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Feb 8 13:09:21 2013

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

Connected to an idle instance.

SQL> startup upgrade pfile='/u01/initsrprim1.ora';
ORACLE instance started.

Total System Global Area  926941184 bytes
Fixed Size                  1270748 bytes
Variable Size             247467044 bytes
Database Buffers          675282944 bytes
Redo Buffers                2920448 bytes
Database mounted.
Database opened.
Step 8:
Once the database is started up in UPGRADE mode, run the catupgrd.sql script to carry out the upgrade process.
SQL> spool '/u01/10204upgrd.txt';
SQL> @/u01/app/oracle/product/10.2.0/db_1/rdbms/admin/catupgrd.sql
Step 9:
Once the script has executed successfully, shut down the database and open it normally using the same PFILE. Then run the utlrp.sql script from the rdbms/admin directory of the 10.2.0 Oracle database home to recompile the invalid objects.
[oracle@10gnode1 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Feb 8 13:33:08 2013

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

Connected to:
Oracle Database 10g Release 10.2.0.4.0 - Production
With the Real Application Clusters option

SQL> select status,instance_name from v$instance;

STATUS       INSTANCE_NAME
------------ ----------------
OPEN MIGRATE srprim1

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 10g Release 10.2.0.4.0 - Production
With the Real Application Clusters option

[oracle@10gnode1 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Feb 8 13:33:53 2013

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

Connected to an idle instance.

SQL> startup pfile=/u01/initsrprim1.ora
ORACLE instance started.

Total System Global Area  926941184 bytes
Fixed Size                  1270748 bytes
Variable Size             247467044 bytes
Database Buffers          675282944 bytes
Redo Buffers                2920448 bytes
Database mounted.
Database opened.
SQL>
SQL> @/u01/app/oracle/product/10.2.0/db_1/rdbms/admin/utlrp.sql
SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
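Optionally, once utlrp.sql finishes, a quick query against dba_objects (a sketch; run in the same session before shutting down) shows whether any invalid objects remain:

```sql
-- Count the objects still INVALID after utlrp.sql; ideally this returns 0
SELECT COUNT(*)
FROM   dba_objects
WHERE  status = 'INVALID';
```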
Step 10:
Now, let's start the instance “srprim1” using the global SPFILE rather than the temporary PFILE (“/u01/initsrprim1.ora”).
SQL> startup
ORACLE instance started.

Total System Global Area  926941184 bytes
Fixed Size                  1270748 bytes
Variable Size             247467044 bytes
Database Buffers          675282944 bytes
Redo Buffers                2920448 bytes
Database mounted.
Database opened.
SQL>
SQL> select status,instance_name from v$instance;

STATUS       INSTANCE_NAME
------------ ----------------
OPEN         srprim1

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Release 10.2.0.4.0 - Production
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
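As a further check (optional; dba_registry is a standard dictionary view), you can verify that the database components now report version 10.2.0.4 with a VALID status:

```sql
-- Confirm component versions and status after the upgrade
SELECT comp_name, version, status
FROM   dba_registry;
```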
Step 11:
Now that the SRPRIM database has been upgraded to 10.2.0.4, let's check the status of the instances using SRVCTL.
[oracle@10gnode1 dbs]$ srvctl status database -d srprim
Instance srprim1 is running on node 10gnode1
Instance srprim2 is not running on node 10gnode2
[oracle@10gnode1 dbs]$
In Steps 6 through 9, note that the upgrade was carried out using a temporary PFILE located at “/u01/initsrprim1.ora” and that no changes were made to the original PFILE or SPFILE (globally located).
Hence, after the upgrade completed successfully, there was no need to uncomment the changes made to the cluster parameters in the temporary PFILE. As seen in Step 10, the instance “srprim1” was started using the global SPFILE.
You can see above that the instance “srprim2” is not started. Let's start it using the “srvctl start instance” command.
[oracle@10gnode1 dbs]$ srvctl start instance -d srprim -i srprim2
Let's cross-verify the status of all the instances once again.
[oracle@10gnode1 dbs]$ srvctl status database -d srprim
Instance srprim1 is running on node 10gnode1
Instance srprim2 is running on node 10gnode2
Here you go !!