January 16, 2016 / Shivananda Rao P

GI upgrade from 11.2.0.3 to 12.1.0.2 on a standalone machine using silent mode

This post demonstrates how to perform a GI upgrade from version 11.2.0.3 to 12.1.0.2 on a standalone machine. Usually the upgrade is performed via the GUI, but here I’m demonstrating it in silent mode using a response file.

 

Environment:


Hostname                                  :   ora1-4
Current GI version                        :   11.2.0.3
GI response file location                 :   /u02/grid_install.rsp
Existing 11.2.0.3 GI Home                 :   /u01/app/oracle/product/11.2.0.3/grid
To be installed 12.1.0.2 GI Home location :   /u01/app/oracle/product/12.1.0.2/grid/

 

As you know, GI on a standalone machine makes use of the High Availability Service (HAS) and not the Cluster Ready Service (CRS). Let’s query the current HAS version.

 


[oracle@ora1-4 bin]$ ./crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.3.0]
[oracle@ora1-4 bin]$
[oracle@ora1-4 bin]$
[oracle@ora1-4 bin]$ ./crsctl query has softwareversion
Oracle High Availability Services version on the local node is [11.2.0.3.0]
[oracle@ora1-4 bin]$

 

Unzip the 12.1.0.2 GI software pack and take a copy of the response file grid_install.rsp located at “<unzipped location>/grid/response”. Here I have copied it to “/u02”.

 

Make necessary changes to the copied response file. The changes for a standalone GI upgrade include:

 

1. ORACLE_HOSTNAME
     - Specify the hostname on which the GI upgrade is being performed.
2. INVENTORY_LOCATION
     - Specify the inventory location.
3. oracle.install.option
     - Set this to UPGRADE, as that is what we are performing.
4. ORACLE_BASE
     - Specify the ORACLE_BASE location.
5. ORACLE_HOME
     - Specify the location where GI 12.1.0.2 needs to be installed.
6. Specify the OS groups for OSDBA, OSOPER and OSASM.

7. Since this is a standalone setup, SCAN and cluster settings have no role to play, so these are skipped.

8. The ASM section is also skipped, because the 11.2.0.3 GI already has an ASM instance using the DATA_NEW and FRA_NEW diskgroups. Since I do not want a fresh ASM instance or a fresh diskgroup to be created, this part is left untouched.

9. Under the UPGRADE section, if you want the GI to be managed by OEM Grid Control, set oracle.install.config.managementOption to CLOUD_CONTROL, else choose NONE. If CLOUD_CONTROL is chosen, provide the additional details asked for, such as the OMS host name and OMS port. In this demo, I do not want the GI to be managed by Grid Control and have hence chosen NONE.
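Put together, the edited entries look roughly like the fragment below. This is a hedged sketch: ORACLE_HOSTNAME, ORACLE_HOME and the inventory location come from this post’s environment and install logs, while the ORACLE_BASE value and the OS group names are illustrative assumptions. Verify the exact parameter names against the grid_install.rsp shipped with your 12.1.0.2 software.

```properties
# Sketch of the key response-file entries for a standalone GI upgrade.
# ORACLE_BASE and the OS group names below are assumptions, not from the post.
oracle.install.option=UPGRADE
ORACLE_HOSTNAME=ora1-4
INVENTORY_LOCATION=/u01/app/oraInventory
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/grid
oracle.install.asm.OSDBA=dba
oracle.install.asm.OSOPER=oper
oracle.install.asm.OSASM=asmadmin
oracle.install.config.managementOption=NONE
```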

 

Here is the response file with the updated values used to perform the upgrade to 12.1.0.2.

GI response file.
Run “runInstaller” with the silent option, providing the complete path of the response file. Make sure that ORACLE_HOME and ORA_CRS_HOME are unset, else you may face the below error message.

 

 


[oracle@ora1-4 grid]$ ./runInstaller -silent -responseFile /u02/grid_install.rsp -showProgress
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 415 MB. Actual 8756 MB Passed
Checking swap space: must be greater than 150 MB. Actual 10236 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2016-01-10_06-29-38PM. Please wait ...[oracle@ora1-4 grid]$ [FATAL] [INS-30131] Initial setup required for the execution of installer validations failed.
CAUSE: Failed to access the temporary location.
ACTION: Ensure that the current user has required permissions to access the temporary location.
*ADDITIONAL INFORMATION:*
Exception details
 - PRVG-1561 : Setting ORA_CRS_HOME variable is not supported

Unset ORACLE_HOME and ORA_CRS_HOME and run runInstaller with the silent option again. If the pre-requisite checks fail, the installation throws an error; if so, take the necessary corrective actions.
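Before launching, a small pre-flight check can confirm the environment is clean. This is a hedged sketch in plain shell, not part of the Oracle tooling; the function name is mine:

```shell
# Pre-flight sketch: refuse to proceed if either variable is still set.
check_gi_env() {
  for v in ORACLE_HOME ORA_CRS_HOME; do
    if [ -n "$(printenv "$v")" ]; then
      echo "ERROR: $v is set; unset it before running the installer" >&2
      return 1
    fi
  done
  echo "Environment clean; safe to launch runInstaller"
}

unset ORACLE_HOME ORA_CRS_HOME
check_gi_env
```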

 


[oracle@ora1-4 grid]$ unset ORACLE_HOME
[oracle@ora1-4 grid]$ unset ORA_CRS_HOME
[oracle@ora1-4 grid]$
[oracle@ora1-4 grid]$ ./runInstaller -silent -responseFile /u02/grid_install.rsp -showProgress
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 415 MB. Actual 8756 MB Passed
Checking swap space: must be greater than 150 MB. Actual 10236 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2016-01-10_06-30-53PM. Please wait ...
You can find the log of this install session at:
/u01/app/oraInventory/logs/installActions2016-01-10_06-30-53PM.log

Prepare in progress.
.................................................. 7% Done.

Prepare successful.

Copy files in progress.
.................................................. 12% Done.
.................................................. 22% Done.
.................................................. 27% Done.
.................................................. 32% Done.
.................................................. 40% Done.
.................................................. 48% Done.
.................................................. 53% Done.
.................................................. 58% Done.
.................................................. 63% Done.
....................
Copy files successful.

Link binaries in progress.
..........
Link binaries successful.

Setup files in progress.
.................................................. 68% Done.

Setup files successful.

Setup Inventory in progress.

Setup Inventory successful.
.................................................. 73% Done.

Finish Setup successful.
The installation of Oracle Grid Infrastructure 12c was successful.
Please check '/u01/app/oraInventory/logs/silentInstall2016-01-10_06-30-53PM.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
.................................................. 82% Done.

Prepare for configuration steps in progress.

Prepare for configuration steps successful.
..........
Update Inventory in progress.

Update Inventory successful.
.................................................. 95% Done.

As a root user, execute the following script(s):
1. /u01/app/oracle/product/12.1.0.2/grid/rootupgrade.sh
Run the script on the local node.

.................................................. 100% Done.
Successfully Setup Software.

 

Run the rootupgrade.sh script as the ROOT user from the new 12.1.0.2 GI home. Here is the outcome of the rootupgrade.sh script.

 

[root@ora1-4 ~]# /u01/app/oracle/product/12.1.0.2/grid/rootupgrade.sh
Check /u01/app/oracle/product/12.1.0.2/grid/install/root_ora1-4.mydomain_2016-01-10_18-50-05.log for the output of root script

[root@ora1-4 ~]#

 

Let’s check the output of the rootupgrade script.

 


[oracle@ora1-4 ~]$ cat /u01/app/oracle/product/12.1.0.2/grid/install/root_ora1-4.mydomain_2016-01-10_18-50-05.log
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.1.0.2/grid
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/oracle/product/12.1.0.2/grid/crs/install/crsconfig_params

ASM Configuration upgraded successfully.

Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node ora1-4 successfully pinned.
2016/01/10 18:51:37 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

2016/01/10 18:52:40 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

2016/01/10 18:54:10 CLSRSC-482: Running command: 'upgrade model -s 11.2.0.3.0 -d 12.1.0.2.0 -p first'

2016/01/10 18:54:18 CLSRSC-482: Running command: 'upgrade model -s 11.2.0.3.0 -d 12.1.0.2.0 -p last'

ora1-4 2016/01/10 18:54:19 /u01/app/oracle/product/12.1.0.2/grid/cdata/ora1-4/backup_20160110_185419.olr 0

ora1-4 2016/01/09 13:02:23 /u01/app/oracle/product/11.2.0.3/grid/cdata/ora1-4/backup_20160109_130223.olr -
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'ora1-4'
CRS-2673: Attempting to stop 'ora.evmd' on 'ora1-4'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'ora1-4'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'ora1-4' succeeded
CRS-2677: Stop of 'ora.evmd' on 'ora1-4' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'ora1-4'
CRS-2677: Stop of 'ora.cssd' on 'ora1-4' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'ora1-4' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2016/01/10 18:56:03 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

[oracle@ora1-4 ~]$

 

ASM too has been upgraded. Let’s check the HAS version from the 12.1.0.2 GI home.

 


[oracle@ora1-4 grid]$ cd /u01/app/oracle/product/12.1.0.2/grid/bin/
[oracle@ora1-4 bin]$
[oracle@ora1-4 bin]$
[oracle@ora1-4 bin]$ ./crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]
[oracle@ora1-4 bin]$
[oracle@ora1-4 bin]$
[oracle@ora1-4 bin]$ ./crsctl query has softwareversion
Oracle High Availability Services version on the local node is [12.1.0.2.0]

 

The ASM home too is updated automatically in the /etc/oratab file during the upgrade.

 


[oracle@ora1-4 bin]$ cat /etc/oratab | grep ASM
+ASM:/u01/app/oracle/product/12.1.0.2/grid:N # line added by Agent
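Parsing the colon-separated oratab fields with awk is a cleaner way to pull out the ASM home than piping cat into grep. A minimal, self-contained sketch (it recreates the oratab line above in a temporary file rather than touching the real /etc/oratab):

```shell
# Recreate the oratab entry shown above so the sketch is self-contained.
echo '+ASM:/u01/app/oracle/product/12.1.0.2/grid:N # line added by Agent' > /tmp/oratab.sample

# oratab fields are colon-separated: SID:ORACLE_HOME:startup_flag
awk -F: '$1 == "+ASM" {print $2}' /tmp/oratab.sample
# prints /u01/app/oracle/product/12.1.0.2/grid
```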

 

Let’s connect to the ASM instance and verify its version and the state of its diskgroups.

 

[oracle@ora1-4 ~]$ sqlplus / as sysasm

SQL*Plus: Release 12.1.0.2.0 Production on Sun Jan 10 19:33:28 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option

SQL> select name,state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
DATA_NEW                       MOUNTED
FRA_NEW                        MOUNTED

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE 12.1.0.2.0 Production
TNS for Linux: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production

As seen above, the GI has been upgraded successfully from 11.2.0.3 to 12.1.0.2.

 

 

COPYRIGHT

© Shivananda Rao P, 2012 to 2017. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Shivananda Rao and http://www.shivanandarao-oracle.com with appropriate and specific direction to the original content.

 

 
DISCLAIMER

The views expressed here are my own and do not necessarily reflect the views of any other individual, business entity, or organization. The views expressed by visitors on this blog are theirs solely and may not reflect mine.

January 16, 2016 / Shivananda Rao P

ORA-16629: database reports a different protection level from the protection mode

While checking the DG Broker configuration status, I was surprised to see DGMGRL throwing the warning “ORA-16629: database reports a different protection level from the protection mode”.

 

Though the standby database was in sync with its primary, DGMGRL kept making noise by throwing the above warning. This could happen if the standby redo log files were sized differently from the online redo log files, or if there was a lag on the standby database while in MaxAvailability mode. In my case, however, none of these symptoms were present: the SRLs were the same size as the ORLs, and there was no lag on the standby database.

 

Environment:


Primary database : srpstb
Primary node     : ora1-2
Standby database : srprim
Standby node     : ora1-1

 


DGMGRL> show configuration;

Configuration - dgtest

Protection Mode: MaxAvailability
Databases:
srpstb - Primary database
Warning: ORA-16629: database reports a different protection level from the protection mode

srprim - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
WARNING

I went back and re-read the error message. It is reported for the primary database, and it clearly says that the protection level of the database differs from its protection mode. The protection mode is the setting configured for the configuration, while the protection level reflects the protection the database is currently actually achieving.

This led me to query and compare the protection mode and protection level on the primary and standby databases.
On the primary, the protection level reports RESYNCHRONIZATION while the protection mode is MAXIMUM AVAILABILITY.

 


SYS@srpstb> select protection_mode,protection_level from v$database;

PROTECTION_MODE      PROTECTION_LEVEL
-------------------- --------------------
MAXIMUM AVAILABILITY RESYNCHRONIZATION

 

But on the standby database, both the protection mode and the protection level were reporting MAXIMUM PERFORMANCE. Quite strange!

 


SYS@srprim> select protection_mode,protection_level from v$database;

PROTECTION_MODE      PROTECTION_LEVEL
-------------------- --------------------
MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE

To bring the protection mode in line across the primary and physical standby databases, I changed the protection mode to MaxPerformance at the broker level.

 


DGMGRL> edit configuration set protection mode as MAXPERFORMANCE;
Succeeded.
DGMGRL> show configuration;

Configuration - dgtest

Protection Mode: MaxPerformance
Databases:
srpstb - Primary database
srprim - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Now that the broker status shows as “SUCCESS”, let me query the protection mode and level individually for the primary and physical standby database.

On the primary, as expected, the protection mode and level are now reporting MAXIMUM PERFORMANCE.

 


SYS@srpstb> select protection_mode,protection_level from v$database;

PROTECTION_MODE      PROTECTION_LEVEL
-------------------- --------------------
MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE

 

On the standby too, the protection mode and level were reporting as MAX PERFORMANCE.

 


SYS@srprim> select protection_mode,protection_level from v$database;

PROTECTION_MODE      PROTECTION_LEVEL
-------------------- --------------------
MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE


December 18, 2015 / Shivananda Rao P

Roll Forward Physical standby database in 12c

As you are aware, rolling forward a physical standby via an incremental SCN-based backup is the simplest way to bring a lagging standby database back in sync with its primary. The lag can be due to archives missing on the primary that were never shipped to, or applied on, the standby.

 

With 12c, the roll-forward technique is a bit different. In prior versions, we had to take an incremental backup of the primary database from the SCN at which the standby was stalled, copy the backup pieces to the standby server, and then recover the standby using them. Detailed steps for rolling forward a physical standby database in 10g or 11g are here:
Roll Forward Physical Standby in 10g/11g

 

But with 12c, it’s different: the entire restore and recovery operation can be performed through RMAN over the network using the net service name.

 

The syntax is of the form “recover database … from service”. This collapses the multi-step manual method of rolling forward a standby database into a single command. Let’s move on to rolling forward a physical standby database. The environment is a 2-node primary database and a 2-node standby database, using ASM as the storage media, with the broker configuration enabled (not mandatory).

 

Environment:

 

Primary:

 

Primary database nodes     : ora12c-node1, ora12c-node2
Database name              : srprim
Primary DB UNIQUE NAME     : srprim
Primary database instances : srprim1, srprim2

 

Standby:

 

Standby database nodes     : ora12cdr1, ora12cdr2
Database name              : srprim
Standby DB UNIQUE NAME     : srpstb
Standby database instances : srpstb1, srpstb2

The details of the primary and standby instances are as follows:

 

Primary:

 

[oracle@ora12c-node1 ~]$ srvctl status database -db srprim -v -f
Instance srprim1 is running on node ora12c-node1. Instance status: Open.
Instance srprim2 is running on node ora12c-node2. Instance status: Open.

Standby:

[oracle@ora12cdr1 ~]$ srvctl status database -db srpstb -v -f
Instance srpstb1 is running on node ora12cdr1. Instance status: Mounted (Closed).
Instance srpstb2 is running on node ora12cdr2. Instance status: Mounted (Closed).

Broker configuration Details and Status:

 

[oracle@ora12c-node1 ~]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.1.0 - 64bit Production

Copyright (c) 2000, 2012, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL>
DGMGRL> show configuration;

Configuration - dgsrp

  Protection Mode: MaxPerformance
  Databases:
  srprim - Primary database
    srpstb - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

 

 

On the primary, the last sequences generated are 217 and 175 for threads 1 and 2 respectively.

 

SYS@srprim1 > select thread#,max(sequence#) from v$archived_log group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1            217
         2            175

While on the standby, the last sequences applied are 216 and 175 for threads 1 and 2 respectively.

 

SYS@srpstb1 > select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1            216
         2            175

Now, due to missing archives on the primary, a gap has been detected on the standby, and it is waiting for log sequence 219 of thread 1.

 

SYS@srpstb1 > select process,status,sequence#,thread# from gv$managed_standby;

PROCESS   STATUS        SEQUENCE#    THREAD#
--------- ------------ ---------- ----------
ARCH      CLOSING             225          1
ARCH      CLOSING             217          1
ARCH      CONNECTED             0          0
ARCH      CLOSING             178          2
MRP0      WAIT_FOR_GAP        219          1
RFS       IDLE                  0          0
RFS       IDLE                  0          0
RFS       IDLE                179          2
RFS       IDLE                  0          0
RFS       IDLE                  0          0
RFS       IDLE                  0          0

PROCESS   STATUS        SEQUENCE#    THREAD#
--------- ------------ ---------- ----------
ARCH      CONNECTED             0          0
ARCH      CLOSING             226          1
ARCH      CONNECTED             0          0
ARCH      CONNECTED             0          0
RFS       IDLE                227          1
RFS       IDLE                  0          0

17 rows selected.

A gap exists from log sequence 219 to 221 of thread 1, and this is what is reported in the alert log of the standby database.

 

FAL[client]: Failed to request gap sequence
 GAP - thread 1 sequence 219-221
 DBID 307664432 branch 891722994
FAL[client]: All defined FAL servers have been attempted.

 

Now, let’s compare the checkpoint change# of each datafile on the primary database with that of the corresponding datafile on the standby, to understand which ones need recovery on the standby database.

 

On the primary:

 

SYS@srprim1 > select file#,checkpoint_change# from v$datafile;

     FILE# CHECKPOINT_CHANGE#
---------- ------------------
         1            3111435
         3            3111435
         4            3111435
         5            1739965
         6            3111435
         7            1739965
         8            3111435
        30            3050385
        31            3050385

9 rows selected.

On the standby database:

 

SYS@srpstb1 > select file#,checkpoint_change# from v$datafile;

     FILE# CHECKPOINT_CHANGE#
---------- ------------------
         1            3110480
         3            3110480
         4            3110480
         5            1739965
         6            3110480
         7            1739965
         8            3110480
        30            3050385
        31            3050385

9 rows selected.

From the above output, it’s clear that datafiles 1, 3, 4, 6 and 8 have a different checkpoint change# on the standby compared to the corresponding files on the primary database. This means the archive log sequences 219 to 221 contain the changes that need to be applied to these 5 datafiles (1, 3, 4, 6, 8).
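The same comparison can be scripted: save the two v$datafile query results to flat files (file# and checkpoint_change# per line) and diff them with awk. A hedged sketch using the values above; the temporary file names and the idea of spooling the queries are my own assumptions, not something the post does:

```shell
# Sample data: file# and checkpoint_change# as queried from the primary...
cat > /tmp/prim_scn.txt <<'EOF'
1 3111435
3 3111435
4 3111435
5 1739965
6 3111435
7 1739965
8 3111435
30 3050385
31 3050385
EOF

# ...and from the standby.
cat > /tmp/stby_scn.txt <<'EOF'
1 3110480
3 3110480
4 3110480
5 1739965
6 3110480
7 1739965
8 3110480
30 3050385
31 3050385
EOF

# Print the file# of every datafile whose checkpoint SCN differs,
# i.e. the datafiles that need recovery on the standby.
awk 'NR==FNR {prim[$1]=$2; next} prim[$1] != $2 {print $1}' \
    /tmp/prim_scn.txt /tmp/stby_scn.txt
# prints 1, 3, 4, 6 and 8, one per line
```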

 

To move on, cancel the recovery on the standby database.

 

SYS@srpstb1 > alter database recover managed standby database cancel;

Database altered.

Connect to the standby database through RMAN as “target” and issue the “recover database from service <primary net alias>” command.

 

This command recovers the target database (the standby) from the service name you specify (the primary database). You can also add the “using compressed backupset” clause so that RMAN takes compressed backupsets from the primary database while restoring and recovering the standby. RMAN takes a backup of each required datafile, transfers it over the network to the standby server, and restores the file.

 

Secondly, RMAN is clever enough to restore only those datafiles whose checkpoint change# on the standby differs from that on the primary. This is a great 12c feature: in prior versions, we had to manually take the incremental SCN backup of the entire primary database, manually ship the backups to the standby site, and then recover. Here, RMAN does it all in one single command.

 

From the output below, notice that RMAN skips datafiles 5, 7, 30 and 31, as they have the same checkpoint change# across the primary and standby sites, and restores only datafiles 1, 3, 4, 6 and 8.

 

[oracle@ora12cdr1 ~]$ rman target /

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Dec 2 13:17:00 2015

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: SRPRIM (DBID=307664432, not open)

RMAN> recover database from service srprim noredo using compressed backupset;

Starting recover at 02-DEC-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=53 instance=srpstb1 device type=DISK
skipping datafile 5; already restored to SCN 1739965
skipping datafile 7; already restored to SCN 1739965
skipping datafile 30; already restored to SCN 3050385
skipping datafile 31; already restored to SCN 3050385
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service srprim
destination for restore of datafile 00001: +DATA/SRPSTB/DATAFILE/system.270.893882851
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service srprim
destination for restore of datafile 00003: +DATA/SRPSTB/DATAFILE/sysaux.268.893882883
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service srprim
destination for restore of datafile 00004: +DATA/SRPSTB/DATAFILE/undotbs1.266.893882909
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service srprim
destination for restore of datafile 00006: +DATA/SRPSTB/DATAFILE/users.264.893882923
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service srprim
destination for restore of datafile 00008: +DATA/SRPSTB/DATAFILE/undotbs2.262.893882949
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07

Finished recover at 02-DEC-15

RMAN>

So now the datafiles have been restored and recovered, but remember that the controlfile also needs to be updated. Hence we need to restore the controlfile from the primary site.
To achieve this, stop the standby database and start one of its instances in NOMOUNT state.

 

[oracle@ora12cdr1 ~]$ srvctl stop database -db srpstb
[oracle@ora12cdr1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Wed Dec 2 13:21:26 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SYS@srpstb1 > startup nomount

ORACLE instance started.

Total System Global Area 1286066176 bytes
Fixed Size                  2287960 bytes
Variable Size             536872616 bytes
Database Buffers          738197504 bytes
Redo Buffers                8708096 bytes
SYS@srpstb1 > exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Advanced Analytics
and Real Application Testing options

Connect to the standby database through RMAN as target and restore the standby controlfile from the primary using the “service” clause.

 

[oracle@ora12cdr1 ~]$ rman target /

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Dec 2 13:21:40 2015

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: SRPRIM (not mounted)

RMAN> restore standby controlfile from service srprim;

Starting restore at 02-DEC-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=36 instance=srpstb1 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service srprim
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
output file name=+DATA/SRPSTB/CONTROLFILE/current.267.893882827
output file name=+FRA1/SRPSTB/CONTROLFILE/current.258.893882827
Finished restore at 02-DEC-15

 

 

Shut down the standby instance that was started earlier for the controlfile restore, and start the complete standby database in MOUNT state.

 

[oracle@ora12cdr1 ~]$ srvctl start database -db srpstb
[oracle@ora12cdr1 ~]$ srvctl status database -db srpstb -v -f
Instance srpstb1 is running on node ora12cdr1. Instance status: Mounted (Closed).
Instance srpstb2 is running on node ora12cdr2. Instance status: Mounted (Closed).

 

If the datafile locations differ between the primary and standby databases, you may have to follow this step. Since we restored the controlfile from the primary database, the datafile locations recorded in it will still point to the locations of the datafiles on the primary site.

 

To overcome this, you need to catalog the standby datafile locations.

 

Connect to the standby database through RMAN and catalog the location of its datafiles. In my environment, the datafiles of the standby are placed under “+DATA/SRPSTB”.

 

RMAN> catalog start with '+DATA/SRPSTB/';

searching for all files that match the pattern +DATA/SRPSTB/

List of Files Unknown to the Database
=====================================
File Name: +DATA/SRPSTB/spfilesrpstb.ora
File Name: +DATA/SRPSTB/orapwsrpstb
File Name: +DATA/SRPSTB/243D4D0CCA302D9BE0537200A8C03F95/DATAFILE/SYSTEM.315.895493381
File Name: +DATA/SRPSTB/243D4D0CCA302D9BE0537200A8C03F95/DATAFILE/SYSAUX.316.895493429
File Name: +DATA/SRPSTB/241D296020C9234AE0537200A8C02179/DATAFILE/SYSTEM.309.895350841
File Name: +DATA/SRPSTB/241D296020C9234AE0537200A8C02179/DATAFILE/SYSAUX.310.895350855
File Name: +DATA/SRPSTB/23F4F3269D3E2C5FE0537200A8C07E9C/DATAFILE/SYSTEM.286.895175737
File Name: +DATA/SRPSTB/TEMPFILE/TEMP.283.895009995
File Name: +DATA/SRPSTB/DATAGUARDCONFIG/srpstb.282.894924201
File Name: +DATA/SRPSTB/241D296020C8234AE0537200A8C02179/DATAFILE/SYSTEM.307.895347841
File Name: +DATA/SRPSTB/241D296020C8234AE0537200A8C02179/DATAFILE/SYSAUX.308.895348445
File Name: +DATA/SRPSTB/ONLINELOG/group_1.274.893883017
File Name: +DATA/SRPSTB/ONLINELOG/group_2.273.893883019
File Name: +DATA/SRPSTB/ONLINELOG/group_3.272.893883019
File Name: +DATA/SRPSTB/ONLINELOG/group_4.271.893883021
File Name: +DATA/SRPSTB/ONLINELOG/group_5.257.893886219
File Name: +DATA/SRPSTB/ONLINELOG/group_6.275.893886225
File Name: +DATA/SRPSTB/ONLINELOG/group_7.276.893886227
File Name: +DATA/SRPSTB/ONLINELOG/group_8.277.893886233
File Name: +DATA/SRPSTB/ONLINELOG/group_9.278.893886235
File Name: +DATA/SRPSTB/ONLINELOG/group_10.279.893886237
File Name: +DATA/SRPSTB/ONLINELOG/group_5.287.895259785
File Name: +DATA/SRPSTB/ONLINELOG/group_7.288.895259787
File Name: +DATA/SRPSTB/ONLINELOG/group_6.289.895259795
File Name: +DATA/SRPSTB/ONLINELOG/group_10.290.895259837
File Name: +DATA/SRPSTB/ONLINELOG/group_8.291.895259837
File Name: +DATA/SRPSTB/ONLINELOG/group_9.292.895259841
File Name: +DATA/SRPSTB/ONLINELOG/group_1.293.895260099
File Name: +DATA/SRPSTB/ONLINELOG/group_2.294.895260103
File Name: +DATA/SRPSTB/ONLINELOG/group_3.295.895260103
File Name: +DATA/SRPSTB/ONLINELOG/group_4.296.895260107
File Name: +DATA/SRPSTB/ONLINELOG/group_1.297.895261173
File Name: +DATA/SRPSTB/ONLINELOG/group_2.298.895261175
File Name: +DATA/SRPSTB/ONLINELOG/group_3.299.895261179
File Name: +DATA/SRPSTB/ONLINELOG/group_4.300.895261181
File Name: +DATA/SRPSTB/ONLINELOG/group_7.301.895261323
File Name: +DATA/SRPSTB/ONLINELOG/group_5.302.895261323
File Name: +DATA/SRPSTB/ONLINELOG/group_6.303.895261329
File Name: +DATA/SRPSTB/ONLINELOG/group_8.304.895261343
File Name: +DATA/SRPSTB/ONLINELOG/group_10.305.895261343
File Name: +DATA/SRPSTB/ONLINELOG/group_9.306.895261345
File Name: +DATA/SRPSTB/ONLINELOG/group_7.261.895492899
File Name: +DATA/SRPSTB/ONLINELOG/group_5.260.895492899
File Name: +DATA/SRPSTB/ONLINELOG/group_10.259.895492907
File Name: +DATA/SRPSTB/ONLINELOG/group_6.258.895492907
File Name: +DATA/SRPSTB/ONLINELOG/group_8.313.895492911
File Name: +DATA/SRPSTB/ONLINELOG/group_9.314.895492915
File Name: +DATA/SRPSTB/ONLINELOG/group_1.317.895493507
File Name: +DATA/SRPSTB/ONLINELOG/group_2.318.895493509
File Name: +DATA/SRPSTB/ONLINELOG/group_3.319.895493513
File Name: +DATA/SRPSTB/ONLINELOG/group_4.320.895493515
File Name: +DATA/SRPSTB/ONLINELOG/group_10.312.895515035
File Name: +DATA/SRPSTB/ONLINELOG/group_8.311.895496139
File Name: +DATA/SRPSTB/ONLINELOG/group_9.321.895496147
File Name: +DATA/SRPSTB/ONLINELOG/group_5.322.895496149
File Name: +DATA/SRPSTB/ONLINELOG/group_6.323.895496153
File Name: +DATA/SRPSTB/ONLINELOG/group_7.324.895496157
File Name: +DATA/SRPSTB/ONLINELOG/group_1.325.895496313
File Name: +DATA/SRPSTB/ONLINELOG/group_2.326.895496317
File Name: +DATA/SRPSTB/ONLINELOG/group_3.327.895496321
File Name: +DATA/SRPSTB/ONLINELOG/group_4.328.895496325
File Name: +DATA/SRPSTB/20E576F1A6FE02E0E0537200A8C01581/TEMPFILE/TEMP.285.895010041
File Name: +DATA/SRPSTB/20E55957E61A7D6CE0537200A8C09297/TEMPFILE/TEMP.284.895009997
File Name: +DATA/SRPSTB/20E55957E61A7D6CE0537200A8C09297/DATAFILE/SYSTEM.265.893882915
File Name: +DATA/SRPSTB/20E55957E61A7D6CE0537200A8C09297/DATAFILE/SYSAUX.263.893882923
File Name: +DATA/SRPSTB/DATAFILE/SYSTEM.270.893882851
File Name: +DATA/SRPSTB/DATAFILE/SYSAUX.268.893882883
File Name: +DATA/SRPSTB/DATAFILE/UNDOTBS1.266.893882909
File Name: +DATA/SRPSTB/DATAFILE/USERS.264.893882923
File Name: +DATA/SRPSTB/DATAFILE/UNDOTBS2.262.893882949
File Name: +DATA/SRPSTB/CONTROLFILE/Current.269.893882795

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: +DATA/SRPSTB/243D4D0CCA302D9BE0537200A8C03F95/DATAFILE/SYSTEM.315.895493381
File Name: +DATA/SRPSTB/243D4D0CCA302D9BE0537200A8C03F95/DATAFILE/SYSAUX.316.895493429
File Name: +DATA/SRPSTB/241D296020C9234AE0537200A8C02179/DATAFILE/SYSTEM.309.895350841
File Name: +DATA/SRPSTB/241D296020C9234AE0537200A8C02179/DATAFILE/SYSAUX.310.895350855
File Name: +DATA/SRPSTB/23F4F3269D3E2C5FE0537200A8C07E9C/DATAFILE/SYSTEM.286.895175737
File Name: +DATA/SRPSTB/TEMPFILE/TEMP.283.895009995
File Name: +DATA/SRPSTB/241D296020C8234AE0537200A8C02179/DATAFILE/SYSTEM.307.895347841
File Name: +DATA/SRPSTB/241D296020C8234AE0537200A8C02179/DATAFILE/SYSAUX.308.895348445
File Name: +DATA/SRPSTB/20E576F1A6FE02E0E0537200A8C01581/TEMPFILE/TEMP.285.895010041
File Name: +DATA/SRPSTB/20E55957E61A7D6CE0537200A8C09297/TEMPFILE/TEMP.284.895009997
File Name: +DATA/SRPSTB/20E55957E61A7D6CE0537200A8C09297/DATAFILE/SYSTEM.265.893882915
File Name: +DATA/SRPSTB/20E55957E61A7D6CE0537200A8C09297/DATAFILE/SYSAUX.263.893882923
File Name: +DATA/SRPSTB/DATAFILE/SYSTEM.270.893882851
File Name: +DATA/SRPSTB/DATAFILE/SYSAUX.268.893882883
File Name: +DATA/SRPSTB/DATAFILE/UNDOTBS1.266.893882909
File Name: +DATA/SRPSTB/DATAFILE/USERS.264.893882923
File Name: +DATA/SRPSTB/DATAFILE/UNDOTBS2.262.893882949

Switch the standby database to the copy.

 

RMAN> switch database to copy;

datafile 1 switched to datafile copy "+DATA/SRPSTB/DATAFILE/system.270.893882851"
datafile 3 switched to datafile copy "+DATA/SRPSTB/DATAFILE/sysaux.268.893882883"
datafile 4 switched to datafile copy "+DATA/SRPSTB/DATAFILE/undotbs1.266.893882909"
datafile 5 switched to datafile copy "+DATA/SRPSTB/20E55957E61A7D6CE0537200A8C09297/DATAFILE/system.265.893882915"
datafile 6 switched to datafile copy "+DATA/SRPSTB/DATAFILE/users.264.893882923"
datafile 7 switched to datafile copy "+DATA/SRPSTB/20E55957E61A7D6CE0537200A8C09297/DATAFILE/sysaux.263.893882923"
datafile 8 switched to datafile copy "+DATA/SRPSTB/DATAFILE/undotbs2.262.893882949"
datafile 30 switched to datafile copy "+DATA/SRPSTB/243D4D0CCA302D9BE0537200A8C03F95/DATAFILE/system.315.895493381"
datafile 31 switched to datafile copy "+DATA/SRPSTB/243D4D0CCA302D9BE0537200A8C03F95/DATAFILE/sysaux.316.895493429"

Now that we are done, check the progress of the recovery on the standby database.
It could be seen that now MRP is applying log sequence 180 of thread 2.

 

SYS@srpstb1 > select process,status,sequence#,thread# from gv$managed_standby;

PROCESS   STATUS        SEQUENCE#    THREAD#
--------- ------------ ---------- ----------
ARCH      CLOSING             229          1
ARCH      CONNECTED             0          0
ARCH      CLOSING             227          1
ARCH      CLOSING             228          1
RFS       CLOSING             230          1
RFS       IDLE                  0          0
RFS       IDLE                  0          0
RFS       IDLE                  0          0
MRP0      APPLYING_LOG        180          2
ARCH      CONNECTED             0          0
ARCH      CONNECTED             0          0

PROCESS   STATUS        SEQUENCE#    THREAD#
--------- ------------ ---------- ----------
ARCH      CONNECTED             0          0
ARCH      CONNECTED             0          0
RFS       IDLE                180          2
RFS       IDLE                  0          0

15 rows selected.

Let’s check the sync status of the standby database with the primary database.
On the primary:

 

SYS@srprim1 > select thread#,max(sequence#) from v$archived_log group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1            231
         2            180

On the standby:

 

		 
SYS@srpstb1 > select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1            230
         2            180

It’s clear that the standby is in sync with the primary and we are good to go!
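As a cross-check, the transport and apply lag can also be read from v$dataguard_stats on the standby. This is only a sketch; the view is populated only while Redo Apply is running, and the values depend on your configuration.

SYS@srpstb1 > select name, value from v$dataguard_stats where name in ('transport lag','apply lag');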

 

 

 

 

COPYRIGHT

© Shivananda Rao P, 2012, 2013, 2014, 2015. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Shivananda Rao and http://www.shivanandarao-oracle.com with appropriate and specific direction to the original content.

 

 

 

DISCLAIMER

The views expressed here are my own and do not necessarily reflect the views of any other individual, business entity, or organization. The views expressed by visitors on this blog are theirs solely and may not reflect mine.

 

December 7, 2015 / Shivananda Rao P

Dataguard Switchover in 12c

Switchover is a two-way role transition in Oracle Data Guard: the current primary database takes on the role of a physical standby, while the current physical standby takes on the role of the primary. Switchover in 12c works the same way, but new features have been introduced and the switchover commands have changed considerably.

 

Oracle has introduced the “verify” functionality, which does not perform the switchover but verifies whether a switchover to the target standby database can be performed.

 

It mainly checks:

1. Redo shipping from the primary to the standby database.
2. Whether the Managed Recovery Process (MRP) is running on the standby.
3. Whether the standby database is in sync with the primary.

 

 
The syntax is “alter database switchover to <target_standby_database_unique_name> verify;”, where “target_standby_database_unique_name” is the DB_UNIQUE_NAME of the target standby database to which the switchover is to be performed.
As an example, below is how the “verify” functionality checks and reports an error.

 

SYS @ oraprim > alter database switchover to orastb verify;
alter database switchover to orastb verify
*
ERROR at line 1:
ORA-16470: Redo Apply is not running on switchover target

 

This error clearly states that Redo Apply (MRP) is not running on the standby database. A copy of the error message is also written to the alert log, so the necessary action can be taken.
Another example:

 

SYS @ oraprim > alter database switchover to orastb verify;
alter database switchover to orastb verify
*
ERROR at line 1:
ORA-16467: switchover target is not synchronized

 

Here, the error message indicates that the target standby database is not in sync with the primary, so the switchover cannot take place.
The necessary action has to be taken to get the standby in sync with the primary before moving on with the switchover.
Now, let’s move on with the demonstration of the switchover process in 12c.
Environment:

 

Primary Database Name  :  oraprim
Standby Database Name  :  orastb
Database Version       :  12.1.0.1

 

Primary Database Details:

 

SYS @ oraprim > select status,instance_name,database_role from v$database,v$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE
------------ ---------------- ----------------
OPEN         oraprim          PRIMARY

 

Physical standby database details:

 

 

SYS @ orastb > select status,instance_name,database_role from v$database,v$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE
------------ ---------------- ----------------
MOUNTED      orastb           PHYSICAL STANDBY

 

Now, let’s verify if the switchover of “oraprim” to “orastb” is possible or not using the “verify” functionality.

 

 

SYS @ oraprim > alter database switchover to orastb verify;

Database altered.

 

There has been no error message reported which signifies that we are good to go with the switchover.
The alert log on the primary database would read out the message something as below.

 

Sat Oct 31 11:49:22 2015
alter database switchover to orastb verify
Sat Oct 31 11:49:22 2015
SWITCHOVER VERIFY: Send VERIFY request to switchover target ORASTB
SWITCHOVER VERIFY COMPLETE
Completed: alter database switchover to orastb verify

 

Now, perform the actual switchover to the standby. With earlier database versions (until 11.2), the switchover from the primary to the physical standby required “alter database commit to switchover to physical standby with session shutdown;”, and then, on the standby server, converting the physical standby to primary required “alter database commit to switchover to primary with session shutdown;”. With 12c, a single command does everything for us.

 

“Alter database switchover to <target_standby_db_unique_name>;” first switches the primary to the standby role and then, on the standby server, switches the physical standby database to the primary role. So, just a single command needs to be fired and Oracle does the rest.
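To summarize the difference, the pre-12c two-step sequence versus the 12c single command looks as below (a sketch, using this post’s database names):

-- Until 11.2: two commands, one on each site
-- On the primary:
alter database commit to switchover to physical standby with session shutdown;
-- On the standby:
alter database commit to switchover to primary with session shutdown;

-- From 12c: one command, run on the primary, performs both role transitions
alter database switchover to orastb;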
On the primary database server, I run the switchover command.

 

 

SYS @ oraprim > alter database switchover to orastb;

Database altered.

 

This is how the alert log of the primary database would contain the information about the switchover.
The EOR (End Of Redo) is generated at sequence 293.

 

 

Sat Oct 31 15:53:09 2015
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY [Process Id: 2612] (oraprim)
Sat Oct 31 15:53:09 2015
Thread 1 cannot allocate new log, sequence 294
Checkpoint not complete
  Current log# 3 seq# 293 mem# 0: /u01/app/oracle/oradata/oraprim/redo03.log
Sat Oct 31 15:53:09 2015
Waiting for target standby to receive all redo
Sat Oct 31 15:53:09 2015
Waiting for all non-current ORLs to be archived...
Sat Oct 31 15:53:09 2015
All non-current ORLs have been archived.
Sat Oct 31 15:53:09 2015
Waiting for all FAL entries to be archived...
Sat Oct 31 15:53:09 2015
All FAL entries have been archived.
Sat Oct 31 15:53:09 2015
Waiting for dest_id 3 to become synchronized...
Sat Oct 31 15:53:09 2015
Active, synchronized Physical Standby switchover target has been identified
Preventing updates and queries at the Primary
Generating and shipping final logs to target standby
Switchover End-Of-Redo Log thread 1 sequence 293 has been fixed
Switchover: Primary highest seen SCN set to 0x0.0x275a24
ARCH: Noswitch archival of thread 1, sequence 293
ARCH: End-Of-Redo Branch archival of thread 1 sequence 293
ARCH: LGWR is actively archiving destination LOG_ARCHIVE_DEST_3
ARCH: Standby redo logfile selected for thread 1 sequence 293 for destination LOG_ARCHIVE_DEST_3
ARCH: Archiving is disabled due to current logfile archival

 

The alert log on the standby database server would have the following information written down.
If you go through it carefully, the EOR is received at sequence 293, then Oracle internally runs the command “Alter database commit to switchover to primary”, and finally the role transition is done.

 

RFS[3]: Selected log 4 for thread 1 sequence 293 dbid 4209209247 branch 890507875
Sat Oct 31 15:53:08 2015
Resetting standby activation ID 4209242527 (0xfae3f19f)
Sat Oct 31 15:53:08 2015
Media Recovery End-Of-Redo indicator encountered
Sat Oct 31 15:53:08 2015
Media Recovery Continuing
Media Recovery Waiting for thread 1 sequence 294
Sat Oct 31 15:53:08 2015
Archived Log entry 197 added for thread 1 sequence 293 ID 0xfae3f19f dest 1:
Sat Oct 31 15:53:09 2015
SWITCHOVER: received request 'ALTER DTABASE COMMIT TO SWITCHOVER  TO PRIMARY' from primary database.
Sat Oct 31 15:53:09 2015
ALTER DATABASE SWITCHOVER TO PRIMARY (orastb)
Maximum wait for role transition is 15 minutes.
Switchover: Media recovery is still active
Role Change: Canceling MRP - no more redo to apply
Sat Oct 31 15:53:11 2015
MRP0: Background Media Recovery cancelled with status 16037
Sat Oct 31 15:53:11 2015
Errors in file /u01/app/oracle/diag/rdbms/orastb/orastb/trace/orastb_mrp0_2605.trc:
ORA-16037: user requested cancel of managed recovery operation
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Sat Oct 31 15:53:11 2015
MRP0: Background Media Recovery cancelled with status 16037
Sat Oct 31 15:53:11 2015
Errors in file /u01/app/oracle/diag/rdbms/orastb/orastb/trace/orastb_mrp0_2605.trc:
ORA-16037: user requested cancel of managed recovery operation
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Sat Oct 31 15:53:11 2015
MRP0: Background Media Recovery process shutdown (orastb)
Sat Oct 31 15:53:11 2015
Role Change: Canceled MRP
Killing 2 processes (PIDS:2595,2593) (all RFS) in order to disallow current and future RFS connections. Requested by OS process 2627
Backup controlfile written to trace file /u01/app/oracle/diag/rdbms/orastb/orastb/trace/orastb_rmi_2627.trc
SwitchOver after complete recovery through change 2578980
Online logfile pre-clearing operation disabled by switchover
Online log /u01/app/oracle/oradata/orastb/redo01.log: Thread 1 Group 1 was previously cleared
Online log /u01/app/oracle/oradata/orastb/redo02.log: Thread 1 Group 2 was previously cleared
Online log /u01/app/oracle/oradata/orastb/redo03.log: Thread 1 Group 3 was previously cleared
Standby became primary SCN: 2578978
Switchover: Complete - Database mounted as primary
SWITCHOVER: completed request from primary database.
Sat Oct 31 15:57:27 2015

 

Let’s check the status of the databases. The former standby database (orastb) is now behaving as a primary database.

SYS @ orastb > select status,instance_name,database_role from v$database,v$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE
------------ ---------------- ----------------
MOUNTED      orastb           PRIMARY

SYS @ orastb > alter database open;

Database altered.

 

On the other hand, the former primary database (oraprim) is behaving as a physical standby database.

 

SYS @ oraprim > select status,instance_name,database_role from v$database,v$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE
------------ ---------------- ----------------
MOUNTED      oraprim          PHYSICAL STANDBY

 

Let’s start the MRP on the new standby database.

 

SYS @ oraprim > alter database recover managed standby database disconnect;

Database altered.

SYS @ oraprim > select process,status,sequence# from v$managed_standby;

PROCESS   STATUS        SEQUENCE#
--------- ------------ ----------
ARCH      CONNECTED             0
ARCH      CONNECTED             0
ARCH      CLOSING             295
ARCH      CLOSING             294
RFS       IDLE                  0
RFS       IDLE                296
RFS       IDLE                  0
MRP0      APPLYING_LOG        296

8 rows selected.

It could be seen that Managed Recovery Process is now running on the new standby database “oraprim” with sequence 296 being applied.

 

Sync status: On the Primary database:

 

SYS @ orastb > select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)
--------------
           295

On the Physical Standby Database:

 

SYS @ oraprim > select max(sequence#) from v$archived_log where applied='YES';

MAX(SEQUENCE#)
--------------
           295

After the switchover, the PDBs of both databases are left in the closed (MOUNTED) state and have to be opened manually.
On the new primary, we can see that all the PDBs are closed.

 

SYS @ orastb > select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
TESTPDB1                       MOUNTED
TESTPDB3                       MOUNTED

On the new standby as well, they are in closed state.

 

SYS @ oraprim > select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       MOUNTED
TESTPDB1                       MOUNTED
TESTPDB3                       MOUNTED
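On the new primary, the PDBs can be opened with a single statement instead of one by one; on the new standby they would open read only, and only once the standby database itself is open. A sketch using this post’s instance name:

SYS@orastb > alter pluggable database all open;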

On a note, the physical standby database need not be in MOUNT state for the switchover to be performed. If Active Data Guard is in place and the standby is OPEN READ ONLY WITH APPLY, with its PDBs in READ ONLY mode, it’s still possible to perform the switchover; it is not necessary to have them in MOUNT state.

 

 

 

 


November 28, 2015 / Shivananda Rao P

Fast-Start Failover in Oracle 11g

Fast-Start Failover (FSFO) is a feature available through the Oracle Data Guard broker which performs an automatic failover to the chosen standby database in case of a primary database crash.

 

FSFO involves an OBSERVER component which runs on a machine different from those of the primary and the standby database. The observer hardly consumes any resources and requires only an Oracle client or database software installed on its site, plus TNS connectivity to the primary and the standby database. As the name suggests, the observer monitors the availability of the primary database and initiates a fast-start failover when the primary database loses connectivity with both the observer and the chosen standby target.

 

Before we move with configuring FSFO, there are some required FSFO properties that we need to know of. These include:

 

FastStartFailoverLagLimit: With FSFO, only the MaxAvailability or MaxPerformance protection mode can be used. In MaxPerformance mode, this property specifies the maximum amount of data loss that is permissible, in SECONDS; the default value is 30 seconds. In MaxAvailability mode, FSFO ensures that there is no data loss.

 

FastStartFailoverThreshold: This property specifies the number of seconds that the observer and the target standby database will wait before initiating the failover. By default, the value of this property is 30 seconds.

 

FastStartFailoverPmyShutdown: If this property is set to TRUE and the primary database has stalled for more than FastStartFailoverThreshold seconds, the primary database shuts itself down.

 

FastStartFailoverTarget: This property specifies the db_unique_name of the database which would be the target database of the fast start failover.

 

FastStartFailoverAutoReinstate: If set to true, then the former primary database will be reinstated after the fast-start failover.
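All of the properties above are set through DGMGRL EDIT commands. Below is a sketch; the values shown are illustrative and are not necessarily the ones used later in this post.

DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverLagLimit = 30;
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 60;
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverPmyShutdown = 'TRUE';
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverAutoReinstate = 'TRUE';
DGMGRL> EDIT DATABASE srprim SET PROPERTY FastStartFailoverTarget = 'srpstb';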

 

On another note, Fast-Start-Failover has a few restrictions:

 

1. You cannot change the protection mode of the Data Guard broker configuration, nor can you change the log shipping mode (LogXptMode) of the primary and the target standby database.

 

2. You cannot perform a switchover or a failover to a standby database which is not the “fast-start-failover target”.

 

3. The broker configuration cannot be removed while FSFO is enabled, nor can the target standby database be deleted.

 

4. FSFO will not occur if the primary database was shut down cleanly (without the “abort” option) or if the observer is not running.

 

Now let’s move on with the steps involved in enabling FSFO. This post assumes that a broker configuration already exists; its creation is not outlined here.

The steps required to configure the Data Guard broker can be referred to here: https://shivanandarao-oracle.com/2013/07/10/data-guard-broker-configuration/

The configuration name used here is “dgtest” and uses the MaxAvailability mode.
Environment:

Primary Site:


DB Name            : srprim
DB Unique Name     : srprim
Connect Identifier : srprim
Hostname           : ora1-1

Standby Site:

DB Name            : srprim
DB Unique Name     : srpstb
Connect Identifier : srpstb
Hostname           : ora1-2

Observer Site:

Hostname : ora1-3

The configuration is enabled and its status is as below.

 

DGMGRL> show configuration;

Configuration - dgtest

  Protection Mode: MaxAvailability
  Databases:
    srprim - Primary database
    srpstb - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Make sure that both the primary and the standby databases have the Flashback and FRA features enabled, which is a prerequisite for FSFO.
Now, let’s try to enable FSFO.

 

DGMGRL> ENABLE FAST_START FAILOVER;
Error: ORA-16651: requirements not met for enabling fast-start failover

Failed.

This, of course, fails. The reason: certain prerequisite properties mentioned earlier need to be set.
Set the FastStartFailoverTarget to the target database (srpstb) for the primary database (srprim).

 

DGMGRL> edit database srprim set property 'FastStartFailoverTarget'='srpstb';
Property "FastStartFailoverTarget" updated
DGMGRL>

 

Similarly, set the FastStartFailoverTarget to the target database (srprim) for the database srpstb. This will be used if srpstb starts behaving as a primary database and srprim as a standby after the role transition.

 

DGMGRL> edit database srpstb set property 'FastStartFailoverTarget'='srprim';
Property "FastStartFailoverTarget" updated
DGMGRL>

Verify the property set above.

 

DGMGRL> show database srprim 'FastStartFailoverTarget';
  FastStartFailoverTarget = 'srpstb'

Now set the FastStartFailoverThreshold property to 60 seconds, which is the time in seconds that the observer will wait before initiating the failover.

 

  
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 60;
Property "faststartfailoverthreshold" updated

The observer is a continuous process and does not return the prompt at the DGMGRL session until you stop the observer from another DGMGRL session.
Due to this, it’s preferred to run the observer in the background using “nohup”.
To run the observer in the background, connect to the broker configuration from the observer site and run the “start observer” command.

 

[oracle@ora1-3 ~]$ nohup dgmgrl sys/oracle@srprim "start observer" &

If you prefer to run it in the foreground, connect to the broker configuration from the observer site and run the “start observer” command to start the observer.

 

[oracle@ora1-3 ~]$ dgmgrl sys/oracle@srprim
DGMGRL for Linux: Version 11.2.0.2.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> start observer;
Observer started

Now that the observer is running, let’s enable FSFO. To do so, connect to the broker configuration and execute the “enable fast_start failover” command.

 

DGMGRL> enable fast_start failover;
Enabled.
DGMGRL>
DGMGRL> show configuration;

Configuration - dgtest

  Protection Mode: MaxAvailability
  Databases:
    srprim - Primary database
    srpstb - (*) Physical standby database

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS

It’s clear from the above outcome that FSFO is enabled.
Let’s get the details of the properties set for FSFO.

 

DGMGRL> show fast_start failover;

Fast-Start Failover: ENABLED

  Threshold:        60 seconds
  Target:           srpstb
  Observer:         ora1-3.mydomain
  Lag Limit:        30 seconds (not in use)
  Shutdown Primary: TRUE
  Auto-reinstate:   TRUE

Configurable Failover Conditions
  Health Conditions:
    Corrupted Controlfile          YES
    Corrupted Dictionary           YES
    Inaccessible Logfile            NO
    Stuck Archiver                  NO
    Datafile Offline               YES

  Oracle Error Conditions:
    (none)

It can be noticed that FastStartFailoverThreshold is set to 60 seconds, FastStartFailoverTarget is SRPSTB, the observer is running on host “ora1-3”, FastStartFailoverLagLimit has the default value (30 seconds), which is not in use in this configuration since the protection mode is MaxAvailability, and, most importantly, FastStartFailoverPmyShutdown and FastStartFailoverAutoReinstate are set to TRUE.

 

FSFO can also be triggered by certain additional (optional) conditions. No conditions have been specified from my end, so the values you see above are the default ones.

 

FSFO will occur if the controlfile is corrupted, the dictionary is corrupted, or a datafile goes offline due to a write error. In addition, you can configure other conditions such as Stuck Archiver (FSFO occurs if the archiver process is unable to archive the redo due to a write error or the disk being full) and Inaccessible Logfile (LGWR is unable to write to the redo logs due to a write error).
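These optional conditions are managed with the ENABLE/DISABLE FAST_START FAILOVER CONDITION commands in DGMGRL. A sketch, assuming the condition names shown in the listing above:

DGMGRL> ENABLE FAST_START FAILOVER CONDITION "Inaccessible Logfile";
DGMGRL> ENABLE FAST_START FAILOVER CONDITION "Stuck Archiver";
DGMGRL> SHOW FAST_START FAILOVER;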

Query the primary database to check the FSFO and the observer status.

 

SQL> select  FS_FAILOVER_STATUS,FS_FAILOVER_OBSERVER_PRESENT   from v$database;

FS_FAILOVER_STATUS     FS_FAIL
---------------------- -------
SYNCHRONIZED           YES

To simulate a fast-start failover, I crash the primary database by shutting it down with the “abort” clause so that it loses connectivity with the observer and the standby database.

 

SQL> shut abort
ORACLE instance shut down.

Now, on checking the status of the configuration, Oracle throws an error message that the primary database is unavailable and its status cannot be determined.

 

DGMGRL> show configuration;

Configuration - dgtest

  Protection Mode: MaxAvailability
  Databases:
    srprim - Primary database
    srpstb - (*) Physical standby database

Fast-Start Failover: ENABLED

Configuration Status:
ORA-01034: ORACLE not available
ORA-16625: cannot reach database "srprim"
DGM-17017: unable to determine configuration status

The observer does not trigger the failover immediately, as it has to wait for FastStartFailoverThreshold seconds.
After a few seconds of waiting, check the configuration status again. FSFO is in progress, as seen below.

 

  
DGMGRL> show configuration;

Configuration - dgtest

  Protection Mode: MaxAvailability
  Databases:
    srpstb - Primary database
    srprim - (*) Physical standby database (disabled)

Fast-Start Failover: ENABLED

Configuration Status:
ORA-16610: command "FAILOVER TO srpstb" in progress
DGM-17017: unable to determine configuration status

It’s good to see below that the fast-start failover has occurred and “srpstb” is now the primary database.

 

DGMGRL> show configuration;

Configuration - dgtest

  Protection Mode: MaxAvailability
  Databases:
    srpstb - Primary database
      Warning: ORA-16817: unsynchronized fast-start failover configuration

    srprim - (*) Physical standby database (disabled)
      ORA-16661: the standby database needs to be reinstated

Fast-Start Failover: ENABLED

Configuration Status:
WARNING

Reinstating Former Primary database as New Standby database:

 

Now that the failover has occurred, I’d like to reinstate the former primary database “srprim”. Since FastStartFailoverAutoReinstate was set to TRUE, the observer will reinstate the former primary database automatically once it is brought back up.
The portion below, however, is a snippet of reinstating a former primary database as a new standby database for the case where FastStartFailoverAutoReinstate is set to FALSE.
Connect to the former primary database “srprim” and mount it.

 

[oracle@ora1-1 ~]$ sqlplus sys/oracle@srprim as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Thu Oct 15 19:30:46 2015

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup mount
ORACLE instance started.

Total System Global Area  943669248 bytes
Fixed Size                  2232128 bytes
Variable Size             641728704 bytes
Database Buffers          293601280 bytes
Redo Buffers                6107136 bytes
Database mounted.

Now connect to the broker configuration via the new primary database and execute “reinstate database '<former primary database>';”.
In my case it is: “reinstate database 'srprim';”

 

[oracle@ora1-2 ~]$ dgmgrl sys/oracle@srpstb
DGMGRL for Linux: Version 11.2.0.2.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> reinstate database 'srprim';
Reinstating database "srprim", please wait...
Reinstatement of database "srprim" succeeded
DGMGRL>

Let’s check the configuration.

 

DGMGRL> show configuration;

Configuration - dgtest

  Protection Mode: MaxAvailability
  Databases:
    srpstb - Primary database
    srprim - (*) Physical standby database

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS

DGMGRL>

Good to see that srprim has been converted to the new standby database.

 

 

 


November 12, 2015 / Shivananda Rao P

Cascading Standby Database in 11g

This article demonstrates the steps involved in configuring a cascaded physical standby database. A cascading standby environment is one wherein a standby receives redo data directly from the primary database and then transmits it to other, cascaded standby databases.

 

This method offloads the primary from transmitting redo to multiple standbys; instead, it transmits only to the single cascading standby, which in turn cascades the redo to multiple standbys. The redo received by the cascading physical standby database can be cascaded to up to 30 physical, logical, or snapshot standbys.

 

Having covered the advantages of cascaded standby databases, they also come with certain restrictions:

 

1. A cascading environment cannot be used if the primary database is a RAC configuration. (This restriction has been removed from 11.2.0.2)

 

2. If dataguard broker is configured, then cascading redo to other standbys is not allowed.

 

3. A logical standby database or a snapshot standby database cannot act as a Cascading standby database. In other words, a logical or a snapshot standby cannot retransmit the redo data to a cascaded standby database.

 

4. Since the redo data on a cascaded standby is received from the cascading standby database and not directly from the primary database, there is always a lag at the cascaded standby. The redo is received only when the current log on the cascading standby is archived.

 

To minimize this lag, configure standby redo logs when implementing cascaded standbys so that the Real-Time Apply feature can be used.
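As a sketch of how standby redo logs are typically added: a common rule of thumb is one more standby redo log group than the online redo log groups, each the same size as an online redo log. The group numbers and the 50M size below are illustrative assumptions, not values taken from this environment:

```sql
-- Rule of thumb: (number of ORL groups + 1) SRL groups, each the same
-- size as the online redo logs. Group numbers 4-6 and the 50M size are
-- assumptions for illustration only.
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;
```

Run these on the primary and on each standby so that every database can receive redo in real time regardless of its current role.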

 

Let’s move on to the steps involved in achieving this. The primary database used here is a standalone database with OMF-managed datafiles on ASM, and the cascading and cascaded standbys are likewise standalone databases with OMF-managed datafiles on ASM. This post also assumes that a physical standby, which will act as the cascading standby, is already set up for the primary, and that standby redo logs are configured on both the primary and the cascading standby; their creation is not outlined here.

 

Environment:

Primary Site:

DB Name        : srprim
DB Unique name : srprim
Hostname       : ora1-1

 

Cascading Site:

 

DB Name        : srprim
DB Unique Name : srpstb
Hostname       : ora1-2

 

Cascaded Site:

DB Name        : srprim
DB Unique Name : srcstb
Hostname       : ora1-3

 

The configuration uses the “Flash Recovery Area” on all three databases (primary, cascading and cascaded standby). Graphically, the cascaded environment looks as below:

SRPRIM ----> SRPSTB ----> SRCSTB

 


 

Create a pfile for the cascaded database “srcstb” with all the required parameters. Below is how the PFILE for the “srcstb” instance looks.

 

[oracle@ora1-3 ~]$ cat /u02/initsrcstb.ora
srcstb.__db_cache_size=381681664
srcstb.__java_pool_size=4194304
srcstb.__large_pool_size=4194304
srcstb.__pga_aggregate_target=381681664
srcstb.__sga_target=566231040
srcstb.__shared_io_pool_size=0
srcstb.__shared_pool_size=159383552
srcstb.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/srcstb/adump'
*.audit_trail='DB'
*.compatible='11.2.0.0.0'
*.control_files='+DATA1','+FRA1'
*.core_dump_dest='/u01/app/oracle/diag/rdbms/srcstb/srpstb/cdump'
*.db_block_size=8192
*.db_create_file_dest='+DATA1'
*.db_domain=''
*.db_file_name_convert='+DATA_NEW','+DATA1'
*.db_name='srprim'
*.db_recovery_file_dest='+FRA1'
*.db_recovery_file_dest_size=4032M
*.db_unique_name='srcstb'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=srcstbXDB)'
*.fal_server='srprim'
*.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=srcstb'
*.log_buffer=5820416# log buffer update
*.log_file_name_convert='+FRA_NEW','+FRA1'
*.memory_target=904M
*.open_cursors=300
*.optimizer_dynamic_sampling=2
*.optimizer_mode='ALL_ROWS'
*.plsql_warnings='DISABLE:ALL'# PL/SQL warnings at init.ora
*.processes=150
*.query_rewrite_enabled='TRUE'
*.remote_login_passwordfile='EXCLUSIVE'
*.result_cache_max_size=2336K
*.skip_unusable_indexes=TRUE

Make sure that the TNS entries for SRPRIM, SRPSTB and SRCSTB exist on all three servers (ora1-1, ora1-2 and ora1-3).

 


SRPRIM =
(DESCRIPTION =
  (ADDRESS = (PROTOCOL = TCP)(HOST = ora1-1.mydomain)(PORT = 1521))
  (CONNECT_DATA =
     (SERVER = DEDICATED)
     (SERVICE_NAME = srprim)
  )
)

SRPSTB =
(DESCRIPTION =
  (ADDRESS = (PROTOCOL = TCP)(HOST = ora1-2.mydomain)(PORT = 1521))
  (CONNECT_DATA =
     (SERVER = DEDICATED)
     (SERVICE_NAME = srpstb)
  )
)

SRCSTB =
(DESCRIPTION =
  (ADDRESS = (PROTOCOL = TCP)(HOST = ora1-3.mydomain)(PORT = 1521))
  (CONNECT_DATA =
   (SERVER = DEDICATED)
   (SERVICE_NAME = srcstb)
  )
)

Add a static entry for the SRCSTB instance to the listener.ora file on the cascaded standby host “ora1-3”.

 

SID_LIST_LISTENER =
(SID_LIST =
  (SID_DESC =
     (ORACLE_HOME = /u01/app/oracle/product/11.2.0.2/db1)
     (SID_NAME = srcstb)
  )
)

 

Copy the password file from “srpstb” to the cascaded standby host “ora1-3” and rename it to match the cascaded standby instance name “srcstb”.

 

[oracle@ora1-2 ~]$ scp /u01/app/oracle/product/11.2.0.2/db_1/dbs/orapwsrpstb oracle@ora1-3:/u01/app/oracle/product/11.2.0.2/db1/dbs/orapwsrcstb
The authenticity of host 'ora1-3 (192.168.56.107)' can't be established.
RSA key fingerprint is 26:01:48:55:e9:ae:ae:9a:f9:fd:38:db:29:b7:fa:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ora1-3' (RSA) to the list of known hosts.
oracle@ora1-3's password:
orapwsrpstb 100% 1536 1.5KB/s 00:00
[oracle@ora1-2 ~]$

 

Start the instance “srcstb” in nomount stage using the PFILE created earlier.

 

[oracle@ora1-3 ~]$ sqlplus sys/oracle@srcstb as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Sat Aug 8 10:04:49 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to an idle instance.

SYS@srcstb> startup nomount
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.

Total System Global Area 943669248 bytes
Fixed Size 2232128 bytes
Variable Size 553648320 bytes
Database Buffers 381681664 bytes
Redo Buffers 6107136 bytes

The current log shipping parameters on the primary database “srprim” are:

SYS@srprim> select name,value from v$parameter where name in ('log_archive_dest_1','log_archive_dest_2');

NAME                  VALUE
--------------------- -----------------------------------------------------------------------
log_archive_dest_1    location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=srprim
log_archive_dest_2    service="srpstb", LGWR SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="srpstb" net_timeout=30, valid_for=(all_logfiles,primary_role)

The primary ships redo data only to the cascading standby database “srpstb”. Also, on “srpstb”, no redo shipping parameter is yet configured to forward the redo to “srcstb”.

The details for the “srpstb” instance are as below.

 

[oracle@ora1-2 ~]$ sqlplus sys/oracle@srpstb as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Sat Aug 8 10:06:32 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

SYS@srpstb> select status,instance_name,database_role from v$database,v$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE
------------ ---------------- ----------------
MOUNTED      srpstb           PHYSICAL STANDBY

SYS@srpstb> select name,value from v$parameter where name in ('log_archive_dest_1','log_archive_dest_2');

NAME                VALUE
------------------- -------------------------------------------------------------------------
log_archive_dest_1  location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=srpstb
log_archive_dest_2  service=srprim SYNC valid_for=(all_logfiles,primary_role) db_unique_name=srprim
SYS@srpstb> select process,status,sequence# from v$managed_standby;

PROCESS   STATUS         SEQUENCE#
--------- ------------   ----------
ARCH      CONNECTED      0
ARCH      CONNECTED      0
ARCH      CONNECTED      0
ARCH      CLOSING        90
RFS       IDLE           0
RFS       IDLE           0
RFS       IDLE           0
RFS       IDLE           91
MRP0      WAIT_FOR_LOG   91

9 rows selected.

It can be noticed above that MRP is currently running on “srpstb”, waiting for log sequence 91.
Now, create the standby database “srcstb” using the RMAN “active” duplication method. Note that the duplication is carried out with “srpstb” as the target database. In other words, I’m using a standby database as the source of an RMAN active duplicate to create a new standby. This option is available from version 11.2.0.2, provided the prerequisite patches have been applied on both the source and target environments.

 

If an RMAN active duplicate with a standby as the source is not feasible or is unsupported in your version, you can run the RMAN active duplicate from the primary database instead, or use the traditional RMAN backup, restore and recovery approach to create the cascaded standby “srcstb”.
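Should you fall back to duplicating from the primary, the invocation has the same shape with “srprim” as the target; this is a sketch under that assumption, not a transcript from this environment:

```sql
-- Run from the cascaded standby host; connect with the primary as
-- target and the new standby instance (started in NOMOUNT) as
-- auxiliary, using the TNS aliases defined earlier:
--   rman target sys/oracle@srprim auxiliary sys/oracle@srcstb
DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE NOFILENAMECHECK;
```

The command itself is unchanged; only the target connection moves from the cascading standby to the primary.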

 

[oracle@ora1-3 dbs]$ rman target sys/oracle@srpstb auxiliary sys/oracle@srcstb

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Aug 16 11:46:55 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: SRPRIM (DBID=298418015, not open)
connected to auxiliary database: SRPRIM (not mounted)

RMAN> duplicate target database for standby from active database nofilenamecheck;

Starting Duplicate Db at 16-AUG-15
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=24 device type=DISK

contents of Memory Script:
{
backup as copy reuse
targetfile '/u01/app/oracle/product/11.2.0.2/db_1/dbs/orapwsrpstb' auxiliary format
'/u01/app/oracle/product/11.2.0.2/db1/dbs/orapwsrcstb' ;
}
executing Memory Script

Starting backup at 16-AUG-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=40 device type=DISK
Finished backup at 16-AUG-15

contents of Memory Script:
{
backup as copy current controlfile for standby auxiliary format '+DATA1/srcstb/controlfile/current.267.887888833';
restore clone controlfile to '+FRA1/srcstb/controlfile/current.267.887888835' from
'+DATA1/srcstb/controlfile/current.267.887888833';
sql clone "create spfile from memory";
shutdown clone immediate;
startup clone nomount;
sql clone "alter system set control_files =
''+DATA1/srcstb/controlfile/current.267.887888833'', ''+FRA1/srcstb/controlfile/current.267.887888835'' comment=
''Set by RMAN'' scope=spfile";
shutdown clone immediate;
startup clone nomount;
}
executing Memory Script

Starting backup at 16-AUG-15
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
copying standby control file
output file name=/u01/app/oracle/product/11.2.0.2/db_1/dbs/snapcf_srpstb.f tag=TAG20150816T114714 RECID=18 STAMP=887888836
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 16-AUG-15

Starting restore at 16-AUG-15
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: copied control file copy
Finished restore at 16-AUG-15

sql statement: create spfile from memory

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area 943669248 bytes

Fixed Size 2232128 bytes
Variable Size 553648320 bytes
Database Buffers 381681664 bytes
Redo Buffers 6107136 bytes

sql statement: alter system set control_files = ''+DATA1/srcstb/controlfile/current.267.887888833'', ''+FRA1/srcstb/controlfile/current.267.887888835'' comment= ''Set by RMAN'' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area 943669248 bytes

Fixed Size 2232128 bytes
Variable Size 553648320 bytes
Database Buffers 381681664 bytes
Redo Buffers 6107136 bytes

contents of Memory Script:
{
sql clone 'alter database mount standby database';
}
executing Memory Script

sql statement: alter database mount standby database
RMAN-05529: WARNING: DB_FILE_NAME_CONVERT resulted in invalid ASM names; names changed to disk group only.

contents of Memory Script:
{
set newname for tempfile 1 to
"+data1";
set newname for tempfile 2 to
"+data1";
switch clone tempfile all;
set newname for datafile 1 to
"+data1";
set newname for datafile 2 to
"+data1";
set newname for datafile 3 to
"+data1";
set newname for datafile 4 to
"+data1";
backup as copy reuse
datafile 1 auxiliary format
"+data1" datafile
2 auxiliary format
"+data1" datafile
3 auxiliary format
"+data1" datafile
4 auxiliary format
"+data1" ;
}
executing Memory Script

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to +data1 in control file
renamed tempfile 2 to +data1 in control file

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting backup at 16-AUG-15
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=+DATA_NEW/srpstb/datafile/system.258.882637005
output file name=+DATA1/srcstb/datafile/system.276.887888881 tag=TAG20150816T114801
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting datafile copy
input datafile file number=00002 name=+DATA_NEW/srpstb/datafile/sysaux.259.882637031
output file name=+DATA1/srcstb/datafile/sysaux.275.887888917 tag=TAG20150816T114801
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=+DATA_NEW/srpstb/datafile/undotbs1.260.882637057
output file name=+DATA1/srcstb/datafile/undotbs1.274.887888941 tag=TAG20150816T114801
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=+DATA_NEW/srpstb/datafile/users.261.882637061
output file name=+DATA1/srcstb/datafile/users.273.887888945 tag=TAG20150816T114801
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 16-AUG-15

contents of Memory Script:
{
switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=18 STAMP=887888946 file name=+DATA1/srcstb/datafile/system.276.887888881
datafile 2 switched to datafile copy
input datafile copy RECID=19 STAMP=887888946 file name=+DATA1/srcstb/datafile/sysaux.275.887888917
datafile 3 switched to datafile copy
input datafile copy RECID=20 STAMP=887888946 file name=+DATA1/srcstb/datafile/undotbs1.274.887888941
datafile 4 switched to datafile copy
input datafile copy RECID=21 STAMP=887888946 file name=+DATA1/srcstb/datafile/users.273.887888945
Finished Duplicate Db at 16-AUG-15

RMAN> exit

Recovery Manager complete.
[oracle@ora1-3 dbs]$

Now that the standby is created, set the log shipping parameters on the primary and the cascading standby databases accordingly.

On the primary, I set the “log_archive_dest_3” parameter to ship the redo to “srcstb”. Of course, “log_archive_dest_state_3” is deferred, as I do not want the primary to ship redo to the cascaded standby; if it did, this would no longer be a cascaded configuration.

The deferred destination serves as a fallback: if “srpstb” goes down or becomes unreachable, there would otherwise be no DR database receiving redo, so this destination can be enabled to let the primary ship redo to “srcstb” directly.
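A sketch of the two statements involved in using that fallback, should “srpstb” become unavailable:

```sql
-- On the primary: start shipping redo straight to srcstb while the
-- cascading standby srpstb is down.
ALTER SYSTEM SET log_archive_dest_state_3 = ENABLE;

-- Once srpstb is reachable again and cascading resumes, defer it again.
ALTER SYSTEM SET log_archive_dest_state_3 = DEFER;
```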

 

[oracle@ora1-1 admin]$ sqlplus sys/oracle@srprim as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 16 12:16:14 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

SYS@srprim> sho parameter dest_3

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_online_log_dest_3          string
log_archive_dest_3                   string
log_archive_dest_30                  string
log_archive_dest_31                  string
SYS@srprim>
SYS@srprim> alter system set log_archive_dest_3='service=srcstb valid_for=(online_logfiles,primary_role) db_unique_name=srcstb';

System altered.

SYS@srprim> alter system set log_archive_dest_state_3=defer;

System altered.
SYS@srprim> sho parameter log_archive_dest_3

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_3                   string      service=srcstb valid_for=(onli
                                                 ne_logfiles,primary_role) db_u
                                                 nique_name=srcstb
log_archive_dest_30                  string
log_archive_dest_31                  string
SYS@srprim>
SYS@srprim>
SYS@srprim> sho parameter dest_state_3

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_state_3             string      DEFER
log_archive_dest_state_30            string      enable
log_archive_dest_state_31            string      enable
SYS@srprim>

Set the log_archive_config parameter on all three databases to include all three database unique names.

 

SYS@srprim> alter system set log_archive_config='DG_CONFIG=(srprim,srpstb,srcstb)';

System altered.

SYS@srprim> sho parameter log_archive_config

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_config                   string      DG_CONFIG=(srprim,srpstb,srcst
                                                 b)

On the cascading standby database “srpstb”, set the parameter “log_archive_dest_3” to ship the redo data to the cascaded standby database “srcstb”.

[oracle@ora1-2 admin]$ sqlplus sys/oracle@srpstb as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 16 12:20:20 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

SYS@srpstb> sho parameter dest_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_online_log_dest_2          string      
log_archive_dest_2                   string      service=srprim valid_for=(online_logfiles,primary_role) db_unique_name=srprim

SYS@srpstb> show parameter dest_3

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_online_log_dest_3          string      
log_archive_dest_3                   string
SYS@srpstb>
SYS@srpstb> alter system set log_archive_dest_3='service=srcstb valid_for=(standby_logfiles,standby_role) db_unique_name=srcstb';

System altered.
SYS@srpstb> alter system set log_archive_config='DG_CONFIG=(srprim,srpstb,srcstb)';

System altered.

On “srcstb”, set the log_archive_config and “fal_server” parameters.
FAL_SERVER on “srcstb” should use the NET alias name of “srpstb”, as it receives the redo data from the “srpstb” database.

 

[oracle@ora1-3 dbs]$ sqlplus sys/oracle@srcstb as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 16 12:14:58 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

SYS@srcstb>
SYS@srcstb> select status,instance_name,database_role,open_mode from v$database,v$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE    OPEN_MODE
------------ ---------------- ---------------- --------------------
MOUNTED      srcstb           PHYSICAL STANDBY MOUNTED

SYS@srcstb> alter system set log_archive_config='DG_CONFIG=(srprim,srpstb,srcstb)';

System altered.

SYS@srcstb> alter system set fal_server='srpstb';

System altered.

SYS@srcstb> alter system set fal_client='srcstb';

System altered.

In this environment, RMAN automatically creates the standby redo logs after the duplication, as they were configured previously and existed on the primary and cascading standby databases. Verify their existence on the cascaded standby and, if they do not exist, create them manually.
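A quick way to perform that check on the cascaded standby; the 50M size in the commented fallback is an illustrative assumption:

```sql
-- List the standby redo logs; no rows returned means none exist yet.
SELECT group#, thread#, bytes/1024/1024 AS size_mb, status
FROM   v$standby_log;

-- If none exist, add them manually. The size should match the online
-- redo logs; 50M here is an assumption for illustration.
-- ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
```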

Start the Managed Recovery Process on the cascaded standby database “srcstb”.

 

SYS@srcstb> alter database recover managed standby database disconnect from session using current logfile;

Database altered.

SYS@srcstb> select process,status,sequence# from v$managed_standby;

PROCESS   STATUS       SEQUENCE#
--------- ------------ ----------
ARCH      CONNECTED    0
ARCH      CONNECTED    0
ARCH      CONNECTED    0
ARCH      CLOSING      215
MRP0      WAIT_FOR_LOG 217
RFS       IDLE         0
RFS       IDLE         0
RFS       IDLE         0

8 rows selected.

Verify the sync status of the cascading standby “srpstb” and the cascaded standby “srcstb”.
On the primary, the latest archive sequence generated is 226:

 

SYS@srprim> select status,instance_name,database_role from v$database,v$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE
------------ ---------------- ----------------
OPEN         srprim           PRIMARY

SYS@srprim> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)
--------------
226

On the cascading standby SRPSTB, the last sequence applied is 226.

 

SYS@srpstb> select status,instance_name,database_role from v$database,v$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE
------------ ---------------- ----------------
MOUNTED      srpstb           PHYSICAL STANDBY

SYS@srpstb> select max(sequence#) from v$archived_log where applied='YES';

MAX(SEQUENCE#)
--------------
226

Similarly, on the cascaded standby SRCSTB, the last sequence applied is 225, one sequence behind the cascading standby, which reflects the inherent lag of a cascaded configuration.

 

SYS@srcstb> select status,instance_name,database_role from v$database,v$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE
------------ ---------------- ----------------
MOUNTED      srcstb           PHYSICAL STANDBY

SYS@srcstb> select max(sequence#) from v$archived_log where applied='YES';

MAX(SEQUENCE#)
--------------
225
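To confirm that shipping and apply keep pace end to end, you can force a log switch on the primary and re-run the checks on both standbys; a sketch:

```sql
-- On the primary: archive the current log so a new sequence ships out.
ALTER SYSTEM ARCHIVE LOG CURRENT;

-- On srpstb and then srcstb: the new sequence should appear as applied
-- shortly afterwards.
SELECT max(sequence#) FROM v$archived_log WHERE applied = 'YES';
```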


 

November 4, 2015 / Shivananda Rao P

Creating RAC Physical Standby database for RAC Primary in 12c

This article focuses on creating a RAC standby database for a RAC primary database. The primary database runs on a 2-node cluster, and the standby database will also be created on a 2-node cluster. The entire setup is on Oracle 12c (12.1.0.1). The primary database uses ASM with OMF-managed datafiles, and the standby will use OMF as well. ASM on the standby servers is assumed to be already configured and is not outlined here.

 

Environment:

Primary:

DB NAME          :  srprim
DB UNIQUE NAME   :  srprim
Instances        :  srprim1, srprim2
Hostnames        :  ora12c-node1, ora12c-node2

Standby:

DB NAME         :  srprim
DB UNIQUE NAME  :  srpstb
Instances       :  srpstb1, srpstb2
Hostnames       :  ora12cdr1, ora12cdr2

Below are the details of the pluggable databases that are currently plugged into the primary.

 

SQL> select status,instance_name,con_id from v$instance;

STATUS       INSTANCE_NAME        CON_ID
------------ ---------------- ----------
OPEN         srprim1                   0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO

 

The configuration here uses the “Flash Recovery Area” on both the primary and the standby database. It is also assumed that the primary database is in ARCHIVELOG mode with FORCE LOGGING enabled.
Now, configure the essential parameters on the primary database for archival to the local and remote destinations.

 

SQL> alter system set log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=srprim' sid='*';

System altered.

SQL> alter system set log_archive_dest_2='service=srpstb valid_for=(online_logfiles,primary_role) db_unique_name=srpstb' sid='*';

System altered.

 

Set the FAL_SERVER parameter to the “NET ALIAS NAME” of the standby database. This parameter is used only after a switchover, when the primary starts operating in the standby role.

 

SQL> alter system set fal_server='srpstb';

System altered.

 

The PFILE of the primary database looks as below. Make sure that “remote_login_passwordfile” is set to EXCLUSIVE.

 

srprim1.__data_transfer_cache_size=0
srprim2.__data_transfer_cache_size=0
srprim2.__db_cache_size=805306368
srprim1.__db_cache_size=788529152
srprim1.__java_pool_size=16777216
srprim2.__java_pool_size=16777216
srprim1.__large_pool_size=33554432
srprim2.__large_pool_size=33554432
srprim1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
srprim2.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
srprim1.__pga_aggregate_target=436207616
srprim2.__pga_aggregate_target=436207616
srprim1.__sga_target=1291845632
srprim2.__sga_target=1291845632
srprim1.__shared_io_pool_size=67108864
srprim2.__shared_io_pool_size=67108864
srprim2.__shared_pool_size=352321536
srprim1.__shared_pool_size=369098752
srprim1.__streams_pool_size=0
srprim2.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/srprim/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='12.1.0.0.0'
*.control_files='+DATA/SRPRIM/CONTROLFILE/current.262.891722993','+FRA/SRPRIM/CONTROLFILE/current.256.893794127'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='srprim'
*.db_recovery_file_dest_size=4194304000
*.db_recovery_file_dest='+FRA'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=srprimXDB)'
*.enable_pluggable_database=true
*.fal_client='srprim'
*.fal_server='srpstb'
srprim1.instance_number=1
srprim2.instance_number=2
*.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=srprim'
*.log_archive_dest_2='service=srpstb valid_for=(online_logfiles,primary_role) db_unique_name=srpstb'
*.open_cursors=300
*.pga_aggregate_target=410m
*.processes=300
*.remote_login_passwordfile='exclusive'
*.sga_target=1230m
srprim2.thread=2
srprim1.thread=1
srprim1.undo_tablespace='UNDOTBS1'
srprim2.undo_tablespace='UNDOTBS2'

 

Make sure that the listener is up and running on both the nodes of the primary database.

 

[oracle@ora12c-node1 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): ora12c-node1,ora12c-node2
[oracle@ora12c-node1 ~]$
[oracle@ora12c-node1 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node ora12c-node1

 

Below is how my listener.ora file looks on the standby database node ora12cdr1.

 

[oracle@ora12cdr1 dbs]$ cat /u01/app/12.1.0.1/grid/network/admin/listener.ora
# listener.ora Network Configuration File: /u01/app/12.1.0.1/grid/network/admin/listener.ora
# Generated by Oracle configuration tools.

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON

VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1=OFF             # line added by Agent

VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET

LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER))
    )
  )

SID_LIST_LISTENER =
(SID_LIST =
  (SID_DESC =
    (ORACLE_HOME = /u01/app/oracle/product/12.1.0.1/db1)
    (SID_NAME = srpstb1)
  )
)

ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON

LISTENER_SCAN1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1))
    )
  )

 

Create TNS entries for the standby database and place them in the TNSNAMES.ora file on both the primary and the standby nodes.
SRPRIM, SRPRIM1 and SRPRIM2 are the TNS entries for the primary database.
SRPSTB, SRPSTB1 and SRPSTB2 are the TNS entries for the standby database.

 

[oracle@ora12cdr1 dbs]$ cat /u01/app/oracle/product/12.1.0.1/db1/network/admin/tnsnames.ora
# tnsnames.ora Network Configuration File: /u01/app/oracle/product/12.1.0.1/db1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

SRPRIM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ora12c-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = srprim)
    )
  )

SRPRIM1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ora12c-node1-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = srprim)
      (instance_name = srprim1)
    )
  )

SRPRIM2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ora12c-node2-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = srprim)
      (instance_name = srprim2)
    )
  )

SRPSTB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ora12cdr-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = srpstb)
    )
  )

SRPSTB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ora12cdr1-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = srpstb1)(UR=A)
      (instance_name = srpstb1)
    )
  )

SRPSTB2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ora12cdr2-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = srpstb)
      (instance_name = srpstb2)
    )
  )

 

I have created a PFILE of the primary database parameters at “/u02/initsrprim1.ora” on the primary node. Copy this over to the standby node “ora12cdr1”.

[oracle@ora12c-node1 ~]$ scp /u02/initsrprim1.ora ora12cdr1:/u03/
The authenticity of host 'ora12cdr1 (192.168.0.121)' can't be established.
RSA key fingerprint is f8:21:ec:7f:b3:68:53:42:12:a8:cf:95:b0:58:3a:5d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ora12cdr1,192.168.0.121' (RSA) to the list of known hosts.
oracle@ora12cdr1's password:
initsrprim1.ora                                                                                                                      100% 1456     1.4KB/s   00:00
[oracle@ora12c-node1 ~]$

 

Modify the copied file on the standby node accordingly. Since we are using RMAN active duplicate from RAC to RAC, remove the cluster-related parameters from the standby database PFILE.

The cluster-related parameters are:

 

#*.cluster_database=true
#srpstb1.instance_number=1
#srpstb2.instance_number=2
#srpstb2.thread=2
#srpstb1.thread=1
#srpstb2.undo_tablespace='UNDOTBS2'

You can see below that these parameters have been commented out in my standby pfile “initsrpstb1.ora”.
Make sure that the “FAL_SERVER” parameter is set to the “NET ALIAS NAME” of the primary database, from which the standby will fetch the redo.

Set the “db_file_name_convert” and “log_file_name_convert” parameters according to the disk group names or file system locations you have. In my case, the datafiles on both the primary and standby are stored in the “DATA” disk group, but the redo is stored in “FRA” on the primary and “FRA1” on the standby.

 

[oracle@ora12cdr1 dbs]$ cat /u03/initsrpstb1.ora
srpstb1.__data_transfer_cache_size=0
srpstb1.__db_cache_size=771751936
srpstb1.__java_pool_size=16777216
srpstb1.__large_pool_size=33554432
srpstb1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
srpstb1.__pga_aggregate_target=436207616
srpstb1.__sga_target=1291845632
srpstb1.__shared_io_pool_size=50331648
srpstb1.__shared_pool_size=402653184
srpstb1.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/srpstb/adump'
*.audit_trail='db'
#*.cluster_database=true
*.compatible='12.1.0.0.0'
*.control_files='+DATA','+FRA1'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='srprim'
*.db_unique_name='srpstb'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=srpstbXDB)'
*.db_recovery_file_dest_size=4000M
*.db_recovery_file_dest='+FRA1'
*.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=srpstb'
*.log_archive_dest_2='service=srprim valid_for=(online_logfiles,primary_role) db_unique_name=srprim'
*.fal_server='srprim'
*.enable_pluggable_database=true
#srpstb1.instance_number=1
#srpstb2.instance_number=2
*.open_cursors=300
*.pga_aggregate_target=410m
*.processes=300
*.remote_login_passwordfile='exclusive'
*.sga_target=1230m
#srpstb2.thread=2
#srpstb1.thread=1
srpstb1.undo_tablespace='UNDOTBS1'
#srpstb2.undo_tablespace='UNDOTBS2'
*.log_archive_config='DG_CONFIG=(srprim,srpstb)'
*.log_file_name_convert='+FRA','+FRA1'

 

Storing password files on an ASM diskgroup is a new feature of 12c. To locate the password file of the primary database, connect to the ASM instance on the primary node and use the “pwget” command in the “asmcmd” utility.

On primary node:

 

ASMCMD> pwget --dbuniquename srprim
+DATA/srprim/orapwsrprim
ASMCMD>

 

Copy the password file from the ASM diskgroup to the local file system (here I have copied it to “/u02/” location on “ora12c-node1”), so that the same can be copied over and used at the standby site.

 

ASMCMD> pwcopy '+DATA/srprim/orapwsrprim' '/u02/orapwsrprim'
copying +DATA/srprim/orapwsrprim -> /u02/orapwsrprim
ASMCMD-9456: password file should be located on an ASM disk group
ASMCMD> exit
[oracle@ora12c-node1 ~]$ ls -lrt /u02/orapwsrprim
-rw-r----- 1 oracle oinstall 7680 Oct 23 17:38 /u02/orapwsrprim

 

Copy the password file to the standby database server “ora12cdr1” and rename it according to the standby instance name.

 

[oracle@ora12c-node1 ~]$ scp /u02/orapwsrprim ora12cdr1:/u01/app/oracle/product/12.1.0.1/db1/dbs/orapwsrpstb1
oracle@ora12cdr1's password:
orapwsrprim                                                                                                                          100% 7680     7.5KB/s   00:00
[oracle@ora12c-node1 ~]$

 

Place the standby instance “srpstb1” in NOMOUNT state using the previously created PFILE.

 

[oracle@ora12cdr1 ~]$ . oraenv
ORACLE_SID = [srpstb1] ?
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ora12cdr1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Thu Oct 22 20:00:19 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile='/u03/initsrpstb1.ora';
ORACLE instance started.

Total System Global Area 1286066176 bytes
Fixed Size                  2287960 bytes
Variable Size             452986536 bytes
Database Buffers          822083584 bytes
Redo Buffers                8708096 bytes

 

Connect to SRPRIM as target and “SRPSTB1” as auxiliary through RMAN and initiate the RMAN duplicate.
I’m using the “RMAN active duplicate” method here to create the standby.

 

[oracle@ora12cdr1 dbs]$ rman target sys/oracle@srprim auxiliary sys/oracle@srpstb1

Recovery Manager: Release 12.1.0.1.0 - Production on Fri Oct 23 20:46:09 2015

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: SRPRIM (DBID=307664432)
connected to auxiliary database: SRPRIM (not mounted)

RMAN> duplicate target database for standby from active database nofilenamecheck;

Starting Duplicate Db at 23-OCT-15
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=31 device type=DISK

contents of Memory Script:
{
   backup as copy reuse
   targetfile  '+DATA/srprim/orapwsrprim' auxiliary format
 '/u01/app/oracle/product/12.1.0.1/db1/dbs/orapwsrpstb1'   ;
}
executing Memory Script

Starting backup at 23-OCT-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=78 instance=srprim1 device type=DISK
Finished backup at 23-OCT-15
.
.
.
<output trimmed>
.
.
.
input datafile copy RECID=26 STAMP=893883014 file name=+DATA/SRPSTB/20E576F1A6FE02E0E0537200A8C01581/DATAFILE/sysaux.260.893882967
datafile 11 switched to datafile copy
input datafile copy RECID=27 STAMP=893883014 file name=+DATA/SRPSTB/20E576F1A6FE02E0E0537200A8C01581/DATAFILE/users.259.893882991
datafile 12 switched to datafile copy
input datafile copy RECID=28 STAMP=893883014 file name=+DATA/SRPSTB/20E576F1A6FE02E0E0537200A8C01581/DATAFILE/example.258.893882993
Finished Duplicate Db at 23-OCT-15

RMAN>

 

Add the cluster-related parameters back to the standby database PFILE.

 

cluster_database=TRUE
srpstb1.undo_tablespace='UNDOTBS1'
srpstb2.undo_tablespace='UNDOTBS2'
srpstb2.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=drnode2-vip)(PORT=1521))))'
srpstb1.instance_number=1
srpstb2.instance_number=2
srpstb1.thread=1
srpstb2.thread=2
srpstb1.fal_client='srpstb1'
srpstb2.fal_client='srpstb2'

 

Here is how my PFILE of standby database looks.

 

[oracle@ora12cdr1 dbs]$ cat initsrpstb1.ora
srpstb1.__data_transfer_cache_size=0
srpstb1.__db_cache_size=805306368
srpstb1.__java_pool_size=16777216
srpstb1.__large_pool_size=150994944
srpstb1.__pga_aggregate_target=436207616
srpstb1.__sga_target=1291845632
srpstb1.__shared_io_pool_size=0
srpstb1.__shared_pool_size=285212672
srpstb1.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/srpstb/adump'
*.audit_trail='DB'
*.background_dump_dest='/u01/app/oracle/diag/rdbms/srpstb/srpstb1/trace'#Deprecate parameter
*.compatible='12.1.0.0.0'
*.connection_brokers='((TYPE=DEDICATED)(BROKERS=1))','((TYPE=EMON)(BROKERS=1))'# connection_brokers default value
*.control_files='+DATA/SRPSTB/CONTROLFILE/current.267.893882827','+FRA1/SRPSTB/CONTROLFILE/current.258.893882827'#Restore Controlfile
*.core_dump_dest='/u01/app/oracle/diag/rdbms/srpstb/srpstb1/cdump'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='srprim'
*.db_recovery_file_dest='+FRA1'
*.db_recovery_file_dest_size=4000M
*.db_unique_name='srpstb'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=srpstbXDB)'
*.enable_pluggable_database=TRUE
*.fal_server='srprim'
*.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=srpstb'
*.log_archive_dest_2='service=srprim valid_for=(online_logfiles,primary_role) db_unique_name=srprim'
*.log_buffer=8343552# log buffer update
*.log_file_name_convert='+FRA','+FRA1'
*.open_cursors=300
*.optimizer_dynamic_sampling=2
*.optimizer_mode='ALL_ROWS'
*.pga_aggregate_target=410M
*.plsql_warnings='DISABLE:ALL'# PL/SQL warnings at init.ora
*.processes=300
*.query_rewrite_enabled='TRUE'
*.remote_login_passwordfile='EXCLUSIVE'
*.result_cache_max_size=6336K
*.sga_target=1232M
*.skip_unusable_indexes=TRUE
#*.undo_tablespace='UNDBOTBSi1'
*.user_dump_dest='/u01/app/oracle/diag/rdbms/srpstb/srpstb1/trace'#Deprecate parameter
*.cluster_database=TRUE
srpstb1.undo_tablespace='UNDOTBS1'
srpstb2.undo_tablespace='UNDOTBS2'
srpstb1.instance_number=1
srpstb2.instance_number=2
srpstb1.thread=1
srpstb2.thread=2
srpstb1.fal_client='srpstb1'
srpstb2.fal_client='srpstb2'
srpstb1.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=ora12cdr1-vip)(PORT=1521))))'
srpstb2.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=ora12cdr2-vip)(PORT=1521))))'
*.remote_listener='ora12cdr-scan:1521'
[oracle@ora12cdr1 dbs]$

 

Connect to the standby instance “srpstb1” and create a global SPFILE on the diskgroup, which will be shared across all the standby nodes.

 

[oracle@ora12cdr1 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Sat Oct 24 10:50:07 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> create spfile='+DATA/srpstb/spfilesrpstb.ora' from pfile;

File created.

 

Create a PFILE on each of the standby nodes pointing to the shared SPFILE.

 

[oracle@ora12cdr1 dbs]$ cat initsrpstb1.ora
spfile='+DATA/srpstb/spfilesrpstb.ora'

[oracle@ora12cdr1 dbs]$ scp initsrpstb1.ora ora12cdr2:/u01/app/oracle/product/12.1.0.1/db1/dbs/initsrpstb2.ora
initsrpstb1.ora                                                                                                                       100%   39     0.0KB/s   00:00
[oracle@ora12cdr1 dbs]$

 

Having done this, we need to move the password file, copied earlier to the “$ORACLE_HOME/dbs” location on the standby node “ora12cdr1”, to ASM.

Once the password file is placed on the ASM diskgroup, it will be shared across all the nodes, which removes the need to copy the password file to each node.

Using the “pwcopy” command in the “asmcmd” utility, copy the password file from “$ORACLE_HOME/dbs” to “+DATA”.

 

[oracle@ora12cdr1 dbs]$ . oraenv
ORACLE_SID = [srpstb1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ora12cdr1 dbs]$
[oracle@ora12cdr1 dbs]$ asmcmd
ASMCMD> pwcopy '/u01/app/oracle/product/12.1.0.1/db1/dbs/orapwsrpstb1' '+DATA/srpstb/orapwsrpstb'
copying /u01/app/oracle/product/12.1.0.1/db1/dbs/orapwsrpstb1 -> +DATA/srpstb/orapwsrpstb
ASMCMD>

 

Remove the local copy of the password file from “$ORACLE_HOME/dbs”, as it is no longer needed.

 

[oracle@ora12cdr1 dbs]$ pwd
/u01/app/oracle/product/12.1.0.1/db1/dbs
[oracle@ora12cdr1 dbs]$
[oracle@ora12cdr1 dbs]$ rm -rf orapwsrpstb1
[oracle@ora12cdr1 dbs]$

 

Add the standby database “srpstb” and its details to the cluster so that it can be managed by the clusterware. You can pass additional options to “srvctl add database” as needed.

 

[oracle@ora12cdr1 dbs]$ srvctl add database -db srpstb -o /u01/app/oracle/product/12.1.0.1/db1 -startoption mount -role physical_standby -pwfile +DATA/srpstb/orapwsrpstb

 

Add the instances “srpstb1” and “srpstb2” to the standby configuration.

 

[oracle@ora12cdr1 dbs]$ srvctl add instance -instance srpstb1 -db srpstb -node ora12cdr1
[oracle@ora12cdr1 dbs]$ srvctl add instance -instance srpstb2 -db srpstb -node ora12cdr2

 

[oracle@ora12cdr1 dbs]$ srvctl status database -db srpstb
Instance srpstb1 is running on node ora12cdr1
Instance srpstb2 is running on node ora12cdr2
[oracle@ora12cdr1 dbs]$

 

Now, it’s time to add the standby redo logs on both primary and the standby databases.

My primary database has two threads with two online redo log groups per thread.

 

SQL> select thread#,group#,status,bytes/1024/1024 from v$log;

   THREAD#     GROUP# STATUS           BYTES/1024/1024
---------- ---------- ---------------- ---------------
         1          1 INACTIVE                      50
         1          2 CURRENT                       50
         2          3 INACTIVE                      50
         2          4 CURRENT                       50

 

Based on the Oracle documentation, calculate the number of standby redo logs required. Here, this works out to 6.

(maximum number of logfiles for each thread + 1) * maximum number of threads

(2 + 1) * 2 = 6

So we need to add 6 standby redo log groups, 3 per thread.
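The sizing rule above can be captured in a one-line helper, shown here as a plain Python sketch for quick what-if calculations:

```python
def standby_redo_log_count(max_logs_per_thread, threads):
    """Standby redo log groups needed, per the Data Guard rule:
    (maximum number of logfiles for each thread + 1) * number of threads."""
    return (max_logs_per_thread + 1) * threads

# Primary here: 2 threads, 2 online redo log groups per thread
print(standby_redo_log_count(2, 2))  # -> 6
```

The extra group per thread leaves a standby log free to receive redo while another is being archived.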

 

SQL> alter database add standby logfile thread 1 size 50M;

Database altered.

SQL> alter database add standby logfile thread 1 size 50M;

Database altered.

SQL> alter database add standby logfile thread 1 size 50M;

Database altered.

SQL> alter database add standby logfile thread 2 size 50M;

Database altered.

SQL> alter database add standby logfile thread 2 size 50M;

Database altered.

SQL> alter database add standby logfile thread 2 size 50M;

Database altered.

 

Query the v$log and v$standby_log views to get the details of the online redo logs and the standby redo logs.
We can see that standby redo log groups 5, 6 and 7 are associated with thread 1, and groups 8, 9 and 10 with thread 2.

 

SQL> select thread#,group#,status,bytes/1024/1024 from v$log;

   THREAD#     GROUP# STATUS           BYTES/1024/1024
---------- ---------- ---------------- ---------------
         1          1 INACTIVE                      50
         1          2 CURRENT                       50
         2          3 INACTIVE                      50
         2          4 CURRENT                       50

SQL> select thread#,group#,status,bytes/1024/1024 from v$standby_log;

   THREAD#     GROUP# STATUS     BYTES/1024/1024
---------- ---------- ---------- ---------------
         1          5 UNASSIGNED              50
         1          6 UNASSIGNED              50
         1          7 UNASSIGNED              50
         2          8 UNASSIGNED              50
         2          9 UNASSIGNED              50
         2         10 UNASSIGNED              50

6 rows selected.

 

Connect to the standby database and start the MRP process. Also, verify the recovery status/progress of the standby.

 

[oracle@ora12cdr1 dbs]$ sqlplus sys/oracle@srpstb1 as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Sat Oct 24 11:15:23 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> select inst_id,status,instance_name,con_id from gv$instance;

   INST_ID STATUS       INSTANCE_NAME        CON_ID
---------- ------------ ---------------- ----------
         1 MOUNTED      srpstb1                   0
         2 MOUNTED      srpstb2                   0

SQL> select inst_id,con_id,name,open_mode from gv$pdbs;

   INST_ID     CON_ID NAME                           OPEN_MODE
---------- ---------- ------------------------------ ----------
         1          2 PDB$SEED                       MOUNTED
         1          3 PDB1                           MOUNTED
         2          2 PDB$SEED                       MOUNTED
         2          3 PDB1                           MOUNTED

SQL> alter database recover managed standby database disconnect;

Database altered.

 

SQL> select process,status,sequence#,thread# from v$managed_standby;

PROCESS   STATUS        SEQUENCE#    THREAD#
--------- ------------ ---------- ----------
ARCH      CONNECTED             0          0
ARCH      CONNECTED             0          0
ARCH      CONNECTED             0          0
ARCH      CONNECTED             0          0
MRP0      APPLYING_LOG         15          2
RFS       IDLE                  0          0

6 rows selected.

 

On the primary, query the v$archived_log view to get the maximum archive log sequence generated for each thread.

 

SQL> select thread#,max(sequence#) from v$archived_log  group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1             32
         2             23

 

On the standby, compare the results above with the outcome of the query below. Thread 2 is fully applied and thread 1 lags by only one sequence (the most recently shipped log), which tells me that the standby is in sync with the primary.
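The comparison done by eye above can be sketched as a small helper: take the per-thread maximum generated sequence (primary) and the per-thread maximum applied sequence (standby), and report the gap. The dictionaries below simply stand in for the two v$archived_log query results.

```python
def apply_gap(generated, applied):
    """Per-thread gap between redo generated on the primary and
    redo applied on the standby. Keys are thread#, values are
    max(sequence#) from the two v$archived_log queries."""
    return {thread: generated[thread] - applied.get(thread, 0)
            for thread in generated}

# Figures from the two queries in this post
print(apply_gap({1: 32, 2: 23}, {1: 31, 2: 23}))  # -> {1: 1, 2: 0}
```

A gap of zero or one per thread is what you would expect from a healthy standby; a growing gap points to a transport or apply problem.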

 

SQL> select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1             31
         2             23

COPYRIGHT

© Shivananda Rao P, 2012, 2013, 2014, 2015. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Shivananda Rao and http://www.shivanandarao-oracle.com with appropriate and specific direction to the original content.

 

 

 

DISCLAIMER

The views expressed here are my own and do not necessarily reflect the views of any other individual, business entity, or organization. The views expressed by visitors on this blog are theirs solely and may not reflect mine.