This article focuses on creating a RAC standby database for a RAC primary database. The primary database runs on a 2-node cluster and the standby database will also be created on a 2-node cluster. The entire setup is on Oracle 12c (12.1.0.1). The primary database uses ASM with OMF-managed datafiles, and the standby will use OMF as well. ASM on the standby servers is assumed to be already configured and is not covered here.
Environment:
Primary:
DB NAME        : srprim
DB UNIQUE NAME : srprim
Instances      : srprim1, srprim2
Hostnames      : ora12c-node1, ora12c-node2
Standby:
DB NAME        : srprim
DB UNIQUE NAME : srpstb
Instances      : srpstb1, srpstb2
Hostnames      : ora12cdr1, ora12cdr2
Speaking of the primary database, below are the details of the pluggable databases that are currently plugged into it.
SQL> select status,instance_name,con_id from v$instance;

STATUS       INSTANCE_NAME        CON_ID
------------ ---------------- ----------
OPEN         srprim1                   0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
The configuration here uses a Fast Recovery Area on both the primary and the standby database. It is also assumed that the primary database is in ARCHIVELOG mode with FORCE LOGGING enabled.
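If you want to double-check these prerequisites on the primary before going further, a quick verification (and, if required, the FORCE LOGGING switch) would look like this; enabling ARCHIVELOG mode itself needs the database restarted in MOUNT state and is not shown here:

SQL> select log_mode, force_logging from v$database;
-- expected for this setup: LOG_MODE = ARCHIVELOG and FORCE_LOGGING = YES

SQL> alter database force logging;
-- only needed if FORCE_LOGGING is still NO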
Now, configure the essential archival parameters on the primary database for the local and remote destinations.
SQL> alter system set log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=srprim' sid='*';

System altered.

SQL> alter system set log_archive_dest_2='service=srpstb valid_for=(online_logfiles,primary_role) db_unique_name=srpstb' sid='*';

System altered.
Set the FAL_SERVER parameter to the net service name of the standby database. This parameter is used only after a switchover, when the primary starts operating in the standby role.
SQL> alter system set fal_server='srpstb';

System altered.
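The primary PFILE shown below does not include LOG_ARCHIVE_CONFIG, but it is common to define the Data Guard configuration on the primary as well, just as it is done in the standby PFILE later on. This is an optional step I am adding here, not part of the original flow; scope=both assumes the instances are running from an SPFILE:

SQL> alter system set log_archive_config='DG_CONFIG=(srprim,srpstb)' scope=both sid='*';
-- lists the DB_UNIQUE_NAMEs participating in the configuration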
The PFILE of the primary database looks as below. Make sure that “remote_login_passwordfile” is set to EXCLUSIVE.
srprim1.__data_transfer_cache_size=0 srprim2.__data_transfer_cache_size=0 srprim2.__db_cache_size=805306368 srprim1.__db_cache_size=788529152 srprim1.__java_pool_size=16777216 srprim2.__java_pool_size=16777216 srprim1.__large_pool_size=33554432 srprim2.__large_pool_size=33554432 srprim1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment srprim2.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment srprim1.__pga_aggregate_target=436207616 srprim2.__pga_aggregate_target=436207616 srprim1.__sga_target=1291845632 srprim2.__sga_target=1291845632 srprim1.__shared_io_pool_size=67108864 srprim2.__shared_io_pool_size=67108864 srprim2.__shared_pool_size=352321536 srprim1.__shared_pool_size=369098752 srprim1.__streams_pool_size=0 srprim2.__streams_pool_size=0 *.audit_file_dest='/u01/app/oracle/admin/srprim/adump' *.audit_trail='db' *.cluster_database=true *.compatible='12.1.0.0.0' *.control_files='+DATA/SRPRIM/CONTROLFILE/current.262.891722993','+FRA/SRPRIM/CONTROLFILE/current.256.893794127' *.db_block_size=8192 *.db_create_file_dest='+DATA' *.db_domain='' *.db_name='srprim' *.db_recovery_file_dest_size=4194304000 *.db_recovery_file_dest='+FRA' *.diagnostic_dest='/u01/app/oracle' *.dispatchers='(PROTOCOL=TCP) (SERVICE=srprimXDB)' *.enable_pluggable_database=true *.fal_client='srprim' *.fal_server='srpstb' srprim1.instance_number=1 srprim2.instance_number=2 *.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=srprim' *.log_archive_dest_2='service=srpstb valid_for=(online_logfiles,primary_role) db_unique_name=srpstb' *.open_cursors=300 *.pga_aggregate_target=410m *.processes=300 *.remote_login_passwordfile='exclusive' *.sga_target=1230m srprim2.thread=2 srprim1.thread=1 srprim1.undo_tablespace='UNDOTBS1' srprim2.undo_tablespace='UNDOTBS2'
Make sure that the listener is up and running on both the nodes of the primary database.
[oracle@ora12c-node1 ~]$ srvctl status listener Listener LISTENER is enabled Listener LISTENER is running on node(s): ora12c-node1,ora12c-node2 [oracle@ora12c-node1 ~]$ [oracle@ora12c-node1 ~]$ srvctl status scan_listener SCAN Listener LISTENER_SCAN1 is enabled SCAN listener LISTENER_SCAN1 is running on node ora12c-node1
Below is how my listener.ora file looks on the standby database node ora12cdr1.
[oracle@ora12cdr1 dbs]$ cat /u01/app/12.1.0.1/grid/network/admin/listener.ora # listener.ora Network Configuration File: /u01/app/12.1.0.1/grid/network/admin/listener.ora # Generated by Oracle configuration tools. ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1=OFF # line added by Agent VALID_NODE_CHECKING_REGISTRATION_LISTENER = SUBNET LISTENER = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER)) ) ) SID_LIST_LISTENER = (SID_LIST = (SID_DESC = (ORACLE_HOME = /u01/app/oracle/product/12.1.0.1/db1) (SID_NAME = srpstb1) ) ) ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER = ON LISTENER_SCAN1 = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = IPC)(KEY = LISTENER_SCAN1)) ) )
Create TNS entries for the standby database and have them in the TNSNAMES.ora file of both primary and the standby nodes.
SRPRIM, SRPRIM1 and SRPRIM2 are the TNS entries that correspond to the primary database.
SRPSTB, SRPSTB1 and SRPSTB2 are the TNS entries that correspond to the standby database.
[oracle@ora12cdr1 dbs]$ cat /u01/app/oracle/product/12.1.0.1/db1/network/admin/tnsnames.ora # tnsnames.ora Network Configuration File: /u01/app/oracle/product/12.1.0.1/db1/network/admin/tnsnames.ora # Generated by Oracle configuration tools. SRPRIM = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12c-scan)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = srprim) ) ) SRPRIM1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12c-node1-vip)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = srprim) (instance_name = srprim1) ) ) SRPRIM2 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12c-node2-vip)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = srprim) (instance_name = srprim2) ) ) SRPSTB = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12cdr-scan)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = srpstb) ) ) SRPSTB1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12cdr1-vip)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = srpstb1)(UR=A) (instance_name = srpstb1) ) ) SRPSTB2 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12cdr2-vip)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = srpstb) (instance_name = srpstb2) ) )
I have created a PFILE of the primary database parameters at “/u02/initsrprim1.ora” on the primary node. Copy this over to the standby node “ora12cdr1”.
[oracle@ora12c-node1 ~]$ scp /u02/initsrprim1.ora ora12cdr1:/u03/ The authenticity of host 'ora12cdr1 (192.168.0.121)' can't be established. RSA key fingerprint is f8:21:ec:7f:b3:68:53:42:12:a8:cf:95:b0:58:3a:5d. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'ora12cdr1,192.168.0.121' (RSA) to the list of known hosts. oracle@ora12cdr1's password: initsrprim1.ora 100% 1456 1.4KB/s 00:00 [oracle@ora12c-node1 ~]$
Modify the copied file on the standby node accordingly. Since we are using RMAN active duplication from RAC to RAC, remove (comment out) the cluster-related parameters in the standby database PFILE.
The cluster-related parameters are:
#*.cluster_database=true
#srpstb1.instance_number=1
#srpstb2.instance_number=2
#srpstb2.thread=2
#srpstb1.thread=1
#srpstb2.undo_tablespace='UNDOTBS2'
You can see below that these parameters have been commented out in my standby PFILE “initsrpstb1.ora”.
Make sure that the “FAL_SERVER” parameter is set to the net service name of the primary database from which the standby will fetch redo.
Set the “db_file_name_convert” and “log_file_name_convert” parameters according to the diskgroup names or file system locations that you have. In my case, the datafiles are stored in the “DATA” diskgroup on both the primary and the standby, but the redo is stored in “FRA” on the primary and “FRA1” on the standby.
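As a sketch of what this means for the layout described above: only the redo locations differ between the sites, so a single LOG_FILE_NAME_CONVERT entry is enough, and DB_FILE_NAME_CONVERT can be left out because the datafiles are OMF-managed in the same “+DATA” diskgroup on both sides (the commented line uses hypothetical diskgroup names, only to show where it would go if the names differed):

*.log_file_name_convert='+FRA','+FRA1'
#*.db_file_name_convert='+DATA_PRI','+DATA_STB'    # hypothetical, not used in this setup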
[oracle@ora12cdr1 dbs]$ cat /u03/initsrpstb1.ora srpstb1.__data_transfer_cache_size=0 srpstb1.__db_cache_size=771751936 srpstb1.__java_pool_size=16777216 srpstb1.__large_pool_size=33554432 srpstb1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment srpstb1.__pga_aggregate_target=436207616 srpstb1.__sga_target=1291845632 srpstb1.__shared_io_pool_size=50331648 srpstb1.__shared_pool_size=402653184 srpstb1.__streams_pool_size=0 *.audit_file_dest='/u01/app/oracle/admin/srpstb/adump' *.audit_trail='db' #*.cluster_database=true *.compatible='12.1.0.0.0' *.control_files='+DATA','+FRA1' *.db_block_size=8192 *.db_create_file_dest='+DATA' *.db_domain='' *.db_name='srprim' *.db_unique_name='srpstb' *.diagnostic_dest='/u01/app/oracle' *.dispatchers='(PROTOCOL=TCP) (SERVICE=srpstbXDB)' *.db_recovery_file_dest_size=4000M *.db_recovery_file_dest='+FRA1' *.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=srpstb' *.log_archive_dest_2='service=srprim valid_for=(online_logfiles,primary_role) db_unique_name=srprim' *.fal_server='srprim' *.enable_pluggable_database=true #srpstb1.instance_number=1 #srpstb2.instance_number=2 *.open_cursors=300 *.pga_aggregate_target=410m *.processes=300 *.remote_login_passwordfile='exclusive' *.sga_target=1230m #srpstb2.thread=2 #srpstb1.thread=1 srpstb1.undo_tablespace='UNDOTBS1' #srpstb2.undo_tablespace='UNDOTBS2' *.log_archive_config='DG_CONFIG=(srprim,srpstb)' *.log_file_name_convert='+FRA','+FRA1'
With 12c, password files can be placed on an ASM diskgroup; this is a new 12c feature. To locate the password file of the primary database, connect to the ASM instance on the primary node and use the “pwget” command in the “asmcmd” utility.
On primary node:
ASMCMD> pwget --dbuniquename srprim +DATA/srprim/orapwsrprim ASMCMD>
Copy the password file from the ASM diskgroup to the local file system (here I have copied it to “/u02/” location on “ora12c-node1”), so that the same can be copied over and used at the standby site.
ASMCMD> pwcopy '+DATA/srprim/orapwsrprim' '/u02/orapwsrprim' copying +DATA/srprim/orapwsrprim -> /u02/orapwsrprim ASMCMD-9456: password file should be located on an ASM disk group ASMCMD> exit [oracle@ora12c-node1 ~]$ ls -lrt /u02/orapwsrprim -rw-r----- 1 oracle oinstall 7680 Oct 23 17:38 /u02/orapwsrprim
Copy the password file to the standby database server “ora12cdr1” and rename it according to the standby instance name.
[oracle@ora12c-node1 ~]$ scp /u02/orapwsrprim ora12cdr1:/u01/app/oracle/product/12.1.0.1/db1/dbs/orapwsrpstb1 oracle@ora12cdr1's password: orapwsrprim 100% 7680 7.5KB/s 00:00 [oracle@ora12c-node1 ~]$
Place the standby instance “srpstb1” in NOMOUNT state using the previously created PFILE.
[oracle@ora12cdr1 ~]$ . oraenv ORACLE_SID = [srpstb1] ? The Oracle base remains unchanged with value /u01/app/oracle [oracle@ora12cdr1 ~]$ sqlplus / as sysdba SQL*Plus: Release 12.1.0.1.0 Production on Thu Oct 22 20:00:19 2015 Copyright (c) 1982, 2013, Oracle. All rights reserved. Connected to an idle instance. SQL> startup nomount pfile='/u03/initsrpstb1.ora'; ORACLE instance started. Total System Global Area 1286066176 bytes Fixed Size 2287960 bytes Variable Size 452986536 bytes Database Buffers 822083584 bytes Redo Buffers 8708096 bytes
Connect to SRPRIM as target and “SRPSTB1” as auxiliary through RMAN and initiate the RMAN duplicate.
I’m using the “RMAN active duplicate” method here to create the standby.
[oracle@ora12cdr1 dbs]$ rman target sys/oracle@srprim auxiliary sys/oracle@srpstb1 Recovery Manager: Release 12.1.0.1.0 - Production on Fri Oct 23 20:46:09 2015 Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved. connected to target database: SRPRIM (DBID=307664432) connected to auxiliary database: SRPRIM (not mounted) RMAN> duplicate target database for standby from active database nofilenamecheck; Starting Duplicate Db at 23-OCT-15 using target database control file instead of recovery catalog allocated channel: ORA_AUX_DISK_1 channel ORA_AUX_DISK_1: SID=31 device type=DISK contents of Memory Script: { backup as copy reuse targetfile '+DATA/srprim/orapwsrprim' auxiliary format '/u01/app/oracle/product/12.1.0.1/db1/dbs/orapwsrpstb1' ; } executing Memory Script Starting backup at 23-OCT-15 allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=78 instance=srprim1 device type=DISK Finished backup at 23-OCT-15 . . . <output Trimmed> . . . input datafile copy RECID=26 STAMP=893883014 file name=+DATA/SRPSTB/20E576F1A6FE02E0E0537200A8C01581/DATAFILE/sysaux.260.893882967 datafile 11 switched to datafile copy input datafile copy RECID=27 STAMP=893883014 file name=+DATA/SRPSTB/20E576F1A6FE02E0E0537200A8C01581/DATAFILE/users.259.893882991 datafile 12 switched to datafile copy input datafile copy RECID=28 STAMP=893883014 file name=+DATA/SRPSTB/20E576F1A6FE02E0E0537200A8C01581/DATAFILE/example.258.893882993 Finished Duplicate Db at 23-OCT-15 RMAN>
Add the related cluster parameters to the standby database.
cluster_database=TRUE
srpstb1.undo_tablespace='UNDOTBS1'
srpstb2.undo_tablespace='UNDOTBS2'
srpstb2.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=drnode2-vip)(PORT=1521))))'
srpstb1.instance_number=1
srpstb2.instance_number=2
srpstb1.thread=1
srpstb2.thread=2
srpstb1.fal_client='srpstb1'
srpstb2.fal_client='srpstb2'
Here is how the PFILE of my standby database looks.
[oracle@ora12cdr1 dbs]$ cat initsrpstb1.ora srpstb1.__data_transfer_cache_size=0 srpstb1.__db_cache_size=805306368 srpstb1.__java_pool_size=16777216 srpstb1.__large_pool_size=150994944 srpstb1.__pga_aggregate_target=436207616 srpstb1.__sga_target=1291845632 srpstb1.__shared_io_pool_size=0 srpstb1.__shared_pool_size=285212672 srpstb1.__streams_pool_size=0 *.audit_file_dest='/u01/app/oracle/admin/srpstb/adump' *.audit_trail='DB' *.background_dump_dest='/u01/app/oracle/diag/rdbms/srpstb/srpstb1/trace'#Deprecate parameter *.compatible='12.1.0.0.0' *.connection_brokers='((TYPE=DEDICATED)(BROKERS=1))','((TYPE=EMON)(BROKERS=1))'# connection_brokers default value *.control_files='+DATA/SRPSTB/CONTROLFILE/current.267.893882827','+FRA1/SRPSTB/CONTROLFILE/current.258.893882827'#Restore Controlfile *.core_dump_dest='/u01/app/oracle/diag/rdbms/srpstb/srpstb1/cdump' *.db_block_size=8192 *.db_create_file_dest='+DATA' *.db_domain='' *.db_name='srprim' *.db_recovery_file_dest='+FRA1' *.db_recovery_file_dest_size=4000M *.db_unique_name='srpstb' *.diagnostic_dest='/u01/app/oracle' *.dispatchers='(PROTOCOL=TCP) (SERVICE=srpstbXDB)' *.enable_pluggable_database=TRUE *.fal_server='srprim' *.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=srpstb' *.log_archive_dest_2='service=srprim valid_for=(online_logfiles,primary_role) db_unique_name=srprim' *.log_buffer=8343552# log buffer update *.log_file_name_convert='+FRA','+FRA1' *.open_cursors=300 *.optimizer_dynamic_sampling=2 *.optimizer_mode='ALL_ROWS' *.pga_aggregate_target=410M *.plsql_warnings='DISABLE:ALL'# PL/SQL warnings at init.ora *.processes=300 *.query_rewrite_enabled='TRUE' *.remote_login_passwordfile='EXCLUSIVE' *.result_cache_max_size=6336K *.sga_target=1232M *.skip_unusable_indexes=TRUE #*.undo_tablespace='UNDBOTBSi1' *.user_dump_dest='/u01/app/oracle/diag/rdbms/srpstb/srpstb1/trace'#Deprecate parameter *.cluster_database=TRUE srpstb1.undo_tablespace='UNDOTBS1' srpstb2.undo_tablespace='UNDOTBS2' srpstb1.instance_number=1 srpstb2.instance_number=2 srpstb1.thread=1 srpstb2.thread=2 srpstb1.fal_client='srpstb1' srpstb2.fal_client='srpstb2' srpstb1.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=ora12cdr1-vip)(PORT=1521))))' srpstb2.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=ora12cdr2-vip)(PORT=1521))))' *.remote_listener='ora12cdr-scan:1521' [oracle@ora12cdr1 dbs]$
Connect to the standby instance “srpstb1” and create a global SPFILE on the diskgroup, which will be shared by all the standby nodes.
[oracle@ora12cdr1 dbs]$ sqlplus / as sysdba SQL*Plus: Release 12.1.0.1.0 Production on Sat Oct 24 10:50:07 2015 Copyright (c) 1982, 2013, Oracle. All rights reserved. Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP, Advanced Analytics and Real Application Testing options SQL>create spfile='+DATA/srpstb/spfilesrpstb.ora' from pfile; File created.
Create a PFILE on each of the standby nodes pointing to the shared SPFILE.
[oracle@ora12cdr1 dbs]$ cat initsrpstb1.ora spfile='+DATA/srpstb/spfilesrpstb.ora' [oracle@ora12cdr1 dbs]$ scp initsrpstb1.ora ora12cdr2:/u01/app/oracle/product/12.1.0.1/db1/dbs/initsrpstb2.ora initsrpstb1.ora 100% 39 0.0KB/s 00:00 [oracle@ora12cdr1 dbs]$
Having done this, we need to move the password file that was copied earlier to the “$ORACLE_HOME/dbs” location of the standby node “ora12cdr1” into ASM.
Once the password file is placed on an ASM diskgroup, it is shared across all the nodes, which removes the need to copy the password file to each node.
Using the “pwcopy” command through the “asmcmd” utility, copy the password file from “$ORACLE_HOME/dbs” to “+DATA”.
[oracle@ora12cdr1 dbs]$ . oraenv ORACLE_SID = [srpstb1] ? +ASM1 The Oracle base remains unchanged with value /u01/app/oracle [oracle@ora12cdr1 dbs]$ [oracle@ora12cdr1 dbs]$ asmcmd ASMCMD> pwcopy '/u01/app/oracle/product/12.1.0.1/db1/dbs/orapwsrpstb1' '+DATA/srpstb/orapwsrpstb' copying /u01/app/oracle/product/12.1.0.1/db1/dbs/orapwsrpstb1 -> +DATA/srpstb/orapwsrpstb ASMCMD>
Remove the password file that was copied earlier.
[oracle@ora12cdr1 dbs]$ pwd /u01/app/oracle/product/12.1.0.1/db1/dbs [oracle@ora12cdr1 dbs]$ [oracle@ora12cdr1 dbs]$ rm -rf orapwsrpstb1 [oracle@ora12cdr1 dbs]$
Add the standby database “srpstb” and its details to the cluster so that it can be managed by the clusterware. You can add the relevant options to the database registration accordingly.
[oracle@ora12cdr1 dbs]$ srvctl add database -db srpstb -o /u01/app/oracle/product/12.1.0.1/db1 -startoption mount -role physical_standby -pwfile +DATA/srpstb/orapwsrpstb
Add the instances “srpstb1” and “srpstb2” to the standby configuration.
[oracle@ora12cdr1 dbs]$ srvctl add instance -instance srpstb1 -db srpstb -node ora12cdr1 [oracle@ora12cdr1 dbs]$ srvctl add instance -instance srpstb2 -db srpstb -node ora12cdr2
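If the standby instances are not already up at this point, they can be started through the clusterware; the MOUNT start option registered above is honoured automatically (a small sketch, using the same db_unique_name as before):

[oracle@ora12cdr1 dbs]$ srvctl start database -db srpstb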
[oracle@ora12cdr1 dbs]$ srvctl status database -db srpstb Instance srpstb1 is running on node ora12cdr1 Instance srpstb2 is running on node ora12cdr2 [oracle@ora12cdr1 dbs]$
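Optionally, the shared SPFILE created earlier can also be registered against the database resource so that the clusterware knows about it explicitly. This is an extra step I am adding for completeness, not part of the original flow:

[oracle@ora12cdr1 dbs]$ srvctl modify database -db srpstb -spfile +DATA/srpstb/spfilesrpstb.ora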
Now it’s time to add the standby redo logs on both the primary and the standby databases.
My primary database has 2 threads with 2 online redo log groups per thread.
SQL> select thread#,group#,status,bytes/1024/1024 from v$log;

   THREAD#     GROUP# STATUS           BYTES/1024/1024
---------- ---------- ---------------- ---------------
         1          1 INACTIVE                      50
         1          2 CURRENT                       50
         2          3 INACTIVE                      50
         2          4 CURRENT                       50
Based on the Oracle documentation, calculate the number of standby redo logs required. This works out to 6:
(maximum number of logfiles for each thread + 1) * maximum number of threads
(2 + 1) * 2 = 6
So we need to add 6 groups of standby redo logs with 3 groups per thread.
SQL> alter database add standby logfile thread 1 size 50M;
Database altered.
SQL> alter database add standby logfile thread 1 size 50M;
Database altered.
SQL> alter database add standby logfile thread 1 size 50M;
Database altered.
SQL> alter database add standby logfile thread 2 size 50M;
Database altered.
SQL> alter database add standby logfile thread 2 size 50M;
Database altered.
SQL> alter database add standby logfile thread 2 size 50M;
Database altered.
Query the v$log and v$standby_log views to get the details of the Online Redo logs and the Standby Redo Logs.
We can see that standby redo log groups 5, 6 and 7 are associated with thread 1, and groups 8, 9 and 10 with thread 2.
SQL> select thread#,group#,status,bytes/1024/1024 from v$log;

   THREAD#     GROUP# STATUS           BYTES/1024/1024
---------- ---------- ---------------- ---------------
         1          1 INACTIVE                      50
         1          2 CURRENT                       50
         2          3 INACTIVE                      50
         2          4 CURRENT                       50

SQL> select thread#,group#,status,bytes/1024/1024 from v$standby_log;

   THREAD#     GROUP# STATUS     BYTES/1024/1024
---------- ---------- ---------- ---------------
         1          5 UNASSIGNED              50
         1          6 UNASSIGNED              50
         1          7 UNASSIGNED              50
         2          8 UNASSIGNED              50
         2          9 UNASSIGNED              50
         2         10 UNASSIGNED              50

6 rows selected.
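The standby redo logs above were added on the primary. To add them on the standby side as well (as mentioned earlier), run the same commands against the standby before starting managed recovery; if recovery were already running it would have to be cancelled first, and if STANDBY_FILE_MANAGEMENT were set to AUTO it would temporarily have to be switched to MANUAL. A minimal sketch on the standby:

SQL> alter database add standby logfile thread 1 size 50M;
SQL> alter database add standby logfile thread 2 size 50M;
-- repeat until there are 3 standby redo log groups per thread, matching the primary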
Connect to the standby database and start the MRP process. Also, verify the recovery status/progress of the standby.
[oracle@ora12cdr1 dbs]$ sqlplus sys/oracle@srpstb1 as sysdba SQL*Plus: Release 12.1.0.1.0 Production on Sat Oct 24 11:15:23 2015 Copyright (c) 1982, 2013, Oracle. All rights reserved. Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP, Advanced Analytics and Real Application Testing options SQL> select inst_id,status,instance_name,con_id from gv$instance; INST_ID STATUS INSTANCE_NAME CON_ID ---------- ------------ ---------------- ---------- 1 MOUNTED srpstb1 0 2 MOUNTED srpstb2 0 SQL> select inst_id,con_id,name,open_mode from gv$pdbs; INST_ID CON_ID NAME OPEN_MODE ---------- ---------- ------------------------------ ---------- 1 2 PDB$SEED MOUNTED 1 3 PDB1 MOUNTED 2 2 PDB$SEED MOUNTED 2 3 PDB1 MOUNTED SQL> alter database recover managed standby database disconnect; Database altered.
SQL> select process,status,sequence#,thread# from v$managed_standby;

PROCESS   STATUS        SEQUENCE#    THREAD#
--------- ------------ ---------- ----------
ARCH      CONNECTED             0          0
ARCH      CONNECTED             0          0
ARCH      CONNECTED             0          0
ARCH      CONNECTED             0          0
MRP0      APPLYING_LOG         15          2
RFS       IDLE                  0          0

6 rows selected.
On the primary, query the v$archived_log view to get the latest archivelog sequence generated per thread.
SQL> select thread#,max(sequence#) from v$archived_log group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1             32
         2             23
On the standby, compare the above results with the output of the query below. This shows that the standby is in sync with the primary.
SQL> select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1             31
         2             23
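A couple of generic queries also help keep an eye on the configuration (not specific to this setup): v$archive_dest_status on the primary shows the state of the remote destination, and v$dataguard_stats on the standby reports the transport and apply lag.

-- On the primary
SQL> select dest_id, status, error from v$archive_dest_status where dest_id = 2;

-- On the standby
SQL> select name, value from v$dataguard_stats where name in ('transport lag','apply lag');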
In 12c, a physical standby is a copy of the entire primary CDB; in other words, a standby is built for the complete CDB and not for individual containers.
That said, when a standby is built for a CDB with PDBs plugged into it, those PDBs get mirrored on the standby site as well.
Oracle 12c has a new feature called the “Far Sync instance”, which receives redo from the primary and ships it to the other members of the Data Guard configuration. A far sync instance is a remote instance, expected to be hosted on a server other than the primary and standby servers. It receives the redo from the primary into its standby redo logs (SRLs) and archives it to its local archive destination. Note that a far sync instance is not a cascading standby, even though it receives redo from the primary and ships it to the standby. Also, implementing Far Sync requires additional licensing.
Redo apply cannot run on a far sync instance, and it can never operate as either a primary or a standby database. A far sync instance is always in MOUNT state, and there is no option to add datafiles to it.
Introducing a far sync instance into a Data Guard configuration makes it possible to fail over to the required destination with zero data loss, and it reduces the load on the primary in terms of redo transport to the standby, while itself consuming very minimal resources.
When synchronous redo transport is implemented, the primary database has to wait for an acknowledgment from the standby that the redo for a transaction has been received and written to disk before the commit is confirmed. This wait can be offloaded by placing a FAR SYNC instance close to the primary site and letting it acknowledge the redo instead.
This reduces the load on the primary database compared with shipping the redo directly to a standby database located at a far remote site.
The configuration is quite simple: SYNC transport is configured on the primary to ship the redo to the FAR SYNC instance, which in turn forwards it to the remote standby.
This reduces the commit response time on the primary, offloads it from shipping redo to a far remote standby, and ensures that there is zero data loss.
Let’s begin with creating the physical standby with FAR SYNC instance in place.
Environment:
Primary CDB Name                      : oraprim
PDB name plugged into the primary CDB : TESTPDB1
Primary Database Server Name          : ora12c-1
FAR SYNC Instance Name                : orafs
FAR SYNC Instance Server Name         : ora12c-2
Standby CDB Name                      : orastb
Standby Database Server Name          : ora12c-3
Configure the required initialization parameters (log archive destinations) on the primary.
Here is how my PFILE of the Primary database looks. You can see that the parameter “log_archive_dest_2” is configured to use the “SYNC” mode of redo transport to the Far Sync Instance “orafs”.
I have set “log_archive_dest_3” as an alternate destination which ships the redo directly to the standby in case the “Far Sync Instance” is unreachable. Remember to set “max_failure=1” on log_archive_dest_3, so that when the far sync is unreachable, log_archive_dest_2 does not keep waiting; upon one failure, the configuration switches to “log_archive_dest_3”.
*.log_archive_dest_2='service=orafs SYNC AFFIRM alternate=log_archive_dest_3 valid_for=(online_logfiles,primary_role) db_unique_name=orafs'
*.log_archive_dest_3='service=orastb SYNC max_failure=1 alternate=log_archive_dest_2 valid_for=(online_logfiles,primary_role) db_unique_name=orastb'
*.log_archive_dest_state_3=alternate
Also, set the “log_archive_config” parameter to list the DB_UNIQUE_NAMEs of the primary database, the far sync instance and the standby database, as all of these belong to the same Data Guard configuration.
initoraprim.ora
[oracle@ora12c-1 ~]$ cat /u01/app/oracle/product/12.1.0.1/db_1/dbs/initoraprim.ora oraprim.__data_transfer_cache_size=0 oraprim.__db_cache_size=637534208 oraprim.__java_pool_size=12582912 oraprim.__large_pool_size=8388608 oraprim.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment oraprim.__pga_aggregate_target=314572800 oraprim.__sga_target=943718400 oraprim.__shared_io_pool_size=46137344 oraprim.__shared_pool_size=226492416 oraprim.__streams_pool_size=0 *.audit_file_dest='/u01/app/oracle/admin/oraprim/adump' *.audit_trail='db' *.compatible='12.1.0.0.0' *.control_files='/u01/app/oracle/oradata/oraprim/control01.ctl','/u01/app/oracle/fast_recovery_area/oraprim/control02.ctl' *.db_block_size=8192 *.db_domain='' *.db_name='oraprim' *.db_unique_name='oraprim' *.db_recovery_file_dest='/u01/app/oracle/fast_recovery_area' *.db_recovery_file_dest_size=4800m *.diagnostic_dest='/u01/app/oracle' *.dispatchers='(PROTOCOL=TCP) (SERVICE=oraprimXDB)' *.enable_pluggable_database=true *.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=oraprim' *.log_archive_dest_2='service=orafs SYNC AFFIRM alternate=log_archive_dest_3 valid_for=(online_logfiles,primary_role) db_unique_name=orafs' *.log_archive_dest_3='service=orastb ASYNC max_failure=1 alternate=log_archive_dest_2 valid_for=(online_logfiles,primary_role) db_unique_name=orastb' *.log_archive_dest_state_3=alternate *.log_archive_config='DG_CONFIG=(oraprim,orafs,orastb)' *.local_listener='LISTENER_ORAPRIM' *.open_cursors=300 *.pga_aggregate_target=300m *.processes=300 *.remote_login_passwordfile='EXCLUSIVE' *.sga_target=900m *.undo_tablespace='UNDOTBS1' *.fal_server='orastb' *.db_file_name_convert='/u01/app/oracle/oradata/orastb/','/u01/app/oracle/oradata/oraprim/' *.log_file_name_convert='/u01/app/oracle/oradata/orastb/','/u01/app/oracle/oradata/oraprim/'
Create a PFILE for the FAR SYNC instance with the basic required parameters. Remember to set one archive destination as its local archival destination and another to ship the redo to the standby database. Here, log_archive_dest_1 and log_archive_dest_2 are set as the local and remote archival destinations respectively.
As said previously, set the “log_archive_config” parameter on the far sync instance and set the “fal_server” parameter to the net service name of the primary database (because it is from the primary database that the far sync receives its redo).
Here is how my PFILE of the Far Sync Instance looks:
initorafs.ora
[oracle@ora12c-2 ~]$ cat /u01/app/oracle/product/12.1.0.1/db1/dbs/initorafs.ora orafs.__data_transfer_cache_size=0 orafs.__db_cache_size=637534208 orafs.__java_pool_size=12582912 orafs.__large_pool_size=8388608 orafs.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment orafs.__pga_aggregate_target=314572800 orafs.__sga_target=943718400 orafs.__shared_io_pool_size=46137344 orafs.__shared_pool_size=226492416 orafs.__streams_pool_size=0 *.audit_file_dest='/u01/app/oracle/admin/orafs/adump' *.audit_trail='db' *.compatible='12.1.0.0.0' *.control_files='/u01/app/oracle/oradata/orafs/control01.ctl','/u01/app/oracle/fast_recovery_area/orafs/control02.ctl' *.db_block_size=8192 *.db_domain='' *.db_name='oraprim' *.db_unique_name='orafs' *.db_recovery_file_dest='/u01/app/oracle/fast_recovery_area' *.db_recovery_file_dest_size=4800m *.diagnostic_dest='/u01/app/oracle' *.dispatchers='(PROTOCOL=TCP) (SERVICE=orafsXDB)' *.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=orafs' *.log_archive_dest_2='service=orastb ASYNC valid_for=(standby_logfiles,standby_role) db_unique_name=orastb' *.log_archive_config='DG_CONFIG=(oraprim,orafs,orastb)' *.local_listener='LISTENER_ORAFS' *.open_cursors=300 *.enable_pluggable_database=true *.pga_aggregate_target=300m *.processes=300 *.remote_login_passwordfile='EXCLUSIVE' *.sga_target=900m *.undo_tablespace='UNDOTBS1' *.fal_server='oraprim' *.db_file_name_convert='/u01/app/oracle/oradata/oraprim/','/u01/app/oracle/oradata/orafs/' *.log_file_name_convert='/u01/app/oracle/oradata/oraprim/','/u01/app/oracle/oradata/orafs/'
Similarly, create a PFILE for the standby database with all the required parameters.
Set the “fal_server” parameter to the net service names of both the Far Sync instance and the primary database. This way, if the Far Sync instance is unavailable, the standby will fetch the redo directly from the primary.
*.fal_server='orafs','oraprim'
initorastb.ora
[oracle@ora12c-3 ~]$ cat /u01/app/oracle/product/12.1.0.1/db_1/dbs/initorastb.ora orastb.__data_transfer_cache_size=0 orastb.__db_cache_size=637534208 orastb.__java_pool_size=12582912 orastb.__large_pool_size=8388608 orastb.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment orastb.__pga_aggregate_target=314572800 orastb.__sga_target=943718400 orastb.__shared_io_pool_size=46137344 orastb.__shared_pool_size=226492416 orastb.__streams_pool_size=0 *.audit_file_dest='/u01/app/oracle/admin/orastb/adump' *.audit_trail='db' *.compatible='12.1.0.0.0' *.control_files='/u01/app/oracle/oradata/orastb/control01.ctl','/u01/app/oracle/fast_recovery_area/orastb/control02.ctl' *.db_block_size=8192 *.db_domain='' *.db_name='oraprim' *.db_unique_name='orastb' *.db_recovery_file_dest='/u01/app/oracle/fast_recovery_area' *.db_recovery_file_dest_size=4800m *.diagnostic_dest='/u01/app/oracle' *.dispatchers='(PROTOCOL=TCP) (SERVICE=orastbXDB)' *.enable_pluggable_database=true *.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=orastb' *.log_archive_dest_2='service=oraprim ASYNC valid_for=(online_logfiles,primary_role) db_unique_name=oraprim' *.log_archive_config='DG_CONFIG=(oraprim,orafs,orastb)' *.local_listener='LISTENER_ORASTB' *.open_cursors=300 *.pga_aggregate_target=300m *.processes=300 *.remote_login_passwordfile='EXCLUSIVE' *.sga_target=900m *.undo_tablespace='UNDOTBS1' *.fal_server='orafs','oraprim' *.db_file_name_convert='/u01/app/oracle/oradata/oraprim/','/u01/app/oracle/oradata/orastb/' *.log_file_name_convert='/u01/app/oracle/oradata/oraprim/','/u01/app/oracle/oradata/orastb/'
Configure the TNS entries for the primary, the standby and the far sync instance on all 3 servers so that each is reachable from the others.
ORAPRIM = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12c-1.mydomain)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = oraprim) ) ) ORAFS = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12c-2.mydomain)(PORT = 1521)) ) (CONNECT_DATA = (SERVICE_NAME = orafs) ) ) ORASTB = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12c-3.mydomain)(PORT = 1521)) ) (CONNECT_DATA = (SERVICE_NAME = orastb) ) )
Start the listener on each of the servers and make sure the services of the respective instances are registered with their listeners.
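Because the far sync instance and the standby are connected to over TNS while they are down or only in NOMOUNT/MOUNT state (for example when mounting the far sync and during the RMAN duplication), a static registration in the listener.ora of those servers is usually needed. A minimal sketch for the standby server ora12c-3, using the ORACLE_HOME path from this environment; adjust names and paths to your own:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = orastb)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0.1/db_1)
      (SID_NAME = orastb)
    )
  )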
To create a far sync instance controlfile, run the “create far sync instance controlfile” command on the primary database. This controlfile will be used to mount the far sync instance.
On the primary:
[oracle@ora12c-1 dbs]$ sqlplus sys/oracle@oraprim as sysdba SQL*Plus: Release 12.1.0.1.0 Production on Tue Sep 15 20:31:06 2015 Copyright (c) 1982, 2013, Oracle. All rights reserved. Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options SQL> archive log list Database log mode Archive Mode Automatic archival Enabled Archive destination USE_DB_RECOVERY_FILE_DEST Oldest online log sequence 7 Next log sequence to archive 9 Current log sequence 9 SQL> SQL> SQL> select name,open_mode from v$pdbs; NAME OPEN_MODE ----------------- ---------------- PDB$SEED READ ONLY TESTPDB1 READ WRITE
SQL> alter database create far sync instance controlfile as '/u02/farsync.ctl';

Database altered.
Transfer the above created controlfile to the “far sync instance” server.
[oracle@ora12c-1 dbs]$ scp /u02/farsync.ctl oracle@ora12c-2:/u01/app/oracle/oradata/orafs/control01.ctl oracle@ora12c-2's password: farsync.ctl 100% 17MB 17.1MB/s 00:01 [oracle@ora12c-1 ~]$ scp /u02/farsync.ctl oracle@ora12c-2:/u01/app/oracle/fast_recovery_area/orafs/control02.ctl oracle@ora12c-2's password: farsync.ctl 100% 17MB 17.1MB/s 00:01
Copy the password file of the primary database to the far sync and standby servers and rename the copies according to the far sync instance name and the standby database name.
[oracle@ora12c-1 ~]$ scp /u01/app/oracle/product/12.1.0.1/db_1/dbs/orapworaprim oracle@ora12c-3:/u01/app/oracle/product/12.1.0.1/db_1/dbs/orapworastb oracle@ora12c-3's password: orapworaprim 100% 7680 7.5KB/s 00:00 [oracle@ora12c-1 ~]$ scp /u01/app/oracle/product/12.1.0.1/db_1/dbs/orapworaprim oracle@ora12c-2:/u01/app/oracle/product/12.1.0.1/db1/dbs/orapworafs oracle@ora12c-2's password: orapworaprim 100% 7680 7.5KB/s 00:00 [oracle@ora12c-1 ~]$
Mount the “far sync Instance” with the previously copied controlfile.
[oracle@ora12c-2 dbs]$ sqlplus sys/oracle@orafs as sysdba SQL*Plus: Release 12.1.0.1.0 Production on Wed Sep 16 08:20:16 2015 Copyright (c) 1982, 2013, Oracle. All rights reserved. Connected to an idle instance. SQL> startup mount ORACLE instance started. Total System Global Area 939495424 bytes Fixed Size 2295080 bytes Variable Size 348130008 bytes Database Buffers 583008256 bytes Redo Buffers 6062080 bytes Database mounted. SQL> select status,instance_name,database_role from v$database,v$instance; STATUS INSTANCE_NAME DATABASE_ROLE --------- ---------------- ---------------- MOUNTED orafs FAR SYNC
Start the standby instance using the pfile created previously.
[oracle@ora12c-3 dbs]$ sqlplus sys/oracle@orastb as sysdba SQL*Plus: Release 12.1.0.1.0 Production on Wed Sep 16 09:14:53 2015 Copyright (c) 1982, 2013, Oracle. All rights reserved. Connected to an idle instance. SQL> startup nomount ORACLE instance started. Total System Global Area 939495424 bytes Fixed Size 2295080 bytes Variable Size 348130008 bytes Database Buffers 583008256 bytes Redo Buffers 6062080 bytes
Initiate the duplication through RMAN by connecting to the primary database.
[oracle@ora12c-3 dbs]$ rman target sys/oracle@oraprim auxiliary sys/oracle@orastb Recovery Manager: Release 12.1.0.1.0 - Production on Wed Sep 16 09:52:48 2015 Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved. connected to target database: ORAPRIM (DBID=4209209247) connected to auxiliary database: ORAPRIM (not mounted) RMAN> run 2> { 3> allocate channel ch1 device type disk; 4> allocate channel ch2 device type disk; 5> allocate auxiliary channel aux1 device type disk; 6> allocate auxiliary channel aux2 device type disk; 7> duplicate target database for standby from active database nofilenamecheck; 8> release channel ch1; 9> release channel ch2; 10> } using target database control file instead of recovery catalog allocated channel: ch1 channel ch1: SID=46 device type=DISK allocated channel: ch2 channel ch2: SID=55 device type=DISK allocated channel: aux1 channel aux1: SID=20 device type=DISK allocated channel: aux2 channel aux2: SID=21 device type=DISK Starting Duplicate Db at 16-SEP-15 contents of Memory Script: { backup as copy reuse targetfile '/u01/app/oracle/product/12.1.0.1/db_1/dbs/orapworaprim' auxiliary format '/u01/app/oracle/product/12.1.0.1/db_1/dbs/orapworastb' ; } executing Memory Script Starting backup at 16-SEP-15 Finished backup at 16-SEP-15 contents of Memory Script: { restore clone from service 'oraprim' standby controlfile; } executing Memory Script Starting restore at 16-SEP-15 channel aux1: starting datafile backup set restore channel aux1: using network backup set from service oraprim channel aux1: restoring control file channel aux1: restore complete, elapsed time: 00:00:03 output file name=/u01/app/oracle/oradata/orastb/control01.ctl output file name=/u01/app/oracle/fast_recovery_area/orastb/control02.ctl Finished restore at 16-SEP-15 contents of Memory Script: { sql clone 'alter database mount standby database'; } executing Memory Script sql statement: alter database mount standby database contents of Memory Script: { set newname for tempfile 1 to "/u01/app/oracle/oradata/orastb/temp01.dbf"; set newname for tempfile 2 to "/u01/app/oracle/oradata/orastb/pdbseed/pdbseed_temp01.dbf"; set newname for tempfile 3 to "/u01/app/oracle/oradata/orastb/testpdb1/testpdb1_temp01.dbf"; switch clone tempfile all; set newname for datafile 1 to "/u01/app/oracle/oradata/orastb/system01.dbf"; set newname for datafile 3 to "/u01/app/oracle/oradata/orastb/sysaux01.dbf"; set newname for datafile 4 to "/u01/app/oracle/oradata/orastb/undotbs01.dbf"; set newname for datafile 5 to "/u01/app/oracle/oradata/orastb/pdbseed/system01.dbf"; set newname for datafile 6 to "/u01/app/oracle/oradata/orastb/users01.dbf"; set newname for datafile 7 to "/u01/app/oracle/oradata/orastb/pdbseed/sysaux01.dbf"; set newname for datafile 8 to "/u01/app/oracle/oradata/orastb/testpdb1/system01.dbf"; set newname for datafile 9 to "/u01/app/oracle/oradata/orastb/testpdb1/sysaux01.dbf"; set newname for datafile 10 to "/u01/app/oracle/oradata/orastb/testpdb1/SAMPLE_SCHEMA_users01.dbf"; set newname for datafile 11 to "/u01/app/oracle/oradata/orastb/testpdb1/example01.dbf"; restore from service 'oraprim' clone database ; sql 'alter system archive log current'; } executing Memory Script executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME renamed tempfile 1 to /u01/app/oracle/oradata/orastb/temp01.dbf in control file renamed tempfile 2 to /u01/app/oracle/oradata/orastb/pdbseed/pdbseed_temp01.dbf in control file renamed tempfile 3 to 
/u01/app/oracle/oradata/orastb/testpdb1/testpdb1_temp01.dbf in control file executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME Starting restore at 16-SEP-15 channel aux1: starting datafile backup set restore channel aux1: using network backup set from service oraprim channel aux1: specifying datafile(s) to restore from backup set channel aux1: restoring datafile 00001 to /u01/app/oracle/oradata/orastb/system01.dbf channel aux2: starting datafile backup set restore channel aux2: using network backup set from service oraprim channel aux2: specifying datafile(s) to restore from backup set channel aux2: restoring datafile 00003 to /u01/app/oracle/oradata/orastb/sysaux01.dbf channel aux1: restore complete, elapsed time: 00:00:45 channel aux1: starting datafile backup set restore channel aux1: using network backup set from service oraprim channel aux1: specifying datafile(s) to restore from backup set channel aux1: restoring datafile 00004 to /u01/app/oracle/oradata/orastb/undotbs01.dbf channel aux2: restore complete, elapsed time: 00:00:45 channel aux2: starting datafile backup set restore channel aux2: using network backup set from service oraprim channel aux2: specifying datafile(s) to restore from backup set channel aux2: restoring datafile 00005 to /u01/app/oracle/oradata/orastb/pdbseed/system01.dbf channel aux1: restore complete, elapsed time: 00:00:07 channel aux1: starting datafile backup set restore channel aux1: using network backup set from service oraprim channel aux1: specifying datafile(s) to restore from backup set channel aux1: restoring datafile 00006 to /u01/app/oracle/oradata/orastb/users01.dbf channel aux1: restore complete, elapsed time: 00:00:03 channel aux1: starting datafile backup set restore channel aux1: using network backup set from service oraprim channel aux1: specifying datafile(s) to restore from backup set channel aux1: restoring datafile 00007 to /u01/app/oracle/oradata/orastb/pdbseed/sysaux01.dbf channel aux2: restore complete, elapsed time: 00:00:26 channel aux2: starting datafile backup set restore channel aux2: using network backup set from service oraprim channel aux2: specifying datafile(s) to restore from backup set channel aux2: restoring datafile 00008 to /u01/app/oracle/oradata/orastb/testpdb1/system01.dbf channel aux1: restore complete, elapsed time: 00:00:50 channel aux1: starting datafile backup set restore channel aux1: using network backup set from service oraprim channel aux1: specifying datafile(s) to restore from backup set channel aux1: restoring datafile 00009 to /u01/app/oracle/oradata/orastb/testpdb1/sysaux01.dbf channel aux2: restore complete, elapsed time: 00:00:35 channel aux2: starting datafile backup set restore channel aux2: using network backup set from service oraprim channel aux2: specifying datafile(s) to restore from backup set channel aux2: restoring datafile 00010 to /u01/app/oracle/oradata/orastb/testpdb1/SAMPLE_SCHEMA_users01.dbf channel aux2: restore complete, elapsed time: 00:00:01 channel aux2: starting datafile backup set restore channel aux2: using network backup set from service oraprim channel aux2: specifying datafile(s) to restore from backup set channel aux2: restoring datafile 00011 to /u01/app/oracle/oradata/orastb/testpdb1/example01.dbf channel aux2: restore 
complete, elapsed time: 00:00:25 channel aux1: restore complete, elapsed time: 00:00:46 Finished restore at 16-SEP-15 sql statement: alter system archive log current contents of Memory Script: { switch clone datafile all; } executing Memory Script datafile 1 switched to datafile copy input datafile copy RECID=8 STAMP=890560571 file name=/u01/app/oracle/oradata/orastb/system01.dbf datafile 3 switched to datafile copy input datafile copy RECID=9 STAMP=890560571 file name=/u01/app/oracle/oradata/orastb/sysaux01.dbf datafile 4 switched to datafile copy input datafile copy RECID=10 STAMP=890560571 file name=/u01/app/oracle/oradata/orastb/undotbs01.dbf datafile 5 switched to datafile copy input datafile copy RECID=11 STAMP=890560571 file name=/u01/app/oracle/oradata/orastb/pdbseed/system01.dbf datafile 6 switched to datafile copy input datafile copy RECID=12 STAMP=890560571 file name=/u01/app/oracle/oradata/orastb/users01.dbf datafile 7 switched to datafile copy input datafile copy RECID=13 STAMP=890560571 file name=/u01/app/oracle/oradata/orastb/pdbseed/sysaux01.dbf datafile 8 switched to datafile copy input datafile copy RECID=14 STAMP=890560571 file name=/u01/app/oracle/oradata/orastb/testpdb1/system01.dbf datafile 9 switched to datafile copy input datafile copy RECID=15 STAMP=890560571 file name=/u01/app/oracle/oradata/orastb/testpdb1/sysaux01.dbf datafile 10 switched to datafile copy input datafile copy RECID=16 STAMP=890560571 file name=/u01/app/oracle/oradata/orastb/testpdb1/SAMPLE_SCHEMA_users01.dbf datafile 11 switched to datafile copy input datafile copy RECID=17 STAMP=890560571 file name=/u01/app/oracle/oradata/orastb/testpdb1/example01.dbf Finished Duplicate Db at 16-SEP-15 released channel: ch1 released channel: ch2 released channel: aux1 released channel: aux2 RMAN>
On the primary I have 3 online redo log groups, so I shall create 4 standby redo log groups each on the primary database, the far sync instance and the standby database.
SQL> select group#,bytes/1024/1024,members from v$log;

    GROUP# BYTES/1024/1024    MEMBERS
---------- --------------- ----------
         1              50          1
         2              50          1
         3              50          1

SQL> select member,group# from v$logfile;

MEMBER                                                           GROUP#
------------------------------------------------------------ ----------
/u01/app/oracle/oradata/oraprim/redo03.log                             3
/u01/app/oracle/oradata/oraprim/redo02.log                             2
/u01/app/oracle/oradata/oraprim/redo01.log                             1
SQL> alter database add standby logfile group 4 '/u01/app/oracle/oradata/oraprim/redo04.log' size 50M;
Database altered.
SQL> alter database add standby logfile group 5 '/u01/app/oracle/oradata/oraprim/redo05.log' size 50M;
Database altered.
SQL> alter database add standby logfile group 6 '/u01/app/oracle/oradata/oraprim/redo06.log' size 50M;
Database altered.
SQL> alter database add standby logfile group 7 '/u01/app/oracle/oradata/oraprim/redo07.log' size 50M;
Database altered.
SQL> select group#,status from v$standby_log;

    GROUP# STATUS
---------- ----------
         4 UNASSIGNED
         5 UNASSIGNED
         6 UNASSIGNED
         7 UNASSIGNED

SQL> select group#,status from v$log;

    GROUP# STATUS
---------- ----------------
         1 INACTIVE
         2 INACTIVE
         3 CURRENT
Add the same on the far sync instance.
On Far SYNC:
SQL> alter database add standby logfile group 4 '/u01/app/oracle/oradata/orafs/redo04.log' size 50M;
Database altered.
SQL> alter database add standby logfile group 5 '/u01/app/oracle/oradata/orafs/redo05.log' size 50M;
Database altered.
SQL> alter database add standby logfile group 6 '/u01/app/oracle/oradata/orafs/redo06.log' size 50M;
Database altered.
SQL> alter database add standby logfile group 7 '/u01/app/oracle/oradata/orafs/redo07.log' size 50M;
Database altered.
Finally, add them on the standby:
SQL> alter database add standby logfile group 4 '/u01/app/oracle/oradata/orastb/redo04.log' size 50M;
Database altered.
SQL> alter database add standby logfile group 5 '/u01/app/oracle/oradata/orastb/redo05.log' size 50M;
Database altered.
SQL> alter database add standby logfile group 6 '/u01/app/oracle/oradata/orastb/redo06.log' size 50M;
Database altered.
SQL> alter database add standby logfile group 7 '/u01/app/oracle/oradata/orastb/redo07.log' size 50M;
Database altered.
Now, start the MRP on the standby database. The command below starts the MRP and enables real-time apply.
Note that in 12c you no longer need the “USING CURRENT LOGFILE” clause to enable real-time apply when starting the MRP.
SQL> alter database recover managed standby database disconnect;

Database altered.

SQL> select process,status,sequence# from v$managed_standby;

PROCESS   STATUS        SEQUENCE#
--------- ------------ ----------
ARCH      CONNECTED             0
ARCH      CLOSING              18
ARCH      CONNECTED             0
ARCH      CONNECTED             0
RFS       IDLE                  0
MRP0      APPLYING_LOG         19
RFS       IDLE                 19
RFS       IDLE                  0
RFS       IDLE                  0

9 rows selected.
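To confirm that real-time apply really is in effect on the standby, the recovery mode of the local archive destination can be checked; with real-time apply active it reports MANAGED REAL TIME APPLY (a generic check, dest_id 1 being the local destination in this setup):

SQL> select dest_id, recovery_mode from v$archive_dest_status where dest_id = 1;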
On the Far Sync instance, run the v$managed_standby query to see what is happening.
SQL> select process,status,sequence# from v$managed_standby;

PROCESS   STATUS        SEQUENCE#
--------- ------------ ----------
ARCH      CLOSING              16
ARCH      CLOSING              18
ARCH      CLOSING              17
ARCH      CLOSING              15
RFS       IDLE                  0
RFS       IDLE                  0
RFS       IDLE                  0
LNS       WRITING              19
RFS       IDLE                 19

9 rows selected.
We can see that the far sync is receiving (RFS process) the redo from the primary.
You can open the standby database in READ ONLY mode to make use of Active Data Guard (the Active Data Guard option requires additional licensing).
SQL> alter database recover managed standby database cancel;
Database altered.
SQL> alter database open;
Database altered.
SQL> alter database recover managed standby database disconnect;
Database altered.
When the standby is open in READ ONLY mode, you can also open the plugged-in PDBs in READ ONLY mode.
SQL> select status,instance_name,database_role,open_mode from v$database,v$instance;

STATUS   INSTANCE_NAME DATABASE_ROLE       OPEN_MODE
-------- ------------- ------------------- --------------------
OPEN     orastb        PHYSICAL STANDBY    READ ONLY WITH APPLY

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> select name,open_mode from v$pdbs;

NAME              OPEN_MODE
----------------- ----------------
PDB$SEED          READ ONLY
TESTPDB1          MOUNTED
We can open the PDB TESTPDB1 in read only mode.
SQL> alter pluggable database testpdb1 open;

Pluggable database altered.

SQL> select name,open_mode from v$pdbs;

NAME              OPEN_MODE
----------------- ----------------
PDB$SEED          READ ONLY
TESTPDB1          READ ONLY
In Oracle 12c, a point-in-time recovery (PITR) can be performed at the PDB level, just as for a non-CDB. This post demonstrates how to perform a Point In Time Recovery (PITR) of a Pluggable Database (PDB).
In Oracle 12c, RMAN backups can be taken of a complete CDB, of a specific PDB, or of just the ROOT. A backup of a CDB includes the ROOT, the SEED and all the PDBs of that CDB. To proceed, I am taking a level 0 backup of the CDB.
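For reference, backing up only a single PDB or only the root, instead of the whole CDB, would look along these lines (generic RMAN syntax, not part of the backup run shown below):

RMAN> backup pluggable database pdb1;
RMAN> backup database root;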
Environment:
CDB Name : oracdb
PDB Name : pdb1
Hostname : ora12c-1
[oracle@ora12c-1 ~]$ rman target / Recovery Manager: Release 12.1.0.1.0 - Production on Fri Oct 2 18:31:11 2015 Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved. connected to target database: ORACDB (DBID=2564384489) RMAN> run 2> { 3> backup as compressed backupset incremental level 0 database format '/u02/bkp/%d_inc0_%T_%U.bak'; 4> backup as compressed backupset archivelog all delete input format '/u02/bkp/%d_arc_%T_%U.bak'; 5> } Starting backup at 02-OCT-15 using target database control file instead of recovery catalog allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=36 device type=DISK channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00001 name=/u01/app/oracle/oradata/oracdb/system01.dbf input datafile file number=00004 name=/u01/app/oracle/oradata/oracdb/undotbs01.dbf input datafile file number=00003 name=/u01/app/oracle/oradata/oracdb/sysaux01.dbf input datafile file number=00006 name=/u01/app/oracle/oradata/oracdb/users01.dbf channel ORA_DISK_1: starting piece 1 at 02-OCT-15 channel ORA_DISK_1: finished piece 1 at 02-OCT-15 piece handle=/u02/bkp/ORACDB_inc0_20151002_01qinglk_1_1.bak tag=TAG20151002T183220 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:01:45 channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00033 name=/u02/oradata/pdb1/sysaux01.dbf input datafile file number=00032 name=/u02/oradata/pdb1/system01.dbf input datafile file number=00034 name=/u02/oradata/pdb1/myts01.dbf channel ORA_DISK_1: starting piece 1 at 02-OCT-15 channel ORA_DISK_1: finished piece 1 at 02-OCT-15 piece handle=/u02/bkp/ORACDB_inc0_20151002_02qingot_1_1.bak tag=TAG20151002T183220 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:56 channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00007 name=/u01/app/oracle/oradata/oracdb/pdbseed/sysaux01.dbf input datafile file number=00005 name=/u01/app/oracle/oradata/oracdb/pdbseed/system01.dbf channel ORA_DISK_1: starting piece 1 at 02-OCT-15 channel ORA_DISK_1: finished piece 1 at 02-OCT-15 piece handle=/u02/bkp/ORACDB_inc0_20151002_03qingql_1_1.bak tag=TAG20151002T183220 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:55 Finished backup at 02-OCT-15 Starting backup at 02-OCT-15 current log archived using channel ORA_DISK_1 channel ORA_DISK_1: starting compressed archived log backup set channel ORA_DISK_1: specifying archived log(s) in backup set input archived log thread=1 sequence=60 RECID=1 STAMP=892060183 input archived log thread=1 sequence=61 RECID=2 STAMP=892060184 input archived log thread=1 sequence=62 RECID=3 STAMP=892060185 input archived log thread=1 sequence=63 RECID=4 STAMP=892060186 input archived log thread=1 sequence=64 RECID=5 STAMP=892060186 input archived log thread=1 sequence=65 RECID=6 STAMP=892060188 input archived log thread=1 sequence=66 RECID=7 STAMP=892060558 channel ORA_DISK_1: starting piece 1 at 02-OCT-15 channel ORA_DISK_1: finished piece 1 at 02-OCT-15 piece handle=/u02/bkp/ORACDB_arc_20151002_04qingsf_1_1.bak tag=TAG20151002T183559 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01 channel ORA_DISK_1: deleting archived log(s) archived log file 
name=/u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_02/o1_mf_1_60_c0wzxzr0_.arc RECID=1 STAMP=892060183 archived log file name=/u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_02/o1_mf_1_61_c0wzy0qo_.arc RECID=2 STAMP=892060184 archived log file name=/u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_02/o1_mf_1_62_c0wzy1nd_.arc RECID=3 STAMP=892060185 archived log file name=/u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_02/o1_mf_1_63_c0wzy21y_.arc RECID=4 STAMP=892060186 archived log file name=/u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_02/o1_mf_1_64_c0wzy2mg_.arc RECID=5 STAMP=892060186 archived log file name=/u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_02/o1_mf_1_65_c0wzy456_.arc RECID=6 STAMP=892060188 archived log file name=/u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_02/o1_mf_1_66_c0x09pwf_.arc RECID=7 STAMP=892060558 Finished backup at 02-OCT-15 Starting Control File and SPFILE Autobackup at 02-OCT-15 piece handle=/u01/app/oracle/fast_recovery_area/ORACDB/autobackup/2015_10_02/o1_mf_s_892060560_c0x09s9z_.bkp comment=NONE Finished Control File and SPFILE Autobackup at 02-OCT-15 RMAN>
The database (CDB and PDB) details along with their incarnations are shown below.
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> show pdbs

    CON_ID CON_NAME         OPEN MODE    RESTRICTED
---------- ---------------- ------------ ------------
         2 PDB$SEED         READ ONLY    NO
         3 PDB1             MOUNTED
SQL>select DB_INCARNATION#,PDB_INCARNATION#,STATUS,INCARNATION_SCN,INCARNATION_TIME,BEGIN_RESETLOGS_SCN,BEGIN_RESETLOGS_TIME,CON_ID from v$pdb_incarnation order by con_id; DB_INCARNATION# PDB_INCARNATION# STATUS INCARNATION_SCN INCARNATI BEGIN_RESETLOGS_SCN BEGIN_RES CON_ID --------------- ---------------- ------- --------------- --------- ------------------- --------- ---------- 2 0 CURRENT 1720082 09-SEP-15 1720082 09-SEP-15 1 1 0 PARENT 1 24-MAY-13 1 24-MAY-13 1 2 0 CURRENT 1720082 09-SEP-15 1720082 09-SEP-15 2 1 0 PARENT 1 24-MAY-13 1 24-MAY-13 2 2 0 CURRENT 1720082 09-SEP-15 1720082 09-SEP-15 3 1 0 PARENT 1 24-MAY-13 1 24-MAY-13 3 6 rows selected.
We can see that the current incarnation of CON_ID 3 (PDB1) started at SCN 1720082 on 9th September 2015.
Here is a case where all the datafiles of PDB1 were accidentally removed, which prevents me from opening PDB1.
SQL> alter pluggable database pdb1 open; alter pluggable database pdb1 open * ERROR at line 1: ORA-01157: cannot identify/lock data file 34 - see DBWR trace file ORA-01110: data file 34: '/u02/oradata/pdb1/myts01.dbf' Sat Oct 03 10:52:21 2015 Errors in file /u01/app/oracle/diag/rdbms/oracdb/oracdb/trace/oracdb_dbw0_2854.trc: ORA-01157: cannot identify/lock data file 34 - see DBWR trace file ORA-01110: data file 34: '/u02/oradata/pdb1/myts01.dbf' ORA-27037: unable to obtain file status Linux-x86_64 Error: 2: No such file or directory Additional information: 3
To add to the trouble, some of the latest archive logs that had not yet been backed up were also deleted. The latest archive log sequence generated by the database is 91, but the archive logs are available on disk only up to sequence 80; the archives from sequence 81 onwards have gone missing.
SQL> archive log list Database log mode Archive Mode Automatic archival Enabled Archive destination USE_DB_RECOVERY_FILE_DEST Oldest online log sequence 91 Next log sequence to archive 93 Current log sequence 93 SQL> SQL> sho parameter db_reco NAME TYPE VALUE -------------------------- ------------- ---------------------------------- db_recovery_file_dest string /u01/app/oracle/fast_recovery_area db_recovery_file_dest_size big integer 4800M
[oracle@ora12c-1 2015_10_03]$ pwd /u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_03 [oracle@ora12c-1 2015_10_03]$ ls -lrt *90* ls: *90*: No such file or directory [oracle@ora12c-1 2015_10_03]$ ls -lrt total 120028 -rw-r----- 1 oracle oinstall 1324032 Oct 3 10:23 o1_mf_1_75_c0yqtc57_.arc -rw-r----- 1 oracle oinstall 28605440 Oct 3 10:31 o1_mf_1_76_c0yr8f4z_.arc -rw-r----- 1 oracle oinstall 39796736 Oct 3 10:33 o1_mf_1_77_c0yrdrco_.arc -rw-r----- 1 oracle oinstall 49885696 Oct 3 10:34 o1_mf_1_78_c0yrhl8n_.arc -rw-r----- 1 oracle oinstall 3145728 Oct 3 10:51 o1_mf_1_79_c0ysgkf7_.arc -rw-r----- 1 oracle oinstall 1024 Oct 3 10:51 o1_mf_1_80_c0ysglos_.arc
Let me perform a PITR of PDB1 until log sequence 80. Connect RMAN to the CDB and issue the “restore pluggable database <pdbname>” and “recover pluggable database <pdbname> auxiliary destination <dest locn>” commands to restore and recover a particular PDB from the backup taken at the CDB level.
While recovering a PDB, include the “AUXILIARY DESTINATION” clause if the FRA is not configured. If the FRA is configured, make sure there is enough free space available in it; if not, use the “AUXILIARY DESTINATION” clause to specify a temporary location.
When a PITR is performed, all the datafiles of the PDB are recovered until the specified time, sequence or SCN. But since the UNDO tablespace in the 12c architecture is shared by all the PDBs, it is not possible to recover the UNDO tablespace for a specific PDB alone. As a result, RMAN creates a dummy database under the FRA location (or under the “auxiliary destination” location if the FRA isn’t configured) by restoring the datafiles of the ROOT (SYSTEM, SYSAUX and UNDO specifically) and then uses that undo information to recover the PDB.
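For reference, the UNTIL clause of a PDB PITR is not limited to a log sequence; it can equally be a timestamp or an SCN. Below is a minimal hedged sketch of a time-based variant (the timestamp is purely illustrative, not a value from this scenario; the PDB name and auxiliary destination are the ones used later in this post):

run
{
# the until clause can also be: set until scn <scn>;
set until time "to_date('03-OCT-2015 10:00:00','DD-MON-YYYY HH24:MI:SS')";
restore pluggable database pdb1;
recover pluggable database pdb1 auxiliary destination '/u03/temp_pdb';
alter pluggable database pdb1 open resetlogs;
}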
Here is how it works. Connect RMAN to the CDB and restore/recover the required PDB. Make sure that the PDB is closed before performing the PITR; in my case, it has already crashed. I’m using “/u03/temp_pdb” as the auxiliary destination.
[oracle@ora12c-1 pdb1]$ rman target / Recovery Manager: Release 12.1.0.1.0 - Production on Sat Oct 3 10:52:29 2015 Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved. connected to target database: ORACDB (DBID=2564384489) RMAN> run 2> { 3> set until sequence 80; 4> restore pluggable database pdb1; 5> recover pluggable database pdb1 auxiliary destination '/u03/temp_pdb'; 6> alter pluggable database pdb1 open resetlogs; 7> }
What this does is, it first restores the datafiles of the PDB to their original location.
channel ORA_DISK_1: restoring datafile 00032 to /u02/oradata/pdb1/system01.dbf channel ORA_DISK_1: restoring datafile 00033 to /u02/oradata/pdb1/sysaux01.dbf channel ORA_DISK_1: restoring datafile 00034 to /u02/oradata/pdb1/myts01.dbf channel ORA_DISK_1: reading from backup piece /u02/bkp/ORACDB_inc0_20151002_02qingot_1_1.bak channel ORA_DISK_1: piece handle=/u02/bkp/ORACDB_inc0_20151002_02qingot_1_1.bak tag=TAG20151002T183220 channel ORA_DISK_1: restored backup piece 1 channel ORA_DISK_1: restore complete, elapsed time: 00:00:55 Finished restore at 03-OCT-15
It then identifies the list of tablespaces that hold the UNDO segments.
List of tablespaces expected to have UNDO segments Tablespace SYSTEM Tablespace UNDOTBS1
RMAN then creates a dummy (auxiliary) instance with a system-generated name; here it was created with the SID “EbFt”.
Creating automatic instance, with SID='EbFt' initialization parameters used for automatic instance: db_name=ORACDB db_unique_name=EbFt_pitr_pdb1_ORACDB compatible=12.1.0.0.0 db_block_size=8192 db_files=200 sga_target=1G processes=80 diagnostic_dest=/u01/app/oracle db_create_file_dest=/u03/temp_pdb log_archive_dest_1='location=/u03/temp_pdb' enable_pluggable_database=true _clone_one_pdb_recovery=true #No auxiliary parameter file used
It then restores the controlfile for the dummy instance and mounts it.
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/ORACDB/autobackup/2015_10_02/o1_mf_s_892060560_c0x09s9z_.bkp channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/ORACDB/autobackup/2015_10_02/o1_mf_s_892060560_c0x09s9z_.bkp tag=TAG20151002T183600 channel ORA_AUX_DISK_1: restored backup piece 1 channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03 output file name=/u03/temp_pdb/ORACDB/controlfile/o1_mf_c0ysohd3_.ctl Finished restore at 03-OCT-15 sql statement: alter database mount clone database
It then restores the root datafiles of the CDB to the auxiliary destination, uses the previously restored PDB datafiles as datafile copies, and recovers the PDB.
# set requested point in time set until logseq 80 thread 1; # switch to valid datafilecopies switch clone datafile 32 to datafilecopy "/u02/oradata/pdb1/system01.dbf"; switch clone datafile 33 to datafilecopy "/u02/oradata/pdb1/sysaux01.dbf"; switch clone datafile 34 to datafilecopy "/u02/oradata/pdb1/myts01.dbf"; # set destinations for recovery set and auxiliary set datafiles set newname for clone datafile 1 to new; set newname for clone datafile 4 to new; set newname for clone datafile 3 to new; set newname for clone datafile 6 to new; # restore the tablespaces in the recovery set and the auxiliary set restore clone datafile 1, 4, 3, 6; switch clone datafile all; }
Here is the complete log of PDB PITR.
[oracle@ora12c-1 pdb1]$ rman target / Recovery Manager: Release 12.1.0.1.0 - Production on Sat Oct 3 10:52:29 2015 Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved. connected to target database: ORACDB (DBID=2564384489) RMAN> run 2> { 3> set until sequence 80; 4> restore pluggable database pdb1; 5> recover pluggable database pdb1 auxiliary destination '/u03/temp_pdb'; 6> alter pluggable database pdb1 open resetlogs; 7> } executing command: SET until clause Starting restore at 03-OCT-15 using target database control file instead of recovery catalog allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=55 device type=DISK channel ORA_DISK_1: starting datafile backup set restore channel ORA_DISK_1: specifying datafile(s) to restore from backup set channel ORA_DISK_1: restoring datafile 00032 to /u02/oradata/pdb1/system01.dbf channel ORA_DISK_1: restoring datafile 00033 to /u02/oradata/pdb1/sysaux01.dbf channel ORA_DISK_1: restoring datafile 00034 to /u02/oradata/pdb1/myts01.dbf channel ORA_DISK_1: reading from backup piece /u02/bkp/ORACDB_inc0_20151002_02qingot_1_1.bak channel ORA_DISK_1: piece handle=/u02/bkp/ORACDB_inc0_20151002_02qingot_1_1.bak tag=TAG20151002T183220 channel ORA_DISK_1: restored backup piece 1 channel ORA_DISK_1: restore complete, elapsed time: 00:00:55 Finished restore at 03-OCT-15 Starting recover at 03-OCT-15 using channel ORA_DISK_1 RMAN-05026: WARNING: presuming following set of tablespaces applies to specified Point-in-Time List of tablespaces expected to have UNDO segments Tablespace SYSTEM Tablespace UNDOTBS1 Creating automatic instance, with SID='EbFt' initialization parameters used for automatic instance: db_name=ORACDB db_unique_name=EbFt_pitr_pdb1_ORACDB compatible=12.1.0.0.0 db_block_size=8192 db_files=200 sga_target=1G processes=80 diagnostic_dest=/u01/app/oracle db_create_file_dest=/u03/temp_pdb log_archive_dest_1='location=/u03/temp_pdb' enable_pluggable_database=true _clone_one_pdb_recovery=true #No auxiliary parameter file used starting up automatic instance ORACDB Oracle instance started Total System Global Area 1068937216 bytes Fixed Size 2296576 bytes Variable Size 281019648 bytes Database Buffers 780140544 bytes Redo Buffers 5480448 bytes Automatic instance created contents of Memory Script: { # set requested point in time set until logseq 80 thread 1; # restore the controlfile restore clone controlfile; # mount the controlfile sql clone 'alter database mount clone database'; } executing Memory Script executing command: SET until clause Starting restore at 03-OCT-15 allocated channel: ORA_AUX_DISK_1 channel ORA_AUX_DISK_1: SID=19 device type=DISK channel ORA_AUX_DISK_1: starting datafile backup set restore channel ORA_AUX_DISK_1: restoring control file channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/ORACDB/autobackup/2015_10_02/o1_mf_s_892060560_c0x09s9z_.bkp channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/ORACDB/autobackup/2015_10_02/o1_mf_s_892060560_c0x09s9z_.bkp tag=TAG20151002T183600 channel ORA_AUX_DISK_1: restored backup piece 1 channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:03 output file name=/u03/temp_pdb/ORACDB/controlfile/o1_mf_c0ysohd3_.ctl Finished restore at 03-OCT-15 sql statement: alter database mount clone database contents of Memory Script: { # set requested point in time set until logseq 80 thread 1; # switch to valid datafilecopies switch clone datafile 32 to datafilecopy "/u02/oradata/pdb1/system01.dbf"; switch clone datafile 33 
to datafilecopy "/u02/oradata/pdb1/sysaux01.dbf"; switch clone datafile 34 to datafilecopy "/u02/oradata/pdb1/myts01.dbf"; # set destinations for recovery set and auxiliary set datafiles set newname for clone datafile 1 to new; set newname for clone datafile 4 to new; set newname for clone datafile 3 to new; set newname for clone datafile 6 to new; # restore the tablespaces in the recovery set and the auxiliary set restore clone datafile 1, 4, 3, 6; switch clone datafile all; } executing Memory Script executing command: SET until clause datafile 32 switched to datafile copy input datafile copy RECID=7 STAMP=892119312 file name=/u02/oradata/pdb1/system01.dbf datafile 33 switched to datafile copy input datafile copy RECID=8 STAMP=892119312 file name=/u02/oradata/pdb1/sysaux01.dbf datafile 34 switched to datafile copy input datafile copy RECID=9 STAMP=892119312 file name=/u02/oradata/pdb1/myts01.dbf executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME Starting restore at 03-OCT-15 using channel ORA_AUX_DISK_1 channel ORA_AUX_DISK_1: starting datafile backup set restore channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set channel ORA_AUX_DISK_1: restoring datafile 00001 to /u03/temp_pdb/ORACDB/datafile/o1_mf_system_%u_.dbf channel ORA_AUX_DISK_1: restoring datafile 00004 to /u03/temp_pdb/ORACDB/datafile/o1_mf_undotbs1_%u_.dbf channel ORA_AUX_DISK_1: restoring datafile 00003 to /u03/temp_pdb/ORACDB/datafile/o1_mf_sysaux_%u_.dbf channel ORA_AUX_DISK_1: restoring datafile 00006 to /u03/temp_pdb/ORACDB/datafile/o1_mf_users_%u_.dbf channel ORA_AUX_DISK_1: reading from backup piece /u02/bkp/ORACDB_inc0_20151002_01qinglk_1_1.bak channel ORA_AUX_DISK_1: piece handle=/u02/bkp/ORACDB_inc0_20151002_01qinglk_1_1.bak tag=TAG20151002T183220 channel ORA_AUX_DISK_1: restored backup piece 1 channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:02:28 Finished restore at 03-OCT-15 datafile 1 switched to datafile copy input datafile copy RECID=14 STAMP=892119459 file name=/u03/temp_pdb/ORACDB/datafile/o1_mf_system_c0ysoqlk_.dbf datafile 4 switched to datafile copy input datafile copy RECID=15 STAMP=892119459 file name=/u03/temp_pdb/ORACDB/datafile/o1_mf_undotbs1_c0ysoqn1_.dbf datafile 3 switched to datafile copy input datafile copy RECID=16 STAMP=892119459 file name=/u03/temp_pdb/ORACDB/datafile/o1_mf_sysaux_c0ysoqnf_.dbf datafile 6 switched to datafile copy input datafile copy RECID=17 STAMP=892119459 file name=/u03/temp_pdb/ORACDB/datafile/o1_mf_users_c0ysoqo1_.dbf contents of Memory Script: { # set requested point in time set until logseq 80 thread 1; # online the datafiles restored or switched sql clone "alter database datafile 1 online"; sql clone "alter database datafile 4 online"; sql clone "alter database datafile 3 online"; sql clone 'PDB1' "alter database datafile 32 online"; sql clone 'PDB1' "alter database datafile 33 online"; sql clone 'PDB1' "alter database datafile 34 online"; sql clone "alter database datafile 6 online"; # recover pdb recover clone database tablespace "SYSTEM", "UNDOTBS1", "SYSAUX", "USERS" pluggable database 'PDB1' delete archivelog; sql clone 'alter database open read only'; plsql <<<begin add_dropped_ts; end; >>>; plsql <<<begin save_pdb_clean_scn; end; >>>; # shutdown clone before import shutdown clone abort plsql <<<begin pdbpitr_inspect(pdbname => 'PDB1'); end; >>>; } executing Memory Script executing command: SET until clause sql statement: alter database datafile 1 online sql 
statement: alter database datafile 4 online sql statement: alter database datafile 3 online sql statement: alter database datafile 32 online sql statement: alter database datafile 33 online sql statement: alter database datafile 34 online sql statement: alter database datafile 6 online Starting recover at 03-OCT-15 using channel ORA_AUX_DISK_1 starting media recovery archived log for thread 1 with sequence 67 is already on disk as file /u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_02/o1_mf_1_67_c0x0q1yd_.arc archived log for thread 1 with sequence 68 is already on disk as file /u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_02/o1_mf_1_68_c0x0q8v7_.arc archived log for thread 1 with sequence 69 is already on disk as file /u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_02/o1_mf_1_69_c0x0qfom_.arc archived log for thread 1 with sequence 70 is already on disk as file /u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_02/o1_mf_1_70_c0x0qfpp_.arc archived log for thread 1 with sequence 71 is already on disk as file /u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_02/o1_mf_1_71_c0x0qfps_.arc archived log for thread 1 with sequence 72 is already on disk as file /u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_02/o1_mf_1_72_c0x0qjvc_.arc archived log for thread 1 with sequence 73 is already on disk as file /u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_02/o1_mf_1_73_c0x0qjvp_.arc archived log for thread 1 with sequence 74 is already on disk as file /u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_02/o1_mf_1_74_c0x0qloh_.arc archived log for thread 1 with sequence 75 is already on disk as file /u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_03/o1_mf_1_75_c0yqtc57_.arc archived log for thread 1 with sequence 76 is already on disk as file /u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_03/o1_mf_1_76_c0yr8f4z_.arc archived log for thread 1 with sequence 77 is already on disk as file /u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_03/o1_mf_1_77_c0yrdrco_.arc archived log for thread 1 with sequence 78 is already on disk as file /u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_03/o1_mf_1_78_c0yrhl8n_.arc archived log for thread 1 with sequence 79 is already on disk as file /u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_03/o1_mf_1_79_c0ysgkf7_.arc channel ORA_AUX_DISK_1: starting archived log restore to default destination channel ORA_AUX_DISK_1: restoring archived log archived log thread=1 sequence=66 channel ORA_AUX_DISK_1: reading from backup piece /u02/bkp/ORACDB_arc_20151002_04qingsf_1_1.bak channel ORA_AUX_DISK_1: piece handle=/u02/bkp/ORACDB_arc_20151002_04qingsf_1_1.bak tag=TAG20151002T183559 channel ORA_AUX_DISK_1: restored backup piece 1 channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01 archived log file name=/u03/temp_pdb/1_66_889990250.dbf thread=1 sequence=66 channel clone_default: deleting archived log(s) archived log file name=/u03/temp_pdb/1_66_889990250.dbf RECID=8 STAMP=892119461 archived log file name=/u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_02/o1_mf_1_67_c0x0q1yd_.arc thread=1 sequence=67 archived log file name=/u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_02/o1_mf_1_68_c0x0q8v7_.arc thread=1 sequence=68 archived log file name=/u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_02/o1_mf_1_69_c0x0qfom_.arc thread=1 sequence=69 archived log file 
name=/u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_02/o1_mf_1_70_c0x0qfpp_.arc thread=1 sequence=70 archived log file name=/u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_02/o1_mf_1_71_c0x0qfps_.arc thread=1 sequence=71 archived log file name=/u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_02/o1_mf_1_72_c0x0qjvc_.arc thread=1 sequence=72 archived log file name=/u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_02/o1_mf_1_73_c0x0qjvp_.arc thread=1 sequence=73 archived log file name=/u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_02/o1_mf_1_74_c0x0qloh_.arc thread=1 sequence=74 archived log file name=/u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_03/o1_mf_1_75_c0yqtc57_.arc thread=1 sequence=75 archived log file name=/u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_03/o1_mf_1_76_c0yr8f4z_.arc thread=1 sequence=76 archived log file name=/u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_03/o1_mf_1_77_c0yrdrco_.arc thread=1 sequence=77 archived log file name=/u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_03/o1_mf_1_78_c0yrhl8n_.arc thread=1 sequence=78 archived log file name=/u01/app/oracle/fast_recovery_area/ORACDB/archivelog/2015_10_03/o1_mf_1_79_c0ysgkf7_.arc thread=1 sequence=79 media recovery complete, elapsed time: 00:00:32 Finished recover at 03-OCT-15 sql statement: alter database open read only Oracle instance shut down Removing automatic instance Automatic instance removed auxiliary instance file /u03/temp_pdb/ORACDB/datafile/o1_mf_sysaux_c0ysoqnf_.dbf deleted auxiliary instance file /u03/temp_pdb/ORACDB/controlfile/o1_mf_c0ysohd3_.ctl deleted Finished recover at 03-OCT-15 Statement processed RMAN>
After the PDB is recovered, connect to the CDB and verify the status.
We can see that the PDB is back online in READ WRITE mode.
SQL> show con_name CON_NAME ------------------------------ CDB$ROOT SQL> SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ------ ---------- ------------ ----------- 2 PDB$SEED READ ONLY NO 3 PDB1 READ WRITE NO
Also, the incarnation details of the CDB, the SEED and the PDB are as follows, with the current incarnation of CON_ID 3 (PDB1) now set to the point at which it was recovered (3rd Oct 2015).
SQL> select DB_INCARNATION#,PDB_INCARNATION#,STATUS,INCARNATION_SCN,INCARNATION_TIME,BEGIN_RESETLOGS_SCN,BEGIN_RESETLOGS_TIME,CON_ID from v$pdb_incarnation order by con_id; DB_INCARNATION# PDB_INCARNATION# STATUS INCARNATION_SCN INCARNATI BEGIN_RESETLOGS_SCN BEGIN_RES CON_ID --------------- ---------------- ------- --------------- --------- ------------------- --------- ---------- 2 0 CURRENT 1720082 09-SEP-15 1720082 09-SEP-15 1 1 0 PARENT 1 24-MAY-13 1 24-MAY-13 1 2 0 CURRENT 1720082 09-SEP-15 1720082 09-SEP-15 2 1 0 PARENT 1 24-MAY-13 1 24-MAY-13 2 2 1 CURRENT 2394990 03-OCT-15 2395699 03-OCT-15 3 2 0 PARENT 1720082 09-SEP-15 1720082 09-SEP-15 3 6 rows selected.
COPYRIGHT
© Shivananda Rao P, 2012 to 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Shivananda Rao and http://www.shivanandarao-oracle.com with appropriate and specific direction to the original content.
DISCLAIMER
The views expressed here are my own and do not necessarily reflect the views of any other individual, business entity, or organization. The views expressed by visitors on this blog are theirs solely and may not reflect mine.
As discussed in my previous post Creating Pluggable Databases – Part I, the different ways to create a PDB are as mentioned below.
1. Creating a PDB using the SEED
2. Creating a PDB from another PDB or non-CDB (Cloning a PDB or non-CDB)
3. Creating a PDB by plugging in an unplugged PDB
4. Creating a PDB from a non-CDB
Clicking on each of these links will take you to the respective detailed method. In this post, I’ve discussed the remaining two methods (“Creating a PDB by plugging in an unplugged PDB” and “Creating a PDB from a non-CDB”) in detail.
Creating a PDB by plugging in an unplugged PDB:
Environment:
Hostname : ora12c-1 Source CDB Name : MAINCDB Source PDB Name : TRANSPDB Target CDB Name : ORACDB Target PDB Name : orapdb4
Here, TRANSPDB is the PDB that is currently plugged into the CDB named MAINCDB. I will unplug TRANSPDB from MAINCDB and plug it into ORACDB as the ORAPDB4 database.
I’m using a different name, ORAPDB4, for the new PDB; I could also retain the name TRANSPDB when plugging it into ORACDB.
So, let’s unplug TRANSPDB from MAINCDB. Connect to the ROOT container of MAINCDB, close the PDB to be unplugged, and unplug it into an XML file that describes it.
SQL> show con_name CON_NAME ------------------------------ CDB$ROOT
SQL> select status,instance_name from v$instance; STATUS INSTANCE_NAME ------ ---------------------- OPEN maincdb SQL> select name,open_mode from v$pdbs; NAME OPEN_MODE ----------- -------------- PDB$SEED READ ONLY TRANSPDB READ WRITE
Let me close TRANSPDB and unplug it into an XML file called TRANSPDB.xml.
SQL> alter pluggable database TRANSPDB close immediate; Pluggable database altered. SQL> alter pluggable database TRANSPDB unplug into '/u03/TRANSPDB.xml'; Pluggable database altered.
If you are unsure about the location of the datafiles of TRANSPDB, the details can be obtained from the XML file.
[oracle@ora12c-1 ~]$ grep dbf /u03/TRANSPDB.xml <path>/u01/app/oracle/oradata/maincdb/transpdb/system01.dbf</path> <path>/u01/app/oracle/oradata/maincdb/transpdb/sysaux01.dbf</path> <path>/u01/app/oracle/oradata/maincdb/transpdb/transpdb_temp01.dbf</path> <path>/u01/app/oracle/oradata/maincdb/transpdb/SAMPLE_SCHEMA_users01.dbf</path> <path>/u01/app/oracle/oradata/maincdb/transpdb/example01.dbf</path> <path>/u01/app/oracle/oradata/maincdb/transpdb/myts01.dbf</path> [oracle@ora12c-1 ~]$
Create necessary directory for the new PDB to be plugged into ORACDB.
[oracle@ora12c-1 ~]$ mkdir -p /u01/app/oracle/oradata/oracdb/orapdb4 [oracle@ora12c-1 ~]$
Now connect to the ROOT container of ORACDB and run the “CREATE PLUGGABLE DATABASE ... USING <XML file>” command.
SQL> show con_name CON_NAME ------------------------------ CDB$ROOT SQL> select status,instance_name from v$instance; STATUS INSTANCE_NAME ------------ ---------------- OPEN oracdb
SQL> create pluggable database orapdb4 using '/u03/TRANSPDB.xml' 2 copy file_name_convert=('/u01/app/oracle/oradata/maincdb/transpdb/','/u01/app/oracle/oradata/oracdb/orapdb4/'); Pluggable database created.
Here I’m specifying the XML file that describes TRANSPDB in order to create the new PDB. There are also some additional clauses that can be used:
COPY: This clause is used if you would like Oracle to copy the files from the source PDB location to the target PDB location. In my case, I’m copying the datafiles of TRANSPDB from the “/u01/app/oracle/oradata/maincdb/transpdb/” location to “/u01/app/oracle/oradata/oracdb/orapdb4/”.
NOCOPY: If you do not want Oracle to copy the files and would like the new PDB to use the datafiles of the source PDB in their current location, then use this option.
MOVE: If you would like Oracle to move the datafiles from the location of the source PDB to the Target PDB location, then use this.
FILE_NAME_CONVERT: When using the COPY clause, you need to specify the source and destination file locations, and this is done using the FILE_NAME_CONVERT clause.
AS CLONE: This clause is used when a PDB was already created from this unplugged PDB and another PDB is now being created in the same CDB from the same unplugged PDB; it ensures that the new PDB gets its own unique identifiers.
SOURCE_FILE_NAME_CONVERT: This clause is used if the file locations recorded in the XML differ from the actual location of the source files. Say the source PDB is on host1 with its datafiles at “/u01/sourcePDB/”, and that location is recorded in the XML file. On the target server you copy these files manually to, say, “/u03/targetPDB”. When you then use the XML file to create the PDB, the file locations in the XML are no longer accurate, as they still point to “/u01/sourcePDB/”. In such cases, use the SOURCE_FILE_NAME_CONVERT clause to tell Oracle where the source PDB files actually are, as sketched below.
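To tie the last two clauses together, here is a hypothetical sketch (the PDB name orapdb_demo, the XML file /u03/sourcepdb.xml and the paths /u01/sourcePDB/ and /u03/targetPDB/ are illustrative assumptions taken from the explanation above, not from my environment). The XML still records the files under /u01/sourcePDB/, the files were manually copied to /u03/targetPDB/, and they are used in place with NOCOPY:

create pluggable database orapdb_demo using '/u03/sourcepdb.xml'
  -- tell Oracle where the source files actually are; the XML still points to /u01/sourcePDB/
  source_file_name_convert=('/u01/sourcePDB/','/u03/targetPDB/')
  -- use the files in place instead of copying them again
  nocopy tempfile reuse;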
Now I see that ORAPDB4 is the NEW PDB plugged into ORACDB.
SQL> select pdb_id,pdb_name,status from cdb_pdbs order by pdb_id; PDB_ID PDB_NAME STATUS ------ ----------- ------------- 2 PDB$SEED NORMAL 3 ORAPDB1 NORMAL 4 ORAPDB2 NORMAL 5 ORAPDB3 NORMAL 6 ORAPDB4 NEW
SQL> select name,open_mode from v$pdbs; NAME OPEN_MODE ----------- ------------- PDB$SEED READ ONLY ORAPDB1 MOUNTED ORAPDB2 MOUNTED ORAPDB3 MOUNTED ORAPDB4 MOUNTED
Open the newly created PDB in READ WRITE mode.
SQL> alter pluggable database orapdb4 open; Pluggable database altered. SQL> select name,open_mode from v$pdbs; NAME OPEN_MODE ----------------- --------------- PDB$SEED READ ONLY ORAPDB1 MOUNTED ORAPDB2 MOUNTED ORAPDB3 MOUNTED ORAPDB4 READ WRITE
If the PDB was created using the COPY clause and the datafiles of the unplugged source PDB are still intact in the source location, then you can use the same procedure as above to plug it back into its original CDB.
Creating a PDB from a non-CDB:

Environment:
Hostname : ora12c-1 NON-CDB Name : noncdb1 Target CDB Name : oracdb Target PDB Name : orapdb5
A non-CDB is a normal database and does not have any containers associated with it. Before we proceed, shut down the non-CDB and open it in READ ONLY mode.
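The exact commands for this step are not shown in the listing below, so here is a minimal sketch of the standard sequence, assuming you are connected to the non-CDB as SYSDBA:

-- cleanly stop the non-CDB, then bring it back up read only
shutdown immediate
startup mount
alter database open read only;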
SQL> select d.con_id,i.status,i.instance_name,d.cdb,d.open_mode from v$database d,v$instance i; CON_ID STATUS INSTANCE_NAME CDB OPEN_MODE ------ --------- --------------- ------- -------------------- 0 OPEN noncdb1 NO READ ONLY
Using the DBMS_PDB.DESCRIBE procedure, you can create an XML file that describes the non-CDB, including the locations of its datafiles.
SQL> BEGIN 2 DBMS_PDB.DESCRIBE( 3 pdb_descr_file => '/u02/oradata/noncdb/noncdb1.xml'); 4 END; 5 / PL/SQL procedure successfully completed.
SQL> !ls -lrt /u02/oradata/noncdb/noncdb1.xml -rw-r--r-- 1 oracle oinstall 3974 Sep 21 22:21 /u02/oradata/noncdb/noncdb1.xml
Now shut down the non-CDB so that it can be plugged into the CDB oracdb.
SQL> shut immediate Database closed. Database dismounted. ORACLE instance shut down.
The target CDB has 1 existing PDB called orapdb1. Now let’s proceed to plug in the NON-CDB as a PDB.
SQL> select d.con_id,i.status,i.instance_name,d.cdb,d.open_mode from v$database d,v$instance i; CON_ID STATUS INSTANCE_NAME CDB OPEN_MODE ------ ---------- ---------------- ------ -------------------- 0 OPEN oracdb YES READ WRITE SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED -------- ------------ ------------- ------------ 2 PDB$SEED READ ONLY NO 3 ORAPDB1 MOUNTED
Use the “CREATE PLUGGABLE DATABASE ... USING <XML file>” statement to plug in the non-CDB.
I’m using the NOCOPY clause here as I would like to retain the current location of the datafiles of the non-CDB and use them as-is after it’s plugged into the CDB; I do not want them in a different location.
SQL> create pluggable database orapdb5 using '/u02/oradata/noncdb/noncdb1.xml' 2 nocopy tempfile reuse; Pluggable database created.
The new PDB orapdb5 is created; in other words, the non-CDB is now plugged into the CDB as a PDB. But it doesn’t end here.
SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED -------- --------------- ------------ ------------ 2 PDB$SEED READ ONLY NO 3 ORAPDB1 MOUNTED 4 orapdb5 MOUNTED
Do not OPEN the new PDB yet. Connect to this new PDB container and run the “noncdb_to_pdb.sql” script located at $ORACLE_HOME/rdbms/admin. This script opens the PDB in RESTRICTED mode, performs the dictionary changes needed to convert the non-CDB into a PDB, and then closes the PDB.
SQL> alter session set container=orapdb5; Session altered. SQL> @?/rdbms/admin/noncdb_to_pdb.sql
You can see that ORAPDB5 is not yet open. Open the PDB and query it.
SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED -------- ------------ ------------ ------------- 2 PDB$SEED READ ONLY NO 3 ORAPDB1 MOUNTED 4 orapdb5 MOUNTED
SQL> alter pluggable database orapdb5 open; Pluggable database altered. SQL> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED -------- -------------- -------------- ------------- 2 PDB$SEED READ ONLY NO 3 ORAPDB1 MOUNTED 4 orapdb5 READ WRITE NO
SQL> alter session set container=orapdb5; Session altered.
You can see that the new PDB is using the datafiles that were at the source location, since we used the NOCOPY clause while creating the PDB.
SQL> select file_name,status from dba_data_files; FILE_NAME STATUS ---------------------------------------------- ---------------------- /u02/oradata/noncdb/noncdb1/users01.dbf AVAILABLE /u02/oradata/noncdb/noncdb1/sysaux01.dbf AVAILABLE /u02/oradata/noncdb/noncdb1/system01.dbf AVAILABLE 3 rows selected.
This article discusses the different ways of creating pluggable databases in 12c. There are many ways to create a PDB, some of which are listed below:
1. Creating a PDB using the SEED:

Here, a PDB is created in a CDB using the files of the CDB. The datafiles of the seed (SYSTEM and SYSAUX) are used by copying them to the location of the datafiles that will be hosted by the PDB.
2. Creating a PDB from another PDB or non-CDB (Cloning a PDB or non-CDB):
This method allows you to create a PDB by copying the files from another PDB or from a non-CDB. This process is called “cloning an existing PDB or non-CDB”. The PDB from which you clone can be a PDB in the local CDB or a PDB in a remote CDB.
3. Creating a PDB by plugging in an unplugged PDB:
Here, the PDB is created by plugging in an unplugged PDB, which can come from any other CDB. This technique uses an XML file that describes the metadata of the unplugged PDB.
4. Creating a PDB from a non-CDB:
This allows us to create a PDB by converting a non-CDB into a PDB of an existing CDB. The process describes the non-CDB in an XML file using the DBMS_PDB package and then plugs it into a CDB based on the XML file details.
Let’s kick off with the creation of PDBs using each of these techniques.
Environment:
Hostname: ora12c-1 CDB Name: oracdb Database version: 12.1.0.1
1. Creating a PDB using the SEED:

Here I’m creating a PDB named “orapdb2” using the SEED. Before creating the PDB, make sure the directory to hold the datafiles of the new PDB is created.
[oracle@ora12c-1 ~]$ mkdir -p /u01/app/oracle/oradata/oracdb/orapdb2
Now, while connected to the ROOT container, run the “CREATE PLUGGABLE DATABASE” command. You’ll have to provide a local administrator name for this PDB in the create statement.
SQL> select status,instance_name from v$database,v$instance; STATUS INSTANCE_NAME ------------ ---------------- OPEN oracdb SQL> show con_name CON_NAME ------------------------------ CDB$ROOT
SQL> create pluggable database orapdb2 admin user orapdb2dba identified by oracle 2 file_name_convert=('/u01/app/oracle/oradata/oracdb/pdbseed/','/u01/app/oracle/oradata/oracdb/orapdb2/'); Pluggable database created.
The FILE_NAME_CONVERT clause is used to specify the datafile locations of the SEED and the new PDB.
The newly created PDB will be in the MOUNTED state and has to be opened in READ WRITE mode.
SQL> select con_id,name,open_mode from v$pdbs; CON_ID NAME OPEN_MODE ---------- ------------- ---------------- ---------- 2 PDB$SEED READ ONLY 3 ORAPDB1 READ WRITE 4 ORAPDB2 MOUNTED
SQL> alter pluggable database orapdb2 open; Pluggable database altered.
SQL> select con_id,name,open_mode from v$pdbs; CON_ID NAME OPEN_MODE ---------- ------------- -------------- 2 PDB$SEED READ ONLY 3 ORAPDB1 READ WRITE 4 ORAPDB2 READ WRITE
You can also use some of the following clauses in the CREATE PLUGGABLE DATABASE statement.
STORAGE – specifies the maximum total amount of storage that all tablespaces of the newly created PDB can use.
DEFAULT TABLESPACE – you can create a tablespace and assign it as the default permanent tablespace of the PDB.
ROLES – predefined Oracle roles can be granted to the PDB_DBA role locally in that PDB (a sketch using this clause follows the example below).
Here is an example of creating a pluggable database using some of these additional options.
SQL> create pluggable database orapdb2 admin user orapdb2dba identified by oracle 2 storage (MAXSIZE 2G) 3 default tablespace myts 4 datafile '/u01/app/oracle/oradata/oracdb/orapdb2/myts01.dbf' 5 file_name_convert=('/u01/app/oracle/oradata/oracdb/pdbseed/','/u01/app/oracle/oradata/oracdb/orapdb2/'); Pluggable database created.
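The example above does not use the ROLES clause mentioned earlier; here is a hypothetical sketch, assuming you want the PDB_DBA role of the new PDB to be granted the predefined DBA role locally:

create pluggable database orapdb2 admin user orapdb2dba identified by oracle
  -- grant the predefined DBA role to the PDB_DBA role local to this PDB
  roles=(dba)
  file_name_convert=('/u01/app/oracle/oradata/oracdb/pdbseed/','/u01/app/oracle/oradata/oracdb/orapdb2/');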
Now let’s move on to the second option: creating a PDB by cloning.
2. Creating a PDB from another PDB or non-CDB (Cloning a PDB or non-CDB):
As mentioned above, a PDB can be cloned from the local CDB or from a remote CDB. First, let’s look at cloning a PDB within the local CDB.
2.1 Clone a local PDB:
Environment:
Hostname: ora12c-1 CDB Name: oracdb Database version: 12.1.0.1 Source PDB : orapdb1 Target PDB : orapdb3
Let’s get the details of the datafiles which orapdb1 is using.
SQL> show con_name CON_NAME ------------------------------ CDB$ROOT SQL> select name,con_id,open_mode from v$pdbs; NAME CON_ID OPEN_MODE ---------- ----------- --------------- PDB$SEED 2 READ ONLY ORAPDB1 3 READ WRITE ORAPDB2 4 READ WRITE
SQL> alter session set container=orapdb1; Session altered. SQL> select file_name from cdb_data_files; FILE_NAME -------------------------------------------------------------------------------- /u01/app/oracle/oradata/oracdb/orapdb1/system01.dbf /u01/app/oracle/oradata/oracdb/orapdb1/sysaux01.dbf /u01/app/oracle/oradata/oracdb/orapdb1/SAMPLE_SCHEMA_users01.dbf /u01/app/oracle/oradata/oracdb/orapdb1/example01.dbf
Create the necessary directory for the datafiles of the PDB to be created (orapdb3).
[oracle@ora12c-1 ~]$ mkdir -p /u01/app/oracle/oradata/oracdb/orapdb3
Now, log in to the ROOT container and clone the PDB orapdb1. Before proceeding, the source PDB should be placed in READ ONLY mode.
Here’s a snippet.
SQL> show con_name CON_NAME ------------------------------ CDB$ROOT SQL> alter pluggable database orapdb1 close immediate; Pluggable database altered. SQL> alter pluggable database orapdb1 open READ ONLY; Pluggable database altered.
If you proceed without opening the source PDB in READ ONLY mode, then Oracle throws the below error.
SQL> create pluggable database orapdb3 from orapdb1 2 file_name_convert=('/u01/app/oracle/oradata/oracdb/orapdb1/','/u01/app/oracle/oradata/oracdb/orapdb3'); create pluggable database orapdb3 from orapdb1 * ERROR at line 1: ORA-65081: database or pluggable database is not open in read only mode
Once the source PDB is opened in READ ONLY mode, proceed with its cloning by making use of the “CREATE PLUGGABLE DATABASE ... FROM” statement.
SQL> create pluggable database orapdb3 from orapdb1 2 file_name_convert=('/u01/app/oracle/oradata/oracdb/orapdb1/','/u01/app/oracle/oradata/oracdb/orapdb3/'); Pluggable database created.
As explained earlier, the FILE_NAME_CONVERT clause is used to map the locations of the datafiles of the source and target PDBs. You can also use the STORAGE clause, which was explained earlier.
Open the newly created PDB. Also, do remember to open the source PDB back in READ WRITE mode.
SQL> alter pluggable database orapdb3 open; Pluggable database altered. SQL> alter pluggable database orapdb1 close immediate; Pluggable database altered. SQL> alter pluggable database orapdb1 open; Pluggable database altered.
Here is the list of PDBs that are now plugged into the CDB oracdb.
SQL> select name,open_mode,con_id from v$pdbs; NAME OPEN_MODE CON_ID ---------- --------------- ---------- PDB$SEED READ ONLY 2 ORAPDB1 READ WRITE 3 ORAPDB2 READ WRITE 4 ORAPDB3 READ WRITE 5
You can query the list of datafiles for the newly created PDB.
SQL> alter session set container=orapdb3; Session altered. SQL> select file_name from dba_data_files; FILE_NAME -------------------------------------------------------------------------------- /u01/app/oracle/oradata/oracdb/orapdb3/system01.dbf /u01/app/oracle/oradata/oracdb/orapdb3/sysaux01.dbf /u01/app/oracle/oradata/oracdb/orapdb3/SAMPLE_SCHEMA_users01.dbf /u01/app/oracle/oradata/oracdb/orapdb3/example01.dbf
2.2 Clone a remote PDB (a PDB that is plugged into another CDB) or a non-CDB
This feature is supported from 12.1.0.2 onwards and not on 12.1.0.1. Trying to use it on 12.1.0.1 would hit bug 15931910.
Here is a snippet of the error that Oracle throws if remote PDB cloning is tried on 12.1.0.1.
SQL> create pluggable database orapdb5 from testpdb1@testlink 2 file_name_convert=('/u01/app/oracle/oradata/oraprim/testpdb1/','/u01/app/oracle/oradata/oracdb/orapdb5/'); create pluggable database orapdb5 from testpdb1@testlink * ERROR at line 1: ORA-17628: Oracle error 19505 returned by remote Oracle server ORA-19505: failed to identify file "" SQL> select sysdate from dual@testlink; SYSDATE --------- 23-SEP-15
A separate post on remote cloning of a PDB in 12.1.0.2 will be coming up soon on this site.
The remaining two methods of creating a PDB will be described in my next article, “Creating Pluggable Databases – Part 2”.
See you there !!
I’ve come across many posts on the OTN forums where the OP faces sync issues between the physical standby and the primary, but not enough detail is provided to identify where the gap lies. This post provides a couple of scripts that pull out the required Data Guard configuration parameters and the diagnostic information needed to identify where the issue lies and what is happening.
The following script needs to be run on the primary database. Do not worry, the script doesn’t have a shutdown command 🙂
On the Primary:
set linesize 300
set pages 70
col name for a30
col value for a60
col message for a90
col destination for a35
col dest_name for a40

select name,value from gv$parameter where name in ('db_name','db_unique_name','db_domain','db_file_name_convert','log_file_name_convert','fal_server','fal_client','remote_login_passwordfile','standby_file_management','dg_broker_start','dg_broker_config_file1','dg_broker_config_file2');

select status,instance_name,database_role,open_mode,protection_mode,switchover_status from gv$instance,gv$database;

select name,(space_limit/1024/1024/1024) "Limit in GB",(space_used/1024/1024/1024) "Used in GB" from v$recovery_file_dest;

select thread#,max(sequence#) from gv$archived_log group by thread#;

select inst_id,dest_id, dest_name, status, target, archiver , destination from GV$ARCHIVE_DEST where destination IS NOT NULL;

select * from (select severity,error_code,message,to_char(timestamp,'DD-MON-YYYY HH24:MI:SS') from v$dataguard_status where dest_id=2 order by rownum DESC) where rownum <= 7;

select group#,thread#,status,members,(bytes/1024/1024)"Each ORL File Size in MB" from gv$log;

select group#,thread#,status,(bytes/1024/1024)"Each SRL File Size in MB" from gv$standby_log;
Run the following script on the Physical Standby:
On the Standby:
set linesize 300
set pages 70
col name for a30
col value for a60
col message for a90
col destination for a35
col dest_name for a40

select name,value from gv$parameter where name in ('db_name','db_unique_name','db_domain','db_file_name_convert','log_file_name_convert','fal_server','fal_client','remote_login_passwordfile','standby_file_management','dg_broker_start','dg_broker_config_file1','dg_broker_config_file2');

select status,instance_name,database_role,open_mode,protection_mode,switchover_status from gv$instance,gv$database;

select name,(space_limit/1024/1024/1024) "Limit in GB",(space_used/1024/1024/1024) "Used in GB" from v$recovery_file_dest;

select thread#,max(sequence#) from gv$archived_log group by thread#;

select thread#,max(sequence#) from gv$archived_log where applied='YES' group by thread#;

select dest_id, dest_name, status, target, archiver , destination from GV$ARCHIVE_DEST where destination IS NOT NULL;

select inst_id,process,status,sequence#,thread#,client_process from gv$managed_standby;

select group#,thread#,status,members,(bytes/1024/1024)"Each ORL File Size in MB" from gv$log;

select group#,thread#,status,(bytes/1024/1024)"Each SRL File Size in MB" from gv$standby_log;
Script to gather Standby Sync Details:
In addition to the above scripts, the following script helps in monitoring the standby. It lists the last archive log sequence received on the standby and the last archive log sequence applied on the standby.
select rec.thread#,
       rec.last_rec "Last Sequence Received",
       app.last_app "Last Sequence Applied"
  from (select thread#, max(sequence#) last_rec
          from v$archived_log
         where resetlogs_id = (select max(resetlogs_id) from v$archived_log)
         group by thread#) rec,
       (select thread#, max(sequence#) last_app
          from v$archived_log
         where resetlogs_id = (select max(resetlogs_id) from v$archived_log)
           and applied='YES' and registrar='RFS'
         group by thread#) app
 where rec.thread#=app.thread#
   and rec.thread# != 0
 order by rec.thread#
/
Script to monitor Standby Recovery:
The following script helps you to monitor the recovery progress on the physical standby.
select to_char(START_TIME,'DD-MON-YYYY HH24:MI:SS') "Recovery Start Time",to_char(item)||' = '||to_char(sofar)||' '||to_char(units) "Progress" from v$recovery_progress where start_time=(select max(start_time) from v$recovery_progress);
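As an additional check that is not part of the original script set, the v$archive_gap view on the standby reports any range of archived logs that the standby has detected as missing; a minimal sketch:

-- run on the standby; no rows returned means no gap has been detected
select thread#, low_sequence#, high_sequence# from v$archive_gap;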
This post will be updated as and when the scripts are modified.
In this article, I’m demonstrating how to restore a backup of an 11.2.0.2 database on an 11.2.0.3 version; in other words, how to restore a database backup of a lower version on a higher version.
Source DB Name : TESTDB Source DB Version : 11.2.0.2 Source DB Host Name : ora1-1 Target DB Name : TESTDB Target DB Version : 11.2.0.3 Target DB Host Name : ora1-2
The steps involved are quite simple and are just the traditional restore and recovery operations. The only additional step is to not open the database with RESETLOGS after recovery, but instead open it with the RESETLOGS UPGRADE clause after the recovery operation.
Opening the database with just RESETLOGS terminates the instance and writes a message in the alert log that the database needs to be opened in upgrade mode. Once the database is opened with the RESETLOGS UPGRADE option, follow the usual manual database upgrade process.
Let me demonstrate this with an example.
I create a simple PFILE with just a “DB_NAME=testdb” entry on the target host and start the instance in NOMOUNT.
[oracle@ora1-2 ~]$ export PATH=/usr/lib64/qt-3.3/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/oracle/bin:/u01/app/oracle/product/11.2.0.3/db1/bin [oracle@ora1-2 ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/db1 [oracle@ora1-2 ~]$ export ORACLE_SID=testdb [oracle@ora1-2 ~]$ cd $ORACLE_HOME/dbs [oracle@ora1-2 dbs]$ cat inittestdb.ora *.db_name='testdb' [oracle@ora1-2 dbs]$
[oracle@ora1-2 dbs]$ sqlplus / as sysdba SQL*Plus: Release 11.2.0.3.0 Production on Sun Aug 30 18:26:17 2015 Copyright (c) 1982, 2011, Oracle. All rights reserved. Connected to an idle instance. SQL> startup force nomount ORACLE instance started. Total System Global Area 238034944 bytes Fixed Size 2227136 bytes Variable Size 180356160 bytes Database Buffers 50331648 bytes Redo Buffers 5120000 bytes
Now let me begin with the restore activity: first the restore of the SPFILE from the backup, and then the controlfile.
[oracle@ora1-2 bkp]$ rman target / Recovery Manager: Release 11.2.0.3.0 - Production on Sun Aug 30 18:31:38 2015 Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved. connected to target database: TESTDB (not mounted) RMAN> restore spfile from '/u03/bkp/o1_mf_ncsn0_TAG20150830T201857_by661lbz_.bkp'; Starting restore at 30-AUG-15 using target database control file instead of recovery catalog allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=19 device type=DISK channel ORA_DISK_1: restoring spfile from AUTOBACKUP /u03/bkp/o1_mf_ncsn0_TAG20150830T201857_by661lbz_.bkp; channel ORA_DISK_1: SPFILE restore from AUTOBACKUP complete Finished restore at 30-AUG-15
RMAN> restore controlfile from '/u03/bkp/o1_mf_ncsn0_TAG20150830T201857_by661lbz_.bkp'; Starting restore at 30-AUG-15 using channel ORA_DISK_1 channel ORA_DISK_1: restoring control file channel ORA_DISK_1: restore complete, elapsed time: 00:00:01 output file name=/u03/oradata/testdb/control01.ctl output file name=/u03/oradata/testdb/control02.ctl Finished restore at 30-AUG-15
Once the controlfile is restored, mount the instance and catalog the backup pieces (needed if the backup pieces are stored in a different location on the target server than the location where they were originally taken on the source host).
RMAN> catalog start with '/u03/bkp/'; searching for all files that match the pattern /u03/bkp/ List of Files Unknown to the Database ===================================== File Name: /u03/bkp/o1_mf_annnn_TAG20150830T202029_by661p20_.bkp File Name: /u03/bkp/o1_mf_nnnd0_TAG20150830T201857_by65ysrh_.bkp File Name: /u03/bkp/TESTDB_inc0_0qqfu28b_1_1.bak File Name: /u03/bkp/o1_mf_ncsn0_TAG20150830T201857_by661lbz_.bkp File Name: /u03/bkp/ctl.bkp File Name: /u03/bkp/o1_mf_ncnnf_TAG20150830T202034_by661vv4_.bkp File Name: /u03/bkp/TESTDB_inc0_0nqfu25d_1_1.bak Do you really want to catalog the above files (enter YES or NO)? YES cataloging files... cataloging done List of Cataloged Files ======================= File Name: /u03/bkp/o1_mf_annnn_TAG20150830T202029_by661p20_.bkp File Name: /u03/bkp/o1_mf_nnnd0_TAG20150830T201857_by65ysrh_.bkp File Name: /u03/bkp/TESTDB_inc0_0qqfu28b_1_1.bak File Name: /u03/bkp/o1_mf_ncsn0_TAG20150830T201857_by661lbz_.bkp File Name: /u03/bkp/ctl.bkp File Name: /u03/bkp/o1_mf_ncnnf_TAG20150830T202034_by661vv4_.bkp File Name: /u03/bkp/TESTDB_inc0_0nqfu25d_1_1.bak
Start with the restore and recovery operations of the database.
RMAN> run { set newname for datafile 1 to '/u03/oradata/testdb/system01.dbf'; set newname for datafile 2 to '/u03/oradata/testdb/sysaux01.dbf'; set newname for datafile 3 to '/u03/oradata/testdb/undotbs01.dbf'; set newname for datafile 4 to '/u03/oradata/testdb/users01.dbf'; restore database; switch datafile all; recover database until sequence 28; } ... output trimmed ... archived log file name=/u03/oradata/fra/TESTDB/archivelog/2015_08_30/o1_mf_1_25_by67km3s_.arc RECID=21 STAMP=889130763 archived log file name=/u03/oradata/fra/TESTDB/archivelog/2015_08_30/o1_mf_1_26_by67km44_.arc thread=1 sequence=26 channel default: deleting archived log(s) archived log file name=/u03/oradata/fra/TESTDB/archivelog/2015_08_30/o1_mf_1_26_by67km44_.arc RECID=23 STAMP=889130763 archived log file name=/u03/oradata/fra/TESTDB/archivelog/2015_08_30/o1_mf_1_27_by67km3w_.arc thread=1 sequence=27 channel default: deleting archived log(s) archived log file name=/u03/oradata/fra/TESTDB/archivelog/2015_08_30/o1_mf_1_27_by67km3w_.arc RECID=22 STAMP=889130763 media recovery complete, elapsed time: 00:00:00 Finished recover at 30-AUG-15
Now open the database with the “ALTER DATABASE OPEN RESETLOGS UPGRADE” command.
If you try opening with just “ALTER DATABASE OPEN RESETLOGS”, it would fail with the below error.
SQL> alter database open resetlogs; alter database open resetlogs * ERROR at line 1: ORA-01092: ORACLE instance terminated. Disconnection forced ORA-00704: bootstrap process failure ORA-39700: database must be opened with UPGRADE option Process ID: 6773 Session ID: 19 Serial number: 25
Once the database is opened in upgrade mode, run the CATUPGRD.SQL script to upgrade it.
SQL> alter database open resetlogs upgrade; Database altered. SQL> spool catupgrade.log SQL> @?/rdbms/admin/catupgrd.sql
If any errors are encountered, fix them and re-run the script before proceeding further.
Now start the target database normally and look out for any INVALID objects. Compile them by running the UTLRP.SQL script.
[oracle@ora1-2 testdb]$ sqlplus / as sysdba SQL*Plus: Release 11.2.0.3.0 Production on Sun Aug 30 21:41:10 2015 Copyright (c) 1982, 2011, Oracle. All rights reserved. Connected to an idle instance. SQL> startup ORACLE instance started. Total System Global Area 943669248 bytes Fixed Size 2234000 bytes Variable Size 335546736 bytes Database Buffers 599785472 bytes Redo Buffers 6103040 bytes Database mounted. Database opened. SQL> @?/rdbms/admin/utlrp.sql
Add tempfiles to the database as required.
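The tempfile command itself is not shown here; a hypothetical example, assuming the temporary tablespace is named TEMP and using an illustrative path and size:

-- add a tempfile to the restored database's temporary tablespace
alter tablespace temp add tempfile '/u03/oradata/testdb/temp01.dbf' size 500m autoextend on;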
Conclusion:
It’s possible to restore a backup of a lower version database on a higher version, provided the minimum version compatibility matrix is satisfied (refer to MOS for the version matrix).
RMAN duplicate from a lower version to a higher version is not possible, because it automatically tries to open the database with RESETLOGS after recovery. So, up to 11gR2, this has to be done through the traditional RMAN restore and recovery operation. Please note that the database needs to be opened with the RESETLOGS UPGRADE option after the recovery.
RMAN duplicate in 12c has an option that allows you to not open the database automatically after the recovery. Let’s discuss this in coming posts.
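For reference, the 12c option being referred to is the NOOPEN clause of the DUPLICATE command, which leaves the duplicated database mounted instead of opening it with RESETLOGS. A minimal hedged sketch (the connect strings and the database name NEWDB are illustrative, and the auxiliary instance is assumed to be prepared and started in NOMOUNT):

# connect to the source and the auxiliary instance (passwords will be prompted for)
rman target sys@srcdb auxiliary sys@newdb

# duplicate without the automatic OPEN RESETLOGS at the end
RMAN> duplicate target database to newdb noopen;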
Here we go !!
COPYRIGHT
© Shivananda Rao P, 2012 to 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Shivananda Rao and http://www.shivanandarao-oracle.com with appropriate and specific direction to the original content.
DISCLAIMER
The views expressed here are my own and do not necessarily reflect the views of any other individual, business entity, or organization. The views expressed by visitors on this blog are theirs solely and may not reflect mine.
This post helps you overcome the error “ORA-00392: log %s of thread %s is being cleared, operation not allowed” when trying to open the database with RESETLOGS after an incomplete recovery.
Here is what I came across while creating a database clone using the traditional RMAN backup restore and recovery process. Everything went fine with the restore and recovery operations, but an error was thrown while opening the database with RESETLOGS. The reason was that the “alter database open resetlogs” had been abnormally aborted.
RMAN> alter database open resetlogs; RMAN-00571: =========================================================== RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS =============== RMAN-00571: =========================================================== RMAN-03002: failure of alter db command at 08/30/2015 20:47:22 ORA-00344: unable to re-create online log '/u02/oradata/testdb/testdb/redo01.log' ORA-27040: file create error, unable to create file Linux-x86_64 Error: 2: No such file or directory Additional information: 1 Recovery Manager complete.
[oracle@ora1-2 testdb]$ sqlplus / as sysdba SQL*Plus: Release 11.2.0.3.0 Production on Sun Aug 30 20:47:44 2015 Copyright (c) 1982, 2011, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options SQL> alter database open resetlogs upgrade; alter database open resetlogs upgrade * ERROR at line 1: ORA-00392: log 1 of thread 1 is being cleared, operation not allowed ORA-00312: online log 1 thread 1: '/u02/oradata/testdb/testdb/redo01.log'
Let’s check the status of each of the redo log groups and try clearing the problematic group manually. From the above error, the problem is with GROUP 1, so let’s try clearing it.
SQL> select group#,thread#,status from v$log; GROUP# THREAD# STATUS ------- --------- ---------------- 1 1 CLEARING_CURRENT 3 1 CLEARING 2 1 CLEARING
SQL> alter database clear unarchived logfile group 1; alter database clear unarchived logfile group 1 * ERROR at line 1: ORA-00344: unable to re-create online log '/u02/oradata/testdb/testdb/redo01.log' ORA-27040: file create error, unable to create file Linux-x86_64 Error: 2: No such file or directory Additional information: 1
SQL> alter database clear logfile group 1; alter database clear logfile group 1 * ERROR at line 1: ORA-00344: unable to re-create online log '/u02/oradata/testdb/testdb/redo01.log' ORA-27040: file create error, unable to create file Linux-x86_64 Error: 2: No such file or directory Additional information: 1
Oops, it doesn’t allow me to clear them, as these files do not exist on the auxiliary server, and that’s expected: the redo logs would only be created once the database is opened with RESETLOGS.
The control file still records the redo log file locations as they were on the source server.
SQL> select group#,status,member from v$logfile; GROUP# STATUS MEMBER ------- ----------- ---------------------------------------------------------- 3 /u02/oradata/testdb/testdb/redo03.log 2 /u02/oradata/testdb/testdb/redo02.log 1 /u02/oradata/testdb/testdb/redo01.log
What next ?
I tried to update the controlfile with the new location of redo logs by renaming the log files at the database level.
SQL> alter database rename file '/u02/oradata/testdb/testdb/redo01.log' to '/u03/oradata/testdb/redo01.log'; Database altered. SQL> alter database rename file '/u02/oradata/testdb/testdb/redo02.log' to '/u03/oradata/testdb/redo02.log'; Database altered. SQL> alter database rename file '/u02/oradata/testdb/testdb/redo03.log' to '/u03/oradata/testdb/redo03.log'; Database altered.
SQL> alter database clear logfile group 1; Database altered.
Now, I tried opening the database with RESETLOGS, and this time, of course, it went through without any interruption 🙂
SQL> alter database open resetlogs; Database altered
So basically, if you hit the error “ORA-00392: log %s of thread %s is being cleared, operation not allowed”, you need to clear the problematic logfile group. If the files don’t exist on the auxiliary server and the control file is still pointing to the redo log locations of the source database, rename the files at the database level first and then clear the ORLs.
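Putting the workaround together, a minimal sketch is shown below. The paths are the ones used in this example and are purely illustrative; substitute the member names recorded in your control file and the directories that actually exist on the auxiliary server, and repeat the clear for every group that raises ORA-00392.

-- rename the redo log members to locations that exist on the auxiliary server (illustrative paths)
alter database rename file '/u02/oradata/testdb/testdb/redo01.log' to '/u03/oradata/testdb/redo01.log';
alter database rename file '/u02/oradata/testdb/testdb/redo02.log' to '/u03/oradata/testdb/redo02.log';
alter database rename file '/u02/oradata/testdb/testdb/redo03.log' to '/u03/oradata/testdb/redo03.log';
-- clear the group reported in the error
alter database clear logfile group 1;
-- open the database with RESETLOGS
alter database open resetlogs;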
Here we go !!
HTH
COPYRIGHT
© Shivananda Rao P, 2012 to 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Shivananda Rao and http://www.shivanandarao-oracle.com with appropriate and specific direction to the original content.
DISCLAIMER
The views expressed here are my own and do not necessarily reflect the views of any other individual, business entity, or organization. The views expressed by visitors on this blog are theirs solely and may not reflect mine.
I came across a question in the OTN forums where the OP wanted to do an RMAN “active” duplicate with the source database in NOARCHIVELOG mode. This article is a small test, or demo, of how this can be done. Usually, when RMAN comes up, the first thing that comes to mind is that the database needs to be in ARCHIVELOG mode.
However, to duplicate a database using the RMAN active method, the Oracle documentation states that the source database can be in either OPEN mode or MOUNT mode. Given that, let’s place the source database in NOARCHIVELOG mode and try the duplication.
Source database : SRPRIM
Auxiliary Database : NEWDB
Database Version : 11gR2
Let’s switch the source database to NOARCHIVELOG mode, keeping it in the MOUNT stage.
[oracle@ora1-1 ~]$ sqlplus sys/oracle@srprim as sysdba SQL*Plus: Release 11.2.0.2.0 Production on Fri Aug 14 20:07:59 2015 Copyright (c) 1982, 2010, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production With the Partitioning, Automatic Storage Management, OLAP, Data Mining and Real Application Testing options SQL> shut immediate Database closed. Database dismounted. ORACLE instance shut down. SQL> SQL> startup mount ORACLE instance started. Total System Global Area 943669248 bytes Fixed Size 2232128 bytes Variable Size 591397056 bytes Database Buffers 343932928 bytes Redo Buffers 6107136 bytes Database mounted. SQL> SQL> alter database noarchivelog; Database altered.
SQL> select status,instance_name,database_role,open_mode from v$database,v$Instance; STATUS INSTANCE_NAME DATABASE_ROLE OPEN_MODE ------------ ---------------- ---------------- -------------------- MOUNTED srprim PRIMARY MOUNTED
The TNS entry used for the auxiliary database is as follows:
NEWDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ora1-3.mydomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = newdb)
    )
  )
Have the listener and the TNS configuration set up on both the source and auxiliary database servers. Check the connectivity, start the auxiliary instance in the NOMOUNT stage, and initiate the duplication.
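For completeness, here is a minimal sketch of preparing the auxiliary instance, assuming a bare-bones pfile named initnewdb.ora and a password file already in place so that a remote SYSDBA connection to NEWDB works. The file name and value below are assumptions for illustration only, since RMAN rebuilds the auxiliary spfile from memory during the duplication anyway, as seen in the output that follows.

-- assumed bare-minimum pfile placed at $ORACLE_HOME/dbs/initnewdb.ora on the auxiliary server
db_name=newdb

-- with ORACLE_SID=newdb exported, start the auxiliary instance in the NOMOUNT stage
startup nomount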
[oracle@ora1-3 ~]$ rman target sys/oracle@srprim auxiliary sys/oracle@newdb Recovery Manager: Release 11.2.0.2.0 - Production on Fri Aug 14 20:10:49 2015 Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved. connected to target database: SRPRIM (DBID=298418015, not open) connected to auxiliary database: NEWDB (not mounted) RMAN> duplicate target database to 'newdb' from active database nofilenamecheck; Starting Duplicate Db at 14-AUG-15 using target database control file instead of recovery catalog allocated channel: ORA_AUX_DISK_1 channel ORA_AUX_DISK_1: SID=24 device type=DISK contents of Memory Script: { sql clone "create spfile from memory"; } executing Memory Script sql statement: create spfile from memory contents of Memory Script: { shutdown clone immediate; startup clone nomount; } executing Memory Script Oracle instance shut down connected to auxiliary database (not started) Oracle instance started Total System Global Area 943669248 bytes Fixed Size 2232128 bytes Variable Size 595591360 bytes Database Buffers 339738624 bytes Redo Buffers 6107136 bytes contents of Memory Script: { sql clone "alter system set control_files = ''+DATA1/newdb/controlfile/current.266.887746275'', ''+FRA1/newdb/controlfile/current.266.887746275'' comment= ''Set by RMAN'' scope=spfile"; sql clone "alter system set db_name = ''SRPRIM'' comment= ''Modified by RMAN duplicate'' scope=spfile"; sql clone "alter system set db_unique_name = ''NEWDB'' comment= ''Modified by RMAN duplicate'' scope=spfile"; shutdown clone immediate; startup clone force nomount backup as copy current controlfile auxiliary format '+DATA1/newdb/controlfile/current.267.887746275'; restore clone controlfile to '+FRA1/newdb/controlfile/current.267.887746275' from '+DATA1/newdb/controlfile/current.267.887746275'; sql clone "alter system set control_files = ''+DATA1/newdb/controlfile/current.267.887746275'', ''+FRA1/newdb/controlfile/current.267.887746275'' comment= ''Set by RMAN'' scope=spfile"; shutdown clone immediate; startup clone nomount; alter clone database mount; } executing Memory Script sql statement: alter system set control_files = ''+DATA1/newdb/controlfile/current.266.887746275'', ''+FRA1/newdb/controlfile/current.266.887746275'' comment= ''Set by RMAN'' scope=spfile sql statement: alter system set db_name = ''SRPRIM'' comment= ''Modified by RMAN duplicate'' scope=spfile sql statement: alter system set db_unique_name = ''NEWDB'' comment= ''Modified by RMAN duplicate'' scope=spfile Oracle instance shut down Oracle instance started Total System Global Area 943669248 bytes Fixed Size 2232128 bytes Variable Size 595591360 bytes Database Buffers 339738624 bytes Redo Buffers 6107136 bytes Starting backup at 14-AUG-15 allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=27 device type=DISK channel ORA_DISK_1: starting datafile copy copying current control file output file name=/u01/app/oracle/product/11.2.0.2/db_1/dbs/snapcf_srprim.f tag=TAG20150814T201138 RECID=4 STAMP=887746299 channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03 Finished backup at 14-AUG-15 Starting restore at 14-AUG-15 allocated channel: ORA_AUX_DISK_1 channel ORA_AUX_DISK_1: SID=23 device type=DISK channel ORA_AUX_DISK_1: copied control file copy Finished restore at 14-AUG-15 sql statement: alter system set control_files = ''+DATA1/newdb/controlfile/current.267.887746275'', ''+FRA1/newdb/controlfile/current.267.887746275'' comment= ''Set by RMAN'' scope=spfile Oracle instance shut down connected to auxiliary database (not 
started) Oracle instance started Total System Global Area 943669248 bytes Fixed Size 2232128 bytes Variable Size 595591360 bytes Database Buffers 339738624 bytes Redo Buffers 6107136 bytes database mounted RMAN-05529: WARNING: DB_FILE_NAME_CONVERT resulted in invalid ASM names; names changed to disk group only. contents of Memory Script: { set newname for datafile 1 to "+data1"; set newname for datafile 2 to "+data1"; set newname for datafile 3 to "+data1"; set newname for datafile 4 to "+data1"; backup as copy reuse datafile 1 auxiliary format "+data1" datafile 2 auxiliary format "+data1" datafile 3 auxiliary format "+data1" datafile 4 auxiliary format "+data1" ; } executing Memory Script executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME Starting backup at 14-AUG-15 using channel ORA_DISK_1 channel ORA_DISK_1: starting datafile copy input datafile file number=00001 name=+DATA/srprim/datafile/system.256.882476449 output file name=+DATA1/newdb/datafile/system.268.887746333 tag=TAG20150814T201211 channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25 channel ORA_DISK_1: starting datafile copy input datafile file number=00002 name=+DATA/srprim/datafile/sysaux.257.882476449 output file name=+DATA1/newdb/datafile/sysaux.269.887746359 tag=TAG20150814T201211 channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:55 channel ORA_DISK_1: starting datafile copy input datafile file number=00003 name=+DATA/srprim/datafile/undotbs1.258.882476449 output file name=+DATA1/newdb/datafile/undotbs1.270.887746413 tag=TAG20150814T201211 channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07 channel ORA_DISK_1: starting datafile copy input datafile file number=00004 name=+DATA/srprim/datafile/users.259.882476449 output file name=+DATA1/newdb/datafile/users.271.887746421 tag=TAG20150814T201211 channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01 Finished backup at 14-AUG-15 contents of Memory Script: { switch clone datafile all; } executing Memory Script datafile 1 switched to datafile copy input datafile copy RECID=4 STAMP=887746422 file name=+DATA1/newdb/datafile/system.268.887746333 datafile 2 switched to datafile copy input datafile copy RECID=5 STAMP=887746422 file name=+DATA1/newdb/datafile/sysaux.269.887746359 datafile 3 switched to datafile copy input datafile copy RECID=6 STAMP=887746422 file name=+DATA1/newdb/datafile/undotbs1.270.887746413 datafile 4 switched to datafile copy input datafile copy RECID=7 STAMP=887746422 file name=+DATA1/newdb/datafile/users.271.887746421 contents of Memory Script: { recover clone database noredo delete archivelog ; } executing Memory Script Starting recover at 14-AUG-15 allocated channel: ORA_AUX_DISK_1 channel ORA_AUX_DISK_1: SID=23 device type=DISK Finished recover at 14-AUG-15 Oracle instance started Total System Global Area 943669248 bytes Fixed Size 2232128 bytes Variable Size 595591360 bytes Database Buffers 339738624 bytes Redo Buffers 6107136 bytes contents of Memory Script: { sql clone "alter system set db_name = ''NEWDB'' comment= ''Reset to original value by RMAN'' scope=spfile"; sql clone "alter system reset db_unique_name scope=spfile"; shutdown clone immediate; startup clone nomount; } executing Memory Script sql statement: alter system set db_name = ''NEWDB'' comment= ''Reset to original value by RMAN'' scope=spfile sql statement: alter system reset db_unique_name scope=spfile Oracle instance shut down connected to auxiliary database 
(not started) Oracle instance started Total System Global Area 943669248 bytes Fixed Size 2232128 bytes Variable Size 595591360 bytes Database Buffers 339738624 bytes Redo Buffers 6107136 bytes sql statement: CREATE CONTROLFILE REUSE SET DATABASE "NEWDB" RESETLOGS NOARCHIVELOG MAXLOGFILES 16 MAXLOGMEMBERS 3 MAXDATAFILES 100 MAXINSTANCES 8 MAXLOGHISTORY 292 LOGFILE GROUP 1 ( '+data1', '+fra1' ) SIZE 50 M REUSE, GROUP 2 ( '+data1', '+fra1' ) SIZE 50 M REUSE, GROUP 3 ( '+data1', '+fra1' ) SIZE 50 M REUSE DATAFILE '+DATA1/newdb/datafile/system.268.887746333' CHARACTER SET AL32UTF8 contents of Memory Script: { set newname for tempfile 1 to "+data1"; set newname for tempfile 2 to "+data1"; switch clone tempfile all; catalog clone datafilecopy "+DATA1/newdb/datafile/sysaux.269.887746359", "+DATA1/newdb/datafile/undotbs1.270.887746413", "+DATA1/newdb/datafile/users.271.887746421"; switch clone datafile all; } executing Memory Script executing command: SET NEWNAME executing command: SET NEWNAME renamed tempfile 1 to +data1 in control file renamed tempfile 2 to +data1 in control file cataloged datafile copy datafile copy file name=+DATA1/newdb/datafile/sysaux.269.887746359 RECID=1 STAMP=887746456 cataloged datafile copy datafile copy file name=+DATA1/newdb/datafile/undotbs1.270.887746413 RECID=2 STAMP=887746456 cataloged datafile copy datafile copy file name=+DATA1/newdb/datafile/users.271.887746421 RECID=3 STAMP=887746456 datafile 2 switched to datafile copy input datafile copy RECID=1 STAMP=887746456 file name=+DATA1/newdb/datafile/sysaux.269.887746359 datafile 3 switched to datafile copy input datafile copy RECID=2 STAMP=887746456 file name=+DATA1/newdb/datafile/undotbs1.270.887746413 datafile 4 switched to datafile copy input datafile copy RECID=3 STAMP=887746456 file name=+DATA1/newdb/datafile/users.271.887746421 contents of Memory Script: { Alter clone database open resetlogs; } executing Memory Script database opened Finished Duplicate Db at 14-AUG-15
Now let’s connect to the new database and check the status.
[oracle@ora1-3 ~]$ sqlplus sys/oracle@newdb as sysdba SQL*Plus: Release 11.2.0.2.0 Production on Fri Aug 14 20:18:32 2015 Copyright (c) 1982, 2010, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production With the Partitioning, Automatic Storage Management, OLAP, Data Mining and Real Application Testing options SQL> select status,instance_name,database_role,open_mode from v$database,v$instance; STATUS INSTANCE_NAME DATABASE_ROLE OPEN_MODE ------------ ---------------- ---------------- -------------------- OPEN newdb PRIMARY READ WRITE SQL> SQL> archive log list Database log mode No Archive Mode Automatic archival Disabled Archive destination USE_DB_RECOVERY_FILE_DEST Oldest online log sequence 1 Current log sequence 1
We can see that the new database, too, is in NOARCHIVELOG mode. It’s recommended to switch it to ARCHIVELOG mode, as sketched below.
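A quick sketch of that change is shown below; since the duplicate already uses a Flash Recovery Area, the archived logs would land in USE_DB_RECOVERY_FILE_DEST by default.

-- switch NEWDB to ARCHIVELOG mode
shutdown immediate
startup mount
alter database archivelog;
alter database open;
-- confirm the new log mode
archive log list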
Here we go !!
COPYRIGHT
© Shivananda Rao P, 2012 to 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Shivananda Rao and http://www.shivanandarao-oracle.com with appropriate and specific direction to the original content.
DISCLAIMER
The views expressed here are my own and do not necessarily reflect the views of any other individual, business entity, or organization. The views expressed by visitors on this blog are theirs solely and may not reflect mine.