Logical Standby DG
It is 2014-04-29. I have been busy lately and have not had time to write anything, so today I continue sorting out Data Guard material. This post covers the process of creating a logical DG and the things to watch out for.
So what is a logical DG? A physical DG is essentially a complete copy of the primary database: the redo it receives is applied block for block.
A logical DG, on the other hand, uses log-mining techniques to reconstruct SQL from the redo shipped by the primary and then executes that SQL on the standby.
The standby can be open for normal use, and besides the objects maintained from the primary it can also maintain other objects of its own.
This raises a problem: a row located by ROWID on the standby may be a completely different row than on the primary.
So for the tables that are modified on the primary, we recommend creating a primary key or a non-null unique index; when the redo reaches the standby, those constraints can be used to identify the rows and keep updates from being applied to the wrong rows. A primary key or non-null unique index on the primary is not strictly mandatory, but if a table has neither, a large amount of extra redo is generated, because supplemental logging then has to record every column of the row.
The physical organization in a logical standby database is different from that of the primary database, even though the logical standby database is created from a backup copy of the primary database. Thus, ROWIDs contained in the redo records generated by the primary database cannot be used to identify the corresponding row in the logical standby database.
Oracle uses primary-key or unique-constraint/index supplemental logging to logically identify a modified row in the logical standby database. When database-wide primary-key and unique-constraint/index supplemental logging is enabled, each UPDATE statement also writes the column values necessary in the redo log to uniquely identify the modified row in the logical standby database.
If a table has a primary key defined, then the primary key is logged along with the modified columns as part of the UPDATE statement to identify the modified row.
If there is no primary key, then the shortest nonnull unique-constraint/index is logged along with the modified columns as part of the UPDATE statement to identify the modified row.
If there is no primary key and no nonnull unique constraint/index, then all columns of bounded size are logged as part of the UPDATE statement to identify the modified row. All columns are logged except the following: LONG, LOB, LONG RAW, object type, and collections.
A function-based index, even though it is declared as unique, cannot be used to uniquely identify a modified row. However, logical standby databases support replication of tables that have function-based indexes defined, as long as modified rows can be uniquely identified.
Oracle recommends that you add a primary key or a nonnull unique index to tables in the primary database, whenever possible, to ensure that SQL Apply can efficiently apply redo data updates to the logical standby database.
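If a table has no duplicate rows but you cannot add an enforced primary key or unique index on the primary, the documented workaround is a disabled RELY constraint: it tells SQL Apply which columns identify a row without the cost of maintaining an index on the primary. A minimal sketch with a hypothetical table and column list (you must be certain the chosen columns really are unique, because the constraint is not enforced):

SQL> alter table scott.mytab add primary key (id, name) rely disable;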
So which datatypes does SQL Apply support, and which does it not? From the documentation:
Logical standby databases support the following datatypes:
BINARY_DOUBLE
BINARY_FLOAT
BLOB
CHAR
CLOB and NCLOB
DATE
INTERVAL YEAR TO MONTH
INTERVAL DAY TO SECOND
LONG
LONG RAW
NCHAR
NUMBER
NVARCHAR2
RAW
TIMESTAMP
TIMESTAMP WITH TIMEZONE
TIMESTAMP WITH LOCAL TIMEZONE
VARCHAR and VARCHAR2
LOBs stored as SecureFiles (requires that the primary database be run at a compatibility of 11.2 or higher)
XMLType data for all storage models, assuming the following primary database compatibility requirements:
XMLType stored in CLOB format requires the primary database to run at a compatibility of 11.1 or higher
XMLType stored in object-relational format or as binary XML requires that the primary database be running Oracle Database 11g Release 2 (11.2.0.3) or higher with a redo compatibility setting of 11.2.0.3 or higher
SQL Apply support for the following has compatibility requirements on the primary database:
Multibyte CLOB support requires primary database to run at a compatibility of 10.1 or higher.
IOT support without LOBs and Overflows requires primary database to run at a compatibility of 10.1 or higher.
IOT support with LOBs and Overflow requires primary database to run at a compatibility of 10.2 or higher.
TDE support requires primary database to run at a compatibility of 11.1 or higher.
Segment compression requires primary database to run at a compatibility of 11.1 or higher.
Hybrid Columnar Compression support is dependent on the underlying storage system.
Use the following views to find out which tables in the primary database cannot be maintained by SQL Apply:
[oracle@dg-one ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Tue Apr 29 21:30:01 2014
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> desc dba_logstdby_unsupported
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OWNER                                              VARCHAR2(30)
 TABLE_NAME                                         VARCHAR2(30)
 COLUMN_NAME                                        VARCHAR2(30)
 ATTRIBUTES                                         VARCHAR2(39)
 DATA_TYPE                                          VARCHAR2(32)

SQL> select owner,table_name,column_name,attributes,data_type from dba_logstdby_unsupported;

no rows selected

Check the tables that have uniqueness problems, as follows:

SQL> select * from dba_logstdby_not_unique;

OWNER                          TABLE_NAME                     B
------------------------------ ------------------------------ -
SCOTT                          BONUS                          N
SCOTT                          SALGRADE                       N
STDBYPERF                      STATS$MANAGED_STANDBY          N
STDBYPERF                      STATS$RECOVERY_PROGRESS        N

SQL> desc dba_logstdby_not_unique
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OWNER                                              VARCHAR2(30)
 TABLE_NAME                                         VARCHAR2(30)
 BAD_COLUMN                                         VARCHAR2(1)
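The tables listed in dba_logstdby_not_unique have neither a primary key nor a non-null unique index. If such a table does not actually need to be maintained on the logical standby, one option is to tell SQL Apply to skip its DML altogether. A sketch using SCOTT.BONUS from the output above (run on the logical standby once it is up, with apply stopped; whether this table should really be skipped is a judgment call):

SQL> alter database stop logical standby apply;
SQL> execute dbms_logstdby.skip(stmt => 'DML', schema_name => 'SCOTT', object_name => 'BONUS');
SQL> alter database start logical standby apply immediate;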
OK, now let's start creating the logical DG. Since I have already created a physical DG, we will convert it into a logical DG (note: a logical DG cannot be converted back into a physical DG). If you have not created a physical DG, you can also build the logical DG directly, but pay attention to the parameter changes.
Step 1: stop redo apply on the standby.
Check the database mode:
SQL> set linesize 200
SQL> col dest_name for a50
SQL> select database_mode,recovery_mode,protection_mode,dest_name from v$archive_dest_status where database_mode!='UNKNOWN';

DATABASE_MODE   RECOVERY_MODE           PROTECTION_MODE      DEST_NAME
--------------- ----------------------- -------------------- --------------------------------------------------
OPEN_READ-ONLY  MANAGED REAL TIME APPLY MAXIMUM PERFORMANCE  LOG_ARCHIVE_DEST_1

SQL>
Stop redo apply:
SQL> alter database recover managed standby database cancel;

Database altered.

SQL>
Step 2: on the primary, build the metadata (LogMiner dictionary) that the logical DG needs.
SQL> exec dbms_logstdby.build;

PL/SQL procedure successfully completed.

SQL>
Note: what exactly does this procedure do?
A LogMiner dictionary must be built into the redo data so that the LogMiner component of SQL Apply can properly interpret changes it sees in the redo. As part of building the LogMiner dictionary, supplemental logging is automatically set up to log primary key and unique-constraint/index columns. The supplemental logging information ensures each update contains enough information to logically identify each row that is modified by the statement.
To build the LogMiner dictionary, issue the following statement:
SQL> EXECUTE DBMS_LOGSTDBY.BUILD;
The DBMS_LOGSTDBY.BUILD procedure waits for all existing transactions to complete. Long-running transactions executed on the primary database will affect the timeliness of this command.
Note:
In databases created using Oracle Database 11g release 2 (11.2) or later, supplemental logging information is automatically propagated to any existing physical standby databases. However, for databases in earlier releases, or if the database was created using an earlier release and then upgraded to 11.2, you must check whether supplemental logging is enabled at the physical standby(s) if it is also enabled at the primary database. If it is not enabled at the physical standby(s), then before performing a switchover or failover, you must enable supplemental logging on all existing physical standby databases. To do so, issue the following SQL command on each physical standby:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;

If you do not do this, then any logical standby that is also in the same Data Guard configuration will be unusable if a switchover or failover is performed to one of the physical standby databases. If a switchover or failover has already occurred and supplemental logging was not enabled, then you must recreate all logical standby databases.
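A quick way to check whether supplemental logging is already enabled on a database (primary or physical standby) is to query v$database; after dbms_logstdby.build the PK and UI columns should report YES on the primary. This check is generic rather than specific to this setup:

SQL> select supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui from v$database;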
Step 3: on the standby, tell MRP to keep applying redo until it reaches the SCN at which the LogMiner dictionary was built.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area  313159680 bytes
Fixed Size                  2252824 bytes
Variable Size             171970536 bytes
Database Buffers          134217728 bytes
Redo Buffers                4718592 bytes
Database mounted.
SQL> alter database recover to logical standby Amy;

Database altered.
At this point we need to change a few things. First, db_name:
SQL> alter system set db_name=Amy scope=spfile;

System altered.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
There is also the log_archive_dest_n parameter configuration:
SQL> show parameter log_archive_dest_1

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_1                   string      location=USE_DB_RECOVERY_FILE_
                                                 DEST valid_for=(all_logfiles,a
                                                 ll_roles)
log_archive_dest_10                  string
log_archive_dest_11                  string
log_archive_dest_12                  string
log_archive_dest_13                  string
log_archive_dest_14                  string
log_archive_dest_15                  string
log_archive_dest_16                  string
log_archive_dest_17                  string

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_18                  string
log_archive_dest_19                  string

SQL> show parameter log_archive_dest_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2                   string      service=dg1 lgwr sync valid_fo
                                                 r=(online_logfiles,primary_rol
                                                 e) db_unique_name=dg1
log_archive_dest_20                  string
Why is log_archive_dest_1 set to all_logfiles? Because we are building a logical standby: besides maintaining the objects shipped from the primary, it can modify other objects of its own, and the redo generated for those objects must be archived as well, since a logical standby is a real, open database. Also, if you are not using the fast recovery area, you should add a log_archive_dest_3 so that the locally generated archived logs and the logs received from the primary are kept separate. If you are using the fast recovery area, valid_for can be omitted, because it defaults to (all_logfiles,all_roles).
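As a rough sketch of what separating the two kinds of logs can look like when archiving to plain directories instead of the fast recovery area (the paths here are made up, and dg2 is assumed to be this standby's db_unique_name):

SQL> alter system set log_archive_dest_1='location=/u01/arch/local valid_for=(online_logfiles,all_roles) db_unique_name=dg2';
SQL> alter system set log_archive_dest_3='location=/u01/arch/standby valid_for=(standby_logfiles,standby_role) db_unique_name=dg2';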
Another caveat: if the logical standby and the primary sit on the same storage, you need to use dbms_logstdby.skip to skip ALTER TABLESPACE DDL, otherwise a replicated ALTER TABLESPACE could accidentally touch the primary's tablespace files. A sketch of the skip call follows.
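A minimal sketch of registering that skip rule on the logical standby once it is open (step 4 below); apply has to be stopped while the rule is added:

SQL> alter database stop logical standby apply;
SQL> execute dbms_logstdby.skip('ALTER TABLESPACE');
SQL> alter database start logical standby apply immediate;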
Step 4: finish creating the logical DG:
SQL> shutdown immediat;
SP2-0717: illegal SHUTDOWN option
SQL> shutdown immediate;
ORA-01507: database not mounted

ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area  313159680 bytes
Fixed Size                  2252824 bytes
Variable Size             171970536 bytes
Database Buffers          134217728 bytes
Redo Buffers                4718592 bytes
Database mounted.
SQL> alter database open resetlogs;

Database altered.

SQL> alter database start logical standby apply immediate;

Database altered.

SQL>
SQL> set linesize 200
SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- --------- ------------ ---------
         1          1          1   52428800        512          1 NO  CURRENT                1335271 29-APR-14   2.8147E+14
         2          1          0   52428800        512          1 YES UNUSED                       0                      0
         3          1          0   52428800        512          1 YES UNUSED                       0                      0
Note: what does RESETLOGS actually do?
Opening with RESETLOGS overwrites the contents of the online redo logs, including any redo that has not been archived, and the log sequence starts again from 1. The database effectively enters a new incarnation, a new life cycle, so that redo belonging to the old incarnation is never applied past the RESETLOGS point.
Friends often ask me whether opening with RESETLOGS means an incomplete recovery was performed. That is a mistaken view: complete versus incomplete recovery is not determined by RESETLOGS, but by whether all of the redo was applied during the roll forward. If all of it was applied, it was a complete recovery.
We can list the database incarnations as follows:
[oracle@dg-two dbs]$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Tue Apr 29 22:20:46 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: AMY (DBID=1205246033)

RMAN> list incarnation
2> ;

using target database control file instead of recovery catalog

List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- --- ---------- ----------
1       1       AMY      1205246033       PARENT  1335270    29-APR-14
2       2       AMY      1205246033       CURRENT 1335271    29-APR-14
Step 5: verify the logical DG:
SQL> select * from v$logstdby_progress;

APPLIED_SCN APPLIED_T RESTART_SCN RESTART_T LATEST_SCN LATEST_TI MINING_SCN MINING_TI RESETLOGS_ID
----------- --------- ----------- --------- ---------- --------- ---------- --------- ------------
    1335268               1334503              1334503

SQL> select process,status,sequence# from v$managed_standby;

PROCESS   STATUS       SEQUENCE#
--------- ------------ ----------
ARCH      CONNECTED             0
ARCH      CONNECTED             0
ARCH      CONNECTED             0
ARCH      CONNECTED             0
ARCH      CONNECTED             0

SQL>
Hmm, something is wrong. Check the primary database's alert log:
Error 1017 received logging on to the standby
------------------------------------------------------------
Check that the primary and standby are using a password file
and remote_login_passwordfile is set to SHARED or EXCLUSIVE,
and that the SYS password is same in the password files.
      returning error ORA-16191
------------------------------------------------------------
PING[ARC1]: Heartbeat failed to connect to standby 'dg2'. Error is 16191.
So it is a password file problem.
Rebuild the password file? That would have worked in 10g, but in 11g simply recreating it is not enough. What we need to do is copy the primary's password file over to the standby and then rename it (mv) for the standby instance.
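After copying the file over, a generic way (not from the original post) to confirm that the standby instance actually picks it up is to query v$pwfile_users; SYS should come back with SYSDBA = TRUE, and no rows usually means the file name under $ORACLE_HOME/dbs (orapw<SID>) does not match:

SQL> select username, sysdba, sysoper from v$pwfile_users;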
Afterwards the primary's alert log looks like this (note that the primary actively pings the standby about once a minute):
------------------------------------------------------------
PING[ARC1]: Heartbeat failed to connect to standby 'dg2'. Error is 16191.
Tue Apr 29 22:39:42 2014
Error 1017 received logging on to the standby
------------------------------------------------------------
Check that the primary and standby are using a password file
and remote_login_passwordfile is set to SHARED or EXCLUSIVE,
and that the SYS password is same in the password files.
      returning error ORA-16191
------------------------------------------------------------
PING[ARC1]: Heartbeat failed to connect to standby 'dg2'. Error is 16191.
Tue Apr 29 22:40:46 2014
******************************************************************
LGWR: Setting 'active' archival for destination LOG_ARCHIVE_DEST_2
******************************************************************
LGWR: Standby redo logfile selected for thread 1 sequence 84 for destination LOG_ARCHIVE_DEST_2
Thread 1 advanced to log sequence 84 (LGWR switch)
  Current log# 3 seq# 84 mem# 0: /u01/app/oracle/oradata/dg/redo03.log
Tue Apr 29 22:40:49 2014
Archived Log entry 138 added for thread 1 sequence 83 ID 0x68b85302 dest 1:
Tue Apr 29 22:40:49 2014
ARC3: Standby redo logfile selected for thread 1 sequence 83 for destination LOG_ARCHIVE_DEST_2
Check the standby's alert log:
LOGMINER: Read buffers: 16
LOGMINER: Memory LWM: limit 10M, LWM 24M, 80%
LOGMINER: Memory Release Limit: 1M
RFS LogMiner: Registered logfile [/u01/app/oracle/fast_recovery_area/DG2/foreign_archivelog/DG1/2014_04_29/o1_mf_1_82_9ozghfdo_.arc] to LogMiner session id [1]
RFS LogMiner: Registered logfile [/u01/app/oracle/fast_recovery_area/DG2/foreign_archivelog/DG1/2014_04_29/o1_mf_1_81_9ozghffw_.arc] to LogMiner session id [1]
Tue Apr 29 22:40:47 2014
Primary database is in MAXIMUM PERFORMANCE mode
RFS[4]: Assigned to RFS process 3779
RFS[4]: Selected log 4 for thread 1 sequence 84 dbid 1756848898 branch 843517891
Tue Apr 29 22:40:48 2014
RFS[5]: Assigned to RFS process 3777
RFS[5]: Opened log for thread 1 sequence 79 dbid 1756848898 branch 843517891
RFS LogMiner: Registered logfile [/u01/app/oracle/fast_recovery_area/DG2/foreign_archivelog/DG1/2014_04_29/o1_mf_1_79_9ozghhy9_.arc] to LogMiner session id [1]
Tue Apr 29 22:40:50 2014
LOGMINER: session#=1 (Logical_Standby$), reader MS00 pid=38 OS id=3781 sid=44 started
Tue Apr 29 22:40:50 2014
LOGMINER: session#=1 (Logical_Standby$), builder MS01 pid=39 OS id=3783 sid=45 started
Tue Apr 29 22:40:50 2014
LOGMINER: session#=1 (Logical_Standby$), preparer MS02 pid=40 OS id=3785 sid=48 started
Tue Apr 29 22:40:50 2014
RFS LogMiner: RFS id [3787] assigned as thread [1] PING handler
Tue Apr 29 22:40:51 2014
RFS[6]: Assigned to RFS process 3789
RFS[6]: Selected log 5 for thread 1 sequence 83 dbid 1756848898 branch 843517891
Tue Apr 29 22:40:55 2014
RFS LogMiner: Registered logfile [/u01/app/oracle/fast_recovery_area/DG2/foreign_archivelog/DG1/2014_04_29/o1_mf_1_83_9ozghp29_.arc] to LogMiner session id [1]
LOGMINER: Begin mining logfile during dictionary load for session 1 thread 1 sequence 79, /u01/app/oracle/fast_recovery_area/DG2/foreign_archivelog/DG1/2014_04_29/o1_mf_1_79_9ozghhy9_.arc
Tue Apr 29 22:41:09 2014
Thread 1 advanced to log sequence 2 (LGWR switch)
  Current log# 2 seq# 2 mem# 0: /u01/app/oracle/oradata/dg/DG2/onlinelog/o1_mf_2_9ozd8zc1_.log
Tue Apr 29 22:41:11 2014
Archived Log entry 1 added for thread 1 sequence 1 ID 0x47d6f0d0 dest 1:
Tue Apr 29 22:41:20 2014
Thread 1 advanced to log sequence 3 (LGWR switch)
  Current log# 3 seq# 3 mem# 0: /u01/app/oracle/oradata/dg/DG2/onlinelog/o1_mf_3_9ozd904o_.log
Tue Apr 29 22:41:22 2014
Archived Log entry 2 added for thread 1 sequence 2 ID 0x47d6f0d0 dest 1:
Thread 1 cannot allocate new log, sequence 4
Checkpoint not complete
  Current log# 3 seq# 3 mem# 0: /u01/app/oracle/oradata/dg/DG2/onlinelog/o1_mf_3_9ozd904o_.log
Thread 1 advanced to log sequence 4 (LGWR switch)
  Current log# 1 seq# 4 mem# 0: /u01/app/oracle/oradata/dg/DG2/onlinelog/o1_mf_1_9ozd8yhq_.log
Tue Apr 29 22:41:31 2014
Archived Log entry 3 added for thread 1 sequence 3 ID 0x47d6f0d0 dest 1:
Tue Apr 29 22:41:47 2014
RFS LogMiner: RFS id [3787] assigned as thread [1] PING handler
Confirm the processes:
SQL> select process,status,sequence# from v$managed_standby;

PROCESS   STATUS       SEQUENCE#
--------- ------------ ----------
ARCH      CLOSING               2
ARCH      CLOSING               3
ARCH      CONNECTED             0
ARCH      CLOSING              83
ARCH      CLOSING               1
RFS       IDLE                  0
RFS       IDLE                  0
RFS       IDLE                  0
RFS       IDLE                  0
RFS       IDLE                 84

10 rows selected.

SQL>
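As an extra sanity check (not in the original post) that SQL Apply is really maintaining the standby, you can look at the apply processes and push a small change through from the primary; the test table below is made up for illustration:

-- On the logical standby: the SQL Apply processes should be running
SQL> select sid, type, status from v$logstdby_process;

-- On the primary: make a small change and switch a log
SQL> create table scott.lsby_test (id number primary key, txt varchar2(30));
SQL> insert into scott.lsby_test values (1, 'hello standby');
SQL> commit;
SQL> alter system switch logfile;

-- Back on the logical standby: the row should show up shortly
SQL> select * from scott.lsby_test;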
That completes the setup of the logical DG.
Original post: http://www.cnblogs.com/lxjshuju/p/7026790.html