Oracle 12c Data Guard Deployment and Maintenance
Oracle 12c Data Guard Deployment
1. Environment Deployment
2. Maintenance
2.1 Check archive log sequences on primary and standby
2.2 Check archive log shipping and apply status on the standby
2.3 Check the mode of primary and standby
2.4 Archive log housekeeping on primary and standby
2.5 Handling insufficient disk space on the standby
2.6 Rebuilding the standby
3. Removing the Data Guard Configuration
3.1 Standby Database Alone
3.2 Detaching the primary from Data Guard
The Data Guard deployment procedure in 12c is largely the same as in 11g and can follow the 11g method.
1. Environment Deployment
Item             Primary                               Standby
DB Version       12.1.0.2                              12.1.0.2
Hostname         primary                               standby
Host IP          172.16.4.51                           172.16.4.52
DB_NAME          orcl                                  orcl
DB_UNIQUE_NAME   orcl                                  stdorcl
Instance_Name    orcl                                  stdorcl
DB Listener      1521                                  1521
DB Storage       FileSystem                            FileSystem
DB files         /u01/app/oracle/oradata/orcl/         /u01/app/oracle/oradata/stdorcl/
LOG files        /u01/app/oracle/oradata/orcl/         /u01/app/oracle/oradata/stdorcl/
ORACLE_HOME      /u01/app/oracle/product/12.1.0/db_1/  /u01/app/oracle/product/12.1.0/db_1/
OS               CentOS 7.5                            CentOS 7.5
1. hosts entries
On both the primary and the standby:
172.16.4.51 primary
172.16.4.52 standby
2. Add standby redo log files on the primary (if there are N online redo log groups, N+1 standby redo log groups are needed).
Confirm the current number of groups on the primary:
SQL> select group#,thread#,members,status from v$log;
GROUP# THREAD# MEMBERS STATUS
---------- ---------- ---------- ----------------
1 1 1 INACTIVE
2 1 1 CURRENT
3 2 1 INACTIVE
4 2 1 CURRENT
Add the standby log files:
alter database add standby logfile group 5 '/u01/app/oracle/oradata/orcl/stdredo1.log' size 50m;
alter database add standby logfile group 6 '/u01/app/oracle/oradata/orcl/stdredo2.log' size 50m;
alter database add standby logfile group 7 '/u01/app/oracle/oradata/orcl/stdredo3.log' size 50m;
alter database add standby logfile group 8 '/u01/app/oracle/oradata/orcl/stdredo4.log' size 50m;
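To confirm the groups were created, a quick check against the standard V$STANDBY_LOG view can be run on the primary:
SQL> select group#, thread#, bytes/1024/1024 size_mb, status from v$standby_log;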
3. As the oracle user on the standby, create the archive, data, and backup directories and set permissions
As the oracle user:
mkdir -p /u01/app/oracle/diag/orcl
mkdir -p /u01/app/oracle/oradata/stdorcl/
mkdir -p /u01/arch
mkdir -p /u01/rman
mkdir -p /u01/app/oracle/oradata/stdorcl/pdbseed/
mkdir -p /u01/app/oracle/oradata/stdorcl/pdb/
4. Enable force logging on the primary
ALTER DATABASE FORCE LOGGING;
5. With archive log mode already enabled on the primary, set the following parameters
alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(orcl,stdorcl)' scope=both sid='*';
alter system set log_archive_dest_1='LOCATION=/u01/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=orcl' scope=spfile;
alter system set LOG_ARCHIVE_DEST_2='SERVICE=stdorcl LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stdorcl' scope=both sid='*';
alter system set fal_client='orcl' scope=both sid='*';
alter system set FAL_SERVER='stdorcl' scope=both sid='*';
alter system set standby_file_management=AUTO scope=both sid='*';
alter system set DB_FILE_NAME_CONVERT='/u01/app/oracle/oradata/stdorcl/','/u01/app/oracle/oradata/orcl/' scope=spfile sid='*';
alter system set LOG_FILE_NAME_CONVERT='/u01/app/oracle/oradata/stdorcl/','/u01/app/oracle/oradata/orcl/' scope=spfile sid='*';
alter system set log_archive_format='%t_%s_%r.arc' scope=spfile sid='*';
alter system set remote_login_passwordfile='EXCLUSIVE' scope=spfile;
alter system set PARALLEL_EXECUTION_MESSAGE_SIZE=8192 scope=spfile;
Parameter notes (a verification query follows this list):
LOG_ARCHIVE_CONFIG: lists the DB_UNIQUE_NAME of the primary and standby databases. Defining it lets the two databases recognize each other.
LOG_ARCHIVE_DEST_1: the local archive destination. The primary and the standby each define their own path.
LOG_ARCHIVE_DEST_2: defines how redo is shipped (SYNC or ASYNC) and where it is shipped to (the standby apply node). Each side points at the other database's db_unique_name, and the SERVICE value must match a tnsnames.ora alias.
FAL_CLIENT: (Fetch Archive Log) used to resolve archive gaps; identifies the client that requests missing logs. The value is an Oracle Net service name.
FAL_SERVER: takes effect once this database becomes a standby. The database named here ships the missing redo or archived logs to FAL_CLIENT over the network; the value is an Oracle Net service name.
STANDBY_FILE_MANAGEMENT: takes effect once this database becomes a standby. Controls whether tablespace and data file additions made on the primary are propagated automatically to the physical standby.
  AUTO: tablespace/data file creation on the primary is replayed automatically on the physical standby.
  MANUAL (the default if unset): newly created data files must be copied to the physical standby by hand.
DB_FILE_NAME_CONVERT: takes effect once this database becomes a standby. Maps primary data file directories to standby directories when the directory structures differ; list each pair. Note: on the primary this parameter only matters after a switchover, but the format should stay consistent on both sides.
LOG_FILE_NAME_CONVERT: takes effect once this database becomes a standby. Maps the primary and standby log file paths.
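After setting the parameters, they can be verified on the primary with a simple query against V$PARAMETER:
SQL> col name format a30
SQL> col value format a80
SQL> select name, value from v$parameter
     where name in ('log_archive_config','log_archive_dest_1','log_archive_dest_2',
                    'fal_client','fal_server','standby_file_management',
                    'db_file_name_convert','log_file_name_convert');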
6. Edit tnsnames.ora on both the primary and the standby
orcl =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = primary)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = orcl)
)
)
stdorcl =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = standby)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SID = stdorcl)
)
)
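Before the duplicate step later on, it is worth confirming that both aliases resolve from each host (standard tnsping utility; output will vary):
$ tnsping orcl
$ tnsping stdorcl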
7. Generate a pfile for the standby on the primary
create pfile from spfile;
8. Copy the primary's parameter file and password file to the standby
On the standby, the password file must be renamed to match the standby instance (orapw<standby instance name>):
mv orapworcl orapwstdorcl
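One possible copy sequence from the primary host, assuming the oracle account can scp between the two servers (file names follow the default $ORACLE_HOME/dbs conventions; initorcl.ora is the pfile generated in step 7):
# on the primary, as oracle
scp $ORACLE_HOME/dbs/initorcl.ora standby:/u01/app/oracle/product/12.1.0/db_1/dbs/initstdorcl.ora
scp $ORACLE_HOME/dbs/orapworcl    standby:/u01/app/oracle/product/12.1.0/db_1/dbs/
# then on the standby, run the rename shown above (mv orapworcl orapwstdorcl)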
9. Edit the parameter file on the standby
*.audit_trail='NONE'
*.compatible='12.1.0.2.0'
*.control_files='/u01/app/oracle/oradata/stdorcl/control01.ctl','/u01/app/oracle/oradata/stdorcl/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='orcl'
*.db_unique_name='stdorcl'
*.diagnostic_dest='/u01/app/oracle'
*.log_archive_format='%t_%s_%r.arc'
*.open_cursors=300
*.enable_pluggable_database=true
*.pga_aggregate_target=367001600
*.processes=1000
*.remote_login_passwordfile='exclusive'
*.sga_target=891289600
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(stdorcl,orcl)'
*.LOG_ARCHIVE_DEST_1='LOCATION=/u01/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=stdorcl'
*.LOG_ARCHIVE_DEST_2='SERVICE=orcl LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=orcl'
*.FAL_SERVER='orcl'
*.fal_client='stdorcl'
*.DB_FILE_NAME_CONVERT='/u01/app/oracle/oradata/orcl/','/u01/app/oracle/oradata/stdorcl/'
*.LOG_FILE_NAME_CONVERT='/u01/app/oracle/oradata/orcl/','/u01/app/oracle/oradata/stdorcl/'
*.standby_file_management=AUTO
10. Add a static listener entry on the standby
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(ORACLE_HOME = /u01/app/oracle/product/12.1.0/db_1)
(SID_NAME = stdorcl)
)
)
Note: remember to start the listener on the standby with lsnrctl start; a quick check follows.
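After editing listener.ora, bounce the listener and verify the static entry is registered (standard lsnrctl commands; a statically registered instance is listed with status UNKNOWN):
$ lsnrctl stop
$ lsnrctl start
$ lsnrctl status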
11. Start the standby instance with the edited parameter file
startup nomount;
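If no spfile exists yet on the standby, startup can point at the copied pfile explicitly (path assumed from the copy sketch in step 8):
$ export ORACLE_SID=stdorcl
$ sqlplus / as sysdba
SQL> startup nomount pfile='/u01/app/oracle/product/12.1.0/db_1/dbs/initstdorcl.ora';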
12. On the primary, connect to both databases with RMAN
rman target sys/oracle@orcl auxiliary sys/oracle@stdorcl
Run the duplicate:
run{
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate AUXILIARY channel c4 type disk;
allocate AUXILIARY channel c5 type disk;
allocate AUXILIARY channel c6 type disk;
DUPLICATE TARGET DATABASE
FOR STANDBY
FROM ACTIVE DATABASE
DORECOVER
NOFILENAMECHECK;
release channel c1;
release channel c2;
release channel c3;
release channel c4;
release channel c5;
release channel c6;
}
Sample output:
using target database control file instead of recovery catalog
allocated channel: c1
channel c1: SID=53 device type=DISK
allocated channel: c2
channel c2: SID=62 device type=DISK
allocated channel: c3
channel c3: SID=55 device type=DISK
allocated channel: c4
channel c4: SID=23 device type=DISK
allocated channel: c5
channel c5: SID=24 device type=DISK
allocated channel: c6
channel c6: SID=25 device type=DISK
Starting Duplicate Db at 2018/11/24 06:41:11
current log archived
contents of Memory Script:
{
backup as copy reuse
targetfile '/u01/app/oracle/product/12.1.0/db_1/dbs/orapworcl' auxiliary format
'/u01/app/oracle/product/12.1.0/db_1/dbs/orapwstdorcl' ;
}
executing Memory Script
Starting backup at 2018/11/24 06:41:11
Finished backup at 2018/11/24 06:41:12
contents of Memory Script:
{
restore clone from service 'orcl' standby controlfile;
}
executing Memory Script
Starting restore at 2018/11/24 06:41:12
channel c4: starting datafile backup set restore
channel c4: using network backup set from service orcl
channel c4: restoring control file
channel c4: restore complete, elapsed time: 00:00:01
output file name=/u01/app/oracle/oradata/stdorcl/control01.ctl
output file name=/u01/app/oracle/oradata/stdorcl/control02.ctl
Finished restore at 2018/11/24 06:41:13
contents of Memory Script:
{
sql clone 'alter database mount standby database';
}
executing Memory Script
sql statement: alter database mount standby database
Using previous duplicated file /u01/app/oracle/oradata/stdorcl/system01.dbf for datafile 1 with checkpoint SCN of 1870657
Using previous duplicated file /u01/app/oracle/oradata/stdorcl/sysaux01.dbf for datafile 3 with checkpoint SCN of 1870658
Using previous duplicated file /u01/app/oracle/oradata/stdorcl/undotbs01.dbf for datafile 4 with checkpoint SCN of 1869963
Using previous duplicated file /u01/app/oracle/oradata/stdorcl/pdbseed/system01.dbf for datafile 5 with checkpoint SCN of 1822613
Using previous duplicated file /u01/app/oracle/oradata/stdorcl/users01.dbf for datafile 6 with checkpoint SCN of 1870760
contents of Memory Script:
{
set newname for tempfile 1 to
"/u01/app/oracle/oradata/stdorcl/temp01.dbf";
set newname for tempfile 2 to
"/u01/app/oracle/oradata/stdorcl/pdbseed/pdbseed_temp012018-11-24_12-05-19-AM.dbf";
set newname for tempfile 3 to
"/u01/app/oracle/oradata/stdorcl/pdb/pdb_temp012018-11-24_12-15-56-AM.dbf";
switch clone tempfile all;
set newname for datafile 1 to
"/u01/app/oracle/oradata/stdorcl/system01.dbf";
set newname for datafile 3 to
"/u01/app/oracle/oradata/stdorcl/sysaux01.dbf";
set newname for datafile 4 to
"/u01/app/oracle/oradata/stdorcl/undotbs01.dbf";
set newname for datafile 5 to
"/u01/app/oracle/oradata/stdorcl/pdbseed/system01.dbf";
set newname for datafile 6 to
"/u01/app/oracle/oradata/stdorcl/users01.dbf";
set newname for datafile 7 to
"/u01/app/oracle/oradata/stdorcl/pdbseed/sysaux01.dbf";
set newname for datafile 8 to
"/u01/app/oracle/oradata/stdorcl/pdb/system01.dbf";
set newname for datafile 9 to
"/u01/app/oracle/oradata/stdorcl/pdb/sysaux01.dbf";
set newname for datafile 10 to
"/u01/app/oracle/oradata/stdorcl/pdb/SAMPLE_SCHEMA_users01.dbf";
set newname for datafile 11 to
"/u01/app/oracle/oradata/stdorcl/pdb/example01.dbf";
restore
from service 'orcl' clone datafile
7, 8, 9, 10, 11 ;
sql 'alter system archive log current';
}
executing Memory Script
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
renamed tempfile 1 to /u01/app/oracle/oradata/stdorcl/temp01.dbf in control file
renamed tempfile 2 to /u01/app/oracle/oradata/stdorcl/pdbseed/pdbseed_temp012018-11-24_12-05-19-AM.dbf in control file
renamed tempfile 3 to /u01/app/oracle/oradata/stdorcl/pdb/pdb_temp012018-11-24_12-15-56-AM.dbf in control file
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
Starting restore at 2018/11/24 06:41:18
channel c4: starting datafile backup set restore
channel c4: using network backup set from service orcl
channel c4: specifying datafile(s) to restore from backup set
channel c4: restoring datafile 00007 to /u01/app/oracle/oradata/stdorcl/pdbseed/sysaux01.dbf
channel c5: starting datafile backup set restore
channel c5: using network backup set from service orcl
channel c5: specifying datafile(s) to restore from backup set
channel c5: restoring datafile 00008 to /u01/app/oracle/oradata/stdorcl/pdb/system01.dbf
channel c6: starting datafile backup set restore
channel c6: using network backup set from service orcl
channel c6: specifying datafile(s) to restore from backup set
channel c6: restoring datafile 00009 to /u01/app/oracle/oradata/stdorcl/pdb/sysaux01.dbf
channel c5: restore complete, elapsed time: 00:00:38
channel c5: starting datafile backup set restore
channel c5: using network backup set from service orcl
channel c5: specifying datafile(s) to restore from backup set
channel c5: restoring datafile 00010 to /u01/app/oracle/oradata/stdorcl/pdb/SAMPLE_SCHEMA_users01.dbf
channel c4: restore complete, elapsed time: 00:00:54
channel c4: starting datafile backup set restore
channel c4: using network backup set from service orcl
channel c4: specifying datafile(s) to restore from backup set
channel c4: restoring datafile 00011 to /u01/app/oracle/oradata/stdorcl/pdb/example01.dbf
channel c5: restore complete, elapsed time: 00:00:16
channel c6: restore complete, elapsed time: 00:00:54
channel c4: restore complete, elapsed time: 00:00:35
Finished restore at 2018/11/24 06:42:48
sql statement: alter system archive log current
current log archived
contents of Memory Script:
{
restore clone force from service 'orcl'
archivelog from scn 1822613;
catalog clone datafilecopy "/u01/app/oracle/oradata/stdorcl/system01.dbf",
"/u01/app/oracle/oradata/stdorcl/sysaux01.dbf",
"/u01/app/oracle/oradata/stdorcl/undotbs01.dbf",
"/u01/app/oracle/oradata/stdorcl/pdbseed/system01.dbf",
"/u01/app/oracle/oradata/stdorcl/users01.dbf";
switch clone datafile 1 to datafilecopy
"/u01/app/oracle/oradata/stdorcl/system01.dbf";
switch clone datafile 3 to datafilecopy
"/u01/app/oracle/oradata/stdorcl/sysaux01.dbf";
switch clone datafile 4 to datafilecopy
"/u01/app/oracle/oradata/stdorcl/undotbs01.dbf";
switch clone datafile 5 to datafilecopy
"/u01/app/oracle/oradata/stdorcl/pdbseed/system01.dbf";
switch clone datafile 6 to datafilecopy
"/u01/app/oracle/oradata/stdorcl/users01.dbf";
switch clone datafile all;
}
executing Memory Script
Starting restore at 2018/11/24 06:42:51
channel c4: starting archived log restore to default destination
channel c4: using network backup set from service orcl
channel c4: restoring archived log
archived log thread=1 sequence=20
channel c5: starting archived log restore to default destination
channel c5: using network backup set from service orcl
channel c5: restoring archived log
archived log thread=1 sequence=21
channel c6: starting archived log restore to default destination
channel c6: using network backup set from service orcl
channel c6: restoring archived log
archived log thread=1 sequence=22
channel c4: restore complete, elapsed time: 00:00:04
channel c4: starting archived log restore to default destination
channel c4: using network backup set from service orcl
channel c4: restoring archived log
archived log thread=1 sequence=23
channel c5: restore complete, elapsed time: 00:00:04
channel c5: starting archived log restore to default destination
channel c5: using network backup set from service orcl
channel c5: restoring archived log
archived log thread=1 sequence=24
channel c6: restore complete, elapsed time: 00:00:04
channel c6: starting archived log restore to default destination
channel c6: using network backup set from service orcl
channel c6: restoring archived log
archived log thread=1 sequence=25
channel c4: restore complete, elapsed time: 00:00:05
channel c4: starting archived log restore to default destination
channel c4: using network backup set from service orcl
channel c4: restoring archived log
archived log thread=1 sequence=26
channel c5: restore complete, elapsed time: 00:00:05
channel c5: starting archived log restore to default destination
channel c5: using network backup set from service orcl
channel c5: restoring archived log
archived log thread=1 sequence=27
channel c6: restore complete, elapsed time: 00:00:05
channel c6: starting archived log restore to default destination
channel c6: using network backup set from service orcl
channel c6: restoring archived log
archived log thread=1 sequence=28
channel c4: restore complete, elapsed time: 00:00:00
channel c4: starting archived log restore to default destination
channel c4: using network backup set from service orcl
channel c4: restoring archived log
archived log thread=1 sequence=29
channel c5: restore complete, elapsed time: 00:00:00
channel c5: starting archived log restore to default destination
channel c5: using network backup set from service orcl
channel c5: restoring archived log
archived log thread=1 sequence=30
channel c4: restore complete, elapsed time: 00:00:00
channel c4: starting archived log restore to default destination
channel c4: using network backup set from service orcl
channel c4: restoring archived log
archived log thread=1 sequence=31
channel c6: restore complete, elapsed time: 00:00:01
channel c5: restore complete, elapsed time: 00:00:01
channel c4: restore complete, elapsed time: 00:00:00
Finished restore at 2018/11/24 06:43:02
cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/stdorcl/system01.dbf RECID=7 STAMP=993019382
cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/stdorcl/sysaux01.dbf RECID=8 STAMP=993019382
cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/stdorcl/undotbs01.dbf RECID=9 STAMP=993019382
cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/stdorcl/pdbseed/system01.dbf RECID=10 STAMP=993019382
cataloged datafile copy
datafile copy file name=/u01/app/oracle/oradata/stdorcl/users01.dbf RECID=11 STAMP=993019382
datafile 1 switched to datafile copy
input datafile copy RECID=7 STAMP=993019382 file name=/u01/app/oracle/oradata/stdorcl/system01.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=8 STAMP=993019382 file name=/u01/app/oracle/oradata/stdorcl/sysaux01.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=9 STAMP=993019382 file name=/u01/app/oracle/oradata/stdorcl/undotbs01.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=10 STAMP=993019382 file name=/u01/app/oracle/oradata/stdorcl/pdbseed/system01.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=11 STAMP=993019382 file name=/u01/app/oracle/oradata/stdorcl/users01.dbf
datafile 7 switched to datafile copy
input datafile copy RECID=12 STAMP=993019383 file name=/u01/app/oracle/oradata/stdorcl/pdbseed/sysaux01.dbf
datafile 8 switched to datafile copy
input datafile copy RECID=13 STAMP=993019383 file name=/u01/app/oracle/oradata/stdorcl/pdb/system01.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=14 STAMP=993019383 file name=/u01/app/oracle/oradata/stdorcl/pdb/sysaux01.dbf
datafile 10 switched to datafile copy
input datafile copy RECID=15 STAMP=993019383 file name=/u01/app/oracle/oradata/stdorcl/pdb/SAMPLE_SCHEMA_users01.dbf
datafile 11 switched to datafile copy
input datafile copy RECID=16 STAMP=993019383 file name=/u01/app/oracle/oradata/stdorcl/pdb/example01.dbf
contents of Memory Script:
{
set until scn 1871591;
recover
standby
clone database
delete archivelog
;
}
executing Memory Script
executing command: SET until clause
Starting recover at 2018/11/24 06:43:03
starting media recovery
archived log for thread 1 with sequence 25 is already on disk as file /u01/arch/1_25_992995473.arc
archived log for thread 1 with sequence 26 is already on disk as file /u01/arch/1_26_992995473.arc
archived log for thread 1 with sequence 27 is already on disk as file /u01/arch/1_27_992995473.arc
archived log for thread 1 with sequence 28 is already on disk as file /u01/arch/1_28_992995473.arc
archived log for thread 1 with sequence 29 is already on disk as file /u01/arch/1_29_992995473.arc
archived log for thread 1 with sequence 30 is already on disk as file /u01/arch/1_30_992995473.arc
archived log for thread 1 with sequence 31 is already on disk as file /u01/arch/1_31_992995473.arc
archived log file name=/u01/arch/1_25_992995473.arc thread=1 sequence=25
archived log file name=/u01/arch/1_26_992995473.arc thread=1 sequence=26
archived log file name=/u01/arch/1_27_992995473.arc thread=1 sequence=27
archived log file name=/u01/arch/1_28_992995473.arc thread=1 sequence=28
archived log file name=/u01/arch/1_29_992995473.arc thread=1 sequence=29
archived log file name=/u01/arch/1_30_992995473.arc thread=1 sequence=30
archived log file name=/u01/arch/1_31_992995473.arc thread=1 sequence=31
media recovery complete, elapsed time: 00:00:00
Finished recover at 2018/11/24 06:43:05
Finished Duplicate Db at 2018/11/24 06:43:09
released channel: c1
released channel: c2
released channel: c3
released channel: c4
released channel: c5
released channel: c6
13. Create an spfile on the standby
create spfile from pfile;
Then restart the standby:
shutdown immediate
startup
14. Check the primary's status
Run on the primary node:
select SWITCHOVER_STATUS,DATABASE_ROLE,PROTECTION_MODE,OPEN_MODE FROM v$database;
By default the protection mode should show MAXIMUM PERFORMANCE.
15. Enable real-time apply on the standby
In 12c the USING CURRENT LOGFILE clause is no longer required; redo is applied in real time:
SQL> alter database recover managed standby database disconnect;
SQL> alter database open;   -- if not already open
SQL> SELECT NAME,DATABASE_ROLE,PROTECTION_MODE,OPEN_MODE FROM V$DATABASE;
NAME DATABASE_ROLE PROTECTION_MODE OPEN_MODE
----- -------------------- -------------------- ---------------------
RAC PHYSICAL STANDBY MAXIMUM PERFORMANCE READ ONLY WITH APPLY
16. Compare primary and standby
On the standby:
SQL> select process,status from v$managed_standby;
PROCESS STATUS
--------- ------------
ARCH CLOSING
ARCH CONNECTED
RFS IDLE
RFS IDLE
MRP0 APPLYING_LOG
Switch log files on the primary several times, then check that the highest sequence numbers match on the primary and the standby:
alter system switch logfile;
SQL> select thread#,max(sequence#) from v$log_history group by thread#;
-- The following query shows the Data Guard apply status
SQL> select thread#,sequence#,archived,applied from v$archived_log order by sequence# desc;
17. Synchronization check
On the primary:
#su - oracle
$sqlplus / as sysdba
SQL>create table test as select * from dba_objects where rownum < 101;
On the standby:
#su - oracle
$sqlplus / as sysdba
SQL>select count(*) from test;
count(*)
---------------
100
A count of 100 confirms that real-time synchronization is working.
Back on the primary, clean up:
SQL> drop table test purge;
2. Maintenance
2.1 Check archive log sequences on primary and standby
Check the log sequence on the primary:
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination +ARCH
Oldest online log sequence 159
Next log sequence to archive 160
Current log sequence 160
Check the log sequence on the standby:
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/arch_std
Oldest online log sequence 159
Next log sequence to archive 0
Current log sequence 160
Verify that the Current log sequence is the same on the primary and the standby.
2.2 Check archive log shipping and apply status on the standby
Run on the standby.
Check which logs have been applied:
SQL> select thread#,sequence#,first_time,next_time,applied from v$archived_log where applied='YES';
SQL> select thread#,sequence#,first_time,next_time,applied from v$archived_log where applied='NO';
SQL> SELECT PROCESS, STATUS,THREAD#,SEQUENCE#,BLOCK#,BLOCKS, DELAY_MINS FROM V$MANAGED_STANDBY;
PROCESS STATUS THREAD# SEQUENCE# BLOCK# BLOCKS DELAY_MINS
--------- ------------ ---------- ---------- ---------- ---------- ----------
ARCH CLOSING 1 168 1 1 0
ARCH CLOSING 1 169 1 77 0
ARCH CONNECTED 0 0 0 0 0
ARCH CLOSING 2 96 1 31 0
RFS IDLE 0 0 0 0 0
RFS IDLE 0 0 0 0 0
RFS IDLE 0 0 0 0 0
RFS IDLE 2 97 4557 1 0
RFS IDLE 0 0 0 0 0
RFS IDLE 0 0 0 0 0
RFS IDLE 1 170 3917 1 0
MRP0 APPLYING_LOG 2 97 4555 102400 0
Check the log apply rate on the standby:
select * from V$RECOVERY_PROGRESS;
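For a more focused view of the apply rate, the same view can be filtered to the rate-related items (standard ITEM values in V$RECOVERY_PROGRESS):
SQL> select item, units, sofar, total from v$recovery_progress
     where item in ('Active Apply Rate','Average Apply Rate','Redo Applied');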
The status can also be followed through the V$DATAGUARD_STATUS dynamic performance view:
SQL> select message from V$DATAGUARD_STATUS order by TIMESTAMP;
This view shows Data Guard status messages, including how much redo generated on the primary has not yet been applied on the standby.
MESSAGE
ARC0: Archival started
ARC1: Archival started
ARC2: Archival started
ARC1: Becoming the 'no FAL' ARCH
ARC2: Becoming the heartbeat ARCH
ARC2: Becoming the active heartbeat ARCH
ARC3: Archival started
RFS[1]: Assigned to RFS process 1723
RFS[2]: Assigned to RFS process 1725
RFS[3]: Assigned to RFS process 1720
ARC3: Beginning to archive thread 1 sequence 100 (943743-966774)
ARC3: Completed archiving thread 1 sequence 100 (0-0)
Primary database is in MAXIMUM PERFORMANCE mode
ARC0: Completed archiving thread 1 sequence 105 (0-0)
ARC0: Beginning to archive thread 1 sequence 105 (989272-989582)
RFS[5]: Assigned to RFS process 1742
RFS[4]: Assigned to RFS process 1740
Attempt to start background Managed Standby Recovery process
MRP0: Background Managed Standby Recovery process started
Managed Standby Recovery starting Real Time Apply
Media Recovery Waiting for thread 2 sequence 18   <-- the standby is waiting for thread 2 redo to apply
Notes:
1. When checking log synchronization, APPLIED=YES means the log has been applied successfully; APPLIED=NO means it has not yet been applied — judge against the log times on the primary and the standby.
2. With real-time apply, the MRP process is normally in APPLYING_LOG; under Active Data Guard it is usually WAIT_FOR_LOG.
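A quick gap check on the standby is also worthwhile; no rows returned means there is no archive gap (standard V$ARCHIVE_GAP view):
SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;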
2.3 Check the mode of primary and standby
Check the protection mode.
On the primary:
SQL> SELECT NAME,DATABASE_ROLE,PROTECTION_MODE,OPEN_MODE FROM V$DATABASE;
NAME DATABASE_ROLE PROTECTION_MODE OPEN_MODE
--------- ---------------- -------------------- --------------------
ORCLDB PRIMARY MAXIMUM PERFORMANCE READ WRITE
The default is maximum performance mode.
On the standby:
SQL> SELECT NAME,DATABASE_ROLE,PROTECTION_MODE,OPEN_MODE FROM V$DATABASE;
NAME DATABASE_ROLE PROTECTION_MODE OPEN_MODE
--------- ---------------- -------------------- --------------------
ORCLDB PHYSICAL STANDBY MAXIMUM PERFORMANCE READ ONLY WITH APPLY
The standby is also in maximum performance mode, with log apply running.
Check the apply mode (whether real-time apply is enabled)
Query the v$archive_dest_status view; when real-time apply is enabled, recovery_mode shows MANAGED REAL TIME APPLY, for example:
On the standby:
SQL> select recovery_mode from v$archive_dest_status where DEST_NAME='LOG_ARCHIVE_DEST_1';
RECOVERY_MODE
-----------------------
MANAGED REAL TIME APPLY
On the primary:
SQL> select recovery_mode from v$archive_dest_status where DEST_NAME='LOG_ARCHIVE_DEST_2';
RECOVERY_MODE
-----------------------
MANAGED REAL TIME APPLY
2.4 Archive log housekeeping on primary and standby
An RMAN script can be used to delete archived logs that have already been applied on the standby.
clean_arch.sh:
#!/bin/bash
# set ORACLE_SID to the local instance (orcl on the primary, stdorcl on the standby)
export ORACLE_SID=orcl
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
export ORACLE_BASE=/u01/app/oracle
time=`date +"20%y%m%d%H%M%S"`
logdir="/home/oracle/arch_clean_log"
# purge script logs older than 7 days
find $logdir -type f -name "*.log" -mtime +7 -exec rm -rf {} \;
$ORACLE_HOME/bin/rman target / log=$logdir/archClean_$time.log <<EOF
run{
crosscheck archivelog all;
delete noprompt expired archivelog all;
DELETE noprompt ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';
}
exit
EOF
Schedule it with cron:
0 22 * * * /bin/bash /home/oracle/clean_arch.sh >/dev/null 2>&1
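As a complement to the script, RMAN on the standby can be told to treat archived logs as deletable only once they have been applied; this is a one-time configuration (standard RMAN command):
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;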
2.5 Handling insufficient disk space on the standby
When the primary creates new data files but the standby has no free space, first add disk capacity on the standby, then switch the standby to manual file management and create the corresponding data files by hand.
1. Create a new tablespace test on the primary, adding data files 7 and 8
SQL> select file#,name from v$datafile;
FILE# NAME
---------- --------------------------------------------------
1 +ORADATA/orcl/datafile/system.273.971254283
2 +ORADATA/orcl/datafile/sysaux.272.971254303
3 +ORADATA/orcl/datafile/undotbs1.271.971254323
4 +ORADATA/orcl/datafile/undotbs2.269.971254379
5 +ORADATA/orcl/datafile/users.268.971254383
6 +ORADATA/orcl/datafile/admin.260.982939699
7 +ORADATA/orcl/datafile/test.259.983006823
8 +ORADATA/orcl/datafile/test.276.983007417
2. Query the data files on the standby
SQL> select file#,name from v$datafile;
FILE# NAME
---------- --------------------------------------------------
1 /u01/app/oracle/oradata/orcl/datafile/system.273.971254283
2 /u01/app/oracle/oradata/orcl/datafile/sysaux.272.971254303
3 /u01/app/oracle/oradata/orcl/datafile/undotbs1.271.971254323
4 /u01/app/oracle/oradata/orcl/datafile/undotbs2.269.971254379
5 /u01/app/oracle/oradata/orcl/datafile/users.268.971254383
6 /u01/app/oracle/oradata/orcl/datafile/admin.260.982939699
7 /u01/app/oracle/product/11.2.0/db_1/dbs/UNNAMED00007
8 /u01/app/oracle/product/11.2.0/db_1/dbs/UNNAMED00008
Note that the new data files were never actually created on the standby; only UNNAMED placeholder files were registered.
3. Once the standby has enough space (a new disk has been mounted), create the data files manually to match the primary
-- File management defaults to automatic; switch it to manual
SQL> alter system set standby_file_management='MANUAL';
-- /u01 is full, so the data files can be created under /u02
SQL> alter database create datafile '/u01/app/oracle/product/11.2.0/db_1/dbs/UNNAMED00007' as '/u02/oradata/test_1.dbf';
SQL> alter database create datafile '/u01/app/oracle/product/11.2.0/db_1/dbs/UNNAMED00008' as '/u02/oradata/test_2.dbf';
4. Switch the standby's file management back to automatic
SQL> alter system set standby_file_management='AUTO';
5. Restart log apply on the standby
Because the standby ran out of space and could not create the new data files, the MRP process stopped applying redo; restart it manually:
SQL> alter database recover managed standby database using current logfile disconnect;
6. Check that data in the new tablespace is synchronized between primary and standby
Note: during synchronization the system first registers a placeholder file named like:
/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/UNNAMED000XX
If there is enough space, synchronization succeeds and the file is renamed to its proper name and path.
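A simple cross-check once apply has caught up (run on both databases; the counts should match and no UNNAMED placeholder file should remain on the standby):
SQL> select count(*) from v$datafile;
SQL> select name from v$datafile where name like '%UNNAMED%';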
2.6 Rebuilding the standby
If the standby is damaged, it can be dropped and rebuilt.
1. Drop the standby
This method removes only the standby's data files, log files, control files, and temp files.
SQL> shutdown immediate;
SQL> startup restrict mount;   -- only SYSDBA users can connect
SQL> select name from v$database;
NAME
---------
ORCLDB
SQL> drop database;
2. Start the standby to nomount again using its pfile
3. Re-create the standby from the primary with RMAN duplicate (see the sketch below)
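The rebuild simply repeats steps 11 and 12 of the deployment section; a condensed sketch using the same connection strings as before:
-- on the standby
SQL> startup nomount pfile='/u01/app/oracle/product/12.1.0/db_1/dbs/initstdorcl.ora';
-- on the primary
$ rman target sys/oracle@orcl auxiliary sys/oracle@stdorcl
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE DORECOVER NOFILENAMECHECK;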
3. Removing the Data Guard Configuration
3.1 Standby Database Alone
If the primary is completely unusable, the standby can be detached from the Data Guard configuration and become a standalone database.
1. Edit the standby's pfile
Remove the Data Guard parameters:
LOG_ARCHIVE_CONFIG
DB_FILE_NAME_CONVERT
LOG_FILE_NAME_CONVERT
LOG_ARCHIVE_DEST_STATE_2
STANDBY_FILE_MANAGEMENT
FAL_SERVER
FAL_CLIENT
2. Activate the standby and start it as a standalone database
SQL> ALTER DATABASE ACTIVATE PHYSICAL STANDBY DATABASE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP
3. Check the database status
SQL> select database_role,switchover_status,open_mode from v$database;
DATABASE_ROLE SWITCHOVER_STATUS OPEN_MODE
---------------- -------------------- --------------------
PRIMARY NOT ALLOWED READ WRITE
The database is no longer part of the Data Guard configuration.
4. Test reads and writes on the former standby
See MOS note 2074686.1.
3.2 Detaching the primary from Data Guard
When the primary no longer needs to keep the standby synchronized, remove the Data Guard initialization parameters so the primary is completely detached from the configuration.
1. Confirm whether the primary is in maximum performance mode
If it is not, switch it with:
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;
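For reference, the current protection mode can be confirmed with a standard query before and after the change:
SQL> select protection_mode, protection_level from v$database;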
2. Remove the Data Guard parameters from the (S)PFILE (an ALTER SYSTEM RESET sketch follows the list)
LOG_ARCHIVE_CONFIG
DB_FILE_NAME_CONVERT
LOG_FILE_NAME_CONVERT
LOG_ARCHIVE_DEST_n pointing to the Standby Database and valid for STANDBY_LOGFILES
LOG_ARCHIVE_DEST_STATE_n
DG_BROKER_START
DG_BROKER_CONFIG_FILE1
DG_BROKER_CONFIG_FILE2
STANDBY_ARCHIVE_DEST
STANDBY_FILE_MANAGEMENT
FAL_SERVER
FAL_CLIENT
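If the primary runs from an spfile, each parameter can also be cleared with ALTER SYSTEM RESET before the restart, for example (repeat for every parameter listed above):
SQL> alter system reset log_archive_dest_2 scope=spfile sid='*';
SQL> alter system reset log_archive_config scope=spfile sid='*';
SQL> alter system reset fal_server scope=spfile sid='*';
SQL> alter system reset fal_client scope=spfile sid='*';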
3. Drop all standby redo log groups on the primary
SQL> SELECT GROUP# FROM V$STANDBY_LOG;
SQL> alter database drop standby logfile group 5;   -- repeat for each group returned (groups 5-8 in this setup)
4. Remove the Data Guard Broker configuration files
SQL> show parameter DG_BROKER_CONFIG;
$ORACLE_HOME/dbs/dr1<ORACLE_SID>.dat and
$ORACLE_HOME/dbs/dr2<ORACLE_SID>.dat
See MOS note 733794.1.