
Hands-on: RAC-RAC Data Guard Switchover and IP Swap

Original article by 徐sir, 2023-12-21

In the previous article, a new RAC was added as a standby to the existing RAC-to-single-instance environment.

In this one, the main work is to perform a primary/standby switchover of the RAC-to-RAC Data Guard, swap the IP addresses of the two clusters, and then restore the RAC-RAC-single-instance DG environment.

The main workflow is described in the sections that follow.

To avoid confusion between the old and new RAC, or between the primary and standby RAC, the operations below use db_unique_name to identify which database cluster each step is performed on.
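To confirm which cluster a session is on at any point, you can query it directly (a quick sketch):

sql> select db_unique_name, database_role from v$database;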

If you are unclear about the Data Guard parameters, see 强哥's article 《爆肝一万字终于把 Oracle Data Guard 核心参数搞明白了》.

1.1 Shut down instance 2 of the RAC primary

Before the switchover, shut down one node of the primary:

srvctl stop instance -d orcl -n rac2
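To confirm that only the rac1 instance is still running, a quick check (a sketch, assuming the database resource is named orcl):

srvctl status database -d orcl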

1.2 Check status on the RAC standby

sql> set pagesize 200
sql> select process, status, thread#, sequence#, block#, blocks from v$managed_standby ;
process   status          thread#  sequence#     block#     blocks
--------- ------------ ---------- ---------- ---------- ----------
arch      closing               2        103      63488       1393
arch      closing               1        122      65536        108
arch      connected             0          0          0          0
arch      closing               2        102      63488        681
mrp0      applying_log          2        104       1252     102400
rfs       idle                  0          0          0          0
rfs       idle                  0          0          0          0
rfs       idle                  0          0          0          0
rfs       idle                  0          0          0          0
rfs       idle                  1        123       8415          1
rfs       idle                  0          0          0          0
rfs       idle                  0          0          0          0
rfs       idle                  0          0          0          0
rfs       idle                  0          0          0          0
rfs       idle                  2        104       1254          1
rfs       idle                  0          0          0          0
rfs       idle                  0          0          0          0
17 rows selected.

1.3 Check the RAC primary for archive gaps

sql> select thread#, low_sequence#, high_sequence# from v$archive_gap;
no rows selected
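As an additional check, you can compare the highest applied sequence per thread against the primary's current sequence (a sketch; run this on the standby):

sql> select thread#, max(sequence#) from v$archived_log where applied='YES' group by thread#;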

1.4 Check switchover status on the RAC primary

A result of to standby or sessions active here both indicate the database is ready to switch over.

sql> select open_mode,protection_mode,database_role,switchover_status from v$database;
open_mode            protection_mode      database_role    switchover_status
-------------------- -------------------- ---------------- --------------------
read write           maximum performance  primary          to standby

2.1 Switch over and shut down the RAC primary

On node rac1 of the primary RAC (the one whose db_unique_name is primary), run the switchover command:

alter database commit to switchover to physical standby with session shutdown;

After execution, the RAC primary's alert log output:

wed dec 20 20:25:13 2023
alter database commit to switchover to physical standby with session shutdown
alter database commit to switchover to physical standby [process id: 17546] (orcl1)
waiting for all non-current orls to be archived...
all non-current orls have been archived.
waiting for all fal entries to be archived...
all fal entries have been archived.
waiting for potential physical standby switchover target to become synchronized...
active, synchronized physical standby switchover target has been identified
switchover end-of-redo log thread 1 sequence 124 has been fixed
switchover: primary highest seen scn set to 0x0.0x215734
arch: noswitch archival of thread 1, sequence 124
arch: end-of-redo branch archival of thread 1 sequence 124
arch: lgwr is actively archiving destination log_archive_dest_3
arch: lgwr is actively archiving destination log_archive_dest_2
arch: standby redo logfile selected for thread 1 sequence 124 for destination log_archive_dest_3
arch: standby redo logfile selected for thread 1 sequence 124 for destination log_archive_dest_2
archived log entry 512 added for thread 1 sequence 124 id 0x63df8a8b dest 1:
arch: archiving is disabled due to current logfile archival
primary will check for some target standby to have received alls redo
final check for a synchronized target standby. check will be made once.
log_archive_dest_2 is a potential physical standby switchover target
log_archive_dest_3 is a potential physical standby switchover target
active, synchronized target has been identified
target has also received all redo
backup controlfile written to trace file /u01/app/oracle/diag/rdbms/primary/orcl1/trace/orcl1_ora_17546.trc
clearing standby activation id 1675594379 (0x63df8a8b)
the primary database controlfile was created using the
'maxlogfiles 192' clause.
there is space for up to 188 standby redo logfiles
use the following sql commands on the standby database to create
standby redo logfiles that match the primary database:
alter database add standby logfile 'srl1.f' size 52428800;
alter database add standby logfile 'srl2.f' size 52428800;
alter database add standby logfile 'srl3.f' size 52428800;
alter database add standby logfile 'srl4.f' size 52428800;
alter database add standby logfile 'srl5.f' size 52428800;
archivelog for thread 1 sequence 124 required for standby recovery
switchover: primary controlfile converted to standby controlfile succesfully.
switchover: complete - database shutdown required
user (ospid: 17546): terminating the instance
instance terminated by user, pid = 17546
completed: alter database commit to switchover to physical standby with session shutdown
shutting down instance (abort)
license high water mark = 6
wed dec 20 20:25:18 2023
instance shutdown complete

Next, on the RAC standby:

---alert log
wed dec 20 20:25:15 2023
rfs[11]: assigned to rfs process 16067
rfs[11]: selected log 11 for thread 1 sequence 124 dbid 1613952925 branch 1086172194
wed dec 20 20:25:16 2023
archived log entry 29 added for thread 1 sequence 124 id 0x63df8a8b dest 1:
wed dec 20 20:25:16 2023
resetting standby activation id 1675594379 (0x63df8a8b)
media recovery end-of-redo indicator encountered
media recovery continuing
media recovery waiting for thread 1 sequence 125
wed dec 20 20:25:17 2023
rfs[12]: assigned to rfs process 16065
rfs[12]: possible network disconnect with primary database
wed dec 20 20:25:17 2023
rfs[1]: possible network disconnect with primary database
wed dec 20 20:25:17 2023
rfs[13]: assigned to rfs process 15826
rfs[13]: possible network disconnect with primary database
wed dec 20 20:25:17 2023
rfs[5]: possible network disconnect with primary database
wed dec 20 20:25:17 2023
rfs[2]: possible network disconnect with primary database
wed dec 20 20:25:17 2023
rfs[10]: possible network disconnect with primary database
---check archiving status
sql> archive log list;
database log mode              archive mode
automatic archival             enabled
archive destination             data
oldest online log sequence     123
next log sequence to archive   0
current log sequence           124

Then on the single-instance standby:

---alert log
wed dec 20 20:25:15 2023
rfs[15]: assigned to rfs process 29450
rfs[15]: selected log 11 for thread 1 sequence 124 dbid 1613952925 branch 1086172194
wed dec 20 20:25:16 2023
archived log entry 77 added for thread 1 sequence 124 id 0x63df8a8b dest 1:
wed dec 20 20:25:16 2023
resetting standby activation id 1675594379 (0x63df8a8b)
media recovery end-of-redo indicator encountered
media recovery continuing
media recovery waiting for thread 1 sequence 125
wed dec 20 20:25:17 2023
rfs[16]: assigned to rfs process 29447
rfs[16]: possible network disconnect with primary database
wed dec 20 20:25:17 2023
rfs[13]: possible network disconnect with primary database
wed dec 20 20:25:17 2023
rfs[12]: possible network disconnect with primary database
wed dec 20 20:25:17 2023
rfs[14]: possible network disconnect with primary database
wed dec 20 20:25:17 2023
rfs[10]: possible network disconnect with primary database
---check archiving status
sql> archive log list;
database log mode              archive mode
automatic archival             enabled
archive destination            /u01/app/oracle/oradata/orcldg/archivelog
oldest online log sequence     123
next log sequence to archive   0
current log sequence           124

2.2 Execute the switchover on the RAC standby

On node rac1 of the RAC standby (db_unique_name=orcl), run:

alter database commit to switchover to primary with session shutdown;
---alert log after execution
alter database commit to switchover to primary with session shutdown
alter database switchover to primary (orcl1)
maximum wait for role transition is 15 minutes.
switchover: media recovery is still active
role change: canceling mrp - no more redo to apply
wed dec 20 20:45:40 2023
mrp0: background media recovery cancelled with status 16037
errors in file /u01/app/oracle/diag/rdbms/orcl/orcl1/trace/orcl1_pr00_5121.trc:
ora-16037: user requested cancel of managed recovery operation
wed dec 20 20:45:40 2023
managed standby recovery not using real time apply
recovery interrupted!
wed dec 20 20:45:41 2023
mrp0: background media recovery process shutdown (orcl1)
role change: canceled mrp
all dispatchers and shared servers shutdown
close: killing server sessions.
close: all sessions shutdown successfully.
wed dec 20 20:45:42 2023
smon: disabling cache recovery
backup controlfile written to trace file /u01/app/oracle/diag/rdbms/orcl/orcl1/trace/orcl1_ora_27390.trc
switchover after complete recovery through change 2185012
online log  data/orcl/onlinelog/group_1.257.1156020473: thread 1 group 1 was previously cleared
online log  data/orcl/onlinelog/group_1.256.1156020475: thread 1 group 1 was previously cleared
online log  data/orcl/onlinelog/group_2.260.1156020477: thread 1 group 2 was previously cleared
online log  data/orcl/onlinelog/group_2.261.1156020479: thread 1 group 2 was previously cleared
online log  data/orcl/onlinelog/group_3.274.1156020481: thread 2 group 3 was previously cleared
online log  data/orcl/onlinelog/group_3.275.1156020483: thread 2 group 3 was previously cleared
online log  data/orcl/onlinelog/group_4.270.1156020485: thread 2 group 4 was previously cleared
online log  data/orcl/onlinelog/group_4.271.1156020487: thread 2 group 4 was previously cleared
standby became primary scn: 2185010
audit_trail initialization parameter is changed back to its original value as specified in the parameter file.
switchover: complete - database mounted as primary
completed: alter database commit to switchover to primary with session shutdown
--start the former standby RAC as the new primary
sql> shutdown immediate;
ora-01109: database not open
database dismounted.
sql> startup;
oracle instance started.
total system global area  801701888 bytes
fixed size                  2257520 bytes
variable size             339742096 bytes
database buffers          452984832 bytes
redo buffers                6717440 bytes
database mounted.
database opened.
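A quick confirmation that the role change took effect (a sketch):

sql> select db_unique_name, database_role, open_mode from v$database;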

Original hosts files:

----db_unique_name=primary
192.168.56.10   rac1
192.168.56.11   rac2
10.10.10.1      rac1-priv
10.10.10.2      rac2-priv
192.168.56.12   rac1-vip
192.168.56.13   rac2-vip
192.168.56.14   rac-scan
----db_unique_name=orcl
192.168.56.30   rac1
192.168.56.31   rac2
10.10.10.1      rac1-priv
10.10.10.2      rac2-priv
192.168.56.32   rac1-vip
192.168.56.33   rac2-vip
192.168.56.20   rac-scan

3.1 Stop CRS on both RAC clusters

On all four RAC hosts, run the following as root:

/g01/app/11.2.0/grid/bin/crsctl stop crs

Then disconnect the db_unique_name=primary RAC cluster from the network first, and change the IP addresses of the db_unique_name=orcl RAC.

3.2 Change the IP addresses of the new RAC primary

Now operate on the db_unique_name=orcl RAC.

First back up the hosts file:

cp /etc/hosts /etc/hosts.bak

Then edit the hosts file: add the entries below and comment out all of the existing RAC-related entries.

#########new-address##############
192.168.56.10   rac1
192.168.56.11   rac2
10.10.10.1      rac1-priv
10.10.10.2      rac2-priv
192.168.56.12   rac1-vip
192.168.56.13   rac2-vip
192.168.56.14   rac-scan

Then change the IP addresses of the two hosts from 192.168.56.30/31 to 192.168.56.10/11 (I will not describe this step; just update the NIC IP addresses).

Once done, connect to the new addresses 192.168.56.10/11 and start CRS on both nodes:

/g01/app/11.2.0/grid/bin/crsctl start crs
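Before checking the VIP/SCAN configuration, you can confirm the clusterware stack came up cleanly, for example:

/g01/app/11.2.0/grid/bin/crsctl check crs
/g01/app/11.2.0/grid/bin/crsctl stat res -t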

After startup, check: the VIPs have been updated automatically, but the SCAN has not and is still 192.168.56.20.

[root@rac1 ~]# /g01/app/11.2.0/grid/bin/srvctl config nodeapps -a
network exists: 1/192.168.56.0/255.255.255.0/eth0, type static
vip exists: /rac1-vip/192.168.56.12/192.168.56.0/255.255.255.0/eth0, hosting node rac1
vip exists: /rac2-vip/192.168.56.13/192.168.56.0/255.255.255.0/eth0, hosting node rac2
[root@rac1 ~]# /g01/app/11.2.0/grid/bin/srvctl config scan
scan name: rac-scan, network: 1/192.168.56.0/255.255.255.0/eth0
scan vip name: scan1, ip: /rac-scan/192.168.56.20

This step changes the SCAN address; the last command shows the result:

/g01/app/11.2.0/grid/bin/srvctl stop scan_listener
/g01/app/11.2.0/grid/bin/srvctl stop scan         
/g01/app/11.2.0/grid/bin/srvctl modify scan -n 192.168.56.14
/g01/app/11.2.0/grid/bin/srvctl start scan_listener
/g01/app/11.2.0/grid/bin/srvctl config scan  
--result
scan name: 192.168.56.14, network: 1/192.168.56.0/255.255.255.0/eth0
scan vip name: scan1, ip: /192.168.56.14/192.168.56.14

Check local_listener in both database instances. In my case the values were already correct, pointing to the new VIPs; if not, fix them as shown below.

---check result
sql> show parameter local
name                                 type        value
------------------------------------ ----------- ------------------------------
local_listener                       string       (address=(protocol=tcp)(host=
                                                 192.168.56.12)(port=1521))
log_archive_local_first              boolean     true
parallel_force_local                 boolean     false
sql> show parameter local
name                                 type        value
------------------------------------ ----------- ------------------------------
local_listener                       string       (address=(protocol=tcp)(host=
                                                 192.168.56.13)(port=1521))
log_archive_local_first              boolean     true
parallel_force_local                 boolean     false
---statements to fix it
alter system set local_listener='(description=(address_list=(address=(protocol=tcp)(host=192.168.56.12)(port=1521))))' scope=both sid='orcl1';
alter system set local_listener='(description=(address_list=(address=(protocol=tcp)(host=192.168.56.13)(port=1521))))' scope=both sid='orcl2';

After verifying, pick a host and test connections to the VIP and SCAN; if both work, the IP change has succeeded.
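For example (a minimal sketch; the system password is a placeholder, and the sqlplus test assumes EZConnect is enabled):

tnsping 192.168.56.14
sqlplus system/<password>@//192.168.56.14:1521/orcl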

Next, modify the tnsnames.ora file on this RAC; both nodes must be changed:


racdg =
  (description =
    (address = (protocol = tcp)(host = 192.168.56.11)(port = 1521))
    (address = (protocol = tcp)(host = 192.168.56.12)(port = 1521))
    (connect_data =
      (server = dedicated)
      (service_name = orcl)
    )
  )
primary =
  (description =
    (address = (protocol = tcp)(host = 192.168.56.32)(port = 1521))
    (address = (protocol = tcp)(host = 192.168.56.33)(port = 1521))
    (connect_data =
      (server = dedicated)
      (service_name = orcl)
    )
  )
orcldg =
  (description =
    (address = (protocol = tcp)(host = 192.168.56.99)(port = 1521))
    (connect_data =
      (server = dedicated)
      (service_name = orcl)
    )
  )

Modify tnsnames.ora on the single-instance standby (192.168.56.99), adding an entry for the new RAC primary's listeners:

orcl =
  (description =
    (address = (protocol = tcp)(host = 192.168.56.12)(port = 1521))
    (address = (protocol = tcp)(host = 192.168.56.13)(port = 1521))
    (connect_data =
      (server = dedicated)
      (service_name = orcl)
    )
  )

Update the single-instance DG parameters:

alter system set log_archive_config='dg_config=(orcl,orcldg)'  scope=both;
alter system set log_archive_dest_2='service=orcl  async noaffirm valid_for=(online_logfiles,primary_role) db_unique_name=orcl'  scope=both;
alter system set fal_client=orcldg scope=both;
alter system set fal_server=orcl scope=both;
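A quick way to confirm the new values took effect (a sketch):

sql> show parameter fal
sql> show parameter log_archive_config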

On the new RAC primary (db_unique_name=orcl), update the following parameters:

alter system set log_archive_config='dg_config=(orcl,orcldg,primary)'  scope=both;
alter system set log_archive_dest_3='service=orcldg  async noaffirm valid_for=(online_logfiles,primary_role) db_unique_name=orcldg'  scope=both;
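On the new primary you can then check that both destinations are healthy (a sketch):

sql> select dest_id, status, error from v$archive_dest_status where dest_id in (2,3);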

After this, the single-instance standby resumes receiving redo ("fetching gap sequence in thread 1, gap sequence 125-129" in the alert log) and log synchronization continues.

---alert log
wed dec 20 21:58:07 2023
rfs[17]: assigned to rfs process 30656
rfs[17]: selected log 11 for thread 1 sequence 130 dbid 1613952925 branch 1086172194
wed dec 20 21:58:07 2023
archived log entry 78 added for thread 1 sequence 130 id 0x645e171b dest 1:
wed dec 20 21:58:08 2023
fetching gap sequence in thread 1, gap sequence 125-129
wed dec 20 21:58:08 2023
rfs[18]: assigned to rfs process 30658
rfs[18]: selected log 21 for thread 2 sequence 108 dbid 1613952925 branch 1086172194
rfs[17]: opened log for thread 1 sequence 126 dbid 1613952925 branch 1086172194
archived log entry 79 added for thread 1 sequence 126 rlc 1086172194 id 0x645e171b dest 3:
wed dec 20 21:58:08 2023
archived log entry 80 added for thread 2 sequence 108 id 0x645e171b dest 1:
wed dec 20 21:58:08 2023
rfs[19]: assigned to rfs process 30662
rfs[19]: opened log for thread 1 sequence 127 dbid 1613952925 branch 1086172194
wed dec 20 21:58:08 2023
rfs[20]: assigned to rfs process 30660
rfs[20]: opened log for thread 1 sequence 125 dbid 1613952925 branch 1086172194
archived log entry 81 added for thread 1 sequence 125 rlc 1086172194 id 0x645e171b dest 3:
archived log entry 82 added for thread 1 sequence 127 rlc 1086172194 id 0x645e171b dest 3:
rfs[17]: opened log for thread 1 sequence 128 dbid 1613952925 branch 1086172194
archived log entry 83 added for thread 1 sequence 128 rlc 1086172194 id 0x645e171b dest 3:
rfs[20]: opened log for thread 1 sequence 129 dbid 1613952925 branch 1086172194
archived log entry 84 added for thread 1 sequence 129 rlc 1086172194 id 0x645e171b dest 3:
media recovery log /u01/app/oracle/oradata/orcldg/archivelog/1_125_1086172194.dbf
media recovery log /u01/app/oracle/oradata/orcldg/archivelog/1_126_1086172194.dbf
wed dec 20 21:58:09 2023
primary database is in maximum performance mode
rfs[21]: assigned to rfs process 30664
rfs[21]: selected log 11 for thread 1 sequence 132 dbid 1613952925 branch 1086172194
media recovery log /u01/app/oracle/oradata/orcldg/archivelog/1_127_1086172194.dbf
wed dec 20 21:58:10 2023
rfs[22]: assigned to rfs process 30666
rfs[22]: selected log 12 for thread 1 sequence 131 dbid 1613952925 branch 1086172194
media recovery log /u01/app/oracle/oradata/orcldg/archivelog/1_128_1086172194.dbf
wed dec 20 21:58:10 2023
archived log entry 85 added for thread 1 sequence 131 id 0x645e171b dest 1:
media recovery log /u01/app/oracle/oradata/orcldg/archivelog/1_129_1086172194.dbf
media recovery waiting for thread 2 sequence 106
fetching gap sequence in thread 2, gap sequence 106-107
rfs[22]: opened log for thread 2 sequence 106 dbid 1613952925 branch 1086172194
archived log entry 86 added for thread 2 sequence 106 rlc 1086172194 id 0x645e171b dest 3:
wed dec 20 21:58:11 2023
rfs[23]: assigned to rfs process 30668
rfs[23]: opened log for thread 2 sequence 107 dbid 1613952925 branch 1086172194
archived log entry 87 added for thread 2 sequence 107 rlc 1086172194 id 0x645e171b dest 3:
media recovery log /u01/app/oracle/oradata/orcldg/archivelog/2_106_1086172194.dbf
media recovery log /u01/app/oracle/oradata/orcldg/archivelog/2_107_1086172194.dbf
media recovery log /u01/app/oracle/oradata/orcldg/archivelog/2_108_1086172194.dbf
media recovery log /u01/app/oracle/oradata/orcldg/archivelog/1_130_1086172194.dbf
media recovery log /u01/app/oracle/oradata/orcldg/archivelog/1_131_1086172194.dbf
media recovery waiting for thread 2 sequence 109
wed dec 20 21:58:16 2023
primary database is in maximum performance mode
rfs[24]: assigned to rfs process 30680
rfs[24]: selected log 21 for thread 2 sequence 110 dbid 1613952925 branch 1086172194
wed dec 20 21:58:17 2023
rfs[25]: assigned to rfs process 30682
rfs[25]: selected log 22 for thread 2 sequence 109 dbid 1613952925 branch 1086172194
archived log entry 88 added for thread 2 sequence 109 id 0x645e171b dest 1:
media recovery log /u01/app/oracle/oradata/orcldg/archivelog/2_109_1086172194.dbf
wed dec 20 21:58:18 2023
media recovery waiting for thread 1 sequence 132 (in transit)
recovery of online redo log: thread 1 group 11 seq 132 reading mem 0
  mem# 0: /u01/app/oracle/fast_recover_area/orcldg/onlinelog/o1_mf_11_lr1wn405_.log
media recovery waiting for thread 2 sequence 110 (in transit)
recovery of online redo log: thread 2 group 21 seq 110 reading mem 0
  mem# 0: /u01/app/oracle/fast_recover_area/orcldg/onlinelog/o1_mf_21_lr1wn5v2_.log

Create a test table on the RAC primary:

create table t2 as select * from scott.emp;   -- count(*) returns 14 rows

Then check on the single-instance standby:

sql> select count(*) from t2;   -- returns 14 rows

This completes restoring replication to the single-instance standby.

Original hosts files (repeated here for reference):

----db_unique_name=primary
192.168.56.10   rac1
192.168.56.11   rac2
10.10.10.1      rac1-priv
10.10.10.2      rac2-priv
192.168.56.12   rac1-vip
192.168.56.13   rac2-vip
192.168.56.14   rac-scan
----db_unique_name=orcl
192.168.56.30   rac1
192.168.56.31   rac2
10.10.10.1      rac1-priv
10.10.10.2      rac2-priv
192.168.56.32   rac1-vip
192.168.56.33   rac2-vip
192.168.56.20   rac-scan

Now perform the same steps on the db_unique_name=primary RAC.

First back up the hosts file:

cp /etc/hosts /etc/hosts.bak

Then edit the hosts file: add the entries below and comment out all of the existing RAC-related entries.

#########new-address##############
192.168.56.30   rac1
192.168.56.31   rac2
10.10.10.1      rac1-priv
10.10.10.2      rac2-priv
192.168.56.32   rac1-vip
192.168.56.33   rac2-vip
192.168.56.20   rac-scan

Then change the IP addresses of the two hosts from 192.168.56.10/11 to 192.168.56.30/31 (again, just update the NIC IP addresses).

Once done, connect to the new addresses 192.168.56.30/31 and start CRS on both nodes:

/g01/app/11.2.0/grid/bin/crsctl start crs

After startup, check: the VIPs have been updated automatically, but the SCAN has not and is still 192.168.56.14.

[root@rac1 ~]# /g01/app/11.2.0/grid/bin/srvctl config nodeapps -a
network exists: 1/192.168.56.0/255.255.255.0/eth0, type static
vip exists: /rac1-vip/192.168.56.32/192.168.56.0/255.255.255.0/eth0, hosting node rac1
vip exists: /rac2-vip/192.168.56.33/192.168.56.0/255.255.255.0/eth0, hosting node rac2
[root@rac1 ~]# /g01/app/11.2.0/grid/bin/srvctl config scan
scan name: 192.168.56.14, network: 1/192.168.56.0/255.255.255.0/eth0
scan vip name: scan1, ip: /192.168.56.14/192.168.56.14

This step changes the SCAN address; the last command shows the result:

/g01/app/11.2.0/grid/bin/srvctl stop scan_listener
/g01/app/11.2.0/grid/bin/srvctl stop scan         
/g01/app/11.2.0/grid/bin/srvctl modify scan -n 192.168.56.20
/g01/app/11.2.0/grid/bin/srvctl start scan_listener
/g01/app/11.2.0/grid/bin/srvctl config scan  
--result
scan name: 192.168.56.20, network: 1/192.168.56.0/255.255.255.0/eth0
scan vip name: scan1, ip: /192.168.56.20/192.168.56.20

Start one instance:

srvctl start instance -d orcl -n rac1

Check local_listener in both database instances; mine were already correct, pointing to the new VIPs. If not, fix them:

alter system set local_listener='(description=(address_list=(address=(protocol=tcp)(host=192.168.56.32)(port=1521))))' scope=both sid='orcl1';
alter system set local_listener='(description=(address_list=(address=(protocol=tcp)(host=192.168.56.33)(port=1521))))' scope=both sid='orcl2';

6. Restore the RAC-RAC DG environment

Modify the tnsnames.ora file on both nodes of the db_unique_name=primary RAC:

primary =
  (description =
    (address = (protocol = tcp)(host = 192.168.56.32)(port = 1521))
    (address = (protocol = tcp)(host = 192.168.56.33)(port = 1521))
    (connect_data =
      (server = dedicated)
      (service_name = orcl)
    )
  )
racdg =
  (description =
    (address = (protocol = tcp)(host = 192.168.56.12)(port = 1521))
    (address = (protocol = tcp)(host = 192.168.56.13)(port = 1521))
    (connect_data =
      (server = dedicated)
      (service_name = orcl)
    )
  )

Restore the DG parameters on this RAC:

alter system set log_archive_config='dg_config=(orcl,primary)'  scope=both;
alter system set log_archive_dest_3='' scope=both;
alter system set log_archive_dest_2='service=racdg  async noaffirm valid_for=(online_logfiles,primary_role) db_unique_name=orcl'  scope=both;
alter system set fal_client=primary scope=both;
alter system set fal_server=racdg  scope=both;

Start ADG:

alter database recover managed standby database using current logfile disconnect from session;
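Redo apply can be monitored with the same query used earlier (a sketch):

sql> select process, status, thread#, sequence# from v$managed_standby;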

After starting, it turned out no redo was being received from db_unique_name=orcl, so on the db_unique_name=orcl RAC the archive destination was disabled and re-enabled:

alter system set log_archive_dest_2='' scope=both;
alter system set log_archive_dest_2='service=primary  async noaffirm valid_for=(online_logfiles,primary_role) db_unique_name=primary'  scope=both;

After checking again, communication was restored.
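From the new primary, one way to confirm redo shipping stays healthy (a sketch):

sql> select dest_id, status, error from v$archive_dest where dest_id = 2;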

7. Summary

At this point the entire replacement project is complete. Overall, this kind of replacement work requires care: do the preparation before operating and map out the workflow clearly.

The next article will cover a final task: replacing the storage used by the RAC standby.

That way the two standbys each have their own storage, which amounts to two separate copies of the data.

At the same time, the RAC standby no longer shares the production storage, so some queries can be offloaded to the RAC standby without consuming production storage resources.

You are also welcome to follow my WeChat public account 【徐sir的it之路】 and learn together!

————————————————————————————
WeChat public account: 徐sir的it之路
墨天轮: https://www.modb.pro/u/3605

————————————————————————————

