
Hands-on: TTS migration of a single 8 TB tablespace with 300,000 data objects

Original article by fanzhuozhuo, 2023-11-06


Source environment:
A two-node 19c RAC, patch level 19.13.0.0, in archivelog mode. The storage is running out: threshold alerts have already fired, and there is no spare capacity left for a tablespace this large. After discussion, the decision was to move the historical-data tablespace in this database, roughly 10 TB, into another database and then drop it at the source to release the space. In other words, archive the historical data.

qhactdb2:/home/oracle(qhactdb2)$cat /etc/redhat-release
red hat enterprise linux server release 7.6 (maipo)
qhactdb2:/home/oracle(qhactdb2)$sqlplus / as sysdba

sql*plus: release 19.0.0.0.0 - production on wed nov 1 13:45:06 2023
version 19.13.0.0.0

copyright (c) 1982, 2021, oracle.  all rights reserved.

connected to:
oracle database 19c enterprise edition release 19.0.0.0.0 - production
version 19.13.0.0.0

The tablespace is about 8.6 TB and holds roughly 280,000 tables, so the metadata volume is enormous.

tbs_name             type           total(gb)   usage(gb)    free(gb)  free pct %  extensible(gb)  max_size(gb)  used pct of max %  no_axf_num  axf_num
-------------------- ------------- ----------- ----------- ----------- ---------- --------------- ------------- ------------------ ----------- -------
tbs_old_data         permanent       8,610.000   4,861.615   3,748.385      56.46            .000     8,610.000              56.46         287       0

sql> archive log list;
database log mode              archive mode
automatic archival             enabled
archive destination            +archivedg
oldest online log sequence     82348
next log sequence to archive   82357
current log sequence           82357

sql> select count(*) from dba_tables where tablespace_name='tbs_old_data';

  count(*)
----------
    285482

sql> select count(*) from dba_indexes where tablespace_name='tbs_old_data';

  count(*)
----------
     46397

Target environment:
There is an existing 19c environment with plenty of storage, so we decided to archive into that database. Note that it does not run in archivelog mode. It is a CDB/PDB environment, so a dedicated PDB can be created to host the migrated historical tablespace.

-bash-4.2$ cat /etc/redhat-release
red hat enterprise linux server release 7.6 (maipo)
-bash-4.2$ sqlplus / as sysdba

sql*plus: release 19.0.0.0.0 - production on wed nov 1 13:48:39 2023
version 19.6.0.0.0

copyright (c) 1982, 2019, oracle.  all rights reserved.

connected to:
oracle database 19c enterprise edition release 19.0.0.0.0 - production
version 19.6.0.0.0

sql> show pdbs;

    con_id con_name                       open mode  restricted
---------- ------------------------------ ---------- ----------
.....
         2 pdb$seed                       read only  no
         8 tbmdbprd                       read write no
.....

sql> archive log list;
database log mode              no archive mode
automatic archival             disabled
archive destination            /u01/app/oracle/product/19.3.0/db_1/dbs/arch
oldest online log sequence     21446
current log sequence           21461

asmcmd> lsdg
state    type    rebal  sector  logical_sector  block       au   total_mb    free_mb   req_mir_free_mb  usable_file_mb  offline_disks  voting_files  name
mounted  normal  n         512             512   4096  1048576  266649600  183535609          91422720        46056444              0  n             hdd_data01/
mounted  normal  n         512             512   4096  4194304      30720      29740             10240            9750              0  y             ocr/

Environment constraints:
Network: the two databases sit in different regions. Once the network policy was opened, scp and asmcmd cp remote-copy tests showed only about 1 MB/s of bandwidth. At that rate, transferring the 10 TB tablespace uninterrupted would take on the order of a month; even a Data Pump export carrying only the data would need at least half a month. The direct link is simply too slow. Fortunately a jump host is available nearby; routing through it raises the throughput to about 20 MB/s, far faster than the direct path.
Disk: there is no spare local space, only the ~100 GB directory that holds the Oracle installation, nowhere near enough to stage the files. There is, however, a 2 TB shared disk that was never added to ASM and can be borrowed temporarily.
Because this is a historical-data tablespace with no workload on it, it can stay read only indefinitely. Weighing all of this, we chose the TTS approach, a middle ground between a purely physical and a purely logical migration.
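The bandwidth reasoning above can be put into numbers. A small helper (hypothetical, not part of the original post) estimates transfer time from size and rate; feeding it the tablespace's ~4,862 GB of actually used space shows why the direct 1 MB/s link was a non-starter and the 20 MB/s jump host was workable:

```shell
#!/bin/sh
# transfer_days SIZE_MB RATE_MB_PER_S
# back-of-envelope wall-clock estimate in whole days (integer arithmetic)
transfer_days() {
    size_mb=$1
    rate_mb_s=$2
    echo $(( size_mb / rate_mb_s / 86400 ))
}

# ~4,862 GB of used data over the direct 1 MB/s link:
transfer_days $((4862 * 1024)) 1     # prints 57 -- nearly two months
# the same volume through the 20 MB/s jump host:
transfer_days $((4862 * 1024)) 20    # prints 2  -- under three days
```

With compression on top, this lines up with the roughly 48 hours the transfer stage actually took.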
The whole migration process is described in detail below.

The main prerequisite is the tablespace self-containment check

sql> exec dbms_tts.transport_set_check('tbs_old_data',true);

pl/sql procedure successfully completed.

-- check the result; an empty result set means the tablespace is self-contained
sql> col violations for a70
sql> select * from transport_set_violations;

violations
----------------------------------------------------------------------
ora-39907: index uqry_lc_zhangjin.ind_t2 in tablespace tbs_old_data
points to table uqry_lc_zhangjin.t2 in tablespace tbs_dh_data.
ora-39921: default partition (table) tablespace tbs_dh_data for
tf_f_customer not contained in transportable set.
ora-39907: index lc_lipf.tmp_lipf_user in tablespace tbs_old_data
points to table lc_lipf.tmp_lipf_user in tablespace tmp_ts_vb_acct_01.

These are all easy to fix: just move the offending objects into the appropriate tablespaces. There are a few tricks to doing that efficiently, though.
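For illustration, fixes for the violations above might look like the following sketch (which side each object should end up on is a judgment call, and the owner of tf_f_customer is not shown in the check output, so a placeholder is used):

```
-- ora-39907: bring the index and its table into the same set --
-- either move the table in, or rebuild the index out:
alter table uqry_lc_zhangjin.t2 move tablespace tbs_old_data;
-- or: alter index uqry_lc_zhangjin.ind_t2 rebuild tablespace tbs_dh_data;

-- ora-39921: the partitioned table's default partition tablespace
-- lies outside the transport set; change the default attribute
-- (<owner> is a placeholder -- the check output does not show it):
alter table <owner>.tf_f_customer modify default attributes tablespace tbs_old_data;
```

Re-run dbms_tts.transport_set_check after each round until transport_set_violations comes back empty.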

Create a file system

qhactdb2:/home/grid(+asm2)$kfod disks=all status=true
--------------------------------------------------------------------------------
 disk          size header    path                               user     group
================================================================================
......
  16:       2097152 mb candidate /dev/asmdisk24                  grid     asmadmin
......
--------------------------------------------------------------------------------
oracle_sid oracle_home
================================================================================

There is one large unused disk.

[root@qhactdb2 ~]# pvcreate /dev/sdy
  physical volume "/dev/sdy" successfully created.
[root@qhactdb2 ~]# vgcreate vg_app /dev/sdy
  volume group "vg_app" successfully created
[root@qhactdb2 ~]# lvcreate -l 100%free -n lv_app vg_app
  logical volume "lv_app" created.
[root@qhactdb2 ~]# mkfs.xfs /dev/vg_app/lv_app
meta-data=/dev/vg_app/lv_app     isize=512    agcount=4, agsize=134217472 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=536869888, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=262143, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@qhactdb2 ~]# mkdir /acttest
[root@qhactdb2 ~]# mount /dev/mapper/vg_app-lv_app /acttest/
[root@qhactdb2 ~]# df -h
filesystem                 size  used avail use% mounted on
.......
/dev/mapper/vg_app-lv_app  2.0t   33m  2.0t   1% /acttest
[root@qhactdb2 ~]# cd /acttest/
[root@qhactdb2 acttest]# ls -ltr
total 0

Set the tablespace to read only

sql> alter tablespace tbs_old_data read only;

tablespace altered.

Generate the transportable tablespace set

qhactdb2:/dmp(qhactdb2)$nohup expdp \'/ as sysdba\' directory=dmp dumpfile=act_%u.dmp logfile=act.log transport_tablespaces=tbs_old_data cluster=n parallel=6 exclude=table_statistics,index_statistics &

This raises an error: ora-39047: jobs of type transportable cannot use multiple execution streams.
transport_tablespaces and parallel cannot be used together, so drop parallel:

qhactdb2:/dmp(qhactdb2)$nohup expdp \'/ as sysdba\' directory=dmp dumpfile=act.dmp logfile=act.log transport_tablespaces=tbs_old_data cluster=n exclude=table_statistics,index_statistics &

The full run looks like this:

qhactdb2:/dmp(qhactdb2)$cat act.log
;;;
export: release 19.0.0.0.0 - production on thu nov 2 19:29:23 2023
version 19.13.0.0.0

copyright (c) 1982, 2019, oracle and/or its affiliates.  all rights reserved.
;;;
connected to: oracle database 19c enterprise edition release 19.0.0.0.0 - production
starting "sys"."sys_export_transportable_01":  "/******** as sysdba" directory=dmp dumpfile=act.dmp logfile=act.log transport_tablespaces=tbs_old_data cluster=n exclude=table_statistics,index_statistics
processing object type transportable_export/plugts_blk
processing object type transportable_export/post_instance/plugts_blk
processing object type transportable_export/table
processing object type transportable_export/grant/owner_grant/object_grant
processing object type transportable_export/index/index
processing object type transportable_export/constraint/constraint
processing object type transportable_export/comment
processing object type transportable_export/constraint/ref_constraint
master table "sys"."sys_export_transportable_01" successfully loaded/unloaded
******************************************************************************
dump file set for sys.sys_export_transportable_01 is:
  /dmp/act.dmp
******************************************************************************
datafiles required for transportable tablespace tbs_old_data:
  +datadg1/qhactdb/datafile/tbs_old_data.359.1048698211
  +datadg1/qhactdb/datafile/tbs_old_data.362.1050870189
...................
  +datadg2/qhactdb/datafile/tbs_old_data.829.1120386801
  +datadg2/qhactdb/datafile/tbs_old_data.830.1120386865
  +datadg2/qhactdb/datafile/tbs_old_data.831.1120386925
job "sys"."sys_export_transportable_01" successfully completed at thu nov 2 21:17:22 2023 elapsed 0 01:47:57

With 300,000 objects, exporting the metadata alone took nearly two hours.

Stage the ASM data files to the file system

There are 287 data files in total.

sql> select file_name from dba_data_files where tablespace_name='tbs_old_data';

file_name
--------------------------------------------------------------------------------
+datadg2/qhactdb/datafile/tbs_old_data.829.1120386801
+datadg2/qhactdb/datafile/tbs_old_data.830.1120386865
+datadg2/qhactdb/datafile/tbs_old_data.831.1120386925
+datadg1/qhactdb/datafile/tbs_old_data.941.1123234871
.....

asmcmd> cp +datadg1/qhactdb/datafile/tbs_old_data.359.1048698211 /acttest/tbs_old_data_1.dbf
cp +datadg1/qhactdb/datafile/tbs_old_data.362.1050870189 /acttest/tbs_old_data_2.dbf
cp +datadg1/qhactdb/datafile/tbs_old_data.363.1050870415 /acttest/tbs_old_data_3.dbf
cp +datadg1/qhactdb/datafile/tbs_old_data.364.1050870523 /acttest/tbs_old_data_4.dbf
cp +datadg1/qhactdb/datafile/tbs_old_data.365.1050870527 /acttest/tbs_old_data_5.dbf
cp +datadg1/qhactdb/datafile/tbs_old_data.366.1050870531 /acttest/tbs_old_data_6.dbf
cp +datadg1/qhactdb/datafile/tbs_old_data.367.1050870533 /acttest/tbs_old_data_7.dbf
cp +datadg1/qhactdb/datafile/tbs_old_data.368.1050870537 /acttest/tbs_old_data_8.dbf
.......
asmcmd> copying +datadg1/qhactdb/datafile/tbs_old_data.362.1050870189 -> /acttest/tbs_old_data_2.dbf
asmcmd> copying +datadg1/qhactdb/datafile/tbs_old_data.363.1050870415 -> /acttest/tbs_old_data_3.dbf
asmcmd> copying +datadg1/qhactdb/datafile/tbs_old_data.364.1050870523 -> /acttest/tbs_old_data_4.dbf
asmcmd> copying +datadg1/qhactdb/datafile/tbs_old_data.365.1050870527 -> /acttest/tbs_old_data_5.dbf
asmcmd> copying +datadg1/qhactdb/datafile/tbs_old_data.366.1050870531 -> /acttest/tbs_old_data_6.dbf
asmcmd> copying +datadg1/qhactdb/datafile/tbs_old_data.367.1050870533 -> /acttest/tbs_old_data_7.dbf
asmcmd> copying +datadg1/qhactdb/datafile/tbs_old_data.368.1050870537 -> /acttest/tbs_old_data_8.dbf
.......

Compress the data files

With the limited bandwidth, pushing ~10 TB of data files across the wire is painful. After some deliberation, the answer was to compress the files at the source before transferring them, which cuts the volume that actually has to cross the network.
Earlier testing showed gzip compresses and decompresses too slowly, so we used its parallel counterpart, pigz.
pigz (parallel implementation of gzip) is a parallel compression tool: it compresses and decompresses faster than gzip, at the cost of several times the CPU. It is a good fit wherever a short burst of high CPU usage can be tolerated.
Test-compress one file:

[root@qhactdb2 acttest]# pigz tbs_old_data_130.dbf
top - 21:35:59 up 315 days, 17:38,  9 users,  load average: 11.92, 10.84, 10.83
tasks: 3463 total,   6 running, 3457 sleeping,   0 stopped,   0 zombie
%cpu(s):  7.2 us,  2.0 sy,  0.0 ni, 89.7 id,  1.0 wa,  0.0 hi,  0.1 si,  0.0 st
kib mem : 79055136 total, 74306400 free, 51713510 used, 19910987 buff/cache
kib swap: 20971516 total, 17057532 free,  3913984 used. 24702427 avail mem

   pid user      pr  ni    virt    res    shr s  %cpu %mem      time command
 43881 root      20   0   11.3g  48588    700 d 552.3  0.0    1:37.41 pigz
158871 root      20   0   12.7g   3.2g  56856 s 145.1  0.4 284554:55 dsware_vbs
105001 root      20   0    4620    860    448 r  99.7  0.0  254:44.53 gzip
 34863 root      20   0    4620    860    448 d  99.0  0.0  189:46.59 gzip
 86044 root      20   0    4620    856    448 r  99.0  0.0  222:05.52 gzip
 72305 root      20   0    4620    848    448 r  98.7  0.0   54:06.82 gzip

It really is much faster than gzip, but its CPU consumption is several times higher.
Usage:

1. compress a single file
   pigz -k your_file_name          # -k keeps the original; you get your_file_name.gz in the current directory
2. decompress a single file
   unpigz -d your_file_name.gz     # likewise, add -k if you need to keep the .gz file
3. compress a directory
   tar -cvf - dir1 dir2 dir3 | pigz > output.tar.gz   # pigz only handles single files; combine with tar for directories
4. decompress a directory
   pigz -d output.tar.gz           # yields output.tar, which tar can then unpack
   tar -xzvf output.tar.gz         # or let tar do both steps at once

Common options

-0 ~ -9           compression level; higher compresses more but runs slower (default 6)
-k --keep         do not delete the original file after compression
-l --list         list the contents of the compressed input
-K --zip          compress to pkware zip (.zip) single entry format
-d --decompress   decompress the input
-p --processes n  use n cores; defaults to all cpu cores

Reference: https://www.jianshu.com/p/db81073b491c

Here the files were split across several directories and compressed in parallel with pigz:

[root@qhactdb2 3]# pigz *

Since this is a core production database, keep an eye on CPU utilization; if it climbs too high, use -p to cap the number of cores pigz may use.
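If pigz is not available on a host, a similar effect across many files can be approximated with stock gzip fanned out by xargs. A small self-contained sketch (the /tmp path and file names are illustrative only): gzip itself is single-threaded, so the speedup comes from running one process per file, which suits this case of hundreds of data files; -P plays the same CPU-capping role as pigz -p.

```shell
#!/bin/sh
# create a few throwaway "data files" to compress
demo=/tmp/pigz_demo
rm -rf "$demo" && mkdir -p "$demo" && cd "$demo"
for i in 1 2 3 4; do head -c 1048576 /dev/zero > "f$i.dbf"; done

# -n 1: one file per gzip invocation; -P 4: at most 4 processes in flight
ls *.dbf | xargs -n 1 -P 4 gzip
```

Afterwards each f*.dbf has been replaced by f*.dbf.gz. Unlike pigz, this cannot speed up a single huge file, only a batch of them.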

sftp transfer

This stage consumed most of the wall-clock time: copying out of ASM, compressing, transferring, decompressing, and copying back into ASM took about 48 hours altogether.
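After a multi-hop transfer like this it is worth verifying file integrity before plugging anything in — a step the post does not show. The idea, sketched with md5sum and throwaway paths (all names here are illustrative): record checksums at the source, ship the manifest with the files, and verify at the target.

```shell
#!/bin/sh
# throwaway directories standing in for the source staging area and the target
src=/tmp/tts_src; dst=/tmp/tts_dst
rm -rf "$src" "$dst" && mkdir -p "$src" "$dst"
echo "datafile contents" > "$src/tbs_old_data_1.dbf.gz"

# source side: record a checksum for every compressed datafile
( cd "$src" && md5sum *.gz > manifest.md5 )

# "transfer" (stands in for sftp), then verify on the target side
cp "$src"/* "$dst"/
( cd "$dst" && md5sum -c manifest.md5 )    # prints: tbs_old_data_1.dbf.gz: OK
```

A corrupted or truncated file fails the check with "FAILED" and a non-zero exit status, which is far cheaper to discover here than during the impdp plug-in.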

Create the PDB

sql> create pluggable database oldactdb admin user pdbadmin identified by xxxx;

pluggable database created.

sql> show pdbs;

    con_id con_name                       open mode  restricted
---------- ------------------------------ ---------- ----------
......
        11 oldactdb                       mounted
......

sql> alter pluggable database oldactdb open;

pluggable database altered.

Gather statistics first; otherwise the later metadata impdp is painfully slow.

set serveroutput on
set timing on
exec dbms_stats.set_param('degree','64');
select dbms_stats.get_param('degree') from dual;
exec dbms_stats.gather_database_stats;
exec dbms_stats.gather_dictionary_stats;
exec dbms_stats.gather_fixed_objects_stats;
exec dbms_stats.set_param('degree','0');
commit;

Upload the data files and metadata dump via sftp

First decompress the transferred files

[root@zdb034 acttest]# cd 4/
[root@zdb034 4]# unpigz -d *

Fix ownership and permissions

[root@zdb034 dsgarchive]# chown -R oracle:oinstall acttest/
[root@zdb034 dsgarchive]# chmod -R 775 acttest/

Copy from the file system into ASM

asmcmd> cp /dsgarchive/acttest/1/tbs_old_data_1.dbf +hdd_data01/zhjqdb/091038c4d2a63642e0632204bc87ea64/datafile/
cp /dsgarchive/acttest/1/tbs_old_data_2.dbf +hdd_data01/zhjqdb/091038c4d2a63642e0632204bc87ea64/datafile/
cp /dsgarchive/acttest/1/tbs_old_data_3.dbf +hdd_data01/zhjqdb/091038c4d2a63642e0632204bc87ea64/datafile/
cp /dsgarchive/acttest/1/tbs_old_data_4.dbf +hdd_data01/zhjqdb/091038c4d2a63642e0632204bc87ea64/datafile/
cp /dsgarchive/acttest/1/tbs_old_data_5.dbf +hdd_data01/zhjqdb/091038c4d2a63642e0632204bc87ea64/datafile/
.......

Create the required database link and directory

create public database link to_act connect to system identified by "xxxx"
  using '(description =(address_list =(address =(protocol = tcp)(host = xx.x.x.x)(port = 1688)))(connect_data =(service_name = xxaxxdb)))';

create directory impdp as '/home/oracle/enmo';
grant read,write on directory impdp to public;

Create the users on the target and grant privileges

Run this script on the source to generate the statements, then execute the resulting CREATE USER statements on the target.

set serveroutput on
set echo on
declare
   v_sql varchar2 (2000);
begin
   for c_username in (select name, password
                        from sys.user$
                       where name not in ('anonymous', 'apex_030200', 'apex_public_user', 'appqossys',
                                          'ctxsys', 'dbsnmp', 'dip', 'exfsys', 'flows_files', 'mddata',
                                          'mdsys', 'mgmt_view', 'olapsys', 'oracle_ocm', 'orddata',
                                          'ordplugins', 'ordsys', 'outln', 'owbsys', 'owbsys_audit',
                                          'si_informtn_schema', 'spatial_csw_admin_usr', 'spatial_wfs_admin_usr',
                                          'sys', 'sysman', 'system', 'wmsys', 'xdb', 'xs$null',
                                          'dmsys', 'tsmsys')
                         and type# = 1)
   loop
      v_sql := 'create user ' || c_username.name || ' identified by values '
               || chr(39) || c_username.password || chr(39) || ';';
      dbms_output.put_line (v_sql);
   end loop;
end;
/

create user uop_vb1 identified by values 'f4c240c968599350';
create user uqry_lc_zhangjin identified by values 'b5739f5cca86de9f';
create user md identified by values 'ffc9664ce4665ce6';
......
create user lc_wangj identified by values '471f79a2d0ccb210';
create user yd_mawx identified by values 'eb90d6c7a0aa2f61';

To avoid privilege errors during the metadata import, grant DBA to all of these users up front; the role is revoked again afterwards.

grant dba to uop_vb1;
......
grant dba to lc_wangj;
grant dba to yd_mawx;

Import public objects on the target

nohup impdp system/xxxx@oldact include=db_link:\"" in \(select db_link from dba_db_links where owner=\'public\'\)\"" include=profile include=role full=y directory=impdp network_link=to_act logfile=dblink.log &

The full run looks like this:

-bash-4.2$ cat dblink.log
;;;
import: release 19.0.0.0.0 - production on thu nov 2 15:23:11 2023
version 19.6.0.0.0

copyright (c) 1982, 2019, oracle and/or its affiliates.  all rights reserved.
;;;
connected to: oracle database 19c enterprise edition release 19.0.0.0.0 - production
flashback automatically enabled to preserve database integrity.
starting "system"."sys_import_full_01":  system/********@oldact include=db_link:" in \(select db_link from dba_db_links where owner=\'public\'\)" include=profile include=role full=y directory=impdp network_link=to_act logfile=dblink.log
estimate in progress using blocks method...
total estimation using blocks method: 0 kb
processing object type database_export/profile
......
processing object type database_export/role
processing object type database_export/schema/db_link
job "system"."sys_import_full_01" completed with 3 error(s) at thu nov 2 15:43:43 2023 elapsed 0 00:20:16

Import the user metadata on the target

-bash-4.2$ cat actpar.par
dumpfile=act.dmp
directory=impdp
metrics=yes
cluster=n
transport_datafiles='+hdd_data01/zhjqdb/091038c4d2a63642e0632204bc87ea64/datafile/tbs_old_data_1.dbf',
'+hdd_data01/zhjqdb/091038c4d2a63642e0632204bc87ea64/datafile/tbs_old_data_2.dbf',
'+hdd_data01/zhjqdb/091038c4d2a63642e0632204bc87ea64/datafile/tbs_old_data_3.dbf',
'+hdd_data01/zhjqdb/091038c4d2a63642e0632204bc87ea64/datafile/tbs_old_data_4.dbf',
..........
'+hdd_data01/zhjqdb/091038c4d2a63642e0632204bc87ea64/datafile/tbs_old_data_286.dbf',
'+hdd_data01/zhjqdb/091038c4d2a63642e0632204bc87ea64/datafile/tbs_old_data_287.dbf'

nohup impdp system/xxxx@oldact parfile=actpar.par &

The full run looks like this:

-bash-4.2$ impdp system/xxxx@oldact attach="system"."sys_import_transportable_01"

import: release 19.0.0.0.0 - production on fri nov 3 10:27:15 2023
version 19.6.0.0.0

copyright (c) 1982, 2019, oracle and/or its affiliates.  all rights reserved.

connected to: oracle database 19c enterprise edition release 19.0.0.0.0 - production

job: sys_import_transportable_01
  owner: system
  operation: import
  creator privs: true
  guid: 093721f92dbb7047e0632204bc873d40
  start time: friday, 03 november, 2023 10:19:25
  mode: transportable
  instance: zhjqdb1
  max parallelism: 1
  timezone: 00:00
  export timezone: 08:00
  timezone version: 32
  endianness: little
  nls character set: zhs16gbk
  nls nchar character set: al16utf16
  export job parameters:
  parameter name        parameter value:
     client_command        "/******** as sysdba" directory=dmp dumpfile=act.dmp logfile=act.log transport_tablespaces=tbs_old_data cluster=n exclude=table_statistics,index_statistics
     trace                 0
     tts_full_check        0
  import job parameters:
  parameter name        parameter value:
     client_command        system/********@oldact parfile=/home/oracle/enmo/actpar.par
     metrics               1
     tablespace_datafile   +hdd_data01/zhjqdb/091038c4d2a63642e0632204bc87ea64/datafile/tbs_old_data_1.dbf
     trace                 0
     tablespace_datafile   +hdd_data01/zhjqdb/091038c4d2a63642e0632204bc87ea64/datafile/tbs_old_data_2.dbf
     tablespace_datafile   +hdd_data01/zhjqdb/091038c4d2a63642e0632204bc87ea64/datafile/tbs_old_data_3.dbf
     tablespace_datafile   +hdd_data01/zhjqdb/091038c4d2a63642e0632204bc87ea64/datafile/tbs_old_data_4.dbf
....................................
     tablespace_datafile   +hdd_data01/zhjqdb/091038c4d2a63642e0632204bc87ea64/datafile/tbs_old_data_286.dbf
     tablespace_datafile   +hdd_data01/zhjqdb/091038c4d2a63642e0632204bc87ea64/datafile/tbs_old_data_287.dbf
  state: executing
  bytes processed: 0
  current parallelism: 1
  job error count: 0
  job heartbeat: 3
  dump file: /home/oracle/enmo/act.dmp

worker 1 status:
  instance id: 1
  instance name: zhjqdb1
  host name: zdb034
  object start time: friday, 03 november, 2023 10:27:35
  object status at: friday, 03 november, 2023 10:27:35
  process name: dw00
  state: executing
  object schema: yd_wangsd
  object name: wsd_b_accountdeposit_202006
  object type: transportable_export/table
  completed objects: 1,016
  worker parallelism: 1

-- a while later:
import> status

job: sys_import_transportable_01
  operation: import
  mode: transportable
  state: executing
  bytes processed: 0
  current parallelism: 1
  job error count: 0
  job heartbeat: 277
  dump file: /home/oracle/enmo/act.dmp

worker 1 status:
  instance id: 1
  instance name: zhjqdb1
  host name: zdb034
  object start time: friday, 03 november, 2023 15:17:35
  object status at: friday, 03 november, 2023 15:17:35
  process name: dw00
  state: executing
  object schema: old_ad
  object name: ca_res_log_d_0977_1_20200817
  object type: transportable_export/table
  completed objects: 18,257
  worker parallelism: 1

-- 26 hours later:
import> status

job: sys_import_transportable_01
  operation: import
  mode: transportable
  state: executing
  bytes processed: 0
  current parallelism: 1
  job error count: 0
  job heartbeat: 1521
  dump file: /home/oracle/enmo/act.dmp

worker 1 status:
  instance id: 1
  instance name: zhjqdb1
  host name: zdb034
  object start time: saturday, 04 november, 2023 13:01:13
  object status at: saturday, 04 november, 2023 13:01:13
  process name: dw00
  state: executing
  object schema: old_ad
  object name: ca_prom_dtl_0970_6_20200623
  object type: transportable_export/table
  completed objects: 192,066        -- about 190,000 objects imported by this point
  worker parallelism: 1

w-1 processing object type transportable_export/table
w-1      completed 285508 table objects in 145806 seconds
-- after 145,806 seconds, the table metadata import finally completed.

Slow as it was, the completed objects count kept climbing, which showed the import was in fact making progress. Early on we even chased a suspected impdp hang; after a long round of checks, everything looked normal apart from the wait event "wait for unread message on broadcast channel".

Testing confirms that parallel cannot be used on the TTS metadata import either; it fails with the same ORA-39047:

[oracle@11gasm ~]$ impdp \'/ as sysdba\' dumpfile=zhuo.dmp directory=data_pump_dir transport_datafiles='+datadg/orcl/datafile/o1_mf_zhuo_gxdcfr5s_.dbf' parallel=4

import: release 11.2.0.4.0 - production on thu nov 2 22:02:10 2023

copyright (c) 1982, 2011, oracle and/or its affiliates.  all rights reserved.

connected to: oracle database 11g enterprise edition release 11.2.0.4.0 - 64bit production
with the partitioning, automatic storage management, olap, data mining
and real application testing options
ora-39002: invalid operation
ora-39047: jobs of type transportable cannot use multiple execution streams.

As we will see later, Oracle only enhanced TTS metadata export/import to support parallelism from 20c onward.
impdp with network_link can import the metadata without staging a dump file, but then transport_datafiles and transport_tablespaces must be used together:

[oracle@11gasm ~]$ impdp \'/ as sysdba\' network_link=to_orcl directory=data_pump_dir transport_datafiles='+datadg/orcl/datafile/o1_mf_zhuo_gxdcfr5s_.dbf' transport_tablespaces=zhuo

import: release 11.2.0.4.0 - production on thu nov 2 22:07:02 2023

copyright (c) 1982, 2011, oracle and/or its affiliates.  all rights reserved.

connected to: oracle database 11g enterprise edition release 11.2.0.4.0 - 64bit production
with the partitioning, automatic storage management, olap, data mining
and real application testing options
starting "sys"."sys_import_transportable_01":  "/******** as sysdba" network_link=to_orcl directory=data_pump_dir transport_datafiles=+datadg/orcl/datafile/o1_mf_zhuo_gxdcfr5s_.dbf transport_tablespaces=zhuo
processing object type transportable_export/plugts_blk
processing object type transportable_export/table
processing object type transportable_export/table_statistics
processing object type transportable_export/post_instance/plugts_blk
job "sys"."sys_import_transportable_01" successfully completed at thu nov 2 22:07:14 2023 elapsed 0 00:00:12

[oracle@11gasm ~]$ impdp \'/ as sysdba\' network_link=to_orcl directory=data_pump_dir transport_datafiles='+datadg/orcl/datafile/o1_mf_zhuo_gxdcfr5s_.dbf'

import: release 11.2.0.4.0 - production on thu nov 2 22:06:37 2023

copyright (c) 1982, 2011, oracle and/or its affiliates.  all rights reserved.

connected to: oracle database 11g enterprise edition release 11.2.0.4.0 - 64bit production
with the partitioning, automatic storage management, olap, data mining
and real application testing options
ora-39004: invalid state
ora-39053: parameter or attribute transport_tablespaces must be defined for a transportable job

In 19c, the metadata import also brings in the object grants by default.

import> status

job: sys_import_transportable_01
  operation: import
  mode: transportable
  state: executing
  bytes processed: 0
  current parallelism: 1
  job error count: 0
  job heartbeat: 1072
  dump file: /home/oracle/enmo/act.dmp

worker 1 status:
  instance id: 1
  instance name: zhjqdb1
  host name: zdb034
  object start time: sunday, 05 november, 2023 21:36:33
  object status at: sunday, 05 november, 2023 21:36:33
  process name: dw00
  state: executing
  object schema: old_ad
  object name: alter
  object type: transportable_export/grant/owner_grant/object_grant
  completed objects: 2,256,000
  worker parallelism: 1

Count the grants on the source:

sql> select count(*) from dba_tab_privs;

  count(*)
----------
  11017811

That is indeed a lot of grants.
The complete metadata import log:

-bash-4.2$ cat actpar.log
;;;
import: release 19.0.0.0.0 - production on fri nov 3 10:19:22 2023
version 19.6.0.0.0

copyright (c) 1982, 2019, oracle and/or its affiliates.  all rights reserved.
;;;
connected to: oracle database 19c enterprise edition release 19.0.0.0.0 - production
;;; **************************************************************************
;;; parfile values:
;;;  parfile:  transport_datafiles=+hdd_data01/zhjqdb/091038c4d2a63642e0632204b
;;;  _parfile: c87ea64/datafile/tbs_old_data_1.dbf,
;;;  _parfile: +hdd_data01/zhjqdb/091038c4d2a63642e0632204bc87ea64/datafile/tbs
;;;  _parfile: _old_data_2.dbf,
;;;  _parfile: +hdd_data01/zhjqdb/091038c4d2a63642e0632204bc87ea64/datafile/tbs
;;;  _parfile: _old_data_3.dbf,
.........................................................
;;;  _parfile: _old_data_285.dbf,
;;;  _parfile: +hdd_data01/zhjqdb/091038c4d2a63642e0632204bc87ea64/datafile/tbs
;;;  _parfile: _old_data_286.dbf,
;;;  _parfile: +hdd_data01/zhjqdb/091038c4d2a63642e0632204bc87ea64/datafile/tbs
;;;  _parfile: _old_data_287.dbf
;;;  parfile:  directory=impdp
;;;  parfile:  cluster=n
;;;  parfile:  metrics=y
;;;  parfile:  logfile=actpar.log
;;;  parfile:  dumpfile=act.dmp
;;; **************************************************************************
w-1 startup took 0 seconds
w-1 master table "system"."sys_import_transportable_01" successfully loaded/unloaded
w-1 source time zone is 08:00 and target time zone is 00:00.
starting "system"."sys_import_transportable_01":  system/********@oldact parfile=/home/oracle/enmo/actpar.par
w-1 processing object type transportable_export/plugts_blk
w-1      completed 1 plugts_blk objects in 71 seconds
w-1 processing object type transportable_export/table
w-1      completed 285508 table objects in 145806 seconds
w-1 processing object type transportable_export/grant/owner_grant/object_grant
w-1      completed 2714737 object_grant objects in 79652 seconds
w-1 processing object type transportable_export/index/index
w-1      completed 4006 index objects in 637 seconds
w-1 processing object type transportable_export/constraint/constraint
w-1      completed 68398 constraint objects in 9014 seconds
w-1 processing object type transportable_export/comment
w-1      completed 2792 comment objects in 69 seconds
w-1 processing object type transportable_export/constraint/ref_constraint
w-1      completed 3 ref_constraint objects in 1 seconds
w-1 processing object type transportable_export/post_instance/plugts_blk
w-1      completed 1 plugts_blk objects in 11529 seconds
job "system"."sys_import_transportable_01" successfully completed at mon nov 6 07:02:09 2023 elapsed 2 20:42:45

The import alone took 2 days and 21 hours. Astonishing.
Add the roughly 2 days of file transfer before it, and the whole exercise lasted about 5 days. That really is a long time; fortunately the tablespace holds archived historical data and could stay offline throughout.

Set the tablespace back to read write

sql> alter tablespace tbs_old_data read write;

tablespace altered.

sql> select tablespace_name,status,contents from dba_tablespaces;

tablespace_name                status    contents
------------------------------ --------- ---------------------
system                         online    permanent
sysaux                         online    permanent
undotbs1                       online    undo
temp                           online    temporary
tbs_old_data                   online    permanent

Object comparison

Tablespace information:

mon nov 06                                                             page 1
                           tablespace usage status

tbs_name             type           total(gb)   usage(gb)    free(gb)  free pct %  extensible(gb)  max_size(gb)  used pct of max %  no_axf_num  axf_num
-------------------- ------------- ----------- ----------- ----------- ---------- --------------- ------------- ------------------ ----------- -------
tbs_old_data         permanent       8,610.000   4,782.176   3,827.824      55.54            .000     8,610.000              55.54         287       0
system               permanent          14.678       2.788      11.889      19.00          17.322        32.000               8.71           0       1
sysaux               permanent           2.188       2.074        .113      94.82          29.812        32.000               6.48           0       1
undotbs1             undo                1.011        .889        .122      87.93          30.989        32.000               2.78           0       1
temp                 temporary            .339        .000        .339        .00          31.661        32.000                .00           0       1

Object information:

sql> select owner,table_name from dba_tables where tablespace_name='tbs_old_data'
  2  minus
  3  select owner,table_name from dba_tables@to_act where tablespace_name='tbs_old_data';

mon nov 06                                                             page 1
                           tablespace usage status

owner                table_name
-------------------- --------------------
lc_zhaoge            temp_calfactor_error

sql> select owner,index_name from dba_indexes where tablespace_name='tbs_old_data'
  2  minus
  3  select owner,index_name from dba_indexes@to_act where tablespace_name='tbs_old_data';

mon nov 06                                                             page 1
                           tablespace usage status

owner                index_name
-------------------- ------------------------------
jd                   sys_c001806330
jd                   sys_c001832189
jd                   sys_c001848608
jd                   sys_il0005623531c00003$$
lc_lipf              sys_il0005623543c00003$$
lc_lipf              sys_il0005623569c00003$$
ogg                  sys_il0005623557c00001$$
ogg                  sys_il0005623560c00001$$
ogg                  sys_il0005623563c00001$$
ogg                  sys_il0005623566c00001$$
old_ad               sys_c001806081
old_ad               sys_c001813410
old_ad               sys_c001814737
old_ad               sys_c001840432
old_ad               sys_c001848530
old_ad               sys_il0005623534c00008$$
old_ad               sys_il0005623537c00003$$
old_ad               sys_il0005623540c00003$$
utoptea              sys_il0005623546c00112$$
utoptea              sys_il0005623549c00002$$
utoptea              sys_il0005623552c00007$$
utoptea              sys_il0005623552c00008$$

22 rows selected.

sql> @table_info
enter you table_name: temp_calfactor_error

----------------------------------------------------------------------------
| table information                                                        |
----------------------------------------------------------------------------
owner      tablename            tablespace name      analyzed             ini_trans  par temp bytes      pct_free   pct_used   num_rows
---------- -------------------- -------------------- -------------------- ---------- --- ---- ---------- ---------- ---------- ----------
lc_zhaoge  temp_calfactor_error tbs_old_data         2023-11-05 16:36:17           1 no  n    1m                 10          0

----------------------------------------------------------------------------
| object information                                                       |
----------------------------------------------------------------------------
 object_id owner      object_name                    object_typ status  created              lastddltime          status  temp
---------- ---------- ------------------------------ ---------- ------- -------------------- -------------------- ------- ----
   5604804 lc_zhaoge  temp_calfactor_error           table      valid   2023-11-05 00:13:46  2023-11-05 03:07:49  valid   n

The one extra table was created on the source only yesterday, after the export.
The extra indexes are all system-generated segments, the index segments belonging to LOB columns, which are regenerated with new names on the target.
So every object made it across.
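To confirm that the SYS_IL… names in the diff really are LOB index segments (and thus expected to differ across databases), a quick dictionary check along these lines can be run on either side — a sketch, not from the original post:

```
-- lob index segments are created automatically alongside their lobs,
-- so their system-generated sys_il... names never match between databases
select owner, index_name, table_name
  from dba_indexes
 where tablespace_name = 'TBS_OLD_DATA'
   and index_type = 'LOB';
```

Any SYS_IL… name from the minus query that shows up here with index_type LOB can safely be ignored in the comparison.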

Import system grants

nohup impdp system/xxxx@oldact metrics=yes directory=impdp network_link=to_act cluster=n logfile=act11.log full=y include=system_grant,role_grant &

Recompile invalid objects

exec utl_recomp.recomp_parallel(60);

Gather database and data dictionary statistics

set serveroutput on
set timing on
exec dbms_stats.set_param('degree','64');
select dbms_stats.get_param('degree') from dual;
exec dbms_stats.gather_database_stats;
exec dbms_stats.gather_dictionary_stats;
exec dbms_stats.gather_fixed_objects_stats;
exec dbms_stats.set_param('degree','0');
commit;

Revoke the DBA role

revoke dba from uop_vb1;
revoke dba from uqry_lc_zhangjin;
revoke dba from md;
revoke dba from pd;
revoke dba from sd;
revoke dba from bd;
revoke dba from ad;
revoke dba from zd;
.........

Reset the users' default and temporary tablespaces

alter user lc_qiyz default tablespace tbs_old_data;
alter user lc_zhangsm default tablespace tbs_old_data;
alter user lc_yanhl default tablespace tbs_old_data;
alter user lc_chenwj default tablespace tbs_old_data;
alter user lc_majl default tablespace tbs_old_data;
.............
alter user yd_guoyn default tablespace tbs_old_data;
alter user lc_minr default tablespace tbs_old_data;
alter user topteact default tablespace tbs_old_data;
alter user lc_lij default tablespace tbs_old_data;
alter user lc_dizw default tablespace tbs_old_data;
alter user lc_lix default tablespace tbs_old_data;

alter user uop_vb1 temporary tablespace temp;
alter user uqry_lc_zhangjin temporary tablespace temp;
.........
alter user lc_ruw temporary tablespace temp;
alter user ws_liqy temporary tablespace temp;
alter user lc_wangj temporary tablespace temp;
alter user yd_mawx temporary tablespace temp;

After five days, the migration was finally complete.

· Starting with 12.2, the parallel parameter was extended to cover metadata during expdp and impdp; with parallel>1, all metadata and data are unloaded in parallel.
· Starting with 20c, Data Pump improved parallel transportable tablespace metadata operations.
From Oracle Database 20c onward, Oracle Data Pump improves transportable tablespace metadata operations through parallelism.
Business benefit: Oracle Data Pump now supports parallel export and import of transportable tablespace (TTS) metadata — the information that associates the tablespace data files with the target database during a TTS migration. Parallelism improves TTS export and import performance, especially when the data files contain millions of database objects, including tables, indexes, partitions, and subpartitions.

The database here is 19c, so TTS metadata export/import cannot run in parallel; as shown above, combining the parameters raises ORA-39047.
For non-TTS jobs, however, Data Pump has supported parallel metadata operations since 12.2.

References:
https://www.modb.pro/db/24496
https://www.modb.pro/doc/2174
https://www.modb.pro/db/41495

Finally, a test of parallel TTS export/import support in a newer release.
Database version:

[oracle@db1 ~]$ sqlplus / as sysdba

sql*plus: release 23.0.0.0.0 - beta on thu nov 2 23:59:33 2023
version 23.1.0.0.0

copyright (c) 1982, 2022, oracle.  all rights reserved.

connected to:
oracle database 23c enterprise edition release 23.0.0.0.0 - beta
version 23.1.0.0.0

sql>

Using transport_tablespaces together with parallel for a parallel export:

[oracle@db1 enmo]$ expdp system/oracle@enmopdb1 directory=impdp dumpfile=users_%u.dmp logfile=users.log transport_tablespaces=users cluster=n parallel=4

export: release 23.0.0.0.0 - beta on thu nov 2 23:08:57 2023
version 23.1.0.0.0

copyright (c) 1982, 2022, oracle and/or its affiliates.  all rights reserved.

connected to: oracle database 23c enterprise edition release 23.0.0.0.0 - beta
starting "system"."sys_export_transportable_01":  system/********@enmopdb1 directory=impdp dumpfile=users_%u.dmp logfile=users.log transport_tablespaces=users cluster=n parallel=4
processing object type transportable_export/index/statistics/index_statistics
processing object type transportable_export/statistics/table_statistics
processing object type transportable_export/table
processing object type transportable_export/grant/owner_grant/object_grant
processing object type transportable_export/index/index
processing object type transportable_export/plugts_blk
processing object type transportable_export/constraint/ref_constraint
processing object type transportable_export/trigger
processing object type transportable_export/constraint/constraint
processing object type transportable_export/post_instance/plugts_blk
ora-39439: warning: cannot process blockchain tables in transportable jobs or over a network link
master table "system"."sys_export_transportable_01" successfully loaded/unloaded
******************************************************************************
dump file set for system.sys_export_transportable_01 is:
  /home/oracle/enmo/users_01.dmp
  /home/oracle/enmo/users_02.dmp
  /home/oracle/enmo/users_03.dmp
******************************************************************************
datafiles required for transportable tablespace users:
  +data/enmo/efee7aa6fc6f0ffce053a71614ac1a59/datafile/users.278.1123603769
job "system"."sys_export_transportable_01" completed with 1 error(s) at thu nov 2 23:09:34 2023 elapsed 0 00:00:35

-- checking the parallel workers during the run:
export> status

job: sys_export_transportable_01
  operation: export
  mode: transportable
  state: executing
  bytes processed: 0
  current parallelism: 4
  job error count: 0
  job heartbeat: 1
  dump file: /home/oracle/enmo/users_%u.dmp
  dump file: /home/oracle/enmo/users_01.dmp
    bytes written: 8,192
  dump file: /home/oracle/enmo/users_02.dmp
    bytes written: 12,288
  dump file: /home/oracle/enmo/users_03.dmp
    bytes written: 8,192

worker 1 status:
  instance id: 1
  instance name: enmo1
  host name: db1
  object start time: thursday, 02 november, 2023 23:06:39
  object status at: thursday, 02 november, 2023 23:06:40
  process name: dw00
  state: executing

worker 2 status:
  instance id: 1
  instance name: enmo1
  host name: db1
  object start time: thursday, 02 november, 2023 23:06:43
  object status at: thursday, 02 november, 2023 23:06:45
  process name: dw01
  state: executing

worker 3 status:
  instance id: 1
  instance name: enmo1
  host name: db1
  object start time: thursday, 02 november, 2023 23:06:40
  object status at: thursday, 02 november, 2023 23:06:41
  process name: dw02
  state: executing

worker 4 status:
  instance id: 1
  instance name: enmo1
  host name: db1
  object start time: thursday, 02 november, 2023 23:06:42
  object status at: thursday, 02 november, 2023 23:06:45
  process name: dw03
  state: executing
  object schema: enmo
  object name: t1
  object type: transportable_export/table
  completed objects: 25
  worker parallelism: 1

[oracle@db1 enmo]$ ls -ltr
total 2224
-rw-rw---- 1 oracle asmadmin 2224128 nov  2 23:09 users_02.dmp
-rw-rw---- 1 oracle asmadmin   28672 nov  2 23:09 users_01.dmp
-rw-rw---- 1 oracle asmadmin   20480 nov  2 23:09 users_03.dmp
-rw-rw-rw- 1 oracle asmadmin    1811 nov  2 23:09 users.log

23c indeed supports parallel TTS export and import.

Last modified: 2023-11-06 22:07:06
