Notes on Migrating a Huge Table's Data and Creating Its Indexes
Author: eygle | [Please credit the source when reposting]
链接:https://www.eygle.com/archives/2009/09/imp_bigtable_record.html
To reduce the impact on the production environment, we migrated the big table's data to a test machine for processing, and then moved it back to the production database.
This greatly reduced the load and disruption on the production side. Below is a brief record of how long these migration steps took.
The following command was used to import the data of one partition:
nohup time imp sms/sms file=smsmg_p1.dmp fromuser=sms touser=sms buffer=500000000 commit=yes feedback=100000 indexes=no ignore=yes &
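For context, a dump file such as smsmg_p1.dmp holding a single partition would typically have come from a partition-level export on the source side. A sketch only: the partition name M01 is taken from the import log below, while the buffer size and statistics=none are assumptions, not from the post:

```shell
# Hypothetical partition-level export (table:partition syntax);
# buffer size and statistics=none are assumptions
nohup time exp sms/sms file=smsmg_p1.dmp tables=smsmg:M01 \
    buffer=500000000 statistics=none &
```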
The output below shows that, on a PC server with 2 CPUs (8 cores), importing these roughly 130 million rows took about 109 minutes:
Import: Release 10.2.0.2.0 - Production on Thu Sep 17 22:00:20 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Export file created by EXPORT:V10.02.01 via conventional path
Warning: the objects were exported by SMS, not by you
import done in ZHS16GBK character set and UTF8 NCHAR character set
export server uses AL16UTF16 NCHAR character set (possible ncharset conversion)
. importing SMS's objects into SMS
. . importing partition "SMSMG":"M01"
[... progress dots elided; feedback=100000 prints one dot per 100,000 rows imported ...]
135587487 rows imported
IMP-00057: Warning: Dump file may not contain data of all partitions of this table
Import terminated successfully with warnings.
real 108m54.959s
user 31m4.134s
sys 3m7.302s
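A quick sanity check on the throughput implied by these numbers: 135,587,487 rows in roughly 109 minutes works out to about 1.24 million rows per minute. The figures below are from the log above; the calculation itself is my addition:

```shell
# Back-of-the-envelope import throughput from the timing above
rows=135587487
minutes=109
awk -v r="$rows" -v m="$minutes" 'BEGIN { printf "%.0f rows/min\n", r/m }'
```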
Creating a local partitioned index on this table then took another 37 minutes. This is exactly the benefit of migrating the work off production: on the production database we could neither create and adjust indexes nor use parallelism, for fear of hurting production performance.
SQL> set timing on
SQL> create index idx_MDN on smsmg(MDN) local nologging parallel 4;
Index created.
Elapsed: 00:37:29.64
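One follow-up worth noting (my addition, not from the post): an index built with NOLOGGING skips redo generation during the build, and one built with PARALLEL 4 keeps that degree as its default, which can surprise the optimizer later. Before the segment goes back into normal service, it is common to flip both settings back. A sketch, assuming sqlplus is on the PATH:

```shell
# Hypothetical cleanup after the nologging parallel build:
# restore LOGGING and reset the index's default parallelism
sqlplus -s sms/sms <<'EOF'
alter index idx_MDN logging;
alter index idx_MDN noparallel;
EOF
```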
A summary of the processor information, for reference:
processor : 7
vendor_id : GenuineIntel
cpu family : 15
model : 2
model name : Intel(R) Xeon(TM) MP CPU 3.00GHz
stepping : 6
cpu MHz : 2990.724
cache size : 512 KB
physical id : 3
siblings : 2
core id : 3
cpu cores : 1
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 2
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic
bogomips : 5979.92
-The End-
By eygle on 2009-09-23 08:17 | Comments (8)
Master, can't you even create an index online on the production database? Afraid of affecting execution plans?
It is a telecom database; no operations are allowed on it without an approved request.
Why not use Data Pump?
Same question as above: wouldn't importing with Data Pump save a lot of time? Or is it because the production server runs 9i?
A non-technical question:
SMS means text messages. Does the telecom carrier actually keep the content of our text messages?
And keep it for ten thousand years?
Data Pump can't be used in every environment.
Text messages? They are probably kept for about a month.
I've tested it: it should be faster without commit=y. Haha.