How to Resize Oracle Redo Log Files
Author: eygle | [Please credit the source when reposting]
Link: https://www.eygle.com/archives/2009/09/oracle_redo_log.html
I was reworking the redo log files and resized them, setting each log file to 2G and giving them the .dbf extension.
Here is a rough record of the steps.
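In outline, the cycle for each log group looks like the sketch below (group numbers and paths are the ones from this example; adjust them for your own system):

```sql
-- 1. Find groups that are INACTIVE; a CURRENT or ACTIVE group cannot be dropped
SELECT group#, status, bytes/1024/1024 AS size_mb FROM v$log;

-- 2. Drop an inactive group and re-create it at the new size
--    (REUSE allows overwriting an existing file of the same name)
ALTER DATABASE DROP LOGFILE GROUP 3;
ALTER DATABASE ADD LOGFILE GROUP 3 ('/data1/dbstat/redo03.dbf') SIZE 2048M REUSE;

-- 3. Rotate away from the current group and checkpoint so it becomes
--    INACTIVE, then repeat for the remaining groups
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
```

The walkthrough below shows each of these steps, including the errors you hit when a group is still CURRENT or ACTIVE.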
The group currently in use cannot be dropped; an inactive group, however, can be dropped and replaced with a new, larger log file:
SQL> col member for a30
SQL> select * from v$logfile;
    GROUP# STATUS  TYPE    MEMBER                         IS_
---------- ------- ------- ------------------------------ ---
         1         ONLINE  /data1/dbstat/redo01.dbf       NO
         3 STALE   ONLINE  /data1/dbstat/redo3.log        NO
         2         ONLINE  /data1/dbstat/redo2.log        NO
SQL> alter database drop logfile group 2;
alter database drop logfile group 2
*
ERROR at line 1:
ORA-01623: log 2 is current log for instance dbstat (thread 1) - cannot drop
ORA-00312: online log 2 thread 1: '/data1/dbstat/redo2.log'
SQL> alter database drop logfile group 3;
Database altered.
SQL> alter database add logfile group 3 ('/data1/dbstat/redo03.dbf') size 2048M reuse;
Database altered.
SQL> select * from v$logfile;
    GROUP# STATUS  TYPE    MEMBER                         IS_
---------- ------- ------- ------------------------------ ---
         1         ONLINE  /data1/dbstat/redo01.dbf       NO
         3         ONLINE  /data1/dbstat/redo03.dbf       NO
         2         ONLINE  /data1/dbstat/redo2.log        NO
SQL> set linesize 120
SQL> select * from v$Log;
    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- -------------------
         1          1          0 2147483648          1 YES UNUSED                       0
         2          1         11   52428800          1 NO  CURRENT                 203206 2009-09-17 18:56:08
         3          1          0 2147483648          1 YES UNUSED                       0
Switch the log group; once the CURRENT group becomes INACTIVE, it can be dropped (an ACTIVE group still holds redo needed for crash recovery, hence the ORA-01624 below until a checkpoint completes):
SQL> alter system switch logfile;
System altered.
SQL> alter database drop logfile group 2;
alter database drop logfile group 2
*
ERROR at line 1:
ORA-01624: log 2 needed for crash recovery of instance dbstat (thread 1)
ORA-00312: online log 2 thread 1: '/data1/dbstat/redo2.log'
SQL> alter system switch logfile;
System altered.
SQL> select * from v$Log;
    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- -------------------
         1          1         12 2147483648          1 NO  ACTIVE                  203704 2009-09-17 19:04:23
         2          1         11   52428800          1 NO  ACTIVE                  203206 2009-09-17 18:56:08
         3          1         13 2147483648          1 NO  CURRENT                 203710 2009-09-17 19:04:38
SQL> alter system checkpoint;
System altered.
SQL> select * from v$Log;
    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- -------------------
         1          1         12 2147483648          1 NO  INACTIVE                203704 2009-09-17 19:04:23
         2          1         11   52428800          1 NO  INACTIVE                203206 2009-09-17 18:56:08
         3          1         13 2147483648          1 NO  CURRENT                 203710 2009-09-17 19:04:38
SQL> alter database drop logfile group 2;
Database altered.
SQL> alter database add logfile group 2 ('/data1/dbstat/redo02.dbf') size 2048M;
Database altered.
SQL> select * from v$log;
    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- -------------------
         1          1         12 2147483648          1 NO  INACTIVE                203704 2009-09-17 19:04:23
         2          1          0 2147483648          1 YES UNUSED                       0
         3          1         13 2147483648          1 NO  CURRENT                 203710 2009-09-17 19:04:38
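One caveat worth noting: DROP LOGFILE GROUP removes only the control-file entry, not the file on disk. A hedged sketch of the final check (the path is the one from this example):

```sql
-- Confirm no group still references the old 50M member
SELECT group#, member FROM v$logfile;

-- The dropped member no longer appears above, but the file itself
-- remains on disk and must be removed at the OS level, e.g.:
--   $ rm /data1/dbstat/redo2.log
```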
The database is now ready to handle large-scale logging operations.

-The End-
By eygle on 2009-09-22 09:18 | Comments (7) | HowTo
Do typical systems really need redo logs this large?
An ordinary database probably doesn't need logs this big.
I have also set our system's log files to 2G now, but the resulting problem is that archiving waits have gotten longer; also, since we use Streams replication, LogMiner (logmnr) performance has degraded.
I'm doing some bulk data reorganization work, which calls for larger logs.
The right log size depends on the specific environment.
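As a rough way to judge what fits a given environment, the switch rate can be checked from v$log_history (a common sketch; how many switches per hour counts as "too many" varies by site):

```sql
-- Log switches per hour over the last 24 hours; frequent switches
-- suggest the redo logs are undersized for the workload
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*) AS switches
  FROM v$log_history
 WHERE first_time > SYSDATE - 1
 GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
 ORDER BY 1;
```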
Very practical.
I usually enlarge the log files only when running imp.
The few systems I've looked at all had 4G redo logs ^|^