
Backup & Recovery



Rajesh -- Thanks for the question regarding "Backup and Recovery -- why extra redo is generated.", version 7.3.2.1/8.0.4
Submitted on 5-Jun-2000 3:31 Eastern US time
Last updated 5-May-2006 14:48
You Asked: Why is excessive redo generated during an Online/Hot Backup?
and we said...
There is no excessive redo generated; there is additional information logged into
the online redo log during a hot backup the first time a block is modified in a
tablespace that is in hot backup mode.
In hot backup mode, only two things are different:
o the first time a block is changed in a datafile that is in hot backup mode, the ENTIRE
BLOCK is written to the redo log files, not just the changed bytes. Normally only the
changed bytes (a redo vector) are written. In hot backup mode, the entire block is logged
the FIRST TIME. This is because you can get into a situation where the process copying
the datafile and DBWR are working on the same block simultaneously. Let's say they are,
and the OS blocking read factor is 512 bytes (the OS reads 512 bytes from disk at a time).
The backup program goes to read an 8k Oracle block. The OS gives it 4k. Meanwhile,
DBWR has asked to rewrite this block. The OS schedules the DBWR write to occur right
now. The entire 8k block is rewritten. The backup program starts running again
(multi-tasking OS here) and reads the last 4k of the block. The backup program has now
gotten an impossible block -- the head and tail are from two points in time. We cannot
deal with that during recovery. Hence, we log the entire block image so that during
recovery, this block is totally rewritten from redo and is consistent with itself at
least. We can recover it from there.
o the datafile headers, which contain the SCN of the last completed checkpoint, are NOT
updated while a file is in hot backup mode. This lets the recovery process understand
which archived redo log files might be needed to fully recover this file.
To limit the effect of this additional logging, you should ensure you place only one
tablespace at a time in backup mode and bring the tablespace out of backup mode as soon
as you have backed it up. This will reduce the number of blocks that may have to be
logged to the minimum possible.
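A minimal sketch of that sequence, assuming a tablespace named USERS and an example
datafile path (adjust both to your environment):

alter tablespace users begin backup;

host cp /u01/oradata/users01.dbf /backup/users01.dbf

alter tablespace users end backup;

The OS copy runs between the two ALTER statements, so the window of full-block logging
lasts only as long as the copy itself.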
Reviews

Excellent December 20, 2001
Reviewer: vel from Moon
I like this site very much. It is very useful for me.
Thanks, Tom
October 14, 2002
Reviewer: Jayakumar from Dubai
Simply great ...

February 04, 2003
Reviewer: A reader
"In hot backup mode, the entire block is logged the FIRST TIME."
-- Can you please explain what you mean by FIRST TIME above?
Followup: The first time a block is modified after a hot backup starts for the tablespace it
belongs to, the full block image is logged.
Subsequent modifications to that block will go back to changes-only logging. Until the next hot
backup, of course.
February 04, 2003
Reviewer: A reader
Excellent. Now I have a clear idea of what is happening inside at hot backup time.
Very thankful to you.
Reader February 16, 2003
Reviewer: A reader
During hot backup mode, a file is being backed up (unix copy):
1. the block being copied may be in the buffer pool and being changed
2. the block may be read from the disk
How does Oracle handle the redo in these cases? I am sure you have explained this in another
posting; I just couldn't locate it.

Followup: When you place a tablespace in hot backup mode, Oracle will log extra information for a block the
first time it is modified whilst the tablespace it belongs to is in hot backup mode.
Say tablespace X containing file 55 is put into hot backup mode.
You modify block 123 in file 55 -- this generates redo. Oracle will log the ENTIRE block image
instead of just changed bytes.
You commit.
Someone else modifies blocks 123 and 124 in file 55. Oracle will log just changed bytes for block
123 but a full block image copy for 124.

Confusing Term February 21, 2003
Reviewer: Amarjyoti Dewri from INDIA
Hi,
The line
-------------------
the datafile headers which contain the SCN of the last completed checkpoint
are NOT updated while a file is in hot backup mode. This lets the recovery
process understand what archive redo log files might be needed to fully recover
this file.
-------------------
is somewhat confusing. What I want to ask is: how does this let the recovery
process understand what archive redo log files might be needed to fully recover
this file? A similar question was asked by someone, and what I answered was:
"When you give alter tablespace begin backup command, checkpoint occurs for all datafiles in the
HOT BACKUP. the datafile header checkpoints are advanced to the SCN captured when the begin backup
command is issued. And this SCN goes in the backup file as well.
Alter tablespace end backup command creates a redo record containing the begin backup checkpoint
SCN. Also the datafile checkpoint is advanced to the database checkpoint.
So from this redo record and the initial SCN in the backup datafile, oracle knows which all redo
needs to be applied to the restored backup file to make it consistent.
Also my understanding is that when we give the alter tablespace end backup command all the redo
generated during the backup is applied to the datafile to advance the datafile SCN to database SCN.
The same has to be done to the backup file to bring it to consistent state, hence the header is
freezed so that oracle knows that redo generated between the SCN at begin backup and end backup
need to be applied. (More ever the blocks changed during the backup are rewritten from the redo)
Or in other words if the SCN were not freezed, they would also be copied in the backup datafile. So
how would oracle know if the datafile is inconsistent when its restored, as the SCN of the backup
datafile would match the SCN at the alter tablespace end backup command. And Oracle uses SCN
numbers to apply the redo."
Am i right or wrong??
Followup: This:

Also my understanding is that when we give the alter tablespace end backup
command all the redo generated during the backup is applied to the datafile to
advance the datafile SCN to database SCN.

is absolutely wrong -- have you ever had an end backup ask you for an archive? Nope, never will -
we constantly write to the datafiles during the hot backup. The SCN recorded in the header tells
us how far back in the redo stream one needs to go to recover this file.
Read the answer above again -- things like:
...
Lets say they are and the OS blocking read factor is
512bytes (the OS reads 512 bytes from disk at a time). The backup program goes
to read an 8k Oracle block. The OS gives it 4k. Meanwhile -- DBWR has asked to
rewrite this block. the OS schedules the DBWR write to occur right now. The
entire 8k block is rewritten. The backup program starts running again
(multi-tasking OS here) and reads the last 4k of the block.
.....
should definitely make it clear that the datafiles are WRITTEN TO whilst they are being backed up
(else we would not have this redo issue at all -- the logging of the full block image)
Thanks that clears my doubts. February 21, 2003
Reviewer: Amarjyoti Dewri from INDIA
What a relief... :-)

hot backup and datafile writing May 29, 2003
Reviewer: gaurav from Ipswich, UK
Hello Tom,
As you mentioned, while the hot backup is running Oracle will keep on writing to the datafiles.
However, I am unable to understand this; my doubts are:
1. I started the hot backup of tbs X.
2. Someone is inserting a large number of records (say more than 10,000).
3. Where will Oracle write these records? Will it be both the redo logs and the data files? What is
the reason for writing in both places?
4. If someone has issued a query, from where will Oracle read the records?
Regards
Gaurav
Followup: Oracle writes them to the datafiles as normal.
everything happens "just as normal".
Oracle deals with fuzzy files all of the time, it understands what to do with a file upon recovery.
the only thing hot backup mode does really is to freeze the datafile header and generate a little
extra redo for a block the first time it is modified.
otherwise, everything happens "as normal"
some confusion September 22, 2003
Reviewer: Anurag from INDIA
Dear Tom,
Please clear my doubts. Why, during online backup, do we put a tablespace in backup mode? I wrote a
script to insert and delete a few thousand records in table 'X'. During execution I simply copied
the datafiles to a backup folder; then, after committing and shutting down the db, I corrupted the
relevant datafile. Then startup... error message... copied the dbf from backup and "recover
datafile 8"... auto archive applied... log applied... media recovery complete.
If backup and recovery work without it, why then begin backup and end backup?
Followup: You got lucky -- hope you don't do that for real.
Oracle works with database blocks in 2, 4, 8, 16, or 32k pieces. Say 8k is what you are using.
operating systems read and write files using their own chunk size. say 512bytes (1/2 k) for
example -- a common size.
So, you are copying a datafile -- you are using the OS to do that. You are reading it in 512byte
chunks.
You start reading block 55 in file 5 using "copy". You got 1/2 of the way through it (meaning you
read 4k of data). You get pre-empted and along comes DBWR who decides to lay down a new version of
that block on disk (checkpoint, flush dirty buffers). DBWR was lucky enough to write all 8k. Your
copy picks back up and reads the next 4k of the block -- but -- it is 4k of the block at a totally
different point in time.
This is known as a fractured block -- happens easily on a system with many people reading and
writing the same file at the same time.
Now, you restore this copy -- we have a block that is half at time T0 and half at time T1 -- we
cannot recover that block -- UNLESS we just happened to keep the full block image from the first
change after starting the backup elsewhere -- which is what we did. So, we are able to recover
this block using that image.
Note that if you use RMAN, this isn't an issue. RMAN is not affected by OS vs DB block sizes -- it
knows how to read an Oracle datafile safely, with RMAN backups, you don't put a tablespace into
backup mode, it is not necessary.
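For comparison, a minimal RMAN session might look like this (a sketch only; the connection and
tablespace name are examples) -- no begin/end backup anywhere:

$ rman target /
RMAN> backup tablespace users;
RMAN> backup archivelog all;

Because RMAN validates each Oracle block as it reads it (re-reading any block it happens to catch
mid-write), the backup mode machinery and its extra redo are simply not needed.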
hot backup .... October 11, 2003
Reviewer: Shankar from Houston, TX
Tom, if the OS block size and Oracle block size are the same, does Oracle still log the entire
block image when the tablespace is in backup mode? Thanks.
Followup: Yup.
Excellent November 26, 2003
Reviewer: Venkata Krishnan from India
Very nice questions and answers. Cleared most of my doubts regarding hot backup.
During hot backup November 26, 2003
Reviewer: Ramesh from Atlanta, GA USA
If the tablespace is in hot backup mode and the files are deleted while in backup mode, we can
still create a table and insert rows into that table.
Does that mean that the data is not written to the datafile while in backup mode?
Thanks
Ramesh
Followup: No, you've got it wrong. Files are continuously written to, in hot backup mode or not.
first, under unix, if a process has a file opened and you erase it elsewhere -- that process still
has a file!
second, the file need not be accessed right away. DBWR does the writes to the files in the
background.
third, did you think to try the test NOT in backup mode (which would "prove" that files are
apparently NEVER written to :)

Why fractured blocks only matter the first time? February 27, 2004
Reviewer: Doug from CT, USA
Tom - the first time a block is updated - is the update suspended/blocked until we can log a
complete block image so that the block is consistent? Why doesn't a fractured block matter the
second or third time it is updated. In other words, let's say that File 2, block 1 is updated but
the copy command hadn't gotten there yet. When it does get there, it is the 3rd or 4th time it's
been updated and it is fractured but we're not logging a completely consistent block image to redo
like we did say a minute ago. How come this doesn't matter? Thanks.
Followup: No, it is just that instead of logging into the redo log buffer "just enough to redo that change",
we log the block.
It doesn't matter on the second and third and nth times because of how recovery would proceed.
Let's say "block 5" in file "55" was backed up "fractured".
Further, we know the first time block 5 was modified AFTER the backup began we logged the modified
image of block 5 in the redo log buffer (and hence into the online redo log files upon commit and
hence into the archives upon log switch).
Now, say block 5 is updated 500 times -- we only log the minimum data to restore block 5 after that
initial write.
Now, we fail and need to restore this file 55. We go to the backups and we lay down this old copy
of file 55. We start processing the archives/online redo logs to catch this file up. We hit the
first change for block 5 after the backup was taken and get the entire block image for block 5.
So, even if block 5 was totally corrupt on disk -- fractured, binary zeros, whatever -- we have
block 5 as it should be after that first update. We lay down that entire block and then apply the
incremental updates from the arch/redo log files to catch it up.

Hot Backup May 13, 2004
Reviewer: Mahesh Kumar from Delhi, India
Hi Tom
This is really fantastic. It clears all doubts and queries regarding hot backup and Oracle's
internal behaviour of generating extra redo.
Keep giving drops from the ocean.
Thanks
Instance Crash during a Hot Backup May 13, 2004
Reviewer: Reader
Dear Tom
Please clear this 1 doubt. The Scenario is as follows
1. Database has two online redo logs of 10MB each(sequence 32,33) 33 is the current online log.
2. Database running in Archivelog Mode
3. Issued "Alter tablespace users begin backup".Tablespace users has 1 datafile , users.dbf. Now at
this moment a checkpoint takes place and the datafile headers are updated with an SCN. No more SCNs
will be recorded in the datafile header, even though the datafile will be written to by both
committed and uncommitted transactions.
4. A big transaction comes along (taking place on some table in the users.dbf datafile) and fills
up the online redo log. The transaction is not yet committed.
5. Log switch occurs and hence a checkpoint takes place; dbwr writes these uncommitted entries to
the datafile users.dbf.
6. The log file is archived by archiver process.(ie log number 33)
7. Now we switch to redo log 32 which will now be overwritten assuming it has already been
archived.
8. Now we have sequence 34 as the current online log. The transaction is still going along without
committing and fills up this log also.
9. Log switch occurs and hence a checkpoint takes place; dbwr writes these uncommitted entries to
the datafile users.dbf.
10. The log file is archived by archiver process.(ie log number 34)
11. If the system crashes now, then during the next startup our tablespace will still be in backup
mode. We will first have to issue "Alter Tablespace users end backup", and the database would
roll back the previously uncommitted transactions written to the datafile from the online redo logs.
But suppose if we skip point 11 and instead of a system/database crash we continue further as
follows
11. Now we have sequence 35 as the current online log. The transaction is still going along without
committing and fills up this log also.
12. So the redo log 33 is now being overwritten.
13. The system crashes now.
How will the recovery proceed the next time the database is started? Will the archived redo logs
become a part of the recovery, since we have the datafile with uncommitted transactions? If yes,
will this be a media recovery?
Followup: You have the first step 11 wrong; the database would roll back the uncommitted transactions
regardless.
At all points in time -- regardless of the backup mode of any tablespace -- we only need the
contents of the current online redo logs to perform instance recovery. Before we reuse an online
log, dbwr will have written the blocks protected by that log to their datafiles (including UNDO!)
-- we no longer need that log for instance recovery -- no matter the backup state.
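You can watch this from v$log: a group whose STATUS is ACTIVE is still needed for crash recovery
(its checkpoint has not yet completed), while INACTIVE groups can safely be reused. A quick look
(output will of course vary):

sys@ORA9IR2> select group#, sequence#, archived, status from v$log;

Only the CURRENT and ACTIVE groups matter for instance recovery, regardless of any tablespace's
backup mode.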
A Little Confused May 14, 2004
Reviewer: Reader
Dear Tom
-----------------------------
Before we reuse an online log, dbwr will have written the blocks
protected by that log to their datafiles (including UNDO!) -- we no longer need
that log for instance recovery -- no matter the backup state.
-------------------------------
So please correct me if I am wrong. From the Above Scenario, The recovery will take place as
follows :
1. Even though redo log number 33, which contains the start of our long transaction, has been
overwritten and is not available to us during the startup after the failure, the changes made by
this transaction (i.e. the before images) will have been recorded in the undo datafile when the
log switch happened, thus firing a checkpoint.
2. So these changes made by the transaction in the undo datafile will have some sort of associated
entry with them indicating that these particular changes are not committed.
3. Hence during the rollback phase of the crash recovery these changes will be undone.
Hope I am right?

Followup: 1) yes, the undo protected by the log file is safely on disk now
2) yes, as all uncommitted transactions do.
3) yes.
Some more doubts ... May 15, 2004
Reviewer: Reader from US
1. During a log switch, DBWR writes to the datafile (DF) and CKPT updates the DF headers and
control file. Until this is completed, a new logfile will not be allocated.
Yes or no?
2. A large transaction is going on.
At the checkpoint, all dirty blocks are written to the datafiles. The headers of all the
uncommitted blocks will still be pointing to an RBS.
a) at this point is the block still in the SGA, or flushed out to make room for new ones?
b) a new session starts and needs to read some rows from this block. Will it read from the SGA or
the DF?
c) I assume that it will go to the RBS and get an image from before this transaction.
3. Assuming this transaction modified more than 10% of the SGA, only the transaction table in the
RBS will be updated to indicate that the transaction is complete.
A lot of things happen and this RBS header is overwritten.
A new session wants to read some of the blocks which belonged to this transaction.
It sees that it is an uncommitted transaction.
Goes to the RBS.
Cannot find it.
Generates ORA-1555.
Now,
when will the block header be updated with a commit SCN, which process will do that, and what
will the SCN be? (I read this in your book but you have not mentioned this part.)
Thanks ,


Followup: 1) at a log switch, we fire a checkpoint. Until that checkpoint completes, we will not reuse the
logfile we switched out of.
It does not prevent us from going forward; it prevents us from reusing. If the act of going
forward involves reuse of a log file and the checkpoint that log file triggered is not complete,
we cannot go forward.
But normally, a log switch is a triggering event and waits for no one.
2) the dirty blocks are flushed to disk. it is impossible to say whether the new session will read
from the datafiles or the buffer cache. They will absolutely read from the buffer cache (no
doubt), the question is whether they have to perform physical IO to the rollback segments or the
data segment in order to load it into the buffer cache. that depends on whether the data is there
OR NOT.
it will read the data block to find out "oops, too new" and then read rbs data to roll it back in
general.
3) your timeline here is very unclear. it seems that the "new session wanting to read" would be
able to read everything and would not need anything from the rbs.
but if you are talking about block cleanout -- i did mention (and prove) that the reader of the
data will generate the cleaned out block (as the select generates REDO at that point)
one more clarification May 17, 2004
Reviewer: Reader from US
This has reference to my earlier question and your answer above regarding delayed block cleanout.
E.g. at 9 AM a large transaction happens and completes.
At 11:00 a query comes to access some of these rows. This fails because it is unable to find info
in the RBS.
At 11:30 another query comes -- this one will succeed.
Why?
From where will the earlier session (11:00 AM) get the commit SCN to update the headers of the
blocks, as there is no trace of the transaction table containing the commit SCN?
I hope I am able to express my doubt clearly.
Thanks,

Followup: At 11:00am -- why would the query even be looking in the RBS? The data in the data segment is what
it wants.
if you have my book, Expert One on One Oracle -- I cover this block cleanout stuff in detail with
examples. It tells you when you can get 1555's due to it, and why. takes a couple of pages and
examples to get the material across.
Continued ... May 17, 2004
Reviewer: Reader from US
This has reference to the above post.
I read your books (2) almost every day.
The confusion is --
Delayed block cleanout:
Since the transaction was big, only the RBS transaction table was marked as complete. All the
related block headers (ITLs) were left still pointing to the RBS.
Each has an SCN which is not current, i.e. from well before the big transaction began. Ideally it
should be the commit SCN from the big transaction.
My question is :
The query at 11:00 will see the ITL and go to the RBS, and since many commits have happened in
between, it will fail with ORA-1555.
How and when do these blocks get the commit SCN, and when will they stop pointing to the RBS?
Thanks ,

Followup: Those blocks get the commit SCN when someone else reads them and cleans them out. That is the very
act of the block cleanout (and why a select can generate redo -- redo for the cleanout).
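A rough way to observe this yourself (a sketch, assuming a table T large enough that not all of
its blocks are cleaned out at commit; results vary with cache size and version):

-- dirty many blocks and commit; some blocks may escape cleanout
update t set x = x+1;
commit;
-- note the session's 'redo size'
select b.value from v$statname a, v$mystat b
 where a.statistic# = b.statistic# and a.name = 'redo size';
-- a plain SELECT may perform the cleanout...
select count(*) from t;
-- ...in which case 'redo size' will have grown
select b.value from v$statname a, v$mystat b
 where a.statistic# = b.statistic# and a.name = 'redo size';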
last doubt ! May 17, 2004
Reviewer: Reader from US
Now that you have clarified most of my doubts, the last doubt is --
from where does this session get the commit SCN, since the actual commit SCN is gone?
Also, if this select puts the commit SCN there, then why can't it use the information in the blocks
instead of generating these errors?
Thanks Again,

Followup: It just needs to clean out the block -- get rid of the ITL entries.
With the block-cleanout ORA-1555, the problem is that it does not KNOW at that point in time
whether the block is "old enough"; that is the reason for the 1555.
see
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:895410916429
for example
"Backup and Recovery -- why extra redo is generated.", version Version 7.3.2.1/8.0.4 May 18, 2004
Reviewer: Zeeshan Zafar from UK
Useful working explanation.
Datafile and block SCN's -- confusion. July 19, 2004
Reviewer: A reader
Hi Tom,
From manuals I read -
" Oracle assigns every committed transaction a system change
number (SCN). Each commit will advance the SCN forward. Each time Oracle performs a checkpoint ,all
changed data is written to the disk. And each time there is a checkpoint, the thread checkpoint in
the control file is updated by Oracle. During this thread checkpoint, Oracle makes all the
datafiles and the control files consistent to the same SCN.
A consistent database means that the SCNs stored in all the data file headers are identical and
are also the same as the data file header information held in the control files."
The important thing is that the same SCN must appear in all the data files and control files.
Fine... but
I really can't understand how all the datafile headers will have the same SCNs!
1) Does that mean that the datafile headers have a different SCN stored than all the block header
SCNs? Because every commit will advance the SCN in a block header if a row has changed and been
committed there. So if a block has had rows committed (after modification) 10 times, then the SCN
of the block will be different from another block which has had one row changed and committed in it.
2) Do the control files have the latest SCNs of not only the blocks but also the datafiles, and
how are they maintained?
Appreciate if you could show with some examples.
Thanks lots,
Regards,
RD.
Followup: The SCN in the datafile header simply says "all of the blocks are good up to this SCN; you won't
need any redo that predates this SCN to recover this file".
The blocks may all have different SCNs, but the SCN in the datafile header tells us how far back
in the redo we would have to go to recover that file.
say a block in file 1 was modified at scn=100
at scn=200 a checkpoint happens.
that block is flushed from the cache to disk and the datafile is stamped with "200".
file 1 needs redo that is generated AFTER scn=200 in the event of recovery happening (say after a
system crash). the fact that the block was modified at scn=100 isn't meaningful here.
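You can look at both numbers directly (a sketch; the SCN values your database shows will of
course differ):

select file#, checkpoint_change# from v$datafile_header;
select checkpoint_change# from v$database;

The per-file number is the "how far back in redo" marker described above; individual blocks
inside the file carry their own, generally different, change information.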
Any indication that a tablespace in backup mode? February 07, 2005
Reviewer: Peter Tran from Houston, TX USA
Hi Tom,
Great explanations above. I would like to know if there are any catalog views or data dictionary
views that will let me know if a tablespace is in backup mode.
E.g.
alter tablespace USERS begin backup;
How do I find out, other than by looking at the alert log, that the USERS tablespace is currently
in backup mode?
If the database instance crashes or the database is shutdown while a tablespace is in backup mode,
is it still in backup mode when the database is started up?
Thanks,
-Peter
Followup:
sys@ORA9IR2> select * from v$backup
  2  /

FILE# STATUS CHANGE# TIME
---------- ---------- ---------- ---------
1 NOT ACTIVE 0
2 NOT ACTIVE 0
3 NOT ACTIVE 0
4 NOT ACTIVE 0
5 NOT ACTIVE 0
6 NOT ACTIVE 0
7 NOT ACTIVE 0
8 NOT ACTIVE 0
9 NOT ACTIVE 0
10 NOT ACTIVE 0
11 NOT ACTIVE 0
14 NOT ACTIVE 0

12 rows selected.

sys@ORA9IR2> alter tablespace users begin backup;

Tablespace altered.

sys@ORA9IR2> select * from v$backup;

FILE# STATUS CHANGE# TIME
---------- ---------- ---------- ---------
1 NOT ACTIVE 0
2 NOT ACTIVE 0
3 NOT ACTIVE 0
4 NOT ACTIVE 0
5 NOT ACTIVE 0
6 NOT ACTIVE 0
7 ACTIVE 8.2049E+12 08-FEB-05
8 NOT ACTIVE 0
9 NOT ACTIVE 0
10 NOT ACTIVE 0
11 NOT ACTIVE 0
14 NOT ACTIVE 0

12 rows selected.

sys@ORA9IR2> alter tablespace users end backup;

Tablespace altered.

sys@ORA9IR2> select * from v$backup;

FILE# STATUS CHANGE# TIME
---------- ---------- ---------- ---------
1 NOT ACTIVE 0
2 NOT ACTIVE 0
3 NOT ACTIVE 0
4 NOT ACTIVE 0
5 NOT ACTIVE 0
6 NOT ACTIVE 0
7 NOT ACTIVE 8.2049E+12 08-FEB-05
8 NOT ACTIVE 0
9 NOT ACTIVE 0
10 NOT ACTIVE 0
11 NOT ACTIVE 0
14 NOT ACTIVE 0

12 rows selected.
So, v$backup tells you file by file (since files can be in backup mode or not as well...)
sys@ORA9IR2> alter tablespace users begin backup;

Tablespace altered.

sys@ORA9IR2> shutdown abort;
ORACLE instance shut down.
as good as a 'crash'
sys@ORA9IR2> startup
ORACLE instance started.

Total System Global Area 252777660 bytes
Fixed Size 451772 bytes
Variable Size 201326592 bytes
Database Buffers 50331648 bytes
Redo Buffers 667648 bytes
Database mounted.
ORA-01113: file 7 needs media recovery
ORA-01110: data file 7: '/home/ora9ir2/oradata/ora9ir2/users01.dbf'



sys@ORA9IR2> alter database recover;

Database altered.

sys@ORA9IR2> alter database open;

Database altered.

sys@ORA9IR2> select * from v$backup;

FILE# STATUS CHANGE# TIME
---------- ---------- ---------- ---------
1 NOT ACTIVE 0
2 NOT ACTIVE 0
3 NOT ACTIVE 0
4 NOT ACTIVE 0
5 NOT ACTIVE 0
6 NOT ACTIVE 0
7 NOT ACTIVE 8.2049E+12 08-FEB-05
8 NOT ACTIVE 0
9 NOT ACTIVE 0
10 NOT ACTIVE 0
11 NOT ACTIVE 0
14 NOT ACTIVE 0

12 rows selected.

Awesome!!! February 08, 2005
Reviewer: Peter Tran from Houston, TX USA
As always, thank you very much for a clear example.
-Peter
Why do you recover? February 08, 2005
Reviewer: reader
Tom, in your demo above, I could do "alter database end backup" instead of "alter database
recover", right?
Followup: Try it, see what you see :)
but the correct response to:
ORA-01113: file 7 needs media recovery
ORA-01110: data file 7: '/home/ora9ir2/oradata/ora9ir2/users01.dbf'

is generally "recover"
When does a fragmented block get rebuilt if the server is not restarted March 07, 2005
Reviewer: Deepak Haldiya from FL, USA
Hi Tom,
I have the following query on Hot BackUp:
1. Assume that we have a fragmented block in the datafile because of a hot backup. After the alter
tablespace end backup, the database is still running. Now after some time (at this point, we no
longer have this block in the buffer cache), a user queries the data which belongs to this
fragmented block. What happens? How does Oracle provide the user correct data?
2. I read somewhere that a hot backup generates less redo if performed through RMAN and generates
more when performed manually.
The only explanation I can think of is that RMAN does not issue the begin backup statement, and it
checks the block header and block footer while copying the datafile in its special format. If a
mismatch is found, the block is re-read from the datafile. Is this correct?
Thanks

Followup: 1) the fragmented block ONLY exists in the backed-up file; it never exists on disk like that. The
fragmented block happens when your backup tool is reading a block at exactly the same moment as
dbwr is writing that block. dbwr will finish writing the block -- and the block will not be
fragmented on disk -- but the backed-up version could be fragmented, and that is what the
additional redo generated during a backup is used to correct.
2) rman, which understands it is backing up oracle database blocks and not just a file like your OS
copy tools, does not suffer from fractured blocks since if it sees one -- it simply re-reads the
block from disk. So the backup file will never have a fractured block in it.

RE: When does a fragmented block get rebuilt if the server is not restarted March 08, 2005
Reviewer: Deepak Haldiya from FL, USA
Hi Tom,
After thinking a little more about what I asked you, I figured that the database blocks in the
current datafiles will always be consistent. It is only the backup datafiles which may have
fragmented blocks while performing hot backups.
You may disregard my previous question.
Thanks
Deepak
Crystal!! March 08, 2005
Reviewer: Arul from INDIA
Excellent Tom!!
Does Online Backup have any impact on Data Buffer? May 09, 2005
Reviewer: Manoj from India
Dear Tom,
This forum on online backup is indeed very useful.
I would like to ask: does online backup have any impact on the data buffer cache?
If the data buffer cache is full while a hot backup is in progress, then what should be done?
Many thanks in advance!
Best regards,
Manoj
Followup: Nothing; the cache is used by SQL. rman reads data from disk. cp reads from disk. Whatever you
use will read from disk.
Some More Doubts..... September 15, 2005
Reviewer: Kishor Bhalwankar from India
Hi Tom,
Can you explain the following to me?
After begin backup, I updated one row and committed.
(This is the FIRST dirty block after begin backup.)
This will be copied into the redo.
After some time, I update the same block (same or another row) and commit.
It will record only the change vector in the redo.
Q: How does Oracle know that this block was copied previously into the redo? (Where does Oracle
store this information?)
OR
Will Oracle copy the full block into the redo again?
Thanks in advance.
Correct me if I am wrong.

Followup: There is a ton of bookkeeping information in every block header -- the same sort of information we
use for read consistency (to determine if a block is "old" enough for you to read).
The extra redo is generated for the first change to that block after a begin backup. After that, it
is logged as it would be outside of a begin backup.
little confusion October 12, 2005
Reviewer: A Reader from India
Let's have an example. Before starting the hot backup (begin backup):
SELECT T.NAME, H.FILE#, H.CHECKPOINT_CHANGE#
FROM V$TABLESPACE T, V$DATAFILE_HEADER H
WHERE T.TS#=H.TS#;
NAME FILE# CHECKPOINT_CHANGE#
------------------------------ ---------- ------------------
SYSTEM 1 121923
UNDOTBS 2 121923
INDX 3 121923
TOOLS 4 121923
USERS 5 121923
So, all tablespaces (and all data files) start off synchronised with the same checkpoint change
number.
Now we will put USERS tablespace in backup mode.
alter tablespace users begin backup;
Again fire the same query:
select t.name, h.file#, h.checkpoint_change#
from v$tablespace t, v$datafile_header h
where t.ts#=h.ts#;
NAME FILE# CHECKPOINT_CHANGE#
------------------------------ ---------- ------------------
SYSTEM 1 121923
UNDOTBS 2 121923
INDX 3 121923
TOOLS 4 121923
USERS 5 121925
We can notice here that the USERS tablespace is slightly ahead of the rest of the tablespaces.
Now suppose it took 1 hr to complete the backup of the USERS tablespace datafile, and in between
many transactions happened and checkpoints also occurred.
alter system switch logfile;
alter system switch logfile;
alter system switch logfile;
Now we will fire the same query again to check the checkpoint#:
select t.name, h.file#, h.checkpoint_change#
from v$tablespace t, v$datafile_header h
where t.ts#=h.ts#;
NAME FILE# CHECKPOINT_CHANGE#
------------------------------ ---------- ------------------
SYSTEM 1 121939
UNDOTBS 2 121939
INDX 3 121939
TOOLS 4 121939
USERS 5 121925
Here we can see that all tablespaces are now ahead of the USERS tablespace because during the hot
backup the USERS header was locked and did not get updated.
Now my confusion starts here:
When will Oracle bring the USERS datafile equal to the rest of the datafiles (in terms of
CHECKPOINT_CHANGE#)? Just after putting the tablespace out of backup mode (end backup), OR at the
time of recovery (when the db crashes and we restore it from backup and apply archive files)?
What mechanism will Oracle follow?
Please explain it; you can give examples also.
Thanks in advance for your support to the Oracle community.
Followup: After you take the tablespace out of backup mode, it is free to be "caught up" to the rest of the
database -- it would not wait for recovery to take place, because recovery might *never* take place.
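Continuing the example above, one way to watch the catch-up happen (a sketch; the header may
advance at the next checkpoint rather than at the very instant of end backup):

alter tablespace users end backup;
alter system checkpoint;
select t.name, h.file#, h.checkpoint_change#
  from v$tablespace t, v$datafile_header h
 where t.ts#=h.ts#;

USERS should no longer lag the other files.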
some clarification October 12, 2005
Reviewer: amit poddar from new haven, CT
Hi,
Are changes to the datafile header (i.e. during checkpoint, begin backup, end backup, etc.)
protected by redo?

Followup: There might be some -- but in general, the file header is used to figure out what redo might be
needed for recovery; it is not itself recovered from redo.
one more question October 12, 2005
Reviewer: amit poddar from new haven, CT
http://www.freelists.org/archives/oracle-l/10-2005/msg00375.html
Please take a look at the above link.
It says that increasing the SGA makes "begin backup" take longer since it takes longer to do a
tablespace checkpoint.
The question is: does begin backup result in a tablespace checkpoint before freezing the datafile
header?
If yes, why would a checkpoint be needed?
If no, why would increasing the size of the SGA result in a longer time to do "begin backup" for
a tablespace?
Followup: It does a checkpoint, yes.
It gets the file to a known point -- good for figuring out what redo you need to have included
with the backup in order to be able to restore it and make it consistent.
Little confusion October 12, 2005
Reviewer: A Reader from India
Hi Tom,
I got your point that after putting the tablespace out of backup mode (i.e. end backup), it will
catch up with the rest of the database datafiles.
NOW, suppose we are taking a backup of the USERS tablespace.
alter tablespace USERS begin backup;
During the hot backup many transactions are going on, and due to split blocks redo is generated in
excess; the redo log file gets full and a log switch happens. In this case the first log file gets
archived (archive log file) and control moves to the second log file. Suppose the backup takes
longer, and in between 2-3 cycles of log switches happen.
I mean to say that the redo information for the transactions goes to the archive log files and the
redo logfiles are overwritten.
Now what will happen when we take the USERS tablespace out of backup mode?
Alter tablespace USERS end backup;
As you said, the datafile automatically catches up with the database; so will it recover the
datafile (split blocks) from the archive log files?
Please explain how Oracle will handle it.
Thanks.

Followup: The same thing that would happen if NO redo were generated.
The datafile is constantly written to during a hot backup, checkpoints happen as normal, blocks are
read and written all of the time - only the data file header is treated special.
At the end of the backup, we fix up the data file header and continue business as normal.
The split (fractured) blocks would be in the BACKUP - not in the datafile. They happen because the
backup program (cp, copy, whatever) reads using a read size of "X" and we use a block size of "Y"
and "X" <> "Y". Fractured blocks would be in the backup set - not in the datafiles.
Nothing at all... October 12, 2005
Reviewer: Howard J. Rogers from Pheasants Nest, NSW, Australia
Nothing will happen when you say 'end backup'. What's there to have happen? The split blocks you're
talking about are in the *copy* of the data file you just made, not in the original file that the
database is using. The database therefore is suffering no problem, and if there's no problem,
there's no need to do anything.
If the original data file was now to be lost or damaged in some way, then you would restore the
backup copy, and *now* you would have a split blocks problem: the file that you restored contains
split blocks and is therefore internally inconsistent and unusable.
But the *archive logs* contain whole, pristine images of those blocks which, in the datafile copy,
have split. That is what putting the datafile into hot backup mode in the first place made happen.
So, as we perform database recovery, if we encounter a whole-image of a block in the redo stream,
we over-write the equivalent block image in the restored datafile. That then gives us a 'clean
slate' (i.e., no splits) for each individual block encountered, and a consistent block image which
can therefore be rolled forward to catch up with the time of the rest of the database.
So when you say 'end backup', nothing (much) happens. When you perform a recovery: yes, the
whole-block images in the archives are used to allow recovery to take place successfully.

Confusion cleared October 14, 2005
Reviewer: A Reader from India
Tom, thanks for clearing up split blocks for me.
Howard, thanks to you also for a very good explanation of this.

Proof! November 15, 2005
Reviewer: Richard from Buckinghamshire, UK
Not that I doubt your word - but where in the Oracle documentation is the proof of your statement,
"the first time a block is changed in a datafile that is in hot backup mode, the ENTIRE BLOCK is
written to the redo log files, not just the changed bytes"?

Followup: Not all details as minute as this are necessarily documented fully - however...
You could of course set up a test to confirm this (dumping redo streams - not much fun but it
would be convincing).
(talks about fractured blocks and that extra redo is generated)
http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14220/backrec.htm#sthref2329
(i believe this entry to be erroneous in part as it says "every time", not the first time - it is
only the first time)
http://download-east.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1012.htm#sthref360
Here is compelling evidence to that effect:
ops$tkyte@ORA9IR2> create table t ( x int );
Table created.
ops$tkyte@ORA9IR2> insert into t values ( 1 );
1 row created.
ops$tkyte@ORA9IR2> commit;
Commit complete.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace procedure p
2 authid current_user
3 as
4 l_value number;
5 begin
6 select b.value into l_value
7 from v$statname a, v$mystat b
8 where a.statistic# = b.statistic#
9 and a.name = 'redo size';
10
11 for i in 1 .. 100
12 loop
13 update t set x = x+1;
14 commit;
15 end loop;
16
17 select b.value-l_value into l_value
18 from v$statname a, v$mystat b
19 where a.statistic# = b.statistic#
20 and a.name = 'redo size';
21
22 dbms_output.put_line( 'redo size = ' || l_value );
23 end;
24 /
Procedure created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec p
redo size = 51544
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2> exec p
redo size = 51568
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2> exec p
redo size = 51568
PL/SQL procedure successfully completed.
so, about 50/51k of redo generated each time, put tablespace into backup mode:
ops$tkyte@ORA9IR2> alter tablespace users begin backup;
Tablespace altered.
ops$tkyte@ORA9IR2> exec p
redo size = 59784
PL/SQL procedure successfully completed.
increase of 8k the first time (my block size is 8k). But note that it is NOT 100*8k, just 1*8k -
first time...
Just to make sure it is not a funky thing with plsql and commit optimizations, we'll do it again
and again:
ops$tkyte@ORA9IR2> exec p
redo size = 51568
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2> exec p
redo size = 51600
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2> alter tablespace users end backup;
Tablespace altered.
and we have the same amount of redo as before - even though we are in backup mode

Marvellous!!! November 15, 2005
Reviewer: Richard from Buckinghamshire, UK
December 01, 2005
Reviewer: A reader
Hi Tom,
Excellent site... the least I can say. Please excuse me for asking a very basic question. In the
article (http://www.dizwell.com/oracle/articles/hot_backups.html), I can't understand the
following line (third para on the first page):
"For example, suppose my data file's checkpoint change number reads '1000', but there are actually
blocks of data in that file which are actually from time '900'."
Does this mean that each block also contains the SCN?
Thanks a lot for your useful feedback.
Santosh
Followup: Conceptually, not literally (unless you have row dependencies...). The information needed to
figure that out is there.
You can always ask HJR what he meant as well:
http://dizwell.com/forum
Hot backup/Rman Backup February 24, 2006
Reviewer: A reader
Tom,
Excellent site!
Does all of the above mean that "alter system archive log all" (or "current"), followed by a
backup of the archive logs, should always be the very last command in a hot backup script?
If I start by backing up archivelog files and then do the hot backup by switching the TS to backup
state and copying the OS file with an OS copy command, but do not back up the logs which Oracle
archived during the backup -- does this mean that I can't use my very last backup (not opened with
resetlogs)? Is such a backup really unusable?
Where in the script should I back up the controlfiles?
What is different if I use RMAN? Is the order of backup database / backup archivelog / current
controlfile important?
For me it is important to understand whether such a backup (the database without the archive logs
generated during the backup) can be used to restore the database to the time of the backup.
Thanks in advance
Olaf
Followup: In order to have a backup that is restorable to the point in time of the backup itself, you need
to have everything required to make it consistent.
So yes, it would end with a "switch logfile" (to get off of the current online redo log), an
"archive them all" and then a "backup these archives"
You now have
o the datafiles
o all archives needed to make these datafiles consistent as of THIS POINT IN TIME.
Every other archive you end up throwing into that backup set will just allow you to take that
backup further in time.
It is not that your backup is unusable, it is that "unless you get access to the archives generated
during your backup at the very least, your backup won't be restorable".
You want to understand? Practice makes perfect. Set up a test environment and play, play, play,
play.
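Put concretely, the tail of a user-managed hot backup script might look like this (a sketch; the
archive destination path is an example only):

alter tablespace users end backup;
alter system archive log current;
host cp /u01/arch/*.arc /backup/arch/

"archive log current" both switches out of the current online redo log and archives it, so every
archive needed to make the backup consistent now exists on disk to be copied.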
February 24, 2006
Reviewer: A reader
That means that if I did not back up the archivelogs generated during the hot backup and did not
save them together with the datafile backup (or copy them away from the server), and it was the
only backup copy -- there is no way to restore the database if my prod db blew up?
Followup: That is likely the case - yes.

Confusion May 04, 2006
Reviewer: A Reader from USA
Tom,
Please clarify the point why "we should not take a backup of the online redo logs in a hot backup."
Please give an example to clarify it. Why and how would it corrupt the database if we restored the
online redo logs from backup?
Thanks.
Followup: Tell me why you would?
It just does not make sense - to restore a hot backup you
a) restore the hot backed up file
b) apply ARCHIVES TO IT
and, if they exist, let the system roll forward through the online redo logs it finds on disk.
If you restore, from a backup, the online redo logs from 2 days ago - the ONLY thing you could
achieve is ----- losing the ability to recover the data that was in your current online redo logs.
confusion May 05, 2006
Reviewer: A Reader from USA
Hi Tom,
I think I couldn't explain my problem in the previous post. Now please let me explain it in more
detail.
Suppose we take a hot backup on 10-Mar-06 and at that time we copy the online redolog files also.
Now suppose we have to restore the full database on some other system (m/c) on 15-Mar-06.
Q1. Can we copy the full database (all datafiles, controlfiles and redolog files of 10-Mar-06) and
apply the archive log files from 10-Mar-06 to 15-Mar-06?
Q2. And if we cannot copy the online redolog files of 10-Mar-06, please explain why, with an
example.
Thanks.
Followup: You cannot copy the online redo logs - they are full of garbage - we use them in a circular
fashion - they are a big round buffer.
some of what you copy is in the archives.
YOU DO NOT copy online redo logs, they are not USEFUL.
you take a hot backup, you switch log files, you archivelog all, you backup archives.
that is what you do.
Very Useful info... October 23, 2006
Reviewer: KVR Kishore from INDIA
Excellent

