
Sunday, May 15, 2011

The client wants to perform a DR test. Changes made during the DR test must not reflect on production (that is, data changed on the standby during the test must not propagate back to the primary).

Possibility 1: Planned Failover


Note: The primary database will be down until the DR test completes.

a. Take a cold/hot/RMAN backup of the primary database before the DR test.

b. Take a cold/hot/RMAN backup of the standby database before the DR test.

c. Shut down the primary database.

d. On the standby database, run:
sql > alter database activate standby database;

e. Once the standby database is activated, run:
sql > alter database open;

f. The standby database is now converted to a primary database.

g. Go for the DR test.
Post-failover (after the DR test):

a. Shut down the standby database (which was activated in step d).

b. Remove all datafiles, controlfiles, and redo log files (everything except the pfile/spfile), or drop the database on the standby.

c. Restore the backup taken in step b on the standby database.

d. After a successful restore, execute:
sql > startup nomount
sql > alter database mount standby database;
sql > alter database recover managed standby database disconnect from session;

e. Start the primary database. (This step can be done as required; you can also start it earlier.)

f. Do manual log switches:
sql > alter system switch logfile;

g. Check the synchronization with the standby database.
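
For step g, a simple way to check synchronization is to compare the highest sequence archived on the primary with the highest sequence applied on the standby (a basic sketch using the standard v$archived_log view; more thorough checks exist):

On the primary:
sql > select max(sequence#) from v$archived_log;

On the standby:
sql > select max(sequence#) from v$archived_log where applied = 'YES';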


Other possibilities will be posted shortly. Hope the above scenario helps. Feel free to comment or make suggestions. Cheers, Ayyappa Y
------------*****--****----------------------
Possibility 2: Using FLASHBACK on the Standby Database

Requirements

The following requirements need to be met in order to create a snapshot standby.

1. Database version should be 10.2.x or higher.
2. The primary and standby databases should be in archivelog mode.
3. The databases should be in sync before you proceed.
4. A Flash Recovery Area (FRA) is required on the standby database to use Flashback Database (a configuration sketch follows).
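
For requirement 4, a minimal FRA configuration on the standby could look like the following sketch (the size and path are illustrative assumptions; note that db_recovery_file_dest_size must be set before db_recovery_file_dest):

sql > alter system set db_recovery_file_dest_size=10G scope=both;
sql > alter system set db_recovery_file_dest='/u01/app/oracle/fra' scope=both;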


ON PRIMARY DATABASE:

a. Do manual log switches:
sql > ALTER SYSTEM ARCHIVE LOG CURRENT;

b. Check the synchronization with the standby database:
sql > select max(sequence#) from v$log_history;

c. Stop the shipping of archives from the primary by deferring the remote archive destination:
sql > alter system set log_archive_dest_state_2=defer;

On standby Database:

a. Cancel the recovery process:
sql > alter database recover managed standby database cancel;

b. Create a guaranteed restore point:
sql > CREATE RESTORE POINT ayyu_stdby GUARANTEE FLASHBACK DATABASE;
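
Before proceeding, you can verify that the restore point was created as guaranteed (a quick check against the standard v$restore_point view):

sql > select name, guarantee_flashback_database, time from v$restore_point;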

c. Activate the standby database:
sql > alter database activate standby database;

d. Open the standby database:
sql > alter database open;

Now you can use the DR box for testing purposes.

e. Once testing is completed, perform the following:

sql > shutdown immediate;
sql > startup mount;
sql > flashback database to restore point ayyu_stdby;
sql > ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
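
After the flashback and conversion succeed, the guaranteed restore point is no longer needed and can be dropped so that it stops holding flashback logs in the FRA (assuming the name from step b):

sql > drop restore point ayyu_stdby;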

Post DR test:

On Primary:

a. Enable the archive destination:
sql > alter system set log_archive_dest_state_2=enable;

On standby:

a. Restart the standby in mount mode if necessary (shutdown immediate, then startup mount, as shown in Possibility 3), and start the recovery process:
sql > alter database recover managed standby database disconnect from session;


============================================================

Possibility 3: Using the SNAPSHOT STANDBY feature in 11g (the text below is from ORACLE-BASE)

Snapshot standby allows the standby database to be opened in read-write mode. When switched back into standby mode, all changes made whilst in read-write mode are lost. This is achieved using flashback database, but the standby database does not need to have flashback database explicitly enabled to take advantage of this feature, though it works just the same if it is.
If you are using RAC, turn off all but one of the RAC instances. Make sure the instance is in MOUNT mode.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
Make sure managed recovery is disabled.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Convert the standby to a snapshot standby. The following example queries the V$DATABASE view to show that flashback database is not enabled prior to the conversion operation.
SELECT flashback_on FROM v$database;

FLASHBACK_ON
------------------
NO

ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
ALTER DATABASE OPEN;
SELECT flashback_on FROM v$database;

FLASHBACK_ON
------------------
RESTORE POINT ONLY

SQL>
You can now treat the standby like any read-write database.
To convert it back to the physical standby, losing all the changes made since the conversion to snapshot standby, issue the following commands.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
SHUTDOWN IMMEDIATE;
STARTUP NOMOUNT;
ALTER DATABASE MOUNT STANDBY DATABASE;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
SELECT flashback_on FROM v$database;

FLASHBACK_ON
------------------
NO

SQL>
The standby is once again in managed recovery and archivelog shipping is resumed. Notice that flashback database is still not enabled.
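
At any point you can confirm which mode the standby is in by checking its role (a quick check, not part of the original ORACLE-BASE text; DATABASE_ROLE shows SNAPSHOT STANDBY while converted and PHYSICAL STANDBY afterwards):

SELECT database_role, open_mode FROM v$database;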

============================================================

Wednesday, January 23, 2008

DR-Setup....

Architecture
Database Synchronization Options

Setup No-Data-Divergence

Setup Primary Database

Setup Standby Database

Start Managed Standby Recovery

Protect Primary Database

Cancel Managed Standby Recovery

Activating A Standby Database

Backup Standby Database


Database Switchover

Database Failover

Automatic Archive Gap Detection

Background Managed Recovery

Delayed Redo Application

Architecture

The Oracle9i Data Guard architecture incorporates the following items:

Primary Database - A production database that is used to create standby databases. The archive logs from the primary database are transferred and applied to standby databases. Each standby can only be associated with a single primary database, but a single primary database can be associated with multiple standby databases.
Standby Database - A replica of the primary database.

Log Transport Services - Control the automatic transfer of archive redo log files from the primary database to one or more standby destinations.

Network Configuration - The primary database is connected to one or more standby databases using Oracle Net.

Tuesday, January 22, 2008

LOG_ARCHIVE_DEST_n

Syntax:

LOG_ARCHIVE_DEST_[1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10] =
  "null_string" |
  ((SERVICE=service | LOCATION=location)
   [AFFIRM | NOAFFIRM]
   [ALTERNATE=destination | NOALTERNATE]
   [ARCH | LGWR]
   [DELAY[=minutes] | NODELAY]
   [DEPENDENCY=destination | NODEPENDENCY]
   [MANDATORY | OPTIONAL]
   [MAX_FAILURE=count | NOMAX_FAILURE]
   [QUOTA_SIZE=blocks | NOQUOTA_SIZE]
   [QUOTA_USED=blocks | NOQUOTA_USED]
   [REGISTER | NOREGISTER]
   [REGISTER=location_format]
   [REOPEN[=seconds] | NOREOPEN]
   [SYNC | ASYNC=blocks])

Default value: none.

Parameter class: dynamic (ALTER SESSION, ALTER SYSTEM).

Range of values: valid keyword definitions.

------------------------------------------------------------------------------
Note:
This parameter is valid only if you have installed Oracle Enterprise Edition. You may continue to use LOG_ARCHIVE_DEST if you have installed Oracle Enterprise Edition. However, you cannot use both LOG_ARCHIVE_DEST_n and LOG_ARCHIVE_DEST, as they are not compatible.

--------------------------------------------------------------------------------


The LOG_ARCHIVE_DEST_n parameters (where n = 1, 2, 3, ... 10) define up to ten archive log destinations. The parameter integer suffix is defined as the handle displayed by the V$ARCHIVE_DEST dynamic performance view.

Values:

SERVICE

Specifies a standby destination. Oracle Net (IPC or TCP) transmits the archivelog. A standby instance must be associated with the destination. The value represented by tnsnames_service corresponds to an appropriate service name in tnsnames.ora.

LOCATION

Specifies a local file system destination. You must specify this parameter for at least one destination.

MANDATORY

Specifies that archiving to the destination must succeed before the redo log file can be made available for reuse.

OPTIONAL

Specifies that successful archiving to the destination is not required before the redo log file can be made available for reuse. If the "must succeed count," set with LOG_ARCHIVE_MIN_SUCCEED_DEST, is met, the redo logfile is marked for reuse. This is the default.

REOPEN

Specifies an interval of time (in seconds) that must pass after an error has been encountered during archiving to the destination before future archives to the destination can be attempted. Future attempts are made when the next redo log file is archived. If a destination is MANDATORY, Oracle Corporation recommends that you specify a REOPEN time that reduces the possibility of primary database shutdown due to lack of available online redo log files.

If you specify REOPEN without an integer value, the default is 300 seconds. If you do not specify REOPEN, the archiver process will never reopen a destination after an error.


Examples:
LOG_ARCHIVE_DEST_1 = 'LOCATION=/u01/oracle/logs MANDATORY REOPEN=120'
LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1 MANDATORY REOPEN=180'
LOG_ARCHIVE_DEST_3 = 'SERVICE=standby2 OPTIONAL'
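
After setting these parameters, the state of each destination can be verified from V$ARCHIVE_DEST (a quick check; the column names are from the standard view):

SQL> select dest_id, status, destination, error from v$archive_dest where status <> 'INACTIVE';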

Note:
- This parameter is only valid for Oracle Enterprise Edition.
- This parameter introduced in Oracle 8i.
- You may continue to use LOG_ARCHIVE_DEST if you have installed Oracle
Enterprise Edition. However, you cannot use both LOG_ARCHIVE_DEST_n and
LOG_ARCHIVE_DEST, as they are not compatible.

Data Guard 9i Log Transportation on RAC

PURPOSE
-------

This article gives an overview of how to create a Data Guard
configuration on Real Application Clusters (RAC). The configuration
described here is for a physical standby database.


SCOPE & APPLICATION
-------------------

You can see which initialisation parameters you have to use or change
and how the log transport is organized in a RAC Data Guard environment.

NOTE: The Data Guard Broker and its interfaces (DGMGRL and Data
Guard Manager from Enterprise Manager) do not support Data Guard
on RAC environments in Oracle 9i releases.


1. Overview Archiving on RAC
----------------------------

You can run a RAC environment in either NOARCHIVELOG or ARCHIVELOG mode. If
you configure ARCHIVELOG mode, each instance creates its own archivelogs
from its log threads. These archivelogs can be stored either on each node's
local private disks or on a formatted partition of the shared disk.
Typically you store them on the shared disk, so that each instance can
access all archivelogs. Alternatively, you can transfer the archivelogs
between the primary instances via the service: LOG_ARCHIVE_DEST_n=(SERVICE= ...)


2. Create Standby Database
--------------------------

There are two types of standby databases supported for RAC: you can create
the standby database either on a single-node or on a cluster system.
Because the Data Guard Broker does not support RAC environments in 9i, you
have to set up the standby configuration manually.

2.1 Standby Database on a Single-Node
-------------------------------------

In case of creating the standby database on a single node, you can mostly
proceed like a normal standby database creation. That means, first of all
you take a full backup of your primary database and create a standby
controlfile (ALTER DATABASE CREATE STANDBY CONTROLFILE AS '<filename>' from
any instance). Then you restore the database, including the standby
controlfile, on your standby site.
Next, mind to adjust the corresponding initialisation parameters if needed
(e.g. DB_FILE_NAME_CONVERT, STANDBY_ARCHIVE_DEST, CONTROL_FILES) and to
configure the appropriate network settings (TNSNAMES.ORA and LISTENER.ORA).
In the next step we have to configure the log transport services. This is
done by setting the LOG_ARCHIVE_DEST_n parameter on all primary instances.
All the instances have to archive their logs to the service of the
single-node standby system. The standby system creates a "pool" of
archivelogs, and on the basis of the SCN it can determine which archivelog
from which thread is the next one to apply. So you have to set the next
free LOG_ARCHIVE_DEST_n parameter to the standby service. The other
settings, like which process to use for the transfer or the type of
transfer (SYNC or ASYNC, ...), depend on your preferred protection mode
(see the LOG_ARCHIVE_DEST_n parameter reference above). Once this is all
done, you can STARTUP MOUNT the standby database and put it into recovery
mode, as sketched below.
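
Spelled out in SQL*Plus, that final step looks like this (the same sequence
appears again later in this note):

SQL> STARTUP NOMOUNT
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;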

2.2 Standby Database on a RAC-System
------------------------------------

2.2.1 Standby Database is also a Cluster (RAC-) System
------------------------------------------------------
It is also possible to create the standby system on a RAC environment.
This provides much more comfort, scalability and performance. If you have
two identical systems, there should be no performance or availability
decrease in case of a switchover or failover.
Typically, you have the same number of instances (nodes) on the standby
system as on the primary system.
First of all you proceed like a normal standby database creation: you take
a full backup of your primary database and create the standby controlfile
from any of the instances. Then you prepare the standby system for the
database (create the raw devices and logical links, configure your hardware
and install the Oracle software on each node). After that, you are able to
restore the backup including the newly created standby controlfile. Also
mind to set the appropriate parameters in the PFILE or SPFILE for EACH
instance correctly (see 2.1 Standby Database on a Single-Node) and to
configure the network parameters (TNSNAMES.ORA, LISTENER.ORA) corresponding
to your requirements and settings.
The greatest difference in comparison to the single-node system is that we
now have multiple standby instances and only one of them can perform the
recovery. Basically, every primary instance transports its archivelogs to a
corresponding standby instance. The standby instance receiving the logs
then transfers them to the instance performing the recovery. This is all
handled by the LOG_ARCHIVE_DEST_n parameter. Furthermore, it is recommended
to create standby redo logs on the standby database for each standby
instance.
For clarification, we will show you a small example:

We have a primary database with two nodes and two instances (A and B). We
equally have a standby RAC environment, containing the standby database
with two nodes and instances (C and D). On the primary database, archivelog
mode is enabled. Each primary instance archives its redo logs to a
formatted partition of the shared disk, if a Cluster File System (CFS) is
supported for the platform used; otherwise, the archivelogs are stored on
each node's private disk area. Furthermore, you must use different formats
for naming the archivelogs to prevent the instances from overwriting each
other's files. It is recommended to use at least %t in LOG_ARCHIVE_FORMAT
to prevent this; %t represents the thread number the log comes from.
Accordingly, we have the following relevant settings:

Instance A:
LOG_ARCHIVE_DEST_1=(location=/u10/ARCHIVE)
LOG_ARCHIVE_FORMAT=arc_%s_%t

Instance B:
LOG_ARCHIVE_DEST_1=(location=/u10/ARCHIVE)
LOG_ARCHIVE_FORMAT=arc_%s_%t

Next we have to add the log transport for each primary instance to the
corresponding standby instance. As we have standby redo logs created and
want maximum performance, we now have these settings:

Instance A:
LOG_ARCHIVE_DEST_1=(location=/u10/ARCHIVE)
LOG_ARCHIVE_DEST_2=(SERVICE=C LGWR ...)
LOG_ARCHIVE_FORMAT=arc_%s_%t

Instance B:
LOG_ARCHIVE_DEST_1=(location=/u10/ARCHIVE)
LOG_ARCHIVE_DEST_2=(SERVICE=D LGWR ...)
LOG_ARCHIVE_FORMAT=arc_%s_%t

Now we want to designate instance D as the recovering instance. Therefore,
the archivelogs from instance C have to be transferred to instance D.
Instance D also performs the archiving-to-disk process (again on the shared
disk, to be available for both instances). The resulting settings for
instances C and D now are:

Instance C:
LOG_ARCHIVE_DEST_1=(SERVICE=D ARCH SYNC)

Instance D:
LOG_ARCHIVE_DEST_1=(location=/u10/ARCHIVE)

If all this is done, you can now STARTUP NOMOUNT and
ALTER DATABASE MOUNT STANDBY DATABASE on all instances and put the
recovering instance into recovery mode.

2.2.2 Standby Database is a Single Node System
----------------------------------------------
Another possibility for a RAC standby configuration is to place the standby
database on a single-node system.
Basically, the prerequisites are the same. The difference now is that all
instances of the primary database have to ship their archivelogs to one
single system. Using the information (SCNs) in the archivelogs, the Managed
Recovery Process (MRP) on the standby database works out the correct order
in which to apply the archivelogs from the different threads.
So in contrast to 2.2.1, instances A and B are again the primary instances
in our example, and the single-node standby instance is C. This results in
these settings for LOG_ARCHIVE_DEST_n:

Instance A and B:
LOG_ARCHIVE_DEST_1=(location=/u10/ARCHIVE)
LOG_ARCHIVE_DEST_2=(SERVICE=C LGWR ...)

Please also mind to create separate standby redo log threads on the standby
database if you use them.
Instance C now receives all the logs and applies them to the standby database.

Instance C:
LOG_ARCHIVE_DEST_1=(location=/u10/ARCHIVE)

2.2.3 Cross-Instance Archival
-----------------------------
In addition to the parameters shown above, you can configure cross-instance
archival. This means you can ship your archivelogs from one primary
instance to another primary instance. This can be helpful for gap
resolution if one instance loses network connectivity or archivelogs are
accidentally deleted. To enable cross-instance archival, you simply take
one free LOG_ARCHIVE_DEST_n parameter and configure it to ship to another
primary instance. For cross-instance archival, only the archiver (ARCH)
process is supported.
If we use our example from 2.2.1 again, the result would be:

Instance A:
LOG_ARCHIVE_DEST_1=(location=/u10/ARCHIVE)
LOG_ARCHIVE_DEST_2=(SERVICE=C LGWR ...)
LOG_ARCHIVE_DEST_3=(SERVICE=B ARCH ...)
LOG_ARCHIVE_FORMAT=arc_%s_%t

If you also want to use cross-instance archival on instance B:

Instance B:
LOG_ARCHIVE_DEST_1=(location=/u10/ARCHIVE)
LOG_ARCHIVE_DEST_2=(SERVICE=C LGWR ...)
LOG_ARCHIVE_DEST_3=(SERVICE=A ARCH ...)
LOG_ARCHIVE_FORMAT=arc_%s_%t


3. Gap Resolution and FAL
-------------------------

Basically, there are two modes of gap resolution:
- The automatic one, which is controlled by the archiver and turned on
automatically. Automatic gap resolution works proactively and can
therefore produce some network overhead.
- The FAL (Fetch Archive Log) method, which requests certain logs again
from a remote system. On the receiving instance you publish one or more
FAL servers from which it can receive its archivelogs, and on the sending
instance you publish one or more FAL clients that are allowed to receive
archivelogs from that instance. The publishing is done via the
initialisation parameters FAL_SERVER and FAL_CLIENT.
FAL is not supported for logical standby databases.
A typical configuration for our example 2.2.1 could be:
A typical configuration for our example 2.2.1 could be:

Instance A and B:
FAL_CLIENT=C

Instance C and D:
FAL_SERVER=A,B

With Cross-Instance Archival (2.2.3) configured, Instance C could now also
receive missing Archivelogs from Instance B and Instance D from Instance A.
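
Whether a standby instance currently has a gap can be checked on the
standby side with the standard gap view (a quick check, not part of the
original note):

SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;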


4. Protection Mode
------------------

All supported Protection Modes for Single-Node-systems are also supported for
RAC-environments in 9.2. You only have to set all Transport Services equal for
each primary instance. Please refer to the Data Guard Concepts and
Administration Guide for more information about specific settings.


5. Switchover and Failover
--------------------------

Switchover and Failovers can also be performed on RAC-Data Guard configurations.
Please perform the COMMIT TO SWITCHOVER on the Recovering instance. The rest
is equal to a Single-Node System. Of course you can also Switchover/Failover
from a RAC- to a Single Node Database. Please also refer to the Data Guard
Concepts and Administration Guide for more information.

Creating a Data Guard Configuration

1) Ensure the Primary database is in ARCHIVELOG mode:

SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination /export/home/oracle/temp/oracle/arch
Oldest online log sequence 7
Current log sequence 9
SQL> alter database close;

Database altered.

SQL> alter database archivelog;

Database altered.

SQL> shutdown immediate
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.

Modify the Primary database init.ora so that log_archive_start=true and restart
the instance. Verify that database is in archive log mode and that automatic
archiving is enabled.

SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /export/home/oracle/temp/oracle/arch
Oldest online log sequence 7
Next log sequence to archive 9
Current log sequence 9

2) Create a backup of the Primary database:

You can use an existing backup of the Primary database as long as you have the
archive logs that have been generated since that backup. You may also take a
hot backup as long as you have all archive logs through the end of the backup
of the last tablespace. To create a cold backup do the following:

SQL>
SQL> select name from v$datafile;

NAME
----------------------------------------------------------------------
/export/home/oracle/temp/oracle/data/sys.dbf

SQL>
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit

Create a backup of all datafiles and online redo logs using an OS command or
utility. A backup of the online redo logs is necessary to facilitate switchover.

Once complete startup the instance:

SQL> startup
ORACLE instance started.

Total System Global Area 80512156 bytes
Fixed Size 279708 bytes
Variable Size 71303168 bytes
Database Buffers 8388608 bytes
Redo Buffers 540672 bytes
Database mounted.
Database opened.

3) Connect to the primary database and create the standby control file:

SQL> alter database create standby controlfile as
'/export/home/oracle/temp/oracle/data/backup/standby.ctl';

Database altered.

4) Copy files to the Standby host:

Copy the backup datafiles, standby controlfile, all available archived redo logs,
and online redo logs from the primary site to the standby site. Copying of the
online redo logs is necessary to facilitate switchover.

If the standby is on a separate site with the same directory structure as
the primary database, then you can use the same path names for the standby
files as for the primary files. In this way, you do not have to rename the
primary datafiles in the standby control file. If the standby is on the
same site as the primary database, or the standby database is on a separate
site with a different directory structure, then you must rename the primary
datafiles in the standby control file after copying them to the standby
site. This can be done using the db_file_name_convert and
log_file_name_convert parameters or manually using ALTER DATABASE RENAME
FILE statements.

5) Set the initialization parameters for the primary database:

It is suggested that you maintain two init.ora files on both the primary
and the standby. This allows you to facilitate role reversal during
switchover operations more easily.

Primary init.ora on Primary host:

log_archive_dest_1='LOCATION=/export/home/oracle/temp/oracle/arch'
log_archive_dest_2='SERVICE=DGD01_hasunclu2 reopen=60'
log_archive_dest_state_1=enable
log_archive_dest_state_2=enable
log_archive_format=%t_%s.dbf
log_archive_start=true
remote_archive_enable=true

Create the standby initialization parameter file and set the initialization
parameters for the standby database. Depending on your configuration, you may
need to set filename conversion parameters.

Standby init.ora on Primary host:

log_archive_dest_1='LOCATION=/export/home/oracle/temp/oracle/arch'
log_archive_dest_state_1=enable
log_archive_format=%t_%s.dbf
log_archive_start=true
standby_archive_dest=/export/home/oracle/temp/oracle/arch
standby_file_management=auto
fal_server=DGD01_hasunclu2
fal_client=DGD01_hasunclu1
remote_archive_enable=true

NOTE: In the above example db_file_name_convert and log_file_name_convert are
not needed as the directory structure on the two hosts are the same. If the
directory structure is not the same then setting of these parameters is
recommended. Please reference notes 47325.1 and 47343.1 for further
information.

Copy the two init.ora files from the Primary host to the Standby host. You
must modify the Primary init.ora on the Standby host to have
log_archive_dest_2 use the alias that points to the Primary host
(i.e. DGD01_hasunclu1). You must modify the Standby init.ora on the Standby
host to have fal_server and fal_client use the aliases appropriate when the
standby is running on the Primary host (i.e. fal_server=DGD01_hasunclu1 and
fal_client=DGD01_hasunclu2).

6) Configure networking components:

On the Primary host create a net service name that the Primary database can
use to connect to the Standby database. On the Primary host create a net
service name that Standby, when running on the Primary host, can use to
connect to the Primary, when it is running on the Standby host. Example from
Primary’s host tnsnames.ora:

DGD01_hasunclu1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = hasunclu1)(PORT = 1521))
)
(CONNECT_DATA =
(SID = DGD01)
(SERVER = DEDICATED)
)
)
DGD01_hasunclu2 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = hasunclu2)(PORT = 1521))
)
(CONNECT_DATA =
(SID = DGD01)
(SERVER = DEDICATED)
)
)

The above two net service names must exist in the Standby host's
tnsnames.ora as well.

You must also configure a listener on the standby database. If you plan to
manage this standby database using the Data Guard broker, you must configure
the listener to use the TCP/IP protocol and statically register the standby
database service using its SID.
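
A static registration entry in the standby host's listener.ora might look
like the following sketch (the ORACLE_HOME path is an assumption for
illustration):

LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = hasunclu2)(PORT = 1521))
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = DGD01)
      (ORACLE_HOME = /export/home/oracle/orahome)  # assumed path
    )
  )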


7) Start the standby instance and mount the standby database.

oracle@hasunclu2:/export/home/oracle/temp/oracle> sqlplus "/ as sysdba"

SQL*Plus: Release 9.0.1.0.0 - Production on Thu Mar 14 18:00:57 2002

(c) Copyright 2001 Oracle Corporation. All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile=?/dbs/initDGD.ora
ORACLE instance started.

Total System Global Area 80512156 bytes
Fixed Size 279708 bytes
Variable Size 71303168 bytes
Database Buffers 8388608 bytes
Redo Buffers 540672 bytes
SQL> alter database mount standby database;

Database altered.

SQL>

8) Create standby redo log files, if necessary:

Standby redo logs are necessary for the higher protection levels such as
Guaranteed, Instant, and Rapid. In these protection modes LGWR from the
Primary host writes transactions directly to the standby redo logs.
This enables no data loss solutions and reduces the amount of data loss
in the event of failure. Standby redo logs are not necessary if you are using
the delayed protection mode.

If you configure standby redo on the standby then you should also configure
standby redo logs on the primary database. Even though the standby redo logs
are not used when the database is running in the primary role, configuring
the standby redo logs on the primary database is recommended in preparation
for an eventual switchover operation.

Standby redo logs must be archived before the data can be applied to the
standby database. The standby archival operation occurs automatically, even if
the standby database is not in ARCHIVELOG mode. However, the archiver process
must be started on the standby database. Note that the use of the archiver
process (ARCn) is a requirement for selection of a standby redo log.

You must have the same number of standby redo logs on the standby as you have
online redo logs on production. They must also be exactly the same size.
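
Before adding the standby redo logs, you can confirm the count and size of
the online redo logs on the primary with a quick query against v$log:

SQL> select group#, thread#, bytes/1024/1024 as mb from v$log;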

The following syntax is used to create standby redo logs:

SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_1.dbf' size 20m;

Database altered.
SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_2.dbf' size 20m;

Database altered.

SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_3.dbf' size 20m;

Database altered.

SQL>

9) Manually change the names of the primary datafiles and redo logs in the
standby control file for all files not automatically renamed using
DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT as noted in step 5. Datafile
names can be changed on the standby at a mounted state with the following
syntax:

SQL> alter database rename file
2 '/export/home/oracle/temp/oracle/data/sys.dbf'
3 to
4 '/export/home/oracle/temp/oracle/data2/sys.dbf';

10) Stop and restart the listeners:

Restart the listener on the primary database, and start the listener on the
standby database, so that the changes made to the listener.ora files take
effect.

11) Activate parameter changes:

Manually enable initialization parameter changes on the primary database
so that it can initiate archiving to the standby site.

At runtime, the LOG_ARCHIVE_DEST_n initialization parameter can be changed
using ALTER SYSTEM and ALTER SESSION statements. Activate the changes made to
these parameters by either bouncing the instance or activating via alter system.
For example:

SQL> alter system set log_archive_dest_2='SERVICE=DGD01_hasunclu2 reopen=60';

System altered.

12) Verify that automatic archiving is occurring:

On the Primary database switch a log and verify that it has been shipped
properly using the v$archive_dest view.

SQL> alter system switch logfile;

System altered.

SQL> select status, error from v$archive_dest where dest_id=2;

STATUS ERROR
--------- -------------------------------------------------------
VALID

SQL>

13) Optionally place Standby database in managed recovery:

SQL> recover managed standby database disconnect;
Media recovery complete.
SQL> exit
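
Once managed recovery is running, apply progress can be monitored on the
standby through the v$managed_standby view (a monitoring sketch, not part
of the original steps):

SQL> select process, status, thread#, sequence# from v$managed_standby;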

Monday, December 24, 2007

Creating a physical standby Database in Oracle10g

STEPS for creating a 10g Data Guard configuration

Prerequisite: knowledge of 9i Data Guard setup


step 1:

Prepare the initSID.ora files for the primary and standby databases as follows.

** The standby (Data Guard) setup parameters are the db_unique_name, fal_*, log_archive_* and standby_* entries in each file.

part A)

**** Production database parameter file ****

prod.__db_cache_size=125829120
prod.__java_pool_size=4194304
prod.__large_pool_size=4194304
prod.__shared_pool_size=79691776
prod.__streams_pool_size=0
*.audit_file_dest='C:\oracle\product\10.2.0\admin\prod\adump'
*.background_dump_dest='C:\oracle\product\10.2.0\admin\prod\bdump'
*.compatible='10.2.0.3.0'
*.control_files='Q:\oradata\prod\control01.ctl','Q:\oradata\prod\control02.ctl','Q:\oradata\prod\control03.ctl'
*.core_dump_dest='C:\oracle\product\10.2.0\admin\prod\cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='prod'
*.db_recovery_file_dest='C:\oracle\product\10.2.0\flash_recovery_area'
*.db_recovery_file_dest_size=2147483648

*.instance_name='prod'
*.job_queue_processes=10

*.db_unique_name='prod'
*.fal_client='prod'
*.fal_server='prod_stdby'
*.log_archive_config='DG_CONFIG=(prod,stdby)'
*.log_archive_dest_1='LOCATION=Q:\oradata\prod\arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=prod'
*.log_archive_dest_2='SERVICE=prod_stdby VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stdby LGWR ASYNC REOPEN=10'
*.remote_login_passwordfile='EXCLUSIVE'

*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='%t%s%r.arc'
*.open_cursors=300
*.pga_aggregate_target=72351744
*.processes=150
*.service_names='prod'
*.sga_target=218103808
*.standby_archive_dest='q:\arch'
*.standby_file_management='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='C:\oracle\product\10.2.0\admin\prod\udump'



part B)

**** Standby database parameter file ****

*.compatible='10.2.0.3.0'
*.control_files='C:\oradata\prod\stdby.ctl'
*.instance_name='stdby'
*.db_name='prod'
*.db_unique_name='stdby'
*.fal_client='prod_stdby'
*.fal_server='prod'
*.log_archive_config='DG_CONFIG=(prod,stdby)'
*.log_archive_dest_1='LOCATION=C:\oradata\prod\stdby\arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=stdby'
*.log_archive_dest_2='SERVICE=prod VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=prod LGWR ASYNC REOPEN=10'
*.db_file_name_convert='Q:\','C:\'
*.log_file_name_convert='Q:\','C:\'
*.remote_login_passwordfile='EXCLUSIVE'

*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='%t_%s_%r'
*.service_names='stdby'
*.shared_pool_size=104857600
*.standby_file_management='AUTO'


STEP 2:

part A)

Shut down the primary database cleanly and copy the datafiles to the standby location.

part B) Create the standby controlfile by issuing the following command on the production database at the mount stage:

SQL> alter database create standby controlfile as 'location\filename';

Move the generated file to the standby controlfile location pointed to by the standby initSID.ora file.


step 3)
Create the Oracle service and a password file (using the same password as the primary) at the standby database location, as sketched below.
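
On Windows (as in this example), these are typically created with oradim and orapwd; a sketch with placeholder values (the exact file path and password are assumptions):

C:\> oradim -NEW -SID stdby -STARTMODE manual
C:\> orapwd file=C:\oracle\product\10.2.0\db_1\database\PWDstdby.ora password=<primary_sys_password>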

step 4)
Prepare the TNSNAMES.ORA and LISTENER.ORA files at both the production and standby locations.
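
step 5)
Start the standby instance and begin managed recovery (a minimal sketch using the standard commands shown earlier in this blog):

SQL> startup nomount
SQL> alter database mount standby database;
SQL> alter database recover managed standby database disconnect from session;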





To check the archive log gap:


SELECT * FROM (
SELECT sequence#, archived, applied,
TO_CHAR(completion_time, 'RRRR/MM/DD HH24:MI') AS completed
FROM sys.v$archived_log
ORDER BY sequence# DESC)
WHERE ROWNUM <= 10;

Query in DR: What is Right, What is Wrong

I found a DBA issuing the following commands at the standby site to perform a switchover while the standby database was mounted.

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY
*
ERROR at line 1:
ORA-16139: media recovery required

SQL> recover managed standby database disconnect;
Media recovery complete.
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY

ERROR at line 1:
ORA-16139: media recovery required
**********
From another session:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;

Database altered.

SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;

Database altered.