Wednesday, January 23, 2008


Database Synchronization Options

Setup No-Data-Divergence

Setup Primary Database

Setup Standby Database

Start Managed Standby Recovery

Protect Primary Database

Cancel Managed Standby Recovery

Activating A Standby Database

Backup Standby Database

Database Switchover

Database Failover

Automatic Archive Gap Detection

Background Managed Recovery

Delayed Redo Application


The Oracle9i Data Guard architecture incorporates the following items:

Primary Database - A production database that is used to create standby databases. The archive logs from the primary database are transferred to and applied on the standby databases. Each standby database can be associated with only a single primary database, but a single primary database can be associated with multiple standby databases.
Standby Database - A replica of the primary database.

Log Transport Services - Control the automatic transfer of archive redo log files from the primary database to one or more standby destinations.

Network Configuration - The primary database is connected to one or more standby databases using Oracle Net.

Tuesday, January 22, 2008


Syntax    LOG_ARCHIVE_DEST_[1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10] =
              "null_string" |
              ( SERVICE=tnsnames_service | LOCATION=location )
              [MANDATORY | OPTIONAL]
              [DELAY[=minutes] | NODELAY]
              [REOPEN[=seconds] | NOREOPEN]
              [SYNC | ASYNC=blocks]

Default value There is no default value.

Parameter class Dynamic: ALTER SESSION, ALTER SYSTEM

Range of values Valid keyword definitions

This parameter is valid only if you have installed Oracle Enterprise Edition. You may continue to use LOG_ARCHIVE_DEST instead; however, you cannot use both LOG_ARCHIVE_DEST_n and LOG_ARCHIVE_DEST, as they are not compatible.


The LOG_ARCHIVE_DEST_n parameters (where n = 1, 2, 3, ... 10) define up to ten archive log destinations. The integer suffix of each parameter corresponds to the destination handle displayed by the V$ARCHIVE_DEST dynamic performance view.



SERVICE=tnsnames_service
Specifies a standby destination. Oracle Net (IPC or TCP) transmits the archive log. A standby instance must be associated with the destination. The value of tnsnames_service corresponds to an appropriate service name in tnsnames.ora.


LOCATION=location
Specifies a local file system destination. You must specify this parameter for at least one destination.


MANDATORY
Specifies that archiving to the destination must succeed before the redo log file can be made available for reuse.


OPTIONAL
Specifies that successful archiving to the destination is not required before the redo log file can be made available for reuse. If the "must succeed count" set with LOG_ARCHIVE_MIN_SUCCEED_DEST is met, the redo log file is marked for reuse. This is the default.


REOPEN[=seconds]
Specifies an interval of time (in seconds) that must pass after an error has been encountered during archiving to the destination before future archives to the destination can be attempted. Future attempts are made when the next redo log file is archived. If a destination is MANDATORY, Oracle Corporation recommends that you specify a REOPEN time that reduces the possibility of a primary database shutdown due to lack of available online redo log files.

If you do not specify seconds, the default value is 300. If you do not specify REOPEN, the archiver process will never reopen a destination after an error.


- This parameter is valid only for Oracle Enterprise Edition.
- This parameter was introduced in Oracle 8i.
- LOG_ARCHIVE_DEST_n cannot be used together with LOG_ARCHIVE_DEST, as the two
are not compatible.

Data Guard 9i Log Transportation on RAC


This article gives an overview of how to create a Data Guard
configuration on Real Application Clusters (RAC). The configuration
described here is for a physical standby database.


You can see which initialisation parameters you have to use or change,
and how log transport is organised in a RAC Data Guard environment.

NOTE: The Data Guard Broker and its interfaces (DGMGRL and Data
Guard Manager from Enterprise Manager) do not support Data Guard
on RAC environments in Oracle 9i releases.

1. Overview Archiving on RAC

A RAC environment can run in either NOARCHIVELOG or ARCHIVELOG mode. If you
configure ARCHIVELOG mode, each instance creates its own archive logs from
its log threads. These archive logs can be stored either on the instances'
local private disks or on a formatted partition of the shared disk. Typically
you store them on the shared disk, so that each instance can access all
archive logs. Alternatively, you can transfer the archive logs between the
primary instances via a service: LOG_ARCHIVE_DEST_n=(SERVICE= ...)
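As a sketch of such a per-instance archiving setup (the directory and format values below are illustrative assumptions, not from the original article), each instance's parameter file might contain:

```text
# Instance A (thread 1); %t embeds the thread number so the two
# instances cannot overwrite each other's archive logs on the shared disk
log_archive_start  = true
log_archive_format = arc_%t_%s.arc
log_archive_dest_1 = 'LOCATION=/shared/arch MANDATORY'
```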

2. Create Standby Database

There are two types of Standby Databases supported for RAC. You can either
create the Standby database on a Single-Node or also on a Cluster system.
Due to the Broker being unable to support RAC environments, you have to
setup the Standby Configuration manually.

2.1 Standby Database on a Single-Node

When creating the standby database on a single node you can mostly proceed
as for a normal standby database creation: first take a full backup of your
primary database and create a standby controlfile from any primary instance.
Then restore the database, including the standby controlfile, on your
standby site.
Next, remember to adjust the corresponding initialisation parameters where
needed, and configure the appropriate network settings (TNSNAMES.ORA and
LISTENER.ORA). In the next step we have to configure the log transport
services. This is done by setting the LOG_ARCHIVE_DEST_n parameter on all
primary instances. All the instances have to archive their logs to the
service of the single-node standby system. The standby system collects a
"pool" of archive logs and, on the basis of the SCN, can determine which
archive log from which thread is the next one to apply. So on each primary
instance you set the next free LOG_ARCHIVE_DEST_n parameter to the standby
service. The other settings, such as which process to use for the transfer
or the type of transfer (SYNC or ASYNC, ...), depend on your preferred
protection mode (see the LOG_ARCHIVE_DEST_n parameter reference above). Once
this is all done, you can STARTUP MOUNT the standby database and place it in
managed recovery mode.
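A minimal sketch of these steps (the service name STBY is an invented alias for illustration):

```sql
-- On each primary instance: ship archive logs to the standby service
ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=STBY reopen=60';

-- On the standby instance: mount as standby and start managed recovery
STARTUP NOMOUNT
ALTER DATABASE MOUNT STANDBY DATABASE;
RECOVER MANAGED STANDBY DATABASE DISCONNECT;
```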

2.2 Standby Database on a RAC-System

2.2.1 Standby Database is also a Cluster (RAC) System
It is also possible to create the standby system on a RAC environment. This
provides much more comfort, scalability and performance: if you have two
identical systems, there should be no performance or availability decrease
in case of a switchover or failover.
Typically, you have the same number of instances (nodes) on the standby
system as on the primary system.
First of all you proceed as for a normal standby database creation. You take
a full backup of your primary database and create the standby controlfile
from any of the instances. Then you prepare the standby system for the
database (create the raw devices and logical links, configure your hardware
and install the Oracle software on each node). After that, you are able to
restore the backup, including the newly created standby controlfile. Then
also remember to set the appropriate parameters in the PFILE or SPFILE for
EACH instance correctly (see 2.1 Standby Database on a Single-Node) and
configure the network parameters (TNSNAMES.ORA, LISTENER.ORA) according to
your requirements and settings.
The greatest difference in comparison to the single-node system is that we
now have multiple standby instances, and only one of them can perform the
recovery. Basically, every primary instance transports its archive logs to a
corresponding standby instance. The standby instance receiving the logs then
transfers them to the instance performing the recovery. This is all handled
by the LOG_ARCHIVE_DEST_n parameter. Furthermore, it is recommended to create
standby redo logs on the standby database for each standby instance.
For clarification, we will show you a small example:

We have a primary database with two nodes and two instances (A and B). We
equally have a standby RAC environment containing the standby database with
two nodes and instances (C and D). On the primary database, ARCHIVELOG mode
is enabled. Each primary instance archives its redo logs on a formatted
partition of the shared disk, if a Cluster File System (CFS) is supported for
the platform used; otherwise the archive logs are stored on each node's
private disk area. Furthermore, you must use different formats for naming the
archive logs to prevent the instances from overwriting each other's files. It
is recommended to use at least %t in LOG_ARCHIVE_FORMAT to prevent this; %t
represents the thread number the log comes from.
Accordingly, we have the following relevant settings:

Instance A:

Instance B:

Next we have to add the log transport for each primary instance to the
corresponding standby instance. As we have standby redo logs created and want
maximum performance, we now have these settings:

Instance A:

Instance B:

Now we want to designate instance D as the recovering instance. Therefore the
archive logs from instance C have to be transferred to instance D. Instance D
also performs the archive-to-disk process (again on the shared disk, to be
available for both instances). The resulting settings for instances C and D
now are:

Instance C:

Instance D:

Once all this is done, you can STARTUP NOMOUNT the standby instances, mount
them as standby databases, and set the recovering instance into managed
recovery mode.
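Since the article's concrete parameter values did not survive, here is a hypothetical sketch of the whole 2.2.1 example (all service names and paths are invented for illustration):

```text
# Instance A (primary, thread 1)
log_archive_dest_1 = 'LOCATION=/shared/arch MANDATORY'
log_archive_dest_2 = 'SERVICE=STBY_C LGWR ASYNC'   # ships to standby instance C

# Instance B (primary, thread 2)
log_archive_dest_1 = 'LOCATION=/shared/arch MANDATORY'
log_archive_dest_2 = 'SERVICE=STBY_D LGWR ASYNC'   # ships to standby instance D

# Instance C (standby, receives thread 1)
standby_archive_dest = /shared/stby_arch
log_archive_dest_1   = 'SERVICE=STBY_D'   # forwards logs to the recovering instance

# Instance D (standby, recovering instance)
standby_archive_dest = /shared/stby_arch
log_archive_dest_1   = 'LOCATION=/shared/stby_arch MANDATORY'
```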

2.2.2 Standby Database is a Single Node System
Another possibility for a RAC-Standby Configuration is to place the Standby
Database on a Single Node System.
Basically, the prerequisites are the same. The difference now is that all
instances of the primary database have to ship their archive logs to one
single system. Using the information (SCNs) in the archive logs, the Managed
Recovery Process (MRP) on the standby database works out the correct order in
which to apply the archive logs from the different threads.
So, in contrast to 2.2.1, we now have instances A and B as the primary
instances in our example, and the single-node standby instance is C. This
results in these settings for LOG_ARCHIVE_DEST_n:

Instance A and B:

Please also remember to create separate standby redo log threads on the
standby database if you use them.
Instance C now receives all the logs and applies them to the standby database.

Instance C:
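A hypothetical sketch of these settings (the service name STBY_C and the paths are invented):

```text
# Instances A and B (primary): both ship their logs to the single standby service
log_archive_dest_2 = 'SERVICE=STBY_C'

# Instance C (standby): receive and archive locally, then apply via MRP
standby_archive_dest = /oracle/stby_arch
log_archive_dest_1   = 'LOCATION=/oracle/stby_arch MANDATORY'
```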

2.2.3 Cross-Instance Archival
In addition to the parameters shown above, you can configure cross-instance
archival. This means you can ship archive logs from one primary instance to
another primary instance. This can be helpful for gap resolution if one
instance loses network connectivity or archive logs are accidentally deleted.
To enable cross-instance archival, you simply take one free
LOG_ARCHIVE_DEST_n parameter and configure it for shipping to another primary
instance. For cross-instance archival, only the archiver (ARCn) process is
supported and can be used.
If we use our example from 2.2.1 again, the result would be:

Instance A:

If you also want to use cross-instance archival on instance B:

Instance B:
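As a sketch (the service names PRIM_A and PRIM_B are hypothetical aliases for the two primary instances):

```text
# Instance A: additionally ship archive logs to instance B via ARCn
log_archive_dest_3 = 'SERVICE=PRIM_B ARCH'

# Instance B: additionally ship archive logs to instance A via ARCn
log_archive_dest_3 = 'SERVICE=PRIM_A ARCH'
```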

3. Gap Resolution and FAL

Basically, there are two modes of gap resolution:
- The automatic one, which is controlled by the archiver and turned on
automatically. Automatic gap resolution works proactively and can therefore
produce some network overhead.
- The FAL (Fetch Archive Log) method, which requests specific logs again from
a remote system. On the receiving instance you publish one or more FAL
servers from which it can fetch its archive logs, and on the sending instance
you publish one or more FAL clients that are allowed to receive archive logs
from that instance. The publishing is done with the initialisation parameters
FAL_SERVER and FAL_CLIENT.
FAL is not supported for logical standby databases.
A typical configuration for our example in 2.2.1 could be:

Instance A and B:

Instance C and D:

With Cross-Instance Archival (2.2.3) configured, Instance C could now also
receive missing Archivelogs from Instance B and Instance D from Instance A.
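A hedged sketch of the FAL parameters for the 2.2.1 example (the net service aliases PRIM_A, PRIM_B, STBY_C and STBY_D are invented; one common arrangement puts both parameters on the standby instances):

```text
# Instance C (standby)
fal_server = 'PRIM_A,PRIM_B'   # where missing archive logs can be fetched from
fal_client = 'STBY_C'          # the alias the FAL server uses to ship logs back

# Instance D (standby)
fal_server = 'PRIM_A,PRIM_B'
fal_client = 'STBY_D'
```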

4. Protection Mode

All protection modes supported for single-node systems are also supported for
RAC environments in 9.2. You only have to set the transport services
identically for each primary instance. Please refer to the Data Guard
Concepts and Administration Guide for more information about specific settings.

5. Switchover and Failover

Switchovers and failovers can also be performed on RAC Data Guard
configurations. Perform the COMMIT TO SWITCHOVER on the recovering instance;
the rest is the same as for a single-node system. Of course, you can also
switch over or fail over from a RAC database to a single-node database.
Please refer to the Data Guard Concepts and Administration Guide for more
information.

Creating a Data Guard Configuration

1) Ensure the Primary database is in ARCHIVELOG mode:

SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination /export/home/oracle/temp/oracle/arch
Oldest online log sequence 7
Current log sequence 9
SQL> alter database close;

Database altered.

SQL> alter database archivelog;

Database altered.

SQL> shutdown immediate
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.

Modify the Primary database init.ora so that log_archive_start=true and restart
the instance. Verify that database is in archive log mode and that automatic
archiving is enabled.

SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /export/home/oracle/temp/oracle/arch
Oldest online log sequence 7
Next log sequence to archive 9
Current log sequence 9

2) Create a backup of the Primary database:

You can use an existing backup of the Primary database as long as you have the
archive logs that have been generated since that backup. You may also take a
hot backup as long as you have all archive logs through the end of the backup
of the last tablespace. To create a cold backup do the following:

SQL> select name from v$datafile;


SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit

Create a backup of all datafiles and online redo logs using an OS command or
utility. A backup of the online redo logs is necessary to facilitate switchover.

Once complete startup the instance:

SQL> startup
ORACLE instance started.

Total System Global Area 80512156 bytes
Fixed Size 279708 bytes
Variable Size 71303168 bytes
Database Buffers 8388608 bytes
Redo Buffers 540672 bytes
Database mounted.
Database opened.

3) Connect to the primary database and create the standby control file:

SQL> alter database create standby controlfile as

Database altered.

4) Copy files to the Standby host:

Copy the backup datafiles, standby controlfile, all available archived redo logs,
and online redo logs from the primary site to the standby site. Copying of the
online redo logs is necessary to facilitate switchover.

If the standby is on a separate site with the same directory structure as the
primary database, then you can use the same path names for the standby files
as for the primary files. In this way, you do not have to rename the primary
datafiles in the standby control file. If the standby is on the same site as
the primary database, or the standby database is on a separate site with a
different directory structure, then you must rename the primary datafiles in
the standby control file after copying them to the standby site. This can be
done using the db_file_name_convert and log_file_name_convert parameters or
manually with ALTER DATABASE RENAME FILE statements.

5) Set the initialization parameters for the primary database:

It is suggested that you maintain two init.ora files on both the primary and
the standby. This makes it easier to facilitate role reversal during
switchover operations.

Primary init.ora on Primary host:

log_archive_dest_2='SERVICE=DGD01_hasunclu2 reopen=60'

Create the standby initialization parameter file and set the initialization
parameters for the standby database. Depending on your configuration, you may
need to set filename conversion parameters.

Standby init.ora on Primary host:


NOTE: In the above example db_file_name_convert and log_file_name_convert are
not needed, as the directory structure on the two hosts is the same. If the
directory structure is not the same, then setting these parameters is
recommended. Please reference notes 47325.1 and 47343.1 for further
information.

Copy the two init.ora files from the primary host to the standby host. You
must modify the primary init.ora on the standby host so that
log_archive_dest_2 uses the alias that points to the primary host
(i.e. DGD01_hasunclu1). You must modify the standby init.ora on the standby
host so that fal_server and fal_client use the appropriate aliases when the
standby is running on the primary host
(i.e. fal_server=DGD01_hasunclu1 and fal_client=DGD01_hasunclu2).
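Since the standby init.ora listing was lost from the post, a plausible sketch (the values follow the naming used elsewhere in this post, but are assumptions):

```text
# Standby init.ora on the Standby host
fal_server              = DGD01_hasunclu1
fal_client              = DGD01_hasunclu2
standby_archive_dest    = /export/home/oracle/temp/oracle/arch
log_archive_dest_1      = 'LOCATION=/export/home/oracle/temp/oracle/arch'
standby_file_management = AUTO
```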

6) Configure networking components:

On the Primary host create a net service name that the Primary database can
use to connect to the Standby database. On the Primary host create a net
service name that Standby, when running on the Primary host, can use to
connect to the Primary, when it is running on the Standby host. Example from
Primary’s host tnsnames.ora:

DGD01_hasunclu1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = hasunclu1)(PORT = 1521))
    (CONNECT_DATA =
      (SID = DGD01)
    )
  )

DGD01_hasunclu2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = hasunclu2)(PORT = 1521))
    (CONNECT_DATA =
      (SID = DGD01)
    )
  )

The above two net service names must also exist in the standby host's tnsnames.ora.

You must also configure a listener on the standby database. If you plan to
manage this standby database using the Data Guard broker, you must configure
the listener to use the TCP/IP protocol and statically register the standby
database service using its SID.
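A sketch of such a static registration in the standby host's listener.ora (the ORACLE_HOME path is an assumption; the SID and host follow this post's naming):

```text
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = DGD01)
      (ORACLE_HOME = /export/home/oracle/product/9.2.0)
    )
  )

LISTENER =
  (ADDRESS = (PROTOCOL = TCP)(HOST = hasunclu2)(PORT = 1521))
```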

7) Start the standby instance and mount the standby database.

oracle@hasunclu2:/export/home/oracle/temp/oracle> sqlplus "/ as sysdba"

SQL*Plus: Release - Production on Thu Mar 14 18:00:57 2002

(c) Copyright 2001 Oracle Corporation. All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile=?/dbs/initDGD.ora
ORACLE instance started.

Total System Global Area 80512156 bytes
Fixed Size 279708 bytes
Variable Size 71303168 bytes
Database Buffers 8388608 bytes
Redo Buffers 540672 bytes
SQL> alter database mount standby database;

Database altered.


8) Create standby redo log files, if necessary:

Standby redo logs are necessary for the higher protection levels such as
Guaranteed, Instant, and Rapid. In these protection modes, LGWR on the
primary writes transactions directly to the standby redo logs.
This enables no-data-loss solutions and reduces the amount of data lost
in the event of a failure. Standby redo logs are not necessary if you are
using the delayed protection mode.

If you configure standby redo on the standby then you should also configure
standby redo logs on the primary database. Even though the standby redo logs
are not used when the database is running in the primary role, configuring
the standby redo logs on the primary database is recommended in preparation
for an eventual switchover operation.

Standby redo logs must be archived before the data can be applied to the
standby database. The standby archival operation occurs automatically, even if
the standby database is not in ARCHIVELOG mode. However, the archiver process
must be started on the standby database. Note that the use of the archiver
process (ARCn) is a requirement for selection of a standby redo log.

You must have at least as many standby redo logs on the standby as you have
online redo logs on the primary. They must also be exactly the same size.

The following syntax is used to create standby redo logs:

SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_1.dbf' size 20m;

Database altered.
SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_2.dbf' size 20m;

Database altered.

SQL> alter database add standby logfile
2 '/export/home/oracle/temp/oracle/data/srl_3.dbf' size 20m;

Database altered.
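After creating the standby redo logs you can check them with a quick sanity query (not part of the original listing):

```sql
SELECT group#, thread#, bytes, status FROM v$standby_log;
```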


9) Manually change the names of the primary datafiles and redo logs in the
standby control file for all files not automatically renamed using
DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT, as noted in step 5. Datafile
names can be changed on the standby while it is mounted, with the following
command:
SQL> alter database rename file
2 '/export/home/oracle/temp/oracle/data/sys.dbf'
3 to
4 '/export/home/oracle/temp/oracle/data2/sys.dbf';

10) Stop and restart the listeners:

Stop and restart the listener on the primary database, and start the listener
on the standby database, so that the changes made to the listener.ora files
take effect.

11) Activate parameter changes:

Manually enable initialization parameter changes on the primary database
so that it can initiate archiving to the standby site.

At runtime, the LOG_ARCHIVE_DEST_n initialization parameter can be changed
using ALTER SYSTEM and ALTER SESSION statements. Activate the changes made to
these parameters by either bouncing the instance or activating via alter system.
For example:

SQL> alter system set log_archive_dest_2='SERVICE=DGD01_hasunclu2 reopen=60';

System altered.

12) Verify that automatic archiving is occurring:

On the primary database, switch a log file and verify that it has been
shipped properly using the v$archive_dest view.

SQL> alter system switch logfile;

System altered.

SQL> select status, error from v$archive_dest where dest_id=2;

STATUS    ERROR
--------- -------------------------------------------------------


13) Optionally place Standby database in managed recovery:

SQL> recover managed standby database disconnect;
Media recovery complete.
SQL> exit

Sunday, January 20, 2008

Regular Tasks of a DBA

1. Regularly monitor the free space in the database.

2. Take logical backups of important tables.

3. Check the locks on the database.

4. Check for long-running queries on your database.

5. Analyse the performance of your database.

6. Investigate wait statistics.

7. Check tablespace usage.

8. Ensure connectivity to Oracle.

9. Check for lock contention.

10. Check for extent failures.

11. Check the alert logs.

12. Check the redo logs.

13. Check that all the instances are up and running: ps -ef | grep ora_

14. Check that all the listeners are up and running: ps -ef | grep listener

15. Check log files for any new errors.

16. Verify the success of backups.

17. Verify the success of archiving to tape and to the standby database.


18. Verify the size of the archive log directory.

19. Check the free space in each tablespace.

20. Check for invalid objects.

21. Monitor users and transactions for deadlocks.

22. Monitor the top SQL statements.

23. Monitor long operations.

24. Monitor sessions.

25. Monitor system statistics.

26. Is the database up and responding to requests?

27. Are the listeners up and responding to requests?

28. Are the Oracle Names and LDAP servers up and responding to requests?

29. Are the web listeners up and responding to requests?
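Two of the checks above, tablespace free space (item 19) and invalid objects (item 20), can be backed by standard data dictionary queries; these are generic sketches, not from the original post:

```sql
-- Free space per tablespace
SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS free_mb
  FROM dba_free_space
 GROUP BY tablespace_name;

-- Invalid objects
SELECT owner, object_type, object_name
  FROM dba_objects
 WHERE status = 'INVALID';
```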

Thursday, January 10, 2008

Procedure for calculating database growth and scheduling it in DBMS_JOB

Create a table by the name db_growth, with the following details:

Name Null? Type
----------------------------------------- -------- ----------------------------


create or replace PROCEDURE database_growth
AS
  today_size     NUMBER;
  yesterday_size NUMBER;
  growth_size    NUMBER;
  cnt            NUMBER;
BEGIN
  SELECT sum(bytes)/(1024*1024) INTO today_size FROM SM$TS_USED;
  SELECT COUNT(1) INTO cnt FROM db_growth;
  IF cnt > 0 THEN
    SELECT database_size_mb INTO yesterday_size
      FROM db_growth
     WHERE to_date(day,'dd-mon-yy') = to_date(SYSDATE - 1,'dd-mon-yy');
    growth_size := today_size - yesterday_size;
    INSERT INTO db_growth VALUES (sysdate, today_size, growth_size);
  ELSE
    INSERT INTO db_growth VALUES (sysdate, today_size, 0);
  END IF;
EXCEPTION
  WHEN no_data_found THEN
    INSERT INTO db_growth VALUES (sysdate, today_size, 0);
END;
/

3. Submit it via DBMS_JOB:

variable jobno number;
begin
  dbms_job.submit(:jobno,
                  'database_growth;',
                  trunc(sysdate+1) + 4/24,
                  'trunc(sysdate+1) + 4/24');
  commit;
end;
/
print :jobno

Wednesday, January 9, 2008

Introduction to Simple Oracle Auditing


This article will introduce the reader to the basics of auditing an Oracle database. Oracle's RDBMS is a functionally rich product and there are a number of auditing alternatives available to the reader. Because auditing Oracle is such a huge subject, doing all of it justice would take an entire book, so this paper will cover the basics of why, when and how to conduct an audit. It will also use a couple of good example cases to illustrate how useful Oracle audit can be to an organization.

The Issues

There are a number of basic issues that should be considered when contemplating using Oracle's auditing features. These are as follows:

* Why is audit needed in Oracle?

Is this a strange question? Well, lots of companies don't actually use the internal audit features of Oracle. Or, when they do use them, they are so overwhelmed with choice that they turn on everything for good measure, then realise there is far too much output to read and digest, so they quickly turn it all off again. It is quite common to use firewalls, intrusion detection systems (IDS) and other security tools to determine whether the network or operating system is being misused or abused. So why not audit what users are doing to the "crown jewels" of an organization, the data? Oracle audit can help detect unauthorized access and internal abuse of the data held in the database.

* When should Oracle users be audited?

A simple basic set of audit actions should be active all the time. The ideal minimum is to capture user access, use of system privileges and changes to the database schema structure. This basic set will not show attempted access to specific data that shouldn't be accessed; however, it will give a reasonably simple overview of "incorrect" access and use of privileges. If an employee is suspected of inappropriate actions, or if an attack is suspected, then more detailed audit can be turned on for specific tables. From a data management point of view, auditing data changes for all tables in the database is not really practical and could also affect performance. Monitoring data-change access on critical tables (such as salaries in an HR database) should be considered.

* How can Oracle users be audited?

The standard audit commands allow all system privileges to be audited, along with access at the object level to any table or view in the database for select, delete, insert or update. Audit can be run for either successful or unsuccessful attempts, or both. It can be for each individual user or for all users, and it can be done at the session level or the access level. At access level, a single record is created per action; at session level, one record is created for all audit actions per session.
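To make this concrete, a few representative commands (the scott.emp table is just a stock example, not from the article):

```sql
-- Audit all log-on attempts, successful or not
AUDIT SESSION;

-- Audit unsuccessful SELECTs against one table, one record per action
AUDIT SELECT ON scott.emp BY ACCESS WHENEVER NOT SUCCESSFUL;

-- Audit use of a system privilege, by session, for one user
AUDIT CREATE TABLE BY scott BY SESSION;
```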

* What are the performance and complexity issues?

Audit is generally perceived to be complex and slow. The reason for this is usually ignorance. If many or all options are turned on, the resultant audit trail can be large and difficult to interpret and manage. Furthermore, if audit is used on all tables and views in the database, it can affect performance. Every time an auditable action is performed, a record is written to the database; clearly, the more audit is used, the more records will be written to the system tablespace purely for audit. In some cases, double the amount of database access can occur: the original write plus the audit record being written.

The watchword here is simplicity and caution. Use only the audit that is needed to give an overall view of what is happening and for detailed monitoring of critical data and objects. The simpler the audit trail set-up, the more likely it is that the data will be analyzed and be of some use. It is important to define what actions or abuses are being checked for so that simple reports can be written to filter the audit trail for these actions. A default installation of Oracle has audit turned off by default and Oracle does not come with any standard default audit settings or reports to analyse any audit trail produced. These reasons, and the fact that there are many options available are, in my opinion, why audit is perceived to be complex.

The standard audit commands do not allow audit to be performed at row level. It is also not possible to audit the actions of privileged users such as SYS and those connecting "as sysdba" until Oracle 9iR2.

Oracle's Audit Facilities

The task of auditing an Oracle database does not have to be limited only to the audit commands; other techniques can be employed as well. Here are some of the main methods that can be used to audit an Oracle database:

* Oracle audit

This is really the subject of this paper. All privileges that can be granted to a user or role within the database can be audited. This includes read, write and delete access on objects at the table level. For more detailed audit, the database triggers need to be employed.

* System triggers

These were introduced with Oracle 8 and allow the writing of database triggers that fire when system events take place. These include start-up and shutdown of the database, log-on and log-off attempts, and the creation, altering and dropping of schema objects. With the aid of autonomous transactions, these allow a log to be written for the above system events.
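A minimal sketch of such a trigger (the log table and its columns are invented for illustration):

```sql
-- Assumes: CREATE TABLE logon_log (username VARCHAR2(30), logon_time DATE);
CREATE OR REPLACE TRIGGER logon_audit_trig
AFTER LOGON ON DATABASE
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;  -- commit the log row independently of the session
BEGIN
  INSERT INTO logon_log (username, logon_time) VALUES (USER, SYSDATE);
  COMMIT;
END;
/
```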

* Update, delete, and insert triggers

This is the second line of defence in trying to understand users' actions at a more detailed row level. Database triggers need to be written to capture changes at the column and row level. It is possible to write complete rows of data before and after the change being made to a log table in the database. The use of this type of logging is very resource intensive, as many extra records are written and stored. The one failing with this method is that read access cannot be captured with normal database triggers.

* Fine-grained audit

Fine-grained audit solves the problem of capturing read access. This feature is also based on internal triggers that fire when any piece of SQL is parsed. This is very efficient, as the SQL is parsed only once for both audit and execution. The feature uses predicates that are defined and tested each time the relevant object is accessed. Fine-grained audit is managed by a PL/SQL package called DBMS_FGA. A PL/SQL procedure is executed every time a "match" is made with the predicate. This method allows audit to be performed down to the row and column level, and also for read statements. Readers should be forewarned that use of this feature requires programming skills.
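A hedged sketch of defining such a policy with DBMS_FGA (the schema, table, policy name and predicate are illustrative assumptions):

```sql
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'HR',
    object_name     => 'SALARIES',
    policy_name     => 'SAL_READ_AUDIT',
    audit_condition => 'SALARY > 100000',  -- fire only for sensitive rows
    audit_column    => 'SALARY');
END;
/

-- Matching statements are then recorded in DBA_FGA_AUDIT_TRAIL
```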

* System logs

Oracle generates many log files and many of them can provide useful information to assist in auditing the database. One good example is the alert log used by the database to record start-up and shutdown as well as any structural changes such as adding a datafile to the database.

This paper is going to explore only the standard built-in audit commands. The other options will be left for future articles.

Some Examples

Because of the myriad of possibilities, auditing an Oracle database can be a daunting task. In order to try and simplify the discussion of what can be done, we will discuss a couple of simple examples that we will explore and work through.

* Auditing database access

This is a fundamental check to find out who accesses the database, from where, and when. Log-on failures can be captured, as well as log-ons at strange (anomalous) times of day.

* Auditing changes to the database structure

In a production database, no user should ever change the schema structure. DBAs should make changes for upgrades at specific times; any other changes should be regarded as suspicious. Watching for structural changes can turn up indicators of incorrect use of the database.

A third simple example that could have been employed here is to audit any use of system privileges. However, this example is left to the reader to explore.

The final group of audit commands that can be employed is to audit any data changes to objects themselves. Unfortunately, as the requirements are very application and installation specific, this is beyond the scope of this paper.

Audit within Oracle is broken into three areas: statement auditing, such as CREATE TABLE or CREATE SESSION; privilege auditing, such as ALTER USER; and object-level auditing, such as SELECT TABLE.

Basic Configuration

The audit trail can be either written to the database or to the operating system. Writing the audit trail to the operating system is, in some senses, more secure but the implementation is not available on all platforms and is platform specific. In this article we will concentrate on using the database to store the audit trail.

Audit is turned on for writing to the database by adding the following line to the init.ora file. A symbolic link to it can usually be found in $ORACLE_HOME/dbs

audit_trail = db

The database now needs to be restarted. A simple check will show that audit is indeed now turned on.

SQL> select name,value from v$parameter
2 where name like 'audit%';

NAME                           VALUE
------------------------------ ------------------------------
audit_trail                    DB
audit_file_dest                ?/rdbms/audit


No audit actions are captured until they are defined, with the exception of privileged access to the database, starting and stopping of the database, and structural changes such as adding a datafile. These events are logged to operating system files in $ORACLE_HOME/rdbms/audit unless audit_file_dest is redefined in the init.ora file. On Windows these events appear in the Event Viewer. To check whether any privilege or statement audit actions are enabled, do the following:

SQL> select * from dba_stmt_audit_opts
2 union
3 select * from dba_priv_audit_opts;

no rows selected


To find out what objects are being audited, query the view dba_obj_audit_opts.

The Worked Examples

Let us now work through our two example cases and see what can be learned. First, turn on audit for the access attempts to the database:

SQL> audit create session;

Audit succeeded.


The above command will capture access by all users, by access, whether successful or unsuccessful. The default for this command is by access.

Note: The format of all audit commands from the Oracle documentation is as follows:

audit {statement_option|privilege_option} [by user] [by
{session|access}] [ whenever {successful|unsuccessful}]

Only the statement_option or privilege_option part is mandatory. The other clauses are optional; enabling them allows the audit to be more specific.
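For instance (the username here is hypothetical), the optional clauses can narrow an audit to a single user and to failures only:

```sql
-- Hypothetical example: record only failed log-on attempts by SCOTT.
audit create session by scott whenever not successful;
```

Narrowing the audit this way keeps the volume of records in the audit trail down, at the cost of missing activity outside the chosen scope.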

For a user to define audit statements, the privilege "AUDIT SYSTEM" needs to have been granted first. The users that have this privilege can be checked as follows:

SQL> select *
2 from dba_sys_privs
3 where privilege like '%AUDIT%';

------------------------------ ---------------------------------------- ---

9 rows selected.


The above results are for a 9i database. The default users MDSYS, CTXSYS and WKSYS would likely be good targets for attackers, as audit actions could be turned off as one of these users to hide any actions undertaken.

Now that audit will capture all access attempts, we need to wait for some users to log in and do work. While they do that, let's set up audit to capture alterations to the schema. For brevity, not all schema object changes will be captured in this example. Changes to tables, indexes, clusters, views, sequences, procedures, triggers, libraries and many more can be captured. Here, audit will be enabled on an example set. Turning on the audit can be performed as a two-stage process: generate the audit commands and then run them, as follows:

set head off
set feed off
set pages 0
spool aud.lis
select 'audit '||name||';'
from system_privilege_map
where (name like 'CREATE%TABLE%'
or name like 'CREATE%INDEX%'
or name like 'CREATE%CLUSTER%'
or name like 'CREATE%SEQUENCE%'
or name like 'CREATE%PROCEDURE%'
or name like 'CREATE%TRIGGER%'
or name like 'CREATE%LIBRARY%')
union
select 'audit '||name||';'
from system_privilege_map
where (name like 'ALTER%TABLE%'
or name like 'ALTER%INDEX%'
or name like 'ALTER%CLUSTER%'
or name like 'ALTER%SEQUENCE%'
or name like 'ALTER%PROCEDURE%'
or name like 'ALTER%TRIGGER%'
or name like 'ALTER%LIBRARY%')
union
select 'audit '||name||';'
from system_privilege_map
where (name like 'DROP%TABLE%'
or name like 'DROP%INDEX%'
or name like 'DROP%CLUSTER%'
or name like 'DROP%SEQUENCE%'
or name like 'DROP%PROCEDURE%'
or name like 'DROP%TRIGGER%'
or name like 'DROP%LIBRARY%')
union
select 'audit '||name||';'
from system_privilege_map
where (name like 'EXECUTE%INDEX%'
or name like 'EXECUTE%PROCEDURE%'
or name like 'EXECUTE%LIBRARY%');
spool off

This will generate a set of audit commands that can be captured to a spool file, which is then run to enable the audit commands.

Another solution would be to audit the actual permissions granted to users by generating the audit commands from the database view dba_sys_privs. While this may seem a better solution and potentially involve fewer audit commands, it would not allow for the case when new permissions are granted to users. In that case, audit would also need to be enabled at the time the privileges are granted.
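A sketch of that alternative, using the same spool technique as above, might generate one audit command per granted system privilege; note that some generated commands may need manual pruning if a privilege cannot be audited directly:

```sql
-- Generate audit commands from the privileges actually granted,
-- skipping the SYS and SYSTEM accounts.
select distinct 'audit '||privilege||';'
from dba_sys_privs
where grantee not in ('SYS','SYSTEM');
```

The output would be spooled to a file and run, just like the system_privilege_map script above.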

Now that all of the sample audit is enabled, the settings can be viewed with this SQL:

1 select audit_option,success,failure
2 from dba_stmt_audit_opts
3 union
4 select privilege,success,failure
5* from dba_priv_audit_opts
SQL> /

---------------------------------------- ---------- ----------

38 rows selected.


Every time a user attempts anything in the database where audit is enabled, the Oracle kernel checks to see if an audit record should be created or updated (in the case of a session record) and generates the record in a table owned by the SYS user called AUD$. This table is, by default, located in the SYSTEM tablespace. This in itself can cause problems with potential denial of service attacks: if the SYSTEM tablespace fills up, the database will hang.

The AUD$ table is unusual, as it is the only SYS-owned table from which Oracle allows records to be deleted. If the audit trail is turned on and written to the database, then the number of records in this table needs to be monitored carefully to ensure it doesn't grow too fast and fill the SYSTEM tablespace. A purging strategy needs to be adopted to keep the size of the table in check and, if needed, to archive off audit trail records for future reference. One tactic could be to copy the records to summary tables that allow specific checks for abuse to be performed offline. These summary tables can be in a separate database for added security. Once copied, SYS.AUD$ can be truncated.
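A minimal sketch of such a purge cycle (the archive table name is hypothetical, and in practice the copy and the truncate would need to be coordinated so that no records arriving in between are lost):

```sql
-- First run: copy the current audit records to a hypothetical archive table.
create table audit_archive as select * from sys.aud$;

-- Subsequent runs would append instead:
-- insert into audit_archive select * from sys.aud$;

-- Then empty the live audit trail.
truncate table sys.aud$;
```

Keeping the archive in a separate database, as suggested above, also stops an attacker with access to the production instance from rewriting history.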

SYS.AUD$ can be moved to a tablespace other than SYSTEM, but check with Oracle support first, as this action is no longer supported.

Only users who have been granted specific access to SYS.AUD$ can read, alter or delete from it. This is usually just the user SYS or any user to whom permissions have been granted. There are two specific roles that allow select and delete access to SYS.AUD$: SELECT_CATALOG_ROLE and DELETE_CATALOG_ROLE. These roles should not be granted to general users.

Back to the examples: our users have been logging on and working throughout the day, creating some audit records. These audit records can be viewed in a number of ways:

* By selecting from SYS.AUD$ - This is the raw audit trail
* By selecting from dba_audit_trail - This is a DBA view showing the raw audit trail.
* By selecting from dba_audit_session - This view shows just log-on and log-off actions.

A simple piece of SQL can show details of the connection attempts:

SQL> get check_create_session
1 --
2 -- check_create_session.sql
3 --
4 col username for a15
5 col terminal for a6
6 col timestamp for a15
7 col logoff_time for a15
8 col action_name for a8
9 col returncode for 9999
10 select username,
11 terminal,
12 action_name,
13 to_char(timestamp,'DDMMYYYY:HHMISS') timestamp,
14 to_char(logoff_time,'DDMMYYYY:HHMISS') logoff_time,
15 returncode
16* from dba_audit_session
SQL> /
--------------- ------ -------- --------------- --------------- ----------
SYS pts/1 LOGOFF 09042003:051046 09042003:051641 0
ZULIA pts/1 LOGON 09042003:051641 1017
SYS pts/1 LOGOFF 09042003:051649 09042003:053032 0
SYS pts/2 LOGOFF 09042003:052622 09042003:053408 0
ZULIA pts/1 LOGON 09042003:053032 1017

There are a number of simple abuses that can be checked for in the area of user access to the database. As examples for this paper we will look at the following:

* Failed log-on attempts

This can indicate fat fingers or attackers' attempts to gain unauthorized access to the database. The following SQL highlights this:

SQL> select count(*),username,terminal,to_char(timestamp,'DD-MON-YYYY')
2 from dba_audit_session
3 where returncode<>0
4 group by username,terminal,to_char(timestamp,'DD-MON-YYYY');

---------- --------------- ------ -----------
1 BILL pts/3 09-APR-2003
3 FRED pts/3 09-APR-2003
4 ZULIA pts/1 09-APR-2003


This shows two possible abuses. The first is the user ZULIA attempting to log on and failing four times on the same day. This could be a forgotten password or it could be someone trying to guess his or her password. A change to the SQL as follows gives a bit more detail:

SQL> select count(*),username,terminal,to_char(timestamp,'DD-MON-YYYY'),returncode
2 from dba_audit_session
3 group by username,terminal,to_char(timestamp,'DD-MON-YYYY'),returncode;

---------- --------------- ------ ----------- ----------
1 BILL pts/3 09-APR-2003 1017
1 EMIL pts/1 09-APR-2003 0
1 EMIL pts/2 09-APR-2003 0
1 EMIL pts/3 09-APR-2003 0
1 EMIL pts/4 09-APR-2003 0
3 FRED pts/3 09-APR-2003 1017
3 SYS pts/1 09-APR-2003 0
1 SYS pts/2 09-APR-2003 0
1 SYSTEM pts/5 09-APR-2003 0
4 ZULIA pts/1 09-APR-2003 1017
1 ZULIA pts/1 09-APR-2003 0

11 rows selected.


This reveals that the user ZULIA successfully logged on from the same terminal on the same day. A threshold number of failed log-ons should be agreed as part of these checks and the above SQL run every day. Users with failure counts above the threshold should be investigated.
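A sketch of such a daily threshold check (the threshold of three here is an assumed value; agree the real figure as part of local policy):

```sql
-- Flag users exceeding an agreed failure threshold on any one day.
select username, terminal,
       to_char(timestamp,'DD-MON-YYYY') audit_day,
       count(*) failures
from dba_audit_session
where returncode <> 0
group by username, terminal, to_char(timestamp,'DD-MON-YYYY')
having count(*) > 3;
```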

* Attempts to access the database with non-existent users

One interesting extension to the above SQL is to find attempts to log in where the user doesn't exist. An audit record is still created in this case. The following SQL illustrates:

SQL> select username,terminal,to_char(timestamp,'DD-MON-YYYY HH24:MI:SS')
2 from dba_audit_session
3 where returncode<>0
4 and not exists (select 'x'
5 from dba_users
6 where dba_users.username=dba_audit_session.username)
SQL> /

--------------- ------ --------------------
FRED pts/3 09-APR-2003 17:31:47
FRED pts/3 09-APR-2003 17:32:02
FRED pts/3 09-APR-2003 17:32:15
BILL pts/3 09-APR-2003 17:33:01


This is probably abuse. All attempts to log on with a user that doesn't exist should be checked each day and investigated.

* Attempts to access the database at unusual hours

Checks should be made for any attempts to access the database outside of working hours. These accesses could be genuine overtime work or maintenance but they could just as easily be unauthorized access attempts and should be checked as follows:

SQL> select username,
2 terminal,
3 action_name,
4 returncode,
5 to_char(timestamp,'DD-MON-YYYY HH24:MI:SS'),
6 to_char(logoff_time,'DD-MON-YYYY HH24:MI:SS')
7 from dba_audit_session
  8  where to_date(to_char(timestamp,'HH24:MI:SS'),'HH24:MI:SS') < to_date('08:00:00','HH24:MI:SS')
  9  or to_date(to_char(timestamp,'HH24:MI:SS'),'HH24:MI:SS') > to_date('19:30:00','HH24:MI:SS')
SQL> /

---------- ------ -------- ---------- -------------------- --------------------
SYS pts/1 LOGOFF 0 09-APR-2003 20:10:46 09-APR-2003 20:16:41
SYSTEM pts/5 LOGOFF 0 09-APR-2003 21:49:20 09-APR-2003 21:49:50
ZULIA pts/5 LOGON 0 09-APR-2003 21:49:50
EMIL APOLLO LOGON 0 09-APR-2003 22:49:12


The above SQL shows any connections before 8:00 AM and after 7:30 PM. Any connections, particularly those made by privileged users such as SYS and SYSTEM, should be investigated. Particular attention should be paid to the location from which the access was made. For instance, if privileged access is made from machines that are not in the administrator department, the administrator needs to find out why.

* Check for users sharing database accounts

The following SQL looks for users who are potentially sharing database accounts:

SQL> select count(distinct(terminal)),username
2 from dba_audit_session
3 having count(distinct(terminal))>1
4 group by username
SQL> /

------------------------- ----------

This shows that three users have accessed their accounts from more than one location. A further check could be to add a time component to see if the accounts are accessed simultaneously, and also to restrict the check per day. The above SQL gives some idea of the potential without complicating it too much. Again, these accounts and users should be investigated.
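The per-day restriction suggested above could be sketched as follows:

```sql
-- Hypothetical extension: distinct terminals used per account per day.
select username,
       to_char(timestamp,'DD-MON-YYYY') audit_day,
       count(distinct terminal) terminals
from dba_audit_session
group by username, to_char(timestamp,'DD-MON-YYYY')
having count(distinct terminal) > 1;
```

Checking for overlapping session times between terminals would need a self-join on the timestamp and logoff_time columns, which is left as a further exercise.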

* Multiple access attempts for different users from the same terminal

The final example checks to find where multiple database accounts have been used from the same terminal. The SQL is again simple and could be extended to group by day and also to print out the users per terminal. This is a simple test to illustrate the abuse idea:

SQL> select count(distinct(username)),terminal
2 from dba_audit_session
3 having count(distinct(username))>1
4 group by terminal
SQL> /

------------------------- ------
3 pts/1
2 pts/2
3 pts/3
3 pts/5


This could indicate someone trying to gain access by trying many accounts and passwords, or it could indicate legitimate users sharing accounts for certain aspects of their work. In either case, the admin should investigate further.
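The extension suggested above, grouping by day, could be sketched as follows (listing the individual account names per terminal would need a further join against the same view):

```sql
-- Hypothetical extension: distinct accounts used from each terminal per day.
select terminal,
       to_char(timestamp,'DD-MON-YYYY') audit_day,
       count(distinct username) accounts
from dba_audit_session
group by terminal, to_char(timestamp,'DD-MON-YYYY')
having count(distinct username) > 1;
```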

There are, of course, many other scenarios that could indicate possible abuses. Checking for those is as simple as the cases depicted above. It will be left to the reader to experiment. Let me know what you find useful.

The second example case that audit actions were set for is to detect changes made to the database schema. This could include new objects being added or attempts to change existing objects within the database.

A simple piece of SQL will show any audit trail items that relate to objects being created or changed as follows:

col username for a8
col priv_used for a16
col obj_name for a22
col timestamp for a17
col returncode for 9999
select username,
       priv_used,
       obj_name,
       to_char(timestamp,'DD-MON-YYYY HH24:MI') timestamp,
       returncode
from dba_audit_trail
where priv_used is not null
and priv_used<>'CREATE SESSION';

SQL> @check_obj.sql


This simple example shows that the user ZULIA has created a table and the user PETE has been writing PL/SQL procedures. Any changes such as this that are found should be investigated in a production database. Many more specific abuses can be checked for in relation to object and schema changes but, in general, no user should be able to alter the database schema in a production database. As a result, the check can remain pretty simple.

Protecting the Database Against These Abuses

The two examples given are just two of many possible scenarios that could be detected using Oracle's auditing facilities. Turning on and managing audit is one of the first steps to securing the database. Using audit should be part of an overall organization security plan and policy that includes Oracle. The database should be audited regularly for misconfiguration or known vulnerabilities that could allow security breaches to take place.

Because of its complex nature and the vast number of different ways it can be used and configured, the best approach to securing Oracle will always be to follow the principle of least privilege. Once the database is part of the overall security plan and is configured correctly and checked regularly, then auditing it should be considered an important part of the strategy.

In general, do not grant any privileges to general users in a production database, remove most of the PUBLIC privileges and delete or lock and change the passwords of any default accounts. Ensure that users obey password policies and that the password management features of Oracle are employed.

It is important that the audit actions are planned from a performance and usability point of view and that the audit trail is managed. It is also important that the audit trail data is understood in terms of detecting abuse.

The author's recent book, published by the SANS Institute, "Oracle Security Step-by-Step - A Survival Guide for Oracle Security," gives excellent guidelines on how to configure Oracle securely.


Oracle's auditing features are very powerful and sometimes seem very complex. As we saw in the introduction, there is more than one option available for auditing an Oracle database. It is possible to audit almost everything in the Oracle RDBMS with the standard features but not at the row level. If a high-level audit is needed, use the standard features to get a view of overall activity and then home in on the area of concern in more detail.

Because it is possible to audit almost any type of action in an Oracle database using the standard audit features, the reader should experiment with the most useful audit actions for their organization. Keep it simple and do not try to use everything. Above all, predetermine what data will be generated in the audit trail and the abuses that can be checked for. Write reports to check the audit trail and purge it regularly. Finally, monitor the reports each day and take the appropriate action.

For more detailed auditing, use database triggers and fine-grained audit. Keep in mind that both of these methods need programming skills to implement and report on, so they should be considered carefully. A lot of useful information can be gathered without resorting to row-level audit. Above all, employ the principle of least privilege to avoid any users making changes or reading data that they should not.

Tuesday, January 8, 2008

10 Scripts Every DBA Should Have

I. Display the Current Archivelog Status :


II. Creating a Control File Trace File


III. Tablespace Free Extents and Free Space

column Tablespace_Name format A20
column Pct_Free format 999.99