
Monday, December 24, 2007

How to create a database in a 2-node RAC?

Database creation in a RAC environment (2 instances) — you cannot always rely on dbca.
Create a parameter file and place it in a shared location
(O:\TEST\INITTEST.ORA is the shared location here) as follows,
so that both RAC instances can share it. Point to this pfile/spfile from each instance's individual home/dbs.
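For example, each node's small local parameter file can simply point at the shared one. The file names and the idea of using ifile/spfile pointers follow standard practice; the exact local path (under %ORACLE_HOME%\database on Windows) is an assumption, not something shown in this post:

```
# inittest1.ora on node 1 (local file, path assumed)
ifile=O:\TEST\INITTEST.ORA

# or, if a server parameter file is used instead:
# spfile=O:\TEST\SPFILETEST.ORA
```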

# START OF INITTEST.ORA

*.aq_tm_processes=1
*.test1.background_dump_dest='D:\Oracle\admin\test1\bdump'
*.test2.background_dump_dest='D:\Oracle\admin\test2\bdump'

*.cluster_database_instances=2
*.cluster_database=TRUE

*.compatible='9.2.0.0.0'
*.control_files='O:\test\control01.ctl','O:\test\control02.ctl','O:\test\control03.ctl'
*.test1.core_dump_dest='D:\Oracle\admin\test1\cdump'
*.test2.core_dump_dest='D:\Oracle\admin\test2\cdump'
*.db_block_size=8192
*.db_cache_size=25165824
*.db_file_multiblock_read_count=16
*.db_name='test'

test1.instance_name='test1'
test2.instance_name='test2'
test1.instance_number=1
test2.instance_number=2

*.job_queue_processes=1

*.pga_aggregate_target=25165824
*.processes=150
*.remote_login_passwordfile='exclusive'
*.shared_pool_size=150994944
test1.thread=1
test2.thread=2
*.undo_management='AUTO'
*.undo_retention=10800
#test2.undo_tablespace='UNDOTBS2'
test1.undo_tablespace='UNDOTBS1'
test1.user_dump_dest='D:\Oracle\admin\test1\udump'
test2.user_dump_dest='D:\Oracle\admin\test2\udump'

# END INITTEST.ORA




AT RAC NODE 1

SQL>STARTUP NOMOUNT PFILE='O:\TEST\INITTEST.ORA'
Then issue the CREATE DATABASE statement:

CREATE DATABASE test
MAXINSTANCES 2
--MAXLOGHISTORY
--MAXLOGFILES 192
--MAXLOGMEMBERS 3
--MAXDATAFILES 1024
CONTROLFILE REUSE
DATAFILE 'o:\test\system01.dbf' SIZE 200M REUSE AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE 'o:\test\temp01.dbf' SIZE 5M REUSE AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED
UNDO TABLESPACE "UNDOTBS1" DATAFILE 'o:\test\undotbs01.dbf' SIZE 20M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
CHARACTER SET WE8MSWIN1252
NATIONAL CHARACTER SET AL16UTF16
LOGFILE 'o:\test\redo01.log' SIZE 10240K REUSE,
        'o:\test\redo02.log' SIZE 10240K REUSE;
Database created.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>

SQL> SELECT * FROM V$LOG;

GROUP#  THREAD#  SEQUENCE#      BYTES  MEMBERS  ARC  STATUS    FIRST_CHANGE#  FIRST_TIM
------  -------  ---------  ---------  -------  ---  --------  -------------  ---------
     1        1          1   10485760        1  NO   INACTIVE              1  06-OCT-06
     2        1          2   10485760        1  NO   CURRENT            7129  06-OCT-06

SQL> SELECT MEMBER FROM V$LOGFILE ;

MEMBER
---------------------------------------------------------------------------------------------------------------------------
O:\TEST\REDO01.LOG
O:\TEST\REDO02.LOG




SQL> ALTER DATABASE ADD LOGFILE THREAD 2 'O:\TEST\REDO03.LOG' ;

Database altered.

SQL> ALTER DATABASE ADD LOGFILE THREAD 2 'O:\TEST\REDO04.LOG' ;

Database altered.

SQL> SELECT * FROM V$LOG;

GROUP#  THREAD#  SEQUENCE#      BYTES  MEMBERS  ARC  STATUS    FIRST_CHANGE#  FIRST_TIM
------  -------  ---------  ---------  -------  ---  --------  -------------  ---------
     1        1          1   10485760        1  NO   INACTIVE              1  06-OCT-06
     2        1          2   10485760        1  NO   CURRENT            7129  06-OCT-06
     3        2          0   10485760        1  YES  UNUSED                0
     4        2          0   10485760        1  YES  UNUSED                0

SQL> ALTER DATABASE ENABLE THREAD 2 ;
Database altered.


*************** NOW OPEN INSTANCE 2 ***************
*********** THEN QUERY AGAIN HERE ***********


SQL> SELECT * FROM V$LOG;

GROUP#  THREAD#  SEQUENCE#      BYTES  MEMBERS  ARC  STATUS    FIRST_CHANGE#  FIRST_TIM
------  -------  ---------  ---------  -------  ---  --------  -------------  ---------
     1        1          1   10485760        1  NO   INACTIVE              1  06-OCT-06
     2        1          2   10485760        1  NO   CURRENT            7129  06-OCT-06
     3        2          1   10485760        1  NO   CURRENT            8271  06-OCT-06
     4        2          0   10485760        1  YES  UNUSED                0


Now don't forget to create and assign an undo tablespace for the second instance.
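A sketch of that step, run from the open instance 1 (the datafile path and sizes are assumptions following the pattern used above, not tested here): create the tablespace, then uncomment the test2.undo_tablespace line in INITTEST.ORA and restart instance 2.

```sql
-- Undo tablespace for instance 2 (path and sizes assumed)
CREATE UNDO TABLESPACE UNDOTBS2
  DATAFILE 'O:\test\undotbs02.dbf' SIZE 20M REUSE
  AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED;
```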

Sunday, December 23, 2007

great scenario for 2gbps insertion speed......


Please look at:
http://forums.oracle.com/forums/thread.jspa?threadID=533820&tstart=-2

BT (British Telecom) ran an 82-node (or more) OPS cluster of Pyramid Nil

Query ...

An old DBA friend asked about waits caused by the wait event "os thread startup" found in his 10g RAC (4 nodes on Solaris with ASM).


SELECT DECODE(session_state, 'WAITING', event, NULL) event,
       session_state, COUNT(*), SUM(time_waited) time_waited
FROM   gv$active_session_history
WHERE  sample_time > SYSDATE - 1
GROUP BY DECODE(session_state, 'WAITING', event, NULL), session_state
ORDER BY time_waited DESC;

Node 1

EVENT                                                   SESSION_STATE    COUNT(*)  TIME_WAITED
------------------------------------------------------  -------------  ----------  -----------
enq: TX - row lock contention                           WAITING              3517   1734721441
os thread startup                                       WAITING               743    681425655
log file sync                                           WAITING              3557    530889167
DFS lock handle                                         WAITING              1131    528346071
db file sequential read                                 WAITING             34162    502368757
PX Deq: Signal ACK                                      WAITING              3917    406436176
reliable message                                        WAITING               283    251896304
gc buffer busy                                          WAITING               580    175092687
db file parallel read                                   WAITING              2606    169615544
Streams AQ: qmn coordinator waiting for slave to start  WAITING               123    121679315


I'm concluding based on my experience with parallel queries (on a test
machine with a single CPU, so there was no benefit in my case even though I had RAC).

I see there are PX events indicating your RAC database instances run
parallel query/DML executions, but I guess the "os thread startup" wait
event has nothing to do with the RAC environment. This event indicates the
wait upon starting an OS process (thread) for a query slave to execute a
parallel query. If you set the init parameter parallel_min_servers
sufficiently high, you may slightly cut this overhead, but that is not
recommended as a blanket rule of thumb; instead, check the trade-off if you
see significant waits on this event.
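For example, a sketch of checking the current setting and raising it (the value 8 is purely illustrative):

```sql
-- Current parallel server settings
SELECT name, value
FROM   v$parameter
WHERE  name IN ('parallel_min_servers', 'parallel_max_servers');

-- Pre-spawn slaves so sessions do not pay the OS thread startup cost
ALTER SYSTEM SET parallel_min_servers = 8 SCOPE=BOTH SID='*';
```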

Making 10g RAC ready for the worst

Focusing on the vulnerable part of RAC:
RAC gives you high availability and performance, but what if you don't have
clustered (redundant) disks and your voting disk or cluster registry fails?
From 10g R2 these can be mirrored, but you should still regularly back up
the voting disk and OCR, and be careful to run integrity tests.
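Those backups can be sketched as follows. The ocrconfig options and the use of dd are standard, but the paths are illustrative and the exact device names depend on your setup:

```
# Show where CRS has been taking automatic OCR backups
ocrconfig -showbackup

# Take a logical export of the OCR (path illustrative)
ocrconfig -export /backup/ocr_export.dmp

# On Unix, the voting disk can be copied with dd (device illustrative)
dd if=/dev/raw/raw_votedsk of=/backup/votedsk.bak
```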


Manual mirroring of the voting disk:

query for current voting disk :
C:\>crsctl query css votedisk
0. 0 P:\cdata\crs\votedsk

located 1 votedisk(s).

adding redundant voting disk:
C:\>crsctl add css votedisk Q:\cdata\crs\votedsk
Cluster is not in a ready state for online disk addition

C:\>crsctl add css votedisk Q:\cdata\crs\votedsk -force
Now formatting voting disk: Q:\cdata\crs\votedsk
successful addition of votedisk Q:\cdata\crs\votedsk.

C:\>crsctl query css votedisk
0. 0 P:\cdata\crs\votedsk
1. 0 Q:\cdata\crs\votedsk

located 2 votedisk(s).

C:\>crsctl add css votedisk O:\cdata\crs\votedsk
Cluster is not in a ready state for online disk addition

Investigate (is CRS running or stopped?)

C:\>crsctl add css votedisk O:\cdata\crs\votedsk -force
Now formatting voting disk: O:\cdata\crs\votedsk
successful addition of votedisk O:\cdata\crs\votedsk.


running integrity test
C:\>cluvfy comp ocr -n all

Verifying OCR integrity

Checking OCR integrity...

Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.

Uniqueness check for OCR device passed.

Checking the version of OCR...
OCR of correct Version "2" exists.

Checking data integrity of OCR...
Data integrity check for OCR passed.

OCR integrity check passed.

Verification of OCR integrity was successful.

10g-r2-rac-on-vmware

10g RAC installed on Windows 2003 Enterprise Edition using VMware.

The links below were a major help to me.

http://www.oracle-base.com/articles/10g/OracleDB10gR2RACInstallationOnWindows2003UsingVMware.php

http://www.dbasupport.com/oracle/ora10g/RACingAhead0101.shtml

http://forums.oracle.com/forums/thread.jspa?messageID=1123638


http://www.idevelopment.info/data/Oracle/DBA_tips/Oracle10gRAC/CLUSTER_14.shtml#Overview

http://www.oracle.com/technology/obe/10gr2_db_vmware/manage/clusterintro/clusterintro.htm



some more useful links:


-- Listed in the order I found them and kept in store for you --
http://www.vmware.com/community/thread.jspa?messageID=737640

Saturday, December 22, 2007

The Server Parameter File in a Real Application Clusters Environment

PURPOSE
-------

The purpose of this bulletin is to demonstrate the usage of the server
parameter file, a new feature in 9i, in a Real Application Clusters
environment.


SCOPE & APPLICATION
-------------------

This article is for database administrators of RAC environments looking to
implement the server parameter file feature in their Real Application
Clusters environment.


THE SERVER PARAMETER FILE IN A REAL APPLICATION CLUSTERS ENVIRONMENT
--------------------------------------------------------------------

In 9i there is a new feature called the server parameter file that will allow
you to store a binary copy of your initialization parameter file. This feature
will allow you to make permanent dynamic changes to parameters within the
database while allowing you to change certain parameters in memory only. For
Oracle Real Application Clusters you can use one server parameter file and
share it between instances.


Creating a server parameter file in a Real Application Clusters Environment:

1. Create a text parameter file that separates global parameters from
instance-specific parameters. For the instance-specific parameters, specify
sid.parameter=value; for the global parameters, just use parameter=value. For example:

# Example of Global Parameters

db_name = "V9I"
control_files = ("/ora9i/oradata/V9I/v910ctl1", "/ora9i/oradata/V9I/v910ctl2")
parallel_server=true

# Example of Instance Specific Parameters for Node 1:

V9IM.instance_name = V9IM
V9IM.instance_number = 1
V9IM.thread =1
V9IM.rollback_segments = (rbs01_t1, rbs02_t1, rbs03_t1, rbs04_t1, rbs05_t1)

# Example of Instance Specific Parameters for Node 2:

V9IZ.instance_name = V9IZ
V9IZ.instance_number = 2
V9IZ.thread =2
V9IZ.rollback_segments = (rbs01_t2, rbs02_t2, rbs03_t2, rbs04_t2, rbs05_t2)

2. Create a raw partition for your spfile using platform specific commands.

3. Start up the instance if it is not already started and issue the following
statement to create your spfile on your raw partition:

create spfile='/dev/vx/rdsk/Stripe12/v910init'
from pfile='/ora9i/oradata/V9I/pfile/spfileV9I.ora';

4. For easy manageability you can create an initSID.ora file in your local
$ORACLE_HOME/dbs directory that points to your spfile:

spfile=/dev/vx/rdsk/Stripe12/v910init

5. At this point you should be able to start up using your spfile:

SQL> shutdown immediate
SQL> startup

6. You can then backup the file you used to create your spfile to avoid
confusion:

% mv /ora9i/oradata/V9I/pfile/spfileV9I.ora /ora9i/oradata/V9I/pfile/spfile.bak


Creating a pfile from your spfile:

With the spfile feature you can dump the contents of your spfile back into a
text initialization file. For Example:

create pfile='/ora9i/oradata/V9I/pfile/spfileV9I.ora'
from spfile='/dev/vx/rdsk/Stripe12/v910init';

You can use this feature to backup your spfile.


Viewing parameters in the spfile:

select name, value from v$spparameter;


Changing spfile parameters dynamically:

You can change spfile parameters with alter system commands. You can specify
the scope to determine whether to change the parameter in memory, in your
spfile, or both. Some static parameters like db_name will not allow you to
change the parameter in memory. You can also define the SID to change
instance-specific parameters.

o scope can be set to memory, spfile, or both
o sid can be set to define one specific instance

For example:

SQL> alter system set job_queue_processes=30 scope=both sid='V9IZ';

To reset parameters back to their default settings you can issue:

SQL> alter system reset job_queue_processes scope=both sid='V9IZ';

Recreating the Controlfile in RAC

PURPOSE
-------
This article describes how you can recreate your controlfile in RAC.


SCOPE & APPLICATION
-------------------
For DBAs who need to recreate the controlfile.

WARNING:
--------

You should only need to recreate your control file under very special
circumstances:

- All current copies of the control file have been lost or are corrupted.

- You need to change a "hard" database parameter that was set when the
database was first created, such as MAXDATAFILES, MAXLOGFILES,
MAXLOGHISTORY, etc.

- You are restoring a backup in which the control file is corrupted or
missing.

- Oracle Customer Support advises you to do so.

- If you are moving your database to another machine which is
running the same operating system, but the locations of the datafiles
and logfiles are not the same.


RECREATING THE CONTROLFILE IN RAC
---------------------------------

If you are recreating your controlfile from an existing one, use the following
steps to recreate your controlfiles. If you have lost all copies of the
controlfile, a new one will need to be generated using SQL. The syntax
is available in the SQL Reference manual for all versions but consideration
for Step 4 onward must be taken into account.

1. Connected to an open or mounted RAC instance via sqlplus, issue the
following command to dump a trace file that contains a create controlfile
script. The file will be generated in the user_dump_dest on the local
instance (use "show parameter user_dump_dest" in SQL*Plus to find it):

SQL> alter database backup controlfile to trace;

2. Find the trace file using "ls -ltr" in the user_dump_dest, it will
probably be the last or one of the last files listed as it will be very
recent. At this point you may want to move or rename the file to an easy
to remember name. In my example I use the name "create_control.sql".

3. Once the file is opened, remove all of the header information
up to the "STARTUP NOMOUNT" command. In later versions (10g+) the trace
contains both a "NORESETLOGS" and a "RESETLOGS" version of the CREATE
CONTROLFILE statement; for RAC, remove the "NORESETLOGS" version and keep
the "RESETLOGS" version.

If your redo logs still exist in the correct locations and the create
controlfile statement contains the appropriate "alter database add logfile"
statements, skip to step 5. If your redo logs have been removed or need to
be recreated, continue:

At this point the controlfile should look something like this:

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "RAC" NORESETLOGS NOARCHIVELOG
MAXLOGFILES 192
MAXLOGMEMBERS 3
MAXDATAFILES 1024
MAXINSTANCES 32
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/ocfs01/rac/redo01.log' SIZE 50M,
GROUP 2 '/ocfs01/rac/redo02.log' SIZE 50M,
GROUP 3 '/ocfs01/rac/redo03.log' SIZE 50M,
GROUP 4 '/ocfs01/rac/redo04.log' SIZE 50M
-- STANDBY LOGFILE
DATAFILE
'/ocfs01/rac/system01.dbf',
'/ocfs01/rac/undotbs01.dbf',
'/ocfs01/rac/sysaux01.dbf',
'/ocfs01/rac/undotbs02.dbf',
'/ocfs01/rac/users01.dbf'
CHARACTER SET WE8ISO8859P1
RECOVER DATABASE
ALTER DATABASE OPEN;
ALTER TABLESPACE TEMP ADD TEMPFILE '/ocfs01/rac/temp01.dbf'
SIZE 167772160 REUSE AUTOEXTEND OFF;

4. Now the create controlfile script needs to be altered for RAC.

a. Notice that in the script all of the logfiles are listed together. We
will need to separate these out by thread. First remove all logfiles listed
that are not from the 1st instance. Once these are removed, after the
create controlfile statement add an "alter database add logfile thread"
statement(s) for each thread of redo to be added (usually 1 per node).

b. Because not all of the logfiles are listed (additional threads added
after the controlfile is created), you will need to use the RESETLOGS
option on the create controlfile statement. This is necessary in RAC
and will reset your scn's back to 0. It is highly recommended to take
a full backup of the database after completing this procedure.

c. We must now set the appropriate recovery commands for the RESETLOGS.
If all datafiles are consistent and no additional recover is required
on the database you can simply place the following commands at the bottom
of the script:

RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
CANCEL
ALTER DATABASE ADD LOGFILE...
ALTER DATABASE OPEN RESETLOGS;

If additional datafile recovery is needed, you will need to gather information
from the existing controlfile (mount and query V$log and V$logfile) to get the
full path and file name for each online redo log. You will need to manually
run the recovery (take it out of the create controlfile script) and when recovery
prompts for an archive log that does not exist you will need to type in the full
path and file name for the online log that corresponds with the sequence number
requested. When this is finished you should get a "Media Recovery Complete"
message. For example:

RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
# recovery prompts for non-existent logfile arch_123.log
/u01/redo_log_dest/redo123.log
# Should now get "Media Recovery Complete" message after online logs are applied.
ALTER DATABASE OPEN RESETLOGS;
ALTER DATABASE ENABLE PUBLIC THREAD 2;
# repeat for other threads if applicable

d. After the alter database open command, add an "alter database enable
public thread #" command(s). Do this for each additional thread to be added.

Now the create controlfile script should look something like the following:

set echo on
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "RAC" RESETLOGS NOARCHIVELOG
MAXLOGFILES 192
MAXLOGMEMBERS 3
MAXDATAFILES 1024
MAXINSTANCES 32
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/ocfs01/rac/redo01.log' SIZE 50M,
GROUP 2 '/ocfs01/rac/redo02.log' SIZE 50M
-- STANDBY LOGFILE
DATAFILE
'/ocfs01/rac/system01.dbf',
'/ocfs01/rac/undotbs01.dbf',
'/ocfs01/rac/sysaux01.dbf',
'/ocfs01/rac/undotbs02.dbf',
'/ocfs01/rac/users01.dbf'
CHARACTER SET WE8ISO8859P1
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
CANCEL
ALTER DATABASE ADD LOGFILE THREAD 2
GROUP 3 '/ocfs01/rac/redo03.log' SIZE 50M REUSE,
GROUP 4 '/ocfs01/rac/redo04.log' SIZE 50M REUSE;
ALTER DATABASE OPEN RESETLOGS;
ALTER TABLESPACE TEMP ADD TEMPFILE '/ocfs01/rac/temp01.dbf'
SIZE 167772160 REUSE AUTOEXTEND OFF;

5. Shutdown all instances cleanly with shutdown immediate, shutdown
transactional, or shutdown normal.

6. This would be a good time to make backup copies of the current
controlfiles.

7. Verify that you are running from the instance designated for thread 1.
This is because any logfiles designated in the create controlfile statement
will go into thread 1. You will get errors if you are running from another
instance. Make sure that the local init/spfile file states:

instance_number=1
thread=1

8. Make sure the cluster_database=false parameter is set in the init/spfile
to re-create the controlfile.

9. Now we are ready to run the script. Connect to server manager on Node
1 and as internal or sys and execute the script:

SQL> @create_control.sql

If you get the "Statement processed" message, the database will be
opened with a brand new control file.

10. Make sure the cluster_database=true parameter is set in the init/spfile.

11. Start other instances.

12. At the earliest convenience, take a full backup of the database

Monday, November 19, 2007

Converting NOARCHIVELOG to ARCHIVELOG mode in RAC

The Steps that we followed while enabling archivelog mode in a RAC database.

steps:
-1) Backup the database!
0) Shut down all instances of the database, except the one upon which
the changes will be made.
1) alter system set cluster_database=false scope=spfile;
-- verification
---------------
SQL> show parameter cluster_database

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database                     boolean     TRUE
cluster_database_instances           integer     2

2) Shut down and start up the same instance in exclusive MOUNT mode:
ORACLE instance started.

Total System Global Area 599785472 bytes
Fixed Size 780280 bytes
Variable Size 166729736 bytes
Database Buffers 432013312 bytes
Redo Buffers 262144 bytes
Database mounted.

-- verification
---------------
SQL> show parameter cluster_database

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database                     boolean     FALSE
cluster_database_instances           integer     1


3) Set the required parameters.
SQL> ALTER SYSTEM SET log_archive_start=TRUE scope=SPFILE;

System altered.

SQL> ALTER SYSTEM SET log_archive_format='arch_%t_%s_%r.arc' scope=SPFILE;

System altered.

SQL> ALTER SYSTEM SET log_archive_dest='/u01/app/oracle/oradata/bmc/arch/'
scope=SPFILE;

System altered.

4) shut the database and STARTUP MOUNT

SQL> SELECT name,open_mode FROM v$database;

NAME OPEN_MODE
--------- ----------
BMC MOUNTED

5) Enable ARCHIVELOG mode

SQL> alter database archivelog;

Database altered.

SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u01/app/oracle/oradata/bmc/arch/
Oldest online log sequence 16
Next log sequence to archive 17
Current log sequence 17
SQL> select name,open_mode from v$database;

NAME OPEN_MODE
--------- ----------
BMC MOUNTED

6) Set cluster_database=true again.
SQL> alter system set cluster_database=true scope=spfile;

System altered.

7) Shut down and restart all the instances.

Our database is now in ArchiveLog Mode.

The above process has been tried and tested on our database. If anyone out
there suggests a better way to do the same, please feel free to let me know.
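As a quick cluster-wide sanity check after restarting all the instances, gv$database returns one row per instance, so the log mode can be confirmed everywhere at once (a minimal sketch):

```sql
SELECT inst_id, name, log_mode FROM gv$database;
```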

Migrating to the Server Parameter File in Real Application Clusters Environments

Migrate to the server parameter file by creating and editing the server parameter file using the procedures described in this section.

Server Parameter File Placement in Real Application Clusters
For single-node cluster-enabled configurations, or if you are using a cluster file system, place the server parameter file on a file system. Otherwise, place the server parameter file on a shared raw device that is at least 5MB in size.

Procedures for Migrating to the Server Parameter File
Migrate to the server parameter file by completing the following procedures:


Note:

The following procedures show examples for Linux systems only.


Combine the initialization parameter files for all instances into one initdbname.ora file by copying all shared IFILE contents as is. All parameters defined in your IFILE parameter files are global. Therefore, create them as "parameter=value" without sid prefixes.

Copy all instance-specific parameter definitions from initsid.ora files using the following syntax where sid is the sid of the instance:

sid.parameter=value
If you are using a cluster file system, then create the server parameter file using the CREATE SPFILE statement as in the following example:

CREATE SPFILE='?/dbs/spfile_dbname.ora'
FROM PFILE='?/dbs/initdbname.ora'

If you use ASM, then use the following syntax to create the server parameter file:

CREATE SPFILE='+disk_group_name/db_uniquename/spfiledbname.ora'
FROM PFILE='?/dbs/initdbname.ora'

If you use raw storage, then use the following syntax to create the server parameter file on a raw device:

CREATE SPFILE='/dev/vx/rdsk/oracle_dg/dbspfile'
FROM PFILE='?/dbs/initdbname.ora'

These statements read your combined initdbname.ora file that you created by merging your IFILEs, then transfer the parameter settings from the merged file into your server parameter file.

Oracle recommends that you use the server parameter file by executing the STARTUP command as in this example:

STARTUP PFILE=$ORACLE_HOME/dbs/initsid.ora

Where the file initsid.ora contains the entry:

SPFILE='/dev/vx/rdsk/oracle_dg/dbspfile'

If you use this STARTUP command syntax, then Oracle uses the server parameter file entry specified in initsid.ora.

Server Parameter File Errors in Real Application Clusters
Oracle reports errors that occur during server parameter file creation or while reading the file during startup. If an error occurs during a parameter update, then Oracle records the error in your ALERT.LOG file and ignores subsequent parameter updates to the file. If this happens, then do either of the following:

Shut down the instance, recover the server parameter file, and restart the instance.

Enable the instance to continue running without regard for subsequent parameter updates.

Oracle displays errors for parameter changes that you attempt when you incorrectly use the ALTER SYSTEM SET statement. Oracle does this when an error occurs while reading from or writing to the server parameter file.

Manually Convert Single Instance to RAC on Unix

Now, I know what you are going to say and yes I know there is a new utility in 10g called rconfig that will do all this for you but just in case you want to do it the old fashioned way I thought I’d include the steps to do it. So, here we go…

1. Convert the oracle home on each Node

(Note 211177.1 on Metalink)

a. Login as the Oracle software owner and make sure any databases running out of this oracle home are down

b. cd $ORACLE_HOME/rdbms/lib

c. make -f ins_rdbms.mk rac_on

If this step did not fail with fatal errors, then proceed to step d.

d. make -f ins_rdbms.mk ioracle

2. Each instance requires its own redo thread, so add a new logfile thread for each additional instance:

alter database add logfile thread 2
  group 4 ('/oradata1/ORA2/ORA2_t2g4_m1.rdo', '/oradata1/ORA2/ORA2_t2g4_m2.rdo') size 50m,
  group 5 ('/oradata1/ORA2/ORA2_t2g5_m1.rdo', '/oradata1/ORA2/ORA2_t2g5_m2.rdo') size 50m,
  group 6 ('/oradata1/ORA2/ORA2_t2g6_m1.rdo', '/oradata1/ORA2/ORA2_t2g6_m2.rdo') size 50m;

alter database enable public thread 2;

3. Each instance requires its own undo tablespace, so add an undo tablespace for each additional instance:

create undo tablespace undo02 datafile '/oradata1/ORA2/ORA2_undo02_01.dbf' size 4001m;

4. Create the cluster views needed for RAC:

SQL> @?/rdbms/admin/catclust

5. If you are using an spfile, create an init.ora from it:

SQL> create pfile='/tmp/initORA.ora' from spfile;

6. Edit the init.ora to include the cluster parameters:

*.cluster_database_instances=2
*.cluster_database=TRUE
ORA1.instance_name='ORA1'
ORA2.instance_name='ORA2'
ORA1.instance_number=1
ORA2.instance_number=2
ORA1.thread=1
ORA2.thread=2
ORA1.undo_tablespace='UNDO01'
ORA2.undo_tablespace='UNDO02'
ORA1.local_listener='LISTENER_ORA1'
ORA2.local_listener='LISTENER_ORA2'
ORA1.remote_listener='LISTENER_ORA2'
ORA2.remote_listener='LISTENER_ORA1'

7. Shut down and start up using the edited init.ora:

SQL> connect / as sysdba
SQL> shutdown immediate
SQL> startup pfile='/tmp/initORA.ora'

8. If the database starts up OK using this init.ora, create your spfile in a shared location:

SQL> create spfile='/sharedlocation/spfileORA.ora' from pfile='/tmp/initORA.ora';

9. On each node, create a link in $ORACLE_HOME/dbs to the shared spfile, i.e.

cd $ORACLE_HOME/dbs
ln -s /sharedlocation/spfileORA.ora spfileORA1.ora

10. Add the database and instances to the cluster registry:

srvctl add database -d ORA -o $ORACLE_HOME
srvctl add instance -d ORA -i ORA1 -n oraserv1
srvctl add instance -d ORA -i ORA2 -n oraserv2

11. Start the database through server control:

srvctl start database -d ORA

NB: If you still have one instance up from step 7 you will get an error, but this is nothing to worry about, as the node that is down should still start.

12. Create services as needed. This can be done through the dbca under Service Management, or manually as follows:

srvctl add service -d ORA -s ORA_TAF -r ORA1,ORA2

Your single instance database is now RAC enabled. I have only recently done this so I know these steps work.
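Once everything is registered, server control can confirm the state of the database and its services; for example (standard srvctl subcommands, database name as above):

```
srvctl status database -d ORA
srvctl config service -d ORA
```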

Tuesday, October 23, 2007

Oracle Monitoring and Performance Tuning FAQ

Topics
Why and when should one tune performance?
What database aspects should be monitored?
Where should the tuning effort be directed?
What tuning indicators can one use?
What tools/utilities does Oracle provide to assist with performance tuning?
What is STATSPACK and how does one use it?
When is cost based optimization triggered?
How can one optimize %XYZ% queries?
Where can one find I/O statistics per table?
My query was fine last week and now it is slow. Why?
Why is Oracle not using the damn index?
When should one rebuild an index?
How does one tune Oracle Wait events?
What is the difference between DBFile Sequential and Scattered Reads?
Where can one get more info about Oracle Tuning?

--------------------------------------------------------------------------------

What database aspects should be monitored?
One should implement a monitoring system to constantly monitor the following aspects of a database. This can be achieved by writing custom scripts, implementing Oracle's Enterprise Manager, or buying a third-party monitoring product. If an alarm is triggered, the system should automatically notify the DBA (e-mail, page, etc.) to take appropriate action.
Infrastructure availability:

Is the database up and responding to requests
Are the listeners up and responding to requests
Are the Oracle Names and LDAP Servers up and responding to requests
Are the Web Listeners up and responding to requests
Etc.
Things that can cause service outages:

Is the archive log destination filling up?
Objects getting close to their max extents
Tablespaces running low on free space / objects that would not be able to extend
User and process limits reached
Etc.
Things that can cause bad performance:

See question "What tuning indicators can one use?".

--------------------------------------------------------------------------------

Where should the tuning effort be directed?
Consider the following areas for tuning. The order in which steps are listed needs to be maintained to prevent tuning side effects. For example, it is no good increasing the buffer cache if you can reduce I/O by rewriting a SQL statement.
Database Design (if it's not too late):
Poor system performance usually results from a poor database design. One should generally normalize to the 3NF. Selective denormalization can provide valuable performance improvements. When designing, always keep the "data access path" in mind. Also look at proper data partitioning, data replication, aggregation tables for decision support systems, etc.

Application Tuning:
Experience shows that approximately 80% of all Oracle system performance problems are resolved by coding optimal SQL. Also consider proper scheduling of batch tasks after peak working hours.

Memory Tuning:
Properly size your database buffers (shared_pool, buffer cache, log buffer, etc) by looking at your buffer hit ratios. Pin large objects into memory to prevent frequent reloads.

Disk I/O Tuning:
Database files need to be properly sized and placed to provide maximum disk subsystem throughput. Also look for frequent disk sorts, full table scans, missing indexes, row chaining, data fragmentation, etc.

Eliminate Database Contention:
Study database locks, latches and wait events carefully and eliminate where possible.

Tune the Operating System:
Monitor and tune operating system CPU, I/O and memory utilization. For more information, read the related Oracle FAQ dealing with your specific operating system.


--------------------------------------------------------------------------------

What tuning indicators can one use?
The following high-level tuning indicators can be used to establish if a database is performing optimally or not:
Buffer Cache Hit Ratio
Formula: Hit Ratio = (Logical Reads - Physical Reads) / Logical Reads
Action: Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) to increase hit ratio
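A classic query for this ratio from v$sysstat (with the usual caveat that hit ratios alone can be misleading):

```sql
SELECT ROUND((1 - phy.value / (db.value + con.value)) * 100, 2) AS buffer_cache_hit_pct
FROM   v$sysstat phy, v$sysstat db, v$sysstat con
WHERE  phy.name = 'physical reads'
AND    db.name  = 'db block gets'
AND    con.name = 'consistent gets';
```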

Library Cache Hit Ratio
Action: Increase the SHARED_POOL_SIZE to increase hit ratio
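Similarly, the library cache hit ratio can be read from v$librarycache:

```sql
SELECT ROUND(SUM(pinhits) / SUM(pins) * 100, 2) AS library_cache_hit_pct
FROM   v$librarycache;
```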

Etc.

--------------------------------------------------------------------------------

What tools/utilities does Oracle provide to assist with performance tuning?
Oracle provides the following tools/utilities to assist with performance monitoring and tuning:
TKProf

UTLBSTAT.SQL and UTLESTAT.SQL - Begin and end stats monitoring

Statspack

Oracle Enterprise Manager - Tuning Pack


--------------------------------------------------------------------------------

What is STATSPACK and how does one use it?
Statspack is a set of performance monitoring and reporting utilities provided by Oracle from Oracle8i and above. Statspack provides improved BSTAT/ESTAT functionality, though the old BSTAT/ESTAT scripts are still available. For more information about STATSPACK, read the documentation in file $ORACLE_HOME/rdbms/admin/spdoc.txt.
Install Statspack:

cd $ORACLE_HOME/rdbms/admin
sqlplus "/ as sysdba" @spdrop.sql   -- Drop any existing Statspack installation
sqlplus "/ as sysdba" @spcreate.sql -- Install Statspack; enter tablespace names when prompted

Use Statspack:
sqlplus perfstat/perfstat
exec statspack.snap; -- Take a performance snapshot
exec statspack.snap; -- Take a second snapshot some time later

-- Get a list of snapshots
select SNAP_ID, SNAP_TIME from STATS$SNAPSHOT;

@spreport.sql -- Enter two snapshot id's for difference report

Other Statspack Scripts:
spauto.sql - Schedule a DBMS_JOB to automate the collection of Statspack statistics
spcreate.sql - Install the Statspack user, tables and package on a database (run as SYS)
spdrop.sql - Deinstall Statspack from a database (run as SYS)
sppurge.sql - Purge (delete) a range of snapshots between specified begin and end Snap Ids
spreport.sql - Report on the differences between values recorded in two snapshots
sptrunc.sql - Truncate all data in the Statspack tables

--------------------------------------------------------------------------------

When is cost based optimization triggered?
It's important to have statistics on all tables for the CBO (Cost Based Optimizer) to work correctly. If one table involved in a statement does not have statistics, Oracle reverts to rule-based optimization for that statement. So you really want all tables to have statistics; it won't help much to analyze only the larger tables.
Generally, the CBO can change the execution plan when you:

Change statistics of objects by doing an ANALYZE;
Change some initialization parameters (for example: hash_join_enabled, sort_area_size, db_file_multiblock_read_count).
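For example, statistics can be gathered with ANALYZE or, from 8i onwards, with the preferred DBMS_STATS package (the SCOTT/EMP names below are placeholders):

```sql
-- Old style
ANALYZE TABLE emp COMPUTE STATISTICS;

-- Preferred from Oracle 8i onwards; cascade => TRUE also gathers index stats
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT',
                                tabname => 'EMP',
                                cascade => TRUE);
END;
/
```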

--------------------------------------------------------------------------------

How can one optimize %XYZ% queries?
It is possible to improve %XYZ% queries by forcing the optimizer to scan all the entries from the index instead of the table. This can be done by specifying hints.
If the index is physically smaller than the table (which is usually the case) it will take less time to scan the entire index than to scan the entire table.
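A sketch of such a hint, assuming a hypothetical table EMP with a single-column index EMP_ENAME_IDX on ENAME. Since the index contains every ENAME value, a fast full scan of the index can answer the query without touching the table:

```sql
-- INDEX_FFS asks the optimizer for a fast full scan of the index
SELECT /*+ INDEX_FFS(e emp_ename_idx) */ ename
  FROM emp e
 WHERE ename LIKE '%SON%';
```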


--------------------------------------------------------------------------------

Where can one find I/O statistics per table?
The UTLESTAT report shows I/O per tablespace, but one cannot see which tables in the tablespace have the most I/O.
The $ORACLE_HOME/rdbms/admin/catio.sql script creates a sample_io procedure and table to gather the required information. After executing the procedure, one can do a simple SELECT * FROM io_per_object; to extract the required information.

For more details, look at the header comments in the $ORACLE_HOME/rdbms/admin/catio.sql script.


--------------------------------------------------------------------------------

My query was fine last week and now it is slow. Why?
The likely cause of this is because the execution plan has changed. Generate a current explain plan of the offending query and compare it to a previous one that was taken when the query was performing well. Usually the previous plan is not available.
Some factors that can cause a plan to change are:

Which tables are currently analyzed? Were they previously analyzed? (i.e. was the query using RBO before and CBO now?)
Has OPTIMIZER_MODE been changed in INIT.ORA?
Has the DEGREE of parallelism been defined/changed on any table?
Have the tables been re-analyzed? Were the tables analyzed using estimate or compute? If estimate, what percentage was used?
Have the statistics changed?
Has the INIT.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT been changed?
Has the INIT.ORA parameter SORT_AREA_SIZE been changed?
Have any other INIT.ORA parameters been changed?
What do you think the plan should be? Run the query with hints to see if this produces the required performance.
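To generate the current plan for comparison (the query below is a placeholder; DBMS_XPLAN is available from 9iR2, and older releases can use @?/rdbms/admin/utlxpls.sql instead):

```sql
EXPLAIN PLAN FOR
SELECT * FROM emp WHERE deptno = 10;  -- substitute the offending query

-- Display the plan from the PLAN_TABLE
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```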

--------------------------------------------------------------------------------

Why is Oracle not using the damn index?
This problem normally only arises when the query plan is being generated by the Cost Based Optimizer. The usual cause is that the CBO calculates that executing a full table scan would be faster than accessing the table via the index. Fundamental things to check are:
USER_TAB_COLUMNS.NUM_DISTINCT - This column defines the number of distinct values the column holds.
USER_TABLES.NUM_ROWS - If NUM_DISTINCT = NUM_ROWS then using an index would be preferable to doing a FULL TABLE SCAN. As NUM_DISTINCT decreases, the cost of using an index increases, making the index less desirable.
USER_INDEXES.CLUSTERING_FACTOR - This defines how ordered the rows are in the index. If CLUSTERING_FACTOR approaches the number of blocks in the table, the rows are ordered. If it approaches the number of rows in the table, the rows are randomly ordered. In such a case, it is unlikely that index entries in the same leaf block will point to rows in the same data blocks.
Decrease the INIT.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT - A higher value will make the cost of a FULL TABLE SCAN cheaper.
Remember that you MUST supply the leading column of an index for the index to be used (unless you use a FAST FULL SCAN or SKIP SCANNING).
There are many other factors that affect the cost, but sometimes the above can help to show why an index is not being used by the CBO. If from checking the above you still feel that the query should be using an index, try specifying an index hint. Obtain an explain plan of the query either using TKPROF with TIMED_STATISTICS, so that one can see the CPU utilization, or with AUTOTRACE to see the statistics. Compare this to the explain plan when not using an index.
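A sketch of these checks, assuming a hypothetical table EMP with an index EMP_DEPTNO_IDX on DEPTNO:

```sql
-- Inspect the statistics the CBO sees
SELECT num_rows FROM user_tables WHERE table_name = 'EMP';
SELECT column_name, num_distinct
  FROM user_tab_columns WHERE table_name = 'EMP';
SELECT index_name, clustering_factor
  FROM user_indexes WHERE table_name = 'EMP';

-- Force the index with a hint and compare the resulting plan and timings
SELECT /*+ INDEX(e emp_deptno_idx) */ *
  FROM emp e
 WHERE deptno = 10;
```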


--------------------------------------------------------------------------------

When should one rebuild an index?
You can run the 'ANALYZE INDEX VALIDATE STRUCTURE' command on the affected indexes - each invocation of this command creates a single row in the INDEX_STATS view. This row is overwritten by the next ANALYZE INDEX command, so copy the contents of the view into a local table after each ANALYZE. The 'badness' of the index can then be judged by the ratio of 'DEL_LF_ROWS' to 'LF_ROWS'.
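A sketch, assuming a hypothetical index EMP_DEPTNO_IDX; a common rule of thumb (not an Oracle-documented threshold) is to consider a rebuild when the deleted-row ratio climbs above roughly 20%:

```sql
ANALYZE INDEX emp_deptno_idx VALIDATE STRUCTURE;

-- INDEX_STATS holds a single row, overwritten by the next ANALYZE
-- (assumes LF_ROWS > 0)
SELECT name, del_lf_rows, lf_rows,
       ROUND(del_lf_rows / lf_rows * 100, 2) "Deleted %"
  FROM index_stats;
```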

--------------------------------------------------------------------------------

How does one tune Oracle Wait events?
Some wait events from V$SESSION_WAIT and V$SYSTEM_EVENT views:
Event Name: Tuning Recommendation:

db file sequential read Tune SQL to do less I/O. Make sure all objects are analyzed. Redistribute I/O across disks.
buffer busy waits Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i)/ Analyze contention from SYS.V$BH
log buffer space Increase LOG_BUFFER parameter or move log files to faster disks
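As a starting point, the heaviest non-idle waits since instance startup can be listed from V$SYSTEM_EVENT (the exclusion list below is only a small sample of idle events):

```sql
-- Top wait events by total time waited, excluding some common idle events
SELECT event, total_waits, time_waited
  FROM v$system_event
 WHERE event NOT IN ('SQL*Net message from client', 'pmon timer',
                     'rdbms ipc message', 'smon timer')
 ORDER BY time_waited DESC;
```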



--------------------------------------------------------------------------------

What is the difference between DBFile Sequential and Scattered Reads?
Both "db file sequential read" and "db file scattered read" events signify time waited for I/O read requests to complete. Time is reported in hundredths of a second for Oracle 8i releases and below, and in thousandths of a second for Oracle 9i and above. Most people confuse these events with each other because they think of how data is read from disk; instead, think of how data is read into the SGA buffer cache.
db file sequential read:

A sequential read operation reads data into contiguous memory (usually a single-block read with p3=1, but can be multiple blocks). Single block I/Os are usually the result of using indexes. This event is also used for rebuilding the controlfile and reading datafile headers (P2=1). In general, this event is indicative of disk contention on index reads.

db file scattered read:

Similar to db file sequential reads, except that the session is reading multiple data blocks and scattering them into different, discontiguous buffers in the SGA. This statistic normally indicates disk contention on full table scans. In rare cases where data from a full table scan fits into a contiguous buffer area, those waits show up as sequential reads instead of scattered reads.

The following query shows average wait time for sequential versus scattered reads:

prompt "AVERAGE WAIT TIME FOR READ REQUESTS"
select a.average_wait "SEQ READ", b.average_wait "SCAT READ"
from sys.v_$system_event a, sys.v_$system_event b
where a.event = 'db file sequential read'
and b.event = 'db file scattered read';

--------------------------------------------------------------------------------

Where can one get more info about Oracle Tuning?
Tuning Information:

This FAQ: Tuning Scripts

This FAQ: Tuning Papers

Ixora = Unix + Oracle - scripts, tips and advanced performance tuning information

Kevin Loney's New DBA FAQs: Part 2 - Database Monitoring

On-Line Services:

Yet Another Performance Profiling (YAPP) Method - Upload your BSTAT/ ESTAT and StatsPack output for analysis

itrprof SQL Analyzer - Upload your SQL_TRACE/Event10046 trace files to find bottlenecks and tuning advice

RAC: Ave Receive Time for Current Block is Abnormally High in Statspack

Symptom(s)
~~~~~~~~~~

The average receive time for current block and the global cache current block
receive time statistics are unusually high even though database performance
does not seem to be affected.


Change(s)
~~~~~~~~~~

Running statspack on 9.2 RAC.


Cause
~~~~~~~

This is a statistics bug. To verify that this is the problem, check the average
wait time for the 'global cache null to x' wait event. If it is under 15-20 ms
while the average receive time for current block is abnormally high, you are
hitting this bug.
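A quick check, noting that AVERAGE_WAIT in V$SYSTEM_EVENT is reported in centiseconds in these releases:

```sql
-- Average wait for 'global cache null to x', converted to milliseconds
SELECT event, ROUND(average_wait * 10, 1) "Avg Wait (ms)"
  FROM v$system_event
 WHERE event = 'global cache null to x';
```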

Here is a sample statspack exhibiting the behavior of this bug:

Cache fusion statistics
-----------------------
Ave time to process CR block request (ms): 0.3
Ave receive time for CR block (ms): 3.5
Ave build time for CR block (ms): 0.0
Ave flush time for CR block (ms): 0.0
Ave send time for CR block (ms): 0.3

Ave time to process current block request (ms): 0.3
Ave receive time for current block (ms): 1,273.0 <-- too high
Ave pin time for current block (ms): 0.0
Ave flush time for current block (ms): 0.0
Ave send time for current block (ms): 0.2

Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------ --------
global cache null to x 11,198 85 40 4 0.5

Statistic Total per Second per Trans
--------------------------------- ------------------ -------------- ------------
global cache current block flush 161 0.0 0.0
global cache current block pin ti 5,294 0.1 0.2
global cache current block receiv 2,063,975 47.8 89.4
global cache current block send t 35,120 0.8 1.5
global cache current blocks recei 16,213 0.4 0.7
global cache current blocks serve 1,421,318 32.9 61.6

Only 4 ms per 'global cache null to x' wait, while the average receive time for
current blocks is 1,273 ms. These values do not correspond.


Fix
~~~~

The statistic can be ignored unless the average 'global cache null to x' wait
time is also high. This is fixed in 9.2.0.5 and above.

Script to Collect RAC Diagnostic Information (racdiag.sql)


Tuesday, October 9, 2007

Top 50 RAC-Related Acronyms

A couple of days ago I read Eddie Awad's post on the Top 50 SOA-Related Acronyms and found it very interesting. Lately I have been working on Oracle Applications RAC environments and realized that RAC has its own set of frequently used acronyms. In this post I have put together my 'Top 50' RAC acronyms. Some of these relate to other technologies as well, but they find a place in this list because they also relate to RAC and Oracle Applications.

RAC (Real Application Clusters) Oracle's implementation of clustered database instances.

CRS (Cluster Ready Services) Oracle's own clusterware, tightly coupled with Oracle Real Application Clusters (RAC).

OCFS (Oracle Cluster File System) Oracle's cluster file system implementation for Linux and Windows.

PCP (Parallel Concurrent Processing) PCP allows concurrent processing activities to be distributed across multiple nodes in a RAC environment, maximizing throughput and providing resilience to node failure.

OCR (Oracle Cluster Registry) The OCR contains cluster and database configuration information for RAC and Cluster Ready Services (CRS).

ASM (Automatic Storage Management) Simplifies database administration by enabling you to create disk groups and manage them instead of individual datafiles.

SSH (Secure Shell) SSH is a set of standards and an associated network protocol that allows establishing a secure channel between a local and a remote computer.

RSH (Remote Shell) rsh (remote shell) is a command line computer program which can execute shell commands as another user, and on another computer across a computer network.

DSA (Digital Signature Algorithm) The Digital Signature Algorithm (DSA) is a United States Federal Government standard or FIPS for digital signatures.

SCSI (Small Computer System Interface) standard interface and command set for transferring data between devices commonly used for interfaces for hard disks, and USB and FireWire connections for external devices.

NAS (Network-attached storage) Is a dedicated data storage technology that can be connected directly to a computer network to provide centralized data access and storage to heterogeneous network clients.

NFS (Network File System) A protocol that allows a user on a client computer to access files over a network as easily as if they were on its local disks.

OSCP (Oracle Storage Compatibility Program) An Oracle program to assist third parties in validating the compatibility of their storage devices with Oracle.

DTP (Distributed Transaction Processing) DTP model is the synchronization of any commits and rollbacks that are required to complete a distributed transaction request.

VIP (virtual IP Address) is an IP address that is not connected to a specific computer or network interface card (NIC) on a computer.

TCP/IP (Transmission Control Protocol and Internet Protocol)

DNS (Domain Name System) The domain name system translates domain names (computer hostnames) to IP addresses.

MAC (Media Access Control) A Media Access Control address (MAC address) is a unique identifier attached to most network adapters.

JDBC (Java Database Connectivity) A Java API that allows independent connectivity between the Java programming language and a wide range of databases.

LB (Load Balancing) Is a technique to balance the load between configured servers.

FAN (Fast Application Notification) enables end-to-end, lights-out recovery of applications and load balancing when a cluster configuration changes.

FCF (Fast Connection Failover) Fast Connection Failover provides the ability to failover connections in the connection cache as quickly and efficiently as the database itself.

TAF (Transparent Application Failover) Using transparent application recovery (TAF), Oracle can automatically reconnect users to the database in the case of instance or node failure.

OCI (Oracle Call Interface) OCI allows you to develop applications that take advantage of the capabilities of SQL from within the application.

ODP (Oracle Data Provider) ODP.NET features optimized data access to the Oracle database from a .NET environment.

ONS (Oracle Names Server) Oracle Names makes network address and database link information available to all nodes throughout the network.

ORION (Oracle I/O) ORION is a freely available test tool that simulates Oracle I/O workloads.

GSD (Global Services Daemon) The Global Services Daemon (GSD) background process allows usage of Oracle Enterprise Manager or the SRVCTL utility to perform system management tasks.

NTP (Network Time Protocol) The Network Time Protocol (NTP) is a protocol for synchronizing the clocks of computers over a network.

CVU (Cluster Verification Utility) CVU is used to verify all the important components that need to be verified at different stages in a RAC environment.

VSD (Virtual Shared Disk) A Virtual Shared Disk (VSD) facility is a layer of software that allows a processing node to access disk devices physically attached to a remote node as if those devices were attached locally.

LPARs (Logical Partitions) a Logical Partition, commonly called an LPAR, is a virtualized computing environment abstracted from all physical devices.

DBCA (Oracle Database Configuration Assistant) An Oracle utility that facilitates the creation and configuration of a database.

ORACM (Oracle Cluster Manager) Allows you to add new nodes to an existing cluster without having to stop and restart the whole cluster.

RDA (Remote Diagnostic Agent) An Oracle Diagnostic Tool used in troubleshooting.

GFS (Global Filesystem) GFS allows a cluster of Linux servers to share data in a common pool of storage.

NIC (Network Interface Controller) is a piece of computer hardware designed to allow computers to communicate over a computer network.

LVM (Logical Volume Manager)

GPFS (General Parallel File System) GPFS is IBM's high-performance, scalable parallel file system for IBM UNIX clusters.

HA (High Availability)

MAA (Maximum Availability Architecture)

RACDDT (RAC Data Collection Tool) is a data collection tool designed and configured specifically for gathering diagnostic data related to Oracle's Real Application Cluster (RAC) technology.

OSW (OSWatcher) is an operating system diagnostic utility tool that gathers archival performance data using various native Unix utilities, such as vmstat, iostat and top.

OLS (Oracle Label Security) Oracle Label Security is developed based on virtual private database (VPD) technology and provides a flexible, fine-grained access control functionality that is achieved by comparing a sensitive label assigned to a piece of data with label authorizations assigned to an application user.

RSM (Remote Shared Memory) is a feature that bypasses the UDP/IP communication in Solaris.

UDP/IP (User Datagram Protocol) Allows applications to send short messages, sometimes known as datagrams, to one another.

TNS (Transparent Network Substrate) Allows peer-to-peer connectivity where no machine-level connectivity can occur.

EMCA (Enterprise Manager Configuration Assistant) An Oracle tool to set up the Enterprise Manager.

OPS (Oracle Parallel Server) The OPS option allows multiple instances on different computer systems (nodes) to access the same database files simultaneously.

SRVCTL (Server Control) A utility to assist in the administration and maintenance of RAC databases.

Hope you enjoyed reading them as much as I did compiling them ;)

Introduction to RAC

Oracle RAC is a cluster database with a shared cache architecture that overcomes the limitations of traditional shared-nothing and shared-disk approaches to provide highly scalable and available database solutions for business applications.

http://www.oracle.com/technology/products/database/clustering/index.html



Evolution of RAC?



The evolution of RAC can be traced back to Oracle 7, where the product was called Oracle Parallel Server (OPS). RAC itself was introduced in Oracle 9i, and it was not merely a name change: Oracle changed almost everything. RAC supports direct sharing of database blocks between two or more database instances, and this sharing is the major change from OPS. In OPS, blocks were not shared directly: one instance had to notify another that it required a database block and then wait until the holding instance had written the block's content to disk and handed it over. This wait is entirely absent in the new RAC design.

What is OPS and its architecture?

Oracle has had the functionality to handle parallel servers since Version 7 with the help of OPS. OPS is a technique whereby multiple Oracle database instances share a single set of database files. In the OPS architecture, only a single instance was allowed to modify a block at any given point in time; to pass a modified block from one instance to another, a technique called block pinging was used.
Each OPS instance creates locks in its own distributed lock area to manage consistency between multiple instances; these are called Parallel Cache Management (PCM) locks.

The responsibility of a PCM lock is to ensure that the instance reading a block gets a consistent image of the data in the block. An instance must acquire a PCM lock on a block before reading or modifying data in the block. When an instance requests a block that is currently locked by another instance, the holding instance must write the block back to disk before the requesting instance can read or edit it. This way consistency is maintained.

Thus OPS allowed multiple Oracle instances to use the same database, providing scalability through the simple addition of a server. In theory, performance issues could be solved easily with this kind of architecture, but in practice the performance of OPS was not up to the mark: it often required application-level changes and functional data partitioning to perform well.

What does RAC offer other than OPS?

With the advent of high-speed interconnects, Oracle now uses memory-to-memory transfer of data blocks between instances, which largely eliminates the use of disk as a transfer mechanism.

The major benefits of RAC are scalability, high availability and transparency. Scalability is achieved by adding RAC nodes to increase performance. High availability means that when one node fails, another node takes over its load. By transparency, the Oracle server hides the underlying cluster mechanisms from applications.