Tuesday, October 16, 2007

Complete Information on Oracle9i Statspack

STATSPACK FREQUENTLY ASKED QUESTIONS

CONTENTS
--------

1) What is StatsPack and how does STATSPACK work? Where are the StatsPack README
files? The README files contain the most up-to-date information about StatsPack.

See Note 149115.1: What is StatsPack and where are the READMEs?

Checked for relevance on 16-May-2007

PURPOSE
-------

This note is intended to provide an overview of what the StatsPack package is.
This note also provides the location of the version-specific README, which
contains more up-to-date information about the use of the StatsPack tool.


SCOPE & APPLICATION
-------------------

This note is applicable for Oracle RDBMS releases 8.1.6 and greater, and is
related to the task of gathering database and session level performance
information.

For all 10g versions statspack is available. However, it is Oracle's recommendation
that customers utilize the Automatic Workload Repository (AWR) in version 10g instead of
statspack as a diagnostic tool for database performance tuning issues.
Please see Note 276103.1: "PERFORMANCE TUNING USING 10g ADVISORS AND MANAGEABILITY FEATURES"
for more information on tuning advisors available in version 10g.



What is StatsPack and how does STATSPACK work?
---------------------------------------------

Statspack is a set of SQL, PL/SQL and SQL*Plus scripts which allow the
collection, automation, storage and viewing of performance data. A user is
automatically created by the installation script - this user, PERFSTAT, owns
all objects needed by this package. This user is granted limited query-only
privileges on the V$ views required for performance tuning.

Statspack fundamentally differs from the well known UTLBSTAT/UTLESTAT tuning
scripts by collecting more information, and also by storing the performance
statistics data permanently in Oracle tables, which can later be used for
reporting and analysis. The data collected can be analyzed using the report
provided, which includes an "instance health and load" summary page, high
resource SQL statements, as well as the traditional wait events and
initialization parameters.
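
Because the collected data persists in tables under the PERFSTAT schema, the
stored snapshots can be listed with an ordinary query; a minimal sketch
(assuming the default PERFSTAT schema and its STATS$SNAPSHOT control table):

select snap_id, snap_time, snap_level
from perfstat.stats$snapshot
order by snap_id;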


Where are StatsPack README files?
---------------------------------
StatsPack README files include more specific and up-to-date information and
history about this tool, as well as platform- and release-specific information
that helps in installing and using the product. These files are located at:

UNIX:
Oracle8i 8.1.6
$ORACLE_HOME/rdbms/admin/statspack.doc
Oracle8i 8.1.7, Oracle9i 9.x, and Oracle10g 10.x
$ORACLE_HOME/rdbms/admin/spdoc.txt
NT:
Oracle8i 8.1.6
%ORACLE_HOME%\rdbms\admin\statspack.doc
Oracle8i 8.1.7, Oracle9i 9.x, and Oracle10g 10.x
%ORACLE_HOME%\rdbms\admin\spdoc.txt
VMS:
Oracle8i 8.1.7
For Oracle8i 8.1.7.0.0, to get the README file, please contact
Oracle Support Services, and request patch for bug:
Bug 1745567 - MISSING STATSPACK FILES FOR ALPHA OPENVMS 8.1.7.0.0
Oracle9i 9.x, Oracle10g 10.x
ORA_ROOT:[RDBMS.ADMIN]SPDOC.TXT
OS/390:
Oracle8i 8.1.7
For Oracle8i 8.1.7.0.0 the SPDOC.TXT file is missing. This was resolved
in the 8.1.7.1 patchset.
Bug 1709469 - STATSPACK DOCUMENTATION NOT SHIPPED ON OS/390






2) How do I install StatsPack? Can StatsPack be installed with SVRMGRL?

See Note 149113.1: Installing and Configuring StatsPack Package
PURPOSE
-------
This note is intended to provide configuring and installing information about
the StatsPack package.


SCOPE & APPLICATION
-------------------
This note is applicable for Oracle RDBMS releases 8.1.6 and greater, and is
related to the task of gathering database and session level performance
information.


Configuration and Installation of the StatsPack Package.
--------------------------------------------------------

- StatsPack Database Space Requirements

Oracle does not recommend installing the package in the SYSTEM tablespace. A
more appropriate tablespace (if it exists) would be a "TOOLS" tablespace. If
you install the package in a locally-managed tablespace, storage clauses are
not required, as the storage characteristics are automatically managed. If you
install the package in a dictionary-managed tablespace, Oracle suggests you
monitor the space used by the objects created, and adjust the storage clauses
of the segments, if required.

The default initial and next extent size is 1MB for all tables and indexes
which contain changeable data. The minimum default space requirement is
approximately 35MB.

The amount of database space required by the package will vary considerably
based on the frequency of snapshots, the size of the database and instance, and
the amount of data collected (which is configurable). It is therefore
difficult to provide general storage clauses and space utilization predictions
which will be accurate at each site.
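
Because of this variability, it is worth checking the actual space consumed
from time to time; a minimal sketch using the dba_segments dictionary view:

SQL> select sum(bytes)/1024/1024 mb from dba_segments where owner = 'PERFSTAT';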


- Installing the StatsPack package

Interactive Installation:
========================

During the installation process, the user PERFSTAT will be created, default
password PERFSTAT. This user will own all PL/SQL code and database objects
created (including the STATSPACK tables, constraints and STATSPACK package).
The installation SQL script will prompt for the PERFSTAT user's default and
temporary tablespaces and also for the tablespace in which the tables and
indexes will be created.

NOTE:
o Do not specify the SYSTEM tablespace for the PERFSTAT user's
DEFAULT or TEMPORARY tablespaces; if SYSTEM is specified, the
installation will abort with an error indicating this is the
problem. This is enforced because Oracle does not recommend using
the SYSTEM tablespace to store statistics data, nor for sorting.
Use a TOOLS tablespace to store the data, and your instance's
TEMPORARY tablespace for sorting.

o During the installation, the dbms_shared_pool and dbms_job
PL/SQL packages are created. dbms_shared_pool is used to
pin the Statspack package in the shared pool; dbms_job
is created on the assumption the DBA will want to schedule
periodic snapshots automatically using dbms_job.

o The installation grants SELECT privilege to PUBLIC for all
of the Statspack owned tables (prefixed by STATS$).

Even though SVRMGRL is available with Oracle8i releases 8.1.6 and 8.1.7,
installation of StatsPack can only be done with SQL*Plus due to special
formatting commands that are used in the scripts. Oracle9i releases do not
include SVRMGRL.

FYI:
====

1) Note 105692.1: "Installation of statspack fails from Server Manager"
discusses this matter further.

2) Bug 2673074 Abstract: SPCREATE.SQL HANGS BUSY SYSTEM
This may cause the database to hang while running the
SPCREATE.SQL script.

Workaround:
Make sure all DBMS_* packages are compiled,
or manually comment out the call to dbmsjob.sql in spcusr.sql.

To install the package, using SQL*Plus and based on the correct platform and
RDBMS version, do the following:
Oracle 8.1.6:
on Unix:
SQL> connect / as sysdba
SQL> @?/rdbms/admin/statscre

on NT:
SQL> connect / as sysdba
SQL> @%ORACLE_HOME%\rdbms\admin\statscre

on VMS
SQL> connect / as sysdba
SQL> @ora_rdbms_admin:statscre


Oracle8i 8.1.7 and Oracle9i 9.x
on Unix:
SQL> connect / as sysdba
SQL> @?/rdbms/admin/spcreate

on NT:
SQL> connect / as sysdba
SQL> @%ORACLE_HOME%\rdbms\admin\spcreate

on VMS
SQL> connect / as sysdba
SQL> @ora_rdbms_admin:spcreate

Oracle10g
on Unix:
SQL> connect / as sysdba
SQL> @?/rdbms/admin/spcreate

on NT:
SQL> connect / as sysdba
SQL> @%ORACLE_HOME%\rdbms\admin\spcreate

on VMS
SQL> connect / as sysdba
SQL> @ora_rdbms_admin:spcreate

Although new tools are available for 10g such as AWR and ADDM, Statspack can also be used for 10g.




- Log files created during installation

The StatsPack installation script runs 3 other scripts - you do not need
to run these - the scripts are called automatically:

Oracle8i 8.1.6
1. statscusr -> creates the user and grants privileges
2. statsctab -> creates the tables
3. statspack -> creates the package
Oracle8i 8.1.7 or Oracle9i 9.x
1. spcusr -> creates the user and grants privileges
2. spctab -> creates the tables
3. spcpkg -> creates the package

The installation script will generate a spooled output file for each script
run, ending with a '.lis' extension. Check each of the three output files
produced by the installation to ensure no errors were encountered before
continuing on to the next step. For example, on Oracle8i 8.1.6 an output file
called statscusr.lis will be created; under Oracle8i 8.1.7 or Oracle9i 9.x an
output file spcusr.lis will be created.


- Errors found during installation process

If the scripts were incorrectly run from SVRMGRL, an ORA-1012 error indicating
that the session is not logged in, or a PLS-00201 error indicating that
stats$statspack_parameter must be declared, may be generated. To correct this
problem, first use SQL*Plus to remove StatsPack, and then attempt the
installation again using the steps above:

Oracle 8.1.6:
on Unix:
SQL> connect / as sysdba
SQL> @?/rdbms/admin/statsdrp

on NT:
SQL> connect / as sysdba
SQL> @%ORACLE_HOME%\rdbms\admin\statsdrp
on VMS
SQL> connect / as sysdba
SQL> @ora_rdbms_admin:statsdrp


Oracle8i 8.1.7 and Oracle9i 9.x
on Unix:
SQL> connect / as sysdba
SQL> @?/rdbms/admin/spdrop

on NT:
SQL> connect / as sysdba
SQL> @%ORACLE_HOME%\rdbms\admin\spdrop

on VMS
SQL> connect / as sysdba
SQL> @ora_rdbms_admin:spdrop

Oracle 10g
on Unix:
SQL> connect / as sysdba
SQL> @?/rdbms/admin/spdrop

on NT:
SQL> connect / as sysdba
SQL> @%ORACLE_HOME%\rdbms\admin\spdrop

on VMS
SQL> connect / as sysdba
SQL> @ora_rdbms_admin:spdrop

- Batch mode installation

There are two ways to install Statspack - interactively (as shown above),
or in 'batch' mode (as shown below). Batch mode is useful when you do not
wish to be prompted for the PERFSTAT user's default and temporary tablespaces.

To install in batch mode, you must assign values to the SQL*Plus
variables which specify the default and temporary tablespaces before running
the StatsPack installation script.

The variables are:
default_tablespace -> for the default tablespace
temporary_tablespace -> for the temporary tablespace

e.g.
on Unix:
SQL> connect / as sysdba
SQL> define default_tablespace='tools'
SQL> define temporary_tablespace='temp'
Oracle8i 8.1.6
SQL> @?/rdbms/admin/statscre
Oracle8i 8.1.7 or Oracle9i 9.x
SQL> @?/rdbms/admin/spcreate

The StatsPack installation script will no longer prompt for the above
information.




3) How do I take a snapshot of current database statistics? How do I capture
different levels of performance information for either the session or an
instance? How can I automate the process of capturing the StatsPack
snapshots?

See Note 149121.1: Gathering a StatsPack snapshot
Checked for relevance on 16-May-2007

PURPOSE
-------

This note is intended to provide information about the steps required to take a
StatsPack snapshot of current database performance levels and how to capture
different granularities of performance information at both the instance and
session levels. This note also discusses how to automate the process of
capturing a StatsPack snapshot. These snapshots can be used as a starting or
ending point for generating a performance summary.

SCOPE & APPLICATION
-------------------

This note is applicable for Oracle RDBMS releases 8.1.6 and greater, and is
related to the task of gathering database and session level performance
information by using the StatsPack package. This note assumes that the
StatsPack package has already been loaded into the database. This process is
described in Note 149113.1: "Installing and Configuring StatsPack Package".

For all 10g versions statspack is available. However, it is Oracle's recommendation
that customers utilize the Automatic Workload Repository (AWR) in version 10g instead of
statspack as a diagnostic tool for database performance tuning issues.
Please see Note 276103.1: "PERFORMANCE TUNING USING 10g ADVISORS AND MANAGEABILITY FEATURES"
for more information on tuning advisors available in version 10g.

Gathering a StatsPack snapshot
------------------------------

The simplest interactive way to take a snapshot is to login to SQL*Plus as the
PERFSTAT user, and execute the procedure statspack.snap:

e.g.
SQL> connect perfstat/perfstat
SQL> execute statspack.snap;

Note: In an OPS environment, you must connect to the instance you wish to
collect data for. A snapshot must be taken on each instance so that later
comparisons can be made. A snapshot taken on one instance can only be
compared to another snapshot taken on the same instance.

Although statistics are cumulative, taking a snapshot stores the current values
of the performance statistics in the StatsPack schema tables; the snapshot can
be used as a baseline for comparison with another snapshot taken at a later time.

For easier performance analysis, set the init.ora parameter timed_statistics to
true; this way, the statspack data collected will include important timing
information. The timed_statistics parameter is also dynamically changeable
using the 'alter system' command. Timing data is important and often required
by Oracle Support to diagnose performance problems.
Note 30824.1: "Init.ora Parameter "TIMED_STATISTICS" Reference Note"
states that the small amount of overhead from turning this tracing on is worth
the benefits gained.
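
For example, timing can be enabled dynamically, without restarting the
instance (a minimal sketch):

SQL> alter system set timed_statistics = true;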

Parameters able to be passed in to the statspack.snap and
statspack.modify_statspack_parameter procedures
---------------------------------------------------------

=============================================================================
Parameter Name       Range of             Default    Meaning
                     Valid Values         Value
=============================================================================
i_snap_level         0, 5, 6, 10          5          Snapshot Level
                                                     *Level 6 NOT VALID in 8i
                                                     *Level 7 is available in 9.2
-----------------------------------------------------------------------------
i_ucomment           Text                 Blank      Comment to be stored with
                                                     the snapshot
-----------------------------------------------------------------------------
i_executions_th      Integer >= 0         100        SQL Threshold: number of
                                                     times the statement was
                                                     executed
-----------------------------------------------------------------------------
i_disk_reads_th      Integer >= 0         1,000      SQL Threshold: number of
                                                     disk reads the statement
                                                     made
-----------------------------------------------------------------------------
i_parse_calls_th     Integer >= 0         1,000      SQL Threshold: number of
                                                     parse calls the statement
                                                     made
-----------------------------------------------------------------------------
i_buffer_gets_th     Integer >= 0         10,000     SQL Threshold: number of
                                                     buffer gets the statement
                                                     made
-----------------------------------------------------------------------------
i_session_id         Valid sid from       0 (no      Session Id of the Oracle
                     v$session            session)   session to capture session
                                                     granular statistics for
-----------------------------------------------------------------------------
i_modify_parameter   True, False          False      Save the parameters
                                                     specified for future
                                                     snapshots?
=============================================================================
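
As an illustration of passing these parameters, a snapshot tagged with a
comment could be taken as follows (a sketch; the comment text is arbitrary):

SQL> execute statspack.snap(i_ucomment=>'month-end batch run');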

Configuring the amount of data captured
---------------------------------------

- Snapshot Level

It is possible to change the amount of information gathered by the package,
by specifying a different snapshot 'level'. In other words, the level
chosen (or defaulted) will decide the amount of data collected.

Levels = 0 General performance statistics
Statistics gathered:
This level and any level greater than 0 collects general
performance statistics, such as: wait statistics, system events,
system statistics, rollback segment data, row cache, SGA,
background events, session events, lock statistics,
buffer pool statistics, parent latch statistics.

Levels = 5 Additional data: SQL Statements
This level includes all statistics gathered in the lower level(s),
and additionally gathers the performance data on high resource
usage SQL statements.

SQL 'Thresholds'
The SQL statements gathered by Statspack are those which exceed one of
four predefined threshold parameters:
- number of executions of the SQL statement (default 100)
- number of disk reads performed by the SQL statement (default 1,000)
- number of parse calls performed by the SQL statement (default 1,000)
- number of buffer gets performed by the SQL statement (default 10,000)

The values of each of these threshold parameters are used when
deciding which SQL statements to collect - if a SQL statement's
resource usage exceeds any one of the above threshold values, it
is captured during the snapshot.

The SQL threshold levels used are either those stored in the table
stats$statspack_parameter, or the thresholds specified when
the snapshot is taken.

Levels = 6
This level includes all statistics gathered in the lower level(s).
Additionally, it gathers SQL plans and plan usage data for each of the
high resource usage SQL statements captured. Therefore, level 6 snapshots
should be used whenever there is the possibility that a plan may change.
To gather the plan for a SQL statement, the statement must be in the shared
pool at the time the snapshot is taken, and it must exceed one of the SQL
thresholds. To gather plans for all statements in the shared pool, specify
the executions threshold to be zero (0) for those snapshots (see the example
after the level descriptions below).

Levels = 10 Additional statistics: Child latches
This level includes all statistics gathered in the lower levels, and
additionally gathers high Child Latch information. Data gathered at
this level can sometimes cause the snapshot to take longer to complete
i.e. this level can be resource intensive, and should only be used
when advised by Oracle personnel.
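
As an example of the level 6 case described above, gathering plans for all
statements currently in the shared pool could be done as follows (a sketch;
the zero executions threshold applies only to this snapshot unless saved):

SQL> execute statspack.snap -
(i_snap_level=>6, i_executions_th=>0);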

- Snapshot SQL thresholds

There are other parameters which can be configured in addition to the level.
These parameters are used as thresholds when collecting SQL statements;
if any SQL statements breach the threshold, these are the statements which
are captured during the snapshot.

Snapshot level and threshold information used by the package is stored
in the stats$statspack_parameter table.


- Changing the default values for Snapshot Level and SQL Thresholds

The default parameters used for taking snapshots can be adjusted/modified so
that they better capture data about an instance's workload.

This can be done either by:

o Taking a snapshot, and specifying the new defaults to be saved to the
database (using statspack.snap, and using the i_modify_parameter
input variable).

SQL> execute statspack.snap -
(i_snap_level=>10, i_modify_parameter=>'true');

Setting the i_modify_parameter value to true will save the new
thresholds in the stats$statspack_parameter table; these thresholds
will be used for all subsequent snapshots.

If i_modify_parameter was false or omitted, the snapshot taken at that
point will use the specified values, while any subsequent snapshots will
use the preexisting values in the stats$statspack_parameter table.
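
For instance, a one-off level 6 snapshot with a temporarily raised buffer
gets threshold (the values here are purely illustrative) could be taken with:

SQL> execute statspack.snap -
(i_snap_level=>6, i_buffer_gets_th=>100000);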

o Changing the defaults immediately without taking a snapshot, using the
statspack.modify_statspack_parameter procedure. For example to change
the snapshot level to 10, and the SQL thresholds for buffer_gets and
disk_reads, the following statement can be issued:

SQL> execute statspack.modify_statspack_parameter -
(i_snap_level=>10, i_buffer_gets_th=>10000, i_disk_reads_th=>1000);

This procedure changes the values permanently, but does not
take a snapshot.

The full list of parameters which can be passed into the
modify_statspack_parameter procedure are the same as those for
the snap procedure.

- Specifying a Session Id

If session statistics are needed for a particular session, it is possible to
specify the session id in the call to StatsPack. The statistics gathered for
the session will include session statistics, session events and lock activity.
The default behavior is not to gather session level statistics.

SQL> execute statspack.snap(i_session_id=>3);
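
The session id can be found beforehand by querying v$session as PERFSTAT or a
DBA user (a sketch; the username shown is only an example):

SQL> select sid, serial#, username from v$session where username = 'SCOTT';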



How to automatically gather StatsPack snapshots:
------------------------------------------------

To be able to make comparisons of performance from one day, week or year to
the next, there must be multiple snapshots taken over a period of time. A
minimum of two snapshots is required before any conclusions about the
performance characteristics of the application and database can be drawn.

The best method to gather snapshots is to automate the collection on a
regular time interval. It is possible to do this:

- Within the database, using the Oracle dbms_job procedure to schedule the
snapshots

- Using Operating System utilities (such as 'cron' on Unix or 'at' on NT) to
schedule the snapshot. Please contact the System Administrator for more
information about using the OS utilities for automating this data collection.


- Scheduling StatsPack snapshots using DBMS_JOB package

To use an Oracle-automated method for collecting statistics, you can use
dbms_job. A sample script on how to do this is supplied in spauto.sql,
which schedules a snapshot every hour, on the hour.
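
For illustration, the hourly job that spauto.sql sets up can be sketched with a
direct dbms_job.submit call, run as the PERFSTAT user (a simplified sketch; in
practice, use the shipped spauto.sql script):

variable jobno number;
begin
  -- submit statspack.snap to run at the next full hour, then every hour
  dbms_job.submit(:jobno, 'statspack.snap;',
                  trunc(sysdate+1/24,'HH'),
                  'trunc(SYSDATE+1/24,''HH'')');
  commit;
end;
/
print jobno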

In order to use dbms_job to schedule snapshots, the job_queue_processes
initialization parameter must be set to greater than 0 in the configuration
file used to start the instance for the job to be run automatically.

Example of an init.ora entry:
# Set to enable the job queue process to start. This allows dbms_job
# to schedule automatic statistics collection using STATSPACK
job_queue_processes=1

If using spauto.sql (statsauto.sql in 8.1.6) in an OPS environment, the script
must be run once on each instance in the cluster. Similarly, the job_queue_processes
parameter must also be set for each instance.


Changing the interval of statistics collection
-----------------------------------------------
To change the interval of statistics collection, use the dbms_job.interval
procedure

e.g.
execute dbms_job.interval(<job number>,'SYSDATE+(1/48)');

Where 'SYSDATE+(1/48)' will result in the statistics being gathered every
1/48 of a day (i.e. every half hour).

To force the job to run immediately,
execute dbms_job.run(<job number>);

To remove the autocollect job,
execute dbms_job.remove(<job number>);
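
The job number used above can be found by querying user_jobs while connected
as the PERFSTAT user (a sketch):

SQL> select job, what, interval, next_date from user_jobs;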

For more information on dbms_job, see the Supplied Packages Reference Manual.

To gather a STATSPACK report :
==============================

SQL> connect perfstat/perfstat
SQL> @?/rdbms/admin/spreport

You will be prompted for:
1. The beginning snapshot Id
2. The ending snapshot Id
3. The name of the report text file to be created




4) How do I generate a performance report?

See Note 149124.1: Creating a StatsPack performance report
PURPOSE
-------

This note is intended to provide the necessary steps to create an instance
performance report by using the StatsPack Package.

SCOPE & APPLICATION
-------------------
This note is applicable for Oracle RDBMS releases 8.1.6 and greater, and is
related to the task of gathering database and session level performance
information.

For all 10g versions statspack is available. However, it is Oracle's recommendation
that customers utilize the Automatic Workload Repository (AWR) in version 10g instead of
statspack as a diagnostic tool for database performance tuning issues.
Please see Note 276103.1: "PERFORMANCE TUNING USING 10g ADVISORS AND MANAGEABILITY FEATURES"
for more information on tuning advisors available in version 10g.


Creating a StatsPack performance report
---------------------------------------

Once snapshots have been taken (see Note 149121.1), it is possible to generate
a performance report. The SQL script which generates the report prompts
for the two snapshot ids to be processed.

The first will be the beginning snapshot id, the second will be the
ending snapshot id. The report will then calculate and print ratios,
increases etc. for all statistics between the two snapshot periods, in
a similar way to the BSTAT/ESTAT report.

Note: As with BSTAT/ESTAT, it is not valid to compare two snapshots
for an instance if the instance was shut down between the
times that the begin and end snapshots were taken;
if the user enters begin and end snapshots with an instance
shutdown between them, the report shows an appropriate error
to indicate this problem.

Separating the phase of data gathering from producing a report allows the
flexibility of basing a report on any data points selected. For example,
it may be reasonable for the DBA to use the supplied automation script to
automate data collection every hour on the hour; if at some later point
a performance issue arises which is better investigated by looking
at a three hour data window rather than an hour's worth of data, the
only thing the DBA needs to do is specify the required start point and end
point when running the report.


Running the report

To examine the change in statistics between two time periods, the report
script (statsrep.sql in 8.1.6; spreport.sql in 8.1.7 and later) is executed
while connected as the PERFSTAT user. The report script is located in the
rdbms/admin directory of the Oracle Home.

Note: In an OPS environment you must connect to the instance you
wish to report on - this restriction may be removed in future
releases.

You will be prompted for:
1. The beginning snapshot Id
2. The ending snapshot Id
3. The name of the report text file to be created

Oracle 8.1.6:
on Unix:
SQL> connect perfstat/perfstat
SQL> @?/rdbms/admin/statsrep
on NT:
SQL> connect perfstat/perfstat
SQL> @%ORACLE_HOME%\rdbms\admin\statsrep
on VMS
SQL> connect perfstat/perfstat
SQL> @ora_rdbms_admin:statsrep


Oracle8i 8.1.7, Oracle9i 9.x, and Oracle10g 10.x
on Unix:
SQL> connect perfstat/perfstat
SQL> @?/rdbms/admin/spreport
on NT:
SQL> connect perfstat/perfstat
SQL> @%ORACLE_HOME%\rdbms\admin\spreport
on VMS
SQL> connect perfstat/perfstat
SQL> @ora_rdbms_admin:spreport

Example output:

DB Id DB Name Instance# Instance
----------- ---------- ---------- ----------
2796063325 PRD1 1 prd1


Completed Snapshots

Instance DB Name SnapId Snap Started Snap Level
---------- ---------- ------ ---------------------- ----------
Comment
--------------------------------------------------------------
prd1 PRD1 1 10 Aug 1999 12:00:47 5

2 10 Aug 1999 15:00:54 5

Enter beginning Snap Id: 1
Enter ending Snap Id: 2

Enter name of output file [st_1_2] :

The report will now scroll past, and also be written to the file
specified (e.g. st_1_2.lis).


Gathering Optimizer statistics on the PERFSTAT schema

For best performance when running the StatsPack report, collect optimizer
statistics for tables and indexes owned by the PERFSTAT user. This should be
done whenever a significant change occurs in the data volumes of PERFSTAT's tables.
The easiest way to do this is to use dbms_utility or dbms_stats
and specify the PERFSTAT user:
execute dbms_utility.analyze_schema('PERFSTAT','COMPUTE');
or
execute dbms_stats.gather_schema_stats('PERFSTAT');



5) Where can I find further documentation regarding "Performance Tuning using
Statspack"?

- See Note 228913.1: Systemwide Tuning using STATSPACK Reports:

- In the Oracle Technology Network (OTN) there are several documents
on this subject. See:
http://otn.oracle.com/deploy/performance/content.html
PURPOSE
This article is a reference to help understand the output generated by the STATSPACK utility. Since performance tuning is a very broad area, this document only provides tuning advice in very specific areas. Several documents are available in Metalink to resolve contention on specific resources. The following resources are available to find specific documentation related to a performance topic:

Metalink Database Performance Technical Library
Oracle9i Database Performance Tuning Guide and Reference
OTN Performance Technical Library

CONTENT
Introduction
Summary Information
Instance cache information
Load profile Information
Instance Efficiency Ratios
Top 5 Events section
Cluster Statistics
Foreground Wait Events
Background Wait Events
Notes Regarding Waitevents
SQL Information
Statistics
IO Activity
Buffer cache Activity Information
Instance Recovery Statistics
PGA Memory Statistics
Enqueue Activity
Undo (Rollback) Information
Latch Information
Dictionary Cache Statistics
Library Cache Statistics
SGA Memory Summary
SGA Memory Detail
Init.ora Parameters Summary



Introduction

StatsPack was created in response to a need for more relevant and more extensive statistical reporting beyond what was available via UTLBSTAT/UTLESTAT reports. Further, this information can be stored permanently in the database instance itself so that historical data is always available for comparison and diagnosis.

Statspack has been available since version 8.1.6, but can be installed on 8.0.6 and above. Snapshots created using older versions of statspack can usually be read using newer versions of Statspack, although the newer features will not be available.

See the following notes for information on installing, configuring snapshots, and generating reports:

- Installing and Configuring StatsPack Package
- Gathering a StatsPack snapshot
- Creating a StatsPack performance report
- FAQ- StatsPack Complete Reference

Timed_statistics must be set to true prior to the creation of a snapshot. If it is not, the data within statspack will not be relevant. You can tell if timed_statistics was not set by looking at the total times columns in the report. If these are zero then timed_statistics was not set.

Snapshots during which the instance was recycled will not contain accurate information and should not be included in a statspack report.

In general, we suggest that snapshot intervals be 15 minutes in length. This allows fine-grained reporting when hangs are suspected or detected. The snapshots can also be combined into hourly reports for general performance tuning.

When a value is too large for a statspack field, it will be represented by a series of pound signs such as #######. Should this occur and you need to see the value in that field, decrease the number of snapshots covered by the report until the field can be read. If there is only one snapshot in the report, you will need to decrease the snapshot interval.

Profiles created using statspack information are quite helpful in determining long-term trends such as load increases, usage trends, resource consumption, latch activity, etc. It is especially important that a DBA know these things and be able to demonstrate changes in them that necessitate hardware improvements and load balancing policies. This document describes the main sections of a statspack report, which will help in understanding what information is available to diagnose and resolve performance tuning problems. Some sections of the statspack report may contain different information based on the Statspack release that was used to generate the report. This document will also indicate these changes for the different sections.



Summary Information

The summary information begins with the identification of the database on which the statspack report was run along with the time interval of the statspack report. Here is the 8i instance information:

STATSPACK report for

DB Name DB Id Instance Inst Num Release OPS Host
------------ ----------- ------------ -------- ----------- --- ------------
PHS2 975244035 phs2 2 8.1.7.2.0 YES leo2

Snap Id Snap Time Sessions
------- ------------------ --------
Begin Snap: 100 03-Jan-02 08:00:01 #######
End Snap: 104 03-Jan-02 09:00:01 #######
Elapsed: 60.00 (mins)


The database name, id, instance name, instance number if OPS is being utilized, Oracle binary release information, host name and snapshot information are provided.

Note that here the number of sessions during the snapshot was too large for the sessions field and so the overflow symbol is displayed.

Here is an example of an 8.0.6 instance using statspack:

STATSPACK report for

DB Name DB Id Instance Inst Num Release OPS Host
---------- ----------- ---------- -------- ---------- ---- ----------
GLOVP 1409723819 glovp 1 8.0.6.1.0 NO shiver

Snap Length
Start Id End Id Start Time End Time (Minutes)
-------- -------- -------------------- -------------------- -----------
454 455 07-Jan-03 05:28:20 07-Jan-03 06:07:53 39.55

Here is the 9i instance information. Note that the OPS column is now entitled 'Cluster' to accommodate the newer Real Applications Cluster (RAC) terminology and that the Cursors/Session and Comment columns have been added.

STATSPACK report for

DB Name DB Id Instance Inst Num Release Cluster Host
------------ ----------- ------------ -------- ----------- ------- ------------
ETSPRD7 1415901831 etsprd7a 1 9.2.0.2.0 YES tsonode1

Snap Id Snap Time Sessions Curs/Sess Comment
------- ------------------ -------- --------- -------------------
Begin Snap: 20 03-Jan-03 00:00:05 ####### .0
End Snap: 21 03-Jan-03 01:00:05 ####### .0
Elapsed: 60.00 (mins)



Instance Workload Information

Every statspack report starts with a section that describes the instance's workload profile and instance metrics that may help to determine the instance's efficiency.

- Instance cache information:

In the 8i report the buffer cache size can be determined by multiplying the db_block_buffers by the db_block_size.

Cache Sizes
~~~~~~~~~~~
db_block_buffers: 6400 log_buffer: 104857600
db_block_size: 32768 shared_pool_size: 150000000
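
For the 8i values shown above, the buffer cache size is simply
db_block_buffers * db_block_size; a quick worked check using the sample
values (a sketch, run from any SQL*Plus session):

SQL> select 6400 * 32768 cache_bytes, 6400 * 32768 / 1048576 cache_mb from dual;

This returns 209,715,200 bytes, i.e. a 200MB buffer cache.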



In 9i this has been done for you. Std Block size indicates the primary block size of the instance.
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 704M Std Block Size: 8K
Shared Pool Size: 256M Log Buffer: 1,024K

Note that the buffer cache size is that of the standard buffer cache. If you have multiple buffer caches, you will need to calculate the others separately.

- Load profile Information:

The load profile information is next. It is identical in both 8i and 9i.

Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 351,530.67 7,007.37
Logical reads: 5,449.81 108.64
Block changes: 1,042.08 20.77
Physical reads: 37.71 0.75
Physical writes: 134.68 2.68
User calls: 1,254.72 25.01
Parses: 4.92 0.10
Hard parses: 0.02 0.00
Sorts: 15.73 0.31
Logons: -0.01 0.00
Executes: 473.73 9.44
Transactions: 50.17

% Blocks changed per Read: 19.12 Recursive Call %: 4.71
Rollback per transaction %: 2.24 Rows per Sort: 20.91

Where:


. Redo size: This is the amount of redo generated during this report.

. Logical Reads: This is calculated as Consistent Gets + DB Block Gets = Logical Reads (see the v$sysstat query sketch after this list)

. Block changes: The number of blocks modified during the sample interval

. Physical Reads: The number of requests for a block that caused a physical I/O.

. Physical Writes: The number of physical writes issued.

. User Calls: The number of queries generated

. Parses: Total of all parses: both hard and soft

. Hard Parses: Those parses requiring a completely new parse of the SQL statement. These consume both latches and shared pool area.

. Soft Parses: Not listed but derived by subtracting the hard parses from parses. A soft parse reuses a previous hard parse and hence consumes far fewer resources.

. Sorts, Logons, Executes and Transactions are all self explanatory
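
The load profile figures are per-second and per-transaction deltas of the
underlying v$sysstat counters; the raw cumulative values can be cross-checked
directly (a sketch; the statistic names are as they appear in v$sysstat):

select name, value
from v$sysstat
where name in ('consistent gets', 'db block gets', 'redo size', 'user calls');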

- Instance Efficiency Ratios:

Hit ratios are calculations that may provide information regarding different structures and operations in the Oracle instance. Database tuning must never be driven by hit ratios alone; they only provide additional information to help understand how the instance is operating. For example, in a DSS system a low cache hit ratio may be acceptable due to the amount of recycling needed because of the large volume of data accessed. If you increase the size of the buffer cache based on this number alone, the corrective action may not take effect and you may simply be wasting memory resources.

See "THE COE PERFORMANCE METHOD" for further reference on how to approach a performance tuning problem.

This section is identical in 8i and 9i.

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: -45.57 In-memory Sort %: 97.55
Library Hit %: 99.89 Soft Parse %: 99.72
Execute to Parse %: -1.75 Latch Hit %: 99.11
Parse CPU to Parse Elapsd %: 52.66 % Non-Parse CPU: 99.99

Shared Pool Statistics Begin End
------ ------
Memory Usage %: 42.07 43.53
% SQL with executions>1: 73.79 75.08
% Memory for SQL w/exec>1: 76.93 77.64


It is possible for both the 'buffer hit' and 'execute to parse' ratios to be negative. In the case of the buffer hit ratio, the buffer cache is too small and data is being aged out before it can be used, so it must be retrieved again. This is a form of thrashing which degrades performance immensely.

The execute to parse ratio can be negative when the number of parses is larger than the number of executions. The Execute to Parse ratio is determined by the following formula:


100 * (1 - Parses/Executions) = Execute to Parse
Here this becomes:
100 * (1 - 42,757 / 42,023) = 100 * (1 - 1.0175) = 100 * (-0.0175) = -1.75

This can be caused by the snapshot boundary occurring during a period of high parsing so that the executions have not occurred before the end of the snapshot. Check the next snapshot to see if there are enough executes to account for the parses in this report.
Another cause for a negative execute to parse ratio is if the shared pool is too small and queries are aging out of the shared pool and need to be reparsed. This is another form of thrashing which also degrades performance tremendously.

- Top 5 Events section:

This section shows the Top 5 timed events that should be considered first when focusing the tuning effort. Before Oracle 9.2 this section was called "Top 5 Wait Events". It was renamed in Oracle 9.2 to "Top 5 Timed Events" and now includes "CPU Time", based on the 'CPU used by this session' statistic. Including CPU time makes it easier to see when heavy CPU consumption, which usually points to SQL tuning problems, dominates the response time.

For further details, see the Statspack readme file $ORACLE_HOME/rdbms/admin/spdoc.txt. These events are particularly useful in determining which sections to view next. For instance, if there are fairly high waits on 'latch free' or one of the other latches, you might want to examine the latch sections first. On the other hand, if the db file read wait events seem abnormally high, you might want to look at the file I/O section first.

Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (cs) Wt Time
-------------------------------------------- ------------ ------------ -------
db file sequential read 12,131,221 173,910 58.04
db file scattered read 93,310 86,884 29.00
log file sync 18,629 9,033 3.01
log file parallel write 18,559 8,449 2.82
buffer busy waits 304,461 7,958 2.66



Notice that in Oracle 9.2 references are made to "Elapsed Time" rather than to "Wait Time". Also, the "CPU Time" is included as part of the Top 5 events section.
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
log file sync 3,223,927 32,481 64.05
CPU time 7,121 14.04
global cache open x 517,153 3,130 6.17
log file parallel write 985,732 2,613 5.15
KJC: Wait for msg sends to complete 568,061 1,794 3.54
-------------------------------------------------------------

Note that 'db file scattered read' and 'db file sequential read' are generally the top wait events when the instance is well tuned and is not running OPS/RAC.

Cluster Statistics

In Oracle 9i, with the introduction of Real Application Clusters, several sections were added to the statspack report to show information related to a clustered database environment. The following sections are now available in statspack to monitor RAC environments and are only displayed when a cluster is detected.

Oracle 9.0 and 9.1 Cluster Statistics :

Global Lock Statistics
----------------------
Ave global lock get time (ms): 0.3
Ave global lock convert time (ms): 0.0
Ratio of global lock gets vs global lock releases: 1.0

Global cache statistics
-----------------------
Global cache hit %: 0.3
Ave global cache get time (ms): 1.7
Ave global cache convert time (ms): 3.1

Cache fusion statistics
-----------------------
Ave time to process CR block request (ms): 0.2
Ave receive time for CR block (ms): 1.6
Ave build time for CR block (ms): 0.1
Ave flush time for CR block (ms): 0.0
Ave send time for CR block (ms): 0.1

Ave time to process current block request (ms): 0.2
Ave receive time for current block (ms): 2.5
Ave pin time for current block (ms): 0.0
Ave flush time for current block (ms): 0.0
Ave send time for current block (ms): 0.1

GCS and GES statistics
----------------------
Ave GCS message process time (ms): 0.1
Ave GES message process time (ms): 0.1
% of direct sent messages: 59.5
% of indirect sent messages: 40.3
% of flow controlled messages: 0.1
% of GCS messages received by LMD: 96.4
% of GES messages received by LMD: 3.6
% of blocked converts: 10.3
Ave number of logical side channel messages: 33.8
Ave number of logical recovery claim messages:

Oracle 9.2 Cluster Statistics :

Global Cache Service - Workload Characteristics
-----------------------------------------------
Ave global cache get time (ms): 4.6
Ave global cache convert time (ms): 20.2

Ave build time for CR block (ms): 0.0
Ave flush time for CR block (ms): 0.6
Ave send time for CR block (ms): 0.1
Ave time to process CR block request (ms): 0.7
Ave receive time for CR block (ms): 0.9

Ave pin time for current block (ms): 2.9
Ave flush time for current block (ms): 0.1
Ave send time for current block (ms): 0.1
Ave time to process current block request (ms): 3.1
Ave receive time for current block (ms): 7.2

Global cache hit ratio: 0.8
Ratio of current block defers: 0.0
% of messages sent for buffer gets: 0.5
% of remote buffer gets: 0.4
Ratio of I/O for coherence: 12.3
Ratio of local vs remote work: 1.2
Ratio of fusion vs physical writes: 0.0

Global Enqueue Service Statistics
---------------------------------
Ave global lock get time (ms): 0.2
Ave global lock convert time (ms): 2.3
Ratio of global lock gets vs global lock releases: 1.0

GCS and GES Messaging statistics
--------------------------------
Ave message sent queue time (ms): 0.1
Ave message sent queue time on ksxp (ms): 12.3
Ave message received queue time (ms): 0.0
Ave GCS message process time (ms): 0.1
Ave GES message process time (ms): 0.0
% of direct sent messages: 81.2
% of indirect sent messages: 13.1
% of flow controlled messages: 5.7

In all Oracle9i releases, a separate section shows the actual values for all the cluster statistics:
GES Statistics for DB: FUSION Instance: ecfsc2 Snaps: 161 -162

Statistic Total per Second per Trans
--------------------------------- ---------------- ------------ ------------
dynamically allocated gcs resourc 0 0.0 0.0
dynamically allocated gcs shadows 0 0.0 0.0
flow control messages received 0 0.0 0.0
flow control messages sent 10 0.0 0.0
gcs ast xid 30 0.0 0.0
gcs blocked converts 531,572 147.7 0.2
gcs blocked cr converts 55,739 15.5 0.0
gcs compatible basts 45 0.0 0.0
gcs compatible cr basts (global) 6,183 1.7 0.0
....

For further reference on tuning RAC clustered instances, please refer to the documentation manual Oracle9i Real Application Clusters Deployment and Performance.

Wait Events Information

The sections below describe in detail most of the information provided in a statspack report.

- Foreground Wait Events:

Foreground wait events are those associated with a session or client process waiting for a resource. The 8i version looks like this:

Wait Events for DB: PHS2 Instance: phs2 Snaps: 100 -104
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (cs) (ms) /txn
---------------------------- ------------ ---------- ----------- ------ ------
PX Deq: Execution Msg 15,287 6,927 1,457,570 953 694.9
enqueue 30,367 28,591 737,906 243 ######
direct path read 45,484 0 352,127 77 ######
PX Deq: Table Q Normal 7,185 811 241,532 336 326.6
PX Deq: Execute Reply 13,925 712 194,202 139 633.0
....



The 9.2 version is much the same but has different time intervals in the header.
Wait Events for DB: FUSION Instance: ecfsc2 Snaps: 161 -162
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------ --------
log file sync 3,223,927 1 32,481 10 1.0
global cache open x 517,153 777 3,130 6 0.2
log file parallel write 985,732 0 2,613 3 0.3
KJC: Wait for msg sends to c 568,061 34,529 1,794 3 0.2

- Background Wait Events:

Background wait events are those not associated with a client process. They indicate waits encountered by system and non-system processes. The output is the same for all the Oracle releases.

Background Wait Events for DB: PHS2 Instance: phs2 Snaps: 100 -104
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (cs) (ms) /txn
---------------------------- ------------ ---------- ----------- ------ ------
latch free 88,578 32,522 18,341 2 ######
enqueue 319 230 5,932 186 14.5
row cache lock 4,941 0 2,307 5 224.6
control file parallel write 1,172 0 332 3 53.3
db file parallel write 176 0 67 4 8.0
log file parallel write 315 0 65 2 14.3
db file scattered read 137 0 62 5 6.2
LGWR wait for redo copy 66 10 47 7 3.0


Examples of background system processes are LGWR and DBWR. An example of a non-system background process would be a parallel query slave.
Note that it is possible for a wait event to appear in both the foreground and background wait events statistics. Examples of this are the enqueue and latch free events.

The idle wait events appear at the bottom of both sections and can generally be safely ignored. Typically these types of events record time during which the client is connected to the database but no requests are being made to the server.

- Notes Regarding Waitevents:

- The idle wait events associated with pipes are often a major source of concern for some DBAs. Pipe gets and waits are entirely application dependent. To tune these events you must tune the application generating them. High pipe gets and waits can affect library cache latch performance. Rule out all other possible causes of library cache contention prior to focusing on pipe waits, as it is very expensive for the client to tune their application. A list of most wait events used by the RDBMS kernel can be found in Appendix A of the Oracle Reference manual for the version being used.

Some wait events to watch:
- global cache cr request: (OPS) This wait event shows the amount of time that an instance has waited for a requested data block for a consistent read and the transferred block has not yet arrived at the requesting instance. See Note 157766.1 'Sessions Wait Forever for 'global cache cr request' Wait Event in OPS or RAC'. In some cases the 'global cache cr request' wait event may be perfectly normal if large buffer caches are used and the same data is being accessed concurrently on multiple instances. In a perfectly tuned, non-OPS/RAC database, I/O wait events would be the top wait events but since we are avoiding I/O's with RAC and OPS the 'global cache cr request' wait event often takes the place of I/O wait events.
- Buffer busy waits, write complete waits, db file parallel writes and enqueue waits: If all of these are in the top wait events the client may be experiencing disk saturation. See Note 155971.1 Resolving Intense and "Random" Buffer Busy Wait Performance Problems for troubleshooting tips.
- log file switch, log file sync or log switch/archive: If the waits on these events appears excessive check for checkpoint tuning issues. See Note 147468.1 Checkpoint Tuning and Troubleshooting Guide.
- write complete waits, free buffer waits or buffer busy waits: If any of these wait events is high, the buffer cache may need tuning. See Note 62172.1 'Understanding and Tuning Buffer Cache and DBWR in Oracle7, Oracle8, and Oracle8i'
- latch free: If high, the latch free wait event indicates that there was contention on one or more of the primary latches used by the instance. Look at the latch sections to diagnose and resolve this problem.

SQL Information

The SQL that is stored in the shared pool SQL area (Library cache) is reported to the user via three different formats in 8i. Each has its own usefulness.



. SQL ordered by Buffer Gets
. SQL ordered by Physical Reads
. SQL ordered by Executions
9i has an additional section:

. SQL ordered by Parse Calls

- SQL ordered by Gets:

SQL ordered by Gets for DB: PHS2 Instance: phs2 Snaps: 100 -104
-> End Buffer Gets Threshold: 10000
-> Note that resources reported for PL/SQL includes the resources used by
all SQL statements called within the PL/SQL code. As individual SQL
statements are also reported, it is possible and valid for the summed
total % to exceed 100

Buffer Gets Executions Gets per Exec % Total Hash Value
--------------- ------------ -------------- ------- ------------
198,924 37,944 5.2 41.7 2913840444
select length from fet$ where file#=:1 and block#=:2 and ts#=:3

111,384 7 15,912.0 23.4 1714733582
select f.file#, f.block#, f.ts#, f.length from fet$ f, ts$ t whe
re t.ts#=f.ts# and t.dflextpct!=0 and t.bitmapped=0

105,365 16 6,585.3 22.1 4111567099
CREATE TABLE "PHASE".:Q3236003("PID","CAMPAIGN","SCPOS1","SCPOS2
","SCPOS3","SCPOS4","SCPOS5","SCPOS6","SCPOS7","SCPOS8","SCPOS9"
,"SCPOS10","SCPOS11","SCPOS12","SCPOS13","SCPOS14","SCPOS15","SC
POS16","SCPOS17","MCELL","MAILID","RSPPROD","STATTAG","RSPREF","
RSPCRED","MAILDATE","RSPTDATE","BDATE","STATE","ZIP","INCOME","R
....



This section reports the contents of the SQL area ordered by the number of buffer gets and can be used to identify CPU Heavy SQL.
- Many DBAs feel that if the data is already contained within the buffer cache the query should be efficient. This could not be further from the truth. Retrieving more data than needed, even from the buffer cache, requires CPU cycles and interprocess IO. Generally speaking, a physical IO is not 10,000 times more expensive than a buffer get; it is in the neighborhood of 67 times, and close to zero if the data is held in the UNIX file system buffer cache.

- The statements of interest are those with a large number of gets per execution especially if the number of executions is high.

- High buffer gets generally correlates with heavy CPU usage.

- SQL ordered by Physical Reads:

SQL ordered by Reads for DB: PHS2 Instance: phs2 Snaps: 100 -104
-> End Disk Reads Threshold: 1000

Physical Reads Executions Reads per Exec % Total Hash Value
--------------- ------------ -------------- ------- ------------
98,401 16 6,150.1 14.2 3004232054
SELECT C0 C0 FROM (SELECT C0 C0 FROM (SELECT /*+ NO_EXPAND ROWID
(A1) */ A1."PID" C0 FROM "PHASE"."P0201F00_PLAT_MCOP_TB" PX_GRAN
ULE(0, BLOCK_RANGE, DYNAMIC) A1) UNION ALL SELECT C0 C0 FROM (S
ELECT /*+ NO_EXPAND ROWID(A2) */ A2."PID" C0 FROM "PHASE"."P0201
F00_UCS_MCOP_TB" PX_GRANULE(1, BLOCK_RANGE, DYNAMIC) A2) UNION

50,836 32 1,588.6 7.3 943504307
SELECT /*+ Q3263000 NO_EXPAND ROWID(A1) */ A1."PID" C0 FROM "PHA
SE"."P9999F00_NEW_RESP_HIST_TB" PX_GRANULE(0, BLOCK_RANGE, DYNAM
IC) A1 WHERE A1."CAMPAIGN"='200109M' AND A1."RSPPROD"='B'

50,836 32 1,588.6 7.3 3571039650
SELECT /*+ Q3261000 NO_EXPAND ROWID(A1) */ A1."PID" C0 FROM "PHA
SE"."P9999F00_NEW_RESP_HIST_TB" PX_GRANULE(0, BLOCK_RANGE, DYNAM
IC) A1 WHERE A1."CAMPAIGN"='200109M' AND A1."RSPPROD"='P'
....


This section reports the contents of the SQL area ordered by the number of reads from the data files and can be used to identify SQL causing IO bottlenecks which consume the following resources.
- CPU time needed to fetch unnecessary data.
- File IO resources to fetch unnecessary data.

- Buffer resources to hold unnecessary data.

- Additional CPU time to process the query once the data is retrieved into the buffer.

- SQL ordered by Executions:

SQL ordered by Executions for DB: PHS2 Instance: phs2 Snaps: 100 -104
-> End Executions Threshold: 100

Executions Rows Processed Rows per Exec Hash Value
------------ ---------------- ---------------- ------------
37,944 16,700 0.4 2913840444
select length from fet$ where file#=:1 and block#=:2 and ts#=:3

304 1,219 4.0 904892542
select file#,block#,length from fet$ where length>=:1 and
ts#=:2 and file#=:3

295 0 0.0 313510536
select job from sys.job$ where next_date < sysdate and (field1
= :1 or (field1 = 0 and 'Y' = :2)) order by next_date, job

273 273 1.0 3313905788
insert into col$(obj#,name,intcol#,segcol#,type#,length,precisio
n#,scale,null$,offset,fixedstorage,segcollength,deflength,defaul
t$,col#,property,charsetid,charsetform,spare1,spare2)values(:1,:
2,:3,:4,:5,:6,decode(:7,0,null,:7),decode(:5,2,decode(:8,-127/*M
AXSB1MINAL*/,null,:8),178,:8,179,:8,180,:8,181,:8,182,:8,183,:8,
....

This section reports the contents of the SQL area ordered by the number of query executions. It is primarily useful in identifying the most frequently used SQL within the database, so that it can be monitored for efficiency. Generally speaking, a small performance increase on a frequently used query provides greater gains than a moderate performance increase on an infrequently used query.

- SQL ordered by Parse Calls (9i Only):

SQL ordered by Parse Calls for DB: S901 Instance: S901 Snaps: 2 -3
-> End Parse Calls Threshold: 1000
% Total
Parse Calls Executions Parses Hash Value
------------ ------------ -------- ----------
295 295 0.48 1705880752
select file# from file$ where ts#=:1

60 60 0.10 3759542639
BEGIN DBMS_APPLICATION_INFO.SET_MODULE(:1,NULL); END;

33 2,222 0.05 3615375148
COMMIT

1 200,000 0.00 119792462
INSERT into free.freelist_test values (:b2||'J'||:b1,'AAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAA')

....

This section shows the number of times a statement was parsed as compared to the number of times it was executed. One to one parse/executions may indicate that:

- Bind variables are not being used.

- On RDBMS version 8.1.7.2 and higher, the init.ora parameter session_cached_cursors was not set (100 is usually the suggested starting value; see the init.ora sketch after this list). See enhancement bug 1589185 for an explanation of the change that shifts some of the load from the library cache to the user session cache.

- The shared pool may be too small and the parse is not being retained long enough for multiple executions.

- cursor_sharing is set to exact (this should NOT be changed without considerable testing on the part of the client).
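
For the session_cached_cursors point above, a minimal init.ora sketch (the
value of 100 is only the suggested starting point mentioned in the note):

# cache cursors for statements that are repeatedly parsed by a session
session_cached_cursors = 100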

Statistics

The statistics section shows the overall database statistics. These are the statistics that the summary information is derived from. A list of the statistics maintained by the RDBMS kernel can be found in Appendix C of the Oracle Reference manual for the version being utilized. The format is identical from 8i to 9i.

Instance Activity Stats for DB: PHS2 Instance: phs2 Snaps: 100 -104

Statistic Total per Second per Trans
--------------------------------- ---------------- ------------ ------------
CPU used by this session 84,161 23.4 3,825.5
CPU used when call started 196,346 54.5 8,924.8
CR blocks created 709 0.2 32.2
DBWR buffers scanned 0 0.0 0.0
DBWR checkpoint buffers written 245 0.1 11.1
DBWR checkpoints 33 0.0 1.5
DBWR cross instance writes 93 0.0 4.2
DBWR free buffers found 0 0.0 0.0
....

Of particular interest are the following statistics.


- CPU USED BY THIS SESSION, PARSE TIME CPU or RECURSIVE CPU USAGE: These numbers are useful to diagnose CPU saturation on the system (usually a query tuning issue). The formula to calculate the CPU usage breakdown is:
Service (CPU) Time = other CPU + parse time CPU
Other CPU = "CPU used by this session" - parse time CPU
Some releases do not correctly store this data and can show huge numbers. The rule to decide if you can use these metrics is:

Trustworthy if :
(db version >= 8.1.7.2 and < 9.0.1)
OR (db version >= 9.0.1.1)
OR (db version >= 8.0.6.0 AND not using job_queue_processes AND CPU_PER_CALL = default)

- DBWR BUFFERS SCANNED: the number of buffers looked at when scanning the lru portion of the buffer cache for dirty buffers to make clean. Divide by "dbwr lru scans" to find the average number of buffers scanned. This count includes both dirty and clean buffers. The average buffers scanned may be different from the average scan depth due to write batches filling up before a scan is complete. Note that this includes scans for reasons other than make free buffer requests.
- DBWR CHECKPOINTS: the number of checkpoint messages that were sent to DBWR, not necessarily the total number of actual checkpoints that took place. During a checkpoint there is a slight decrease in performance since data blocks are being written to disk and that causes I/O. If the number of checkpoints is reduced, the performance of normal database operations improves but recovery after instance failure is slower.
- DBWR TIMEOUTS: the number of timeouts when DBWR had been idle since the last timeout. These are the times that DBWR looked for buffers to idle write.
- DIRTY BUFFERS INSPECTED: the number of times a foreground process encountered a dirty buffer which had aged out through the LRU queue while looking for a buffer to reuse. This should be zero if DBWR is keeping up with foregrounds.
- FREE BUFFER INSPECTED: the number of buffers skipped over from the end of the LRU queue in order to find a free buffer. The difference between this and "dirty buffers inspected" is the number of buffers that could not be used because they were busy or needed to be written after rapid aging out. They may have had a user, a waiter, or have been in the process of being read or written.
- RECURSIVE CALLS: Recursive calls occur because of cache misses and segment extension. In general if recursive calls is greater than 30 per process, the data dictionary cache should be optimized and segments should be rebuilt with storage clauses that have few large extents. Segments include tables, indexes, rollback segment, and temporary segments.
NOTE: PL/SQL can generate extra recursive calls which may be unavoidable.
- REDO BUFFER ALLOCATION RETRIES: total number of retries necessary to allocate space in the redo buffer. Retries are needed because either the redo writer has gotten behind, or because an event (such as log switch) is occurring
- REDO LOG SPACE REQUESTS: indicates how many times a user process waited for space in the redo log buffer. Try increasing the init.ora parameter LOG_BUFFER so that zero Redo Log Space Requests are made.
- REDO WASTAGE: Number of bytes "wasted" because redo blocks needed to be written before they are completely full. Early writing may be needed to commit transactions, to be able to write a database buffer, or to switch logs
- SUMMED DIRTY QUEUE LENGTH: the sum of the lruw queue length after every write request completes. (divide by write requests to get average queue length after write completion)
- TABLE FETCH BY ROWID: the number of rows that were accessed by a rowid. This includes rows that were accessed using an index and rows that were accessed using the statement where rowid = 'xxxxxxxx.xxxx.xxxx'.
- TABLE FETCH BY CONTINUED ROW: indicates the number of rows that are chained to another block. In some cases (i.e. tables with long columns) this is unavoidable, but the ANALYZE table command should be used to further investigate the chaining, and where possible, should be eliminated by rebuilding the table.
- Table Scans (long tables) is the total number of full table scans performed on tables with more than 5 database blocks. If the number of full table scans is high the application should be tuned to effectively use Oracle indexes. Indexes, if they exist, should be used on long tables if less than 10-20% (depending on parameter settings and CPU count) of the rows from the table are returned. If this is not the case, check the db_file_multiblock_read_count parameter setting. It may be too high. You may also need to tweak optimizer_index_caching and optimizer_index_cost_adj.
- Table Scans (short tables) is the number of full table scans performed on tables with less than 5 database blocks. It is optimal to perform full table scans on short tables rather than using indexes.
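
Several of the statistics discussed above can also be read directly from V$SYSSTAT on a running instance; a minimal sketch:

select name, value
  from v$sysstat
 where name in ('CPU used by this session', 'parse time cpu',
                'recursive cpu usage', 'redo log space requests',
                'table fetch continued row');

The values are cumulative since instance startup, so two samples taken some time apart must be differenced, which is exactly what the Statspack report does between snapshots.
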
IO Activity

Input/Output (IO) statistics for the instance are listed in the following sections/formats:
- Tablespace IO Stats for DB: Ordered by total IO per tablespace.
- File IO Stats for DB: Ordered alphabetically by tablespace, filename.

In Oracle 8.1.7 several other columns were added, as follows:
- Avg. Reads / Second
- Avg. Blocks / Read
- Avg. Writes / Second
- Buffer Waits
- Avg. Buffer Wait (ms)

- Tablespace IO Stats

Tablespace IO Stats for DB: PHS2 Instance: phs2 Snaps: 100 -104
->ordered by IOs (Reads + Writes) desc

Tablespace
------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
PHASE_WORK_TS
138,361 38 0.0 3.9 6,859 2 0 0.0
OFFER_HISTORY_TS
24,714 7 0.0 4.0 0 0 0 0.0
ATTR1_TS
7,823 2 0.0 4.0 0 0 0 0.0
TEMP
886 0 0.0 20.1 1,147 0 0 0.0
SYSTEM
184 0 3.9 2.8 56 0 18 3.3


- File IO Stats
File IO Stats for DB: PHS2 Instance: phs2 Snaps: 100 -104
->ordered by Tablespace, File

Tablespace Filename
------------------------ ----------------------------------------------------
Av Av Av Av Buffer Av Buf
Reads Reads/s Rd(ms) Blks/Rd Writes Writes/s Waits Wt(ms)
-------------- ------- ------ ------- ------------ -------- ---------- ------
ATTR1_TS /oradata/phs2/hsz16/attr1_01.dbf
398 0 0.0 3.9 0 0 0
/oradata/phs2/hsz17/attr1_02.dbf
400 0 0.0 4.0 0 0 0
/oradata/phs2/hsz18/attr1_03.dbf
398 0 0.0 4.0 0 0 0
/oradata/phs2/hsz19/attr1_04.dbf
480 0 0.0 4.0 0 0 0
....


Note that Oracle considers average read times of greater than 20 ms unacceptable (a query to check datafile read times directly is sketched after this list). If a datafile consistently has average read times of 20 ms or greater then:
- The queries against the contents of the owning tablespace should be examined and tuned so that less data is retrieved.
- If the tablespace contains indexes, another option is to compress the indexes so that they require less space and hence, less IO.
- The contents of that datafile should be redistributed across several disks/logical volumes to more easily accommodate the load.
- If the disk layout seems optimal, check the disk controller layout. It may be that the datafiles need to be distributed across more disk sets.
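
A quick live check of datafile read times, as mentioned above, can be done against V$FILESTAT. Read times are recorded in hundredths of a second, so the sketch below converts them to milliseconds:

select df.name,
       fs.phyrds,
       round(fs.readtim * 10 / greatest(fs.phyrds, 1), 1) avg_read_ms
  from v$filestat fs, v$datafile df
 where fs.file# = df.file#
 order by 3 desc;

As with V$SYSSTAT, these counters are cumulative since startup, so sustained problems are better judged from the per-interval figures in the Statspack report.
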
Buffer cache Activity Information

The buffer statistics are comprised of two sections:

- Buffer Pool Statistics:

This section can have multiple entries if multiple buffer pools are allocated. This section is in both 8i and 9i and is identical in both.

Buffer Pool Statistics for DB: PHS2 Instance: phs2 Snaps: 100 -104
-> Pools D: default pool, K: keep pool, R: recycle pool

Free Write Buffer
Buffer Consistent Physical Physical Buffer Complete Busy
P Gets Gets Reads Writes Waits Waits Waits
- ----------- ------------- ----------- ---------- ------- -------- ----------
D 4,167 362,492 3,091 413 0 0 60




A baseline of the database's buffer pool statistics should be available to compare with the current statspack buffer pool statistics. A change in that pattern unaccounted for by a change in workload should be a cause for concern.
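
When building such a baseline, a current view of the same counters can be taken from V$BUFFER_POOL_STATISTICS; a minimal sketch:

select name, db_block_gets, consistent_gets, physical_reads,
       physical_writes, free_buffer_wait, write_complete_wait,
       buffer_busy_wait
  from v$buffer_pool_statistics;
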
- Buffer Wait Statistics:

This section shows a breakdown of each type of object waited for. This section follows the Instance Recovery Stats for DB in 9i and is identical to that in 8i.

Buffer wait Statistics for DB: PHS2 Instance: phs2 Snaps: 100 -104
-> ordered by wait time desc, waits desc

Tot Wait Avg
Class Waits Time (cs) Time (cs)
------------------ ----------- ---------- ---------
undo header 42 21 1
data block 18 6 0


The above shows no real contention. Typically, when there is buffer contention, it is due to data block contention with large average wait times, like the example below:
Buffer wait Statistics for DB: GLOVP Instance: glovp Snaps: 454 - 455

Tot Wait Avg
Class Waits Time (cs) Time (cs)
------------------ ----------- ---------- ---------
data block 9,698 17,097 2
undo block 210 1,225 6
segment header 259 367 1
undo header 259 366 1
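
A live breakdown of buffer busy waits by block class (the same classification used above) can be taken from V$WAITSTAT; a minimal sketch:

select class, count, time
  from v$waitstat
 where count > 0
 order by time desc, count desc;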

Instance Recovery Statistics

This section was added in 9i and is useful for monitoring the recovery and redo information.

Instance Recovery Stats for DB: S901 Instance: S901 Snaps: 2 -3
-> B: Begin snapshot, E: End snapshot

Targt Estd Log File Log Ckpt Log Ckpt
MTTR MTTR Recovery Actual Target Size Timeout Interval
(s) (s) Estd IOs Redo Blks Redo Blks Redo Blks Redo Blks Redo Blks
- ----- ----- ---------- ---------- ---------- ---------- ---------- ----------
B 15 8 8024 21033 20691 92160 20691 ##########
E 15 11 8024 77248 92160 92160 285818 ##########
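
The figures in this section come from V$INSTANCE_RECOVERY, which can also be queried directly on 9i; a minimal sketch:

select target_mttr, estimated_mttr, recovery_estimated_ios,
       actual_redo_blks, target_redo_blks
  from v$instance_recovery;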

PGA Memory Statistics


This section was added in 9i and helps when using the new model for allocating PGA memory in Oracle9i with PGA_AGGREGATE_TARGET.
PGA Memory Stats for DB: S901 Instance: S901 Snaps: 2 -3
-> WorkArea (W/A) memory is used for: sort, bitmap merge, and hash join ops

Statistic Begin (M) End (M) % Diff
----------------------------------- ---------------- ---------------- ----------
maximum PGA allocated 10.405 10.405 .00
total PGA allocated 7.201 7.285 1.17
total PGA inuse 6.681 6.684 .04


This section is particularly useful when monitoring session memory usage on Windows servers.
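
On 9i the same information can be sampled directly from V$PGASTAT when PGA_AGGREGATE_TARGET is in use; a minimal sketch (the memory-related statistics are reported in bytes):

select name, value
  from v$pgastat;
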
Enqueue Activity

An enqueue is simply a locking mechanism. This section is very useful and must be used when the wait event "enqueue" is listed in the "Top 5 timed events".

In 8i the section looks like this.

Enqueue activity for DB: PHS2 Instance: phs2 Snaps: 100 -104
-> ordered by waits desc, gets desc

Enqueue Gets Waits
---------- ------------ ----------
PS 2,632 716
ST 192 185
TM 973 184
TC 66 57
US 80 53
TS 68 46
TT 349 36
PI 56 32
HW 10 5
CF 275 3
DV 4 3
TX 499 1

In 9i the section looks like this.

Enqueue activity for DB: S901 Instance: S901 Snaps: 2 -3
-> Enqueue stats gathered prior to 9i should not be compared with 9i data
-> ordered by waits desc, requests desc

Avg Wt Wait
Eq Requests Succ Gets Failed Gets Waits Time (ms) Time (s)
-- ------------ ------------ ----------- ----------- ----------- ------------
HW 656 656 0 139 2.04 0

The action to take depends on the lock type that is causing the most problems (on 9i, current enqueue statistics can also be pulled from V$ENQUEUE_STAT, as sketched after this list). The most common lock waits are generally for:
- TX - Transaction Lock: Generally due to application concurrency mechanisms, or table setup issues.

- TM - DML enqueue: Generally due to application issues, particularly if foreign key constraints have not been indexed.

- ST - Space management enqueue: Usually caused by too much space management occurring. For example: create table as select on large tables on busy instances, small extent sizes, lots of sorting, etc.
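
A minimal sketch of the 9i query mentioned above:

select eq_type, total_req#, succ_req#, failed_req#,
       total_wait#, cum_wait_time
  from v$enqueue_stat
 where total_wait# > 0
 order by cum_wait_time desc;

CUM_WAIT_TIME is the cumulative time spent waiting for each lock type.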

Undo (Rollback) Information

Undo (Rollback) information is provided in two sections. They are identical in both 8i and 9i and are self explanatory.

- Rollback Segment Stats
- Rollback Segment Storage

In 9i the following two sections are added to provide similar information on the System Managed Undo (SMU) tablespace. Both are self explanatory.

- Undo Segment Summary for DB
- Undo Segment Stats for DB

The examples below show typical performance problem related to Undo (rollback) segments:

- Rollback Segment Stats for DB

Rollback Segment Stats for DB: PHS2 Instance: phs2 Snaps: 100 -104
->A high value for "Pct Waits" suggests more rollback segments may be required

Trans Table Pct Undo Bytes
RBS No Gets Waits Written Wraps Shrinks Extends
------ ------------ ------- --------------- -------- -------- --------
0 9.0 0.00 0 0 0 0
4 6,838.0 0.18 554,206 0 0 0
5 2,174.0 0.55 292,474 0 0 0
6 4,309.0 0.23 471,992 0 0 0
....

In this case, the Pct Waits on three of the rollback segments indicates that there is some minor contention on the rollbacks and that either another rollback segment or more space should be added.
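
The same gets/waits ratio can be checked live by joining V$ROLLSTAT to V$ROLLNAME; a minimal sketch:

select n.name, s.gets, s.waits,
       round(s.waits * 100 / greatest(s.gets, 1), 2) pct_waits,
       s.wraps, s.shrinks, s.extends, s.optsize, s.aveactive
  from v$rollstat s, v$rollname n
 where s.usn = n.usn;
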
- Rollback Segment Storage for DB

Rollback Segment Storage for DB: PHS2 Instance: phs2 Snaps: 100 -104
->Optimal Size should be larger than Avg Active

RBS No Segment Size Avg Active Optimal Size Maximum Size
------ --------------- --------------- --------------- ---------------
0 753,664 0 753,664
4 2,520,743,936 0 2,520,743,936
5 2,109,702,144 0 2,109,702,144
6 528,449,536 0 528,449,536

In this case, the client does not have OPTIMAL set. The next example shows an instance where OPTIMAL is set:
Rollback Segment Storage for DB: RW1PRD Instance: rw1prd Snaps: 10489 - 1
->The value of Optimal should be larger than Avg Active

RBS No Segment Size Avg Active Optimal Size Maximum Size
------ --------------- ----------- --------------- ---------------
0 5,087,232 0 5,087,232
1 52,420,608 ########### 52,428,800 335,536,128
2 52,420,608 10,551,688 52,428,800 283,107,328
3 52,420,608 10,621,742 52,428,800 283,107,328
4 52,420,608 10,736,056 52,428,800 283,107,328
5 52,420,608 17,861,266 52,428,800 325,050,368
6 52,420,608 19,579,373 52,428,800 335,536,128
7 52,420,608 11,571,513 52,428,800 283,107,328
8 52,420,608 44,140,215 52,428,800 335,536,128
9 52,420,608 65,045,643 52,428,800 325,050,368

In this instance OPTIMAL is set and we can see an overflow of the Avg Active column for RBS 1 (the value is too large for the report column), and that RBS 9 was also larger than optimal. If this is a consistent problem, the optimal value should probably be raised.

- Undo Segment Summary for DB

Undo Segment Summary for DB: S901 Instance: S901 Snaps: 2 -3
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
-> eS - expired Stolen, eR - expired Released, eU - expired reUsed

Undo Undo Num Max Qry Max Tx Snapshot Out of uS/uR/uU/
TS# Blocks Trans Len (s) Concurcy Too Old Space eS/eR/eU
---- -------------- ---------- -------- ---------- -------- ------ -------------
1 20,284 1,964 8 12 0 0 0/0/0/0/0/0

The description of the view V$UNDOSTAT in the Oracle9i Database Reference guide provides some insight into the column definitions. Should the client encounter SMU problems, monitoring this view every few minutes would provide more useful information.
- Undo Segment Stats for DB

Undo Segment Stats for DB: S901 Instance: S901 Snaps: 2 -3
-> ordered by Time desc

Undo Num Max Qry Max Tx Snap Out of uS/uR/uU/
End Time Blocks Trans Len (s) Concy Too Old Space eS/eR/eU
------------ ------------ -------- ------- -------- ------- ------ -------------
12-Mar 16:11 18,723 1,756 8 12 0 0 0/0/0/0/0/0
12-Mar 16:01 1,561 208 3 12 0 0 0/0/0/0/0/0

This section provides a more detailed look at the statistics in the previous section by listing the information as it appears in each snapshot.
It should be noted that 9i introduces an optional init.ora parameter called UNDO_RETENTION which allows the DBA to specify how long the system will attempt to retain undo information for committed transactions before it can be overwritten or reclaimed. This parameter, specified in units of wall-clock seconds, applies universally to all undo segments.

Use of UNDO_RETENTION can potentially increase the size of the undo segment for a given period of time, so the retention period should not be arbitrarily set too high. The UNDO tablespace still must be sized appropriately. The following calculation can be used to determine how much space a given undo segment will consume given a set value of UNDO_RETENTION.

Undo Segment Space Required = (undo_retention_time * undo_blocks_per_second * db_block_size)

As an example, an UNDO_RETENTION of 5 minutes (default) with 50 undo blocks/second and an 8K block size requires:

Undo Segment Space Required = 300 seconds * 50 blocks/second * 8K/block = approximately 120 MB
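
A sketch of the same sizing calculation driven from the live system follows; it assumes V$UNDOSTAT has been populated over a representative workload period and simply multiplies the peak undo generation rate by UNDO_RETENTION and the block size:

select round((ur.value * peak.blks_per_sec * bs.value) / (1024 * 1024)) undo_mb_needed
  from (select value from v$parameter where name = 'undo_retention') ur,
       (select value from v$parameter where name = 'db_block_size') bs,
       (select max(undoblks / ((end_time - begin_time) * 86400)) blks_per_sec
          from v$undostat) peak;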

The retention information (transaction commit time) is stored in every transaction table block and each extent map block. When the retention period has expired, SMON is signaled to perform undo reclaims, which are done by scanning each transaction table for undo timestamps and deleting the information from the undo segment extent map. Only under extreme space constraints will the retention period not be obeyed.

Latch Information

Latch information is provided in the following three sections.

. Latch Activity
. Latch Sleep breakdown
. Latch Miss Sources

This information should be checked whenever the "latch free" wait event or other latch wait events experience long waits.

- Latch Activity

Latch Activity for DB: PHS2 Instance: phs2 Snaps: 100 -104
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0

Pct Avg Pct
Get Get Slps NoWait NoWait
Latch Name Requests Miss /Miss Requests Miss
----------------------------- -------------- ------ ------ ------------ ------
KCL freelist latch 9,382 0.0 0
KCL lock element parent latch 15,500 0.0 0.0 0
KCL name table latch 3,340 0.0 0
Token Manager 12,474 0.0 0.0 0
active checkpoint queue latch 2,504 0.0 0
batching SCNs 114,141 0.0 0.0 0
begin backup scn array 6,697 0.0 0
cache buffer handles 1 0.0 0
cache buffers chains 1,056,119 0.1 0.2 6,303 0.0
cache buffers lru chain 104,996 0.0 4,078 0.0



This section is identical in both 8i and 9i.
This section is particularly useful for determining latch contention on an instance. Latch contention generally indicates resource contention and supports indications of it in other sections.
Latch contention is indicated by a Pct Miss of greater than 1.0% or a relatively high value in Avg Sleeps/Miss.
While each latch can indicate contention on some resource, the more common latches to watch are listed below (a query for a quick live check against V$LATCH is sketched after this list):

- cache buffer chains: Contention on this latch confirms a hot block issue. See Note 62172.1 'Understanding and Tuning Buffer Cache and DBWR in Oracle7, Oracle8, and Oracle8i' for a discussion of this phenomenon.

- shared pool: Contention on this latch in conjunction with reloads in the SQL Area of the library cache section indicates that the shared pool is too small. More generally, contention on this latch indicates that one of the following is happening:

. The library cache, and hence, the shared pool is too small.

. Literal SQL is being used. See Note 62143.1 'Understanding and Tuning the Shared Pool' for an excellent discussion of this topic.

. On versions 8.1.7.2 and higher, session_cached_cursors might need to be set. See enhancement bug 1589185 for details.

See Note 62143.1 Understanding and Tuning the Shared Pool in Oracle7, Oracle8, and Oracle8i for a good discussion on literal SQL and its impact on the shared pool and library cache.
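
A minimal sketch of the quick live check mentioned above, reading the miss and sleep counters from V$LATCH:

select name, gets, misses, sleeps,
       round(misses * 100 / greatest(gets, 1), 2) pct_miss
  from v$latch
 where misses > 0
 order by sleeps desc;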

- Latch Sleep breakdown

Latch Sleep breakdown for DB: PHS2 Instance: phs2 Snaps: 100 -104
-> ordered by misses desc

Get Spin &
Latch Name Requests Misses Sleeps Sleeps 1->4
-------------------------- -------------- ----------- ----------- ------------
row cache objects 1,908,536 70,584 16,976 54656/14893/
1022/13/0
dlm resource hash list 624,455 15,931 71,868 118/959/1483
5/19/0
parallel query alloc buffe 37,000 4,850 362 4502/335/12/
1/0
shared pool 176,560 3,238 773 2649/431/134
/24/0
library cache 871,408 1,572 935 925/433/151/
63/0
cache buffers chains 1,056,119 872 209 670/195/7/0/
0
....


This section provides additional supporting information to the previous section. It is identical in 8i and 9i.
Latch Miss Sources

Latch Miss Sources for DB: PHS2 Instance: phs2 Snaps: 100 -104
-> only latches with sleeps are shown
-> ordered by name, sleeps desc

NoWait Waiter
Latch Name Where Misses Sleeps Sleeps
------------------------ -------------------------- ------- ---------- -------
batching SCNs kcsl01 0 1 1
cache buffers chains kcbgtcr: kslbegin 0 114 39
cache buffers chains kcbgcur: kslbegin 0 62 62
cache buffers chains kcbrls: kslbegin 0 29 104
cache buffers chains kcbchg: kslbegin: bufs not 0 1 1
dlm group lock table lat kjgalk: move a lock from p 0 1 0
dlm lock table freelist kjlalc: lock allocation 0 10 6
dlm lock table freelist kjgdlk: move lock to paren 0 1 2
dlm lock table freelist kjlfr: remove lock from pa 0 1 3
dlm resource hash list kjucvl: open cr lock reque 0 36,732 562
dlm resource hash list kjxcvr: handle convert req 0 29,189 39,519
dlm resource hash list kjskchcv: convert on shado 0 3,907 25
dlm resource hash list kjrrmas1: lookup master no 0 1,603 18
dlm resource hash list kjcvscn: remove from scan 0 383 0
dlm resource hash list kjrlck: lock resource 0 26 1,965


This section provides a detailed breakdown of which latches are missing and sleeping. It is particularly useful in identifying library cache bugs as it provides latch child information not available in the previous two sections.
Search on the latch child name experiencing high misses or sleeps and you can often find the bug responsible.
It is identical in 8i and 9i.

Dictionary Cache Statistics

This is an interesting section to monitor, but one about which little can be done directly: the only way to change the size of the dictionary cache is to change the shared pool size, since the dictionary cache is allocated as a percentage of the shared pool. It is identical in 8i and 9i.

Dictionary Cache Stats for DB: PHS2 Instance: phs2 Snaps: 100 -104
->"Pct Misses" should be very low (< 2% in most cases)
->"Cache Usage" is the number of cache entries being used
->"Pct SGA" is the ratio of usage to allocated size for that cache

Get Pct Scan Pct Mod Final Pct
Cache Requests Miss Requests Miss Req Usage SGA
---------------------- ------------ ------ -------- ----- -------- ------ ----
dc_constraints 0 0 0 0 0
dc_database_links 0 0 0 0 0
dc_files 0 0 0 161 98
dc_free_extents 226,432 16.8 304 0.0 288 ###### 99
...
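
The underlying data for this section comes from V$ROWCACHE, which can be sampled directly; a minimal sketch:

select parameter, gets, getmisses,
       round(getmisses * 100 / greatest(gets, 1), 2) pct_miss
  from v$rowcache
 where gets > 0
 order by getmisses desc;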

Library Cache Statistics

This section of the report shows information about activity in the different sub-areas (namespaces) of the library cache.

The 8i version looks like this.

Library Cache Activity for DB: PHS2 Instance: phs2 Snaps: 100 -104
->"Pct Misses" should be very low

Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
--------------- ------------ ------ -------------- ------ ---------- --------
BODY 48 0.0 48 0.0 0 0
CLUSTER 7 0.0 8 0.0 0 0
INDEX 0 0 0 0
OBJECT 0 0 0 0
PIPE 0 0 0 0
SQL AREA 42,640 0.2 193,249 0.1 23 17
TABLE/PROCEDURE 287 3.8 1,701 2.6 6 0
TRIGGER 0 0 0 0

The 9i version looks like this.
Library Cache Activity for DB: S901 Instance: S901 Snaps: 2 -3
->"Pct Misses" should be very low

Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
--------------- ------------ ------ -------------- ------ ---------- --------
BODY 29 0.0 29 0.0 0 0
SQL AREA 579 5.7 2,203,964 0.0 0 0
TABLE/PROCEDURE 292 0.0 496 0.0 0 0
TRIGGER 12 0.0 12 0.0 0 0

Values in Pct Misses or Reloads in the SQL Area, Table/Procedure or Trigger rows indicate that the shared pool may be too small. Consistent (not sporadic) values in Pct Misses or Reloads in the Index row indicate that the buffer cache may be too small (the Index row is no longer reported in 9i).

Values in Invalidations in the SQL Area indicate that a table definition changed while a query was being run against it or a PL/SQL package being used was recompiled.
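
The same namespace figures can be sampled live from V$LIBRARYCACHE; a minimal sketch:

select namespace, gets, round(gethitratio * 100, 1) get_hit_pct,
       pins, round(pinhitratio * 100, 1) pin_hit_pct,
       reloads, invalidations
  from v$librarycache;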

SGA Memory Summary

This section provides a breakdown of how the SGA memory is used at the time of the report. It is useful to be able to track this over time. This section is identical in 8i and 9i.

SGA regions Size in Bytes
------------------------------ ----------------
Database Buffers 209,715,200
Fixed Size 103,396
Redo Buffers 104,873,984
Variable Size 423,956,480
----------------
sum 738,649,060
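
The same summary can be obtained at any time from V$SGA (or with the SQL*Plus command SHOW SGA); a minimal sketch:

select name, value
  from v$sga;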

SGA Memory Detail

This section shows a detailed breakdown of memory usage by the SGA at the beginning and ending of the reporting period. It allows the DBA to track memory usage throughout the business cycle. It is identical in 8i and 9i.

SGA breakdown difference for DB: PHS2 Instance: phs2 Snaps: 100 -104

Pool Name Begin value End value Difference
----------- ------------------------ -------------- -------------- -----------
java pool free memory 20,000,768 20,000,768 0
large pool PX msg pool 230,386,744 230,386,744 0
large pool free memory 299,976 299,976 0
shared pool Checkpoint queue 189,280 189,280 0
shared pool KGFF heap 252,128 252,128 0
shared pool KGK heap 31,000 31,000 0
shared pool KQLS heap 2,221,552 2,246,640 25,088
shared pool PL/SQL DIANA 436,240 436,240 0
shared pool PL/SQL MPCODE 138,688 138,688 0

Init.ora Parameters Summary

The final section shows the current init.ora parameter settings. It displays the more commonly used parameters, including some hidden ones, and shows the end value where it differs from the begin value. It is identical in 8i and 9i.

init.ora Parameters for DB: PHS2 Instance: phs2 Snaps: 100 -104

End value
Parameter Name Begin value (if different)
----------------------------- --------------------------------- --------------
_PX_use_large_pool TRUE
always_anti_join HASH
audit_trail TRUE
background_dump_dest /u01/app/oracle/admin/phs2/bdump
bitmap_merge_area_size 10485760
compatible 8.1.7
control_files /oradata/phs2/hsz16/control_01.db
core_dump_dest /u01/app/oracle/admin/phs2/cdump
cursor_space_for_time TRUE



6) How do I move my statistics from one database to a different database to
be able to run reports?

See Note 149122.1: Sharing StatsPack snapshot data between two or more databases
PURPOSE
-------

This note is intended to provide information about the steps required to move
StatsPack information from one database to another database so that performance
data can be analyzed.

SCOPE & APPLICATION
-------------------

This note is applicable for Oracle RDBMS releases 8.1.6 and greater, and is
related to the task of gathering database and session level performance
information by using the StatsPack package. This note assumes that the
StatsPack package has already been loaded into the database.

Sharing StatsPack snapshot data between two or more databases
-----------------------------

If you wish to share data with other sites (for example if Oracle Support
requires the raw statistics), it is possible to export the PERFSTAT user.
Another reason performance data may need to be moved, is to facilitate
remote analysis of production data on a non-production system.

An export parameter file (statsuexp.par) has been supplied for this purpose.
To use this file, supply the export command with the userid parameter, along
with the export parameter file name.
e.g.
exp userid=perfstat/perfstat parfile=statsuexp.par

Parameter file name and location:
Oracle8i 8.1.6:
on Unix:
$ORACLE_HOME/rdbms/admin/statsuexp.par

on NT:
%ORACLE_HOME%\rdbms\admin\statsuexp.par

on VMS:
ora_rdbms_admin:statsuexp.par


Oracle8i 8.1.7 and Oracle9i 9.x:
on Unix:
$ORACLE_HOME/rdbms/admin/spuexp.par

on NT:
%ORACLE_HOME%\rdbms\admin\spuexp.par

on VMS:
ora_rdbms_admin:spuexp.par



This will create a dump file called statsuexp.dmp/spuexp.dmp and a log file
called statsuexp.log/spuexp.log.

If you wish to load the data into another database, use the import command.
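
A sketch of the corresponding import, assuming the dump file produced above, an existing PERFSTAT user in the target database, and an arbitrary log file name (spuimp.log):

imp userid=perfstat/perfstat file=spuexp.dmp log=spuimp.log fromuser=perfstat touser=perfstat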

For more information on using export and import, please see the Oracle
Utilities manual.

Note: If you need to create a report based on the imported data, you will need to modify
spreport.sql to reference the inst_num and dbid of the imported database directly.
This is required because the script obtains these values by querying the v$database and
v$instance views in the target database.

Summary of files referenced in this note (8.1.6/8.1.7 or 9.x):
statsuexp.par/spuexp.par - export parameter file
statsuexp.dmp/spuexp.dmp - export dump file created when exporting PERFSTAT user
statsuexp.log/spuexp.log - log file generated during export of PERFSTAT user





7) Can StatsPack snapshots and UTLBSTAT/UTLESTAT scripts be run on the same database?

See Note 149123.1: Conflicts and differences compared to UTLBSTAT/UTLESTAT
PURPOSE
-------

This note is intended to provide information about conflicts and differences
between StatsPack and UTLBSTAT/UTLESTAT.

SCOPE & APPLICATION
-------------------

This note is applicable for Oracle RDBMS releases 8.1.6 and greater, and is
related to the task of gathering database and session level performance
information by using the StatsPack package. This note assumes that the
StatsPack package has already been loaded into the database.

Conflicts and differences compared to UTLBSTAT/UTLESTAT
------------------------------------------------------------

- Running BSTAT/ESTAT in conjunction with Statspack

If you choose to run BSTAT/ESTAT in conjunction with Statspack, do not
run both as the same user, as there is a table name conflict: the table
is stats$waitstat.


- Differences between Statspack and BSTAT/ESTAT

* Statspack fundamentally differs from the well known UTLBSTAT/UTLESTAT
tuning scripts by collecting more information, and also by storing the
performance statistics data permanently in Oracle tables, which can later
be used for reporting and analysis. The data collected can be analyzed
using the report provided, which includes an "instance health and load"
summary page, high resource SQL statements, as well as the traditional
wait events and initialization parameters.

* Permanent tables owned by PERFSTAT store performance statistics;
instead of creating/dropping tables each time, data is inserted
into the pre-existing tables. This makes historical data comparisons
easier. As well, Statspack separates the data collection from the report
generation. Data is collected when a 'snapshot' is taken; viewing the data
collected is in the hands of the performance engineer when he/she runs the
performance report.

* Statspack increments a transaction counter on a commit or a
rollback, and so calculates the number of transactions thus:

'user commits' + 'user rollbacks'

BSTAT/ESTAT considers a transaction to complete with a commit only, and
so assumes that transactions = 'user commits' (both statistics can be read
from V$SYSSTAT, as sketched below).

For this reason, comparing per transaction statistics between Statspack and
BSTAT/ESTAT may result in significantly different per transaction ratios.
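
A minimal sketch of the V$SYSSTAT query mentioned above, for verifying the transaction count over a given interval:

select name, value
  from v$sysstat
 where name in ('user commits', 'user rollbacks');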



8) How do I remove StatsPack snapshots that I am no longer interested in?

This functionality is available starting with Oracle 8.1.7 using a script
called sppurge.sql located in $ORACLE_HOME/rdbms/admin.

In Oracle 8.1.6 you will need to contact Oracle Support for assistance.
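
A sketch of a typical purge run on Oracle 8.1.7 or later, connected as the PERFSTAT owner (the script prompts for the range of snapshot ids to delete):

SQL> connect perfstat/perfstat
SQL> @?/rdbms/admin/sppurge
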
9) Statspack and Version 10g:
- It is recommended that you use AWR in 10g; it is not recommended to use both AWR and Statspack in 10g.
If you must use Statspack and you wish to convert historical Statspack data for use in 10g, you need to run the
update script file(s) referenced in $ORACLE_HOME/rdbms/admin/spdoc.txt.
See Note 394937.1: Statistics Package (STATSPACK) Guide
