Thursday, December 6, 2007

Why Oracle's Automatic Storage Management?
by Sreeram Surapaneni


Oracle Database 10g's mantra is simpler, more automated database management. We DBAs cannot help but get excited about the new technology Oracle routinely releases, and with the release of ASM, a volume manager and file system built into the database kernel, that enthusiasm is hard to overstate. While Oracle 10g delivers numerous interesting new features, we will examine just one, Automatic Storage Management (ASM), which makes disk space management in Linux and UNIX database environments a snap.

What exactly is Automatic Storage Management?
ASM is Oracle's own storage management system. It directly manages potentially thousands of Oracle database files by letting the DBA create disk groups, which are composed of disks and the files that reside on them. The DBA then only needs to manage a small number of disk groups rather than the individual files.

The task of planning, initializing, allocating, and managing many disks for several databases, or even a single large database, quickly becomes unwieldy. Capitulation is often inevitable, and many companies simply treat the disk storage farm as a black box, abstracting that complexity away from the database. Very often, this approach leads to database I/O bottlenecks that are time-consuming to diagnose. At the other extreme, in enterprises where the DBA's duties do include disk management, those tasks consume far too much precious time. Neither scenario is really acceptable; both represent extremes.

Oracle Database 10g's new ASM feature directly addresses these real-world scenarios by providing an effective, simple middle ground. Instead of spending inordinate amounts of time managing disk complexities, or abstracting them away from the database in the hope that all is well, we can simply let ASM manage it all. We merely allocate disks to Oracle, state our striping and mirroring preferences via templates, and let ASM manage the space. This eliminates the need for traditional tools such as Veritas Volume Manager (VXVM) and VXFS file systems, along with the numerous commands and GUI tools (VXSA) needed to manage all that storage. As a result, Linux and UNIX database servers can be deployed more easily and quickly for ever-growing database size requirements, and with much greater I/O efficiency than before.

It is important to note that ASM is not a general-purpose file system and is not intended to replace the operating system's file system. For example, if ASM were used to manage background and user trace files, it could be difficult for the DBA to access the information they contain. ASM files cannot be accessed directly by the operating system; as a result, if the operating system cannot open the alert.log file, the DBA will be unable to view its contents in a text editor to diagnose database problems. These types of files should instead be managed through the operating system's file system.

Easier to Set Up
ASM is just plain easier to set up and administer than a traditional file system and volume manager. Let's assume we have a simple database to create: just the required tablespaces plus one for user data. Let's also assume the following hardware and software constraints:

RAID 0 (stripe everything across all drives)
Stripe Width = 4
Stripe Length = 64 KB
Four 20 GB IDE disks, each with a single partition
Linux ext3 file system (2 GB file size limit)
Five tablespaces:
   SYSTEM, 2 GB, 1 data file
   SYSAUX, 2 GB, 1 data file
   UNDO, 8 GB, 4 data files
   TEMP, 8 GB, 4 data files
   USER, 60 GB, 30 data files
One Physical Volume (PV) per disk drive
One Volume Group (VG): VG01
Four Logical Volumes (LV):
   LV01, 4 GB, SYSTEM and SYSAUX
   LV02, 8 GB, UNDO
   LV03, 8 GB, TEMP
   LV04, 60 GB, USER
Now let's compare the actual steps to create the database in each environment. Note that in the LVM example I purposely ignore the various size-overhead issues in order to keep the example simple. Be warned, however: for optimal disk space usage you will have to tackle those as well; they are yet another example of the complex storage planning and management issues you can avoid by using ASM.

Here are the steps using cooked files with an LVM. (Note that the IDE disk drives are assumed to be the second through fifth drives in the example, hence the b-e device name designations of /dev/hdb through /dev/hde):

1. fdisk /dev/hdb and set its type to 0x8e (LVM partition)
2. fdisk /dev/hdc and set its type to 0x8e (LVM partition)
3. fdisk /dev/hdd and set its type to 0x8e (LVM partition)
4. fdisk /dev/hde and set its type to 0x8e (LVM partition)
5. pvcreate /dev/hdb /dev/hdc /dev/hdd /dev/hde
6. vgcreate VG01 /dev/hdb /dev/hdc /dev/hdd /dev/hde
7. lvcreate -L 4G -i 4 -I 64 -n LV01 VG01
8. lvcreate -L 8G -i 4 -I 64 -n LV02 VG01
9. lvcreate -L 8G -i 4 -I 64 -n LV03 VG01
10. lvcreate -L 60G -i 4 -I 64 -n LV04 VG01
11. mkfs -t ext3 /dev/VG01/LV01
12. mkfs -t ext3 /dev/VG01/LV02
13. mkfs -t ext3 /dev/VG01/LV03
14. mkfs -t ext3 /dev/VG01/LV04
15. mount /dev/VG01/LV01 /home/oracle/oradata/LVMDB/system
16. mount /dev/VG01/LV02 /home/oracle/oradata/LVMDB/undo
17. mount /dev/VG01/LV03 /home/oracle/oradata/LVMDB/temp
18. mount /dev/VG01/LV04 /home/oracle/oradata/LVMDB/user1
19. Edit /etc/fstab and add the new mount point entries
20. Create the initLVMDB.ora file
    o INSTANCE_TYPE = RDBMS
21. SQL*Plus connect as SYSDBA for SID=LVMDB
22. STARTUP NOMOUNT PFILE=initLVMDB.ora
23. CREATE SPFILE FROM PFILE=initLVMDB.ora
24. Create the Oracle database and user tablespace using SQL code (a sketch follows below).
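
To give a concrete feel for step 24, here is a rough sketch of what the SQL might look like on the cooked file system. The file names and 2 GB sizes are assumed from the layout above, not taken verbatim from any script, and the user tablespace is called user1 here (matching the mount point) because USER itself is a reserved word. Note that every data file must be spelled out by hand and the redo logs simply land on the file system:

CREATE DATABASE LVMDB
   LOGFILE GROUP 1 ('/home/oracle/oradata/LVMDB/system/redo01.log') SIZE 100M,
           GROUP 2 ('/home/oracle/oradata/LVMDB/system/redo02.log') SIZE 100M
   DATAFILE '/home/oracle/oradata/LVMDB/system/system01.dbf' SIZE 2000M
   SYSAUX DATAFILE '/home/oracle/oradata/LVMDB/system/sysaux01.dbf' SIZE 2000M
   DEFAULT TEMPORARY TABLESPACE temp
      TEMPFILE '/home/oracle/oradata/LVMDB/temp/temp01.dbf' SIZE 2000M,
               '/home/oracle/oradata/LVMDB/temp/temp02.dbf' SIZE 2000M,
               '/home/oracle/oradata/LVMDB/temp/temp03.dbf' SIZE 2000M,
               '/home/oracle/oradata/LVMDB/temp/temp04.dbf' SIZE 2000M
   UNDO TABLESPACE undo
      DATAFILE '/home/oracle/oradata/LVMDB/undo/undo01.dbf' SIZE 2000M,
               '/home/oracle/oradata/LVMDB/undo/undo02.dbf' SIZE 2000M,
               '/home/oracle/oradata/LVMDB/undo/undo03.dbf' SIZE 2000M,
               '/home/oracle/oradata/LVMDB/undo/undo04.dbf' SIZE 2000M;

CREATE TABLESPACE user1
   DATAFILE '/home/oracle/oradata/LVMDB/user1/user1_01.dbf' SIZE 2000M,
            '/home/oracle/oradata/LVMDB/user1/user1_02.dbf' SIZE 2000M,
            -- ... 27 more 2 GB data files listed individually ...
            '/home/oracle/oradata/LVMDB/user1/user1_30.dbf' SIZE 2000M;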

Now simply imagine that you will need dozens of such user tablespaces, each with lots and lots of data files. It is not hard to see why this example does not scale well. Keeping track of all the tablespaces, data files, logical volumes, physical volumes, and actual disk drives (for finding hot spots) is just too difficult.

Now here is the much simpler ASM example:

1. Create the initASM.ora file
    o INSTANCE_TYPE = ASM
2. SQL*Plus connect as SYSDBA for SID=ASM
3. STARTUP NOMOUNT PFILE=initASM.ora
4. CREATE SPFILE FROM PFILE=initASM.ora
5. CREATE DISKGROUP dgroup1 EXTERNAL REDUNDANCY DISK
   '/dev/hdb','/dev/hdc','/dev/hdd','/dev/hde'
6. Create the initASMDB.ora file
    o INSTANCE_TYPE = RDBMS
    o DB_CREATE_FILE_DEST = '+dgroup1'
7. SQL*Plus connect as SYSDBA for SID=ASMDB
8. STARTUP NOMOUNT PFILE=initASMDB.ora
9. Create the Oracle database and user tablespace using SQL (a sketch follows below)
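
For step 9, here is a sketch of what the database creation could look like if we still spell out every size explicitly (the sizes mirror the assumptions above; because DB_CREATE_FILE_DEST already points at +dgroup1, the disk group name could even be omitted):

CREATE DATABASE ASMDB
   LOGFILE GROUP 1 ('+dgroup1') SIZE 100M,
           GROUP 2 ('+dgroup1') SIZE 100M
   DATAFILE '+dgroup1' SIZE 2000M
   SYSAUX DATAFILE '+dgroup1' SIZE 2000M
   DEFAULT TEMPORARY TABLESPACE temp TEMPFILE '+dgroup1' SIZE 8000M
   UNDO TABLESPACE undo DATAFILE '+dgroup1' SIZE 8000M;

CREATE TABLESPACE user1
   DATAFILE '+dgroup1' SIZE 30000M, '+dgroup1' SIZE 30000M;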

And that's the syntax only if we still want to explicitly control the data allocation sizes (which really are no longer a concern, since we are now working at the disk level). We could instead just let Oracle handle all the internal space needs and issue the much simpler syntax:

create database ASMDB;
Not only is this syntax much shorter and easier to read, but note that even our redo logs are created in the striped disk group. (In the prior example, they were simply placed on the file system.) The main point is that laying out the storage and creating the database is so much simpler that, if you are managing lots and lots of disk drives (e.g., in a SAN or NAS environment), you can't go wrong upgrading to Oracle Database 10g for ASM alone.

Furthermore, ASM automatically rebalances both the striping and the mirroring of a disk group whenever disks are added, dropped, or fail, all with the database completely online. Oracle thus automatically keeps all of your objects fully striped, which is why ASM can claim to provide near-optimal I/O balancing without any manual tuning. It simply internalizes and automates what DBAs have been doing manually for years: trying to eliminate hot spots by spreading data across as many drives as possible. Note that you can control when and how Oracle performs that rebalancing via the ASM_POWER_LIMIT parameter and related settings.
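
For example (a sketch using the standard parameter and clause names), the rebalance effort can be throttled instance-wide or per operation:

-- Default rebalance power for the ASM instance (0 pauses rebalancing, 11 is the 10g maximum)
ALTER SYSTEM SET ASM_POWER_LIMIT = 5;

-- Or override it for a single disk group operation
ALTER DISKGROUP dgroup1 REBALANCE POWER 8;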

Easier to Change
Of course, you only create the database once, so you might argue that the savings shown above are not reason enough to undertake any changes. So now, let's examine what would happen in each scenario if we add four disks. This is where ASM really shines.

Let's assume that our single USER tablespace is nearly full, containing just 10 tables and 10 indexes, where each table consumes 4GB and each index consumes 2GB. If we now need to create another table and index, we do not have enough room. Therefore, we are given four more disks identical to the first four to add to our storage design in order to accommodate additional space requests. In other words, we are going to add 80GB to our single USER tablespace. Sounds easy enough, right?

In the LVM example, we have three primary options:

Create a new Volume Group VG02 with a new Logical Volume LV05
Extend existing Volume Group VG01 with a new Logical Volume LV05
Extend existing Volume Group VG01 by extending Logical Volume LV04.
[Note: Since we are assuming that all 80GB is to be dedicated to the USER tablespace, there is no need to create more than a single new Logical Volume for the first two options.]

Most people will choose the third option because we are merely trying to add space to our existing storage design. The steps to implement that choice are shown below.

1. fdisk /dev/hdf and set its type to 0x8e (LVM partition)
2. fdisk /dev/hdg and set its type to 0x8e (LVM partition)
3. fdisk /dev/hdh and set its type to 0x8e (LVM partition)
4. fdisk /dev/hdi and set its type to 0x8e (LVM partition)
5. pvcreate /dev/hdf /dev/hdg /dev/hdh /dev/hdi
6. vgextend VG01 /dev/hdf /dev/hdg /dev/hdh /dev/hdi
7. lvextend -L +80G /dev/VG01/LV04
8. ext2online /dev/VG01/LV04
9. SQL*Plus connect as SYSDBA for SID=LVMDB
10. Add the new space to the tablespace using SQL code (a sketch follows below)
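
Step 10 would then be something along these lines (the file name is assumed for illustration; with the 2 GB file-size limit, several such files would be needed to use the full 80 GB):

ALTER TABLESPACE user1
   ADD DATAFILE '/home/oracle/oradata/LVMDB/user1/user1_31.dbf' SIZE 2000M;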

Not only is this example very long; it has several problems as well. First, online file system resizing is a very tricky business. The author of the ext2online utility states that "resizing a mounted file system is inherently dangerous and may corrupt file systems." Moreover, the ext2online utility can only enlarge a mounted file system; to shrink or enlarge file systems one would use ext2resize instead—which only works on unmounted file systems. Of course, that would require taking the tablespace offline.

The LVM approach has some not-so-obvious drawbacks as well. We very likely expected that this solution would result in our data being striped across all eight of our drives—not true. While we can add space to a Logical Volume, we cannot change its striping nature on Linux (although some UNIX LVMs do provide such capabilities). So our 10 old tables and 10 old indexes are striped across drives b-e, while our new table and index are striped across drives f-i (because the USER tablespace was already full, new objects will be created in the new space). Even if we exported the tablespace objects, dropped them, coalesced the tablespace, and then imported them back into the tablespace, the Logical Volume is still set for four-way striping. We would have to manually do the following if we really wanted eight-way striping:

Export the objects in that tablespace (database in restricted session, to be safe)
Drop the tablespace
Drop the Logical Volume
Create a new Logical Volume (with striping parameter set as -i 8)
Create the tablespace (this would have lots of data file lines for all 140 GB)
Import the objects into the tablespace.
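
As a rough sketch (the commands and the app_user schema name are assumed for illustration, not taken from the article), that manual rework would amount to something like:

1. exp userid=system owner=app_user file=user1.dmp
2. SQL> DROP TABLESPACE user1 INCLUDING CONTENTS AND DATAFILES;
3. umount /home/oracle/oradata/LVMDB/user1
4. lvremove /dev/VG01/LV04
5. lvcreate -L 140G -i 8 -I 64 -n LV04 VG01
6. mkfs -t ext3 /dev/VG01/LV04
7. mount /dev/VG01/LV04 /home/oracle/oradata/LVMDB/user1
8. SQL> CREATE TABLESPACE user1 DATAFILE ... (data file lines for all 140 GB)
9. imp userid=system file=user1.dmp fromuser=app_user touser=app_user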
That is where ASM steps in and makes life easy. With ASM, the steps are much shorter:

1. SQL*Plus connect as SYSDBA for SID=ASM
2. ALTER DISKGROUP dgroup1 ADD DISK '/dev/hdf','/dev/hdg','/dev/hdh','/dev/hdi'
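
If you want to watch the resulting rebalance, a quick query (a sketch) against V$ASM_OPERATION in the ASM instance shows its progress and estimated time to completion:

SELECT group_number, operation, state, power, est_minutes
  FROM v$asm_operation;
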
Good Performance, Too...
All of the above exemplifies why, for many shops, ASM alone might justify an upgrade to Oracle Database 10g. Although we are eliminating the LVM and file system, we are still using more Oracle technology to do essentially the same thing, so the initial expectation is fairly similar or slightly better performance, with the advantage once again being far greater simplicity to create and manage. Roughly a 10 to 15 percent improvement from a feature that also makes the DBA's life easier is not a bad return on the cost of an upgrade.

Points to note:

ASM gives raw-device-style access, so the Oracle kernel itself manages concurrency
ASM caches metadata (data about the ASM disk groups)
ASM disk space can be expanded on the fly
ASM automatically stripes at the extent level, improving I/O performance
These capabilities sound appealing. However, how does ASM work, and for what kinds of files?


How it works...
An ASM Instance
Before a database can be opened, the physical files must be located using the information provided by the control file. However, if the control file itself is part of a disk group, you might ask, how does the database instance locate the appropriate file?

ASM needs to have its own separate instance available before the actual database instance is started. As part of the ASM instance startup procedure, the various disk groups and their files are identified. The ASM instance mounts the disk groups and then builds an extent map, which is passed to the database instance. The database instance itself, however, performs all of the actual input/output operations. The ASM instance is only involved when files are created or deleted and when the disk configuration changes (such as when a disk is dropped or added).

When these types of changes occur, the ASM instance automatically rebalances the disks and provides the necessary information to refresh the extent map in the SGA of the database instance. Of course, this process requires that the ASM instance run concurrently with the database instance, and only shut down after the database instance is closed.

The impact of the ASM instance on the performance of the database instance is minimal. It does not process transactions against individual database objects, so the SGA it needs is typically no more than about 64 MB. Unless the server's memory is already at the maximum recommended operating system/DBMS allocation, that 64 MB should have no impact on the memory available to the database instance.
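
For reference, the parameter file for an ASM instance is tiny; a minimal sketch (values assumed) might look like this:

# initASM.ora -- minimal ASM instance parameters (a sketch)
INSTANCE_TYPE   = ASM
ASM_DISKSTRING  = '/dev/hd*'        # where to look for candidate disks
ASM_DISKGROUPS  = 'DGROUP1'         # disk groups to mount automatically at startup
ASM_POWER_LIMIT = 1                 # default rebalance aggressiveness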

Disk Groups
Depending on the number of available disk drives, controllers, and redundancy preferences, the DBA can create several disk groups. A disk group is a collection of disks managed as one logical group. By supporting mirroring and striping, it is possible to store redundant copies of the database files in the same disk group, provided sufficient space is available across the disks. However, in terms of performance and recoverability, the disk groups should be subdivided so mirrored copies of the database files reside in defined sets of disks.

This concept can be implemented through the use of failure groups. A failure group identifies a set of disks that needs to be fault-tolerant. When failure groups are assigned either by the DBA or automatically by the ASM instance, redundant copies of an extent are stored in each failure group on a 1MB-by-1MB basis. When high redundancy is specified (i.e., triple mirroring), then the ASM instance can tolerate failures in a maximum of two failure groups.

Creating a Disk Group
Because ASM manages the disk groups and the database files internally, the DBA simply needs to identify the appropriate disks, the desired failure groups, and the level of redundancy for the disk group. For example, if the DBA has eight drives available and wants at least two copies of each database file, the drives can be divided into two failure groups, preferably with each group on its own controller. A sample SQL statement to create this disk group is:

CREATE DISKGROUP dg1
FAILGROUP fgroup1 DISK
'devices/disk1',
'devices/disk2',
'devices/disk3',
'devices/disk4'
FAILGROUP fgroup2 DISK
'devices/disk5',
'devices/disk6',
'devices/disk7',
'devices/disk8';

In this example, the first four disks have a different controller from the second set of disks. Notice that the statement only requires the DBA to identify which disks are to be included in the disk group and how they are to be subdivided among the failure groups. The DBA is not required to specify which database file is written to which disk; ASM takes care of that. Because ASM internally balances the workload across all the disks in the disk group, disk contention is reduced and performance increases.

An added benefit of ASM is that it automatically and dynamically rebalances the files whenever the DBA changes the size of a disk group (by adding or dropping a disk). Moreover, this can be done without shutting down the database.

File Names
With ASM, the DBA has the option of simply specifying a disk group when creating a new data file. For example, to create a new tablespace with 300MB of allocated space, the following command is issued:

CREATE TABLESPACE tb1 DATAFILE '+dg1' SIZE 300M;
With this command, ASM manages the naming and placement/distribution of the data file. The DBA only has to assign the disk group that would store the data file. This approach eliminates the time DBAs spend tuning I/O operations and correcting fragmentation problems.

If ASM, along with Oracle Managed Files (OMF), is responsible for file management, including file names, how does the DBA reference a specific file? For example, suppose the DBA needs to perform an incomplete recovery.

Currently, the DBA can identify the file name for most database files through views. To identify a particular data file, the DBA can query the V$DATAFILE and V$TABLESPACE views and display the name and location of the data files associated with each tablespace. This process works exactly the same way with disk groups.
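
For instance, a query along these lines (a sketch) lists the ASM-generated file name behind each tablespace:

SELECT t.name AS tablespace_name, d.name AS file_name
  FROM v$tablespace t, v$datafile d
 WHERE t.ts# = d.ts#
 ORDER BY t.name;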

The file names that ASM generates to identify a copy of a data file within a failure group are highly structured and, when there are thousands of files to manage, difficult to decipher. However, ASM supports aliases: the DBA can assign aliases to the control file, data files, and so on, which provides the flexibility of referencing individual files when necessary. For example, in the case of an incomplete recovery, the ASM instance can interact with RMAN, and an alias can be used to specify the appropriate data file to delete, allowing RMAN to perform the necessary recovery procedures.
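
Creating an alias is a one-line operation. For example (the directory, alias, and system-generated file name below are assumed for illustration):

ALTER DISKGROUP dg1 ADD DIRECTORY '+dg1/oradata';
ALTER DISKGROUP dg1
   ADD ALIAS '+dg1/oradata/users_01.dbf'
   FOR '+dg1.261.123456789';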

Supported File Types
ASM is designed to hold database files such as data files, control files, online redo logs, archived logs, and RMAN backups. It does not, however, hold trace and log files: as with Oracle9i Database and earlier releases, the location of user trace files, the alert.log file, and so on should still be specified using the USER_DUMP_DEST and BACKGROUND_DUMP_DEST parameters, respectively.
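
These remain ordinary init.ora parameters pointing at regular file system directories, for example (paths assumed):

BACKGROUND_DUMP_DEST = '/home/oracle/admin/ASMDB/bdump'
USER_DUMP_DEST       = '/home/oracle/admin/ASMDB/udump'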

Conclusion and Implications for ASM
Each generation of Oracle software has included features designed to increase the performance of the database system while reducing administrative overhead. The use of an internal automatic storage management system lessens the administrative burden associated with a large database containing thousands of operating system files.

The greatest benefits to the DBA include the elimination of manual I/O tuning, the ability to change storage resources dynamically, and automatic rebalancing. When there are thousands of files to manage, DBAs can spend 40% of their time performing the same operations that ASM handles automatically. The end result is that DBAs have more time to address other aspects of their jobs.

Sreeram Surapaneni is an Oracle Certified DBA (7.3, 8, 8i, 9i) currently working as a DBA at Research In Motion Ltd., Canada.
