
Monday, November 16, 2015

Automatic Storage Management (ASM)

Automatic Storage Management (ASM) is a new type of file system. ASM provides a foundation for highly efficient storage management with kernelized asynchronous I/O, direct I/O, redundancy, striping, and easy storage administration. ASM is the recommended file system for storing database files, both for RAC and for single-instance databases. It performs direct I/O to the files, so performance is comparable with that provided by raw devices. Oracle creates a separate instance (the ASM instance) for this purpose.

Automatic Storage Management (ASM) simplifies administration of Oracle-related files by allowing the administrator to reference diskgroups rather than hundreds of individual disks and files, which are managed by ASM. The ASM functionality is an extension of the Oracle Managed Files (OMF) functionality that also includes striping and mirroring to provide balanced and secure storage. ASM can be used in combination with existing raw and cooked file systems, along with OMF and manually managed files.
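
As a hedged illustration of the OMF integration (the diskgroup name +DATA and the tablespace name are assumptions, not something from this post), pointing the OMF destination at a diskgroup lets files be created without specifying any filename:

  -- From the database instance: send all new datafiles to the +DATA diskgroup.
  ALTER SYSTEM SET db_create_file_dest = '+DATA';

  -- ASM and OMF generate and manage the underlying datafile name automatically.
  CREATE TABLESPACE app_data;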


You can store the following file types in ASM diskgroups:
  • Datafiles
  • Control files
  • Online redo logs
  • Archive logs
  • Flashback logs
  • SPFILEs
  • RMAN backups
  • Temporary datafiles
  • Datafile copies
  • Disaster recovery configurations
  • Change tracking bitmaps
  • DataPump dumpsets

In summary, ASM provides the following functionality/features:
  • Manages groups of disks, called diskgroups; choose the disks for a diskgroup carefully.
  • Manages disk redundancy within a diskgroup.
  • Provides near-optimal I/O balancing without any manual tuning.
  • Enables management of database objects without specifying mount points and filenames.
  • Supports large files.
  • Replacement for CFS (Cluster File System).
  • Also useful for Non-RAC databases.
  • A new instance type, ASM, is introduced in 10g.
  • ASM instance has no data dictionary.
  • A disk can be a partial disk, a full disk, or a LUN from a RAID group (RG).
  • I/O is spread evenly across all disks of a diskgroup.
  • Disks can be dynamically added to any diskgroup.
  • When combined with OMF increases manageability.
  • ASM cannot maintain empty directories; "delete input" has issues with them, so create a dummy directory as a workaround.
  • Use of an ASM diskgroup is very simple, for example in a CREATE TABLESPACE statement (see the sketch after this list).
  • Enterprise Manager can also be used for administering diskgroups.
  • Only RMAN can be used to back up files stored in ASM.
  • Introduces three additional Oracle background processes – RBAL, ARBx and ASMB.
    • ASMB - The ASMB process provides information to and from the cluster synchronization services used by ASM to manage the disk resources. It is also used to update statistics and to provide a heartbeat mechanism.
    • Re-Balance, RBAL - RBAL is the ASM-related process that coordinates rebalancing of disk resources controlled by ASM.
    • Actual Rebalance, ARBx - The ARBx processes perform the actual rebalance work; their number is controlled by ASM_POWER_LIMIT.
  • The ASM instance has its own set of V$ views and init.ora parameters.
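
As a minimal sketch of the last two points (the diskgroup name +DATA and the tablespace name are assumptions), a database instance places a tablespace in ASM simply by naming the diskgroup, while the ASM instance exposes its state through dedicated V$ views:

  -- From the database instance: place a datafile in the +DATA diskgroup.
  CREATE TABLESPACE users_asm DATAFILE '+DATA' SIZE 500M AUTOEXTEND ON;

  -- From the ASM instance: diskgroup and disk state via the ASM-specific V$ views.
  SELECT name, state, type, total_mb, free_mb FROM v$asm_diskgroup;
  SELECT group_number, name, path, failgroup FROM v$asm_disk;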

The advantages of ASM are
  • Disk Addition - Adding a disk is very easy; no downtime is required and file extents are redistributed automatically (see the sketch after this list).
  • I/O Distribution - I/O is spread over all the available disks automatically, without manual intervention, reducing chances of a hot spot.
  • Stripe Width - Striping can be fine-grained, as for redo log files (128 KB, for a faster transfer rate), or coarse, as for datafiles (1 MB, to transfer a large number of blocks at one time).
  • Mirroring - Software mirroring can be set up easily, if hardware mirroring is not available.
  • Buffering - The ASM file system is not buffered, making it direct I/O capable by design.
  • Kernelized Asynchronous I/O - No special setup is necessary to enable kernelized asynchronous I/O; it works without raw devices or third-party file systems such as Veritas Quick I/O.
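
A sketch of the disk addition point above (the diskgroup name and device path are illustrative): adding a disk needs no downtime, and the resulting rebalance can be watched from the ASM instance:

  ALTER DISKGROUP data ADD DISK '/dev/rdsk/disk7' REBALANCE POWER 4;

  -- Monitor the extent redistribution triggered by the new disk.
  SELECT operation, state, power, sofar, est_work, est_minutes
  FROM   v$asm_operation;
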
The initialization parameters that are specific to an ASM instance are:
  • INSTANCE_TYPE - Set to ASM. The default is RDBMS.
  • ASM_DISKGROUPS - The list of diskgroups that should be mounted by an ASM instance during instance startup, or by the ALTER DISKGROUP ALL MOUNT statement. ASM configuration changes are automatically reflected in this parameter.
  • ASM_DISKSTRING - Specifies a value that can be used to limit the disks considered for discovery. The default value is NULL allowing all suitable disks to be considered. Altering the default value may improve the speed of diskgroup mount time and the speed of adding a disk to a diskgroup. Changing the parameter to a value which prevents the discovery of already mounted disks results in an error.
  • ASM_POWER_LIMIT - The maximum power for a rebalancing operation on an ASM instance. The valid values range from 1 (default) to 11. The higher the limit, the more resources are allocated, resulting in faster rebalancing operations. This value is also used as the default when the POWER clause is omitted from a rebalance operation. A value of 0 disables rebalancing.
  • ASM_PREFERRED_READ_FAILURE_GROUPS - This initialization parameter value (default is NULL) is a comma-delimited list of strings that specifies the failure groups that should be preferentially read by the given instance. This parameter is generally used only for clustered ASM instances and its value can be different on different nodes. This is from Oracle 11g.
  • DB_UNIQUE_NAME - Specifies a globally unique name for the database. This defaults to +ASM but must be altered if you intend to run multiple ASM instances.
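
For illustration, a minimal ASM instance pfile might look like the following sketch (every value here is an assumption and must be adapted to the environment):

  *.instance_type    = ASM
  *.asm_diskgroups   = 'DATA', 'FRA'
  *.asm_diskstring   = '/dev/rdsk/*'
  *.asm_power_limit  = 1
  *.asm_preferred_read_failure_groups = 'DATA.FG1'   # 11g, clustered ASM only
  *.db_unique_name   = '+ASM'
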
While creating a diskgroup, we have to specify an ASM diskgroup type based on one of the following three redundancy levels:
  • Normal redundancy - for 2-way mirroring, requiring at least two failure groups. When ASM allocates an extent for a normal redundancy file, it allocates a primary copy and a secondary copy, and it chooses the disk for the secondary copy from a failure group different from the primary copy's.
  • High redundancy - for 3-way mirroring, requiring at least three failure groups; in this case each extent is mirrored across three disks.
  • External redundancy - to not use ASM mirroring. This is used when hardware mirroring or a third-party redundancy mechanism, such as RAID in a storage array, is already in place.
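
The redundancy level is chosen when the diskgroup is created. A hedged sketch (diskgroup names and disk paths are illustrative only):

  -- Normal redundancy: two-way mirroring across at least two failure groups.
  CREATE DISKGROUP data NORMAL REDUNDANCY
    FAILGROUP fg1 DISK '/dev/rdsk/disk1'
    FAILGROUP fg2 DISK '/dev/rdsk/disk2';

  -- High redundancy: three-way mirroring across at least three failure groups.
  CREATE DISKGROUP fra HIGH REDUNDANCY
    FAILGROUP fg1 DISK '/dev/rdsk/disk3'
    FAILGROUP fg2 DISK '/dev/rdsk/disk4'
    FAILGROUP fg3 DISK '/dev/rdsk/disk5';

  -- External redundancy: no ASM mirroring; the storage array provides protection.
  CREATE DISKGROUP extdata EXTERNAL REDUNDANCY DISK '/dev/rdsk/disk6';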

ASM stripes the data and can also mirror it (when using Normal or High redundancy), so it can be used as an alternative to RAID (Redundant Array of Inexpensive Disks) 0+1 solutions.

Source: 

Sachin's DBA Blog

ASM REDUNDANCY and Mirroring

The type of an ASM disk group is based on one of three redundancy levels:
  • Normal
ASM provides two-way mirroring by default. A loss of one ASM disk is tolerated. You can optionally choose three-way or unprotected mirroring for a file in a NORMAL redundancy disk group (see the template sketch after this list). A file specified with HIGH redundancy (three-way mirroring) in a NORMAL redundancy disk group provides additional protection from a bad disk sector, not protection from a disk failure.
  • High
ASM provides triple mirroring by default. A loss of two ASM disks in different failure groups is tolerated.
  • External
ASM does not provide mirroring redundancy and relies on the storage system to provide RAID functionality. Any write error causes a forced dismount of the disk group. All disks must be located for the disk group to mount successfully.
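
A sketch of the per-file redundancy noted under Normal above (the template and tablespace names are hypothetical): a template can request three-way mirroring for individual files inside a NORMAL redundancy disk group:

  ALTER DISKGROUP data ADD TEMPLATE triple_mirrored ATTRIBUTES (HIGH);

  -- Files created through the template get the higher per-file redundancy.
  CREATE TABLESPACE critical_ts DATAFILE '+DATA(triple_mirrored)' SIZE 100M;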

Failure Group

When ASM allocates an extent for a mirrored file, it allocates a primary copy and a mirror copy. ASM chooses the disk on which to store the mirror copy from a failure group different from the primary copy's.
A failure group is a subset of the disks in a disk group that could fail at the same time because they share hardware; the failure of that common hardware must be tolerated. The simultaneous failure of all disks in a failure group does not result in data loss, because the mirrored copies of their data are stored in different failure groups. A NORMAL redundancy disk group must contain at least two failure groups, and a HIGH redundancy disk group must contain at least three. There are always failure groups, even if they are not explicitly created. If you do not specify a failure group for a disk, Oracle automatically creates a new failure group containing just that disk, except for disk groups containing disks on Oracle Exadata cells.
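
To see how disks were assigned to failure groups, the FAILGROUP column of V$ASM_DISK can be queried from the ASM instance; a simple sketch:

  SELECT g.name AS diskgroup, d.name AS disk, d.failgroup, d.path
  FROM   v$asm_disk d
  JOIN   v$asm_diskgroup g ON d.group_number = g.group_number
  ORDER  BY g.name, d.failgroup;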

Disk Failure

When one or more disks fail, the disks are first taken offline and then automatically dropped. The disk group remains mounted and serviceable, and because of mirroring all of the disk group data remains accessible. After the disk drop operation, ASM performs a rebalance to restore full redundancy for the data that was on the failed disks.
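
The rebalance that follows a dropped disk can be observed from the ASM instance, and its power raised if faster restoration of redundancy is wanted (the diskgroup name is illustrative):

  SELECT group_number, operation, state, power, est_minutes
  FROM   v$asm_operation;

  ALTER DISKGROUP data REBALANCE POWER 8;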

Recovery from Read or Write I/O Errors

When a read error happens, it triggers the Oracle ASM instance to attempt bad block remapping. ASM reads a good copy of the extent and copies it to the disk that had the read error. If the write to the same location succeeds, the underlying allocation unit is deemed healthy. If the write fails, ASM attempts to write the extent to a new allocation unit on the same disk. If that write succeeds, the original allocation unit is marked as unusable; if it fails, the disk is taken offline.
One unique benefit of Oracle ASM-based mirroring is that the database instance is aware of the mirroring. For many types of logical corruption, such as a bad checksum or an incorrect System Change Number (SCN), the database instance reads through the mirror side looking for valid content and proceeds without errors.
When a write error happens, the database instance sends the ASM instance a disk offline message. If the database can successfully complete a write to at least one extent copy and receives acknowledgment of the offline disk from ASM, the write is considered successful. If the writes to all mirror sides fail, the database takes the appropriate action in response to a write error, such as taking the tablespace offline.
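
After a write error, any disks that ASM took offline can be spotted from the ASM instance with a query along these lines (a simple sketch):

  SELECT group_number, name, path, mount_status, mode_status
  FROM   v$asm_disk
  WHERE  mode_status <> 'ONLINE';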