Automatic Storage Management (ASM) / Part 1

Using SQL*Plus, connect to the idle instance:

export ORACLE_SID=+ASM
sqlplus / as sysdba

Startup and Shutdown of ASM Instances

 

ASM instances are started and stopped in a similar way to normal database instances. The options for the STARTUP command are:

  • FORCE – Performs a SHUTDOWN ABORT before restarting the ASM instance.
  • MOUNT – Starts the ASM instance and mounts the disk groups specified by the ASM_DISKGROUPS parameter.
  • NOMOUNT – Starts the ASM instance without mounting any disk groups.
  • OPEN – This is not a valid option for an ASM instance.
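A typical startup from the ASM instance looks like the following (a minimal sketch; disk_group_1 is just the example disk group used later in this post):

-- Start the instance and mount the groups listed in ASM_DISKGROUPS.
STARTUP MOUNT

-- Alternatively, start without mounting, then mount a group explicitly.
STARTUP NOMOUNT
ALTER DISKGROUP disk_group_1 MOUNT;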

The options for the SHUTDOWN command are:

  • NORMAL – The ASM instance waits for all connected database instances and SQL sessions to exit, then shuts down.
  • IMMEDIATE – The ASM instance waits for any SQL transactions to complete then shuts down. It doesn’t wait for sessions to exit.
  • TRANSACTIONAL – Same as IMMEDIATE.
  • ABORT – The ASM instance shuts down instantly.
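For example, a clean shutdown once client databases have disconnected:

SHUTDOWN IMMEDIATE

Note that an ASM instance will not shut down cleanly while database instances are still using its disk groups, so the client databases are normally stopped first.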

 

ASM Disk Groups

Disk groups are created using the CREATE DISKGROUP statement, which specifies the level of redundancy:

  • NORMAL REDUNDANCY – Two-way mirroring, requiring two failure groups.
  • HIGH REDUNDANCY – Three-way mirroring, requiring three failure groups.
  • EXTERNAL REDUNDANCY – No mirroring for disks that are already protected using hardware mirroring or RAID. If you have hardware RAID it should be used in preference to ASM redundancy, so this will be the standard option for most installations.

Create Example:

CREATE DISKGROUP disk_group_1 NORMAL REDUNDANCY
FAILGROUP failure_group_1 DISK
'/devices/diska1' NAME diska1,
'/devices/diska2' NAME diska2
FAILGROUP failure_group_2 DISK
'/devices/diskb1' NAME diskb1,
'/devices/diskb2' NAME diskb2;
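Where the disks are already protected by hardware RAID, an external redundancy group needs no failure groups. A sketch (the disk paths are illustrative):

CREATE DISKGROUP disk_group_2 EXTERNAL REDUNDANCY
DISK '/devices/diskc1' NAME diskc1,
'/devices/diskc2' NAME diskc2;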

Drop Example:

DROP DISKGROUP disk_group_1 INCLUDING CONTENTS;

Using ALTER to Add Disks:

ALTER DISKGROUP disk_group_1 ADD DISK
'/devices/disk*3',
'/devices/disk*4';

Using ALTER to Drop a Disk:

ALTER DISKGROUP disk_group_1 DROP DISK diska2;

Using ALTER to Resize a Disk:

ALTER DISKGROUP disk_group_1
RESIZE DISK diska1 SIZE 100G;

Using ALTER to Resize All Disks in a Failure Group:

ALTER DISKGROUP disk_group_1
RESIZE DISKS IN FAILGROUP failure_group_1 SIZE 100G;

Using ALTER to Resize All Disks in a Disk Group:

ALTER DISKGROUP disk_group_1
RESIZE ALL SIZE 100G;

Using ALTER to Cancel a Pending Disk Drop:

ALTER DISKGROUP disk_group_1 UNDROP DISKS;

Some Useful Commands to Mount and Dismount ASM Disk Groups Manually:

ALTER DISKGROUP ALL DISMOUNT;
ALTER DISKGROUP ALL MOUNT;
ALTER DISKGROUP disk_group_1 DISMOUNT;
ALTER DISKGROUP disk_group_1 MOUNT;
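The state of each disk group can then be confirmed from the ASM instance with a quick query against the V$ASM_DISKGROUP view:

SELECT name, state, type, total_mb, free_mb
FROM v$asm_diskgroup;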

 Thank You 

Osama Mustafa

Checking O2CB heartbeat: Not active

On an OCFS2 cluster node, the O2CB status check reports the heartbeat as not active:

[root@rac01]# /etc/init.d/o2cb  status

Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
  Heartbeat dead threshold: 61
  Network idle timeout: 30000
  Network keepalive delay: 2000
  Network reconnect delay: 2000
Checking O2CB heartbeat: Not active

The solution is as follows:

[root@rac01]# /etc/init.d/o2cb  configure

Configuring the O2CB driver.
This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting
<ENTER> without typing an answer will keep that current value.  Ctrl-C
will abort.
Load O2CB driver on boot (y/n) [y]: y
Cluster to start on boot (Enter "none" to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [31]: 61
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
O2CB cluster ocfs2 already online

[root@rac01]# /etc/init.d/o2cb  stop
Stopping O2CB cluster ocfs2: OK
Unmounting ocfs2_dlmfs filesystem: OK
Unloading module "ocfs2_dlmfs": OK
Unmounting configfs filesystem: OK
Unloading module "configfs": OK

[root@rac01]# /etc/init.d/o2cb  start
Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK

Now launch the OCFS2 console:

[root@rac01]# ocfs2console

Then run the mount command.
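A sketch of what that typically looks like (the device name and mount point here are assumptions; substitute your own):

[root@rac01]# mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /u02

The datavolume and nointr mount options are the ones commonly recommended for OCFS2 volumes that hold Oracle files.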