Remove Node From Oracle RAC 10g (Part 2)

Remove Node From the Database (From the Node You Want to Delete):


**Before you start removing the node from the database, you need to update the inventory on that node by running the following from <Database home>/oui/bin:

./runInstaller -updateNodeList ORACLE_HOME=<Database home> "CLUSTER_NODES=<node to be removed>" -local

After performing this, you can run the installer and de-install the database home from the node being removed.

You then need to update the corresponding inventory on the remaining nodes. You can use the following command from the first node (again from <Database home>/oui/bin):
./runInstaller -updateNodeList ORACLE_HOME=<Database home> "CLUSTER_NODES=<remaining nodes>"

Remove Node From ASM : 

The above steps may cause some confusion: we removed the database home from the node we want to delete, but the ASM home still exists there, and the inventory on the remaining nodes still lists the deleted node for ASM. So we need to update the ASM home in the same way, as follows:

Update the inventory from the node we want to delete (run from <ASM home>/oui/bin):

./runInstaller -updateNodeList ORACLE_HOME=<ASM home> "CLUSTER_NODES=<node to be removed>" -local

 
Then update the corresponding inventory on the remaining nodes. You can use the following command from the first node:
./runInstaller -updateNodeList ORACLE_HOME=<ASM home> "CLUSTER_NODES=<remaining nodes>"

Remove Node From Oracle Clusterware (From the First Node, Do the Steps Below):


1- Run this command:

<Oracle Clusterware home>/bin/racgons remove_config <node to be removed>:6200
**6200: this value can differ; it is the ONS remote port, which you can check in:

<Oracle Clusterware home>/opmn/conf/ons.config
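For reference, the ons.config file typically contains entries like the following; the values shown are only illustrative, so check your own file for the actual remoteport:

localport=6113
remoteport=6200
loglevel=3
useocr=on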


2- From the node you want to remove, as the root user:

<Oracle Clusterware home>/install/rootdelete.sh

3- From the first node, as the root user:

<Oracle Clusterware home>/bin/olsnodes -n
Then:
<Oracle Clusterware home>/install/rootdeletenode.sh <node name to be deleted>,<node number to be deleted>
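For example, if olsnodes -n reports the node to be removed as rac2 with node number 2 (a hypothetical name and number), the call becomes:

<Oracle Clusterware home>/install/rootdeletenode.sh rac2,2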

 

Remove Node From Oracle Clusterware (From the Node You Want to Delete)

To update the inventory, run the following command:

<Oracle Clusterware home>/oui/bin/runInstaller -updateNodeList ORACLE_HOME=<Oracle Clusterware home> "CLUSTER_NODES=<node to be deleted>" CRS=TRUE -local

Then run the installer on that node and choose De-install to remove the Clusterware home.

From the first node, after the de-install finishes:
<Oracle Clusterware home>/oui/bin/runInstaller -updateNodeList ORACLE_HOME=<Oracle Clusterware home> "CLUSTER_NODES=<remaining nodes>" CRS=TRUE
Now you have deleted the node.
To check:

• srvctl status nodeapps -n <deleted node> should return a message saying the node is invalid.
• crs_stat | grep -i <deleted node> should not return any output.
• olsnodes -n should list all the present nodes without the deleted node.
Thank You
Osama  Mustafa 

Dealing With Oracle Data Guard

Database Switchover 

Using this method you can switch backwards and forwards between the primary and DR servers (e.g. so that the primary can become DR and DR can become primary) without having to rebuild either environment:

 

On Primary Server:
SQL> alter database commit to switchover to standby;
This may cause the following error to be generated:
ERROR at line 1:
ORA-01093: ALTER DATABASE CLOSE only permitted with no sessions connected
If this does occur then restart the database, as below, before retrying the above command:
SQL> shutdown immediate
SQL> startup
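An alternative worth knowing (not part of the steps above) is to let Oracle disconnect the active sessions as part of the switchover by using the WITH SESSION SHUTDOWN clause:

SQL> alter database commit to switchover to physical standby with session shutdown;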

 

SQL> shutdown immediate
SQL> startup nomount
SQL> alter database mount standby database;
SQL> alter database recover managed standby database disconnect;

The primary server is now configured as a DR standby database.
On DR Server:
SQL> alter database recover managed standby database cancel;
SQL> alter database commit to switchover to primary;
SQL> shutdown immediate
SQL> startup
The DR server is now configured as the primary database.
To switch back you just need to repeat the above process but the other way around (e.g. convert the DR database back to a standby and the primary database back to primary).

Activating a Standby Database

If the primary database is not available the standby database can be converted into the primary database as follows:


SQL> alter database recover managed standby database cancel;
SQL> alter database activate standby database;
SQL> shutdown immediate
SQL> startup
The original primary database is now obsolete and can be rebuilt as a standby database once it is available again.

Opening the Standby Database in Read Only Mode

 

The standby database can be opened in read only mode for querying and then converted back into a standby database without affecting the primary.
On standby server:
SQL> alter database recover managed standby database cancel;
SQL> alter database open read only;
The standby database is now open and available for querying in read only mode.
To put the standby database back into standby mode:
SQL> shutdown immediate
SQL> startup nomount
SQL> alter database mount standby database;
SQL> alter database recover managed standby database disconnect;

How to check whether the Standby Database is in Sync

On the primary server:

SQL> SELECT max(sequence#) AS "PRIMARY" FROM v$log_history;

 
On the standby server:

SQL> SELECT max(sequence#) AS "STANDBY", applied
          FROM v$archived_log GROUP BY applied;

The standby database is in sync with the primary database if the above PRIMARY value matches the above STANDBY value where applied = 'YES'.
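As an additional sanity check (not part of the original steps), you can query v$archive_gap on the standby; no rows returned means there is no gap in the received archive logs:

SQL> SELECT * FROM v$archive_gap;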
 

 


Change VIP Addresses

1.Determine the interface used to support your VIP:
 $ ifconfig -a
2.Stop all resources depending on the VIP:
$ srvctl stop instance -d DB -i DB1
$ srvctl stop asm -n node1
# srvctl stop nodeapps -n node1

3.Verify that the VIP is no longer running:

$ ifconfig -a
$ crs_stat
4.Change IP in /etc/hosts and DNS.

5.Modify your VIP address using srvctl:
# srvctl modify nodeapps -n node1 -A 192.168.2.125/255.255.255.0/eth0
6.Start nodeapps and all resources depending on it:
# srvctl start nodeapps -n node1
7.Repeat from step 1 for the next node.
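As an optional check after step 6, you can confirm the new VIP is registered with Clusterware (the node name is just an example):

$ srvctl config nodeapps -n node1 -a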
Thank You 
Osama Mustafa 

 

 

 

UNKNOWN State Of RAC Resources

Sometimes when you check the Oracle Cluster status via "crs_stat -t" you see something like this:

------------------------------------------------------------
ora.orcl.db application ONLINE ONLINE rac1
ora....11.inst application ONLINE ONLINE rac1
ora....SM1.asm application ONLINE ONLINE rac1
ora....DC.lsnr application ONLINE ONLINE rac1
ora....idc.gsd application ONLINE ONLINE rac1
ora....idc.ons application ONLINE ONLINE rac1
ora....idc.vip application ONLINE ONLINE rac1
ora....SM2.asm application ONLINE UNKNOWN rac2
ora....C2.lsnr application ONLINE ONLINE rac2
ora....dc2.gsd application ONLINE ONLINE rac2
ora....dc2.ons application ONLINE ONLINE rac2
ora....dc2.vip application ONLINE ONLINE rac2

If you try to restart the Oracle Cluster via

./crs_stop -all
./crs_start -all

it will not work, so you have to do the following simple steps:

First:

crs_stop -f <service_name>

Example:
crs_stop -f ora.rac2.ASM2.asm

Second:

crs_start -f <service_name>

Example:

crs_start -f ora.rac2.ASM2.asm

Thank you
Osama

Checking O2CB heartbeat: Not active

[root@rac01]# /etc/init.d/o2cb  status

Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
  Heartbeat dead threshold: 61
  Network idle timeout: 30000
  Network keepalive delay: 2000
  Network reconnect delay: 2000
Checking O2CB heartbeat: Not active

The solution is the following:

[root@rac01]# /etc/init.d/o2cb  configure

Configuring the O2CB driver.
This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting
<ENTER> without typing an answer will keep that current value.  Ctrl-C
will abort.
Load O2CB driver on boot (y/n) [y]: y
Cluster to start on boot (Enter "none" to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [31]: 61
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
O2CB cluster ocfs2 already online

[root@rac01]# /etc/init.d/o2cb  stop
Stopping O2CB cluster ocfs2: OK
Unmounting ocfs2_dlmfs filesystem: OK
Unloading module "ocfs2_dlmfs": OK
Unmounting configfs filesystem: OK
Unloading module "configfs": OK

[root@rac01]# /etc/init.d/o2cb  start
Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK

Now run:

[root@rac01]# ocfs2console

Then run the mount command, for example as shown below.
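A minimal sketch of mounting an OCFS2 volume that holds Oracle files; the device and mount point here are examples, so adjust them to your own layout:

[root@rac01]# mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /u02/oradata
# And in /etc/fstab so it mounts at boot:
/dev/sdb1   /u02/oradata   ocfs2   _netdev,datavolume,nointr   0 0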

Oracle Real Application Cluster Lesson # 1

Sometimes we need solutions to keep our database available all the time. There are a lot of solutions; one of them is Oracle Real Application Cluster (RAC), a high-availability option.

As lesson number one, I will cover Oracle Real Application Cluster basics.

Let's start:

Oracle RAC allows multiple computers to run Oracle RDBMS software simultaneously while accessing a single database, thus providing a clustered database.
In a non-RAC Oracle database, a single instance accesses a single database. The database consists of a collection of data files, control files, and redo logs located on disk. The instance comprises the collection of Oracle-related memory and operating system processes that run on a computer system.
In an Oracle RAC environment, two or more computers (each with an instance) concurrently access a single database. This allows an application or user to connect to either computer and have access to a single coordinated set of data.
This lesson assumes an installation of Oracle 10g Release 2 (10.2) RAC on Red Hat Enterprise Linux 4.

Hardware
At the hardware level, each node in a RAC cluster shares three things:

  1. Access to shared disk storage
  2. Connection to a private network
  3. Access to a public network.

 

Shared Disk Storage
Oracle RAC relies on a shared disk architecture. The database files, online redo logs, and control files for the database must be accessible to each node in the cluster. The shared disks also store the Oracle Cluster Registry and Voting Disk (discussed later). There are a variety of ways to configure shared storage including direct attached disks (typically SCSI over copper or fiber), Storage Area Networks (SAN), and Network Attached Storage (NAS).

Supported Shared Storage In RAC :

1- Oracle Cluster File System (OCFS): a shared file system designed specifically for Oracle Real Application Cluster. OCFS eliminates the requirement that Oracle database files be linked to logical drives and enables all nodes to share a single Oracle Home.
2- Automatic Storage Management (ASM)
3- Raw devices

Private Network
Each cluster node is connected to all other nodes via a private high-speed network, also known as the cluster interconnect or high-speed interconnect (HSI). This network is used by Oracle’s Cache Fusion technology to effectively combine the physical memory (RAM) in each host into a single cache. Oracle Cache Fusion allows data stored in the cache of one Oracle instance to be accessed by any other instance by transferring it across the private network. It also preserves data integrity and cache coherency by transmitting locking and other synchronization information across cluster nodes.
The private network is typically built with Gigabit Ethernet, but for high-volume environments, many vendors offer proprietary low-latency, high-bandwidth solutions specifically designed for Oracle RAC. Linux also offers a means of bonding multiple physical NICs into a single virtual NIC (not covered here) to provide increased bandwidth and availability.

Public Network
To maintain high availability, each cluster node is assigned a virtual IP address (VIP). In the event of node failure, the failed node’s IP address can be reassigned to a surviving node to allow applications to continue accessing the database through the same IP address.
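As an illustration, a minimal /etc/hosts layout for a two-node cluster could look like the following; all host names and addresses here are made up:

# Public
192.168.2.101   rac1.localdomain       rac1
192.168.2.102   rac2.localdomain       rac2
# Virtual (VIP)
192.168.2.111   rac1-vip.localdomain   rac1-vip
192.168.2.112   rac2-vip.localdomain   rac2-vip
# Private interconnect
10.0.0.1        rac1-priv.localdomain  rac1-priv
10.0.0.2        rac2-priv.localdomain  rac2-priv

The VIPs must be in the same subnet as the public addresses and must not already be in use, since Clusterware brings them up itself.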

Configuring the Cluster Hardware
There are many different ways to configure the hardware for an Oracle RAC cluster. Our configuration here uses two servers with two CPUs, 1GB RAM, two Gigabit Ethernet NICs, a dual channel SCSI host bus adapter (HBA), and eight SCSI disks connected via copper to each host (four disks per channel). The disks were configured as Just a Bunch Of Disks (JBOD)—that is, with no hardware RAID controller. 


Software
At the software level, each node in a RAC cluster needs:

  1. An operating system
  2. Oracle Clusterware
  3. Oracle RAC software
  4. An Oracle Automatic Storage Management (ASM) instance (optional).

Operating System
Oracle RAC is supported on many different operating systems. This guide focuses on Linux. The operating system must be properly configured, including installing the necessary software packages, setting kernel parameters, configuring the network, establishing an account with the proper security, configuring disk devices, and creating directory structures. All these tasks are described in this guide.
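As a sketch of the kernel parameter step, these are the typical /etc/sysctl.conf settings used for 10gR2 on RHEL 4; always take the exact values from the installation guide for your release and platform:

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
# Socket buffer sizes (this also covers the UDP buffer sizing mentioned in the checklist later)
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144

Apply them with sysctl -p.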

Oracle Cluster Ready Services becomes Oracle Clusterware
Oracle RAC 10g Release 1 introduced Oracle Cluster Ready Services (CRS), a platform-independent set of system services for cluster environments. In Release 2, Oracle has renamed this product to Oracle Clusterware.
Clusterware maintains two files: the Oracle Cluster Registry (OCR) and the Voting Disk. The OCR and the Voting Disk must reside on shared disks as either raw partitions or files in a cluster filesystem. This guide describes creating the OCR and Voting Disks using a cluster filesystem (OCFS2) and walks through the CRS installation.
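A rough sketch of that approach, assuming a shared device /dev/sdb1 and a label and mount point chosen only for illustration:

# Format the shared volume from one node only
[root@rac01]# mkfs.ocfs2 -b 4K -C 32K -N 4 -L crsfiles /dev/sdb1
# Mount it on every node (and add it to /etc/fstab with the _netdev option)
[root@rac01]# mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /u02/crs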

Oracle RAC Software
Oracle RAC 10g Release 2 software is the heart of the RAC database and must be installed on each cluster node. Fortunately, the Oracle Universal Installer (OUI) does most of the work of installing the RAC software on each node. You only have to install RAC on one node—OUI does the rest.

Oracle Automatic Storage Management (ASM) / Other Shared Storage
ASM is a new feature in Oracle Database 10g that provides the services of a filesystem, logical volume manager, and software RAID in a platform-independent manner. Oracle ASM can stripe and mirror your disks, allow disks to be added or removed while the database is under load, and automatically balance I/O to remove “hot spots.” It also supports direct and asynchronous I/O and implements the Oracle Data Manager API (simplified I/O system call interface) introduced in Oracle9i.
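To make this concrete, here is a hedged sketch of presenting two shared disks to ASM with ASMLib on Linux and building a mirrored disk group from them; the device names and the DATA labels are examples only:

# On one node, mark the shared partitions as ASM disks (requires the ASMLib packages)
[root@rac01]# /etc/init.d/oracleasm createdisk DATA1 /dev/sdc1
[root@rac01]# /etc/init.d/oracleasm createdisk DATA2 /dev/sdd1
# On the other nodes, rescan so they see the new disks
[root@rac02]# /etc/init.d/oracleasm scandisks

Then, connected to the ASM instance:

SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
     DISK 'ORCL:DATA1', 'ORCL:DATA2';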

Some other things you need to check before installation:

1- Crossover cables are not supported (use a high-speed switch).
2- Use at least Gigabit Ethernet for optimal performance.
3- Increase the UDP buffer sizes to the OS maximum (see the sysctl.conf sketch above).
4- Turn on UDP checksumming.
5- Oracle Support strongly recommends the use of UDP for the interconnect (TCP on Windows).
6- SSH connectivity between all nodes (see the sketch below).
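For item 6, a minimal sketch of setting up SSH user equivalence for the oracle user between two hypothetical nodes rac1 and rac2 (OUI needs passwordless ssh between all nodes during the install):

[oracle@rac1]$ ssh-keygen -t rsa
[oracle@rac1]$ ssh-keygen -t dsa
[oracle@rac1]$ cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys
# Repeat the key generation on rac2, append its public keys to the same file,
# then copy the combined authorized_keys to every node, for example:
[oracle@rac1]$ scp ~/.ssh/authorized_keys rac2:~/.ssh/
# Finally test in every direction so the host keys get cached:
[oracle@rac1]$ ssh rac2 date

Once connectivity works, the Cluster Verification Utility shipped on the Clusterware media can check the whole pre-install setup, for example: ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose (node names again hypothetical).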

Thank you
Osama mustafa