Article / Data Guard

While browsing today I read this great article about Data Guard and wanted to share it.


What Is Data Guard 

Oracle Data Guard delivers high availability, data protection, and disaster recovery for enterprise data.

A Data Guard configuration consists of one production database, which has the primary role, and one or more standby databases, which have standby roles.

A standby database can take over the production service during planned or unplanned outages.
It can also be used to offload reporting and backup operations from production.

The databases in a Data Guard configuration are connected by Oracle Net and may be geographically dispersed.

There is no strict requirement for the databases to be in the same subnet: the connection can be over a LAN, a WAN, or the internet.

Data Guard configuration and control can be managed via:

  • SQL command line interface;
  • Data Guard Broker interface: DGMGRL for the command line, and a GUI delivered via Oracle Enterprise Manager. The Data Guard broker tool is included in the Enterprise Edition license. It provides simplified and automated Data Guard maintenance and configuration procedures. I will describe how to use it later in this article.
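As a quick taste of the broker interface, an existing configuration can be inspected from the DGMGRL command line like this (the connection details and database name here are placeholders, not from the article):

$ dgmgrl sys@primary_db
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW DATABASE VERBOSE 'standby_db';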

Oracle Data Guard can be deployed across different platforms (x86, POWER, Itanium, SPARC), operating systems (32/64-bit), and Oracle software versions (32/64-bit), with some limitations.

See MOS notes 413484.1 and 1085687.1 for the supported combinations.

Data Guard deployment examples

 

The examples provided here use the fast-start failover technology for automatic failover operations.
Without fast-start failover in place, all failover operations must be performed manually.

The depiction above demonstrates a high-level HA-ready deployment leveraging a fast-start failover configuration and the Active Data Guard option on a physical standby for reporting and queries.

Here the observer is the Data Guard broker executable utility used in fast-start failover configurations; it can run on a laptop, server, or workstation.


Disaster ready  setup


The depiction below demonstrates a high-level DR-ready deployment leveraging a fast-start failover configuration and a logical standby for reporting and queries. As depicted, it involves three different sites to provide the desired functionality. Although it is not intended to provide reporting services during a Site A outage, that could be achieved by placing an additional standby server in Site B.

 


Standby databases overview

Data Guard provides three different types of standby databases:

  • Physical standby. This is a physically identical, block-for-block copy of the primary database. All physical and logical database structures, such as data files and schemas, are the same. The standby database is synchronized through the Redo Apply mechanism, which receives redo data and applies it to the physical standby database. For this purpose the standby uses so-called standby redo logs, which provide functionality similar to online redo logs.
  • Logical standby. This type of database contains the same logical information as the production database, but the physical structure of the data can be different, for example a different data file organization. The logical standby database is synchronized through the SQL Apply mechanism, which first transforms the received redo data into SQL statements (DMLs and DDLs) using the LogMiner component, and then executes them on the standby. A logical standby database is open in read/write mode.
  • Snapshot standby. This type of physical standby database first appeared with the 11g release. It is more a Data Guard mode than a different configuration. The difference from the previous types is that, when enabled, it is a fully operational (read/write) copy of the production database, where users can access the data and perform any data manipulation. Meanwhile, redo data is still received from the primary database and archived, but not applied until the snapshot standby database is converted back into a physical standby database.
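A physical standby can be switched to the snapshot standby role and back with a couple of statements; a minimal sketch (Redo Apply must be stopped first, and the database restarted into MOUNT mode before converting back):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
-- use the standby in read/write mode, then discard local changes with:
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;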

Physical standby

  • Starting with Oracle Database release 11g there is the new Active Data Guard feature, which allows reporting and queries while Redo Apply is active on the physical standby. This Data Guard option must be purchased as an add-on to the Enterprise Edition license.
  • Can be used to offload backup operations from the production server.
  • Supports all database datatypes, table types, and DDL and DML operations.

Logical standby

  • Typically used for reporting and queries while SQL Apply is active on the logical standby.
  • Can be used for database upgrades with minimal downtime.
  • Can maintain additional database objects that do not belong to the primary database.
  • Has limitations on the supported datatypes, table types, and DDL and DML operations.

Data Guard protection modes

Databases running with Data Guard can be configured to act differently when a primary server outage takes place.


Maximum performance
The default protection mode. This mode provides the highest level of protection possible without affecting primary database performance. Transactions on the primary are committed as soon as all relevant redo data is written to the local online redo log, without waiting for a write acknowledgement from the standby databases; redo is shipped asynchronously. Therefore this protection mode does not guarantee recovery up to the last committed transaction.
Maximum availability
This protection mode guarantees that no data loss will occur if the primary database fails. Transactions do not commit until all relevant redo data is written to the online redo log and to the standby redo logs of the standby database. In such a configuration the standby database is kept completely synchronized with the primary. In case of a standby database outage, or any other condition preventing redo from being written or sent to the standby locations, the primary database keeps running to preserve availability.
Maximum protection
This protection mode guarantees that no data loss will occur if the primary database fails. The only difference from maximum availability mode is that if the primary database cannot write or send redo data to the standby locations, it shuts down. Since this configuration prioritizes data protection over the primary database's availability, the recommended deployment is to have multiple standby databases.

How to enable or change protection modes

from SQLPlus on primary : 

ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE {AVAILABILITY | PERFORMANCE | PROTECTION};

from Data Guard broker command line interface :

 DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS {MAXAVAILABILITY | MAXPROTECTION | MAXPERFORMANCE};
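Either way, the resulting mode can be verified on the primary with a quick query:

SQL> SELECT protection_mode, protection_level FROM v$database;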

 Thank you
Osama Mustafa

ORA-29701: Unable to connect to Cluster Manager

ORA-29701: Unable to connect to Cluster Manager.

The reason is that the Oracle CSS daemon (cssd) process was not running.

As Root User :

Step 1: Go to the /etc folder
Step 2: Execute the following command.

# init.cssd start

startup will be queued to init within 30 seconds

Step 3: Wait for 2 minutes and execute the following commands

# cd $ORACLE_HOME/bin
# ./localconfig delete

Stopping CSSD
Failure in CSS initialization opening OCR.
Shutdown has begun. The daemons should exit soon.

Step 4: Execute the following command

# ./localconfig add

Successfully accumulated necessary OCR keys.
Creating OCR keys for user root, privgrp 'system'
Operation Successful
Configuration for local css has been initialized.
Adding to inittab
Startup will be queued to init within 30 seconds
Checking the status of new oracle init process..
Expecting the CRS daemons to be up within 600 seconds
CSS is active on these nodes
nodelist
CSS is active on all nodes
Oracle CSS service is installed and running under init(1M)

Step 5: Start the ASM instance now.

Drop all Triggers On Database

The scripts below are used to drop database triggers, either for a specific user or for the whole database. Do not run them until you are sure of what you are doing.


First one (for a specific user):

BEGIN
  FOR i IN (SELECT trigger_name, owner
              FROM dba_triggers
             WHERE trigger_name LIKE '%_BI%'
               AND owner = 'MYTESTING')  -- owner names are stored in uppercase unless created quoted
  LOOP
    EXECUTE IMMEDIATE 'DROP TRIGGER '||i.owner||'.'||i.trigger_name;
  END LOOP;
END;
/

Second one (for the whole database):

BEGIN
  FOR i IN (SELECT trigger_name, owner
              FROM dba_triggers)
  LOOP
    EXECUTE IMMEDIATE 'DROP TRIGGER '||i.owner||'.'||i.trigger_name;
  END LOOP;
END;
/
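Before running either script it is safer to generate the DROP statements first and review them; a sketch (the owner name here is only an example):

select 'DROP TRIGGER '||owner||'.'||trigger_name||';'
from dba_triggers
where owner = 'MYTESTING';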

Thank you
Osama Mustafa

ORA-1652: unable to extend temp segment by % in tablespace TEMP

What does that error mean?

This error is fairly self-explanatory: we cannot get enough space for a temporary segment. The size reported in the error message is the number of contiguous free Oracle blocks that cannot be found in the listed tablespace.

How to approach the ORA-1652 error

There are two views in the database that help keep track of the free space in the temporary tablespace: v$sort_usage and v$tempseg_usage (from 9i onwards).

select TABLESPACE_NAME,TOTAL_BLOCKS,USED_BLOCKS,FREE_BLOCKS from v$sort_segment;
TABLESPACE_NAME                 TOTAL_BLOCKS USED_BLOCKS FREE_BLOCKS
——————————- ———— ———– ———–
TEMP                                 1310592           0     1310592

If USED_BLOCKS = TOTAL_BLOCKS, find which user and statement is using the temporary sort segment with the following query:

SELECT a.username, a.sid, a.serial#, a.osuser, b.tablespace, b.blocks, c.sql_text
FROM v$session a, v$tempseg_usage b, v$sqlarea c
WHERE a.saddr = b.session_addr
AND c.address = a.sql_address
AND c.hash_value = a.sql_hash_value
ORDER BY b.tablespace, b.blocks;

Conclusion

There are two ways of solving this error:

  1. Add more tempfiles, increase the size of the current ones, or enable autoextend; and, most importantly:
  2. Tune the queries/statements so that the sort operations are done in memory and not on disk.
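For the first approach, a sketch of the typical statements (the file paths and sizes are examples only):

SQL> ALTER TABLESPACE temp ADD TEMPFILE '/u01/app/oracle/oradata/ORCL/temp02.dbf' SIZE 1G AUTOEXTEND ON NEXT 100M MAXSIZE 8G;
SQL> ALTER DATABASE TEMPFILE '/u01/app/oracle/oradata/ORCL/temp01.dbf' RESIZE 2G;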

 Thank you
Osama Mustafa

Get Database Parameters, Even Hidden Ones

You can use these scripts to check the init.ora parameters in Oracle, this time not from v$parameter.


First query (displays all init.ora parameters, including the hidden ones):

SET linesize 235
col Parameter FOR a50
col SESSION FOR a28
col Instance FOR a55
col S FOR a1
col I FOR a1
col D FOR a1
col Description FOR a90

SELECT
  a.ksppinm  "Parameter",
  decode(p.isses_modifiable,'FALSE',NULL,NULL,NULL,b.ksppstvl) "Session",
  c.ksppstvl "Instance",
  decode(p.isses_modifiable,'FALSE','F','TRUE','T') "S",
  decode(p.issys_modifiable,'FALSE','F','TRUE','T','IMMEDIATE','I','DEFERRED','D') "I",
  decode(p.isdefault,'FALSE','F','TRUE','T') "D",
  a.ksppdesc "Description"
FROM x$ksppi a, x$ksppcv b, x$ksppsv c, v$parameter p
WHERE a.indx = b.indx AND a.indx = c.indx
  AND p.name(+) = a.ksppinm
  AND UPPER(a.ksppinm) LIKE UPPER('%&1%')
ORDER BY a.ksppinm;
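The &1 substitution variable filters by parameter name, so if the query is saved as a script you would call it with part of a name as the argument (the script file name here is just an example):

SQL> @params.sql db_cache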

Second query (lists parameters which are not at their default values):

SET linesize 235 pagesize 200
col Parameter FOR a50
col SESSION FOR a28
col Instance FOR a55
col S FOR a1
col I FOR a1
col D FOR a1
col Description FOR a90

SELECT * FROM (SELECT
  a.ksppinm  "Parameter",
  decode(p.isses_modifiable,'FALSE',NULL,NULL,NULL,b.ksppstvl) "Session",
  c.ksppstvl "Instance",
  decode(p.isses_modifiable,'FALSE','F','TRUE','T') "S",
  decode(p.issys_modifiable,'FALSE','F','TRUE','T','IMMEDIATE','I','DEFERRED','D') "I",
  decode(p.isdefault,'FALSE','F','TRUE','T') "D",
  a.ksppdesc "Description"
FROM x$ksppi a, x$ksppcv b, x$ksppsv c, v$parameter p
WHERE a.indx = b.indx AND a.indx = c.indx
  AND p.name(+) = a.ksppinm
  AND UPPER(a.ksppinm) LIKE UPPER('%&1%')
ORDER BY a.ksppinm) WHERE d='F';

Scripts : Find Oracle Parameters

Find User with unlimited Tablespace Quota

First, find the users with a direct quota on the SYSTEM tablespace.

SELECT username, tablespace_name, bytes, max_bytes
FROM dba_ts_quotas
WHERE max_bytes = -1 AND tablespace_name = 'SYSTEM';

USERNAME                  TABLESPACE_NAME                BYTES  MAX_BYTES
————————- ————————- ———- ———-
SCOTT                     SYSTEM                             0         -1
TEST                      SYSTEM                             0         -1

Why do we want to know who has unlimited quota on the SYSTEM tablespace?

Users who have unlimited quota on SYSTEM, as well as the privilege to create tables or clusters, could mount a kind of denial-of-service attack on the database: they just have to fill up the free space in the SYSTEM tablespace. If their default tablespace is also SYSTEM, they could even do this without intending to.

Next, find users with the system privilege UNLIMITED TABLESPACE.

SELECT * FROM dba_sys_privs WHERE privilege = 'UNLIMITED TABLESPACE';

GRANTEE                        PRIVILEGE                      ADM
—————————— —————————— —
WMSYS                          UNLIMITED TABLESPACE           NO
RRDOMREG                       UNLIMITED TABLESPACE           NO
HR                             UNLIMITED TABLESPACE           NO
OE                             UNLIMITED TABLESPACE           NO
SYS                            UNLIMITED TABLESPACE           NO
LOGSTDBY_ADMINISTRATOR         UNLIMITED TABLESPACE           NO
SCOTT                          UNLIMITED TABLESPACE           NO
BI                             UNLIMITED TABLESPACE           NO
OUTLN                          UNLIMITED TABLESPACE           NO
DBSNMP                         UNLIMITED TABLESPACE           NO
IX                             UNLIMITED TABLESPACE           NO
SH                             UNLIMITED TABLESPACE           NO
DBA                            UNLIMITED TABLESPACE           YES
SYSTEM                         UNLIMITED TABLESPACE           YES

What about cascaded roles?

Since Oracle 11g it is possible to grant UNLIMITED TABLESPACE to a role, and that role can be granted to another role, which is in turn granted to yet another role. This may be a deliberate role concept, or somebody trying to hide some privileges a little. To test it I created three roles, DBA3, DBA2 and DBA1, and granted UNLIMITED TABLESPACE to DBA3.

SELECT
  grantee,
  privilege,
  DECODE(p,'=>'||grantee,'direct',p) path
FROM (
  SELECT
    grantee,
    privilege,
    SYS_CONNECT_BY_PATH(grantee, '=>') p
  FROM (
    SELECT
      grantee,
      privilege
    FROM dba_sys_privs
    UNION ALL
    SELECT
      grantee,
      granted_role privilege
    FROM
      dba_role_privs)
  START WITH privilege = 'UNLIMITED TABLESPACE'
  CONNECT BY PRIOR grantee = privilege )
WHERE
  (grantee IN (SELECT username FROM dba_users)
  OR grantee = 'PUBLIC');

GRANTEE   PRIVILEGE               PATH
——— ———————– ——————————-
BI        UNLIMITED TABLESPACE    direct
SYS       DBA                     =>DBA=>SYS
SYSTEM    DBA                     =>DBA=>SYSTEM
SCOTT     DBA1                    =>DBA3=>DBA2=>DBA1=>SCOTT
SYS       DBA1                    =>DBA3=>DBA2=>DBA1=>SYS
SYS       DBA2                    =>DBA3=>DBA2=>SYS
SYS       DBA3                    =>DBA3=>SYS
DBSNMP    UNLIMITED TABLESPACE    direct
HR        UNLIMITED TABLESPACE    direct
IX        UNLIMITED TABLESPACE    direct
SYS       LOGSTDBY_ADMINISTRATOR  =>LOGSTDBY_ADMINISTRATOR=>SYS
OE        UNLIMITED TABLESPACE    direct
OUTLN     UNLIMITED TABLESPACE    direct
RRDOMREG  UNLIMITED TABLESPACE    direct
SH        UNLIMITED TABLESPACE    direct
SYS       UNLIMITED TABLESPACE    direct
SYSTEM    UNLIMITED TABLESPACE    direct
WMSYS     UNLIMITED TABLESPACE    direct

18 ROWS selected.

A combined query that finds users with direct quotas as well as quotas obtained through a system privilege looks like this:

SELECT
  username,
  tablespace_name,
  privilege
FROM (
  SELECT
    grantee username, 'Any Tablespace' tablespace_name, privilege
  FROM (
    -- first get the users with direct grants
    SELECT
      p1.grantee grantee, privilege
    FROM
      dba_sys_privs p1
    WHERE
      p1.privilege = 'UNLIMITED TABLESPACE'
    UNION ALL
    -- and then the ones with UNLIMITED TABLESPACE through a role...
    SELECT
      r3.grantee, granted_role privilege
    FROM
      dba_role_privs r3
      START WITH r3.granted_role IN (
          SELECT
            DISTINCT p4.grantee
          FROM
            dba_role_privs r4, dba_sys_privs p4
          WHERE
            r4.granted_role = p4.grantee
            AND p4.privilege = 'UNLIMITED TABLESPACE')
    CONNECT BY PRIOR grantee = granted_role)
    -- we just want to see the users, not the roles
  WHERE grantee IN (SELECT username FROM dba_users) OR grantee = 'PUBLIC'
  UNION ALL
  -- list the users with unlimited quota on a dedicated tablespace
  SELECT
    username, tablespace_name, 'DBA_TS_QUOTA' privilege
  FROM
    dba_ts_quotas
  WHERE
    max_bytes = -1 )
WHERE tablespace_name LIKE UPPER('SYSTEM')
    OR tablespace_name = 'Any Tablespace';

USERNAME                  TABLESPACE_NAME           PRIVILEGE
————————- ————————- ——————————

SYSTEM                    Any Tablespace            UNLIMITED TABLESPACE
SYS                       Any Tablespace            DBA
SYSTEM                    Any Tablespace            DBA
SCOTT                     Any Tablespace            DBA1
SYS                       Any Tablespace            DBA1
SYS                       Any Tablespace            DBA2
SYS                       Any Tablespace            DBA3
SYS                       Any Tablespace            LOGSTDBY_ADMINISTRATOR
TEST                      SYSTEM                    DBA_TS_QUOTA

19 ROWS selected.
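Once such users are identified, the privilege can be revoked and replaced by explicit quotas; a sketch (the user, tablespace, and size are examples only):

SQL> REVOKE UNLIMITED TABLESPACE FROM scott;
SQL> ALTER USER scott QUOTA 100M ON users;
SQL> ALTER USER scott QUOTA 0 ON system;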

You can download the script from here: Tablespace-scripts
Links:
1- Find Tablespace Quota.
 Thank you
Osama Mustafa

Data Pump impdp expdp NETWORK_LINK option

First, what is the NETWORK_LINK option?

With this option you can import a schema from a source database into a target database directly. One advantage is that you do not need a separate export and import: the export and import are done in a single step from the source to the destination. Also, no file-system space is needed to accommodate huge dump files, since we can import directly into the target using the network link. It is a very handy Data Pump option. You can also take a backup of a source database schema from another database and store the dump files at the target location.
Example One:
SQL>  select name from v$database;
NAME
———
Production
SQL> show user
USER is “OSAMA”
SQL> select * from tab;
no rows selected
SQL> create table test1 as select * from all_objects;
Table created.
SQL> select * from tab;
TNAME                          TABTYPE  CLUSTERID
—————————— ——- ———-
test1                           TABLE
Add a TNS entry (file location: $ORACLE_HOME/network/admin/tnsnames.ora) for the production database on the source database box. The entry is as below:

production =
   (description =
      (address =
         (protocol = tcp)
         (host = xxx.xxxx.xxx.xxx)
         (port = 1521)
      )
      (connect_data =
         (server = dedicated)
         (sid = production)
      )
   )

Make sure you test the connection using tnsping.

Connect to the source database using SQL*Plus and create a database link to the production database with the osama user:
SQL> create database link production connect to osama identified by osama using 'production';
Database link created.
SQL> select * from tab@production
  2  ;
TNAME                          TABTYPE  CLUSTERID
—————————— ——- ———-
test1                          TABLE
Import the osama schema from the production database into the source database without a dump file (run it from the source):
 
$ impdp osama/osama directory=network logfile=osama.log network_link=production
Import: Release 11.2.0.3.0 – 64bit Production on Tuesday, 24 July, 2012 01:30:35
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "OSAMA"."SYS_IMPORT_SCHEMA_01":  OSAMA/******** directory=network logfile=osama.log network_link=production
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 12 MB
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"OSAMA" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . imported "OSAMA"."TEST1"                      11222 rows

Example Two:
We will export a schema from the production database over the network link and store the dump in files.
From the source, run (the dump is written to the directory object network):
$ expdp osama/osama directory=network dumpfile=osama.dmp logfile=osama.log network_link=production
Export: Release 11.2.0.3.0 – 64bit Production on Tuesday, 24 July, 2012 02:05:09
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "OSAMA"."SYS_EXPORT_SCHEMA_01":  osama/******** directory=network dumpfile=osama.dmp logfile=osama.log network_link=production
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 12 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "OSAMA"."TEST1"                     9.496 MB   11222 rows
Master table "OSAMA"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Also Check
1-Ronny Egners Blog

Enjoy 

Osama Mustafa

New Features In 11g / ADRCI

Automatic Diagnostic Repository Command Interface

What is it?


The Automatic Diagnostic Repository (ADR) is a file-based repository for database diagnostic data such as traces, dumps, the alert log, health monitor reports, and more. It has a unified directory structure across multiple instances and multiple products. Beginning with Release 11g, the database, Automatic Storage Management (ASM), and other Oracle products or components store all diagnostic data in the ADR. Each instance of each product stores diagnostic data underneath its own ADR home directory. The ADR's unified directory structure enables customers and Oracle Support to correlate and analyze diagnostic data across multiple instances and multiple products.

 It contains:

Problem : a critical error in the database.
Incident : a single occurrence of a problem.
Problem Key : a text string that includes an error code (ORA-%).
Incident Package : a collection of data about incidents for one or more problems.
ADR Home : the root directory for all diagnostic data of an instance (traces, dumps, alert log).
ADR Base : the root directory that permits correlation of diagnostic data across multiple ADR homes.
 How to show errors, alerts and traces:
adrci> show alert -tail -f
adrci> show problem
adrci> show incident
adrci> show incident -mode detail -p "incident_id=<incident_id>"
adrci> show trace <trace file name>
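If more than one ADR home exists (for example one per instance), a home can be selected before running the commands above; the path below is only an example:

adrci> show homes
adrci> set home diag/rdbms/orcl/orcl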
Create packages and zip files for Oracle Support:
ADRCI gathers all the required information with a method called the Incident Packaging Service (IPS):
adrci> ips create package problem 1 correlate all
adrci> ips generate package 1 in “/home/oracle” 
Purging trace files automatically:

adrci> show tracefile -rt
adrci> show control

SHORTP_POLICY : retention (in hours) for ordinary trace files
LONGP_POLICY : retention (in hours) for long-lived files such as incident files

adrci> set control (SHORTP_POLICY = 360) ===> 15 days
adrci> set control (LONGP_POLICY = 2160) ===> 90 days
adrci> show control

Purging Trace files manually:

The following command will manually purge all trace files older than 2 days (2880 minutes):
adrci> purge -age 2880 -type trace
adrci> purge -age 129600 -type ALERT ===> purging ALERT older than 90 days
adrci> purge -age 43200 -type INCIDENT ===> purging INCIDENT older than 30 days
adrci> purge -age 43200 -type TRACE ===> purging TRACE older than 30 days
adrci> purge -age 43200 -type CDUMP ===> purging CDUMP older than 30 days
adrci> purge -age 43200 -type HM ===> purging HM older than 30 days
adrci> show tracefile -rt  

An amazing tool for DBAs; it can make your job easier.
Also Check
1-ADRCI Tips
enjoy 
Osama Mustafa

Buffer Cache and Shared Pool

Buffer Cache :


What is the Buffer Cache?

The buffer cache is part of the SGA. It holds copies of data blocks so that Oracle can access them more quickly than by reading them off disk.

Purpose 
The purpose of the buffer cache is to minimize physical I/O. When a block is read by Oracle, it places the block into the buffer cache, because there is a chance that this block will be needed again. Reading a block from the buffer cache is less costly, in terms of time, than reading it from disk.
 
 Different pools within the cache:

Keep pool
The keep pool's purpose is to hold small objects that should always be cached, for example lookup tables.

Recycle pool
The recycle pool is for larger objects.

Default pool
The default pool is for everything else.
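Objects are assigned to the keep or recycle pool via their storage clause, and each pool is sized with its own parameter; a sketch (the table name and size are examples only):

SQL> ALTER TABLE hr.lookup_codes STORAGE (BUFFER_POOL KEEP);
SQL> ALTER SYSTEM SET db_keep_cache_size = 64M;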
 
 Cold area/hot area in the buffer cache:
Each pool's LRU list is divided into a hot area and a cold area. Accordingly, buffers within the hot area are hot buffers (and buffers in the cold area are called cold buffers).
By default, 50% of the buffers belong to the cold area and the other 50% belong to the hot area. This behaviour can be changed with _db_percent_hot_default (for the default pool) _db_percent_hot_recycle (for the recycle pool) and _db_percent_hot_keep (for the keep pool).
A newly read db block will be inserted between the cold and the hot area such that it belongs to the hot area. This is called midpoint insertion. However, this is only true for single block reads, multi block reads will be placed at the LRU end.

 

How to flush the buffer cache:
In 10g and later you can flush the buffer cache using:
SQL> ALTER SYSTEM FLUSH BUFFER_CACHE;
Optimal size:

Common wisdom says that the larger the buffer cache, the better the database performance. However, this claim is not always true.
To begin with, the cache needs to be managed. The bigger the cache, the larger the LRU and dirty lists become, which results in longer search times for a free buffer (buffer busy waits).
Also, the bigger the cache, the greater the burden on the DBWn process.

 Shared Pool

 What is the shared pool?

The shared pool is the part of the SGA where, among other things, the following are stored:

  • Optimized query plans
  • Security checks
  • Parsed SQL statements
  • Packages
  • Object information

Shared pool latch:
Used when memory is allocated or freed in the shared pool.

Library cache latch:
This latch protects operations within the library cache.

Flush Shared Pool 

ALTER SYSTEM FLUSH SHARED_POOL;

 Allocation in memory

Memory in the shared pool is managed and freed in an LRU fashion, that is, memory that isn't used for a long time gets aged out first. This is in contrast to the large pool, where memory is managed in a heap (using allocate and free).

Check Shared Pool Size :

select name, bytes/1024/1024 "MB"
from v$sgastat
where pool = 'shared pool'
order by bytes desc;

Reference Links:
 1-Buffer Cache
 2-Shared Pool

Use RMAN to Manage Oracle Files / DataFiles

RMAN> REPORT SCHEMA;

Report of database schema for database with db_unique_name ORCL

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
—- ——– ——————– ——- ————————
1    750      SYSTEM               ***     /u01/app/oracle/oradata/ORCL/system01.dbf
2    1150     SYSAUX               ***     /u01/app/oracle/oradata/ORCL/sysaux01.dbf
3    444      UNDOTBS1             ***     /u01/app/oracle/oradata/ORCL/undotbs01.dbf
4    120      USERS                ***     /u01/app/oracle/oradata/ORCL/users01.dbf
5    345      EXAMPLE              ***     /u01/app/oracle/oradata/ORCL/example01.dbf
8    3277     SOE                  ***     /u01/app/oracle/product/11.2.0.2/db_1/dbs/soe.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
—- ——– ——————– ———– ——————–
1    370      TEMP                 32767       /u01/app/oracle/oradata/ORCL/temp01.dbf

RMAN>

Copy the file(s) to the new location.

RMAN> COPY DATAFILE 8 TO '/u01/app/oracle/oradata/ORCL/soe.dbf';

Take the tablespace offline. We could have taken the tablespace offline before the copy, removing the need for a recovery, but the tablespace would have been offline longer with that method. (Per the REPORT SCHEMA output above, datafile 8 belongs to the SOE tablespace.)

RMAN> SQL 'ALTER TABLESPACE soe OFFLINE';

Switch to the new datafile copy(s) and recover the tablespace.

RMAN> SWITCH DATAFILE 8 TO COPY;
RMAN> RECOVER TABLESPACE soe;

Bring the tablespace back online, then remove the old datafile(s).

RMAN> SQL 'ALTER TABLESPACE soe ONLINE';

Done.
Osama Mustafa
Osama Mustafa