Drop All Triggers in a Database

The scripts below drop database triggers, either for a specific user or for the whole database. Please do not use them until you are sure of what you are doing.


First one (for a specific user):

BEGIN
  FOR i IN (SELECT trigger_name, owner
              FROM dba_triggers
             WHERE trigger_name LIKE '%_BI%'
               AND owner = 'MYTESTING') LOOP
    EXECUTE IMMEDIATE 'DROP TRIGGER '||i.owner||'.'||i.trigger_name;
  END LOOP;
END;
/

Second one (for the whole database):

BEGIN
  FOR i IN (SELECT trigger_name, owner
              FROM dba_triggers) LOOP
    EXECUTE IMMEDIATE 'DROP TRIGGER '||i.owner||'.'||i.trigger_name;
  END LOOP;
END;
/
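
Before running either block, it is worth previewing exactly which triggers will be dropped. A simple check (adjust or remove the owner filter as needed):

SELECT owner, trigger_name, table_name, status
FROM dba_triggers
WHERE owner = 'MYTESTING'
ORDER BY owner, trigger_name;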

Thank you
Osama Mustafa

ORA-1652: unable to extend temp segment by % in tablespace TEMP

What does this error mean?

This error is fairly self-explanatory: Oracle cannot get enough space for a temporary segment. The size reported in the error message is the number of contiguous free Oracle blocks that could not be found in the listed tablespace.

How to approach the ORA-1652 error

There are several views in the database that help keep track of temporary tablespace usage: v$sort_segment, v$sort_usage and its newer name v$tempseg_usage (from 9i onwards).

select TABLESPACE_NAME,TOTAL_BLOCKS,USED_BLOCKS,FREE_BLOCKS from v$sort_segment;
TABLESPACE_NAME                 TOTAL_BLOCKS USED_BLOCKS FREE_BLOCKS
------------------------------- ------------ ----------- -----------
TEMP                                 1310592           0     1310592

If USED_BLOCKS = TOTAL_BLOCKS, find out which users and statements are using the temporary sort segment with the following query:

 SELECT a.username, a.sid, a.serial#, a.osuser, b.tablespace, b.blocks, c.sql_text
FROM v$session a, v$tempseg_usage b, v$sqlarea c
WHERE a.saddr = b.session_addr
AND c.address= a.sql_address
AND c.hash_value = a.sql_hash_value
ORDER BY b.tablespace, b.blocks

Conclusion

There are two ways of resolving this error:

  1. Add more tempfiles, increase the size of the current ones, or enable autoextend (see the sketch below).
  2. Most importantly, tune the queries/statements so that the sort operations are done in memory and not on disk.
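
A minimal sketch of the first option; the tempfile paths and sizes are assumptions, so adjust them to your environment:

ALTER TABLESPACE temp ADD TEMPFILE '/u01/app/oracle/oradata/ORCL/temp02.dbf' SIZE 2G;
ALTER DATABASE TEMPFILE '/u01/app/oracle/oradata/ORCL/temp01.dbf' RESIZE 4G;
ALTER DATABASE TEMPFILE '/u01/app/oracle/oradata/ORCL/temp01.dbf' AUTOEXTEND ON NEXT 100M MAXSIZE 8G;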

 Thank you
Osama Mustafa

Get Database Parameters, Even the Hidden Ones

You can use these scripts to list the instance (init.ora) parameters, including the hidden ones, this time reading from the underlying x$ tables rather than only v$parameter.


First query (display all init.ora parameters, including the hidden ones):

SET linesize 235
col Parameter FOR a50
col SESSION FOR a28
col Instance FOR a55
col S FOR a1
col I FOR a1
col D FOR a1
col Description FOR a90

SELECT
  a.ksppinm  "Parameter",
  decode(p.isses_modifiable,'FALSE',NULL,NULL,NULL,b.ksppstvl) "Session",
  c.ksppstvl "Instance",
  decode(p.isses_modifiable,'FALSE','F','TRUE','T') "S",
  decode(p.issys_modifiable,'FALSE','F','TRUE','T','IMMEDIATE','I','DEFERRED','D') "I",
  decode(p.isdefault,'FALSE','F','TRUE','T') "D",
  a.ksppdesc "Description"
FROM x$ksppi a, x$ksppcv b, x$ksppsv c, v$parameter p
WHERE a.indx = b.indx AND a.indx = c.indx
  AND p.name(+) = a.ksppinm
  AND UPPER(a.ksppinm) LIKE UPPER('%&1%')
ORDER BY a.ksppinm;
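
If you save the query as a script (the file name below is just an example), you can run it from SQL*Plus and pass part of a parameter name as the argument; the &1 substitution variable filters the result:

SQL> @parameters.sql db_cache
SQL> @parameters.sql cursor_sharing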

Second query (list the parameters that are not at their default value):

SET linesize 235 pagesize 200
col Parameter FOR a50
col SESSION FOR a28
col Instance FOR a55
col S FOR a1
col I FOR a1
col D FOR a1
col Description FOR a90

SELECT * FROM (SELECT
  a.ksppinm  "Parameter",
  decode(p.isses_modifiable,'FALSE',NULL,NULL,NULL,b.ksppstvl) "Session",
  c.ksppstvl "Instance",
  decode(p.isses_modifiable,'FALSE','F','TRUE','T') "S",
  decode(p.issys_modifiable,'FALSE','F','TRUE','T','IMMEDIATE','I','DEFERRED','D') "I",
  decode(p.isdefault,'FALSE','F','TRUE','T') "D",
  a.ksppdesc "Description"
FROM x$ksppi a, x$ksppcv b, x$ksppsv c, v$parameter p
WHERE a.indx = b.indx AND a.indx = c.indx
  AND p.name(+) = a.ksppinm
  AND UPPER(a.ksppinm) LIKE UPPER('%&1%')
ORDER BY a.ksppinm) WHERE d = 'F';

Script: Find Oracle Parameters

Find Users with Unlimited Tablespace Quota

First, find the users with an unlimited direct quota on the SYSTEM tablespace (MAX_BYTES = -1 in DBA_TS_QUOTAS means unlimited).

SELECT username, tablespace_name, bytes, max_bytes
FROM dba_ts_quotas
WHERE max_bytes = -1 AND tablespace_name = 'SYSTEM';

USERNAME                  TABLESPACE_NAME                BYTES  MAX_BYTES
------------------------- ------------------------- ---------- ----------
SCOTT                     SYSTEM                             0         -1
TEST                      SYSTEM                             0         -1

Why do we want to know who has an unlimited quota on the SYSTEM tablespace?

Users who have an unlimited quota on SYSTEM, as well as the privilege to create tables or clusters, could mount a kind of denial of service attack against the database: they just have to fill up the free space in the SYSTEM tablespace. If their default tablespace is SYSTEM as well, they could even do this without intending to.
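
A hedged remediation sketch, using SCOTT from the output above; the replacement default tablespace USERS is an assumption, so pick one that exists in your database:

ALTER USER scott QUOTA 0 ON system;
ALTER USER scott DEFAULT TABLESPACE users;
REVOKE UNLIMITED TABLESPACE FROM scott;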

Next, find users with the UNLIMITED TABLESPACE system privilege.

SELECT * FROM dba_sys_privs WHERE privilege = 'UNLIMITED TABLESPACE';

GRANTEE                        PRIVILEGE                      ADM
------------------------------ ------------------------------ ---
WMSYS                          UNLIMITED TABLESPACE           NO
RRDOMREG                       UNLIMITED TABLESPACE           NO
HR                             UNLIMITED TABLESPACE           NO
OE                             UNLIMITED TABLESPACE           NO
SYS                            UNLIMITED TABLESPACE           NO
LOGSTDBY_ADMINISTRATOR         UNLIMITED TABLESPACE           NO
SCOTT                          UNLIMITED TABLESPACE           NO
BI                             UNLIMITED TABLESPACE           NO
OUTLN                          UNLIMITED TABLESPACE           NO
DBSNMP                         UNLIMITED TABLESPACE           NO
IX                             UNLIMITED TABLESPACE           NO
SH                             UNLIMITED TABLESPACE           NO
DBA                            UNLIMITED TABLESPACE           YES
SYSTEM                         UNLIMITED TABLESPACE           YES

What about cascaded roles?

Hmm, but since Oracle 11g it is possible to grant UNLIMITED TABLESPACE to a role, and that role can be granted to another role, which in turn can be granted to yet another role. It could be a deliberate role concept, or somebody trying to hide some privileges a little. To test it I created three roles, DBA3, DBA2 and DBA1, and granted UNLIMITED TABLESPACE to DBA3.

SELECT
  grantee,
  privilege,
  DECODE(p,'=>'||grantee,'direct',p) path
FROM (
  SELECT
    grantee,
    privilege,
    SYS_CONNECT_BY_PATH(grantee, '=>') p
  FROM (
    SELECT
      grantee,
      privilege
    FROM dba_sys_privs
    UNION ALL
    SELECT
      grantee,
      granted_role privilege
    FROM
      dba_role_privs)
  START WITH privilege = 'UNLIMITED TABLESPACE'
  CONNECT BY PRIOR grantee = privilege )
WHERE
  (grantee IN (SELECT username FROM dba_users)
  OR grantee = 'PUBLIC');

GRANTEE   PRIVILEGE               PATH
--------- ----------------------- -------------------------------
BI        UNLIMITED TABLESPACE    direct
SYS       DBA                     =>DBA=>SYS
SYSTEM    DBA                     =>DBA=>SYSTEM
SCOTT     DBA1                    =>DBA3=>DBA2=>DBA1=>SCOTT
SYS       DBA1                    =>DBA3=>DBA2=>DBA1=>SYS
SYS       DBA2                    =>DBA3=>DBA2=>SYS
SYS       DBA3                    =>DBA3=>SYS
DBSNMP    UNLIMITED TABLESPACE    direct
HR        UNLIMITED TABLESPACE    direct
IX        UNLIMITED TABLESPACE    direct
SYS       LOGSTDBY_ADMINISTRATOR  =>LOGSTDBY_ADMINISTRATOR=>SYS
OE        UNLIMITED TABLESPACE    direct
OUTLN     UNLIMITED TABLESPACE    direct
RRDOMREG  UNLIMITED TABLESPACE    direct
SH        UNLIMITED TABLESPACE    direct
SYS       UNLIMITED TABLESPACE    direct
SYSTEM    UNLIMITED TABLESPACE    direct
WMSYS     UNLIMITED TABLESPACE    direct

18 rows selected.

Combining both, a query that finds users with direct quotas as well as users who get them through a system privilege or role gives something like this.

SELECT
  username,
  tablespace_name,
  privilege
FROM (
  SELECT
    grantee username, 'Any Tablespace' tablespace_name, privilege
  FROM (
    -- first get the users with direct grants
    SELECT
      p1.grantee grantee, privilege
    FROM
      dba_sys_privs p1
    WHERE
      p1.privilege = 'UNLIMITED TABLESPACE'
    UNION ALL
    -- and then the ones with UNLIMITED TABLESPACE through a role...
    SELECT
      r3.grantee, granted_role privilege
    FROM
      dba_role_privs r3
      START WITH r3.granted_role IN (
          SELECT
            DISTINCT p4.grantee
          FROM
            dba_role_privs r4, dba_sys_privs p4
          WHERE
            r4.granted_role = p4.grantee
            AND p4.privilege = 'UNLIMITED TABLESPACE')
    CONNECT BY PRIOR grantee = granted_role)
    -- we just want to see the users, not the roles
  WHERE grantee IN (SELECT username FROM dba_users) OR grantee = 'PUBLIC'
  UNION ALL
  -- list the users with unlimited quota on a dedicated tablespace
  SELECT
    username, tablespace_name, 'DBA_TS_QUOTA' privilege
  FROM
    dba_ts_quotas
  WHERE
    max_bytes = -1 )
WHERE tablespace_name LIKE UPPER('SYSTEM')
    OR tablespace_name = 'Any Tablespace';

USERNAME                  TABLESPACE_NAME           PRIVILEGE
------------------------- ------------------------- ------------------------------

SYSTEM                    Any Tablespace            UNLIMITED TABLESPACE
SYS                       Any Tablespace            DBA
SYSTEM                    Any Tablespace            DBA
SCOTT                     Any Tablespace            DBA1
SYS                       Any Tablespace            DBA1
SYS                       Any Tablespace            DBA2
SYS                       Any Tablespace            DBA3
SYS                       Any Tablespace            LOGSTDBY_ADMINISTRATOR
TEST                      SYSTEM                    DBA_TS_QUOTA

19 rows selected.

You can download the script from here: Tablespace-scripts
Links:
1- Find TableSpace Quota
 Thank you
Osama Mustafa

Data Pump impdp expdp NETWORK_LINK option

First, what is the NETWORK_LINK option?

With NETWORK_LINK you can import a schema from a source database straight into a target database. One advantage of this option is that you do not need a separate export and import, since both are done in a single shot from the source to the destination. Also, no file system space is needed to hold huge dump files, because the data is loaded into the target directly over the network link. It is a very handy Data Pump option. You can also take an export of a schema in a remote database and store the dump files on the local machine, as example two shows.
Example one:
SQL>  select name from v$database;
NAME
---------
Production
SQL> show user
USER is "OSAMA"
SQL> select * from tab;
no rows selected
SQL> create table test1 as select * from all_objects;
Table created.
SQL> select * from tab;
TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
test1                           TABLE
I added a TNS entry (file location: $ORACLE_HOME/network/admin/tnsnames.ora) for the production database on my source database box. The entry is shown below:

production =
   (description =
      (address =
         (protocol = tcp)
         (host = xxx.xxxx.xxx.xxx)
         (port = 1521)
      )
      (connect_data =
         (server = dedicated)
         (sid = production)
      )
   )

Make sure you test the connection using tnsping.

Connect to the source database using SQL*Plus and create a database link to the production database as the osama user:
SQL> create database link production connect to osama identified by osama using 'production';
Database link created.
SQL> select * from tab@production
  2  ;
TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
test1                          TABLE
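The impdp/expdp commands below write their log and dump files through a directory object called NETWORK. If it does not exist yet, a minimal sketch to create it (the OS path is an assumption; run as a privileged user):

SQL> create directory network as '/u01/app/oracle/network_dump';
SQL> grant read, write on directory network to osama;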
Import the osama schema of the production database into the source database without a dump file (run this from the source):
 
$ impdp osama/osama directory=network logfile=osama.log network_link=production
Import: Release 11.2.0.3.0 – 64bit Production on Tuesday, 24 July, 2012 01:30:35
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "OSAMA"."SYS_IMPORT_SCHEMA_01":  OSAMA/******** directory=network logfile=osama.log network_link=production
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 12 MB
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"OSAMA" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . imported "OSAMA"."TEST1"                      11222 rows

Example two:
In this example we export the schema from the remote production database over the database link and store the dump in a file on the local machine.
Run the following from the source (the dump is written to the NETWORK directory):
$ expdp osama/osama directory=network dumpfile=osama.dmp logfile=osama.log network_link=production
Export: Release 11.2.0.3.0 – 64bit Production on Tuesday, 24 July, 2012 02:05:09
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "OSAMA"."SYS_EXPORT_SCHEMA_01":  osama/******** directory=network dumpfile=osama.dmp logfile=osama.log network_link=production
Estimate in progress using BLOCKS method…
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 12 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "OSAMA"."TEST1"                     9.496 MB   11222 rows
Master table "OSAMA"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Also Check
1-Ronny Egners Blog

Enjoy 

Osama Mustafa

New Features In 11g / ADRCI

Automatic Diagnostic Repository Command Interface

What is it?


The Automatic Diagnostic Repository (ADR) is a file-based repository for database diagnostic data such as traces, dumps, the alert log, health monitor reports, and more. It has a unified directory structure across multiple instances and multiple products. Beginning with Release 11g, the database, Automatic Storage Management (ASM), and other Oracle products or components store all diagnostic data in the ADR. Each instance of each product stores diagnostic data underneath its own ADR home directory. The ADR's unified directory structure enables customers and Oracle Support to correlate and analyze diagnostic data across multiple instances and multiple products.

It contains:

Problem: a critical error in the database.
Incident: a single occurrence of a problem.
Problem Key: a text string that includes an error code (ORA-%).
Incident Package: a collection of data about incidents for one or more problems.
ADR Home: the root directory for all diagnostic data of one instance (traces, dumps, alert log).
ADR Base: the directory that permits correlation of diagnostic data across multiple ADR homes.
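
One ADR base can contain several ADR homes (one per instance or component), so before running the commands below it is worth checking which home adrci is pointing at; the homepath shown here is only an example:

adrci> show homes
adrci> set homepath diag/rdbms/orcl/orcl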
How to show errors, alerts and traces:
adrci> show alert -tail -f
adrci> show problem
adrci> show incident
adrci> show incident -mode detail -p "incident_id=<incident_id>"
adrci> show trace <trace_file_name>
Create packages and zip files for Oracle Support:
ADRCI gathers all the required information with a feature called Incident Packaging Service (IPS):
adrci> ips create package problem 1 correlate all
adrci> ips generate package 1 in "/home/oracle"
Purging trace files automatically:

adrci> show tracefile -rt
adrci> show control

SHORTP_POLICY: retention (in hours) for ordinary trace files
LONGP_POLICY: retention (in hours) for incident data, such as incident files

adrci> set control (SHORTP_POLICY = 360)   ===> 360 hours = 15 days
adrci> set control (LONGP_POLICY = 2160)   ===> 2160 hours = 90 days
adrci> show control

Purging Trace files manually:

The following command will manually purge all trace files older than 2 days (2880 minutes):
adrci> purge -age 2880 -type trace
adrci> purge -age 129600 -type ALERT ===> purging ALERT older than 90 days
adrci> purge -age 43200 -type INCIDENT ===> purging INCIDENT older than 30 days
adrci> purge -age 43200 -type TRACE ===> purging TRACE older than 30 days
adrci> purge -age 43200 -type CDUMP ===> purging CDUMP older than 30 days
adrci> purge -age 43200 -type HM ===> purging HM older than 30 days
adrci> show tracefile -rt  

ADRCI is an amazing tool for DBAs that can make your job easier.
Also Check
1-ADRCI Tips
enjoy 
Osama Mustafa

Buffer Cache and Shared Pool

Buffer Cache :


What is the Buffer Cache?

The buffer cache is part of the SGA. It holds copies of data blocks so that they can be accessed more quickly by Oracle than by reading them from disk.

Purpose 
The purpose of the buffer cache is to minimize physical I/O. When a block is read by Oracle, it places the block into the buffer cache because there is a chance that the block will be needed again. Reading a block from the buffer cache is less costly (in terms of time) than reading it from disk.
 
Different pools within the cache:
 
Keep pool
The keep pool's purpose is to hold small objects that should always be cached, for example look-up tables.
Recycle pool
The recycle pool is for larger objects.
Default pool
The default pool is for everything else (see the sketch below for how the pools can be sized and used).
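
A minimal sketch of sizing the keep and recycle pools and assigning an object to one of them; the sizes and the table name are assumptions, not recommendations:

ALTER SYSTEM SET db_keep_cache_size = 64M SCOPE=BOTH;
ALTER SYSTEM SET db_recycle_cache_size = 32M SCOPE=BOTH;
-- keep a small look-up table permanently cached (table name is hypothetical)
ALTER TABLE lookup_codes STORAGE (BUFFER_POOL KEEP);
-- check the configured pools
SELECT name, block_size, current_size FROM v$buffer_pool;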
 
Cold area/hot area in the buffer cache:
Each pool's LRU list is divided into a hot area and a cold area. Accordingly, buffers within the hot area are hot buffers (and buffers in the cold area are called cold buffers).
By default, 50% of the buffers belong to the cold area and the other 50% belong to the hot area. This behaviour can be changed with _db_percent_hot_default (for the default pool), _db_percent_hot_recycle (for the recycle pool) and _db_percent_hot_keep (for the keep pool).
A newly read database block is inserted between the cold and the hot area such that it belongs to the hot area. This is called midpoint insertion. However, this is only true for single-block reads; blocks from multi-block reads are placed at the LRU end.

 

How to flush the buffer cache:
In 10g and later you can flush the buffer cache using:
ALTER SYSTEM FLUSH BUFFER_CACHE;
Optimal Size:

Some common wisdom says that the larger the buffer cache is, the better the performance of the database becomes. However, this claim is not always true.
To begin with, the cache needs to be managed. The bigger the cache, the larger the LRU and dirty lists become, which results in longer search times for a free buffer (buffer busy waits).
Also, the bigger the cache, the greater the burden on the DBWn processes.

 Shared Pool

 What is Shared Pool 
 
The shared pool is the part of the SGA where (among other things) the following are stored:

  • Optimized query plans
  • Security checks
  • Parsed SQL statements
  • Packages
  • Object information

Shared Pool Latch :
used when memory is allocated or freed in the shared pool.

Library Cache latch :
this latch protects operations within the library cache.

Flush the shared pool:

ALTER SYSTEM FLUSH SHARED_POOL;

Memory allocation

Memory in the shared pool is managed and freed in an LRU fashion, that is, memory that has not been used for a while gets aged out first. This is in contrast to the large pool, where memory is managed as a heap (using allocate and free).

Check Shared Pool Size :

select    name, bytes/1024/1024 "MB" from v$sgastat where pool = 'shared pool'
order by bytes desc;

Reference links:
 1-Buffer Cache
 2-Shared Pool

Use RMAN to Manage Oracle Files / DataFiles

RMAN> REPORT SCHEMA;

Report of database schema for database with db_unique_name ORCL

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    750      SYSTEM               ***     /u01/app/oracle/oradata/ORCL/system01.dbf
2    1150     SYSAUX               ***     /u01/app/oracle/oradata/ORCL/sysaux01.dbf
3    444      UNDOTBS1             ***     /u01/app/oracle/oradata/ORCL/undotbs01.dbf
4    120      USERS                ***     /u01/app/oracle/oradata/ORCL/users01.dbf
5    345      EXAMPLE              ***     /u01/app/oracle/oradata/ORCL/example01.dbf
8    3277     SOE                  ***     /u01/app/oracle/product/11.2.0.2/db_1/dbs/soe.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    370      TEMP                 32767       /u01/app/oracle/oradata/ORCL/temp01.dbf

RMAN>

Copy the file(s) to the new location.

RMAN> COPY DATAFILE 8 TO '/u01/app/oracle/oradata/ORCL/soe.dbf';

Take the tablespace offline. We could have taken the tablespace offline before the copy, removing the need for recovery, but the tablespace would have been offline for longer using that method.

RMAN> SQL 'ALTER TABLESPACE soe OFFLINE';

Switch to the new datafile copy(s) and recover the tablespace.

RMAN> SWITCH DATAFILE 8 TO COPY;
RMAN> RECOVER TABLESPACE soe;
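
After the recovery, the tablespace still has to be brought back online; that step is not shown above, so here it is for completeness:

RMAN> SQL 'ALTER TABLESPACE soe ONLINE';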

Remove the old datafile(s).
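
A hedged sketch of that cleanup, simply deleting the old file at the OS level (the path is the original location from the report above):

RMAN> HOST 'rm /u01/app/oracle/product/11.2.0.2/db_1/dbs/soe.dbf';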

Done .
Osama Mustafa

Manage Oracle Files / DataFiles Part 3

This will be the last part on managing Oracle database files; after the RMAN method, here is another way using SQL*Plus.

SQL> SELECT name FROM v$datafile;

NAME
---------------------------------------------------------
C:\ORACLE\PRODUCT\11.2.0\ORADATA\ORCL\SYSTEM01.DBF
C:\ORACLE\PRODUCT\11.2.0\ORADATA\ORCL\UNDOTBS01.DBF
C:\ORACLE\PRODUCT\11.2.0\ORADATA\ORCL\SYSAUX01.DBF
C:\ORACLE\PRODUCT\11.2.0\ORADATA\ORCL\USERS01.DBF

4 rows selected.

SQL>

SQL> SHUTDOWN IMMEDIATE
Database closed.
Database dismounted.
ORACLE instance shut down.

With the database shut down, you can now rename or move the datafiles at the operating system level (see the sketch below), then rename them inside the database.
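
A minimal sketch of the OS-level step on Windows, using the paths from the listing above:

C:\> move C:\ORACLE\PRODUCT\11.2.0\ORADATA\ORCL\USERS01.DBF C:\ORACLE\PRODUCT\11.2.0\ORADATA\ORCL\RENAME_USERS01.DBF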

SQL> STARTUP MOUNT
ORACLE instance started.

Total System Global Area  167772160 bytes
Fixed Size                   787968 bytes
Variable Size              61864448 bytes
Database Buffers          104857600 bytes
Redo Buffers                 262144 bytes
Database mounted.
SQL> ALTER DATABASE RENAME FILE 'C:\ORACLE\PRODUCT\11.2.0\ORADATA\ORCL\USERS01.DBF' -
>  TO 'C:\ORACLE\PRODUCT\11.2.0\ORADATA\ORCL\RENAME_USERS01.DBF';

Database altered.

SQL> ALTER DATABASE OPEN;

Database altered.

SQL>

Done

Enjoy
Osama mustafa

ORA-12557: TNS:protocol adapter not loadable

This problem occurs on Windows platforms.

It must be related to the Windows environment or the Oracle home PATH, because the sqlplus command works smoothly when I execute it inside ORACLE_HOME\bin.

RUN: SYSDM.CPL to open Windows System Properties
Click on Advanced Tab > Environment Variables…

Click the Path variable under System Variables, then click Edit…

Change the order of the Oracle Client Home and the Oracle DB Home:

From: D:\oracle\product\10.2.0\client_1\bin;D:\oracle\product\10.2.0\db_1\bin;

To: D:\oracle\product\10.2.0\db_1\bin;D:\oracle\product\10.2.0\client_1\bin;

In other words, put the Oracle DB home in front of the client home in the path.
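
To confirm which Oracle home the sqlplus binary is now resolved from, a quick check with standard Windows commands (nothing Oracle-specific assumed here):

C:\> where sqlplus
C:\> echo %PATH%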

Alternatively (I don't love this way, since it is not an actual solution, but it sometimes solves the problem):
1- Remove ORACLE_HOME from the environment variables.


2- Restart the PC.
Done

Osama mustafa