Wednesday 23 December 2015

RMAN-03009: ORA-19502: ORA-27063:

We got the following error while backing up an 11.2.0.4 database to an external disk. The error clearly shows there is no space left on the disk to hold new backup pieces.




Cause: 1. The cleanup job that deletes the old backup pieces is not running.
           2. Someone increased the control file recovery window value.
           3. The alerting mechanism for the file system is not set up properly.

Solution: 1. Connect to RMAN:
         $rman target /
         RMAN> report obsolete;
If any obsolete backups are listed in the output of the above command, run the command below to delete those backup pieces:
RMAN> delete obsolete;
These are the backup pieces that are no longer needed under the configured retention policy.

2. If deleting obsolete backups does not release enough space, delete some old backup pieces manually, but make sure those backup pieces have first been copied to tape.
3. Alternatively, move some backup pieces to another location where you have space.
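Before the next run, it can also help to verify free space on the backup mount point from the OS side. A minimal sketch (the /tmp path and 1 MB threshold in the example call are illustrative, not from the original incident; in practice use your backup mount point and a real threshold such as 10240 for 10 GB):

```shell
#!/bin/sh
# Warn if the backup destination is low on free space before starting RMAN.
# df -P prints POSIX single-line output; with -k, sizes are 1 KB blocks and
# column 4 is the available space.
check_backup_space() {
    dir=$1
    min_free_mb=$2
    free_mb=$(( $(df -Pk "$dir" | awk 'NR==2 { print $4 }') / 1024 ))
    if [ "$free_mb" -lt "$min_free_mb" ]; then
        echo "WARNING: only ${free_mb} MB free on $dir (need ${min_free_mb} MB)"
        return 1
    fi
    echo "OK: ${free_mb} MB free on $dir"
}

# Illustrative call: require at least 1 MB free under /tmp.
check_backup_space /tmp 1
```

Run this from cron before the backup window and alert on a non-zero exit status.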




Friday 18 December 2015

Impdp ORA-39111 dependent object type object_grant skipped

During a Data Pump import job, a dependent object is skipped because its base object already exists. Use the following import parameter in the parfile:

        table_exists_action=replace

This replaces only the tables, not the other dependent objects.

Note: The table_exists_action parameter works without issues only when the imported tables have no foreign key constraints.

Solution:
To run a clean import, drop all the objects of the schema and start the import again. To drop all the objects at once, and for the complete import steps, please refer to the link below.

http://swadhinur.blogspot.com/2015/12/schema-refresh-step-by-step.html

Tuesday 8 December 2015

ORA-00020 maximum number of processes exceeded

We may come across this kind of situation in production, when no new sessions can be created in the database because it has already reached its maximum processes threshold.

error:ORA-00020: maximum number of processes (1100) exceeded
        ERROR at line 1:
        ORA-01012: not logged on

There are two possible reasons behind the ORA-00020 error.
1. The processes limit is set to a low value, and the total number of active connections has reached that threshold.
2. Zombie processes: processes that were created by a parent process and whose resources are not released after execution. In Linux, a child process (PID) is forked from a parent process (PPID); when the child finishes but its exit status has not yet been acknowledged by the parent, the child becomes a zombie, and the parent keeps waiting for that acknowledgement. Can we kill those zombie processes with KILL -9 PID? The answer is no, because the processes are already dead. You need to kill the parent process to clear all its zombie children.

Here is an example of such processes, where 8559 is the parent process and the entries under the PID column are its children.

   UID        PID  PPID   C    STIME TTY        TIME CMD
   oracle  6363  8559   0 12:19:20 ?           0:01 oracleislam02 (LOCAL=NO)
  oracle 12829  8559   0 12:36:09 ?           0:34 oracleislam02 (LOCAL=NO)
  oracle 21644  8559   0 11:38:25 ?           0:02 oracleislam02 (LOCAL=NO)
  oracle  6308  8559   0 12:19:20 ?           0:01 oracleislam02 (LOCAL=NO)
  oracle  6295  8559   0 12:19:20 ?           0:01 oracleislam02 (LOCAL=NO)
  oracle 12764  8559   0 12:36:09 ?           0:02 oracleislam02 (LOCAL=NO)
  oracle    81  8559   0 10:38:22 ?           0:06 oracleislam02 (LOCAL=NO)
  oracle  6341  8559   0 12:19:20 ?           0:00 oracleislam02 (LOCAL=NO)
  oracle  1960  8559   0 07:55:51 ?           0:14 oracleislam02 (LOCAL=NO)
  oracle  6347  8559   0 12:19:20 ?           0:00 oracleislam02 (LOCAL=NO)
  oracle  6408  8559   0 12:19:21 ?           0:00 oracleislam02 (LOCAL=NO)
  oracle 12810  8559   0 12:36:09 ?           0:02 oracleislam02 (LOCAL=NO)
  oracle 12333  8559   0 12:35:22 ?           0:01 oracleislam02 (LOCAL=NO)
  oracle 19031  8559   0 10:06:35 ?           0:09 oracleislam02 (LOCAL=NO)
  oracle  6286  8559   0 12:19:19 ?           0:01 oracleislam02 (LOCAL=NO)
  oracle  8461  8559   0 08:14:38 ?           2:51 oracleislam02 (LOCAL=NO)
  oracle  6389  8559   0 12:19:21 ?           0:00 oracleislam02 (LOCAL=NO)
  oracle  6423  8559   0 12:19:21 ?           0:00 oracleislam02 (LOCAL=NO)
  oracle 27610  8559   0 09:07:24 ?           4:21 oracleislam02 (LOCAL=NO)
  oracle 12801  8559   0 12:36:09 ?           0:00 oracleislam02 (LOCAL=NO)
  oracle 12772  8559   0 12:36:09 ?           0:00 oracleislam02 (LOCAL=NO)
  oracle 12279  8559   0 12:35:21 ?           0:00 oracleislam02 (LOCAL=NO)
  oracle  6379  8559   0 12:19:20 ?           0:02 oracleislam02 (LOCAL=NO)
  oracle 12311  8559   0 12:35:21 ?           0:00 oracleislam02 (LOCAL=NO)
  oracle 12823  8559   0 12:36:09 ?           0:15 oracleislam02 (LOCAL=NO)
  oracle 21618  8559   0 10:14:42 ?           1:40 oracleislam02 (LOCAL=NO)
  oracle 12302  8559   0 12:35:21 ?           0:00 oracleislam02 (LOCAL=NO)
  oracle 12285  8559   0 12:35:21 ?           0:00 oracleislam02 (LOCAL=NO)
  oracle 12806  8559   0 12:36:09 ?           0:04 oracleislam02 (LOCAL=NO)
  oracle  6454  8559   0 12:19:21 ?           0:02 oracleislam02 (LOCAL=NO)
  oracle  6335  8559   0 12:19:20 ?           0:00 oracleislam02 (LOCAL=NO)
  oracle 12339  8559   0 12:35:22 ?           0:02 oracleislam02 (LOCAL=NO)
  oracle  6372  8559   0 12:19:20 ?           0:35 oracleislam02 (LOCAL=NO)
  oracle  6315  8559   0 12:19:20 ?           0:00 oracleislam02 (LOCAL=NO)
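Note that the (LOCAL=NO) processes above are ordinary dedicated server processes; a true zombie shows state Z (defunct) in ps. A quick sketch for spotting zombies and the parent that must reap them (standard Linux procps options are assumed):

```shell
#!/bin/sh
# Report zombie (defunct) processes and their parents. kill -9 on the
# zombie itself does nothing; the PPID is the process to investigate.
list_zombies() {
    # Expects "ps -eo pid,ppid,stat,comm"-style lines on stdin.
    awk '$3 ~ /^Z/ { printf "zombie pid=%s parent ppid=%s (%s)\n", $1, $2, $4 }'
}

ps -eo pid,ppid,stat,comm | list_zombies
```

On a healthy box this prints nothing; any output names the parent you would need to restart.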


But we are DBAs working in the database. When we execute a SQL script and, after execution completes, the session becomes inactive for some reason, that inactive session still counts against the session limit, and its corresponding OS-level process also keeps waiting. This is what exhausts the process limit in the database.

Solution: We cannot connect to the database normally because it has already reached its maximum limit. The only way in is to connect without creating a session, then kill all the inactive sessions.


sqlplus /nolog
@> set _prelim on
@> conn  /  as sysdba

Setting _prelim on lets us connect to the database without creating a session.

Kill all the inactive sessions at once:
sql> select 'alter system kill session '''||sid||','||serial#||''' IMMEDIATE;' from v$session where status='INACTIVE' and username not in ('SYS','SYSTEM');

Schema Refresh step by step


Below are the detailed steps for a schema refresh. These steps are very helpful when you want to refresh schemas in QA/DEV and keep all the previous grants and privileges after the refresh.

1. Take the backup of the schema in target database.


expdp directory=<directory_name> dumpfile=backup_%U.dmp logfile=backup.log parallel=<n> compression=all
schemas=<schema_name>

2. sys_privs.sql :
---------------

set echo off
set head off
set pagesize 0
set feedback off
set linesize 200
spool sys_privs.sql
select 'grant '||privilege||' to '||grantee||' ; '  from dba_sys_privs where grantee in ('schema_name');
spool off

3.owner.sql :
-----------

spool owner.sql
select 'grant '||privilege||' on '||OWNER||'.'||table_name||' to  '||grantee||' ; ' from dba_tab_privs WHERE OWNER in ('schema_name');
spool off

4.pre_refresh_role_privs.sql :
----------------------------

set echo off
set head off
set pagesize 0
set feedback off
set linesize 200
spool pre_refresh_role_privs.sql
select 'grant '||granted_role||' to '||grantee||' ; '  from dba_role_privs where grantee in ('schema_name');
spool off

5.Check the object count in the database under that schema
 SELECT object_type,COUNT(*) FROM DBA_OBJECTS WHERE OWNER in ('schema_name') group by object_type ;

6. check the table count
SELECT COUNT(*) FROM DBA_tables WHERE OWNER in ('schema_name');

7.drop all the object under that schema
select 'drop '||object_type||' '||owner||'."'||object_name||'";' from dba_objects where owner in ('schema_names');

8.drop all the tables under that schema
select 'drop table  '||owner||'.'||table_name||' cascade constraints;' from dba_tables where owner in('schema_names');

9.purge the recyclebin
select 'purge table '||owner||'."'||object_name||'";' FROM dba_RECYCLEBIN where owner='schema_name' ;

10.now import data into target database from source dumpfile.
 impdp dumpfile=<dumpfile_name> logfile=import.log directory=<directory_name> schemas=schema_name parallel=<n>

11.check the count of objects after refresh for each schema
SELECT COUNT(*) FROM DBA_OBJECTS WHERE OWNER in ('schema_name');

12.check the table count
SELECT COUNT(*) FROM DBA_tables WHERE OWNER='schema_name';

13.run the spool file
@sys_privs.sql

14. run the spool file
@owner.sql

15. run the spool file
@pre_refresh_role_privs.sql

-------------- Schema refresh is complete ----------------------

Table Refresh step by step

Below are the detailed steps for a table refresh. Points to remember:
a. These steps are very useful when we are refreshing tables that have referential constraints.
b. If there are no referential constraints, we can use the table_exists_action=replace parameter during import.


1. Check the size of the table in the source and target, and verify that that much space is available in the mount point.

sql> select segment_name,segment_type,bytes/1024/1024
 from dba_segments where segment_type='TABLE' and segment_name='<table_name>';

2.Take the backup of the table in the target database.
 $expdp dumpfile=backup_Table_%U.dmp logfile=backup_Table.log directory=<directory name> tables=schema.tablename parallel=<n> compression=all

3. Create a spool file to disable the constraints on the tables.
sql> spool disable_constraints.sql

sql> set lines 200 pages 300
   select 'alter table '||table_name||' disable constraint '||constraint_name||';' from user_constraints where table_name in ('table_name');
sql> spool off

4. Create a spool file to enable the constraints.

sql> spool enable_constraints.sql

sql> set lines 200 pages 300
   select 'alter table '||table_name||' enable constraint '||constraint_name||';' from user_constraints where table_name in ('table_name');

sql> spool off;

5. alter session set current_schema=<schema_name>;

6. Run @disable_constraints.sql

7. Either drop the table in the target database or use the TABLE_EXISTS_ACTION=REPLACE parameter during import.

sql> drop table schema.table_name;

8. Import the dump taken from the source.

$impdp dumpfile=export_%U.dmp logfile=import.log directory=<directory_name> tables=schema.tablename parallel=<n>

 -- monitor the log --

9. Check the row count in the target after the refresh.

sql> select count(*) from schema.tablename;

10. Check the row count in the source database.

sql> select count(*) from schema.tablename;

-- the counts should be the same --

11. alter session set current_schema=<schema_name>;
Run @enable_constraints.sql


------------------------------ table refresh is complete------------------------

Monday 16 November 2015

Locally vs Dictionary Managed Tablespaces

When Oracle allocates space to a segment (like a table or index), a group of contiguous free blocks, called an extent, is added to the segment. Metadata regarding extent allocation and unallocated extents are either stored in the data dictionary, or in the tablespace itself.

Tablespaces that record extent allocation in the data dictionary are called dictionary managed tablespaces, and

tablespaces that record extent allocation in the tablespace header are called locally managed tablespaces.
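As an illustration, here is example DDL for each kind (the tablespace names and datafile paths are placeholders; note that a DICTIONARY tablespace can only be created when SYSTEM itself is dictionary managed):

```sql
-- Locally managed: the extent bitmap is kept in the tablespace header,
-- and uniform 1 MB extents are enforced.
CREATE TABLESPACE lmt_demo
  DATAFILE '/u02/oradata/demo/lmt_demo01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- Dictionary managed: extent allocation is tracked in the data dictionary.
CREATE TABLESPACE dmt_demo
  DATAFILE '/u02/oradata/demo/dmt_demo01.dbf' SIZE 100M
  EXTENT MANAGEMENT DICTIONARY;
```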


SQL> select tablespace_name, extent_management, allocation_type from dba_tablespaces;

TABLESPACE_NAME                EXTENT_MAN   ALLOCATION
------------------------------ ----------   ----------
SYSTEM                         DICTIONARY   USER
SYSAUX                         LOCAL        SYSTEM
UNDOTBS1                       LOCAL        SYSTEM
TEMP                           LOCAL        UNIFORM
ABMD                           LOCAL        USER
ESSO                           LOCAL        USER
SIT                            LOCAL        USER
USERS                          LOCAL        UNIFORM
USER_DATA                      LOCAL        UNIFORM



The main benefits of locally managed tablespaces include:

  • Marginally faster performance: because there is less data dictionary contention, throughput can be higher.
  • Faster tablespace space management activities: since space management is done in the tablespace blocks themselves, there is less space management contention (ST locks). Uniform extent management is enforced and tablespace fragmentation can be reduced.
  • Locally managed tablespaces are the default from 11g onwards.
  • Space wastage is removed. In DMTs, there is no implicit mechanism to enforce uniform extent sizes; the extent sizes may vary depending on the storage clause provided at the object level or the tablespace level, resulting in space wastage and fragmentation. Oracle enforces uniform extent allocation in LMTs (when created with the UNIFORM SIZE clause), which removes space wastage because all extents in the tablespace are the same size.









Thursday 12 November 2015

SQL Error: ORA-14086: a partitioned index may not be rebuilt as a whole /ORA-14287: cannot REBUILD a partition of a Composite Range partitioned index

We faced an issue while loading a table on the pre-modeling server: the load failed because an index was in an UNUSABLE state and needed to be rebuilt.

SQL> alter index INDEX_NAME rebuild
Error report -
SQL Error: ORA-14086: a partitioned index may not be rebuilt as a whole
14086. 00000 - "a partitioned index may not be rebuilt as a whole"
*Cause: User attempted to rebuild a partitioned index using
ALTER INDEX REBUILD statement, which is illegal
*Action: Rebuild the index a partition at a time (using
ALTER INDEX REBUILD PARTITION) or drop and recreate the
entire index

Reason: This is a partitioned index; you cannot rebuild it directly as a whole.

Solution: Check the partitions of that index.
Step 1: SQL> select index_name,partition_name,status from dba_ind_partitions where index_name='INDEX_NAME';

INDEX_NAME                  PARTITION_NAME                 STATUS
--------------------------- ------------------------------ --------
DWH_ZVN_IDX                 DWH_ZVN_RLP_2012_Q2            N/A
DWH_ZVN_IDX                 DWH_ZVN_RLP_2012_Q2            N/A
DWH_ZVN_IDX                 DWH_ZVN_RLP_2012_Q2            N/A
DWH_ZVN_IDX                 DWH_ZVN_RLP_2012_Q2            N/A
DWH_ZVN_IDX                 DWH_ZVN_RLP_2012_Q2            N/A
DWH_ZVN_IDX                 DWH_ZVN_RLP_2012_Q2            N/A
DWH_ZVN_IDX                 DWH_ZVN_RLP_2012_Q2            N/A
DWH_ZVN_IDX                 DWH_ZVN_RLP_2012_Q2            N/A

Step 2: Rebuild the partitioned index.
SQL> alter session set current_schema=SCHEMA_NAME;
SQL> alter index DWH_ZVN_IDX rebuild partition DWH_ZVN_RLP_2012_Q2 tablespace TEMP_DWH_DYNI;

ERROR at line 1:
ORA-14287: cannot REBUILD a partition of a Composite Range partitioned index

The index also has subpartitions. With a subpartitioned index, the partitions do NOT have segments; only the subpartitions have segments, and those are what can be rebuilt.

Step 3: Check the subpartitions of the index and their status.
SQL>select index_name, partition_name, subpartition_name,status from dba_ind_subpartitions where index_name='index_name';

You might get a huge number of subpartitions that need to be rebuilt. To rebuild all the subpartitions at once, follow the steps below.

Step4: SQL> spool rebuild_index.sql
SQL> select 'alter index INDEX_NAME rebuild subpartition '||"SUBPARTITION_NAME"||' ;' from dba_ind_subpartitions where index_name='INDEX_NAME';

SQL> spool off;

Step5: @rebuild_index.sql

Check all the index statuses; hopefully they are now rebuilt.

Tuesday 10 November 2015

Not Receiving Alerts from OEM

Sometimes we come across a situation where the OEM agent is up and running but we are not receiving any alerts.

Step 1: Verify the agent status:
   $emctl status agent
The output looks like the below:






Though it shows the agent is up and running, we need to check "Last successful heartbeat to OMS".
As shown in the image above, the agent is blocked. If the agent is fully in sync, the last successful heartbeat time is shown in the status output.
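To watch just that line from the command line, a small sketch (a plain grep filter, so it works with any agent version; the commented usage assumes emctl is on the PATH):

```shell
#!/bin/sh
# Extract the heartbeat/blocked lines from "emctl status agent" output,
# since the "agent is Running" banner alone does not prove the OMS is
# actually receiving data.
heartbeat_status() {
    grep -Ei 'heartbeat|blocked'
}

# Usage on the agent host:
# emctl status agent | heartbeat_status
```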

Step2: Check the same agent status from OEM console.





Step3: Resync the agent from the OEM console. Follow the below steps in OEM to resync.
Agent -> Resynchronization -> confirm.

Step 4: Check the agent status again from the backend: emctl status agent






Hopefully you are now receiving alerts.

ORA-01940 cannot drop a user that is currently connected

You cannot drop a user that is currently connected or in use.
 SQL> drop user username ;
ORA-01940 cannot drop a user that is currently connected

    There are two ways to drop the user:
1. Either disconnect all the connected sessions manually, or

2. Kill all the sessions that are currently connected.

  SQL> select sid, serial#, username, machine from V$session where username='SCHEMA_NAME';

  SID    SERIAL#   USERNAME                         MACHINE
------   --------  ------------------------------   --------------------
  234       3452   DLT                              spgwtrd

SQL> alter system kill session '234,3452' immediate;
System altered.

Wednesday 4 November 2015

Oracle performance tuning Book

Example of find command in Oracle database / Delete log files older than a specific date

The commands below help delete old log files, trace files, and audit log files older than a specified number of days.

1. To display the files older than 5 days:
 find  /u01/oracle/admin/diag/rdbms/islam01/trace/ -type f -name "*.trc"  -mtime +5 -exec ls -ltrh {} \;

2. To delete the files older than 5 days:
find  /u01/oracle/admin/diag/rdbms/islam01/trace/ -type f -name "*.trc"  -mtime +5 -exec rm -rf {} \;

Note: "/u01/oracle/admin/diag/rdbms/islam01/trace/" is your log file location.
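A slightly safer variant is to dry-run the same filters with -print before deleting, and to use find's built-in -delete instead of spawning one rm per file (GNU/BSD findutils assumed; the trace path is the same placeholder as above):

```shell
#!/bin/sh
# Dry-run, then delete, trace files older than 5 days.
TRACE_DIR=${TRACE_DIR:-/u01/oracle/admin/diag/rdbms/islam01/trace}

if [ -d "$TRACE_DIR" ]; then
    # 1. Dry run: list what would be removed.
    find "$TRACE_DIR" -type f -name "*.trc" -mtime +5 -print

    # 2. Real run (uncomment): -delete must come last, after all filters.
    # find "$TRACE_DIR" -type f -name "*.trc" -mtime +5 -delete
fi
```

Putting -delete before the filters would delete everything it visits, which is why the dry run with -print first is worth the extra step.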

ORA-39083 and ORA-02304 error during impdp

While importing using Data Pump, we got the errors ORA-39083 and ORA-02304 below, which caused some objects to be skipped (not created) during the impdp operation.






Cause:

The OID (object identifier) must be unique within a database; the OID in the impdp CREATE statement was already used by the old schema.

Solution:

Add the parameter transform=oid:n to your par file and import again.
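For example, the par file for the re-import might look like the sketch below (the directory, dump file, and schema names are placeholders, not from the original job):

```
directory=dpump_dir
dumpfile=schema_exp_%U.dmp
logfile=imp_schema.log
schemas=app_user
transform=oid:n
```

With transform=oid:n, Data Pump assigns new OIDs to imported object types instead of reusing the exported ones, which avoids the ORA-02304 collision.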

Reference: 


http://docs.oracle.com/cd/B19306_01/server.102/b14215/dp_import.htm

Tuesday 3 November 2015

Status: Failure -Test failed: IO Error: The Network Adapter could not establish the connection

When the concerned team tries to connect to the database they get the below error, or the below error is encountered during DBA activity.



The possible reasons for this:
1. The wrong SQL Developer URL was used.
2. The wrong port number or IP address (or DNS host name) was used.
3. The listener is not configured properly.
4. The listener process (service) is not running. You can restart it with the "lsnrctl start" command or, on Windows, by starting the listener service.
5. The port is closed.

In my case the port was closed, so I reached out to the firewall team to open it. Generally this cause is very rarely encountered because the port is usually already enabled, but in my scenario it was a newly built database.

Monday 2 November 2015

Data Guard switchover and failover using data Guard Broker



PREREQUISITES

Close any gap.
If possible, review standby synchronization at least one day and a few hours before executing these procedures. Any gap, even if it won't prevent the success of the database movement, may delay procedure execution.

Review OEM target blackouts.
In OEM Cloud Control 12c, it is necessary to make sure a blackout has been set for the primary_server and standby_server targets. This will help prevent false alerts or notifications from the monitoring system.
Note: After the DR Drill exercise is completed, these blackouts need to be removed.

Set freeze on the DB clusters.
Ask the Unix team to perform this task for both environments (primary_server and standby_server). /* This step is only applicable when the database is running on a cluster. */

Stop DB backups.
Edit crontab and comment all lines referring to DB backups.
Note: After DR Drill exercise is completed, edit crontab again and uncomment the lines you commented. Do not forget, those are critical maintenance scripts.

Stop any other maintenance jobs in crontab.
Edit crontab and comment out all other jobs on both servers. After the DR Drill exercise, comment out on the standby those jobs that must run only against the primary.
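Commenting the backup and maintenance lines can be scripted against a copy of the crontab rather than editing it live. A sketch (the rman_backup pattern and file names are placeholders; review the result before installing it):

```shell
#!/bin/sh
# Comment out every crontab line matching a pattern, writing to stdout
# so the live crontab is never edited in place.
comment_jobs() {
    # $1 = pattern identifying the jobs to disable (e.g. rman_backup)
    sed "/$1/ s/^[^#]/#&/"
}

# crontab -l | comment_jobs rman_backup > crontab.drill
# crontab crontab.drill   # install only after reviewing crontab.drill
```

Keeping the original `crontab -l` output in a file also gives you the exact lines to restore after the drill.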

== If jobs are scheduled through AutoSys or OEM, make sure all jobs are frozen during the switchover period.
SWITCHOVER PROD TO STANDBY

Step 1: Take incremental database backup. TARGET:  (primary).


Step 2: Verify Data Guard replication state. TARGET:  (primary).


Step 3: Verify NO database user connections, kill them if any. TARGET:  (primary)


Step 4: Turn OFF database flashback. TARGET:  (standby)

sqlplus / as sysdba
SQL> ALTER DATABASE FLASHBACK OFF;
SQL> SELECT FLASHBACK_ON FROM V$DATABASE;


Step 5: Review database state using Data Guard Broker. TARGET:  (primary).

Look for:

Database Status = SUCCESS
Transport Lag = 0 seconds
Apply Lag= 0 seconds

dgmgrl
DGMGRL> connect sys/<sys_password>@ora_servername_dbname
DGMGRL> show database verbose 'ora_servername_dbname';
DGMGRL> show database verbose 'ora_servername_dbname';


Step 6: Perform Switchover using DG Broker. TARGET: (primary)

dgmgrl
DGMGRL> connect sys/<sys_password>@ora_servername_dbname
DGMGRL> switchover to 'ora_servername_dbname';


Step 7: Turn ON database Flashback, enable Active Data Guard. TARGET:  (new standby)

--7A.- Disable Apply Process
dgmgrl
DGMGRL> connect sys/<sys_password>@ora_servername_dbname
DGMGRL> edit database 'ora_servername_dbname' set state ='LOG-APPLY-OFF';

 --7B.- Enable Flashback and Active Data Guard
sqlplus / as sysdba
SQL> ALTER DATABASE FLASHBACK ON;
SQL> SELECT FLASHBACK_ON FROM V$DATABASE;
SQL> ALTER DATABASE OPEN READ ONLY;

--7C.- Enable Apply Process
dgmgrl
DGMGRL> connect sys/<sys_password>@ora_servername_dbname
DGMGRL> edit database 'ora_servername_dbname' set state ='ONLINE';


Step 8: Verify database state using sqlplus. TARGET:  (new primary)

Look for:

Delta  = 1
FlashBackOption = NO
Role = PRIMARY

sqlplus / as sysdba
@dg_state.sql      /* one can check manually if you don't have the script */


Step 9: Review database state using DG Broker. TARGET:  (new standby)

Look for:

Database Status = SUCCESS
Transport Lag = 0 seconds
Apply Lag = 0 seconds
Real Time Query = ON

dgmgrl
DGMGRL> connect sys/<sys_password>@ora_servername_dbname
DGMGRL> show database verbose 'ora_servername_dbname';
DGMGRL> show database verbose 'ora_servername_dbname';


Step 10: Review database state using sqlplus. TARGET : (new standby)

Look for:

OpenMode = READ ONLY WITH APPLY
Role = PHYSICAL STANDBY
FlashBackOption = YES


FAILOVER PROD TO APE(Standby)

Note: This is for a real emergency. Perform with care, not for use on drills.

Step 1: Turn OFF database flashback. TARGET: (standby)

sqlplus / as sysdba
SQL> ALTER DATABASE FLASHBACK OFF;
SQL> SELECT FLASHBACK_ON FROM V$DATABASE;


Step 2: Perform Failover using DG Broker. TARGET:  (standby)

dgmgrl
DGMGRL> connect sys/<sys_password>@ora_servername_dbname
DGMGRL> failover to 'ora_servername_dbname';


Step 3: Disable Transport Log. TARGET: (new standalone primary)

dgmgrl
DGMGRL> connect sys/<sys_password>@ora_servername_dbname
DGMGRL> EDIT DATABASE 'ora_servername_dbname' SET STATE='LOG-TRANSPORT-OFF';


Data Guard Sync check

1.
SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied", (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM
(SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
(SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE
ARCH.THREAD# = APPL.THREAD#
;

2.
select max(sequence#) from v$archived_log where applied='YES';

Top ten query using temp

cursor bigtemp_sids is
select * from (
select s.sid,
s.status,
s.sql_hash_value sesshash,
u.SQLHASH sorthash,
s.username,
u.tablespace,
sum(u.blocks*p.value/1024/1024) mbused ,
sum(u.extents) noexts,
nvl(s.module,s.program) proginfo,
floor(last_call_et/3600)||':'||
floor(mod(last_call_et,3600)/60)||':'||
mod(mod(last_call_et,3600),60) lastcallet
from v$sort_usage u,
v$session s,
v$parameter p
where u.session_addr = s.saddr
and p.name = 'db_block_size'
group by s.sid,s.status,s.sql_hash_value,u.sqlhash,s.username,u.tablespace,
nvl(s.module,s.program),
floor(last_call_et/3600)||':'||
floor(mod(last_call_et,3600)/60)||':'||
mod(mod(last_call_et,3600),60)
order by 7 desc,3)
where rownum < 11;

temp usage with sort segments

select b.Total_MB,b.Total_MB - round(a.used_blocks*8/1024) Current_Free_MB,round(used_blocks*8/1024) Current_Used_MB,round(max_used_blocks*8/1024)
Max_used_MB from v$sort_segment a,(select round(sum(bytes)/1024/1024) Total_MB from dba_temp_files ) b;

Who is using temp with login id and SID

select u.tablespace
       , u.segfile#
       , u.segblk#
       , u.blocks
       , s.sid
       , s.serial#
--       , p.spid
       , s.username
       , s.osuser
       , s.status
from v$session s
   , v$sort_usage u
   , v$process p
where s.saddr = u.session_addr
  and s.paddr = p.addr
order by u.blocks desc;

Full Database Refresh step by step

This doc is helpful when we are refreshing a database from prod to non-prod and want to keep intact all the grants, users, passwords, and DB links in the target environment after the refresh.

If there is no such requirement, then steps 1 to 12 can be ignored; continue from step 13.

=============== Presteps ================

1.Check the size of the database:
========================
Spool database_size.log

col "Database Size" format a20
col "Free space" format a20
col "Used space" format a20
select round(sum(used.bytes) / 1024 / 1024 / 1024 ) || ' GB' "Database Size"
, round(sum(used.bytes) / 1024 / 1024 / 1024 ) -
round(free.p / 1024 / 1024 / 1024) || ' GB' "Used space"
, round(free.p / 1024 / 1024 / 1024) || ' GB' "Free space"
from    (select bytes
from v$datafile
union all
select bytes
from v$tempfile
union all
select bytes
from v$log) used
, (select sum(bytes) as p
from dba_free_space) free
group by free.p
/
spool off;


========================
1) Take a backup of all the existing users in the database.

set echo off
set head off
set pagesize 0
set feedback off
set linesize 2000
spool pre_refresh_user.sql
select 'create user "'||a.username||'" identified by values '''||decode(b.spare4,NULL,NULL,b.spare4||';')||b.password||
''' default tablespace '||a.default_tablespace||' temporary tablespace '||a.temporary_tablespace||' profile '||a.profile||' ; '
from dba_users a, user$ b
where a.username = b.name ;
spool off


=================
2) Pull all the system privileges that exist in the database.


set echo off
set head off
set pagesize 0
set feedback off
set linesize 2000
spool sys_privs.sql
select 'grant '||privilege||' to "'||grantee||'" ;'  from dba_sys_privs where grantee not in ('SYS');
spool off


=================
3) Pull all the user passwords from the database.


set echo off
set head off
set pagesize 0
set feedback off
set linesize 200
spool passwords.sql
select 'alter user "'||a.username||'" identified by values '''||decode(b.spare4,NULL,NULL,b.spare4||';')||b.password||''';'
from dba_users a, user$ b
where a.username = b.name ;
spool off


=============
4) Pull all the object privileges that exist in the database.


spool owner.sql
select 'grant '||privilege||' on '||owner||'."'||table_name||'" to  "'||grantee||'"; ' from dba_tab_privs ;
spool off


==============================
5) Pull all the roles that exist in the database.

set echo off
set head off
set pagesize 0
set feedback off
set linesize 200
spool pre_refresh_role_privs.sql
select 'grant '||granted_role||' to "'||grantee||'";' from dba_role_privs where grantee like '%%';
spool off


================
6) Pull all existing profiles from the database.


spool profile.sql
select distinct profile from dba_profiles;
spool off;

spool profile_ddl.sql
set lines 1000 pages 1000;
select dbms_metadata.get_ddl('PROFILE',a.profile) from dba_profiles a;
spool off;


====================
7) Take a backup of the profiles assigned to users.

spool user_profile.sql
select 'alter user "'||username||'" profile '||profile||';' from dba_users;
spool off;

============================

8) Take a backup of the temporary tablespaces and their allocated space.

spool temp_undo_tablespace.sql
set head on
a)

SELECT tablespace_name,
SUM(bytes_used)/1024/1024 "USED MB",
SUM(bytes_free)/1024/1024 "FREE MB"
FROM V$temp_space_header
GROUP BY tablespace_name;

b)

col file_name for a60
select file_name,bytes/1024/1024 from dba_temp_files
where tablespace_name like '%TEMP%';

c)

col name for a25
select a.tbl "Name",a.tsz "Total Size",b.fsz "Free Space",
   round((1-(b.fsz/a.tsz))*100) "Pct Used",round((b.fsz/a.tsz)*100) "Pct Free" from
               (select tablespace_name tbl,sum(bytes)/1024/1024/1024 TSZ from dba_data_files
               where tablespace_name like '%%' group by tablespace_name) a,
               (select tablespace_name tblsp,sum(bytes)/1024/1024/1024 FSZ from dba_free_space
               where tablespace_name like '%%' group by tablespace_name) b
   Where a.tbl=b.tblsp;

d)

col file_name for a60
select file_name,bytes/1024/1024 from dba_data_files
where tablespace_name ='UNDOTBS1';


spool off;


==============================================================
9)Check the listener name and services pointed to the database :

$ ps -ef|grep inh > listener_services.sql
$ lsnrctl status <listener_name> | grep <database_name> >> listener_services.sql

===========================
10) Take backup of all the DB Link :

SQL> create directory <directory name> as 'path'
$ expdp full=y include=DB_LINK dumpfile=dblink.dmp directory=prechecks logfile=dblink.log


=================================
11)Jobs information  

spool jobs.sql
select * from dba_jobs;
select * from dba_scheduler_jobs;
select count(*) from dba_jobs;
select count(*) from dba_scheduler_jobs;
spool off;

$ expdp full=y include=procobj dumpfile=jobs.dmp directory=prechecks logfile=jobs.log


========================================================================
12)Make a copy of the pfile and spfile in your current backup location


$ sqlplus / as sysdba
SQL> create pfile='/backup location/initdbname.ora' from spfile;


$ cd $ORACLE_HOME/dbs
$ cp spfiledbname.ora /backup location/spfiledbname.ora

With that, the pre-steps are now complete!!! Starting the DB refresh.

 Before Proceeding with DB restore make sure all the required backup pieces are ready in target machine.

 13)Shutdown the Target DB

SQL> sqlplus / as sysdba
SQL> shut immediate;


14) Startup the DB in mount mode, in restrict mode, using the pfile from step 12.

SQL> startup mount pfile='/backup location/initdbname.ora' restrict;

(It is good practice to copy the pfile to a different location and use that copy during the refresh.)


15) Drop the destination database:

SQL> drop database;

16) Edit the pfile: comment out the log_file_name_convert & db_file_name_convert parameters.

db_name='source database' (usually the source DB name if the target and source DB names differ; otherwise no change is required)
db_unique_name='destination database' (the QA or DEV DB that is getting refreshed)

17)Set up the environment of the database to be refreshed :

export ORACLE_SID=<Source_databasename>
echo $ORACLE_SID
export ORACLE_HOME=<$ORACLE_HOME of the destination database>
(Check the Oracle home with cat /var/opt/oracle/oratab or env | grep ora)
echo $ORACLE_HOME


18)Startup nomount the database using the pfile edited in step 16 :


SQL> startup nomount pfile='/backup location/initdbname.ora';
SQL> exit


19)Connect RMAN to start restore :

$ rman target /

20)Restore the controlfile using latest controlfile copy which is in your backup location i.e. controlfile of the production database :

RMAN> restore controlfile from '/backup location/name_of_controlfile';

(Use the latest controlfile copy among the available controlfile backups.)


21)Mount the database :

RMAN> alter database mount;

22)Now we need to catalog the backup Pieces which we are going to restore :

RMAN> catalog start with '/backup location/'; (location where the backup pieces are available)

RMAN> exit;

23)Now execute the set newname command to define a new location for each restored datafile on the target machine.

Login to the production (source) database and generate the commands :

$ sqlplus / as sysdba
SQL> set lines 300
SQL> select ' set newname for datafile ' || file# || ' to '''||name||''';' from v$datafile;



The output will look like the following (of course the number of datafiles will be much larger than what is shown below)
---------------------------------------------------------------------------------------------------------------

set newname for datafile 31 to '/u02/oradata/islam/system/undotbs04.dbf';
set newname for datafile 34 to '/u02/oradata/islam/datafile/edwcclyubal10_003.dbf';

Change the database name in the paths to the name of the destination database (for all the datafiles)
----------------------------------------------------------------------------------------------------------------------

set newname for datafile 31 to '/u02/oradata/islam2/system/undotbs04.dbf';
set newname for datafile 34 to '/u02/oradata/islam2/datafile/edwcclyubal10_003.dbf';
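Editing every path by hand is error-prone when there are hundreds of datafiles. The same substitution can be done in one pass; a minimal sketch, assuming the spooled commands were saved to newname_source.lst and that only the DB name component of the path changes (the file names and the islam/islam2 names are illustrative):

```shell
# Hypothetical sketch: rewrite the source datafile paths to the destination
# layout in one pass instead of editing every "set newname" line by hand.
# Sample spooled output from the query above (paths are illustrative):
cat > newname_source.lst <<'EOF'
set newname for datafile 31 to '/u02/oradata/islam/system/undotbs04.dbf';
set newname for datafile 34 to '/u02/oradata/islam/datafile/edwcclyubal10_003.dbf';
EOF
# Swap only the DB name component of the path (islam -> islam2):
sed "s|/oradata/islam/|/oradata/islam2/|g" newname_source.lst > newname_dest.lst
cat newname_dest.lst
```

The rewritten lines in newname_dest.lst can then be pasted into the restore script of the next step.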


24)Create a shell script which will perform the restore :

$ vi restore_dbname.sh

rman target / log=/backup location/restore_dbname.log << EOF
run
{
 set newname for datafile 1 to '/u02/oradata/system/system_001.dbf';
 set newname for datafile 2 to '/u02/oradata/system/undotbs01.dbf';
 set newname for datafile 3 to '/u02/oradata/system/sysaux_004.dbf';
 set newname for datafile 4 to '/u02/oradata/datafile/assetedwdata01_006.dbf';
 set newname for datafile 5 to '/u02/oradata/system/undotbs02.dbf';
 set newname for datafile 6 to '/u02/oradata/datafile/edwdata01_008.dbf';
 set newname for datafile 7 to '/u02/oradata/datafile/edwindx01_024.dbf';
 set newname for datafile 8 to '/u02/oradata/datafile/edwdata01_007.dbf';
 set newname for datafile 9 to '/u02/oradata/datafile/edwindx01_023.dbf';
 set newname for datafile 10 to '/u02/oradata/system/undotbs03.dbf';
restore database;
switch datafile all;
recover database;
}
exit
EOF
:wq!

Run the script in the background; this starts the restore by executing the commands above :

$ sh restore_dbname.sh &


25)Tail the log file and monitor the progress of the restore periodically :

$ tail -f /backup location/restore_dbname.log  (this is the log file named in the shell script)

26)Once the restore is complete, recovery will stop with an error that the requested archivelog sequence was not found (typically RMAN-06054). This is expected: recovery has applied all available archived logs and is asking for one that does not exist.

27)Now, rename the existing redo log members to their new paths :

SQL> select member from v$logfile;

(check the redo log files)

SQL> alter database rename file '/u02/oradata/sourcedb/system/redo06a.log' to '/u02/oradata/targetdb/system/redo06a.log';

SQL> alter database rename file '/u02/oradata/sourcedb/mirror/redo06b.log' to '/u02/oradata/targetdb/mirror/redo06b.log';

Follow similar steps for all the members.
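When there are many redo members, the rename statements themselves can be generated rather than typed one by one. A hedged sketch, assuming the member list was spooled to logfile_members.lst and that only the DB name in the path changes (the sourcedb/targetdb names and file names are illustrative):

```shell
# Hypothetical sketch: emit one "alter database rename file" statement per
# redo member. Assumes the member list came from "select member from v$logfile"
# and only the DB name in the path changes (sourcedb -> targetdb).
cat > logfile_members.lst <<'EOF'
/u02/oradata/sourcedb/system/redo06a.log
/u02/oradata/sourcedb/mirror/redo06b.log
EOF
awk '{ dst=$0; sub(/sourcedb/,"targetdb",dst);
       printf "alter database rename file '\''%s'\'' to '\''%s'\'';\n", $0, dst }' \
    logfile_members.lst > rename_logfiles.sql
cat rename_logfiles.sql
```

The resulting rename_logfiles.sql can then be run from SQL*Plus while the database is mounted.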

28)Clear the Log group :

SQL> alter database clear logfile group 6; (repeat for each log group)

29)Now, open the database with resetlogs option :

SQL> alter database open resetlogs;

30)Create new tempfiles of the same size as recorded in step 8 of the pre-steps, and drop all the extra tempfiles. (This step applies only when the source and target have different temp tablespace structures) :

select * from v$tempfile;

Check the tempfiles recorded in the pre-steps and recreate them exactly as per step 8.

Add a new tempfile of the previous size, then drop the existing one :

SQL> alter tablespace TEMP add tempfile '/u02/oradata/dbname/datafile/TEMP01.dbf' size 4096m; (new)
SQL> alter database tempfile '/u02/oradata/sourcedb/system/temp_users01.dbf' drop; (old)

SQL> alter tablespace TEMP add tempfile '/path/name' size xxG; (repeat as needed)

31)shutdown the database :

SQL> shut immediate;

32)Startup mount with the pfile in the backup location and change the DB name and DBID using nid :

$ sqlplus / as sysdba

SQL> startup mount pfile='pfile_location/initdbname.ora';
SQL> exit

$ nid target=sys dbname='destination database'

Note: After the nid command completes successfully, the DB shuts down automatically and must be started with the resetlogs option.

33)Bring up the database with original PFILE from $ORACLE_HOME/dbs/ location.

SQL> startup mount pfile='$ORACLE_HOME/dbs/initdbname.ora';
SQL> alter database noarchivelog; (if the destination DB runs in noarchivelog mode)

34)Open the database with resetlogs option :

SQL> alter database open resetlogs;

35)Check if the db name has changed appropriately and all its files are in proper location :

SQL> select name,open_mode from v$database;
SQL> select name from v$datafile;
SQL> select name from v$tempfile;
SQL> select member from v$logfile;
SQL> select name from v$controlfile;
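A quick way to double-check the queries above is to spool the returned names to a file and grep for the old DB name; anything that still matches was missed during the rename. A hypothetical sketch (db_files.lst and the sourcedb/targetdb names are illustrative):

```shell
# Hypothetical sketch: verify no file path still references the source DB name.
# Assumes the names from the step-35 queries were spooled to db_files.lst;
# "sourcedb" is the old DB name (both names are illustrative).
cat > db_files.lst <<'EOF'
/u02/oradata/targetdb/system/system_001.dbf
/u02/oradata/targetdb/system/redo06a.log
/u02/oradata/targetdb/datafile/TEMP01.dbf
EOF
if grep -q "sourcedb" db_files.lst; then
  echo "WARNING: some files still reference the source DB path"
else
  echo "OK: no source DB paths remain"
fi
```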

36)Create a spfile from the pfile (this creates the spfile in the $ORACLE_HOME/dbs location) :

SQL> create spfile from pfile='/backup location/initdbname.ora';

37)Now, restart the database :

SQL> shut immediate;
SQL> startup

38)Check the listener status:

$ lsnrctl status <listener_name> | grep <database_name>

                 Database restore completed here. Continue with the post-steps to restore the grants, users, roles and DB links. (Steps 39 to 52 are needed only as per requirement; if there is no such requirement they can be ignored.)



====================================
39)Run pre_refresh_user.sql(step 1) :
====================================

SQL> spool pre_refresh_user_post.log
SQL> @pre_refresh_user.sql
SQL> spool off;


==============================
40)Run sys_privs.sql(step 2) :
==============================

SQL> spool sys_privs_post.log
SQL> @sys_privs_pre.sql
SQL> spool off;


==============================
41)Run passwords.sql(step 3) :
==============================

SQL> spool passwords_post.log
SQL> @passwords.sql
SQL> spool off;


=========================================================================
42)If you got any error like 'ORA-28007: the password cannot be reused' :
=========================================================================

SQL> select username,profile from dba_users where username in ('<u1>','<u2>','<u3>','<u4>','<u5>','<u6>');

SQL> SELECT PROFILE,LIMIT FROM DBA_PROFILES where RESOURCE_NAME='PASSWORD_REUSE_TIME';


=====================================================
43)Change PASSWORD_REUSE_MAX parameter to UNLIMITED :
=====================================================

SQL> alter profile <profile name> LIMIT PASSWORD_REUSE_MAX UNLIMITED;

SQL> spool passwords_post1.log
SQL> @passwords.sql
SQL> spool off;


======================================================================================
44)Roll back PASSWORD_REUSE_MAX parameter to Original value which is in profiles.sql :
======================================================================================

SQL> alter profile <profile name> LIMIT PASSWORD_REUSE_MAX <value/DEFAULT>;


==========================
45)Run owner.sql(step 4) :
==========================

SQL> spool owner_post.log
SQL> @owner.sql
SQL> spool off;


===========================================
46)Run pre_refresh_role_privs.sql(step 5) :
===========================================

SQL> spool pre_refresh_role_privs_post.log
SQL> @pre_refresh_role_privs.sql
SQL> spool off;



=================================
47)Run user_profile.sql(step 7) :
=================================

SQL> spool user_profile_post.log
SQL> @user_profile.sql
SQL> spool off;




============================
48)Drop the Database links :
============================
SQL> spool dblink_drop.log
SQL> set lines 1000 pages 1000
SQL> col OWNER for a20
SQL> col DB_LINK for a20
SQL> col HOST for a15
SQL> select * from dba_db_links;


 == Login as the DB link owner ==

SQL> drop database link DB_LINK_source_TO_target;

Database link dropped.

SQL> conn / as sysdba

Connected.



========================================================
49)Create the Database links from the dumpfile(step 9) :
========================================================

SQL> create directory prechecks as '/u02/oradata/global/'; (use the same directory name as in the export)
SQL> exit

$ impdp full=y include=DB_LINK dumpfile=dblink.dmp logfile=databaselink_post.log directory=prechecks


========================================================
50)Create the Database jobs from the dumpfile(step 10) :
========================================================

$ impdp full=y include=procobj dumpfile=jobs.dmp logfile=jobs_post.log directory=prechecks


=================================================
51)Create the password file in $ORACLE_HOME/dbs :
=================================================

$ cd $ORACLE_HOME/dbs
$ orapwd file=orapw<dbname> password=<password>


========================================================================
52)Check the connectivity from SQL Developer to the respective schemas in the database using the service name :

####################   Database Refresh Completed  ####################