Some Oracle 10g Best Practices

  • Learn to decipher AWR reports.
  • Create indexes only where needed; drop unneeded indexes.
  • Use stored outlines when the application code cannot be modified.
  • Use RAID-5 for OLTP systems; 85-95% of I/Os are reads anyway.
  • Create tablespaces with EXTENT MANAGEMENT LOCAL and SEGMENT SPACE MANAGEMENT AUTO.
  • If you cannot use Data Pump:
    • TIP #1: Export with buffer set to several hundred MB, recordlength=65535, direct=y.
    • TIP #2: Import with commit=n, buffer set to several hundred MB, recordlength=65535, and a large UNDO tablespace.
  • Use SGA_TARGET; don’t bother sizing the SGA components yourself.
  • In Grid Control, set the tablespace alert threshold to 92%; the default of 97% is too risky. Better yet, create a monitoring template.
  • Rebuild indexes online regularly; don’t bother validating the need first, since some application indexes may be locked all the time, and finding which indexes really need a rebuild is time-consuming.
    • TIP: If you can afford it, recreate the index instead; it’s better than a rebuild.
  • On a normal OLTP database, let Oracle gather the statistics; don’t bother doing it yourself.
  • Reorganize tables with ALTER TABLE ... MOVE TABLESPACE ....
  • Cache all small static tables in a KEEP buffer pool.
  • Scan the database regularly for costly full table scans.
  • Forget the myth of restarting the database weekly to supposedly clean the instance: it flushes your caches, and some views may suffer from it. In one case, a view took 1h30 to re-cache itself.
  • If you see queries with lots of outer joins, validate each and every outer join; some developers don’t take any chances! At one client, I saved two CPUs by removing 7 outer joins in a table trigger.
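Several of the bullets above map directly to SQL. A minimal sketch, assuming made-up names for the tablespace, table, and index (app_data, app.country_codes, app.orders, app.orders_pk):

```sql
-- Locally managed tablespace with automatic segment space management
CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/ORCL/app_data01.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

-- Cache a small static table in the KEEP pool
-- (DB_KEEP_CACHE_SIZE must be set for the pool to exist)
ALTER TABLE app.country_codes STORAGE (BUFFER_POOL KEEP);

-- Reorganize a table into another tablespace; the move marks the
-- table's indexes UNUSABLE, so rebuild them (online) afterwards
ALTER TABLE app.orders MOVE TABLESPACE app_data;
ALTER INDEX app.orders_pk REBUILD ONLINE;
```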

RMAN backup script on Windows


Here’s a script I created recently to back up a database on Windows with RMAN.

Hope you enjoy it.


The script takes 4 parameters:

  • Target database SID
  • RMAN catalog database SID
  • RMAN catalog database password
  • Level of the RMAN backup (0 = full, 1 = incremental)
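For example, assuming the script is saved as rman_backup.cmd (the file name here is illustrative, not part of the script), a level 0 run against target ORCL with an RMAN catalog database RMANDB might look like:

```
C:\scripts> rman_backup.cmd ORCL RMANDB rmanpwd 0
```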

----- Begin Script -----


@echo on

REM Map the four parameters. The catalog connect string below assumes a
REM catalog user named rman; adjust it to your environment. %TMPDIR% and
REM %RMAN_DEST% must already be set.
set ORACLE_SID=%1
set CATALOG_PASS=rman/%3@%2
set LEVELBCK=%4

REM This bit generates the RMAN script to backup database,
REM archivelogs and control file and then crosscheck output.
REM The three FORMAT strings are placeholders; adjust them to your layout.
echo run { > %TMPDIR%rman_%ORACLE_SID%.rcv
echo allocate channel d1 type disk; >> %TMPDIR%rman_%ORACLE_SID%.rcv
echo allocate channel d2 type disk; >> %TMPDIR%rman_%ORACLE_SID%.rcv
echo allocate channel d3 type disk; >> %TMPDIR%rman_%ORACLE_SID%.rcv
echo backup incremental level %LEVELBCK% format '%RMAN_DEST%db_%%d_%%s_%%p.bkp' database; >> %TMPDIR%rman_%ORACLE_SID%.rcv
echo sql 'alter system archive log current'; >> %TMPDIR%rman_%ORACLE_SID%.rcv
echo backup archivelog all format '%RMAN_DEST%arc_%%d_%%s_%%p.bkp'; >> %TMPDIR%rman_%ORACLE_SID%.rcv
echo backup current controlfile format '%RMAN_DEST%ctl_%%d_%%s_%%p.bkp'; >> %TMPDIR%rman_%ORACLE_SID%.rcv
echo release channel d1; >> %TMPDIR%rman_%ORACLE_SID%.rcv
echo release channel d2; >> %TMPDIR%rman_%ORACLE_SID%.rcv
echo release channel d3; >> %TMPDIR%rman_%ORACLE_SID%.rcv
echo } >> %TMPDIR%rman_%ORACLE_SID%.rcv
echo allocate channel for maintenance type disk; >> %TMPDIR%rman_%ORACLE_SID%.rcv
echo crosscheck backup; >> %TMPDIR%rman_%ORACLE_SID%.rcv
echo crosscheck backup of archivelog all; >> %TMPDIR%rman_%ORACLE_SID%.rcv
echo crosscheck backup of controlfile; >> %TMPDIR%rman_%ORACLE_SID%.rcv
echo release channel; >> %TMPDIR%rman_%ORACLE_SID%.rcv
echo exit >> %TMPDIR%rman_%ORACLE_SID%.rcv

REM This starts RMAN, executes the script created earlier, then exits
REM and tidies up. A full (level 0) backup first empties the destination.
if %LEVELBCK%==1 goto INCR
if %LEVELBCK%==0 goto FULL
goto END

:FULL
del %RMAN_DEST%*.* /q
rman target / catalog=%CATALOG_PASS% cmdfile=%TMPDIR%rman_%ORACLE_SID%.rcv
del %TMPDIR%rman_%ORACLE_SID%.rcv
goto END

:INCR
rman target / catalog=%CATALOG_PASS% cmdfile=%TMPDIR%rman_%ORACLE_SID%.rcv
del %TMPDIR%rman_%ORACLE_SID%.rcv

:END

----- End Script -----

Network import with DataPump

With DataPump, there is no need to create an export file anymore if the sole purpose is to import data.


Before proceeding with the import, the following needs to be configured:


  • Create a streams pool (50 MB)
  • Create a directory (since there is no export file involved, it only serves as the location of the DataPump log file)
  • Grant read and write privileges on the DataPump directory to the DataPump user
  • Create a public database link to the remote database


Import with DataPump

Generic example (in this case I connected to the remote schema directly):


  1. alter system set streams_pool_size=50M scope=both;
  2. create directory DATAPUMP_DIR as '/export/datapump';
  3. grant read, write on directory DATAPUMP_DIR to DPEDEV;
  4. create public database link RDPEPRD connect to DPEPRD identified by DPEPRD using 'rdpeprd';
  5. impdp dpedev/dpedev directory=datapump_dir network_link=RDPEPRD REMAP_SCHEMA=DPEPRD:DPEDEV

Make sure the remote user has the EXP_FULL_DATABASE role.
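On the remote (source) database, that grant would look like this, reusing the DPEPRD user from the example above:

```sql
-- Run as a privileged user on the source database
GRANT EXP_FULL_DATABASE TO DPEPRD;
```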