Friday, June 25, 2010
How to check rows that got inserted today
Luckily I came across a reply from Pelle in a forum regarding this -
If you are running 10g or higher, you can use the ORA_ROWSCN pseudocolumn, which can be translated to a timestamp using the SCN_TO_TIMESTAMP function.
select ID from SCHEMA.TABLE where scn_to_timestamp( ora_rowscn ) - 0 > sysdate - 1;
But there can be situations where we insert data imported from another instance. Since such rows may carry invalid SCNs, SCN_TO_TIMESTAMP will raise an exception. So, here is a small function that returns NULL whenever a row has an invalid SCN.
CREATE OR REPLACE FUNCTION rowdate( rowscn IN NUMBER )
RETURN DATE
AS
    l_rowts TIMESTAMP;
    bad_scn EXCEPTION;
    PRAGMA EXCEPTION_INIT( bad_scn, -8181 );
BEGIN
    l_rowts := scn_to_timestamp( rowscn );
    RETURN CAST( l_rowts AS DATE );
EXCEPTION WHEN bad_scn THEN
    RETURN NULL;
END;
/
and then you can use the following SQL to see the records added to the table today. Be advised that this will give you not only the rows inserted today, but also the ones updated today.
select ID from SCHEMA.TABLE where rowdate( ora_rowscn ) > trunc( sysdate );
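One caveat worth noting: by default Oracle tracks ORA_ROWSCN at the block level, so the derived timestamp is only approximate. For row-level SCN tracking the table must be created with ROWDEPENDENCIES; a sketch on a hypothetical table:

```sql
-- ROWDEPENDENCIES makes Oracle store an SCN per row (at the cost of a
-- few extra bytes per row); without it ORA_ROWSCN is per block.
CREATE TABLE schema.t_with_rowscn (
    id   NUMBER,
    name VARCHAR2(30)
) ROWDEPENDENCIES;
```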
Friday, June 11, 2010
Monitoring Indexes
The way to achieve this is to turn on monitoring on the indexes for a while, like -
ALTER INDEX index_name MONITORING USAGE;
and once the information has been gathered we can turn off the monitoring like -
ALTER INDEX index_name NOMONITORING USAGE;
This records the information in the OBJECT_USAGE dictionary table (owned by SYS), which has the following four columns: OBJ#, FLAGS, START_MONITORING, END_MONITORING.
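For the current schema, the same information is also exposed through the V$OBJECT_USAGE view, which decodes the flags into readable columns. A quick check might look like:

```sql
-- Run as the index owner; USED shows YES/NO
SELECT index_name, table_name, monitoring, used,
       start_monitoring, end_monitoring
FROM   v$object_usage;
```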
OBJ# is the OBJECT_ID and can be correlated with DBA_OBJECTS. The OBJECT_ID is a unique identifier of an object in the database: each object is assigned a unique number, so no two objects, even in different schemas, share the same OBJECT_ID. That is presumably why Oracle keeps a single master table (X$) and a single view (OBJECT_USAGE) to record which objects were actually used and which were not.
FLAGS column of OBJECT_USAGE view can have two values, 1 (USED) and 0 (NOT USED).
After turning monitoring on and then off, we can run a simple query like the following to get the objects that are in use -
SELECT A.OWNER, A.OBJECT_NAME, A.OBJECT_TYPE
FROM DBA_OBJECTS A, OBJECT_USAGE B
WHERE A.OBJECT_ID=B.OBJ#
AND B.FLAGS=1
/
If you want to check the space used by all indexes that are NOT in use, you can run this query -
SELECT SUM(BYTES/1024/1024/1024) "TOTAL SPACE (GB)"
FROM DBA_SEGMENTS A, DBA_OBJECTS B, OBJECT_USAGE C
WHERE A.OWNER=B.OWNER
AND A.SEGMENT_NAME=B.OBJECT_NAME
AND B.OBJECT_ID=C.OBJ#
AND B.OBJECT_TYPE='INDEX'
AND C.FLAGS=0
/
Dropping unused indexes can not only save space but also improve DML performance, since unused indexes still have to be maintained.
Thursday, May 13, 2010
Where are my NULL records
Created a table with three records -
SQL> select * from NULL_TEST;
ID NAME
-- ------------------------------
1 Apple
2 Orange
Watermelon
If I select all the records other than ID=2 then I get the following resultset -
SQL> select * from NULL_TEST where ID!=2;
ID NAME
-- ------------------------------
1 Apple
So, why is it not showing the record that has ID=NULL?
Because we cannot use equality ("=") or inequality ("!=") comparisons with NULL; only "IS NULL" or "IS NOT NULL" work.
Our SQL should be like this -
SQL> select * from NULL_TEST where ID!=2 or ID is NULL;
ID NAME
-- ------------------------------
1 Apple
Watermelon
That's better!
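Another way to write this with a single predicate is the LNNVL function, which returns TRUE when its condition evaluates to FALSE or UNKNOWN (i.e. involves NULL):

```sql
-- Returns rows where ID != 2, including rows where ID is NULL
select * from NULL_TEST where lnnvl( ID = 2 );
```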
Wednesday, May 12, 2010
Temp Tablespace Usage
SELECT sql_text,
sum(onepass_executions) onepass_cnt,
sum(multipasses_executions) mpass_cnt
FROM V$SQL s, V$SQL_WORKAREA wa
WHERE s.address = wa.address
GROUP BY sql_text
HAVING sum(onepass_executions+multipasses_executions)>0;
Script to show allocation on the temp tablespace -
SELECT TO_NUMBER(DECODE(sid, 65535, NULL, sid)) sid,
operation_type OPERATION,
TRUNC(expected_size/1024) ESIZE,
TRUNC(actual_mem_used/1024) MEM,
TRUNC(max_mem_used/1024) MAXMEM,
number_passes PASS,
TRUNC(tempseg_size/1024) TSIZE
FROM V$SQL_WORKAREA_ACTIVE
ORDER BY 1,2;
With the following script you can monitor the actual space used in a temporary tablespace and see its HWM. The script is designed to run when there is only one temporary tablespace in the database.
select sum( u.blocks * blk.block_size )/1024/1024 "Mb. in sort segments",
       ( hwm.max * blk.block_size )/1024/1024 "Mb. High Water Mark"
from v$sort_usage u,
     ( select block_size from dba_tablespaces where contents = 'TEMPORARY' ) blk,
     ( select segblk# + blocks max from v$sort_usage
       where segblk# = ( select max(segblk#) from v$sort_usage ) ) hwm
group by hwm.max * blk.block_size/1024/1024;
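If you just need a quick per-tablespace summary of temp space usage, V$TEMP_SPACE_HEADER can provide it (figures below are in MB):

```sql
SELECT tablespace_name,
       SUM(bytes_used)/1024/1024 "Used MB",
       SUM(bytes_free)/1024/1024 "Free MB"
FROM   v$temp_space_header
GROUP  BY tablespace_name;
```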
Disaster Recovery using RMAN
• All the filesystems of the database server, including the Oracle binaries and the TSM client, will be available on the DR server.
• Database TESTDB uses a server parameter file (not a client-side initialization parameter file)
• We are using Control File for the purpose of this DR and not the Recovery Catalog
• You have a record of the DBID for TESTDB (1527325274)
• A media manager (TSM Server) is accessible
Steps followed were as follows -
Step 1. Connect RMAN without catalog
rman TARGET / NOCATALOG
Step 2. Start the instance without mounting it.
RMAN> SET DBID 1527325274; --> DBID of the Source Database (TESTDB)
RMAN> STARTUP NOMOUNT;
startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file -----
starting Oracle instance without parameter file for retrieval of spfile
Oracle instance started
Step 3. Restore and edit the server parameter file.
RUN
{
allocate channel t1 device type 'sbt_tape' PARMS="ENV=(TDPO_OPTFILE=/oracle/tdpo/TESTDB/tdpo.opt)";
RESTORE SPFILE TO PFILE '/oracle/product/10.2/dbs/initTESTDB.ora' FROM AUTOBACKUP;
SHUTDOWN ABORT;
}
Step 4. Start the database at NOMOUNT state.
STARTUP FORCE NOMOUNT PFILE='/oracle/product/10.2/dbs/initTESTDB.ora';
Step 5. Restore the Control File from the Autobackup.
RUN
{
ALLOCATE CHANNEL t1 DEVICE TYPE 'sbt_tape' PARMS="ENV=(TDPO_OPTFILE=/oracle/tdpo/TESTDB/tdpo.opt)";
RESTORE CONTROLFILE FROM AUTOBACKUP;
release channel t1;
}
Step 6. Restore and Recover the database
run
{
allocate channel t1 device type 'sbt_tape' PARMS="ENV=(TDPO_OPTFILE=/oracle/tdpo/TESTDB/tdpo.opt)";
set until time "to_date('05/10/2010 4:00','MM/DD/YYYY HH24:MI')";
restore database;
recover database;
release channel t1;
}
Step 7. Open the database and start the Listener
ALTER DATABASE OPEN RESETLOGS;
lsnrctl start
Check if everything is running on the database server -
ps -ef | grep smon
ps -ef | grep tns
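A quick sanity check from SQL*Plus after the restore might look like this (assuming the instance is open):

```sql
-- OPEN_MODE should be READ WRITE and RESETLOGS_TIME should be recent
SELECT name, dbid, open_mode, resetlogs_time FROM v$database;
```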
Wednesday, January 20, 2010
Left Brain vs Right Brain
Everyone can have his/her own opinion on career decisions, but while doing my MBA our professor, Dr. George Judah, once told us about the right brain and the left brain. I thought maybe it could help us understand our strengths and give us direction.
Our brain has two halves - the right and the left. People using the left part of the brain are usually logical and analytical, while those who use the right half are creative, innovative and imaginative. Experimentation has shown that the two different sides, or hemispheres, of the brain are responsible for different manners of thinking. The following table illustrates the differences between left-brain and right-brain thinking:
Left Brain -
Logical
Sequential
Rational
Analytical
Objective
Looks at parts
Right Brain -
Random
Intuitive
Holistic
Synthesizing
Subjective
Looks at wholes
Looking at this, one could conclude that if you are left-brained it may be better to be a DBA rather than going for a management profile, which demands a right-brained person. However, it doesn't mean that a manager cannot be logical or analytical; it's just a matter of which trait is required more for which kind of job.
I have seen many good techies fail miserably as entrepreneurs, and a few people with little or no formal education go on to become good entrepreneurs. That's where you need to be more creative and innovative rather than just logical and analytical. Hope we learn something from science. For more information please follow these links –
http://www.ehow.com/how_2172882_tell-leftbrained-rightbrained.html
http://jobs.aol.com/articles/2009/01/26/are-you-a-right-or-left-brain-thinker/
RMAN - Data Recovery Advisor
Oracle has introduced a new feature in Oracle Database 11g that can be used to get assistance in case of loss or corruption of datafiles, redo log files or controlfiles. This tool is called Data Recovery Advisor and is accessible from the command line as well as from the GUI. Command line interface is implemented as new commands for RMAN and the graphical interface is integrated into the Enterprise Manager.
The Data Recovery Advisor is a tool that helps you to diagnose and repair data failures and corruptions. The Data Recovery Advisor analyzes failures based on symptoms and intelligently determines optimal repair strategies. The tool can also automatically repair diagnosed failures. It is very useful in case of Block Corruption.
DBAs need to proactively check for the block corruption in the database on a regular basis. There can be two types of block corruptions: physical and logical. When physical corruption occurs, the Oracle kernel is unable to read or write one or more data blocks, usually due to media corruption. In logical corruption, the data that is read back or written is inconsistent. Usually this is due to various bugs at the Oracle or Media layer, and one may observe ORA-00600 or ORA-01578 errors.
In Oracle Database 11g, RMAN directly reports corrupted blocks it has found in the output of the BACKUP command and creates a trace file in the Automatic Diagnostic Repository (ADR). Prior to 11g, DBAs had to look at V$DATABASE_BLOCK_CORRUPTION for this information after the backup was done, so this is a great improvement. A check that returns a corrupted block in Oracle Database 11g immediately registers a failure in the database. There are multiple checkers in 11g that are run automatically by the server as soon as a critical error is raised: database structure checks, data dictionary integrity checks and block corruption checks.
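These checks can also be triggered proactively from RMAN; the VALIDATE command records any corruption it finds as failures that the DRA can then act on, for example:

```
RMAN> VALIDATE DATABASE;                 -- check the whole database
RMAN> VALIDATE DATAFILE 12;              -- or just one suspect datafile
RMAN> VALIDATE CHECK LOGICAL DATABASE;   -- include logical corruption checks
```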
RMAN has the following set of new commands to deal with failures:
• LIST FAILURE
• ADVISE FAILURE
• CHANGE FAILURE
• REPAIR FAILURE
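A few useful variations of these commands (the failure ID shown is hypothetical):

```
RMAN> LIST FAILURE ALL;                 -- include closed and low-priority failures
RMAN> LIST FAILURE 622 DETAIL;          -- drill into one failure
RMAN> REPAIR FAILURE PREVIEW;           -- show the repair script without running it
RMAN> CHANGE FAILURE 622 PRIORITY LOW;  -- re-prioritize a failure you accept for now
```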
Depending upon the type of failure and its recovery time, in Oracle Database 11g RMAN can choose to use block copies from the flashback logs instead of a backup for block media recovery. It is in such situations that the ADVISE FAILURE command comes in very handy.
Let’s look at an example of how we can repair a corrupted block in a datafile. For this test, we will be using the TESTDB database.
Step 1. Let’s corrupt a data file
dd of=/u01/oradata/TESTDB/example_01.dbf bs=8192 conv=notrunc seek=12 <<EOF
CORRUPT corrupt CORRUPT corrupt CORRUPT corrupt CORRUPT corrupt CORRUPT corrupt
EOF
0+3 records in
0+3 records out
Step 2. Check if there is any corruption in the database
SQL> ALTER SYSTEM FLUSH BUFFER_CACHE;
System altered.
SQL> select * from hr.dba;
select * from hr.dba
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 12, block # 12)
ORA-01110: data file 12: '/u01/oradata/TESTDB/example_01.dbf'
Step 3. List the failure as per the DRA
rman target /
connected to target database: TESTDB (DBID=xxxxxxxxxxxx)
RMAN> connect catalog test_rmandb/*****@RMANDB
connected to recovery catalog database
RMAN> list failure;
List of Database Failures
=========================
Failure ID Priority Status Time Detected Summary
---------- -------- --------- ------------- -------
622 HIGH OPEN 20-OCT-08 Datafile 12: '/u01/oradata/TESTDB/example_01.dbf' contains one or more corrupt blocks
Step 4. Get the advised solution for the failure from the DRA
RMAN> advise failure;
List of Database Failures
=========================
Failure ID Priority Status Time Detected Summary
---------- -------- --------- ------------- -------
622 HIGH OPEN 20-OCT-08 Datafile 12: '/u01/oradata/TESTDB/example_01.dbf' contains one or more corrupt blocks
analyzing automatic repair options; this may take some time
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=129 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Data Protection for Oracle: version 5.5.1.0
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=128 device type=DISK
analyzing automatic repair options complete
Mandatory Manual Actions
========================
no manual actions available
Optional Manual Actions
=======================
no manual actions available
Automated Repair Options
========================
Option Repair Description
------ ------------------
1 Perform block media recovery of block 12 in file 12
Strategy: The repair includes complete media recovery with no data loss
Repair script: /u01/diag/rdbms/testdb/TESTDB/hm/reco_181505967.hm
Step 5. Now repair the corrupted block as per the solution provided by the DRA
RMAN> repair failure;
Strategy: The repair includes complete media recovery with no data loss
Repair script: /u01/diag/rdbms/testdb/TESTDB/hm/reco_181505967.hm
contents of repair script:
# block media recovery
recover datafile 12 block 12;
Do you really want to execute the above repair (enter YES or NO)? yes
executing repair script
Starting recover at 20-OCT-08
using channel ORA_SBT_TAPE_1
using channel ORA_DISK_1
channel ORA_SBT_TAPE_1: restoring block(s)
channel ORA_SBT_TAPE_1: specifying block(s) to restore from backup set
restoring blocks of datafile 00012
channel ORA_SBT_TAPE_1: reading from backup piece 0sjtk0q7_1_1
channel ORA_SBT_TAPE_1: piece handle=0sjtk0q7_1_1 tag=TAG20081020T095831
channel ORA_SBT_TAPE_1: restored block(s) from backup piece 1
channel ORA_SBT_TAPE_1: block restore complete, elapsed time: 00:01:55
starting media recovery
media recovery complete, elapsed time: 00:00:03
Finished recover at 20-OCT-08
repair failure complete
ADR and DRA are a great help for every DBA. A DBA might use the DRA to avoid making big mistakes during disaster situations, when time plays a crucial role and the chance of making a mistake is rather high.
SQL*Plus Copy
1. insert into roop.t@target select * from mark.t;
2. To make it faster we can use –
insert /*+ append */ into roop.t@target select * from mark.t;
3. Let's look at the SQL*Plus COPY option. The SQL*Plus COPY command is one of the fastest ways of copying data between databases and schemas. It is also one of the few methods that handle LONG columns correctly.
copy from mark/mark@source to roop/roop@target create t using select * from t;
Let’s understand the syntax –
FROM database - The database that contains the data to be copied. If you omit the FROM clause, the source defaults to the database to which SQL*Plus is connected (that is, the database that other commands address). You must use a FROM clause to specify a source database other than the default. The COPY command FROM clause does not support SYSDBA or SYSOPER privileged connections.
TO database - The database containing the destination table. If you omit the TO clause, the destination defaults to the database to which SQL*Plus is connected (that is, the database that other commands address). You must use a TO clause to specify a destination database other than the default. The COPY command TO clause does not support SYSDBA or SYSOPER privileged connections.
APPEND - Inserts the rows from query into destination_table if the table exists. If destination_table does not exist, COPY creates it.
CREATE - Inserts the rows from query into destination_table after first creating the table. If destination_table already exists, COPY returns an error.
INSERT - Inserts the rows from query into destination_table. If destination_table does not exist, COPY returns an error. When using INSERT, the USING query must select one column for each column in destination_table.
REPLACE - Replaces destination_table and its contents with the rows from query. If destination_table does not exist, COPY creates it. Otherwise, COPY drops the existing table and replaces it with a table containing the copied data.
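COPY works in batches controlled by SQL*Plus settings, which are worth tuning for large tables; a sketch (note the trailing hyphen is the SQL*Plus line continuation character):

```sql
SET LONG 2000000     -- maximum LONG size COPY will transfer
SET ARRAYSIZE 1000   -- rows fetched per batch
SET COPYCOMMIT 10    -- commit after every 10 batches
COPY FROM mark/mark@source TO roop/roop@target -
APPEND t USING SELECT * FROM t;
```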
This command has not been enhanced since Oracle 8.0 and may be declared obsolete in a future version (it is still there in 11gR2), but there is no replacement that copies tables with LONG columns so easily (the LONG datatype itself is obsolete).
Since the main focus of this topic was SQL*Plus COPY, I didn't touch other ways to copy data from one database to another, like export/Data Pump, external tables, SQL*Loader, etc.
Saturday, January 16, 2010
RMAN - Automated TSPITR
1. In the traditional way, I might first restore the SYSTEM, SYSAUX, UNDO and user data tablespaces into an auxiliary database, then export this particular schema and import it into the target database.
2. Use the DUPLICATE DATABASE command to create this temporary database, then export the schema and import it into the target database.
3. Or, now, use fully automated RMAN TSPITR.
Before I go further, let me make it clear that the schema I want to restore is the ONLY schema in the tablespace I am restoring. So there is a one-to-one relationship between schema and tablespace.
Here is how it goes - Connect to RMAN, Connect to Catalog and run this single line command.
rman target /
connect catalog TEST_RMAN/****@RMANDB;
recover tablespace USERS until time "to_date('2009-10-15 15:40:00','YYYY-MM-DD HH24:MI:SS')" auxiliary destination '/u01/oradata/RESTORE';
That's it. Once the above statement is executed, RMAN does the following for us:
- Creates auxiliary instance (including the pfile etc..)
- Mounts the auxiliary instance
- Makes the candidate tablespace into OFFLINE
- Restores the basic tablespaces UNDO, SYSTEM, SYSAUX and then the required tablespace
- Applies archives (completes recovery)
- Opens the database
- Performs an export of the objects residing in the tablespace
- Shuts down the aux instance
- Imports the objects into the target database
- Removes the aux instance and cleans up
Since I had a one-to-one relationship between tablespace and schema, only one schema went back to the given point in time; but if there are multiple schemas in the tablespace, all of them will go back to that point in time.
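Before running TSPITR it is also worth checking which objects created after the target time would be lost; Oracle exposes this through the TS_PITR_OBJECTS_TO_BE_DROPPED view:

```sql
SELECT owner, name, creation_time
FROM   ts_pitr_objects_to_be_dropped
WHERE  tablespace_name = 'USERS'
AND    creation_time > to_date('2009-10-15 15:40:00','YYYY-MM-DD HH24:MI:SS')
ORDER  BY creation_time;
```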
Important Oracle's MySupport Documents
Doc ID: 207303.1 - Client / Server / Interoperability Support Between Different Oracle Versions
Doc ID: 169706.1 - DB and OS Installation and Configuration Requirements
HowTo SQLs
**********************************
[Oracle Version - 11.1.0.7]
Here is a simple SQL that you can use to look at the hidden parameters of Oracle. You cannot see these parameters using "SHOW PARAMETERS" or in the V$PARAMETER view. Be extra careful before changing any of them.
SELECT X.KSPPINM NAME, DECODE(BITAND(KSPPIFLG/256, 1), 1, 'TRUE', 'FALSE') SESMOD,
DECODE( BITAND(KSPPIFLG/65536, 3), 1, 'IMMEDIATE', 2, 'DEFERRED', 3, 'IMMEDIATE', 'FALSE' )
SYSMOD, KSPPDESC DESCRIPTION
FROM SYS.X$KSPPI X
WHERE X.INST_ID = USERENV('INSTANCE') AND TRANSLATE(KSPPINM,'_','#') LIKE '#%'
ORDER BY 1 ;
OR
COLUMN parameter FORMAT a45
COLUMN description FORMAT a50 WORD_WRAPPED
COLUMN "Session VALUE" FORMAT a10
COLUMN "Instance VALUE" FORMAT a10
SELECT a.ksppinm "Parameter", a.ksppdesc "Description",
b.ksppstvl "Session Value", c.ksppstvl "Instance Value"
FROM x$ksppi a, x$ksppcv b, x$ksppsv c
WHERE a.indx = b.indx
AND a.indx = c.indx
AND a.ksppinm LIKE '%_max_cr_dba';
How To Check the SID and Serial# of an OS Process
*************************************
set linesize 999
col MACHINE format a15
col PROGRAM format a20
select a.sid,a.serial#,a.machine,a.program
from v$session a, v$process b
where a.PADDR=b.ADDR and b.spid=:OS_ID
(:OS_ID is your OS Process ID)
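Once you have the SID and SERIAL#, a runaway session found this way can be terminated (the values shown are hypothetical):

```sql
ALTER SYSTEM KILL SESSION '131,2045' IMMEDIATE;
```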
This one shows SQL that is currently "ACTIVE":-
select S.USERNAME, s.sid, s.osuser, t.sql_id, sql_text
from v$sqltext_with_newlines t,V$SESSION s
where t.address =s.sql_address
and t.hash_value = s.sql_hash_value
and s.status = 'ACTIVE'
and s.username <> 'SYSTEM'
order by s.sid,t.piece
/
This shows locks. Sometimes things are going slow, but it's because it is blocked waiting for a lock:
select
object_name,
object_type,
session_id,
type, -- Type or system/user lock
lmode, -- lock mode in which session holds lock
request, block,
ctime -- Time since current mode was granted
from v$locked_object, all_objects, v$lock
where
v$locked_object.object_id = all_objects.object_id
AND v$lock.id1 = all_objects.object_id
AND v$lock.sid = v$locked_object.session_id
order by session_id, ctime desc, object_name
/
This is a good one for finding long operations (e.g. full table scans). If it is because of lots of short operations, nothing will show up.
COLUMN percent FORMAT 999.99
SELECT sid, to_char(start_time,'hh24:mi:ss') stime, message,( sofar/totalwork)* 100 percent
FROM v$session_longops WHERE sofar <> totalwork
/
Thursday, January 14, 2010
RMAN - Parallel Backup of a Very Large Datafile
Let’s look at an example. In our test database TESTDB, the SYSTEM tablespace contains a single 620MB datafile. Although it's not a candidate for a parallel backup, for this test I will use the parallel backup concept on this file anyway. Three system backup to tape (SBT) channels are configured, with the parallelism setting for the SBT device set to 3.
RMAN> backup section size 250M tablespace SYSTEM;
Starting backup at 03-NOV-08
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=131 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: Data Protection for Oracle: version 5.5.1.0
allocated channel: ORA_SBT_TAPE_2
channel ORA_SBT_TAPE_2: SID=138 device type=SBT_TAPE
channel ORA_SBT_TAPE_2: Data Protection for Oracle: version 5.5.1.0
allocated channel: ORA_SBT_TAPE_3
channel ORA_SBT_TAPE_3: SID=130 device type=SBT_TAPE
channel ORA_SBT_TAPE_3: Data Protection for Oracle: version 5.5.1.0
channel ORA_SBT_TAPE_1: starting full datafile backup set
channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/oradata/TESTDB/system_01.dbf
backing up blocks 1 through 32000
channel ORA_SBT_TAPE_1: starting piece 1 at 03-NOV-08
channel ORA_SBT_TAPE_2: starting full datafile backup set
channel ORA_SBT_TAPE_2: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/oradata/TESTDB/system_01.dbf
backing up blocks 32001 through 64000
channel ORA_SBT_TAPE_2: starting piece 2 at 03-NOV-08
channel ORA_SBT_TAPE_3: starting full datafile backup set
channel ORA_SBT_TAPE_3: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/oradata/TESTDB/system_01.dbf
backing up blocks 64001 through 80640
channel ORA_SBT_TAPE_3: starting piece 3 at 03-NOV-08
channel ORA_SBT_TAPE_3: finished piece 3 at 03-NOV-08
piece handle=16juouo6_3_1 tag=TAG20081103T101014 comment=API Version 2.0,MMS Version 5.5.1.0
channel ORA_SBT_TAPE_3: backup set complete, elapsed time: 00:00:23
channel ORA_SBT_TAPE_1: finished piece 1 at 03-NOV-08
piece handle=16juouo6_1_1 tag=TAG20081103T101014 comment=API Version 2.0,MMS Version 5.5.1.0
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:25
channel ORA_SBT_TAPE_2: finished piece 2 at 03-NOV-08
piece handle=16juouo6_2_1 tag=TAG20081103T101014 comment=API Version 2.0,MMS Version 5.5.1.0
channel ORA_SBT_TAPE_2: backup set complete, elapsed time: 00:00:25
Finished backup at 03-NOV-08
The command creates one backup set containing three (250+250+120MB) backup pieces. The command results in an Oracle RMAN backup set in which each backup piece contains a file section, which is a contiguous range of blocks in a datafile.
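The channel setup assumed above can be configured once and persisted, roughly like this (the TDPO option file path is an example):

```
RMAN> CONFIGURE DEVICE TYPE sbt PARALLELISM 3;
RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt
      PARMS 'ENV=(TDPO_OPTFILE=/oracle/tdpo/TESTDB/tdpo.opt)';
```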
TCP Validnode Checking
Depending on the type of application and network configuration, valid node checking can be a powerful tool to restrict most traffic from the Listener. Most web applications only require access to the Listener from the application servers and a limited number of clients for administration.
For Oracle 9i/10g/11g, the valid node checking lines are added to $ORACLE_HOME/network/admin/sqlnet.ora.
For Oracle 8/8i, the lines are added to the $ORACLE_HOME/network/admin/protocol.ora file.
tcp.validnode_checking = yes
tcp.invited_nodes = (x.x.x.x name, x.x.x.x name)
tcp.excluded_nodes=( x.x.x.x name, x.x.x.x name)
Include either the invited_nodes or excluded_nodes, but do not use both. Wildcards, subnets, etc. are not valid, only individual IP addresses or host names are allowed. For more sophisticated checking, use Oracle Connection Manager.
The Listener must be stopped and started for valid node checking to become active. There is no hard limit on the number of nodes that can be included, but for a large number of entries using Oracle Connection Manager may be a better solution. If many clients require direct SQL*Net access to the database, it is often difficult to use valid node checking due to constantly changing network configurations.
Let’s now take an example -
Step 1 – Two users (SUSH and ROOP) connected to one of the test database using sqlplus.
Both were able to connect to the database.
Step 2 – I then updated the sqlnet.ora of this test database and added the following parameters –
tcp.validnode_checking = yes
tcp.invited_nodes = (10.100.10.11, 10.100.10.12)
Since I wanted to allow only my machine to connect to this database, I added my machine's IP address along with the IP address of the server itself. I added these parameters not only to the server's sqlnet.ora but to my machine's sqlnet.ora as well.
Step 3 – Bounced the Listener so that the changed parameters take effect.
Step 4 – I then tried to connect to the same schema again and was successful in doing so.
Step 5 – Then SUSH tried to connect to the same schema, but she was unsuccessful, with the following error –
H:\>sqlplus test_schema/*****@sandbox
SQL*Plus: Release 11.1.0.6.0 - Production on Wed Jan 6 15:48:02 2010
Copyright (c) 1982, 2007, Oracle. All rights reserved.
ERROR:
ORA-12537: TNS:connection closed
This proves that valid node checking helps secure the database server and is quite a robust defense against external attacks. But we need to assess each database server and decide whether it is really a viable solution for the environment.
Challenges you may face while implementing this solution include –
1. One database can host multiple applications. One application might be accessed only by application servers, while another is accessed by individuals as well, so tracking all these IP addresses becomes too cumbersome.
2. There may be only one Listener for all the databases on a server. This has the same issues as described in point 1.
Password in Clear Text?
1. How the password moves over the network?
2. Is the password in Redo logs stored in encrypted format?
How the password moves over the network?
In the SQLNET.ora add the following parameters –
TRACE_LEVEL_SERVER = SUPPORT
TRACE_DIRECTORY_SERVER = /u01/product/10.2/network/trace
This will create a sqlnet.trc file under /u01/product/10.2/network/trace
Now, if you change the password of any user, for example:
ALTER USER TEST IDENTIFIED BY TEST;
Then the trace file will capture the statement like this –
[000001 24-SEP-2008 13:29:33:535] nsprecv: 01 10 1F 93 A8 24 61 6C .....$al
[000001 24-SEP-2008 13:29:33:535] nsprecv: 74 65 72 20 75 73 65 72 ter.user
[000001 24-SEP-2008 13:29:33:535] nsprecv: 20 74 65 73 74 31 20 69 .test1.i
[000001 24-SEP-2008 13:29:33:535] nsprecv: 64 65 6E 74 69 66 69 65 dentifie
[000001 24-SEP-2008 13:29:33:535] nsprecv: 64 20 62 79 20 74 65 73 d.by.tes
[000001 24-SEP-2008 13:29:33:535] nsprecv: 74 31 00 00 00 01 00 00 t1......
This shows that passwords are transferred in an unencrypted format. We can disable the creation of the sqlnet.trc file by setting the parameter TRACE_LEVEL_SERVER to OFF.
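Once done, the trace settings would be reverted in sqlnet.ora, e.g.:

```
TRACE_LEVEL_SERVER = OFF
```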
Is the password in Redo logs stored in encrypted format?
Yes, the passwords stored in the redo logs are in encrypted format, although we should still avoid simple passwords that can be cracked easily.
In-Memory Undo - IMU
Overview
In-Memory Undo (IMU) became a standard feature starting with Oracle 10g. The need for IMU arose from problems related to undo segment management, which consumes a lot of resources: memory, CPU and I/O. With the introduction of IMU, Oracle offloads some of the segment management cost, which eventually helps CPU and memory utilization.
Oracle uses UNDO segments for two main purposes –
1. To allow a transaction to roll back, so the database looks just like it did before the transaction started.
2. To ensure that a query's results match what the database looked like when the query started (read consistency).
Traditional UNDO Management (Prior to 10g)
• Traditional UNDO segments use physical blocks, just like any other segment.
• If a query needs to read an undo block to maintain read-consistency, it still has to go through all the latch and potentially I/O overhead of any other query.
• Additionally, undo block changes also must maintain redo information for recovery purposes.
• This creates too much overhead on resources and can impact performance, so clearly there was an opportunity for increased performance.
In-memory undo
• IMU replaces undo blocks with an in-memory control structure called an IMU node.
• Instead of writing a change to an undo buffer, undo is written to the IMU node and persists in memory even after commit.
• A read-consistent query needs none of the overhead associated with physical blocks, so speed is dramatically improved.
• Since IMUs are not undo segments, their changes do not generate redo. So IMU will reduce the amount of redo an instance generates.
• Tests show not only that using IMU helps performance, but also that queries using IMU had more consistent execution times, perhaps because there was less segment management to do.
• The parameter controlling In-Memory Undo is _in_memory_undo (default is TRUE).
• IMU activity can be monitored through V$SGASTAT, V$SYSSTAT and V$LATCH_CHILDREN.
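The V$SYSSTAT statistic names for IMU all start with "IMU", so a quick look is:

```sql
SELECT name, value
FROM   v$sysstat
WHERE  name LIKE 'IMU%'
ORDER  BY name;
```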
Please refer to a very good document presented by Craig A. Shallahamer -