
Saturday 18 January 2014

Problems caused by improper configuration of listener parameters in a RAC environment

Checking the listener parameter configuration in a RAC environment.

Check whether the REMOTE_LISTENER and LOCAL_LISTENER initialization parameters are set for the instance.

Command:
SQL> show parameter remote_listener
It should be set to scan_name:port
ex:   testdbscan:1521

SQL> show parameter local_listener
It should point to the VIP of the node the instance runs on:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string      (DESCRIPTION=(ADDRESS_LIST=(AD
                                                 DRESS=(PROTOCOL=TCP)(HOST=192.168.1.20)(PORT=1521))))
 

192.168.1.20 -- the VIP of that node


Risk

Failure to configure the REMOTE_LISTENER and LOCAL_LISTENER database initialization parameters puts the availability of the database at risk, since server-side load balancing and connection failover will not be enabled.


Recommendation

Server-side load balancing and failover should be configured for all Oracle RAC environments. This can be accomplished by setting the appropriate values for REMOTE_LISTENER and LOCAL_LISTENER database initialization parameters.

The parameters can be set with the ALTER SYSTEM command (alter system set <parameter_name> = '<value>' scope=both).
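For illustration, the two ALTER SYSTEM commands could look like this, using the sample values from above (SCAN name testdbscan, node VIP 192.168.1.20) and an assumed instance name orcl1 -- substitute your own values:

```shell
# Sample values only: SCAN name testdbscan, node VIP 192.168.1.20, instance orcl1
sqlplus / as sysdba <<'EOF'
-- REMOTE_LISTENER points at the SCAN; the same value is used cluster-wide
ALTER SYSTEM SET remote_listener = 'testdbscan:1521' SCOPE=BOTH SID='*';
-- LOCAL_LISTENER points at this node's VIP, so set it per instance (per SID)
ALTER SYSTEM SET local_listener = '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.20)(PORT=1521))))' SCOPE=BOTH SID='orcl1';
EOF
```

Repeat the LOCAL_LISTENER command on each instance with that node's own VIP and SID.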



 

Troubleshooting: RAC load balancing not happening properly.

Hi,
  Recently we experienced a problem with RAC load balancing. In one of our environments we have a two-node RAC, and we observed that client sessions were not being distributed properly: all the load went to one node, pushing its CPU above 90%, while the other node sat at 10-15%.

We ran a series of tests and diagnosed the problem. I am sharing the steps here; hope they work for you as well.


Step-1:
    Check for the SCAN listeners running on the nodes:
     export GRID_HOME
    $ ps -ef | grep tns

   In my case, node1 is running listener_scan1, and node2 is running listener_scan2 and listener_scan3.

 Now I tested on node1:
  $ lsnrctl status listener_scan1

The relevant part of the output:

Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.10)(PORT=1521)))
Services Summary...
Service "orcl" has 1 instance(s).
  Instance "orcl1", status READY, has 1 handler(s) for this service...
Service "orclXDB" has 1 instance(s).
  Instance "orcl1", status READY, has 1 handler(s) for this service...
The command completed successfully


on node2

$lsnrctl status listener_scan2

Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN2)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.11)(PORT=1521)))
Services Summary...
Service "orcl" has 2 instance(s).
  Instance "orcl1", status READY, has 1 handler(s) for this service...
  Instance "orcl2", status READY, has 1 handler(s) for this service...
Service "orclXDB" has 2 instance(s).
  Instance "orcl1", status READY, has 1 handler(s) for this service...
  Instance "orcl2", status READY, has 1 handler(s) for this service...
The command completed successfully


on node2
$lsnrctl status listener_scan3

Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN3)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.11)(PORT=1521)))
Services Summary...
Service "orcl" has 1 instance(s).
  Instance "orcl2", status READY, has 1 handler(s) for this service...
Service "orclXDB" has 1 instance(s).
  Instance "orcl2", status READY, has 1 handler(s) for this service...
The command completed successfully

Here you can see that listener_scan1 and listener_scan3 identify only the instance running on their own node, while listener_scan2 identifies all the instances in the cluster.

This clearly indicates that scan1 and scan3 are not getting load information from the PMON process of every node in the cluster, so we need to register all the instances (PMON) with all the listeners.

In my case I did the following.

On the database on node1:
SQL> alter system register;
and the same on node2.

Then I reloaded the listeners:

$ lsnrctl reload <listener_scan_name>
on their corresponding nodes.

After that I could clearly see that the status of every listener displayed all the instances in the cluster, just like listener_scan2.
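Put together, the fix can be sketched like this on a two-node cluster (listener and instance names as in the output above; run each lsnrctl command on the node that hosts that listener):

```shell
# On each node: force PMON to re-register its services with the listeners
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM REGISTER;
EOF

# node1 hosts listener_scan1
lsnrctl reload listener_scan1

# node2 hosts listener_scan2 and listener_scan3
lsnrctl reload listener_scan2
lsnrctl reload listener_scan3

# Verify: every SCAN listener should now list all instances
lsnrctl status listener_scan1
```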

Hope this gives you some ideas for resolving such issues.

Any query, please feel free to mail manojpalbabu@gmail.com


Tuesday 14 January 2014

DB Incarnation problem while doing an RMAN restore

Hi guys,
   Here we will discuss a problem that typically arises due to database incarnation. Hope it helps.
If you have any doubts, please email me.



Symptoms
After restoring the controlfile, the RMAN restore of the database fails with:

RMAN-03002: failure of restore command at 09/14/2006 01:39:09
ORA-01180: cannot create datafile 1
ORA-01110: data file 1: '/<path>/system01.dbf'
Cause
The current incarnation contains only incremental level 1 backups of the database;
there are NO level 0 backups in this incarnation.
Since the physical datafiles do not exist, RMAN attempts to create them, but this is NOT allowed for files belonging to the SYSTEM tablespace.

Solution
First check the incarnations:
RMAN> list incarnation;
Then reset the database to the previous incarnation and run the restore and recovery again:
RMAN> reset database to incarnation n;   (n should be the incarnation that holds the level 0 backup -- the current or a parent incarnation)
RMAN> restore database;
RMAN> recover database;
In my case I gave:
RMAN> reset database to incarnation 1;
After applying this command the restore worked.
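As a sketch, the whole session looks like this, assuming the controlfile is already restored and the database is mounted (the incarnation number 1 is from my case -- pick yours from the LIST INCARNATION output):

```shell
rman target / <<'EOF'
LIST INCARNATION;                  # find the incarnation that holds the level 0 backup
RESET DATABASE TO INCARNATION 1;   # incarnation 1 was the right one in my case
RESTORE DATABASE;
RECOVER DATABASE;
EOF
```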