
Saturday 4 June 2016

Linux: How to Configure the DNS Server for 11gR2 SCAN (Doc ID 1107295.1)

In this Document


Goal

Solution

References



Applies to:

Oracle Database - Enterprise Edition - Version 11.2.0.1 to 11.2.0.3 [Release 11.2]
Generic Linux
The commands listed in this note were tested on Red Hat Enterprise Linux Server 5 Update 2. Other Linux environments should be similar.

Goal

This note explains how to configure DNS to accommodate the SCAN VIPs. In most cases this task is carried out by the network administrator, but awareness of these steps is useful for assisting your network administrator in configuring DNS properly for SCAN, or for configuring DNS yourself in a sandbox environment.

If there is no separate DNS Server box available for your test case, you can have one of the cluster nodes (example: rac1 or rac2) also acting as the DNS server. Note, however, that using one of your cluster nodes as your DNS server is not supported in production.

This note will demonstrate how to prepare the SCAN IPs on a Linux DNS server.

When installing Grid Infrastructure, there are 2 options:

1. Configure GNS and let it handle name resolution
OR
2. Choose not to configure GNS, and define each node name and the SCAN name with IP addresses in DNS

For the purpose of this note, we will not involve GNS (see Note:946452.1 for how to configure GNS).

The three nodes involved in this case are:  rac1, rac2, and dns1.  The domain is:  testenv.com

Node Name           Public IP            Private IP              VIP IP          
rac1.testenv.com     17.17.0.1            172.168.2.1           192.168.2.221
rac2.testenv.com     17.17.0.2            172.168.2.2           192.168.2.222
dns1.testenv.com     17.17.0.35            

The target SCAN name is: rac-scan
rac-scan will be configured with the following 3 IP addresses:  192.168.2.11, 192.168.2.12, 192.168.2.13

Solution

1.  On dns1.testenv.com install the DNS server packages:
# yum install bind-libs bind bind-utils

Three packages must be installed on Linux for DNS Server:
  • bind (includes DNS server, named)
  • bind-utils (utilities for querying DNS servers about host information)
  • bind-libs (libraries used by the bind server and utils package)
If the system is registered with a yum repository, the yum command above will download and install these packages for you automatically.

OR

You can manually download these packages:
  • bind.XXX.rpm (for example bind-9.2.4-22.el3.i386.rpm)
  • bind-utils.XXX.rpm
  • bind-libs.XX.rpm
And use the rpm command to install them, for example:
#  rpm -Uvh bind-9.2.3-1.i386.rpm
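You can confirm that the packages are installed with rpm, for example:

# rpm -qa | grep bind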

2. On the dns1.testenv.com system, edit the "/etc/named.conf" file

a. Configure the "forwarders" option under "options" in "/etc/named.conf" (if you do not have another DNS server or router that can resolve names for you, skip this step):
options {
.
.
// Forwarder: Anything this DNS can't resolve gets forwarded to other DNS.
forwarders { 10.10.1.1; };  // This is the IP for another DNS/Router
};

b. Configure zone entries for your domain in "/etc/named.conf"
If you are using localdomain, it has already been configured automatically and you can skip this step.
In this case we are using "testenv.com", so we need to add the following lines to "/etc/named.conf":
zone "testenv.com" IN {
type master;
file "testenv.com.zone";
allow-update { none; };
};

The "file" parameter specifies the name of the file in the "/var/named/" directory that contains the configuration for this zone.

c. Configure reverse lookup in "/etc/named.conf"
Reverse lookup lets a client verify that a hostname matches its IP address. Because we are using 192.168.2.X for the VIPs and SCAN VIPs, we need to configure reverse lookup for 192.168.2.X.

To configure reverse lookup, add the following to "/etc/named.conf":
zone "2.168.192.in-addr.arpa." IN {
type master;
file "2.168.192.in-addr.arpa";
allow-update { none; };
};
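Before going further, the syntax of "/etc/named.conf" can be checked with named-checkconf (part of the bind package); no output means the file parsed cleanly:

# named-checkconf /etc/named.conf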

3. On dns1.testenv.com, edit the configuration files under /var/named
a. Edit the DNS zone configuration file:
If you are using localdomain, you can edit /var/named/localdomain.zone.
In this case we edit the files testenv.com.zone and localdomain.zone.

Add the lines below to the end of the file:
rac1-vip IN A 192.168.2.221
rac2-vip IN A 192.168.2.222
rac-scan IN A 192.168.2.11
rac-scan IN A 192.168.2.12
rac-scan IN A 192.168.2.13

Put all the private IPs, VIPs, and SCAN VIPs in the zone file. If you only want DNS to resolve the SCAN VIP, include only rac-scan with its three corresponding IP addresses. If you only need one SCAN IP, you can put a single entry in the file.
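If testenv.com.zone does not exist yet, a minimal complete zone file might look like the sketch below (the SOA/NS header mirrors the reverse zone in the next step and is only an example; adjust the serial and timer values to your environment):

$ORIGIN testenv.com.
$TTL 1H
@ IN SOA dns1.testenv.com. root.testenv.com. ( 2
3H
1H
1W
1H )
@ IN NS dns1.testenv.com.
dns1 IN A 17.17.0.35
rac1 IN A 17.17.0.1
rac2 IN A 17.17.0.2
rac1-vip IN A 192.168.2.221
rac2-vip IN A 192.168.2.222
rac-scan IN A 192.168.2.11
rac-scan IN A 192.168.2.12
rac-scan IN A 192.168.2.13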

b. Create/Edit the "/var/named/2.168.192.in-addr.arpa" file for reverse lookups as follows:
$ORIGIN 2.168.192.in-addr.arpa.
$TTL 1H
@ IN SOA testenv.com. root.testenv.com. ( 2
3H
1H
1W
1H )
2.168.192.in-addr.arpa. IN NS testenv.com.

221 IN PTR rac1-vip.testenv.com.
222 IN PTR rac2-vip.testenv.com.
11 IN PTR rac-scan.testenv.com.
12 IN PTR rac-scan.testenv.com.
13 IN PTR rac-scan.testenv.com.
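Optionally, both zone files can be validated before restarting the DNS server (named-checkzone ships with the bind packages):

# named-checkzone testenv.com /var/named/testenv.com.zone
# named-checkzone 2.168.192.in-addr.arpa /var/named/2.168.192.in-addr.arpa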

4. On dns1.testenv.com, stop and start the DNS server to ensure it restarts successfully with the new configuration, and make sure it will be started automatically at boot:
# service named stop
# service named start
# chkconfig named on
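A quick sanity check that the service is running and enabled at boot:

# service named status
# chkconfig --list named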

The DNS server configuration is now complete. Next we need to point our RAC nodes to this DNS server for name resolution.

5. Configure "/etc/resolv.conf" on all nodes:
nameserver 17.17.0.35
search localdomain testenv.com
It should point to the DNS Server Address.  In this case nameserver has been set to the IP address of dns1.  If the node itself is also acting as the DNS Server it should point to its own IP address.

6. Optionally change the hosts search order in  /etc/nsswitch.conf on all nodes:
hosts: dns files nis
The default sequence is: files nis dns; here we move dns to the first entry.
If nscd (the Name Service Caching Daemon) is running, it needs to be restarted:
# /sbin/service nscd restart
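Note that nslookup queries the DNS server directly, while getent honors the /etc/nsswitch.conf order, so it makes a useful extra check, for example:

# getent hosts rac-scan.testenv.com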


At this point the configuration is complete.  We should be able to test the forward and reverse lookups using the "nslookup" command.

# nslookup rac-scan.testenv.com
Server: 17.17.0.35
Address: 17.17.0.35#53

Name: rac-scan.testenv.com
Address: 192.168.2.11
Name: rac-scan.testenv.com
Address: 192.168.2.12
Name: rac-scan.testenv.com
Address: 192.168.2.13

# nslookup 192.168.2.11
Server: 17.17.0.35
Address: 17.17.0.35#53

11.2.168.192.in-addr.arpa name = rac-scan.testenv.com.

# nslookup 192.168.2.12
Server: 17.17.0.35
Address: 17.17.0.35#53

12.2.168.192.in-addr.arpa name = rac-scan.testenv.com.

# nslookup 192.168.2.13
Server: 17.17.0.35
Address: 17.17.0.35#53

13.2.168.192.in-addr.arpa name = rac-scan.testenv.com.

If you try to ping rac-scan.testenv.com at this point, you will see it resolves to one of the SCAN IP addresses, but the address will not be reachable. This is the correct behavior.

Once the Grid Infrastructure software has been installed and is running, it will bring these IP addresses online, and at that point the SCAN IPs should be pingable.
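Once Grid Infrastructure is installed, the SCAN setup can also be cross-checked from the GI home with, for example:

$ srvctl config scan
$ srvctl status scan_listener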

Thursday 31 July 2014

Apply PSU7 on GRID INFRASTRUCTURE (11.2.0.3)

Applying PSU7 on Grid Infrastructure
====================================
Step-1
------
Backup both Global Inventory and Local Inventory
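For example, a simple file-system backup of both inventories (the paths below follow this environment's inventory location and Grid home and are only an illustration; adjust them as needed):

# tar -czf /u02/oraInventory_bkp.tar.gz -C /u01/app/grid oraInventory
# tar -czf /u02/gridhome_local_inv_bkp.tar.gz -C /u01/app/grid/11.2.0/grid inventory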

Step-2
------
export ORACLE_HOME=/u01/app/grid/11.2.0/grid

Create the OCM response file
--------------------------
As the oracle user, run the command below:

$ORACLE_HOME/OPatch/ocm/bin/emocmrsp  -no_banner -output /u02/ocm.rsp


Applying the PSU patch using the auto option
-------------------------------------
* For PSU7 patching, the OPatch version must be 11.2.0.3.4 or later,
so download the latest OPatch before applying PSU7 (a quick version check is shown below).
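For example, you can verify the installed OPatch version from the home you are about to patch:

$ $ORACLE_HOME/OPatch/opatch version

The output should report OPatch version 11.2.0.3.4 or higher.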

* Unzip the PSU patch into an empty directory.
As the grid user, run the commands below:

$mkdir PSU7

$unzip -d /u01/PSU7 p16742216_112030_AIX64-5L.zip

The PSU contains two patches:

16619892 -- RDBMS patch
16619898 -- GI patch


$chmod -R 777 /u01/PSU7
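Optionally, before running opatch auto, each sub-patch can be checked for conflicts against the corresponding home (a standard OPatch prerequisite check; <DATABASE_HOME> is a placeholder, as used later in this post):

$ /u01/app/grid/11.2.0/grid/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/PSU7/16619898
$ <DATABASE_HOME>/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/PSU7/16619892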

Now run the commands below as the root user:

#cd  /u01/PSU7
OPatch auto for GI

The OPatch utility automates patch application for the Oracle
Grid Infrastructure (GI) home and the Oracle RAC database homes
when run with root privileges. It must be executed on each node in the cluster
if the GI home or Oracle RAC database home is on non-shared storage.

The utility should not be run in parallel on the cluster nodes.

#/u01/app/grid/11.2.0/grid/OPatch/opatch auto ./ -ocmrf /u02/ocm.rsp

This will patch both the ORACLE_HOME and the GRID_HOME.

The auto command performs the following steps:

* It tries to shut down the local database instance.
* It then applies the patch to the ORACLE_HOME.
* After that it stops the CRS services.
* Once the services are completely shut down, it applies the patch to the GRID_HOME.
* After the patch has been applied, it restarts the CRS services.
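Once opatch auto completes on a node, a quick way to confirm the result (run lsinventory as the owner of each home; the GI patch 16619898 should appear in the grid home inventory and the RDBMS patch 16619892 in the database home inventory):

$ /u01/app/grid/11.2.0/grid/OPatch/opatch lsinventory -oh /u01/app/grid/11.2.0/grid
$ <DATABASE_HOME>/OPatch/opatch lsinventory -oh <DATABASE_HOME>
# /u01/app/grid/11.2.0/grid/bin/crsctl check crs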


POST PATCH STEPS IN GRID ENVIRONMENT
=======================================

cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle.sql psu apply
SQL> QUIT
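To confirm that the PSU was recorded in the database, you can query the registry history, for example:

SQL> select action_time, action, version, id, comments from dba_registry_history order by action_time;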

ROLLBACK FOR PSU7
=====================
Case 1: GI Home and Database Homes that are not shared and ACFS file system is not configured.

#opatch auto <UNZIPPED_PATCH_LOCATION> -rollback -ocmrf <ocm response file>

Case 2: GI Home is not shared, Database Home is shared and ACFS may be used.

$ <ORACLE_HOME>/bin/srvctl stop database -d <db-unique-name>

On the first node (the shared database home only needs to be rolled back once):

# opatch auto <UNZIPPED_PATCH_LOCATION> -oh <GI_HOME> -rollback -ocmrf <ocm response file>

# opatch auto <UNZIPPED_PATCH_LOCATION> -oh <DATABASE_HOME> -rollback -ocmrf <ocm response file>

$ <ORACLE_HOME>/bin/srvctl start instance -d <db-unique-name> -n <nodename>

On each remaining node:

# opatch auto <UNZIPPED_PATCH_LOCATION> -oh <GI_HOME> -rollback -ocmrf <ocm response file>

$ <ORACLE_HOME>/bin/srvctl start instance -d <db-unique-name> -n <nodename>

Patch Post-Deinstallation Instructions for an Oracle RAC Environment
====================
cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle_PSU_<database SID PREFIX>_ROLLBACK.sql
SQL> QUIT

In case of inventory failure
==============================

Sometimes the inventory can become corrupted, in which case opatch lsinventory may show wrong information or an error.

In that case we need to recreate the inventory, or restore it from a backup of the oraInventory directory.
To recreate the inventory in a Grid Infrastructure environment:

STEP-1 INVENTORY FOR ORACLE_HOME IN A CLUSTER ENVIRONMENT
=========
$ORACLE_HOME/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=$ORACLE_HOME \
ORACLE_HOME_NAME=OraDb11g_home1 CLUSTER_NODES=ehdb5,ehdb6 "INVENTORY_LOCATION=/u01/app/grid/oraInventory" \
-invPtrLoc "/etc/oraInst.loc" LOCAL_NODE=ehdb6

You can run the above command on a single line by removing the \ from the end of each line.

STEP-2
========
$GRID_HOME/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=$GRID_HOME \
 ORACLE_HOME_NAME=Ora11g_gridinfrahome1 CLUSTER_NODES=ehdb5,ehdb6 CRS=true  "INVENTORY_LOCATION=/u01/app/grid/oraInventory" \
-invPtrLoc "/etc/oraInst.loc" LOCAL_NODE=ehdb6

You can run the above command on a single line by removing the \ from the end of each line.

Thanks for viewing this article

Hope it will help you.


Feel free to ask viewssharings.blogspot.in@gmail.com



Friday 6 June 2014

SCAN LISTENER is showing Intermediate state in 11gR2 RAC

Hi, recently I ran into a problem with the SCAN listeners twice, and I solved it in two different ways.

The first time, I observed in the crsctl stat res -t output that LISTENER_SCAN1 was in INTERMEDIATE state, and lsnrctl status listener_scan1 showed no services. I just did the step below to relocate the SCAN listener to the node where it belongs:

$srvctl relocate scan_listener -i 2 -n ecdb1

It went fine the first time. As the other two SCAN listeners were running fine, I did not touch them.

The second time I faced the same issue I tried to do the same as above, but this time all the SCAN listeners were showing INTERMEDIATE state.

I did the below steps to solve the problem.

step-1  Stop all listeners running from the ORACLE_HOME and GRID_HOME
$ lsnrctl stop
step-2  Stop the SCAN listeners using srvctl from the grid home
$ srvctl stop scan_listener    (stops all SCAN listeners)
step-3  Stop the SCAN VIPs using srvctl from the grid home
$ srvctl stop scan             (stops all SCAN VIPs)
step-4  Start the SCAN VIPs and SCAN listeners from the grid home
$ srvctl start scan
$ srvctl start scan_listener

Then start the listener from the GRID_ORACLE_HOME; this has to be done carefully. Don't start the listener from the RDBMS_ORACLE_HOME.
Now check with crsctl stat res -t that the SCAN listeners are ONLINE with no problem.
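A quick way to confirm the state afterwards is a srvctl status and a filtered crsctl query, for example:

$ srvctl status scan_listener
$ crsctl stat res -t -w "TYPE = ora.scan_listener.type"

Both should show each SCAN listener ONLINE on its node.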

 Please feel free to ask manojpalbabu@gmail.com

Saturday 18 January 2014

Problems associated with improper configuration of listener parameters in a RAC environment

Checking the listener parameter configuration in a RAC environment.

Check whether the REMOTE_LISTENER and LOCAL_LISTENER initialization parameters are set for the instance.

Command
SQL> show parameter remote_listener
It should be scanname:portnum
ex:   testdbscan:1521

SQL> show parameter local_listener
It should be the VIP of that instance:

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string      (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.20)(PORT=1521))))
 

192.168.1.20 -- VIP of that node


Risk

Failure to configure the REMOTE_LISTENER and LOCAL_LISTENER database initialization parameters puts the availability of the database at risk since server side load balancing and connection failover will not be enabled.


Recommendation

Server-side load balancing and failover should be configured for all Oracle RAC environments. This can be accomplished by setting the appropriate values for REMOTE_LISTENER and LOCAL_LISTENER database initialization parameters.

The parameters can be set using the alter system command (alter system set <parameter_name> = <value> scope=both), as shown in the example below.
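For example, using the values shown above (the instance name orcl1 is illustrative; LOCAL_LISTENER is set per instance with the sid clause, REMOTE_LISTENER for all instances):

SQL> alter system set remote_listener='testdbscan:1521' scope=both sid='*';
SQL> alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.20)(PORT=1521))))' scope=both sid='orcl1';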



 

Troubleshooting: RAC load balancing not happening properly.

Hi,
  Recently we experienced a problem with RAC load balancing. In one of our environments we have a two-node RAC. We observed that client sessions were not being distributed properly: all the load landed on one node, so its CPU usage was going beyond 90% while the other node was at 10-15%.

We ran a series of tests and diagnosed the problem. I am sharing this, hoping it will work for you as well.


Step-1:
    Check for the SCAN listeners running on the nodes
     (use the GRID_HOME environment)
    $ ps -ef | grep tns

   In my case, node1 has listener_scan1 and node2 has listener_scan2 and listener_scan3.

 Now I tested on node1:
  $ lsnrctl status listener_scan1

The important part of the output is shown below:

Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.10)(PORT=1521)))
Services Summary...
Service "orcl" has 1 instance(s).
  Instance "orcl1", status READY, has 1 handler(s) for this service...
Service "oraclXDB" has 2 instance(s).
  Instance "oracl1", status READY, has 1 handler(s) for this service...
The command completed successfully


on node2

$lsnrctl status listener_scan2

Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN2)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.11)(PORT=1521)))
Services Summary...
Service "orcl" has 2 instance(s).
  Instance "orcl1", status READY, has 1 handler(s) for this service...
  Instance "orcl2", status READY, has 1 handler(s) for this service...
Service "orclXDB" has 2 instance(s).
  Instance "orcl1", status READY, has 1 handler(s) for this service...
  Instance "orcl2", status READY, has 1 handler(s) for this service...
The command completed successfully


on node2
$lsnrctl status listener_scan3

Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN3)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.11)(PORT=1521)))
Services Summary...
Service "orcl" has 1 instance(s).
  Instance "orcl2", status READY, has 1 handler(s) for this service...
Service "orclXDB" has 2 instance(s).
  Instance "orcl2", status READY, has 1 handler(s) for this service...
The command completed successfully

Here you can see that listener_scan1 and listener_scan3 identify only the one instance on the node where they are running, but listener_scan2 can identify all the instances in the cluster.

This clearly indicates that scan1 and scan3 are not getting load information from the PMON of every node in the cluster. So it is clear that we need to register all the instances (PMON) with all the listeners.

In my case I did the following:

On the node1 database:
SQL> alter system register;
and the same on node2 as well.

Then I reloaded the SCAN listeners:

$ lsnrctl reload <listener_scan_name>   (on the node where each SCAN listener runs)

After that I could clearly see that the listener status displayed all the instances in the cluster, just like listener_scan2.
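A quick way to re-check is to look at the service handlers each SCAN listener knows about, for example:

$ lsnrctl services listener_scan1

After the registration, every SCAN listener should show handlers for both orcl1 and orcl2.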

Hope this gives you some ideas for resolving such issues.

Any query, please feel free to mail manojpalbabu@gmail.com