Thursday, 13 November 2014

ANS1126E (RC27) The file space cannot be deleted because this node does not have permission to delete archived or backed up data.

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of delete command on ORA_MAINT_SBT_TAPE_1 channel at 11/13/2014 12:32:22
ORA-19509: failed to delete sequential file, handle="ntpnen1m_1_1", parms=""
ORA-27027: sbtremove2 returned error
ORA-19511: Error received from media manager layer, error text:
   ANS1126E (RC27)   The file space cannot be deleted because this node does not have
permission to delete archived or backed up data.

The above error occurred while running the DELETE OBSOLETE command in RMAN.

The reason:
=============
RMAN was configured to back up to tape on one server (for example 192.18.1.1), and the RMAN repository held the metadata for the backups taken to tape.

We then moved the database from 192.18.1.1 to 192.18.1.2 using an RMAN backup, and tried to clear the old backup records from the RMAN repository using DELETE OBSOLETE.

The delete failed because the new node has no delete permission on the filespace of 192.18.1.1 in TSM storage.


resolution:
============
We need to grant delete permission to the new node.

Log in to the TSM server and find the node name of 192.18.1.2, for example TsmnewNode.

Run the below TSM command to grant the permission:

tsm> update node TsmnewNode backdel=yes

Note: backdel defaults to no, so it usually has to be granted explicitly; if the node also owns archived data, archdel=yes may be needed as well (archdel defaults to yes).

Then run the RMAN DELETE OBSOLETE command to clear the obsolete backup records from the RMAN repository.
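A minimal RMAN session for the cleanup might look like this (a sketch; it assumes the SBT channel and the retention policy are already configured):

$ rman target /
RMAN> crosscheck backup;
RMAN> delete obsolete;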

Tuesday, 4 November 2014

Apps Login Issue with Unable to generate forwarding URL. Exception: java.lang.RuntimeException: Guest user/pwd does not exist or match: GUEST/ORACLE

Problem Statement:

In a recently cloned instance of Oracle Apps R12, I started both tiers and then tried to log in to the application. The login page came up blank. I checked all the FNDLIBR processes, whose count was OK, and on the database tier I found no errors in the database alert log file.

Then I checked concurrent manager processes with the below query:

SELECT v.concurrent_queue_name,v.concurrent_queue_id,
v.user_concurrent_queue_name,v.max_processes,
v.running_processes,v.control_code,v.target_node,
v.enabled_flag,TO_CHAR(v.last_update_date, 'DD-MON-RR HH24:MI:SS'),
v.node_name,v.node_name2, p.concurrent_processor_name
FROM fnd_concurrent_queues_vl v, fnd_concurrent_processors p
where v.enabled_flag = 'Y'
and v.concurrent_processor_id=p.concurrent_processor_id
order by v.application_id, v.concurrent_queue_id;


It showed all the concurrent manager processes were up and running fine.

2nd Observation: From another system I tried to open the application, but there I found the below error in the browser.

Apps Login Issue with Unable to generate forwarding URL. Exception: java.lang.RuntimeException: Guest user/pwd does not exist or match: GUEST/ORACLE

Then I ran the below queries to check.

The below commands were run consecutively in the same session
=====================================

SQL> select fnd_web_sec.validate_login('GUEST','oracle') from dual;

FND_WEB_SEC.VALIDATE_LOGIN('GUEST','ORACLE')
--------------------------------------------------------------------------------
N

SQL>
SQL>
SQL> select fnd_message.get from dual;

GET
--------------------------------------------------------------------------------
Oracle error -4031: ORA-04031: unable to allocate 4096 bytes of shared memory ("
java pool","unknown object","joxs heap",":Intern") has been detected in FND_WEB_
SEC.VALIDATE_LOGIN(u,p).

Here the problem is clear: the java pool cannot allocate memory (ORA-04031), which makes FND_WEB_SEC.VALIDATE_LOGIN fail.

Then I checked the java parameters.

SQL> show parameter java

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
java_jit_enabled                     boolean     TRUE
java_max_sessionspace_size           integer     0
java_pool_size                       big integer 0
java_soft_sessionspace_limit         integer     0

Then I increased java_pool_size to 128M (with automatic memory management, a value of 0 means auto-sized; an explicit value enforces a minimum):

$ sqlplus sys as sysdba
SQL> alter system set java_pool_size=128M scope=both;
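Re-running the parameter check should now show the new floor (output is a sketch; values will vary):

SQL> show parameter java_pool_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
java_pool_size                       big integer 128M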


Then I bounced the database and the apps tier, checked the application, and was able to log in.

 Please feel free to mail me @    viewssharings.blogspot.in@gmail.com

Saturday, 2 August 2014

The best way of SSH key sharing among Linux servers

Hi,

Here is the best way of setting up passwordless SSH between Linux servers.


Step 1: Create public and private keys using ssh-keygen on the local host
    $ssh-keygen   (press Enter at each prompt to accept the defaults)

Step 2: Copy the public key to remote-host using ssh-copy-id

$ssh-copy-id -i ~/.ssh/id_rsa.pub remote-host

Step 3: Login to remote-host without entering the password

$ssh remote-host


This has been tested on Red Hat Linux and Oracle Linux; it will not work as-is on AIX, where ssh-copy-id is not available by default.
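Where ssh-copy-id is unavailable, a manual equivalent is the following one-liner (a sketch assuming OpenSSH defaults on both ends):

$cat ~/.ssh/id_rsa.pub | ssh remote-host 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'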

Thanks for reading this.


Thursday, 31 July 2014

Apply PSU7 on GRID INFRASTRUCTURE (11.2.0.3)

Applying PSU7 on Grid Infrastructure
====================================
Step-1
------
Backup both Global Inventory and Local Inventory

Step-2
------
export ORACLE_HOME=/u01/app/grid/11.2.0/grid

create ocm response file
--------------------------
Log in as the oracle user and run the below command

$ORACLE_HOME/OPatch/ocm/bin/emocmrsp  -no_banner -output /u02/ocm.rsp


Applying PSU path Using auto option
-------------------------------------
* For PSU7 patching, the OPatch version must be greater than or equal to 11.2.0.3.4,
so download the latest OPatch before applying PSU7.
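To confirm the installed OPatch version before starting:

$ $ORACLE_HOME/OPatch/opatch version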

* Unzip the PSU patch into an empty directory.
As the grid user, run the below commands:

$mkdir PSU7

$unzip -d /u01/PSU7 p16742216_112030_AIX64-5L.zip

This has two patches

16619892 --rdbms PATCH
16619898 --GRID patch


$chmod -R 777 /u01/PSU7

Now run the below commands as the root user

#cd  /u01/PSU7
OPatch auto for GI

The OPatch utility automates patch application for the Oracle
Grid Infrastructure (GI) home and the Oracle RAC database homes
when run with root privileges. It must be executed on each node in the cluster
if the GI home or Oracle RAC database home is on non-shared storage.

The utility should not be run in parallel on the cluster nodes.

#/u01/app/grid/11.2.0/grid/OPatch/opatch auto ./ -ocmrf /u02/ocm.rsp

This will patch both the ORACLE_HOME and the GRID_HOME.

The auto command performs the below steps:

* Tries to shut down the local database instance
* Applies the patch on the ORACLE_HOME
* Stops the CRS services
* After a complete shutdown of services, applies the patch on the GRID_HOME
* After the patch is applied, restarts the CRS services


POST PATCH STEPS IN GRID ENVIRONMENT
=======================================

cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle.sql psu apply
SQL> QUIT
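To confirm the bundle patch was recorded, the registry history can be checked (a quick sanity check):

SQL> SELECT action_time, action, version, comments FROM dba_registry_history ORDER BY action_time;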

ROLLBACK FOR PSU7
=====================
Case 1: GI Home and Database Homes that are not shared and ACFS file system is not configured.

#opatch auto <UNZIPPED_PATCH_LOCATION> -rollback -ocmrf <ocm response file>

Case 2: GI Home is not shared, Database Home is shared and ACFS may be used.

Stop the database:

$ <ORACLE_HOME>/bin/srvctl stop database -d <db-unique-name>

On the 1st node:

# opatch auto <UNZIPPED_PATCH_LOCATION> -oh <GI_HOME> -rollback -ocmrf <ocm response file>

# opatch auto <UNZIPPED_PATCH_LOCATION> -oh <DATABASE_HOME> -rollback -ocmrf <ocm response file>

$ <ORACLE_HOME>/bin/srvctl start instance -d <db-unique-name> -n <nodename>

On the 2nd node:

# opatch auto <UNZIPPED_PATCH_LOCATION> -oh <GI_HOME> -rollback -ocmrf <ocm response file>

$ <ORACLE_HOME>/bin/srvctl start instance -d <db-unique-name> -n <nodename>

Patch Post-Deinstallation Instructions for an Oracle RAC Environment
====================
cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle_PSU_<database SID PREFIX>_ROLLBACK.sql
SQL> QUIT

In case of inventory failure
==============================

Sometimes the inventory gets corrupted, in which case opatch lsinventory may show wrong information or an error.

So we need to create a new inventory, or copy one back from a backup of the oraInventory directory.
For creating a new inventory in a GRID environment:

STEP-1 INVENTORY FOR ORACLE_HOME IN A CLUSTER ENVIRONMENT
=========
$ORACLE_HOME/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=$ORACLE_HOME \
ORACLE_HOME_NAME=OraDb11g_home1 CLUSTER_NODES=ehdb5,ehdb6 "INVENTORY_LOCATION=/u01/app/grid/oraInventory" \
-invPtrLoc "/etc/oraInst.loc" LOCAL_NODE=ehdb6

To run the above command as a single line, remove the \ from the end of each line.

STEP-2
========
$GRID_HOME/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=$GRID_HOME \
 ORACLE_HOME_NAME=Ora11g_gridinfrahome1 CLUSTER_NODES=ehdb5,ehdb6 CRS=true  "INVENTORY_LOCATION=/u01/app/grid/oraInventory" \
-invPtrLoc "/etc/oraInst.loc" LOCAL_NODE=ehdb6

To run the above command as a single line, remove the \ from the end of each line.

Thanks for viewing this article

Hope it will help you.


Feel free to ask viewssharings.blogspot.in@gmail.com



Saturday, 14 June 2014

RMAN duplicate command for RMAN cloning (Robustness of Oracle Technology)



ROBUSTNESS OF ORACLE TECHNOLOGIES : PART1
* Cloning Oracle Database  using backup based duplication
----------------------------------------------------------------------------
I was wondering about cloning an Oracle database using RMAN in a way that directly converts the database name from 'X' to 'Y'.
For example, if I have a production database named PROD, I want to clone it to a new server for testing and name the new database TEST.
Is it possible using RMAN? If yes, how can we do it?
The answer is yes, and in fact it is the simplest way of cloning, using the RMAN DUPLICATE command.
There are various ways to use rman duplicate command for database cloning depending on your requirement and availability.
Here are the ways:
Broad classification:
* Backup-based duplication (only requires a full backup piece with archivelogs; a connection to the source database is not required)
* Active duplication (a connection to the source is required; no backups are required)

In my scenario, only an RMAN full database backup with archivelogs (BACKUP DATABASE PLUS ARCHIVELOG) was available.
So I did the following steps to clone :
Environment:

                       Source DB            Target DB
Database name          PROD                 TEST
DB_FILE_NAME_CONVERT   /u01/PROD/oradata    /u02/TEST/oradata
LOG_FILE_NAME_CONVERT  /u01/PROD/oradata    /u01/TEST/oradata

I created an initialization parameter file, initTEST.ora, and set the parameters according to the TEST database server filesystem.


Added the following parameters in initTEST.ora:
DB_NAME='TEST'
DB_FILE_NAME_CONVERT='/u01/PROD/oradata','/u02/TEST/oradata'
LOG_FILE_NAME_CONVERT='/u01/PROD/oradata','/u02/TEST/oradata'
Then I created a password file for the TEST database:
$cd $ORACLE_HOME/dbs
$orapwd file=orapwTEST password=system$

Copied the backup pieces from PROD to the TEST database server location
'/backup/PROD'.
Now we are ready for backup-based duplication.
I started the TEST database with STARTUP NOMOUNT.
Then I connected to the auxiliary database:
$rman AUXILIARY /

RMAN> duplicate database PROD dbid=123455 to TEST
      BACKUP LOCATION '/backup/PROD'
      NOFILENAMECHECK;

At the end, it started the database with the name TEST.

WOW!!!! It is awesome. It reduced many steps compared to my previous way of doing the RMAN restore manually.

If I had used active duplication, I would have used the below RMAN command:
$rman TARGET sys/password@source AUXILIARY /

Before running the above command, I would have had to configure the listener and
tnsnames.ora file for connecting to the PROD database.

Then, for cloning, this would be the command:
 
RMAN> DUPLICATE TARGET DATABASE TO TEST
  FROM ACTIVE DATABASE
  SPFILE
  NOFILENAMECHECK;
 
Please feel free to ask at viewssharings.blogspot.in@gmail.com

Friday, 6 June 2014

SCAN LISTENER is showing Intermediate state in 11gR2 RAC

Hi, recently I hit a problem with the SCAN listener twice, and I solved it in two different scenarios.

When I observed in the crsctl stat res -t output that scan listener 1 was in INTERMEDIATE state, and lsnrctl status LISTENER_SCAN1 showed no services, I did the below step to relocate the SCAN to the node where it belonged:

$srvctl relocate scan_listener -i 2 -n ecdb1

It went fine the first time. As the other two SCAN listeners were running fine, I did not touch them.

The second time I faced the same issue, I tried the same fix, but this time all the SCAN listeners were showing INTERMEDIATE state.

I did the below steps to solve the problem.

step-1  Stop all listeners running from the ORACLE_HOME and GRID_HOME
$lsnrctl stop
step-2  Stop the SCAN listeners using srvctl from the grid home
$srvctl stop scan_listener   (stops all SCAN listeners)
step-3  Stop the SCAN VIPs using srvctl from the grid home
$srvctl stop scan            (stops all SCAN VIPs)
step-4  Start the SCAN VIPs and then the SCAN listeners using srvctl from the grid home
$srvctl start scan
$srvctl start scan_listener

Then start the listener from the GRID_ORACLE_HOME; this has to be done carefully. Don't start the listener from the RDBMS_ORACLE_HOME.
Now check crsctl stat res -t: the SCAN listeners are ONLINE with no problem.
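To confirm, the status can also be checked with srvctl from the grid home:

$srvctl status scan
$srvctl status scan_listener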

 Please feel free to ask manojpalbabu@gmail.com

Monday, 24 March 2014

Dynamic change of resources on AIX servers generates core dump errors

Hi,
  
    We are observing core dump errors whenever we increase the cores or memory dynamically through the HMC.

Example:

CORE FILE NAME:             /u01/app/grid/log/ecdb1/crfmond/core
PROGRAM NAME:               osysmond.bin
STACK EXECUTION DISABLED:   0
COME FROM ADDRESS REGISTER: ??
PROCESSOR ID:
  hw_fru_id: 0
  hw_cpu_id: 2

Cause

Case I:

The AIX perfstat_disk() library call has some known issues, and according to these, perfstat_disk() core dumps with the above call stack.


Case II:

The problem is caused by dynamically changing the number of CPUs. Dynamic configuration change is currently not supported by osysmond.bin.
Solution

Case I:

These symptoms are reported in Bug 16492284, which has been closed as a duplicate of unpublished base bug 14541391. The base bug has been fixed in version 12.1.

Case II:

After changing any configuration, such as adding/deleting CPUs/disks/network cards, restart the ora.crf resource:
1) stop the crf stack

    <GRID_HOME>/bin/crsctl stop res ora.crf -init

2) Start the crf stack

    <GRID_HOME>/bin/crsctl start res ora.crf -init

Neither CRS nor database downtime is required for the above ora.crf stop/start operation, as confirmed by Oracle Support.

Reference: Oracle Support Knowledge Base

Wednesday, 19 March 2014

Possibly the best options for generating nmon reports in AIX 6.1

Hi,

  Here are some tips for generating nmon reports.

/usr/bin/topas_nmon -AdfPtV^ -s 30 -c 2880
/usr/bin/topas_nmon -f -t -d -A -O -L -N -P -V -T -^ -s 60 -c 1440


A--Summarizes the Async I/O (AIO server) processes.
d--Displays the I/O information of disks.
f--Specifies that the output is in spreadsheet format.
P--Includes the Paging Space section in the recording file.
t--Includes the top processes in the output.
V--Includes the disk volume group section.
^--Includes the Fibre Channel (FC) sections.
N--Includes the NFS sections.
s--Sets the interval in seconds between snapshots.
c--Sets the number of snapshots (e.g. -s 30 -c 2880 records every 30 seconds for 24 hours).
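For a scheduled daily recording, a cron entry along these lines is typical (the output directory /var/perf/nmon is an assumption; use any writable location):

0 0 * * * cd /var/perf/nmon && /usr/bin/topas_nmon -f -t -d -A -P -V -^ -s 60 -c 1440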

Silent mode of Oracle Database Installation using a manually created response file

######Silent Mode of Installation of database ignoring prereqs

#####Command ./runInstaller -silent -ignorePrereq  -showProgress -responseFile  /home/oracle/myresp.rsp

Copy the below contents to the myresp.rsp file, save it, and then run the above command.
---------------------------------- ---------------------------------------------
oracle.install.responseFileVersion=11.2.0
oracle.install.option=INSTALL_DB_AND_CONFIG
ORACLE_HOSTNAME=edc-chn-uat
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/u01/app/oraInventory
SELECTED_LANGUAGES=en
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
ORACLE_BASE=/u01/app/oracle
oracle.install.db.InstallEdition=EE
oracle.install.db.EEOptionsSelection=false
oracle.install.db.DBA_GROUP=dba
oracle.install.db.OPER_GROUP=oinstall
oracle.install.db.config.starterdb.type=GENERAL_PURPOSE
oracle.install.db.config.starterdb.globalDBName=ORCL
oracle.install.db.config.starterdb.SID=orcl
oracle.install.db.config.starterdb.characterSet=AL32UTF8
oracle.install.db.config.starterdb.memoryOption=true
oracle.install.db.config.starterdb.memoryLimit=700
oracle.install.db.config.starterdb.installExampleSchemas=true
oracle.install.db.config.starterdb.enableSecuritySettings=true
oracle.install.db.config.starterdb.password.ALL=manoj123
oracle.install.db.config.starterdb.control=DB_CONTROL
oracle.install.db.config.starterdb.automatedBackup.enable=false
oracle.install.db.config.starterdb.storageType=FILE_SYSTEM_STORAGE
oracle.install.db.config.starterdb.fileSystemStorage.dataLocation=/u02/oracle/orcl/oradata
oracle.install.db.config.starterdb.fileSystemStorage.recoveryLocation=/u01/app/oracle/flash_recovery_area
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false
DECLINE_SECURITY_UPDATES=true
oracle.installer.autoupdates.option=SKIP_UPDATES

It works absolutely fine, provided the database prerequisites are set before installing the DB, as per Oracle's suggestions.

All the options are given in the default response-file template shipped with the database software; you can also use those. The Oracle-supplied response files are present inside the dbsoft/install/ directory.

Certain parameters that are to be taken care of before 11gR2 GRID INFRASTRUCTURE installation

Here are some parameters, ownership and permission settings that are to be taken care of before an 11gR2 Grid Infrastructure installation on AIX on Power servers.

Network Parameters settings
-------------------------
/usr/sbin/no -o udp_sendspace=65536       
/usr/sbin/no -o udp_recvspace=655360       
/usr/sbin/no -o tcp_sendspace=65536       
/usr/sbin/no -o tcp_recvspace=65536       
/usr/sbin/no -o rfc1323=1       
/usr/sbin/no -o sb_max=1310720       
/usr/sbin/no -o ipqmaxlen=512

Kernel Parameters
------------------------
vmo -p -o minperm%=3   
vmo -p -o maxperm%=90   
vmo -p -o maxclient%=90   
vmo -p -o lru_file_repage=0   
vmo -p -o strict_maxclient=1   
vmo -p -o strict_maxperm=0
ioo -p -o aio_maxreqs=65536

Adding capabilities to the grid user as suggested by oracle
---------------------------------------------------
#chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid

 Disk Parameters settings for ASM storage
---------------------------------------------
#/usr/sbin/chdev -l hdisk2 -a pv=clear   
#/usr/sbin/chdev -l hdisk3 -a pv=clear

#chdev -l hdisk2 -a  reserve_policy=no_reserve
#chdev -l hdisk3 -a  reserve_policy=no_reserve

#chown grid:asmadmin /dev/rhdisk2           
#chmod 660 /dev/rhdisk2
#chown grid:asmadmin /dev/rhdisk3           
#chmod 660 /dev/rhdisk3
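To verify the new disk attributes took effect (a quick check):

#lsattr -E -l hdisk2 -a reserve_policy
#lsattr -E -l hdisk3 -a reserve_policy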

Newly added virtual network added to a RHEL VM using RHEVM was not showing in the ifconfig command

The ifconfig command was showing only eth0 and the loopback interface.

Check the interfaces with ethtool:


[root@testvm1 ~]# ethtool eth0
Settings for eth0:
        Link detected: yes


[root@testvm1 ~]# ethtool eth1
Settings for eth1:
        Link detected: yes


But ifconfig is showing only:

[root@testvm1 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:1A:4A:28:20:1B
          inet addr:10.40.9.51  Bcast:10.40.9.255  Mask:255.255.255.0
          inet6 addr: fe80::21a:4aff:fe28:201b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:103629 errors:0 dropped:0 overruns:0 frame:0
          TX packets:122 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:10988003 (10.4 MiB)  TX bytes:17383 (16.9 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)


Solution:

  Create a new configuration file for eth1 (/etc/sysconfig/network-scripts/ifcfg-eth1).
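A minimal sketch of the file (the MAC address and IP settings here are assumptions; match HWADDR to the MAC reported for eth1):

# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=00:1A:4A:28:20:1C   # assumed; use the real MAC of eth1
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.40.9.52          # assumed address
NETMASK=255.255.255.0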

Then restart the network service:

#service network restart

Tuesday, 18 March 2014

Enterprise Manager Dbcontrol for a clustered environment(11gR2 RAC)

Here are the steps for configuring Enterprise Manager dbcontrol for 11gR2 clustered environment.

step-1 export ORACLE_UNQNAME (set it to the DB unique name;
you can get it from show parameter db_unique_name)

$export ORACLE_UNQNAME=RACDB

Check the DB unique name
=======
SELECT name, db_unique_name FROM v$database;

Check whether dbconsole is running on any cluster node
=================
Note: if a dbconsole repository exists for one node, or dbconsole is running on any cluster node, you need to drop it.

$emctl status dbconsole  -- check for dbconsole running on any node
$emca -deconfig dbcontrol db -repos drop  -- drop a dbconsole configured for a single node
$emctl status agent  -- check the status of the agent on any node
$emca -repos drop -cluster  -- drop the repository created for the cluster


db user passwd unlock
=====================
alter user SYSMAN identified by SYSMAN account unlock;
alter user DBSNMP identified by DBSNMP account unlock;

Check for passwd file
======================
select * from V$PWFILE_USERS;

USERNAME                       SYSDB SYSOP SYSAS
------------------------------ ----- ----- -----
SYS                            TRUE  TRUE  FALSE

db user for ASM
==============
Export the grid home and the ASM SID, then connect:

sqlplus sys/system as sysasm
create user asmsnmp identified by asmsnmp;
grant sysdba to asmsnmp;
grant sysasm to asmsnmp;
grant sysoper to asmsnmp;

SQL> select * from gv$pwfile_users;

   INST_ID USERNAME                       SYSDB SYSOP SYSAS
---------- ------------------------------ ----- ----- -----
         2 SYS                            TRUE  TRUE  TRUE
         2 ASMSNMP                        TRUE  TRUE  TRUE
         1 SYS                            TRUE  TRUE  TRUE
         1 ASMSNMP                        TRUE  TRUE  TRUE

SQL>  select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP SYSAS
------------------------------ ----- ----- -----
SYS                            TRUE  TRUE  FALSE
ASMSNMP                        TRUE  FALSE FALSE

FIX:
======
alter user sys identified by SYSTEM;

passwd file for oracle_home/dbs for sysuser
=================
passwords for rdbms home
--------------
orapwd file=orapwRACDB1 password=SYSTEM entries=5 force=y ignorecase=y
orapwd file=orapwRACDB2 password=SYSTEM entries=5 force=y ignorecase=y

password for grid home
----------------------
inside grid_home/dbs
orapwd file=orapw+ASM password=SYSTEM entries=5 force=y ignorecase=y

Before configuring dbcontrol for the cluster, please check the following:
1) valid username/password.
2) Database should be up.
3) Scan listener should be up.
4) Database service should be registered with scan listener.
5) Password file should be  configured correctly.

Now configure emctl for cluster
--------------------------------
If you are creating it for the first time:
 $emca -config dbcontrol db -repos create -cluster

If you want to drop and recreate it:
  $emca -config dbcontrol db -repos recreate -cluster

This will ask you a few questions, like:
DB name
port number
cluster name
ASMSNMP password
SYS password
DBSNMP password

Keep these ready and provide the input.

100% it will be successful.

Please feel free to ask @manojpalbabu@gmail.com





Sunday, 9 March 2014

Troubleshooting Error 1017 received logging on to the standby (Error 1034 received logging on to the standby. PING[ARC2]: Heartbeat failed to connect to standby 'STBYKOL'. Error is 1034)

Hi Guys,

  Here is an interesting case. I was stuck with this problem for two and a half days; in fact, I had tried almost everything to resolve it, but could not.

Then I left it, as I was exhausted, but I kept thinking about where the problem could be, and at last I got it.

Here was my case:
--------------------

  I had a 2-node RAC as the primary, and the standby was to be created on a single node.

   I changed the parameter file, created password files, and added TNS entries accordingly. Then I ran DUPLICATE TARGET and restored the database as a standby. After doing everything, I saw that RAC node1 was able to send archive logs to dest_2 (my standby), but unfortunately node2 was unable to log on to the standby database.

Problem Statement:
----------------------------

The standby was receiving node1's log files, but node2 was unable to send log files due to the below error:

Error 1017 received logging on to the standby
------------------------------------------------------------
Check that the primary and standby are using a password file
and remote_login_passwordfile is set to SHARED or EXCLUSIVE,
and that the SYS password is same in the password files.

Steps for diagnosing
----------------------------
step-1
 When I checked the below query on node 2:

sql>select DEST_ID,DEST_NAME,STATUS,BINDING,ERROR from v$ARCHIVE_DEST where status<>'INACTIVE';

all log file locations were valid

Sql> SELECT DESTINATION, STATUS, ERROR FROM V$ARCHIVE_DEST WHERE DEST_ID=2;

dest_2, the standby archive destination, showed VALID

sql> select error_code,message from v$dataguard_status;

Error 1034 received logging on to the standby

PING[ARC2]: Heartbeat failed to connect to standby 'STBYDB'. Error is 1034.

Then I did the below steps to resolve the problem.

1.) alter system set  log_archive_dest_state_2=defer scope=both sid='*';

  (on primary RAC any node )

   Sql>recover managed standby database cancel;    (on standby side)

2) alter system set SEC_CASE_SENSITIVE_LOGON=FALSE scope=both sid='*';

 (on RAC side)

3) Shut down the standby database (and, if possible, the primary as well),
then remove the password files for all primary RAC nodes and the standby node, and recreate the password file on each respective server:

orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=system entries=5 force=y ignorecase=Y

 Then start the primary DB (the RAC nodes, via srvctl start database -d primaryDB)

4) alter system set LOG_ARCHIVE_DEST_2='SERVICE=STBYKOL ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STBYDB' scope=both sid='*';

 (on any primary rac node and standby db)

5) alter system set  log_archive_dest_state_2=enable scope=both sid='*'; (on any primary RAC node)

 6) recover managed standby database using current logfile disconnect; (on standby database)

After doing this I observed the alert log, and it worked.

To check that the databases are in sync,

run the below command on both sides:

sql> select current_scn from v$database;

The values on the primary and the standby should be almost the same.
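For a closer comparison, the per-thread applied sequence can also be checked on the standby (a quick sketch):

sql> select thread#, max(sequence#) from v$archived_log where applied='YES' group by thread#;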


Hope It will help you guys

If any query don't forget to mail me at viewssharings.blogspot.in@gmail.com

 

Wednesday, 22 January 2014

Concurrent Processing - Troubleshooting Concurrent Request ORA-20100 errors in the request logs

Metalink Source:  Doc ID 261693.1

APPLIES TO:Oracle Application Object Library - Version 11.5.10.2 to 12.0.6 [Release 11.5 to 12]
Oracle Concurrent Processing - Version 11.5.10.0 to 12.1.3 [Release 11.5 to 12.1]
Information in this document applies to any platform.

PURPOSE
Provide hints for application administrators and DBAs on how to troubleshoot ORA-20100 when running PL/SQL programs on Apps Concurrent Processing.
TROUBLESHOOTING STEPS
If a PL/SQL concurrent program cannot write to an external file, you will receive error messages similar to:
 

MSG-00102: Error Message :ORA-20100: File o0000071.tmp creation for FND_FILE failed.
You will find more information on the cause of the error in request log.
ORA-06512: at "APPS.FND_FILE", line 378
ORA-06512: at "APPS.FND_FILE", line 473
ORA-06512: at "APPS.AP_TRIAL_BALANCE_PKG", line 192
REP-1419: 'beforereport': PL/SQL program aborted.

ORACLE error 20100 in FDPSTP
Cause: FDPSTP failed due to ORA-20100: File o0036176.tmp creation for FND_FILE failed.
You will find more information on the cause of the error in request log.
ORA-06512: at "APPS.FND_FILE", line 410
ORA-06512

NOTE: Applications also produces temporary PL/SQL output files used in concurrent processing. These files are written to a location on the database server node specified by the APPLPTMP environment setting. The APPLPTMP directory must be the same directory as specified by the utl_file_dir parameter in your database initialization file.

Rapid Install sets both APPLPTMP and the utl_file_dir parameter to the same default directory. As the temporary files placed in this directory may contain context sensitive information, it should be a secure directory on the database server node with read and write access for the database server owner. In a multi-node system, the directory defined by APPLPTMP does not need to exist on the application tier servers. During an upgrade with AutoUpgrade, you must provide the utl_file_dir parameter value for the APPLPTMP environment setting.
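A quick way to compare the two settings on the database tier (values are examples; they should point at the same directory):

$ echo $APPLPTMP

SQL> show parameter utl_file_dir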
To isolate where the problem is, verify the following:
1) Make sure your environment is properly sourced.
For example, if you don't have your database environment sourced properly, you will be missing ORA_NLSxx, which is used to indicate where the Oracle RDBMS/client software can locate the definitions of character sets (used in NLS_LANG or as NLS_CHARACTERSET/NLS_NCHAR_CHARACTERSET), NLS_SORT, NLS_LANGUAGE (or derived/related parameters) or NLS_TERRITORY (or derived/related parameters).

A missing ORA_NLS10 causes the usual chr(10) not to be recognized as a new line.

2) Make sure that the name of the file is valid (the file name should not include characters like "^").

3) Make sure that APPLPTMP is set to a valid directory and that BOTH the applmgr user and the database user have read and write permissions on that directory (normally, it can be set to the same directory as APPLTMP).

Make sure to run autoconfig if you happen to modify APPLPTMP in the  autoconfig context file.

If you have a RAC-enabled environment, make sure you are not missing any mount points you may have pointing to APPLPTMP/APPLTMP

4) Make sure the directory pointed by APPLPTMP is the first entry on the utl_file_dir. Also, verify that all the entries on the utl_file_dir are valid and that the applmgr has read/write permissions.

If using an spfile, verify the proper syntax to set utl_file_dir:

Ex.

    ALTER SYSTEM SET UTL_FILE_DIR='directory1','directory2' scope=spfile;

5) Check if you can write a file directly using FND_FILE, which is the package used by the Application. From sqlplus, connected as the apps user, run:

    SQL> exec FND_FILE.PUT_LINE(FND_FILE.LOG, 'THIS IS A TEST');
This should dump a file on APPLPTMP.

If this test works, it would indicate that FND_FILE is working fine and the problem is possibly with the Application.

You may want to leave only one entry on utl_file_dir for this test.

FND_FILE uses the UTL_FILE package, which can only write to directories specified in init.ora. APPLPTMP is a special directory used only for PLSQL temporary files. When a concurrent program uses the FND_FILE package, the concurrent manager uses APPLPTMP as the directory to which it writes temporary files. Thus, APPLPTMP must be one of the directories listed in init.ora in order for FND_FILE to work.

6) Make sure that the file does not already exist in the directory pointed to by APPLPTMP.

This issue may occur if the temporary file being created by the concurrent program already exists in that directory and the existing file is owned by another user, so it cannot be overwritten. For example, you have a cloned instance, and the APPLPTMP directory still keeps the temp files that were created in the source instance.

Verify the file stated in the error message you are getting (for the example above File o0000071.tmp) does not exist in the directory pointed by $APPLPTMP. If it does, do the following:

1. Stop all services. This is to make sure no user is logged on and making use of $APPLPTMP
2. Remove all files from directory pointed by $APPLPTMP
3. Restart all services
4. Re-test the failed process

7) If still having problems, check if you can write a file using UTL_FILE, which is used by FND_FILE.

Run the PL/SQL below,  after changing <first entry on utl_file_dir> to the first entry on utl_file_dir (you may want to leave just one entry on utl_file_dir for this test).

set serveroutput on
DECLARE
  file_location VARCHAR2(256) := '<first entry on utl_file_dir>';
  file_name VARCHAR2(256) := 'utlfile1.lst';
  file_text VARCHAR2(256) := 'THIS IS A TEST';
  file_id UTL_FILE.file_type;
BEGIN
  file_id := UTL_FILE.fopen(file_Location, file_name, 'W');
  UTL_FILE.put_line(file_id, file_text);
  UTL_FILE.fclose(file_id);
EXCEPTION
  WHEN UTL_FILE.INVALID_PATH
  THEN dbms_output.put_line('Invalid path ' || SQLERRM);
    WHEN OTHERS
  THEN dbms_output.put_line('Others '|| SQLCODE || ' ' || SQLERRM);
END;
/
This program should dump a file on the requested directory. If the test fails, the problem is probably on the Database side.

If it works, the problem is probably on FND_FILE. In this scenario, check the versions of AFCPPIOS.pls and AFCPPIOB.pls.

8) If still having problems, go to your temporary files directory (i.e. $APPLPTMP) and check that it is not full. If it is, do the following:

    1. Shutdown all the concurrent processes and managers.
    2. Verify that no FNDLIBR process is running on (ps -ef | grep FNDLIBR).
    3. Delete all the files in the temporary directory.
    4. Run the CMCLEAN.SQL script from Note 134007.1 (don't forget to commit)
    5. Run the concurrent program "Purge Concurrent Request and/or Manager Data" for the Entry ="All" and mode ="Age"
    6. Test the issue.
    The mode value for the concurrent program "Purge Concurrent Request and/or Manager Data" can be the number of days before which the logs will be purged.
    7. Make sure to verify UTL_FILE_DIR and APPLPTMP are set to same directory.

9) If still having problems, your SPFILE may be corrupt. Do the following to recreate your SPFILE:

    1. Log in to the database server as the oraprod user

    2. Create a new PFILE (aka init<sid>.ora file) from the current SPFILE
    2.1. cd $ORACLE_HOME/dbs
    2.2. sqlplus " / as sysdba"
    2.3. SQL> create pfile from spfile;
    2.4. Make sure that a file named "init<SID>.ora" has been created today

    3. Edit the "init<SID>.ora" file and change the value of the utl_file_dir parameter to a readable/writable directory (i.e. $APPLPTMP, "/usr/tmp")
    4. Either rename the original SPFILE in order to start the database with the new PFILE, or create a new SPFILE from the PFILE
    5. Restart the database

Adding A LUN from SAN to AIX

Hi Guys,
      Here is an article that describes how to add a LUN to an AIX machine.
There is no wonder in doing this. AIX can only be installed on Power servers, and Power servers are very costly, so small-scale companies avoid them. But their processing power and robustness are remarkable. I am lucky to have experience with them; in fact, I received training from IBM on AIX administration and troubleshooting.



1. Create a LUN of the required size in storage using Storage Manager.
2. Map that LUN to the required host group or host.
3. Connect to the host machine.
4. Run cfgmgr as root: #cfgmgr
5. #lspv

   PVName    PVID              VGName    Status
   hdisk0    00f6fed9834f5f14  rootvg    active
   hdisk1    00f6fed9a6e670fa  datavg    active
   hdisk2    none              none      none

6. #chdev -l hdisk2 -a pv=yes   (creates a PVID on the disk)

   PVName    PVID              VGName    Status
   hdisk0    00f6fed9834f5f14  rootvg    active
   hdisk1    00f6fed9a6e670fa  datavg    active
   hdisk2    <new pvid>        none      active

   Instead of using the above step, you can also go directly to step 7.

7. Assign the disk to a volume group.
   If the volume group (VG) already exists, assign the disk to that particular VG:
   #extendvg -f vgname hdiskN
                     (or)
   Create a volume group with #smitty mkvg
   o/p:
   hdisk0    00f6fed9834f5f14  rootvg    active
   hdisk1    00f6fed9a6e670fa  datavg    active
   hdisk2    <pvid>            datavg    active

8. Then we need to create a logical volume.
   We can create one through #smitty mklv
                     (or)
   We can create a filesystem with a default LV by using the following command:
   #crfs -v jfs2 -g datavg -a size=298G -A yes -m /u03

9. To change LV attributes (e.g. to assign PPs): #smitty chlv
10. To list the logical volumes in a VG: #lsvg -l vgname
11. To grow the filesystem by adding PPs: #chfs -a size=+2G /u03
12. #mount /u03

   

Automatic Undo Management (AUM) and System Managed Undo (SMU)

Hi folks,
    Recently I was going through a very good article on undo management. I am sharing a few ideas I got from it which are very useful for day-to-day DB administration.
This is a very common situation for all of us: though we work for years, we still overlook many concepts that are essential. For example, here are a few questions; please have a look.

Q-1 What is Undo ?
Q-2 How many undo can we have for a database ?
Q-3 What is Automatic UNDO Retention ?
Q-4 What is Guaranteed UNDO Retention?
Q-5 What are the various statuses for Undo Extents?
Q-6 How to check undo growth ?
Q-7 What are the possible causes for excessive undo growth?
Q-8 What is SMU ?
 

These seem like very simple questions, yet they are quite effective if we have the ideas.

Answering questions 

What is Undo?
Oracle maintains information to nullify changes made to the database. Such information consists of records of the actions of transactions, collectively known as undo. Oracle uses the undo to do the following:

-  Roll back an active transaction
-  Recover a terminated transaction
-  Provide read consistency
-  Recover from logical corruptions

How many undo tablespaces can we have for a database?
                        We can have many undo tablespaces in a database, but only one can be active per instance.
In an Oracle Real Application Clusters (RAC) environment, we need one active undo tablespace per instance. The UNDO_TABLESPACE parameter is used for assigning a particular undo tablespace to an instance.

What is UNDO Retention
                          Undo retention refers to the duration for which undo data is retained after a transaction.
After a transaction is committed, undo data is no longer needed for rollback or transaction recovery purposes. However, for consistent read purposes, long-running queries may require this old undo information for producing older images of data blocks. Furthermore, the success of several Oracle Flashback features can also depend upon the availability of older undo information. For these reasons, it is desirable to retain the old undo information for as long as possible.

Automatic undo management eliminates the complexities of managing rollback segment space and lets you exert control over how long undo is retained before being overwritten.

You can set the UNDO_RETENTION parameter to a low threshold value so that the system retains the undo for at least the time specified in the parameter.
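For example, to ask the instance to retain committed undo for at least 15 minutes (900 seconds):

ALTER SYSTEM SET UNDO_RETENTION = 900;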

What is Automatic UNDO Retention
There is no parameter for this; Automatic UNDO Retention is enabled by default in 10g.
Oracle Database 10g automatically tunes the undo retention period. The undo retention period indicates the amount of time that must pass before old undo information, meaning the undo information for committed transactions, can be overwritten. The database collects usage statistics and tunes the undo retention period based on these statistics and on undo tablespace size. Provided that automatic undo management is enabled, the database automatically tunes the undo retention period as follows:
The current value for tuned undo retention can be viewed by following query.
SELECT TO_CHAR(BEGIN_TIME, 'MM/DD/YYYY HH24:MI:SS') BEGIN_TIME,
    TUNED_UNDORETENTION FROM V$UNDOSTAT;

For AUTOEXTEND undo tablespaces, the system retains undo for at least the time specified in this parameter, and automatically tunes the undo retention period to satisfy the undo requirements of the queries.
This can lead to excessive undo generation in order to honor the undo retention.
 

What is Guaranteed UNDO Retention
Oracle Database 10g lets you guarantee undo retention; in Oracle 10g Release 2, you can enable and disable this guarantee.
When you enable this option, the database never overwrites unexpired undo data, that is, undo data whose age is less than the undo retention period.

This option is disabled by default, which means that the database can overwrite the unexpired undo data to avoid failure of DML operations if there is not enough free space left in the undo tablespace.

By enabling the guarantee option, you instruct the database not to overwrite unexpired undo data even if it means risking failure of currently active DML operations. Therefore, use caution when enabling this feature.

To enable it, run the following against the undo tablespace:
ALTER TABLESPACE UNDOTBS RETENTION GUARANTEE;

A typical use of the guarantee option is when you want to ensure deterministic and predictable behavior of Flashback Query by guaranteeing the availability of the required undo data.


What are the various statuses for Undo Extents

Transaction Status of the undo in the extent can be any of the following:
SELECT DISTINCT STATUS, SUM(BYTES), COUNT(*)  
   FROM DBA_UNDO_EXTENTS GROUP BY STATUS; 

ACTIVE - Undo extent is active, used by a transaction.

EXPIRED - Undo extent has expired (exceeded the undo retention): the transaction committed and the retention period has passed.

UNEXPIRED - Undo extent is still required to honor UNDO_RETENTION: the transaction committed but the retention period has not yet passed.

How to check undo growth

To understand the free space with undo tablespace.
SELECT SUM(BYTES) FROM DBA_FREE_SPACE WHERE TABLESPACE_NAME='&UNDOTBS';

To understand state of the extents, space-used in the current undo tablespace.
SELECT DISTINCT STATUS, SUM(BYTES), COUNT(*) 
   FROM DBA_UNDO_EXTENTS GROUP BY STATUS;

To understand the no of active transactions and its undo usage.
SELECT XIDUSN, XIDSLOT, XIDSQN, USED_UBLK FROM V$TRANSACTION WHERE STATUS='ACTIVE' ;

What are the possible causes for excessive undo growth

There could be various causes for excessive undo growth.  To start the diagnosis we need to understand the following.

Transactions with huge undo
It is obvious to see high undo usage when there are huge transactions.
If that is the case, this growth is expected behavior.

UNDO RETENTION
Higher undo retention will cause higher undo growth, because undo extents won't be marked EXPIRED until the undo retention duration has passed.

Disabling autoextend on the datafiles of the active undo tablespace makes the database reuse UNEXPIRED extents when it has a space crunch. It is a trade-off between undo retention and undo space.
If you wish to satisfy the undo retention, switch on autoextend on the undo tablespace datafiles:
SELECT FILE_ID, AUTOEXTENSIBLE FROM DBA_DATA_FILES WHERE
   TABLESPACE_NAME='&UNDOTBS';
To make those datafiles auto-extensible, run the following command.
ALTER DATABASE DATAFILE &FILE_ID AUTOEXTEND ON;
If you wish to switch off autoextend and reuse the UNEXPIRED space, do the following:
ALTER DATABASE DATAFILE &FILE_ID AUTOEXTEND OFF;

 State of undo extents
The status of the undo extents needs to be closely monitored.
There are a few bugs in different releases where EXPIRED extents are not reused.

(a) If a good number of extents are in UNEXPIRED status, it could be due to a high undo_retention.
SELECT DISTINCT STATUS, SUM(BYTES), COUNT(*)  
   FROM DBA_UNDO_EXTENTS GROUP BY STATUS;

What is AUM / SMU 

Automatic Undo Management (AUM) was introduced in Oracle 9i, replacing rollback segments.
It is also called System Managed Undo (SMU), as the undo is managed by Oracle.

Automatic undo management is undo-tablespace based. You allocate space in the form of an undo tablespace, instead of allocating many rollback segments in different sizes.

Oracle strongly recommends that customers use Automatic Undo Management (AUM).
 





Hope this article will help you...



Tuesday, 21 January 2014

Demonstration of a correlated subquery


UNDERSTANDING CORRELATED SUBQUERIES
A correlated subquery is a subquery that uses values from the outer query, requiring the inner query to execute once for each row of the outer query.
Here is a table, emp:

EMPNO  ENAME   JOB        MGR   HIREDATE    SAL   COMM  DEPTNO
-----  ------  ---------  ----  ----------  ----  ----  ------
7369   SMITH   CLERK      7902  12/17/1980   800          20
7788   SCOTT   ANALYST    7566  4/19/1987   3000          20
7839   KING    PRESIDENT        11/17/1981  5000          10
7844   TURNER  SALESMAN   7698  9/8/1981    1500     0    30
7902   FORD    ANALYST    7566  12/3/1981   3000          20

I want to find out the nth highest salary of an employee from the emp table.
Now the query is:
select distinct sal from emp e where &no=(select count(distinct sal) from emp where e.sal<=sal);
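A sample run in SQL*Plus against the table above (for &no = 2, the expected answer is 3000):

SQL> select distinct sal from emp e where &no=(select count(distinct sal) from emp where e.sal<=sal);
Enter value for no: 2

       SAL
----------
      3000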

Demonstration
As we know, a correlated subquery uses the value from the outer query, requiring the inner query to execute once for each row of the outer query.
Here I am dividing the query into 3 sections
Section 1:
The outer query passes its value to the inner query

Section 2:
The inner query uses the value passed from the outer query for evaluation

Section 3:

The output of the inner query is compared with the value (&no) passed in the WHERE clause


SECTION-1
The distinct salaries, shown here in sorted order:
5000
3000
1500
800
-------------------
1st execution
5000 --> passed
3000
1500
800

2nd execution
5000
3000 --> passed
1500
800

Likewise we can compare for the 3rd...
nth highest salary
SECTION-2 (e.sal<=sal)
1st execution: the outer value 5000 is compared, and the count ends at 1:
5000<=5000 (true,  cnt=1)
5000<=3000 (false, cnt=1)
5000<=1500 (false, cnt=1)
5000<=800  (false, cnt=1)

2nd execution: the outer value 3000 is compared, and the count ends at 2:
3000<=5000 (true,  cnt=1)
3000<=3000 (true,  cnt=2)
3000<=1500 (false, cnt=2)
3000<=800  (false, cnt=2)
SECTION-3

&NO=1
Evaluation (1=1, true)
Hence 5000 is the highest sal

------------------

&NO=2 (second highest sal)

Evaluation (2=1, false)
Hence 5000 is not the 2nd highest sal

Evaluation (2=2, true)
Hence 3000 is the 2nd highest