Wednesday 22 January 2014

Concurrent Processing - Troubleshooting Concurrent Request ORA-20100 errors in the request logs

Metalink Source:  Doc ID 261693.1

APPLIES TO: Oracle Application Object Library - Version 11.5.10.2 to 12.0.6 [Release 11.5 to 12]
Oracle Concurrent Processing - Version 11.5.10.0 to 12.1.3 [Release 11.5 to 12.1]
Information in this document applies to any platform.

PURPOSE
Provide hints for application administrators and DBAs on how to troubleshoot ORA-20100 errors when running PL/SQL programs under Apps Concurrent Processing.
TROUBLESHOOTING STEPS
If a PL/SQL concurrent program cannot write to an external file, you will receive error messages similar to the following:

MSG-00102: Error Message :ORA-20100: File o0000071.tmp creation for FND_FILE failed.
You will find more information on the cause of the error in request log.
ORA-06512: at "APPS.FND_FILE", line 378
ORA-06512: at "APPS.FND_FILE", line 473
ORA-06512: at "APPS.AP_TRIAL_BALANCE_PKG", line 192
REP-1419: 'beforereport': PL/SQL program aborted.

ORACLE error 20100 in FDPSTP
Cause: FDPSTP failed due to ORA-20100: File o0036176.tmp creation for FND_FILE failed.
You will find more information on the cause of the error in request log.
ORA-06512: at "APPS.FND_FILE", line 410
ORA-06512

NOTE: Applications also produces temporary PL/SQL output files used in concurrent processing. These files are written to a location on the database server node specified by the APPLPTMP environment setting. The APPLPTMP directory must be the same directory as specified by the utl_file_dir parameter in your database initialization file.

Rapid Install sets both APPLPTMP and the utl_file_dir parameter to the same default directory. As the temporary files placed in this directory may contain context sensitive information, it should be a secure directory on the database server node with read and write access for the database server owner. In a multi-node system, the directory defined by APPLPTMP does not need to exist on the application tier servers. During an upgrade with AutoUpgrade, you must provide the utl_file_dir parameter value for the APPLPTMP environment setting.
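
As a quick sanity check (a minimal sketch, assuming a standard EBS environment where APPLPTMP is set in the applmgr environment), compare the two settings on the database server node:

    $ echo $APPLPTMP
    $ sqlplus "/ as sysdba"
    SQL> show parameter utl_file_dir

The directory reported by echo should appear in the utl_file_dir list (ideally as the first entry, as discussed in step 4 below).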
To isolate where the problem is, verify the following:
1) Make sure your environment is properly sourced.
For example, if your database environment is not sourced properly, you will be missing ORA_NLSxx, which tells the Oracle RDBMS/client software where to locate the definitions of character sets (used in NLS_LANG or as NLS_CHARACTERSET/NLS_NCHAR_CHARACTERSET), NLS_SORT, NLS_LANGUAGE and NLS_TERRITORY (or their derived/related parameters).

A missing ORA_NLS10 causes the usual chr(10) not to be recognized as a newline.
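
A quick way to confirm the NLS-related variables are set (a minimal sketch; the variable name varies by RDBMS release, e.g. ORA_NLS10 for 10g):

    $ env | grep -E 'ORA_NLS|NLS_LANG'

If nothing is returned, re-source the database environment file before retrying.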

2) Make sure that the name of the file is valid (the file name should not include characters like "^").

3) Make sure that APPLPTMP is set to a valid directory and that BOTH the applmgr user and the database user have read and write permissions on that directory (normally, it can be set to the same directory as APPLTMP).

Make sure to run AutoConfig if you modify APPLPTMP in the AutoConfig context file.

If you have a RAC-enabled environment, make sure none of the mount points pointing to APPLPTMP/APPLTMP are missing on any node.
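
A minimal sketch of the permission check for this step, assuming the APPS environment is sourced (so $APPLPTMP is set) and OS users named applmgr and oracle:

    $ ls -ld $APPLPTMP
    $ su - applmgr -c "touch $APPLPTMP/apptest.tmp"
    $ su - oracle  -c "touch $APPLPTMP/dbtest.tmp"
    $ rm $APPLPTMP/apptest.tmp $APPLPTMP/dbtest.tmp

Both touch commands must succeed for FND_FILE to work.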

4) Make sure the directory pointed to by APPLPTMP is the first entry in utl_file_dir. Also, verify that all the entries in utl_file_dir are valid and that the applmgr user has read/write permissions on them.

If using an spfile, verify the proper syntax to set utl_file_dir:

Ex.

    ALTER SYSTEM SET UTL_FILE_DIR='directory1','directory2' scope=spfile;
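
To confirm the value currently in effect (utl_file_dir is a static parameter, so an spfile change only takes effect after a database restart), query v$parameter:

    SQL> SELECT value FROM v$parameter WHERE name = 'utl_file_dir';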

5) Check if you can write a file directly using FND_FILE, which is the package used by the Application. From SQL*Plus, connected as the APPS user, run:

    SQL> exec FND_FILE.PUT_LINE(FND_FILE.LOG, 'THIS IS A TEST');
This should dump a file in $APPLPTMP.

If this test works, it would indicate that FND_FILE is working fine and the problem is possibly with the Application.

You may want to leave only one entry in utl_file_dir for this test.

FND_FILE uses the UTL_FILE package, which can only write to directories specified in init.ora. APPLPTMP is a special directory used only for PLSQL temporary files. When a concurrent program uses the FND_FILE package, the concurrent manager uses APPLPTMP as the directory to which it writes temporary files. Thus, APPLPTMP must be one of the directories listed in init.ora in order for FND_FILE to work.

6) Make sure that the file does not already exist in the directory pointed to by APPLPTMP.

This issue may occur if the temporary file being created by the concurrent program already exists in that directory and is owned by another user, so it cannot be overwritten. For example, on a cloned instance, the APPLPTMP directory may still hold the temp files that were created on the source instance.

Verify that the file named in the error message you are getting (in the example above, o0000071.tmp) does not exist in the directory pointed to by $APPLPTMP; a quick check is sketched after the steps below. If it does exist, do the following:

1. Stop all services. This is to make sure no user is logged on and making use of $APPLPTMP
2. Remove all files from directory pointed by $APPLPTMP
3. Restart all services
4. Re-test the failed process
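
For example, for the error shown at the top of this note, the quick check (a shell sketch) would be:

    $ ls -l $APPLPTMP/o0000071.tmp

If the file exists and is owned by a different OS user than the one hitting the error, that confirms this scenario.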

7) If you are still having problems, check if you can write a file using UTL_FILE, which is used by FND_FILE.

Run the PL/SQL below, after changing <first entry on utl_file_dir> to the first entry in utl_file_dir (you may want to leave just one entry in utl_file_dir for this test).

set serveroutput on
DECLARE
  file_location VARCHAR2(256) := '<first entry on utl_file_dir>';
  file_name     VARCHAR2(256) := 'utlfile1.lst';
  file_text     VARCHAR2(256) := 'THIS IS A TEST';
  file_id       UTL_FILE.file_type;
BEGIN
  file_id := UTL_FILE.fopen(file_location, file_name, 'W');
  UTL_FILE.put_line(file_id, file_text);
  UTL_FILE.fclose(file_id);
EXCEPTION
  WHEN UTL_FILE.INVALID_PATH THEN
    dbms_output.put_line('Invalid path ' || SQLERRM);
  WHEN OTHERS THEN
    dbms_output.put_line('Others ' || SQLCODE || ' ' || SQLERRM);
END;
/
This program should dump a file in the requested directory. If the test fails, the problem is probably on the database side.

If it works, the problem is probably on FND_FILE. In this scenario, check the versions of AFCPPIOS.pls and AFCPPIOB.pls.

8) If you are still having problems, go to your temporary files directory (i.e. $APPLPTMP) and check that it is not full (see the quick check after the steps below). If it is, do the following:

    1. Shut down all the concurrent processes and managers.
    2. Verify that no FNDLIBR process is running (ps -ef | grep FNDLIBR).
    3. Delete all the files in the temporary directory.
    4. Run the CMCLEAN.SQL script from Note 134007.1 (don't forget to commit).
    5. Run the concurrent program "Purge Concurrent Request and/or Manager Data" with Entity = "All" and Mode = "Age".
       The Mode Value can be set to a number of days; data older than that will be purged.
    6. Make sure UTL_FILE_DIR and APPLPTMP are set to the same directory.
    7. Re-test the issue.
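
To check whether the filesystem holding $APPLPTMP is full (a standard OS check; use df -k or df -h depending on your platform):

    $ df -k $APPLPTMP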

9) If you are still having problems, your SPFILE may be corrupt. Do the following to recreate it:

    1. Log in to the database server as the oraprod user.

    2. Create a new PFILE (i.e. the init<sid>.ora file) from the current SPFILE:
    2.1. cd $ORACLE_HOME/dbs
    2.2. sqlplus "/ as sysdba"
    2.3. SQL> create pfile from spfile;
    2.4. Make sure that a file named "init<SID>.ora" has been created today.

    3. Edit the "init<SID>.ora" file and change the value of the utl_file_dir parameter to a readable/writable directory (e.g. $APPLPTMP or "/usr/tmp").
    4. Either rename the original SPFILE so that the database starts up with the new PFILE, or create a new SPFILE from the PFILE.
    5. Restart the database.

Adding a LUN from SAN to AIX

Hi Guys,
      Here is an article that describes how you can add a LUN to an AIX machine.
There is nothing magical about doing this. AIX can only be installed on Power servers, and since Power servers are very expensive, small-scale companies tend to avoid them. But their processing power and robustness are remarkable. I am lucky to have experience with AIX; in fact, I received some training from IBM on AIX administration and troubleshooting.



1.  Create a LUN of the required size in storage using Storage Manager.
2.  Map that LUN to the required host group or host.
3.  Connect to the host machine.
4.  Run cfgmgr as root:
    # cfgmgr
5.  # lspv
    PVName    PVID                VGName    Status
    hdisk0    00f6fed9834f5f14    rootvg    active
    hdisk1    00f6fed9a6e670fa    datavg    active
    hdisk2    none                none      none

6.  # chdev -l hdisk2 -a pv=yes      (creates a PVID on the disk)
    PVName    PVID                VGName    Status
    hdisk0    00f6fed9834f5f14    rootvg    active
    hdisk1    00f6fed9a6e670fa    datavg    active
    hdisk2    <new PVID>          none      active

    Instead of using the above procedure you can also go directly to step 7.

7.  We need to assign that hard disk to a volume group.
    If the volume group (VG) already exists, we can assign the hard disk to that particular VG:
    # extendvg -f vgname hdiskn
            (or)
    Create a new volume group with # smitty mkvg
    Output:
    hdisk0    00f6fed9834f5f14    rootvg    active
    hdisk1    00f6fed9a6e670fa    datavg    active
    hdisk2    <PVID>              datavg    active

8.  Then we need to create a logical volume.
    We can create one through # smitty mklv
        (or)
    We can create a filesystem with a default LV by using the following command:
    # crfs -v jfs2 -g datavg -a size=298G -A yes -m /u03

9.  To change LV attributes (e.g. to assign PPs): # smitty chlv
10. To list the logical volumes in the VG: # lsvg -l vgname
11. To extend the filesystem (allocating more PPs): # chfs -a size=+2G /u03
12. # mount /u03
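
Putting it all together, here is a minimal end-to-end sketch of the whole sequence, assuming the new disk shows up as hdisk2, the volume group is datavg, and the mount point is /u03 (adjust the names to your environment):

    # cfgmgr                                               # discover the newly mapped LUN
    # lspv                                                 # new disk appears with no PVID/VG
    # chdev -l hdisk2 -a pv=yes                            # assign a PVID to the disk
    # extendvg -f datavg hdisk2                            # add the disk to an existing VG
    # crfs -v jfs2 -g datavg -a size=298G -A yes -m /u03   # create FS with a default LV
    # mount /u03                                           # bring the filesystem online
    # df -g /u03                                           # verify the new space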

   

Automatic Undo Management (AUM) and System Managed Undo (SMU)

Hi folks,
    Recently I was going through a very good article on undo management. I am sharing a few ideas that I picked up which are very useful for day-to-day DB administration.
This is a very common problem for all of us: though we work for years, we still overlook many concepts that are essential. For example, here are a few questions; please have a look.

Q-1 What is Undo?
Q-2 How many undo tablespaces can we have for a database?
Q-3 What is Automatic UNDO Retention?
Q-4 What is Guaranteed UNDO Retention?
Q-5 What are the various statuses for Undo Extents?
Q-6 How to check undo growth?
Q-7 What are the possible causes for excessive undo growth?
Q-8 What is SMU?
 

These seem like very simple questions, yet they are quite effective if we have the ideas clear.

Answering questions 

What is Undo?
Oracle maintains information to nullify changes made to the database. Such information consists of records of the actions of transactions, collectively known as undo. Oracle uses the undo to do the following:

-  Roll back an active transaction
-  Recover a terminated transaction
-  Provide read consistency
-  Recover from logical corruptions

How many undo tablespaces can we have for a database?
                        We can have many undo tablespaces in a database, but only one can be active per instance.
In an Oracle Real Application Clusters (RAC) environment, we need to have one active undo tablespace per instance. The UNDO_TABLESPACE parameter is used to assign a particular undo tablespace to an instance.
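
For example, to check and change the undo tablespace assigned to the current instance (UNDOTBS2 here is just a hypothetical name; the tablespace must already exist):

SQL> SHOW PARAMETER undo_tablespace
SQL> ALTER SYSTEM SET UNDO_TABLESPACE = UNDOTBS2;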

What is UNDO Retention?
                          Undo retention refers to the duration for which undo data is retained after a transaction.
After a transaction is committed, undo data is no longer needed for rollback or transaction recovery purposes. However, for consistent read purposes, long-running queries may require this old undo information for producing older images of data blocks. Furthermore, the success of several Oracle Flashback features can also depend upon the availability of older undo information. For these reasons, it is desirable to retain the old undo information for as long as possible.

Automatic undo management eliminates the complexities of managing rollback segment space and lets you exert control over how long undo is retained before being overwritten.

You can set the UNDO_RETENTION parameter to a low threshold value so that the system retains the undo for at least the time specified in the parameter.
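
For example, to ask the system to retain undo for at least 900 seconds (15 minutes, which is also the default value):

ALTER SYSTEM SET UNDO_RETENTION = 900;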

What is Automatic UNDO Retention?
                          There is no parameter for this; automatic undo retention is enabled by default in Oracle 10g.
Oracle Database 10g automatically tunes a setting called the undo retention period. The undo retention period indicates the amount of time that must pass before old undo information, meaning the undo information for committed transactions, can be overwritten. The database collects usage statistics and tunes the undo retention period based on these statistics and on the undo tablespace size, provided that automatic undo management is enabled.
The current value of the tuned undo retention can be viewed with the following query.
SELECT TO_CHAR(BEGIN_TIME, 'MM/DD/YYYY HH24:MI:SS') BEGIN_TIME,
    TUNED_UNDORETENTION FROM V$UNDOSTAT;

For AUTOEXTEND undo tablespaces, the system retains undo for at least the time specified in UNDO_RETENTION, and automatically tunes the undo retention period to satisfy the undo requirements of the queries.
This can lead to excessive undo growth in order to honor the undo retention.
 

What is Guaranteed UNDO Retention?
Oracle Database 10g lets you guarantee undo retention. In Oracle 10g Release 2, you can enable and disable the retention guarantee.
When you enable this option, the database never overwrites unexpired undo data, that is, undo data whose age is less than the undo retention period.

This option is disabled by default, which means that the database can overwrite the unexpired undo data to avoid failure of DML operations if there is not enough free space left in the undo tablespace.

By enabling the guarantee option, you instruct the database not to overwrite unexpired undo data even if it means risking failure of currently active DML operations. Therefore, use caution when enabling this feature.

To enable it, run the following against the undo tablespace.
ALTER TABLESPACE UNDOTBS RETENTION GUARANTEE;

A typical use of the guarantee option is when you want to ensure deterministic and predictable behavior of Flashback Query by guaranteeing the availability of the required undo data.
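
To check whether the guarantee is in effect, and to disable it again, you can use the following (the RETENTION column of DBA_TABLESPACES shows GUARANTEE or NOGUARANTEE):

SELECT TABLESPACE_NAME, RETENTION
   FROM DBA_TABLESPACES WHERE CONTENTS = 'UNDO';

ALTER TABLESPACE UNDOTBS RETENTION NOGUARANTEE;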


What are the various statuses for Undo Extents?

The transaction status of the undo in an extent can be any of the following:
SELECT DISTINCT STATUS, SUM(BYTES), COUNT(*)  
   FROM DBA_UNDO_EXTENTS GROUP BY STATUS; 

ACTIVE - The undo extent is active, i.e. used by a transaction.

EXPIRED - The undo extent is expired (it exceeded the undo retention: the transaction committed and the retention period has passed).

UNEXPIRED - The undo extent is still required to honor UNDO_RETENTION (the transaction committed but the extent is still within the retention period).

How to check undo growth?

To check the free space within the undo tablespace:
SELECT SUM(BYTES) FROM DBA_FREE_SPACE WHERE TABLESPACE_NAME='&UNDOTBS';

To check the state of the extents and the space used in the current undo tablespace:
SELECT DISTINCT STATUS, SUM(BYTES), COUNT(*) 
   FROM DBA_UNDO_EXTENTS GROUP BY STATUS;

To check the number of active transactions and their undo usage:
SELECT XIDUSN, XIDSLOT, XIDSQN, USED_UBLK FROM V$TRANSACTION WHERE STATUS='ACTIVE' ;

What are the possible causes for excessive undo growth?

There could be various causes for excessive undo growth. To start the diagnosis, we need to look at the following.

Transactions with huge undo
It is obvious to see high undo usage when there are huge transactions.
If that is the case, this growth is expected behavior.
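
To see which sessions are generating the most undo right now, the active transactions can be joined to their sessions (V$TRANSACTION.USED_UBLK is the number of undo blocks used):

SELECT s.SID, s.USERNAME, t.USED_UBLK, t.USED_UREC
   FROM V$TRANSACTION t, V$SESSION s
  WHERE t.SES_ADDR = s.SADDR
  ORDER BY t.USED_UBLK DESC;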

UNDO RETENTION
A higher undo retention will cause higher undo growth, because the undo extents are not marked EXPIRED until the undo retention period has passed.

Disabling autoextend on the datafiles of the active undo tablespace makes it reuse UNEXPIRED extents when there is a space crunch. It is a trade-off between undo retention and undo space.
If you wish to satisfy undo retention, switch on autoextend for the undo tablespace datafiles.
SELECT FILE_ID, AUTOEXTENSIBLE FROM DBA_DATA_FILES WHERE
   TABLESPACE_NAME='&UNDOTBS';
To make those datafiles autoextensible, run the following command (&FILE_ID is the datafile number from the query above, so it is not quoted).
ALTER DATABASE DATAFILE &FILE_ID AUTOEXTEND ON;
If you wish to switch off autoextend so that the UNEXPIRED space can be reused, do the following:
ALTER DATABASE DATAFILE &FILE_ID AUTOEXTEND OFF;

State of undo extents
The status of the undo extents needs to be closely monitored.
There are a few bugs in different releases where EXPIRED extents are not being reused.

If a good number of extents are in UNEXPIRED status, it could be due to a high undo_retention setting:
SELECT DISTINCT STATUS, SUM(BYTES), COUNT(*)  
   FROM DBA_UNDO_EXTENTS GROUP BY STATUS;

What is AUM / SMU?

Automatic Undo Management (AUM) was introduced in Oracle 9i and replaces manually managed rollback segments.
It is also called System Managed Undo (SMU), as the undo is managed by Oracle.

Automatic undo management is undo-tablespace based: you allocate space in the form of an undo tablespace, instead of allocating many rollback segments of different sizes.

Oracle strongly recommends that customers use Automatic Undo Management (AUM).
 
Hope this article will help you...



Tuesday 21 January 2014

Demonstration of Co-related subquery


UNDERSTANDING CO-RELATED SUBQUERY
A correlated subquery is a subquery that uses values from the outer query, requiring the inner query to execute once for each row of the outer query.
Here is a TABLE: emp

EMPNO  ENAME   JOB        MGR   HIREDATE    SAL   COMM  DEPTNO
-----  ------  ---------  ----  ----------  ----  ----  ------
 7369  SMITH   CLERK      7902  12/17/1980   800          20
 7788  SCOTT   ANALYST    7566  4/19/1987   3000          20
 7839  KING    PRESIDENT        11/17/1981  5000          10
 7844  TURNER  SALESMAN   7698  9/8/1981    1500     0    30
 7902  FORD    ANALYST    7566  12/3/1981   3000          20

I want to find the nth highest salary of an employee from the emp table.
Now the query is:
select distinct sal from emp e where &no = (select count(distinct sal) from emp where e.sal <= sal);
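
For example, with the sample emp data above, asking for the 2nd highest salary should return 3000, since exactly two distinct salaries (5000 and 3000) satisfy e.sal <= sal:

SQL> select distinct sal from emp e
  2  where &no = (select count(distinct sal) from emp where e.sal <= sal);
Enter value for no: 2

       SAL
----------
      3000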

Demonstration
As we know, a correlated subquery uses values from the outer query, requiring the inner query to execute once for each row of the outer query.
Here I am dividing the query into 3 sections:
Section 1:
The outer query passes a value to the inner query.

Section 2:
The inner query uses the value passed from the outer query for evaluation.

Section 3:

The output of the inner query is compared with the value supplied in the WHERE clause (&no).


SECTION-1
The distinct salaries, shown here in descending order:
5000
3000
1500
800
-------------------
5000 --> passed
3000
1500
800




2nd execution
5000
3000 --> Passed
1500
800


Likewise we can compare for the 3rd, and in general the nth, highest salary.
SECTION-2 (e.sal<=sal)
1st execution
5000<=5000 (true,  cnt=1)
5000<=3000 (false, cnt=1)
5000<=1500 (false, cnt=1)
5000<=800  (false, cnt=1)

 -----------------
1st execution
First, 5000 will be compared, and
the output (count) will be 1, as above



2nd execution
3000<=5000 (true,  cnt=1)
3000<=3000 (true,  cnt=2)
3000<=1500 (false, cnt=2)
3000<=800  (false, cnt=2)
SECTION-3

&NO=1
Evaluation (1=1, true)
Hence 5000 is the highest sal


 ------------------

&NO=2 (second highest sal)

Evaluation (2=1, false)
Hence 5000 is not the 2nd highest sal


Evaluation (2=2 which is true)
Hence 3000 will be the 2nd highest