Friday, 1 January 2021

Experiences and takeaways from 2020.....

After a long break from technical blogging, I thought of writing something different, beyond the usual technical stuff. 2020 was truly a year to remember in human history because of Covid-19, one of the deadliest pandemics the world has ever seen. People of all generations witnessed it and learned a lot from it. The whole world went into silent mode for quite some time; staying at home and working from home became the new normal for everyone. The world economy shrank by a significant margin, and so did individual finances. Long story short, the world suffered in every aspect like never before.


As the whole world moved to working from home, only a few sectors could take advantage of it, while the informal sector, which covers roughly 90% of the workforce and depends on going to a worksite every day, suffered badly.

A life-changing experience during this period!!

  As an IT employee, I had the advantage of working from home, but on the other hand I had some bitter experiences: longer hours and growing health issues. And this was not just me; it was true for other employees who worked from home through the whole year starting from March 2020. Due to the extreme workload and stress, I lost a good work-life balance and developed a few health issues from spending too many hours in a chair without any physical exercise.


What opportunities has this pandemic created?

If we take it in a positive way, this pandemic has taught humans a lot of lessons about preparedness and attitude.

1. Healthcare improvements and the preparedness needed to respond to such pandemics in the future.
2. Governments must be ready with proactive measures to deal with such situations.
3. Responsible countries must take collective steps to ensure this doesn't repeat.

Technical Opportunities for IT: 
 
With my limited 10 years of IT experience, I have seen a complete paradigm shift in technologies.
1. Security 
2. DevOps 
3. Data Science 
4. Artificial Intelligence  
are a few of the top trending technologies.

The recent massive data breach in North America shook the whole world and left utter disappointment for the IT experts who spent years building and operationalizing secure infrastructure. Due to a vulnerability in one system, the whole information system got compromised!! Hence, understanding security inside out has become an absolute requirement for everyone. From developers to QA, and from administrators to managers, everyone has to learn the importance of security and follow security practices strictly.

Sunday, 4 February 2018

How to read a CSV file using PL/SQL and load it into a table

A simple example demonstrating how you can quickly read a CSV file and load its data into a table.

STEP:1 Create a directory object in the DB

create directory EMPDATA as '/home/oracle/EMPDATA';

STEP:2 Create a table into which the CSV data will be inserted


  CREATE TABLE EMP_DEPT
    (
    EMPNO NUMBER(4),
    ENAME VARCHAR2(10),
    SAL NUMBER(7,2),
    DNAME VARCHAR2(14)
    );  
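
For reference, the procedure in STEP:3 assumes each line holds four comma-separated fields in the order empno,ename,sal,dname. A sample /home/oracle/EMPDATA/employee.csv matching the output shown in STEP:4 would look like:

1,manoj,1000,IT
2,Santanu,2000,Accounting
3,Manjit,3000,Billing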


STEP:3 Run the below procedure to read data from the CSV and insert it into the emp_dept table

DECLARE
  F       UTL_FILE.FILE_TYPE;
  V_LINE  VARCHAR2(1000);
  V_EMPNO NUMBER(4);
  V_ENAME VARCHAR2(10);
  V_SAL   NUMBER(7,2);
  V_DNAME VARCHAR2(14);
BEGIN
  -- Open the file via the EMPDATA directory object created in STEP:1
  F := UTL_FILE.FOPEN('EMPDATA', 'employee.csv', 'R');
  IF UTL_FILE.IS_OPEN(F) THEN
    LOOP
      BEGIN
        UTL_FILE.GET_LINE(F, V_LINE, 1000);
        IF V_LINE IS NULL THEN
          EXIT;
        END IF;
        -- Split the comma-separated line into its four fields.
        -- Note: the pattern '[^,]+' skips empty fields, so this simple
        -- approach assumes no NULL values in the file.
        V_EMPNO := REGEXP_SUBSTR(V_LINE, '[^,]+', 1, 1);
        V_ENAME := REGEXP_SUBSTR(V_LINE, '[^,]+', 1, 2);
        V_SAL   := REGEXP_SUBSTR(V_LINE, '[^,]+', 1, 3);
        V_DNAME := REGEXP_SUBSTR(V_LINE, '[^,]+', 1, 4);
        INSERT INTO EMP_DEPT VALUES (V_EMPNO, V_ENAME, V_SAL, V_DNAME);
      EXCEPTION
        -- GET_LINE raises NO_DATA_FOUND at end of file
        WHEN NO_DATA_FOUND THEN
          EXIT;
      END;
    END LOOP;
  END IF;
  UTL_FILE.FCLOSE(F);
  COMMIT;  -- commit once after the load instead of once per row
END;
/

STEP:4 Query the emp_dept table and check whether the CSV data was loaded

SQL> select * from emp_dept;

     EMPNO ENAME             SAL DNAME
---------- ---------- ---------- --------------
         1 manoj            1000 IT
         2 Santanu          2000 Accounting
         3 Manjit           3000 Billing


I hope this helps you.


Reference: http://nimishgarg.blogspot.in/2013/04/load-csv-file-in-oracle-using-plsql.html

Tuesday, 26 December 2017

Hola!! A Successful IDCS Integration with EBS!!!

Configuring an on-premises identity and access management solution and integrating it with EBS is really cumbersome. Spending time building the infrastructure and investing in manpower is a tough job.

This challenge is now addressed by the IDCS integration with EBS. It has simplified the integration process and reduced the manpower needed for it.


Coming soon... 

       Keep an eye on it...

Wednesday, 29 November 2017

How To Create Individual Wallets for Databases Sharing the Same Home on the Same Host

TDE PoC -- self test

Background:
===========

Two databases on the same server share the same ORACLE_HOME. How can we configure an individual wallet for each database?

two databases: test1, test2

hostname : db01.oracle.com
==============

Create wallet directories for both instances.

mkdir -p /u01/app/WALLET/test1  /u01/app/WALLET/test2

Change sqlnet.ora as below (a single entry serves all databases; the path is translated based on the ORACLE_SID value):


ENCRYPTION_WALLET_LOCATION=
  (SOURCE=
   (METHOD=FILE)
    (METHOD_DATA=
     (DIRECTORY=/u01/app/WALLET/$ORACLE_SID)))
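
Because the DIRECTORY path is resolved from the ORACLE_SID environment variable of the server process, make sure the SID is set before starting each instance/session. A minimal sketch, assuming the paths above:

$ export ORACLE_SID=test1        # wallet resolves to /u01/app/WALLET/test1
$ sqlplus / as sysdba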


SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/WALLET/$ORACLE_SID' IDENTIFIED BY "Oracle123"; -- Keystore Created for test1 instance

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "Oracle123"; -- Keystore opened for test1 instance

Now check the wallet status on both instances.

Open two terminals; set test1 on tty1 and test2 on tty2.

on tty1 : instance test1

SQL> select status from v$encryption_wallet;

STATUS
------------------------------
OPEN_NO_MASTER_KEY

on tty2 : instance test2

SQL> select status from v$encryption_wallet;

STATUS
------------------------------
NOT_AVAILABLE


Inference: As we didn't create any keystore for the test2 instance, its keystore status is NOT_AVAILABLE.


Now create a keystore for the test2 instance.


SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/WALLET/$ORACLE_SID' IDENTIFIED BY "Oracle123"; -- Keystore Created for test2 instance

SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "Oracle123"; -- Keystore opened for test2 instance

Check the status of the keystore on test2:

SQL> select status from v$encryption_wallet;

STATUS
------------------------------
OPEN_NO_MASTER_KEY



Now create the master key on both instances:

SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "Oracle123" WITH BACKUP;  -- run this on both test1 and test2

keystore altered.

SQL> select status from v$encryption_wallet;

STATUS
------------------------------
OPEN
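
As an extra sanity check (assuming Oracle 12c, where v$encryption_wallet exposes the wallet path), you can confirm each instance resolved its own wallet directory:

SQL> select wrl_parameter, status from v$encryption_wallet;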

On test1 Instance:
==================
SQL> select utl_raw.cast_to_varchar2(utl_encode.base64_encode('01'||substr(mkeyid,1,4))) ||
            utl_raw.cast_to_varchar2(utl_encode.base64_encode(substr(mkeyid,5,length(mkeyid)))) masterkeyid_base64
     from (select rawtohex(mkid) mkeyid from x$kcbdbk);

MASTERKEYID_BASE64
--------------------------------------------------------------------------------
AVYo2uBozE/DvzpLWTiu/4o=


SQL> !mkstore  -wrl /u01/app/WALLET/$ORACLE_SID -viewEntry ORACLE.SECURITY.DB.ENCRYPTION.MASTERKEY
Oracle Secret Store Tool : Version 12.1.0.2
Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
ORACLE.SECURITY.DB.ENCRYPTION.MASTERKEY = AVYo2uBozE/DvzpLWTiu/4oAAAAAAAAAAAAAAAAAAAAAAAAAAAAA


on test2 Instance:
=================

SQL> select utl_raw.cast_to_varchar2(utl_encode.base64_encode('01'||substr(mkeyid,1,4))) ||
            utl_raw.cast_to_varchar2(utl_encode.base64_encode(substr(mkeyid,5,length(mkeyid)))) masterkeyid_base64
     from (select rawtohex(mkid) mkeyid from x$kcbdbk);

MASTERKEYID_BASE64
--------------------------------------------------------------------------------
Ab1JjU1zS0+Xv91R1bigDHM=

mkstore  -wrl /u01/app/WALLET/$ORACLE_SID -viewEntry ORACLE.SECURITY.DB.ENCRYPTION.MASTERKEY

Enter wallet password:
ORACLE.SECURITY.DB.ENCRYPTION.MASTERKEY = Ab1JjU1zS0+Xv91R1bigDHMAAAAAAAAAAAAAAAAAAAAAAAAAAAAA


SQL> create tablespace enc_test1 datafile '/u01/app/test1/TEST1/datafile/test1.dbf' SIZE 10M ENCRYPTION USING 'AES256' DEFAULT STORAGE (ENCRYPT);  -- on instance test1

SQL> create tablespace enc_test2 datafile '/u01/app/test1/TEST1/datafile/test2.dbf' SIZE 10M ENCRYPTION USING 'AES256' DEFAULT STORAGE (ENCRYPT);  -- on instance test2 (adjust the datafile path to that database's layout)
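
To verify that the tablespaces really were created encrypted, the standard dictionary view DBA_TABLESPACES can be checked on each instance:

SQL> select tablespace_name, encrypted from dba_tablespaces where tablespace_name like 'ENC%';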

SQL> create user test1 identified by Oracle123 default tablespace enc_test1;

SQL> create user test2  identified by Oracle123 default tablespace enc_test2;

SQL> grant connect,resource, unlimited tablespace to test1;

SQL> grant connect,resource, unlimited tablespace to test2;

CREATE TABLE TEST1.appdata(name varchar2(20),roll number);
insert into TEST1.appdata values ('Manoj',1);
insert into TEST1.appdata values ('John',2);
insert into TEST1.appdata values ('Scott',3);

CREATE TABLE TEST2.appdata(name varchar2(20),roll number);
insert into TEST2.appdata values ('Sajan',1);
insert into TEST2.appdata values ('Johnson',2);
insert into TEST2.appdata values ('Tiger',3);

Now, from an external client (SQL Developer), we logged into the test1 and test2 instances and checked:

select * from test1.appdata

NAME                       ROLL
-------------------- ----------
Manoj                         1
John                          2
Scott                         3

select * from test2.appdata
NAME                       ROLL
-------------------- ----------
Sajan                         1
Johnson                       2
Tiger                         3



Inference: This PoC shows that each database on a shared home can maintain and open its own individual wallet, and that applications can access the encrypted data in each database transparently.



Monday, 9 October 2017

Oracle Database Security Assessment

Security is unavoidable these days, given the technology trends and the pace of innovation in IT. Securing your database is just as important as securing applications and the network. Attacks can come from inside or outside, and the damage they can do is unpredictable. If attackers gain access to the database through any vulnerable point (the network, an application, or the OS), it can lead to disaster. Hence, it is vital to understand database vulnerabilities and how to close the gaps.

DBSAT (Database Security Assessment Tool) is essentially an Oracle Database security vulnerability assessment tool: it reports the security gaps present in the database, along with recommendations. Roles, privileges, security configurations, OS-level permissions, and the options and features in use make up the major parts of the DBSAT report, which tells you whether they are configured according to security standards.

A comprehensive, formatted report can easily be generated using the DBSAT tool, which is packaged and provided on support.oracle.com (Doc ID: 2138254.1).

There are two options available in the dbsat tool:


$dbsat collect <connect_string> <destination>  -- for collecting data from the database 


$dbsat report collectedFile  -- generates the formatted report from the collected data
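
As a concrete (illustrative) run against a database called orcl, writing the collected data to /tmp; the user and file names here are assumptions, not fixed values:

$ dbsat collect dbsat_user@orcl /tmp/orcl_data    -- prompts for the DB password and an archive password
$ dbsat report /tmp/orcl_data                     -- prompts for the archive password and writes the report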

Check the Oracle reference below for more details and architecture diagram. 

Reference: https://docs.oracle.com/cd/E76178_01/SATUG/toc.htm#SATUG-GUID-7FFFEC95-6D54-4299-95FA-FDFA2F8835B0


If you want more information you can contact viewssharings@gmail.com


Tuesday, 3 October 2017

Oracle Database Security Products covering 360 degree Security

We often see customers looking for security products that provide a comprehensive security solution. But most of what we see is segmented, meaning no single product covers everything (360-degree security). Oracle has a range of security products available, from the database to the middleware layer. I will briefly cover the database security products in this article.

Oracle has a set of security products that, combined, deliver a comprehensive database security solution with 360-degree protection.


Broadly these products are categorized under EVALUATE, PREVENT, DETECT, DATA DRIVEN SECURITY

Let us see which products come under which category.

1. EVALUATE: Privilege Analysis, Database Security Assessment
2. PREVENT: Database Vault, Data Redaction, Data Encryption, Data Masking and Subsetting, Key Management
3. DETECT: Database SQL Firewall, Audit Vault
4. DATA DRIVEN SECURITY: Label Security, Real Application Security, Row Level Security



DBSAT (Database Security Assessment Tool): It captures all security parameters and their configurations, the security features and options in use, and provides recommendations.

Privilege Analysis: It comes with the Oracle Database Vault license. It evaluates privileges and roles and separates the necessary from the unnecessary ones.

Database Vault: Privileged account management, multi-factor authorization, separation of duties, etc.

Data Encryption: Encrypts data at rest at the column or tablespace level.

Data Redaction: Also known as dynamic masking, it masks data on the basis of a policy and applies it based on factors (users, roles, IP address, etc.), e.g. credit card number XXXX-XXXX-XXXX-1234.
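
As an illustration of what such a policy looks like, here is a minimal sketch of a partial-redaction policy using the DBMS_REDACT package (available with Oracle Advanced Security in 12c). The PAYMENTS.CARDS table and CARD_NO column are hypothetical names, not from this post:

BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema       => 'PAYMENTS',     -- hypothetical schema
    object_name         => 'CARDS',        -- hypothetical table
    column_name         => 'CARD_NO',      -- hypothetical column
    policy_name         => 'redact_card_no',
    function_type       => DBMS_REDACT.PARTIAL,
    -- mask the first 12 digits with X, keep the last 4 visible
    function_parameters => 'VVVVFVVVVFVVVVFVVVV,VVVV-VVVV-VVVV-VVVV,X,1,12',
    expression          => '1=1');         -- apply to every session
END;
/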

Data Masking and Subsetting: It masks data permanently in the underlying tables, reducing sensitive data exposure in test, dev, and other non-production environments.

Oracle Key Vault: Protects keys, certificates, wallet contents, etc., and manages the key life cycle.
It acts like an HSM; TDE keys can be accessed directly from OKV.

Audit Vault and Database Firewall: The DB Firewall can be deployed as a first line of defense; it analyses incoming SQL and detects anomalies. Audit Vault is a central auditing and monitoring system that can be integrated with the DB Firewall so the two work together to detect anomalies and alert the security admin.


If you are interested in these technologies, Kindly contact viewssharings@gmail.com.



Friday, 18 August 2017

Sending and receiving files through sftp in non-interactive mode

Here is a case where we sometimes need to send or receive files using sftp in non-interactive mode.

Here lftp has been used to pass the credentials to sftp. agent.jar is a file available locally, and we want to copy it to the remote host 192.168.56.153.

Here is the code.

--------------
#!/bin/sh

# Connection details for the remote host (credentials in plain text;
# fine for a lab, but consider key-based auth for anything real)
HOST=192.168.56.153
USER=oracle
PASS=oracle
PORT=22

# lftp supplies the password non-interactively and runs the
# commands from the here-document over sftp
lftp -u ${USER},${PASS} sftp://${HOST}:${PORT} <<EOF
cd /home/oracle
put agent.jar
bye
EOF

echo "done"

-------------

put sends a file from the local to the remote server, and get receives a file from the remote to the local server. Change the directory using cd and use put or get accordingly, as in the sketch below.
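
For the reverse direction, a minimal sketch that fetches a file from the remote host (the /home/oracle/logs directory and app.log file name are illustrative):

lftp -u ${USER},${PASS} sftp://${HOST}:${PORT} <<EOF
cd /home/oracle/logs
get app.log
bye
EOF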


Hope this will help.

Friday, 11 August 2017

How to change the default bash prompt to a desired prompt

When you log in as a Linux or Unix user, you land in the home directory, which can sometimes be at a lengthy path. You will find it difficult to type your commands in a small terminal display.

for example, the default prompt may look like:

      username@hostname>

  or  username:pathOftheUserHome>

We can change the prompt as desired. For example:

username>

hostname>

anyName>

here is the example how we can change the prompt.

export PS1='\u>'      output: oracle>

export PS1='\h>'      output: example>

export PS1='\u@\h>'   output: oracle@example>

export PS1='Manoj>'   output: Manoj>


You can put this in .bash_profile so the prompt loads whenever you log in to the user, as sketched below.
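
A minimal sketch of the relevant .bash_profile lines (assuming bash reads ~/.bash_profile at login, as it does for login shells):

# ~/.bash_profile
# short prompt in the form user@host>
export PS1='\u@\h>'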


Hope this article will help.


  

Friday, 27 January 2017

Data masking for E-Business Suite 12.2 is in high demand

"I found something interesting these days when experts are talking about data security, this topic is hot now and in fact hottest. There are some sort of chaos going around in the organisations due to insider threat, hackers, cyber attack etc. There are hail lot of incidents happenings every day around the world, Organizations are loosing their stocks, many have been fined an enormous amount as they could not comply to the governments standards for data protection even many have failed to sustain their business due to frequent hacking and not able to protect their business sensitive information." 


Technologies keep evolving to prevent hackers and cyber attacks, much like fencing around a house protects against intruders. Still, a large attack surface is left inside the organisation: insider theft. It is really difficult to enforce processes and policies that restrict insider theft, so organizations need some way to avoid such inadvertent situations.

Non-production environments are the most vulnerable to such attacks; the proliferation of sensitive data across them multiplies the security risk.
When you are using an ERP that holds business-sensitive information, it is really important to de-identify that information before taking it to any non-production environment. But it is extremely difficult to de-identify all of a database's sensitive information, as the application data model contains thousands of sensitive columns in different formats.

Oracle ERP is one of the most complex application systems. Toward this goal, Oracle has introduced a data masking template for E-Business Suite 12.2 that works perfectly with EM 13c.


I have done a PoC to check how it works. I prepared an EBS R12.2 environment with database 12.1.0.2 and Enterprise Manager Cloud Control 13.1.

I followed the Oracle support document:
Using Oracle E-Business Suite Data Masking Pack Release 12.2 with Oracle Enterprise Manager Cloud Control 13c (Doc ID 2076834.1)

It worked perfectly for me. After completing the masking process I was able to start all application services and could see the data was scrambled. It was not at all possible to recover the original form or to reference it.

I would urge EBS DBAs to test this out and suggest that their organisations use it.


Saturday, 22 October 2016

Oracle DBCS : How to work with Oracle Public Cloud

Here are the steps for creating an Oracle database with DBCS.

Simplified steps to create Database Cloud Service.

List of actions:
1. We will learn to generate a key pair (private and public keys) using puttygen.exe
2. We will create a DBCS (Database Cloud Service) instance
3. We will use the OpenSSH authorized key to access the database created in Oracle Cloud.

Step-1: Generate the key pair and save both keys in a directory. Also copy the OpenSSH-format public key (highlighted in the original screenshot) and save it in a file; this key will be used during cloud DB creation.
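
If you are on a Linux client rather than Windows/PuTTY, a minimal equivalent sketch using ssh-keygen (the key file name is illustrative):

$ ssh-keygen -t rsa -b 2048 -f ~/.ssh/dbcs_key   # writes dbcs_key (private) and dbcs_key.pub (public, OpenSSH format)
$ cat ~/.ssh/dbcs_key.pub                        # paste this value into the DBCS creation form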

Sunday, 11 September 2016

Enterprise Manager 13c failed to start with the error The loading of OPSS java security policy provider failed due to exception


EM 13c failed to start with the below error.

Error :
<BEA-090892> <The loading of OPSS java security policy provider failed due to exception.


$OMS_HOME/bin/emctl start oms

webtier started successfully

but the rest of the services (admin server, JVMD, BI server, etc.) could not start. When I checked the admin server log files, I found the below error:

OPSS Unable to start Weblogic Server XML20108 Fatal Error JPS02592 Failed To Push Ldap Config Data To LibOvd


Issue:

 Sep 6, 2016 7:03:16 PM oracle.security.jps.internal.igf.ovd.OvdIGFServiceProvider pushLdapNamesTolibOvd SEVERE:JPS-02592
<Sep 6, 2016 7:03:16 PM SGT> <Error> <Security> <BEA-090892> <The loading of OPSS java security policy provider failed due to exception, see the exception stack trace or the server log file for

root cause. If still see no obvious cause, enable the debug flag -Djava.security.debug=jpspolicy to get more information. Error message: JPS-02592: Failed to push ldap config data to libOvd for

service instance "idstore.ldap" in JPS context "default", cause: org.xml.sax.SAXException: Error Parsing at line #1: 1.org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 1; <Line 1, Column

1>: XML-20108: (Fatal Error) Start of root element expected.>
<Sep 6, 2016 7:03:16 PM SGT> <Critical> <WebLogicServer> <BEA-000386> <Server subsystem failed. Reason: weblogic.security.SecurityInitializationException: The loading of OPSS java security policy

provider failed due to exception, see the exception stack trace or the server log file for root cause. If still see no obvious cause, enable the debug flag -Djava.security.debug=jpspolicy to get

more information. Error message: JPS-02592: Failed to push ldap config data to libOvd for service instance "idstore.ldap" in JPS context "default", cause: org.xml.sax.SAXException: Error Parsing at

line #1: 1.org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 1; <Line 1, Column 1>: XML-20108: (Fatal Error) Start of root element expected.
weblogic.security.SecurityInitializationException: The loading of OPSS java security policy provider failed due to exception, see the exception stack trace or the server log file for root cause. If

still see no obvious cause, enable the debug flag -Djava.security.debug=jpspolicy to get more information. Error message: JPS-02592: Failed to push ldap config data to libOvd for service instance

"idstore.ldap" in JPS context "default", cause: org.xml.sax.SAXException: Error Parsing at line #1: 1.
org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 1; <Line 1, Column 1>: XML-20108: (Fatal Error) Start of root element expected.
        at weblogic.security.service.CommonSecurityServiceManagerDelegateImpl.loadOPSSPolicy(CommonSecurityServiceManagerDelegateImpl.java:1402)
        at weblogic.security.service.CommonSecurityServiceManagerDelegateImpl.initialize(CommonSecurityServiceManagerDelegateImpl.java:1022)
        at weblogic.security.service.SecurityServiceManager.initialize(SecurityServiceManager.java:873)
        at weblogic.security.SecurityService.start(SecurityService.java:141)
        at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64)
        Truncated. see log file for complete stacktrace
Caused By: oracle.security.jps.JpsRuntimeException: JPS-02592: Failed to push ldap config data to libOvd for service instance "idstore.ldap" in JPS context "default", cause:

org.xml.sax.SAXException: Error Parsing at line #1: 1.
org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 1; <Line 1, Column 1>: XML-20108: (Fatal Error) Start of root element expected.


Possible Reasons and Resolution:
================================

1.  Verify whether your file system has disk space issues (check the file system where EM 13c is installed). One known cause of XML file corruption is the filesystem running out of space.

2.  If it is not related to file system space issues, it might be because of a corrupted adapters.os_xml.

3.  As an additional symptom, $DOMAIN_HOME/config/fmwconfig/ovd/default/server.os_xml was detected as zero (0) bytes.

Corruption of adapters.os_xml or server.os_xml under this libOVD location may lead to WebLogic Server startup failure.

Navigate to the fmwconfig folder and check adapters.os_xml, server.os_xml, or both:
$DOMAIN_HOME/config/fmwconfig/ovd/default/adapters.os_xml

There is a chance that either of those XML files is corrupted, causing the WebLogic Server startup to fail.


4.  In case the file is corrupted, perform the following steps (with caution; see the shell sketch after this list):

   i) Take a backup of $DOMAIN_HOME/config/fmwconfig/ovd/default/adapters.os_xml
      and/or server.os_xml
   ii) Delete the corrupted adapters.os_xml and/or server.os_xml files under
        $DOMAIN_HOME/config/fmwconfig/ovd/default/

   iii) Copy adapters.os_xml and/or server.os_xml from
         $MW_HOME/oracle_common/modules/oracle.ovd_11.1.1/templates/ to
         $DOMAIN_HOME/config/fmwconfig/ovd/default/, or restore the files from a backup.

5.  Restart the server. It worked for me!!
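
A minimal shell sketch of steps 4(i)-4(iii), assuming $DOMAIN_HOME and $MW_HOME are set for your EM 13c installation:

# i) back up the current (possibly corrupt) files
cd $DOMAIN_HOME/config/fmwconfig/ovd/default
cp adapters.os_xml adapters.os_xml.bak
cp server.os_xml server.os_xml.bak

# ii + iii) replace them with the pristine templates
cp $MW_HOME/oracle_common/modules/oracle.ovd_11.1.1/templates/adapters.os_xml .
cp $MW_HOME/oracle_common/modules/oracle.ovd_11.1.1/templates/server.os_xml .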

Hope this post helps you resolve the issue; it worked perfectly for me, and I hope it works for you as well...

Sunday, 24 July 2016

Latest Announcement !!!!Upcoming Event!!!

We are delighted to announce that we will shortly be starting our online classes and demonstrations on various Oracle technologies. We have experts with a wide range of experience in Oracle technologies (SQL/PLSQL, Oracle Database Administration, Oracle Real Application Clusters, Oracle Disaster Recovery, Oracle E-Business Suite Administration, Data Warehousing, etc.).

These will be completely online classes with demonstrations, and materials will be provided. Our service portfolio covers the technologies below, for job seekers eager to change their current working technology and for those ready to join IT organizations to start their careers.

Service Portfolio :

1. Oracle sql/ Plsql

2. Oracle Database Administration

3. Oracle E-Business administration

4. Oracle Database Security   (NEW)

5. Oracle High Availability

6. Oracle Disaster Recovery (Maximum Availability Architecture)

7. Linux Administration Basics and Shell Scripting

8. Data Warehousing (Informatica)

We will also try to run on-demand classes as and when required for enthusiastic learners.

For subscribing to any of the above services, we are trying to set a price considerably lower than what you would get locally.


Thank You,
Please feel free to write to viewssharings.blogspot.in@gmail.com




Friday, 15 July 2016

Oracle Database Security

1. Why Database Security ?

   Databases are the core component of an enterprise, as they hold the business information accumulated over the years; hence data protection is essential, and so is database protection.

2. Why Oracle Database Security ?

   Oracle offers customers end-to-end security products and solutions to meet their business needs in securing their infrastructure. The Oracle Database security product portfolio contains:

1. Database Vault
2. Database Masking and Subsetting
3. Transparent Data Encryption and Data Redaction
4. Data Encryption
5. Oracle Wallet
6. Oracle Key Vault
7. Audit Vault and Database Firewall

The majority of Oracle Database security products support both on-premises and Oracle Public Cloud infrastructures.


Oracle Database Masking and Subsetting : 

The Oracle Data Masking and Subsetting pack in Oracle Enterprise Manager enables organisations to maximize the business value of their production data by leveraging it in various non-production systems, while at the same time ensuring that the copied data is fully sanitized, so that security and privacy remain in compliance with regulations and costs stay low.

* The Data Masking and Subsetting solution makes it easy to identify sensitive data through automated discovery jobs and reusable data models.

* It provides an integrated solution to mask and optionally subset data in a single workflow.

* It provides high-fidelity data outputs by explicitly tracking data relationships and maintaining referential integrity.

* It facilitates masking and subsetting data either in the database or by real-time extraction of data from a database.

* It can mask both Oracle and non-Oracle databases.

* It supports on-premises and Oracle Public Cloud databases in a single solution.

High Level steps for masking and subsetting:

1. Create Data Model: Discovers Sensitive Data and Data relationship

2. Select formats and criteria : Masking formats and Templates, Goals and Conditions for subsetting

3. Preview and Validate : Preview masking algorithm results, preview subset reduction results

4. Execute Transformation: in-database mode or in-export mode




Thanking You,

Manoj Kumar 
Oracle Database Security Consultant, EMEA presales







Saturday, 4 June 2016

Linux: How to Configure the DNS Server for 11gR2 SCAN (Doc ID 1107295.1)


Applies to:

Oracle Database - Enterprise Edition - Version 11.2.0.1 to 11.2.0.3 [Release 11.2]
Generic Linux
The commands listed in this Note were tested on Red Hat Enterprise Server 5 Update 2. For other Linux environments it should be similar.

Goal

This note explains how to configure the DNS to accommodate SCAN-VIP. In most cases this task is carried out by the Network Administrator, but awareness of these steps can be very useful for assisting your network administrator in configuring DNS properly for SCAN and/or to provide the ability to configure DNS in a sandbox environment.

If there is no separate DNS Server box available for your test case, you can have one of the cluster nodes (example: rac1 or rac2) also acting as the DNS server. Note, however, that using one of your cluster nodes as your DNS server is not supported in production.

This note will demonstrate how to prepare the SCAN-IP on a Linux DNS Server

When installing Grid Infrastructure, there are 2 options:

1. Configure GNS and let it handle name resolution
OR
2. Choose not to configure GNS and configure each Node and SCAN name with IP addresses defined in DNS

For the purpose of this note, we will not involve GNS (see Note:946452.1 for how to configure GNS).

The three nodes involved in this case are:  rac1, rac2, and dns1.  The domain is:  testenv.com

Node Name           Public IP            Private IP              VIP IP          
rac1.testenv.com     17.17.0.1            172.168.2.1           192.168.2.221
rac2.testenv.com     17.17.0.2            172.168.2.2           192.168.2.222
dns1.testenv.com     17.17.0.35            

The target scan-vip name is: rac-scan  
rac-scan will be configured with the following 3 IP addresses:  192.168.2.11, 192.168.2.12, 192.168.2.13

Solution

1.  On dns1.testenv.com install the DNS Server Package:
# yum install bind-libs bind bind-utils

Three packages must be installed on Linux for DNS Server:
  • bind (includes DNS server, named)
  • bind-utils (utilities for querying DNS servers about host information)
  • bind-libs (libraries used by the bind server and utils package)
You can obtain an account for the yum server, which will install the packages for you automatically. 

OR

You can manually download these packages:
  • bind.XXX.rpm (for example bind-9.2.4-22.el3.i386.rpm)
  • bind-utils.XXX.rpm
  • bind-libs.XX.rpm
And use the rpm command to do the DNS Server installation (For example)
#  rpm -Uvh bind-9.2.3-1.i386.rpm

2. On  dns1.testenv.com system edit the "/etc/named.conf" file

a. Configure the "forwarder" under "options" in "/etc/named.conf " (If you do not have another DNS or Router that can resolve names for you, skip this step) :
options {
.
.
// Forwarder: Anything this DNS can't resolve gets forwarded to other DNS.
forwarders { 10.10.1.1; };  // This is the IP for another DNS/Router
};

b. Configure Zone Entries for your domain in "/etc/named.conf "  
If you are using localdomain, it has been automatically configured and you can skip this step.
For this case we are using "testenv.com" so here we need to add the following lines to "/etc/named.conf"
zone "testenv.com" IN {
type master;
file "testenv.com.zone";
allow-update { none; };
};

The "file" parameter specifies the name of the file in the "/var/named/" directory that contains the configuration for this zone.

c. Configure reverse lookup in "/etc/named.conf "
Reverse lookup is used to let the client find out whether the hostname matches the related IP. Because we are using 192.168.2.X for the VIPs and SCAN-VIPs, we need to configure reverse lookup for 192.168.2.X.

To configure reverse lookup add the following to "/etc/named.conf"
zone "2.168.192.in-addr.arpa." IN {
type master;
file "2.168.192.in-addr.arpa";
allow-update { none; };
};

3. On dns1.testenv.com edit the config  files under /var/named
a. Edit the DNS Zone Configuration file:
If you are using localdomain you can edit /var/named/localdomain.zone
For this case we edit the files testenv.com.zone and localdomain.zone.

Add the lines below to the end of this file:
rac1-vip IN A 192.168.2.221
rac2-vip IN A 192.168.2.222
rac-scan IN A 192.168.2.11
rac-scan IN A 192.168.2.12
rac-scan IN A 192.168.2.13

Put all the VIPs and SCAN VIPs in the DNS config file.  If you only want the DNS to resolve the scan-vip, only include rac-scan with its three corresponding IP addresses in the file.  Also, if you only need one SCAN IP, you can put only one entry in the file.  (If you are creating the zone file from scratch, it also needs an SOA/NS header; see the sketch below.)
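
If testenv.com.zone is a brand-new file, a minimal SOA/NS header must precede the A records, and the serial must be incremented on every edit. A sketch mirroring the reverse-zone file in the next step (timers are illustrative):

$TTL 1H
@   IN SOA dns1.testenv.com. root.testenv.com. (
        2    ; serial - increment on every change
        3H   ; refresh
        1H   ; retry
        1W   ; expire
        1H ) ; minimum
    IN NS  dns1.testenv.com.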

b. Create/Edit the "/var/named/2.168.192.in-addr.arpa" file for reverse lookups as follows:
$ORIGIN 2.168.192.in-addr.arpa.
$TTL 1H
@ IN SOA testenv.com. root.testenv.com. ( 2
3H
1H
1W
1H )
2.168.192.in-addr.arpa. IN NS testenv.com.

221 IN PTR rac1-vip.testenv.com.
222 IN PTR rac2-vip.testenv.com.
11 IN PTR rac-scan.testenv.com.
12 IN PTR rac-scan.testenv.com.
13 IN PTR rac-scan.testenv.com.

4. On dns1.testenv.com : stop/start DNS Server to ensure it can be successfully restarted and make sure the DNS Server will be started automatically:
# service named stop
# service named start
# chkconfig named on

The DNS Server configuration has been completed, next we need to point our RAC nodes to use this DNS server for name resolution.

5. Configure "/etc/resolv.conf" on all nodes:
nameserver 17.17.0.35
search localdomain testenv.com
It should point to the DNS Server Address.  In this case nameserver has been set to the IP address of dns1.  If the node itself is also acting as the DNS Server it should point to its own IP address.

6. Optionally change the hosts search order in  /etc/nsswitch.conf on all nodes:
hosts: dns files nis
The default sequence is: files nis dns; We must move dns to the first entry.
If there is nscd (Name Service Caching Daemon) running, then service nscd needs to be restarted:
# /sbin/service nscd restart


At this point the configuration is complete.  We should be able to test the forward and reverse lookups using the "nslookup" command.

# nslookup rac-scan.testenv.com
Server: 17.17.0.35
Address: 17.17.0.35#53

Name: rac-scan.testenv.com
Address: 192.168.2.11
Name: rac-scan.testenv.com
Address: 192.168.2.12
Name: rac-scan.testenv.com
Address: 192.168.2.13

# nslookup 192.168.2.11
Server: 17.17.0.35
Address: 17.17.0.35#53

11.2.168.192.in-addr.arpa name = rac-scan.testenv.com.

# nslookup 192.168.2.12
Server: 17.17.0.35
Address: 17.17.0.35#53

12.2.168.192.in-addr.arpa name = rac-scan.testenv.com.

# nslookup 192.168.2.13
Server: 17.17.0.35
Address: 17.17.0.35#53

13.2.168.192.in-addr.arpa name = rac-scan.testenv.com.

If you try to ping rac-scan.testenv.com at this moment, it will display one of the SCAN IP addresses, but the ping will not succeed. This is the correct behavior.

Once the GI software has been installed and is running it will bring these IP addresses online and at that point the SCAN IP should be pingable.
                   DATA GUARD SETUP WITH DG BROKER IN A SINGLE INSTANCE
                    ====================================================
 CONFIGURATION

PRIMARY
==========
hostname : testvm1
DATABASE: DGDB
DB_UNIQUE_NAME: DGDB

STANDBY
==========
HOSTNAME: testvm2
DATABASE: DGDB
DB_UNIQUE_NAME: DGDBSTBY

BROKER SERVER
===========
DGBroker Server

HOSTNAME: myhost1
DATABASE: TEST
INSTANCE: TEST

On the primary (testvm1):

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
PRIMARYDB

SQL> select DB_UNIQUE_NAME from v$database;

DB_UNIQUE_NAME
------------------------------
DGDB

SQL> show parameter service_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      DGDB
SQL>

password file creation on primary
================================
$orapwd  file=orapwPRIMARYDB password=SYSTEM force=y

Copy the file to the standby host and rename it as below (see the scp sketch just after):

$mv orapwPRIMARYDB orapwSTANDBYDB
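
A sketch of the copy step, assuming the default $ORACLE_HOME/dbs location and the standby ORACLE_HOME shown in the listener configuration below:

$ scp $ORACLE_HOME/dbs/orapwPRIMARYDB oracle@testvm2:/home/oracle/app/oracle/11.2.0/db_1/dbs/
$ ssh oracle@testvm2 'cd /home/oracle/app/oracle/11.2.0/db_1/dbs && mv orapwPRIMARYDB orapwSTANDBYDB'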


Enable archivelog mode on primary
================================
SELECT log_mode FROM v$database;

LOG_MODE
------------
ARCHIVELOG
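
The database is already in ARCHIVELOG mode here. If LOG_MODE had shown NOARCHIVELOG, a minimal sketch to enable it (requires a clean restart):

SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;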

force logging on primary
=========================

SQL> ALTER DATABASE FORCE LOGGING;

Database altered.

parameter setting in primary
==============================
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(DGDB,DGDBSTBY)' SCOPE=SPFILE;

ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DGDB' SCOPE=SPFILE;

ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=DGDBSTBY ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DGDBSTBY' SCOPE=SPFILE;

ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=ENABLE SCOPE=SPFILE;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=SPFILE;

ALTER SYSTEM SET REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE SCOPE=SPFILE;

ALTER SYSTEM set log_archive_format='%t_%s_%r.dbf' SCOPE=SPFILE;


ALTER SYSTEM SET FAL_SERVER=DGDBSTBY SCOPE=SPFILE;

ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=20 SCOPE=SPFILE;


ALTER SYSTEM SET DB_FILE_NAME_CONVERT='/home/oracle/app/oradata/DGDB/','/home/oracle/app/oradata/DGDBSTBY/' SCOPE=SPFILE;

ALTER SYSTEM SET LOG_FILE_NAME_CONVERT='/home/oracle/app/oradata/DGDB/','/home/oracle/app/oradata/DGDBSTBY/'  SCOPE=SPFILE;

ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO SCOPE=SPFILE;

Add standby redolog files to primary
====================================
ALTER DATABASE ADD STANDBY LOGFILE  ('/home/oracle/app/oradata/DGDB/stbyredo01.log') size 50M;
ALTER DATABASE ADD STANDBY LOGFILE  ('/home/oracle/app/oradata/DGDB/stbyredo02.log') size 50M;
ALTER DATABASE ADD STANDBY LOGFILE  ('/home/oracle/app/oradata/DGDB/stbyredo03.log') size 50M;
ALTER DATABASE ADD STANDBY LOGFILE  ('/home/oracle/app/oradata/DGDB/stbyredo04.log') size 50M;


Take a backup with archivelogs (run in RMAN)
==============================

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;

Then create a standby controlfile (run in SQL*Plus):

SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/home/oracle/stbyctl.ctl';


STANDBY CONFIGURATION:
======================
COPY THE PASSWORD FILE, PARAMETER FILE, TNSNAMES.ORA, LISTENER.ORA, AND BACKUPS TO THE STANDBY (see the sample tnsnames.ora entries below)


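The TNS aliases PRIMARY and STANDBY used with RMAN below are assumed to resolve to the two instances; a minimal sketch of the tnsnames.ora entries (hosts and service names taken from this setup):

PRIMARY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = testvm1)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = DGDB))
  )

STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = testvm2)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = DGDBSTBY))
  )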



[oracle@testvm2 backups]$ rman TARGET sys/SYSTEM@PRIMARY AUXILIARY sys/SYSTEM@STANDBY

Recovery Manager: Release 11.2.0.3.0 - Production on Sat Sep 13 20:40:58 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DGDB (DBID=2204161043)
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00554: initialization of internal recovery manager package failed
RMAN-04006: error from auxiliary database: ORA-12528: TNS:listener: all appropriate instances are blocking new connections


SOLUTION
===========
INITIAL LISTENER.ORA CONFIGURATION
=================================
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = testvm2)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

ADR_BASE_LISTENER = /home/oracle/app

NEW CONFIGURATION
===================

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = DGDBSTBY )
      (ORACLE_HOME = /home/oracle/app/oracle/11.2.0/db_1/)
      (SID_NAME = STANDBYDB)
    )
)

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = testvm2)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

ADR_BASE_LISTENER = /home/oracle/app



restore the database on the standby
=================================
Copy the backups and the standby controlfile to the standby location '/home/oracle/backups/',

and then connect:

$rman TARGET sys/SYSTEM@PRIMARY AUXILIARY sys/SYSTEM@STANDBY

DUPLICATE DATABASE
FOR STANDBY
BACKUP LOCATION '/home/oracle/backups/'
DORECOVER
NOFILENAMECHECK;

Start Apply Process ON STANDBY
================================

# Background redo apply. Control is returned to the session once the apply process is started.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
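
To confirm the apply (MRP0) process is actually running on the standby, a standard check:

SQL> select process, status, sequence# from v$managed_standby;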


==========Validating configuration=================

on primary
============
SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    1310745

on standby
===========
SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    1310750

on both side to check the status
=============================
sql>select message from v$dataguard_status;

parameter for PRIMARY
========================

*.audit_file_dest='/home/oracle/app/admin/DGDB/adump'
*.audit_trail='db'
*.compatible='11.2.0.0.0'
*.control_files='/home/oracle/app/oradata/DGDB/control01.ctl','/home/oracle/app/fast_recovery_area/DGDB/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_file_name_convert='/home/oracle/app/oradata/DGDB/','/home/oracle/app/oradata/DGDBSTBY/'
*.db_name='DGDB'
*.db_recovery_file_dest='/home/oracle/app/fast_recovery_area'
*.db_recovery_file_dest_size=4322230272
*.diagnostic_dest='/home/oracle/app'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=PRIMARYDBXDB)'
*.fal_server='DGDBSTBY'
*.log_archive_config='DG_CONFIG=(DGDB,DGDBSTBY)'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DGDB'
*.log_archive_dest_2='SERVICE=DGDBSTBY ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DGDBSTBY'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='%t_%s_%r.dbf'
*.log_archive_max_processes=20
*.open_cursors=300
*.pga_aggregate_target=1671430144
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=5016387584
*.standby_file_management='AUTO'
*.undo_tablespace='UNDOTBS1'

parameter for STANDBY
=======================

*.audit_file_dest='/home/oracle/app/admin/DGDBSTBY/adump'
*.compatible='11.2.0.0.0'
*.control_files='/home/oracle/app/oradata/DGDBSTBY/control01.ctl','/home/oracle/app/fast_recovery_area/DGDBSTBY/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_file_name_convert='/home/oracle/app/oradata/DGDB/','/home/oracle/app/oradata/DGDBSTBY/'
*.db_name='DGDB'
*.db_recovery_file_dest='/home/oracle/app/fast_recovery_area'
*.db_recovery_file_dest_size=4322230272
*.db_unique_name='DGDBSTBY'
*.diagnostic_dest='/home/oracle/app'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=STANDBYDBXDB)'
*.fal_server='DGDB'
*.log_archive_config='DG_CONFIG=(DGDB,DGDBSTBY)'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DGDBSTBY'
*.log_archive_dest_2='SERVICE=DGDB ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DGDB'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='%t_%s_%r.dbf'
*.log_archive_max_processes=20
*.log_file_name_convert='/home/oracle/app/oradata/DGDB/','/home/oracle/app/oradata/DGDBSTBY/'
*.open_cursors=300
*.pga_aggregate_target=1671430144
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=5016387584
*.standby_file_management='AUTO'
*.undo_tablespace='UNDOTBS1'


After changing the parameter files, the password file needs to be created once again.

Note: create the password file on the primary machine, copy it with scp to the standby side, and rename it accordingly.

DG Broker configuration
==========================
alter system set dg_broker_config_file1='/u01/app/oracle/product/11.2.0/db_1/dbs/dr1TEST.dat' scope=both;

alter system set dg_broker_config_file2='/u01/app/oracle/product/11.2.0/db_1/dbs/dr2TEST.dat' scope=both;

alter system set dg_broker_start=TRUE scope=both;


[oracle@myhost1 ~]$ dgmgrl
DGMGRL for Linux: Version 11.2.0.3.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/SYSTEM@DGDB
Connected.
DGMGRL>
DGMGRL>

DGMGRL> create configuration 'TESTCONF' as primary database is 'DGDB' connect identifier is DGDB;

(Before running the above, configure the Data Guard broker parameters on the primary, as shown earlier.)

DGMGRL> add database 'DGDBSTBY' as connect identifier is DGDBSTBY;
DGMGRL>ENABLE CONFIGURATION;


DGMGRL> show database 'DGDB'

Database - DGDB

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    PRIMARYDB

Database Status:
SUCCESS

DGMGRL> show database 'DGDBSTBY'

Database - DGDBSTBY

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds
  Apply Lag:       0 seconds
  Real Time Query: ON
  Instance(s):
    STANDBYDB

Database Status:
SUCCESS

DGMGRL>

DGMGRL> show database verbose  'DGDB';

Database - DGDB

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    PRIMARYDB

  Properties:
    DGConnectIdentifier             = 'dgdb'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'ASYNC'
    DelayMins                       = '0'
    Binding                         = 'optional'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'AUTO'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '20'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = '/home/oracle/app/oradata/DGDB/, /home/oracle/app/oradata/DGDBSTBY/'
    LogFileNameConvert              = ''
    FastStartFailoverTarget         = ''
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    SidName                         = 'PRIMARYDB'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=testvm1)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DGDB_DGMGRL)(INSTANCE_NAME=PRIMARYDB)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = '%t_%s_%r.dbf'
    TopWaitEvents                   = '(monitor)'

Database Status:
SUCCESS


DGMGRL> show database verbose 'DGDBSTBY';

Database - DGDBSTBY

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds
  Apply Lag:       0 seconds
  Real Time Query: ON
  Instance(s):
    STANDBYDB

  Properties:
    DGConnectIdentifier             = 'dgdbstby'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'ASYNC'
    DelayMins                       = '0'
    Binding                         = 'OPTIONAL'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'AUTO'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '20'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = '/home/oracle/app/oradata/DGDB/, /home/oracle/app/oradata/DGDBSTBY/'
    LogFileNameConvert              = '/home/oracle/app/oradata/DGDB/, /home/oracle/app/oradata/DGDBSTBY/'
    FastStartFailoverTarget         = ''
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    SidName                         = 'STANDBYDB'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=testvm2)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DGDBSTBY_DGMGRL)(INSTANCE_NAME=STANDBYDB)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = '%t_%s_%r.dbf'
    TopWaitEvents                   = '(monitor)'

Database Status:
SUCCESS

DGMGRL> EDIT DATABASE 'DGDB' SET PROPERTY LogXptMode='SYNC';
DGMGRL> EDIT DATABASE 'DGDBSTBY' SET PROPERTY LogXptMode='SYNC';

DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY;

enabling fsfo
==============
DGMGRL> enable fast_start failover;

Error: ORA-16651: requirements not met for enabling fast-start failover

DGMGRL> edit database 'DGDB' set property StandbyFileManagement='AUTO';

DGMGRL> edit database 'DGDBSTBY' set property StandbyFileManagement='AUTO';



failover and switchover testing using DG Broker
===============================================

Data Guard Broker - Configure Fast Start Failover, Data Protection Levels and the Data Guard Observer (Doc ID 1508729.1)


DGMGRL> show configuration verbose;

Configuration - TESTCONF

  Protection Mode: MaxAvailability
  Databases:
    DGDBSTBY - Primary database
    DGDB     - (*) Physical standby database

  (*) Fast-Start Failover target

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'

Fast-Start Failover: ENABLED

  Threshold:        30 seconds
  Target:           DGDB
  Observer:         myhost1
  Lag Limit:        30 seconds (not in use)
  Shutdown Primary: TRUE
  Auto-reinstate:   TRUE

Configuration Status:
SUCCESS


Try to enable fsfo
====================
DGMGRL> enable fast_start failover;
Error: ORA-16651: requirements not met for enabling fast-start failover

Failed.

Disable managed recovery in the standby site
DGMGRL> edit database DGDB set state=APPLY-OFF;
Succeeded.


Data Guard Broker - Configure Fast Start Failover, Data Protection Levels and the Data Guard Observer (Doc ID 1508729.1) 
 

Once a broker configuration has been established, the business's requirements for protecting the environment from data loss need to be considered, and potentially addressed through further configuration of the Data Guard environment.

This document examines in detail the process for enabling the MaxAvailability and MaxProtection protection modes, Fast Start Failover and the Data Guard Observer.
Solution
The Current Broker Configuration

All sites are running 11.2.0.3 Enterprise Edition


Configuring Failover Thresholds

Should you need to alter the default thresholds that determine when and why a failover may occur, edit the following configuration settings.

To alter the amount of time the observer attempts to reconnect to the primary after it has detected a problem, and the number of seconds it waits on failed connection attempts before initiating a failover, edit the configuration property FastStartFailoverThreshold.

DGMGRL> edit configuration set property FastStartFailoverThreshold=45;
Property "faststartfailoverthreshold" updated



Display the configuration to confirm the settings have been successfully altered.

 Attempt to Enable Fast Start Failover
DGMGRL> enable fast_start failover;
Error: ORA-16651: requirements not met for enabling fast-start failover

Failed.


Currently the configuration does not allow fast-start failover to be enabled.


Troubleshooting a failure in enabling Fast Start Failover

Looking at the error, there are a number of potential causes, including the mix of protection mode and log shipping modes that the sites are currently configured with. These will need to be altered.


Establish Max Protection Mode

As this Data Guard configuration is to run in MaxProtection mode, the following is performed to establish this mode prior to enabling fast-start failover.

Currently it is running in MaxPerformance mode:

DGMGRL> show configuration verbose;

Configuration - TESTCONF

  Protection Mode: MaxAvailability
  Databases:
    DGDB     - Primary database
    DGDBSTBY - (*) Physical standby database

  (*) Fast-Start Failover target

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'

Fast-Start Failover: ENABLED

  Threshold:        30 seconds
  Target:           DGDBSTBY
  Observer:         myhost1
  Lag Limit:        30 seconds (not in use)
  Shutdown Primary: TRUE
  Auto-reinstate:   TRUE

Configuration Status:
SUCCESS

Log shipping is being performed using LGWR ASYNC (as seen via the LogXptMode property), standby file management is set to manual (StandbyFileManagement property), and no fast-start failover target (FastStartFailoverTarget property on the primary site) has been set. These will need to be altered.

Each site must also have Flashback Database enabled.


The sites' current configuration.
DGMGRL> show database verbose 'DGDB';

Database - DGDB

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    PRIMARYDB

  Properties:
    DGConnectIdentifier             = 'dgdb'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'SYNC'
    DelayMins                       = '0'
    Binding                         = 'optional'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'AUTO'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '20'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = '/home/oracle/app/oradata/DGDB/, /home/oracle/app/oradata/DGDBSTBY/'
    LogFileNameConvert              = ''
    FastStartFailoverTarget         = 'DGDBSTBY'
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    SidName                         = 'PRIMARYDB'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=testvm1)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DGDB_DGMGRL)(INSTANCE_NAME=PRIMARYDB)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = '%t_%s_%r.dbf'
    TopWaitEvents                   = '(monitor)'

Database Status:
SUCCESS


DGMGRL> show database verbose 'DGDBSTBY'

Database - DGDBSTBY

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds
  Apply Lag:       0 seconds
  Real Time Query: OFF
  Instance(s):
    STANDBYDB

  Properties:
    DGConnectIdentifier             = 'dgdbstby'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'SYNC'
    DelayMins                       = '0'
    Binding                         = 'OPTIONAL'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'AUTO'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '20'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = '/home/oracle/app/oradata/DGDB/, /home/oracle/app/oradata/DGDBSTBY/'
    LogFileNameConvert              = '/home/oracle/app/oradata/DGDB/, /home/oracle/app/oradata/DGDBSTBY/'
    FastStartFailoverTarget         = 'DGDB'
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    SidName                         = 'STANDBYDB'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=testvm2)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DGDBSTBY_DGMGRL)(INSTANCE_NAME=STANDBYDB)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = 'USE_DB_RECOVERY_FILE_DEST'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = '%t_%s_%r.dbf'
    TopWaitEvents                   = '(monitor)'

Database Status:
SUCCESS


Disable managed recovery in the standby site
DGMGRL> edit database 'DGDBSTBY' set state=APPLY-OFF;
Succeeded.


Enable Flashback Database
============================
Flashback must be enabled in ALL the database sites.  It was turned on in the primary but had not yet been enabled in the standby site.

The standby site
SQL> select open_mode,flashback_on from v$database;

OPEN_MODE            FLASHBACK_ON
MOUNTED              NO

SQL> alter database flashback on;

Database altered.


Set Log shipping to LGWR SYNC by setting the property LogXPTMode to SYNC
=========================================================================
The Log shipping mode needs to be set to SYNC in both the sites.

DGMGRL> connect sys@rs1_prm_obsrv
Password:
Connected.

DGMGRL> edit database 'DGDB' set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL> edit database 'DGDBSTBY' set property LogXptMode='SYNC';
Property "logxptmode" updated

Alter the configuration to enable MaxAvailability.
===================================================
DGMGRL> edit configuration set protection mode as MaxAvailability;

DGMGRL> exit

In both sites set the property StandbyFileManagement to automatic
===================================================================
DGMGRL> edit database 'DGDB' set property StandbyFileManagement='AUTO';
Property "standbyfilemanagement" updated

DGMGRL> edit database 'DGDBSTBY' set property StandbyFileManagement='AUTO';
Property "standbyfilemanagement" updated

Enable Fast Start Failover
=================================

Now that the broker configuration is complete, fast-start failover can be enabled.
DGMGRL> enable fast_start failover;
Enabled.


Start the Data Guard Broker Observer in the Observer Site

Now that fast-start failover is enabled, the observer can be started. Log into the observer machine and start the fast-start failover observer.
[oracle@grid2vm1 ]$ dgmgrl -logfile /home/oracle/obsrv.log sys/dba4ora@rs1_prm_obsrv "start observer" &


FSFO TESTING
====================

NOW FSFO IS ENABLED IN DG BROKER CONFIGURATION

DGMGRL> show configuration verbose;

Configuration - TESTCONF

  Protection Mode: MaxAvailability
  Databases:
    DGDB     - Primary database
    DGDBSTBY - (*) Physical standby database

  (*) Fast-Start Failover target

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'

Fast-Start Failover: ENABLED

  Threshold:        30 seconds
  Target:           DGDBSTBY
  Observer:         myhost1
  Lag Limit:        30 seconds (not in use)
  Shutdown Primary: TRUE
  Auto-reinstate:   TRUE

Configuration Status:
SUCCESS


STEP-1
===========

on primary DGDB
=================
SQL> shutdown abort
ORACLE instance shut down.

THE OBSERVER WILL AUTOMATICALLY FAIL OVER AND CONFIGURE DGDBSTBY AS THE NEW PRIMARY.

NOW WE NEED TO MAKE THE DGDB DATABASE A STANDBY, as follows.

Mount the DGDB database:

SQL> startup mount

Now connect to the broker server myhost1 and run:
$dgmgrl
Connect to the DGDBSTBY database:
DGMGRL> CONNECT sys/SYSTEM@DGDBSTBY
DGMGRL> reinstate database 'DGDB'
Reinstating database "DGDB", please wait...
Reinstatement of database "DGDB" succeeded
DGMGRL>


After that we can check the primary (DGDBSTBY) and the standby (DGDB):

SQL>SELECT OPEN_MODE,DATABASE_ROLE FROM V$DATABASE;