Thursday, February 19, 2015

11g to 12c Upgrade/Migration Using Full Transportable Tablespace & Data Pump


With the advent of 12c and Oracle trying to convince people to migrate to it, we decided to test migrating one of our OEM repository databases running on 11.2.0.3 to 12c. The approach we chose is the full transportable tablespace (TTS) + Data Pump feature.

When using conventional methods to move data, Oracle Data Pump uses either external tables or direct path unload to extract data. While the choice between these two access methods is based on the structure and types of the data being unloaded, both methods efficiently extract logical subsets of data from an Oracle database. 

In contrast, a transportable move of data and indexes involves the physical movement of one or more tablespace data files. The data segments inside the tablespace data files are not read individually. Instead, the export operation extracts the metadata that describes the objects containing storage within each data file, and each file is moved as a single entity.
Moving large volumes of data using transportable tablespaces can be faster than conventional data movement because there is no need to interpret and extract individual rows of data or index entries. It is possible to move individual tables or partitions in a transportable manner, but the entire tablespace data file is moved in these cases as well. Understanding the difference between conventional and transportable data movement is helpful when considering the distinction between administrative and user tablespaces. 

For the purposes of a full transportable export, administrative tablespaces are the tablespaces provided by Oracle, such as SYSTEM, SYSAUX, TEMP, and UNDO. These tablespaces contain the procedures, packages, and seed data for the core Oracle database functionality and Oracle-provided database components such as Oracle Spatial, Oracle Text, OLAP, JAVAVM, and XML Database. 
In contrast, user tablespaces are those tablespaces defined by database users or applications. These may store user data, application data, and any other information defined by users of the database.

One consideration specific to full transportable export/import arises when there exists a database object (e.g., a partitioned table) that is stored across both user and administrative tablespaces. Storing an object in this way is generally not good practice, but it is possible. If there is an object with storage in both administrative and user tablespaces, then you can either redefine that object before transporting your data, or use conventional Data Pump export/import. It is worth detecting this condition before starting a full transportable export.
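A quick query against DBA_SEGMENTS is one way to run that check; the sketch below treats SYSTEM and SYSAUX as the administrative tablespaces, which is an assumption you should adjust for your own database:

-- objects whose segments span both administrative and user tablespaces
SELECT owner, segment_name, segment_type
FROM   dba_segments
GROUP  BY owner, segment_name, segment_type
HAVING COUNT(DISTINCT CASE WHEN tablespace_name IN ('SYSTEM','SYSAUX')
                           THEN 'ADMIN' ELSE 'USER' END) > 1;

Any rows returned here point to objects that should be redefined, or handled with conventional export/import, before the transport.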

Following is an outline of the steps one can use.

As a first step, identify the tablespaces needed for the application data and make sure the objects belonging to the application schemas are self-contained. If they are not, you can first consolidate them into a single tablespace for easier management, though that is not mandatory.
1. Identify the tablespaces needed for the applications.

SQL> select TABLESPACE_NAME,STATUS from dba_tablespaces;

TABLESPACE_NAME              STATUS
------------------------------ ---------
MGMT_ECM_DEPOT_TS        ONLINE
MGMT_TABLESPACE             ONLINE
MGMT_AD4J_TS                  ONLINE


To determine whether the tablespaces are self-contained, including verification that
referential integrity constraints will be valid after the transport, execute the following command on the source database.


SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('MGMT_ECM_DEPOT_TS,MGMT_TABLESPACE,MGMT_AD4J_TS', TRUE);
PL/SQL procedure successfully completed.


Note that you must include all user tablespaces in the database when performing this check for a full transportable export/import.
After invoking this PL/SQL procedure, you can see all violations by selecting from the
TRANSPORT_SET_VIOLATIONS view.
 

SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;
no rows selected


If the set of tablespaces is self-contained, this view will be empty. If any violations are listed, then you must address these issues before proceeding with the full transportable operation.

2. Put the tablespaces in read-only mode.
SQL> alter tablespace MGMT_ECM_DEPOT_TS read only;
Tablespace altered.

SQL>  alter tablespace  MGMT_TABLESPACE read only;
Tablespace altered.

SQL>  alter tablespace MGMT_AD4J_TS read only;
Tablespace altered.
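
Before exporting, a quick status check confirms all three application tablespaces are now read only (the LIKE pattern simply matches the MGMT% names used in this example):

SQL> select tablespace_name, status from dba_tablespaces where tablespace_name like 'MGMT%';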
 


3. Export the metadata of the needed tablespaces.

expdp parfile=expdp.par 
# vi expdp.par
full=Y
transportable=always
version=12
directory=DATA_PUMP_DIR
dumpfile=oemtestdb.dmp
metrics=y
exclude=statistics
#encryption_password=secret123word456
logfile=full_tts_export.log
EXCLUDE=TABLESPACE:"IN ( \'TABLESPACE:USERS\', \'TABLESPACE:UNDOTBS1\', \'TABLESPACE:TEMP\', \'TABLESPACE:EXAMPLE\') "


Final output -

Master table "SYS"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_FULL_01 is:
  /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/log/gpdb.dmp
******************************************************************************
Datafiles required for transportable tablespace MGMT_AD4J_TS:
  +RAC_DATA/gpdb/datafile/mgmt_ad4j_ts.339.843266579
Datafiles required for transportable tablespace MGMT_ECM_DEPOT_TS:
  +RAC_DATA/gpdb/datafile/mgmt_ecm_depot_ts.341.843266557
Datafiles required for transportable tablespace MGMT_TABLESPACE:
  +RAC_DATA/gpdb/datafile/mgmt_tablespace.340.843266559
Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at 10:03:27

4. If you are using ASM to store the files, connect to the ASM instance using asmcmd and copy the files to the target server as follows.

ASMCMD [+rac_data/gpdb/datafile] > ls -ltr
WARNING:option 'r' is deprecated for 'ls'
please use 'reverse'

Type      Redund  Striped  Time             Sys  Name
DATAFILE  UNPROT  COARSE   MAR 27 04:00:00  Y    EXAMPLE.1618.843199351
DATAFILE  UNPROT  COARSE   MAR 27 04:00:00  Y    MGMT_ECM_DEPOT_TS.341.843266557
DATAFILE  UNPROT  COARSE   MAR 27 04:00:00  Y    MGMT_TABLESPACE.340.843266559
DATAFILE  UNPROT  COARSE   MAR 27 04:00:00  Y    SYSAUX.500.843199155
DATAFILE  UNPROT  COARSE   MAR 27 04:00:00  Y    UNDOTBS1.338.843265261
DATAFILE  UNPROT  COARSE   MAR 27 04:00:00  Y    UNDOTBS1.499.843199157
DATAFILE  UNPROT  COARSE   MAR 27 04:00:00  Y    USERS.1617.843199353
DATAFILE  UNPROT  COARSE   MAR 27 09:00:00  Y    MGMT_AD4J_TS.339.843266579
DATAFILE  UNPROT  COARSE   MAR 27 10:00:00  Y    SYSTEM.1624.843199155

ASMCMD [+rac_data/gpdb/datafile] > cp mgmt_ad4j_ts.339.843266579 /u02/tmp
copying +rac_data/gpdb/datafile/mgmt_ad4j_ts.339.843266579 -> /u02/tmp/mgmt_ad4j_ts.339.843266579

ASMCMD [+rac_data/gpdb/datafile] > cp mgmt_ecm_depot_ts.341.843266557 /u02/tmp
copying +rac_data/gpdb/datafile/mgmt_ecm_depot_ts.341.843266557 -> /u02/tmp/mgmt_ecm_depot_ts.341.843266557

ASMCMD [+rac_data/gpdb/datafile] > cp mgmt_tablespace.340.843266559 /u02/tmp
copying +rac_data/gpdb/datafile/mgmt_tablespace.340.843266559 -> /u02/tmp/mgmt_tablespace.340.843266559


COPY THESE DATAFILES TO THE TARGET SERVER AND CHANGE PERMISSIONS ACCORDINGLY (IN THIS CASE I HAVE CONFIGURED AN ACFS MOUNT ACROSS THE CLUSTER, SO I DON'T REALLY NEED TO COPY THEM TO THE TARGET HOST, ONLY TO THE TARGET LOCATION)

5. Copy the dump file to the DATA_PUMP_DIR location on the target server.
DATA_PUMP_DIR - /u01/app/oracle/admin/gpdbupg/dpdump/
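
If you are unsure where DATA_PUMP_DIR points on the target instance, a quick lookup shows the OS path:

SQL> select directory_path from dba_directories where directory_name = 'DATA_PUMP_DIR';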
                                                                                                         
NOW, ON THE 12C DATABASE, IMPORT THE DUMP

impdp full=Y directory=DATA_PUMP_DIR dumpfile=gpdb.dmp logfile=impdpgpdbupg.log  transport_datafiles='/u02/app/product/12.1.0/datafiles/gpdbupg/gpdbupg/mgmt_ecm_depot_ts.341.843266557','/u02/app/product/12.1.0/datafiles/gpdbupg/gpdbupg/mgmt_ad4j_ts.339.843266579','/u02/app/product/12.1.0/datafiles/gpdbupg/gpdbupg/mgmt_tablespace.340.843266559'

OR 
# impdp parfile=impdp.par

# vi impdp.par
FULL=Y
DIRECTORY=DATA_PUMP_DIR
#ENCRYPTION_PASSWORD=<enc_passwd>
DUMPFILE=oemtestdb.dmp
LOGFILE=src112fullimp.log
#############################
TRANSPORT_DATAFILES='/u02/app/product/12.1.0/datafiles/upgdb/MGMT_ECM_DEPOT_TS.DBF'
TRANSPORT_DATAFILES='/u02/app/product/12.1.0/datafiles/upgdb/MGMT_TABLESPACE.DBF'
TRANSPORT_DATAFILES='/u02/app/product/12.1.0/datafiles/upgdb/MGMT_AD4J_TS.DBF'
#############################

You just finished your 11g to 12c migration successfully!! All you need to do now is allow the users to connect to this DB and start rolling!!
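
One small follow-up before handing the system over: tablespaces plugged in via transportable import stay READ ONLY, so switch them back to READ WRITE first (shown here for the tablespaces used in this example):

SQL> alter tablespace MGMT_ECM_DEPOT_TS read write;
SQL> alter tablespace MGMT_TABLESPACE read write;
SQL> alter tablespace MGMT_AD4J_TS read write;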

There is another approach to full transportable export/import which employs the network mode of Data Pump and eliminates the need for a dump file. In this case, we will assume that the tablespace data files are in a location, such as a Storage Area Network (SAN) device, that is accessible to both the source and destination systems. This enables you to migrate from a non-CDB into a PDB with one Data Pump command:

1. Create a CDB on the destination system, including a PDB into which you will import the source database. 
2. Create a database link in the destination PDB, pointing to the source database. 
3. Set user tablespaces in the source database to READ ONLY. Copy the data files for tablespaces to target CDB/PDB. 
4. In the Oracle Database 12c environment, import directly from the source database into the destination PDB using full transportable export/import in network mode:

SQL> alter session set container=upgdb;
Session altered.

SQL> sho user
USER is "SYS"

SQL> CREATE DATABASE LINK src112 CONNECT TO system IDENTIFIED BY oracle USING 'TEST';

Database link created.

SQL> select * from sys.link_test@src112;

         N
----------
        10


# Create tablespaces on target database for application data.
SQL> show con_name

CON_NAME
------------------------------
UPGDB


#impdp sys/oracle@upgdb  parfile=upg.par


#vi upg.par
NETWORK_LINK=src112
VERSION=12
FULL=Y
TRANSPORTABLE=ALWAYS
EXCLUDE=STATISTICS
#ENCRYPTION_PASSWORD=<enc_passwd>
METRICS=Y
LOGFILE=tgtpdb_dir:src112fullimp.log
#############################
TRANSPORT_DATAFILES='/u02/app/product/12.1.0/datafiles/upgdb/MGMT_ECM_DEPOT_TS.DBF'
TRANSPORT_DATAFILES='/u02/app/product/12.1.0/datafiles/upgdb/MGMT_TABLESPACE.DBF'
TRANSPORT_DATAFILES='/u02/app/product/12.1.0/datafiles/upgdb/MGMT_AD4J_TS.DBF'
#############################


Whether you use conventional dump files or network mode, full transportable export/import is a convenient way to upgrade a database to a new version, migrate to a different operating system or hardware platform, migrate into a PDB – or even to perform all three of these upgrades and migrations in a single operation!

Tuesday, February 17, 2015


Change Public/SCAN/Virtual IP/Name in 11g/12c RAC


When working with a Real Application Clusters database, changing the infrastructure properties is a bit tricky, if not difficult. There is a dependency chain between several network and name properties, and from the Oracle perspective the dependent components also need to be modified.


I recently undertook this exercise for one of our RAC clusters, which resulted in this post. I have covered several use cases, as follows.

Case I.   Changing Public Hostname


The public hostname is recorded in the OCR; it is entered during the installation phase and cannot be modified after installation. The only way to change a public hostname is to delete the node and add it back with the new hostname, or to reinstall the clusterware.


Case II.  Changing Public IP Only Without Changing Interface, Subnet or Netmask

If only the public IP address changes and the new address is still in the same subnet, nothing needs to be done at the clusterware layer; all changes are made at the OS layer.

1. Shutdown Oracle Clusterware stack
2. Modify the IP address at network layer, DNS and /etc/hosts file to reflect the change
3. Restart Oracle Clusterware stack
The above change can be done in rolling fashion, e.g. one node at a time.
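
In command form, the per-node rolling change looks roughly like this (a sketch; the network configuration file shown is an assumption for a typical Linux install):

# as root, one node at a time
$GRID_HOME/bin/crsctl stop crs
# change the public IP at the OS layer (e.g. /etc/sysconfig/network-scripts/ifcfg-eth1),
# then update DNS and /etc/hosts to match
$GRID_HOME/bin/crsctl start crs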

Case III.  Changing SCAN / SCAN IP

The SCAN is used by Oracle database clients to access the cluster as a whole, and it can redirect a connection request to any available node where the requested service is running. It is a cluster resource and can fail over to another node if the node where it is running fails. The SCAN entry is stored in the OCR and its IP addresses are configured in DNS.
So to change the SCAN IP or name, first make the changes in DNS. Once they are in effect, modify the SCAN resource in the OCR as follows. Remember that the SCAN is the cluster entry point and handles connection load balancing, so restarting it requires a brief outage; existing connections are not affected.

[oracle@dbrac2 ~]$ srvctl  status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node dbrac1
[oracle@dbrac2 ~]$ srvctl  status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node dbrac1

[oracle@dbrac2 ~]$ srvctl config  scan

SCAN name: dbrac-scan.localdomain, Network: 1
Subnet IPv4: 192.168.2.0/255.255.255.0/eth1, static
Subnet IPv6:
SCAN 0 IPv4 VIP: 192.168.2.110
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:

[oracle@dbrac2 ~]$ srvctl stop scan_listener

[oracle@dbrac2 ~]$ srvctl stop scan
[oracle@dbrac2 ~]$ srvctl  status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is not running
[oracle@dbrac2 ~]$ srvctl  status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is not running

-- MODIFY THE SCAN IP IN DNS (OR /etc/hosts ON A TEST CLUSTER)
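
Before touching the OCR resource, it is worth confirming that name resolution already returns the new address, for example:

# nslookup dbrac-scan.localdomain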


[root@dbrac2 ~]$ srvctl  modify scan -scanname dbrac-scan.localdomain


[oracle@dbrac2 ~]$ srvctl  config scan

SCAN name: dbrac-scan.localdomain, Network: 1
Subnet IPv4: 192.168.2.0/255.255.255.0/eth1, static
Subnet IPv6:
SCAN 0 IPv4 VIP: 192.168.2.120
SCAN VIP is enabled.

[oracle@dbrac2 ~]$ srvctl start scan_listener

[oracle@dbrac2 ~]$ srvctl start scan

Since this is my test cluster, I have configured only one SCAN, but regardless, the process remains the same for a SCAN with 3 IPs.


Case IV.  Changing Virtual IP / Virtual Host Name

-- CHANGING NODE VIP FROM 192.168.2.103 TO 192.168.2.203 ON DBRAC1
-- SINCE NODE VIP IS PART OF NODE APPS ONE NEEDS TO MODIFY THE IP ADDRESS ON OS LEVEL AND THEN USE SRVCTL TO MODIFY NODEAPPS

[oracle@dbrac1 automation]$ srvctl  config vip -node dbrac1

VIP exists: network number 1, hosting node dbrac1
VIP Name: dbrac1-vip.localdomain
VIP IPv4 Address: 192.168.2.103
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:

[oracle@dbrac1 automation]$ srvctl  stop vip -node dbrac1

PRCR-1065 : Failed to stop resource ora.dbrac1.vip
CRS-2529: Unable to act on 'ora.dbrac1.vip' because that would require stopping or relocating 'ora.LISTENER.lsnr', but the force option was not specified

[oracle@dbrac1 automation]$ srvctl  stop vip -node dbrac1 -force

[oracle@dbrac1 automation]$ srvctl  status  vip -node dbrac1
VIP dbrac1-vip.localdomain is enabled
VIP dbrac1-vip.localdomain is not running

-- NOW MODIFY THE ADDRESS OF NODE VIP ON OS LEVEL USING EITHER /etc/hosts OR DNS.

-- Once done, use SRVCTL to modify OCR resource. 
-- Here I am not changing the name, but only IP 

[oracle@dbrac1 automation]$ srvctl modify nodeapps -node dbrac1 -address dbrac1-vip.localdomain/255.255.255.0/eth1

[oracle@dbrac1 automation]$ srvctl config   vip -node dbrac1
VIP exists: network number 1, hosting node dbrac1
VIP Name: dbrac1-vip.localdomain
VIP IPv4 Address: 192.168.2.203
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:

[root@dbrac2 ~]# srvctl  config nodeapps
Network 1 exists
Subnet IPv4: 192.168.2.0/255.255.255.0/eth1, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
VIP exists: network number 1, hosting node dbrac1
VIP Name: dbrac1-vip.localdomain
VIP IPv4 Address: 192.168.2.203
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 1, hosting node dbrac2
VIP Name: dbrac2-vip.localdomain
VIP IPv4 Address: 192.168.2.104
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL false
ONS is enabled
ONS is individually enabled on nodes:
ONS is individually disabled on nodes:

[oracle@dbrac1 automation]$ srvctl  start vip -node dbrac1

[root@dbrac2 ~]# srvctl  status  nodeapps
VIP dbrac1-vip.localdomain is enabled
VIP dbrac1-vip.localdomain is running on node: dbrac1
VIP dbrac2-vip.localdomain is enabled
VIP dbrac2-vip.localdomain is running on node: dbrac2
Network is enabled
Network is running on node: dbrac1
Network is running on node: dbrac2
ONS is enabled
ONS daemon is running on node: dbrac1
ONS daemon is running on node: dbrac2

-- ON NODE2

[oracle@dbrac2 ~]$ srvctl  stop vip -node dbrac2 -force
[oracle@dbrac2 ~]$ srvctl  status  vip -node dbrac2
VIP dbrac2-vip.localdomain is enabled
VIP dbrac2-vip.localdomain is not running

[oracle@dbrac2 ~]$ srvctl modify nodeapps -node dbrac2 -address dbrac2-vip.localdomain/255.255.255.0/eth1


[oracle@dbrac2 ~]$ srvctl  config vip  -n dbrac2
VIP exists: network number 1, hosting node dbrac2
VIP Name: dbrac2-vip.localdomain
VIP IPv4 Address: 192.168.2.204
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:

[oracle@dbrac2 ~]$ srvctl  start vip -node dbrac2


[oracle@dbrac2]$ srvctl status  nodeapps

VIP dbrac1-vip.localdomain is enabled
VIP dbrac1-vip.localdomain is running on node: dbrac1
VIP dbrac2-vip.localdomain is enabled
VIP dbrac2-vip.localdomain is running on node: dbrac2
Network is enabled
Network is running on node: dbrac1
Network is running on node: dbrac2
ONS is enabled
ONS daemon is running on node: dbrac1
ONS daemon is running on node: dbrac2

[oracle@dbrac1 automation]$ crs_stat -t  | grep vip


ora.dbrac1.vip ora....t1.type ONLINE    ONLINE    dbrac1     
ora.dbrac2.vip ora....t1.type ONLINE    ONLINE    dbrac2     
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    dbrac1


A special case for 11gR2 - changing the VIP name only:

Modifying the VIP hostname without changing the IP address.

For example: only the VIP hostname changes from dbrac1-vip to dbrac1-nvip; the IP and other attributes remain the same.


If the IP address is not changed, the above modify command will not change the USR_ORA_VIP value in the 'crsctl stat res ora.dbrac1.vip -p' output. Use the following command instead:


# crsctl modify res ora.dbrac1.vip -attr "USR_ORA_VIP=dbrac1-nvip"

Verify the changes for USR_ORA_VIP field:

# crsctl stat res ora.dbrac1.vip -p |grep USR_ORA_VIP

Three important flags for the crsctl stat res command are as follows.
   -p                Print static configuration
   -v                Print runtime configuration
   -f                Print full configuration


Wednesday, February 11, 2015



De-Configure & Re-configure GI / Cluster Nodes


There are various reasons for de-configuring a cluster node. One of the most common cases is when root.sh fails on a node. In that case one has to deconfigure the cluster node with the -force option and then run root.sh again manually.

There are other use cases for de-configuring as well.

This procedure applies only when all of the following are true:
  • One or a few nodes are having problems, but the other nodes are running fine - so there is no need to deconfigure the entire cluster
  • GI is a fresh installation (NOT an upgrade) without any patch set (an interim patch or patch set update (PSU) is fine)
  • Cluster parameters have not been changed since the original configuration, e.g. OCR/voting disks are in the same location, the network configuration has not been changed, etc.
Steps to de-configure-

As root, on each problematic node, execute:

# <$GRID_HOME>/crs/install/rootcrs.pl -deconfig -force

Steps to reconfigure       

         # <$GRID_HOME>/root.sh
There is one caveat to this process, and that is the use of the config.sh utility. The config.sh utility invokes a wizard that asks for the cluster properties and prepares the rootcrs parameter file, which is then copied to all cluster nodes. It also prompts you to run root.sh (which in turn calls the rootconfig script).
config.sh only needs to be run on one node of the cluster – all required files are propagated to the other nodes in the cluster.
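
For reference, the utility lives under the Grid home; the path below assumes a standard layout:

# <$GRID_HOME>/crs/config/config.sh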

Cases that config.sh can be used:


  • After GI cluster is deconfigured with rootcrs.pl on all nodes

  • After GI is cloned from other cluster

  • After GI is installed with software only option

Cases that config.sh is not the best tool:

For a GI cluster environment, config.sh will configure/reconfigure all nodes in the cluster, which means downtime. It is therefore not the best tool for the following scenarios, where no downtime is needed to accomplish the task:

  • One or more nodes are having problems while other nodes are running fine; in this case the node removal/addition procedure can be used to avoid downtime.

  • One or more nodes are having problems while other nodes are running fine, the cluster is a fresh installation without any patch set (a patch set update (PSU) is fine) regardless of how long it has been running, the cluster parameters have not been changed since the original configuration (e.g. OCR/voting disks in the same location, no network configuration changes), and GRID_HOME is intact. In this case, deconfig and reconfig can be used on each problematic node (as root, execute "$GRID_HOME/crs/install/rootcrs.pl -deconfig -force" then "$GRID_HOME/root.sh").

If the above doesn't fix the issue, the node removal/addition procedure should be used.

Some important points to remember when re-running root.sh:
  • Logfiles
    • root.sh log file: $GRID_HOME/cfgtoollogs/crsconfig/rootcrs_grac31.log
    • Checkpoint file: /u01/app/grid/Clusterware/ckptxxx.xml
  • Voting disks and OCR are re-discovered when root.sh is rerun
  • User-created resources such as database resources or service resources are rediscovered
  • root.sh reconfigures OLR / OHASD (files in /etc/rc.d and /etc/init.d are recreated)
  • Back up the OCR / OLR before rerunning root.sh
  • rootcrs.pl -deconfig -force -verbose -lastnode does not delete the +OCR diskgroup
  • If you need to clean up your OCR diskgroup you may need to use the dd command to erase the +OCR disks, as sketched below
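
A minimal sketch of that cleanup, assuming /dev/mapper/ocr_disk1 is one of the former +OCR disks (double-check the device path before running, since dd is destructive):

# as root, repeat for every disk of the old +OCR diskgroup
dd if=/dev/zero of=/dev/mapper/ocr_disk1 bs=1024k count=100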

Tuesday, February 10, 2015

12c Pluggable DB Point In Time Recovery


While working with pluggable databases in our test environment, we came across a situation where we wanted to do point-in-time recovery of one of the pluggable databases.

Version - 12.1.0.2
Container DB - ORCL1 & ORCL2 (RAC CDB)
Pluggable DB - PDB1


The command used to take the backup is as follows:

backup pluggable database pdb1 plus archivelog tag='pdb1 backup';

BEFORE INITIATING RESTORE AND RECOVERY, MAKE SURE THE PDB IS IN MOUNTED STATE.
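
If the PDB is still open, closing it across all RAC instances brings it to the MOUNTED state:

SQL> alter pluggable database pdb1 close immediate instances=all;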

SQL> select name, open_mode from gv$pdbs;

NAME                   OPEN_MODE
------------------------------ ----------
PDB$SEED               READ ONLY
PDB1                   MOUNTED
PDB$SEED               READ ONLY
PDB1                   MOUNTED


A PDB restore can be done either by connecting directly to the PDB or through the CDB. In this case, I will be doing the restore and recovery through the CDB, which is the preferred way. Following are the steps to analyse and perform the restore/recovery of the PDB.

-- To identify the point up to which I need to perform PITR.

RMAN> restore pluggable database pdb1 preview;
List of Backup Sets
===================
BS Key  Type LV Size       Device Type Elapsed Time Completion Time 
------- ---- -- ---------- ----------- ------------ -----------------
33      Full    607.06M    DISK        00:00:30     10-02-15 21:37:41
        BP Key: 33   Status: AVAILABLE  Compressed: NO  Tag: TAG20150210T213722
        Piece Name: +DATA/ORCL/0EAE548942B125F6E0536502A8C06EE1/BACKUPSET/2015_02_10/nnndf0_tag20150210t213722_0.495.871335443
  List of Datafiles in backup set 33
  Container ID: 3, PDB Name: PDB1
  File LV Type Ckp SCN    Ckp Time          Name
  ---- -- ---- ---------- ----------------- ----
  19      Full 9644899    10-02-15 21:36:26 +DATA/ORCL/0EAE548942B125F6E0536502A8C06EE1/DATAFILE/system.293.871263527
  20      Full 9644899    10-02-15 21:36:26 +DATA/ORCL/0EAE548942B125F6E0536502A8C06EE1/DATAFILE/sysaux.296.871263527
  21      Full 9644899    10-02-15 21:36:26 +DATA/ORCL/0EAE548942B125F6E0536502A8C06EE1/DATAFILE/users.292.871263605

archived logs generated after SCN 9646487 not found in repository
RMAN-05119: recovery can not be done to a consistent state.
Media recovery start SCN is 9646487
Recovery must be done beyond SCN 9646487 to clear datafile fuzziness
Finished restore at 10-02-15 21:56:09

-- Validate the backup to confirm it is restorable.

RMAN> restore pluggable database pdb1 validate;

Starting restore at 10-02-15 21:56:57
using channel ORA_DISK_1

channel ORA_DISK_1: starting validation of datafile backup set
channel ORA_DISK_1: reading from backup piece +DATA/ORCL/0EAE548942B125F6E0536502A8C06EE1/BACKUPSET/2015_02_10/nnndf0_tag20150210t213722_0.495.871335443
channel ORA_DISK_1: piece handle=+DATA/ORCL/0EAE548942B125F6E0536502A8C06EE1/BACKUPSET/2015_02_10/nnndf0_tag20150210t213722_0.495.871335443 tag=TAG20150210T213722
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:08
Finished restore at 10-02-15 21:57:05


[oracle@dbrac1 admin]$ rman target /
connected to target database: ORCL (DBID=1388310818)


RMAN> spool log to rman.log

RMAN> run {
set until time "to_date('2015-10-02 21:36:26','YYYY-DD-MM HH24:MI:SS')";
restore pluggable database pdb1;
recover pluggable database pdb1;
}

using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=96 instance=orcl1 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00019 to +DATA/ORCL/0EAE548942B125F6E0536502A8C06EE1/DATAFILE/system.293.871263527
channel ORA_DISK_1: restoring datafile 00020 to +DATA/ORCL/0EAE548942B125F6E0536502A8C06EE1/DATAFILE/sysaux.296.871263527
channel ORA_DISK_1: restoring datafile 00021 to +DATA/ORCL/0EAE548942B125F6E0536502A8C06EE1/DATAFILE/users.292.871263605
channel ORA_DISK_1: reading from backup piece +DATA/ORCL/0EAE548942B125F6E0536502A8C06EE1/BACKUPSET/2015_02_10/nnndf0_tag20150210t213722_0.495.871335443
channel ORA_DISK_1: piece handle=+DATA/ORCL/0EAE548942B125F6E0536502A8C06EE1/BACKUPSET/2015_02_10/nnndf0_tag20150210t213722_0.495.871335443 tag=TAG20150210T213722
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
Finished restore at 10-02-15 22:00:40


RMAN>
Starting recover at 10-02-15 22:00:42
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 10-02-15 22:00:45


RMAN> alter pluggable database pdb1 open;

Statement processed


SQL> select name, open_mode from gv$pdbs;

NAME                   OPEN_MODE
------------------------------ ----------
PDB$SEED               READ ONLY
PDB1                   READ WRITE
PDB$SEED               READ ONLY
PDB1                   MOUNTED


Hope this will help you in pluggable database recovery.

Thursday, February 5, 2015

Remote ASM File Copy Issue

Recently, during one of our restore exercises, we needed to transfer backup files from the ASM of Cluster A to the ASM of Cluster B. When I tried to copy the file using the following command, it failed repeatedly.

## To make the error message more meaningful, enable DBI tracing
export   DBI_TRACE=1

ASMCMD [+] > cp +image_bkp/prd/autobackup/2015_02_05/s_870845182.578.870845183 sys@rac_tgt.1529.+ASM2:+RAC5_DATA/BACKUP_PIECES/prd/

Enter password: ******
    -> DBI->connect(dbi:Oracle:host=rac_tgt.1529;port=1521;sid=+ASM2, sys, ****, HASH(0x1b0dde70))
connect using '(DESCRIPTION=(ADDRESS=(HOST=rac_tgt.1529)(PROTOCOL=tcp)(PORT=1521))(CONNECT_DATA=(SID=+ASM2)))'    !! ERROR: '12545' 'ORA-12545: Connect failed because target host or object does not exist (DBD ERROR: OCIServerAttach)' (err#0)
    <- connect('host=rac_tgt.1529;port=1521;sid=+ASM2', 'sys', ...)= undef at DBI.pm line 637
       DBI connect('host=rac_tgt.1529;port=1521;sid=+ASM2','sys',...) failed: ORA-12545: Connect failed because target host or object does not exist (DBD ERROR: OCIServerAttach)
ASMCMD-8201: (Bad argc for RDBMS:ASMCMD-8201)

ASMCMD [+] > cp --port 1523 +image_bkp/prd/autobackup/2015_02_05/s_870845182.578.870845183 sys@rac_tgt.+ASM2:+RAC5_DATA/BACKUP_PIECES/prd/s_870845182.578.870845183 --port 1529


copying +image_bkp/prd/autobackup/2015_02_05/s_870845182.578.870845183 -> rac_tgt.intra.searshc.com:+RAC5_DATA/BACKUP_PIECES/prd/s_870845182.578.870845183
    !! ERROR: '15056' 'ORA-15056: additional error message
ORA-15046: ASM file name '+RAC5_DATA/BACKUP_PIECES/prd/s_870845182.578.870845183' is not in single-file creation form
ORA-06512: at "SYS.X$DBMS_DISKGROUP", line 413
ORA-06512: at line 3 (DBD ERROR: OCIStmtExecute)' (err#1)
    <- execute= undef at asmcmdbase.pm line 3509
ASMCMD-8016: copy source->'+image_bkp/prd/autobackup/2015_02_05/s_870845182.578.870845183' and target->'+RAC5_DATA/BACKUP_PIECES/prd/s_870845182.578.870845183' failed


Checking Metalink Note 452158.1, I saw that the ASMCMD cp command fails because the ASM file name was not in a form that can be used to create a single file. The target file name should not contain the file number/incarnation.

ASMCMD [+] > cp --port 1523 +image_bkp/prd/autobackup/2015_02_05/s_870845182.578.870845183 sys@rac_tgt.+ASM2:+RAC5_DATA/BACKUP_PIECES/prd/test --port 1529
Enter password: ******

On target server -

ASMCMD [+RAC5_DATA/BACKUP_PIECES/prd] > ls -lt

Type        Redund  Striped  Time             Sys  Name
                                              N    AUG11/
                                              N    test => +RAC5_DATA/ASM/AUTOBACKUP/test.2365.870918155

As you can see, the file is created as an alias pointing to the underlying file name, which carries the file number and incarnation.

Tuesday, February 3, 2015

Load SQL Plans into SQL Plan Management (SPM) 
from the Automatic Workload Repository (AWR) 

Managing performance can be a tricky thing, especially when one is dealing with SQL performance. There are many reasons that can result in performance degradation - usually a change in the profile of the data, the introduction of new data structures, or an upgrade. Plan stability guards against these regressions and is a widely used feature of the Oracle database, and the baseline is the key to achieving plan stability.

When one has performance issues with a SQL statement, one can bring back a stable plan either from history or from a cached statement known to have a good plan.
This note provides the steps for loading a SQL plan into a SQL plan baseline from AWR.

As a starting point, one has to be clear about the culprit causing the performance bottleneck.
You need to gather the sql_id and other related information which can be used to achieve the desired result.

1. As a first step, create a SQL Tuning Set (STS). You can see it as a kind of container object that holds the statements and the plans for your statement.
exec dbms_sqltune.create_sqlset(sqlset_name => '76cmg604xv5gm_sqlset_test',description => 'sqlset test HS');

PL/SQL procedure successfully completed.

2. The next step, since we want to load the plan from the AWR repository, is to identify the range of snap_ids that contains the SQL statement and plan we are after.

One can use the following two views to get that info.
To find snapshot-related info - dba_hist_snapshot
To identify the snap_id(s) in which the SQL appears - dba_hist_sqlstat
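
For example, a quick look at dba_hist_sqlstat shows which snapshots captured the statement and which plan(s) were seen (using the sql_id from this exercise):

select snap_id, instance_number, plan_hash_value, executions_delta
from   dba_hist_sqlstat
where  sql_id = '76cmg604xv5gm'
order  by snap_id;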

Use the following code to load data from AWR into the SQL Tuning Set.

declare
baseline_ref_cur DBMS_SQLTUNE.SQLSET_CURSOR;
begin
open baseline_ref_cur for
select VALUE(p) from table(
DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(27437,27440,'sql_id='||CHR(39)||'76cmg604xv5gm'||CHR(39)||'',NULL,NULL,NULL,NULL,NULL,NULL,'ALL')) p;

DBMS_SQLTUNE.LOAD_SQLSET('76cmg604xv5gm_sqlset_test', baseline_ref_cur);
end;
/

PL/SQL procedure successfully completed.

To load all the SQLs captured in the AWR snapshots, use NULL as the basic_filter.

declare
baseline_ref_cur DBMS_SQLTUNE.SQLSET_CURSOR;
begin
open baseline_ref_cur for
select VALUE(p) from table(
DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(&begin_snap_id, &end_snap_id,NULL,NULL,NULL,NULL,NULL,NULL,NULL,'ALL')) p;
DBMS_SQLTUNE.LOAD_SQLSET('all_sqlset_test', baseline_ref_cur);
end;
/

Now check whether the SQL Tuning Set is populated with data:

SELECT NAME,OWNER,CREATED,STATEMENT_COUNT FROM DBA_SQLSET where name='76cmg604xv5gm_sqlset_test';


NAME                           OWNER                          CREATED  
------------------------------ ------------------------------ ---------
STATEMENT_COUNT
---------------
76cmg604xv5gm_sqlset_test      SYS                            03-FEB-15
              1
1 row selected.

To check the statements in the SQL Tuning Set:

select sql_id, substr(sql_text,1, 15) text from dba_sqlset_statements
where sqlset_name = '76cmg604xv5gm_sqlset_test';

To check the contents of the SQL Tuning Set:
select * from table(dbms_xplan.display_sqlset('76cmg604xv5gm_sqlset_test','&sql_id'));

Enter sql_id : 76cmg604xv5gm       

PLAN_TABLE_OUTPUT                                                               
--------------------------------------------------------------------------------
Error: cannot fetch explain plan for the specified SQL tuning set and sql_id    
1 row selected.

At this point the SQL plan baselines will still be empty.

select count(*) from dba_sql_plan_baselines;

  COUNT(*)
----------
         0
1 row selected.

3. Next, load the SQL execution plans from the SQL Tuning Set into the SQL plan baselines.

Use the following code to load the SQL plans from the STS into the baselines.

declare
my_int pls_integer;
begin
my_int := dbms_spm.load_plans_from_sqlset (sqlset_name => '76cmg604xv5gm_sqlset_test', sqlset_owner => 'SYS',fixed => 'NO', enabled => 'YES');
end;
/

PL/SQL procedure successfully completed.

You can also restrict the load to a single statement by setting a basic_filter, as follows:

basic_filter => 'sql_id = ''76cmg604xv5gm'''

Once done, check the SQL plan baselines, and you will see that a baseline has been created for that specific SQL.

SELECT SQL_HANDLE, PLAN_NAME, ORIGIN, ENABLED, ACCEPTED, FIXED, MODULE FROM   DBA_SQL_PLAN_BASELINES;

Hope this will help you in troubleshooting your severe performance problems.