Friday, September 16, 2016

RMAN-08138: WARNING: archived log not deleted - must create more backups

When I take an RMAN backup to disk of a 12c Oracle database, I see the error "RMAN-08138: WARNING: archived log not deleted - must create more backups" and the archived logs are not deleted after the backup.

That is, RMAN is able to back up the database and archived logs, but it does not delete the backed-up archived logs to free up space.

Solution:-

The problem was with the default configuration setting below; with this policy, RMAN only deletes archived logs that have been backed up to SBT_TAPE.

CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO 'SBT_TAPE';


But since I am taking the backup to disk, I changed the configuration as below so that archived logs are deleted once they have been backed up to disk.

CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;

Now RMAN deletes the archived logs after the backup to disk.
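For reference, you can confirm the active policy from the RMAN prompt and run a backup that deletes the archived logs it has just backed up. A minimal sketch (assuming a disk-only backup strategy like mine):

RMAN> SHOW ARCHIVELOG DELETION POLICY;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;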

Wednesday, July 27, 2016

ORA-16826: apply service state is inconsistent with the DelayMins property

Seeing the warning "ORA-16826: apply service state is inconsistent with the DelayMins property" in the Data Guard Broker configuration.

DGMGRL> show configuration;

Configuration - orcl_dg

  Protection Mode: MaxPerformance
  Databases:
    orclprmy     - Primary database
    orclstby - Physical standby database
Warning: ORA-16826: apply service state is inconsistent with the DelayMins property

Fast-Start Failover: DISABLED

Configuration Status:
WARNING


Solution:-

The warning is due to a mismatch with the 'DelayMins' property. We need to convert the standby to real-time apply.

Note:- Standby redo logs must be created before converting the standby to real-time apply.
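If standby redo logs are not yet in place, they can be added on the standby before the conversion. A rough sketch (the thread, group numbers and the 200M size are placeholders; the size should match your online redo logs):

SQL> alter database add standby logfile thread 1 group 10 size 200M;
SQL> alter database add standby logfile thread 1 group 11 size 200M;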


  •  Converting the standby to real-time apply:-
1.    Cancel the current managed recovery on the standby database

SQL> alter database recover managed standby database cancel;

2.    Enable real-time apply

SQL> alter database recover managed standby database using current logfile disconnect from session;
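To verify that the MRP is now applying redo in real time, you can check v$managed_standby on the standby (just a quick sanity check; exact status values may vary by version):

SQL> select process, status, sequence# from v$managed_standby where process like 'MRP%';

The MRP0 process should show a status such as APPLYING_LOG.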


  •  Now check the Data Guard configuration

DGMGRL> show configuration;

Configuration - orcl_dg

  Protection Mode: MaxPerformance
  Databases:
    orclprmy     - Primary database
    orclstby - Physical standby database
     

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS


Monday, July 4, 2016

ORA-600 [krr_process_read_error_1]: Oracle 12c Data Guard Error

After I upgraded the database from 11.2.0.4 to 12.1.0.2, I started seeing the error "ORA-600 [krr_process_read_error_1]" on standby databases on the AIX platform. Also, the MRP process keeps crashing.

When the MRP process crashes, the logs shipped from the primary to the standby are not applied until the MRP process is started again.


Workaround:

This problem is due to bug 22294260, which occurs only on AIX environments.

The only workaround for this problem is to restart the recovery on the standby database.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING ARCHIVED LOGFILE DISCONNECT; 
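Once the recovery is restarted, a quick check (a generic one, not specific to this bug) is to confirm that the standby is applying logs again, for example:

SQL> select max(sequence#) from v$archived_log where applied = 'YES';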


ORA-700 [kskvmstatact: excessive swapping observed] : Oracle 12c RAC Error

Recently I upgraded an 11.2.0.4 RAC to 12.1.0.2 on Linux. After upgrading both GI and the database to 12.1.0.2, I started seeing "ORA-700 [kskvmstatact: excessive swapping observed]" alerts frequently.

Below is the alert log content when we got the alert:
WARNING: Heavy swapping observed on system in last 5 mins.
pct of memory swapped in [1.96%] pct of memory swapped out [0.41%].
Please make sure there is no memory pressure and the SGA and PGA
are configured correctly. Look at DBRM trace file for more details.
Errors in file /u01/app/oracle/diag/rdbms/orcl/ORCL1/trace/ORCL1_dbrm_14607.trc  (incident=37241):
ORA-00700: soft internal error, arguments: [kskvmstatact: excessive swapping observed], [], [], [], [], [], [], [], [], [], [], []

Every time I get the alert, I observe that the sum of the percentages of memory swapped in and swapped out is higher than 2%. In this case, swapped in is 1.96% and swapped out is 0.41%, i.e. a total of 2.37%.
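To confirm whether the host is genuinely under memory pressure, I also look at swapping from the OS side. A simple example on Linux (column names differ on other platforms):

$ vmstat 5 3     # si/so columns show pages swapped in/out per second
$ free -m        # swap used vs. total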

 

Solution:-

This is expected behavior in 12c, and there is currently no way to get rid of this alert.

Bug 19495842 has been filed to change the threshold of the swap warning in the alert log in future releases. Until then, we have to live with this alert in 12.1.0.2 :).


Thursday, June 23, 2016

Oracle RAC 12c MGMTDB (Management Database)

1.       MGMTDB is a container used to store the diagnostic information collected by Cluster Health Monitor (CHM).
2.       It is a single-instance database managed by Oracle Clusterware in Oracle 12c; in 11g, a Berkeley DB database was used for the same purpose.
3.       MGMTDB is optional in 12.1.0.1; you can choose whether to have it during Clusterware setup or upgrade from older versions to 12.1.0.1. But in 12.1.0.2 it became mandatory.
4.       MGMTDB runs on the master node (i.e. whichever node MGMTDB runs on can be considered the master node).
5.       If the node where the Management Database is running crashes, MGMTDB automatically fails over to another node.
6.       The Management Database is stored on the same storage as the OCR and voting disks. So, if you go with the MGMTDB option in 12c, you should have more than 5 GB of space for the OCR and voting disks.

If you use smaller disks, it will not allow you to set up or upgrade the Clusterware.
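Before the setup/upgrade, you can check the free space in the disk group that holds OCR/voting. For example, with asmcmd (the disk group name OCR_VOTE is just the one from my environment):

$ asmcmd lsdg OCR_VOTE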

 

Commands on MGMTDB


1.       Finding on which node MGMTDB is running:
$ oclumon manage -get MASTER

Master = rac1

2.       Status of the MGMTDB
$srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node rac1

3.       Configuration of ManagementDB (MGMTDB)
$ srvctl config mgmtdb
Database unique name: _mgmtdb
Database name:
Oracle home: <CRS home>
Oracle user: oragrid
Spfile: +OCR_VOTE/_MGMTDB/PARAMETERFILE/spfile.268.915007725
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Type: Management
PDB name: crs_linux
PDB service: crs_linux
Cluster name: crs-linux
Database instance: -MGMTDB
4.       Location of Cluster Health Monitor (CHM) repository
$ oclumon manage -get reppath
CHM Repository Path = +OCR_VOTE/_MGMTDB/FD9B43BF6A646F8CE043B6A9E80A2815/DATAFILE/sysmgmtdata.269.915007863
5.       CHM Repository size
$ oclumon manage -get repsize

CHM Repository Size = 136320 seconds
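If the default retention is not sufficient, the CHM repository retention can be changed with oclumon. The syntax below is as I recall it in 12c, so verify it against the documentation for your version (the value is in seconds):

$ oclumon manage -repos changeretentiontime 259200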


Wednesday, June 22, 2016

Datapump (expdp and impdp): Oracle DBA Tutorial

DataPump

       The data pump facility runs on the server side.
       It enables very high speed movement of data and metadata from one DB to another.
       Dump files generated by Data Pump export are not compatible with original export utility.

Benefits of using DataPump over normal export and import

       The ability to restart data pump jobs
       The ability to detach from and reattach to long running jobs without affecting the job itself.
       Data pump export and import use parallel execution rather than a single stream of execution, for improved performance.

Data Pump Export Modes

The Data Pump export utility is invoked using the expdp command. A backup can be taken in the following modes (a sample command follows the list):
       Full Export Mode:- Specified using the FULL parameter in the expdp command
       Schema Mode:- Specified using the SCHEMAS parameter
       Table Mode:- Specified using the TABLES parameter
       Tablespace Mode:- Specified using the TABLESPACES parameter
       Transportable Tablespace Mode:- Specified using the TRANSPORT_TABLESPACES parameter
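For instance, a tablespace-mode export could look like this (the tablespace name USERS is just an example; the directory expdp_bkp is created in the Export Examples section below):

$ expdp system/<password> dumpfile=ts_users.dmp logfile=ts_users.log directory=expdp_bkp tablespaces=users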

Data Pump Import Modes

The Data Pump import utility is invoked using the impdp command. A backup can be imported in the following modes:
       Full Import Mode:- Specified using the FULL parameter in the impdp command
       Schema Mode:- Specified using the SCHEMAS parameter
       Table Mode:- Specified using the TABLES parameter
       Tablespace Mode:- Specified using the TABLESPACES parameter
       Transportable Tablespace Mode:- Specified using the TRANSPORT_TABLESPACES parameter

Export Examples

Before taking the datapump backup, we need to create a backup directory at O.S. level and then at the DB level as the SYS user.
1.       Create /u02/expdp_bkp at O.S. Level.

2.       At SQL prompt:
SQL> create or replace directory expdp_bkp as '/u02/expdp_bkp';
SQL> grant read, write on directory expdp_bkp to public;
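You can verify that the directory object exists and points to the right path, for example:

SQL> select directory_name, directory_path from dba_directories where directory_name = 'EXPDP_BKP';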

Example 1:-
Taking Schema (SCOTT user) backup
$expdp system/<password> dumpfile=scott.dmp logfile=scott.log directory=expdp_bkp schemas=scott

Giving the job name for the backup
$ expdp scott/<password> dumpfile=scott.dmp logfile=scott.log directory=expdp_bkp job_name=dpschema

We can monitor the datapump jobs by querying dba_datapump_jobs as the SYS/SYSTEM user.
SQL> select * from dba_datapump_jobs;

Example 2:-
Taking Full database backup
$expdp system/<password> dumpfile=fulldb.dmp logfile=fulldb.log directory=expdp_bkp full=y

Example 3:-
Taking Table Level backup
$ expdp scott/<password> dumpfile=tables.dmp logfile=tables.log directory=expdp_bkp job_name=dptables tables=emp,dept

Example 4:-
Taking Row level backup
If you want to take the backup of only the EMP table rows where deptno = 10:
$expdp scott/<password> dumpfile=query.dmp logfile=query.log directory=expdp_bkp tables=emp query='"where deptno=10"'

Example 5:-

Stopping and restarting the datapump jobs
Stopping the datapump job:-
While the export job is running, press Ctrl+C at the OS level to get the Export> prompt, then:
Export> stop_job=immediate
YES
Restarting the stopped job:-
$expdp <username>/<password> attach=<job name>
Export> continue_client

Example 6: -
Using the parallel option during export or import. When PARALLEL is greater than 1, include the %U substitution variable in the dump file name so that each worker process can write to its own file.
$expdp system/<password> dumpfile=fulldb_%U.dmp logfile=fulldb.log directory=expdp_bkp full=y parallel=4



Import Examples


Example 1:-
Importing SCOTT backup to another database
NOTE:- If the backup was exported by the SCOTT user, then you need to create the SCOTT user explicitly on the target database before the import. If the backup was exported by the SYS/SYSTEM user, there is no need to create the target user explicitly; IMPDP will take care of creating the users.
$impdp system/<password> dumpfile=schema.dmp logfile=scottimp.log directory=expdp_bkp  schemas=scott

Example 2:-
Importing the dump of SCOTT user to USER3 schema

$impdp system/<password> dumpfile=schema.dmp logfile=scottimp.log directory=expdp_bkp  remap_schema=scott:USER3 table_exists_action=replace

Tuesday, June 21, 2016

Direct Path vs Conventional Path Export : Oracle Database Admin (DBA) Tutorial

Conventional path Export

Conventional path Export uses the SQL SELECT statement to extract data from tables. Data is read from disk into the buffer cache, and rows are transferred to the evaluating buffer. The data, after passing expression evaluation, is transferred to the Export client, which then writes the data into the export file. 

Direct path Export

When using a Direct path Export, the data is read from disk directly into the export session's program global area (PGA): the rows are transferred directly to the Export session's private buffer. This also means that the SQL command-processing layer (evaluation buffer) can be bypassed, because the data is already in the format that Export expects. As a result, unnecessary data conversion is avoided. The data is transferred to the Export client, which then writes the data into the export file. 

       The parameter DIRECT specifies whether you use the direct path Export (DIRECT=Y) or the conventional path Export (DIRECT=N).

       To improve the performance of Direct path Export still further, we can use the RECORDLENGTH parameter.

       The possible values for RECORDLENGTH are multiples of the OS block size or multiples of DB_BLOCK_SIZE. If we don't specify RECORDLENGTH and still use direct=y, Oracle uses the default OS block size (in most environments, 1024 bytes).

Ex:- exp scott/tiger file=/u01/scottbkp.dmp log=/u01/scottbkp.log direct=y RECORDLENGTH=65536


LIMITATIONS OF DIRECT PATH EXPORTS

1.       A Direct path Export does not influence the time it takes to Import the data. That is, an export file created using direct path Export or Conventional path Export, will take the same amount of time to Import. 
2.       You cannot use the DIRECT=Y parameter when exporting in transportable tablespace mode. You can use the DIRECT=Y parameter when exporting in full, user, or table mode.
3.       The parameter QUERY applies ONLY to conventional path Export. It cannot be specified in a direct path export (DIRECT=Y).
4.       A Direct path Export can only export the data when the NLS_LANG environment variable of the session invoking the export is equal to the database character set. If NLS_LANG is not set (the default is AMERICAN_AMERICA.US7ASCII) and/or is different from the database character set, Export displays the warning EXP-41 and aborts with EXP-0.
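For the last limitation, one way to avoid EXP-41 is to set NLS_LANG to the database character set before running the export. A small sketch, assuming the database character set is AL32UTF8 (check yours in nls_database_parameters):

SQL> select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';

$ export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
$ exp scott/tiger file=/u01/scottbkp.dmp log=/u01/scottbkp.log direct=y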