Friday, September 16, 2016

RMAN-08138: WARNING: archived log not deleted - must create more backups

While taking an RMAN backup to disk of a 12c Oracle database, I saw the warning "RMAN-08138: WARNING: archived log not deleted - must create more backups", and the archived logs were not deleted after the backup.

That is, RMAN backs up the database and the archived logs successfully, but it does not delete the backed-up archived logs to free up space.

Solution:-

The problem was the default configuration setting below; with this policy, RMAN only deletes archived logs that have been backed up to SBT_TAPE.

CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO 'SBT_TAPE';


Since I am backing up to disk, I changed the configuration as below so that archived logs are deleted once they have been backed up to disk.

CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;

Now RMAN deletes the archived logs after the backup to disk.
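
To verify the setting and clean up manually if needed, the following RMAN commands can be used (a minimal sketch; the policy output will vary with your configuration):

RMAN> SHOW ARCHIVELOG DELETION POLICY;
RMAN> DELETE ARCHIVELOG ALL BACKED UP 1 TIMES TO DEVICE TYPE DISK;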

Wednesday, July 27, 2016

ORA-16826: apply service state is inconsistent with the DelayMins property

I was seeing the warning "ORA-16826: apply service state is inconsistent with the DelayMins property" in the Data Guard Broker configuration.

DGMGRL> show configuration;

Configuration - orcl_dg

  Protection Mode: MaxPerformance
  Databases:
    orclprmy     - Primary database
    orclstby - Physical standby database
Warning: ORA-16826: apply service state is inconsistent with the DelayMins property

Fast-Start Failover: DISABLED

Configuration Status:
WARNING


Solution:-

The warning is raised because the apply state on the standby does not match the DelayMins property. We need to convert the standby to real-time apply.

Note: Standby redo logs must be created before converting the standby to real-time apply.
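
Before making the change, you can check the current DelayMins value from the broker and confirm that standby redo logs exist. A minimal check (the database name and output are from my configuration; adjust to yours):

DGMGRL> show database 'orclstby' 'DelayMins';
SQL> select group#, thread#, bytes/1024/1024 as mb, status from v$standby_log;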


  • Converting the standby to real-time apply:-
1. Cancel the current recovery mode on the standby database

SQL> alter database recover managed standby database cancel;

2. Enable real-time apply

SQL> alter database recover managed standby database using current logfile disconnect from session;


  • Now check the Data Guard configuration

DGMGRL> show configuration;

Configuration - orcl_dg

  Protection Mode: MaxPerformance
  Databases:
    orclprmy     - Primary database
    orclstby - Physical standby database
     

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS


Monday, July 4, 2016

ORA-600 [krr_process_read_error_1]: Oracle 12c Data Guard Error

After upgrading the database from 11.2.0.4 to 12.1.0.2, I started seeing the error "ORA-600 [krr_process_read_error_1]" on standby databases on the AIX platform. The MRP process also crashes.

When the MRP process crashes, the logs shipped from the primary to the standby are not applied until the MRP process is started again.


Workaround:

This problem is due to bug 22294260, which occurs only on AIX environments.

The only workaround for this problem is to restart recovery on the standby database.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING ARCHIVED LOGFILE DISCONNECT; 
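
After restarting recovery, you can confirm that the MRP process is running again and applying logs; a minimal check:

SQL> select process, status, sequence# from v$managed_standby where process like 'MRP%';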


ORA-700 [kskvmstatact: excessive swapping observed] : Oracle 12c RAC Error

Recently I upgraded an 11.2.0.4 RAC to 12.1.0.2 on Linux. After upgrading both GI and the database to 12.1.0.2, I started seeing "ORA-700 [kskvmstatact: excessive swapping observed]" alerts frequently.

Below is the alert log content when we got the alert:
WARNING: Heavy swapping observed on system in last 5 mins.
pct of memory swapped in [1.96%] pct of memory swapped out [0.41%].
Please make sure there is no memory pressure and the SGA and PGA
are configured correctly. Look at DBRM trace file for more details.
Errors in file /u01/app/oracle/diag/rdbms/orcl/ORCL1/trace/ORCL1_dbrm_14607.trc  (incident=37241):
ORA-00700: soft internal error, arguments: [kskvmstatact: excessive swapping observed], [], [], [], [], [], [], [], [], [], [], []

Every time I get the alert, the sum of the percentage of memory swapped in and swapped out is higher than 2%. In this case, swapped in is 1.96% and swapped out is 0.41%, for a total of 2.37%.
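
When the alert fires, it is worth confirming at the OS level whether the system is really under memory pressure. A couple of standard Linux checks (the 2% threshold above comes from the alert itself, not from these commands):

$ free -m
$ vmstat 5 3

In the vmstat output, sustained non-zero values in the si/so columns indicate ongoing swapping.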

 

Solution:-

This is expected behavior in 12c, and there is currently no way to get rid of this error.

Bug 19495842 is open to change the threshold for the swap warning in the alert log in future releases. Until then, we have to live with this alert in 12.1.0.2 :).


Thursday, June 23, 2016

Oracle RAC 12c MGMTDB (Management Database)

1.       MGMTDB is a container used to store the diagnostic information collected by Cluster Health Monitor (CHM).
2.       It is a single-instance database managed by Oracle Clusterware in Oracle 12c; in 11g, a Berkeley DB database was used for the same purpose.
3.       MGMTDB is optional in 12.1.0.1; you can choose whether to create it during Clusterware setup or during an upgrade from older versions to 12.1.0.1. In 12.1.0.2 it became mandatory.
4.       MGMTDB runs on the master node (that is, the node where MGMTDB runs can be considered the master node).
5.       If the node where the Management Database is running crashes, MGMTDB automatically fails over to another node.
6.       The Management Database is stored on the same storage as the OCR and voting disks. So, if you go with the MGMTDB option in 12c, you should have more than 5 GB of space for the OCR and voting disks.

If you use smaller disks, the Clusterware setup/upgrade will not proceed.

 

Commands on MGMTDB


1.       Finding on which node MGMTDB is running:
$ oclumon manage -get MASTER

Master = rac1

2.       Status of the MGMTDB
$srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node rac1

3.       Configuration of ManagementDB (MGMTDB)
$ srvctl config mgmtdb
Database unique name: _mgmtdb
Database name:
Oracle home: <CRS home>
Oracle user: oragrid
Spfile: +OCR_VOTE/_MGMTDB/PARAMETERFILE/spfile.268.915007725
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Type: Management
PDB name: crs_linux
PDB service: crs_linux
Cluster name: crs-linux
Database instance: -MGMTDB
4.       Location of Cluster Health Monitor (CHM) repository
$ oclumon manage -get reppath
CHM Repository Path = +OCR_VOTE/_MGMTDB/FD9B43BF6A646F8CE043B6A9E80A2815/DATAFILE/sysmgmtdata.269.915007863
5.       CHM Repository size
$ oclumon manage -get repsize

CHM Repository Size = 136320 seconds
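
6.       Changing the CHM repository retention/size (a hedged sketch; the values below are examples only and exact options may vary by version, so check oclumon manage -h first)
$ oclumon manage -repos changeretentiontime 260000
$ oclumon manage -repos changerepossize 4000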


Wednesday, June 22, 2016

Datapump (expdp and impdp): Oracle DBA Tutorial

DataPump

       The data pump facility runs on the server side.
       It enables very high speed movement of data and metadata from one DB to another.
       Dump files generated by a Data Pump export are not compatible with the original export (exp) utility.

Benefits of using DataPump over normal export and import

       The ability to restart data pump jobs
       The ability to detach from and reattach to long running jobs without affecting the job itself.
       Data pump export and import use parallel execution rather than a single stream of execution, for improved performance.

Data Pump Export Modes

The Data Pump export utility is invoked using the expdp command. A backup can be taken in the following modes:
       Full Export Mode:- Specified using FULL parameter in expdp command
       Schema Mode:- Specified using SCHEMAS parameter
       Table Mode:- Specified using TABLES parameter
       Tablespace Mode:- TABLESPACES parameter
       Transportable Tablespace Mode: - TRANSPORT_TABLESPACES parameter.
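
The examples later in this post cover full, schema, table, and row-level exports. For completeness, a tablespace-mode export would look something like this (the dump file and tablespace names are illustrative, using the expdp_bkp directory created in the Export Examples section below):
$ expdp system/<password> dumpfile=ts_users.dmp logfile=ts_users.log directory=expdp_bkp tablespaces=users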

Data Pump Import Modes

The Data Pump import utility is invoked using the impdp command. A backup can be imported in the following modes:
       Full Import Mode:- Specified using FULL parameter in impdp command
       Schema Mode:- Specified using SCHEMAS parameter
       Table Mode:- Specified using TABLES parameter
       Tablespace Mode:- Specified using TABLESPACES parameter
       Transportable Tablespace Mode: - Specified using TRANSPORT_TABLESPACES parameter.

Export Examples

Before taking a Data Pump backup, we need to create a backup directory at the OS level and then define it at the database level as the SYS user.
1.       Create /u02/expdp_bkp at O.S. Level.

2.       At SQL prompt:
SQL> create or replace directory expdp_bkp as '/u02/expdp_bkp';
SQL> grant read, write on directory expdp_bkp to public;

Example 1:-
Taking Schema (SCOTT user) backup
$expdp system/<password> dumpfile=scott.dmp logfile=scott.log directory=expdp_bkp schemas=scott

Giving the job name for the backup
$ expdp scott/<password> dumpfile=scott.dmp logfile=scott.log directory=expdp_bkp job_name=dpschema

We can monitor Data Pump jobs by querying dba_datapump_jobs as the SYS/SYSTEM user.
SQL> select * from dba_datapump_jobs;

Example 2:-
Taking Full database backup
$expdp system/<password> dumpfile=fulldb.dmp logfile=fulldb.log directory=expdp_bkp full=y

Example 3:-
Taking Table Level backup
$ expdp scott/<password> dumpfile=tables.dmp logfile=tables.log directory=expdp_bkp job_name=dptables tables=emp,dept

Example 4:-
Taking Row level backup
If you want to take a backup of the EMP table rows whose deptno = 10:
$expdp scott/<password> dumpfile=query.dmp logfile=query.log directory=expdp_bkp tables=emp query=\' where deptno=10\'

Example 5:-

Stopping and restarting the datapump jobs
Stopping the datapump job:-
While the backup job is running, press Ctrl+C at the OS level and then follow the instructions below:
Export> stop_job=immediate
YES
Restarting a stopped job:-
$ expdp <username>/<password> attach=<job name>
Export> continue_client

Example 6: -
Using the parallel option during export or import.
$expdp system/<password> dumpfile=fulldb.dmp logfile=fulldb.log directory=expdp_bkp full=y parallel=4
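
When using PARALLEL, it is usually better to let Data Pump write one dump file per worker by using the %U substitution variable in the dumpfile name; otherwise the workers contend for a single file. A variation of the example above (file names are illustrative):
$expdp system/<password> dumpfile=fulldb_%U.dmp logfile=fulldb.log directory=expdp_bkp full=y parallel=4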



Import Examples


Example 1:-
Importing SCOTT backup to another database
NOTE: If the backup was exported by the SCOTT user itself, you need to create the SCOTT user explicitly before the import. If the backup was exported by SYS/SYSTEM, there is no need to create the target user explicitly; impdp takes care of creating the users.
$impdp system/<password> dumpfile=schema.dmp logfile=scottimp.log directory=expdp_bkp  schemas=scott

Example 2:-
Importing the dump of SCOTT user to USER3 schema

$impdp system/<password> dumpfile=schema.dmp logfile=scottimp.log directory=expdp_bkp  remap_schema=scott:USER3 table_exists_action=replace

Tuesday, June 21, 2016

Direct Path vs Conventional Path Export : Oracle Database Admin (DBA) Tutorial

Conventional path Export

Conventional path Export uses the SQL SELECT statement to extract data from tables. Data is read from disk into the buffer cache, and rows are transferred to the evaluating buffer. The data, after passing expression evaluation, is transferred to the Export client, which then writes the data into the export file. 

Direct path Export

When using a Direct path Export, the data is read from disk directly into the export session's program global area (PGA): the rows are transferred directly to the Export session's private buffer. This also means that the SQL command-processing layer (evaluation buffer) can be bypassed, because the data is already in the format that Export expects. As a result, unnecessary data conversion is avoided. The data is transferred to the Export client, which then writes the data into the export file. 

       The parameter DIRECT specifies whether you use the direct path Export (DIRECT=Y) or the conventional path Export (DIRECT=N).

       To improve the performance of a direct path export still further, we can use the RECORDLENGTH parameter.

       The possible values for RECORDLENGTH are multiples of the OS block size or multiples of DB_BLOCK_SIZE. If we do not specify RECORDLENGTH with direct=y, Oracle uses the default OS block size (in most environments, 1024 bytes).

Ex:- exp scott/tiger file=/u01/scottbkp.dmp log=/u01/scottbkp.log direct=y RECORDLENGTH=65536


LIMITATIONS OF DIRECT PATH EXPORTS

1.       A direct path export does not influence the time it takes to import the data. That is, an export file created using direct path export or conventional path export will take the same amount of time to import.
2.       You cannot use the DIRECT=Y parameter when exporting in transportable tablespace mode. You can use the DIRECT=Y parameter when exporting in full, user, or table mode.
3.       The parameter QUERY applies ONLY to conventional path export. It cannot be specified in a direct path export (DIRECT=Y).
4.       A direct path export can only export the data when the NLS_LANG environment variable of the session invoking the export is equal to the database character set. If NLS_LANG is not set (the default is AMERICAN_AMERICA.US7ASCII) or is different, Export will display the warning EXP-41 and abort with EXP-0.

Export and Import : Oracle Database Admin (DBA) Tutorial

We can take different types of Oracle database backups.
  1. Logical Backup
  2. Physical Backup

1.     Logical Backup
Logical backup can be done with the export utilities.
Export (exp) and Datapump (expdp):-
Export/expdp is an Oracle utility used to store Oracle database data in export format (.dmp) files for later retrieval.
Import (imp) and Datapump (impdp):-
Import is used to import the exported files into a database.

2.     Physical Backup
A physical backup is a backup of Oracle database blocks; it is further divided into COLD and HOT backups.

Export (exp)
Export backup can be taken at different levels
  1. Table level
  2. Schema level
  3. Database level
  4. Tablespace level
  5. Transportable tablespace across platforms

Import (imp)
An export backup can be imported at different levels
  1. Table level
  2. Schema level
  3. Database level
  4. Tablespace level
  5. Transportable tablespace across platforms


Examples

  1. Full database backup
exp system/<password> file=fulldb.dmp log=fulldb.log full=y

  2. Schema level backup (SCOTT)
exp system/<password> file=scottbkp.dmp log=scottbkp.log owner=scott

  3. Table level backup
To take the backup of tables dept and salgrade under SCOTT schema

exp scott/<password> file=tablesbkp.dmp log=tablesbkp.log tables=dept,salgrade

  4. Row level backup
If you want to take the backup of rows from EMP table whose deptno=10

exp scott/<password> file=rowsbkp.dmp log=rowsbkp.log tables=emp query=\' where deptno=10\'

  5. Importing the backup of scott user to testuser

On the target database create testuser with the same privileges as SCOTT user

SQL> create user testuser identified by testpwd;
SQL> grant connect, resource to testuser;

imp system/<password> file=fulldb.dmp log=impscott.log fromuser=scott touser=testuser commit=y ignore=y

  6. Seeing the contents of a dumpfile

imp scott/<password> file=scottbkp.dmp show=y



Improving Export and Import Performance

EXPORT
       Set the BUFFER parameter to a high value (e.g. 2M)
       Set the RECORDLENGTH parameter to a high value (e.g. 64K)
       Stop unnecessary applications to free-up resources for your job.
       If you run multiple export sessions, ensure they write to different physical disks.
       DO NOT export to an NFS mounted filesystem. It will take forever.

IMPORT
       Create an indexfile so that you can create indexes AFTER you have imported data. Do this by setting INDEXFILE to a filename and then running the import: no data will be imported, but a file containing the index definitions will be created. You must edit this file afterwards and supply the passwords for the schemas on all CONNECT statements (see the example after this list).
       Place the file to be imported on a separate physical disk from the oracle data files
       Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) considerably in the init$SID.ora file
       Set the LOG_BUFFER to a big value and restart oracle.
       Stop redo log archiving if it is running (ALTER DATABASE NOARCHIVELOG;)
       Use COMMIT=N in the import parameter file if you can afford it
    Use ANALYZE=N in the import parameter file to avoid time consuming ANALYZE statements

       Remember to run the indexfile previously created 
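
Following the indexfile tip above, a minimal sketch of the approach (the file and log names are illustrative):

$ imp system/<password> file=fulldb.dmp log=impidx.log full=y indexfile=create_indexes.sql

Edit create_indexes.sql to supply the passwords on the CONNECT statements, then run it after the data has been imported:

SQL> @create_indexes.sql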


ORA-00845: MEMORY_TARGET not supported on this system

Problem:-

Database startup throws the error "ORA-00845: MEMORY_TARGET not supported on this system".

SQL> startup nomount;
ORA-00845: MEMORY_TARGET not supported on this system

The below warning was observed in the alert log:-

WARNING: You are trying to use the MEMORY_TARGET feature. This feature requires the /dev/shm file system to be mounted for at least 1593835520 bytes. /dev/shm is either not mounted or is mounted with available space less than this size. Please fix this so that MEMORY_TARGET can work as expected. Current available is 1326542848 and used is 661401600 bytes. Ensure that the mount point is /dev/shm for this directory.
memory_target needs larger /dev/shm



Solution:-

As per the alert log, the error is caused by the small size allocated to /dev/shm. So I increased the size of /dev/shm.
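
You can check the current size and usage of /dev/shm before resizing it (output will vary per system):

$ df -h /dev/shm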

As the root user, execute the below command:

# mount -t tmpfs shmfs -o size=7G /dev/shm

In this case, the size of the shared memory device is configured to be 7GB.

In order to make the same change persistent across system reboots, add an entry for this to the /etc/fstab mount table, as in:

As the root user, open /etc/fstab and add the below entry:

shmfs /dev/shm tmpfs size=7G 0 0

Note: If you have a RAC setup, perform the above steps on all the nodes.

Now you will be able to start the Oracle instance without the above error.

Monday, June 20, 2016

Oracle Database Admin (DBA) Tutorial: Redo Log files management

1.  What is redo log file

       Oracle writes the changes made by all statements, except SELECT statements, to the redo log files.
        If a user updates a row, Oracle changes the row in the db_buffer_cache, records the change in the redo log, and tells the user that the row is updated, even though the row has not yet been written back to the datafile. The changed blocks are written to the datafiles later by the database writer; this is known as deferred batch writes.
       Since Oracle defers writing to the datafiles, there is a chance of a power failure or system crash before the row is written to disk. That is why Oracle records the change in the redo log file, so that in case of a power failure or system crash it can reapply the changes the next time you open the database.
       Every Oracle database must have at least 2 redo logfile groups.

2.  Redo log groups status


·       UNUSED - The Redo Log Group has never been used – this status only occurs for a newly added Redo Log Group.
·       CURRENT - Current redo log. This implies that the redo log is active. The redo log could be open or closed.

·       ACTIVE - Log is active but is not the current log. It is needed for crash recovery. It may be in use for block recovery. It may or may not be archived.

·       CLEARING - Log is being re-created as an empty log after an ALTER DATABASE CLEAR LOGFILE statement. After the log is cleared, the status changes to UNUSED.

·       INACTIVE - Log is no longer needed for instance recovery. It may be in use for media recovery. It might or might not be archived.
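
The status of each group can be seen with a quick query; a minimal example (the output depends on your database):

SQL> select group#, sequence#, status, archived from v$log;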

3.  Create online Redo log Groups

Online redo log groups are created when the database is created.

4.  Adding a New Redo Logfile Group

Ex:-
alter database add logfile group 4 ('/u02/oradata/orcl/redo04a.log', '/u02/oradata/orcl/redo04b.log') size 50m;
Note: You can add groups to a database up to the MAXLOGFILES setting specified at the time of creating the database. If you want to change the MAXLOGFILES setting, you have to create a new controlfile. The current value can be checked with:
select '- MAXLOGFILES - '||records_total from v$controlfile_record_section where type = 'REDO LOG';

5.  Adding Members to an existing group


Ex: alter database add logfile member '/u02/oradata/orcl/redo01b.log' to group 1;
Note: You can add members to a group up to the MAXLOGMEMBERS setting specified at the time of creating the database. If you want to change the MAXLOGMEMBERS setting, you have to create a new controlfile. The current value can be checked with:
              select 'MAXLOGMEMBERS ' || dimlm from x$kccdi;

6.  Dropping Members from a group

       You can drop a member from a log group only if the group has more than one member and it is not the current group.
       If you want to drop members from the current group, force a log switch or wait until a log switch occurs and another group becomes current.
       To force a log switch, give the following command: alter system switch logfile;

The following command can be used to drop a logfile member:-
alter database drop logfile member '/u02/oradata/GG/GGTEST/redo01b.log';
Note: When you drop logfiles the files are not deleted from the disk. You have to use O/S command to delete the files from disk.

7.  Dropping Logfile Group


·       You can drop a logfile group only if the database has more than two groups and it is not the current group.
              alter database drop logfile group 4;
Note:-When you drop logfiles the files are not deleted from the disk. You have to use O/S command to delete the files from disk.

8.  Resizing Logfiles

You cannot resize logfiles. If you want to resize the redo logs, create new logfile groups with the new size and subsequently drop the old logfile groups.
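
A minimal sketch of that workflow, assuming the existing groups are 1 to 3 (the paths and sizes below are illustrative):

SQL> alter database add logfile group 4 ('/u02/oradata/orcl/redo04a.log') size 200m;
SQL> alter database add logfile group 5 ('/u02/oradata/orcl/redo05a.log') size 200m;
SQL> alter system switch logfile;

Once the old groups show INACTIVE in v$log, drop them one by one:

SQL> alter database drop logfile group 1;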

9.  Renaming or Relocating Logfiles

For example, suppose you want to move a logfile from '/u01/app/oracle/orcl/log1.ora' to '/u02/oracle/ica/log1.ora'; then do the following steps:
1.       Shutdown the database
SQL>shutdown immediate;
2.       Move the logfile from old to new location using operating system command
$mv /u01/app/oracle/orcl/log1.ora  /u02/oracle/ica/
3.       Start and mount the database
SQL>startup mount
4.       Now give the following command to change the location in controlfile
SQL>alter database rename file '/u01/app/oracle/orcl/log1.ora' to '/u02/oracle/ica/log1.ora';
5.       Open the database
SQL>alter database open;

10.  Clearing REDO LOGFILES


·       A redo log file might become corrupted while the database is open, and ultimately stop database activity because archiving cannot continue. In this situation the ALTER DATABASE CLEAR LOGFILE statement can be used to reinitialize the file without shutting down the database.
Ex: - Alter database clear logfile group 3;
·       This statement overcomes two situations where dropping redo logs is not possible:
        i.      If there are only two log groups
        ii.     The corrupt redo log file belongs to the current group
·       If the corrupt redo log file has not been archived, use the UNARCHIVED keyword in the statement.
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;

11.  Viewing Information about Log files


       select * from v$logfile;
       select * from v$log;