Tuesday, March 31, 2015

Three dynamic performance views of wait events

Three dynamic performance views of wait events:

V$SESSION_WAIT:
Displays the events for which sessions have just completed waiting or are currently waiting.

V$SYSTEM_EVENT:
Displays the total number of times all sessions have waited for the events in that view.

V$SESSION_EVENT:
Similar to V$SYSTEM_EVENT, but displays the waits for each individual session.
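For example, the current wait of a single session can be checked like this (SID 123 is a hypothetical value):

```sql
-- What is session 123 waiting on right now?
SELECT sid, event, state, seconds_in_wait
FROM   v$session_wait
WHERE  sid = 123;
```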

Sunday, March 29, 2015

Lock Script

Hi All,

Today I'm posting a lock script. Sometimes one session blocks another session. This situation is critical because the whole system can be impacted, so this script is very helpful to us.

select s1.username || '@' || s1.machine
    || ' ( SID=' || s1.sid || ' ) is blocking '
    || s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ' ) ' as blocking_status
from v$lock l1, v$session s1, v$lock l2, v$session s2
where s1.sid = l1.sid
  and s2.sid = l2.sid
  and l1.block = 1
  and l2.request > 0
  and l1.id1 = l2.id1
  and l1.id2 = l2.id2;

Note: The above script shows us the locking details, including the SIDs of the blocking and blocked sessions.

select p.spid, s.sid, s.serial#, s.p1, s.p1text, s.username, s.status,
       s.last_call_et, p.program, p.terminal, s.logon_time, s.module,
       s.osuser, l.sql_text, w.event, w.seconds_in_wait
from   v$process p, v$session s, v$sql l, v$session_wait w
where  s.paddr = p.addr
and    s.sid = w.sid
and    l.sql_id = s.sql_id
and    l.hash_value = s.sql_hash_value
and    l.address = s.sql_address
and    s.sid = &SID
order by seconds_in_wait;


Note: The above script shows us the SQL details for the given session.


If you want more detail about this session, you can check with the help of v$session. From this view we can also identify long-running SQL.

If the session has been idle for a long time, you can also kill it. After that the lock is automatically released.

SQL> ALTER SYSTEM KILL SESSION '&sid,&serial';


Thursday, March 19, 2015

Initialization Parameters for ASM Instances


Initialization Parameters for ASM Instances

Parameters that start with ASM_ cannot be set in a normal database instance, although they are still visible in the output of the SHOW PARAMETER command.

1.Instance_type:

The INSTANCE_TYPE initialization parameter must be set to ASM for an Oracle ASM instance. This parameter is optional for an Oracle ASM
instance in an Oracle Grid Infrastructure home. The default is RDBMS.

The following is an example of the INSTANCE_TYPE parameter in the initialization file:

INSTANCE_TYPE = ASM

2.Db_unique_name:

This parameter specifies the unique name for this group of ASM instances within the cluster or on a node. This parameter defaults to +ASM and
only needs to be modified if you are trying to run multiple ASM instances on the same node.

3.ASM_POWER_LIMIT:

The ASM_POWER_LIMIT initialization parameter specifies the default power for disk rebalancing. The default value is 1 and the range of allowable
values is 0 to 11 inclusive. A value of 0 disables rebalancing. Higher numeric values enable the rebalancing operation to complete more quickly,
but might result in higher I/O overhead.

4.Asm_diskgroups:

The ASM_DISKGROUPS initialization parameter specifies a list of the names of disk groups that an Oracle ASM instance mounts at startup.
Oracle ignores the value that you set for ASM_DISKGROUPS when you specify the NOMOUNT option at startup or when you issue the ALTER DISKGROUP
ALL MOUNT statement. The default value of the ASM_DISKGROUPS parameter is a NULL string. The ASM_DISKGROUPS parameter is dynamic.

The following is an example of setting the ASM_DISKGROUPS parameter dynamically:

SQL> ALTER SYSTEM SET ASM_DISKGROUPS = DATA, FRA;

The following is an example of the ASM_DISKGROUPS parameter in the initialization file:

ASM_DISKGROUPS = DATA, FRA

5.Asm_diskstring:

The purpose of this parameter is to limit the set of disks that Automatic Storage Management considers for discovery.
If it is not specified, the entire system is searched for compatible volumes, which can lengthen startup time. The parameter's default value is NULL.

(This default causes ASM to find all of the disks in a platform-specific location to which it has read/write access.)
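As a sketch, discovery can be restricted to a specific set of devices (the path below is a platform-specific assumption):

```sql
-- Hypothetical example: only consider raw devices under /dev/rdsk
ALTER SYSTEM SET ASM_DISKSTRING = '/dev/rdsk/*';
```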

Wednesday, March 18, 2015

ASM Background Processes


ASM Background Processes 

Like normal database instances, an ASM instance also has the usual background processes such as SMON, PMON, DBWn, CKPT and LGWR.
In addition, the ASM instance has the following background processes.

1.RBAL-
Opens all device files as part of discovery and coordinates the rebalance activity.

2.ARBx -
These are the slave processes that do the rebalance activity.

3.GMON-
This process is responsible for managing disk-level activities (drop/offline) and advancing disk group compatibility.

4.MARK-
The Mark Allocation Unit (AU) for Resync Coordinator (MARK) process coordinates the updates to the Staleness Registry when disks go offline.
This process runs in the RDBMS instance and is started only when disks go offline in ASM redundancy disk groups.

5.Onnn-
One or more slave processes forming a pool of connections to the ASM instance for exchanging messages.

6.PZ9x-
These are parallel slave processes (where x is a number), used for fetching data on behalf of GV$ queries.

7.VKTM-
This process maintains the fast timer and has the same functionality as in RDBMS instances.

Note--
On Unix, the ASM processes can be listed using the following command:

ps -ef | grep asm



Thursday, March 12, 2015

Difference Between CHAR vs VARCHAR


CHAR

1.Used to store character string values of fixed length.

2.The maximum length the data type can hold in Oracle is 2000 bytes.

3.It can be faster than VARCHAR for fixed-size data, since no length bookkeeping is needed.

4.Uses static memory allocation.

5.It can waste disk space, because values are blank-padded to the declared length.

6.The system does not have to search for the end of the string.

7.Use CHAR when the data entries in a column are expected to be the same size.

EX.
Declare test CHAR(100);
test := 'Test';
Then "test" occupies 100 bytes: the first four bytes hold the value and the rest are blank-padded.



VARCHAR

1.Used to store variable-length alphanumeric data.

2.The maximum this data type can hold is 4000 bytes.

3.It can be slower than CHAR for fixed-size data.

4.Uses dynamic memory allocation.

5.It does not occupy unused space; only the actual data plus a small length overhead is stored.

6.In VARCHAR the system first has to find the end of the string before it can search.

7.Use VARCHAR when the data entries in a column are expected to vary considerably in size.

EX.
Declare test VARCHAR(100);
test := 'Test';
Then "test" occupies only 4+2=6 bytes: four bytes for the value and two bytes for the length information.


Conclusion:

1.For fixed-length data in a column, such as phone numbers, use CHAR.
2.For variable-length data in a column, such as addresses, use VARCHAR.
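Based on the conclusion above, a minimal sketch (table and column names are made up for illustration):

```sql
-- country_code is always exactly 2 characters, so CHAR fits;
-- address varies a lot in length, so VARCHAR2 fits
CREATE TABLE customer_demo (
    country_code CHAR(2),
    address      VARCHAR2(200)
);
```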



Difference Between Views vs Materialized Views



Views vs Materialized Views



1.The first difference between a view and a materialized view is that a view's query result is not stored on disk or in the database, but an MV stores the query result on disk, in a table.

2.With a view we always get the latest data, but with an MV we need to refresh the view to get the latest data.

3.The query performance of a view is lower than that of an MV.

4.A view is only a logical definition over its base tables, with no separate copy of the data, but with an MV we get a separate copy of the data.

5.An MV needs a trigger or some automatic refresh mechanism to keep it up to date; this is not required for a view.

6.A materialized view may be used by the optimizer as a way of pre-aggregating certain interesting data sets in order to more efficiently answer business questions. A view is just a stored query that is executed at runtime.

7.A view occupies no space, but a materialized view occupies space. It exists in the same way as a table:
it sits on disk and can be indexed or partitioned.
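The contrast can be sketched like this (object names are hypothetical; emp is the classic demo table):

```sql
-- A view: just a stored query, evaluated at runtime
CREATE VIEW dept_salary_v AS
    SELECT deptno, SUM(sal) AS total_sal FROM emp GROUP BY deptno;

-- A materialized view: the aggregated result is stored on disk
CREATE MATERIALIZED VIEW dept_salary_mv
    BUILD IMMEDIATE REFRESH COMPLETE ON DEMAND AS
    SELECT deptno, SUM(sal) AS total_sal FROM emp GROUP BY deptno;
```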




 

Wednesday, March 11, 2015

MONITOR AN IMPORT DATAPUMP JOB

MONITOR AN IMPORT DATAPUMP JOB (IMPDP)

Many times I find myself having to do imports of data, and during the import process customers ask for status updates. To help answer these questions and concerns, there are a few ways to provide a status on the import process. I will outline them below:

Use the UNIX "ps -ef" command to track the import process at the command prompt. It is a good way to make sure that the process hasn't errored out and quit.

From the UNIX command prompt, use the "tail -f" option against the import log file. This will give you updates as the log file records the import process.

Set the STATUS parameter either on the command line or in the parameter file for the import job. This will display the status of the job on your standard output.

Use the database view “dba_datapump_jobs” to monitor the job.  This view will tell you a few key items about the job.  The important column in the view is STATUS.  If this column says “executing” then the job is currently running.

Lastly, a good way to watch this process is the "v$session_longops" view. This view gives you a way to calculate the percentage completed.

There are 5 distinct ways of monitoring a datapump import job.  These approaches can also be used with datapump export jobs.  Overall, monitoring a datapump job could help you in resolving customer questions about how long it will take.
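The dictionary-view checks above can be sketched as follows (job names and owners vary per system):

```sql
-- Point 4: is the Data Pump job still executing?
SELECT owner_name, job_name, operation, state
FROM   dba_datapump_jobs;

-- Point 5: rough percentage complete for the long-running operation
SELECT sid, opname, ROUND(sofar / totalwork * 100, 2) AS pct_done
FROM   v$session_longops
WHERE  totalwork > 0
AND    sofar <> totalwork;
```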

Difference Between Grep & Find


Grep & Find

The main difference between the two is that grep is used to search for a particular string in a file,
whereas
find is used to locate files in a directory.
You can also check out the two commands by typing 'man find' and 'man grep'.

Grep command: used for finding any string in a file.
Ex: 1. grep <string> <filename>
2. grep 'abc xyz' jump.txt

The noticeable point is that it displays the whole line in which the string 'abc xyz' is found.

Find command: used to find a file or directory in a given path.
Ex: 1. find <path> -name <filename>
2. find . -name 'jump*'
displays all file names starting with jump
3. find . -name 'jump*game.txt'

The noticeable point is that it displays all files in the current directory whose names start with jump and end with game.txt.
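The difference can be sketched end to end (the /tmp path and file names are made up for the demo):

```shell
# set up a scratch directory with two sample files
demo=/tmp/grep_find_demo
mkdir -p "$demo"
printf 'first line\nabc xyz is here\n' > "$demo/jump.txt"
printf 'placeholder\n' > "$demo/jumpgame.txt"

# grep searches file CONTENTS and prints matching lines
grep 'abc xyz' "$demo/jump.txt"

# find locates files by NAME under a directory
find "$demo" -name 'jump*' -type f
```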

Cluster Background process:


Cluster Background process:

There are differences between the RAC and Clusterware background processes. The Clusterware background processes are described below:

A.Cluster Ready Services Daemon (CRSD)
B.Cluster Synchronization Services (CSS)
C.Event Management Daemon (EVMD)
D.Disk Monitor Daemon (diskmon)
E.Oracle Notification Service (ONS)

A.Cluster Ready Services Daemon (CRSD)

1.Oracle Clusterware uses CRS for interaction between the OS and the database.
2.The cluster configuration metadata it uses is stored in the OCR.
3.It manages high availability operations in the cluster.
4.The crsd process generates events when the status of a resource changes.

B.Cluster Synchronization Service (CSS)

1.Manages the cluster configuration by controlling which nodes are members of the cluster and by notifying members
when a node joins or leaves the cluster.
2.Oracle Clusterware performs synchronization of groups/locks for the node.
3.Prevents data corruption in the event of a split brain.
4.Reads the voting disk to determine the number and names of members in the cluster.
5.CSS tries to establish connections to all nodes in the cluster using the private interconnect.
6.CSS verifies the number of nodes already registered as part of the cluster by performing an active count function.
7.If no MASTER node has been established, CSS authorizes the first node that attains the ACTIVE state as MASTER.
8.Determines the location of the OCR from the ocr.loc file and reads the OCR to determine the location of the voting disk.
9.CSS performs state changes to bring the voting disk online.

C.Event Management Daemon (EVMD)

1.It is a background process that publishes Oracle Clusterware events.
2.It propagates events through the Oracle Notification Service (ONS).
3.EVMD is the communication bridge between the Cluster Ready Services Daemon (CRSD) and CSSD. [All communications between
CRS and CSS happen via EVMD.]

D.Disk Monitor daemon (diskmon)

1.Monitors and performs input/output fencing for Oracle Exadata Storage Server.
2.Because Exadata storage can be added to any Oracle RAC node at any point in time, the diskmon daemon is always started when ocssd is started.

E.Oracle Notification Service (ONS)

1.It keeps the high availability information.
2.Whenever the state of a cluster resource changes, the ONS processes on each node communicate with each other.
3.It is a publish-and-subscribe service for communicating Fast Application Notification (FAN) events.


Sunday, March 8, 2015

RAC Background Processes


RAC Background Processes

A.Lock Monitor Processes (LMON)
B.Lock Monitor Services (LMS)
C.Lock Monitor Daemon Process ( LMD)
D.LCKn ( Lock Process)
E.DIAG (Diagnostic Daemon)
F.ACMS:(Atomic Controlfile to Memory Service)--(from Oracle 11g)

A.Lock Monitor Processes (LMON)-

1.It is also called the GES (Global Enqueue Service) monitor.
2.LMON maintains GCS memory structures.
3.LMON handles the abnormal termination of processes and instances.
4.LMON deals with the reconfiguration of locks and resources when an instance joins or leaves the cluster (during reconfiguration LMON generates trace files).
5.LMON processes manage the global locks and resources.
6.LMON also provides cluster group services.
7.It checks for instance deaths and listens for local messaging.
8.The lock monitor coordinates with the Process Monitor (PMON) to recover dead processes that hold instance locks.

B.Lock Monitor Services (LMS)

1.It is also called the GCS (Global Cache Services) process.
2.It consumes a significant amount of CPU time.
3.It is the cache fusion part and the most active process.
4.Each node will have two or more LMS processes.
5.LMS also constantly checks with the LMD background process (the GES process) to pick up the lock requests placed by the LMD process.
6.It is advised to increase the number of LMS processes (the GCS_SERVER_PROCESSES parameter) if global cache activity is very high.
7.It handles the consistent copies of blocks that are transferred between instances.

C. Lock Monitor Daemon Process (LMDn)

1.It monitors for lock conversion timeouts.
2.The LMD process performs lock and deadlock detection globally.
3.The LMD process also handles remote enqueue requests.
4.(Note: the LMON-provided services are also known as cluster group services, or CGS.)

D.LCKn (Lock Process)

1.This process is called as instance enqueue process.
2.It manages instance resource requests & cross instance calls for shared resources.
3.During instance recovery, it builds a list of invalid lock elements and validates lock elements.
4.This process manages non-cache fusion resource requests such as library and row cache requests.

E.DIAG (Diagnostic Daemon)

1.A background process introduced in Oracle 10g as part of the enhanced diagnosability framework.
2.It regularly monitors the health of the instance.
3.It also checks for instance hangs and deadlocks.
4.It captures vital diagnostic data for instance and process failures.

F.ACMS (Atomic Controlfile to Memory Service)--(from Oracle 11g)
1.ACMS stands for Atomic Controlfile to Memory Service.
2.ACMS is an agent that helps ensure a distributed SGA memory update is either globally committed on success or globally aborted if a failure occurs.


Saturday, March 7, 2015

Difference Between Delete,Truncate & Drop


DELETE
1. DELETE is a DML Command.
2. A DELETE statement is executed using row locks; each row in the table is locked for deletion.
3. We can specify filters in a WHERE clause.
4. It deletes the specified data if a WHERE condition exists.
5. DELETE activates triggers because the operations are logged individually.
6. Slower than TRUNCATE because it generates undo and redo for each row.
7. Rollback is possible.
8. The DELETE command does not reset the High Water Mark for the table.

For Example:

SQL> SELECT COUNT(*) FROM emp;

  COUNT(*)
----------
        14

SQL> DELETE FROM emp WHERE job = 'CLERK';

4 rows deleted.

SQL> COMMIT;

Commit complete.

SQL> SELECT COUNT(*) FROM emp;

  COUNT(*)
----------
        10

TRUNCATE
1. TRUNCATE is a DDL command.
2. TRUNCATE TABLE always locks the table and page but not each row.
3. Cannot use a WHERE condition.
4. It removes all the data.
5. TRUNCATE TABLE cannot activate a trigger because the operation does not log individual row deletions.
6. Faster than DELETE because it does not generate per-row undo.
7. Rollback is not possible.
8. The TRUNCATE command resets the High Water Mark for the table.

For Example

SQL> TRUNCATE TABLE emp;

Table truncated.

SQL> SELECT COUNT(*) FROM emp;

  COUNT(*)
----------
         0

DROP

1.DROP command is a DDL command.
2.It removes the data along with the table structure.
3.It also removes all information about the table from the data dictionary.
4.All the table's rows, indexes and privileges will also be removed.
5.No DML triggers will be fired.
6.Rollback is not possible.

For Example:

SQL> DROP TABLE emp;

Table dropped.

SQL> SELECT * FROM emp;
SELECT * FROM emp
              *
ERROR at line 1:
ORA-00942: table or view does not exist



Wednesday, March 4, 2015

Difference between Traditional Exp/Imp and Datapump(Expdp/Impdp)

Difference between Traditional Exp/Imp and Datapump(Expdp/Impdp).

There are lots of differences between the old traditional utilities (exp/imp) and expdp/impdp. The main differences are below:

1-Datapump operates on a group of files called a dump file set. However, normal export operates on a single file.

2-Datapump accesses files on the server (using Oracle directory objects). Traditional export can access files on both client and server
(without using Oracle directory objects).

3-Exports (exp/imp) represent database metadata as DDL in the dump file, but Datapump represents it in XML document format.

4-Datapump has parallel execution, but exp/imp has single-stream execution.

5-Datapump does not support sequential media like tapes, but traditional export does.

6-Impdp/Expdp use parallel execution rather than a single stream of execution, for improved performance.

7-Data Pump will recreate the user, whereas the old imp utility required the DBA to create the user ID before importing.

8-In Data Pump, we can stop and restart the jobs.

9-Expdp/Impdp consume more undo tablespace than original Export and Import.

10.Data Pump does not use the BUFFERS parameter.

Tuesday, March 3, 2015

Oracle Background Processes



Oracle Background Processes

Here are some of the most important Oracle background processes.
Not all background processes are mandatory for an instance:
some are mandatory and some are optional. The mandatory background processes are DBWn, LGWR, CKPT, SMON and PMON.


1.Database Writer (DBWR)  

The Database Writer (or dirty buffer writer) process is responsible for writing dirty buffers from the database buffer cache to the database data files.
Oracle Database allows a maximum of 20 database writer processes. Note that DBWn does not write blocks at commit time (committing only forces LGWR to
flush the redo); it writes when free buffers are needed in the cache or when a checkpoint requires dirty buffers to be flushed.

2. Log Writer (LGWR)  
 
The Log Writer process (LGWR) writes the redo log buffer to a redo log file on disk. LGWR is an Oracle background process responsible for redo log
buffer management.
LGWR writes all redo entries that have been copied into the buffer since the last time it wrote. In RAC, each instance has its own LGWR process
that maintains that instance's thread of redo logs.

3. Checkpoint (CKPT) 

Checkpoint is an internal mechanism of Oracle. When a checkpoint occurs, the latest SCN is written to the control file and to all datafile headers.
This operation is performed by the checkpoint process.

Main purposes--
1- To establish data consistency.
2- To enable faster database recovery.

4.System Monitor (SMON)

The System Monitor process (SMON) performs instance recovery at instance startup. SMON is also responsible for cleaning up temporary segments that
are no longer in use; it also coalesces contiguous free extents to make larger blocks of free space available. It is also responsible for the
roll forward and roll back phases of recovery.

5.Process Monitor (PMON)

The Process Monitor (PMON) performs process recovery when a user process fails. PMON is responsible for cleaning up the cache and freeing resources
that the process was using.
For example, it resets the status of the active transaction table, releases locks, and removes the process ID from the list of active processes.

Optional Background Processes

6. Archiver (ARCn)
The optional Archive process writes filled redo logs to the archive log locations. ARCn is present only if the database is running  in archive log mode and automatic archiving is enabled.





Monday, March 2, 2015

How to drop database


Simple Step-

From 10g onwards, dropping a database is very easy. Earlier, in order to drop a database you had to manually
remove all the datafiles, control files, redo log files, init/password files, etc., but from Oracle 10g dropping a database is a single command.

In order to drop the database start the database in restrict mode and bring it in mount state as shown:

1. Log into the database:
sqlplus / as sysdba

2.SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

3.SQL> startup restrict mount;

4.SQL> drop database;

Database dropped

5.SQL> exit

Thus we will find that all the database files are deleted.

Sunday, March 1, 2015

What is difference between physical and Logical Standby

What is difference between physical and Logical Standby

Physical Standby:
============

1. The physical standby schema matches the source database exactly.

2-Provides a physically identical copy of the primary database, with on disk database structures that are identical to the primary database
on a block-for-block basis.

3- The database schema, including indexes, are the same.

4-A physical standby database is kept synchronized with the primary database by recovering
 the redo data received from the primary database.

5-It is a high availability or disaster recovery solution.

6-It is kept in the mount stage (it can also be opened read-only).

Logical Standby:
===========

1. A logical standby database does not have to match the schema structure of the source database.

2. A logical standby database can be used concurrently for data protection, reporting, and database upgrades.

3. This kind of configuration can be opened in read-only mode.

4. It can have additional materialized views and indexes added for faster performance.

5. Logical standby tables can be open for SQL queries (read only), while all other tables can be open for updates.

6. This allows users to access a logical standby database for queries and reporting purposes at any time.



How to change the protection mode

Protection modes in Data Guard
There are three protection modes in Data Guard: maximum protection, maximum availability and maximum performance.
The current protection mode can be determined from the protection_mode (and protection_level) columns in v$database.
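As a quick sketch:

```sql
-- Current Data Guard protection mode and level
SELECT protection_mode, protection_level FROM v$database;
```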


Changing the protection mode:

SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE {PROTECTION | AVAILABILITY | PERFORMANCE};

Oracle Data Guard - Protection Modes

Introduction
When using an Oracle standby database for business continuity purposes, there are three possible modes of operation
that determine how data is sent from the primary database (the database currently being used to support the business)
to the standby database (the failover database to be used upon invocation of business continuity).

1-Maximum Performance
2-Maximum Protection
3-Maximum Availability


1-Maximum Performance Mode

This mode provides the highest level of data protection that is possible without affecting the performance of the primary database.
This is accomplished by allowing transactions to commit as soon as all redo data generated by those transactions has been written to the online redo log.
Redo data is also written to one or more standby databases, but this is done asynchronously with respect to transaction commit,
so primary database performance is unaffected by delays in writing redo data to the standby database(s).
This protection mode offers slightly less data protection than maximum availability mode and has minimal impact on primary database performance.
This is the default protection mode.

2-Maximum Protection
This protection mode guarantees that no data loss will occur if the primary database fails.
 To provide this level of protection, the redo data needed to recover each transaction must be written to both the local online redo log and
 to the standby redo log on at least one standby database before the transaction commits. To ensure data loss cannot occur, the primary database
 shuts down if a fault prevents it from writing its redo stream to at least one remote standby redo log.

3- Maximum Availability
This protection mode provides the highest level of data protection that is possible without compromising the availability of a primary database.
Transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log and to at least one
synchronized standby database.

This mode ensures that no data loss will occur if the primary database fails, but only if a second fault does not prevent a complete set of redo data from being sent from the primary database to at least one standby database.

How to create user in MY SQL

Create a new MySQL user account: mysql> CREATE USER 'newuser'@'localhost' IDENTIFIED BY 'password'...