Queries blocking each other - SQL

I am logged into my application from two systems. I perform a refresh from one user and a copy-paste
from the other. The refresh mainly runs a set of SELECT queries, while the copy-paste mostly runs INSERT queries.
The refresh by itself takes a minute or less, but when the copy-paste is being done from the other system, the refresh takes a lot of time: it appears to wait for the copy-paste to complete and only then finishes.
I am using an Oracle 10g database. I have been using Oracle SQL Developer (Monitor Sessions) to see the real-time queries, but I have not been able to use it effectively.
Can you please tell me:
How to see conflicting queries, if any.
How to see locks acquired by various queries.
How long it takes to complete one query.
Any other suggestion, approach, or tool that I may use.

How to see conflicting queries
In Enterprise Edition, you can use Enterprise Manager to track the blocking sessions and the participating queries (see the Enterprise Manager for 10g documentation).
You can also write SQL queries for this, as detailed in this article: Tracking Oracle blocking sessions.
SQL from the article (listing blocking sessions):
select blocking_session, sid, serial#, wait_class, seconds_in_wait
from v$session
where blocking_session is not NULL
order by blocking_session;
Listing the active queries (from Ask Anantha):
SELECT a.USERNAME, a.STATUS, b.sql_text
FROM V$SESSION a
INNER JOIN V$SQLAREA b ON a.SQL_ADDRESS= b.ADDRESS;
Recommended reading: the V$SESSION view.
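If you want the blocked sessions and the statement each one is stuck on in a single result, here is a minimal sketch that combines the two queries above (it assumes the 10g V$SESSION columns BLOCKING_SESSION, SQL_ID and SQL_CHILD_NUMBER):
select w.sid,
       w.serial#,
       w.blocking_session,
       w.seconds_in_wait,
       q.sql_text
from v$session w
     left join v$sql q
            on q.sql_id = w.sql_id
           and q.child_number = w.sql_child_number
where w.blocking_session is not null
order by w.blocking_session;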
How to see locks acquired by various queries
This query will tell you the session IDs holding locks (from an Oracle forum):
set linesize 150
set heading on
column sid_serial  format a13
column ora_user    format a15
column object_name format a35
column object_type format a10
column lock_mode   format a15
column last_ddl    format a8
column status      format a10
break on sid_serial
SELECT l.session_id||','||v.serial# sid_serial,
l.ORACLE_USERNAME ora_user,
o.object_name,
o.object_type,
DECODE(l.locked_mode,
0, 'None',
1, 'Null',
2, 'Row-S (SS)',
3, 'Row-X (SX)',
4, 'Share',
5, 'S/Row-X (SSX)',
6, 'Exclusive',
TO_CHAR(l.locked_mode)
) lock_mode,
o.status,
to_char(o.last_ddl_time,'dd.mm.yy') last_ddl
FROM dba_objects o, gv$locked_object l, v$session v
WHERE o.object_id = l.object_id
and l.SESSION_ID=v.sid
order by 2,3;
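If you would rather see directly which session is holding the lock that another session is waiting for, a small sketch using the classic V$LOCK holder/waiter pattern (this is not part of the quoted forum post) is:
select holder.sid  as holding_sid,
       waiter.sid  as waiting_sid,
       holder.type as lock_type,
       holder.id1,
       holder.id2
from v$lock holder
     join v$lock waiter
       on waiter.id1  = holder.id1
      and waiter.id2  = holder.id2
      and waiter.type = holder.type
where holder.block > 0
  and waiter.request > 0;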
How long it takes to complete one query
You can track it with this SQL from SearchOracle:
SELECT *
FROM   (SELECT username, opname, sid, serial#, context, sofar, totalwork,
               ROUND(sofar / totalwork * 100, 2) AS "% Complete"
        FROM   v$session_longops)
WHERE  "% Complete" != 100;
Any other suggestion, approach, or tool that I may use
Well, Google comes to mind...

Related

Get sid, SQL query in active processes in Oracle

Currently, to view running processes in Oracle, I use the following command:
select s.sid, s.serial#
from v$session s
join v$process p on s.paddr = p.addr
Is there a way in that query to also join to get the SQL query that is running for each of those processes?
Of course, using v$active_session_history (starting with 11g) gives you a lot of information about the queries running for each process.
The query below will give you the most recently executed queries on the database (using ORDER BY ... DESC on sql_exec_start). I have included the columns in the SELECT that should help you. By the way, top_level_sql_id is useful: if your query is being run by a procedure, you will know which procedure is executing it.
select ar.sql_text,
ash.session_id,
ash.SESSION_SERIAL#,
ash.sql_id,
ash.top_level_sql_id,
ash.sql_plan_operation,
ash.sql_exec_start,
ash.event,
ash.blocking_session,
ash.program,
ash.module,
ash.machine
from v$active_session_history ash
inner join v$sqlarea ar on ar.sql_id=ash.sql_id
order by sql_exec_start desc

How to change query status from suspended to runnable?

I have a query that needs to update 2 million records, but there is no space on the disk, so the query is suspended right now. After that, I freed up some space, but the query is still suspended. So how should I change the status to runnable, or is there any way to tell SQL Server that it has enough space now and can run the query?
After that, I freed up some space, but the query is still suspended. Is there any way to tell SQL Server that it has enough space now and can run the query?
SQL Server will change the query status from suspended to runnable automatically; it is not managed by you.
Your job here is to check why the query is suspended. The DMV below can help:
select session_id,blocking_session_id,wait_resource,wait_time,
last_wait_type from sys.dm_exec_requests
where session_id=<< your session id>>
There are many reasons why a query gets suspended; some of them include locking/blocking, rollback, and getting data from disk.
You will have to check the status via the DMV above, see what the reason is, and troubleshoot accordingly.
Below is a small piece of sample code that can help you understand what "suspended" means.
create table t1
(
id int
)
insert into t1
select row_number() over (order by (select null))
from
sys.objects c
cross join
sys.objects c1
Now, in one tab of SSMS, run the query below:
begin tran
update t1
set id=id+1
Open another tab and run the query below:
select * from t1
Now open another tab and run the query below:
select session_id,blocking_session_id,wait_resource,wait_time,
last_wait_type,status from sys.dm_exec_requests
where session_id=<< your session id of select >>
or run the query below:
select session_id,blocking_session_id,wait_resource,wait_time,
last_wait_type,status from sys.dm_exec_requests
where blocking_session_id>0
You can see the status as suspended due to blocking; once you clear the blocking (by committing the transaction), you will see that SQL Server automatically resumes the suspended query.

Oracle Procedure to detect long running SQL

I need to create a SQL procedure that can detect any long-running SQL from non-DBMS users that runs longer than a certain time period. I can get all the relevant information with a SELECT statement:
select s.username, s.machine, db.name, s.last_call_et, sq.sql_fulltext, s.sql_id, s.status, s.serial#
from v_$session s, v_$sql sq, dba_users d, v$database db
where s.sql_id = sq.sql_id
and s.username = d.username
and s.status = 'ACTIVE'
and s.last_call_et > 50
and d.default_tablespace <> 'SYSAUX'
and d.default_tablespace <> 'SYSTEM';
My question now is: how do I insert the data from the SELECT statement into two tables (dba_long_sqlstat, dba_hist_long_sqlstat)? If a long-running SQL is detected more than once in the same hour, there should only be one record in DBA_HIST_LONG_SQLSTAT. In the DBA_LONG_SQLSTAT table it doesn't matter how many records of the same SQL there are, just that the records should not be kept longer than 24 hours.
Is this possible somehow?
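One rough sketch of how this could be wired up as a scheduled procedure, purely as a starting point (the layouts of dba_long_sqlstat and dba_hist_long_sqlstat, including a captured_at timestamp column, are assumptions here and not given in the question):
-- Sketch only: the target-table layouts and the captured_at column are assumed.
create or replace procedure log_long_running_sql as
begin
  -- Keep every detection in the detail table.
  insert into dba_long_sqlstat
    (username, machine, db_name, last_call_et, sql_fulltext, sql_id, status, serial#, captured_at)
  select s.username, s.machine, db.name, s.last_call_et, sq.sql_fulltext, s.sql_id, s.status, s.serial#, sysdate
  from v_$session s, v_$sql sq, dba_users d, v$database db
  where s.sql_id = sq.sql_id
    and s.username = d.username
    and s.status = 'ACTIVE'
    and s.last_call_et > 50
    and d.default_tablespace not in ('SYSAUX', 'SYSTEM');

  -- At most one history row per sql_id per hour.
  merge into dba_hist_long_sqlstat h
  using (select sql_id, min(captured_at) as captured_at
         from dba_long_sqlstat
         where captured_at >= trunc(sysdate, 'HH')
         group by sql_id) src
  on (h.sql_id = src.sql_id and h.captured_at >= trunc(sysdate, 'HH'))
  when not matched then
    insert (sql_id, captured_at) values (src.sql_id, src.captured_at);

  -- Detail rows older than 24 hours are discarded.
  delete from dba_long_sqlstat where captured_at < sysdate - 1;

  commit;
end;
/
Scheduling this with DBMS_SCHEDULER (for example every few minutes) gives the detection; the MERGE condition enforces the once-per-hour rule and the DELETE enforces the 24-hour retention.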

join large tables - transactional log full due to active transaction

I have two large tables with 60 million and 10 million records, respectively. I want to join both tables together; however, the process runs for 3 hours and then comes back with the error message:
the transaction log for database is full due to 'active_transaction'
Autogrowth is unlimited and I have set the DB recovery model to simple.
The size of the log drive is 50 GB.
I am using SQL Server 2008 R2.
The SQL query I am using is:
Select * into betdaq.[dbo].temp3 from
(Select XXXXX, XXXXX, XXXXX, XXXXX, XXXXX
 from XXX.[dbo].temp1 inner join XXX.[dbo].temp2
 on temp1.Date = temp2.[Date] and temp1.cloth = temp2.Cloth and temp1.Time = temp2.Time) a
A single command is one transaction, and the transaction does not commit until the end.
So you are filling up the transaction log.
You are going to need to loop and insert something like 100,000 rows at a time.
Start with this just to test the first 100,000.
Then you will need to add a loop with a cursor (a rough sketch follows further below).
create table betdaq.[dbo].temp3 ...
insert into betdaq.[dbo].temp3 (a,b,c,d,e)
Select top 100000 with ties XXXXX, XXXXX, XXXXX, XXXXX, XXXXX
from XXX.[dbo].temp1
join XXX.[dbo].temp2
on temp1.Date = temp2.[Date]
and temp1.Time = temp2.Time
and temp1.cloth = temp2.Cloth
order by temp1.Date, temp1.Time
And why? That is a LOT of data. Could you use a view or a CTE?
If those join columns are indexed, a view will be very efficient.
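A rough sketch of that loop, batching by date (the date data type for the loop variable and the assumption that temp1.Date splits the data into reasonably small slices are mine; the a-e column list comes from the test insert above):
declare @d date;

declare date_cur cursor local fast_forward for
    select distinct [Date] from XXX.[dbo].temp1;

open date_cur;
fetch next from date_cur into @d;

while @@fetch_status = 0
begin
    insert into betdaq.[dbo].temp3 (a, b, c, d, e)
    select XXXXX, XXXXX, XXXXX, XXXXX, XXXXX
    from XXX.[dbo].temp1
    join XXX.[dbo].temp2
      on temp1.[Date] = temp2.[Date]
     and temp1.Time   = temp2.Time
     and temp1.cloth  = temp2.Cloth
    where temp1.[Date] = @d;

    -- Each batch auto-commits, so in SIMPLE recovery the log space can be reused at the next checkpoint.
    fetch next from date_cur into @d;
end

close date_cur;
deallocate date_cur;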
The transaction log can fill up even though the database is in the simple recovery model. Even though SELECT INTO is a minimally logged operation, the log can also become full due to other transactions running in parallel.
I would use the queries below to check transaction log space usage while the query is running:
select * from sys.dm_db_log_space_usage
select * from sys.dm_tran_database_transactions
select * from sys.dm_tran_active_transactions
select * from sys.dm_tran_current_transaction
Further, the query at the link below can also be used to check the SQL text:
https://gallery.technet.microsoft.com/scriptcenter/Transaction-Log-Usage-By-e62ba57d

Fine tuning update query

I am developing a batch process in PeopleSoft Application Engine.
I have inserted data into a staging table from the JOB table.
There are 120,596 employees in total whose data has to be processed; this is in the development environment.
In the testing environment, the number of rows to be processed is 249,047.
There is also a lot of non-job data that has to be sent for employees.
My design is such that I write individual UPDATE statements to update the data in the table, then select the data from the staging table and write it to a file.
The update is taking too much time, and I would like to know a technique to fine-tune it.
I have searched for many things and even tried using /* +Append */ in the update query, but it throws the error message "sql command not ended".
Also, my update query has to check for NVL or null values.
Is there any way to share the code over Stack Overflow? I mean, these are INSERT and UPDATE statements written in PeopleSoft actions, so that people here can have a look at them.
Kindly suggest a technique; my goal is to finish the execution within 5-10 minutes.
My update statement:
I have figured out the cause. It is this UPDATE statement:
UPDATE %Table(AZ_GEN_TMP)
SET AZ_HR_MANAGER_ID = NVL((
        SELECT e.emplid
        FROM PS_EMAIL_ADDRESSES e
        WHERE UPPER(SUBSTR(e.email_addr, 0, INSTR(e.email_addr, '#') - 1)) = (
                SELECT c.contact_oprid
                FROM ps_az_can_employee c
                WHERE c.emplid = %Table(AZ_GEN_TMP).EMPLID
                  AND c.rolename = 'HRBusinessPartner'
                  AND c.seqnum = (
                        SELECT MAX(c1.seqnum)
                        FROM ps_az_can_employee c1
                        WHERE c1.emplid = c.emplid
                          AND c1.rolename = c.rolename))
          AND e.e_addr_type = 'PINT'), ' ')
In order to fine-tune this, I am inserting the value contact_oprid into my staging table, using a hint:
SELECT /*+ ALL_ROWS */ c.contact_oprid
FROM ps_az_can_employee c
WHERE c.emplid = %Table(AZ_GEN_TMP).EMPLID
  AND c.rolename = 'HRBusinessPartner'
  AND c.seqnum = (
        SELECT MAX(c1.seqnum)
        FROM ps_az_can_employee c1
        WHERE c1.emplid = c.emplid
          AND c1.rolename = c.rolename)
and then doing an update on the staging table:
UPDATE staging_table
SET AZ_HR_MANAGER_ID = NVL((
        SELECT e.emplid
        FROM PS_EMAILtable e
        WHERE UPPER(REGEXP_SUBSTR(e.email_addr, '[^#]+', 1, 1)) = staging_table.CONTACT_OPRID
          AND e.e_addr_type = 'PINT'), ' ')
This takes 5 hours, as it has to process 2 lakh (200,000) rows of data.
Is there any way the processing can be sped up, I mean, using hints or indexes?
Also, if I don't include this update, the processing of the other values is very fast and finishes in 10 minutes.
Kindly help me with this.
Thanks.
I have resolved this by using an Oracle MERGE INTO statement, and now the process takes 10 minutes to execute, including the file-writing operation. Thanks all for your help and suggestions.
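For reference, a minimal sketch of what that MERGE could look like (it reuses the staging-table and column names from the question, takes PS_EMAIL_ADDRESSES from the first statement, and leaves rows with no matching e-mail untouched instead of setting them to ' '):
MERGE INTO staging_table s
USING (
        -- Assumes each contact_oprid maps to a single emplid;
        -- otherwise deduplicate here first.
        SELECT e.emplid,
               UPPER(REGEXP_SUBSTR(e.email_addr, '[^#]+', 1, 1)) AS contact_oprid
        FROM PS_EMAIL_ADDRESSES e
        WHERE e.e_addr_type = 'PINT'
      ) m
ON (m.contact_oprid = s.contact_oprid)
WHEN MATCHED THEN
  UPDATE SET s.az_hr_manager_id = m.emplid
A single MERGE touches each staging row once instead of re-running the correlated subquery per row, which is why it finishes so much faster.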