Will a SQL blocking session be cleared by itself? - sql

I can see a blocking session in our production DB:
SELECT * FROM sys.sysprocesses WHERE spid = YOURSPID
I have not killed the session with KILL <SPID>.
Will the blocking session get cleared by itself?

It depends on the session that holds the blocking lock. If an active user session is doing the blocking, you either have to wait for that session to finish its work or kill it explicitly. You can execute sp_whoisactive to list the active sessions, and if one of them is blocking others you can kill that session to get things moving again.
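For a quick first look before reaching for sp_whoisactive, a minimal sketch along these lines (built only on the standard DMVs; the KILL is commented out and 123 is a placeholder session id) lists who is blocking whom:
-- List blocked requests and their blockers from the built-in DMVs.
SELECT r.session_id          AS blocked_session,
       r.blocking_session_id AS blocking_session,
       r.wait_type,
       r.wait_time,
       t.text                AS blocked_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;

-- Only if you decide the blocker has to go (replace 123 with the blocking session id):
-- KILL 123;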
You can also refer to: Identify the cause of SQL Server blocking.
This query is also a good way to analyze detailed information about locks, and it helps you identify the cause of a large number of blocks.
WITH [Blocking]
AS (SELECT w.[session_id]
,s.[original_login_name]
,s.[login_name]
,w.[wait_duration_ms]
,w.[wait_type]
,r.[status]
,r.[wait_resource]
,w.[resource_description]
,s.[program_name]
,w.[blocking_session_id]
,s.[host_name]
,r.[command]
,r.[percent_complete]
,r.[cpu_time]
,r.[total_elapsed_time]
,r.[reads]
,r.[writes]
,r.[logical_reads]
,r.[row_count]
,q.[text]
,q.[dbid]
,p.[query_plan]
,r.[plan_handle]
FROM [sys].[dm_os_waiting_tasks] w
INNER JOIN [sys].[dm_exec_sessions] s ON w.[session_id] = s.[session_id]
INNER JOIN [sys].[dm_exec_requests] r ON s.[session_id] = r.[session_id]
CROSS APPLY [sys].[dm_exec_sql_text](r.[plan_handle]) q
CROSS APPLY [sys].[dm_exec_query_plan](r.[plan_handle]) p
WHERE w.[session_id] > 50
AND w.[wait_type] NOT IN ('DBMIRROR_DBM_EVENT'
,'ASYNC_NETWORK_IO'))
SELECT b.[session_id] AS [WaitingSessionID]
,b.[blocking_session_id] AS [BlockingSessionID]
,b.[login_name] AS [WaitingUserSessionLogin]
,s1.[login_name] AS [BlockingUserSessionLogin]
,b.[original_login_name] AS [WaitingUserConnectionLogin]
,s1.[original_login_name] AS [BlockingSessionConnectionLogin]
,b.[wait_duration_ms] AS [WaitDuration]
,b.[wait_type] AS [WaitType]
,t.[request_mode] AS [WaitRequestMode]
,UPPER(b.[status]) AS [WaitingProcessStatus]
,UPPER(s1.[status]) AS [BlockingSessionStatus]
,b.[wait_resource] AS [WaitResource]
,t.[resource_type] AS [WaitResourceType]
,t.[resource_database_id] AS [WaitResourceDatabaseID]
,DB_NAME(t.[resource_database_id]) AS [WaitResourceDatabaseName]
,b.[resource_description] AS [WaitResourceDescription]
,b.[program_name] AS [WaitingSessionProgramName]
,s1.[program_name] AS [BlockingSessionProgramName]
,b.[host_name] AS [WaitingHost]
,s1.[host_name] AS [BlockingHost]
,b.[command] AS [WaitingCommandType]
,b.[text] AS [WaitingCommandText]
,b.[row_count] AS [WaitingCommandRowCount]
,b.[percent_complete] AS [WaitingCommandPercentComplete]
,b.[cpu_time] AS [WaitingCommandCPUTime]
,b.[total_elapsed_time] AS [WaitingCommandTotalElapsedTime]
,b.[reads] AS [WaitingCommandReads]
,b.[writes] AS [WaitingCommandWrites]
,b.[logical_reads] AS [WaitingCommandLogicalReads]
,b.[query_plan] AS [WaitingCommandQueryPlan]
,b.[plan_handle] AS [WaitingCommandPlanHandle]
FROM [Blocking] b
INNER JOIN [sys].[dm_exec_sessions] s1
ON b.[blocking_session_id] = s1.[session_id]
INNER JOIN [sys].[dm_tran_locks] t
ON t.[request_session_id] = b.[session_id]
WHERE t.[request_status] = 'WAIT'
GO

Related

Runtime of stored procedure with temporary tables varies intermittently

We are facing a performance issue while executing a stored procedure. It usually takes 10-15 minutes to run, but sometimes it takes more than 30 minutes.
We captured visualized plan files for the normal-run and long-run cases.
By checking the visualized plans we found that one particular INSERT block of code takes the extra time in the long run. And by checking
"EXPLAIN PLAN FOR SQL PLAN CACHE ENTRY <plan_id>"
we found that the order of execution differs in the long run.
This is the block which takes extra time to run sometimes.
INSERT INTO #TMP_DATI_SALDI_LORDI_BASE (
"COD_SCENARIO","COD_PERIODO","COD_CONTO","COD_DEST1","COD_DEST2","COD_DEST3","COD_DEST4","COD_DEST5"
,"IMPORTO","COD_VALUTA","IMPORTO_VALUTA_ORIGINARIA","COD_VALUTA_ORIGINARIA","NOTE"
)
( SELECT
SCEN_P.SCENARIO
,SCEN_P.PERIOD
,ACCOUT_ADJ.ATTRIBUTO1 AS "COD_CONTO"
,DATAS_rev.COD_DEST1
,DATAS_rev.COD_DEST2
,DATAS_rev.COD_DEST3
,__typed_NString__($1, 50)
,'RPT_NON'
,SUM(
CASE WHEN INFO.INCOT = 'FOB' THEN
CASE ACCOUT_rev.ATTRIBUTO1 WHEN 'CalcInsurance' THEN
0
ELSE
DATAS_rev.IMPORTO
END
ELSE
DATAS_rev.IMPORTO
END
* (DATAS_ADJ.IMPORTO - DATAS.IMPORTO)
)
,DATAS_rev.COD_VALUTA
,SUM(
CASE WHEN INFO.INCOT = 'FOB' THEN
CASE ACCOUT_rev.ATTRIBUTO1 WHEN 'CalcInsurance' THEN
0
ELSE
DATAS_rev.IMPORTO_VALUTA_ORIGINARIA
END
ELSE
DATAS_rev.IMPORTO_VALUTA_ORIGINARIA
END
* (DATAS_ADJ.IMPORTO_VALUTA_ORIGINARIA - DATAS.IMPORTO_VALUTA_ORIGINARIA)
)
,DATAS_rev.COD_VALUTA_ORIGINARIA
,'CPM_SP_CACL_FY_E3 Parts Option ADJ'
FROM #TMP_TAGERT_SCEN_P SCEN_P
INNER JOIN #TMP_DATI_SALDI_LORDI_BASE DATAS_rev
ON DATAS_rev.COD_SCENARIO = SCEN_P.SCENARIO
AND DATAS_rev.COD_PERIODO = SCEN_P.PERIOD
AND LEFT(DATAS_rev.COD_DEST3, 1) = 'O'
INNER JOIN CONTO ACCOUT_rev
ON ACCOUT_rev.COD_CONTO = DATAS_rev.COD_CONTO
AND ACCOUT_rev.ATTRIBUTO1 IN ('CalcFOB','CalcInsurance') --FOB,Insurance(Ocean freight is Nothing by Option)
INNER JOIN #DSL DATAS
ON DATAS.COD_SCENARIO = 'LAUNCH'
AND DATAS.COD_PERIODO = 12
AND DATAS.COD_DEST1 = 'NC'
AND DATAS.COD_DEST2 = 'NC'
AND DATAS.COD_DEST3 = 'F001'
AND DATAS.COD_DEST4 = DATAS_rev.COD_DEST4
AND DATAS.COD_DEST5 = 'INP'
INNER JOIN CONTO ACCOUT
ON ACCOUT.COD_CONTO = DATAS.COD_CONTO
AND ACCOUT.ATTRIBUTO2 = 'E3'
INNER JOIN CONTO ACCOUT_ADJ
ON ACCOUT_ADJ.ATTRIBUTO3 = DATAS.COD_CONTO
AND ACCOUT_ADJ.ATTRIBUTO2 = 'HE3'
INNER JOIN #DSL DATAS_ADJ
ON LEFT(DATAS_ADJ.COD_SCENARIO,4) = LEFT(SCEN_P.SCENARIO,4)
AND DATAS_ADJ.COD_PERIODO = 12
AND DATAS_ADJ.COD_DEST1 = DATAS.COD_DEST1
AND DATAS_ADJ.COD_DEST2 = DATAS.COD_DEST2
AND DATAS_ADJ.COD_DEST3 = DATAS.COD_DEST3
AND DATAS_ADJ.COD_DEST4 = DATAS.COD_DEST4
AND DATAS_ADJ.COD_DEST5 = DATAS.COD_DEST5
AND DATAS_ADJ.COD_CONTO = ACCOUT_ADJ.COD_CONTO
LEFT OUTER JOIN #TMP_KDPWT_INCOTERMS INFO
ON INFO.P_CODE = DATAS.COD_DEST4
GROUP BY
SCEN_P.SCENARIO,SCEN_P.PERIOD,ACCOUT_ADJ.ATTRIBUTO1,DATAS_rev.COD_DEST1,DATAS_rev.COD_DEST2
,DATAS_rev.COD_DEST3, DATAS.COD_DEST4,DATAS_rev.COD_VALUTA,DATAS_rev.COD_VALUTA_ORIGINARIA,INFO.INCOT
)
I can also share the order-of-execution details for the normal-run and long-run cases.
Could someone please help us overcome this issue? We also don't know how to fix the join execution order. Is there any way to force the join order? Please guide us.
Thanks in advance,
Vinothkumar
Without a lot more detailed information, there is no way to tell exactly why your INSERT statement shows this alternating runtime behaviour.
Based on my experience, such an analysis can take quite some time, and only a few people are capable of performing it. If you can get someone like that to look at this, make sure to understand and learn from it.
What I can tell from the information shared is this:
Using temporary tables to structure a multi-stage data flow is the wrong thing to do on SAP HANA. Instead, use table variables in SQLScript (see the sketch after this answer).
If you insist on using the temporary tables, at least make them column tables; this avoids the need for some internal data materialisation.
When using joins, make sure that the joined columns are of the same data type. The explain plan is full of TO_INT(), TO_DECIMAL(), and other conversion functions. Those take time and memory, and they make it hard for the optimiser(s) to estimate cardinalities.
As the statement uses a lot of temporary tables, the different join orders can easily result from different volumes of data being present when the SQL was parsed, prepared and optimised. One option to avoid this is to have HANA ignore any cached plans for the statement; the documentation has the HINTS for that.
And that is about what I can say about this with the available information.
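To illustrate the first point, here is a minimal, hypothetical SQLScript sketch of the same multi-stage pattern using table variables instead of local temporary tables (the procedure, table, and column names are simplified placeholders, not the poster's actual code):
CREATE PROCEDURE calc_saldi_lordi()
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER
READS SQL DATA
AS
BEGIN
    -- Each intermediate result is a table variable; the engine can inline
    -- these steps instead of materialising local temporary tables.
    lt_scen_p = SELECT scenario, period
                  FROM target_scen_p;

    lt_saldi_base = SELECT s.scenario,
                           s.period,
                           d.cod_conto,
                           SUM(d.importo) AS importo
                      FROM :lt_scen_p AS s
                     INNER JOIN dati_saldi_lordi AS d
                        ON d.cod_scenario = s.scenario
                       AND d.cod_periodo  = s.period
                     GROUP BY s.scenario, s.period, d.cod_conto;

    -- Final result set returned by the procedure.
    SELECT * FROM :lt_saldi_base;
END;
For the last point, the hint the documentation describes for bypassing a cached plan can, as far as I recall, be appended to the statement as WITH HINT (IGNORE_PLAN_CACHE); check the SQL reference of your HANA revision for the exact hints it supports.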

Django permissions using the database too much

We started an investigation of our database, as it is the least scalable component in our infrastructure.
I checked the table pg_stat_statements of our Postgresql database with the following query:
SELECT userid, calls, total_time, rows, 100.0 * shared_blks_hit /
nullif(shared_blks_hit + shared_blks_read, 0) AS hit_percent, query
FROM pg_stat_statements ORDER BY total_time DESC LIMIT 5;
Every time, the same query is first in the list:
16386 | 21564 | 4077324.749363 | 1423094 | 99.9960264252721535 |
SELECT DISTINCT "auth_user"."id", "auth_user"."password", "auth_user"."last_login",
"auth_user"."is_superuser", "auth_user"."username", "auth_user"."first_name",
"auth_user"."last_name", "auth_user"."email", "auth_user"."is_staff",
"auth_user"."is_active", "auth_user"."date_joined" FROM "auth_user"
LEFT OUTER JOIN "auth_user_groups" ON ("auth_user"."id" = "auth_user_groups"."user_id")
LEFT OUTER JOIN "auth_group" ON ("auth_user_groups"."group_id" = "auth_group"."id")
LEFT OUTER JOIN "auth_group_permissions" ON ("auth_group"."id" = "auth_group_permissions"."group_id")
LEFT OUTER JOIN "auth_user_user_permissions" ON ("auth_user"."id" = "auth_user_user_permissions"."user_id")
WHERE ("auth_group_permissions"."permission_id" = $1 OR "auth_user_user_permissions"."permission_id" = $2)
This looks like a permission check, and as I understand it, it is cached at the request level.
I wonder if someone has written a package to cache these permissions in memcached, for instance, or found a solution to reduce the number of queries issued to check them?
I checked all the indexes and they seem correct. The query is a bit slow, mostly because we have a lot of permissions, but still, the number of calls is crazy.

Running two SQL Server jobs from a single job is not working

We have a SQL Server Agent job that runs two other SQL Server Agent jobs. The job is designed so that the second job waits until the first job has completed. But sometimes the wait does not work, and both jobs end up running at the same time, which causes both of them to fail. We use the code below for the waiting mechanism.
WHILE
(
SELECT COUNT(*) AS cnt
FROM msdb.dbo.sysjobs J
INNER JOIN msdb.dbo.sysjobactivity JA
ON J.job_id = JA.job_id
WHERE start_execution_date IS NOT NULL
AND stop_execution_date IS NULL
AND J.name = "first job name"
AND last_executed_step_date > DATEADD(DD,-1,GETDATE())--//Added to bypass historical corrupt data
) > 0
BEGIN
--WAIT
WAITFOR DELAY '00:01';
END
Any suggestions as to why this is not working sometimes?
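One refinement worth trying, sketched here as an assumption rather than a verified fix: restrict the check to the most recent Agent session via msdb.dbo.syssessions instead of relying on the one-day date filter, so rows left over from earlier Agent sessions cannot distort the count (the job name is a placeholder):
WHILE EXISTS
(
    SELECT 1
    FROM msdb.dbo.sysjobs AS J
    INNER JOIN msdb.dbo.sysjobactivity AS JA
        ON J.job_id = JA.job_id
    WHERE J.name = 'first job name'
      AND JA.session_id = (SELECT MAX(session_id) FROM msdb.dbo.syssessions)  -- current Agent session only
      AND JA.start_execution_date IS NOT NULL
      AND JA.stop_execution_date IS NULL
)
BEGIN
    WAITFOR DELAY '00:01:00';  -- check once a minute
END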

How to detect query which holds the lock in Postgres?

I want to continuously track mutually blocking locks in Postgres.
I came across the Locks Monitoring article and tried to run the following query:
SELECT bl.pid AS blocked_pid,
a.usename AS blocked_user,
kl.pid AS blocking_pid,
ka.usename AS blocking_user,
a.query AS blocked_statement
FROM pg_catalog.pg_locks bl
JOIN pg_catalog.pg_stat_activity a ON a.pid = bl.pid
JOIN pg_catalog.pg_locks kl ON kl.transactionid = bl.transactionid AND kl.pid != bl.pid
JOIN pg_catalog.pg_stat_activity ka ON ka.pid = kl.pid
WHERE NOT bl.granted;
Unfortunately, it never returns a non-empty result set. If I simplify the given query to the following form:
SELECT bl.pid AS blocked_pid,
a.usename AS blocked_user,
a.query AS blocked_statement
FROM pg_catalog.pg_locks bl
JOIN pg_catalog.pg_stat_activity a ON a.pid = bl.pid
WHERE NOT bl.granted;
then it returns queries which are waiting to acquire a lock. But I cannot manage to change it so that it returns both the blocked and the blocking queries.
Any ideas?
Since 9.6 this is a lot easier, as that version introduced the function pg_blocking_pids() to find the sessions that are blocking another session.
So you can use something like this:
select pid,
usename,
pg_blocking_pids(pid) as blocked_by,
query as blocked_query
from pg_stat_activity
where cardinality(pg_blocking_pids(pid)) > 0;
From this excellent article on query locks in Postgres, one can get the blocked query and the blocking query, along with their details, using the following query.
CREATE VIEW lock_monitor AS(
SELECT
COALESCE(blockingl.relation::regclass::text,blockingl.locktype) as locked_item,
now() - blockeda.query_start AS waiting_duration, blockeda.pid AS blocked_pid,
blockeda.query as blocked_query, blockedl.mode as blocked_mode,
blockinga.pid AS blocking_pid, blockinga.query as blocking_query,
blockingl.mode as blocking_mode
FROM pg_catalog.pg_locks blockedl
JOIN pg_stat_activity blockeda ON blockedl.pid = blockeda.pid
JOIN pg_catalog.pg_locks blockingl ON(
( (blockingl.transactionid=blockedl.transactionid) OR
(blockingl.relation=blockedl.relation AND blockingl.locktype=blockedl.locktype)
) AND blockedl.pid != blockingl.pid)
JOIN pg_stat_activity blockinga ON blockingl.pid = blockinga.pid
AND blockinga.datid = blockeda.datid
WHERE NOT blockedl.granted
AND blockinga.datname = current_database()
);
SELECT * from lock_monitor;
As the query is long but useful, the article's author has created a view for it to simplify its usage.
This modification of a_horse_with_no_name's answer will give you the last (or current, if it's still actively running) query of the blocking session in addition to just the blocked sessions:
SELECT
activity.pid,
activity.usename,
activity.query,
blocking.pid AS blocking_id,
blocking.query AS blocking_query
FROM pg_stat_activity AS activity
JOIN pg_stat_activity AS blocking ON blocking.pid = ANY(pg_blocking_pids(activity.pid));
This shows you which operations are interfering with each other (even if the block comes from a previous query), making it easier to judge the impact of killing one session and to figure out how to prevent the blocking in the future.
Postgres has a very rich system catalog exposed via SQL tables.
PG's statistics collector is a subsystem that supports collection and reporting of information about server activity.
Now to figure out the blocking PIDs you can simply query pg_stat_activity.
select pg_blocking_pids(pid) as blocked_by
from pg_stat_activity
where cardinality(pg_blocking_pids(pid)) > 0;
To get the query corresponding to the blocking PID, you can self-join or use it in a subquery in the WHERE clause.
SELECT query
FROM pg_stat_activity
WHERE pid IN (select unnest(pg_blocking_pids(pid)) as blocked_by from pg_stat_activity where cardinality(pg_blocking_pids(pid)) > 0);
Note: since pg_blocking_pids(pid) returns an integer[], you need to unnest it before using it in a WHERE pid IN clause.
Hunting for slow queries can be tedious sometimes, so have patience. Happy hunting.
To show all blocked queries:
select pid,
usename,
pg_blocking_pids(pid) as blocked_by,
query as blocked_query
from pg_stat_activity
where cardinality(pg_blocking_pids(pid)) > 0;
You can cancel (or, if necessary, terminate) a single blocked query by passing its pid:
SELECT pg_cancel_backend(<blocked_pid>), pg_terminate_backend(<blocked_pid>);
You can terminate all blocked queries like this:
SELECT pg_cancel_backend(a.pid), pg_terminate_backend(a.pid)
FROM( select pid,
usename,
pg_blocking_pids(pid) as blocked_by,
query as blocked_query
from pg_stat_activity
where cardinality(pg_blocking_pids(pid)) > 0) a
For PostgreSQL versions earlier than 9.6, which do not have the pg_blocking_pids function, you can use the following query to find the blocked query and the blocking query.
SELECT w.query AS waiting_query,
w.pid AS waiting_pid,
w.usename AS waiting_user,
now() - w.query_start AS waiting_duration,
l.query AS locking_query,
l.pid AS locking_pid,
l.usename AS locking_user,
t.schemaname || '.' || t.relname AS tablename,
now() - l.query_start AS locking_duration
FROM pg_stat_activity w
JOIN pg_locks l1 ON w.pid = l1.pid AND NOT l1.granted
JOIN pg_locks l2 ON l1.relation = l2.relation AND l2.granted
JOIN pg_stat_activity l ON l2.pid = l.pid
JOIN pg_stat_user_tables t ON l1.relation = t.relid
WHERE w.waiting;
One thing I find that is often missing from these is an ability to look up row locks. At least on the larger databases I have worked on, row locks are not shown in pg_locks (if they were, pg_locks would be much, much larger and there isn't a real data type to show the locked row in that view properly).
I don't know of a simple solution to this, but usually what I do is look at the table the lock is waiting on and search for rows where the xmax is less than the transaction id present there. That usually gives me a place to start, but it is a bit hands-on and not automation-friendly.
Note that this shows you uncommitted writes to rows in those tables. Once committed, the rows are not visible in the current snapshot. But for large tables, that is a pain.
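As a rough sketch of that hands-on check (my_table is a hypothetical name; narrow the predicate to the rows you suspect), something like this lists rows whose xmax has been set by some transaction:
-- Rows with a non-zero xmax have been updated, deleted, or row-locked by some
-- transaction; compare that xid against what you see in pg_locks / pg_stat_activity.
SELECT ctid, xmin, xmax, *
FROM my_table
WHERE xmax::text::bigint <> 0;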
Others have already answered the question, but I had a particular situation: there were lots of queries blocking each other, and I wanted to find the main blocker session for each blocked session.
For example, session 5 was blocked by session 4, and 4 was waiting for 3 and 2, while 3 and 2 were waiting for session 1.
5->4->{3,2}->1
In this case session 1 is the real problem; once it is cleared, the others will take care of themselves.
So I wanted a query which will show me results like this:
Blocked pid | Blocker pid
5           | 1
4           | 1
3           | 1
2           | 1
So if there is any chain of locked sessions, this query will show you the main blocker session for each blocked session.
;with recursive
find_the_source_blocker as (
select pid
,pid as blocker_id
from pg_stat_activity pa
where pa.state<>'idle'
and array_length(pg_blocking_pids(pa.pid), 1) is null
union all
select
t.pid as pid
,f.blocker_id as blocker_id
from find_the_source_blocker f
join ( SELECT
act.pid,
blc.pid AS blocker_id
FROM pg_stat_activity AS act
LEFT JOIN pg_stat_activity AS blc ON blc.pid = ANY(pg_blocking_pids(act.pid))
where act.state<>'idle') t on f.pid=t.blocker_id
)
select distinct
s.pid
,s.blocker_id
,pb.usename as blocker_user
,pb.query_start as blocker_start
,pb.query as blocker_query
,pt.query_start as trans_start
,pt.query as trans_query
from find_the_source_blocker s
join pg_stat_activity pb on s.blocker_id=pb.pid
join pg_stat_activity pt on s.pid=pt.pid
where s.pid<>s.blocker_id

The Same SQL Query takes longer to run in one DB than another DB under the same server

I have a SQL Server with two databases under it that have the same structure and data. I run the same SQL query in both databases; one of them takes longer, while the other completes in less than 50% of the time. They have different execution plans.
The query for the view is as below:
SELECT DISTINCT i.SmtIssuer, i.SecID, ra.AssetNameCurrency AS AssetIdCurrency, i.IssuerCurrency, seg.ProxyCurrency, shifts.ScenarioDate, ten.TenorID, ten.Tenor,
shifts.Shift, shifts.BusinessDate, shifts.ScenarioNum
FROM dbo.tblRrmIssuer AS i INNER JOIN
dbo.tblRrmSegment AS seg ON i.Identifier = seg.Identifier AND i.SegmentID = seg.SegmentID INNER JOIN
dbo.tblRrmAsset AS ra ON seg.AssetID = ra.AssetID INNER JOIN
dbo.tblRrmHistSimShift AS shifts ON seg.Identifier = shifts.Identifier AND i.SegmentID = shifts.SegmentID INNER JOIN
dbo.tblRrmTenor AS ten ON shifts.TenorID = ten.TenorID INNER JOIN
dbo.tblAsset AS a ON i.SmtIssuer = a.SmtIssuer INNER JOIN
dbo.tblRrmSource AS sc ON seg.SourceID = sc.SourceID
WHERE (a.AssetTypeID = 0) AND (sc.SourceName = 'CsVaR') AND (shifts.SourceID =
(SELECT SourceID
FROM dbo.tblRrmSource
WHERE (SourceName = 'CsVaR')))
The things I have already tried are: rebuilding and reorganizing the indexes on the table (tblRRMHistSimShifts, which has over 2 million records), and checking for locks, other background processes, and errors on the server. The max degree of parallelism for the server is 0.
Is there anything more you can suggest to fix this issue?
The fact that you have two databases on the same server and with the same data set (as you said) does not ensure the same execution plan.
Here are some of the reasons why the query plan may be different:
The mdf and ldf files (for each database) are on different drives. If one drive is faster, that database will run the query faster too.
Stale statistics. If one database has newer statistics than the other, SQL Server has a better chance of picking a proper (and faster) execution plan.
Indexes: I know you said they are identical, but I would check whether you really have the same indexes on both.
Focus on finding out why the query is running slowly, or look at the actual execution plan, instead of comparing. Checking the actual execution plan of the slow query will give you a hint of why it is running slower.
Also, I would not add a NOLOCK hint to fix the issue. In my experience, most slow queries can be tuned via code or indexes, rather than by adding a NOLOCK hint that may give you modified or old result sets, depending on your transactions.
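If stale statistics are the suspect, here is a quick check-and-refresh sketch for the big table mentioned in the question (adjust the schema and table name to your database before running it):
-- When were the statistics on the large table last updated?
SELECT s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.tblRrmHistSimShift');

-- Refresh them with a full scan if they look old.
UPDATE STATISTICS dbo.tblRrmHistSimShift WITH FULLSCAN;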
The best way is to restructure your query:
SELECT DISTINCT i.SmtIssuer, i.SecID, ra.AssetNameCurrency AS AssetIdCurrency, i.IssuerCurrency, seg.ProxyCurrency, shifts.ScenarioDate, ten.TenorID, ten.Tenor,
shifts.Shift, shifts.BusinessDate, shifts.ScenarioNum
FROM dbo.tblRrmIssuer AS i INNER JOIN dbo.tblRrmSegment AS seg ON i.Identifier = seg.Identifier AND i.SegmentID = seg.SegmentID
INNER JOIN dbo.tblRrmSource AS sc ON seg.SourceID = sc.SourceID
INNER JOIN dbo.tblRrmAsset AS ra ON seg.AssetID = ra.AssetID
INNER JOIN dbo.tblRrmHistSimShift AS shifts ON seg.Identifier = shifts.Identifier AND i.SegmentID = shifts.SegmentID AND shifts.SourceID = sc.SourceID
INNER JOIN dbo.tblRrmTenor AS ten ON shifts.TenorID = ten.TenorID
INNER JOIN dbo.tblAsset AS a ON i.SmtIssuer = a.SmtIssuer
WHERE (a.AssetTypeID = 0) AND (sc.SourceName = 'CsVaR')