How to change query status from suspended to runnable? - sql

I have a query that needs to update 2 million records, but there is no space left on the disk, so the query is suspended right now. I have since freed up some space, but the query is still suspended. So how should I change the status to runnable, or is there any way to tell SQL Server that it has enough space now and can run the query?

SQL Server will change the query status from suspended to runnable automatically; it is not something you manage.
Your job here is to check why the query is suspended. The DMV below can help:
select session_id,blocking_session_id,wait_resource,wait_time,
last_wait_type from sys.dm_exec_requests
where session_id=<< your session id>>
There are many reasons why a query gets suspended; some of them include locking/blocking, rollback, and waiting for data to be read from disk.
You will have to check the status via the DMV above, see what the reason is, and troubleshoot accordingly.
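If you also want to see which statement the suspended session is actually running, a minimal sketch like the one below (same session id placeholder as above) joins sys.dm_exec_requests to sys.dm_exec_sql_text:
-- wait information together with the text of the running statement
select r.session_id, r.status, r.blocking_session_id,
       r.wait_type, r.wait_time, r.wait_resource,
       t.text as running_sql
from sys.dm_exec_requests r
cross apply sys.dm_exec_sql_text(r.sql_handle) t
where r.session_id = << your session id >>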
Below is a small sample which can help you understand what suspended means:
create table t1
(
id int
)
insert into t1
select row_number() over (order by (select null))
from
sys.objects c
cross join
sys.objects c1
Now, in one tab of SSMS, run the query below:
begin tran
update t1
set id=id+1
Open another tab and run the query below:
select * from t1
Now open another tab and run the query below:
select session_id,blocking_session_id,wait_resource,wait_time,
last_wait_type,status from sys.dm_exec_requests
where session_id=<< your session id of select >>
or run the query below:
select session_id,blocking_session_id,wait_resource,wait_time,
last_wait_type,status from sys.dm_exec_requests
where blocking_session_id>0
You will see the status as suspended due to blocking; once you clear the blocking (by committing the transaction), SQL Server automatically resumes the suspended query in this case.
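To clear the block in this demo you would commit (or roll back) the open transaction in the first tab, or, as a last resort, kill the blocking session from another tab; a quick sketch (the session id is just an illustration):
-- in the first tab, which holds the open transaction
commit tran
-- or, from any other tab, as a last resort (52 is only an example session id)
-- kill 52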

Related

SQL view takes a long time to finish, but the same internal query executes in 1 second

I am facing a weird issue with a query that handles data from ADO. Here is the view:
ALTER view [dbo].[v_Missing_Pepics] as
select ProjectName Title,
       PM.ProjectCode,
       LifeCycle,
       dbo.fn_kip_ado_status_mapping(LifeCycle, PM.ProjectCode) KipStatus,
       TeamCode,
       ISNULL((select top 1 value from string_split(TA.AreaPath, '\') where LTRIM(value) like 'Domain%'), 'Automation') Domain,
       DM.DomainName KipDomain,
       TeamName
from [dbo].[v_ProjectMaster_Latest] PM
left outer join areapath_mapping TA on TA.KeyedInTeamCode=PM.TeamCode
left join v_portfolio_epics PE on PE.ProjectCode=PM.ProjectCode
inner join domain_master DM on DM.DomainCode=PM.DomainCode
where ProjectActive = 'yes' and LifeCycle not in ('In Close-Down', 'Completed','Withdrawn')
and PE.ProjectCode is null and DM.DomainName not in ('Data Power') and PM.ProjectCode not like 'EXP%'
GO
When I try to execute the query like this
Select * from v_Missing_Pepics
It takes more than 80 seconds to finish. But when I copy out just the query inside the view and run it by itself, it executes in about 1 second.
I don't understand why.
I am working in Azure SQL.
Try
sp_recompile 'v_Missing_Pepics'
And see if that resolves the performance problems with the view.
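For completeness, a minimal sketch of that call (the dbo schema prefix is an assumption), plus the per-query alternative of forcing a fresh plan with OPTION (RECOMPILE):
-- recompile anything that references the view the next time it runs
EXEC sp_recompile N'dbo.v_Missing_Pepics';
-- or force a fresh plan for just this one statement
SELECT * FROM dbo.v_Missing_Pepics OPTION (RECOMPILE);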

Select Query doesn't show any results and appears to hang

I have a table, dbo.PE, in SQL Server 2017. Yesterday I had over 40,000 records. Today, when I do this:
Select Top 1000 * From dbo.PE
Nothing happens. No results (not even a "no records" message). It just sits there and spins and says Executing Query until it is cancelled. Any idea what is going on? I tried inserting new data and once again nothing happens; it just sits there and spins until cancelled.
I can access other tables in the database, just not this one. No permissions have been changed.
So, to answer this question, gsharp was correct in that the table was locked. I ran the following statement to find the session holding the lock (https://infrastructureland.wordpress.com/2015/01/24/how-to-release-or-remove-lock-on-a-table-sql-server/):
SELECT
OBJECT_NAME(P.object_id) AS TableName,
Resource_type,
request_session_id
FROM sys.dm_tran_locks L
JOIN sys.partitions P ON L.resource_associated_entity_id = p.hobt_id
WHERE OBJECT_NAME(P.object_id) = 'PE'
Found out that the session ID was 54 so I executed:
kill 54
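Before killing a session it is usually worth checking who owns it and what it last ran; a sketch along these lines (using session id 54 from the example above) can help:
SELECT s.session_id, s.login_name, s.host_name, s.program_name,
       t.text AS most_recent_sql
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_connections AS c ON c.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS t
WHERE s.session_id = 54;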

How to figure out what object is represented by associatedObjectId during blocking?

I got a report from another team about blocking in SQL Server. Looking at the results of
Exec sp_who2
and a query from a blog post by Glenn Berry:
SELECT blocking.session_id AS blocking_session_id
,blocked.session_id AS blocked_session_id
,waitstats.wait_type AS blocking_resource
,waitstats.wait_duration_ms
,waitstats.resource_description
,blocked_cache.text AS blocked_text
,blocking_cache.text AS blocking_text
FROM sys.dm_exec_connections AS blocking
INNER JOIN sys.dm_exec_requests blocked
ON blocking.session_id = blocked.blocking_session_id
CROSS APPLY sys.dm_exec_sql_text(blocked.sql_handle) blocked_cache
CROSS APPLY sys.dm_exec_sql_text(blocking.most_recent_sql_handle) blocking_cache
INNER JOIN sys.dm_os_waiting_tasks waitstats
ON waitstats.session_id = blocked.session_id
I was not able to catch anything being blocked. Running this multiple times I started to notice that something would show up, but the next time I ran the query the blocking was gone.
I created a temp table with SELECT ... INTO:
,blocking_cache.text AS blocking_text
INTO #TempBlockingTable
FROM sys.dm_exec_connections AS blocking
After that I modified the query to be an INSERT INTO ... SELECT. Now I was able to run the query as many times as I wanted without fearing that the results would be gone.
I kept running the query for about 10 seconds until I finally got some results:
SELECT * FROM #TempBlockingTable
Looking at the resource_description column from sys.dm_os_waiting_tasks, I found that the data is displayed in the following format:
<type-specific-description> id=lock<lock-hex-address> mode=<mode> associatedObjectId=<associated-obj-id>
The Microsoft documentation on sys.dm_os_waiting_tasks (http://technet.microsoft.com/en-us/library/ms188743.aspx) does not have a definition for associatedObjectId.
The answer actually comes from Aaron Bertrand; I found it on Google Groups. To get the OBJECT_NAME of associatedObjectId you need to run the following query:
SELECT OBJECT_NAME([object_id])
FROM sys.partitions
WHERE partition_id = 456489945132196
The number 456489945132196 is the value of associatedObjectId from the resource_description column of sys.dm_os_waiting_tasks.
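A slightly more general sketch of the same lookup, which also returns the index involved (depending on the lock type, the value may correspond to the hobt_id column of sys.partitions rather than partition_id, so you may need to filter on that column instead):
SELECT OBJECT_NAME(p.object_id) AS table_name,
       i.name AS index_name
FROM sys.partitions AS p
JOIN sys.indexes AS i
    ON i.object_id = p.object_id
   AND i.index_id = p.index_id
WHERE p.partition_id = 456489945132196   -- or p.hobt_id = <associated-obj-id>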

Fine tuning update query

I am developing a batch process in PeopleSoft Application Engine.
I have inserted data into a staging table from the JOB table.
There are 120,596 employees in total whose data have to be processed; this is in the development environment.
In the testing environment, the number of rows to be processed is 249,047.
There is also a lot of non-job data which has to be sent for the employees.
My design is such that I write individual update statements to update the data in the table, then select data from the staging table and write it to the file.
The update is taking too much time, and I would like to know a technique to fine-tune it.
I searched for many things, and even tried using /* +Append */ in the update query, but it throws the error message "sql command not ended".
Also, my update query has to check for NVL or null values.
Is there any way to share the code on Stack Overflow? I mean, these are insert/update statements written in PeopleSoft actions; that way people here could have a look at them.
Kindly suggest a technique; my goal is to finish the execution within 5-10 minutes.
My update statement:
I have figured out the cause. It is this update statement:
UPDATE %Table(AZ_GEN_TMP)
SET AZ_HR_MANAGER_ID = NVL((
SELECT e.emplid
FROM PS_EMAIL_ADDRESSES E
WHERE UPPER(SUBSTR(E.EMAIL_ADDR, 0, INSTR(E.EMAIL_ADDR, '#') -1)) = (
SELECT c.contact_oprid
FROM ps_az_can_employee c
WHERE c.emplid = %Table(AZ_GEN_TMP).EMPLID
AND c.rolename='HRBusinessPartner'
AND c.seqnum = (
SELECT MAX(c1.seqnum)
FROM ps_az_can_employee c1
WHERE c1.emplid= c.emplid
AND c1.rolename= c.rolename ) )
AND e.e_addr_type='PINT'), ' ')
In order to fine-tune this, I am inserting the value contact_oprid into my staging table, using a hint:
SELECT /*+ ALL_ROWS */ c.contact_oprid
FROM ps_az_can_employee c
WHERE c.emplid = %Table(AZ_GEN_TMP).EMPLID
AND c.rolename='HRBusinessPartner'
AND c.seqnum = (
SELECT MAX(c1.seqnum)
FROM ps_az_can_employee c1
WHERE c1.emplid= c.emplid
AND c1.rolename = c.rolename )
and doing an update on the staging table:
UPDATE staging_table
SET AZ_HR_MANAGER_ID = NVL((
SELECT e.emplid
FROM PS_EMAILtable E
WHERE UPPER(REGEXP_SUBSTR(e.email_addr,'[^#]+',1,1)) = staging_table.CONTACT_OPRID
AND e.e_addr_type='PINT'),' ') /
This takes 5 hours, as it has to process 2 lakh (200,000) rows of data.
Is there any way the processing can be sped up, for example using hints or indexes?
Also, if I don't include this, the processing to update the other values is very fast and finishes in 10 minutes.
Kindly help me with this.
Thanks.
I have resolved this by using an Oracle MERGE INTO statement, and now the process takes 10 minutes to execute, including the file writing operation. Thanks all for your help and suggestions.
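The actual MERGE is not shown in the post; purely as an illustration of the pattern (table and column names are taken from the snippets above, so treat them as assumptions), it would look something along these lines:
-- sketch only: update the staging table from the email table in one pass
MERGE INTO staging_table s
USING (
    SELECT e.emplid,
           UPPER(REGEXP_SUBSTR(e.email_addr, '[^#]+', 1, 1)) AS contact_oprid
    FROM   ps_emailtable e
    WHERE  e.e_addr_type = 'PINT'
) m
ON (s.contact_oprid = m.contact_oprid)
WHEN MATCHED THEN
    UPDATE SET s.az_hr_manager_id = m.emplid;
-- note: the source must return at most one row per contact_oprid,
-- otherwise Oracle raises ORA-30926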

SQL queries slow when running in sequence, but quick when running separately

I have a table that I will populate with values from an expensive calculation (an xquery over an immutable XML column). To speed up deployment to production I have precalculated the values on a test server and saved them to a file with BCP.
My script is as follows
-- Lots of other work, including modifying OtherTable
CREATE TABLE FOO (...)
GO
BULK INSERT FOO
FROM 'C:\foo.dat';
GO
-- rerun from here after the break
INSERT INTO FOO
(ID, TotalQuantity)
SELECT
e.ID,
SUM(e.TotalQuantity) as TotalQuantity
FROM (select
o.ID,
h.n.value('TotalQuantity[1]/.', 'int') as TotalQuantity
FROM dbo.OtherTable o
CROSS APPLY o.XmlColumn.nodes('(item/.../salesorder/)') h(n)
WHERE o.ID NOT IN (SELECT DISTINCT ID FROM FOO)
) as E
GROUP BY e.ID
When I run the script in Management Studio, the first two statements complete within seconds, but the last statement takes 4 hours to complete. Since no rows have been added to OtherTable since my foo.dat was computed, Management Studio reports (0 row(s) affected).
If I cancel the query execution after a couple of minutes, select just the last query and run that separately, it completes within 5 seconds.
Notable facts:
The OtherTable contains 200k rows and the data in XmlColumn is pretty large, total table size ~3GB
The FOO table gets 1.3M rows
What could possibly make the difference?
Management Studio has implicit transactions turned off. As far as I can understand, each statement will then run in its own transaction.
Update:
If I first select and run the script until -- rerun from here after the break, then select and run just the last query, it is still slow until I cancel execution and try again. This at least rules out any effects of running "together" with the previous code in the script and boils down to the same query being slow on first execution and fast on the second (running with all other conditions the same).
Probably different execution plans. See Slow in the Application, Fast in SSMS? Understanding Performance Mysteries.
Could it possibly be related to the statistics being completely wrong on the newly created Foo table? If SQL Server automatically updates the statistics when it first runs the query, the second run would have its execution plan created from up-to-date statistics.
What if you check the statistics right after the bulk insert (with the STATS_DATE function) and then check them again after having cancelled the long-running query? Did the stats get updated, even though the query was cancelled?
In that case, an UPDATE STATISTICS on Foo right after the bulk insert could help.
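A sketch of those two checks (assuming FOO lives in the dbo schema):
-- when were the statistics on FOO last updated?
SELECT name, STATS_DATE(object_id, stats_id) AS last_updated
FROM sys.stats
WHERE object_id = OBJECT_ID('dbo.FOO');

-- refresh them right after the bulk insert
UPDATE STATISTICS dbo.FOO WITH FULLSCAN;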
Not sure exactly why it helped, but I rewrote the last query to use a left outer join instead and suddenly the execution time dropped to 15 milliseconds.
INSERT INTO FOO
(ID, TotalQuantity)
SELECT
e.ID,
SUM(e.TotalQuantity) as TotalQuantity
FROM (select
o.ID,
h.n.value('TotalQuantity[1]/.', 'int') as TotalQuantity
FROM dbo.OtherTable o
LEFT OUTER JOIN FOO f ON o.ID = f.ID
CROSS APPLY o.XmlColumn.nodes('(item/.../salesorder/)') h(n)
WHERE f.ID IS NULL
) as E
GROUP BY e.ID
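For what it's worth, the same anti-join can also be written with NOT EXISTS; this is only an alternative sketch of the same rewrite, reusing the (abbreviated) XPath from the question:
INSERT INTO FOO (ID, TotalQuantity)
SELECT e.ID, SUM(e.TotalQuantity) AS TotalQuantity
FROM (
    SELECT o.ID,
           h.n.value('TotalQuantity[1]/.', 'int') AS TotalQuantity
    FROM dbo.OtherTable o
    CROSS APPLY o.XmlColumn.nodes('(item/.../salesorder/)') h(n)
    WHERE NOT EXISTS (SELECT 1 FROM FOO f WHERE f.ID = o.ID)
) AS e
GROUP BY e.ID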