I have a table, dbo.PE, in SQL Server 2017. Yesterday it had over 40,000 records. Today when I do this:
Select Top 1000 * From dbo.PE
Nothing happens. No results (not even a "no records" message). It just sits there and spins, saying "Executing query..." until it is cancelled. Any idea what is going on? I tried inserting new data and, once again, nothing happens; it just sits there and spins until cancelled.
I can access other tables in the database, just not this one. No permissions have been changed.
So, to answer this question: gsharp was correct that the table was locked. I ran the following statement to find the blocking session (https://infrastructureland.wordpress.com/2015/01/24/how-to-release-or-remove-lock-on-a-table-sql-server/):
SELECT
    OBJECT_NAME(P.object_id) AS TableName,
    resource_type,
    request_session_id
FROM sys.dm_tran_locks L
JOIN sys.partitions P ON L.resource_associated_entity_id = P.hobt_id
WHERE OBJECT_NAME(P.object_id) = 'PE'   -- OBJECT_NAME returns the table name without the schema
I found out that the session ID was 54, so I executed:
kill 54
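Before killing a session it can be worth checking what it is actually doing, since KILL rolls back whatever open transaction that session has, and the rollback can itself take a while. A quick sketch (54 is just the session id found above):

select s.session_id, s.login_name, s.host_name, s.status,
       r.command, r.wait_type, t.text as running_sql
from sys.dm_exec_sessions s
left join sys.dm_exec_requests r on r.session_id = s.session_id
outer apply sys.dm_exec_sql_text(r.sql_handle) t
where s.session_id = 54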
Related
I have a query that needs to update 2 million records, but there is no space on the disk, so the query is suspended right now. After that, I freed up some space, but the query is still suspended. How can I change the status to runnable, or is there any way to tell SQL Server that it now has enough space and can run the query?
SQL Server will change the query status from suspended to runnable automatically; it is not managed by you.
Your job here is to check why the query is suspended. The DMV below can help:
select session_id, blocking_session_id, wait_resource, wait_time,
       last_wait_type, status
from sys.dm_exec_requests
where session_id = << your session id >>
There are many reasons why a query gets suspended; some of them include locking/blocking, rollback, and waiting on data from disk.
You will have to check the status from the DMV above, see what the reason is, and troubleshoot accordingly.
Below is a small sample which can help you understand what "suspended" means.
create table t1
(
id int
)
insert into t1
select row_number() over (order by (select null))
from
sys.objects c
cross join
sys.objects c1
Now, in one tab of SSMS, run the query below:
begin tran
update t1
set id=id+1
Open another tab and run the query below:
select * from t1
Now open a third tab and run the query below:
select session_id,blocking_session_id,wait_resource,wait_time,
last_wait_type,status from sys.dm_exec_requests
where session_id=<< your session id of select >>
Or run this query instead:
select session_id,blocking_session_id,wait_resource,wait_time,
last_wait_type,status from sys.dm_exec_requests
where blocking_session_id>0
You can see the status as suspended due to blocking. Once you clear the blocking (by committing the transaction), you will see that SQL Server automatically resumes the suspended query.
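To finish the demo, go back to the first tab and end the open transaction; the blocked SELECT then completes on its own:

-- in the first tab
commit   -- or rollback

Re-running the sys.dm_exec_requests query afterwards, the SELECT session is either gone or shows status running instead of suspended.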
I have two large tables with 60 million and 10 million records, respectively. I want to join the two tables, but the process runs for 3 hours and then comes back with this error message:
the transaction log for database is full due to 'active_transaction'
The autogrowth is unlimited and I have set the DB recovery model to simple.
The size of the log drive is 50 GB
I am using SQL Server 2008 R2.
The SQL query I am using is:
Select * into betdaq.[dbo].temp3 from
(Select XXXXX, XXXXX, XXXXX, XXXXX, XXXXX
from XXX.[dbo].temp1 inner join XXX.[dbo].temp2
on temp1.Date = temp2.[Date] and temp1.cloth = temp2.Cloth and temp1.Time = temp2.Time) a
A single statement is a single transaction, and that transaction does not commit until the end.
So you are filling up the transaction log.
You are going to need to loop and insert something like 100,000 rows at a time.
Start with this just to test the first 100,000.
Then you will need to wrap it in a loop (a cursor or a simple WHILE loop will do; a rough sketch follows after the query below).
create table betdaq.[dbo].temp3 ...
insert into betdaq.[dbo].temp3 (a,b,c,d,e)
Select top 100000 with ties XXXXX, XXXXX, XXXXX, XXXXX, XXXXX
from XXX.[dbo].temp1
join XXX.[dbo].temp2
on temp1.Date = temp2.[Date]
and temp1.Time = temp2.Time
and temp1.cloth = temp2.Cloth
order by temp1.Date, temp1.Time
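Here is that sketch. It uses a plain WHILE loop rather than a cursor and assumes temp1 has (or can be given) an integer key column, here called RowId, to page through; the XXXXX placeholders and the RowId name are hypothetical:

declare @BatchSize int = 100000
declare @LastId int = 0
declare @MaxId int = (select max(RowId) from XXX.[dbo].temp1)

while @LastId < @MaxId
begin
    insert into betdaq.[dbo].temp3 (a,b,c,d,e)
    Select XXXXX, XXXXX, XXXXX, XXXXX, XXXXX
    from XXX.[dbo].temp1
    join XXX.[dbo].temp2
    on temp1.Date = temp2.[Date]
    and temp1.Time = temp2.Time
    and temp1.cloth = temp2.Cloth
    where temp1.RowId > @LastId
    and temp1.RowId <= @LastId + @BatchSize

    set @LastId = @LastId + @BatchSize
    checkpoint   -- in simple recovery this lets the log space be reused between batches
end

Each batch commits on its own, so the log only ever has to hold roughly 100,000 rows' worth of changes at a time.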
And why? That is a LOT of data. Could you use a View or a CTE?
If those join columns are indexed, a view will be very efficient.
The transaction log can fill up even though the database is in the simple recovery model, and even though SELECT INTO is a minimally logged operation; the log can also fill up because of other transactions running in parallel.
I would use the queries below to check transaction log space usage by transactions while the query is running:
select * from sys.dm_db_log_space_usage
select * from sys.dm_tran_database_transactions
select * from sys.dm_tran_active_transactions
select * from sys.dm_tran_current_transaction
Further, the script below can be used to check the SQL text as well:
https://gallery.technet.microsoft.com/scriptcenter/Transaction-Log-Usage-By-e62ba57d
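A small sketch that ties log usage to sessions directly (same DMVs as above; run it in the target database while the big query is executing):

select st.session_id,
       dt.database_transaction_begin_time,
       dt.database_transaction_log_bytes_used,
       dt.database_transaction_log_bytes_reserved
from sys.dm_tran_database_transactions dt
join sys.dm_tran_session_transactions st
on st.transaction_id = dt.transaction_id
where dt.database_id = db_id()
order by dt.database_transaction_log_bytes_used desc

This shows which session's open transaction is consuming the most log space.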
I'm struggling to understand how the following two queries could be blocking each other.
Running query (could be almost anything though):
insert bulk [Import].[WorkTable] ...
I'm trying to run the following SELECT query at the same time:
SELECT *
FROM ( SELECT * FROM #indexPart ip
JOIN sys.indexes i (NOLOCK)
ON i.object_id = ip.ObjectId
and i.name = ip.IndexName) i
CROSS
APPLY sys.dm_db_index_physical_Stats(db_id(), i.object_id,i.index_id,NULL,'LIMITED') ps
WHERE i.is_disabled = 0
The second query is blocked by the first one and shows LCK_M_IS as the wait type. An important piece of information is that the temporary table #indexPart contains a single record, for an index on a completely different table. My expectation is that the CROSS APPLY only gathers stats for that one index, which has nothing to do with the other query that is running.
Thanks
EDIT (NEW):
After several more tests I think I found the culprit, but again I can't explain it.
Bulk Insert Session has an X lock on table [Import].[WorkTable]
The query above is checking for an Index on table [Import].[AnyOtherTable] BUT is requesting an IS lock on [Import].[WorkTable]. I've verified again and again that the query above (when running the stuff without cross apply) is only returning an index on table [Import].[AnyOtherTable].
Now here comes the magic, changing the CROSS APPLY to an OUTER APPLY runs through just fine without any locking issues.
I hope someone can explain this to me ...
The problem could be the WHERE clause you used; it should be inside the derived table. The following change could make a difference:
FROM ( SELECT * FROM #indexPart ip
JOIN sys.indexes i (NOLOCK)
ON i.object_id = ip.ObjectId
and i.name = ip.IndexName
WHERE i.is_disabled = 0) i
Written this way, it may reduce the number of rows passed on to the CROSS APPLY.
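Applied to the whole statement, the rewrite would look roughly like this; the only change is where the filter sits:

SELECT *
FROM ( SELECT * FROM #indexPart ip
       JOIN sys.indexes i (NOLOCK)
       ON i.object_id = ip.ObjectId
       and i.name = ip.IndexName
       WHERE i.is_disabled = 0) i
CROSS
APPLY sys.dm_db_index_physical_Stats(db_id(), i.object_id, i.index_id, NULL, 'LIMITED') ps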
I got a report from another team about blocking in SQL Server. I looked at the results of
Exec sp_who2
and of this query from a blog post by Glenn Berry:
SELECT blocking.session_id AS blocking_session_id
,blocked.session_id AS blocked_session_id
,waitstats.wait_type AS blocking_resource
,waitstats.wait_duration_ms
,waitstats.resource_description
,blocked_cache.text AS blocked_text
,blocking_cache.text AS blocking_text
FROM sys.dm_exec_connections AS blocking
INNER JOIN sys.dm_exec_requests blocked
ON blocking.session_id = blocked.blocking_session_id
CROSS APPLY sys.dm_exec_sql_text(blocked.sql_handle) blocked_cache
CROSS APPLY sys.dm_exec_sql_text(blocking.most_recent_sql_handle) blocking_cache
INNER JOIN sys.dm_os_waiting_tasks waitstats
ON waitstats.session_id = blocked.session_id
I was not able to catch anything being blocked. Running this multiple times, I started to notice that something would show up, but the next time I ran the query the blocking was gone.
I created a temp table with SELECT INTO:
,blocking_cache.text AS blocking_text
INTO #TempBlockingTable
FROM sys.dm_exec_connections AS blocking
After that I modified the query to be an INSERT INTO ... SELECT. Now I was able to run the query as many times as I wanted without fearing that the results would be gone.
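The modified collection query was essentially the same SELECT with an INSERT in front of it, something like this (it can be rerun repeatedly, or wrapped in a loop with a short WAITFOR DELAY between iterations):

INSERT INTO #TempBlockingTable
SELECT blocking.session_id AS blocking_session_id
,blocked.session_id AS blocked_session_id
,waitstats.wait_type AS blocking_resource
,waitstats.wait_duration_ms
,waitstats.resource_description
,blocked_cache.text AS blocked_text
,blocking_cache.text AS blocking_text
FROM sys.dm_exec_connections AS blocking
INNER JOIN sys.dm_exec_requests blocked
ON blocking.session_id = blocked.blocking_session_id
CROSS APPLY sys.dm_exec_sql_text(blocked.sql_handle) blocked_cache
CROSS APPLY sys.dm_exec_sql_text(blocking.most_recent_sql_handle) blocking_cache
INNER JOIN sys.dm_os_waiting_tasks waitstats
ON waitstats.session_id = blocked.session_id

WAITFOR DELAY '00:00:01'   -- optional pause before running the INSERT again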
I kept running the query for about 10 seconds until I finally got some results.
SELECT * FROM #TempBlockingTable
Looking at the resource_description column from sys.dm_os_waiting_tasks, I found that the data is displayed in the following format:
<type-specific-description> id=lock<lock-hex-address> mode=<mode> associatedObjectId=<associated-obj-id>
The Microsoft documentation for sys.dm_os_waiting_tasks (http://technet.microsoft.com/en-us/library/ms188743.aspx) does not have a definition for associatedObjectId.
The answer actually comes from Aaron Bertrand; I found it on Google Groups. To get the OBJECT_NAME for the associatedObjectId you need to run the following query:
SELECT OBJECT_NAME([object_id])
FROM sys.partitions
WHERE partition_id = 456489945132196
The number 456489945132196 is the value of associatedObjectId from the resource_description column of sys.dm_os_waiting_tasks.
I have a table that I will populate with values from an expensive calculation (with xquery from an immutable XML column). To speed up deployment to production I have precalculated values on a test server and saved to a file with BCP.
My script is as follows
-- Lots of other work, including modifying OtherTable
CREATE TABLE FOO (...)
GO
BULK INSERT FOO
FROM 'C:\foo.dat';
GO
-- rerun from here after the break
INSERT INTO FOO
(ID, TotalQuantity)
SELECT
e.ID,
SUM(e.Quantity) as TotalQuantity
FROM (select
o.ID,
h.n.value('TotalQuantity[1]/.', 'int') as TotalQuantity
FROM dbo.OtherTable o
CROSS APPLY XmlColumn.nodes('(item/.../salesorder/)') h(n)
WHERE o.ID NOT IN (SELECT DISTINCT ID FROM FOO)
) as E
GROUP BY e.ID
When I run the script in Management Studio, the first two statements complete within seconds, but the last statement takes 4 hours to complete. Since no rows have been added to OtherTable since foo.dat was computed, Management Studio reports (0 row(s) affected).
If I cancel the query execution after a couple of minutes, then select just the last query and run it separately, it completes within 5 seconds.
Notable facts:
OtherTable contains 200k rows, and the data in XmlColumn is pretty large; total table size is ~3 GB
The FOO table gets 1.3M rows
What could possibly make the difference?
Management Studio has implicit transactions turned off. As far as I understand, each statement will then run in its own transaction.
Update:
If I first select and run the script until -- rerun from here after the break, then select and run just the last query, it is still slow until I cancel execution and try again. This at least rules out any effects of running "together" with the previous code in the script and boils down to the same query being slow on first execution and fast on the second (running with all other conditions the same).
Probably different execution plans. See Slow in the Application, Fast in SSMS? Understanding Performance Mysteries.
Could it possibly be related to the statistics being completely wrong on the newly created Foo table? If SQL Server automatically updates the statistics when it first runs the query, the second run would have its execution plan created from up-to-date statistics.
What if you check the statistics right after the bulk insert (with the STATS_DATE function) and then check them again after having cancelled the long-running query? Did the stats get updated, even though the query was cancelled?
In that case, an UPDATE STATISTICS on Foo right after the bulk insert could help.
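A sketch of that check and the follow-up update (FOO is the table from the script above):

-- When were the statistics on FOO last updated? NULL means never.
SELECT name AS stats_name, STATS_DATE(object_id, stats_id) AS last_updated
FROM sys.stats
WHERE object_id = OBJECT_ID('FOO')

-- Refresh them right after the bulk insert so the INSERT ... SELECT gets an up-to-date plan
UPDATE STATISTICS FOO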
I'm not sure exactly why it helped, but I rewrote the last query to use a left outer join instead, and suddenly the execution time dropped to 15 milliseconds.
INSERT INTO FOO
(ID, TotalQuantity)
SELECT
e.ID,
SUM(e.Quantity) as TotalQuantity
FROM (select
o.ID,
h.n.value('TotalQuantity[1]/.', 'int') as TotalQuantity
FROM dbo.OtherTable o
LEFT OUTER JOIN FOO f ON o.ID = f.ID
CROSS APPLY o.XmlColumn.nodes('(item/.../salesorder/)') h(n)
WHERE f.ID IS NULL
) as E
GROUP BY e.ID