SQL 2008 table locked - can't work out why

I have two databases on one SQL 2008 server. Database 1 seems to be causing a lock on a table in database 2. No queries are running on database 1 that should affect database 2.
Is this normal behaviour?
When I view the running queries with this command
SELECT sqltext.TEXT,
       req.session_id,
       req.status,
       req.command,
       req.cpu_time,
       req.total_elapsed_time / 1000 AS [seconds]
FROM sys.dm_exec_requests req
CROSS APPLY sys.dm_exec_sql_text(req.sql_handle) AS sqltext
it tells me as much, and says that the command on database 2 is suspended.
I'm at a bit of a loss. What sort of things should I look at to work out why the table in database 2 is locked?

Running queries are irrelevant - the lock can be from a query that DID run while the connection / transaction is still valid (i.e. an open transaction, not committed / rolled back), in which case the lock stays in place.
You basically have to identify:
The connection that locks the table.
The command chain run within that connection.
Locks originate from operations the db performs, so unless you hit a low-level critical error (VERY unlikely), something has caused the lock to be taken.
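For example, a minimal sketch along these lines (Database2 is a placeholder for your actual database name) lists who holds locks in database 2 and which sessions still have an open transaction:
SELECT l.request_session_id,
       s.login_name,
       s.host_name,
       l.resource_type,
       l.request_mode,
       l.request_status
FROM sys.dm_tran_locks AS l
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = l.request_session_id
WHERE l.resource_database_id = DB_ID('Database2');  -- placeholder database name

-- Sessions that still have an open (uncommitted) transaction
SELECT st.session_id,
       at.transaction_id,
       at.transaction_begin_time
FROM sys.dm_tran_session_transactions AS st
JOIN sys.dm_tran_active_transactions AS at
    ON at.transaction_id = st.transaction_id;
Once you have the offending session id, DBCC INPUTBUFFER(<spid>) shows the last statement that connection ran.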

Related

SQL Server BULK INSERT errored out and did not release Sch-M lock, table is unaccessible

TLDR: I think SQL Server did not release a Sch-M lock and I can't release it or find the transaction holding the lock
I was trying to build a copy of another table and run some benchmarking on the data updates from disk, and when running a BULK INSERT on the empty table it errored out because I had used the wrong file name. I was running the script by selecting a large portion of text from a notepad in SQL Server Management Studio and hitting the Execute button. Now, no queries regarding the table whatsoever can execute, including things like OBJECT_ID or schema information in SQL Server Management Studio when manually refreshed. The queries just hang, and in the latter case SSMS gives an error about a lock not being relinquished within a timeout.
I have taken a few debugging steps so far.
I took a look at the sys.dm_tran_locks table filtered on the DB's resource ID. There, I can see a number of shared locks on the DB itself and some exclusive key locks, and exactly 1 lock on an object, a Sch-M lock. When I try to get the object name from the resource ID in the sys.dm_tran_locks table, the query hangs just like OBJECT_ID() does (but does not for other table names/IDs). The lock cites session ID 54.
I have also taken a look at the sys.dm_exec_requests table to try and find more information about SPID 54, but there are no rows with that session id. In fact, the only processes there are ones owned by sa and the single query checking the sys.dm_exec_requests table, owned by myself.
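For reference, the two checks described above look roughly like this (a sketch; MyDb stands in for the actual database name):
-- All locks held in the database; the Sch-M lock shows up here citing session 54
SELECT resource_type,
       resource_associated_entity_id,
       request_mode,
       request_status,
       request_session_id
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('MyDb');  -- placeholder database name

-- Requests for session 54; in my case this returns no rows
SELECT session_id, status, command, blocking_session_id
FROM sys.dm_exec_requests
WHERE session_id = 54;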
From this, if I understand everything correctly, it seems like the BULK INSERT statement somehow failed to release the Sch-M lock that it takes.
So here are my questions: Why is there still a Sch-M lock on the table if the process that owns it seems to no longer exist? Is there some way to recover access to the table without restarting the SQL Server process? Would SQL code in the script after the BULK INSERT have run, but just on an empty table?
I am using SQL Server 2016 and SQL Server Management Studio 2016.

Why doesn't my SQL statement work? How do transactions work?

The question:
I am trying out a few things with regard to transactions. Consider the following operations:
Open a connection window 1 in SQL Server Management Studio
BEGIN a transaction, then delete about 2000 rows of data from table A
Open another connection window 2 in SQL Server Management Studio
Insert one new row of data into table A in window 2 (no transaction); it runs successfully straight away
Then I repeat the same operations, but in step 2 I delete 10k rows of data. In that case, step 4 can't run successfully; I waited half an hour.
It shows that it is still executing SQL and never finishes. Finally, I insert the data from connection window 1 instead, and it works immediately.
Why does it work with 2k rows but not 10k rows?
The SQL I execute:
In connection window A, I execute
BEGIN TRAN
DELETE FROM tableA  -- deletes 10K rows
In connection window B, I execute
INSERT INTO tableA (..) VALUES (...)
Window B can't execute successfully.
Many thanks @Gordon
The cause:
I searched for the keyword "lock escalation".
I tried tracing lock escalation using SQL Server Profiler, and I see a lock:escalation event when I delete a lot of data in a transaction (without committing or rolling back).
So now I understand the concept of lock escalation: I deleted too much data in one table, so the row locks escalated to a table lock. Because I didn't commit or roll back, other connections (or applications) can't access the table while that table lock is held.
trace lock escalation in sql profiler
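To see the escalation for yourself, a rough sketch like this (my own query, not from the trace) counts the locks each session holds; after escalation you should see a single OBJECT lock in X mode instead of thousands of KEY locks:
SELECT request_session_id,
       resource_type,
       request_mode,
       COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
GROUP BY request_session_id, resource_type, request_mode
ORDER BY request_session_id, resource_type;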
When lock escalation happens in MSSQL Server:
"The locks option also affects when lock escalation occurs. When locks is set to 0, lock escalation occurs when the memory used by the current lock structures reaches 40 percent of the Database Engine memory pool. When locks is not set to 0, lock escalation occurs when the number of locks reaches 40 percent of the value specified for locks."
How to configure the locks option in SQL Server Management Studio (this affects lock escalation):
To configure the locks option:
In Object Explorer, right-click a server and select Properties.
Click the Advanced node.
Under Parallelism, type the desired value for the locks option.
Use the locks option to set the maximum number of available locks, thus limiting the amount of memory SQL Server uses for them.
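The same option can also be set from T-SQL via sp_configure; a minimal sketch (locks is an advanced option, and 0, the default, means dynamic lock management):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'locks', 0;  -- 0 = let SQL Server manage lock memory dynamically
RECONFIGURE;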
What you are experiencing is "lock escalation". By default, SQL Server uses (I think) row-level locks for the delete. However, if a single statement acquires more than a certain threshold (about 5,000 locks), SQL Server escalates the locking to the entire table.
This is an automatic mechanism, which you can turn off if you need to.
There is a lot of information about this, both in SQL Server documents and related documents.
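For completeness, if you do need to turn escalation off for a single table, one way (sketched here against the tableA name from the question) is the LOCK_ESCALATION table option introduced in SQL Server 2008:
-- tableA is the placeholder table name from the question;
-- other valid values are TABLE (the default) and AUTO
ALTER TABLE dbo.tableA SET (LOCK_ESCALATION = DISABLE);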

Is it ok to KILL this DELETE query?

I ran a query to delete around 4 million rows from my database. It ran for about 12 hours before my laptop lost the network connection. At that point, I decided to take a look at the status of the query in the database. I found that it was in the suspended state. Specifically:
Start Time     : 2018/08/15 11:28:39.490
SPID           : 115
Database       : RingClone
Executing SQL  : *see below
Status         : suspended
command        : DELETE
wait_type      : PAGEIOLATCH_EX
wait_time      : 41
wait_resource  : 5:1:1116111
last_wait_type : PAGEIOLATCH_EX
*Here is the sql query in question:
DELETE FROM T_INDEXRAWDATA WHERE INDEXRAWDATAID IN (SELECT INDEXRAWDATAID FROM T_INDEX WHERE OWNERID='1486836020')
After reading this:
https://dba.stackexchange.com/questions/87066/sql-query-in-suspended-state-causing-high-cpu-usage
I realize I probably should have broken this up into smaller pieces to delete them (or even delete them one-by-one). But now I just want to know if it is "safe" for me to KILL this query, as the answer in that post suggests. One thing the selected answer states is that "you may run into data consistency problems" if you KILL a query while it's executing. If it causes some issues with the data I am trying to delete, I'm not that concerned. However, I'm more concerned about this causing some issues with other data, or with the table structure itself.
Is it safe to KILL this query?
If you ran the delete from your laptop over the network and it lost connection with the server, you can either kill the spid or wait until it disappears by itself. Depending on the @@version of your SQL Server instance, in particular how well it's patched, the latter might require an instance restart.
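For what it's worth, the kill and the progress check look like this (115 being the SPID from the output above):
KILL 115;                   -- terminates the session; SQL Server then rolls the work back
KILL 115 WITH STATUSONLY;   -- re-run to see the estimated rollback completion percentage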
Regarding the consistency issues, you seem to misunderstand them. They are possible only if you had multiple statements run in a single batch without being wrapped in a transaction. As I understand it, you had a single statement; if that's the case, don't worry about consistency. SQL Server wouldn't have become what it is now if it were so easy to corrupt its data.
I would have rewritten the query, however; if the T_INDEX.INDEXRAWDATAID column has NULLs, you can run into issues. It's better to rewrite it via a join, also adding batch splitting:
WHILE 1 = 1
BEGIN
    -- delete in batches of 10,000 rows so each transaction stays small
    DELETE TOP (10000) t
    FROM T_INDEXRAWDATA t
    INNER JOIN T_INDEX i ON t.INDEXRAWDATAID = i.INDEXRAWDATAID
    WHERE i.OWNERID = '1486836020';

    -- stop once no more matching rows remain
    IF @@ROWCOUNT = 0
        BREAK;

    CHECKPOINT;  -- lets the log space be reused between batches (SIMPLE recovery)
END;
It definitely will not be any slower, but it can boost performance, depending on your schema, data and the state of any indices the tables have.

db2 V9.1 deadlocks

I have 4 different services in my application which SELECT and UPDATE the same table in my database (db2 v9.1) on AIX 6.1; it's not a big table, around 300,000 records. The 4 services run in parallel, but each service executes its statements sequentially (not in parallel).
The issue is that every day I face a horrible deadlock problem: the db hangs for about 5 to 10 minutes, then gets back to its normal performance.
My services are synchronized in a way that makes them never SELECT or UPDATE the same row, so I believe even if a deadlock occurred it should be at row level, not table level, RIGHT?
Also, in my SELECT queries I use "FOR FETCH ONLY WITH UR"; in db2 v9.1 that means not to lock the row, as it's only for read purposes and there will be no update (UR = uncommitted read).
Any Ideas about whats happening and why?
Firstly, these are certainly not deadlocks: a deadlock would be resolved by DB2 within a few seconds by rolling back one of the conflicting transactions. What you are experiencing are most likely lock waits.
As to what's happening, you will need to monitor locks as they occur. You can use the db2pd utility, e.g.
db2pd -d mydb -locks showlocks
or
db2pd -d mydb -locks wait
You can also use the snapshot monitor:
db2 update monitor switches using statement on lock on
db2 get snapshot for locks on mydb
or the snapshot views:
select * from sysibmadm.locks_held
select * from sysibmadm.lockwaits

Master database DB STARTUP problem

I have a SQL Server 2008 database and I have a problem with this database that I don't understand.
The steps that caused the problems are:
I ran a SQL query to update a table called authors from another table called authorAff
The authors table has 123,385,300 records and the authorsAff table has 139,036,077
The query ran for about 7 days but didn't finish
I decided to cancel the query to do it another way.
The connection on which I was running the query disconnected suddenly, so the database went into recovery until the cancelled query finished rolling back
The server was shut down many times afterwards because of some electricity problems
The database took about two days and then recovered.
Now when I run this query
SELECT TOP 1000 *
FROM AUTHORS WITH(READUNCOMMITTED)
It executes and returns the results, but when I remove the WITH(READUNCOMMITTED) hint it gets blocked by a process running on the master database that appears in Activity Monitor only with the command [DB STARTUP], and no results show up.
So what is the DB STARTUP command, and if it's a problem, how can I solve it?
Thank you in advance.
I suspect that your user database is still trying to roll back the transaction that you cancelled. A general rule of thumb is that it takes about the same amount of time, or more, for an aborted transaction to roll back as it took to run.
The rollback can't be avoided, even with the SQL Server stops and starts you had.
The reason you can run a query WITH(READUNCOMMITTED) is that it ignores the locks associated with the transaction that is rolling back. Your query results are considered unreliable, but ironically, the results are probably what you want to see, since the blocking process is a rollback.
The best solution is to wait it out, if you can afford to do so. You may be able to find ways to kill the blocking process, but then you should be concerned with database integrity.
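If you decide to wait, a query along these lines (a sketch using standard DMV columns) shows how far along the recovery task is:
SELECT session_id,
       command,
       status,
       percent_complete,
       estimated_completion_time / 60000 AS est_minutes_remaining
FROM sys.dm_exec_requests
WHERE command = 'DB STARTUP';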