Question:
I am trying out a few things with regard to transactions. Consider the following operations:
1. Open a connection window 1 in SQL Server Management Studio.
2. BEGIN a transaction, then delete about 2,000 rows of data from table A.
3. Open another connection window 2 in SQL Server Management Studio.
4. Insert one new row of data into table A in window 2 (no transaction); it runs successfully.
Then I repeat the same operations, but in step 2 I delete 10k rows of data. In that case, step 4 can't run successfully, even though I waited for half an hour.
It just shows that it is executing SQL and never finishes. Finally, I insert the data using connection window 1 instead, and it works.
Why does it work with 2k rows but not 10k rows?
The SQL statements I execute:
In connection window A, I run:
BEGIN TRAN
DELETE FROM tableA   -- (10k rows)
In connection window B, I run:
INSERT INTO tableA (...) VALUES (...)
Window B never finishes executing.
Many thanks, @Gordon
The cause:
I searched for the keyword "lock escalation".
I tried to track lock escalation using SQL Server Profiler, and I do get a
Lock:Escalation event when I delete a lot of data in a transaction (without committing or rolling back).
So now I understand the concept of lock escalation:
I deleted too much data from the table, so the row locks escalated to a table lock. Because I didn't commit or roll back, other connections (or applications)
couldn't access the table while the table lock was held.
(Screenshot: tracing lock escalation in SQL Server Profiler)
When lock escalation happens in SQL Server:
"The locks option also affects when lock escalation occurs. When locks is set to 0, lock escalation occurs when the memory used by the current lock structures reaches 40 percent of the Database Engine memory pool. When locks is not set to 0, lock escalation occurs when the number of locks reaches 40 percent of the value specified for locks."
How to configure this in SQL Server Management Studio (it affects lock escalation):
To configure the locks option:
1. In Object Explorer, right-click a server and select Properties.
2. Click the Advanced node.
3. Under Parallelism, type the desired value for the locks option.
Use the locks option to set the maximum number of available locks, thus limiting the amount of memory SQL Server uses for them.
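The same option can be set in T-SQL. A minimal sketch (locks is an advanced option; 0, the default, means SQL Server manages lock memory dynamically):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'locks', 10000;  -- 0 = dynamic lock management (the default)
RECONFIGURE;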
What you are experiencing is "lock escalation". By default, SQL Server uses (I think) row-level locks for the delete. However, if a single statement acquires more than a certain threshold of locks -- about 5,000 -- then SQL Server escalates the locking to the entire table.
This is an automatic mechanism, which you can turn off if you need to.
There is a lot of information about this, both in the SQL Server documentation and in related articles.
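As a hedged sketch of two common workarounds (tableA is the table from the question; the LOCK_ESCALATION option assumes SQL Server 2008 or later), you can either disable escalation on the table or delete in batches that stay under the threshold:
-- Option 1: disable lock escalation for this table
ALTER TABLE tableA SET (LOCK_ESCALATION = DISABLE);
-- Option 2: delete in batches; each DELETE runs as its own autocommit
-- transaction, so locks are released between batches
WHILE 1 = 1
BEGIN
    DELETE TOP (4000) FROM tableA;
    IF @@ROWCOUNT = 0 BREAK;
END;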
Related
TLDR: I think SQL Server did not release a Sch-M lock and I can't release it or find the transaction holding the lock
I was trying to build a copy of another table and run some benchmarking on the data updates from disk, and when running a BULK INSERT on the empty table it errored out because I had used the wrong file name. I was running the script by selecting a large portion of text from a notepad in SQL Server Management Studio and hitting the Execute button. Now, no queries regarding the table whatsoever can execute, including things like OBJECT_ID or schema information in SQL Server Management Studio when manually refreshed. The queries just hang, and in the latter case SSMS gives an error about a lock not being relinquished within a timeout.
I have taken a few debugging steps so far.
I took a look at the sys.dm_tran_locks table filtered on the DB's resource ID. There, I can see a number of shared locks on the DB itself and some exclusive key locks, and exactly 1 lock on an object, a Sch-M lock. When I try and get the object name from the resource ID on the sys.dm_tran_locks table, the query hangs just like OBJECT_ID() does (and does not for other table names/IDs). The lock cites session ID 54.
I have also taken a look at the sys.dm_exec_requests table to try and find more information about SPID 54, but there are no rows with that session id. In fact, the only processes there are ones owned by sa and the single query checking the sys.dm_exec_requests table, owned by myself.
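For reference, a sketch of the kind of diagnostic queries described above (session 54 is the session ID cited by the lock):
-- Locks in the current database, including the orphaned Sch-M lock
SELECT resource_type, resource_associated_entity_id,
       request_mode, request_status, request_session_id
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID();
-- Does session 54 still exist at all?
SELECT session_id, status, open_transaction_count
FROM sys.dm_exec_sessions
WHERE session_id = 54;
-- If it does show up with an open transaction, KILL 54 would roll it back.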
From this, if I understand everything correctly, it seems like the BULK INSERT statement somehow failed to release the Sch-M lock that it takes.
So here are my questions: Why is there still a Sch-M lock on the table if the process that owns it seems to no longer exist? Is there some way to recover access to the table without restarting the SQL Server process? Would SQL code in the script after the BULK INSERT have run, but just on an empty table?
I am using SQL Server 2016 and SQL Server Management Studio 2016.
The SQL Server here is SQL Azure, which for normal purposes behaves basically like SQL Server 2008.
I have a table called TASK that constantly has new data coming in (new tasks) and data being removed (completed tasks).
For new data coming in, I use INSERT INTO .. SELECT ..., which most of the time takes very long, say dozens of minutes.
For old data going out, I first use SELECT (WITH NOLOCK) to get a task, then UPDATE to let other threads know this task has started processing, then DELETE once it is finished.
Deadlocks sometimes happen on the SELECT, but most of the time on the UPDATE and DELETE.
This is not a time-critical task, so I can start processing the new data once all the INSERTs have finished. Is there any kind of LOCK that tells the SELECT not to pick up rows before the INSERT has finished? Or any other suggestion to avoid the conflict? I can redesign the table if needed.
Since SQL Server 2005, resolving lock problems is easier.
For conflicts:
1. You can use Service Broker.
2. Use the isolation level.
Run DBCC USEROPTIONS; in the last row you can see that the default isolation level is read committed, at the session level.
We can change the level to READ_COMMITTED_SNAPSHOT to avoid conflicts. SQL Server does not have true row-level versioning like Oracle, but we can use this method to get similar behavior:
ALTER DATABASE DBName
SET READ_COMMITTED_SNAPSHOT ON;
Enabling this feature requires the database to be in single-user mode (no other active connections).
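A minimal sketch of the whole sequence, assuming DBName is your database and you can briefly force other connections off:
ALTER DATABASE DBName SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE DBName SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE DBName SET MULTI_USER;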
Then you can test it:
For two sessions A and B:
A: UPDATE table1 WITH (XLOCK) SET name = 'new' WHERE id = 1
B: you can still update other rows and select all the data from the table.
My English is not very good, but I do know locking.
In SQL Server, functionally there are three kinds of locks:
1. Optimistic locking, controlled with a timestamp (rowversion) column.
2. Pessimistic locking, forcing locks while using the data, with UPDLOCK, XLOCK and so on.
3. "Virtual" (application) locks, using the procedure sp_getapplock (see the sketch below).
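A hedged sketch of the third kind; 'task-queue' is a made-up resource name, and a transaction-owned applock is released automatically at commit or rollback:
BEGIN TRAN;
DECLARE @rc int;
EXEC @rc = sp_getapplock @Resource = 'task-queue',
                         @LockMode = 'Exclusive',
                         @LockTimeout = 5000;
IF @rc >= 0
BEGIN
    -- ... do the work that must be serialized here ...
    PRINT 'lock acquired';
END
COMMIT;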
If you need a locking scheme for your system architecture, please email me: mjjjj2001#163.com
Consider using service broker if this is a processing queue.
There are a number of considerations that affect performance and locking. I surmise that the data is being updated and deleted in a separate session. Which transaction isolation level is in use for the insert session and the delete session?
Have the insert session and all its transactions committed and closed by the time the delete session runs? Are there multiple delete sessions running concurrently? It is very important to have an index on the columns you use to identify a task in the SELECT/UPDATE/DELETE statements, especially if you move to a higher isolation level such as REPEATABLE READ or SERIALIZABLE.
All of these issues could be solved by moving to Service Broker if it is appropriate.
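If Service Broker is not appropriate, one common alternative is to make the dequeue atomic, so workers never race between the SELECT and the UPDATE. A hedged sketch, with hypothetical column names TaskId and Status; READPAST makes concurrent workers skip rows another worker has already locked:
UPDATE TOP (1) TASK WITH (ROWLOCK, READPAST)
SET Status = 'InProgress'
OUTPUT inserted.TaskId
WHERE Status = 'New';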
When running a stored procedure (from a .NET application) that does an INSERT and an UPDATE, I sometimes (but not that often, really) and seemingly at random get this error:
ERROR [40001] [DataDirect][ODBC Sybase Wire Protocol driver][SQL Server]Your server command (family id #0, process id #46) encountered a deadlock situation. Please re-run your command.
How can I fix this?
Thanks.
Your best bet for solving your deadlocking issue is to set "print deadlock information" to on using
sp_configure "print deadlock information", 1
Every time there is a deadlock, this will print information about which processes were involved and what SQL they were running at the time of the deadlock.
If your tables are using allpages locking, switching to datarows or datapages locking can reduce deadlocks. If you do this, make sure to gather new statistics on the tables and to recreate the indexes, views, stored procedures and triggers that access the changed tables. If you don't, you will either get errors or not see the full benefit of the change, depending on which ones are not recreated.
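A minimal sketch of that change in Sybase ASE syntax (mytable is a placeholder):
alter table mytable lock datarows
go
update statistics mytable
go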
I have a set of long-running apps which occasionally overlap on table access, and Sybase will throw this error. If you check the Sybase server log, it gives you the complete information on why it happened: the SQL that was involved and the two processes trying to get a lock. Usually one is trying to read and the other is doing something like a delete. In my case the apps run in separate JVMs, so they can't synchronize; I just have to clean up periodically.
Assuming that your tables are properly indexed (and that you are actually using those indexes -- always worth checking via the query plan), you could try breaking the component parts of the SP down and wrapping them in separate transactions, so that each unit of work is completed before the next one starts:
begin transaction
update mytable1
set mycolumn = "test"
where ID=1
commit transaction
go
begin transaction
insert into mytable2 (mycolumn) select mycolumn from mytable1 where ID = 1
commit transaction
go
I have a SQL statement from my application. I'd like to know which locks that statement acquires; how can I do that with SQL server?
The statement has been involved in a deadlock, which I'm trying to analyze; I cannot reproduce the deadlock.
I'm running on MS SQL Server 2005.
You can run the statement in a transaction, but not commit the transaction. Since the locks will be held until the transaction is committed, this gives you time to inspect the locks. (Not indefinitely, but like 5 minutes by default.)
Like:
BEGIN TRANSACTION
select * from table
Then open Management Studio and check the locks. They're in Management -> Activity Monitor -> Locks by Object or Locks by Process. After you're done, run:
COMMIT TRANSACTION
to free the locks.
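On SQL Server 2005 you can also query the lock DMV directly instead of using Activity Monitor. A minimal sketch for the current session:
SELECT resource_type, resource_associated_entity_id,
       request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;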
I would suggest that you turn on the Deadlock Detection Trace Flags in the first instance, rather than running a Profiler Trace indefinitely.
This way, the event details will be logged to the SQL Server Error Log.
Review the following Books Online reference for details of the various trace flags. You need to use 1204 and/or 1222
http://msdn.microsoft.com/en-us/library/ms188396(SQL.90).aspx
Be sure to enable the trace flags with server scope and not just the current session. For example use:
DBCC TRACEON(1222,-1)
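To confirm the flags are active for all sessions, you can check their status with:
DBCC TRACESTATUS(1204, 1222, -1);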
Here's a query that will show you all active locks, who's got them, and what object they are on. I pulled it from a TechNet article years and years ago. It works on SQL 2000 and 2005 (change sysobjects to sys.objects for 2005). Uncomment the WHERE clause if you want to restrict it to just this database, and just the "EXCLUSIVE" locks.
select 'Locks' as Locks,
spid, nt_username, name, hostname, loginame, waittime, open_tran,
convert(varchar, getdate() - last_batch, 114) as TimeSinceLastCommand,
case req_mode
when 0 then 'Not granted'
when 1 then 'Schema stability'
when 2 then 'Schema modification'
when 3 then 'Intent shared'
when 4 then 'Shared intent update'
when 5 then 'Intent shared shared'
when 6 then 'Intent exclusive'
when 7 then 'Shared Intent Exclusive'
when 8 then 'Shared'
when 9 then 'Update'
when 10 then 'Intent insert NULL'
when 11 then 'Intent shared exclusive'
when 12 then 'Intent update'
when 13 then 'Intent shared-update'
when 14 then 'Exclusive'
when 15 then 'Bulk operation'
else str(req_mode) end as LockMode
from master..syslockinfo
left join sysobjects so on so.id = rsc_objid
left join master..sysprocesses sp on sp.spid = req_spid
--where rsc_dbid = (select db_id()) and ltrim(req_mode) in (6,7,11,14)
Run a trace in the Profiler (pick the blank template) and select the Deadlock graph event. On the new tab that appears (Events Extraction Settings), check "Save deadlock XML events separately" so each graph is saved in its own file. Open this file in an XML viewer and it will be easy to tell what is happening: each process is contained, with a stack of procedure calls, etc., and all the locks are in there too.
Let this trace run until the deadlock happens again; information is only recorded when a deadlock occurs, so there is not much overhead. If it never happens again, good, it is solved; if it does, you have captured all the info.
You can run a profiler trace on your dev box for the queries and see exactly what locks are taken. This is typically a huge amount of data, but much of it will be patterns you can skim over. E.g. for read committed isolation, you will see a succession of locks being acquired and released as you do a table or index scan (each row must be locked before it is read, and it is immediately released after it is read).
What isolation level do you run under? What kind of queries are deadlocking? Are you using explicit transactions that encompass multiple updates, or are they single statements deadlocking?
The most typical case for a deadlock is a transaction with the sequence (update table x, update table y), and a second transaction with the sequence (update table y, update table x). The common solution is to make sure you use the same update sequence across queries.
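A minimal sketch of that pattern (x and y are placeholder tables); run the two sessions side by side and one of them will be chosen as the deadlock victim:
-- Session 1:
BEGIN TRAN;
UPDATE x SET c = 1 WHERE id = 1;
-- (session 2 now runs: BEGIN TRAN; UPDATE y SET c = 1 WHERE id = 1;)
UPDATE y SET c = 1 WHERE id = 1;  -- blocks waiting for session 2 ...
-- (... while session 2 runs UPDATE x and blocks waiting for session 1)
COMMIT;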
Let us know what kind of queries they are; there are different common issues for different types of transactions.
Troubleshooting deadlocks:
http://blogs.msdn.com/bartd/archive/2006/09/09/747119.aspx
http://blogs.msdn.com/bartd/archive/2006/09/13/751343.aspx
Reproducing deadlocks:
http://sqlblog.com/blogs/alexander_kuznetsov/archive/2009/01/01/reproducing-deadlocks-involving-only-one-table.aspx
We are using ADO.NET to connect to a SQL 2005 server and doing a number of inserts/updates and selects in it. We changed one of the updates to be inside a transaction; however, it appears to (b)lock the entire table when we do it, regardless of the IsolationLevel we set on the transaction.
The behavior that I seem to see is that:
1. If you have no transactions, then it's an all-out fight (losers getting deadlocked).
2. If you have a few transactions, then they win all the time and block all others out, unless:
3. You have a few transactions and you set something like NOLOCK on the rest; then you get transactions and nothing blocked. This is because every statement (select/insert/delete/update) has an isolation level regardless of transactions.
Is this correct?
The answer to your question is: It depends.
If you are updating a table, SQL Server uses several strategies to decide how many rows to lock, row level locks, page locks or full table locks.
If you are updating more than a certain percentage of the table (configurable, as I remember), then SQL Server gives you a table-level lock, which may block selects.
The best references are:
Understanding Locking in SQL Server: http://msdn.microsoft.com/en-us/library/aa213039(SQL.80).aspx (for SQL Server 2000)
Introduction to Locking in SQL Server: http://www.sqlteam.com/article/introduction-to-locking-in-sql-server
Isolation Levels in the Database Engine: http://msdn.microsoft.com/en-us/library/ms189122.aspx (for SQL Server 2008, but a 2005 version is available)
Good luck.
Your update statement (i.e. one that changes data) will hold locks regardless of the isolation level and whether you have explicitly defined a transaction or not.
What you can control is the granularity of the locks by using query hints. So if the update is locking the entire table, you can specify a query hint to lock only the affected rows (the ROWLOCK hint). That is, unless your query is updating the whole table, of course.
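A minimal sketch of the hint (mytable and mycolumn are placeholders):
UPDATE mytable WITH (ROWLOCK)
SET mycolumn = 'test'
WHERE ID = 1;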
So to answer your question: the first connection to request locks on a resource will hold those locks for the duration of the transaction. You can specify that a select does not hold locks by using the read uncommitted isolation level; statements that change data (insert/update/delete) always hold locks regardless. The next connection to request locks on the same resource will wait until the first has finished, and will then hold its locks. Deadlocking is a specific scenario where two connections are each holding locks and each is waiting for the other connection's resource; to avoid the engine waiting forever, one connection is chosen as the deadlock victim.