Does a SELECT query lock a table or a page in SQL Server?

We have just upgraded our production SQL instance from 2012 to 2016 Standard Edition. As we have been working hard to track down any remaining deadlocks, I have just come across one and didn't quite understand what exactly was happening. The reason I did not understand the issue is that one session is blocking another, but the blocking session is a SELECT query; it prevents the other session from inserting into the table.
The blocked session's query is:
INSERT INTO [AUDITHISTORYLOG_BACKUP_2017_1]([TABLE_NAME],[OPERATION_TYPE],[HOST_NAME],[USER_NAME],[PRIMARY_KEY],[FIELD],[OLD_VALUE],[NEW_VALUE],[CREATE_DATE]) values(#1,#2,#3,#4,#5,#6,#7,#8,#9)
The blocking session's query is:
SELECT * FROM AuditDB.dbo.AUDITHISTORYLOG_BACKUP_2017_1 WHERE CREATE_DATE>CAST(GETDATE()-30 AS DATE) ORDER BY CREATE_DATE DESC
How does this SELECT query block the INSERT transaction?
Wait_Type: LCK_M_IX
Wait_Resource: PAGE: 10:1:20598647
Transaction Isolation Level: Read Committed
Can anyone help?

How does this SELECT query block the INSERT transaction?
Yes, it can, because the lock types are not compatible. A SELECT query takes a SHARED lock, whereas an INSERT requires an EXCLUSIVE lock, and the two are incompatible. That is, if a shared lock is present on a resource (in your case, a page of the AUDITHISTORYLOG_BACKUP_2017_1 table) on which an exclusive lock is requested, the exclusive lock cannot be granted until the shared lock has been released.
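While the blocking is happening, a query along these lines (a minimal sketch using standard DMVs; not part of the original answer) should show the waiting INSERT together with the session that holds the shared lock:
-- Sketch: list blocked requests and who is blocking them
SELECT
    r.session_id          AS blocked_session,
    r.blocking_session_id AS blocking_session,
    r.wait_type,          -- e.g. LCK_M_IX
    r.wait_resource,      -- e.g. PAGE: 10:1:20598647
    t.text                AS blocked_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;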

Related

Why doesn't my SQL statement work? How do transactions work?

Question:
I am trying out a few things with regard to transactions. Consider the following operations:
Open a connection window 1 in SQL Server Management Studio.
BEGIN a transaction, then delete about 2,000 rows of data from table A.
Open another connection window 2 in SQL Server Management Studio.
Insert one new row of data into table A in window 2 (no transaction); it runs successfully.
Then I repeat the same operations, but in step 2 I delete 10k rows of data. In that case, step 4 can't run successfully, even though I waited for half an hour.
It shows that it is executing SQL... and never finishes. Finally, I insert the data using connection window 1 instead, and it works.
Why does it work with 2k rows but not 10k rows?
The SQL statements I execute:
In connection window A, I execute:
BEGIN TRAN
DELETE FROM tableA -- deletes 10K rows
In connection window B, I execute:
INSERT INTO tableA(..) VALUES (...)
Window B can't execute successfully.
Many thanks @Gordon
The cause:
I searched for the keyword "lock escalation".
I tried to trace lock escalation using SQL Server Profiler, and I got a lock:escalation event when I deleted a lot of data in a transaction (without committing or rolling back).
So now I understand the concept of lock escalation:
I deleted too much data in one table, so the row locks escalated to a table lock. Since I didn't commit or roll back, other connections (or applications) couldn't access the table while the table lock was held.
(Screenshot: tracing lock escalation in SQL Server Profiler.)
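Besides Profiler, here is a sketch (my addition, not from the original answer) to see the escalated table-level lock from another window while the transaction is still open:
-- Sketch: show table-level (OBJECT) locks held on tableA
SELECT resource_type, request_mode, request_status, request_session_id
FROM sys.dm_tran_locks
WHERE resource_type = 'OBJECT'
  AND resource_database_id = DB_ID()
  AND resource_associated_entity_id = OBJECT_ID('tableA');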
When does lock escalation happen in SQL Server?
"The locks option also affects when lock escalation occurs. When locks is set to 0, lock escalation occurs when the memory used by the current lock structures reaches 40 percent of the Database Engine memory pool. When locks is not set to 0, lock escalation occurs when the number of locks reaches 40 percent of the value specified for locks."
How to configure this using SQL Server Management Studio (it will impact lock escalation):
To configure the locks option:
In Object Explorer, right-click a server and select Properties.
Click the Advanced node.
Under Parallelism, type the desired value for the locks option.
Use the locks option to set the maximum number of available locks, thus limiting the amount of memory SQL Server uses for them.
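The same setting can also be changed in T-SQL; a hedged sketch (locks is an advanced, server-wide option):
-- Sketch: set the "locks" server option via sp_configure
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'locks', 0; -- 0 = dynamic lock memory management (the default)
RECONFIGURE;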
What you are experiencing is "lock escalation". By default, SQL Server uses (I think) row-level locks for the delete. However, if a statement acquires more than a certain threshold of locks -- 5,000 -- then SQL Server escalates the locking to the entire table.
This is an automatic mechanism, which you can turn off if you need to.
There is a lot of information about this, both in the SQL Server documentation and in related documents.
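If you do need to turn it off for one table, a hedged sketch (this syntax exists from SQL Server 2008 onward, so it may not apply to every version discussed here):
-- Sketch: disable lock escalation for a single table
ALTER TABLE tableA SET (LOCK_ESCALATION = DISABLE);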

Design a Lock for SQL Server to help relax the conflict between INSERT and SELECT

The server is SQL Azure, which is basically SQL Server 2008 for normal purposes.
I have a table, called TASK, that constantly has new data coming in (new tasks) and being removed (completed tasks).
For new data coming in, I use INSERT INTO .. SELECT ..., which most of the time takes very long, let's say dozens of minutes.
For old data going out, I first use SELECT (WITH NOLOCK) to get a task, UPDATE to let other threads know this task has started processing, then DELETE once it is finished.
Deadlocks sometimes happen on the SELECT, but most of the time on the UPDATE and DELETE.
This is not a time-critical task, so I can start processing the new data once all the INSERTs have finished. Is there any kind of LOCK that asks the SELECT not to pick up rows before the INSERT has finished? Or any other suggestion to avoid the conflict? I can redesign the table if needed.
Since SQL Server 2005, resolving lock conflicts is easier.
For the conflict:
1. You can use Service Broker.
2. Use the isolation level.
Run DBCC USEROPTIONS; in the last row you can see that the default isolation level is read committed, and this is set at the session level.
We can change the level to READ_COMMITTED_SNAPSHOT to reduce conflicts. SQL Server does not have true row-level versioning like Oracle, but we can use this method to get similar behavior:
ALTER DATABASE DBName
SET READ_COMMITTED_SNAPSHOT ON;
To turn this feature on, the database must be in single-user mode.
Then you can test it with two sessions, A and B:
A: UPDATE table1 WITH (XLOCK) SET name = 'new' WHERE id = 1
B: You can still update other rows and select all the data from the table.
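A hedged sketch of enabling the option (my addition; WITH ROLLBACK IMMEDIATE kicks out other connections so the ALTER can proceed):
-- Sketch: enable READ_COMMITTED_SNAPSHOT while holding exclusive access
ALTER DATABASE DBName SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE DBName SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE DBName SET MULTI_USER;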
My English is not very good, but locking I do know.
Functionally, there are three kinds of locks in SQL Server:
1. Optimistic locking, using a timestamp (rowversion) column for control.
2. Pessimistic locking, forcing locks when you use the data, with UPDLOCK, XLOCK and so on.
3. Application ("virtual") locks, using the sp_getapplock procedure (see the sketch below).
If you need a locking scheme for your system architecture, please email me: mjjjj2001@163.com
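A minimal sketch of that third option (my addition; the resource name 'TaskQueue' is a hypothetical example):
-- Sketch: serialize task processing with an application lock
BEGIN TRAN;
DECLARE @result int;
EXEC @result = sp_getapplock @Resource = 'TaskQueue',
                             @LockMode = 'Exclusive',
                             @LockTimeout = 10000; -- wait up to 10 seconds
IF @result >= 0
BEGIN
    -- safe to work on the TASK table here
    EXEC sp_releaseapplock @Resource = 'TaskQueue';
END
COMMIT TRAN;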
Consider using service broker if this is a processing queue.
There are a number of considerations that affect performance and locking. I surmise that the data is being updated and deleted in a separate session. Which transaction isolation level is in use for the insert session and the delete session?
Have the insert session and all its transactions committed and closed by the time the delete session runs? Are there multiple delete sessions running concurrently? It is very important to have an index on the columns you use to identify a task in the SELECT/UPDATE/DELETE statements, especially if you move to a higher isolation level such as REPEATABLE READ or SERIALIZABLE; see the sketch below.
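For example, a hedged sketch of such an index (the Status and TaskId column names are hypothetical):
-- Sketch: support the task-lookup predicate used by SELECT/UPDATE/DELETE
CREATE INDEX IX_TASK_Status ON TASK (Status) INCLUDE (TaskId);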
All of these issues could be solved by moving to Service Broker if it is appropriate.

PostgreSQL Locking Questions

I am trying to figure out how to lock an entire table against writes in Postgres; however, it doesn't seem to be working, so I assume I am doing something wrong.
Table name is 'users' for example.
LOCK TABLE users IN EXCLUSIVE MODE;
When I check the view pg_locks it doesn't seem to be in there. I've tried other locking modes as well to no avail.
Other transactions are also capable of performing the LOCK function and do not block like I assumed they would.
In the psql tool (8.1) I simply get back LOCK TABLE.
Any help would be wonderful.
There is no LOCK TABLE in the SQL standard, which instead uses SET TRANSACTION to specify concurrency levels on transactions. You should be able to use LOCK in transactions like this one:
BEGIN WORK;
LOCK TABLE table_name IN ACCESS EXCLUSIVE MODE;
SELECT * FROM table_name WHERE id=10;
UPDATE table_name SET field1 = 'test' WHERE id = 10;
COMMIT WORK;
I actually tested this on my db.
Bear in mind that LOCK TABLE only lasts until the end of a transaction. So it is ineffective unless you have already issued a BEGIN in psql.
(in 9.0 this gives an error: "LOCK TABLE can only be used in transaction blocks". 8.1 is very old)
The lock is only active until the end of the current transaction and released when the transaction is committed (or rolled back).
Therefore, you have to embed the statement into a BEGIN and COMMIT/ROLLBACK block. After executing:
BEGIN;
LOCK TABLE users IN EXCLUSIVE MODE;
you could run the following query to see which locks are active on the users table at the moment:
SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid WHERE relation = 'users'::regclass::oid;
The query should show the exclusive lock on the users table. After you perform a COMMIT and re-run the above-mentioned query, the lock should no longer be present.
In addition, you could use a lock tracing tool like https://github.com/jnidzwetzki/pg-lock-tracer/ to get real-time insights into the locking activity of the PostgreSQL server. Using such lock tracing tools, you can see which locks are taken and released in real-time.

Why would this SELECT statement lock up on SQL Server?

I have a simple query like this:
SELECT * FROM MY_TABLE;
When I run it, SQL Server Management Studio hangs.
Other tables and views are working fine.
What can cause this? I've had locks while running UPDATE statements before, and I know how to approach those. But what could cause a SELECT to lock?
I have run the "All Blocking Transactions" report, and it says there are none.
It is probably not the SELECT that is locking up, but some other process that is editing (update/delete/insert) the table and causing the locks.
You can see which process is blocking by running EXEC sp_who2 on your SQL Server.
Alternatively, if you are OK with dirty reads, you can do one of two things:
SELECT * FROM Table WITH (NOLOCK)
or
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT * FROM Table
If there's a lot of other activity going on, something else might be causing locks, and your SELECT might be the deadlock victim. If you run the following:
SELECT * FROM my_table WITH (NOLOCK)
you're telling the database that you're OK reading dirty (uncommitted) data, and that locks caused by other activity can safely be ignored.
Also, if a query like that causes Management Studio to hang, your table could probably use some optimization.
Use this:
SELECT * FROM MY_TABLE WITH (NOLOCK)
Two possibilities:
It's a really massive table, and you're trying to return 500m rows.
Some other process has a lock on the table, preventing your SELECT from going through until that lock is released.
MY_TABLE could also be locked up by an uncommitted transaction -- i.e. a script or stored procedure running (or failed while running) in another SSMS window.
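A quick, hedged way to check for such an open transaction in the current database (my addition):
-- Sketch: report the oldest active transaction, if any
DBCC OPENTRAN;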

SQL Transactions (b)locking Selects - is my understanding correct

We are using ADO.NET to connect to a SQL 2005 server and doing a number of inserts/updates and selects with it. We changed one of the updates to run inside a transaction; however, it appears to (b)lock the entire table when we do so, regardless of the IsolationLevel we set on the transaction.
The behavior that I seem to see is:
If you have no transactions, then it's an all-out fight (losers getting deadlocked).
If you have a few transactions, then they win all the time and block all the others out, unless:
If you have a few transactions and you set something like NOLOCK on the rest, then you get transactions and nothing blocked. This is because every statement (select/insert/delete/update) has an isolation level regardless of transactions.
Is this correct?
The answer to your question is: It depends.
If you are updating a table, SQL Server uses several strategies to decide how much to lock: row-level locks, page locks, or a full table lock.
If you are updating more than a certain proportion of the table (configurable, as I remember), then SQL Server gives you a table-level lock, which may block selects.
The best reference is:
Understanding Locking in SQL Server:
http://msdn.microsoft.com/en-us/library/aa213039(SQL.80).aspx
(for SQL Server 2000)
Introduction to Locking in SQL Server: http://www.sqlteam.com/article/introduction-to-locking-in-sql-server
Isolation Levels in the Database Engine: http://msdn.microsoft.com/en-us/library/ms189122.aspx (for SQL server 2008, but 2005 version is available).
Good luck.
Your update statement (i.e. one that changes data) will hold locks regardless of the isolation level and of whether you have explicitly defined a transaction or not.
What you can control is the granularity of the locks by using query hints. So if the update is locking the entire table, you can specify a query hint to lock only the affected rows (the ROWLOCK hint). That is, unless your query is updating the whole table, of course.
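A hedged sketch of such a hint (my addition; the table and column names are illustrative):
-- Sketch: ask the engine to take row locks rather than escalate for this update
UPDATE MyTable WITH (ROWLOCK)
SET Status = 'Done'
WHERE TaskId = 42;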
So to answer your question: the first connection to request locks on a resource will hold those locks for the duration of the transaction. You can specify that a select does not hold locks by using the read uncommitted isolation level; statements that change data (insert/update/delete) always hold locks regardless. The next connection to request locks on the same resource will wait until the first has finished and will then hold its own locks. Deadlocking is a specific scenario where two connections each hold locks and each waits for the other connection's resource; to avoid the engine waiting forever, one connection is chosen as the deadlock victim.
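As a hedged aside (my addition): you can also influence which session gets picked as the victim.
-- Sketch: this session volunteers to be the deadlock victim
SET DEADLOCK_PRIORITY LOW;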