I am trying to clean a temporal table. The quickest way seemed to be to drop or truncate the table, then roll back and keep only the required rows.
Now my issue is a 'deadlock' on the database. Is there a way to use 'with nolock' so the database does not lock?
BEGIN TRANSACTION;
drop table audit.Testing with (nolock) ;
rollback transaction
SELECT *
from
(select *
,rn = ROW_NUMBER() OVER (PARTITION BY Id ORDER BY Id DESC)
FROM audit.testing with (nolock)
) a
where rn =1
order by Id, SysEndTime desc
You are in a deadlock because WITH (NOLOCK) is the equivalent of using READ UNCOMMITTED. From the documentation on READUNCOMMITTED:
Specifies that dirty reads are allowed. No shared locks are issued to prevent other transactions from modifying data read by the current transaction, and exclusive locks set by other transactions do not block the current transaction from reading the locked data. Allowing dirty reads can cause higher concurrency, but at the cost of reading data modifications that then are rolled back by other transactions. This may generate errors for your transaction, present users with data that was never committed, or cause users to see records twice (or not at all).
READUNCOMMITTED and NOLOCK hints apply only to data locks. All queries, including those with READUNCOMMITTED and NOLOCK hints, acquire Sch-S (schema stability) locks during compilation and execution. Because of this, queries are blocked when a concurrent transaction holds a Sch-M (schema modification) lock on the table. For example, a data definition language (DDL) operation acquires a Sch-M lock before it modifies the schema information of the table. Any concurrent queries, including those running with READUNCOMMITTED or NOLOCK hints, are blocked when attempting to acquire a Sch-S lock. Conversely, a query holding a Sch-S lock blocks a concurrent transaction that attempts to acquire a Sch-M lock
(Microsoft docs)
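To see the blocking for yourself, here is a minimal two-session sketch using the question's audit.Testing table. The open transaction that dropped the table holds a Sch-M lock, and even a NOLOCK read still has to acquire a Sch-S lock, so it waits:

-- Session 1: take and hold a Sch-M (schema modification) lock
BEGIN TRANSACTION;
DROP TABLE audit.Testing;   -- Sch-M lock held until commit/rollback
-- (leave the transaction open for now)

-- Session 2: blocked despite the hint, because it still needs a Sch-S lock
SELECT COUNT(*) FROM audit.Testing WITH (NOLOCK);

-- Session 1: release the Sch-M lock and undo the drop
ROLLBACK TRANSACTION;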
Related
I have a stored procedure which does the following:
selects top N from table
sets these rows as processed
returns these rows to the client
Here is roughly how I am doing it in Sybase ASE:
set rowcount @count
begin tran get_items
insert into #temp_table
select item
from available_item
where is_processed = 0
update available_item
set is_processed = 1
from available_item, #temp_table
where available_item.item = #temp_table.item
-- select the processed items...
commit tran
I am wondering whether there is a race condition here. If two separate processes execute this stored procedure at the same time, could they select and mark processed the same data? Or does having it in a transaction stop this?
If not, is there a way to hold locks on selected rows?
Some of the details will depend on your table's locking scheme. Allpages, datapages, and datarows locking will each have a different impact on your ability to run concurrent updates on a single table. I am assuming a datapages/datarows scheme to allow for concurrency.
Your query will grab an initial shared page/row lock, which will be upgraded to an update lock, which will then be followed by an exclusive row lock on the updated pages/rows. No other processes will be able to make changes to the selected pages/rows for the duration of the transaction, but another process could read the selected rows prior to your update, which could lead to some inconsistency.
To get around this possibility, you can set the isolation level in the transaction to either isolation level 2 (repeatable reads) or isolation level 3 (serialization). You may want to read up on the specifics of each level to decide which one you want to enforce, and the trade-offs associated with it.
In your transaction, you would use it like this:
set rowcount @count
set transaction isolation level 2
...
Something to note is that, depending on the number of records you grab in your query, you could trigger lock promotion, which could prevent your concurrent transactions from executing even if they are not looking at the same rows as your first transaction. By default, the server will attempt to escalate to a table lock once it has acquired locks on more than 200 pages/rows. This threshold can be changed, either to an explicit value or to a range of values and a percentage, and is configurable at the server, database, or table level.
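If the promotion threshold becomes the problem, it can be raised for just this table. As a rough sketch (assuming the question's available_item table uses datarows locking; the watermark values are purely illustrative, not recommendations):

-- raise the low/high watermarks and percentage for row-lock promotion
-- on a single table in ASE
exec sp_setrowlockpromote "table", "available_item", 5000, 5000, 100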
Relevant Documentation:
Transaction: Maintaining Data Consistency and Recovery
Performance and Tuning Series: Locking and Concurrency Control
Transact-SQL Users Guide 15.7 > Transactions: Maintaining Data Consistency and Recovery
I know that you would normally apply WITH (NOLOCK) at the table level but let's say hypothetically you'd like to join 15 tables together. Is there an easier way to apply WITH (NOLOCK) to all of the tables without having to write it after each table name?
You can set the transaction isolation level to read uncommitted:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT ...
This effectively applies WITH (NOLOCK) to every table in every SELECT for the session, until the isolation level is changed back.
From MSDN:
READ UNCOMMITTED
Specifies that statements can read rows that have been modified by other transactions but not yet committed.
Transactions running at the READ UNCOMMITTED level do not issue shared locks to prevent other transactions from modifying data read by the current transaction. READ UNCOMMITTED transactions are also not blocked by exclusive locks that would prevent the current transaction from reading rows that have been modified but not committed by other transactions. When this option is set, it is possible to read uncommitted modifications, which are called dirty reads. Values in the data can be changed and rows can appear or disappear in the data set before the end of the transaction. This option has the same effect as setting NOLOCK on all tables in all SELECT statements in a transaction. This is the least restrictive of the isolation levels.
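For example, instead of repeating the hint after each of the 15 tables, set the isolation level once and switch it back when you are done. A rough sketch (the table and column names here are placeholders, not from the question):

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT o.OrderId, c.CustomerName, p.ProductName
FROM dbo.Orders o                                  -- no per-table WITH (NOLOCK) needed
JOIN dbo.Customers c ON c.CustomerId = o.CustomerId
JOIN dbo.Products  p ON p.ProductId  = o.ProductId;
-- ...the remaining tables would join here the same way

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;    -- restore the default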
I believe that each SELECT statement in SQL Server will cause either a Shared or Key lock to be placed. But will it place that same type of lock when it is in a transaction? Will Shared or Key locks allow other processes to read the same records?
For example I have the following logic
Begin Trans
-- select data that is needed for the next 2 statements
SELECT * FROM table1 where id = 1000; -- Assuming this returns 10, 20, 30
-- insert data that was read from the first query
INSERT INTO table2 (a,b,c) VALUES(10, 20, 30);
-- update table 3 with data found in the first query
UPDATE table3
SET d = 10,
e = 20,
f = 30;
COMMIT;
At this point will my select statement still create a shared or key lock or will it get escalated to exclusive lock? Will other transaction be able to read records from the table1 or will all transaction wait until the my transaction is committed before others are able to select from it?
In an application, does it make sense to move the select statement outside of the transaction and keep just the insert/update in one transaction?
A SELECT will always place a shared lock - unless you use the WITH (NOLOCK) hint (then no lock will be placed), use a READ UNCOMMITTED transaction isolation level (same thing), or unless you specifically override it with query hints like WITH (XLOCK) or WITH (UPDLOCK).
A shared lock allows other reading processes to also acquire a shared lock and read the data - but they prevent exclusive locks (for insert, delete, update operations) from being acquired.
In this case, with only a handful of rows selected, there will be no lock escalation (that only happens when a single statement acquires roughly 5,000 locks on one table).
Depending on the transaction isolation level, those shared locks will be held for different amounts of time. With READ COMMITTED, the default level, the lock is released as soon as the data has been read, while with the REPEATABLE READ or SERIALIZABLE levels the locks are held until the transaction is committed or rolled back.
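If you want to see this for yourself, here is a small sketch (reusing table1 and the id from the question) that inspects the locks held by the current session via sys.dm_tran_locks. Under READ COMMITTED the shared locks are usually already gone by the time you look; under REPEATABLE READ they are still listed:

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;   -- try READ COMMITTED too
BEGIN TRAN;

SELECT * FROM table1 WHERE id = 1000;

-- list the locks this session is currently holding
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;

COMMIT;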
We're considering turning on read committed snapshot isolation level to help with some of our table locking problems.
We have a few stored procedures that use the table hint WITH (ROWLOCK, READPAST) as a queuing based table setup. This prevents multiple worker roles from reading the same row.
I'm a little concerned that the read committed snapshot could potentially break the queuing system and more than one worker role could read the same row. Does anyone know if turning on RCS isolation would break this process?
WITH q AS
(
SELECT TOP (1) Column1
FROM Table1 c WITH (ROWLOCK, READPAST)
WHERE c.NextAttemptDate <= GETUTCDATE()
ORDER BY c.NextAttemptDate ASC
)
UPDATE q
SET OperationStatusType = 8
OUTPUT inserted.Column1
An UPDATE takes U-locks on rows of the target table while it checks whether they match your filter or not. After that the lock is either released or converted to an X-lock further down the query pipeline.
Locking hints and the isolation level cannot change the locks taken for writes, so you are safe: the U-locks will still be taken even under read committed snapshot.
I'm in the habit of always adding WITH (UPDLOCK) redundantly as well so that this behavior is documented in the code.
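Applied to the query from the question, that habit would look roughly like this (purely illustrative; the behavior is the same, the UPDLOCK just spells out what the UPDATE does anyway, and OperationStatusType is projected in the CTE so it can be SET through it):

WITH q AS
(
    SELECT TOP (1) Column1, OperationStatusType
    FROM Table1 c WITH (ROWLOCK, UPDLOCK, READPAST)
    WHERE c.NextAttemptDate <= GETUTCDATE()
    ORDER BY c.NextAttemptDate ASC
)
UPDATE q
SET OperationStatusType = 8
OUTPUT inserted.Column1;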
I just want to ask: will the first statement always be executed first when the two are wrapped in a transaction? For example, I have 500k records to be deleted and 500k to be inserted; is there a possibility of locking?
I have actually already tested this query and it works fine, but I want to make sure my assumption is correct.
Note: this will delete and insert the same records, with possible updates to other columns.
BEGIN TRAN;
DELETE FROM OutputTable WHERE ID IN (1, 2, 3, 4 /* etc. */);
INSERT INTO OutputTable VALUES (1, 2, 3, 4 /* etc. */);
COMMIT TRAN;
Within a transaction all write locks (all locks acquired for modifications) must obey the strict two phase locking rule. One of the consequences is that a write (X) lock acquired in a transaction cannot be released until the transaction commits. So yes, the DELETE and INSERT will execute sequentially and all locks acquired during the DELETE will be retained while executing the INSERT.
Keep in mind that deleting 500k rows in a transaction will escalate the locks to one table lock, see Lock Escalation.
Deleting 500k rows and inserting 500k rows in a single transaction, while maybe correct, is a bad idea. You should avoid such large units of works, long transaction, if possible. Long transactions pin the log in place, create blocking and contention, increase recovery and DB startup time, increase SQL Server resource consumption (locks require memory).
You should consider doing the operation in small batches (perhaps 10,000 rows at a time), use MERGE instead of DELETE/INSERT (if possible) and, last but not least, consider a partitioned sliding window implementation, see How to Implement an Automatic Sliding Window in a Partitioned Table.
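As a rough sketch of the MERGE alternative (the #NewRows staging table and OtherColumn are hypothetical stand-ins for wherever your 500k replacement rows come from, and the same batching advice still applies):

MERGE OutputTable AS target
USING #NewRows AS source
    ON target.ID = source.ID
WHEN MATCHED THEN
    UPDATE SET target.OtherColumn = source.OtherColumn   -- same ID, other columns may change
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ID, OtherColumn) VALUES (source.ID, source.OtherColumn);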
From the documentation on TRANSACTION (emphasis mine):
BEGIN TRANSACTION represents a point at which the data referenced by a connection is logically and physically consistent. If errors are encountered, all data modifications made after the BEGIN TRANSACTION can be rolled back to return the data to this known state of consistency. Each transaction lasts until either it completes without errors and COMMIT TRANSACTION is issued to make the modifications a permanent part of the database, or errors are encountered and all modifications are erased with a ROLLBACK TRANSACTION statement.

BEGIN TRANSACTION starts a local transaction for the connection issuing the statement. Depending on the current transaction isolation level settings, many resources acquired to support the Transact-SQL statements issued by the connection are locked by the transaction until it is completed with either a COMMIT TRANSACTION or ROLLBACK TRANSACTION statement. Transactions left outstanding for long periods of time can prevent other users from accessing these locked resources, and also can prevent log truncation.

Although BEGIN TRANSACTION starts a local transaction, it is not recorded in the transaction log until the application subsequently performs an action that must be recorded in the log, such as executing an INSERT, UPDATE, or DELETE statement. An application can perform actions such as acquiring locks to protect the transaction isolation level of SELECT statements, but nothing is recorded in the log until the application performs a modification action.