How to implement Serializable Isolation Level in SQL Server

I need to implement the serializable isolation level in SQL Server, but I've tried many approaches and can't get it to work.
I need to lock one row in a transaction (it doesn't matter if the whole table gets locked) so that another transaction can't even SELECT the row (can't read it at all).
The last thing I tried:
For transaction 1:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
SELECT code FROM table1 WHERE code = 1
-- At this point I run transaction 2's SELECT from another session
COMMIT TRAN
For transaction 2:
BEGIN TRAN
SELECT code FROM table1 WHERE code = 1
COMMIT TRAN
I would expect transaction 2 to wait until transaction 1 commits, but transaction 2 returns the row immediately.
Can anyone explain what I'm missing?

SQL Server conforms to the strict definition of serializability: there must be a result that could logically be produced if both transactions ran in some serial order - Transaction 1 finishing before Transaction 2 starts, or vice versa.
This results in some effects that can be different than you would expect. There is a great explanation of the Serializable isolation level over at SQLPerformance.com that makes clear some of what this logical serializability ends up meaning. (Very helpful site, that one.)
For your queries above, there is no logical requirement to prevent the second query from reading the same row as the first. No matter in which order the queries run, they both return the same data without modifying it. Since the engine can determine this, there is no reason to place a blocking read lock on the data. However, if one of the queries performed an update on the data, then (warning - a logical assumption here, since I don't actually know the internals of how SQL Server handles this) the engine would take a stronger lock on the selected rows.
TL;DR - SQL Server wants to minimize blocking, so it uses logical analysis to see what types of locks are needed for a serializable isolation level, and it (tries to) use the minimum number and strength of locks needed to achieve its goal.
Now that we've dealt with that - there are only two ways that I can think of to lock a row so that no one else can read it: using XLOCK + TABLOCK (locking the whole table - not a recommended practice) or having some form of a field on each row that is updated when you start your process - something like an SPID field, or a bit flag for Locked. When you update it within your transaction, only SELECTs with NOLOCK hints will be able to read it.
Clearly, neither of these are optimal. I recommend the "This row is busy - go away" flag, as that's probably the approach I would take for an (almost) absolute lock on a row.
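For reference, here is a minimal sketch of the XLOCK + TABLOCK approach described above. table1 and code come from the question; the rest is illustrative, not a definitive recipe:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
    -- XLOCK requests an exclusive lock and TABLOCK applies it to the whole
    -- table; a row-level XLOCK alone can be bypassed by read-committed
    -- readers on unchanged data, which is why the answer pairs the two.
    SELECT code
    FROM table1 WITH (XLOCK, TABLOCK)
    WHERE code = 1
    -- A plain SELECT from another session now blocks until COMMIT;
    -- only SELECTs with NOLOCK/READUNCOMMITTED can still read the row.
COMMIT TRAN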

According to the documentation:
SERIALIZABLE specifies the following:
Statements cannot read data that has been modified but not yet committed by other transactions.
No other transactions can modify data that has been read by the current transaction until the current transaction completes.
Other transactions cannot insert new rows with key values that would fall in the range of keys read by any statements in the current transaction until the current transaction completes.
If you're not making any changes to data with an INSERT, UPDATE, or DELETE inside transaction 1, SQL will release the Shared Lock after the read completes.
What you might want to try is adding a table hint to prevent the row lock from being released until the end of transaction 1.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
SELECT code
FROM table1 WITH(ROWLOCK, HOLDLOCK)
WHERE code = 1
COMMIT TRAN
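Note that (ROWLOCK, HOLDLOCK) holds a shared lock until the end of the transaction, so it blocks writers, but other transactions can still SELECT the row; to block plain readers as well you would need an exclusive hint such as XLOCK, as in the sketch under the first answer.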

Maybe you can solve this with some hack like this?
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
UPDATE someTableForThisHack SET val = CASE WHEN val = 1 THEN 0 ELSE 1 END
SELECT code FROM table1.....
COMMIT TRANSACTION
So you create a table someTableForThisHack and insert one row into it.
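A minimal setup sketch for that table, using the name invented above:

CREATE TABLE someTableForThisHack (val int NOT NULL);
INSERT INTO someTableForThisHack (val) VALUES (1);

Because every transaction flips this row first, the update's exclusive lock forces concurrent transactions to run one after another.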

Related

Unexpected behaviour of the Serializable isolation level

Test setup
I have SQL Server 2014 and a simple table MyTable with columns Code (int) and Data (nvarchar(50)); no indexes have been created on this table.
I have 4 records in the table, as follows:
1, First
2, Second
3, Third
4, Fourth
Then I run the following query in a transaction:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRANSACTION
DELETE FROM dbo.MyTable
WHERE dbo.MyTable.Code = 2
One row is affected, and I don't issue either COMMIT or ROLLBACK.
Next I start yet another transaction:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
SELECT TOP 10 Code, Data
FROM dbo.MyTable
WHERE Code = 3
At this step the transaction with the SELECT query hangs waiting for completion of the transaction with the DELETE query.
My question
I don't understand why the transaction with the SELECT query waits for the transaction with the DELETE query. As I understand it, the deleted row (Code = 2) has nothing to do with the selected row (Code = 3), and under SERIALIZABLE SQL Server shouldn't need to lock the entire table in this case. Maybe this happens because the minimum locking granularity for SERIALIZABLE is a page? But then the behavior would be inconsistent for rows on other pages if the table held more rows, say 1,000,000 (rows on other pages wouldn't be locked). Please help me figure out why the locking takes place in my case.
Under locking READ COMMITTED, REPEATABLE READ, or SERIALIZABLE, a SELECT query must place shared (S) locks on every row the query plan actually reads. The locks can be placed at the row, page, or table level. Additionally, SERIALIZABLE places locks on key ranges, so that no other session can insert a matching row while the lock is held.
And because you have "no indexes created for this table", this query:
SELECT TOP 10 Code, Data
FROM dbo.MyTable
WHERE Code = 3
has to be executed with a table scan, and it must read all the rows (even those with Code=2) to determine whether they qualify for the SELECT.
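For illustration (no index exists in the original setup), an index on Code would let the plan seek straight to the matching key, so the serializable range locks would typically cover only the keys around Code = 3 and the query would not touch the deleted row:

CREATE NONCLUSTERED INDEX IX_MyTable_Code ON dbo.MyTable (Code);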
Scans taking locks on every row are one reason why you should almost always use row-versioning, either by setting the database to READ COMMITTED SNAPSHOT, or by coding read-only transactions to use SNAPSHOT isolation.
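A sketch of both row-versioning options, assuming a database named MyDb (a hypothetical name):

-- Option 1: plain READ COMMITTED readers see the last committed version
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

-- Option 2: opt in per transaction
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRANSACTION
SELECT TOP 10 Code, Data FROM dbo.MyTable WHERE Code = 3 -- no longer blocks behind the DELETE
COMMIT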

Does a SQL UPDATE operation read data to "local memory"?

This answer quotes this Technet article which explains the two interpretations of lost updates:
A lost update can be interpreted in one of two ways. In the first scenario, a lost update is considered to have taken place when data that has been updated by one transaction is overwritten by another transaction, before the first transaction is either committed or rolled back. This type of lost update cannot occur in SQL Server 2005 because it is not allowed under any transaction isolation level.
The other interpretation of a lost update is when one transaction (Transaction #1) reads data into its local memory, and then another transaction (Transaction #2) changes this data and commits its change. After this, Transaction #1 updates the same data based on what it read into memory before Transaction #2 was executed. In this case, the update performed by Transaction #2 can be considered a lost update.
So it looks like the difference is that in the first scenario the whole update happens without "local memory", while in the second one "local memory" is used, and that makes the difference.
Suppose I have the following code:
UPDATE MagicTable SET MagicColumn = MagicColumn + 10 WHERE SomeCondition
Does this involve "local memory"? Is it prone to the first or to the second interpretation of lost updates?
I suppose it would come under the second interpretation.
However, given the way this type of UPDATE is implemented in SQL Server, a lost update is still not possible. Rows read for the update are protected with a U lock (converted to an X lock when the row is actually updated).
U locks are not compatible with other U locks (or X locks).
So at all isolation levels if two concurrent transactions were to run this statement then one of them would end up blocked behind the other transaction's U lock or X lock and would not be able to proceed until that transaction completes.
Therefore it is not possible for lost updates to occur with this pattern in SQL Server at any isolation level.
To achieve a lost update you would need to do something like
BEGIN TRAN
DECLARE @MagicColumn INT;
/* Two concurrent transactions can both read the same pre-update value */
SELECT @MagicColumn = MagicColumn FROM MagicTable WHERE SomeCondition
UPDATE MagicTable SET MagicColumn = @MagicColumn + 10 WHERE SomeCondition
COMMIT
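The usual way to close that window, sketched here with the placeholder names from the snippet above, is to take the update lock at read time with the UPDLOCK hint:

BEGIN TRAN
DECLARE @MagicColumn INT;
-- UPDLOCK makes the read take a U lock, held to the end of the transaction,
-- so two concurrent transactions cannot both read the pre-update value.
SELECT @MagicColumn = MagicColumn FROM MagicTable WITH (UPDLOCK) WHERE SomeCondition
UPDATE MagicTable SET MagicColumn = @MagicColumn + 10 WHERE SomeCondition
COMMIT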

Check if table data has changed?

I am pulling the data from several tables and then passing the data to a long running process. I would like to be able to record what data was used for the process and then query the database to check if any of the tables have changed since the process was last run.
Is there a method of solving this problem that should work across all SQL databases?
One possible solution that I've thought of is having a separate table that is only used for keeping track of whether the data has changed since the process was run. The table contains a "stale" flag. When I start running the process, stale is set to false. If any creation, update, or deletion occurs in any of the tables on which the operation depends, I set stale to true. Is this a valid solution? Are there better solutions?
One concern with my solution is situations like this:
One user starts inserting a new row into one of the tables. Stale gets set to true, but the new row has not actually been added yet. Another user has simultaneously started the long running process, pulling the data from the tables and setting the flag to false. The row is finally added. Now the data used for the process is out of date but the flag indicates it is not stale. Would transactions be able to solve this problem?
EDIT:
This is some SQL for my idea. Not sure if it works, but just to give you a better idea of what I was thinking:
-- First transaction: read the data and set the flag to 'not stale'
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
UPDATE flag SET stale = 0
SELECT * FROM DATATABLE
COMMIT TRANSACTION
-- Second transaction: update the data and set the flag to 'stale'
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
UPDATE data SET val = 15 WHERE ID = 10
UPDATE flag SET stale = 1
COMMIT TRANSACTION
I do not have much experience with transactions or hand-writing SQL, so there are probably issues with this. From what I understand, two serializable transactions cannot be interleaved. Please correct me if I'm wrong.
Is there a way to accomplish this with only the first transaction? The process will be run rarely, but the updates to the data table will occur more frequently, so it would be nice to not lock up the data table when performing updates.
Also, is the SET TRANSACTION ISOLATION LEVEL syntax specific to MS SQL Server?
The stale flag will probably work, but a timestamp would be better since it provides more metadata about the age of the records which could be used to tune your queries, e.g., only pull data that is over 5 minutes old.
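For example (SQL Server syntax; last_updated is a hypothetical column on the question's DATATABLE):

SELECT *
FROM DATATABLE
WHERE last_updated < DATEADD(MINUTE, -5, SYSUTCDATETIME()) -- only rows older than 5 minutes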
To address your concern about inserting a row at the same time a query is run, transactions with an appropriate isolation level will help. For row inserts, updates, and selects, at least use a transaction with an isolation level that prevents dirty reads so that no other connections can see the updated data until the transaction is committed.
If you are strongly concerned about the case where an update happens at the same time as a record pull, you could use the REPEATABLE READ or even SERIALIZABLE isolation levels, but this will slow DB access down.
Your SQL Server sample should work. For other databases, here's an example that works in PostgreSQL:
Transaction 1
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- run queries that update the tables, then set last_updated column
UPDATE sometable SET last_updated = now() WHERE id = 1;
COMMIT;
Transaction 2
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- select data from tables, then set last_queried column
UPDATE sometable SET last_queried = now() WHERE id = 1;
COMMIT;
If transaction 1 starts, and then transaction 2 starts before transaction 1 has completed, transaction 2 will block on the UPDATE and then throw a serialization error when transaction 1 is committed. If transaction 2 starts first, and transaction 1 starts before it has finished, then transaction 1 will get the error instead. Your application code or process should be able to handle those errors.
Other databases use similar syntax - MySQL (with InnoDB plugin) requires you to set the isolation level before you start the transaction.
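A rough MySQL/InnoDB equivalent of transaction 2, with the isolation level set before the transaction starts:

SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
-- select data from tables, then set last_queried column
UPDATE sometable SET last_queried = NOW() WHERE id = 1;
COMMIT;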

How do I only select rows that have been committed (SQL Server 2008)

How do I select all rows from a table that aren't part of a transaction that hasn't committed yet?
Example:
Let's say,
Table T has 10 rows.
User A is doing a transaction with some queries:
INSERT INTO T (...)
SELECT ...
FROM T
-- doing other queries
Now, here comes the tricky part:
What if User B, in the time between User A inserting the row and committing the transaction, refreshes a list in the system with a SELECT on table T?
I want User B's SELECT to return only the 10 committed rows (all rows in the table that can't later be rolled back). How do I do this, if it's even possible?
I have tried setting the isolation level on the transaction and adding WITH(NOLOCK) or WITH(READUNCOMMITTED) to the query, without any luck.
The query either returns all 11 records or waits for the transaction to commit, and neither is what I need.
Any tips are much appreciated, thanks.
You need to use (default) read committed isolation level and the READPAST hint to skip rows locked as they are not committed (rather than being blocked waiting for the locks to be released)
This does rely on the INSERT taking out rowlocks though. If it takes out page locks you will be back to being blocked. Example follows
Connection 1
IF OBJECT_ID('test_readpast') IS NULL
BEGIN
    CREATE TABLE test_readpast(i INT PRIMARY KEY CLUSTERED)
    INSERT INTO test_readpast VALUES (1)
END
BEGIN TRAN
    INSERT INTO test_readpast
    WITH(ROWLOCK)
    --WITH(PAGLOCK)
    VALUES (2)
    SELECT * FROM sys.dm_tran_locks WHERE request_session_id=@@SPID
    WAITFOR DELAY '00:01';
ROLLBACK
Connection 2
SELECT i
FROM test_readpast WITH (READPAST)
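Run while Connection 1 is still inside its WAITFOR, this returns only the committed row (1), skipping the locked row instead of blocking on it.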
Snapshot isolation?
Either I have misread/misinterpreted your question, or the three people who answered earlier have, so I have given a link so you can determine for yourself.
Actually, read uncommitted and nolock are the same. They mean you get to see rows that have not been committed yet.
If you run at the default isolation level, read committed, you will not see new rows that have not been committed. This should work by default, but if you want to be sure, prefix your select with set transaction isolation level read committed.

SQL Server Race Condition Question

(Note: this is for MS SQL Server)
Say you have a table ABC with a primary key identity column, and a CODE column. We want every row in here to have a unique, sequentially-generated code (based on some typical check-digit formula).
Say you have another table DEF with only one row, which stores the next available CODE (imagine a simple autonumber).
I know logic like below would present a race condition, in which two users could end up with the same CODE:
1) Run a select query to grab next available code from DEF
2) Insert said code into table ABC
3) Increment the value in DEF so it's not re-used.
I know that two users could get stuck at step 1) and end up with the same CODE in the ABC table.
What is the best way to deal with this situation? I thought I could just wrap a "begin tran" / "commit tran" around this logic, but I don't think that worked. I had a stored procedure like this to test, but I didn't avoid the race condition when I ran it from two different query windows:
begin tran
declare @x int
select @x = nextcode FROM def
waitfor delay '00:00:15'
update def set nextcode = nextcode + 1
select @x
commit tran
Can someone shed some light on this? I thought the transaction would prevent another user from being able to access my NextCodeTable until the first transaction completed, but I guess my understanding of transactions is flawed.
EDIT: I tried moving the wait to after the "update" statement, and I got two different codes... but I suspected that. I have the waitfor statement there to simulate a delay so the race condition can be easily seen. I think the key problem is my incorrect perception of how transactions work.
Set the Transaction Isolation Level to Serializable.
At lower isolation levels, other transactions can read the data in a row that has been read (but not yet modified) by this transaction. So two transactions can indeed read the same value. At very low isolation (Read Uncommitted), other transactions can even read data after it's been modified (but before it's committed)...
Review details about SQL Server Isolation Levels here
So the bottom line is that the isolation level is the critical piece here, controlling what level of access other transactions get into this one.
NOTE. From the link, about Serializable
Statements cannot read data that has been modified but not yet committed by other transactions.
This is because the locks are placed when the row is modified, not when BEGIN TRAN occurs, so what you have done may still allow another transaction to read the old value up until the point where you modify it. I would therefore change the logic to modify the value in the same statement that reads it, placing the lock on it at the same time:
begin tran
declare @x int
update def set @x = nextcode, nextcode += 1
waitfor delay '00:00:15'
select @x
commit tran
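If you repeat the two-window test with this version, the second window blocks at the UPDATE on the first window's lock and prints a different code once the first commits.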
As other responders have mentioned, you can set the transaction isolation level to ensure that anything you 'read' using a SELECT statement cannot change within a transaction.
Alternatively, you could take out a lock specifically on the DEF table by adding the syntax WITH HOLDLOCK after the table name, e.g.,
SELECT nextcode FROM DEF WITH HOLDLOCK
It doesn't make much difference here, as your transaction is small, but it can be useful to take out locks for some SELECTs and not others within a transaction. It's a question of 'repeatability versus concurrency'.
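Note that HOLDLOCK by itself takes only a shared lock, so two sessions can both acquire it and then deadlock when each tries to update; a common variant is to request an update lock up front:

SELECT nextcode FROM DEF WITH (UPDLOCK, HOLDLOCK)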
A couple of relevant MS-SQL docs.
Isolation levels
Table hints
Late answer. You want to avoid a race condition...
"SQL Server Process Queue Race Condition"
Recap:
You began a transaction. This doesn't actually "do" anything in and of itself; it modifies subsequent behavior.
You read data from a table. The default isolation level is Read Committed, so the shared lock taken by this SELECT is released as soon as the read completes rather than being held for the rest of the transaction.
You then wait 15 seconds
You then issue an update. Within the declared transaction, this takes a lock that is held until the transaction is committed.
You then commit the transaction, releasing the lock.
So, guessing you ran this simultaneously in two windows (A and B):
A read the "next" value from table def, then went into wait mode
B read the same "next" value from the table, then went into wait mode. (Since A had only done a read, its transaction had not locked anything.)
A then updated the table, and probably committed the change before B exited the wait state.
B then updated the table, after A's write was committed.
Try putting the wait statement after the update, before the commit, and see what happens.
It's not a real race condition; it's more a common problem with concurrent transactions. One solution is to take a read lock on the table and thereby serialize access.
This is actually a common problem in SQL databases, and that is why most (all?) of them have built-in features to take care of obtaining a unique identifier. Here are some things to look into if you are using MySQL or Postgres. If you are using a different database, I bet it provides something very similar.
A good example of this is postgres sequences which you can check out here:
Postgres Sequences
MySQL uses something called auto-increment.
MySQL auto increment
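For illustration, rough equivalents of each (table and sequence names are hypothetical):

-- PostgreSQL: a sequence hands out unique values atomically
CREATE SEQUENCE code_seq;
INSERT INTO abc (code) VALUES (nextval('code_seq'));

-- MySQL: an AUTO_INCREMENT column does the same per table
CREATE TABLE abc (id INT AUTO_INCREMENT PRIMARY KEY, code VARCHAR(10));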
You can set the column to a computed value that is persisted. This will take care of the race condition.
Persisted Computed Columns
NOTE
Using this method means you do not need to store the next code in a table. The code column becomes the reference point.
Implementation
Give the column the following properties under computed column specification.
Formula = dbo.GetNextCode()
Is Persisted = Yes
Create Function dbo.GetNextCode()
Returns VarChar(10)
As
Begin
    Declare @Return VarChar(10);
    Declare @MaxId Int;

    Select @MaxId = Max(Id)
    From Table;

    Select @Return = Code
    From Table
    Where Id = @MaxId;

    /* Generate New Code ... */

    Return @Return;
End
End