If an UPDATE ... WHERE affects no rows, are any locks created?

If I run a SQL statement like
UPDATE table
SET col = value
WHERE X=Y
and no rows match (therefore no rows are changed), are any locks created by the update?
The DBMS is Sybase + SQL Server

You can play with this script and see for yourself that sometimes locks are acquired and held even when no rows are updated:
CREATE TABLE dbo.Test
(
    i INT NOT NULL PRIMARY KEY,
    j INT NULL
);
GO
INSERT dbo.Test ( i, j )
VALUES ( 1, 2 );
GO
SELECT @@SPID;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
UPDATE dbo.Test
SET j = 3
WHERE i = 3;
SELECT *
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;
COMMIT;
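Because SERIALIZABLE must guarantee that no row with i = 3 can appear before the transaction ends, you should see key-range locks in the sys.dm_tran_locks output even though no row matched; the exact lock modes vary by version.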

If column X is indexed, there will probably be a shared lock on that index while your UPDATE checks it for matching records.
There should not be any row locks, but all locking behavior is contingent on your isolation settings.

If an UPDATE statement affects no rows, an intent exclusive lock is still taken for it while the transaction is open: the rows to be affected must first be located before they can be updated, and even though no rows end up qualifying, that intent lock is held on the table, in exclusive-intent mode, for the duration of the transaction.
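For comparison, here is a minimal sketch of the same probe under the default READ COMMITTED level, reusing the dbo.Test table from the script above; depending on the version and the plan chosen, you may find that only the intent lock, or nothing at all, is still held once the statement completes:

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRANSACTION;

-- matches no rows, as in the original script
UPDATE dbo.Test SET j = 3 WHERE i = 3;

-- inspect what, if anything, this session still holds
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;

COMMIT;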

Related

Modify two tables (insert or update) based on existence of a row in the first table

I have a simple thing to do but somehow can't figure out how to do it.
I have to modify two tables (insert or update) based on the existence of a row in the first table.
There is a possibility that some other process will insert the row with id = 1
between getting the flag value and the "if" statement that examines its value.
The catch is: I have to change TWO tables based on the flag value.
Question: How can I ensure the atomicity of this operation?
I could lock both tables with a "select with TABLOCKX", modify them and release the lock by committing the transaction, but... won't that be overkill?
declare @flag int = 0
begin tran
select @flag = id from table1 where id = 1
if @flag = 0
begin
    insert table1(id, ...) values(1, ...)
    insert table2(id, ...) values(1, ...)
end
else
begin
    update table1 set colX = ... where id = 1
    update table2 set colX = ... where id = 1
end
commit tran
To summarize our conversation and generalize it to other cases:
If your column [id] is either a PRIMARY KEY or UNIQUE, you can put a lock on that row. No other process will be able to change the value of [id].
If not, in my opinion you have no choice but to lock the table with TABLOCKX. That will prevent any other process from UPDATEing, DELETEing or INSERTing a row.
Even with that lock, another process may still be able to SELECT from the table, depending on your isolation level.
If your database is in read_committed_snapshot, the other process would read the "old" value of the same [id].
To check your isolation level you can run
SELECT name, is_read_committed_snapshot_on FROM sys.databases
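A minimal sketch of the row-lock approach, assuming [id] is the PRIMARY KEY of table1: adding UPDLOCK and HOLDLOCK to the probing SELECT takes (and keeps until COMMIT) a key or key-range lock on id = 1, even when the row does not exist yet, so a concurrent process running the same code blocks at the SELECT instead of racing past it. The column names and values below are placeholders:

declare @flag int = 0
begin tran
-- UPDLOCK + HOLDLOCK hold a key-range lock on id = 1 until COMMIT,
-- serializing concurrent insert-or-update attempts on that key
select @flag = id from table1 with (updlock, holdlock) where id = 1
if @flag = 0
begin
    insert table1(id, colX) values(1, 0)  -- placeholder columns/values
    insert table2(id, colX) values(1, 0)
end
else
begin
    update table1 set colX = 0 where id = 1  -- placeholder values
    update table2 set colX = 0 where id = 1
end
commit tran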

Table locked after a quick INSERT and UPDATE of 1 record

I executed a few queries like the one below, which INSERTs 1 row into the table and then UPDATEs it, and I didn't get any error.
But then I found out that it locked the table, so that no one else could query it.
Do you know why the query below would lock the table?
Can I not SET vchB = vchNumber, vchC = vchNumber right after the INSERT INTO?
I read up on when/what locks are held/released in the READ COMMITTED isolation level, and it says "all locks are released only after commit/rollback".
The INSERT and UPDATE statements below committed successfully (they only INSERT and UPDATE 1 row in the table), and the batch only took a second to run, yet when other people query the table, it just hangs.
Thank you.
BEGIN TRANSACTION T1
INSERT INTO myTbl(vchSN,vchNumber,vchName)
SELECT 'AB12','1234','My Name'
UPDATE myTbl
SET vchB = vchNumber, vchC = vchNumber, vchtab = 'N' where vchSN = 'AB12'
COMMIT TRANSACTION T1
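If the table stays blocked even though a quick batch like this appears to have finished, the usual culprit is a transaction left open on the session (for example, BEGIN TRANSACTION executed twice, or the COMMIT batch never actually run). A diagnostic sketch, run from another session:

-- who holds locks in this database, and in what mode?
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID();

-- reports the oldest active transaction in the current database
DBCC OPENTRAN;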

Inserting large number of records without locking the table

I am trying to insert 1,500,000 records into a table, and I am facing table-lock issues during the insertion. So I came up with the batch insert below.
DECLARE @BatchSize INT = 50000

WHILE 1 = 1
BEGIN
    INSERT INTO [dbo].[Destination]
                (proj_details_sid,
                 period_sid,
                 sales,
                 units)
    SELECT TOP(@BatchSize) s.proj_details_sid,
           s.period_sid,
           s.sales,
           s.units
    FROM   [dbo].[SOURCE] s
    WHERE  NOT EXISTS (SELECT 1
                       FROM dbo.Destination d
                       WHERE d.proj_details_sid = s.proj_details_sid
                         AND d.period_sid = s.period_sid)

    IF @@ROWCOUNT < @BatchSize
        BREAK
END
I have a clustered index on the Destination table (proj_details_sid, period_sid). The NOT EXISTS part is only there to keep records that were already inserted from being inserted again.
Am I doing it right? Will this avoid table locks? Or is there a better way?
Note: the time taken is more or less the same with and without the batching.
Lock escalation is not likely to be related to the SELECT part of your statement at all.
It is a natural consequence of inserting a large number of rows.
Lock escalation is triggered when lock escalation is not disabled on the table by using the ALTER TABLE SET LOCK_ESCALATION option, and when either of the following conditions exists:
A single Transact-SQL statement acquires at least 5,000 locks on a single nonpartitioned table or index.
A single Transact-SQL statement acquires at least 5,000 locks on a single partition of a partitioned table and the ALTER TABLE SET LOCK_ESCALATION option is set to AUTO.
The number of locks in an instance of the Database Engine exceeds memory or configuration thresholds.
If locks cannot be escalated because of lock conflicts, the Database Engine periodically triggers lock escalation at every 1,250 new locks acquired.
You can easily see this for yourself by tracing the lock escalation event in Profiler, or simply by trying the script below with different batch sizes. For me, TOP (6228) shows 6,250 locks held, but with TOP (6229) the count suddenly plummets to 1 as lock escalation kicks in. The exact numbers may vary (depending on database settings and the resources currently available). Use trial and error to find the threshold where lock escalation appears for you.
CREATE TABLE [dbo].[Destination]
(
    proj_details_sid INT,
    period_sid INT,
    sales INT,
    units INT
)

BEGIN TRAN -- so locks are held for us to count in the next statement
INSERT INTO [dbo].[Destination]
SELECT TOP (6229) 1, 1, 1, 1
FROM master..spt_values v1,
     master..spt_values v2

SELECT COUNT(*)
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;
COMMIT

DROP TABLE [dbo].[Destination]
You are inserting 50,000 rows so almost certainly lock escalation will be attempted.
The article How to resolve blocking problems that are caused by lock escalation in SQL Server is quite old but a lot of the suggestions are still valid.
Break up large batch operations into several smaller operations (i.e. use a smaller batch size)
Lock escalation cannot occur if a different SPID is currently holding an incompatible table lock - The example they give is a different session executing
BEGIN TRAN
SELECT * FROM mytable (UPDLOCK, HOLDLOCK) WHERE 1=0
WAITFOR DELAY '1:00:00'
COMMIT TRAN
Disable lock escalation by enabling trace flag 1211 - however, this is a global setting and can cause severe issues. There is a newer option, 1224, that is less problematic, but it is still global.
Another option would be ALTER TABLE blah SET (LOCK_ESCALATION = DISABLE), but this is still not very targeted, as it affects all queries against the table, not just your single scenario here.
So I would opt for option 1 or possibly option 2 and discount the others.
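For completeness, a sketch of the ALTER TABLE route (the table name is taken from the question; remember this affects every query against the table, not just this one load):

ALTER TABLE dbo.Destination SET (LOCK_ESCALATION = DISABLE);
-- ...run the batched insert...
ALTER TABLE dbo.Destination SET (LOCK_ESCALATION = TABLE); -- TABLE is the default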
Instead of checking whether the data already exists in Destination, it seems better to store all the data in a temp table first, and then batch insert it into Destination.
Reference: Using ROWLOCK in an INSERT statement (SQL Server)
DECLARE @batch int = 100
DECLARE @curRecord int = 1
DECLARE @maxRecord int

-- remove (NOLOCK) if you don't want dirty reads
SELECT ROW_NUMBER() OVER (ORDER BY s.proj_details_sid, s.period_sid) AS rownum,
       s.proj_details_sid,
       s.period_sid,
       s.sales,
       s.units
INTO   #Temp
FROM   [dbo].[SOURCE] s WITH (NOLOCK)
WHERE  NOT EXISTS (SELECT 1
                   FROM dbo.Destination d WITH (NOLOCK)
                   WHERE d.proj_details_sid = s.proj_details_sid
                     AND d.period_sid = s.period_sid)

-- change maxRecord if you want to limit the number of records to insert
SELECT @maxRecord = COUNT(1) FROM #Temp

WHILE @maxRecord >= @curRecord
BEGIN
    INSERT INTO [dbo].[Destination]
                (proj_details_sid,
                 period_sid,
                 sales,
                 units)
    SELECT proj_details_sid, period_sid, sales, units
    FROM   #Temp
    WHERE  rownum >= @curRecord AND rownum < @curRecord + @batch

    SET @curRecord = @curRecord + @batch
END

DROP TABLE #Temp
I added (NOLOCK) to your destination table -> dbo.Destination WITH (NOLOCK).
Now you won't lock your table.
DECLARE @BatchSize INT = 50000

WHILE 1 = 1
BEGIN
    INSERT INTO [dbo].[Destination]
                (proj_details_sid,
                 period_sid,
                 sales,
                 units)
    SELECT TOP(@BatchSize) s.proj_details_sid,
           s.period_sid,
           s.sales,
           s.units
    FROM   [dbo].[SOURCE] s
    WHERE  NOT EXISTS (SELECT 1
                       FROM dbo.Destination d WITH (NOLOCK)
                       WHERE d.proj_details_sid = s.proj_details_sid
                         AND d.period_sid = s.period_sid)

    IF @@ROWCOUNT < @BatchSize
        BREAK
END
To do this you can use WITH (NOLOCK) in your SELECT statement.
But NOLOCK is not recommended on OLTP databases.

There is no row but (XLOCK,ROWLOCK) locked it?

Consider this simple table; its create statement is:
CREATE TABLE [dbo].[Test_Serializable](
    [Id] [int] NOT NULL,
    [Name] [nvarchar](50) NOT NULL
)
So there is no primary key or index.
Consider that the table is empty, with no rows at all. I want to insert the row (1, 'nima'), but first I want to check whether a row with Id = 1 already exists: if yes, call RAISERROR; if no, insert the row. I wrote this script:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRY
    BEGIN TRAN ins
    IF EXISTS(SELECT * FROM Test_Serializable ts WITH (XLOCK, ROWLOCK) WHERE ts.Id = 1)
        RAISERROR(N'Row Exists', 16, 1);
    INSERT INTO Test_Serializable
    (
        Id,
        [Name]
    )
    VALUES
    (
        1,
        'nima'
    )
    COMMIT TRAN ins
END TRY
BEGIN CATCH
    DECLARE @a NVARCHAR(1000);
    SET @a = ERROR_MESSAGE();
    ROLLBACK TRAN ins
    RAISERROR(@a, 16, 1);
END CATCH
This script works fine, but there is an interesting point.
I ran this script from 2 SSMS windows, stepping through both in debug mode. The interesting point is that although my table has no rows, whichever session reaches the IF EXISTS statement first locks the table.
My question is: does (XLOCK, ROWLOCK) lock the entire table because there is no row? Or does it lock the phantom row :) !!?
Edit 1)
This is my scenario:
I have a table with, for example, 6 fields.
These are the uniqueness rules:
1) City_Code + F1_Code are unique
2) City_Code + F2_Code are unique
3) City_Code + F3_Code + F4_Code are unique
The problem is that the user may want to fill in only City_Code and F1_Code, and in that case the other fields must hold an empty string or 0 (for numeric fields).
If the user wants to fill in City_Code + F3_Code + F4_Code, then F1_Code and F2_Code must hold empty-string values.
How can I check this better? I can't create a unique index for every rule.
To answer your question, the SERIALIZABLE isolation level takes range locks, which cover nonexistent rows within the range.
http://msdn.microsoft.com/en-us/library/ms191272.aspx
Key-range locking ensures that the following operations are
serializable:
Range scan query
Singleton fetch of nonexistent row
Delete operation
Insert operation
XLOCK is an exclusive lock: as the WHERE clause traverses rows, those rows are locked.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE isn't about duplicates or locking of rows; it simply removes the chance of "phantom reads". From a locking perspective, it takes range locks (e.g. all rows between A and B).
So with XLOCK and SERIALIZABLE you lock the table. You want UPDLOCK, which isn't exclusive.
With UPDLOCK, however, this pattern is still not safe. Under high load you will still get duplicate errors, because two concurrent EXISTS checks won't find a row, both sessions try to INSERT, and one gets a duplicate error.
So just try to INSERT and trap the error:
BEGIN TRY
    BEGIN TRAN ins
    INSERT INTO Test_Serializable
    (
        Id,
        [Name]
    )
    VALUES
    (
        1,
        'nima'
    )
    COMMIT TRAN ins
END TRY
BEGIN CATCH
    DECLARE @a NVARCHAR(1000);
    IF ERROR_NUMBER() = 2627
        RAISERROR(N'Row Exists', 16, 1);
    ELSE
    BEGIN
        SET @a = ERROR_MESSAGE();
        RAISERROR(@a, 16, 1);
    END
    ROLLBACK TRAN ins
END CATCH
I've mentioned this before
Edit: to enforce the various uniqueness rules on SQL Server 2008, use filtered indexes (MyTable stands in for your table name):
CREATE UNIQUE NONCLUSTERED INDEX IX_UniqueF1 ON MyTable (City_Code, F1_Code)
WHERE F2_Code = '' AND F3_Code = '' AND F4_Code = 0;

CREATE UNIQUE NONCLUSTERED INDEX IX_UniqueF2 ON MyTable (City_Code, F2_Code)
WHERE F1_Code = '' AND F3_Code = '' AND F4_Code = 0;

CREATE UNIQUE NONCLUSTERED INDEX IX_UniqueF3F4 ON MyTable (City_Code, F3_Code, F4_Code)
WHERE F1_Code = '' AND F2_Code = '';
You can do the same with indexed views on earlier versions
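For those earlier versions, a sketch of the indexed-view equivalent of the first filtered index (the view name and MyTable are placeholders):

CREATE VIEW dbo.v_UniqueF1
WITH SCHEMABINDING
AS
SELECT City_Code, F1_Code
FROM dbo.MyTable
WHERE F2_Code = '' AND F3_Code = '' AND F4_Code = 0;
GO
-- the unique clustered index enforces rule 1 on the filtered subset
CREATE UNIQUE CLUSTERED INDEX IX_v_UniqueF1
ON dbo.v_UniqueF1 (City_Code, F1_Code);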

In MS SQL Server, is there a way to "atomically" increment a column being used as a counter?

Assuming a Read Committed Snapshot transaction isolation setting, is the following statement "atomic" in the sense that you won't ever "lose" a concurrent increment?
update mytable set counter = counter + 1
I would assume that in the general case, where this update statement is part of a larger transaction, that it wouldn't be. For example, I think this scenario is possible:
1. update the counter within transaction #1
2. do some other stuff in transaction #1
3. update the counter with transaction #2
4. commit transaction #2
5. commit transaction #1
In this situation, wouldn't the counter end up only being incremented by 1? Does it make a difference if that is the only statement in a transaction?
How does a site like stackoverflow handle this for its question view counter? Or is the possibility of "losing" some increments just considered acceptable?
According to the MSSQL Help, you could do it like this:
UPDATE tablename SET counterfield = counterfield + 1 OUTPUT INSERTED.counterfield
This will update the field by one, and return the updated value as a SQL recordset.
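If you want the post-increment value in a variable rather than as a resultset, OUTPUT ... INTO works too (a sketch; tablename and counterfield are the names from the answer above):

DECLARE @NewValue TABLE (counterfield int);

-- the increment and the capture of the new value happen in one atomic statement
UPDATE tablename
SET counterfield = counterfield + 1
OUTPUT INSERTED.counterfield INTO @NewValue;

SELECT counterfield FROM @NewValue;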
Read Committed Snapshot only deals with locks on selecting data from tables.
In t1 and t2 however, you're UPDATEing the data, which is a different scenario.
When you UPDATE the counter you escalate to a write lock (on the row), preventing the other update from occurring. t2 could read, but t2 will block on its UPDATE until t1 is done, and t2 won't be able to commit before t1 (which is contrary to your timeline). Only one of the transactions will get to update the counter, therefore both will update the counter correctly given the code presented. (tested)
counter = 0
t1 update counter (counter => 1)
t2 update counter (blocked)
t1 commit (counter = 1)
t2 unblocked (can now update counter) (counter => 2)
t2 commit
Read Committed just means you can only read committed values, but it doesn't give you Repeatable Reads. Thus, if you read and depend on the counter variable, and intend to update it later, you might be running the transactions at the wrong isolation level.
You can either use a repeatable read lock, or, if you will only sometimes update the counter, you can do it yourself using an optimistic locking technique, e.g. a timestamp column on the counter table, or a conditional update.
DECLARE @CounterInitialValue INT

BEGIN TRAN

SELECT @CounterInitialValue = counter FROM MyTable WHERE MyID = 1234

-- do stuff with the counter value

UPDATE MyTable
SET counter = counter + 1
WHERE MyID = 1234
  AND counter = @CounterInitialValue -- prevents the update if counter changed

-- the value of counter must not change in this scenario,
-- so we roll back if the update affected no rows
IF ( @@ROWCOUNT = 0 )
    ROLLBACK
ELSE
    COMMIT
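In practice you would wrap this read-modify-write in a retry loop: when @@ROWCOUNT comes back as 0 because another writer got there first, re-read the counter and try the conditional update again.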
This devx article is informative, although it talks about the features while they were still in beta, so it may not be completely accurate.
update: As Justice indicates, if t2 is a nested transaction in t1, the semantics are different. Again, both would update counter correctly (+2) because from t2's perspective inside t1, counter was already updated once. The nested t2 has no access to what counter was before t1 updated it.
counter = 0
t1 update counter (counter => 1)
t2 update counter (nested transaction) (counter => 2)
t2 commit
t1 commit (counter = 2)
With a nested transaction, if t1 issues a ROLLBACK after t2's COMMIT, counter returns to its original value, because the rollback also undoes t2's commit.
No, it's not. The value is read in shared mode and then updated in exclusive mode, so multiple reads can occur.
Either use Serializable level or use something like
update t
set counter = counter+1
from t with(updlock, <some other hints maybe>)
where foo = bar
There is at heart only one transaction, the outermost one. The inner transactions are more like checkpoints within a transaction. Isolation levels affect only sibling outermost transactions, not parent/child related transactions.
The counter will be incremented by two. The following yields one row with a value of (Num = 3). (I opened up SSMS and pointed it at a local SQL Server 2008 Express instance. I have a database named Playground for testing stuff.)
use Playground
drop table C
create table C (
Num int not null)
insert into C (Num) values (1)
begin tran X
update C set Num = Num + 1
begin tran Y
update C set Num = Num + 1
commit tran Y
commit tran X
select * from C
I used this SP to handle the case where a name does not have a counter row yet:
ALTER PROCEDURE [dbo].[GetNext]
(
    @name varchar(50)
)
AS
BEGIN
    SET NOCOUNT ON

    -- one atomic statement: increment the counter if the name exists,
    -- otherwise seed it at 1; either way return the new value
    MERGE TOP (1) dbo.Counter AS Target
    USING (SELECT 1 AS C, @name AS name) AS Source
    ON Target.name = Source.name
    WHEN MATCHED THEN
        UPDATE SET Target.[current] = Target.[current] + 1
    WHEN NOT MATCHED THEN
        INSERT (name, [current]) VALUES (@name, 1)
    OUTPUT INSERTED.[current];
END
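A usage sketch (the counter name 'invoice' is made up): the first call seeds the row at 1, and every later call returns the next value:

EXEC dbo.GetNext @name = 'invoice';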