There is no row but (XLOCK, ROWLOCK) locked it?

Consider this simple table. The create statement is:
CREATE TABLE [dbo].[Test_Serializable](
[Id] [int] NOT NULL,
[Name] [nvarchar](50) NOT NULL
)
So there is no primary key or index.
Assume the table is empty. I want to insert the row (1, 'nima'), but first I want to check whether a row with Id = 1 already exists. If it does, call RAISERROR; if not, insert the row. I wrote this script:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRY
BEGIN TRAN ins
IF EXISTS(SELECT * FROM Test_Serializable ts WITH(xlock,ROWLOCK) WHERE ts.Id=1)
RAISERROR(N'Row Exists',16,1);
INSERT INTO Test_Serializable
(
Id,
[Name]
)
VALUES
(
1,
'nima'
)
COMMIT TRAN ins
END TRY
BEGIN CATCH
DECLARE @a NVARCHAR(1000);
SET @a=ERROR_MESSAGE();
ROLLBACK TRAN ins
RAISERROR(@a,16,1);
END CATCH
This script works fine, but there is an interesting point.
I ran the script from two SSMS windows, stepping through both in debug mode. The interesting point: although the table has no rows, the first script to reach the IF EXISTS statement locks the table.
My question is: does (XLOCK, ROWLOCK) lock the entire table because there is no row, or does it lock the phantom row? :)
Edit 1)
This is my scenario:
I have a table with, for example, 6 fields.
These are the uniqueness rules:
1) City_Code + F1_Code must be unique
2) City_Code + F2_Code must be unique
3) City_Code + F3_Code + F4_Code must be unique
The problem is that a user may fill in only City_Code and F1_Code; when the row is inserted, the other fields must hold an empty string (or 0 for numeric fields).
If the user fills in City_Code + F3_Code + F4_Code, then F1_Code and F2_Code must hold empty strings.
How can I check this better? I can't create a plain unique index for each rule.

To answer your question: the SERIALIZABLE isolation level takes key-range locks, which cover nonexistent rows within the range.
http://msdn.microsoft.com/en-us/library/ms191272.aspx
Key-range locking ensures that the following operations are
serializable:
Range scan query
Singleton fetch of nonexistent row
Delete operation
Insert operation
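You can verify this with a quick experiment. A minimal sketch (my code, not from the question): open a transaction, run the probing SELECT, and inspect sys.dm_tran_locks. Because Test_Serializable is a heap with no index, there is no key range to lock, so the serializable scan locks the whole table; add a primary key and rerun, and the table lock is replaced by a key-range lock covering where Id = 1 would live.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
SELECT * FROM Test_Serializable WITH (XLOCK, ROWLOCK) WHERE Id = 1
-- Inspect what this session now holds
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID
-- Heap, no index: expect an exclusive OBJECT lock (the whole table)
-- With a primary key: expect a key-range KEY lock (e.g. RangeX-X) instead
ROLLBACK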

XLOCK is an exclusive lock: as the WHERE clause traverses rows, those rows are locked.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE isn't about duplicates or row locking as such; it simply removes the chance of "phantom reads". From a locking perspective, it takes range locks (e.g. all rows between A and B).
So with XLOCK and SERIALIZABLE you lock the table. You want UPDLOCK, which isn't exclusive.
With UPDLOCK, though, this pattern is not safe. Under high load you will still get duplicate errors, because two concurrent EXISTS checks won't find a row, both sessions try to INSERT, and one gets a duplicate error.
So just try to INSERT and trap the error:
BEGIN TRY
BEGIN TRAN ins
INSERT INTO Test_Serializable
(
Id,
[Name]
)
VALUES
(
1,
'nima'
)
COMMIT TRAN ins
END TRY
BEGIN CATCH
DECLARE @a NVARCHAR(1000);
IF ERROR_NUMBER() = 2627
RAISERROR(N'Row Exists',16,1);
ELSE
BEGIN
SET @a=ERROR_MESSAGE();
RAISERROR(@a,16,1);
END
ROLLBACK TRAN ins
END CATCH
I've mentioned this before
Edit: to enforce the various uniqueness rules on SQL Server 2008, use filtered indexes:
CREATE UNIQUE NONCLUSTERED INDEX IX_UniqueF1 ON dbo.YourTable (City_Code, F1_Code)
WHERE F2_Code = '' AND F3_Code = '' AND F4_Code = 0;
CREATE UNIQUE NONCLUSTERED INDEX IX_UniqueF2 ON dbo.YourTable (City_Code, F2_Code)
WHERE F1_Code = '' AND F3_Code = '' AND F4_Code = 0;
CREATE UNIQUE NONCLUSTERED INDEX IX_UniqueF3F4 ON dbo.YourTable (City_Code, F3_Code, F4_Code)
WHERE F1_Code = '' AND F2_Code = '';
You can do the same with indexed views on earlier versions
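For example, a hedged sketch of the indexed-view equivalent of the first rule (dbo.YourTable is a placeholder, as above):
CREATE VIEW dbo.vUniqueF1
WITH SCHEMABINDING
AS
SELECT City_Code, F1_Code
FROM dbo.YourTable
WHERE F2_Code = '' AND F3_Code = '' AND F4_Code = 0
GO
-- The unique clustered index on the view is what enforces the rule
CREATE UNIQUE CLUSTERED INDEX IX_vUniqueF1 ON dbo.vUniqueF1 (City_Code, F1_Code)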

Related

Modify two tables (insert or update) based on existence of a row in the first table

I have a simple thing to do but somehow can't figure out how to do it.
I have to modify two tables (insert or update) based on the existence of a row in the first table.
There is a possibility that some other process will insert the row with id = 1
between getting the flag value and the "if" statement that examines its value.
The catch is - I have to change TWO tables based on the flag value.
Question: How can I ensure the atomicity of this operation?
I could lock both tables by "select with TABLOCKX", modify them and release the lock by committing the transaction but ... won't it be overkill?
declare @flag int = 0
begin tran
select @flag = id from table1 where id = 1
if @flag = 0
begin
insert table1(id, ...) values(1, ...)
insert table2(id, ...) values(1, ...)
end
else
begin
update table1 set colX = ... where id = 1
update table2 set colX = ... where id = 1
end
commit tran
To summarize our conversation and generalize to other cases:
If your column [id] is either PRIMARY KEY or UNIQUE, you can put a lock on that row. No other process will be able to change the value of [id].
If not, in my opinion you won't have any choice other than locking the table with TABLOCKX. It will prevent any other process from UPDATEing, DELETEing or INSERTing a row.
Even with that lock, another process may still be able to SELECT from the table, depending on your isolation level.
If your database is in read_committed_snapshot, the other process would read the "old" value of the same [id].
To check your isolation level you can run
SELECT name, is_read_committed_snapshot_on FROM sys.databases
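If [id] is a PRIMARY KEY or UNIQUE as above, here is a hedged sketch of the original pattern made safe with lock hints (the hints are the only change; UPDLOCK + HOLDLOCK take and hold a key or range lock on id = 1 until commit, so no other process can sneak in between the check and the insert):
declare @flag int = 0
begin tran
select @flag = id from table1 with (updlock, holdlock) where id = 1
if @flag = 0
begin
insert table1(id, ...) values(1, ...)
insert table2(id, ...) values(1, ...)
end
else
begin
update table1 set colX = ... where id = 1
update table2 set colX = ... where id = 1
end
commit tran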

TSQL implementing double check locking

I have an arbitrary stored procedure usp_DoubleCheckLockInsert that does an INSERT for multiple clients and I want to give the stored procedure exclusive access to writing to a table SomeTable when it is within the critical section Begin lock and End lock.
CREATE PROCEDURE usp_DoubleCheckLockInsert
@Id INT
,@SomeValue INT
AS
BEGIN
IF (EXISTS(SELECT 1 FROM SomeTable WHERE Id = @Id AND SomeValue = @SomeValue)) RETURN
BEGIN TRAN
--Begin lock
IF (EXISTS(SELECT 1 FROM SomeTable WHERE Id = @Id AND SomeValue = @SomeValue)) ROLLBACK
INSERT INTO SomeTable(Id, SomeValue)
VALUES(@Id,@SomeValue);
--End lock
COMMIT
END
I have seen how isolation level relates to updates, but is there a way to implement locking in the critical section and give the transaction the write lock, or does TSQL not work this way?
Obtain Update Table Lock at start of Stored Procedure in SQL Server
A second approach which works for me is to combine the INSERT and the SELECT into a single operation.
This index is needed only for querying SomeTable efficiently. Note that there is NO uniqueness constraint. However, if I were taking this approach, I would actually make the index unique.
CREATE INDEX [IX_SomeTable_Id_SomeValue_IsDelete] ON [dbo].[SomeTable]
(
[Id] ASC,
[SomeValue] ASC,
[IsDelete] ASC
)
The stored proc, which combines the INSERT/ SELECT operations:
CREATE PROCEDURE [dbo].[usp_DoubleCheckLockInsert]
@Id INT
,@SomeValue INT
,@IsDelete bit
AS
BEGIN
-- Don't allow dirty reads
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
BEGIN TRAN
-- insert only if data not existing
INSERT INTO dbo.SomeTable(Id, SomeValue, IsDelete)
SELECT @Id, @SomeValue, @IsDelete
WHERE NOT EXISTS (
SELECT * FROM dbo.SomeTable WITH (HOLDLOCK, UPDLOCK)
WHERE Id = @Id
AND SomeValue = @SomeValue
AND IsDelete = @IsDelete)
COMMIT
END
I did try this approach using multiple processes to insert data. (I admit though that I didn't exactly put a lot of stress on SQL Server). There were never any duplicates or failed inserts.
It seems all you are trying to do is to prevent duplicate rows from being inserted. You can do this by adding a unique index, with the option IGNORE_DUP_KEY = ON:
CREATE UNIQUE INDEX [IX_SomeTable_Id_SomeValue_IsDelete]
ON [dbo].[SomeTable]
(
[Id] ASC,
[SomeValue] ASC,
[IsDelete] ASC
) WITH (IGNORE_DUP_KEY = ON)
Any inserts with duplicate keys will be ignored by SQL Server. Running the following:
INSERT INTO [dbo].[SomeTable] ([Id],[SomeValue],[IsDelete])
VALUES(0,0,0)
INSERT INTO [dbo].[SomeTable] ([Id],[SomeValue],[IsDelete])
VALUES(1,1,0)
INSERT INTO [dbo].[SomeTable] ([Id],[SomeValue],[IsDelete])
VALUES(2,2,0)
INSERT INTO [dbo].[SomeTable] ([Id],[SomeValue],[IsDelete])
VALUES(0,0,0)
Results in:
(1 row(s) affected)
(1 row(s) affected)
(1 row(s) affected)
Duplicate key was ignored.
(0 row(s) affected)
I did not test the above using multiple processes (threads), but the results in that case should be the same - SQL Server should still ignore any duplicates, no matter which thread is attempting the insert.
See also Index Options at MSDN.
I think I may not understand the question but why couldn't you do this:
begin tran
if ( not exists ( select 1 from SomeTable where Id = @ID and SomeValue = @SomeValue ) )
insert into SomeTable ( Id, SomeValue ) values ( @ID, @SomeValue )
commit
Yes, you have a transaction every time you do this, but as long as you are fast that shouldn't be a problem.
I have a feeling I'm not understanding the question.
Jeff.
As soon as you start overriding SQL Server's preferred lock management, you take the burden on yourself. But if you're certain this is what you need, update your sproc to select into a test variable and replace your EXISTS check with that variable. When you query the variable, use an exclusive table lock, and the table is yours until you're done.
CREATE PROCEDURE usp_DoubleCheckLockInsert
@Id INT
,@SomeValue INT
AS
BEGIN
IF (EXISTS(SELECT 1 FROM SomeTable WHERE Id = @Id AND SomeValue = @SomeValue)) RETURN
BEGIN TRAN
--Begin lock
DECLARE @tId as INT
-- You already checked and the record doesn't exist, so lock the table
SELECT @tId = Id
FROM SomeTable WITH (TABLOCKX)
WHERE Id = @Id AND SomeValue = @SomeValue
IF @tId IS NULL
BEGIN
-- no one snuck in between first and second checks, so commit
INSERT INTO SomeTable(Id, SomeValue)
VALUES(@Id,@SomeValue);
--End lock
COMMIT
END
ELSE
-- someone snuck in after the first check; release the lock
ROLLBACK
END
If you execute this as a query but don't hit the COMMIT, then try selecting from the table from a different connection: you will sit and wait until the COMMIT runs.
Romoku, the answers you're getting are basically right, except
that you don't even need BEGIN TRAN.
you don't need to worry about isolation levels.
All you need is a simple insert ... select ... where not exists (select ...) as suggested by Jeff B and Chue X.
Your concerns about concurrency ("I'm talking about concurrency and your answer will not work.") reveal a profound misunderstanding of how SQL works.
SQL INSERT is atomic. You don't have to lock the table; that's what the DBMS does for you.
Instead of offering a bounty for misbegotten questions based on erroneous preconceived notions -- and then summarily dismissing right answers as wrong -- I recommend sitting down with a good book. On SQL. I can suggest some titles if you like.
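For the record, a minimal sketch of that single-statement form, using the names from this question (one atomic statement, no explicit transaction):
INSERT INTO SomeTable (Id, SomeValue)
SELECT @Id, @SomeValue
WHERE NOT EXISTS (SELECT 1 FROM SomeTable WHERE Id = @Id AND SomeValue = @SomeValue)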

Why can't I insert/update data without locking the entire table in SQL Server 2005?

I am trying to insert/update rows in a SQL Server table (depending on whether it exists or not). I am executing the SQL from multiple threads on multiple machines and I want to avoid getting duplicate key errors.
I have found many solutions online but all of them are causing transaction deadlocks. This is the general pattern I have been using:
BEGIN TRANSACTION
UPDATE TestTable WITH (UPDLOCK, SERIALIZABLE)
SET Data = @Data
WHERE Key = @Key
IF (@@ROWCOUNT = 0)
BEGIN
INSERT INTO TestTable (Key, Data)
VALUES (@Key, @Data)
END
COMMIT TRANSACTION
I have tried:
WITH XLOCK instead of UPDLOCK
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE at the beginning with UPDLOCK
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE and no table hints
I have also tried the following pattern with all the combinations above:
BEGIN TRANSACTION
IF EXISTS (SELECT 1 FROM TestTable WITH (UPDLOCK, SERIALIZABLE) WHERE Key = @Key)
BEGIN
UPDATE TestTable
SET Data = @Data
WHERE Key = @Key
END
ELSE
BEGIN
INSERT INTO TestTable (Key, Data)
VALUES (@Key, @Data)
END
COMMIT TRANSACTION
The only way I can get it to work without deadlocks is to use WITH (TABLOCKX).
I am using SQL Server 2005, the SQL is generated at runtime and so it is not in a stored procedure and some of the tables use composite keys rather than primary keys but I can reproduce it on a table with an integer primary key.
The server logs look like this:
waiter id=processe35978 mode=RangeS-U requestType=wait
waiter-list
owner id=process2ae346b8 mode=RangeS-U
owner-list
keylock hobtid=72057594039566336 dbid=28 objectname=TestDb.dbo.TestTable indexname=PK_TestTable id=lock4f4fb980 mode=RangeS-U associatedObjectId=72057594039566336
waiter id=process2ae346b8 mode=RangeS-U requestType=wait
waiter-list
owner id=processe35978 mode=RangeS-U
owner-list
keylock hobtid=72057594039566336 dbid=28 objectname=TestDb.dbo.TestTable indexname=PK_TestTable id=lock2e8cbc00 mode=RangeS-U associatedObjectId=72057594039566336
The mode is obviously different depending on the table hint used (but the processes are always waiting for the mode they already own). I have seen RangeS-U, RangeX-X and U.
What am I doing wrong?
How about doing your insert first, with a join on the table to check for its existence:
BEGIN TRANSACTION
WITH ToInsert AS(
SELECT @Key AS Key, @Data AS Data
)
INSERT INTO TestTable (Key, Data)
SELECT ti.Key, ti.Data
FROM ToInsert ti
LEFT OUTER JOIN TestTable t
ON t.Key = ti.Key
WHERE t.Key IS NULL
IF (@@ROWCOUNT = 0)
BEGIN
UPDATE TestTable WITH (UPDLOCK, SERIALIZABLE)
SET Data = @Data
WHERE Key = @Key
END
COMMIT TRANSACTION
That way your UPDATE statement is assured there always is a record present and your INSERT and INSERT-check is in the same atomic statement instead of being two separate statements.
I looked at this again today and found that I had been a bit of a numpty. I was actually running:
BEGIN TRANSACTION
IF EXISTS (SELECT 1 FROM TestTable WITH (UPDLOCK, SERIALIZABLE) WHERE Key = @Key)
BEGIN
UPDATE TestTable
SET Data = @Data, Key = @Key -- This is the problem
WHERE Key = @Key
END
ELSE
BEGIN
INSERT INTO TestTable (Key, Data)
VALUES (@Key, @Data)
END
COMMIT TRANSACTION
I was locking the key myself. Duh!
Your deadlock is on the index resource.
In the execution plan look for bookmark/key lookups and create a non-clustered index covering those fields - that way the 'read' of the data for the UPDATE will not clash with the 'write' of the INSERT.
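For example, a hedged sketch for the TestTable in this question (the index name is made up):
-- Covers the UPDATE's read path so it doesn't collide with the INSERT's writes
CREATE NONCLUSTERED INDEX IX_TestTable_Key_Data
ON dbo.TestTable ([Key])
INCLUDE (Data)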

updlock vs for update cursor

I need to update a column of all rows of a table and I need to use UPDLOCK to do it.
For example:
UPDATE table (UPDLOCK)
SET column_name = '123'
Another alternative is to use a FOR UPDATE cursor and update each row. The advantage of the second approach is that the lock is not held until the end of the transaction, so concurrent updates of the same rows can happen sooner. At the same time, update cursors are said to have bad performance. Which is the better approach?
EDIT:
Assume the column is updated with a value that is derived from another column in the table. In other words, column_name = f(column_name_1)
You cannot give an UPDLOCK hint to a write operation like an UPDATE statement. It will be ignored, since all writes (INSERT/UPDATE/DELETE) take the same lock: an exclusive lock on the row being modified. You can quickly validate this yourself:
create table heap (a int);
go
insert into heap (a) values (1)
go
begin transaction
update heap
--with (UPDLOCK)
set a=2
select * from sys.dm_tran_locks
rollback
If you remove the -- comment from the with (UPDLOCK) line, you'll see that you get exactly the same locks (an X lock on the physical row). You can do the same experiment with a B-Tree instead of a heap:
create table btree (a int not null identity(1,1) primary key, b int)
go
insert into btree (b) values (1)
go
begin transaction
update btree
--with (UPDLOCK)
set b=2
select * from sys.dm_tran_locks
rollback
Again, the locks acquired will be identical with or w/o the hint (an exclusive lock on the row key).
Now back to your question: can this whole-table update be done in batches? (Since this is basically what you're asking.) Yes, if the table has a primary key (to be precise, what's required is a unique index to batch on, preferably the clustered index, to avoid tipping-point issues). Here is an example how:
create table btree (id int not null identity(1,1) primary key, b int, c int);
go
set nocount on;
insert into btree (b) values (rand()*1000);
go 1000
declare @id int = null, @rc int;
declare @inserted table (id int);
begin transaction;
-- first batch has no WHERE clause
with cte as (
select top(10) id, b, c
from btree
order by id)
update cte
set c = b+1
output INSERTED.id into @inserted (id);
set @rc = @@rowcount;
commit;
select @id = max(id) from @inserted;
delete from @inserted;
raiserror (N'Updated %d rows, up to id %d', 0,0,@rc, @id);
begin transaction;
while (1=1)
begin
-- update the next batch of 10 rows, now it has where clause
with cte as (
select top(10) id, b, c
from btree
where id > @id
order by id)
update cte
set c = b+1
output INSERTED.id into @inserted (id);
set @rc = @@rowcount;
if (0 = @rc)
break;
commit;
begin transaction;
select @id = max(id) from @inserted;
delete from @inserted;
raiserror (N'Updated %d rows, up to id %d', 0,0,@rc, @id);
end
commit
go
If your table doesn't have a unique clustered index then it becomes really tricky to do this; you would need to do the same thing a cursor does. While from a logical point of view the index is not required, not having it would cause each batch to do a whole-table scan, which would be pretty much disastrous.
In case you wonder what happens if someone inserts a value behind the current @id, the answer is very simple: exactly the same thing that would happen if someone inserts a value after the whole processing is complete.
Personally I think the single UPDATE will be much better. There are very few cases where a cursor will be better overall, regardless of concurrent activity. In fact the only one that comes to mind is a very complex running-totals query; I don't think I've ever seen better overall performance from a cursor that is not a read-only, SELECT-only cursor. Of course, you have much better means of testing which is "a better approach": you have your hardware, your schema, your data, and your usage patterns right in front of you. All you have to do is perform some tests.
That all said, what is the point in the first place of updating that column so that every single row has the same value? I suspect that if the value in that column has no bearing on the rest of the row, it can be stored elsewhere: perhaps a related table or a single-row table. Maybe the value in that column should be NULL (in which case you get it from the other table) unless it is overridden for a specific row. It seems to me there is a better solution here than touching every single row in the table every time.

Possible to implement a manual increment with just simple SQL INSERT?

I have a primary key that I don't want to auto increment (for various reasons) and so I'm looking for a way to simply increment that field when I INSERT. By simply, I mean without stored procedures and without triggers, so just a series of SQL commands (preferably one command).
Here is what I have tried thus far:
BEGIN TRAN
INSERT INTO Table1(id, data_field)
VALUES ( (SELECT (MAX(id) + 1) FROM Table1), '[blob of data]');
COMMIT TRAN;
* Data abstracted to use generic names and identifiers
However, when executed, the command errors, saying:
"Subqueries are not allowed in this context. Only scalar expressions are allowed."
So, how can I do this/what am I doing wrong?
EDIT: Since it was pointed out as a consideration, the table to be inserted into is guaranteed to have at least 1 row already.
You understand that you will have collisions, right?
You need to do something like this. It might cause deadlocks, so be very sure of what you are trying to accomplish here:
DECLARE @id int
BEGIN TRAN
SELECT @id = MAX(id) + 1 FROM Table1 WITH (UPDLOCK, HOLDLOCK)
INSERT INTO Table1(id, data_field)
VALUES (@id ,'[blob of data]')
COMMIT TRAN
To explain the collision thing, I have provided some code.
First create this table and insert one row:
CREATE TABLE Table1(id int primary key not null, data_field char(100))
GO
Insert Table1 values(1,'[blob of data]')
Go
Now open up two query windows and run this at the same time
declare @i int
set @i = 1
while @i < 10000
begin
BEGIN TRAN
INSERT INTO Table1(id, data_field)
SELECT MAX(id) + 1, '[blob of data]' FROM Table1
COMMIT TRAN;
set @i = @i + 1
end
You will see a bunch of these
Server: Msg 2627, Level 14, State 1, Line 7
Violation of PRIMARY KEY constraint 'PK__Table1__3213E83F2962141D'. Cannot insert duplicate key in object 'dbo.Table1'.
The statement has been terminated.
Try this instead:
INSERT INTO Table1 (id, data_field)
SELECT id, '[blob of data]' FROM (SELECT MAX(id) + 1 as id FROM Table1) tbl
I wouldn't recommend doing it that way for any number of reasons though (performance, transaction safety, etc)
It could be because there are no records, so the subquery returns NULL (note VALUES won't accept a subquery anyway; use the SELECT form). Try:
INSERT INTO tblTest(RecordID, Text)
SELECT ISNULL(MAX(RecordID), 0) + 1, 'asdf' FROM tblTest
I don't know if somebody is still looking for an answer but here is a solution that seems to work:
-- Preparation: execute only once
CREATE TABLE Test (Value int)
CREATE TABLE Lock (LockID uniqueidentifier)
INSERT INTO Lock SELECT NEWID()
-- Real insert
BEGIN TRAN LockTran
-- Lock an object to block simultaneous calls.
UPDATE Lock WITH(TABLOCK)
SET LockID = LockID
INSERT INTO Test
SELECT ISNULL(MAX(T.Value), 0) + 1
FROM Test T
COMMIT TRAN LockTran
We have a similar situation where we needed to increment and could not have gaps in the numbers. (If you use an identity value and a transaction is rolled back, that number will not be inserted and you will have gaps because the identity value does not roll back.)
We created a separate table for last number used and seeded it with 0.
Our insert takes a few steps.
DECLARE @number int
BEGIN TRAN
-- increment the number (this takes and holds the lock on NumberTable)
UPDATE dbo.NumberTable
SET number = number + 1
-- find out what the incremented number is
SELECT @number = number
FROM dbo.NumberTable
-- use the number
INSERT INTO dbo.MyTable (id /*, other columns */) VALUES (@number /*, ... */)
COMMIT TRAN -- or ROLLBACK TRAN; the NumberTable update rolls back too
This causes simultaneous transactions to process in single file, as each concurrent transaction waits because NumberTable is locked. As soon as the waiting transaction gets the lock, it increments the current value and locks it from others. That current value is the last number used, and if a transaction is rolled back, the NumberTable update is also rolled back, so there are no gaps.
Hope that helps.
Another way to cause single file execution is to use a SQL application lock. We have used that approach for longer running processes like synchronizing data between systems so only one synchronizing process can run at a time.
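A minimal sketch of that approach (the resource name is arbitrary; sp_getapplock is the built-in procedure):
BEGIN TRAN
-- Only one session at a time can hold an Exclusive lock on this resource name
EXEC sp_getapplock @Resource = 'MyTable_NextId',
@LockMode = 'Exclusive',
@LockOwner = 'Transaction'
-- ... compute MAX(id) + 1 and do the INSERT here ...
COMMIT TRAN -- a transaction-owned applock is released automatically at commit/rollback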
If you're doing it in a trigger, you could make sure it's an "INSTEAD OF" trigger and do it in a couple of statements:
DECLARE @next INT
SET @next = (SELECT MAX(id) + 1 FROM Table1)
INSERT INTO Table1
SELECT @next, datablob FROM inserted
The only thing you'd have to be careful about is concurrency - if two rows are inserted at the same time, they could attempt to use the same value for @next, causing a conflict.
Does this accomplish what you want?
It seems very odd to do this sort of thing w/o an IDENTITY (auto-increment) column, making me question the architecture itself. I mean, seriously, this is the perfect situation for an IDENTITY column. It might help us answer your question if you'd explain the reasoning behind this decision. =)
Having said that, some options are:
using an INSTEAD OF trigger for this purpose. So, you'd do your INSERT (the INSERT statement would not need to pass in an ID). The trigger code would handle inserting the appropriate ID. You'd need to use the WITH (UPDLOCK, HOLDLOCK) syntax used by another answerer to hold the lock for the duration of the trigger (which is implicitly wrapped in a transaction) & to elevate the lock type from "shared" to "update" lock (IIRC).
you can use the idea above, but have a table whose purpose is to store the last, max value inserted into the table. So, once the table is set up, you would no longer have to do a SELECT MAX(ID) every time. You'd simply increment the value in the table. This is safe provided that you use appropriate locking (as discussed). Again, that avoids repeated table scans every time you INSERT.
use GUIDs instead of IDs. It's much easier to merge tables across databases, since the GUIDs will always be unique (whereas records across databases will have conflicting integer IDs). To avoid page splitting, sequential GUIDs can be used (see the sketch after this list). This is only beneficial if you might need to do database merging.
Use a stored proc in lieu of the trigger approach (since triggers are to be avoided, for some reason). You'd still have the locking issue (and the performance problems that can arise). But sprocs are preferred over dynamic SQL (in the context of applications), and are often much more performant.
Sorry about rambling. Hope that helps.
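A minimal sketch of the sequential-GUID option (table and constraint names are mine; NEWSEQUENTIALID() is only allowed in a DEFAULT constraint):
CREATE TABLE dbo.Table1G (
id uniqueidentifier NOT NULL
CONSTRAINT DF_Table1G_id DEFAULT NEWSEQUENTIALID()
CONSTRAINT PK_Table1G PRIMARY KEY,
data_field varchar(100) NOT NULL
)
-- No id in the INSERT: no MAX(id) scan and no locking gymnastics
INSERT INTO dbo.Table1G (data_field) VALUES ('[blob of data]')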
How about creating a separate table to maintain the counter? It performs better than MAX(id): reading the counter is O(1), while MAX(id) is at best O(log n), depending on the implementation.
And then when you need to insert, simply lock the counter table for reading the counter and increment the counter. Then you can release the lock and insert to your table with the incremented counter value.
Have a separate table where you keep your latest ID and for every transaction get a new one.
It may be a bit slower but it should work.
DECLARE @NEWID INT
BEGIN TRAN
UPDATE TABLE SET ID=ID+1
SELECT @NEWID=ID FROM TABLE
COMMIT TRAN
PRINT @NEWID -- Do what you want with your new ID
Code without any transaction scope (I use it in my engineering course as an exercise):
-- Preparation: execute only once
CREATE TABLE increment (val int);
INSERT INTO increment VALUES (1);
-- Real insert
DECLARE @newIncrement INT;
UPDATE increment
SET @newIncrement = val,
val = val + 1;
INSERT INTO Table1 (id, data_field)
SELECT @newIncrement, 'some data';
declare @nextId int
set @nextId = (select MAX(id)+1 from Table1)
insert into Table1(id, data_field) values (@nextId, '[blob of data]')
commit;
But perhaps a better approach would be using a scalar function getNextId('table1')
Any critiques of this? Works for me.
DECLARE @m_NewRequestID INT
, @m_IsError BIT = 1
, @m_CatchEndless INT = 0
WHILE @m_IsError = 1
BEGIN TRY
SELECT @m_NewRequestID = (SELECT ISNULL(MAX(RequestID), 0) + 1 FROM Requests)
INSERT INTO Requests ( RequestID
, RequestName
, Customer
, Comment
, CreatedFromApplication)
SELECT RequestID = @m_NewRequestID
, RequestName = dbo.ufGetNextAvailableRequestName(PatternName)
, Customer = @Customer
, Comment = [Description]
, CreatedFromApplication = @CreatedFromApplication
FROM RequestPatterns
WHERE PatternID = @PatternID
SET @m_IsError = 0
END TRY
BEGIN CATCH
SET @m_IsError = 1
SET @m_CatchEndless = @m_CatchEndless + 1
IF @m_CatchEndless > 1000
THROW 51000, '[upCreateRequestFromPattern]: Unable to get new RequestID', 1
END CATCH
This should work:
INSERT INTO Table1 (id, data_field)
SELECT (SELECT (MAX(id) + 1) FROM Table1), '[blob of data]';
Or this (substitute LIMIT for other platforms):
INSERT INTO Table1 (id, data_field)
SELECT TOP 1
id + 1, '[blob of data]'
FROM
Table1
ORDER BY
[id] DESC;