How to loop over table values when the column value is not incremental in SQL Server

I need to insert values from table T1 into another table T2. T1 is truncate-and-load, so different values can appear after each load. How do I use a loop to insert the data into T2? It should happen automatically, with no manual intervention required, so I can't use a table-valued parameter.
Suppose table1 has column Id
Id
---
4
7
15
I have to insert the data into table 2.
I have used this code:
DECLARE @counter INT = (SELECT MIN(CAST(ID AS INT)) FROM Table1);
WHILE @counter <= (SELECT COUNT(CAST(ID AS INT)) FROM Table1)
BEGIN
INSERT INTO TABLE2 (ID)
VALUES (@counter)
SET @counter = (SELECT ID FROM Table1 WHERE @counter = ID)
END
How do I set the counter, or pick the value from Table1, when the Id values can be different every time?
Please help

In the absence of any further detail, it seems you could simply rewrite your query as the below:
INSERT INTO Table2 (ID)
SELECT ID
FROM Table1;
There is no need for a loop (WHILE/CURSOR) for what you have here. SQL is a query language and excels at set-based operations. What SQL isn't good at is iterative ones; whenever a CURSOR or WHILE is used, I would suggest it is almost always being misused, and this certainly appears to be one of those times. A WHILE or CURSOR, for even a slightly larger dataset, would be significantly slower, probably thousands of times so, than the simple statement above.
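Since the question mentions Table1 is truncate-and-load, Table2 may already hold ids from a previous load. A hedged variant of the same set-based insert (assuming ID should stay unique in Table2) skips the ids that are already there:
-- only copy ids Table2 doesn't already have
INSERT INTO Table2 (ID)
SELECT t1.ID
FROM Table1 t1
WHERE NOT EXISTS (SELECT 1 FROM Table2 t2 WHERE t2.ID = t1.ID);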

I'm not sure of your logic, and it's almost always better to use set-based solutions, but here is a T-SQL loop solution. I left out the casting, which you may have to add:
declare @curid int
declare @previd int
select @curid = min([ID]) from Table1;
while @curid is not null
begin
INSERT INTO TABLE2 (ID) VALUES (@curid)
set @previd = @curid
select @curid = min([ID])
from Table1
where [ID] > @previd;
end

Related

Insert into 2 tables with cursor, then use returned scope_identity to insert into another table

Basically I want to use an existing table, let's call it T1. I have to take that table, row by row, and insert different columns into 2 separate tables. For example, C1, C2 into T2 and C3, C4 into T3.
During both of these inserts I need to make sure that the values that I am inserting do not already exist. Unfortunately there are multiple duplicates. It's not my data and it is very dirty. I have to do a ton of casting as is.
Chances are good, but not 100%, that the column that I want to insert into T2 or T3 may exist while the other does not.
Once those inserts are done I need SCOPE_IDENTITY() or another way to uniquely identify, and hold in two declared variables, the auto-incremented IDs that T2 and T3 create.
These need to then be inserted into T4 which is a lookup table that mostly only stores FK, its own ID, a comment and a BIT.
I know it is a bit of a task, but I really need some help here.
I have tinkered with multiple cursors and loops, but haven't got there yet.
If I figure something out I'll post a solution, if nobody figures it out before me.
EDIT:
So I worked it out. I have posted my code that has been made easy to read and use as an answer. If anyone wants to look at it, comment, make edits etc it will be there. There may be a better way to do it, so please comment if you can.
I am not familiar with your table structure and the amount of data, but I would take another way to solve this:
create a buffer table that holds the data that needs to be inserted in table T1 and T2
use this buffer table to populate all of the tables T1, T2, T3
I would try to do this because using cursors is slow in most cases - you need to try to manipulate the data in batches (groups of rows).
How to do this?
First, you can find the maximum identity ID in T1 and T2
Then, create a table that is going to have the following columns:
T1_ID
T2_ID
C1
C2
C3
C4
ShouldBeInsertedInT1
ShouldBeInsertedInT2
Now you have to populate the table using the data from T1, generating the T1_ID and T2_ID fields. This is a simple ROW_NUMBER function plus the maximum identity ID for tables T1 and T2.
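A hedged sketch of that population step (the buffer table name, the identity column names, and the ORDER BY column are all assumptions):
DECLARE @maxT1 int, @maxT2 int;
-- current maximum identity values; ISNULL covers empty tables
SELECT @maxT1 = ISNULL(MAX(T1_ID), 0) FROM T1;
SELECT @maxT2 = ISNULL(MAX(T2_ID), 0) FROM T2;
-- generate both candidate ids per source row with ROW_NUMBER
INSERT INTO buffer (T1_ID, T2_ID, C1, C2, C3, C4, ShouldBeInsertedInT1, ShouldBeInsertedInT2)
SELECT @maxT1 + ROW_NUMBER() OVER (ORDER BY C1),
       @maxT2 + ROW_NUMBER() OVER (ORDER BY C1),
       C1, C2, C3, C4, 0, 0
FROM T1;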
After the data is in the buffer table, you have to perform two separate updates in order to raise the ShouldBeInsertedInT flags. You have to check which of the columns in the buffer table should be inserted in the T1 and T2 tables. This can be done with joins, EXISTS, etc. - it basically depends on your data and business logic.
If you are here, you just need to perform the inserts. For example:
SET IDENTITY_INSERT dbo.T1 ON
INSERT INTO T1
SELECT T1_ID, C1, C2
FROM buffer
WHERE ShouldBeInsertedInT1 = 1;
SET IDENTITY_INSERT dbo.T1 OFF
SET IDENTITY_INSERT dbo.T2 ON
INSERT INTO T2
SELECT T2_ID, C3, C4
FROM buffer
WHERE ShouldBeInsertedInT2 = 1;
SET IDENTITY_INSERT dbo.T2 OFF
INSERT INTO T3
SELECT T1_ID, T2_ID
FROM buffer;
This is just a concept, so you must change this code. Note that the whole process may need to be in a transaction in order to be sure the maximum identity IDs for T1 and T2 are not changed meanwhile.
This is the user-safe (I'll just call it that) version of what I eventually used to do my insert. This is really designed for importing data sets that, in my eyes, would be somewhat difficult to do otherwise without row-level inserts. When I ran this it took roughly 2 minutes to insert 50,000 rows. Bear in mind that I had way more than 4 columns, some columns were large, I had to cast everything at least once (some more than others), and I had to do various cutting using LEFT or RIGHT among other things to clean data for the new tables.
DECLARE @Col1 varchar(50);
DECLARE @Col2 varchar(50);
DECLARE @Col3 varchar(50);
DECLARE @Col4 varchar(50);
DECLARE @T2ID int;
DECLARE @T3ID int;
DECLARE Cur1 CURSOR -- Create the cursor
LOCAL FAST_FORWARD
-- set the type of cursor. Note you could also use READ_ONLY and FORWARD_ONLY.
-- You would have to performance test to see if you benefit from one or the other
FOR
--select FROM base table Table1
SELECT
Col1, Col2, Col3, Col4
FROM
Table1
WHERE Col1 IS NOT NULL AND Col3 IS NOT NULL
-- If the main columns are null then they are skipped. This was
-- required for my data but not necessarily yours.
OPEN Cur1
FETCH NEXT FROM Cur1 INTO
@Col1, @Col2, @Col3, @Col4;
-- Assigns values to variables declared at the top
WHILE @@FETCH_STATUS = 0
BEGIN
-- Select from Table2
SELECT @T2ID = T2ID
-- where some data in the table is = to the stored data we are searching for
FROM Table2
WHERE @Col1 = [Col1]
IF @@ROWCOUNT = 0
BEGIN
INSERT INTO Table2
(Col1
,Col2)
VALUES
(@Col1
,@Col2)
SET @T2ID = SCOPE_IDENTITY();
END;
-- Selects from Table3
SELECT @T3ID = T3ID
FROM Table3
WHERE @Col3 = [Col3]
IF @@ROWCOUNT = 0
-- If no rows are returned then proceed with insert
BEGIN
INSERT INTO Table3
(Col3
,Col4)
VALUES
-- Uses values assigned to the variables from the cursor select
(@Col3
,@Col4)
SET @T3ID = SCOPE_IDENTITY();
END;
-- Inserts the gathered row IDs into the lookup table
INSERT INTO Table4
(Table2ID
,Table3ID)
VALUES (
@T2ID
,@T3ID)
FETCH NEXT FROM Cur1 INTO @Col1, @Col2, @Col3, @Col4;
END;
CLOSE Cur1;
DEALLOCATE Cur1;
If anyone has improvements to offer please do. I am open to suggestions.
Also, unless someone wants me to I won't be accepting my answer as correct, as there may be a better answer.

SQL insert statement from another table loop

I am trying to take values from a table and insert them into another table. However, there is one database column that needs to increase by 1 each time. This value, though, is not an identity column; the value comes from another table. There is another DB column that acts as a counter. I wrote a couple of things but it just isn't helping:
(121 documents)
declare @count int;
set @count=0
while @count<=121
begin
insert into cabinet..DOCUMENT_NAMES_ROBBY (TAG,TAGORDER,ACTIVE,QUEUE,REASON,FORM,DIRECT_VIEW,GLOBAL,FU_SIGN,
SIGN_X,SIGN_Y,SIGN_W,SIGN_H,ANNOTATE,doctype_id,CODE,DOC_TYPE,SET_ID,SUSPEND_DELAY,Text_Editing,Restrict_Viewing,Viewing_Expire,
Viewing_Period,DocHdrLength,DocFtrLength,DocRuleId,Outbound,SigQueue,SigReason) select TAG,TAGORDER,ACTIVE,QUEUE,REASON,FORM,
DIRECT_VIEW,GLOBAL,FU_SIGN,
SIGN_X,SIGN_Y,SIGN_W,SIGN_H,ANNOTATE,(select nextid from cabinet..Wfe_NextValue where Name='documents')+1, CODE,DOC_TYPE,'2',SUSPEND_DELAY,Text_Editing,Restrict_Viewing,Viewing_Expire,
Viewing_Period,DocHdrLength,DocFtrLength,DocRuleId,Outbound,SigQueue,SigReason from cabinet..document_names where SET_ID ='1'
update cabinet..Wfe_NextValue set NextID=NextID+1 where Name='documents'
set @count=@count+1
end
That database column is the doctype_id. Above obviously comes out wrong and puts like 14,000 rows in the table. Basically I want to take every single entry from document_names and put it in document_names_robby...except the doctype_id column should take the value from wfe_nextvalue +1, while at the same time increasing that number in that table by 1 BEFORE inserting the next document name into document_Names_Robby. Any help is appreciated
Many popular databases support sequences. For a sequence there is a function nextval, which returns the sequence value and increments the sequence counter, and currval, which returns the most recently returned value; you can also set the initial value and the increment. Sequences are thread-safe, whereas storing counters in a table column is not.
Rewrite your code using sequences.
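In SQL Server (2012 and later) that looks roughly like the sketch below, assuming you run it in the cabinet database; the sequence name is illustrative and the column list is abbreviated:
CREATE SEQUENCE dbo.DocTypeIdSeq START WITH 1 INCREMENT BY 1;

insert into cabinet..DOCUMENT_NAMES_ROBBY (doctype_id, TAG, TAGORDER /* ...remaining columns... */)
select NEXT VALUE FOR dbo.DocTypeIdSeq, TAG, TAGORDER /* ...remaining columns... */
from cabinet..document_names
where SET_ID = '1';
Each selected row draws its own value, so doctype_id increments per row and the separate Wfe_NextValue bookkeeping goes away.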
Assuming that you are using a SQL Server database, use the IDENTITY function:
SELECT *, IDENTITY(int, 1, 1) AS IDCol INTO #Tab1 FROM Cabinet.DocumentNames WHERE Set_Id = '1';
insert into cabinet..DOCUMENT_NAMES_ROBBY (TAG,TAGORDER,ACTIVE,QUEUE,REASON,FORM,DIRECT_VIEW,GLOBAL,FU_SIGN,
SIGN_X,SIGN_Y,SIGN_W,SIGN_H,ANNOTATE,doctype_id,CODE,DOC_TYPE,SET_ID,SUSPEND_DELAY,Text_Editing,Restrict_Viewing,Viewing_Expire,
Viewing_Period,DocHdrLength,DocFtrLength,DocRuleId,Outbound,SigQueue,SigReason)
SELECT * FROM #Tab1;
DROP TABLE #Tab1;
declare @count int;
set @count=0
declare @nextId int;
select @nextId = nextid from cabinet..Wfe_NextValue where Name='documents'
while @count<=121
begin
insert into cabinet..DOCUMENT_NAMES_ROBBY (TAG,TAGORDER,ACTIVE,QUEUE,REASON,FORM,DIRECT_VIEW,GLOBAL,FU_SIGN,
SIGN_X,SIGN_Y,SIGN_W,SIGN_H,ANNOTATE,doctype_id,CODE,DOC_TYPE,SET_ID,SUSPEND_DELAY,Text_Editing,Restrict_Viewing,Viewing_Expire,
Viewing_Period,DocHdrLength,DocFtrLength,DocRuleId,Outbound,SigQueue,SigReason) select TAG,TAGORDER,ACTIVE,QUEUE,REASON,FORM,
DIRECT_VIEW,GLOBAL,FU_SIGN,
SIGN_X,SIGN_Y,SIGN_W,SIGN_H,ANNOTATE,@nextId + @count + 1, CODE,DOC_TYPE,'2',SUSPEND_DELAY,Text_Editing,Restrict_Viewing,Viewing_Expire,
Viewing_Period,DocHdrLength,DocFtrLength,DocRuleId,Outbound,SigQueue,SigReason from cabinet..document_names where SET_ID ='1'
set @count=@count+1
end
update cabinet..Wfe_NextValue set NextID=NextID+121 where Name='documents'

How to keep a rolling checksum in SQL?

I am trying to keep a rolling checksum to account for order, so take the previous 'checksum' and xor it with the current one and generate a new checksum.
Name Checksum Rolling Checksum
------ ----------- -----------------
foo 11829231 11829231
bar 27380135 checksum(27380135 ^ 11829231) = 93291803
baz 96326587 checksum(96326587 ^ 93291803) = 67361090
How would I accomplish something like this?
(Note that the calculations are completely made up and are for illustration only)
This is basically the running total problem.
Edit:
My original claim was that this is one of the few places where a cursor-based solution actually performs best. The problem with the triangular self-join solution is that it repeatedly ends up recalculating the same cumulative checksum as a sub-calculation for the next step, so it is not very scalable: the work required grows quadratically with the number of rows.
Corina's answer uses the "quirky update" approach. I've adjusted it to do the checksum and in my test found that it took 3 seconds rather than 26 seconds for the cursor solution. Both produced the same results. Unfortunately, however, it relies on an undocumented aspect of UPDATE behaviour. I would definitely read the discussion here before deciding whether to rely on this in production code.
There is a third possibility described here (using the CLR) which I didn't have time to test. From the discussion there it seems to be a good option for calculating running-total-type things at display time, but it is outperformed by the cursor when the result of the calculation must be saved back.
CREATE TABLE TestTable
(
PK int identity(1,1) primary key clustered,
[Name] varchar(50),
[CheckSum] AS CHECKSUM([Name]),
RollingCheckSum1 int NULL,
RollingCheckSum2 int NULL
)
/*Insert some random records (753,571 on my machine)*/
INSERT INTO TestTable ([Name])
SELECT newid() FROM sys.objects s1, sys.objects s2, sys.objects s3
Approach One: Based on the Jeff Moden Article
DECLARE @RCS int
UPDATE TestTable
SET @RCS = RollingCheckSum1 =
CASE WHEN @RCS IS NULL THEN
[CheckSum]
ELSE
CHECKSUM([CheckSum] ^ @RCS)
END
FROM TestTable WITH (TABLOCKX)
OPTION (MAXDOP 1)
Approach Two - Using the same cursor options as Hugo Kornelis advocates in the discussion for that article.
SET NOCOUNT ON
BEGIN TRAN
DECLARE @RCS2 INT
DECLARE @PK INT, @CheckSum INT
DECLARE curRollingCheckSum CURSOR LOCAL STATIC READ_ONLY
FOR
SELECT PK, [CheckSum]
FROM TestTable
ORDER BY PK
OPEN curRollingCheckSum
FETCH NEXT FROM curRollingCheckSum
INTO @PK, @CheckSum
WHILE @@FETCH_STATUS = 0
BEGIN
SET @RCS2 = CASE WHEN @RCS2 IS NULL THEN @CheckSum ELSE CHECKSUM(@CheckSum ^ @RCS2) END
UPDATE dbo.TestTable
SET RollingCheckSum2 = @RCS2
WHERE @PK = PK
FETCH NEXT FROM curRollingCheckSum
INTO @PK, @CheckSum
END
COMMIT
Test that they are the same:
SELECT * FROM TestTable
WHERE RollingCheckSum1 <> RollingCheckSum2
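For completeness, here is a hedged sketch of a set-based alternative over the same TestTable using a recursive CTE; it assumes the PK values are contiguous (true here, since they come from an identity column with no deletes):
;WITH cte AS (
    -- anchor row: the rolling checksum starts as the first row's checksum
    SELECT PK, [CheckSum], RollingCheckSum = [CheckSum]
    FROM TestTable
    WHERE PK = 1
    UNION ALL
    -- each step chains the next row's checksum into the running value, in PK order
    SELECT t.PK, t.[CheckSum], CHECKSUM(t.[CheckSum] ^ c.RollingCheckSum)
    FROM TestTable t
    JOIN cte c ON t.PK = c.PK + 1
)
SELECT PK, [CheckSum], RollingCheckSum
FROM cte
OPTION (MAXRECURSION 0);
Because the recursion still walks row by row, I wouldn't expect it to beat the quirky update on large tables, but it avoids both the cursor and the undocumented behaviour.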
I'm not sure about a rolling checksum, but for a rolling sum for instance, you can do this using the UPDATE command:
declare @a table (name varchar(2), value int, rollingvalue int)
insert into @a
select 'a', 1, 0 union all select 'b', 2, 0 union all select 'c', 3, 0
select * from @a
declare @sum int
set @sum = 0
update @a
set @sum = rollingvalue = value + @sum
select * from @a
Select Name, [Checksum]
, (Select Checksum_Agg(T1.[Checksum])
From [Table] As T1
Where T1.Name <= T.Name) As RollingChecksum
From [Table] As T
Order By T.Name
To do a rolling anything, you need some semblance of an order to the rows. That can be by name, an integer key, a date or whatever. In my example, I used name (even though the order in your sample data isn't alphabetical). In addition, I'm using the Checksum_Agg function in SQL.
In addition, you would ideally have a unique value on which you compare the inner and outer query. E.g., Where T1.PK < T.PK for an integer key or even string key would work well. In my solution if Name had a unique constraint, it would also work well enough.

Does anyone know a neat trick for reusing identity values?

Typically when you specify an identity column you get a convenient interface in SQL Server for asking for a particular row:
SELECT * FROM [Table] WHERE $IDENTITY = @pID
You don't really need to concern yourself with the name of the identity column, because there can only be one.
But what if I have a table which mostly consists of temporary data? Lots of inserts and lots of deletes. Is there a simple way for me to reuse the identity values?
Preferably I would want to be able to write a function that would return, say, NEXT_SMALLEST($IDENTITY) as the next identity value, and do so in a fail-safe manner.
Basically find the smallest value that's not in use. That's not entirely trivial to do, but what I want is to be able to tell SQL Server that this is my function that will generate the identity values. But what I know is that no such function exists...
I want to...
Implement global database IDs; I need to provide a default value that I'm in control of.
My idea was that I should be able to have a table with all known IDs, and then every row ID from some other table that needed a global ID would reference that table. The default value would be provided by something like:
INSERT INTO GlobalID
RETURN SCOPE_IDENTITY()
No; it's not unique if it can be reused.
Why do you want to re-use them? Why do you concern yourself with this field? If you want to be in control of it, don't make it an identity; create your own scheme and use that.
Don't reuse identities, you'll just shoot your self in the foot. Use a large enough value so that it never rolls over (64 bit big int).
To find gaps in a sequence of numbers, join the table against itself with a +/- 1 difference:
SELECT a.id
FROM table AS a
LEFT OUTER JOIN table AS b ON a.id = b.id+1
WHERE b.id IS NULL;
This query will find the numbers in the id sequence for which id-1 is not in the table, i.e. contiguous sequence start numbers. You can then use SET IDENTITY_INSERT ON to insert a specific id and reuse a number. The cost of doing so is overwhelming (both runtime and code complexity) compared with an ordinary identity-based insert.
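For what it's worth, a hedged sketch of that reuse (table and column names are placeholders), finding the lowest free number and inserting it explicitly:
DECLARE @gapId int;
-- lowest id whose successor is missing, plus one = first free number
SELECT @gapId = MIN(a.id) + 1
FROM [table] AS a
LEFT OUTER JOIN [table] AS b ON b.id = a.id + 1
WHERE b.id IS NULL;

SET IDENTITY_INSERT [table] ON;
INSERT INTO [table] (id, data) VALUES (@gapId, 'reused id');
SET IDENTITY_INSERT [table] OFF;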
If you really want to reset the identity value to the lowest possible,
here is a trick you can use through DBCC CHECKIDENT.
Basically, the following SQL statements reseed the identity value so that it restarts from the lowest possible number:
create table TT (id int identity(1, 1))
GO
insert TT default values
GO 10
select * from TT
GO
delete TT where id between 5 and 10
GO
--; At this point, next ID will be 11, not 5
select * from TT
GO
insert TT default values
GO
--; as you can see here, next ID is indeed 11
select * from TT
GO
--; Now delete ID = 11
--; so that we can reseed next highest ID to 5
delete TT where id = 11
GO
--; Now, let's reseed the identity value to the lowest possible identity number
declare @seedID int
select @seedID = max(id) from TT
print @seedID --; 4
--; We reseed the identity column with "DBCC CHECKIDENT" and pass a new seed value
--; But we can't pass a variable as the seed argument, so let's use dynamic SQL.
declare @sql nvarchar(200)
set @sql = 'dbcc checkident(TT, reseed, ' + cast(@seedID as varchar) + ')'
exec sp_sqlexec @sql
GO
--; Now the next insert picks up from the new seed
insert TT default values
GO
--; as you can see here, next ID is indeed 5
select * from TT
GO
I guess we would really need to know why you want to reuse your identity column. The only reason I can think of is that, because of the temporary nature of your data, you might exhaust the possible values for the identity. That is not really likely, but if that is your concern, you can use uniqueidentifiers (GUIDs) as the primary key in your table instead.
The function newid() will create a new guid and can be used in insert statements (or other statements). Then when you delete the row, you don't have any "holes" in your key because guids are not created in that order anyway.
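A minimal sketch of that (the table and column names are made up):
CREATE TABLE Orders
(
OrderId uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY,
Description varchar(100)
)
-- no explicit key needed; the default fills it in
INSERT INTO Orders (Description) VALUES ('some order')
If the GUID ends up as the clustered key, NEWSEQUENTIALID() as the column default avoids the page splits that random NEWID() values cause.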
[Syntax assumes SQL2008....]
Yes, it's possible. You need two management tables, and two triggers on each participating table.
First, the management tables:
-- this table should only ever have one row
CREATE TABLE NextId (Id INT)
INSERT NextId VALUES (1)
GO
CREATE TABLE RecoveredIds (Id INT NOT NULL PRIMARY KEY)
GO
Then, the triggers, two on each table:
CREATE TRIGGER tr_TableName_RecoverId ON TableName
FOR DELETE AS BEGIN
IF @@ROWCOUNT = 0 RETURN
INSERT RecoveredIds (Id) SELECT Id FROM deleted
END
GO
CREATE TRIGGER tr_TableName_AssignId ON TableName
INSTEAD OF INSERT AS BEGIN
DECLARE @rowcount INT = @@ROWCOUNT
IF @rowcount = 0 RETURN
DECLARE @required INT = @rowcount
DECLARE @new_ids TABLE (Id INT PRIMARY KEY)
DELETE TOP (@required) OUTPUT DELETED.Id INTO @new_ids (Id) FROM RecoveredIds
SET @rowcount = @@ROWCOUNT
IF @rowcount < @required BEGIN
DECLARE @output TABLE (Id INT)
UPDATE NextId SET Id = Id + (@required-@rowcount)
OUTPUT DELETED.Id INTO @output
-- this assumes you have a numbers table around somewhere
INSERT @new_ids (Id)
SELECT n.Number+o.Id-1 FROM Numbers n, @output o
WHERE n.Number BETWEEN 1 AND @required-@rowcount
END
SET IDENTITY_INSERT TableName ON
;WITH inserted_CTE AS (SELECT _no = ROW_NUMBER() OVER (ORDER BY Id), * FROM inserted)
, new_ids_CTE AS (SELECT _no = ROW_NUMBER() OVER (ORDER BY Id), * FROM @new_ids)
INSERT TableName (Id, Attr1, Attr2)
SELECT n.Id, i.Attr1, i.Attr2
FROM inserted_CTE i JOIN new_ids_CTE n ON i._no = n._no
SET IDENTITY_INSERT TableName OFF
END
You could script the triggers out easily enough from system tables.
You would want to test this for concurrency. It should work as is, syntax errors notwithstanding: The OUTPUT clause guarantees atomicity of id lookup->increment as one step, and the entire operation occurs within a transaction, thanks to the trigger.
TableName.Id is still an identity column. All the common idioms like $IDENTITY and SCOPE_IDENTITY() will still work.
There is no central table of ids by table, but you could create one easily enough.
I don't have any help for finding the values not in use but if you really want to find them and set them yourself, you can use
set identity_insert on ....
in your code to do so.
I'm with everyone else though. Why bother? Don't you have a business problem to solve?

Possible to implement a manual increment with just simple SQL INSERT?

I have a primary key that I don't want to auto increment (for various reasons) and so I'm looking for a way to simply increment that field when I INSERT. By simply, I mean without stored procedures and without triggers, so just a series of SQL commands (preferably one command).
Here is what I have tried thus far:
BEGIN TRAN
INSERT INTO Table1(id, data_field)
VALUES ( (SELECT (MAX(id) + 1) FROM Table1), '[blob of data]');
COMMIT TRAN;
* Data abstracted to use generic names and identifiers
However, when executed, the command errors with:
"Subqueries are not allowed in this context. Only scalar expressions are allowed."
So, how can I do this/what am I doing wrong?
EDIT: Since it was pointed out as a consideration, the table to be inserted into is guaranteed to have at least 1 row already.
You understand that you will have collisions, right?
You need to do something like this, and it might cause deadlocks, so be very sure what you are trying to accomplish here:
DECLARE @id int
BEGIN TRAN
SELECT @id = MAX(id) + 1 FROM Table1 WITH (UPDLOCK, HOLDLOCK)
INSERT INTO Table1(id, data_field)
VALUES (@id, '[blob of data]')
COMMIT TRAN
To explain the collision thing, I have provided some code.
First, create this table and insert one row:
CREATE TABLE Table1(id int primary key not null, data_field char(100))
GO
Insert Table1 values(1,'[blob of data]')
Go
Now open up two query windows and run this at the same time
declare @i int
set @i = 1
while @i < 10000
begin
BEGIN TRAN
INSERT INTO Table1(id, data_field)
SELECT MAX(id) + 1, '[blob of data]' FROM Table1
COMMIT TRAN;
set @i = @i + 1
end
You will see a bunch of these
Server: Msg 2627, Level 14, State 1, Line 7
Violation of PRIMARY KEY constraint 'PK__Table1__3213E83F2962141D'. Cannot insert duplicate key in object 'dbo.Table1'.
The statement has been terminated.
Try this instead:
INSERT INTO Table1 (id, data_field)
SELECT id, '[blob of data]' FROM (SELECT MAX(id) + 1 as id FROM Table1) tbl
I wouldn't recommend doing it that way for any number of reasons though (performance, transaction safety, etc)
It could be because there are no records, so the subquery is returning NULL. Try:
INSERT INTO tblTest(RecordID, Text)
SELECT ISNULL(MAX(RecordID), 0) + 1, 'asdf' FROM tblTest
I don't know if somebody is still looking for an answer but here is a solution that seems to work:
-- Preparation: execute only once
CREATE TABLE Test (Value int)
CREATE TABLE Lock (LockID uniqueidentifier)
INSERT INTO Lock SELECT NEWID()
-- Real insert
BEGIN TRAN LockTran
-- Lock an object to block simultaneous calls.
UPDATE Lock WITH(TABLOCK)
SET LockID = LockID
INSERT INTO Test
SELECT ISNULL(MAX(T.Value), 0) + 1
FROM Test T
COMMIT TRAN LockTran
We have a similar situation where we needed to increment and could not have gaps in the numbers. (If you use an identity value and a transaction is rolled back, that number will not be inserted and you will have gaps because the identity value does not roll back.)
We created a separate table for last number used and seeded it with 0.
Our insert takes a few steps.
BEGIN TRAN
DECLARE @number int
--increment the number
UPDATE dbo.NumberTable
SET number = number + 1
--find out what the incremented number is
SELECT @number = number
FROM dbo.NumberTable
--use the number (the target table and column here are schematic)
INSERT INTO dbo.MyTable (id) VALUES (@number)
--commit, or rollback (which also rolls the counter increment back, so no gaps)
COMMIT TRAN
This causes simultaneous transactions to process in single file, as each concurrent transaction will wait because the NumberTable is locked. As soon as the waiting transaction gets the lock, it increments the current value and locks it from others. That current value is the last number used, and if a transaction is rolled back, the NumberTable update is also rolled back, so there are no gaps.
Hope that helps.
Another way to cause single file execution is to use a SQL application lock. We have used that approach for longer running processes like synchronizing data between systems so only one synchronizing process can run at a time.
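A hedged sketch of that with sp_getapplock (the resource name is arbitrary, and the lock is released with the transaction):
BEGIN TRAN
EXEC sp_getapplock @Resource = 'Table1_KeyGen', @LockMode = 'Exclusive', @LockOwner = 'Transaction'
-- only one session at a time gets past this line for this resource,
-- so the MAX(id) read and the insert cannot interleave with another caller
DECLARE @id int
SELECT @id = ISNULL(MAX(id), 0) + 1 FROM Table1
INSERT INTO Table1 (id, data_field) VALUES (@id, '[blob of data]')
COMMIT TRAN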
If you're doing it in a trigger, you could make sure it's an "INSTEAD OF" trigger and do it in a couple of statements:
DECLARE @next INT
SET @next = (SELECT (MAX(id) + 1) FROM Table1)
INSERT INTO Table1
SELECT @next, datablob FROM inserted
The only thing you'd have to be careful about is concurrency - if two rows are inserted at the same time, they could attempt to use the same value for #next, causing a conflict.
Does this accomplish what you want?
It seems very odd to do this sort of thing w/o an IDENTITY (auto-increment) column, making me question the architecture itself. I mean, seriously, this is the perfect situation for an IDENTITY column. It might help us answer your question if you'd explain the reasoning behind this decision. =)
Having said that, some options are:
using an INSTEAD OF trigger for this purpose. So, you'd do your INSERT (the INSERT statement would not need to pass in an ID). The trigger code would handle inserting the appropriate ID. You'd need to use the WITH (UPDLOCK, HOLDLOCK) syntax used by another answerer to hold the lock for the duration of the trigger (which is implicitly wrapped in a transaction) & to elevate the lock type from "shared" to "update" lock (IIRC).
you can use the idea above, but have a table whose purpose is to store the last, max value inserted into the table. So, once the table is set up, you would no longer have to do a SELECT MAX(ID) every time. You'd simply increment the value in the table. This is safe provided that you use appropriate locking (as discussed). Again, that avoids repeated table scans every time you INSERT.
use GUIDs instead of IDs. It's much easier to merge tables across databases, since the GUIDs will always be unique (whereas records across databases will have conflicting integer IDs). To avoid page splitting, sequential GUIDs can be used. This is only beneficial if you might need to do database merging.
Use a stored proc in lieu of the trigger approach (since triggers are to be avoided, for some reason). You'd still have the locking issue (and the performance problems that can arise). But sprocs are preferred over dynamic SQL (in the context of applications), and are often much more performant.
Sorry about rambling. Hope that helps.
How about creating a separate table to maintain the counter? It has better performance than MAX(id), as it will be O(1); MAX(id) is at best O(log n), depending on the implementation.
And then when you need to insert, simply lock the counter table for reading the counter and increment the counter. Then you can release the lock and insert to your table with the incremented counter value.
Have a separate table where you keep your latest ID and for every transaction get a new one.
It may be a bit slower but it should work.
DECLARE @NEWID INT
BEGIN TRAN
UPDATE TABLE SET ID=ID+1
SELECT @NEWID=ID FROM TABLE
COMMIT TRAN
PRINT @NEWID -- Do what you want with your new ID
Code without any transaction scope (I use it in my engineering course as an exercise):
-- Preparation: execute only once
CREATE TABLE increment (val int);
INSERT INTO increment VALUES (1);
-- Real insert
DECLARE @newIncrement INT;
UPDATE increment
SET @newIncrement = val,
val = val + 1;
INSERT INTO Table1 (id, data_field)
SELECT @newIncrement, 'some data';
begin tran
declare @nextId int
set @nextId = (select MAX(id)+1 from Table1)
insert into Table1(id, data_field) values (@nextId, '[blob of data]')
commit;
But perhaps a better approach would be using a scalar function getNextId('table1')
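Note that a user-defined function in SQL Server can't modify data, so getNextId would really have to be a stored procedure. A hedged sketch, with a made-up IdCounters table holding one row per target table:
CREATE TABLE dbo.IdCounters (TableName sysname PRIMARY KEY, NextId int NOT NULL)
GO
CREATE PROCEDURE dbo.GetNextId
    @tableName sysname,
    @nextId int OUTPUT
AS
BEGIN
    -- increment and read back in one atomic statement; the row lock serializes callers
    UPDATE dbo.IdCounters
    SET @nextId = NextId = NextId + 1
    WHERE TableName = @tableName
END
GO
-- usage:
DECLARE @id int
EXEC dbo.GetNextId 'Table1', @id OUTPUT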
Any critiques of this? Works for me.
DECLARE @m_NewRequestID INT
, @m_IsError BIT = 1
, @m_CatchEndless INT = 0
WHILE @m_IsError = 1
BEGIN TRY
SELECT @m_NewRequestID = (SELECT ISNULL(MAX(RequestID), 0) + 1 FROM Requests)
INSERT INTO Requests ( RequestID
, RequestName
, Customer
, Comment
, CreatedFromApplication)
SELECT RequestID = @m_NewRequestID
, RequestName = dbo.ufGetNextAvailableRequestName(PatternName)
, Customer = @Customer
, Comment = [Description]
, CreatedFromApplication = @CreatedFromApplication
FROM RequestPatterns
WHERE PatternID = @PatternID
SET @m_IsError = 0
END TRY
BEGIN CATCH
SET @m_IsError = 1
SET @m_CatchEndless = @m_CatchEndless + 1;
IF @m_CatchEndless > 1000
THROW 51000, '[upCreateRequestFromPattern]: Unable to get new RequestID', 1
END CATCH
This should work:
INSERT INTO Table1 (id, data_field)
SELECT (SELECT (MAX(id) + 1) FROM Table1), '[blob of data]';
Or this (substitute LIMIT for other platforms):
INSERT INTO Table1 (id, data_field)
SELECT TOP 1
id + 1, '[blob of data]'
FROM
Table1
ORDER BY
[id] DESC;