Typically, when you specify an identity column, SQL Server gives you a convenient way to ask for a particular row:
SELECT * FROM [table] WHERE $IDENTITY = @pID
You don't really need to concern yourself with the name of the identity column because there can only be one.
But what if I have a table which mostly consists of temporary data: lots of inserts and lots of deletes? Is there a simple way for me to reuse the identity values?
Preferably, I would want to be able to write a function that would return, say, NEXT_SMALLEST($IDENTITY) as the next identity value, and do so in a fail-safe manner.
Basically, find the smallest value that's not in use. That's not entirely trivial to do, but what I want is to be able to tell SQL Server that this is my function that will generate the identity values. As far as I know, no such function exists...
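For reference, the "smallest value not in use" can at least be computed in plain SQL; here is a minimal sketch, assuming a hypothetical table MyTable with an int id column (this only covers the lookup, since SQL Server provides no hook for plugging a custom generator into identity assignment):
-- 1 if id = 1 is free, otherwise the first gap above an existing id
SELECT CASE
         WHEN NOT EXISTS (SELECT 1 FROM MyTable WHERE id = 1) THEN 1
         ELSE (SELECT MIN(t.id) + 1
               FROM MyTable t
               WHERE NOT EXISTS (SELECT 1 FROM MyTable x WHERE x.id = t.id + 1))
       END AS NextSmallestId;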
I want to...
Implement global database IDs, so I need to provide a default value that I'm in control of.
My idea was that I should be able to have a table with all known IDs, and every row ID from any other table that needed a global ID would reference that table. The default value would be provided by something like:
INSERT INTO GlobalID DEFAULT VALUES
RETURN SCOPE_IDENTITY()
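A DEFAULT constraint can't run an INSERT, so the allocation above would have to live in a procedure. A minimal sketch of the idea, with illustrative names:
CREATE TABLE GlobalID (Id int IDENTITY(1, 1) PRIMARY KEY)
GO
CREATE PROCEDURE dbo.GetGlobalId @id int OUTPUT AS
BEGIN
    -- allocate one new global ID and hand it back to the caller
    INSERT INTO GlobalID DEFAULT VALUES
    SET @id = SCOPE_IDENTITY()
END
GO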
No; it's not unique if it can be reused.
Why do you want to re-use them? Why do you concern yourself with this field? If you want to be in control of it, don't make it an identity; create your own scheme and use that.
Don't reuse identities; you'll just shoot yourself in the foot. Use a large enough type so that it never rolls over (a 64-bit bigint).
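For example, a minimal declaration (names are illustrative):
CREATE TABLE MyTable (
    id bigint IDENTITY(1, 1) PRIMARY KEY,  -- ~9.2 quintillion values before rollover
    data varchar(100)
)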
To find missing gaps in a sequence of numbers join the table against itself with a +/- 1 difference:
SELECT a.id
FROM table AS a
LEFT OUTER JOIN table AS b ON a.id = b.id+1
WHERE b.id IS NULL;
This query will find the numbers in the id sequence for which id-1 is not in the table, i.e. contiguous-sequence start numbers. You can then use SET IDENTITY_INSERT <table> ON to insert a specific id and reuse a number. The cost of doing so is overwhelming (both runtime and code complexity) compared with an ordinary identity-based insert.
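A minimal sketch of reusing one recovered number this way, assuming the gap query returned 5 for a hypothetical table MyTable:
SET IDENTITY_INSERT MyTable ON
INSERT INTO MyTable (id, data) VALUES (5, 'reused')  -- explicit id is allowed while ON
SET IDENTITY_INSERT MyTable OFF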
If you really want to reset the identity value to the lowest,
here is a trick you can use with DBCC CHECKIDENT.
Basically, the following SQL statements reset the identity value so that it restarts from the lowest possible number:
create table TT (id int identity(1, 1))
GO
insert TT default values
GO 10
select * from TT
GO
delete TT where id between 5 and 10
GO
--; At this point, next ID will be 11, not 5
select * from TT
GO
insert TT default values
GO
--; as you can see here, next ID is indeed 11
select * from TT
GO
--; Now delete ID = 11
--; so that we can reseed next highest ID to 5
delete TT where id = 11
GO
--; Now, let's reseed the identity value to the lowest possible identity number
declare @seedID int
select @seedID = max(id) from TT
print @seedID --; 4
--; We reseed the identity column with "DBCC CHECKIDENT" and pass a new seed value
--; But we can't pass a variable as the seed argument directly, so let's use dynamic sql.
declare @sql nvarchar(200)
set @sql = 'dbcc checkident(TT, reseed, ' + cast(@seedID as varchar) + ')'
exec sp_sqlexec @sql
GO
--; Now the next insert will take ID 5
insert TT default values
GO
--; as you can see here, next ID is indeed 5
select * from TT
GO
I guess we would really need to know why you want to reuse your identity column. The only reason I can think of is because of the temporary nature of your data you might exhaust the possible values for the identity. That is not really likely, but if that is your concern, you can use uniqueidentifiers (guids) as the primary key in your table instead.
The function newid() will create a new guid and can be used in insert statements (or other statements). Then when you delete the row, you don't have any "holes" in your key because guids are not created in that order anyway.
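A minimal sketch of that approach (names are illustrative):
CREATE TABLE MyTable (
    id uniqueidentifier NOT NULL PRIMARY KEY DEFAULT NEWID(),
    data varchar(100)
)
-- the key is generated automatically; deletes leave no meaningful "holes"
INSERT INTO MyTable (data) VALUES ('some data')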
[Syntax assumes SQL2008....]
Yes, it's possible. You need two management tables, and two triggers on each participating table.
First, the management tables:
-- this table should only ever have one row
CREATE TABLE NextId (Id INT)
INSERT NextId VALUES (1)
GO
CREATE TABLE RecoveredIds (Id INT NOT NULL PRIMARY KEY)
GO
Then, the triggers, two on each table:
CREATE TRIGGER tr_TableName_RecoverId ON TableName
FOR DELETE AS BEGIN
IF @@ROWCOUNT = 0 RETURN
INSERT RecoveredIds (Id) SELECT Id FROM deleted
END
GO
CREATE TRIGGER tr_TableName_AssignId ON TableName
INSTEAD OF INSERT AS BEGIN
DECLARE @rowcount INT = @@ROWCOUNT
IF @rowcount = 0 RETURN
DECLARE @required INT = @rowcount
DECLARE @new_ids TABLE (Id INT PRIMARY KEY)
DELETE TOP (@required) OUTPUT DELETED.Id INTO @new_ids (Id) FROM RecoveredIds
SET @rowcount = @@ROWCOUNT
IF @rowcount < @required BEGIN
DECLARE @output TABLE (Id INT)
UPDATE NextId SET Id = Id + (@required-@rowcount)
OUTPUT DELETED.Id INTO @output
-- this assumes you have a numbers table around somewhere
INSERT @new_ids (Id)
SELECT n.Number+o.Id-1 FROM Numbers n, @output o
WHERE n.Number BETWEEN 1 AND @required-@rowcount
END
SET IDENTITY_INSERT TableName ON
;WITH inserted_CTE AS (SELECT _no = ROW_NUMBER() OVER (ORDER BY Id), * FROM inserted)
, new_ids_CTE AS (SELECT _no = ROW_NUMBER() OVER (ORDER BY Id), * FROM @new_ids)
INSERT TableName (Id, Attr1, Attr2)
SELECT n.Id, i.Attr1, i.Attr2
FROM inserted_CTE i JOIN new_ids_CTE n ON i._no = n._no
SET IDENTITY_INSERT TableName OFF
END
You could script the triggers out easily enough from system tables.
You would want to test this for concurrency. It should work as is, syntax errors notwithstanding: The OUTPUT clause guarantees atomicity of id lookup->increment as one step, and the entire operation occurs within a transaction, thanks to the trigger.
TableName.Id is still an identity column. All the common idioms like $IDENTITY and SCOPE_IDENTITY() will still work.
There is no central table of ids by table, but you could create one easily enough.
I don't have any help for finding the values not in use but if you really want to find them and set them yourself, you can use
set identity_insert <your table> on ....
in your code to do so.
I'm with everyone else though. Why bother? Don't you have a business problem to solve?
I have a table where I'm trying to make sure that the aggregate sum of the inserted values adds up to 1 (it's a mixture).
I want to constrain it so that the whole FKID = 2 group fails because it adds up to 1.1.
Currently my constraint is:
CREATE FUNCTION [dbo].[CheckSumTarget](@ID bigint)
RETURNS bit
AS BEGIN
DECLARE @Res BIT
SELECT @Res = Count(1)
FROM dbo.Test AS t
WHERE t.FKID = @ID
GROUP BY t.FKID
HAVING Sum(t.[Value])<>1
RETURN @Res
END
GO
ALTER TABLE dbo.Test WITH CHECK ADD CONSTRAINT [CK_Target_Sum] CHECK (([dbo].[CheckSumTarget]([FKID])<>(1)))
but it's failing on the first insert because it doesn't add up to 1 yet. I was hoping if I add them all simultaneously, that wouldn't be the case.
This approach seems fraught with problems.
I would suggest another approach, starting with two tables:
aggregates, so "fkid" should really be aggregate_id
components
Then, in aggregates accumulate the sum() of the component values using a trigger. Maintain another flag that is computed:
alter table aggregates add is_valid as (case when sum_value = 1.0 then 1 else 0 end)
Then, create views on the two tables to only show records where is_valid = 1. For instance:
create view v_aggregates as
select c.*
from aggregates a join
components c
on a.aggregate_id = c.aggregate_id
where a.is_valid = 1;
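A minimal sketch of the maintaining trigger, assuming hypothetical columns aggregates(aggregate_id, sum_value) and components(aggregate_id, value):
create trigger tr_components_sum on components
after insert, update, delete as begin
    -- recompute the stored sum for every aggregate touched by this statement
    update a
    set sum_value = (select coalesce(sum(c.value), 0)
                     from components c
                     where c.aggregate_id = a.aggregate_id)
    from aggregates a
    where a.aggregate_id in (select aggregate_id from inserted
                             union select aggregate_id from deleted);
end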
Here is a working version of the solution.
Here is the table DDL:
create table dbo.test(
id int,
fkid bigint,
value decimal(4,2)
);
The function definition:
CREATE FUNCTION [dbo].[CheckSumTarget](@ID bigint)
RETURNS bit
AS BEGIN
DECLARE @Res bit
SELECT @Res = case when sum(value) > 1 then 1 else 0 end
FROM dbo.Test AS t
WHERE t.FKID = @ID
RETURN @Res
END
And the constraint definition:
ALTER TABLE dbo.Test WITH CHECK ADD CONSTRAINT [CK_Target_Sum] CHECK ([dbo].[CheckSumTarget]([FKID]) <> 1)
In your example:
insert into dbo.test values (1, 2, 0.5);
insert into dbo.test values (1, 2, 0.4);
-- The following insert will fail, like you expect
insert into dbo.test values (1, 2, 0.2);
Note: This solution will be broken by UPDATE statements (as pointed out by Daniel Brughera); however, that is a known behaviour. A better and common approach is the use of a trigger. You may want to explore that.
Your current approach will work this way...
You insert the first component; the value must be 1.
You try to insert a second component; it will be rejected because your sum is already 1.
You update the existing component to .85.
You insert the next component; the value must be .15.
You go back to step 2 with the third component.
Since your constraint only takes care of the FKID column, this will be possible, and you may think it is working....
But.... if you leave the process at step 3, your sum is not equal to 1, and it is impossible for the constraint to foresee whether you will insert the next value or not. Even worse, you can update any value to be greater than 1 and it will be accepted.
If you add the value column to your constraint, it will prevent those updates, but you will never be able to go beyond step 1.
Personally, I wouldn't do that, but here is an approach you can take:
Use the computed column suggested by Gordon on your parent table. With computed columns you will always get the actual value, so the parent won't be valid until the sum is equal to one.
Use this solution to prevent the value from being greater than 1; that way, at least you will be sure that any non-valid parent is just missing a component, which can be helpful for your business layer.
As I mentioned in one comment, the rest of the logic belongs to the business and UI layers.
Note: as you can see, the id and value parameters are not used in the function, but I need to pass them when I create the constraint; that way the constraint will validate updates too.
CREATE TABLE ttest (id int, fkid int, value float)
go
CREATE FUNCTION [dbo].[CheckSumTarget](@id int, @fkid int, @value float)
RETURNS FLOAT
AS BEGIN
DECLARE @Res float
SELECT @Res = sum(value)
FROM dbo.ttest AS t
WHERE t.FKID = @fkid
RETURN @Res
END
GO
ALTER TABLE dbo.ttest WITH CHECK ADD CONSTRAINT [CK_Target_Sum] CHECK (([dbo].[CheckSumTarget](id,[FKID],value)<=(1.0)))
I am using SQL Server 2008 and sometimes SQL Server 2012, and I need to do an insert on a pre-existing database where the schema does not seem to have the id column set for auto-increment.
So for example, here is a basic table My_Table that I have:
[Id] [Name]
-------------
1 JOHN
2 BOB
3 SALLY
I need to get the MAX value of the Id and then use this value for my new insert statement. I believe this is the best approach.
So for example the "pseudo syntax" for the insert would be:
INSERT INTO My_Table
VALUES(MAX(Id) + 1, 'MIKE');
What is the best way to do this when the Id is not set to auto-increment in the table schema?
Note: I am using Microsoft SQL Server.
Thanks for any advice.
I recommend changing the structure of the table to use an identity column. The table definition should be:
create table my_table (
id int identity(1, 1) primary key,
. . .
);
This is the right solution. If you can't do that, you can express the logic as:
insert into my_table (id, name)
select coalesce(1 + max(id), 0), 'Mike'
from my_table;
However, this suffers from race conditions. Two threads could attempt an insert at the same time and end up with the same id. Avoiding such race conditions is why you want the database to do the work.
If you are in control of all inserts into the table, you can use a sequence as well (SQL Server 2012 and later), as sketched below.
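A minimal sketch of the sequence approach, assuming SQL Server 2012+ (the sequence name is illustrative):
create sequence my_table_seq as int start with 1 increment by 1;

insert into my_table (id, name)
values (next value for my_table_seq, 'Mike');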
You could create another table with an IDENTITY column:
CREATE TABLE ID_Insert (
ID INT IDENTITY(234, 1) primary key,
Val smallint null
)
(Where your seed value will be MAX(ID) + 1)
Insert any value into this table:
Insert ID_Insert(Val) values(NULL)
Use SCOPE_IDENTITY() to get the ID of the inserted value and use that in your insert into your other table.
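Putting it together, a minimal sketch of that flow:
DECLARE @newId INT
INSERT ID_Insert (Val) VALUES (NULL)
SET @newId = SCOPE_IDENTITY()  -- the identity generated by ID_Insert
INSERT INTO My_Table (Id, Name) VALUES (@newId, 'MIKE')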
NOTE: I have not tested this, but it gets around all the issues raised so far, so any criticism is welcome.
If you can't change the table structure so that the Id is an IDENTITY column, then this is probably your best bet:
SET XACT_ABORT ON;
BEGIN TRANSACTION;
DECLARE @maxId int = (SELECT MAX(id) FROM my_table);
INSERT INTO my_table (id, name)
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 1)) + @maxId, name
FROM my_other_table;
COMMIT TRANSACTION;
This works for batch inserts, not only single name inserts.
In SQL Server, and maybe other databases, if a column is an auto-increment int type, the table can remember the highest value even if the record has been deleted. Let's say you have a table with some records previously deleted, and you then use SELECT MAX([ColumnName]); it might return a value associated with a deleted record. Has anyone seen scenarios like this?
This will not happen with SELECT MAX in SQL Server (as can be seen from the below script).
CREATE TABLE TEST_TABLE (
ID INT IDENTITY (1,1),
Val INT
)
INSERT INTO TEST_TABLE (Val) SELECT 1
INSERT INTO TEST_TABLE (Val) SELECT 2
INSERT INTO TEST_TABLE (Val) SELECT 3
INSERT INTO TEST_TABLE (Val) SELECT 4
INSERT INTO TEST_TABLE (Val) SELECT 5
DELETE FROM TEST_TABLE WHERE Val >= 3
SELECT MAX(ID) MaxID
FROM TEST_TABLE
DROP TABLE TEST_TABLE
Output
MaxID
2
If you were looking for the table's identity value, see @AdaTheDev's answer.
Yes, that's by design. Used IDENTITY values are not re-assigned when a record is deleted.
To get the last identity value assigned, you'd need to use IDENT_CURRENT, which (quote)
Returns the last identity value generated for a specified table or view. The last identity value generated can be for any session and any scope.
e.g.
SELECT IDENT_CURRENT('TableName')
I'd say that if you're taking the MAX() on an identity/auto-increment column, then you're doing something wrong.
As @astander says, it's not behaviour I've observed under SQL Server.
You should treat such identifiers as opaque - that they happen to be structured as an integer is a mere convenience to you, with well known storage requirements, clustering behaviours, etc.
This is a dev board (not a sysadmin's or DBA's), so don't do this in production!
To check what is the next ID:
DBCC CHECKIDENT ('YourTableName', NORESEED)
Note that your next ID will be @MaxID+1:
DECLARE @MaxID INT;
SELECT @MaxID = MAX(ID) FROM YourTableName;
DBCC CHECKIDENT('YourTableName', RESEED, @MaxID);
I have a two-column table with a primary key (int) and a unique value (nvarchar(255)).
When I insert a value to this table, I can use Scope_identity() to return the primary key for the value I just inserted. However, if the value already exists, I have to perform an additional select to return the primary key for a follow up operation (inserting that primary key into a second table)
I'm thinking there must be a better way to do this. I considered using covered indexes, but the table only has two columns, and most of what I've read on covered indexes suggests they only help where the table is significantly larger than the index.
Is there any faster way to do this? Would a covered index be faster even if its the same size as the table?
Building an index won't gain you anything since you have already created your value column as unique (which builds an index in the background). Effectively, a full table scan is no different from an index scan in your scenario.
I assume you want to have a sort of insert-if-not-exists behaviour. There is no way of getting around a second select:
if not exists (select ID from ... where name = @...)
insert into ...
select SCOPE_IDENTITY()
else
select ID from ... where name = @...
If the value happens to exist, the query will usually have been cached, so there should be no performance hit for the second ID select.
[Update statement here]
IF (@@ROWCOUNT = 0)
BEGIN
[Insert statement here]
SELECT Scope_Identity()
END
ELSE
BEGIN
[SELECT id statement here]
END
I don't know about performance, but it has no big overhead.
As has already been mentioned this really shouldn't be a slow operation, especially if you index both columns. However if you are determined to reduce the expense of this operation then I see no reason why you couldn't remove the table entirely and just use the unique value directly rather than looking it up in this table. A 1-1 mapping like this is (theoretically) redundant. I say theoretically because there may be performance implications to using an nvarchar instead of an int.
I'll post this answer since everyone else seems to say you have to query the table twice in the event that the record exists... that's not true.
Step 1) Create a unique-index on the other column:
I recommend this as the index:
-- We're including the "ID" column so that SQL will not have to look far once the "WHERE" clause is finished.
CREATE INDEX MyLilIndex ON dbo.MyTable (Column2) INCLUDE (ID)
Step 2)
DECLARE @TheID INT
SELECT @TheID = ID from MyTable WHERE Column2 = 'blah blah'
IF (@TheID IS NOT NULL)
BEGIN
-- See, you don't have to query the table twice!
SELECT @TheID AS TheIDYouWanted
END
ELSE
INSERT...
SELECT SCOPE_IDENTITY() AS TheIDYouWanted
Create a unique index for the second entry, then:
if not exists (select null from ...)
insert into ...
else
select x from ...
You can't get away from the index, and it isn't really much overhead; SQL Server supports index key columns up to 900 bytes, and does not discriminate.
The needs of your model are more important than any perceived performance issues; symbolising a string (which is what you are doing) is a common method to reduce database size, and this indirectly (and generally) means better performance.
-- edit --
To appease timothy:
declare @x int = (select x from ...)
if (@x is not null)
return @x
else
...
You could use the OUTPUT clause to return the value in the same statement. Here is an example.
DDL:
CREATE TABLE ##t (
id int PRIMARY KEY IDENTITY(1,1),
val varchar(255) NOT NULL
)
GO
-- no need for INCLUDE as PK column is always included in the index
CREATE UNIQUE INDEX AK_t_val ON ##t (val)
DML:
DECLARE @id int, @val varchar(255)
SET @val = 'test' -- or whatever you need here
SELECT @id = id FROM ##t WHERE val = @val
IF (@id IS NULL)
BEGIN
DECLARE @new TABLE (id int)
INSERT INTO ##t (val)
OUTPUT inserted.id INTO @new -- put new ID into table variable immediately
VALUES (@val)
SELECT @id = id FROM @new
END
PRINT @id
I have a primary key that I don't want to auto increment (for various reasons) and so I'm looking for a way to simply increment that field when I INSERT. By simply, I mean without stored procedures and without triggers, so just a series of SQL commands (preferably one command).
Here is what I have tried thus far:
BEGIN TRAN
INSERT INTO Table1(id, data_field)
VALUES ( (SELECT (MAX(id) + 1) FROM Table1), '[blob of data]');
COMMIT TRAN;
* Data abstracted to use generic names and identifiers
However, when executed, the command errors, saying that
"Subqueries are not allowed in this
context. only scalar expressions are
allowed"
So, how can I do this/what am I doing wrong?
EDIT: Since it was pointed out as a consideration, the table to be inserted into is guaranteed to have at least 1 row already.
You understand that you will have collisions, right?
You need to do something like this, and it might cause deadlocks, so be very sure what you are trying to accomplish here:
DECLARE @id int
BEGIN TRAN
SELECT @id = MAX(id) + 1 FROM Table1 WITH (UPDLOCK, HOLDLOCK)
INSERT INTO Table1(id, data_field)
VALUES (@id, '[blob of data]')
COMMIT TRAN
To explain the collision thing, I have provided some code
First, create this table and insert one row:
CREATE TABLE Table1(id int primary key not null, data_field char(100))
GO
Insert Table1 values(1,'[blob of data]')
Go
Now open up two query windows and run this at the same time
declare @i int
set @i = 1
while @i < 10000
begin
BEGIN TRAN
INSERT INTO Table1(id, data_field)
SELECT MAX(id) + 1, '[blob of data]' FROM Table1
COMMIT TRAN;
set @i = @i + 1
end
You will see a bunch of these
Server: Msg 2627, Level 14, State 1, Line 7
Violation of PRIMARY KEY constraint 'PK__Table1__3213E83F2962141D'. Cannot insert duplicate key in object 'dbo.Table1'.
The statement has been terminated.
Try this instead:
INSERT INTO Table1 (id, data_field)
SELECT id, '[blob of data]' FROM (SELECT MAX(id) + 1 as id FROM Table1) tbl
I wouldn't recommend doing it that way for any number of reasons though (performance, transaction safety, etc)
It could be because there are no records, so the subquery is returning NULL... try:
INSERT INTO tblTest(RecordID, Text)
VALUES ((SELECT ISNULL(MAX(RecordID), 0) + 1 FROM tblTest), 'asdf')
I don't know if somebody is still looking for an answer but here is a solution that seems to work:
-- Preparation: execute only once
CREATE TABLE Test (Value int)
CREATE TABLE Lock (LockID uniqueidentifier)
INSERT INTO Lock SELECT NEWID()
-- Real insert
BEGIN TRAN LockTran
-- Lock an object to block simultaneous calls.
UPDATE Lock WITH(TABLOCK)
SET LockID = LockID
INSERT INTO Test
SELECT ISNULL(MAX(T.Value), 0) + 1
FROM Test T
COMMIT TRAN LockTran
We have a similar situation where we needed to increment and could not have gaps in the numbers. (If you use an identity value and a transaction is rolled back, that number will not be inserted and you will have gaps because the identity value does not roll back.)
We created a separate table for last number used and seeded it with 0.
Our insert takes a few steps.
--increment the number
Update dbo.NumberTable
set number = number + 1
--find out what the incremented number is
select @number = number
from dbo.NumberTable
--use the number
insert into dbo.MyTable using the @number
commit or rollback
This causes simultaneous transactions to process in a single line as each concurrent transaction will wait because the NumberTable is locked. As soon as the waiting transaction gets the lock, it increments the current value and locks it from others. That current value is the last number used and if a transaction is rolled back, the NumberTable update is also rolled back so there are no gaps.
Hope that helps.
Another way to cause single-file execution is to use a SQL application lock. We have used that approach for longer-running processes, like synchronizing data between systems, so only one synchronizing process can run at a time; a minimal sketch follows.
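A minimal sketch of that pattern with sp_getapplock (the resource name is illustrative):
BEGIN TRAN
-- take an exclusive, transaction-scoped application lock; wait up to 10 seconds
EXEC sp_getapplock @Resource = 'MyTable_IdAllocation',
                   @LockMode = 'Exclusive',
                   @LockOwner = 'Transaction',
                   @LockTimeout = 10000
-- ... do the MAX(id) + 1 read and insert here ...
COMMIT TRAN -- the lock is released with the transaction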
If you're doing it in a trigger, you could make sure it's an "INSTEAD OF" trigger and do it in a couple of statements:
DECLARE @next INT
SET @next = (SELECT (MAX(id) + 1) FROM Table1)
INSERT INTO Table1
SELECT @next, datablob FROM inserted
The only thing you'd have to be careful about is concurrency - if two rows are inserted at the same time, they could attempt to use the same value for @next, causing a conflict.
Does this accomplish what you want?
It seems very odd to do this sort of thing w/o an IDENTITY (auto-increment) column, making me question the architecture itself. I mean, seriously, this is the perfect situation for an IDENTITY column. It might help us answer your question if you'd explain the reasoning behind this decision. =)
Having said that, some options are:
using an INSTEAD OF trigger for this purpose. So, you'd do your INSERT (the INSERT statement would not need to pass in an ID). The trigger code would handle inserting the appropriate ID. You'd need to use the WITH (UPDLOCK, HOLDLOCK) syntax used by another answerer to hold the lock for the duration of the trigger (which is implicitly wrapped in a transaction) & to elevate the lock type from "shared" to "update" lock (IIRC).
you can use the idea above, but have a table whose purpose is to store the last, max value inserted into the table. So, once the table is set up, you would no longer have to do a SELECT MAX(ID) every time. You'd simply increment the value in the table. This is safe provided that you use appropriate locking (as discussed). Again, that avoids repeated table scans every time you INSERT.
use GUIDs instead of IDs. It's much easier to merge tables across databases, since the GUIDs will always be unique (whereas records across databases will have conflicting integer IDs). To avoid page splitting, sequential GUIDs can be used (see the sketch after this list). This is only beneficial if you might need to do database merging.
Use a stored proc in lieu of the trigger approach (since triggers are to be avoided, for some reason). You'd still have the locking issue (and the performance problems that can arise). But sprocs are preferred over dynamic SQL (in the context of applications), and are often much more performant.
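For the sequential-GUID option above, a minimal sketch (the table name is illustrative; NEWSEQUENTIALID() is only allowed as a column default):
CREATE TABLE MyTable (
    id uniqueidentifier NOT NULL DEFAULT NEWSEQUENTIALID() PRIMARY KEY,
    data varchar(100)
)
-- ids come out roughly ordered, which limits page splits on the clustered index
INSERT INTO MyTable (data) VALUES ('some data')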
Sorry about rambling. Hope that helps.
How about creating a separate table to maintain the counter? It has better performance than MAX(id), as it will be O(1). MAX(id) is at best O(lg n), depending on the implementation.
And then when you need to insert, simply lock the counter table for reading the counter and increment the counter. Then you can release the lock and insert to your table with the incremented counter value.
Have a separate table where you keep your latest ID and for every transaction get a new one.
It may be a bit slower but it should work.
DECLARE @NEWID INT
BEGIN TRAN
UPDATE [Table] SET ID = ID + 1
SELECT @NEWID = ID FROM [Table]
COMMIT TRAN
PRINT @NEWID -- Do what you want with your new ID
Code without any transaction scope (I use it in my engineering course as an exercise):
-- Preparation: execute only once
CREATE TABLE increment (val int);
INSERT INTO increment VALUES (1);
-- Real insert
DECLARE @newIncrement INT;
UPDATE increment
SET @newIncrement = val,
val = val + 1;
INSERT INTO Table1 (id, data_field)
SELECT @newIncrement, 'some data';
declare @nextId int
set @nextId = (select MAX(id)+1 from Table1)
insert into Table1(id, data_field) values (@nextId, '[blob of data]')
commit;
But perhaps a better approach would be using a scalar function getNextId('table1'), along the lines of the sketch below.
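T-SQL scalar functions can't modify data, so a getNextId helper would in practice have to be a stored procedure. A minimal sketch against a hypothetical Counters table:
CREATE TABLE Counters (TableName sysname PRIMARY KEY, LastId int NOT NULL)
GO
CREATE PROCEDURE dbo.getNextId @tableName sysname, @nextId int OUTPUT AS
BEGIN
    -- read and increment in one atomic statement
    UPDATE Counters
    SET @nextId = LastId = LastId + 1
    WHERE TableName = @tableName
END
GO
-- usage:
DECLARE @id int
EXEC dbo.getNextId 'table1', @id OUTPUT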
Any critiques of this? Works for me.
DECLARE @m_NewRequestID INT
, @m_IsError BIT = 1
, @m_CatchEndless INT = 0
WHILE @m_IsError = 1
BEGIN TRY
SELECT @m_NewRequestID = (SELECT ISNULL(MAX(RequestID), 0) + 1 FROM Requests)
INSERT INTO Requests ( RequestID
, RequestName
, Customer
, Comment
, CreatedFromApplication)
SELECT RequestID = @m_NewRequestID
, RequestName = dbo.ufGetNextAvailableRequestName(PatternName)
, Customer = @Customer
, Comment = [Description]
, CreatedFromApplication = @CreatedFromApplication
FROM RequestPatterns
WHERE PatternID = @PatternID
SET @m_IsError = 0
END TRY
BEGIN CATCH
SET @m_IsError = 1
SET @m_CatchEndless = @m_CatchEndless + 1
IF @m_CatchEndless > 1000
THROW 51000, '[upCreateRequestFromPattern]: Unable to get new RequestID', 1
END CATCH
This should work:
INSERT INTO Table1 (id, data_field)
SELECT (SELECT (MAX(id) + 1) FROM Table1), '[blob of data]';
Or this (substitute LIMIT for other platforms):
INSERT INTO Table1 (id, data_field)
SELECT TOP 1
id + 1, '[blob of data]'
FROM
Table1
ORDER BY
[id] DESC;