tsql rowcount without executing update [duplicate] - sql

This question already has answers here:
how to know how many rows will be affected before running a query in microsoft sql server 2008
(5 answers)
Closed 4 years ago.
I have a T-SQL UPDATE statement and I'd like to know the number of rows that would be affected (the @@ROWCOUNT) before actually executing the update.
I think I could replace the UPDATE part of the query with a SELECT COUNT(1), but it seems to me there should be an easier way.
Is there an easier way?
If there is no easy solution in SQL, a solution in .NET/C# would be ok for me too; I'm using System.Data.SqlClient.SqlCommand to execute the command anyway.
I'm using MSSQL 2008.
Example:
create table T (
A char(5),
B int
)
insert into T values ('a', 1)
insert into T values ('A', 1)
insert into T values ('B', 2)
insert into T values ('X', 1)
insert into T values ('D', 4)
-- I do not want to execute this query
-- update T set A = 'X' where B = 1
-- but I want to get the @@ROWCOUNT of it; in this case
-- the wanted result would be 3.
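For reference, the SELECT COUNT rewrite I mentioned would look like this; note it only predicts the count, since concurrent writes could change the result before the real UPDATE runs:

```sql
-- Same predicate as the UPDATE, but read-only
select count(*) from T where B = 1;
-- returns 3 for the sample data above
```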

One method would be to use transactions:
begin transaction;
declare @rc int;
update T set A = 'X' where B = 1;
set @rc = @@ROWCOUNT;
. . .
commit; -- or rollback
Just about any other method would have race conditions, if other threads might be updating the table.
However, I am suspicious that this solves a real problem. I suspect that your real problem might have a better solution. Perhaps you should ask another question explaining what you are really trying to do.

You could wrap your script in a transaction and use ROLLBACK at the end to forgo saving your changes.
create table T (
A char(5),
B int
)
insert into T values ('a', 1)
insert into T values ('A', 1)
insert into T values ('B', 2)
insert into T values ('X', 1)
insert into T values ('D', 4)
BEGIN TRANSACTION;
update T set A = 'X' where B = 1
SELECT @@ROWCOUNT;
ROLLBACK TRANSACTION;
When you are ready to save your changes, switch ROLLBACK to COMMIT and execute, or you can simply remove those lines. You can also name your transactions. Ref: https://learn.microsoft.com/en-us/sql/t-sql/language-elements/transactions-transact-sql
You could add a SELECT * FROM T; after the rest of the script to confirm that your object was not actually updated.

Related

How to ignore the GO statements in IF EXISTS block?

I am trying to execute a script based on IF NOT EXISTS condition. But, the block within BEGIN and END gets executed even if IF NOT EXISTS returns 1. I cannot change statements within BEGIN and END blocks. How to handle this situation?
IF NOT EXISTS(SELECT 1 FROM [dbo].[UPGRADEHISTORY] WHERE SCRIPTNAME='001-MarkSubmitted.sql' AND RELEASENUMBER= '1')
BEGIN
IF NOT EXISTS(SELECT 1 FROM [dbo].[Action] WHERE Name='mark As Submitted')
BEGIN
SET IDENTITY_INSERT [dbo].[Action] ON
INSERT INTO [dbo].[Action](Id,Name,CreatedBy,CreatedOn) VALUES (6,'mark As Submitted',1,getdate())
SET IDENTITY_INSERT [dbo].[Action] OFF
END
GO
INSERT INTO [dbo].[StatusActionMapping](ArtifactType,StatusId,ActionId,RoleId) VALUES ('Report',11,6,1)
GO
INSERT INTO [dbo].[UpgradeHistory] ([ReleaseNumber],[ScriptNumber],[ScriptName],[ExecutionDate]) VALUES (1, (SELECT FORMAT(MAX(SCRIPTNUMBER) + 1, 'd3') FROM UpgradeHistory WHERE ReleaseNumber= 1),'001-MarkSubmitted.sql',GETDATE());
END
GO
As I mention in the comment, GO is not a Transact-SQL operator; it's interpreted by your IDE/CLI as a batch separator. SQL Server Utilities Statements - GO:
SQL Server provides commands that are not Transact-SQL statements, but are recognized by the sqlcmd and osql utilities and SQL Server Management Studio Code Editor. These commands can be used to facilitate the readability and execution of batches and scripts.
GO signals the end of a batch of Transact-SQL statements to the SQL Server utilities.
The answer is simple: remove the GOs. There is no need for them here; they are the cause of the error, because separating the statements into different batches makes no sense in this script. What you have is equivalent to having the following 3 "files" and trying to run them independently:
File 1:
IF NOT EXISTS(SELECT 1 FROM [dbo].[UPGRADEHISTORY] WHERE SCRIPTNAME='001-MarkSubmitted.sql' AND RELEASENUMBER= '1')
BEGIN
IF NOT EXISTS(SELECT 1 FROM [dbo].[Action] WHERE Name='mark As Submitted')
BEGIN
SET IDENTITY_INSERT [dbo].[Action] ON
INSERT INTO [dbo].[Action](Id,Name,CreatedBy,CreatedOn) VALUES (6,'mark As Submitted',1,getdate())
SET IDENTITY_INSERT [dbo].[Action] OFF
END
This would error due to a BEGIN without an END.
File 2:
INSERT INTO [dbo].[StatusActionMapping](ArtifactType,StatusId,ActionId,RoleId) VALUES ('Report',11,6,1)
This would run fine.
File 3:
INSERT INTO [dbo].[UpgradeHistory] ([ReleaseNumber],[ScriptNumber],[ScriptName],[ExecutionDate]) VALUES (1, (SELECT FORMAT(MAX(SCRIPTNUMBER) + 1, 'd3') FROM UpgradeHistory WHERE ReleaseNumber= 1),'001-MarkSubmitted.sql',GETDATE());
END
This would error due to an END without a BEGIN.
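For contrast, here is a case where a batch separator genuinely is required: a newly added column must be committed to the table's metadata in its own batch before later statements can reference it, otherwise the whole batch fails to compile. (The `LegacyCode` column here is invented purely for illustration.)

```sql
ALTER TABLE [dbo].[Action] ADD [LegacyCode] varchar(10) NULL;
GO -- without this batch break, the UPDATE below fails at parse time
UPDATE [dbo].[Action] SET [LegacyCode] = 'N/A';
```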
There's nothing in your query that would cause a parsing error (such as adding a new column to an existing table and referencing it later in the same batch), so there's no need to separate the batches. Just remove the GOs in the middle of your BEGIN...ENDs and this works as you require:
IF NOT EXISTS (SELECT 1
FROM [dbo].[UPGRADEHISTORY]
WHERE SCRIPTNAME = '001-MarkSubmitted.sql'
AND RELEASENUMBER = '1')
BEGIN
IF NOT EXISTS (SELECT 1
FROM [dbo].[Action]
WHERE Name = 'mark As Submitted')
BEGIN
SET IDENTITY_INSERT [dbo].[Action] ON;
INSERT INTO [dbo].[Action] (Id, Name, CreatedBy, CreatedOn)
VALUES (6, 'mark As Submitted', 1, GETDATE());
SET IDENTITY_INSERT [dbo].[Action] OFF;
END;
INSERT INTO [dbo].[StatusActionMapping] (ArtifactType, StatusId, ActionId, RoleId)
VALUES ('Report', 11, 6, 1);
INSERT INTO [dbo].[UpgradeHistory] ([ReleaseNumber], [ScriptNumber], [ScriptName], [ExecutionDate])
VALUES (1, (SELECT FORMAT(MAX(SCRIPTNUMBER) + 1, 'd3') --This is a REALLY bad idea. Use an IDENTITY or SEQUENCE
FROM UpgradeHistory
WHERE ReleaseNumber = 1), '001-MarkSubmitted.sql', GETDATE());
END;
GO
Also note my point on your final INSERT. FORMAT(MAX(SCRIPTNUMBER) + 1, 'd3') is going to end up with race conditions. Use an IDENTITY or SEQUENCE.
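A SEQUENCE-based version of that final INSERT might look like the following sketch; the sequence name is an assumption, and a DEFAULT constraint using NEXT VALUE FOR on the column itself would be another option:

```sql
-- One-time setup: a sequence replacing MAX(SCRIPTNUMBER) + 1
CREATE SEQUENCE dbo.ScriptNumberSeq AS int START WITH 1;
GO
-- Each caller gets a distinct number, with no race on MAX()
DECLARE @n int = NEXT VALUE FOR dbo.ScriptNumberSeq;
INSERT INTO [dbo].[UpgradeHistory] ([ReleaseNumber], [ScriptNumber], [ScriptName], [ExecutionDate])
VALUES (1, FORMAT(@n, 'd3'), '001-MarkSubmitted.sql', GETDATE());
```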

Microsoft SQL Server - best way to 'Update if exists, or Insert'

I've been searching around for the answers to this question, and there's some conflicting or ambiguous information out there, so it's hard to find a definitive answer.
My context: I'm in node.js using the 'mssql' npm package. My SQL server is Microsoft SQL Server 2014.
I have a record that may or may not exist in a table already -- if it exists I want to update it, otherwise I want to insert it. I'm not sure what the optimal SQL is, or if there's some kind of 'transaction' I should be running in mssql. I've found some options that seem good, but I'm not sure about any of them:
Option 1:
how to update if exists or insert
Problem with this is I'm not even sure this is valid syntax in MSSQL. I do like it though, and it seems to support doing multiple rows at once too which I like.
INSERT INTO table (id, user, date, points)
VALUES (1, 1, '2017-03-03', 25),
(2, 1, '2017-03-04', 25),
(3, 2, '2017-03-03', 100),
(4, 2, '2017-03-04', 150)
ON DUPLICATE KEY UPDATE points = VALUES(points)
Option 2:
I don't know if there's any problem with this one, just not sure if it's optimal. It doesn't seem to support multiple simultaneous rows.
update test set name='john' where id=3012
IF @@ROWCOUNT=0
insert into test(name) values('john');
Option 3: Merge, https://dba.stackexchange.com/questions/89696/how-to-insert-or-update-using-single-query
Some people say this is a bit buggy. This also apparently supports multiple rows at once, which I like.
MERGE dbo.Test WITH (SERIALIZABLE) AS T
USING (VALUES (3012, 'john')) AS U (id, name)
ON U.id = T.id
WHEN MATCHED THEN
UPDATE SET T.name = U.name
WHEN NOT MATCHED THEN
INSERT (id, name)
VALUES (U.id, U.name);
Each of these has a different purpose, with its own pros and cons.
Option 1 is good for multi-row inserts/updates, but note that ON DUPLICATE KEY UPDATE is MySQL syntax and is not valid in SQL Server; it also only checks primary key/unique constraints.
Option 2 is good for small sets of data: single-record insertion/update. It is more like a script.
Option 3 is best for big queries. Let's say you're reading from one table and inserting/updating into another accordingly. You can define which conditions must be satisfied for the insert and/or update, and you are not limited to a primary key/unique constraint.
If your system is highly concurrent, and performance is important - you can try following pattern, if updates are more common than inserts:
BEGIN TRANSACTION;
UPDATE dbo.t WITH (UPDLOCK, SERIALIZABLE) SET val = @val WHERE [key] = @key;
IF @@ROWCOUNT = 0
BEGIN
INSERT dbo.t([key], val) VALUES(@key, @val);
END
COMMIT TRANSACTION;
Reference: https://sqlperformance.com/2020/09/locking/upsert-anti-pattern
Also read: https://michaeljswart.com/2017/07/sql-server-upsert-patterns-and-antipatterns/
If inserts are more common:
BEGIN TRY
INSERT INTO dbo.AccountDetails (Email, Etc) VALUES (@Email, @Etc);
END TRY
BEGIN CATCH
-- ignore duplicate key errors, throw the rest.
IF ERROR_NUMBER() IN (2601, 2627)
UPDATE dbo.AccountDetails
SET Etc = @Etc
WHERE Email = @Email;
END CATCH
I wouldn't use merge, while most of the bugs are apparently fixed - we have had major issues with it before in production.
EDIT ---
Yes, the answers above were for single rows. For multiple rows, you'd do something like the following; the idea behind the locking is the same, though:
BEGIN TRANSACTION;
UPDATE t WITH (UPDLOCK, SERIALIZABLE)
SET val = tvp.val
FROM dbo.t AS t
INNER JOIN @tvp AS tvp
ON t.[key] = tvp.[key];
INSERT dbo.t([key], val)
SELECT [key], val FROM @tvp AS tvp
WHERE NOT EXISTS (SELECT 1 FROM dbo.t WHERE [key] = tvp.[key]);
COMMIT TRANSACTION;
Extending my comment here. There are known problems with MERGE in SQL Server, however, for what you're doing here you will likely be ok. Aaron Bertrand has an article on the subject which you can find here: Use Caution with SQL Server's MERGE Statement.
An alternative, however, for what you could do here would be using an "UPSERT"; UPDATE the existing rows, and then INSERT the ones that don't exist. This involves 2 separate statements, however, was the method used prior to MERGE:
UPDATE T
SET T.Name = U.Name
FROM dbo.Test T
JOIN (VALUES (3012, 'john')) AS U (id, name) ON T.id = U.id;
INSERT INTO dbo.Test (Name) --I'm assuming ID is an `IDENTITY` here
SELECT U.name
FROM (VALUES (3012, 'john')) AS U (id, name)
WHERE NOT EXISTS (SELECT 1
FROM dbo.Test T
WHERE T.ID = U.ID);
Note I have not declared any locking or transactions in this example, but you should in any implemented solution.

tSQL Trigger Logic, better to have IF clause, or IN clause?

I have been trying to create triggers to lessen the client side code that needs to be written. I have written the following two tSQL triggers and they both seem to produce the same results, I'm just wondering which one is the more proper way to do it. I'm using SQL Server 2012 if that makes any difference.
i.e.
which one uses fewer resources
executes faster
is more secure against attacks
etc...
CREATE TRIGGER tr_ProductGroup_INSERT_GroupMap
ON [qmgmt].[dbo].[ProductGroup]
After INSERT
AS
BEGIN
if (
select count([inserted].[groupID])
from [inserted]
where [inserted].[groupID] = 1
) = 0
begin
insert into [qmgmt].[dbo].[GroupMap]([parentGroupID], [childGroupID])
select 1, [inserted].[groupID]
from [inserted]
end
END
GO
OR
CREATE TRIGGER tr_ProductGroup_INSERT_GroupMap
ON [qmgmt].[dbo].[ProductGroup]
After INSERT
AS
BEGIN
insert into [qmgmt].[dbo].[GroupMap]([parentGroupID], [childGroupID])
select 1, [inserted].[groupID]
from [inserted]
where [inserted].[groupID] in
(
select [inserted].[groupID]
from [inserted]
where [inserted].[groupID] <> 1
)
END
GO
UPDATE:
Based on some of the comments here are the inserts I am using. The GroupMap table has the same results no matter which trigger I use.
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('root', 'The root of all groups')
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('orphans', 'This is where the members of deleted groups go')
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('SMGMT', 'Support Management')
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('ST1', 'Support Tier 1')
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('ST2', ' Support Tier 2')
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('ST3', 'Support Tier 3')
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('SaaSMGMT', 'Express Management')
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('SaaSSup', 'Support Express')
Since a comment is a bit too small to put this example in, I'm going to put it in an answer. The reason people say your triggers are functionally different is that, although your tests insert rows one by one, a trigger will also fire when you insert multiple rows into the table in one single operation. Based on your examples, you could try the following:
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription])
SELECT 'root', 'The root of all groups'
UNION ALL
SELECT 'orphans', 'This is where the members of deleted groups go'
UNION ALL
SELECT 'SMGMT', 'Support Management'
When running this query, the inserted table will hold 3 rows, and (depending on the data) the two trigger code examples might give different results.
Don't worry, this is a common misconception. The rule of thumb with SQL is to always think in record-sets, never in 'a single record with fields'.
As for your question (yes, I'm going for a real answer =)
I would suggest a variation on the second one.
CREATE TRIGGER tr_ProductGroup_INSERT_GroupMap
ON [qmgmt].[dbo].[ProductGroup]
After INSERT
AS
BEGIN
insert into [qmgmt].[dbo].[GroupMap]([parentGroupID], [childGroupID])
select 1, [inserted].[groupID]
from [inserted]
where [inserted].[groupID] <> 1
END
This way the server only needs to run over inserted once, decide which records to 'keep', and then store them right away into the destination table.
The question now is, if this does what you want it to do...

Simulate a deadlock using stored procedure

Does anyone know how to simulate a deadlock using a stored procedure inserting or updating values? I could only do so in sybase using individual commands.
Thanks,
Ver
Create two stored procedures.
The first should start a transaction, modify table 1 (and take a long time) and then modify table 2.
The second should start a transaction, modify table 2 (and take a long time) and then modify table 1.
Ideally, the modifications should affect the same rows, or create table locks.
Then, in a client application, start SP1, and then immediately start SP2 (before SP1 has finished).
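A minimal sketch of those two procedures; the table and column names here are invented for illustration, and the WAITFOR stands in for "takes a long time":

```sql
CREATE PROCEDURE dbo.SP1 AS
BEGIN
    BEGIN TRANSACTION;
    UPDATE dbo.TableA SET Val = Val + 1 WHERE Id = 1;
    WAITFOR DELAY '00:00:10';   -- hold the lock on TableA while waiting
    UPDATE dbo.TableB SET Val = Val + 1 WHERE Id = 1;
    COMMIT;
END
GO
CREATE PROCEDURE dbo.SP2 AS
BEGIN
    BEGIN TRANSACTION;
    UPDATE dbo.TableB SET Val = Val + 1 WHERE Id = 1;   -- opposite order
    WAITFOR DELAY '00:00:10';
    UPDATE dbo.TableA SET Val = Val + 1 WHERE Id = 1;   -- blocks against SP1, and vice versa
    COMMIT;
END
```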
The simple, short way to get a deadlock is to access the tables' data in reverse order from two connections, introducing a cyclic deadlock. Let me show you the code:
Create table vin_deadlock (id int, Name Varchar(30))
GO
Insert into vin_deadlock values (1, 'Vinod')
Insert into vin_deadlock values (2, 'Kumar')
Insert into vin_deadlock values (3, 'Saravana')
Insert into vin_deadlock values (4, 'Srinivas')
Insert into vin_deadlock values (5, 'Sampath')
Insert into vin_deadlock values (6, 'Manoj')
GO
Now, with the tables ready, just update the rows in reverse order from two connections:
-- Connection 1
Begin Tran
Update vin_deadlock
SET Name = 'Manoj'
Where id = 6
WAITFOR DELAY '00:00:10'
Update vin_deadlock
SET Name = 'Vinod'
Where id = 1
and from connection 2
-- Connection 2
Begin Tran
Update vin_deadlock
SET Name = 'Vinod'
Where id = 1
WAITFOR DELAY '00:00:10'
Update vin_deadlock
SET Name = 'Manoj'
Where id = 6
And this will result in a deadlock. You can see the deadlock graph from profiler.
Alternatively, start a process that continuously inserts into or updates a table using a while loop in a script, and run your desired SP at the same time.

Possible to implement a manual increment with just simple SQL INSERT?

I have a primary key that I don't want to auto increment (for various reasons) and so I'm looking for a way to simply increment that field when I INSERT. By simply, I mean without stored procedures and without triggers, so just a series of SQL commands (preferably one command).
Here is what I have tried thus far:
BEGIN TRAN
INSERT INTO Table1(id, data_field)
VALUES ( (SELECT (MAX(id) + 1) FROM Table1), '[blob of data]');
COMMIT TRAN;
* Data abstracted to use generic names and identifiers
However, when executed, the command errors, saying that
"Subqueries are not allowed in this
context. Only scalar expressions are
allowed"
So, how can I do this/what am I doing wrong?
EDIT: Since it was pointed out as a consideration, the table to be inserted into is guaranteed to have at least 1 row already.
You understand that you will have collisions, right?
You need to do something like this, and it might cause deadlocks, so be very sure what you are trying to accomplish here:
DECLARE @id int
BEGIN TRAN
SELECT @id = MAX(id) + 1 FROM Table1 WITH (UPDLOCK, HOLDLOCK)
INSERT INTO Table1(id, data_field)
VALUES (@id ,'[blob of data]')
COMMIT TRAN
To explain the collision issue, I have provided some code.
First, create this table and insert one row:
CREATE TABLE Table1(id int primary key not null, data_field char(100))
GO
Insert Table1 values(1,'[blob of data]')
Go
Now open up two query windows and run this at the same time
declare @i int
set @i = 1
while @i < 10000
begin
BEGIN TRAN
INSERT INTO Table1(id, data_field)
SELECT MAX(id) + 1, '[blob of data]' FROM Table1
COMMIT TRAN;
set @i = @i + 1
end
You will see a bunch of these
Server: Msg 2627, Level 14, State 1, Line 7
Violation of PRIMARY KEY constraint 'PK__Table1__3213E83F2962141D'. Cannot insert duplicate key in object 'dbo.Table1'.
The statement has been terminated.
Try this instead:
INSERT INTO Table1 (id, data_field)
SELECT id, '[blob of data]' FROM (SELECT MAX(id) + 1 as id FROM Table1) tbl
I wouldn't recommend doing it that way for any number of reasons though (performance, transaction safety, etc)
It could be because there are no records, so the subquery returns NULL. Try:
INSERT INTO tblTest(RecordID, Text)
VALUES ((SELECT ISNULL(MAX(RecordID), 0) + 1 FROM tblTest), 'asdf')
I don't know if somebody is still looking for an answer but here is a solution that seems to work:
-- Preparation: execute only once
CREATE TABLE Test (Value int)
CREATE TABLE Lock (LockID uniqueidentifier)
INSERT INTO Lock SELECT NEWID()
-- Real insert
BEGIN TRAN LockTran
-- Lock an object to block simultaneous calls.
UPDATE Lock WITH(TABLOCK)
SET LockID = LockID
INSERT INTO Test
SELECT ISNULL(MAX(T.Value), 0) + 1
FROM Test T
COMMIT TRAN LockTran
We have a similar situation where we needed to increment and could not have gaps in the numbers. (If you use an identity value and a transaction is rolled back, that number will not be inserted and you will have gaps because the identity value does not roll back.)
We created a separate table for last number used and seeded it with 0.
Our insert takes a few steps.
declare @number int
--increment the number
update dbo.NumberTable
set number = number + 1
--find out what the incremented number is
select @number = number
from dbo.NumberTable
--use the number
insert into dbo.MyTable using the @number
commit or rollback
This causes simultaneous transactions to process in a single line as each concurrent transaction will wait because the NumberTable is locked. As soon as the waiting transaction gets the lock, it increments the current value and locks it from others. That current value is the last number used and if a transaction is rolled back, the NumberTable update is also rolled back so there are no gaps.
Hope that helps.
Another way to cause single file execution is to use a SQL application lock. We have used that approach for longer running processes like synchronizing data between systems so only one synchronizing process can run at a time.
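The application-lock approach mentioned here uses sp_getapplock. A sketch of how it could gate the MAX(id) read, where the resource name is arbitrary:

```sql
BEGIN TRANSACTION;
-- Serialize all callers on a named resource; the lock is released
-- automatically at COMMIT/ROLLBACK (default @LockOwner = 'Transaction').
EXEC sp_getapplock @Resource = 'Table1_NextId', @LockMode = 'Exclusive';
DECLARE @id int = (SELECT ISNULL(MAX(id), 0) + 1 FROM Table1);
INSERT INTO Table1 (id, data_field) VALUES (@id, '[blob of data]');
COMMIT TRANSACTION;
```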
If you're doing it in a trigger, you could make sure it's an "INSTEAD OF" trigger and do it in a couple of statements:
DECLARE @next INT
SET @next = (SELECT (MAX(id) + 1) FROM Table1)
INSERT INTO Table1
SELECT @next, datablob FROM inserted
The only thing you'd have to be careful about is concurrency - if two rows are inserted at the same time, they could attempt to use the same value for @next, causing a conflict.
Does this accomplish what you want?
It seems very odd to do this sort of thing w/o an IDENTITY (auto-increment) column, making me question the architecture itself. I mean, seriously, this is the perfect situation for an IDENTITY column. It might help us answer your question if you'd explain the reasoning behind this decision. =)
Having said that, some options are:
using an INSTEAD OF trigger for this purpose. So, you'd do your INSERT (the INSERT statement would not need to pass in an ID). The trigger code would handle inserting the appropriate ID. You'd need to use the WITH (UPDLOCK, HOLDLOCK) syntax used by another answerer to hold the lock for the duration of the trigger (which is implicitly wrapped in a transaction) & to elevate the lock type from "shared" to "update" lock (IIRC).
you can use the idea above, but have a table whose purpose is to store the last, max value inserted into the table. So, once the table is set up, you would no longer have to do a SELECT MAX(ID) every time. You'd simply increment the value in the table. This is safe provided that you use appropriate locking (as discussed). Again, that avoids repeated table scans every time you INSERT.
use GUIDs instead of IDs. It's much easier to merge tables across databases, since the GUIDs will always be unique (whereas records across databases will have conflicting integer IDs). To avoid page splitting, sequential GUIDs can be used. This is only beneficial if you might need to do database merging.
Use a stored proc in lieu of the trigger approach (since triggers are to be avoided, for some reason). You'd still have the locking issue (and the performance problems that can arise). But sprocs are preferred over dynamic SQL (in the context of applications), and are often much more performant.
Sorry about rambling. Hope that helps.
How about creating a separate table to maintain the counter? It has better performance than MAX(id), as it will be O(1), whereas MAX(id) is at best O(lg n), depending on the implementation.
And then when you need to insert, simply lock the counter table for reading the counter and increment the counter. Then you can release the lock and insert to your table with the incremented counter value.
Have a separate table where you keep your latest ID and for every transaction get a new one.
It may be a bit slower but it should work.
DECLARE @NEWID INT
BEGIN TRAN
UPDATE TABLE SET ID=ID+1
SELECT @NEWID=ID FROM TABLE
COMMIT TRAN
PRINT @NEWID -- Do what you want with your new ID
Code without any transaction scope (I use it in my engineering course as an exercise):
-- Preparation: execute only once
CREATE TABLE increment (val int);
INSERT INTO increment VALUES (1);
-- Real insert
DECLARE @newIncrement INT;
UPDATE increment
SET @newIncrement = val,
val = val + 1;
INSERT INTO Table1 (id, data_field)
SELECT @newIncrement, 'some data';
declare @nextId int
set @nextId = (select MAX(id)+1 from Table1)
insert into Table1(id, data_field) values (@nextId, '[blob of data]')
commit;
But perhaps a better approach would be using a scalar function getNextId('table1')
Any critiques of this? Works for me.
DECLARE @m_NewRequestID INT
, @m_IsError BIT = 1
, @m_CatchEndless INT = 0
WHILE @m_IsError = 1
BEGIN TRY
SELECT @m_NewRequestID = (SELECT ISNULL(MAX(RequestID), 0) + 1 FROM Requests)
INSERT INTO Requests ( RequestID
, RequestName
, Customer
, Comment
, CreatedFromApplication)
SELECT RequestID = @m_NewRequestID
, RequestName = dbo.ufGetNextAvailableRequestName(PatternName)
, Customer = @Customer
, Comment = [Description]
, CreatedFromApplication = @CreatedFromApplication
FROM RequestPatterns
WHERE PatternID = @PatternID
SET @m_IsError = 0
END TRY
BEGIN CATCH
SET @m_IsError = 1
SET @m_CatchEndless = @m_CatchEndless + 1
IF @m_CatchEndless > 1000
THROW 51000, '[upCreateRequestFromPattern]: Unable to get new RequestID', 1
END CATCH
This should work:
INSERT INTO Table1 (id, data_field)
SELECT (SELECT (MAX(id) + 1) FROM Table1), '[blob of data]';
Or this (substitute LIMIT for other platforms):
INSERT INTO Table1 (id, data_field)
SELECT TOP 1
MAX(id) + 1, '[blob of data]'
FROM
Table1
ORDER BY
[id] DESC;