Why doesn't this ROLLBACK TRANSACTION statement work?
BEGIN TRANSACTION;
DECLARE @foo INT;
EXECUTE [database].[dbo].[get_counter] @CounterID='inventory_record', @nextValue=@foo OUTPUT;
ROLLBACK TRANSACTION;
Background
I'm inserting records into a customer's ERP system built on SQL Server 2019. The ERP database doesn't have auto-incrementing primary keys. Instead, it uses a table called counters where each row has a counterID field and an integer value field.
To insert a new row into a table like inventory_record, I first need to call a stored procedure like this:
EXECUTE get_counter @counterID='inventory_record'
This procedure returns an OUTPUT parameter called @nextValue, which I then INSERT into the inventory_record table as its uid.
I need to ROLLBACK this stored procedure's behavior if my insert fails. That way the counter doesn't increase boundlessly on failed INSERT attempts.
Contents of get_counter stored procedure
It's dirt simple but also subject to copyright, so I've summarized and truncated it here. The counters are stored as sequences in the DB. So get_counter calls sp_sequence_get_range after checking that the requested counter is legitimate.
ALTER PROCEDURE get_counter
    @strCounterID varchar(64),
    @iIncrementValue integer = 1,
    @LastValue BIGINT = NULL OUTPUT
AS
SET NOCOUNT ON
BEGIN
    DECLARE
        @nextSeqVar SQL_VARIANT
        , @lastSeqVar SQL_VARIANT
    -- code that confirms valid counter name
    BEGIN TRY
        -- code that calls [sp_sequence_get_range]
    END TRY
    BEGIN CATCH
        THROW;
    END CATCH
    RETURN(@LastValue)
END
The Problem
The inventory_record counter always increments. I can't roll it back.
If I run the SQL at the top of this question from SSMS, then SELECT value FROM counters WHERE counterID = 'inventory_record', the counter increments each time I execute.
I'm new to transaction handling in SQL Server. Any ideas what I'm missing?
Re-posting my comments as an answer for better readability.
get_counter is using sequence numbers (sp_sequence_get_range). Please refer to the Limitations section of the documentation:
Sequence numbers are generated outside the scope of the current transaction. They are consumed whether the transaction using the sequence number is committed or rolled back.
You may see a simple demo here
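A minimal sketch of that behavior (assuming a throwaway sequence named demo_seq, which is not part of the original question):

```sql
CREATE SEQUENCE demo_seq START WITH 1 INCREMENT BY 1;

BEGIN TRANSACTION;
DECLARE @v SQL_VARIANT;
-- Grab a range of one value inside the transaction
EXEC sp_sequence_get_range
    @sequence_name = N'demo_seq',
    @range_size = 1,
    @range_first_value = @v OUTPUT;
SELECT @v AS first_value_in_range;
ROLLBACK TRANSACTION;

-- Despite the rollback, the consumed range is not returned:
-- the next value continues past it rather than repeating.
SELECT NEXT VALUE FOR demo_seq;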
Related
Trying to perform multiple consecutive inserts into a table without an identity key.
The unique id comes from a procedure called GetNextObjectId. GetNextObjectId is a stored procedure that has no output parameter and no return value.
Instead, it SELECTs a TOP 1 int field.
Tried this:
declare @nextid int;
exec @nextid = GetNextObjectId 1; insert into MyTable values (@nextid, ...);
exec @nextid = GetNextObjectId 1; insert into MyTable values (@nextid, ...);
go
Then this:
declare @nextid int; exec @nextid = GetNextObjectId 1; insert into MyTable values (@nextid, ...);
go
declare @nextid int; exec @nextid = GetNextObjectId 1; insert into MyTable values (@nextid, ...);
go
But the value of @nextid in the insert is always the same.
Question
What is the proper way to refresh the value of this variable without modifying the stored procedure?
Some context
The origin of this question is me looking for a quick way to insert test data into a table using the existing stored procedure, and not managing to do it. The question only relates to the fact that the value of the variable does not get updated between statements, not to the proper way to insert data into a table. This is not production code. Also, as I understand it, such a procedure is required when using Entity Framework with concurrent code: because there are issues with Identity, each thread gets its own ids before saving the context, as follows:
// Receive a batch of objects and persist in database
// using Entity Framework.
foreach (var req in requests)
{
    // some validation
    ctx.MyTable.Add(new Shared.Entities.MyTableType
    {
        Id = ctx.GetNextObjectId(Enums.ObjectTypes.MyTableType),
        Code = req.Code,
        Name = req.Name
    });

    // save to database every 1000 records
    counter++;
    if (counter % 1000 == 0)
    {
        ctx.SaveChanges();
        counter = 0;
    }
}

// save remaining if any
ctx.SaveChanges();
The procedure does this:
BEGIN TRAN T1
    UPDATE [dbo].[ObjectsIds] WITH (ROWLOCK)
    SET NextId = NextId + Increment
    WHERE ObjectTypeId = @objectTypeId

    SELECT NextId
    FROM [dbo].[ObjectsIds]
    WHERE ObjectTypeId = @objectTypeId
COMMIT TRAN T1
There are so many things wrong with this approach that a comment is not sufficient.
First, stored procedures return an integer, which is only occasionally used. When it is used, it should be a status value indicating success or failure. There is no hard requirement, but that is how even Microsoft describes the value in the documentation. It sounds like your stored procedure is just running a query, not even returning a status value.
Second, using a stored procedure for this purpose means that you have race conditions. That means that even if the code seemed to work, it might not work for concurrent inserts.
Third, your code requires calling a stored procedure as part of every insert. That seems very dangerous if you actually care about the value.
Fourth, you should be validating the data integrity using a unique index or constraint to prevent subsequent inserts with the same value.
What is the right solution? Well, the best solution is to simply enumerate every row with an identity() column. If you need to do specific counts by a column, then you can calculate that during querying.
If that doesn't meet your needs (although it has always been good enough for me), you can write a trigger. When writing a trigger, you need to be careful about locking the table to be sure that concurrent inserts don't produce the same value. That could suggest using a mechanism such as multiple sequences. Or it could suggest clustering the table around the groups.
The short message: triggers are the mechanism to do what you want (affect the data during a DML operation). Stored procedures are not the mechanism.
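As a side note on why the original attempt never updates: `exec @nextid = GetNextObjectId 1` captures only the procedure's RETURN code (0 here), not the value it SELECTs. If the goal is just to grab that result-set value without modifying the procedure, one option is INSERT ... EXEC into a table variable. A sketch, assuming the procedure returns a single-column int result set:

```sql
declare @ids table (id int);
declare @nextid int;

-- INSERT ... EXEC captures the procedure's result set
insert into @ids
exec GetNextObjectId 1;

select top 1 @nextid = id from @ids;
insert into MyTable values (@nextid /*, ... */);
```

Each call appends another row to the table variable, so clear it (DELETE FROM @ids) between calls, or use a fresh variable per call.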
I am trying to set up a read/write lock in SQL Server. My stored procedure is
CREATE PROCEDURE test
AS
BEGIN
    SELECT VALUE FROM MYTABLE WHERE ID = 1
    UPDATE MYTABLE SET VALUE = VALUE + 1 WHERE ID = 1
END
I would like to be sure that no one else is going to read or update the "Value" field while this stored procedure is being executed.
I read lots of posts saying that in SQL Server it should be enough to set up a transaction:
CREATE PROCEDURE test
AS
BEGIN
    BEGIN TRANSACTION
    SELECT VALUE FROM MYTABLE WHERE ID = 1
    UPDATE MYTABLE SET VALUE = VALUE + 1 WHERE ID = 1
    COMMIT TRANSACTION
END
But this is not enough: I tried launching two parallel connections, both using this stored procedure. With SQL Server Management Studio's debugger, I stopped the first execution inside the transaction, and I observed that the second transaction still executed!
So I tried to add an ISOLATION LEVEL:
CREATE PROCEDURE test
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED
    BEGIN TRANSACTION
    SELECT VALUE FROM MYTABLE WHERE ID = 1
    UPDATE MYTABLE SET VALUE = VALUE + 1 WHERE ID = 1
    COMMIT TRANSACTION
END
but the result is the same.
I also tried to set isolation level in the client code
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
EXEC test
but again nothing changed.
My question is: in such a situation, what is the correct way to set up a lock that blocks the others?
Thank you.
in such situation, which is the correct way to set up a lock that blocks the others?
The correct lock here is to read the table with an UPDLOCK, in a transaction.
SELECT VALUE FROM MYTABLE with (UPDLOCK)
WHERE ID=1
You can also use an OUTPUT clause to update and return the value in a single statement, which will also prevent two sessions from reading and updating the same value:
update MyTable set value = value+1
output inserted.value
However you should not generate keys like this. Only one session can generate a key at a time, and the locking to generate the key is held until the end of the session's current transaction. Instead use a SEQUENCE or an IDENTITY column.
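A sketch of the SEQUENCE alternative (assuming a new sequence object named dbo.MyTableSeq is acceptable; it is not in the original question). Each session gets a distinct value without blocking other sessions, though note that sequence values are consumed even if the enclosing transaction rolls back:

```sql
CREATE SEQUENCE dbo.MyTableSeq AS int START WITH 1 INCREMENT BY 1;

-- No lock is held on a shared counter row; concurrent
-- sessions each draw distinct values.
DECLARE @id int = NEXT VALUE FOR dbo.MyTableSeq;
UPDATE MYTABLE SET VALUE = @id WHERE ID = 1;
```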
I have implemented TABLOCKX locking in a stored procedure. It works fine when running on one server, but duplicate policyNumbers occur when requests come from two different servers.
declare @plantype varchar(max),
@transid varchar(max),
@IsStorySolution varchar(10),
@outPolicyNumber varchar(max) output,
@status int output -- 0 means error and 1 means success
)
as
begin
BEGIN TRANSACTION
Declare @policyNumber varchar(100);
Declare @polseqid int;
-- GET POLICY NUMBER ON THE BASIS OF STORY SOLUTION..
IF (UPPER(@IsStorySolution)='Y')
BEGIN
select top 1 @policyNumber=Policy_no from PLAN_POL_NO with (tablockx, holdlock) where policy_no like '9%'
and pol_id_seq is null and status='Y';
END
ELSE
BEGIN
select top 1 @policyNumber=pp.Policy_no from PLAN_POL_NO pp with (tablockx, holdlock) ,PLAN_TYP_MST pt where pp.policy_no like PT.SERIES+'%'
and pt.PLAN_TYPE in (''+ISNULL(@plantype,'')+'') and pol_id_seq is null and pp.status='Y'
END
-- GET POL_SEQ_ID NUMBER
select @polseqid=dbo.Sequence();
--WAITFOR DELAY '00:00:03';
set @policyNumber= ISNULL(@policyNumber,'');
-- UPDATE POLICY ID INFORMATION...
Update PLAN_POL_NO set status='N',TRANSID =@transid , POL_ID_SEQ=ISNULL(@polseqid,0) where Policy_no =@policyNumber
set @outPolicyNumber=@policyNumber;
if(@@ERROR<>0) begin GOTO Fail end
COMMIT Transaction
set @status=1;
return;
Fail:
If @@TRANCOUNT>0
begin
Rollback transaction
set @status=0;
return;
This is function which i have called::
CREATE function [dbo].[Sequence]()
returns int
as
begin
declare @POL_ID int
/***************************************
-- Schema name is added with table name in below query
-- as there are two tables with the same name (PLAN_POL_NO)
-- on different schemas (dbo & eapp).
*****************************************/
select @POL_ID=isnull(MAX(POL_ID_SEQ),2354) from dbo.PLAN_POL_NO
return @POL_ID+1
end
The problem you are facing is because concurrent requests are both getting the same POL_ID_SEQ from your table dbo.PLAN_POL_NO.
There are multiple solutions to your problem, but I can think of two that might help and require no or minimal code changes:
Using a higher transaction isolation level instead of table hints.
In your stored procedure you can use the following:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
This will make sure that any data read/modified during the SP block is transactionally consistent and avoids phantom reads, duplicate records, etc. It will potentially create more deadlocks, and if these tables are heavily queried/updated throughout your application you might have a whole new set of issues.
Make sure your update to dbo.PLAN_POL_NO only succeeds if the sequence has not changed. If it has changed, error out (a change means a concurrent transaction obtained the ID and completed first).
Something like this:
Update dbo.PLAN_POL_NO
SET status = 'N',
    TRANSID = @transid,
    POL_ID_SEQ = ISNULL(@polseqid, 0)
WHERE Policy_no = @policyNumber
  AND POL_ID_SEQ = @polseqid - 1

IF @@ROWCOUNT <> 1
BEGIN
    -- Update failed; error out and let the SP roll back the transaction
END
A WITH (HOLDLOCK) option in the function might suffice.
The HOLDLOCK in the first queries may be applied at a row or page level that does not cover the row of interest queried inside the function.
However, the function is not reliable since it is not self-contained. I'd be looking to redesign this such that the Sequence can generate AND "take" the sequence number before returning it. The current design is fragile at best.
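One possible redesign along those lines (a sketch only; the counter table dbo.PolicyCounter is a hypothetical addition, not part of the original schema) is to generate and "take" the number in one atomic statement, using UPDATE with an OUTPUT clause so no separate read is needed:

```sql
-- One-time setup: a dedicated counter table seeded with one row
CREATE TABLE dbo.PolicyCounter (Id int NOT NULL);
INSERT INTO dbo.PolicyCounter (Id) VALUES (2354);

-- Atomically increment and capture the new value;
-- concurrent callers serialize on the row lock and
-- can never observe the same value.
DECLARE @newSeq TABLE (Id int);

UPDATE dbo.PolicyCounter
SET Id = Id + 1
OUTPUT inserted.Id INTO @newSeq;

SELECT Id FROM @newSeq;
```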
I've only been writing DB2 procedures for a few days, but trying to do a "batch delete" on a given table. My expected logic is:
to open a cursor
walk through it until EOF
issue a DELETE on each iteration
For the sake of simplifying this question, assume I only want to issue a single COMMIT (of all DELETEs) after the WHILE loop completes (i.e. once the cursor reaches EOF). So given the code sample below:
CREATE TABLE tableA (colA INTEGER, ...)
CREATE PROCEDURE "SCHEMA"."PURGE_PROC"
    (IN batchSize INTEGER)
    LANGUAGE SQL
    SPECIFIC SQL140207163731500
BEGIN
    DECLARE tempID INTEGER;
    DECLARE eof_bool INTEGER DEFAULT 0;
    DECLARE sqlString VARCHAR(1000);
    DECLARE sqlStmt STATEMENT;
    DECLARE myCurs CURSOR WITH HOLD FOR sqlStmt;
    DECLARE CONTINUE HANDLER FOR SQLSTATE '02000' SET eof_bool = 1;

    SET sqlString = 'select colA from TableA';
    PREPARE sqlStmt FROM sqlString;
    OPEN myCurs;
    FETCH myCurs INTO tempID;

    WHILE (eof_bool = 0) DO
        DELETE FROM TableA WHERE colA = tempID;
        FETCH myCurs INTO tempID;
    END WHILE;

    COMMIT;
    CLOSE myCurs;
END
Note: In my real scenario:
I am not deleting all records from the table, just certain ones based on some additional criteria; and
I plan to perform a COMMIT every N iterations of the WHILE loop (say 500 or 1000), not one big commit like above; and
I plan to DELETE against multiple tables, not just this one;
But again, to simplify, I tested the above code, and what I'm seeing is that the DELETEs seem to be getting committed 1-by-1. I base this on the following test:
I pre-load the table with (say 50k) records;
then run the purge storedProc which takes ~60 secs to run;
during this time, from another sql client, I continuously "SELECT COUNT(*) FROM tableA" and see count reducing incrementally.
If all DELETEs were committed at once, I would expect the record COUNT(*) to stay at ~50k and only drop to 0 at the end of the ~60 seconds. That is what I see with comparable SPs written for Oracle or SQL Server.
This is DB2 v9.5 on Win2003.
Any ideas what I'm missing?
You are missing the difference in concurrency control implementation between the different database engines. In an Oracle database another session would see data that have been committed prior to the beginning of its transaction, that is, it would not see any deletes until the first session commits.
In DB2, depending on the server configuration parameters (e.g. DB2_SKIPDELETED) and/or the second session isolation level (e.g. uncommitted read) it can in fact see (or not see) data affected by in-flight transactions.
If your business logic requires different transaction isolation, speak with your DBA.
It should be pointed out that you're deleting "outside of the cursor".
The right way to delete using the cursor would be a "positioned delete":
DELETE FROM tableA WHERE CURRENT OF myCurs;
The above deletes the row just fetched.
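Combining the positioned delete with the batching the question mentions, the loop could commit every batchSize rows. A sketch of the revised loop body (assuming an extra `DECLARE counter INTEGER DEFAULT 0;` at the top of the block; the cursor is already declared WITH HOLD, which keeps it open across commits, and its SELECT may additionally need a FOR UPDATE clause for the positioned delete to be allowed):

```sql
WHILE (eof_bool = 0) DO
    DELETE FROM TableA WHERE CURRENT OF myCurs;
    SET counter = counter + 1;
    IF counter >= batchSize THEN
        COMMIT;                -- WITH HOLD keeps myCurs open
        SET counter = 0;
    END IF;
    FETCH myCurs INTO tempID;
END WHILE;
COMMIT;  -- commit any remaining deletes
```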
Assuming that the table MyTable already exists, why is "in Catch" printed for the first statement but not for the second?
It seems to catch errors on duplicate table names but not on duplicate column names.
First:
BEGIN TRY
BEGIN TRANSACTION
CREATE TABLE MyTable (id INT)
COMMIT TRANSACTION
END TRY
BEGIN CATCH
PRINT 'in Catch'
ROLLBACK TRANSACTION
END CATCH
Second:
BEGIN TRY
BEGIN TRANSACTION
ALTER TABLE MyTable ADD id INT
COMMIT TRANSACTION
END TRY
BEGIN CATCH
PRINT 'in Catch'
ROLLBACK TRANSACTION
END CATCH
The difference is that the ALTER TABLE statement generates a compile-time error, not a runtime error, so the catch block is never executed because the batch itself never runs.
You can check this with the "Display Estimated Execution Plan" button in SQL Server Management Studio: for the CREATE TABLE statement an estimated plan is displayed, whereas for the ALTER TABLE statement the error is thrown before SQL Server can even generate a plan, as it cannot compile the batch.
EDIT - EXPLANATION:
This is to do with the way deferred name resolution works in SQL Server: if you are creating an object, SQL Server does not check whether the object already exists until runtime. However, if you reference columns in an object that does exist, the columns you reference must be correct or the statement will fail to compile.
An example of this is with stored procedures, say you have the following table:
create table t1
(
id int
)
then you create a stored procedure like this:
create procedure p1
as
begin
select * from t2
end
It will work, as deferred name resolution does not require the object to exist when the procedure is created, but it will fail when it is executed.
If, however, you create the procedure like this:
create procedure p2
as
begin
select id2 from t1
end
The procedure will fail to be created because you have referenced a nonexistent column in an object that does exist, so deferred name resolution rules no longer apply.
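If the duplicate-column error needs to be catchable, one workaround (a sketch, not from the original answer) is to push the ALTER TABLE into a child batch via dynamic SQL. The child batch is only compiled when EXEC runs, so the compile-time error surfaces as a runtime error that TRY/CATCH can intercept:

```sql
BEGIN TRY
    BEGIN TRANSACTION
    -- Compiled only when sp_executesql runs, so a duplicate
    -- column becomes a catchable runtime error here
    EXEC sp_executesql N'ALTER TABLE MyTable ADD id INT';
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    PRINT 'in Catch'
    ROLLBACK TRANSACTION
END CATCH
```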