Why does INSERT create a transaction for itself? (SQL Server)

I execute this:
SET IMPLICIT_TRANSACTIONS ON
INSERT INTO Foo (counter) values ((select @@TRANCOUNT))
SELECT @@TRANCOUNT
COMMIT
Insert should trigger a transaction start.
Expected: 1 is displayed and stored in the table.
Actual: 1 is displayed, but 2 is stored in the table.
Why does SQL Server behave like that? Does INSERT create a transaction internally? It doesn't really bother me; I'm just curious whether there is anything I'm missing.

Note: this answer does not fully resolve the question; the results were inconsistent, with a transaction sometimes left open. I have left the answer here, but it can be deleted if desired/required.
In theory, since a transaction has already been started by the execution of sp_prepexec, the INSERT shouldn't begin a new transaction. However, since this uses a system stored procedure to execute dynamic SQL, I'm not sure the rule about joining the existing transaction applies while implicit transactions are on. Changing where @@TRANCOUNT is obtained, so that it is read outside the INSERT, results in a value of 1 in both cases, which is possible evidence that there are actually two transactions underway in your original query.
set implicit_transactions on

Declare @P2 int;
EXEC sp_prepexec @P2 output,
    NULL,
    N'DECLARE @tc INT
      SET @tc = @@TRANCOUNT
      insert into FOO (counter) values ((select @tc))
      select @@TRANCOUNT';
EXEC sp_unprepare @P2;
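A simpler way to see the same effect without sp_prepexec (a minimal sketch, assuming the same Foo table with a single int column named counter) is to read @@TRANCOUNT in its own statement before the INSERT; the stored value should then be 1 rather than 2:

SET IMPLICIT_TRANSACTIONS ON;

SELECT COUNT(*) FROM Foo;               -- any table-accessing statement starts the implicit transaction

DECLARE @tc int;
SET @tc = @@TRANCOUNT;                  -- read outside the INSERT; expected to be 1 here

INSERT INTO Foo (counter) VALUES (@tc); -- stores 1, not 2

SELECT @@TRANCOUNT;
COMMIT;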

Related

Properly understanding the error Cannot use the ROLLBACK statement within an INSERT-EXEC statement - Msg 50000, Level 16, State 1

I understand there is a regularly quoted answer that is meant to address this question, but I believe there is not enough explanation on that thread to really answer the question.
Why earlier answers are inadequate
The first (and accepted) answer simply says this is a common problem and talks about having only one active insert-exec at a time (which is only the first half of the question asked there and doesn't address the ROLLBACK error). The given workaround is to use a table-valued function - which does not help my scenario where my stored procedure needs to update data before returning a result set.
The second answer talks about using openrowset but notes you cannot dynamically specify argument values for the stored procedure - which does not help my scenario because different users need to call my procedure with different parameters.
The third answer provides something called "the old single hash table approach" but does not explain whether it is addressing part 1 or 2 of the question, nor how it works, nor why.
No answer explains why the database is giving this error in the first place.
My use case / requirements
To give specifics for my scenario (although simplified and generic), I have procedures something like below.
In a nutshell though - the first procedure will return a result set, but before it does so, it updates a status column. Effectively these records represent records that need to be synchronised somewhere, so when you call this procedure the procedure will flag the records as being "in progress" for sync.
The second stored procedure calls that first one. Of course the second stored procedure wants to take those records and perform inserts and updates on some tables - to keep those tables in sync with whatever data was returned from the first procedure. After performing all the updates, the second procedure then calls a third procedure - within a cursor (ie. row by row on all the rows in the result set that was received from the first procedure) - for the purpose of setting the status on the source data to "in sync". That is, one by one it goes back and says "update the sync status on record id 1, to 'in sync'" ... and then record 2, and then record 3, etc.
The issue I'm having is that calling the second procedure results in the error
Msg 50000, Level 16, State 1, Procedure getValuesOuterCall, Line 484 [Batch Start Line 24]
Cannot use the ROLLBACK statement within an INSERT-EXEC statement.
but calling the first procedure directly causes no error.
Procedure 1
-- Purpose here is to return a result set,
-- but for every record in the set we want to set a status flag
-- to another value as well.
alter procedure getValues @username, @password, @target
as
begin
    set xact_abort on;
    begin try
        begin transaction;

        declare @tableVariable table (
            ...
        );

        update someOtherTable
        set something = somethingElse
        output
            someColumns
        into @tableVariable
        from someTable
        join someOtherTable
        join etc
        where someCol = @username
        and etc
        ;

        select
            someCols
        from @tableVariable
        ;

        commit;
    end try
    begin catch
        if @@trancount > 0 rollback;
        declare @msg nvarchar(2048) = error_message() + ' Error line: ' + CAST(ERROR_LINE() AS nvarchar(100));
        raiserror (@msg, 16, 1);
        return 55555
    end catch
end
Procedure 2
-- Purpose here is to obtain the result set from earlier procedure
-- and then do a bunch of data updates based on the result set.
-- Lastly, for each row in the set, call another procedure which will
-- update that status flag to another value.
alter procedure getValuesOuterCall @username, @password, @target
as
begin
    set xact_abort on;
    begin try
        begin transaction;

        declare @anotherTableVariable table ( ... );

        insert into @anotherTableVariable
        exec getValues @username = 'blah', @password = @somePass, @target = ''
        ;

        with CTE as (
            select someCols
            from @anotherTableVariable
            join someOtherTables, etc
        )
        merge anUnrelatedTable as target
        using CTE as source
            on target.someCol = source.someCol
        when matched then update set
            target.yetAnotherCol = source.yetAnotherCol,
            etc
        when not matched then
            insert (someCols, andMoreCols, etc)
            values ((select someSubquery), source.aColumn, source.etc)
        ;

        declare @myLocalVariable int;
        declare @mySecondLocalVariable int;

        declare lcur_myCursor cursor for
            select keyColumn
            from @anotherTableVariable
        ;
        open lcur_myCursor;
        fetch lcur_myCursor into @myLocalVariable;
        while @@fetch_status = 0
        begin
            select @mySecondLocalVariable = someCol
            from someTable
            where someOtherCol = @myLocalVariable;

            exec thirdStoredProcForSettingStatusValues @id = @mySecondLocalVariable, etc

            fetch lcur_myCursor into @myLocalVariable;
        end
        close lcur_myCursor;
        deallocate lcur_myCursor;

        commit;
    end try
    begin catch
        if @@trancount > 0 rollback;
        declare @msg nvarchar(2048) = error_message() + ' Error line: ' + CAST(ERROR_LINE() AS nvarchar(100));
        raiserror (@msg, 16, 1);
        return 55555
    end catch
end
The parts I don't understand
Firstly, I have no explicit 'rollback' (well, except in the catch block) - so I have to presume that an implicit rollback is causing the issue - but it is difficult to understand where the root of this problem is; I am not even entirely sure which stored procedure is causing the issue.
Secondly, I believe the statements to set xact_abort and begin transaction are required - because in procedure 1 I am updating data before returning the result set. In procedure 2 I am updating data before I call a third procedure to update further data.
Thirdly, I don't think procedure 1 can be converted to a table-valued function because the procedure performs a data update (which would not be allowed in a function?)
Things I have tried
I removed the table variable from procedure 2 and actually created a permanent table to store the results coming back from procedure 1. Before calling procedure 1, procedure 2 would truncate the table. I still got the rollback error.
I replaced the table variable in procedure 1 with a temporary table (ie. single #). I read the articles about how such a table persists for the lifetime of the connection, so within procedure 1 I had drop table if exists... and then create table #.... I still got the rollback error.
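For reference, a sketch of what I think the "single hash table approach" might look like (I have not tried this exact variant; the names are made up): the caller creates a #temp table and the inner procedure inserts into it directly, which avoids INSERT ... EXEC altogether, because a #temp table created by the caller is visible inside the procedures it calls.

-- Caller (hypothetical column list)
create table #results (keyColumn int, someCol nvarchar(100));

exec getValues @username = 'blah', @password = @somePass, @target = '';
-- inside getValues, the final SELECT back to the caller would instead become:
--     insert into #results (keyColumn, someCol)
--     select keyColumn, someCol from @tableVariable;

select * from #results;   -- the caller now reads #results instead of using INSERT-EXEC

drop table #results;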
Lastly
I still don't understand exactly what is the problem - what is Microsoft struggling to accomplish here? Or what is the scenario that SQL Server cannot accommodate for a requirement that appears to be fairly straightforward: One procedure returns a result set. The calling procedure wants to perform actions based on what's in that result set. If the result set is scoped to the first procedure, then why can't SQL Server just create a temporary copy of the result set within the scope of the second procedure so that it can be acted upon?
Or have I missed it completely and the issue has something to do with the final call to a third procedure, or maybe to do with using try ... catch - for example, perhaps the logic is totally fine but for some reason it is hitting the catch block and the rollback there is the problem (ie. so if I fix the underlying reason leading us to the catch block, all will resolve)?

Commit transaction outside the current transaction (like autonomous transaction in Oracle)

I need to write into a log table from a stored procedure.
Now this log info has to survive a rollback, of course.
I know this question has been asked before, but my situation is different and I cannot find an answer to my problem in these questions.
When there is no error in the stored procedure things are simple, the entry in the logtable will just be there.
When there is an error than things are complicated.
Inside the procedure I can do rollback in the catch and then insert the data into the log table, I know that and I am already doing that.
But the problem is when the stored procedure is called like this :
begin transaction
exec myStoredProcedure
rollback transaction
select * from myLogTable
I know this code doesn't make much sense; I kept it minimal to demonstrate my problem.
If the caller of the stored procedure does the commit/rollback, then it does not matter what I do in the stored procedure. My log entry will always be rolled back.
I also cannot use the temporary table trick, which is to return the data I want to log and let the caller insert it into the log table after it has done the rollback, because the caller is an external application that I do not have the source for.
The logging is done in a separate procedure that only has one line of code, the insert into the log table.
What I need is a way to commit the insert in this procedure, outside the current transaction so it survives any rollback.
Is there a way to do this ?
The Solution:
I used lad2025's answer and so far it is working without problems or performance issues.
But this procedure will only be called about 1000 times each day, which is not that much, so I guess I shouldn't expect any problems either.
It is quite an interesting topic, so let's check how MS approaches it.
First, the documentation: Migrating-Oracle-to-SQL-Server-2014-and-Azure-SQL-DB.pdf
Page 152.
Simulating Oracle Autonomous Transactions
This section describes how SSMA for Oracle V6.0 handles autonomous transactions
(PRAGMA AUTONOMOUS_TRANSACTION). These autonomous transactions do not
have direct equivalents in Microsoft SQL Server 2014.
When you define a PL/SQL block (anonymous block, procedure, function, packaged
procedure, packaged function, database trigger) as an autonomous transaction, you
isolate the DML in that block from the caller's transaction context. The block becomes
an independent transaction started by another transaction, referred to as the main
transaction.
To mark a PL/SQL block as an autonomous transaction, you simply include the
following statement in your declaration section:
PRAGMA AUTONOMOUS_TRANSACTION;
SQL Server 2014 does not support autonomous transactions. The only way to isolate a
Transact-SQL block from a transaction context is to open a new connection.
Use the xp_ora2ms_exec2 extended procedure and its extended version
xp_ora2ms_exec2_ex, bundled with the SSMA 6.0 Extension Pack, to open new
transactions. The procedure's purpose is to invoke any stored procedure in a new
connection and help invoke a stored procedure within a function body. The
xp_ora2ms_exec2 procedure has the following syntax:
xp_ora2ms_exec2
<active_spid> int,
<login_time> datetime,
<ms_db_name> varchar,
<ms_schema_name> varchar,
<ms_procedure_name> varchar,
<bind_to_transaction_flag> varchar,
[optional_parameters_for_procedure];
Then you need to install the stored procedures and other scripts on your server:
SSMA for Oracle Extension Pack (only SSMA for Oracle Extension Pack.7.5.0.msi).
Your stored procedure will become:
CREATE TABLE myLogTable(i INT IDENTITY(1,1),
d DATETIME DEFAULT GETDATE(),
t NVARCHAR(1000));
GO
CREATE OR ALTER PROCEDURE my_logging
    @t NVARCHAR(MAX)
AS
BEGIN
    INSERT INTO myLogTable(t) VALUES (@t);
END;
GO
CREATE OR ALTER PROCEDURE myStoredProcedure
AS
BEGIN
    -- some work
    SELECT 1;

    INSERT INTO myLogTable(t)
    VALUES ('Standard logging that will perish after rollback');

    DECLARE @login_time DATETIME = GETDATE();
    DECLARE @custom_text_to_log NVARCHAR(100);
    SET @custom_text_to_log = N'some custom loging that should survive rollback';
    DECLARE @database_name SYSNAME = DB_NAME();

    EXEC master.dbo.xp_ora2ms_exec2_ex
        @@SPID,
        @login_time,
        @database_name,
        'dbo',
        'my_logging',
        'N',
        @custom_text_to_log;
END;
And final call:
begin transaction
exec myStoredProcedure
rollback transaction
select * from myLogTable;
OUTPUT:
i d t
2 2017-08-21 some custom loging that should survive rollback
So you are really searching for some sort of autonomous transaction (like in Oracle).
One ugly way to simulate it is to use a loopback linked server.
Warning: this is a PoC (I would think twice before using it in PROD), so do a lot of testing.
DECLARE @servername SYSNAME;
SET @servername = CONVERT(SYSNAME, SERVERPROPERTY(N'ServerName'));

EXECUTE sys.sp_addlinkedserver
    @server = N'loopback',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = @servername;

EXECUTE sys.sp_serveroption
    @server = N'loopback',
    @optname = 'RPC OUT',
    @optvalue = 'ON';

EXECUTE sys.sp_serveroption
    @server = N'loopback',
    @optname = 'remote proc transaction promotion',
    @optvalue = 'OFF';
And code:
DROP TABLE IF EXISTS myLogTable;
CREATE TABLE myLogTable(i INT IDENTITY(1,1),
d DATETIME DEFAULT GETDATE(),
t NVARCHAR(1000));
GO
CREATE OR ALTER PROCEDURE my_logging
    @t NVARCHAR(MAX)
AS
BEGIN
    INSERT INTO myLogTable(t) VALUES (@t);
END;
GO
CREATE OR ALTER PROCEDURE myStoredProcedure
AS
BEGIN
-- some work
SELECT 1;
INSERT INTO myLogTable(t)
VALUES ('Standard logging that will perish after rollback');
EXEC loopback.T1.dbo.my_logging
    @t = N'some custom loging that should survive rollback';
END;
Final call:
begin transaction
exec myStoredProcedure
rollback transaction
select * from myLogTable
Output:
i d t
2 2017-08-17 some custom loging that should survive rollback

Insert fails within transaction, but sql server returns 1 row(s) affected?

This is the execution flow of my stored procedure:
ALTER procedure dbo.usp_DoSomething
as
declare @Var1 int
declare @Var2 int
declare @Var3 int
select
@Var1 = Var1,
@Var2 = Var2,
@Var3 = Var3
from Table
where
...
BEGIN TRY
BEGIN TRANSACTION
/* UPDATE Table. This executes successfully */
/* INSERT Table. This fails due to PK violation */
COMMIT TRAN /* This does not happen */
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRAN /* This occurs because TRANS failed */
END CATCH
The UPDATE runs successfully. The INSERT fails, so the transaction is rolled back.
After execution, the table looks correct and nothing has changed. But when I run the SP, I get the following messages:
(1 row(s) affected)
(0 row(s) affected)
So I'm asking myself, where is the first 1 row(s) affected coming from?
Then I'm thinking that this is the reason, but wanted to confirm: OUTPUT Clause (Transact-SQL)
An UPDATE, INSERT, or DELETE statement that has an OUTPUT clause will return
rows to the client even if the statement encounters errors and is rolled back.
The result should not be used if any error occurs when you run the statement.
By default, a rowcount message is returned for every DML statement, unless SET NOCOUNT ON is enabled. Regardless of whether the transaction is later rolled back or committed, your UPDATE statement itself was successful, hence the (1 row(s) affected) notification.
The OUTPUT clause you mentioned has nothing to do with it, since you haven't specified one.
The first SELECT, the one assigning the variables, could also produce a "1 row(s) affected" message.
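If those informational messages are unwanted, the usual fix (shown here only as a sketch of where the statement goes) is to add SET NOCOUNT ON at the top of the procedure body:

ALTER PROCEDURE dbo.usp_DoSomething
AS
    SET NOCOUNT ON;   -- suppresses the '(n row(s) affected)' messages for the statements below

    DECLARE @Var1 int;
    -- ... rest of the procedure body as shown in the question ...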

Select number of deleted records with t-SQL

I can't seem to figure out how to select the number of previously deleted records with SQL Server 2008. Is it something like this?
DELETE FROM [table] WHERE [id]=10
SELECT SCOPE_IDENTITY()
Use SELECT @@ROWCOUNT immediately after the DELETE statement. You can read more about @@ROWCOUNT on MSDN:
@@ROWCOUNT
Returns the number of rows affected by the last statement.
Remarks
...
Data manipulation language (DML) statements set the @@ROWCOUNT value to the number of rows affected by the query and return that value to the client. The DML statements may not send any rows to the client.
Note that I say "immediately after" because other statements can change the value of @@ROWCOUNT, even if they don't affect rows, per se:
DECLARE CURSOR and FETCH set the @@ROWCOUNT value to 1.
...
Statements such as USE, SET <option>, DEALLOCATE CURSOR, CLOSE CURSOR, BEGIN TRANSACTION or COMMIT TRANSACTION reset the ROWCOUNT value to 0.
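Applied to the statement in the question, a minimal sketch looks like this:

DELETE FROM [table] WHERE [id] = 10;
SELECT @@ROWCOUNT AS DeletedRows;   -- number of rows removed by the DELETE above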
You can also SET NOCOUNT OFF.
Remarks
When SET NOCOUNT is ON, the count (indicating the number of rows affected by a Transact-SQL statement) is not returned. When SET NOCOUNT is OFF, the count is returned.
When debugging a stored procedure, I generally use this code snippet below:
DECLARE @Msg varchar(30)
...
SELECT @Msg = CAST(@@ROWCOUNT AS VARCHAR(10)) + ' rows affected'
RAISERROR (@Msg, 0, 1) WITH NOWAIT
I use it before and after an operation, like for a delete. I'll put a number in the message to keep track of which snippet I'm on in the code. It's very helpful when you are dealing with a large stored procedure with lots of lines of code.
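For example (just a sketch of the habit described above, with an arbitrary snippet number in the message):

DECLARE @Msg varchar(50);
DELETE FROM [table] WHERE [id] = 10;
SELECT @Msg = 'Snippet 3: ' + CAST(@@ROWCOUNT AS VARCHAR(10)) + ' rows affected';
RAISERROR (@Msg, 0, 1) WITH NOWAIT;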

Procedure resuming even after the error

Below is my procedure in SQL Server 2005
PROCEDURE [dbo].[sp_ProjectBackup_Insert]
    @prj_id bigint
AS
BEGIN
    DECLARE @MSG varchar(200)
    DECLARE @TranName varchar(200)
    DECLARE @return_value int

    -- 1. Starting the transaction
    begin transaction @TranName

    -- 2. Insert the records
    SET IDENTITY_INSERT [PMS_BACKUP].[Common].[PROJECT] ON
    INSERT INTO [PMS_BACKUP].[Common].[PROJECT] ([PRJ_ID],[PRJ_NO1],[PRJ_NO2],[PRJ_NO3],[PRJ_DESC],[IS_TASKFORCE],[DATE_CREATED],[IS_APPROVED],[DATE_APPROVED],[IS_HANDEDOVER],[DATE_HANDEDOVER],[DATE_START],[DATE_FINISH],[YEAR_OF_ORDER],[CLIENT_DETAILS],[SCOPE_OF_WORK],[IS_PROPOSAL],[PRJ_MANAGER],[PRJ_NAME],[MANAGER_VALDEL],[MANAGER_CLIENT],[DEPT_ID],[locationid],[cut_off_date])
    SELECT * FROM [pms].[Common].[PROJECT] T WHERE T.PRJ_ID = (@prj_id)
    SET IDENTITY_INSERT [PMS_BACKUP].[Common].[PROJECT] OFF
    IF @@ERROR <> 0 GOTO HANDLE_ERROR

    SET IDENTITY_INSERT [PMS_BACKUP].[Common].[DEPARTMENT_CAP] ON
    INSERT INTO [PMS_BACKUP].[Common].[DEPARTMENT_CAP] ([CAP_ID],[DEPT_ID],[PRJ_ID],[IS_CAPPED],[DATE_CAPPED],[CAPPED_BY],[CAP_APPROVED_BY],[STATUS],[UNCAPPED_BY],[DATE_UNCAPPED],[DESCRIPTION],[UNCAP_APPROVED_BY],[LOCATIONID])
    SELECT * FROM [pms].[Common].[DEPARTMENT_CAP] T WHERE T.PRJ_ID = (@prj_id)
    SET IDENTITY_INSERT [PMS_BACKUP].[Common].[DEPARTMENT_CAP] OFF
    IF @@ERROR <> 0 GOTO HANDLE_ERROR

    INSERT INTO [PMS_BACKUP].[Common].[DOC_REG]
    SELECT * FROM [pms].[Common].[DOC_REG] T WHERE T.PRJ_ID = (@prj_id)
    IF @@ERROR <> 0 GOTO HANDLE_ERROR

    -- 3. Commit transaction
    COMMIT TRANSACTION @TranName;
    return @@trancount;

HANDLE_ERROR:
    rollback transaction @TranName
    RETURN 1
END
The issue is that even if the first insert query fails, it does not stop processing and carries on with the rest of the insert queries. The return value I am getting is 1, but in the results window I can see a log like this:
(0 row(s) affected)
Msg 2627, Level 14, State 1, Procedure sp_ProjectBackup_Insert, Line 35
Violation of PRIMARY KEY constraint 'PK_PROJECT'. Cannot insert duplicate key in object 'Common.PROJECT'.
The statement has been terminated.
(0 row(s) affected)
(0 row(s) affected)
I thought the RETURN 1 would make it exit from the error handling code, but that is not happening. Is there any problem with my error handling?
There are so many things wrong with this I don't know where to start.
As far as your error check goes, @@ERROR tells you whether there was an error in the last statement run, not whether any error has occurred so far. Since the last statement before the check is not the INSERT but the SET IDENTITY_INSERT ... OFF statement, there is no error left to trap.
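To illustrate the timing (a self-contained sketch with made-up table names, not the poster's schema): capture @@ERROR into a variable immediately after the INSERT, before any other statement overwrites it.

CREATE TABLE #BackupProject (PRJ_ID int NOT NULL PRIMARY KEY);
INSERT INTO #BackupProject (PRJ_ID) VALUES (1);

DECLARE @err int;

INSERT INTO #BackupProject (PRJ_ID) VALUES (1);   -- duplicate key, raises error 2627 but the batch continues
SET @err = @@ERROR;                               -- capture immediately after the INSERT

PRINT 'some later statement';                     -- any successful statement resets @@ERROR to 0

IF @err <> 0
    PRINT 'The INSERT failed with error ' + CAST(@err AS varchar(10));

DROP TABLE #BackupProject;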
Now, on to what needs fixing besides that.
If this is a backup table and is only used as a backup table, get rid of the identity property altogether. There is no need to keep turning IDENTITY_INSERT on and off; just fix the table. It is not being written to directly by users, the data is always coming from another table, so why does it need an identity at all?
Next, the error you got indicates to me that what you need to be doing is inserting only the records that don't already exist in the backup table, not all records. You may also need to update existing records. Or you may need to truncate the table first before doing the insert, if you only need the most current data, period, and the table being copied is not that large (you don't want to re-insert a million records when only 100 were new and 10 were changed).
In SQL Server 2005 you have TRY CATCH blocks available, you should start using those instead of goto.
Never, ever, ever use SELECT * in an INSERT, or any time the code will go to production. SELECT * is a very poor programming technique. In an INSERT, for instance, it will cause problems when the source table is changed, because you define the columns to insert into but not the columns in the SELECT.
Finally, you should not name stored procedures with sp_ at the start. System procs start with sp_, and SQL Server will look among the system procedures first before looking at user procs, so it's a little wasted time every time you call the proc. Overall it's bad for the system, and if there happens to be a system proc with the same name, yours will never be called.
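As a sketch of the "insert only what is missing" suggestion above (using the question's DOC_REG tables, but with an abbreviated, assumed column list; DOC_ID here is a hypothetical key column):

INSERT INTO [PMS_BACKUP].[Common].[DOC_REG] (DOC_ID, PRJ_ID)
SELECT S.DOC_ID, S.PRJ_ID
FROM [pms].[Common].[DOC_REG] S
WHERE S.PRJ_ID = @prj_id
  AND NOT EXISTS (SELECT 1
                  FROM [PMS_BACKUP].[Common].[DOC_REG] B
                  WHERE B.DOC_ID = S.DOC_ID);   -- skip rows already present in the backup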
You need to put proper error handling around your statements. With SQL 2005 and up, that means try/catch:
PROCEDURE [dbo].[sp_ProjectBackup_Insert]
    @prj_id bigint
AS
BEGIN
    DECLARE @MSG varchar(200)
    DECLARE @TranName varchar(200)
    DECLARE @return_value int

    -- 1. Starting the transaction
    BEGIN TRANSACTION @TranName

    -- 2. Insert the records
    BEGIN TRY
        SET IDENTITY_INSERT [PMS_BACKUP].[Common].[PROJECT] ON
        INSERT INTO [PMS_BACKUP].[Common].[PROJECT] ([PRJ_ID],[PRJ_NO1],[PRJ_NO2],[PRJ_NO3],[PRJ_DESC],[IS_TASKFORCE],[DATE_CREATED],[IS_APPROVED],[DATE_APPROVED],[IS_HANDEDOVER],[DATE_HANDEDOVER],[DATE_START],[DATE_FINISH],[YEAR_OF_ORDER],[CLIENT_DETAILS],[SCOPE_OF_WORK],[IS_PROPOSAL],[PRJ_MANAGER],[PRJ_NAME],[MANAGER_VALDEL],[MANAGER_CLIENT],[DEPT_ID],[locationid],[cut_off_date])
        SELECT * FROM [pms].[Common].[PROJECT] T WHERE T.PRJ_ID = (@prj_id)
        SET IDENTITY_INSERT [PMS_BACKUP].[Common].[PROJECT] OFF

        SET IDENTITY_INSERT [PMS_BACKUP].[Common].[DEPARTMENT_CAP] ON
        INSERT INTO [PMS_BACKUP].[Common].[DEPARTMENT_CAP] ([CAP_ID],[DEPT_ID],[PRJ_ID],[IS_CAPPED],[DATE_CAPPED],[CAPPED_BY],[CAP_APPROVED_BY],[STATUS],[UNCAPPED_BY],[DATE_UNCAPPED],[DESCRIPTION],[UNCAP_APPROVED_BY],[LOCATIONID])
        SELECT * FROM [pms].[Common].[DEPARTMENT_CAP] T WHERE T.PRJ_ID = (@prj_id)
        SET IDENTITY_INSERT [PMS_BACKUP].[Common].[DEPARTMENT_CAP] OFF

        INSERT INTO [PMS_BACKUP].[Common].[DOC_REG]
        SELECT * FROM [pms].[Common].[DOC_REG] T WHERE T.PRJ_ID = (@prj_id)

        -- 3. Commit transaction
        COMMIT TRANSACTION @TranName;
        RETURN 0
    END TRY
    BEGIN CATCH
        -- replaces the old HANDLE_ERROR label; the @@ERROR/GOTO checks are no longer needed inside TRY
        ROLLBACK TRANSACTION @TranName
        RETURN 1
    END CATCH
END
(Be sure to test and debug this -- should be good, but you never know.)
The RETURN value is only relevant to whatever called the procedure -- if it's not checking for success or failure, then you may have a problem.
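For instance, a minimal sketch of a caller that actually checks the return code (the id value is just a placeholder):

DECLARE @rc int;
EXEC @rc = [dbo].[sp_ProjectBackup_Insert] @prj_id = 12345;
IF @rc <> 0
    PRINT 'Backup insert failed and was rolled back';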