I am working on pymssql, a Python MSSQL driver. I have encountered an interesting situation that I can't seem to find documentation for. It seems that when a CREATE TABLE statement fails, the transaction it was run in is implicitly rolled back:
-- shows 0
select @@TRANCOUNT
BEGIN TRAN
-- will cause an error
INSERT INTO foobar values ('baz')
-- shows 1 as expected
select @@TRANCOUNT
-- will cause an error
CREATE TABLE badschema.t1 (
test1 CHAR(5) NOT NULL
)
-- shows 0, this is not expected
select @@TRANCOUNT
I would like to understand why this is happening and know if there are docs that describe the situation. I am going to code around this behavior in the driver, but I want to make sure that I do so for any other error types that implicitly roll back a transaction.
NOTE
I am not concerned here with typical transactional behavior. I specifically want to know why an implicit rollback occurs in the case of the failed CREATE statement but not with the INSERT statement.
Here is the definitive guide to error handling in SQL Server:
http://www.sommarskog.se/error-handling-I.html
It's long, but in a good way, and although it was written for SQL Server 2000, most of it is still accurate. The part you're looking for is here:
http://www.sommarskog.se/error-handling-I.html#whathappens
In your case, the article says that SQL Server is performing a batch abortion, and that it will take this measure in the following situations:
Most conversion errors, for instance conversion of non-numeric string to a numeric value.
Superfluous parameter to a parameterless stored procedure.
Exceeding the maximum nesting-level of stored procedures, triggers and functions.
Being selected as a deadlock victim.
Mismatch in number of columns in INSERT-EXEC.
Running out of space for data file or transaction log.
There's a bit more to it than this, so make sure to read the entire section.
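To see the distinction in action, here is a minimal sketch you can run in SSMS or sqlcmd (which understand GO); the particular errors are just convenient examples of the categories the article describes: a divide-by-zero only terminates its own statement, while a conversion error aborts the batch and rolls back the open transaction.
BEGIN TRAN
-- Statement-terminating error: only this statement fails.
SELECT 1 / 0
-- Shows 1: the transaction survived the error above.
SELECT @@TRANCOUNT
-- Batch-aborting error: the rest of the batch never runs and the
-- open transaction is rolled back.
SELECT CONVERT(int, 'not a number')
-- Never reached.
SELECT @@TRANCOUNT
GO
-- New batch: shows 0, the transaction is gone.
SELECT @@TRANCOUNT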
It is often, but not always, the point of a transaction to roll back the entire thing if any part of it fails:
http://www.firstsql.com/tutor5.htm
One of the most common reasons to use transactions is when you need the action to be atomic:
An atomic operation in computer science refers to a set of operations that can be combined so that they appear to the rest of the system to be a single operation with only two possible outcomes: success or failure.
en.wikipedia.org/wiki/Atomic_(computer_science)
It's probably not documented because, if I understand your example correctly, it is assumed you intended that functionality by beginning a transaction with BEGIN TRAN.
If you run it as one batch (which I did the first time), the transaction stays open because the INSERT aborts the batch and the CREATE TABLE is never run. Only if you run it line by line does the transaction get rolled back.
You can also generate an implicit rollback for the INSERT with SET XACT_ABORT ON.
My guess (I just had a light-bulb moment as I typed the sentence above) is that CREATE TABLE uses SET XACT_ABORT ON internally, which means an implicit rollback in practice.
Some more stuff from me on SO about SET XACT_ABORT (we use it in all our code because it releases locks and rolls back transactions on a client CommandTimeout).
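For completeness, a small sketch of that implicit rollback (the demo table is invented for illustration):
CREATE TABLE dbo.XactAbortDemo (val int NOT NULL)
GO
SET XACT_ABORT ON
BEGIN TRAN
-- Fails with a NOT NULL violation; with XACT_ABORT ON this aborts
-- the batch and rolls the transaction back.
INSERT INTO dbo.XactAbortDemo (val) VALUES (NULL)
GO
-- Shows 0: the transaction was implicitly rolled back.
SELECT @@TRANCOUNT
SET XACT_ABORT OFF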
Related
Let's say I have a stored procedure with a SELECT, INSERT and UPDATE statement.
Nothing is inside a transaction block. There are no Try/Catch blocks either.
I also have XACT_ABORT set to OFF.
If the INSERT fails, is there a possibility for the UPDATE to still happen?
The reason the INSERT failed is that I passed in a null value to a column which didn't allow that. I only have access to the exception thrown by the program that called the stored procedure, and as far as I can see it doesn't include a severity level.
Potentially. It depends on the severity level of the failure.
User code errors are normally 16.
Anything with a severity of 20 or above is automatically fatal.
A duplicate key blocking an insert would be severity 14, i.e. non-fatal.
Inserting a NULL into a column which does not allow it counts as a user code error (16), and consequently will not cause the batch to halt. The UPDATE will go ahead.
The other major factor is whether the batch runs with XACT_ABORT set to ON. That will cause any failure to abort the whole batch.
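A rough sketch of the scenario described above (the object names are made up) showing the severity-16 error leaving the rest of the batch to run:
CREATE TABLE dbo.Target (id int NOT NULL, note varchar(50) NULL)
INSERT INTO dbo.Target (id, note) VALUES (1, 'seed row')
GO
CREATE PROCEDURE dbo.DemoProc
AS
BEGIN
    -- Error 515 (severity 16): this statement fails but execution continues.
    INSERT INTO dbo.Target (id, note) VALUES (NULL, 'this row fails')
    -- With XACT_ABORT OFF (the default) and no TRY/CATCH, this still runs.
    UPDATE dbo.Target SET note = 'updated anyway'
END
GO
EXEC dbo.DemoProc          -- reports error 515, yet the UPDATE executes
SELECT * FROM dbo.Target   -- the seed row now reads 'updated anyway'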
Here's some further reading:
list-of-errors-and-severity-level-in-sql-server-with-catalog-view-sysmessages
exceptionerror-handling-in-sql-server
And for XACT_ABORT:
https://www.red-gate.com/simple-talk/sql/t-sql-programming/defensive-error-handling/
https://learn.microsoft.com/en-us/sql/t-sql/statements/set-xact-abort-transact-sql
In order to understand the outcome of any of the steps in the stored procedure, someone with appropriate permissions (e.g. an admin) will need to edit the stored proc and capture the error code. This will give feedback on the progress of the stored proc. Checked outside structured error handling (i.e. not in TRY/CATCH), an error code of 0 indicates success; otherwise it will contain the error number (which I think will be 515 for a NULL insertion). This is non-ideal, as mentioned in the comments, since it still won't cause the batch to halt, but it will warn you that there was an issue.
The simplest example:
DECLARE @errnum AS int;
-- Run the insert code
SET @errnum = @@ERROR;
PRINT 'Error code: ' + CAST(@errnum AS varchar(10));
Error handling can be a complicated issue; it requires significant understanding of the database structure and expected incoming data.
Options can include using an intermediate step (as mentioned by HLGEM), amending the INSERT to include ISNULL / COALESCE to purge nulls, checking the data on the client side to remove troublesome values, etc. If you know the number of rows you are expecting to insert, the stored proc can return SET @Rows = @@ROWCOUNT in the same way as SET @errnum = @@ERROR.
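A hedged sketch of that pattern (the procedure and table names are invented); note that both values have to be captured in a single statement immediately after the INSERT, because every subsequent statement resets @@ERROR and @@ROWCOUNT:
CREATE PROCEDURE dbo.InsertWithFeedback
    @ErrNum int OUTPUT,
    @Rows   int OUTPUT
AS
BEGIN
    INSERT INTO dbo.SomeTable (SomeColumn) VALUES ('some value')
    -- Capture both in one statement, before anything else resets them.
    SELECT @ErrNum = @@ERROR, @Rows = @@ROWCOUNT
END
GO
DECLARE @e int, @r int
EXEC dbo.InsertWithFeedback @ErrNum = @e OUTPUT, @Rows = @r OUTPUT
PRINT 'Error code: ' + CAST(@e AS varchar(10)) + ', rows inserted: ' + CAST(@r AS varchar(10))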
If you have no authority over the stored proc and no ability to persuade the admin to amend it ... there's not a great deal you can do.
If you have access to run your own queries directly against the database (instead of only through stored proc or views) then you might be able to infer the outcome by running your own query against the original data, performing the stored proc update, then re-running your query and looking for changes. If you have permission, you could also try querying the transaction log (fn_dblog) or the error log (sp_readerrorlog).
BEGIN TRAN
SELECT * FROM AnySchema.AnyTable
WHERE AnyColumn = SomeCondition
COMMIT
I know the transaction is not required here because it is just a SELECT, but I want to know how bad a practice this is and whether it creates any overhead on the DB engine.
You may use a transaction on SELECT statements to ensure that nobody else can update or delete records of the table while your batch of SELECT queries is executing.
Using WITH(NOLOCK):
Anyway, you may also use WITH (NOLOCK) in T-SQL:
SELECT * FROM AnySchema.AnyTable WITH(NOLOCK)
WHERE AnyColumn = SomeCondition
WITH (NOLOCK) is the equivalent of using READ UNCOMMITTED as a transaction isolation level. Here you run the risk of reading an uncommitted row that is subsequently rolled back, i.e. data that never made it into the database.
So, while it can prevent reads being deadlocked by other operations, it comes with a risk.
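If the goal is dirty reads for a whole batch rather than hinting each table, the session isolation level gives the same effect (reusing the placeholder names from above):
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT * FROM AnySchema.AnyTable
WHERE AnyColumn = SomeCondition
-- Restore the default once dirty reads are no longer acceptable.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED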
TRANSACTION block:
Using a TRANSACTION block will not add much extra load on the database, but if you keep up this kind of practice and at some point forget (you or your developers may forget, right?) to close a transaction, then other processes can't work on the same table.
Anyway, it depends on the type of application you have. If updates and selects are very frequent, it is advisable not to use such transaction blocks. If there is a moderate level of updates and selects occurring next to each other, you may use transaction blocks for SELECTs (but make sure you close the transaction).
That's actually a good question. To understand what's going on, you need to know about SET IMPLICIT_TRANSACTIONS.
When ON, the system is in implicit transaction mode. This means that if @@TRANCOUNT = 0, any of a number of Transact-SQL statements (INSERT, SELECT, UPDATE, and so on) begins a new transaction. It is equivalent to an unseen BEGIN TRANSACTION being executed first.
When OFF, each of those T-SQL statements is bounded by an unseen BEGIN TRANSACTION and an unseen COMMIT TRANSACTION statement. When OFF, we say the transaction mode is autocommit. [This is the default.]
If your T-SQL code visibly issues a BEGIN TRANSACTION, we say the transaction mode is explicit.
https://msdn.microsoft.com/en-us/library/ms187807.aspx
Since SQL Server would have created a transaction for you anyway, doing it manually doesn't actually change anything. The exact same thing would have happened either way.
Summary: What you are doing isn't 'wrong', because it has no effect, but it is unnecessary and confusing to the reader.
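A short sketch of the three modes, reusing the placeholder table from the question (the @@TRANCOUNT values shown are what the documentation describes under default settings):
-- Autocommit (the default): nothing stays open between statements.
SELECT @@TRANCOUNT                   -- 0
-- Implicit transaction mode: the first qualifying statement opens one.
SET IMPLICIT_TRANSACTIONS ON
SELECT * FROM AnySchema.AnyTable     -- implicitly begins a transaction
SELECT @@TRANCOUNT                   -- 1: you must COMMIT or ROLLBACK yourself
COMMIT
SET IMPLICIT_TRANSACTIONS OFF
-- Explicit mode: you issue BEGIN TRANSACTION yourself, as in the question.
BEGIN TRANSACTION
SELECT * FROM AnySchema.AnyTable
SELECT @@TRANCOUNT                   -- 1
COMMIT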
I think you would be taking a HOLDLOCK and most likely a TABLOCK.
table hints
That is not always a good thing, as you would block any updates or deletes (maybe even inserts).
It would be better to let SQL Server decide what level of locks to take, most likely page locks. I would stay away from NOLOCK, as bad stuff can happen.
On a SELECT against a single table, just let the optimizer do its thing.
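For reference, explicitly asking for those hints would look like the first query below, while the second simply lets the engine pick its own lock granularity (placeholder names as before):
-- Holds a shared table-level lock until the end of the enclosing transaction,
-- blocking updates, deletes and inserts on the table for that duration.
SELECT * FROM AnySchema.AnyTable WITH (TABLOCK, HOLDLOCK)
WHERE AnyColumn = SomeCondition
-- No hints: the optimizer chooses the lock level, typically row or page locks.
SELECT * FROM AnySchema.AnyTable
WHERE AnyColumn = SomeCondition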
I just discovered the idea of testing a stored proc by calling it from within a BEGIN TRAN t1 ROLLBACK TRAN t1 pair.
I am a bit afraid of this. Is that a common practice? Is it reliable?
My goal here is to quickly test a stored proc that reads and updates 2 databases (same server). The SP does not do any truncate but uses a table variable combined with an INSERT ... OUTPUT statement.
The volume will be low (fewer than 1000 rows affected).
Thanks
There are a few things that can go wrong:
The proc could do its own transaction management
It could execute statements that cannot run inside a transaction, such as CREATE DATABASE
It could have an error, causing the transaction to automatically roll back. If the proc then continues to run in some way, it might write stuff outside of a transaction
XACT_ABORT might be used inconsistently, causing the previously mentioned effect
In general, this is a good technique, though.
TRUNCATE is transactional, by the way.
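A minimal sketch of the pattern being discussed (the procedure and table names are invented); subject to the caveats listed above, everything the proc changed is undone by the ROLLBACK:
BEGIN TRAN t1
EXEC dbo.MyProcUnderTest @SomeParam = 42
-- Inspect the intermediate state while the transaction is still open.
SELECT TOP (10) * FROM dbo.SomeAffectedTable ORDER BY ModifiedDate DESC
ROLLBACK TRAN t1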
I have created an AFTER INSERT trigger.
Now, if an error occurs while executing the trigger, it should not affect the INSERT operation on the triggered table.
In one word: if any error occurs in the trigger, it should be ignored.
I have used
BEGIN TRY
END TRY
BEGIN CATCH
END CATCH
But it gives the following error message and rolls back the INSERT operation on the triggered table:
An error was raised during trigger execution. The batch has been aborted and the user transaction, if any, has been rolled back.
Interesting problem. By default, triggers are designed so that if they fail, they roll back the command that fired them. So whenever a trigger is executing there is an active transaction, whether or not there was an explicit BEGIN TRANSACTION on the outside, and a plain BEGIN TRY inside the trigger will not help. Best practice would be not to write any code in a trigger that could possibly fail, unless it is desired that the firing statement also fails.
In this situation, to suppress this behavior, there are some workarounds.
Option A (the ugly way):
Since a transaction is active at the beginning of the trigger, you can just COMMIT it and continue with your trigger commands:
CREATE TRIGGER tgTest1 ON Test1 AFTER INSERT
AS
BEGIN
    COMMIT;
    -- ... do whatever the trigger does
END;
Note that if there is an error in the trigger code, this will still produce an error message, but the data in the Test1 table is safely inserted.
Option B (also ugly):
You can move your code from the trigger into a stored procedure, call that stored procedure from a wrapper SP that implements BEGIN TRY / BEGIN CATCH, and finally call the wrapper SP from the trigger. It might be a bit tricky to move data from the inserted table around if the logic (which is now in the SP) needs it, probably by using some temp tables.
SQLFiddle DEMO
You cannot, and any attempt to solve it is snake oil. No amount of TRY/CATCH or @@ERROR checking will work around the fundamental issue.
If you want the tight coupling of a trigger, then you must buy into the lower availability induced by that coupling.
If you want to preserve availability (i.e. have the INSERT succeed), then you must give up the coupling (remove the trigger) and do all the processing you were planning to do in the trigger in a separate transaction that starts after your INSERT commits. A SQL Agent job that polls the table for newly inserted rows, a Service Broker launched procedure, or even an application-layer step will all fit the bill.
The accepted answer's Option A gave me the following error: "The transaction ended in the trigger. The batch has been aborted." I circumvented the problem by using the SQL below.
CREATE TRIGGER tgTest1 ON Test1 AFTER INSERT
AS
BEGIN
SET XACT_ABORT OFF
BEGIN TRY
SELECT [Column1] INTO #TableInserted FROM [inserted]
EXECUTE sp_executesql N'INSERT INTO [Table]([Column1]) SELECT [Column1] FROM #TableInserted'
END TRY
BEGIN CATCH
END CATCH
SET XACT_ABORT ON
END
I want that when I execute a query, for example DELETE FROM Contact, and an error is raised during the transaction, it should still delete the rows that can be deleted, raising all the relevant errors for the rows that cannot be deleted.
For SQL Server you are not going to break the atomicity of the DELETE command within a single statement; even when issued outside of an explicit transaction, you are acting within an implicit one, i.e. all or nothing, as you have seen.
Within an explicit transaction, the default is that an error rolls back only the statement that failed, leaving the rest of the transaction's work in place; SET XACT_ABORT ON changes this so that any error rolls back the entire transaction (of multiple statements).
Since your delete is a single statement, XACT_ABORT cannot help you; the statement will error and the whole delete will be rolled back.
If you know the error condition you are going to face (such as an FK constraint violation), then you could ensure your delete has a suitable WHERE clause so that it does not attempt to delete rows that you know will generate an error, as in the sketch below.
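For example, a sketch along those lines (table and column names are invented; it assumes a hypothetical Orders table whose FK references Contact):
-- Delete only contacts with no referencing orders, so the FK constraint
-- is never violated; the skipped rows can be reported or handled separately.
DELETE c
FROM dbo.Contact AS c
WHERE NOT EXISTS (
    SELECT 1 FROM dbo.Orders AS o
    WHERE o.ContactId = c.ContactId
)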
If you're using MySQL you can take advantage of the DELETE IGNORE syntax.
This is a feature which will depend entirely on which flavour of database you are using. Some will have it and some won't.
For instance, Oracle offers us the ability to log DML errors in bulk. The example in the documentation uses an INSERT statement but the same principle applies to any DML statement.