SQL Server 2012: using SELECT in a trigger breaks the table

So let me first admit that I am a SQL Server newbie.
Here's the deal: I'm trying to create a trigger on a table in SQL Server 2012, and whenever I try any kind of SELECT statement in the trigger, the table quits working (as in NOTHING can be inserted until the trigger is deleted). As soon as I drop the trigger, everything starts working again. If I don't do any SELECTs, everything is peachy. Is there a permission or something somewhere that I'm missing?
Example:
CREATE TRIGGER sometrigger
ON sometable
FOR INSERT
AS
BEGIN
    SELECT * FROM inserted
END
GO
Command completes successfully, but the table becomes frozen as described above.
CREATE TRIGGER sometrigger
ON sometable
FOR INSERT
AS
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @recipients = N'someaddress@somedomain.com',
        @subject = 'test',
        @body = 'test body',
        @profile_name = 'someprofile'
END
GO
Works like a charm.

You may be falling foul of the disallow results from triggers option being set to 1, as it should be.
Note the warning on that page:
This feature will be removed in the next version of Microsoft SQL Server. Do not use this feature in new development work, and modify applications that currently use this feature as soon as possible. We recommend that you set this value to 1.
I suspect that wherever you're running your inserts from is hiding an error message or exception, since you should get:
Msg 524, Level 16, State 1, Procedure , Line
"A trigger returned a resultset and the server option 'disallow_results_from_triggers' is true."
Alternatively, you may be working with a database layer that wraps all inserts in a transaction and rolls the transaction back if anything unexpected happens - such as receiving a result set, or even just an extra informational message saying (x rows affected).
But all of this is dancing around the main issue - you shouldn't be issuing a select that attempts to return results from inside of a trigger. I might have been able to offer more help if you'd actually told us what you're trying to achieve.
If it's the second case, and it's something tripping over the (x rows affected) messages, that can be cured by placing SET NOCOUNT ON at the top of the trigger.
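For example, a minimal sketch of that fix, assuming the work belongs in an audit table rather than a result set (sometable_audit and its id column are hypothetical):
CREATE TRIGGER sometrigger
ON sometable
FOR INSERT
AS
BEGIN
    SET NOCOUNT ON; -- suppress the (x rows affected) messages that can trip up client layers
    -- Do the real work without returning a result set,
    -- e.g. copy the new rows into a hypothetical audit table
    INSERT INTO sometable_audit (id)
    SELECT id FROM inserted;
END
GO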

You should never return data from a trigger anyway, mainly for simplicity and maintenance reasons. It's confusing: I did an INSERT but got a result set back.
If you need to get the values you just inserted, you'd use the OUTPUT clause:
INSERT sometable (...)
OUTPUT INSERTED.*
VALUES (...);
This at least makes it explicit that the INSERT returns results.
And it is nestable too (see, for example, the question "SQL Server concurrent transaction issue").
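A hedged sketch of capturing those rows for later use, assuming hypothetical id and name columns on sometable:
DECLARE @out TABLE (id int, name varchar(50));

INSERT sometable (name)
OUTPUT INSERTED.id, INSERTED.name INTO @out -- capture instead of returning to the client
VALUES ('example');

SELECT * FROM @out;
Because the rows land in a table variable, nothing is sent back to the caller unless you explicitly select it.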

Related

Transaction not rolling back with PK violation

As I understand it, if we start a transaction (begin tran/commit tran), it should either complete entirely or do nothing. But when I execute the T-SQL code below, the first insert statement works while the second doesn't.
Background: table A has two columns (ID primary key, Name varchar), and it already had 3 rows of data (IDs 1, 2, 3).
begin tran
insert into A values (4, 'Tim') -- this works
insert into A values (2, 'Tom') -- this doesn't work because it violates the PK constraint
commit tran
select * from A
Here is my question: since the second insert statement violates the PK constraint and couldn't be committed, I expected everything inside this transaction to be rolled back, because a transaction should succeed or fail as one unit. But in fact 'Tim' was added to A while 'Tom' wasn't. Doesn't this violate the atomicity of transactions?
It depends on how you handle errors in your transaction. If you catch them, or if you ignore them (it seems you are ignoring them), then the transaction will continue and will commit.
Any decent "transaction manager" of a programming language/framework:
Will stop the execution of the code and will roll the transaction back, or
Will doom the transaction, so a commit will never be carried out. It will be replaced by a roll back instead.
If you run these commands at the SQL prompt, you are probably not using any transaction manager, and that's why the error may be silently ignored while execution carries on as if everything were fine.
That's not how transactions work in SQL Server. If you have a "statement-terminating" error, SQL Server just continues to the next statement. If you have a "batch-terminating" error, the transaction is aborted and rolled back.
Now I don't want that behaviour ever.
So the first line I write in every stored procedure is:
SET XACT_ABORT ON;
That tells SQL Server that "statement-terminating" errors should be automatically promoted to "batch-terminating" errors. Add that statement to the beginning of your script and you'll see that it now works as expected.
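Applied to the script from the question, a minimal sketch:
SET XACT_ABORT ON;

begin tran
insert into A values (4, 'Tim') -- runs
insert into A values (2, 'Tom') -- PK violation: the batch aborts and the transaction rolls back
commit tran

select * from A -- neither 'Tim' nor 'Tom' was inserted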

Will a stored procedure fail if one of the queries inside it fails?

Let's say I have a stored procedure with a SELECT, INSERT and UPDATE statement.
Nothing is inside a transaction block. There are no Try/Catch blocks either.
I also have XACT_ABORT set to OFF.
If the INSERT fails, is there a possibility for the UPDATE to still happen?
The reason the INSERT failed is because I passed in a null value to a column which didn't allow that. I only have access to the exception the program threw which called the stored procedure, and it doesn't have any severity levels in it as far as I can see.
Potentially. It depends on the severity level of the failure.
User code errors are normally severity 16.
Anything at severity 20 or above is automatically fatal.
A duplicate key blocking an insert would be severity 14, i.e. non-fatal.
Inserting a NULL into a column which does not allow it counts as a user code error (severity 16), and consequently will not cause the batch to halt. The UPDATE will go ahead.
The other major factor is whether the batch runs with XACT_ABORT set to ON. That will cause any failure to abort the whole batch.
Here's some further reading:
list-of-errors-and-severity-level-in-sql-server-with-catalog-view-sysmessages
exceptionerror-handling-in-sql-server
And for XACT_ABORT:
https://www.red-gate.com/simple-talk/sql/t-sql-programming/defensive-error-handling/
https://learn.microsoft.com/en-us/sql/t-sql/statements/set-xact-abort-transact-sql
In order to understand the outcome of any of the steps in the stored procedure, someone with appropriate permissions (e.g. an admin) will need to edit the stored proc to capture the error message. This will give feedback on the progress of the stored proc. An unstructured error code (i.e. not in try/catch) of 0 indicates success; otherwise it will contain the error code (which I think will be 515 for a NULL insertion). As mentioned in the comments, this is non-ideal, since it still won't cause the batch to halt, but it will warn you that there was an issue.
The simplest example:
DECLARE @errnum AS int;
-- Run the insert code
SET @errnum = @@ERROR;
PRINT 'Error code: ' + CAST(@errnum AS varchar);
Error handling can be a complicated issue; it requires significant understanding of the database structure and expected incoming data.
Options can include using an intermediate step (as mentioned by HLGEM), amending the INSERT to include ISNULL / COALESCE expressions to purge nulls (a sketch follows below), checking the data on the client side to remove troublesome values, etc. If you know the number of rows you are expecting to insert, the stored proc can return SET @Rows = @@ROWCOUNT in the same way as SET @errnum = @@ERROR.
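A hedged sketch of the ISNULL / COALESCE option, with hypothetical table and column names:
-- Substitute defaults so a NOT NULL constraint can't raise error 515
INSERT INTO sometable (name, created_at)
SELECT ISNULL(s.name, 'unknown'),        -- default for a NULL name
       COALESCE(s.created_at, GETDATE()) -- first non-NULL value wins
FROM staging_table AS s;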
If you have no authority over the stored proc and no ability to persuade the admin to amend it ... there's not a great deal you can do.
If you have access to run your own queries directly against the database (instead of only through stored proc or views) then you might be able to infer the outcome by running your own query against the original data, performing the stored proc update, then re-running your query and looking for changes. If you have permission, you could also try querying the transaction log (fn_dblog) or the error log (sp_readerrorlog).

How to test your query before running it in SQL Server

I once made a silly mistake at work on one of our in-house test databases. I was updating a record I had just added (because I made a typo), but it resulted in many records being updated, because in the WHERE clause I used the foreign key instead of the unique id of the particular record I had just added.
One of our senior developers told me to do a SELECT first, to test which rows will be affected before actually editing them. Besides this, is there a way to execute your query and see the results, but not have it committed to the db until I tell it to do so? Next time I might not be so lucky. It's a good job only senior developers can do live updates!
It seems to me that you just need to get into the habit of opening a transaction:
BEGIN TRANSACTION;
UPDATE [TABLENAME]
SET [Col1] = 'something', [Col2] = '..'
OUTPUT DELETED.*, INSERTED.* -- So you can see what your update did
WHERE ....;
ROLLBACK;
Then you just run it again after seeing the results, changing ROLLBACK to COMMIT, and you are done!
If you are using Microsoft SQL Server Management Studio, you can go to Tools > Options... > Query Execution > ANSI > SET IMPLICIT_TRANSACTIONS and SSMS will open the transaction automatically for you. Just don't forget to commit when you must, and be aware that you may be blocking other connections until you commit / roll back or close the connection.
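The same behavior can also be turned on per session; a minimal sketch, reusing the placeholder table from the example above:
SET IMPLICIT_TRANSACTIONS ON;

UPDATE [TABLENAME]
SET [Col1] = 'something'
WHERE ....; -- this implicitly opens a transaction

-- inspect the results, then end the transaction explicitly:
COMMIT; -- or ROLLBACK;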
First, assume you will make a mistake when updating a db, so never do it unless you know how to recover; if you don't, don't run the code until you do.
The most important idea is that it is a dev database: expect it to get messed up, so make sure you have a quick way to reload it.
Doing a select first is always a good idea, to see which rows are affected.
However, for a quicker way back to a good state of the database (which I would do anyway):
For a simple update etc., use transactions.
Do a BEGIN TRANSACTION, then do all the updates etc., and then select to check the data.
The database will not be affected, as far as others can see, until you do a final COMMIT, which you only do when you are sure everything is correct, or a ROLLBACK to get back to the state at the beginning.
If you must test in a production database and you have the requisite permissions, then write your queries to create and use temporary tables whose names are similar to the production tables and whose schema, other than index names, is identical. Index names are unique across a database, at least on Informix.
Then run your queries and look at the data.
Other than that, IMHO you need a development database, and perhaps even a development server with a development instance. That's paranoid advice, but you'd have to be very careful even if you were allowed a second instance (MS SQL Server lingo here) on the same server.
I can reload our test database at will, and that's why we have a test system. Our production system contains citizens' tax payments and other information that cannot be harmed, "or else".
For our production data changes, we always ensure that we use a BEGIN TRAN and a ROLLBACK TRAN, and that all statements have an OUTPUT clause. This way we can run the script first (usually in a copy of the PRODUCTION db) and see what is affected before changing the ROLLBACK TRAN to COMMIT TRAN.
Have you considered EXPLAIN? (The examples below are from PostgreSQL; see the note after them for the SQL Server equivalent.)
If there is a mistake in the command, it will be reported, just as with ordinary commands.
But if there are no mistakes, it will not run the command; it will just explain it.
Example of a "passed" test:
testdb=# explain select * from sometable ;
QUERY PLAN
------------------------------------------------------------
Seq Scan on sometable (cost=0.00..12.60 rows=260 width=278)
(1 row)
Example of a "failed" test:
testdb=# explain select * from sometaaable ;
ERROR: relation "sometaaable" does not exist
LINE 1: explain select * from sometaaable ;
It also works with INSERT, UPDATE and DELETE (i.e. the "dangerous" ones).
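SQL Server has no EXPLAIN keyword, but SET SHOWPLAN_ALL gives a comparable compile-without-execute check (or use the estimated execution plan button in SSMS). A minimal sketch, with a hypothetical table:
SET SHOWPLAN_ALL ON;
GO
-- Compiled and planned, but NOT executed:
DELETE FROM sometable WHERE id = 42;
GO
SET SHOWPLAN_ALL OFF;
GO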

How to ignore errors in a trigger and still perform the operation in MS SQL Server

I have created an AFTER INSERT trigger.
Now, if an error occurs while executing the trigger, it should not affect the INSERT operation on the triggered table.
In one word: if any ERROR occurs in the trigger, it should be ignored.
I have used
BEGIN TRY
END TRY
BEGIN CATCH
END CATCH
But it gives the following error message and rolls back the INSERT operation on the triggered table:
An error was raised during trigger execution. The batch has been aborted and the user transaction, if any, has been rolled back.
Interesting problem. By default, triggers are designed so that if they fail, they roll back the command that fired them. So whenever a trigger is executing, there is an active transaction, whether or not there was an explicit BEGIN TRANSACTION on the outside. And BEGIN TRY inside the trigger will not work either. The best practice would be not to write any code in a trigger that could possibly fail - unless it is desired to also fail the firing statement.
In this situation, to suppress this behavior, there are some workarounds.
Option A (the ugly way):
Since a transaction is active at the beginning of the trigger, you can just COMMIT it and continue with your trigger commands:
CREATE TRIGGER tgTest1 ON Test1 AFTER INSERT
AS
BEGIN
    COMMIT;
    ... do whatever the trigger does
END;
Note that if there is an error in the trigger code this will still produce the error message, but the data in the Test1 table is safely inserted.
Option B (also ugly):
You can move your code from the trigger to a stored procedure. Then call that stored procedure from a wrapper SP that implements BEGIN TRY, and at the end call the wrapper SP from the trigger. It might be a bit tricky to move data from the INSERTED table around if the logic (which is now in the SP) needs it - probably via some temp tables.
You cannot, and any attempt to solve it is snake oil. No amount of TRY/CATCH or @@ERROR checks will work around the fundamental issue.
If you want the tight coupling of a trigger, then you must buy into the lower availability induced by that coupling.
If you want to preserve the availability (i.e. have the INSERT succeed), then you must give up the coupling (remove the trigger). You must do all the processing you were planning to do in the trigger in a separate transaction that starts after your INSERT commits. A SQL Agent job that polls the table for newly inserted rows, a Service Broker activated procedure, or even an application layer step would all fit the bill, as sketched below.
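A minimal sketch of the polling approach, assuming a hypothetical processed flag column on the table:
-- Run periodically from a SQL Agent job, in its own transaction,
-- fully decoupled from the INSERTs it follows up on
CREATE TABLE #work (id int);

BEGIN TRAN;

UPDATE t
SET processed = 1
OUTPUT inserted.id INTO #work -- claim the new rows and remember their ids
FROM Test1 AS t
WHERE t.processed = 0;

-- ... do the follow-up processing for the ids captured in #work ...

COMMIT TRAN;

DROP TABLE #work;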
The accepted answer's option A gave me the following error: "The transaction ended in the trigger. The batch has been aborted." I circumvented the problem by using the SQL below.
CREATE TRIGGER tgTest1 ON Test1 AFTER INSERT
AS
BEGIN
    SET XACT_ABORT OFF;
    BEGIN TRY
        -- Stage the inserted rows, since [inserted] is not visible
        -- inside the dynamic SQL's own scope
        SELECT [Column1] INTO #TableInserted FROM [inserted];
        EXECUTE sp_executesql N'INSERT INTO [Table]([Column1]) SELECT [Column1] FROM #TableInserted';
    END TRY
    BEGIN CATCH
        -- swallow the error so the outer INSERT is not rolled back
    END CATCH
    SET XACT_ABORT ON;
END

After Delete Trigger Fires Only After Delete?

I thought "after delete" meant that the trigger is not fired until after the delete has already taken place, but here is my situation...
I made 3, nearly identical SQL CLR after delete triggers in C#, which worked beautifully for about a month. Suddenly, one of the three stopped working while an automated delete tool was run on it.
By stopped working, I mean, records could not be deleted from the table via client software. Disabling the trigger caused deletes to be allowed, but re-enabling it interfered with the ability to delete.
So my question is: how can this be the case? Is it possible the tool used on it futzed up the memory? It seems like even if the trigger threw an exception, if it is an AFTER delete trigger, shouldn't the records be gone?
All the trigger looks like is this:
ALTER TRIGGER [sysdba].[AccountTrigger] ON [sysdba].[ACCOUNT] AFTER DELETE AS
EXTERNAL NAME [SQL_IO].[SQL_IO.WriteFunctions].[AccountTrigger]
GO
The CLR trigger does one select and one insert into another database. I don't yet know if there are any errors from SQL Server Mgmt Studio, but will update the question after I find out.
UPDATE:
Well after re-executing the same trigger code above, everything works again, so I may never know what if any error SSMS would give.
Also, there is no call to rollback anywhere in the trigger's code.
AFTER just means the trigger fires after the event; the event can still be rolled back.
example
create table test(id int)
go
create trigger trDelete on test after delete
as
print 'i fired '
rollback
do an insert
insert test values (1)
now delete the data
delete test
Here is the output from the trigger
i fired
Msg 3609, Level 16, State 1, Line 1
The transaction ended in the trigger. The batch has been aborted.
now check the table, and verify that nothing was deleted
select * from test
The CLR trigger does one select and one insert into another database. I don't yet know if there are any errors from SQL Server Mgmt Studio, but will update the question after I find out.

Suddenly, one of the three stopped working while an automated delete tool was run on it.
Triggers fire per batch/statement, not per row; is it possible that your trigger wasn't coded for multi-row operations and the automated tool deleted more than one row in the batch? Take a look at Best Practice: Coding SQL Server triggers for multi-row operations (a multi-row-safe rewrite is sketched after the failing example below).
Here is an example that will make the trigger fail without doing an explicit rollback:
alter trigger trDelete on test after delete
as
print 'i fired '
declare @id int
select @id = (select id from deleted)
GO
insert some rows
insert test values (1)
insert test values (2)
insert test values (3)
run this
delete test
i fired
Msg 512, Level 16, State 1, Procedure trDelete, Line 6
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
The statement has been terminated.
check the table
select * from test
nothing was deleted
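A hedged sketch of the multi-row-safe rewrite, using a set-based insert into a hypothetical test_audit table instead of a scalar variable:
alter trigger trDelete on test after delete
as
set nocount on
-- Set-based: handles 0, 1, or many deleted rows in a single statement,
-- instead of assuming exactly one row in [deleted]
insert into test_audit (id)
select id from deleted
GO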
An error in the AFTER DELETE trigger will roll back the transaction. The trigger runs after the rows are deleted, but before the change is committed. Is there any particular reason you are using a CLR trigger for this? It seems like something that a pure SQL trigger ought to be able to do, in a possibly more lightweight manner.
Well, you shouldn't be doing a select in a trigger (who will see the results?), and if all you are doing is an insert, it shouldn't be a CLR trigger either. CLR is generally not a good thing to have in a trigger; it's far better to use T-SQL code in a trigger, unless you need to do something T-SQL can't handle, which is probably a bad idea in a trigger anyway. A plain T-SQL sketch of the same work follows below.
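A minimal sketch of the same select-and-insert in plain T-SQL, assuming a hypothetical OtherDb.dbo.AccountLog target table and column names:
ALTER TRIGGER [sysdba].[AccountTrigger] ON [sysdba].[ACCOUNT] AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Copy the deleted rows into the other database;
    -- the target table and columns are assumptions
    INSERT INTO OtherDb.dbo.AccountLog (AccountId, DeletedAt)
    SELECT d.ACCOUNTID, GETDATE()
    FROM deleted AS d;
END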
Have you reverted to the last version you have in source control? Perhaps that would clear the problem if it has gotten corrupted.