Is it possible to prevent a trigger from running in a transaction?

According to several resources, such as this one:
A query that is executed within the context of a trigger is automatically wrapped in a transaction. If there are any distributed queries in the trigger code, the transaction is promoted to a distributed transaction automatically.
Simple question - is there a way to prevent this behavior? I'm looking for a way to explicitly prevent code in my trigger from running in the context of a transaction.

If you are trying to do something asynchronous so that the calling transaction doesn't have to wait, you may consider Service Broker, which is designed to do exactly that - go fire off some asynchronous task, and return control to the caller, regardless of transaction scope.
Another idea is to not have your trigger perform the work, but instead pop a work item onto a queue table, and have a background process running continuously to process the queue. This isn't necessarily easy to do if your work item operates on the set of data in inserted/deleted but without more context it certainly seems like a viable option.
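As a rough sketch of the queue-table variant (dbo.Orders, OrderId and the queue table are invented names for illustration, not from the question):

-- work items land here; a background process drains the table
CREATE TABLE dbo.WorkQueue
(
    QueueId    INT IDENTITY PRIMARY KEY,
    RecordId   INT NOT NULL,
    EnqueuedAt DATETIME2 NOT NULL DEFAULT SYSDATETIME()
);
GO
CREATE TRIGGER trg_Orders_Enqueue ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- record only the keys; the expensive work happens later, elsewhere
    INSERT INTO dbo.WorkQueue (RecordId)
    SELECT OrderId FROM inserted;
END;

The trigger itself stays cheap; only the enqueue is part of the caller's transaction, which is usually the desired behavior anyway (no work item survives a rollback).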
I don't know of a way to prevent a trigger from being a part of the calling transaction - in fact that's kind of the whole point.

This is called an "autonomous transaction", and the simplest way to implement it is by creating a linked server pointing back to the original database.
See this MSDN blog for a possible solution.
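In outline, that loopback pattern looks something like this (a sketch only: the 'remote proc transaction promotion' option requires SQL Server 2008 or later, and the AuditLog table is an invented example):

-- one-time setup: a linked server that points back at ourselves,
-- configured so its calls are NOT promoted into the caller's transaction
DECLARE @srv SYSNAME = @@SERVERNAME;
EXEC sp_addlinkedserver @server = N'loopback', @srvproduct = N'',
     @provider = N'SQLNCLI', @datasrc = @srv;
EXEC sp_serveroption N'loopback', N'remote proc transaction promotion', N'false';
EXEC sp_serveroption N'loopback', N'rpc out', N'true';
GO
-- inside the trigger: work routed through the loopback server runs in its
-- own transaction and survives even if the caller rolls back
EXEC (N'INSERT INTO MyDb.dbo.AuditLog (Msg) VALUES (N''written outside the calling transaction'')') AT loopback;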

Related

Broker Queue - Move Poisoned Messages to Table

Currently I have a queue that stores merge queries, which are run as they are read off the queue. This all works well, but if there is an error with a merge, the queue becomes disabled and I have to manually remove the message (or fix the merge, as it were).
I was wondering whether it was possible to simply move the poisoned message to a table? The queues run important (and different) merges that must continually run to ensure data is updated. It is not beneficial to me for the queue to, say, become disabled over night and gain a huge backlog.
Is there any way for me to simply push the bad message into a table? I have attempted this myself; however, I wound up with a TRY...CATCH inside a TRANSACTION, which performs a rollback on the error anyway (thus invoking the five-rollbacks-to-disable rule). Most solutions online mention only manually removing the message.
Any suggestions? Is this just a bad idea? If so, why?
Thanks.
The disable-after-5-rollbacks behavior can be switched off by setting the POISON_MESSAGE_HANDLING status to OFF in the CREATE/ALTER QUEUE statement. You can then use TRY...CATCH to deal manually with transactions that fail.
Like you, I don't find this feature very useful, so I almost always turn it off in my applications and deal with problem messages in whatever way seems best.
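A sketch of that combination (the queue, merge procedure and poison table are invented names, not from your setup):

ALTER QUEUE dbo.MergeQueue WITH POISON_MESSAGE_HANDLING (STATUS = OFF);
GO
-- in the reader/activation procedure: park a failing message instead of rolling back
DECLARE @h UNIQUEIDENTIFIER, @body VARBINARY(MAX);
BEGIN TRY
    BEGIN TRAN;
    RECEIVE TOP (1) @h = conversation_handle, @body = message_body
    FROM dbo.MergeQueue;
    EXEC dbo.RunMerge @body;   -- the merge that may fail
    COMMIT;
END TRY
BEGIN CATCH
    IF XACT_STATE() = 1
    BEGIN
        -- transaction still committable: keep the RECEIVE so the message
        -- leaves the queue, and preserve it in a table for later inspection
        INSERT INTO dbo.PoisonMessages (ConversationHandle, Body, ErrorText, LoggedAt)
        VALUES (@h, @body, ERROR_MESSAGE(), SYSDATETIME());
        COMMIT;
    END
    ELSE IF XACT_STATE() = -1
        ROLLBACK;  -- doomed; the message goes back on the queue, and with
                   -- POISON_MESSAGE_HANDLING OFF the queue stays enabled
END CATCH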

execution context of database trigger in PostgreSQL

I want to implement an audit log using triggers that fire when data is created, changed or deleted, storing some values. The triggers need access to the user IDs that made the changes, which are managed by the web application. I have some ideas on providing this data, but I don't seem to fully understand what the execution context of a trigger is. I've read through the PostgreSQL docs, Overview of Trigger Behavior and others, but my question doesn't seem to be answered.
What I want to know is how a client session with one running transaction interacts with trigger execution: the lifetime of both and how they depend on each other. From my understanding, triggers are executed within the database independently of the client session that created the event which led to the trigger execution. Is that correct? That would mean triggers and their processing wouldn't impact the performance of the client request, and the client could close the session at any time. If both are independent, how would a trigger get notified about a client rolling back a transaction, which would logically mean that no data got changed at all? Or are triggers only executed after a transaction commits, because they run independently?
Or are triggers executed asynchronously within the client session that created the events leading to trigger execution? That would mean that if the client closes its session for any reason, the trigger would abort too, and that the trigger's changes are directly bound to the client's transaction and can be rolled back with it.
I need to understand the behavior to know what I would like to do in another question.
Thanks for your input!
From my understanding, triggers are executed within the database independently of the client session that created the event which led to the trigger execution. Is that correct? That would mean triggers and their processing wouldn't impact the performance of the client request, and the client could close the session at any time.
No, they depend entirely on the client session: triggers run as part of the transaction, which is itself tied to the session.
See this excerpt from CREATE TRIGGER (9.1):
They can be fired either at the end of the statement causing the triggering event, or at the end of the containing transaction; in the latter case they are said to be deferred.
From your other question it appears you're using 8.4, which doesn't have deferred triggers, so it's even simpler: triggers always run at the end of the statement (the triggering event), which means before the acknowledgment of execution is sent by the server to the client.
A COMMIT immediately following would be a new instruction, and could not be executed before the trigger has finished.
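You can see that transactional binding directly from psql (accounts and audit_log are hypothetical names, standing in for whatever your audit trigger writes):

-- assuming an audit trigger on accounts that inserts rows into audit_log
BEGIN;
INSERT INTO accounts (id, balance) VALUES (1, 100);  -- the trigger fires here, synchronously
ROLLBACK;  -- undoes the insert AND every row the trigger wrote
SELECT count(*) FROM audit_log;  -- no trace of the rolled-back work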

How to handle errors in a trigger?

I'm writing some SQL code that needs to be executed when rows are inserted in a database table, so I'm using an AFTER INSERT trigger; the code is quite complex, thus there could still be some bugs around.
I've discovered that if an error happens while executing a trigger, SQL Server aborts the batch and/or the whole transaction. This is not acceptable for me, because it causes problems for the main application that uses the database; I also don't have the source code for that application, so I can't properly debug it. I absolutely need all database actions to succeed, even if my trigger fails.
How can I code my trigger so that, should an error happen, SQL Server will not abort the INSERT action?
Additionally, how can I perform proper error handling so that I can actually know the trigger has failed? Sending an email with the error data would be ok for me (the trigger's main purpose is actually sending emails), but how do I detect an error condition in a trigger and react to it?
Edit:
Thanks for the tips about optimizing performance by using something other than a trigger, but this code is not "complex" in the sense of being long-running or performance-intensive; it simply builds and sends a mail message. In order to do so, though, it must retrieve data from various linked tables, and since I am reverse-engineering this application, I don't have the database schema available and am still trying to find my way around it; this is why conversion errors or unexpected/null values can still creep up, crashing the trigger execution.
Also, as stated above, I absolutely can't debug the application itself, nor modify it to do what I need in the application layer; the only way to react to an application event is by firing a database trigger when the application writes to the DB that something has just happened.
If the operations in the trigger are complex and/or potentially long running, and you don't want the activity to affect the original transaction, then you need to find a way to decouple the activity.
One way might be to use Service Broker. In the trigger, just create message(s) (one per row) and send them on their way, then do the rest of the processing in the service.
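The trigger side can then stay very small. A sketch, with the inserted keys batched into a single message for brevity (the table, service, contract and message type names are placeholders that must already exist in your broker configuration):

CREATE TRIGGER trg_Orders_Notify ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @dialog UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @dialog
        FROM SERVICE [//App/InitiatorService]
        TO SERVICE '//App/TargetService'
        ON CONTRACT [//App/WorkContract]
        WITH ENCRYPTION = OFF;
    -- ship only the keys; the activated procedure does the slow work
    DECLARE @body XML = (SELECT OrderId FROM inserted FOR XML PATH('row'), ROOT('rows'), TYPE);
    SEND ON CONVERSATION @dialog
        MESSAGE TYPE [//App/WorkRequest] (@body);
END;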
If that seems too complex, the older way to do it is to insert the rows needing processing into a work/queue table, and then have a job continuously pulling rows from there and doing the work.
Either way, you're now not preventing the original transaction from committing.
Triggers are part of the transaction. You could wrap the trigger code in a TRY...CATCH that swallows the error, or, somewhat more professionally, one that logs and then swallows it; but really you should let it go bang and then fix the real problem, which can only be in your trigger.
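If you do go the log-and-swallow route, a hedged sketch (the trigger, helper procedure and log table are all invented names):

CREATE TRIGGER trg_Orders_Mail ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        EXEC dbo.BuildAndSendOrderMail;  -- hypothetical helper that gathers data and mails it
    END TRY
    BEGIN CATCH
        -- swallow the error so the caller's INSERT still commits
        IF XACT_STATE() = -1
            RETURN;  -- transaction is doomed; nothing can be logged from here
        INSERT INTO dbo.TriggerErrorLog (ErrorNumber, ErrorText, LoggedAt)
        VALUES (ERROR_NUMBER(), ERROR_MESSAGE(), SYSDATETIME());
    END CATCH
END;

Note the caveat: errors severe enough to doom the transaction (XACT_STATE() = -1) cannot be swallowed this way, and the caller's INSERT is lost regardless of what the CATCH block does.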
If none of the above are acceptable, then you can't use a trigger.

ORM Support for Handling Deadlocks

Do you know of any ORM tool that offers deadlock recovery? I know deadlocks are a bad thing, but sometimes any system will suffer from them given the right amount of load. In SQL Server, the deadlock message says "Rerun the transaction", so I would suspect that rerunning a deadlocked statement is a desirable feature in ORMs.
I don't know of any special ORM tool support for automatically rerunning transactions that failed because of deadlocks. However, I don't think an ORM makes dealing with locking/deadlocking issues very different. First, you should analyze the root cause of your deadlocks, then redesign your transactions and queries so that deadlocks are avoided or at least reduced. There are lots of options for improvement, like choosing the right isolation level for (parts of) your transactions, using lock hints, etc. This depends much more on your database system than on your ORM. Of course, it helps if your ORM allows you to use stored procedures for some fine-tuned commands.
If this doesn't help to avoid deadlocks completely, or you don't have the time to implement and test the real fix now, you could of course simply place a try/catch around your save/commit/persist call, check whether a caught exception indicates that the failed transaction is a "deadlock victim", and if so, simply call save/commit/persist again after sleeping for a few seconds. Waiting a few seconds is a good idea, since deadlocks are often an indication of a temporary peak of transactions competing for the same resources, and rerunning the same transaction quickly again and again would probably make things even worse.
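If the retry has to live in the database rather than in the ORM, the same pattern looks roughly like this in T-SQL (a sketch: error 1205 is SQL Server's deadlock-victim error, THROW needs SQL Server 2012 or later, and the UPDATE is purely illustrative):

DECLARE @attempt INT = 0;
WHILE @attempt < 2
BEGIN
    BEGIN TRY
        BEGIN TRAN;
        -- the work that occasionally deadlocks goes here
        UPDATE dbo.Accounts SET Balance = Balance - 10 WHERE Id = 1;
        COMMIT;
        BREAK;  -- success: leave the loop
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK;
        IF ERROR_NUMBER() = 1205 AND @attempt = 0
        BEGIN
            SET @attempt += 1;
            WAITFOR DELAY '00:00:05';  -- sleep a few seconds before the single retry
        END
        ELSE
            THROW;  -- anything else (or a second deadlock) propagates to the caller
    END CATCH
END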
For the same reason you probably want to make sure that you rerun the same transaction only once.
In a real-world scenario we once implemented this kind of workaround, and about 80% of the "deadlock victims" succeeded on the second attempt. But I strongly recommend digging deeper to fix the actual reason for the deadlocking, because these problems usually increase exponentially with the number of users. Hope that helps.
Deadlocks are to be expected, and SQL Server seems to be worse on this front than other database servers. First, you should try to minimize your deadlocks. Try using the SQL Server Profiler to figure out why they are happening and what you can do about them. Next, configure your ORM to not read after making an update in the same transaction, if possible. Finally, after you've done that, if you happen to use Spring and Hibernate together, you can put in an interceptor to watch for this situation. Extend MethodInterceptor and place it in your Spring bean under interceptorNames. When the interceptor is run, use invocation.proceed() to execute the transaction, catch any exceptions, and define the number of times you want to retry.
An O/R mapper can't detect this, as the deadlock always occurs inside the DBMS, and could be caused by locks set by other threads or even other applications.
To be sure a piece of code doesn't create a deadlock, always use these rules:
- do your fetching outside the transaction: first fetch, then perform the processing, then perform DML statements like insert, delete and update
- every action inside a method or series of methods which work with a transaction has to use the same connection to the database. This is required because, for example, write locks are ignored by statements executed over the same connection (as that same connection set the locks ;)).
Often, deadlocks occur because code either fetches data inside a transaction in a way that opens a NEW connection (which then has to wait for locks) or uses different connections for the statements in a transaction.
I had a quick look (no doubt you have too) and couldn't find anything suggesting that Hibernate, at least, offers this. This is probably because ORMs consider it outside the scope of the problem they are trying to solve.
If you are having issues with deadlocks certainly follow some of the suggestions posted here to try and resolve them. After that you just need to make sure all your database access code gets wrapped with something which can detect a deadlock and retry the transaction.
One system I worked on was based on "commands" that were committed to the database when the user pressed save. It worked like this:
While (true)
    Start a database transaction
    Foreach command to process
        Read the data the command needs into objects
        Update the objects by calling the command's run method
    EndForeach
    Save the objects to the database
    If no deadlock occurred
        Commit the database transaction
        Exit the loop -- we are done
    Else
        Abort the database transaction
        Log the deadlock and try again
    EndIf
EndWhile
You may be able to do something like this with any ORM; we used an in-house data access system, as ORMs were too new at the time.
We ran the commands outside of a transaction while the user was interacting with the system, then reran them as above (when the user did a "save") to cope with changes other people had made. As we already had a good idea of the rows each command would change, we could even use locking hints or "select for update" to take out all the write locks we needed at the start of the transaction. (We also sorted the set of rows to be updated, to reduce the number of deadlocks even more.)

Can SQL CLR triggers do this? Or is there a better way?

I want to write a service (probably in c#) that monitors a database table. When a record is inserted into the table I want the service to grab the newly inserted data, and perform some complex business logic with it (too complex for TSQL).
One option is to have the service periodically check the table to see if new records have been inserted. The problem with doing it that way is that I want the service to know about the inserts as soon as they happen, and I don't want to kill the database performance.
Doing a little research, it seems like writing a CLR trigger could do the job. I could write a trigger in C# that fires when an insert occurs, and then sends the newly inserted data to a Windows or WCF service.
What do you think, is that a good (or even possible) use of SQL CLR triggers?
Any other ideas on how to accomplish this?
You should probably decouple the post-processing from the inserting:
In the Insert trigger, add the record's PK into a queue table.
In a separate service, read from the queue table and do your complex operation. When finished, mark the record as processed (together with error/status info), or delete the record from the queue.
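The reader side of that queue can stay simple too; a sketch with invented names (READPAST lets several readers work concurrently without blocking on each other's rows):

DECLARE @work TABLE (RecordId INT);
-- atomically claim one work item by deleting it and capturing the key
DELETE TOP (1) FROM dbo.WorkQueue WITH (ROWLOCK, READPAST)
OUTPUT deleted.RecordId INTO @work;

IF EXISTS (SELECT * FROM @work)
BEGIN
    DECLARE @id INT = (SELECT RecordId FROM @work);
    EXEC dbo.ProcessInsertedRecord @id;  -- hypothetical proc holding the complex logic
END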
What you are describing is sometimes called a Job Queue or a Message Queue. There are several threads about using a DBMS table (as well as other techniques) for doing this that you can find by searching.
I would consider doing anything like this with a trigger to be an inappropriate use of a database feature that's easy to get into trouble with anyway. Triggers are best used for low-overhead DBMS structural functionality (e.g. fine-grained referential integrity checking) and need to be lightweight and synchronous. It could be done, but probably wouldn't be a good idea.
I would suggest having a trigger on the table that calls the SQL Server Service Broker, that then (asynchronously) executes a CLR stored procedure that does all your work in a different thread.
I have a service that polls the database every minute; it doesn't cause that many performance problems and it is a clean solution. Plus, if your service or other WCF endpoint isn't there, your trigger will fail or the message will be lost, and you will have to poll anyway later.
I would not recommend using a CLR trigger, or any sort of trigger, for this. You are opening yourself up to serious maintainability and potential locking issues. (A very simple trigger that chucks stuff into an audit/queue table may be acceptable IF you don't care about @@IDENTITY after inserts and you will never lock the audit/queue table up.)
Instead, from your application/ORM you should insert stuff into a queue table and have this queue processed on a regular basis. This can be done by having a transaction in your ORM, or by kicking off a stored proc that starts a transaction and commits both the change and the audit/queue entry atomically. (Be careful with locking here.)
If you need immediate action, look at spawning a job to clear the queue after you do an insert/update/delete on the table. Also ensure you double-check the queue once a minute or so, in case the background process was not kicked off properly. If it's a web app and you want to avoid spawning threads, you could communicate with a background process to clear up the queue.
Why not implement the insert in a stored procedure, and do the business logic in the procedure after the insert? What is so complicated about it that it can't be written in T-SQL?