I have a stored proc with complicated logic. When it completes, I want to run a second piece of logic to calculate something. But that second piece is independent, and I want to return control to the user as soon as the stored proc itself is done. What is the best way to do this?
Right now I am using a log table and have created a trigger on update of an "end_time" column, but this does not release the executing thread.
Let me know if the question is not clear.
Update triggers are synchronous and run in the context of the UPDATE transaction. If you need to run an asynchronous process using T-SQL alone, consider Service Broker. Be aware there's a bit of a learning curve if you haven't used SB before.
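To make that concrete, here is a minimal single-database Service Broker sketch; all object names are invented for the example, and real code would also handle the EndDialog/Error system messages. The sender enqueues a message and returns immediately; the activation procedure then processes it on its own session:

    -- One-time setup (the database also needs ALTER DATABASE ... SET ENABLE_BROKER).
    CREATE MESSAGE TYPE CalcRequest VALIDATION = WELL_FORMED_XML;
    CREATE CONTRACT CalcContract (CalcRequest SENT BY INITIATOR);
    CREATE QUEUE CalcQueue;
    CREATE SERVICE CalcService ON QUEUE CalcQueue (CalcContract);
    GO

    -- The activation proc runs on its own session, outside the caller's transaction.
    CREATE PROCEDURE dbo.ProcessCalcQueue
    AS
    BEGIN
        DECLARE @h UNIQUEIDENTIFIER, @body XML;
        RECEIVE TOP (1) @h = conversation_handle,
                        @body = CAST(message_body AS XML)
        FROM CalcQueue;
        IF @h IS NOT NULL
        BEGIN
            -- ... run the independent second calculation here ...
            END CONVERSATION @h;
        END
    END;
    GO

    ALTER QUEUE CalcQueue WITH ACTIVATION (
        STATUS = ON,
        PROCEDURE_NAME = dbo.ProcessCalcQueue,
        MAX_QUEUE_READERS = 1,
        EXECUTE AS OWNER);
    GO

    -- At the end of your stored proc: enqueue the request and return to the user.
    DECLARE @dialog UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @dialog
        FROM SERVICE CalcService TO SERVICE 'CalcService'
        ON CONTRACT CalcContract
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @dialog MESSAGE TYPE CalcRequest (N'<run/>');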
According to several resources, such as this,
A query that is executed within the context of a trigger is automatically wrapped in a transaction. If there are any distributed queries in the trigger code, the transaction is promoted to a distributed transaction automatically.
Simple question - is there a way to prevent this behavior? I'm looking for a way to explicitly prevent code in my trigger from running in the context of a transaction.
If you are trying to do something asynchronous so that the calling transaction doesn't have to wait, you may consider Service Broker, which is designed to do exactly that - go fire off some asynchronous task, and return control to the caller, regardless of transaction scope.
Another idea is to not have your trigger perform the work, but instead pop a work item onto a queue table and have a background process running continuously to process the queue. This isn't necessarily easy to do if your work item operates on the set of data in inserted/deleted, but without more context it certainly seems like a viable option.
I don't know of a way to prevent a trigger from being a part of the calling transaction - in fact that's kind of the whole point.
This is called an "autonomous transaction", and the simplest way to implement one is to create a loopback linked server that points back at the original database.
See this MSDN blog for a possible solution.
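For the record, a rough sketch of the loopback approach (the server alias and the called procedure are invented, and the 'remote proc transaction promotion' option requires SQL Server 2008 or later). The key is disabling distributed-transaction promotion, so the call through the link runs in its own transaction rather than enlisting in the trigger's:

    -- Create a linked server pointing back at the local instance.
    DECLARE @self SYSNAME = @@SERVERNAME;
    EXEC sp_addlinkedserver @server = N'loopback', @srvproduct = N'',
         @provider = N'SQLNCLI', @datasrc = @self;
    EXEC sp_serveroption N'loopback', N'rpc out', N'true';
    -- Without this, the remote call would be promoted to a distributed
    -- transaction and tied to the trigger's transaction anyway.
    EXEC sp_serveroption N'loopback', N'remote proc transaction promotion', N'false';
    GO

    -- Inside the trigger, this call now commits independently of the caller:
    EXEC loopback.MyDb.dbo.WriteAuditRow @SomeArg = 1;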
I have a table with a trigger assigned to it, and the trigger changes data in the same table. Naturally, this fires the trigger again.
Every trigger instance knows (there are some rules) whether it should be the last one in the chain or not. And if it should be, it has to turn the next trigger off.
I see the following problem: if I keep state (say, a stop flag), it could work in an unexpected way. For instance, a user changes the table and a new trigger chain starts. The trigger decides it should be the terminator and sets the stop flag. At that moment another user changes the table, so a new chain starts that should be executed; but because the stop flag is set, it clears the flag and quits. Now the recursive trigger (the one we think is being suppressed) starts, checks whether the flag is cleared... Oops, it is executed!
I don't know what the order is in such cases (is the recursive trigger executed immediately after the data change, or does the parent one complete first?), so I have no idea how to organize this process.
Consider ditching the complicated triggers and simplifying everything into either stored procedures, or if possible, standard SQL set-based operations.
Stored procedures are easier to understand and maintain than many layers of triggers on a given table. Triggers do have value in some scenarios, but when you have triggers that invoke a chain of triggers, or triggers that depend on data being revised by other triggers, all on the same table, you really begin to give yourself a maintenance nightmare. As a starting point, simplify by either improving your SQL update/insert statements or refactoring your triggers into a stored procedure of some sort.
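If the triggers must stay for now, one relevant fact: a recursive trigger fires synchronously, as part of the statement that modified the data, before the parent trigger's next statement runs. That also means a session-local check such as TRIGGER_NESTLEVEL() can stop the recursion without any shared stop flag, which avoids the race described above entirely. A minimal sketch, with invented names:

    CREATE TRIGGER trg_MyTable_Upd ON dbo.MyTable
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- The nest level is per-session, so concurrent users cannot
        -- clear each other's "flag" the way a shared state column can.
        IF TRIGGER_NESTLEVEL(OBJECT_ID('dbo.trg_MyTable_Upd')) > 1
            RETURN;
        -- ... the self-modifying UPDATE goes here ...
    END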
If I have a stored procedure or a trigger in SQL Server 2008, can it do some SQL calculations 'in another non-blocking thread', i.e. something in the background?
Also, can two SQL code blocks be run in parallel? Or two stored procs?
For example, imagine we are given the job of calculating the score for each Stack Overflow user after that user does some action (and please leave all the 'do that elsewhere/service/batch/overnight/etc.' suggestions aside).
So we have a trigger on the Post table: when a new post is INSERTED, the trigger fires and, as part of its logic, calculates the user's latest score. Instead of waiting for the stored proc to finish and blocking the current SQL thread of execution, can we ask it to calculate the score in the background or in parallel?
cheers!
SQL Server does not have parallel or deferred execution: each block of running code in a connection is serial, one line after the other.
To decouple processing, you usually have to use SQL Server Agent jobs or Service Broker. These start executing in a new connection, new session, etc.
This makes sense when you consider what a background thread would have to deal with:
What if you want to rollback your changes? What does the background thread do and how does it know?
What data does it use? New, Old, lock wait, snapshot?
What if it gets ahead of the main thread and uses stale data?
No, but you could write the request to a queue. Service Broker, a SQL Server component, provides support for this kind of thing. It's probably the best option available for asynchronous processing.
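A sketch of how the trigger side could look, assuming a Service Broker message type, contract, and service have been created as in the earlier example (all names are again invented):

    CREATE TRIGGER trg_Post_Insert ON dbo.Post
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        DECLARE @dialog UNIQUEIDENTIFIER, @body XML;
        -- Capture the affected rows; "inserted" is only visible inside the trigger.
        SET @body = (SELECT Id, OwnerUserId FROM inserted
                     FOR XML PATH('post'), ROOT('posts'), TYPE);
        BEGIN DIALOG CONVERSATION @dialog
            FROM SERVICE CalcService TO SERVICE 'CalcService'
            ON CONTRACT CalcContract
            WITH ENCRYPTION = OFF;
        -- SEND is cheap; the score is recalculated later, on another session.
        SEND ON CONVERSATION @dialog MESSAGE TYPE CalcRequest (@body);
    END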
I want to write a service (probably in C#) that monitors a database table. When a record is inserted into the table, I want the service to grab the newly inserted data and perform some complex business logic with it (too complex for T-SQL).
One option is to have the service periodically check the table to see if new records have been inserted. The problem with doing it that way is that I want the service to know about the inserts as soon as they happen, and I don't want to kill the database performance.
Doing a little research, it seems like a CLR trigger could do the job. I could write the trigger in C#, have it fire when an insert occurs, and then send the newly inserted data to a Windows or WCF service.
What do you think, is that a good (or even possible) use of SQL CLR triggers?
Any other ideas on how to accomplish this?
You should probably decouple the postprocessing from the inserting:
In the Insert trigger, add the record's PK into a queue table.
In a separate service, read from the queue table and do your complex operation. When finished, mark the record as processed (together with error/status info), or delete the record from the queue.
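A minimal sketch of that pattern, with invented table and column names. The trigger only records the key, so it stays cheap, and the service claims work items with locking hints so multiple workers don't collide:

    CREATE TABLE dbo.WorkQueue (
        QueueId    INT IDENTITY(1,1) PRIMARY KEY,
        RecordId   INT         NOT NULL,             -- PK of the source row
        EnqueuedAt DATETIME    NOT NULL DEFAULT GETDATE(),
        Status     VARCHAR(20) NOT NULL DEFAULT 'pending'
    );
    GO

    CREATE TRIGGER trg_Source_Enqueue ON dbo.SourceTable
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Reads from "inserted", so multi-row inserts are handled correctly.
        INSERT INTO dbo.WorkQueue (RecordId)
        SELECT Id FROM inserted;
    END;
    GO

    -- The service's dequeue query: READPAST skips rows another worker
    -- has locked, UPDLOCK keeps two workers from claiming the same row.
    WITH nextItem AS (
        SELECT TOP (1) QueueId, RecordId, Status
        FROM dbo.WorkQueue WITH (UPDLOCK, READPAST)
        WHERE Status = 'pending'
        ORDER BY QueueId
    )
    UPDATE nextItem
    SET Status = 'processing'
    OUTPUT inserted.QueueId, inserted.RecordId;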
What you are describing is sometimes called a Job Queue or a Message Queue. There are several threads about using a DBMS table (as well as other techniques) for doing this that you can find by searching.
I would consider doing anything like this with a trigger an inappropriate use of a database feature that's easy to get into trouble with anyway. Triggers are best used for low-overhead, structural DBMS functionality (e.g. fine-grained referential integrity checking) and need to be lightweight and synchronous. It could be done, but it probably wouldn't be a good idea.
I would suggest having a trigger on the table that calls SQL Server Service Broker, which then (asynchronously) executes a CLR stored procedure that does all your work in a different thread.
I have a service that polls the database every minute; it doesn't cause much of a performance problem, and it is a clean solution. Plus, if your service or other WCF endpoint is not there, your trigger will fail or the notification will be lost, and you will have to poll anyway later.
I would not recommend using a CLR trigger, or any sort of trigger, for this. You are opening yourself up to serious maintainability and potential locking issues. (A very simple trigger that chucks stuff into an audit/queue table may be acceptable IF you don't care about @@IDENTITY after inserts and you will never lock the audit/queue table up.)
Instead, your application/ORM should insert into a queue table and have that queue processed on a regular basis. This can be done with a transaction in your ORM, or by kicking off a stored proc that starts a transaction and commits the data change and the audit/queue row atomically. (Be careful with locking here.)
If you need immediate action, look at spawning a job to clear the queue after you do an insert/update/delete on the table.
Also ensure you double-check the queue once a minute or so in case the background process was not kicked off properly. If it's a web app and you want to avoid spawning threads, you could communicate with a background process to clear up the queue.
Why not implement the insert in a stored procedure, and do the business logic in the procedure after the insert? What is so complicated about it that it can't be written in T-SQL?
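For what it's worth, a sketch of that suggestion with invented names; the insert and the follow-up logic live in one procedure, and the caller sees them as a single call:

    CREATE PROCEDURE dbo.InsertPost
        @Title  NVARCHAR(200),
        @UserId INT
    AS
    BEGIN
        SET NOCOUNT ON;
        INSERT INTO dbo.Post (Title, OwnerUserId) VALUES (@Title, @UserId);
        DECLARE @postId INT = SCOPE_IDENTITY();
        -- The "complex" business logic runs here, after the insert
        -- (dbo.RecalculateUserScore is a hypothetical follow-up proc).
        EXEC dbo.RecalculateUserScore @PostId = @postId, @UserId = @UserId;
    END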
I looked around and found some ideas about how to do this, but no definitive best way. One of the ideas was to use sp_start_job to kick off an SQL Server Agent job that runs the DTS package. If this is the best way to do it, then the next question would be, "How do I schedule a DTS package from a job and make it non-recurring?"
xp_cmdshell would allow you to execute dtsrun.
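For example (server and package names invented; /E uses a trusted connection):

    -- Launch a DTS package by name from T-SQL via the command shell.
    EXEC master..xp_cmdshell 'dtsrun /S MYSERVER /N "MyPackage" /E';

Bear in mind that xp_cmdshell blocks until dtsrun exits, so the calling trigger or proc waits for the whole package to finish.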
I wouldn't suggest tying this kind of functionality to a trigger. Triggers are supposed to be fast. I don't think there is any way to launch a DTS package that will be as fast as I would want a trigger to be. If this resonates with you, then I would suggest having your trigger simply insert a row into a special table, and then have a job that executes as often as you need for your purpose (every minute? every 10 seconds?) that monitors this table and kicks off the appropriate DTS package as needed.
Instead of using xp_cmdshell, I did this:
When a certain value in a table changes, the trigger uses msdb.sp_start_job to start a job. This job should not run on a schedule, only when initiated by a user. I set the job schedule to run one time, which is now in the past, and I unchecked the enabled box.
This job has one step, which is DTSRun /~Z0xHEXENCRYPTEDVALUE. The DTS package copies some rows from this server to another server on a different platform and on success resets values in the table with the trigger for next time. The trigger checks a table value before calling sp_start_job, so that the job starts only under certain conditions, not every time.
Since sp_start_job runs asynchronously, the trigger completes quickly. The only drawback is that I need to poll the value that gets reset on success and either let the user know it worked or, after some timeout period, that it did not.
The alternative would be to use xp_cmdshell if I needed synchronous operation, which might not be a good idea from inside of a trigger.
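A condensed sketch of the trigger described above, with invented table, column, and job names:

    CREATE TRIGGER trg_Control_Upd ON dbo.ControlTable
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Only start the job when the watched value says we should.
        IF EXISTS (SELECT 1 FROM inserted WHERE ExportFlag = 'ready')
            EXEC msdb.dbo.sp_start_job @job_name = N'RunDtsExport';
        -- sp_start_job returns as soon as the job is queued, so the
        -- trigger (and the user's transaction) finishes quickly.
    END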