Managing SQL trigger recursion

I have a table with a trigger on it, and the trigger changes data in that same table. Naturally, this fires the trigger again.
Each trigger invocation knows (there are some rules) whether it should be the last one in the chain, and if so, it has to keep the next trigger from doing anything.
Here is the problem I see: if I rely on shared state (say, a stop flag), things could go wrong. For instance, a user changes the table and a new trigger chain starts. The trigger decides it should be the terminator and sets the stop flag. At that moment another user changes the table, starting a new chain that should run; but because the stop flag is set, that chain clears the flag and quits. Now the recursive trigger, the one we meant to ignore, starts, sees that the flag is cleared... and, oops, it runs anyway.
I don't know what the execution order is in such cases: is the recursive trigger executed immediately after the data change, or does the parent trigger complete first? So I have no idea how to organize this process.

Consider ditching the complicated triggers and simplifying everything into either stored procedures, or if possible, standard SQL set-based operations.
Stored procedures are easier to understand and maintain than many layers of triggers on a given table. Triggers do have value in some scenarios, but when you have triggers that invoke a chain of triggers, or triggers that depend on data being revised by other triggers, all on the same table, you really begin to give yourself a maintenance nightmare. As a starting point, simplify by either improving your SQL update / insert statements or refactoring your triggers into a stored procedure of some sort.
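If you do keep the trigger and you happen to be on SQL Server (an assumption; the question is only tagged sql), the ordering question has a clear answer: a recursive trigger fires immediately, nested inside the statement that caused it, and the parent invocation resumes only after it returns. That also means you can replace the shared stop flag with TRIGGER_NESTLEVEL(), which is local to the current call chain and cannot be clobbered by another session. A minimal sketch with made-up table and trigger names:

CREATE TRIGGER trg_Foo_Update ON dbo.Foo
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- TRIGGER_NESTLEVEL() is 1 for the user's own change and grows by one
    -- for each recursive firing, so every invocation knows its depth in the
    -- chain without any shared state another session could touch.
    IF TRIGGER_NESTLEVEL() > 1
        RETURN;  -- this is the recursive invocation: end the chain here

    -- The data change that re-fires the trigger (placeholder logic).
    UPDATE f
    SET    f.SomeColumn = f.SomeColumn
    FROM   dbo.Foo AS f
    JOIN   inserted AS i ON i.Id = f.Id;
END;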

Related

Trigger on update a table

I have a stored proc with complicated logic. When it completes, I want to run a second piece of logic to calculate something. But that second piece is independent, and I want to return control to the user as soon as the stored proc finishes. What is the best way to do this?
Right now I am using a log table and have created a trigger on the update of an "end_time" column, but this does not release the executing thread.
Let me know if the question is not clear.
Update triggers are synchronous and run in the context of the UPDATE transaction. If you need to run an asynchronous process using T-SQL alone, consider Service Broker. Be aware there's a bit of a learning curve if you haven't used SB before.
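For reference, the moving parts are small; everything below uses invented names and assumes Service Broker is enabled on the database (ALTER DATABASE ... SET ENABLE_BROKER). The proc just SENDs a message and returns, and an activated procedure attached to the queue performs the second, independent calculation later:

-- One-time setup.
CREATE MESSAGE TYPE CalcRequest VALIDATION = NONE;
CREATE CONTRACT CalcContract (CalcRequest SENT BY INITIATOR);
CREATE QUEUE CalcQueue;
CREATE SERVICE CalcService ON QUEUE CalcQueue (CalcContract);
GO

-- At the end of the stored proc (or in the trigger on end_time):
DECLARE @h UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE CalcService
    TO SERVICE 'CalcService'
    ON CONTRACT CalcContract
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h
    MESSAGE TYPE CalcRequest (N'key of the row to process');

-- Control returns to the caller as soon as the transaction commits; a
-- procedure hooked to CalcQueue via ALTER QUEUE ... WITH ACTIVATION then
-- RECEIVEs the message and runs the second calculation asynchronously.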

Fire SQL trigger only when a particular user updates the row

There is a trigger in postgres that gets called whenever a particular table is updated.
It is used to send updates to another API.
Is there a way one can control the firing of this trigger?
Sometimes when I update the table I don't want the trigger to be fired. How do I do this?
Is there some SQL syntax to silence a trigger?
If not:
Can I make the trigger fire when a row is updated by PG user X, but not when PG user Y updates the table?
In recent Postgres versions, there is a WHEN clause that you can use to conditionally fire the trigger. You could use it like:
... when (old.* is distinct from new.*) ...
I'm not 100% sure this one will work (I can't test at the moment):
... when (current_user = 'foo') ...
(If not, try placing it in an if block in your plpgsql.)
http://www.postgresql.org/docs/current/static/sql-createtrigger.html
(There is also the [before|after] update of [col_name] syntax, but I tend to find it less useful because it'll fire even if the column's value remains the same.)
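Putting the pieces together, a complete definition might look like this (trigger, table and function names are invented, and as said above the current_user test is untested):

-- Call the existing notify function only for real changes made by user foo.
CREATE TRIGGER send_api_update
AFTER UPDATE ON my_table
FOR EACH ROW
WHEN (old.* IS DISTINCT FROM new.* AND current_user = 'foo')
EXECUTE PROCEDURE notify_api();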
Adding this extra note, seeing that @CraigRinger's answer highlights what you're up to...
Trying to set up master-master replication between Salesforce and Postgres using conditional triggers is, I think, a pipe dream. Just forget it... There's going to be a lot more to it than that: you'll need to lock data as appropriate on both ends (which won't necessarily be feasible in a reasonable way), manage the resulting deadlocks (which might not automatically get detected), and deal with conflicting data.
Your odds of successfully pulling this off with a tiny team are about zero -- especially if your Postgres skills are at the level where investing time in reading the manual would answer your own questions. You can safely bet that someone much more competent at Salesforce or at some major SQL shop (e.g. the one Craig works for) has considered the same, and either failed miserably or ruled it out.
Moreover, I'd stress that implementing efficient, synchronous, multi-master replication is not a solved problem. You read that right: not solved. Just a few years ago, it wasn't solved well enough to make it into the Postgres core. So you have no prior art that works well to base your work on and iterate upon.
This seems to be the same problem as this post a few minutes ago, approaching it from a different direction.
If so, while you can indeed do as Denis suggests, don't attempt to reinvent this wheel. Use an established tool like Slony-I or Bucardo if you are attempting two-way (multi-master) replication. You also need to understand the major limitations involved in multi-master when dealing with conflicting updates.
In general, there are a few ways to control trigger firing:
Let the trigger fire, then put logic in the PL/PgSQL trigger body to cause it to take no action if a certain condition is met. This is often the only option when the rules are complex.
As Denis points out, use a trigger WHEN clause to conditionally fire the trigger
Use session_replication_role to control the firing of all triggers
Directly enable/disable triggers (see the snippet below).
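Options 3 and 4 are essentially one-liners (superuser and table owner privileges respectively; names invented):

-- Option 3: ordinarily-enabled triggers are skipped while the session runs in replica mode.
SET session_replication_role = replica;
UPDATE my_table SET some_col = 'x' WHERE id = 1;   -- no trigger fires
SET session_replication_role = DEFAULT;

-- Option 4: disable/enable one specific trigger (takes a lock on the table).
ALTER TABLE my_table DISABLE TRIGGER send_api_update;
-- ... quiet maintenance here ...
ALTER TABLE my_table ENABLE TRIGGER send_api_update;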
In particular, if your application shares a single SQL-level user ID for all database access and does its own user management above the SQL level, and you want to control trigger firing on a per-user basis, the only way to do it will be with in-trigger logic. You might find this prior answer about getting user IDs within triggers useful:
Passing user id to PostgreSQL triggers
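If you go the in-trigger route with an application-level user, one common trick is to have the application stash its user name in a custom setting and have the trigger function read it back. Everything here is illustrative: myapp.current_user is an arbitrary custom GUC, not a built-in, and the two-argument current_setting() needs PostgreSQL 9.6 or later.

-- The application sets this once per session/connection:
SET myapp.current_user = 'alice';

-- The trigger function only does its work for that user.
CREATE OR REPLACE FUNCTION notify_api() RETURNS trigger AS $$
BEGIN
    -- current_setting(..., true) returns NULL instead of raising an error
    -- when the variable was never set.
    IF current_setting('myapp.current_user', true) IS DISTINCT FROM 'alice' THEN
        RETURN NEW;   -- someone else: do nothing
    END IF;

    -- ... push the change to the external API here ...
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;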

How to handle errors in a trigger?

I'm writing some SQL code that needs to be executed when rows are inserted in a database table, so I'm using an AFTER INSERT trigger; the code is quite complex, thus there could still be some bugs around.
I've discovered that if an error happens while executing a trigger, SQL Server aborts the batch and/or the whole transaction. This is not acceptable for me, because it causes problems for the main application that uses the database; I also don't have the source code for that application, so I can't properly debug it. I absolutely need all database actions to succeed, even if my trigger fails.
How can I code my trigger so that, should an error happen, SQL Server will not abort the INSERT action?
Additionally, how can I perform proper error handling so that I can actually know the trigger has failed? Sending an email with the error data would be ok for me (the trigger's main purpose is actually sending emails), but how do I detect an error condition in a trigger and react to it?
Edit:
Thanks for the tips about improving performance by using something other than a trigger, but this code is not "complex" in the sense of being long-running or performance-intensive; it simply builds and sends a mail message. In order to do so, though, it must retrieve data from various linked tables, and since I am reverse-engineering this application, I don't have the database schema available and am still trying to find my way around it; this is why conversion errors or unexpected/null values can still creep up and crash the trigger execution.
Also, as stated above, I absolutely can't debug the application itself, nor modify it to do what I need in the application layer; the only way to react to an application event is by firing a database trigger when the application writes to the DB that something has just happened.
If the operations in the trigger are complex and/or potentially long running, and you don't want the activity to affect the original transaction, then you need to find a way to decouple the activity.
One way might be to use Service Broker. In the trigger, just create message(s) (one per row) and send them on their way, then do the rest of the processing in the service.
If that seems too complex, the older way to do it is to insert the rows needing processing into a work/queue table, and then have a job continuously pulling rows from there and doing the work.
Either way, you're now not preventing the original transaction from committing.
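A rough sketch of the work-table variant, with invented names (assuming SQL Server): the trigger only records which rows need attention, and a scheduled job does the slow part.

CREATE TABLE dbo.MailWorkQueue
(
    QueueId     INT IDENTITY PRIMARY KEY,
    SourceRowId INT       NOT NULL,
    EnqueuedAt  DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
    ProcessedAt DATETIME2 NULL
);
GO

CREATE TRIGGER trg_MyTable_Enqueue ON dbo.MyTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Keep the trigger trivial: just remember what needs processing.
    INSERT INTO dbo.MailWorkQueue (SourceRowId)
    SELECT i.Id FROM inserted AS i;
END;
GO

-- A SQL Agent job (or any scheduled process) then repeatedly picks up rows
-- WHERE ProcessedAt IS NULL, does the work, and stamps ProcessedAt.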
Triggers are part of the transaction. You could wrap the trigger code in a try/catch and swallow the error, or, somewhat more professionally, try/catch/log and then swallow, but really you should let it go bang and then fix the real problem, which can only be in your trigger.
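A sketch of the try/catch/log variant, for what it's worth (SQL Server 2005+; the log table is hypothetical). Triggers run with XACT_ABORT on by default, so it is switched off here to give the CATCH block a chance to keep the outer INSERT alive; truly fatal errors will still doom the transaction.

ALTER TRIGGER trg_MyTable_Mail ON dbo.MyTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT OFF;

    BEGIN TRY
        -- ... build and send the mail message here ...
        PRINT 'placeholder for the fragile logic';
    END TRY
    BEGIN CATCH
        -- Swallow the error so the application's INSERT still succeeds,
        -- but keep a record of what went wrong. Skip the logging if the
        -- transaction has already been doomed (XACT_STATE() = -1).
        IF XACT_STATE() <> -1
            INSERT INTO dbo.TriggerErrorLog (OccurredAt, ErrorMessage)
            VALUES (GETDATE(), ERROR_MESSAGE());
    END CATCH;
END;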
If none of the above are acceptable, then you can't use a trigger.

How to define a trigger ON COMMIT in Oracle?

Is there any way in an Oracle database to define a trigger that fires synchronously before COMMIT (and rolls back if it throws an exception) whenever a specified table has been changed?
There is no ON COMMIT trigger mechanism in Oracle. There are workarounds however:
You could use a materialized view with ON COMMIT REFRESH and add triggers to this MV. This would allow you to trigger the logic at commit time, when a base table has been modified. If the trigger raises an error, the transaction will be rolled back (you will lose all uncommitted changes).
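Purely as an illustration of option (1), with invented names (fast refresh on commit has quite a few restrictions, and triggers on materialized views are generally discouraged, so treat this as a sketch, not a recipe):

CREATE MATERIALIZED VIEW LOG ON orders WITH PRIMARY KEY, ROWID;

CREATE MATERIALIZED VIEW orders_mv
  REFRESH FAST ON COMMIT
  AS SELECT order_id, status FROM orders;

-- This fires as part of the COMMIT of any transaction that touched ORDERS;
-- raising an error here makes that COMMIT fail and roll back.
CREATE OR REPLACE TRIGGER orders_mv_check
  AFTER INSERT OR UPDATE ON orders_mv
  FOR EACH ROW
BEGIN
  IF :NEW.status = 'INVALID' THEN
    RAISE_APPLICATION_ERROR(-20001, 'commit rejected: invalid order status');
  END IF;
END;
/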
You can use DBMS_JOB to defer an action to after the commit. This would be an asynchronous action and may be desirable in some cases (for example when you want to send an email after the transaction has been successful). If you roll back the primary transaction, the job will be cancelled. The job and the primary session are independent: if the job fails the main transaction will not be rolled back.
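Option (2) can be as small as this (the procedure name is invented). Because DBMS_JOB is transactional, the job only becomes runnable once the surrounding transaction commits, and it disappears if the transaction rolls back:

DECLARE
  l_job BINARY_INTEGER;
BEGIN
  -- Queue the follow-up action (e.g. sending the email).
  DBMS_JOB.SUBMIT(job  => l_job,
                  what => 'send_confirmation_mail;');
  -- ... the rest of the business transaction ...
  COMMIT;   -- only now does the job become visible to the job queue
END;
/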
In your case, you could probably use option (1). I personally don't like to code business logic in triggers since it adds a lot of complexity, but technically I think it would be doable.
I had a similar problem, but option (1) was unfortunately not convenient for my case.
Another possible solution, which is also suggested by "Ask Tom", is to specify a stored procedure and simply call that procedure before executing the COMMIT.
This solution is only convenient if you have access to the code which executes the COMMIT, but for my case this was the easiest solution.

Database safety: Intermediary "to_be_deleted" column/table?

Everyone has accidentally forgotten the WHERE clause on a DELETE query and blasted some un-backed up data once or twice. I was pondering that problem, and I was wondering if the solution I came up with is practical.
What if, in place of actual DELETE queries, the application and maintenance scripts did something like:
UPDATE foo SET to_be_deleted=1 WHERE blah = 50;
And then a cron job was set to go through and actually delete everything with the flag? The downside would be that pretty much every other query would need to have WHERE to_be_deleted != 1 appended to it, but the upside would be that you'd never mistakenly lose data again. You could see "2,349,325 rows affected" and say, "Hmm, looks like I forgot the WHERE clause," and reset the flags. You could even make the to_be_deleted field a DATE column, so the cron job would check to see if a row's time had come yet.
Also, you could remove DELETE permission from the production database user, so even if someone managed to inject some SQL into your site, they wouldn't be able to remove anything.
So, my question is: Is this a good idea, or are there pitfalls I'm not seeing?
That is fine if you want to do that, but it seems like a lot of work. How many people are manually changing the database? It should be very few, especially if your users have an app to work with.
When I work on the production db I put EVERYTHING I do in a transaction, so if I mess up I can roll back. Just having a standard practice like that has helped me.
I don't see anything really wrong with it, though, other than that every single point of data manipulation in each application will have to be aware of this functionality, not just the data it wants.
This would be fine as long as your application does not require the data to be deleted immediately, since you have to wait for the next interval of the cron job.
I think a better solution and the more common practice is to use a development server and a production server. If your development database gets blown out, simply reload it. No harm done. If you're testing code on your production database, you deserve anything bad that happens.
A lot of people have a delete flag or a row status flag. But if someone is doing a change through the back end (and they will be doing it since often people need batch changes done that can't be accomplished through the front end) and they make a mistake they will still often go for delete. Ultimately this is no substitute for testing the script before applying it to a production environment.
Also, what happens if the query "UPDATE foo SET to_be_deleted=1" gets executed because someone left off the WHERE clause? Unless you have auditing columns with a timestamp, how do you know which rows were flagged deliberately and which were flagged in error? And even if you do have auditing columns with a timestamp, if the auditing is done via a stored procedure or programmer convention, these back-end queries may not supply information letting you know they were just applied.
Too complicated. The standard approach to this is to do all your work inside a transaction, so if you screw up and forget a WHERE clause, then you simply roll back when you see the "2,349,325 rows affected" result.
It may be easier to create a parallel table for deleted rows. A DELETE trigger (and UPDATE too if you want to undo changes as well) on the original table could copy the affected rows to the parallel table. Adding a datetime column to the parallel table to record the date & time of the change would let you permanently remove rows past a certain age using your cron job.
That way, you'd use normal DELETE statements on the original table, so there's no chance you'll forget to run your special "DELETE" statement. You also sidestep the to_be_deleted != 1 expression, which is just a bug waiting to happen when someone inevitably forgets.
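A sketch of that idea in PostgreSQL, with invented names (the same pattern works in most databases that support row-level triggers):

-- Parallel table: same columns as foo, plus when the row was removed.
CREATE TABLE foo_deleted (LIKE foo);
ALTER TABLE foo_deleted ADD COLUMN deleted_at timestamptz NOT NULL DEFAULT now();

CREATE OR REPLACE FUNCTION archive_foo_delete() RETURNS trigger AS $$
BEGIN
    -- Copy the outgoing row before it disappears.
    INSERT INTO foo_deleted SELECT OLD.*, now();
    RETURN OLD;   -- allow the DELETE to proceed
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER foo_archive_delete
BEFORE DELETE ON foo
FOR EACH ROW EXECUTE PROCEDURE archive_foo_delete();

-- The cron job then just purges old archive rows, e.g.:
-- DELETE FROM foo_deleted WHERE deleted_at < now() - interval '30 days';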
It looks like you're describing three cases here.
Case 1 - maintenance scripts. Risk can be minimized by developing them and testing them in an environment other than your production box. For quick maintenance, do the maintenance in a single transaction, and check everything before committing. If you made a mistake, issue the rollback command. For more serious maintenance that you can't necessarily wait around for, or do in a single transaction, consider taking a backup directly before running the maintenance job, so that you can always restore back to the point before you ran your script if you encounter serious problems.
Case 2 - SQL injection. This is an architecture issue. Your application shouldn't pass raw SQL into the database; access should be controlled through packages / stored procedures / functions, and values that come from the UI and are used in a DML statement should be applied using bind variables rather than by building dynamic SQL through string concatenation.
Case 3 - Regular batch jobs. These should have been tested before being deployed to production. If you delete too much, you have a bug, and are going to have to rely on your backup strategy.
"Everyone has accidentally forgotten the WHERE clause on a DELETE query and blasted some un-backed up data once or twice."
No. I always prototype my DELETEs as SELECTs, and only if the SELECT returns exactly the rows I want to delete do I change the part of the statement before the WHERE into a DELETE. This lets me inspect, in any needed detail, the rows I'm about to affect before doing anything.
You could set up a view on that table that selects WHERE to_be_deleted != 1, and all of your normal selects are done on that view - that avoids having to put the WHERE on all of your queries.
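For example (view name invented); you could even rename the base table and give the view the table's old name so existing queries keep working unchanged:

CREATE VIEW foo_live AS
SELECT *
FROM   foo
WHERE  to_be_deleted != 1;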
The pitfall is that it's unnecessarily complicated, and someone will inadvertently forget to check the flag in their query. There's also the issue of potentially needing to delete something immediately instead of waiting for the scheduled job to run.
To avoid the to_be_deleted WHERE clause, you could create a trigger that fires before the DELETE command and inserts the deleted rows into a separate table. This table could be cleared out when you're sure everything in it really needs to be deleted, or you could keep it around for archive purposes.
You also get a "soft delete" feature so you can give the(certain) end-users the power of "undo" - there would have to be a pretty strong downside in the mix to cancel the benefits of soft deleting.
The "WHERE to_be_deleted <> 1" on every other query is a huge one. Another is once you've ran your accidentally rogue query, how will you determine which of the 2,349,325 were previously marked as deleted?
I think the practical solution is regular backups, and failing that, perhaps a delete trigger that captures the tuples to be axed.
The other option would be to create a delete trigger on each table. When anything is deleted, it would insert that "to be deleted" record into another table, ideally named TABLENAME_deleted.
The downside would be that the db would have twice as many tables.
I don't recommend triggers in general, but it might be what you are looking for.
This is why, whenever you are editing data by hand, you should BEGIN a transaction, edit your data, check that it looks good (for instance, that you didn't delete more rows than you were expecting), and then COMMIT. If you're using Postgres then you want to create lots of savepoints as well, so that a typo doesn't wipe out your intermediate work.
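For example, in Postgres (column names after the first statement are made up):

BEGIN;
DELETE FROM foo WHERE blah = 50;        -- check the "rows affected" count here
SAVEPOINT after_delete;
UPDATE foo SET bar = 1 WHERE baz = 2;   -- a typo in this step...
ROLLBACK TO SAVEPOINT after_delete;     -- ...only loses work done since the savepoint
COMMIT;                                 -- or ROLLBACK to discard everything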
But that said, in many applications it does make sense to have software mark records as invalid rather than deleting them. Add a last_modified date that is automatically updated, and you are all prepared to set up incremental updates into a data warehouse. Even if you don't have a data warehouse now, it never hurts to prepare for the future when preparing is cheap. Plus in the event of manual mistakes you still have the data, and can just find all of the records that got "deleted" when you made your mistake and fix them. (You should still use transactions though.)