Fire SQL trigger only when a particular user updates the row - sql

There is a trigger in postgres that gets called whenever a particular table is updated.
It is used to send updates to another API.
Is there a way one can control the firing of this trigger?
Sometimes when I update the table I don't want the trigger to be fired. How do I do this?
Is there SQL syntax to silence a trigger?
If not:
Can I fire triggers when a row is updated by PG user X, and have no trigger fire when PG user Y updates the table?

In recent Postgres versions, there is a when clause that you can use to conditionally fire the trigger. You could use it like:
... when (old.* is distinct from new.*) ...
I'm not 100% sure this one will work (can't test at the moment):
... when (current_user = 'foo') ...
(If not, try placing it in an if block in your plpgsql.)
http://www.postgresql.org/docs/current/static/sql-createtrigger.html
(There also is the [before|after] update of [col_name] syntax, but I tend to find it less useful because it'll fire even if the column's value remains the same.)
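For example, a minimal sketch (untested) that combines both conditions, assuming a hypothetical table accounts and an existing trigger function notify_api():
create trigger accounts_notify_api
after update on accounts
for each row
when (old.* is distinct from new.* and current_user <> 'replication_user')  -- hypothetical role name
execute procedure notify_api();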
Adding this extra note, seeing that @CraigRinger's answer highlights what you're up to...
Trying to set up master-master replication between Salesforce and Postgres using conditional triggers is, I think, a pipe dream. Just forget it... There's going to be a lot more to it than that: you'll need to lock data as appropriate on both ends (which won't necessarily be feasible in a reasonable way), manage the resulting deadlocks (which might not automatically get detected), and deal with conflicting data.
Your odds of successfully pulling this off with a tiny team are about zero -- especially if your Postgres skills are at the level where investing time in reading the manual would answer your own questions. You can safely bet that someone much more competent at Salesforce or some major SQL shop (e.g. the one Craig works for) considered the same, and either failed miserably or ruled it out.
Moreover, I'd stress that implementing efficient, synchronous, multi-master replication is not a solved problem. You read that right: not solved. Just a few years ago, it wasn't considered solved well enough to make it into the Postgres core. So you have no prior art that works well to base your work on and iterate upon.

This seems to be the same problem as this post a few minutes ago, approaching it from a different direction.
If so, while you can indeed do as Denis suggests, don't attempt to reinvent this wheel. Use an established tool like Slony-I or Bucardo if you are attempting two-way (multi-master) replication. You also need to understand the major limitations involved in multi-master when dealing with conflicting updates.
In general, there are a few ways to control trigger firing:
Let the trigger fire, then put logic in the PL/PgSQL trigger body to cause it to take no action if a certain condition is met. This is often the only option when the rules are complex.
As Denis points out, use a trigger WHEN clause to conditionally fire the trigger
Use session_replication_role to control the firing of all triggers.
Directly enable/disable triggers (both options are sketched just below).
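For instance, sketches of options 3 and 4 (table, column and trigger names are made up; changing session_replication_role normally requires superuser rights, and ENABLE REPLICA/ALWAYS triggers behave differently):
-- Option 3: ordinary triggers don't fire while the session is in 'replica' mode.
set session_replication_role = replica;
update foo set val = 42 where id = 1;   -- hypothetical columns; triggers stay silent
set session_replication_role = default;
-- Option 4: disable one named trigger (takes a lock on the table).
alter table foo disable trigger foo_notify_api;
update foo set val = 42 where id = 1;
alter table foo enable trigger foo_notify_api;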
In particular, if your application shares a single SQL-level user ID for all database access and does its own user management above the SQL level, and you want to control trigger firing on a per-user basis, the only way to do it will be with in-trigger logic. You might find this prior answer about getting user IDs within triggers useful:
Passing user id to PostgreSQL triggers
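As a sketch of that in-trigger approach, assuming the application sets a custom setting such as myapp.username at the start of each session (names are hypothetical; the two-argument current_setting needs PostgreSQL 9.6+):
-- Application side, once per session: set myapp.username = 'alice';
create or replace function notify_api() returns trigger as $$
begin
    -- Skip the external API call for the hypothetical batch user.
    if current_setting('myapp.username', true) = 'batch_user' then
        return new;
    end if;
    -- ... send the update to the other API here ...
    return new;
end;
$$ language plpgsql;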

Related

Create trigger upon each table creation in SQL Server 2008 R2

I need to create an Audit table that is going to track the actions (insert, update, delete) on my tables in the database and add a new row with the date, row ID, table name and a few more details, so I will know what action happened and when.
So basically from my understanding I need a trigger for each table which is going to track insert/update/delete and a trigger on the database which is going to track new table creation.
My main problem is understanding how to connect those things, so that when a new table is created, a trigger is created for that table which tracks the actions and adds new rows to the Audit table as needed.
Is it possible to make a DDL trigger for create_table and inside of it another trigger for insert / update / delete ?
What you're hoping for is not possible. And I'd strongly advise that you think instead about what you really want to achieve at a business level with auditing. It will yield a much simpler and more practical solution.
First up
...trigger on the database which is going to track new table creation.
I cannot stress enough how terrible this idea is. Who exactly has such unfettered access to your database that they can create tables without going through code review and QA, which should of course be on the gated pathway towards production? Once you realise that schema changes should not happen ad hoc, it's patently obvious that you don't need triggers (which are by their very nature reactive) to do something because the schema changed.
Even if you could write such triggers: it's at a meta-programming level that simply isn't worth the effort of trying to foresee all possible permutations.
Better options include:
Requirements assessment and acceptance: This is new information in the system. What are the audit requirements?
Design review: New table; does it need auditing?
Test design: How do we test the audit requirements?
Code Review: You've added a new table. Does it need auditing?
Not to mention features provided by tools such as:
Source Control.
Db deployment utilities (whether home-grown or third party).
Part two
... a trigger will be created for that table which is going to track the actions and add new rows for the Audit table as needed.
I've already pointed out why doing the above automatically is a terrible idea. Now I'm going a step further to point out that doing the above at all is also a bad idea.
It's a popular approach, and I'm sure to get some flak from people who've nicely compartmentalised their particular flavour of it, swearing blind how much time it "saves" them. (There may even be claims of it being a "business requirement", which I can assure you is more likely a misstated version of the real requirement.)
There are fundamental problems with this approach:
It's reactive instead of proactive. So it usually lacks context.
You'll struggle to audit attempted changes that get rolled back. (Which can be a nightmare for debugging and usually violates real business audit requirements.)
Interpreting the audit trail will be a nightmare because it's just raw data. The information is lost in the detail.
As columns are added/renamed/deleted your audit data loses cohesion. (This is usually the least of problems though.)
These extra tables that always get updated as part of other updates can wreak havoc on performance.
Usually this style of auditing involves: every time a column is added to the "base" table, it's also added to the "audit" table. (This ultimately makes the "audit" table very much like a poorly architected persistent transaction log.)
Most people following this approach overlook the significance of NULLable columns in the "base" tables.
I can tell you from first-hand experience that interpreting such audit trails in any but the simplest of cases is not easy. The amount of time wasted is ridiculous: investigating issues, training others to interpret them correctly, writing utilities to try to make working with these audit trails less painful, painstakingly documenting findings (because the information is not immediately apparent in the raw data).
If you have any sense of self-preservation you'll heed my advice.
Make it great
(Sorry, couldn't resist.)
A better approach is to proactively plan for what needs auditing. Push for specific business requirements. Note that different cases may need different auditing techniques:
If user performs action X, record A details about the action for legal traceability.
If user attempts to do Y but is prevented by system rules, record B details to track rule system integrity.
If user fails to log in, record C details for security purposes.
If system is upgraded, record D details for troubleshooting.
If certain system events occur, record E details ...
The important thing is that once you know the real business requirements, you won't be saying: "Uh, let's just track everything. It might be useful." Instead you'll:
Be able to produce a cleaner, more appropriate, and more reliable design for each distinct kind of auditing.
Be able to test that it behaves as required!
Be able to use the audit data more easily whenever it's needed.

PostgreSQL equivalent of SqlDependency / notify on query result changes

I have a postgresql database and I want to send E-Mail notifications if the results of specific queries have changed.
For SQL Server there is a C# class called SqlDependency which allows me to do this in a very easy way. It's possible to say: "Hey, notify me if the result of SELECT * FROM a WHERE d changes".
But I couldn't find any solution for PostgreSQL. I've often seen NOTIFY, but as far as I understand it, it's not as powerful as this SQL Server mechanism, because I have to build a lot of triggers.
My additional problem is that the queries can potentially be very complex :/
So: Has postgresql any mechanism for this scenario?
I recently ran into the same problem.
I'm currently evaluating the LISTEN/NOTIFY functionality of PostgreSQL.
It seems like the right tool, but you have to implement your DB observer yourself.
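To illustrate, a minimal sketch of the trigger side, using the table a from the question and a channel name I made up:
create or replace function a_changed() returns trigger as $$
begin
    -- Payload left empty; a row-level trigger could send e.g. new.id::text instead.
    perform pg_notify('a_changed', '');
    return null;   -- AFTER ... FOR EACH STATEMENT: return value is ignored
end;
$$ language plpgsql;

create trigger a_changed_notify
after insert or update or delete on a
for each statement
execute procedure a_changed();
The observer process then issues LISTEN a_changed; and re-runs the query of interest whenever a notification arrives, comparing the new result with the previous one before sending the e-mail.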
How to fire NOTIFICATION event in front end when table data gets changed

Firebird lock table / lock record

Suppose you have one table for a Desktop application and several users.
When a user opens a record, I want to lock this record. I have tried the "WITH LOCK" statement. It works fine.
But when a second user wants to update the same record, I want to show a message: "Sorry, you cannot work on this order because it is locked. Somebody else has opened this record before you". Firebird waits for the first user to commit/rollback. I don't want to wait; I want to show an error message. Is there a simple way to ask Firebird for a record's lock status?
Is there a way to lock a full table? Or to use a semaphore/mutex (like GET_LOCK on MySQL)?
I have tried RESERVING on the SET TRANSACTION statement but it does not work.
My wish is to display a message to the user. Not waiting.
Thanks
If you don't want to wait, then configure your transaction to use NO WAIT, or a wait timeout. However, controlling business rules like this through database transactions is not advisable, as it requires long-running transactions, which inhibit garbage collection, increase the chain of interesting transactions, and increase the chance of update conflicts.
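For example (Firebird syntax from memory, so treat it as a sketch; LOCK TIMEOUT needs Firebird 2.0 or later):
-- Fail immediately with a lock conflict error instead of waiting:
SET TRANSACTION NO WAIT;
-- Or wait at most 10 seconds before giving up:
SET TRANSACTION WAIT LOCK TIMEOUT 10;
Your application then catches the lock conflict error and shows its "somebody else has this record open" message.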
I'd advise using different options, like:
First to update wins
Change detection (e.g. by a timestamp or a record version counter which is also used as a condition in the update statement), and allowing the user to overwrite or abandon his update (or maybe merge); see the sketch after this list.
Explicit reservation by updating the record (setting the username) in a separate transaction. This might require cleanup or the ability for a user to break the reservation (eg if someone had it open for too long).
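A sketch of the change-detection option, using a hypothetical ORDERS table with a ROW_VERSION counter column:
-- Read the row and remember the version you showed the user:
SELECT ID, STATUS, ROW_VERSION FROM ORDERS WHERE ID = 42;
-- Later, update only if nobody changed it in the meantime:
UPDATE ORDERS
SET STATUS = 'SHIPPED', ROW_VERSION = ROW_VERSION + 1
WHERE ID = 42 AND ROW_VERSION = 7;
-- 0 rows affected means someone else changed (or reserved) the record:
-- show your message and let the user reload, overwrite, or abandon.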
Note that Firebird uses multi version concurrency control (MVCC), so explicit locking is not really natural. See also this answer to Locking tables firebird, delphi.
Locking tables using RESERVING should be possible, but I have never used it, so I am not entirely sure how to use it although you probably also need to specify FOR PROTECTED READ (see Interbase 6.0 Embedded SQL Guide, pages 70/71).

How to handle errors in a trigger?

I'm writing some SQL code that needs to be executed when rows are inserted in a database table, so I'm using an AFTER INSERT trigger; the code is quite complex, thus there could still be some bugs around.
I've discovered that, if an error happens when executing a trigger, SQL Server aborts the batch and/or the whole transaction. This is not acceptable for me, because it causes problems to the main application that uses the database; I also don't have the source code for that application, so I can't perform proper debugging on it. I absolutely need all database actions to succeed, even if my trigger fails.
How can I code my trigger so that, should an error happen, SQL Server will not abort the INSERT action?
Additionally, how can I perform proper error handling so that I can actually know the trigger has failed? Sending an email with the error data would be ok for me (the trigger's main purpose is actually sending emails), but how do I detect an error condition in a trigger and react to it?
Edit:
Thanks for the tips about optimizing performance by using something else than a trigger, but this code is not "complex" in the sense that it's long-running or performance intensive; it simply builds and sends a mail message, but in order to do so, it must retrieve data from various linked tables, and since I am reverse-engineering this application, I don't have the database schema available and am still trying to find my way around it; this is why conversion errors or unexpected/null values can still creep up, crashing the trigger execution.
Also, as stated above, I absolutely can't perform debugging on the application itself, nor modify it to do what I need in the application layer; the only way to react to an application event is by firing a database trigger when the application writes to the DB that something has just happened.
If the operations in the trigger are complex and/or potentially long running, and you don't want the activity to affect the original transaction, then you need to find a way to decouple the activity.
One way might be to use Service Broker. In the trigger, just create message(s) (one per row) and send them on their way, then do the rest of the processing in the service.
If that seems too complex, the older way to do it is to insert the rows needing processing into a work/queue table, and then have a job continuously pulling rows from there and doing the work.
Either way, you're now not preventing the original transaction from committing.
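A rough sketch of the work/queue table variant (SQL Server syntax, with made-up table and column names):
-- Queue filled by the trigger; a scheduled job drains it and sends the mails.
CREATE TABLE dbo.MailQueue (
    QueueId    int IDENTITY(1,1) PRIMARY KEY,
    OrderId    int NOT NULL,
    EnqueuedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

CREATE TRIGGER dbo.trg_Orders_QueueMail
ON dbo.Orders   -- hypothetical base table
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Keep the trigger trivial (and hard to break): just record what needs a mail.
    INSERT INTO dbo.MailQueue (OrderId)
    SELECT i.OrderId FROM inserted AS i;
END;
GO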
Triggers are part of the transaction. You could do a try/catch/swallow around the trigger code, or the somewhat more professional try/catch/log/swallow, but really you should let it go bang and then fix the real problem, which can only be in your trigger.
If none of the above are acceptable, then you can't use a trigger.
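For completeness, here is what the try/catch/log/swallow variant might look like (object names are hypothetical; errors severe enough to doom the transaction, i.e. XACT_STATE() = -1, will still abort the INSERT, so this is damage limitation rather than a guarantee):
CREATE TRIGGER dbo.trg_Orders_SendMail
ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        EXEC dbo.SendOrderMail;   -- hypothetical: the complex, failure-prone part
    END TRY
    BEGIN CATCH
        -- Log and swallow so the application's INSERT is not rolled back.
        INSERT INTO dbo.TriggerErrorLog (ErrorNumber, ErrorMessage, LoggedAt)
        VALUES (ERROR_NUMBER(), ERROR_MESSAGE(), SYSUTCDATETIME());
    END CATCH;
END;
GO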

Database safety: Intermediary "to_be_deleted" column/table?

Everyone has accidentally forgotten the WHERE clause on a DELETE query and blasted some un-backed up data once or twice. I was pondering that problem, and I was wondering if the solution I came up with is practical.
What if, in place of actual DELETE queries, the application and maintenance scripts did something like:
UPDATE foo SET to_be_deleted=1 WHERE blah = 50;
And then a cron job was set to go through and actually delete everything with the flag? The downside would be that pretty much every other query would need to have WHERE to_be_deleted != 1 appended to it, but the upside would be that you'd never mistakenly lose data again. You could see "2,349,325 rows affected" and say, "Hmm, looks like I forgot the WHERE clause," and reset the flags. You could even make the to_be_deleted field a DATE column, so the cron job would check to see if a row's time had come yet.
Also, you could remove DELETE permission from the production database user, so even if someone managed to inject some SQL into your site, they wouldn't be able to remove anything.
So, my question is: Is this a good idea, or are there pitfalls I'm not seeing?
That is fine if you want to do that, but it seems like a lot of work. How many people are manually changing the database? It should be very few, especially if your users have an app to work with.
When I work on the production db I put EVERYTHING I do in a transaction, so if I mess up I can roll back. Just having a standard practice like that has helped me.
I don't see anything really wrong with that, though, other than that every single point of data manipulation in each application will have to be aware of this functionality and not just the data it wants.
This would be fine as long as your application does not require that the data be deleted immediately, since you have to wait for the next run of the cron job.
I think a better solution and the more common practice is to use a development server and a production server. If your development database gets blown out, simply reload it. No harm done. If you're testing code on your production database, you deserve anything bad that happens.
A lot of people have a delete flag or a row status flag. But if someone is making a change through the back end (and they will, since people often need batch changes done that can't be accomplished through the front end) and they make a mistake, they will still often go for DELETE. Ultimately this is no substitute for testing the script before applying it to a production environment.
Also... what happens if "UPDATE foo SET to_be_deleted=1" gets executed because someone left off the WHERE clause? Unless you have auditing columns with a timestamp, how do you know which rows were flagged deliberately and which ones were flagged in error? And even if you have auditing columns with a timestamp, if the auditing is done via a stored procedure or programmer convention, then these back-end queries may not supply information letting you know that they were just applied.
Too complicated. The standard approach to this is to do all your work inside a transaction, so if you screw up and forget a WHERE clause, then you simply roll back when you see the "2,349,325 rows affected" result.
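For example, with the question's statement (PostgreSQL/MySQL-style syntax):
BEGIN;
DELETE FROM foo WHERE blah = 50;
-- "2,349,325 rows affected"? Something is wrong:
ROLLBACK;
-- Row count looks right? Then instead:
COMMIT;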
It may be easier to create a parallel table for deleted rows. A DELETE trigger (and UPDATE too if you want to undo changes as well) on the original table could copy the affected rows to the parallel table. Adding a datetime column to the parallel table to record the date & time of the change would let you permanently remove rows past a certain age using your cron job.
That way, you'd use normal DELETE statements on the original table, so there's no chance you'll forget to run your special "DELETE" statement. You also sidestep the to_be_deleted != 1 expression, which is just a bug waiting to happen when someone inevitably forgets.
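A sketch of that, assuming PostgreSQL and the question's table foo (the same idea works elsewhere with that database's trigger syntax):
-- Parallel table: same columns as foo plus a deletion timestamp.
create table foo_deleted (like foo including defaults);
alter table foo_deleted add column deleted_at timestamptz not null default now();

create or replace function archive_foo_delete() returns trigger as $$
begin
    insert into foo_deleted select old.*, now();
    return old;   -- let the DELETE proceed
end;
$$ language plpgsql;

create trigger foo_archive_deletes
before delete on foo
for each row
execute procedure archive_foo_delete();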
It looks like you're describing three cases here.
Case 1 - maintenance scripts. Risk can be minimized by developing them and testing them in an environment other than your production box. For quick maintenance, do the maintenance in a single transaction, and check everything before committing. If you made a mistake, issue the rollback command. For more serious maintenance that you can't necessarily wait around for, or do in a single transaction, consider taking a backup directly before running the maintenance job, so that you can always restore back to the point before you ran your script if you encounter serious problems.
Case 2 - SQL Injection. This is an architecture issue. Your application shouldn't pass raw SQL into the database; access should be controlled through packages / stored procedures / functions, and values that come from the UI and are used in a DML statement should be applied using bind variables, rather than by creating dynamic SQL by appending strings together.
Case 3 - Regular batch jobs. These should have been tested before being deployed to production. If you delete too much, you have a bug, and are going to have to rely on your backup strategy.
Everyone has accidentally forgotten the WHERE clause on a DELETE query and blasted some un-backed up data once or twice.
No. I always prototype my DELETEs as SELECTs, and only if the latter gives the results I want to delete do I change the part of the statement before the WHERE into a DELETE. This lets me inspect, in any needed detail, the rows I want to affect before doing anything.
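For example, with the question's statement:
-- Prototype:
SELECT * FROM foo WHERE blah = 50;
-- Inspect the result, then change only the part before the WHERE:
DELETE FROM foo WHERE blah = 50;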
You could set up a view on that table that selects WHERE to_be_deleted != 1, and all of your normal selects are done on that view - that avoids having to put the WHERE on all of your queries.
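Something like the following sketch (using COALESCE so that a NULL flag still counts as "not deleted", which a bare != 1 comparison would silently hide):
CREATE VIEW foo_live AS
SELECT *
FROM foo
WHERE COALESCE(to_be_deleted, 0) <> 1;
-- Point the application's reads at foo_live instead of foo.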
The pitfall is that it's unnecessarily complicated and someone will inadvertently forget to check the flag in their query. There's also the issue of potentially needing to delete something immediately instead of waiting for the scheduled job to run.
To avoid the to_be_deleted WHERE clause, you could create a trigger that fires before the delete command to insert the deleted rows into a separate table. This table could be cleared out when you're sure everything in it really needs to be deleted, or you could keep it around for archive purposes.
You also get a "soft delete" feature, so you can give (certain) end-users the power of "undo" - there would have to be a pretty strong downside in the mix to cancel out the benefits of soft deleting.
The "WHERE to_be_deleted <> 1" on every other query is a huge one. Another is once you've ran your accidentally rogue query, how will you determine which of the 2,349,325 were previously marked as deleted?
I think the practical solution is regular backups, and failing that, perhaps a delete trigger that captures the tuples to be axed.
The other option would be to create a delete trigger on each table. When anything is deleted, it would insert that "to be deleted" record into another table, ideally named TABLENAME_deleted.
The downside would be that the db would have twice as many tables.
I don't recommend triggers in general, but it might be what you are looking for.
This is why, whenever you are editing data by hand, you should BEGIN TRAN, edit your data, check that it looks good (for instance that you didn't delete more data than you were expecting) and then END TRAN. If you're using Postgres then you want to create lots of savepoints as well so that a typo doesn't wipe out your intermediate work.
But that said, in many applications it does make sense to have software mark records as invalid rather than deleting them. Add a last_modified date that is automatically updated, and you are all prepared to set up incremental updates into a data warehouse. Even if you don't have a data warehouse now, it never hurts to prepare for the future when preparing is cheap. Plus in the event of manual mistakes you still have the data, and can just find all of the records that got "deleted" when you made your mistake and fix them. (You should still use transactions though.)
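A PostgreSQL-flavoured sketch of that habit, using the question's table:
BEGIN;
SAVEPOINT before_cleanup;
DELETE FROM foo WHERE blah = 50;
-- Row count looks wrong? Undo just this step and keep the transaction open:
ROLLBACK TO SAVEPOINT before_cleanup;
-- Everything checks out? Make it permanent:
COMMIT;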