Is there any way in Oracle Database to define a trigger that fires synchronously just before COMMIT (and rolls the transaction back if it throws an exception) when a specified table has been changed?
There is no ON COMMIT trigger mechanism in Oracle, but there are workarounds:
You could use a materialized view with ON COMMIT REFRESH and add triggers to this MV. This lets you run logic at commit time whenever a base table has been modified. If the trigger raises an error, the transaction will be rolled back (you will lose all uncommitted changes).
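As an illustration, here is a minimal sketch of workaround (1) with hypothetical names (an ORDERS table with primary key ORDER_ID and a STATUS column); the usual fast-refresh restrictions apply to the MV definition:

    -- Sketch only: all object names are hypothetical.
    CREATE MATERIALIZED VIEW LOG ON orders WITH PRIMARY KEY, ROWID;

    CREATE MATERIALIZED VIEW orders_mv
      REFRESH FAST ON COMMIT
      AS SELECT order_id, status FROM orders;

    -- This trigger fires while ORDERS_MV is refreshed, i.e. during COMMIT.
    CREATE OR REPLACE TRIGGER orders_mv_trg
      AFTER INSERT OR UPDATE ON orders_mv
      FOR EACH ROW
    BEGIN
      IF :NEW.status = 'INVALID' THEN
        -- Raising here makes the COMMIT fail and the transaction roll back.
        RAISE_APPLICATION_ERROR(-20001, 'commit rejected: invalid order status');
      END IF;
    END;
    /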
You can use DBMS_JOB to defer an action until after the commit. This is an asynchronous action, which may be desirable in some cases (for example when you want to send an email after the transaction has completed successfully). If you roll back the primary transaction, the job submission is rolled back with it and the job never runs. The job and the primary session are otherwise independent: if the job fails, the main transaction will not be rolled back.
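A sketch of workaround (2); notify_pkg.send_email is a hypothetical procedure:

    DECLARE
      l_job BINARY_INTEGER;
    BEGIN
      -- The job submission is transactional: it only becomes visible to the
      -- job queue when the surrounding transaction commits, and a ROLLBACK
      -- discards it.
      DBMS_JOB.SUBMIT(
        job  => l_job,
        what => 'notify_pkg.send_email(p_order_id => 42);'
      );
    END;
    /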
In your case, you could probably use option (1). I personally don't like to put business logic in triggers since it adds a lot of complexity, but technically I think it would be doable.
I had a similar problem, but option (1) was unfortunately not convenient for my case.
Another possible solution, which is also suggested by "Ask Tom", is to write a stored procedure and simply call that procedure before executing the COMMIT.
This solution is only convenient if you have access to the code that executes the COMMIT, but in my case it was the easiest solution.
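For what it's worth, here is a minimal sketch of that pattern; validate_order_totals stands in for whatever hypothetical procedure raises an error when the pending data is inconsistent:

    BEGIN
      UPDATE orders SET status = 'SHIPPED' WHERE order_id = 42;

      validate_order_totals;  -- raises on failure, so COMMIT is never reached
      COMMIT;
    EXCEPTION
      WHEN OTHERS THEN
        ROLLBACK;
        RAISE;
    END;
    /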
Related
I want to implement an audit log using triggers that fire on inserted, changed and deleted data to store some values. Those triggers need access to the user IDs of whoever made the changes, which are managed by the web application. I have some ideas on providing this data, but I don't seem to fully understand what the execution context of a trigger is. I've read through the PostgreSQL docs (Overview of Trigger Behavior) and others, but my question doesn't seem to be answered.
What I want to know is how a client session with one running transaction interacts with trigger execution, the lifetime of both, and how they depend on each other. From my understanding, triggers are executed within the database independently from the client session which created the event which led to trigger execution. Is that correct? That would mean triggers and their processing wouldn't impact performance of the client request, and the client can close the session at any time. If both are independent, how would a trigger get notified about a client rolling back a transaction, which would logically mean that no data got changed at all? Or are triggers only executed after committing a transaction, because they run independently?
Or are triggers executed asynchronously within the client session which created the events that led to trigger execution? This would mean that if the client closes its session for any reason, the trigger would abort too. Its changes would be directly bound to the client's transaction and could be rolled back, too.
I need to understand the behavior to know what I would like to do in another question.
Thanks for your input!
From my understanding, triggers are executed within the database
independently from the client session which created the event which
led to trigger execution. Is that correct? That would mean triggers
and their processing wouldn't impact performance of the client request,
and the client can close the session at any time
No, they depend entirely on the client session: they run as part of the transaction, which is itself tied to the session.
See this excerpt from CREATE TRIGGER (9.1):
They can be fired either at the end of the statement causing the
triggering event, or at the end of the containing transaction; in the
latter case they are said to be deferred
From your other question it appears you're using 8.4, which doesn't have deferred triggers, so it's even simpler. Triggers always run at the end of the statement (the triggering event), which means before the server sends the acknowledgment of execution to the client.
A COMMIT immediately following would be a new instruction, and could not be executed before the trigger has finished.
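To make that concrete, here is a small sketch with hypothetical accounts and audit_log tables; the ROLLBACK at the end undoes the trigger's insert along with the client's own UPDATE:

    CREATE OR REPLACE FUNCTION audit_accounts() RETURNS trigger AS $$
    BEGIN
      INSERT INTO audit_log (account_id, old_balance, new_balance, changed_at)
      VALUES (OLD.id, OLD.balance, NEW.balance, now());
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER accounts_audit
      AFTER UPDATE ON accounts
      FOR EACH ROW EXECUTE PROCEDURE audit_accounts();

    BEGIN;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- trigger fires here
    ROLLBACK;  -- undoes both the UPDATE and the audit_log row the trigger wrote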
According to several resources, such as this,
A query that is executed within the context of a trigger is automatically wrapped in a transaction. If there are any distributed queries in the trigger code, the transaction is promoted to a distributed transaction automatically.
Simple question - is there a way to prevent this behavior? I'm looking for a way to explicitly prevent code in my trigger from running in the context of a transaction.
If you are trying to do something asynchronous so that the calling transaction doesn't have to wait, you may consider Service Broker, which is designed to do exactly that - go fire off some asynchronous task, and return control to the caller, regardless of transaction scope.
Another idea is to not have your trigger perform the work, but instead pop a work item onto a queue table, and have a background process running continuously to process the queue. This isn't necessarily easy to do if your work item operates on the set of data in inserted/deleted, but without more context it certainly seems like a viable option.
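A rough sketch of the queue-table idea (table and column names are made up):

    CREATE TABLE dbo.WorkQueue
    (
        QueueId    INT IDENTITY(1,1) PRIMARY KEY,
        OrderId    INT NOT NULL,
        EnqueuedAt DATETIME NOT NULL DEFAULT GETDATE(),
        Processed  BIT NOT NULL DEFAULT 0
    );
    GO

    CREATE TRIGGER trg_Orders_Enqueue ON dbo.Orders
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        -- The trigger only records what changed; it stays cheap and set-based.
        -- A background job polls WorkQueue and does the expensive work later.
        INSERT INTO dbo.WorkQueue (OrderId)
        SELECT OrderId FROM inserted;
    END;
    GO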
I don't know of a way to prevent a trigger from being a part of the calling transaction - in fact that's kind of the whole point.
This is called an "autonomous transaction", and the simplest way to implement one is by creating a linked server that points back to the original database.
See this MSDN blog for a possible solution.
I am connecting to SQL Server with autocommit disabled. If everything is successful, I call COMMIT. Otherwise, I just exit. Do I need to explicitly call ROLLBACK, or will the transaction be rolled back automatically when the connection is closed without committing?
In case it matters, I'm executing the SQL commands from within PROC SQL in SAS.
UPDATE: It looks like SAS may call COMMIT automatically at the end of the PROC SQL block if ROLLBACK is not called. So in this case, ROLLBACK would be more than good practice; it would be necessary.
Final Update: We ended up switching to a new system, which seems to behave the opposite way from our previous one: if the transaction ends without an explicit COMMIT or ROLLBACK, it rolls back. So the advice given below is definitely correct: always explicitly COMMIT or ROLLBACK.
It should roll back on close of connection. Emphasis on should for a reason :-)
Proper transaction and error handling should have you always commit when the conditions for commit are met and roll back when they aren't. I think it is a great habit to always commit or roll back when done and not rely on disconnect etc. All it takes is one mistake or one incorrectly closed (or unclosed) session to create a blocking-chain nightmare for all :-)
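For example, in SQL Server the habit looks like this (THROW needs 2012 or later; table names are made up):

    BEGIN TRY
        BEGIN TRANSACTION;

        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
        UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 2;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;   -- never leave the transaction hanging
        THROW;                      -- re-raise so the caller sees the failure
    END CATCH;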
I have a table with a trigger assigned to it, and this trigger changes the same table's data. Naturally, this fires the trigger again.
Every trigger instance knows (there are some rules) whether it should be the last one in the chain. And if it should, it has to turn the next trigger off.
I see the following problem: if I keep some state (say, a stop flag), it could work in an unexpected way. For instance, a user changes the table and a new trigger chain is initiated. The trigger decides to be the terminator and sets the stop flag. At this moment another user changes the table, so a new trigger chain is initiated that should be executed; but since the stop flag is set, it clears the flag and quits. Now the recursive trigger (the one we meant to ignore) starts, checks whether the flag is cleared... oops, it is executed!
I don't know what the order is in such cases (will the recursive trigger be executed immediately after the data change, or does the parent one complete first?), so I have no idea how to organize this process.
Regards,
Consider ditching the complicated triggers and simplifying everything into either stored procedures or, if possible, standard set-based SQL operations.
Stored procedures are easier to understand and maintain than many layers of triggers on a given table. Triggers do have value in some scenarios, but when you have triggers that invoke a chain of triggers, or triggers that depend on data being revised by other triggers, all on the same table, then you really begin to give yourself a maintenance nightmare. Simplify as a starting point by either improving your SQL UPDATE / INSERT statements or by refactoring your triggers into a stored procedure of some sort, as in the sketch below.
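For example, a minimal sketch (all object names hypothetical) of folding the recursive trigger's work into the original statement inside a procedure:

    CREATE PROCEDURE dbo.UpdateOrderStatus
        @OrderId   INT,
        @NewStatus VARCHAR(20)
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Both changes happen in one statement, so no trigger chain
        -- and no stop flag are needed.
        UPDATE dbo.Orders
        SET Status      = @NewStatus,
            LastChanged = GETDATE()   -- what the recursive trigger used to do
        WHERE OrderId = @OrderId;
    END;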
I was thinking of using the AUTONOMOUS_TRANSACTION pragma for some logging in a batch process. Does anyone have any experience with this? If so, any pros and cons would be appreciated.
IMO autonomous transactions are particularly well suited to logging: they run independently of the main transaction, meaning you can write to a table and commit or roll back those changes without affecting the main transaction.
They also add little overhead: if you run big statements and add an autonomous transaction between each of them, the performance cost will be negligible.
There is also a side effect that you may find interesting: since autonomous transactions commit independently of the calling transaction, you can follow the progress of your main process while it is running. You don't have to wait for the main transaction to finish: you can query the logging table as it is being filled by the autonomous transactions.
Obviously, any logging done in an autonomous transaction will remain in the database even if the main transaction rolls back. For logging this is probably what you want, but it is important to remember that a log record saying "inserted row X into table Y" doesn't mean that the insert was actually committed.
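For reference, a minimal sketch of such a logging procedure, assuming a hypothetical PROCESS_LOG table:

    CREATE OR REPLACE PROCEDURE log_message (p_message IN VARCHAR2) AS
      PRAGMA AUTONOMOUS_TRANSACTION;
    BEGIN
      INSERT INTO process_log (log_time, message)
      VALUES (SYSTIMESTAMP, p_message);
      COMMIT;  -- commits only the log row; the caller's transaction is untouched
    END log_message;
    /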