I have a dozen tables for which I want to keep a history of changes. For each one I created a second table with the suffix _HISTO and added the fields modtime, action, user.
At the moment, before I insert, modify or delete a record in these tables, I call (from my Delphi app) an Oracle procedure that copies the current values to the _HISTO table and then performs the operation.
My procedure generates dynamic SQL via DBA_TAB_COLUMNS and then executes the generated statement: insert into tablename_histo ( fields ) select fields, sysdate, 'action', userid from table_name
I was told that I cannot call this procedure from a trigger because it has to select from the table the trigger is defined on. Is this true? Is it possible to implement what I need?
Assuming you want to maintain history using triggers (rather than any of the other methods of tracking history data in Oracle, such as Workspace Manager, Total Recall, Streams, Fine-Grained Auditing, etc.), you can use dynamic SQL in the trigger. But the dynamic SQL is subject to the same rules as static SQL, and even static SQL in a row-level trigger cannot, in general, query the table that the trigger is defined on without generating a mutating table exception.
Rather than calling dynamic SQL from your trigger, however, you can potentially write some dynamic SQL that generates the trigger in the first place using the same data dictionary tables. The triggers themselves would statically refer to :new.column_name and :old.column_name. Of course, you would have to either edit the trigger or re-run the procedure that dynamically creates the trigger when a new column gets added. Since you, presumably, need to add the column to both the main table and the history table, however, this generally isn't too big of a deal.
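For illustration, a minimal sketch of such a generator procedure, assuming the history table has the same columns as the base table plus MODTIME, ACTION and MODUSER appended at the end; the procedure name and extra column names are made up, and it only covers UPDATE and DELETE (there are no :OLD values to copy on INSERT):
CREATE OR REPLACE PROCEDURE generate_histo_trigger (p_table IN VARCHAR2)
IS
  l_cols VARCHAR2(4000);
  l_vals VARCHAR2(4000);
BEGIN
  -- build the static column list and the matching :OLD value list
  FOR c IN (SELECT column_name
              FROM user_tab_columns
             WHERE table_name = UPPER(p_table)
             ORDER BY column_id)
  LOOP
    l_cols := l_cols || c.column_name || ', ';
    l_vals := l_vals || ':OLD.' || c.column_name || ', ';
  END LOOP;

  -- the generated trigger refers to :OLD statically; no dynamic SQL at run time
  EXECUTE IMMEDIATE
       'CREATE OR REPLACE TRIGGER ' || p_table || '_histo_trg '
    || 'BEFORE UPDATE OR DELETE ON ' || p_table || ' FOR EACH ROW '
    || 'BEGIN '
    || '  INSERT INTO ' || p_table || '_HISTO (' || l_cols || 'MODTIME, ACTION, MODUSER) '
    || '  VALUES (' || l_vals || 'SYSDATE, '
    || '          CASE WHEN UPDATING THEN ''UPDATE'' ELSE ''DELETE'' END, USER); '
    || 'END;';
END generate_histo_trigger;
/
Re-running the procedure after a column has been added to both the main table and the history table regenerates the trigger with the new column included.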
Oracle does not allow a row-level trigger to execute a SELECT against the table on which the trigger is defined. If you try it you'll get the dreaded "mutating table" error (ORA-04091), and while there are ways to get around that error they add a lot of complexity for little value. If you really want to build a dynamic query every time your table is updated (IMO this is a bad idea from the standpoint of performance - I find that metadata queries are often slow, but YMMV) it should end up looking something like:
strAction := CASE
WHEN INSERTING THEN 'INSERT'
WHEN UPDATING THEN 'UPDATE'
WHEN DELETING THEN 'DELETE'
END;
INSERT INTO TABLENAME_HISTO
(ACTIVITY_DATE, ACTION, MTC_USER,
old_field1, new_field1, old_field2, new_field2)
VALUES
(SYSDATE, strAction, USERID,
:OLD.field1, :NEW.field1, :OLD.field2, :NEW.field2);
Share and enjoy.
Related
I've read the Oracle docs on creating triggers and am doing things exactly as they show, however this just isn't working. My goal is to update the TPM_PROJECT table with the minimum STARTDATE appearing in the TPM_TRAININGPLAN table. Thus, every time someone updates the STARTDATE column in TPM_TRAININGPLAN, I want to update the TPM_PROJECT table. Here's what I'm trying:
CREATE TRIGGER Trigger_UpdateTrainingDelivery
AFTER DELETE OR INSERT OR UPDATE OF STARTDATE
ON TPM_TRAININGPLAN
FOR EACH ROW WHEN (new.TRAININGPLANTYPE='prescribed')
BEGIN
UPDATE TPM_PROJECT SET TRAININGDELIVERYSTART = (SELECT MIN(TP.STARTDATE) FROM TPM_TRAININGPLAN TP WHERE TP.PROJECTID = new.PROJECTID AND TP.TRAININGPLANTYPE='prescribed')
WHERE PROJECTID = new.PROJECTID
END;
The trigger is created with no errors, but I do get a warning:
Warnings: --->
W (1): Warning: execution completed with warning
<---
Of course Oracle isn't nice enough to actually tell me what the warning is; I'm simply shown that there is one.
Next, if I update the training plan table with:
UPDATE TPM_TRAININGPLAN
set STARTDATE = to_date('03/12/2009','mm/dd/yyyy')
where TRAININGPLANID=15916;
I get the error message:
>[Error] Script lines: 20-22 ------------------------
ORA-04098: trigger 'TPMDBO.TRIGGER_UPDATETRAININGDELIVERY' is invalid and failed re-validation
Script line 20, statement line 1, column 7
Any ideas what I'm doing wrong? Thanks!
A few issues in no particular order.
First, in the body of a row-level trigger, you need to use :new and :old to reference the new and old records. The leading colon is necessary. So your WHERE clause would need to be
WHERE PROJECTID = :new.PROJECTID
Second, if you are running your CREATE TRIGGER in SQL*Plus, you can get a list of the errors and warnings using the SHOW ERRORS command, i.e.
SQL> show errors
You could also query the DBA_ERRORS view (or ALL_ERRORS or USER_ERRORS, depending on your privilege level), but that's not something you normally need to resort to.
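If you're not in SQL*Plus, a query against the data dictionary gives you the same information; for example (using the trigger name from the error message above):
SELECT line, position, text
  FROM user_errors
 WHERE type = 'TRIGGER'
   AND name = 'TRIGGER_UPDATETRAININGDELIVERY'
 ORDER BY sequence;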
Third, assuming the syntax errors get corrected, you're going to get a mutating table error if you use this logic. A row level trigger on table A (TPM_TRAININGPLAN in this case) cannot query table A because the table may be in an inconsistent state. You can work around that, as Tim shows in his article, by creating a package with a collection, initializing that collection in a before statement trigger, populating the data in the collection in a row-level trigger, and then processing the modified rows in an after statement trigger. That's a decent amount of complexity to add to the system, however, since you'll have to manage multiple different objects.
Generally, you'd be better off implementing this logic as part of whatever API you use to manipulate the TPM_TRAININGPLAN table. If that is a stored procedure, it makes much more sense to put the logic to update TPM_PROJECT in that stored procedure rather than putting it in a trigger. It is notoriously painful to try to debug an application that has a lot of logic embedded in triggers because that makes it very difficult for developers to follow exactly what operations are being performed. Alternately, you could remove the TRAININGDELIVERYSTART column from TPM_PROJECT table and just compute the minimum start date at runtime.
Fourth, if your trigger fires on inserts, updates, and deletes, you can't simply reference :new values. :new is valid for inserts and updates but it is going to be NULL if you're doing a delete. :old is valid for deletes and updates but is going to be NULL if you're doing an insert. That means that you probably need to have logic along the lines of (referencing Tim's package solution)
BEGIN
IF inserting
THEN
trigger_api.tab1_row_change(p_id => :new.projectid, p_action => 'INSERT');
ELSIF updating
THEN
trigger_api.tab1_row_change(p_id => :new.projectid, p_action => 'UPDATE');
ELSIF deleting
THEN
trigger_api.tab1_row_change(p_id => :old.projectid, p_action => 'DELETE');
END IF;
END;
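The trigger_api package itself isn't shown here. Purely as an illustration of the pattern described above (the names, types, and logic below are my assumptions for the TPM_TRAININGPLAN case, not the actual code from Tim's article), it might look something like this:
CREATE OR REPLACE PACKAGE trigger_api AS
  PROCEDURE tab1_init;
  PROCEDURE tab1_row_change (p_id IN NUMBER, p_action IN VARCHAR2);
  PROCEDURE tab1_process;
END trigger_api;
/

CREATE OR REPLACE PACKAGE BODY trigger_api AS
  TYPE t_id_tab IS TABLE OF NUMBER;
  g_ids t_id_tab := t_id_tab();   -- project ids touched by the current statement

  PROCEDURE tab1_init IS
  BEGIN
    g_ids := t_id_tab();          -- reset in the before-statement trigger
  END tab1_init;

  PROCEDURE tab1_row_change (p_id IN NUMBER, p_action IN VARCHAR2) IS
  BEGIN
    -- p_action isn't needed for this particular calculation
    g_ids.EXTEND;                 -- remember the affected project id
    g_ids(g_ids.COUNT) := p_id;
  END tab1_row_change;

  PROCEDURE tab1_process IS
  BEGIN
    -- safe to query TPM_TRAININGPLAN now: the triggering statement is complete
    FOR i IN 1 .. g_ids.COUNT LOOP
      UPDATE tpm_project p
         SET p.trainingdeliverystart =
               (SELECT MIN(tp.startdate)
                  FROM tpm_trainingplan tp
                 WHERE tp.projectid = g_ids(i)
                   AND tp.trainingplantype = 'prescribed')
       WHERE p.projectid = g_ids(i);
    END LOOP;
  END tab1_process;
END trigger_api;
/

-- statement-level triggers that bracket the row-level trigger shown above
CREATE OR REPLACE TRIGGER tpm_trainingplan_bs
  BEFORE INSERT OR UPDATE OR DELETE ON tpm_trainingplan
BEGIN
  trigger_api.tab1_init;
END;
/

CREATE OR REPLACE TRIGGER tpm_trainingplan_as
  AFTER INSERT OR UPDATE OR DELETE ON tpm_trainingplan
BEGIN
  trigger_api.tab1_process;
END;
/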
As Justin Cave has suggested, you can calculate the minimum start date when you need it. It might also help to create an index on (projectid, startdate).
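A minimal sketch of that approach, using the column names from the question (the index name is made up):
CREATE INDEX tpm_trainingplan_idx ON tpm_trainingplan (projectid, startdate);

-- computed at query time instead of being stored on TPM_PROJECT
SELECT MIN(tp.startdate) AS trainingdeliverystart
  FROM tpm_trainingplan tp
 WHERE tp.projectid = :projectid
   AND tp.trainingplantype = 'prescribed';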
If you really have a lot of projects and training plans, another solution could be to create a MATERIALIZED VIEW that has all the data that you need:
CREATE MATERIALIZED VIEW my_view
... add refresh options here ...
AS
SELECT t.projectid, MIN(t.startdate) AS min_start_date
FROM TPM_TRAININGPLAN t
GROUP BY t.projectid;
(sorry, I don't have Oracle running at the moment, the above code is just for reference)
How do I debug a complex query with multiple nested sub-queries in SQL Server 2005?
I'm debugging a stored procedure and trigger in Visual Studio 2005. I'd like to be able to see what the results of these sub-queries are, as I feel that this is where the bug is coming from. An example query (slightly redacted) is below:
UPDATE
foo
SET
DateUpdated = ( SELECT TOP 1 inserted.DateUpdated FROM inserted )
...
FROM
tblEP ep
JOIN tblED ed ON ep.EnrollmentID = ed.EnrollmentID
WHERE
ProgramPhaseID = ( SELECT ...)
Visual Studio doesn't seem to offer a way for me to Watch the result of the sub query. Also, if I use a temporary table to store the results (temporary tables are used elsewhere also) I can't view the values stored in that table.
Is there any way that I can add a watch or in some other way view these sub-queries? I would love it if there were some way to "Step Into" the query itself, but I imagine that wouldn't be possible.
Ok, first, I would be leery of using subqueries in a trigger. Triggers should be as fast as possible, so get rid of any correlated subqueries, which might run row by row instead of in a set-based fashion, and rewrite them as joins. If you only want to update records based on what was in the inserted table, then join to it; also join to the table you are updating. Exactly what are you trying to accomplish with this trigger? It might be easier to give advice if we understood the business rule you are trying to implement.
To debug a trigger this is what I do.
I write a script to do the following (a T-SQL sketch of the whole approach appears after the list):
- Do the actual insert to the table without the trigger on it.
- Create a temp table named #inserted (and/or one named #deleted).
- Populate the table as I would expect the inserted table in the trigger to be populated from the insert you do.
- Add the trigger code (minus the CREATE or ALTER TRIGGER parts), substituting #inserted every time I reference inserted. (If you plan to run it multiple times until you are ready to use it in a trigger, throw it in an explicit transaction and roll back after checking your results.)
- Add a query to check the table(s) you are changing with the trigger for the values you wanted to change.
- Now, if you need to add debug statements to see what is happening between steps, you can do so.
- Run, making changes until you get the results you want.
- Once you have the query working as you expect it to, it is easy to take the # signs off inserted and use it to create the body of the trigger.
This is what I usually do in this type of scenario:
Print out the exact SQL generated by each subquery.
Then run each of them in Management Studio as suggested above.
Check whether each part is giving you the data you expect.
I much prefer using this 'embedded' style of insert in a PL/SQL block (as opposed to the EXECUTE IMMEDIATE style of dynamic SQL, where you have to escape quotes etc.).
-- a contrived example
PROCEDURE CreateReport( customer IN VARCHAR2, reportdate IN DATE )
IS
BEGIN
-- drop table, create table with explicit column list
CreateReportTableForCustomer;
INSERT INTO TEMP_TABLE
VALUES ( customer, reportdate );
END;
/
The problem here is that Oracle checks at compile time whether TEMP_TABLE exists and has the correct number of columns, and throws a compilation error if it doesn't exist.
So I was wondering if there's any way around that. Essentially I want to use a placeholder for the table name so that Oracle doesn't check whether the table exists.
EDIT:
I should have mentioned that a user is able to execute any 'report' (as above); it is a mechanism that will execute an arbitrary query but always write to TEMP_TABLE (in the user's schema). Thus each time the report proc is run, it drops TEMP_TABLE and recreates it with, most probably, a different column list.
You could use a dynamic SQL statement to insert into the maybe-existent temp_table, and then catch and handle the exception that occurs when the table doesn't exist.
Example:
execute immediate 'INSERT INTO '||TEMP_TABLE_NAME||' VALUES ( :customer, :reportdate )' using customer, reportdate;
Note that having the table name vary in a dynamic SQL statement is not ideal (a table name can't be a bind variable, so it has to be concatenated into the statement text), so if you can ensure the table name stays the same, that would be best.
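For example, a sketch of that approach; the mapping of ORA-00942 to an exception is real, but the surrounding anonymous block and values are just an illustration:
DECLARE
  customer      VARCHAR2(30) := 'ACME';
  reportdate    DATE         := SYSDATE;
  table_missing EXCEPTION;
  PRAGMA EXCEPTION_INIT(table_missing, -942);  -- ORA-00942: table or view does not exist
BEGIN
  EXECUTE IMMEDIATE
    'INSERT INTO temp_table VALUES ( :customer, :reportdate )'
    USING customer, reportdate;
EXCEPTION
  WHEN table_missing THEN
    -- the table hasn't been created yet; handle it however makes sense here
    NULL;
END;
/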
Maybe you should be using a global temporary table (GTT). These are permanent table structures that hold temporary data for an Oracle session. Many different sessions can insert data into the same GTT, and each will only be able to see their own data. The data is automatically deleted either on COMMIT or when the session ends, according to the GTT's definition.
You create the GTT (once only) like this:
create global temporary table my_gtt
(customer number, report_date date)
on commit delete/preserve* rows;
* delete as applicable
Then your programs can just use it like any other table - the only difference being it always begins empty for your session.
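For example (assuming the ON COMMIT DELETE ROWS variant of the definition above):
INSERT INTO my_gtt (customer, report_date)
VALUES (12345, SYSDATE);

SELECT * FROM my_gtt;  -- only this session sees the row

COMMIT;                -- with ON COMMIT DELETE ROWS, the row is now gone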
Using GTTs is much preferable to dropping/recreating tables on the fly - if your application needs a different structure for each report, I strongly suggest you work out all the different structures that each report needs, and create a separate GTT for each, instead of creating ordinary tables at runtime.
That said, if this is just not feasible (and I've seen good examples when it's not, e.g. in a system that supports a wide range of ad-hoc requests from users), you'll have to go with the EXECUTE IMMEDIATE approach.
I am currently not in a location to test any of this out but would like to know if this is an option so I can start designing the solution in my head.
I would like to create an insert trigger on a table. In this insert trigger, I would like to get values from the inserted virtual table and use them to UPDATE the same table. Would this work, or would we enter some kind of infinite loop (even though the trigger is not for update commands)?
As an example if a row was inserted (which represents a new rate/cost for a vendor) I would like to update the same table to expire the old rate/cost for that vendor. The expiration is necessary vs updating the record that already exists so a history of rates/costs can be kept for reporting purposes (not to mention that the current reporting infrastructure expects this type of thing to happen and we are migrating current reports/data to SQL Server).
Thanks!
If you have only an INSERT trigger and no UPDATE trigger then there isn't any problem, but I assume you want to catch also UPDATEs and perhaps even DELETEs.
INSTEAD OF triggers are guaranteed not to behave recursively: "If an INSTEAD OF trigger defined on a table executes a statement against the table that would ordinarily fire the INSTEAD OF trigger again, the trigger is not called recursively."
With an INSTEAD OF trigger you must perform both the original INSERT and the UPDATE you want.
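Purely as an illustration of that approach (the table and column names below are made up, not taken from the question):
-- hypothetical table: dbo.VendorRate (VendorID, Rate, EffectiveDate, ExpirationDate)
CREATE TRIGGER trg_VendorRate_Insert
ON dbo.VendorRate
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- expire the currently active rate(s) for the vendors being inserted
    UPDATE vr
       SET vr.ExpirationDate = GETDATE()
      FROM dbo.VendorRate vr
      JOIN inserted i ON i.VendorID = vr.VendorID
     WHERE vr.ExpirationDate IS NULL;

    -- then perform the original insert ourselves
    INSERT INTO dbo.VendorRate (VendorID, Rate, EffectiveDate, ExpirationDate)
    SELECT VendorID, Rate, EffectiveDate, NULL
      FROM inserted;
END;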
This doesn't sound like it would cause any problems to me, providing you're not doing an INSERT in another UPDATE trigger.
I want to create a table trigger for insert and update. How can I get the values of the current record that is inserted/updated?
Within the trigger, you can use a table called inserted to access the values of the new records and the new versions of updated records. Similarly, the table called deleted allows you to access deleted records and the original versions of updated records.
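For example, a minimal sketch (the table and column names are made up):
-- hypothetical tables dbo.Customer and dbo.CustomerAudit, for illustration only
CREATE TRIGGER trg_Customer_Audit
ON dbo.Customer
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- inserted holds the new values; deleted holds the old ones (empty on a plain INSERT)
    INSERT INTO dbo.CustomerAudit (CustomerID, OldName, NewName, ChangedAt)
    SELECT i.CustomerID, d.Name, i.Name, GETDATE()
      FROM inserted i
      LEFT JOIN deleted d ON d.CustomerID = i.CustomerID;
END;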
Use the UPDATE() function on a column (if you want to check whether that column was updated), or retrieve rows from the inserted table.
While triggers can be used for this, I'd be very careful about deciding to implement them. They are an absolute bear to debug, and can lead to a lack of maintainability.
If you need to do cascading updates (i.e. altering table A in turn changes table B), I would either use a stored procedure (which can be tested and debugged more easily than a trigger), or, if you're fortunate enough to be using an ORM (Entity Framework, NHibernate, etc.), perform this logic within your model or repository.