To understand an application, it would be quite helpful if every database change could be logged somehow. Then you could execute an action on the frontend and see what happens in the database. (I only care about the last 5 minutes or so, but more would not hurt.) What possibilities exist for that?
I know it is possible to configure the JDBC driver to log the executed statements, but it logs rather more than I want to see (I don't care about queries, for instance), and the output is mixed wildly into the logfiles. :-/
Another thing I can think of is to create triggers on each table that write the changes into a logging table. Has anyone managed to do that, especially writing a script that generates those triggers for a given set of tables?
You can find many examples of triggers in the Oracle documentation. What you need are AFTER INSERT/UPDATE/DELETE ... FOR EACH ROW triggers to populate your log tables. Here's an example adapted from the Oracle docs:
http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/triggers.htm#LNPLS020
CREATE TABLE Emp_log (
  Emp_id     NUMBER,
  Log_date   DATE,
  New_salary NUMBER,
  Action     VARCHAR2(20));

CREATE OR REPLACE TRIGGER Log_salary_increase_ARUID
AFTER UPDATE OR INSERT OR DELETE ON emp
FOR EACH ROW
BEGIN
  -- Can be separated into INSERTING and UPDATING branches with an additional IF.
  -- In that case it may be easier to control and/or add flags to your log tables,
  -- such as Action = 'INS' or Action = 'UPD'.
  IF (INSERTING OR UPDATING)
  THEN
    -- Insert newly created/updated values into your log table
    INSERT INTO Emp_log (Emp_id, Log_date, New_salary, Action)
    VALUES (:NEW.Empno, SYSDATE, :NEW.SAL, 'INS_UPD');
  ELSE
    -- Deleting: insert the old (deleted) values into your log
    INSERT INTO Emp_log (Emp_id, Log_date, New_salary, Action)
    VALUES (:OLD.Empno, SYSDATE, :OLD.SAL, 'DEL');
  END IF;
END;
/
With the triggers you can simply keep the previous data (a history of your data). Why don't you just insert your logs into a log table?
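As for generating those triggers for a given set of tables: a minimal sketch against the Oracle data dictionary could look like the following (the trg_aud_ naming scheme, the table list, and the assumption that a matching <table>_LOG table with log_date and action columns already exists are all mine, not from the question):

BEGIN
  FOR t IN (SELECT table_name
              FROM user_tables
             WHERE table_name IN ('EMP', 'DEPT'))  -- your set of tables
  LOOP
    -- Build and execute one audit trigger per table.
    EXECUTE IMMEDIATE
      'CREATE OR REPLACE TRIGGER trg_aud_' || t.table_name ||
      ' AFTER INSERT OR UPDATE OR DELETE ON ' || t.table_name ||
      ' FOR EACH ROW' ||
      ' BEGIN' ||
      '   INSERT INTO ' || t.table_name || '_log (log_date, action)' ||
      '   VALUES (SYSDATE, CASE WHEN INSERTING THEN ''INS''' ||
      '                         WHEN UPDATING  THEN ''UPD''' ||
      '                         ELSE ''DEL'' END);' ||
      ' END;';
  END LOOP;
END;
/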
Is there a way in Informix (v12 or higher) to retrieve the name of the current SAVEPOINT?
In Oracle there is something similar: You can name the transaction using SET TRANSACTION NAME and then select the transaction name from v$transaction:
SELECT name
  FROM v$transaction
 WHERE xidusn || '.' || xidslot || '.' || xidsqn
       = DBMS_TRANSACTION.LOCAL_TRANSACTION_ID;
That is not very straightforward, but it does the trick. Effectively we can use that as a transaction-scoped variable (yes, it is ugly, but it has worked for years now).
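For completeness, a minimal usage sketch (the 'userA:4711' name format is only an illustration, not a requirement):

-- Name the transaction as the first statement of the unit of work...
SET TRANSACTION NAME 'userA:4711';

-- ...then read it back anywhere inside the same transaction.
SELECT name
  FROM v$transaction
 WHERE xidusn || '.' || xidslot || '.' || xidsqn
       = DBMS_TRANSACTION.LOCAL_TRANSACTION_ID;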
We have a mechanism based on this and would like to port that to Informix. Is there a way to do that?
Of course, if there is a different mechanism providing transaction-scoped variables (so DEFINE GLOBAL is not what we are looking for), that would be helpful too, but I doubt there is one.
Thank you all for your comments so far.
Let me show the solution I have come up with. It is just a work-in-progress idea, but I hope it will lead somewhere:
I will need an "audit_lock" table which always contains a record for the current transaction, carrying information such as the username and a unique transaction_id (a UUID or similar). That row is inserted when the transaction starts and deleted before it commits.
Then I have a generic audit_trail table containing the audited information.
All audited tables fill the generic audit trail table using triggers, serializing each audited column into a separate record of the generic audit trail table.
The audit_lock and audit_trail tables need to use row-level locking. Also, to avoid read locks on the audit_lock table, we need to set the isolation level to COMMITTED READ LAST COMMITTED. If your use case does not allow that, the suggested pattern does not work.
Here's the DDL:
CREATE TABLE audit_lock
(
  transaction_id varchar(40) primary key,
  username       varchar(40)
);

alter table audit_lock lock mode(ROW);

CREATE TABLE audit_trail
(
  id                  serial primary key,
  tablename           varchar(255) NOT NULL,
  record_id           numeric(10) NOT NULL,
  username            varchar(40) NOT NULL,
  transaction_id      varchar(40) NOT NULL,
  changed_column_name varchar(40),
  old_value           varchar(40),
  new_value           varchar(40),
  operation           varchar(40) NOT NULL,
  operation_date      datetime year to second NOT NULL
);

alter table audit_trail lock mode(ROW);
Now we need to have the audited table:
CREATE TABLE audited_table
(
  id         serial,
  somecolumn varchar(40)
);
And the table has an insert trigger writing into the audit_trail:
CREATE PROCEDURE proc_trigger_audit_audited_table ()
REFERENCING OLD AS o NEW AS n FOR audited_table;
  INSERT INTO audit_trail
  (
    tablename,
    record_id,
    username,
    transaction_id,
    changed_column_name,
    old_value,
    new_value,
    operation,
    operation_date
  )
  VALUES
  (
    'audited_table',
    n.id,
    (SELECT username FROM audit_lock),
    (SELECT transaction_id FROM audit_lock),
    'somecolumn',
    '',
    n.somecolumn,
    'INSERT',
    sysdate
  );
END PROCEDURE;
CREATE TRIGGER audit_insert_audited_table INSERT ON audited_table REFERENCING NEW AS post
FOR EACH ROW(EXECUTE PROCEDURE proc_trigger_audit_audited_table() WITH TRIGGER REFERENCES);
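For updates, a matching trigger would follow the same pattern, this time logging both the old and the new value. A sketch along the lines of the INSERT version above (the procedure and trigger names are made up):

CREATE PROCEDURE proc_trigger_audit_upd_audited_table ()
REFERENCING OLD AS o NEW AS n FOR audited_table;
  -- Same serialization as the INSERT case, but o.somecolumn carries
  -- the previous value and the operation is flagged as UPDATE.
  INSERT INTO audit_trail
    (tablename, record_id, username, transaction_id,
     changed_column_name, old_value, new_value, operation, operation_date)
  VALUES
    ('audited_table', n.id,
     (SELECT username FROM audit_lock),
     (SELECT transaction_id FROM audit_lock),
     'somecolumn', o.somecolumn, n.somecolumn, 'UPDATE', sysdate);
END PROCEDURE;

CREATE TRIGGER audit_update_audited_table UPDATE ON audited_table REFERENCING OLD AS pre NEW AS post
FOR EACH ROW(EXECUTE PROCEDURE proc_trigger_audit_upd_audited_table() WITH TRIGGER REFERENCES);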
Now let's use it: first the caller needs to generate a transaction_id, for example using a UUID generator. In the example below the transaction_id is simply '4711'.
BEGIN WORK;
SET ISOLATION TO COMMITTED READ LAST COMMITTED; --should be set globally
-- Create the audit_lock entry at the beginning of each transaction
insert into audit_lock (transaction_id, username) values ('4711', 'userA');
-- Is it there?
select * from audit_lock;
-- do data manipulation stuff
insert into audited_table (somecolumn) values ('valueA');
-- Issue that at the end of each transaction
delete from audit_lock
where transaction_id = '4711';
commit;
In a quick test, all of this worked even in simultaneous transactions. Of course, this still needs a lot of work and testing, but I currently hope this path is feasible.
Let me also add a little bit more info on the other approach we are using in Oracle:
In Oracle we are (ab)using the transaction name, to store exactly the information that in the suggestion above is stored in the audit_lock table.
The rest is the same as above. The triggers work perfectly in that specific application, even though there are of course many scenarios in other applications where putting insert, delete, and update triggers on each table, generating a record for each changed column, would be nuts. In our application it has worked perfectly for ten years now, with no noticeable performance impact on the way the application is used.
In the Java application server, all code blocks that change data first set the transaction name and then perform loads of changes on various tables, which may fire all these triggers. All of this runs in the same transaction, and since that transaction has a name containing the application user, the triggers can write that information to the audit trail table.
I know there are other approaches to the problem, and you could even do it with Hibernate features alone, but our approach allows us to enforce some consistency through the database (a NOT NULL constraint on the username in the audit trail table). Since everything is done via triggers, we can let them fail if the transaction name does not contain the user (by requiring it to be in a specific format). If any other parts of the application, other applications, or ignorant administrators try to issue updates to the audited tables without setting the transaction name in that specific format, those updates will fail. This makes updates to the audited tables that do not generate the required audit entries harder (certainly not impossible; an ill-willed admin can do anything, of course).
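To illustrate the enforcement part, the check inside the Oracle triggers might look roughly like this (the user:txid format and the error code are assumptions):

-- Sketch: refuse the audited DML if the transaction name is missing
-- or not in the expected 'username:transaction_id' format (assumed).
DECLARE
  v_txn_name VARCHAR2(255);
BEGIN
  SELECT name
    INTO v_txn_name
    FROM v$transaction
   WHERE xidusn || '.' || xidslot || '.' || xidsqn
         = DBMS_TRANSACTION.LOCAL_TRANSACTION_ID;

  IF v_txn_name IS NULL OR INSTR(v_txn_name, ':') = 0 THEN
    RAISE_APPLICATION_ERROR(-20001,
      'Transaction name not set in user:txid format - audit refused');
  END IF;
END;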
To all of you who are cringing now, let me quote Luís: Might seem like a terrible idea, but I have my use case ;)
@Luís's idea of creating a specific table in each transaction to store the information (let's call it a "transaction info table") causes a locking issue in systables. That idea had not crossed my mind, since DDL causes commits in Oracle. So I tried it in Informix, but if I try to create a table called "tblX" in two simultaneous transactions, the second transaction gets a locking exception:
Cannot update system catalog (systables). [SQL State=IX000, DB Errorcode=-312]
Next: ISAM error: key value locked [SQL State=IX000, DB Errorcode=-144]
But letting all transactions use the same table, as described above, works as far as I have tested it so far.
I've created a query out of 3 dependent tables:
car_parts(id, name, value);
car_model(id, name, price);
part_and_model(car_model_id, car_part_id)
I get my car models from a list of values, and for the part ID I use a sequence to increment it.
I have all the needed fields from my query, but now I want to insert data into car_parts and part_and_model at the same time. I've created two processes on submit to insert (the first for car_parts(fk) and the second for part_and_model). When I run this, I get this error and no data gets inserted into the tables:
ORA-02291: integrity constraint (PRACTICE.FK_CAR_PART_car_part) violated - parent key not found
When I delete the second process, the data is correctly inserted into the car_parts table. Why is this happening? Does the second process execute before the first? What else am I missing?
item P8_CAR_MODEL = LOV;
item P8_ID => sequence: SELECT CP_SEK.NEXTVAL FROM DUAL;
process1: insert into CAR_PARTS (CP_ID, CP_NAME, CP_PRICE) VALUES (:P8_ID, :P8_NAME, :P8_PRICE);
process2: insert into part_and_model (PART_ID, MODEL_ID) values (:P8_ID, :P8_CAR_MODEL);
EDIT:
I managed to insert into the car_parts table, but the error still says that the parent cannot be found (even though it's in the table), so it must be the sequence...
Here is my code:
process1:
begin
select PART_SEK.NEXTVAL INTO :P8_ID from dual;
insert into car_parts(id,name,value) VALUES (:P8_ID, :P8_NAME, :P8_PRICE);
end;
process2:
insert into part_and_model(car_model_id, car_part_id) values (:P8_MODEL,:P8_ID);
EDIT2:
When I use PART_SEK.currval instead of :P8_ID in the second process, it works.
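That is consistent with how sequences behave: NEXTVAL in process 1 advances the sequence, and CURRVAL returns that same value for the rest of the session. A sketch of the working second process:

-- process2, reusing the value fetched by PART_SEK.NEXTVAL in process1:
insert into part_and_model (car_model_id, car_part_id)
values (:P8_MODEL, PART_SEK.currval);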
Once again, thank you all for your time.
Insert rows into both tables in the same process; master table first, details next.
Look at the sequence of the processes. The first process must have a lower sequence number, e.g. 10, and the second one a greater number.
Make sure that the first process is the one inserting into the master table.
Change the processes' execution point to Processing.
DECLARE
  a_id NUMBER;
BEGIN
  a_id := your_sequence.nextval;

  INSERT INTO car_parts (cp_id, cp_name, cp_price)
  VALUES (a_id, :P8_NAME, :P8_PRICE);

  INSERT INTO part_and_model (part_id, model_id)
  VALUES (a_id, :P8_CAR_MODEL);
END;
By all rights this should work. I do believe your problem really is with the sequence; there is nothing else I can think of that would cause your error. So please try this and let us know.
I have a table with two columns, 'insert_name' and 'modified_name'. I need to store in these columns who inserted the data into the table ('insert_name') and who changed the data in the table ('modified_name'). How can it be done?
You are looking for basic DML statements.
If your record is already in the table, then you need to UPDATE it. Otherwise, when you are about to add a record that doesn't already exist in the destination table, you are looking for the INSERT INTO statement.
Example of updating information for record with first id:
UPDATE yourtable SET insert_name = 'value1', modified_name = 'value2' WHERE id = 1
Example of inserting new record:
INSERT INTO yourtable(id, company_name, product_name, insert_name)
VALUES (1, 'Google', 'PC', 'value1')
If you are looking for automatic changes to those columns then you need to look into triggers.
Remember that more often than not the application connecting to the database uses a single database user, in which case you probably know the context within the application itself (who inserts, who updates). That eliminates triggers and puts the task straight onto simple insert/update commands from within your application layer.
You might be able to use the CURRENT_USER function to find the name of the user making the change.
The value from this function could then be used to update the appropriate column. This update could be done as part of the INSERT or UPDATE statement. Alternatively use an INSERT or UPDATE trigger.
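A minimal sketch (table and column names from the question; the sample values are made up):

-- Stamp the inserting user directly in the INSERT...
INSERT INTO yourtable (id, company_name, product_name, insert_name)
VALUES (2, 'Example Corp', 'Laptop', CURRENT_USER);

-- ...and the modifying user in the UPDATE.
UPDATE yourtable
   SET product_name = 'Laptop Pro',
       modified_name = CURRENT_USER
 WHERE id = 2;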
Personally I avoid triggers if I can.
For those two columns, add CURRENT_USER as a DEFAULT constraint.
That way the initial INSERT statement will fill them with the current login user name. For updates, write an UPDATE trigger that sets the modified_name column using the same CURRENT_USER function.
Only go for a trigger if your application's business logic cannot update the modified_name column itself.
See the use of CURRENT_USER:
https://msdn.microsoft.com/en-us/library/ms176050.aspx
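Putting both pieces together, a sketch in SQL Server syntax (as in the linked docs; the constraint and trigger names are made up, and an id key column is assumed):

-- Default: new rows get the current login user in insert_name.
ALTER TABLE yourtable
  ADD CONSTRAINT df_yourtable_insert_name
  DEFAULT CURRENT_USER FOR insert_name;

-- Trigger: updates stamp the current login user into modified_name.
CREATE TRIGGER trg_yourtable_modified_name
ON yourtable
AFTER UPDATE
AS
BEGIN
  UPDATE t
     SET modified_name = CURRENT_USER
    FROM yourtable t
    JOIN inserted i ON i.id = t.id;
END;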
I have written a trigger that transfers a record from the table members_new to members_old. The function of the trigger is to insert a record into members_old after an insert on members_new. So suppose a record like this is inserted into members_new:
nMmbID  nMmbName  nMmbAdd
1       Abhi      Bangalore
This record should then get inserted into members_old, which has the same structure as members_new.
My trigger looks like this:
create trigger add_new_record
after
insert on members_new
for each row
INSERT INTO `test`.`members_old`
(
  `nMmbID`,
  `nMmbName`,
  `nMmbAdd`
)
(
  SELECT
    `members_new`.`nMmbID`,
    `members_new`.`nMmbName`,
    `members_new`.`nMmbAdd`
  FROM `test`.`members_new`
  -- reads the last record from members_new to stop duplication in
  -- members_old; this should also reduce the chances of any error
  where nMmbID = (select max(nMmbID) from `test`.`members_new`)
)
This trigger is working for now, but my confusion is: what will happen if multiple insertions happen at the same instant?
Will it reduce performance?
Will I ever face a deadlock in any case, since my members_old has FKs?
If there is a better solution for this situation, please shed some light on it.
From the manual:
You can refer to columns in the subject table (the table associated with the trigger) by using the aliases OLD and NEW. OLD.col_name refers to a column of an existing row before it is updated or deleted. NEW.col_name refers to the column of a new row to be inserted or an existing row after it is updated.
create trigger add_new_record
after
insert on members_new
for each row
INSERT INTO `test`.`members_old`
SET
  `nMmbID`   = NEW.nMmbID,
  `nMmbName` = NEW.nMmbName,
  `nMmbAdd`  = NEW.nMmbAdd;
And you will have no problem with deadlocks or whatever. It should also be much faster, because you don't have to read the max value first (which is insecure anyway and might lead to compromised data). Read about isolation levels and transactions if you're interested in why...
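A quick way to verify the behaviour (sample values are made up):

-- Insert a row into members_new; the trigger should copy it.
INSERT INTO `test`.`members_new` (nMmbID, nMmbName, nMmbAdd)
VALUES (2, 'Riya', 'Delhi');

-- The copy should now be visible in members_old.
SELECT * FROM `test`.`members_old` WHERE nMmbID = 2;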
I have two tables
moduleprogress which contains fields:
studentid
modulecode
moduleyear
modules which contains fields:
modulecode
credits
I need a trigger to run when the user attempts to insert or update data in the moduleprogress table.
The trigger needs to:
look at the studentid the user has input and find all modules they have taken in moduleyear "1".
take the modulecode the user input, look at the modules table, and find the sum of the credits field for all of those modules (each module is worth 10 or 20 credits).
raise an error if the sum is above 120 (the yearly credit limit); otherwise the input is OK.
Does this make sense? Is this possible?
@a_horse_with_no_name
This looks like it will work, but I will only be using the database to input data manually, so it needs to raise the error on input. I'm trying to get a trigger similar to this to solve the problem (the trigger below doesn't work). Please ignore the "UOS_" prefix on everything; it just helps me organize my database and other functions.
CREATE OR REPLACE TRIGGER "UOS_TESTINGS"
BEFORE UPDATE OR INSERT ON UOS_MODULE_PROGRESS
REFERENCING NEW AS NEW OLD AS OLD
DECLARE
MODULECREDITS INTEGER;
BEGIN
SELECT
m.UOS_CREDITS,
mp.UOS_MODULE_YEAR,
SUM(m.UOS_CREDITS)
INTO MODULECREDITS
FROM UOS_MODULE_PROGRESS mp JOIN UOS_MODULES m
ON m.UOS_MODULE_CODE = mp.UOS_MODULE_CODE
WHERE mp.UOS_MODULE_YEAR = 1;
IF MODULECREDITS >= 120 THEN
RAISE_APPLICATION_ERROR(-20000, 'Students are only allowed to take upto 120 credits per year');
END IF;
END;
I get the error message:
8 23 PL/SQL: ORA-00947: not enough values
4 1 PL/SQL: SQL Statement ignored
I'm not sure I understand your description, but the way I understand it, this can be solved using a materialized view, which might give better transactional behaviour than a trigger:
CREATE MATERIALIZED VIEW LOG
ON moduleprogress WITH ROWID (modulecode, studentid, moduleyear)
INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW LOG
ON modules WITH ROWID (modulecode, credits)
INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW mv_module_credits
REFRESH FAST ON COMMIT WITH ROWID
AS
SELECT pr.studentid,
SUM(m.credits) AS total_credits
FROM moduleprogress pr
JOIN modules m ON pr.modulecode = m.modulecode
WHERE pr.moduleyear = 1
GROUP BY pr.studentid;
ALTER TABLE mv_module_credits
ADD CONSTRAINT check_total_credits CHECK (total_credits <= 120)
But: depending on the size of the table, this might be slower than a pure trigger-based solution.
The only drawback of this solution is that the error is thrown at commit time, not when the insert happens (because the MV is only refreshed on commit, and the check constraint is evaluated then).
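For comparison, here is a hedged sketch of what the pure trigger-based check might look like, using a statement-level trigger so it does not hit the mutating-table error that a row-level trigger reading moduleprogress would raise (unprefixed table and column names from the question; the trigger name is made up):

CREATE OR REPLACE TRIGGER check_year1_credits
AFTER INSERT OR UPDATE ON moduleprogress
DECLARE
  v_over NUMBER;
BEGIN
  -- Count students whose year-1 credits exceed the limit; the check runs
  -- once per statement, after the rows are already in place.
  SELECT COUNT(*)
    INTO v_over
    FROM (SELECT pr.studentid
            FROM moduleprogress pr
            JOIN modules m ON m.modulecode = pr.modulecode
           WHERE pr.moduleyear = 1
           GROUP BY pr.studentid
          HAVING SUM(m.credits) > 120);

  IF v_over > 0 THEN
    RAISE_APPLICATION_ERROR(-20000,
      'Students are only allowed to take up to 120 credits per year');
  END IF;
END;
/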