Multi-table auditing structure - SQL

In this scenario I have the following tables:
HEADER_TABLE
DETAIL_TABLE_1 (FK to HEADER_TABLE)
And
HEADER_TABLE_AUDIT
DETAIL_AUDIT_TABLE_1
What I would like to do is create a single snapshot of the two tables in the audit tables when a change "session" occurs. So for instance, if the header record AND 3 of 20 associated child records are updated at the same time, in one "session", then the triggers on each table should result in writing only one header audit record and 20 detail audit records (as they were before the changes were applied).
My idea was to retrieve a change session id from a sequence, attach it to all the changes being made within that session (1 header record and 3 child records), and pass it on to the audit tables. This would result in 4 trigger fires across the 2 tables, but should write only 1 set of data (the header record and its associated detail records). A check for the change session id in the audit tables (either header or detail) would determine whether a new set needs to be created, or whether to skip because it already exists (the 1st trigger fires, doesn't find the change session id, and creates audit records for all tables; the next trigger fires, sees the same change session id already exists, and skips adding audit records; and so on).
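A minimal sketch of that idea (all names here are hypothetical): a sequence supplies the change session id, and a package variable caches it so every trigger fired within the same session sees the same value.

CREATE SEQUENCE change_session_seq;

CREATE OR REPLACE PACKAGE change_session AS
  FUNCTION current_id RETURN NUMBER;  -- same id for the whole change session
  PROCEDURE reset;                    -- call when the change session ends
END change_session;
/

CREATE OR REPLACE PACKAGE BODY change_session AS
  g_id NUMBER;

  FUNCTION current_id RETURN NUMBER IS
  BEGIN
    IF g_id IS NULL THEN
      g_id := change_session_seq.NEXTVAL;  -- first caller allocates the id (11g+ syntax)
    END IF;
    RETURN g_id;
  END;

  PROCEDURE reset IS
  BEGIN
    g_id := NULL;
  END;
END change_session;
/

Each trigger would then stamp its audit rows with change_session.current_id and skip the copy when that id is already present in the audit tables.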
This works OK if I am just updating the header record only. The trouble I am running into is when I am updating child records: how do I select the 20 detail records within the trigger on the detail table (understandably, Oracle doesn't allow this; it raises the mutating-table error, ORA-04091)?
Of course, I am open to other ideas on doing this, as this was the best I could think of to create a snapshot of the data from all involved tables prior to it being updated. I have wrestled with this one for a while, so any insight would be greatly appreciated.

There is a project I'm working on with a similar requirement. Let me describe how I have done it; it could be helpful for you.
Requirement: keep a copy of the entire record whenever there is a change. Suppose our table is the EMP table from the sample data provided by Oracle.
Implementation:
EMP is no longer a table, it is a view. All the data (the active version AND all the older rows) is stored in a table called EMP_ALL.
EMP_ALL has a couple of additional columns:
version_no: a timestamp in the format YYYYMMDDHH24MISS, so it is visible when the change was made.
active_version: contains 1 for the row which is current and 0 for all the audited rows.
emp_key: an id which is the same for every row belonging to one employee - more about this later.
EMP is a view with the definition:
CREATE OR REPLACE view EMP AS
SELECT
<all_columns>
FROM EMP_ALL
WHERE active_version = 1
There is an INSTEAD OF trigger on EMP, so whenever EMP is updated, the old row is copied into EMP_ALL (with active_version = 0) and the current row is updated in EMP_ALL (with active_version = 1). The reason we update the current row is that we want to keep the primary key of the active row, because that can be used for any foreign key definitions: the foreign key needs to map to the active row, not to an archived row. The column EMP_KEY is used as an identifier for a single employee.
EMP_ALL has a BEFORE INSERT trigger to populate the audit columns.
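A minimal sketch of those two triggers, assuming for brevity that EMP_ALL has only the columns EMPNO, ENAME, and SAL besides the audit columns, plus a sequence EMP_KEY_SEQ (EMP_ALL's own surrogate primary key is omitted here):

CREATE OR REPLACE TRIGGER emp_iou
INSTEAD OF UPDATE ON emp
FOR EACH ROW
BEGIN
  -- Archive the old values; the BEFORE INSERT trigger fills in what is missing.
  INSERT INTO emp_all (emp_key, empno, ename, sal, version_no, active_version)
  VALUES (:old.emp_key, :old.empno, :old.ename, :old.sal, :old.version_no, 0);

  -- Update the active row in place so its primary key (and any foreign
  -- keys pointing at it) survive the change.
  UPDATE emp_all
     SET empno      = :new.empno,
         ename      = :new.ename,
         sal        = :new.sal,
         version_no = TO_CHAR(SYSDATE, 'YYYYMMDDHH24MISS')
   WHERE emp_key = :old.emp_key
     AND active_version = 1;
END;
/

CREATE OR REPLACE TRIGGER emp_all_bi
BEFORE INSERT ON emp_all
FOR EACH ROW
BEGIN
  -- Populate the audit columns when the caller has not supplied them.
  :new.version_no := NVL(:new.version_no, TO_CHAR(SYSDATE, 'YYYYMMDDHH24MISS'));
  :new.emp_key    := NVL(:new.emp_key, emp_key_seq.NEXTVAL);  -- brand-new employees only
END;
/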
The same logic can be applied to any child tables. The column "version_no" will indicate when something changed and determines the order of the changes. This is very useful when constructing an audit trail.
Note: you cannot use the RETURNING INTO clause (which applications like APEX use) when doing an insert on the EMP view.

You can get a "session number" from the following:
SELECT sid, serial# FROM v$session WHERE sid = SYS_CONTEXT('USERENV', 'SID');
Use these along with the current timestamp and the PK of the row being updated to generate a unique (? see "not handled" below) key for the audit record. Write to the audit tables only the rows actually being updated (if 3 of 20 details are updated, write only those 3). You can then reconstruct an "as of" version of the entire structure whenever necessary; a sketch of such a trigger follows the lists below. This handles the following:
Header updated, but no details updated.
Detail(s) updated but not the header.
Multiple headers updated in single statement.
Multiple details across multiple headers updated in single statement.
What is not handled:
Multiple updates of a single row within a single transaction (may need a sequence to avoid duplicate generated keys and to provide proper ordering).
Deletes. A header delete is especially troublesome.
Inserts (if needed).
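As a rough sketch of such a trigger (the table and column names are hypothetical, and only a couple of payload columns are shown):

CREATE OR REPLACE TRIGGER detail_table_1_aud
BEFORE UPDATE OR DELETE ON detail_table_1
FOR EACH ROW
DECLARE
  v_sid NUMBER := TO_NUMBER(SYS_CONTEXT('USERENV', 'SID'));
BEGIN
  -- Write only the row being changed, keyed by session id + timestamp + PK.
  INSERT INTO detail_audit_table_1
    (detail_id, header_id, qty,   -- the row's values before the change
     audit_sid, audit_ts)         -- session + time make up the generated key
  VALUES
    (:old.detail_id, :old.header_id, :old.qty,
     v_sid, SYSTIMESTAMP);
END;
/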


Oracle APEX: assign primary key as Interactive Grid ROWID, use selected Interactive Grid ROWID in SQL query

To preface -- I am as green as it gets.
I am tasked with building an app for internal org use. We have a DB with patient data, and an interface with a hospital electronic medical records system. Patient data entered into the EMR is sent to us via the interface to update the patient profile in our database. Partial matches require manual intervention.
A message is received in a table within a staging schema,
an attempt is made to match it to an existing patient,
and if there are only 'partial matches', a status is set to 'mismatch'.
I want to:
Display a summary of all 'mismatch' status records. I want to use an interactive grid to select individual records.
Set ROWID of interactive grid rows to the respective primary key of the record in the staging table.
Use the selected Interactive Grid ROWID (user selects a specific record in the interactive grid) to select the matching primary key of the corresponding record in staging table to create SQL query to find potential matches within our DB. Potential matches are displayed in a second table.
Questions:
How do I set the ROWID of an Interactive Grid to the unique key column of the staging table?
--Some research shows I need a hidden item; is this correct?
How do I access a ROWID that has been selected in the Interactive Grid to use it in a SQL query?
My humble thanks and appreciation
So, your question is a bit confusing, but as far as I understand it: you are getting some data from table A and trying to match it with table B. If it matches, it is irrelevant for us. If a match is not found, you want to show it so that it can be manually matched.
In APEX, in order to update a table, you need to select the primary key by which it will update the data. That is usually a column in the table, but it can also be ROWID (just include it in the SQL like any other column).
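For example (the staging table and column names here are made up), the Interactive Grid's source query could simply expose ROWID, and you would then pick it as the primary key column in the grid attributes:

SELECT rowid AS row_pk,     -- used as the grid's primary key
       staging_id,          -- the staging table's own unique key
       patient_name,
       match_status
  FROM staging_messages
 WHERE match_status = 'mismatch'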
Here is what I would suggest, based on my understanding of your situation.
Display the mismatched rows in an interactive grid, with ROWID as the primary key. You will then need a column by which you match; if these entries already have some sort of key by which you tried to match but failed, display that, and make that column a Popup LOV so the user can edit the value in that field and set it to the appropriate match. One thing you will need to be careful about: you are editing a unique key, or perhaps even a primary key, so you might get conflicts here. Even if you only display unmatched data in the LOV, you can still have a user editing multiple rows and trying to match two rows to the same value; that will fail with an error that isn't particularly user-friendly.

Delete from audit table at runtime

We are using an Oracle 12.1 database.
We want to create a table which will hold runtime audit data.
The relevant data covers only a one-week time frame (older records become irrelevant), and we'll delete older records using a job.
The table holds 3 columns: ID, Date (primary key), and DAY_COUNT.
We want to reset specific records, which can be achieved by updating DAY_COUNT to 0.
But we want to keep the table small, and the old data is irrelevant to us, so we are considering using DELETE instead of UPDATE.
Is it safe to reset records at runtime using DELETE?
There seems to be an undocumented convention against using DELETE, but is it relevant in this case?

3 Level authorization structure

I am working on a banking application, and I want to add a maker, checker, and authorizer feature for every record in a table. I explain the details below.
Suppose I have one table called invmast. There are 3 users: one is the maker, the 2nd one is the checker, and the last one is the authorizer. When the maker creates a transaction in the database, the record is not live (meaning it is not yet available in the invmast table). Once the checker has checked the record and the authorizer has authorized it, the record goes live (meaning it is inserted into the invmast table). The same thing applies to updates and deletes. So I want a table structure that achieves this in real time. Please advise.
I am using VB.NET and SQL Server 2008.
Reads like a homework assignment.....
Lots of ways to solve this, here's a common design pattern:
Have an invmast_draft table that is identical to invmast but has an additional status column. Apps need to be aware of this table, the status column, and what its values mean. In your case, it can have at least 3 values: draft, checked, authorized. Makers first create a transaction in this table. Once the maker is done, the row is committed with the value "draft" in the status column. The checker then knows there's a new row to check and does his job; when done, the row is updated with status set to "checked". The authorizer does her thing, and when she updates the status to "authorized" you can then copy or move the row to the final invmast table right away. Alternatively, you can have a process that wakes up periodically to copy/move batches of rows. It all depends on your business requirements. All kinds of optimizations can be performed here, but you get the general idea.
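A minimal sketch of such a draft table (SQL Server syntax; the column names are illustrative, and invmast's real business columns would go where indicated):

CREATE TABLE invmast_draft (
    inv_id        INT         NOT NULL PRIMARY KEY,
    -- ... the same business columns as invmast go here ...
    status        VARCHAR(10) NOT NULL
        CONSTRAINT ck_invmast_draft_status
            CHECK (status IN ('draft', 'checked', 'authorized')),
    maker_id      VARCHAR(30) NOT NULL,  -- who created the draft
    checker_id    VARCHAR(30) NULL,      -- who checked it
    authorizer_id VARCHAR(30) NULL       -- who authorized it
);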

One to many relationship with ensuring at least 1 record exists in (many table)

How would you design tables in the following scenario?
I have two tables in one-to-many relationship.
Table A - One
Table B - Many
Such a relation doesn't give me, at the database level, the protection that at least 1 record will be present in Table B.
Moreover, Table A should know the last identifier from Table B (based on some rule).
How could I accomplish such a task?
The foreign key will be in Table B, which guarantees that every row in Table B will have a corresponding row in Table A. In a one-to-one relationship, you could have a redundant FK in Table A to guarantee the reverse, but for a one-to-many, that's not possible.
I came across a similar requirement a few years ago when I designed a method for maintaining versions of data. Table A held the static or unchanging data (or data that might change but did not need to be tracked), and Table B contained each version of the data as it changed. My solution was to force all DML access to the tables through a view. Actually, there were two main views: one performed a one-to-many join and provided a complete history of the data changes; this had a "do nothing" trigger on it to render it read-only (one shouldn't be able to change history). The other was a one-to-one join of the static data and only the current version, providing the data as it exists "now." All DML went through this second view.
When a row was inserted, the trigger inserted each table's respective fields into both tables. When a row was updated, the static fields (if changed) were updated, and the versioned data was inserted as a new row. Deletions were handled as a soft delete.
The point is, there was no way to insert only into the static table. Even if all the versioned fields of a new row happened to contain NULLs, those fields were still inserted into the versioned table. So it was not possible to have a row in Table A (my static table) that did not have at least one corresponding row in Table B (my versioned table).
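A rough sketch of that arrangement (Oracle syntax; all names are hypothetical):

CREATE OR REPLACE VIEW item_v AS
SELECT s.item_id, s.static_col, v.version_no, v.versioned_col
  FROM item_static s
  JOIN item_version v
    ON v.item_id = s.item_id
   AND v.is_current = 1;

CREATE OR REPLACE TRIGGER item_v_ioi
INSTEAD OF INSERT ON item_v
FOR EACH ROW
BEGIN
  -- Every insert through the view writes both tables, so a static row
  -- can never exist without at least one versioned row.
  INSERT INTO item_static (item_id, static_col)
  VALUES (:new.item_id, :new.static_col);

  INSERT INTO item_version (item_id, version_no, is_current, versioned_col)
  VALUES (:new.item_id, 1, 1, :new.versioned_col);
END;
/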

Use triggers to keep history of relational tables

Say I have 6 tables:
Workstation
Workstation_CL
Location
Location_CL
Features
Features_CL
I am currently using triggers to do inserts into the "_CL" version of each table with an additional field that denotes whether the change was an "UPDATE", "INSERT" or "DELETE".
The Workstation table keeps track of the "modified_by" user. If a user updates the location of a "Workstation" object, the "Location" table gets updated as well as the "Workstation" table. The only modification to the Workstation table is the "modified_by" field, so that I will know who made the change.
The problem I am having is when I think about pulling an audit report: how will I link records in "Location_CL" to the ones in "Workstation_CL" when both are populated by separate triggers?
Somehow my question portion was erased; sorry about that.
Question: how can I put some type of unique identifier in both "Workstation_CL" and "Location_CL" so that I can identify each revision? For instance, when I pull all records from "Location_CL" and see all location changes, how can I pull the username from "Workstation_CL" of whoever made the location change?
Give each revision a GUID generated by the trigger. Populate a field (RevisionId) in both tables with the value.
You need 2, maybe 3 columns on each audit table.
1) Timestamp, so you know when the changes were made.
2) User who made the change, so you can track who made the changes - I assume that Location can change independently of Workstation.
3) You might need an identifier for the transaction, too. I THINK you can get an id from the DB, though I'm not sure.
I don't think you can have an effective report without timestamps and users, though, and I don't think you can have the user on just one table.
During the trigger event, I was able to exec the following:
SELECT @trans_id = transaction_id FROM sys.dm_tran_current_transaction;
which gives me the transaction id for the current operation.
With that, I am able to insert it into the corresponding _CL table and then perform selects that match on the transaction ids.
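Putting it together, a sketch of one such trigger (SQL Server; the _CL column names here are invented) that stamps the audit rows with the shared transaction id:

CREATE TRIGGER trg_location_audit
ON Location
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @trans_id BIGINT;
    SELECT @trans_id = transaction_id
      FROM sys.dm_tran_current_transaction;

    -- Both this trigger and the Workstation trigger write the same
    -- @trans_id, so the _CL rows of one change can be joined later.
    INSERT INTO Location_CL
        (location_id, location_name, change_type, transaction_id, changed_at)
    SELECT d.location_id, d.location_name, 'UPDATE', @trans_id, SYSUTCDATETIME()
      FROM deleted d;
END;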