SQL Trigger for maintaining modification history

I have four tables:
Table1: Customers (CustomerId, Name, CustomerAddress)
Table2: AccountManagers(ManagerId, Name)
Table3: CustomerAccountManagers (CustomerId, ManagerId, Remarks)
Table4: CustomerHistory(CustomerAddress, ManagerId, Date)
The CustomerHistory table is used to store any change made to "CustomerAddress" or "ManagerId". For example, CustomerAddress is updated from "Address1" to "Address2", or the CustomerAccountManager changes from "Manager1" to "Manager2".
I need to store the changes in the CustomerHistory table through a SQL trigger. The issue is: on which table should I have my trigger? Please note that the changes are made to both tables, "Customers" and "CustomerAccountManagers", at the same time.
Thanks

First of all, the CustomerHistory table should probably contain CustomerId as well, so that a history record can be tracked back to the proper customer.
You'll need two triggers: one on CustomerAccountManagers and one on Customers. If you can guarantee the order in which they are executed, it's fine: the first trigger will insert the history record and the second will update it.
If you cannot guarantee the order, things get complicated, as each trigger would have to: 1) try to insert a new record, and failing that, 2) update the existing one. You'd have to protect yourself against an intervening insertion by the other trigger, and this likely means running within a transaction at serializable isolation level (locking the whole table). This is deadlock-prone, so it's really better to use two history tables, as others already suggested.

You should have different historic tables for each normal table you have.
Then you can place a trigger in each normal table.
Finally, if you need, you can create a view CustomerHistory joining the different historic tables.
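As a sketch of that layout (table and column names are assumed from the question; real definitions would need proper types, keys and trigger logic):

```sql
-- one history table per base table, each populated by that table's own trigger
CREATE TABLE CustomersHistory (
    CustomerId      INT,
    CustomerAddress VARCHAR(200),
    ChangeDate      DATETIME
);

CREATE TABLE CustomerAccountManagersHistory (
    CustomerId INT,
    ManagerId  INT,
    ChangeDate DATETIME
);

-- a combined view that looks like the original CustomerHistory table
CREATE VIEW CustomerHistoryView AS
    SELECT CustomerId, CustomerAddress, NULL AS ManagerId, ChangeDate
    FROM CustomersHistory
    UNION ALL
    SELECT CustomerId, NULL, ManagerId, ChangeDate
    FROM CustomerAccountManagersHistory;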

You need to add your trigger to the table where data changes will "trigger" a need to do something.
So a trigger on the Customers table to track CustomerAddress changes and a trigger on CustomerAccountManagers when the ManagerId is changed.
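For the Customers side, such a trigger could look roughly like this (SQL Server syntax; the CustomerHistory columns come from the question, with CustomerId added so the row can be traced back to the customer, as the first answer suggests):

```sql
CREATE TRIGGER trg_Customers_History
ON Customers
FOR UPDATE
AS
    -- record the new address for every row whose address actually changed
    INSERT INTO CustomerHistory (CustomerId, CustomerAddress, ManagerId, [Date])
    SELECT i.CustomerId, i.CustomerAddress, NULL, GETDATE()
    FROM Inserted i
    INNER JOIN Deleted d ON i.CustomerId = d.CustomerId
    WHERE i.CustomerAddress <> d.CustomerAddress;
```

The CustomerAccountManagers trigger would follow the same pattern, comparing ManagerId instead of CustomerAddress.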

How to Use Trigger to Log Changes in SQL Server?

Users table:
LoginLog table:
How can I insert a row logging Name, Password and LastLogonTime into the LoginLog table whenever the LastLogonTime column in the Users table is updated?
You need a fairly simple trigger on the update of the Users table. The trickier part is being aware of the fact that triggers are fired only once for each statement - and such a statement could potentially update multiple rows which would then be in your Inserted and Deleted pseudo tables in the trigger.
You need to write your trigger to be aware of this set-based nature and handle it correctly. To be able to properly link the old and new values, your Users table must have a proper primary key (you didn't mention anything about that) - something like a UserId or similar.
Try something like this:
CREATE TRIGGER dbo.trg_LogUserLogon
ON dbo.Users
FOR UPDATE
AS
    -- inspect the Inserted (new values, after UPDATE) and Deleted (old values, before UPDATE)
    -- pseudo tables to find out which rows have had an update in the LastLogonTime column
    INSERT INTO dbo.LoginLog (Name, Password, LastLogonTime)
    SELECT
        i.Name, i.Password, i.LastLogonTime
    FROM
        Inserted i
        -- join the two sets of data on the primary key (which you didn't specify)
        -- could be i.UserId = d.UserId or something similar
        INNER JOIN Deleted d ON i.PrimaryKey = d.PrimaryKey
    WHERE
        -- only select those rows that have had an update in the LastLogonTime column
        i.LastLogonTime <> d.LastLogonTime
But please also, by all means, take @Larnu's comments about never storing passwords in plain text into account! That is a seriously bad practice and needs to be avoided at all costs.

What kind of approach is this in SQL - does it actually exist? Is it viable / good practice?

One of our teachers gave us the following challenge:
"Make a Database schema with the following principle:
you can't change any values on any table, only add new ones."
I came with the following schema:
CREATE TABLE TRANSACTIONS(ID PRIMARY KEY, TRANSACTION_TYPE_FK, DATE);
CREATE TABLE TRANSACTION_TYPE(ID PRIMARY KEY, NAME);
CREATE TABLE PRODUCTS_TRANSACTIONS(ID_PROD_FK, ID_TRANS_FK, MONEY, QTY);
CREATE TABLE PRODUCTS(ID PRIMARY KEY, NAME, PRICE_FK );
CREATE TABLE PRICES(ID PRIMARY KEY, DATE, DETAILS);
It's just a proof of concept. Basically everything is based on transactions.
Transactions can be Entry, Exit and Move for products, and In & Out for money.
I can control my quantities and cash based on transactions.
The PRODUCTS_TRANSACTIONS "MONEY" field is used if a transaction involves money only or there are "discounts" or "taxes" on the transaction.
The Products table has a "child" table called "prices"; it stores all the price changes, and the "details" field is for annotations like "Cost Price" etc.
I made it very quick, I am sorry for any inconsistency.
I liked this kind of approach. I am kind of a newbie with SQL, so I really wanted to know if this approach has a name, and whether it is viable performance-wise and good practice.
My idea is to make a View and "update" it whenever a new transaction is made; since nothing needs to be "updated", I only need to add new rows.
I am currently very sick, so I can't go to college to remedy my doubts.
Thanks in advance for any help
Let's take only one table TRANSACTION_TYPE(ID PRIMARY KEY, NAME) for example:
Now, if you want to restrict updates on the table, you can achieve that with the following statements:
GRANT SELECT,INSERT,DELETE ON TRANSACTION_TYPE TO Username;
OR
Deny UPDATE ON TRANSACTION_TYPE TO Username;
Now, to maintain a history of insertions and deletions, you can store them in another table by creating a trigger on TRANSACTION_TYPE as follows:
CREATE OR REPLACE TRIGGER my_trigger   -- name of trigger
AFTER INSERT OR DELETE
ON TRANSACTION_TYPE
FOR EACH ROW
BEGIN
    IF INSERTING THEN
        -- table that maintains the history of insertions
        INSERT INTO TRANSACTION_INSERT_HISTORY (ID, NAME)
        VALUES (:new.ID, :new.NAME);
    ELSIF DELETING THEN
        -- table that maintains the history of deleted records
        INSERT INTO TRANSACTION_DELETE_HISTORY (ID, NAME)
        VALUES (:old.ID, :old.NAME);
    END IF;
END;
/
Before creating this trigger, you first have to create two tables:
TRANSACTION_INSERT_HISTORY(ID,NAME) and
TRANSACTION_DELETE_HISTORY(ID,NAME)
I have created two different tables for insertion and deletion for simplicity.
You can do it with one table too.
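A sketch of that one-table variant would simply add an operation-type column (the TRANSACTION_HISTORY table and its OPERATION column are assumed names):

```sql
CREATE OR REPLACE TRIGGER my_trigger
AFTER INSERT OR DELETE
ON TRANSACTION_TYPE
FOR EACH ROW
BEGIN
    IF INSERTING THEN
        -- 'I' marks an inserted row
        INSERT INTO TRANSACTION_HISTORY (ID, NAME, OPERATION)
        VALUES (:new.ID, :new.NAME, 'I');
    ELSIF DELETING THEN
        -- 'D' marks a deleted row
        INSERT INTO TRANSACTION_HISTORY (ID, NAME, OPERATION)
        VALUES (:old.ID, :old.NAME, 'D');
    END IF;
END;
/
```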
Hope it helps.
For the table that holds the information, you could grant only INSERT and SELECT permissions, preventing updates.
https://www.mssqltips.com/sqlservertip/1138/giving-and-removing-permissions-in-sql-server/
GRANT INSERT, SELECT ON TableX TO UserY
In a production system, you'd probably design this using a VIEW for selecting the data from the table (to only get the most recent revision of the audit data). With perhaps another VIEW that would allow you to see all the audit history. You'd probably also make use of a Stored Procedure for inserting the data and ensuring the data was being maintained in the audit history way you suggest.

Incremental load for Updates into Warehouse

I am planning for an incremental load into warehouse (especially for updates of source tables in RDBMS).
Capturing the updated rows from the RDBMS into staging tables, based on the update datetime, is straightforward. But how do I determine which columns of a particular row need to be updated in the target warehouse tables?
Or do I just delete a particular row in the warehouse table (based on the primary key of the row in staging table) and insert the new updated row?
Which is the best way to implement the incremental load between the RDBMS and Warehouse using PL/SQL and SQL coding?
In my opinion, the easiest way to accomplish this is as follows:
Create a stage table identical to your host table. When you do your incremental/net-change load, load all changed records into this table (based on whatever your "last updated" field is)
Delete the records from your actual table based on the primary key. For example, if your primary key is (customer, part), the query might look like this:
delete from main_table m
where exists (
select null
from stage_table s
where
m.customer = s.customer and
m.part = s.part
);
Insert the records from the stage to the main table.
You could also do an update of existing records / insert of new records, but either way that's two steps. The advantage of the method I listed is that it will work even if your tables have partitions and the newly updated data violates one of the original partition rules, whereas an update would not handle that. Also, the syntax is much simpler, as your update would have to list every single field, whereas the delete from / insert into allows you to list only the primary key fields.
Oracle also has a merge clause that will update if it exists or insert if it does not. I honestly don't know how that would be impacted if you had partitions.
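For reference, a merge in the (customer, part) example could be sketched like this (the non-key columns qty and price are assumptions for illustration):

```sql
MERGE INTO main_table m
USING stage_table s
   ON (m.customer = s.customer AND m.part = s.part)
 WHEN MATCHED THEN
      -- existing row: overwrite the non-key columns from stage
      UPDATE SET m.qty = s.qty, m.price = s.price
 WHEN NOT MATCHED THEN
      -- new row: insert it
      INSERT (customer, part, qty, price)
      VALUES (s.customer, s.part, s.qty, s.price);
```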
One major caveat. If your updates include deletes -- records that need to be deleted from the main table, none of these will resolve that and you will need some other way to handle that. It may not be necessary, depending on your circumstances, but it's something to consider.

Oracle: After update Trigger

I have 3 tables:
Category(CategoryId, Name)
Product(ProductId, Name, Description, CategoryId)
OrderItem(OrderId, OrdinalNumber, ProductId, CategoryId)
I want to create an AFTER UPDATE trigger that changes CategoryId (based on new ProductId) in OrderItem after update of ProductId in OrderItem.
Can somebody help with this trigger?
Duplicating the category ID in the order line isn't something you'd usually want to do, but if you're set on that, you need a 'before' trigger, not an 'after' one - since you need to change a value in the row being updated:
create or replace trigger orderitem_cat_trig
before insert or update on orderitem
for each row
begin
select categoryid
into :new.categoryid
from product
where productid = :new.productid;
end;
/
I've made it both insert and update on the assumption you'll want to set the value for new order items too.
Unless you like database deadlocks, general performance issues, data corruption and unpredictable results, this type of update is not advisable. If performance is a problem, check your indexes and queries. Do not replicate your columns across tables, especially not when they're part of a foreign key. I'm not the dogmatic type, but in this case I will not budge ;-)

Audit Triggers: Use INSERTED or DELETED system tables

The topic of how to audit tables has recently sprung up in our discussions... so I'd like your opinion on what's the best way to approach this. We have a mix of both approaches (which is not good) in our database, as each previous DBA did what he/she believed was the right way. So we need to change them to follow a single model.
CREATE TABLE dbo.Sample(
Name VARCHAR(20),
...
...
Created_By VARCHAR(20),
Created_On DATETIME,
Modified_By VARCHAR(20),
Modified_On DATETIME
)
CREATE TABLE dbo.Audit_Sample(
Name VARCHAR(20),
...
...
Created_By VARCHAR(20),
Created_On DATETIME,
Modified_By VARCHAR(20),
Modified_On DATETIME,
Audit_Type VARCHAR(1) NOT NULL,
Audited_Created_On DATETIME,
Audit_Created_By VARCHAR(50)
)
Approach 1: Store, in audit tables, only those records that are replaced/deleted from the main table (using the system table DELETED). So for each UPDATE and DELETE on the main table, the record being replaced is INSERTED into the audit table with the 'Audit_Type' column as either 'U' (for UPDATE) or 'D' (for DELETE).
INSERTs are not Audited. For current version of any record you always query the main table. And for history you query audit table.
Pros: Seems intuitive to store the previous versions of records.
Cons: If you need to know the history of a particular record, you need to join audit table with main table.
Approach 2: Store, in the audit table, every record that goes into the main table (using the system table INSERTED).
Each record that is INSERTED/UPDATED/DELETED to main table is also stored in audit table. So when you insert a new record it is also inserted into audit table. When updated, the new version (from INSERTED) table is stored in Audit table. When deleted, old version (from DELETED) table is stored in audit table.
Pros: If you need to know the history of a particular record, you have everything in one location.
Though I did not list all of them here, each approach has its pros and cons.
I'd go with :
Approach 2: Store, in the audit table, every record that goes into the main table
( using system table INSERTED).
Is one more row per item really going to kill the DB? This way you have the complete history together.
If you purge out rows (a range all older than X day) you can still tell if something has changed or not:
if an audit row exists (not purged) you can see if the row in question changed.
if no audit rows exist for the item (all were purged) nothing changed (since any change writes to the audit table, including completely new items)
If you go with Approach 1 and purge out a range, it will be hard (you'd need to remember the purge date) to tell new inserts apart from items whose audit rows were all purged.
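A minimal sketch of an Approach 2 trigger, assuming the Sample/Audit_Sample definitions above (the elided "..." columns are omitted here):

```sql
CREATE TRIGGER dbo.trg_Audit_Sample
ON dbo.Sample
FOR INSERT, UPDATE, DELETE
AS
BEGIN
    -- rows in Inserted are the new/current versions: 'I' for inserts, 'U' for updates
    INSERT INTO dbo.Audit_Sample
           (Name, Created_By, Created_On, Modified_By, Modified_On,
            Audit_Type, Audited_Created_On, Audit_Created_By)
    SELECT i.Name, i.Created_By, i.Created_On, i.Modified_By, i.Modified_On,
           CASE WHEN EXISTS (SELECT * FROM Deleted) THEN 'U' ELSE 'I' END,
           GETDATE(), SUSER_SNAME()
    FROM Inserted i;

    -- a pure DELETE has rows in Deleted only: audit the removed versions
    IF NOT EXISTS (SELECT * FROM Inserted)
        INSERT INTO dbo.Audit_Sample
               (Name, Created_By, Created_On, Modified_By, Modified_On,
                Audit_Type, Audited_Created_On, Audit_Created_By)
        SELECT d.Name, d.Created_By, d.Created_On, d.Modified_By, d.Modified_On,
               'D', GETDATE(), SUSER_SNAME()
        FROM Deleted d;
END
```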
A third approach we use a lot is to audit only the interesting columns, and save both the 'new' and 'old' value in each row.
So if you have your "name" column, the audit table would have "name_old" and "name_new".
In INSERT trigger, "name_old" is set to blank/null depending on your preference and "name_new" is set from INSERTED.
In UPDATE trigger, "name_old" is set from DELETED and "name_new" from INSERTED
In the DELETE trigger, "name_old" is set from DELETED and "name_new" to blank/null.
(or you use a FULL join and one trigger for all cases)
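A sketch of the UPDATE case of such a trigger (the Sample_Audit table, the SampleId key and the name_old/name_new columns are assumed names for illustration):

```sql
CREATE TRIGGER dbo.trg_Sample_NameAudit
ON dbo.Sample
FOR UPDATE
AS
    -- one audit row per updated record, holding old and new value side by side
    INSERT INTO dbo.Sample_Audit (SampleId, name_old, name_new, Audited_On)
    SELECT i.SampleId, d.Name, i.Name, GETDATE()
    FROM Inserted i
    INNER JOIN Deleted d ON i.SampleId = d.SampleId
    -- ISNULL so a change to or from NULL is still caught
    WHERE ISNULL(i.Name, '') <> ISNULL(d.Name, '');
```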
For VARCHAR fields, this might not look like such a good idea, but for INTEGER, DATETIME, etc it provides the benefit that it's very easy to see the difference of the update.
I.e. if you have a quantity field in your real table and update it from 5 to 7, you'd have in the audit table:
quantity_old  quantity_new
           5             7
You can easily see that the quantity was increased by 2 at that specific time.
If you have separate rows in audit table, you will have to join one row with "the next" to calculate difference - which can be tricky in some cases...