Query to View new/updated tables - sql

I am using the following to list the row count for all of my tables:
select convert(varchar(30),object_name(id)) [Table Name], rows from sysindexes
where object_name(id) not like 'sys%' and indid = 1
order by object_name(id)
I confess that I found this somewhere and only have a conceptual idea of what it is doing. But for my purposes, where I want to perform an application action and reverse engineer what happened in the database, it works well for identifying new rows (I copy and paste the before/after results into Excel to compare).
Now, I would also like to know which tables have been updated. On (almost) all of my tables there is a ModifiedOn column, so I am hoping I can add the max of this to my output, which will tell me when the table's contents were last updated.
I have no idea how to join these two, and any help is appreciated.

I would strongly advise against this approach, as it is DB-dependent and unreliable.
Creating an ON INSERT or ON UPDATE trigger is the correct solution; in the trigger you can put the new or updated data into a separate table which you can then query. Triggers are the tool for monitoring changes in the database without changing the applications that use it.
Example trigger for 'after update' on table MY_TABLE(id, name):
DELIMITER $$
CREATE TRIGGER mark_changes AFTER UPDATE ON my_table
FOR EACH ROW
BEGIN
    INSERT INTO tracking_table VALUES ('Change in table my_table', OLD.id, NEW.id);
END$$
DELIMITER ;
This assumes you have a table tracking_table(description, old_id, new_id)
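For completeness, a minimal sketch of what that tracking table could look like (the column types are assumptions and should match the id column of the table being tracked):
CREATE TABLE tracking_table (
    description VARCHAR(200),
    old_id      INT,
    new_id      INT
);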


How to Use Trigger to Log Changes in SQL Server?

Given a Users table and a LoginLog table: how can I log Name, Password, and LastLogonTime to the LoginLog table, inserting a row whenever the LastLogonTime column in the Users table is updated?
You need a fairly simple trigger on the update of the Users table. The trickier part is that triggers fire only once per statement - and a single statement could potentially update multiple rows, all of which would then be in your Inserted and Deleted pseudo tables in the trigger.
You need to write your trigger in this set-based manner and handle it correctly. To be able to properly link the old and new values, your Users table must have a proper primary key (you didn't mention anything about that) - something like a UserId or the like.
Try something like this:
CREATE TRIGGER dbo.trg_LogUserLogon
ON dbo.Users
FOR UPDATE
AS
    -- inspect the Inserted (new values, after UPDATE) and Deleted (old values, before UPDATE)
    -- pseudo tables to find out which rows have had an update in the LastLogonTime column
    INSERT INTO dbo.LoginLog (Name, Password, LastLogonTime)
    SELECT
        i.Name, i.Password, i.LastLogonTime
    FROM
        Inserted i
    INNER JOIN
        -- join the two sets of data on the primary key (which you didn't specify)
        -- could be i.UserId = d.UserId or something similar
        Deleted d ON i.PrimaryKey = d.PrimaryKey
    WHERE
        -- only select those rows that have had an update in the LastLogonTime column
        i.LastLogonTime <> d.LastLogonTime
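For example, a multi-row update such as the following (the WHERE condition is purely illustrative) makes the trigger fire once and log one LoginLog row for every user whose LastLogonTime actually changed:
-- hypothetical statement touching several users at once; the trigger handles the whole set
UPDATE dbo.Users
SET LastLogonTime = GETDATE()
WHERE Name IN ('alice', 'bob');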
But please also, by all means, take @Larnu's comments about never storing passwords in plain text into account! That is a horribly bad thing to do and needs to be avoided at all costs.

What's a good logic/design of a SQL script to incrementally update a table?

So there's this table of just about 40,000 rows I am looking to update. A colleague said it's best to incrementally update the table instead of doing a complete delete and load.
So I've tried hashing out the design and logic of a script to do this, but my inexperience is getting to me. I just don't know what's efficient and what's unneeded when incrementally updating a table.
Currently, the warehouse looks like this: data comes from source into a table (let's call this T1) in Teradata. Then it's sent into another table (let's call this T2) in Teradata with some added fields such as timestamp. Lastly, a view is built on that last table for security reasons.
So with that laid out, I was thinking of creating a temp/volatile table with data from T1. This would have all the data up to the time the script is run, including new records. Then, go through the entire table seeing if the ID (primary index) already exists in T2, and if not, add it to another temp table. Then somehow combine the second temp table with T2, overwrite T2, and build a view on top of that.
Does this make any sense?
There's also the possibility of records being updated. So they would already exist in T2, but have updated data in a new version of T1. I think comparing the values of all the columns from T1 to T2 would be highly inefficient, but I can't think of another way to do this.
A 40,000 row delete and insert should be pretty painless for any modern database. Ditto for updates.
The real reason for doing an incremental delete/update/insert is so you can log the changes and timestamp rows in the permanent table with the date/time of insertion and/or last update. The usual technique goes something like this:
remove rows from the permanent table that don't exist in the temp table
update rows that exist in both tables
insert rows that exist in the temp table, but don't exist in the permanent table.
Looking at the Teradata docs, that would be something like this (no warranties about this being syntactically correct, since I don't have a Teradata instance to play with):
delete permanent p
where not exists ( select *
                   from temp t
                   where t.id = p.id );

update p
from permanent p,
     temp t
set ...
where t.id = p.id;

insert permanent
select ...
from temp t
where not exists ( select *
                   from permanent p
                   where p.id = t.id );
One might note that the deletes might get a little hairy if there are dependent foreign key constraints involved.
One might also note that on the update, the where clause might get a tad...complicated if you want to check for actual changes to column values: not much point in updating a row if nothing has changed.
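As a rough sketch of that check (col1 and col2 are placeholder column names, COALESCE is just one way of making NULLs comparable, and the same no-warranty caveat applies):
update p
from permanent p,
     temp t
set col1 = t.col1,
    col2 = t.col2
where t.id = p.id
  -- skip rows where nothing has actually changed
  and ( coalesce(p.col1, -1)  <> coalesce(t.col1, -1)
     or coalesce(p.col2, 'x') <> coalesce(t.col2, 'x') );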
There's a Teradata MERGE command that you might find useful, check this post:
https://forums.teradata.com/forum/database/merge-syntax-simple-version
merge into merge_tmp as t using (select 1 as a,'stf' as b,'uuj' as c) as s
on t.a = s.a
when matched then update set c = s.c
when not matched then insert values (s.a,s.b,s.c);
If you need to match on more columns, simply put an AND in the ON clause.
Edit: If you want to use MERGE you might also need to use a delete statement like the one in nicholas' post.
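Putting the two together, an incremental load could look roughly like this (untested; table and column names reuse the temp/permanent example above):
-- remove rows that no longer exist in the staging data
delete from permanent
where not exists ( select *
                   from temp t
                   where t.id = permanent.id );

-- upsert the rest; add AND conditions to the ON clause for a multi-column key
merge into permanent as p
using temp as t
  on p.id = t.id
when matched then
  update set col1 = t.col1, col2 = t.col2
when not matched then
  insert (id, col1, col2) values (t.id, t.col1, t.col2);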

PL SQL trigger to insert history record when a column is updated

I would like to insert a row into a history table when any column is updated in a table.
I'm just looking to capture the column name, old value and new value.
I'd like this trigger to be as reusable as possible as I'm going to use the same concept on other tables.
I'm familiar with triggers and with how to capture updates on one column. I'm specifically looking for how to write one trigger that inserts a record into a history table for any column that gets updated in the history table's corresponding table.
EDIT 1
I have stated NOWHERE in my post that I'm looking for source code so shame on anyone that downvotes me and thinks that I'm looking for that. You can check my previous questions/answers to see I'm not one looking for "free source code".
As I stated in my original question, I'm looking for how to write this. I've examined http://plsql-tutorial.com/plsql-triggers.htm and there's a code block which shows how to write a trigger for when ONE column is updated. I figured that maybe someone would have the know-how to give direction on having a more generic trigger for the scenario I've presented.
Assuming a regular table rather than an object table, you don't have a whole lot of options. Your trigger would have to be something of the form
CREATE OR REPLACE TRIGGER trigger_name
    AFTER UPDATE ON table_name
    FOR EACH ROW
BEGIN
    IF( UPDATING( 'COLUMN1' ) )
    THEN
        INSERT INTO log_table( column_name, column_value )
        VALUES( 'COLUMN1', :new.column1 );
    END IF;
    IF( UPDATING( 'COLUMN2' ) )
    THEN
        INSERT INTO log_table( column_name, column_value )
        VALUES( 'COLUMN2', :new.column2 );
    END IF;
    <<repeat for all columns>>
END;
You could fetch the COLUMN1, COLUMN2, ... COLUMN<<n>> strings from the data dictionary (USER_TAB_COLS) rather than hard-coding them but you'd still have to hard-code the references to the columns in the :new pseudo-record.
You could potentially write a piece of code that generated the trigger above by querying the data dictionary (USER_TAB_COLS or ALL_TAB_COLS most likely), building a string with the DDL statement, and then doing an EXECUTE IMMEDIATE to execute the DDL statement. You'd then have to call this script any time a new column is added to any table to re-create the trigger for that table. It's tedious but not particularly technically challenging to write and debug this sort of DDL generation code. But it is rarely worthwhile, because someone inevitably adds a new column and forgets to re-run the script, or someone needs to modify a trigger to do some additional work and it's easier to just manually update the trigger than to modify and test the script that generates the triggers.
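A rough, untested sketch of that generation approach, using the same placeholder names as the trigger above:
-- builds the column-by-column trigger body from the data dictionary and creates it
DECLARE
  l_sql VARCHAR2(32767);
BEGIN
  l_sql := 'CREATE OR REPLACE TRIGGER trigger_name' || CHR(10) ||
           'AFTER UPDATE ON table_name' || CHR(10) ||
           'FOR EACH ROW' || CHR(10) ||
           'BEGIN' || CHR(10);
  FOR c IN ( SELECT column_name
             FROM   user_tab_cols
             WHERE  table_name = 'TABLE_NAME'
             ORDER  BY column_id )
  LOOP
    l_sql := l_sql
      || '  IF( UPDATING( ''' || c.column_name || ''' ) ) THEN' || CHR(10)
      || '    INSERT INTO log_table( column_name, column_value )' || CHR(10)
      || '    VALUES( ''' || c.column_name || ''', :new.' || c.column_name || ' );' || CHR(10)
      || '  END IF;' || CHR(10);
  END LOOP;
  l_sql := l_sql || 'END;';
  EXECUTE IMMEDIATE l_sql;
END;
/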
More generally, though, I would question the wisdom of storing data this way. Storing one row in the history table for every column of every row that is modified makes using the history data very challenging. If someone wants to know what state a particular row was in at a particular point in time, you would have to join the history table to itself N times where N is the number of columns in the table at that point in time. That's going to be terribly inefficient which very quickly is going to make people avoid trying to use the history data because they can't do useful stuff with it in a reasonable period of time without tearing their hair out. It's generally much more effective to have a history table with the same set of columns that the live table has (with a few more added for tracking dates and the like) and to insert one row in the history table each time the row is updated. That will consume more space but it is generally much easier to use.
And Oracle has a number of ways to audit data changes: you can AUDIT DML, use fine-grained auditing (FGA), use Workspace Manager, or use Oracle Total Recall. If you are looking for more flexibility than writing your own trigger code, I'd strongly suggest that you investigate these other technologies, which are inherently much more automatic, rather than trying to develop your own architecture.
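For instance, fine-grained auditing of updates can be switched on with a single call (the schema, table, and policy names here are placeholders):
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'APP_OWNER',
    object_name     => 'TABLE_NAME',
    policy_name     => 'AUDIT_TABLE_NAME_UPD',
    statement_types => 'UPDATE' );
END;
/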
You might set up the history table to be the SAME as the main table, plus a date and a type field. You only need to capture the old values, as the new values are in the main table.
try this (untested):
create or replace trigger "MY_TRIGGER"
    before update or delete
    on MY_TABLE referencing new as new old as old
    for each row
declare
    l_dml_type varchar2(10);
begin
    if (updating) then
        l_dml_type := 'UPD';
    else
        l_dml_type := 'DEL';
    end if;

    insert into MY_TABLE_HIST
    (
        col1,
        col2,
        col3,
        dml_type,
        dml_date
    )
    values
    (
        :old.col1,
        :old.col2,
        :old.col3,
        l_dml_type,
        sysdate
    );
end;
/
As a note, depending on your design, if space is a limiting factor, you can create a view over the history table that tracks the changes in the way you were going for and just shows what the record looked like at the time.

Copying contents of table A to table B (one more column than table A)

In our application, we have two sets of tables: one set of working tables (with the data that is currently analyzed) and another set of archive tables (with all data that has ever been analyzed, same table name but with an a_ prefix). The structure of the tables is the same, except that the archive tables have an extra column run_id to distinguish between different sets of data.
Currently, we have a SQL script that copies the contents over with statements similar to this:
insert into a_deals (run_id, deal_id, <more columns>)
select maxrun, deal_id, <more columns>
from deals,
(select max(run_id) maxrun from batch_runs);
This works fine, but whenever we add a new column to the table, we also have to modify the script. Is there a better way to do this that is stable when we add new columns? (Of course the structures have to match, but we'd like not to have to change the script as well.)
FWIW, we're using Oracle as our RDBMS.
Following up on the first answer, you could build a pl/sql procedure which will read all_tab_columns to build the insert statement, then execute immediate. Not too hard, but be careful about what input parameters you allow (table_name and the like) and who can run it since it could provide a great opportunity for SQL Injection.
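A rough, untested sketch of that idea (the procedure name and the a_ prefix convention are assumptions taken from the question; LISTAGG needs Oracle 11gR2 or later, and DBMS_ASSERT is used to reduce the injection risk mentioned above):
CREATE OR REPLACE PROCEDURE copy_to_archive( p_table IN VARCHAR2 ) AS
  l_cols VARCHAR2(32767);
BEGIN
  -- build the shared column list from the data dictionary
  SELECT LISTAGG(column_name, ', ') WITHIN GROUP (ORDER BY column_id)
    INTO l_cols
    FROM user_tab_columns
   WHERE table_name = UPPER(p_table);

  EXECUTE IMMEDIATE
       'insert into a_' || DBMS_ASSERT.SIMPLE_SQL_NAME(p_table)
    || ' (run_id, ' || l_cols || ')'
    || ' select (select max(run_id) from batch_runs), ' || l_cols
    || ' from ' || DBMS_ASSERT.SIMPLE_SQL_NAME(p_table);
END;
/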
If the 2 tables have the SAME columns in the same order (column_id from all_tab_columns) except for this run_id in front, then you can do something like:
insert into a_deals
select (select max(run_id) from batch_runs), d.*
from deals d
where ...;
This is a lazy approach IMO, and you'll want to ensure that the columns are in the same position in both tables as part of this script (inspect all_tab_columns). Two varchar2 fields that are switched will lead to data being inserted into the wrong fields.

T-SQL Trigger After Specific Insertion

I need to write a trigger that only runs when an inserted row has a certain field with a specific value. There is no updating involved.
For example, if I insert an entry that has column X with the value "MAC ID", I want it to run a trigger, but ONLY if the value is "MAC ID".
I know how to test for this normally, but not sure how to test this in a trigger. So far I've found that I want to use "FOR INSERT", but beyond that I don't know how to implement it.
Any help is much appreciated! Thanks!
create trigger MacIdInsert
on YourTable
after insert
as
    if exists
    (
        select *
        from inserted
        where ColumnX = 'MAC ID'
    )
    begin
        -- do what you want here if ColumnX has value of 'MAC ID'
    end
go
There's no way to fire a trigger only on certain DML specifications (besides insert, update, and/or delete). Your best bet is to examine the inserted pseudo table, which contains the records being inserted into YourTable. In it, you can test for inserted records that have a ColumnX value of "MAC ID".
Edit: In case you were wondering, I know you specified a for trigger in your question. A for trigger is equivalent to an after trigger in SQL Server.
You need to be aware that triggers run once for a batch, not once per row. As such, you may be running in a circumstance in which some of the rows match your criteria, and others do not.
As such, you'd do best to write your trigger logic to select the matching rows from inserted. If there were no matching rows, the rest of your logic will be working with an empty set, but that's no real problem - SQL Server is plenty fast enough at doing nothing, when required.
E.g. if you're inserting these rows into an audit table, something like:
create trigger MacId
on T
for insert
as
    insert into Audit(Col1, Col2, Col3)
    select i.Col1, i.Col2, 'inserted'
    from inserted i
    where i.Col4 = 'MAC ID'