I have a table with the columns businessname, sortcode and accountnumber all populated, and name, nationality and DOB all currently unpopulated. I need to create a trigger that sends every update to an audit table when any of the null fields are updated, so if I change just the name I'll get a timestamp, userid, the field changed, the old value and the new value. If I changed all 3 null fields I'd like to send 3 rows to the audit table.
Can someone give me a pointer on the logic of this please?
In a very rudimentary format for testing I've got
CREATE TRIGGER RM_UPDATE_TRIGGER ON RM_BASE
AFTER UPDATE
AS
INSERT INTO RM_AUDITLOG
SELECT CURRENT_TIMESTAMP, SRT_CD
FROM RM_BASE
but this is sending all the current rows across after an UPDATE to any of them; I only want the row that has been affected. I'm not sure if I should be building more tables to join together to get the final answer or using the INSERTED/DELETED tables. I've seen an audit table in this format in previous roles so I know it works, but I can't figure it out!
Thanks
Yes, you need to use the INSERTED and/or DELETED pseudo-tables as those contain only the rows that have been modified. As in:
CREATE TRIGGER RM_UPDATE_TRIGGER ON RM_BASE
AFTER UPDATE
AS
INSERT INTO RM_AUDITLOG
SELECT CURRENT_TIMESTAMP, SRT_CD
FROM INSERTED
The INSERTED table has the "new" or "current" version of each row while the DELETED table has the "old" version that has been replaced via the UPDATE operation. This is the case for all versions of SQL Server, at least going back as far as SQL Server 2000.
In order to track the change itself (both "old" and "new" values), then you need to JOIN those two pseudo-tables, as in:
INSERT INTO RM_AUDITLOG
SELECT CURRENT_TIMESTAMP, ins.SRT_CD AS [SRT_CD_new], del.SRT_CD AS [SRT_CD_old]
FROM INSERTED ins
INNER JOIN DELETED del
ON del.PKfield = ins.PKfield
This is the basic operation for capturing changes (unless, of course, you use Change Data Capture) in a DML trigger.
If you want to unpivot this data such that each set of "old" and "new" columns becomes a row, that should be easily adaptable from the above. In that case, you could also add WHERE ISNULL(ins.column, '~~~~') <> ISNULL(del.column, '~~~~') COLLATE Latin1_General_BIN to avoid capturing fields that have not changed. The COLLATE ensures case-sensitive / accent-sensitive / etc comparisons.
Of course, unpivoting makes it really hard to reconstruct the entire row, as you are then required to keep the full history forever: you would need to start with the base values for all fields and apply the changes incrementally. The typical audit scenario is just to capture a row that has fields for both the old and new value of each source field (like I have already shown). If your audit table looks more like:
PKfield, DateModified, businessname_old, businessname_new, sortcode_old, sortcode_new
then you can write a query to identify which fields actually changed by comparing each set (given that more than 1 field can change in the same UPDATE operation), something like:
SELECT PKfield,
DateModified,
CASE
WHEN ISNULL(businessname_old, '~~~~') <> ISNULL(businessname_new, '~~~~')
COLLATE Latin1_General_BIN THEN 'BusinessName ' ELSE ''
END +
CASE
WHEN ISNULL(sortcode_old, '~~~~') <> ISNULL(sortcode_new, '~~~~')
COLLATE Latin1_General_BIN THEN 'SortCode ' ELSE ''
END AS [FieldsChanged]
FROM AuditTable
ORDER BY DateModified DESC;
BUT, if you really want to unpivot the data to have one row per actual changed field, then the following structure should work:
;WITH ins AS
(
SELECT PKfield, FieldName, Value
FROM (
SELECT PKfield, businessname, sortcode, accountnumber, name,
nationality, DOB
FROM INSERTED
) cols
UNPIVOT (Value FOR FieldName IN
(businessname, sortcode, accountnumber, name, nationality, DOB)
) colvals
), del AS
(
SELECT PKfield, FieldName, Value
FROM (
SELECT PKfield, businessname, sortcode, accountnumber, name,
nationality, DOB
FROM DELETED
) cols
UNPIVOT (Value FOR FieldName IN
(businessname, sortcode, accountnumber, name, nationality, DOB)
) colvals
)
INSERT INTO AuditTable (PKfield, DateModified, FieldName, OldValue, NewValue)
SELECT ins.PKfield, CURRENT_TIMESTAMP, ins.FieldName, del.Value, ins.Value
FROM ins
INNER JOIN del
ON del.PKfield = ins.PKfield
AND del.FieldName = ins.FieldName
WHERE ISNULL(del.Value, '~~~~') <>
ISNULL(ins.Value, '~~~~') COLLATE Latin1_General_BIN;
You might need to add a CONVERT(VARCHAR(1000), field) to the WHERE condition if DOB is a DATE or DATETIME field, or if SRT_CD is an INT or other type of number field:
WHERE ISNULL(CONVERT(VARCHAR(1000), del.Value), '~~~~') <>
ISNULL(CONVERT(VARCHAR(1000), ins.Value), '~~~~')
COLLATE Latin1_General_BIN;
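Putting it all together, the whole statement sits inside the trigger body. Here is a sketch only, carrying over the assumed names from above (RM_BASE, AuditTable, PKfield) and adding two assumptions of my own: SUSER_SNAME() to supply the userid the asker wanted, and a NULL-sentinel workaround, because UNPIVOT silently drops NULL values (which matters here, since the asker's fields start out NULL):

```sql
-- Sketch: table, column and PK names are assumptions from the discussion above.
-- UNPIVOT drops NULLs, so NULLs are mapped to a sentinel ('~~~~') before
-- unpivoting and mapped back to NULL on the way into the audit table.
CREATE TRIGGER RM_UPDATE_TRIGGER ON RM_BASE
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    ;WITH ins AS
    (
        SELECT PKfield, FieldName, Value
        FROM (SELECT PKfield,
                     ISNULL(CONVERT(VARCHAR(1000), businessname), '~~~~') AS businessname,
                     ISNULL(CONVERT(VARCHAR(1000), sortcode), '~~~~')     AS sortcode,
                     ISNULL(CONVERT(VARCHAR(1000), name), '~~~~')         AS name
              FROM INSERTED) cols
        UNPIVOT (Value FOR FieldName IN (businessname, sortcode, name)) colvals
    ), del AS
    (
        SELECT PKfield, FieldName, Value
        FROM (SELECT PKfield,
                     ISNULL(CONVERT(VARCHAR(1000), businessname), '~~~~') AS businessname,
                     ISNULL(CONVERT(VARCHAR(1000), sortcode), '~~~~')     AS sortcode,
                     ISNULL(CONVERT(VARCHAR(1000), name), '~~~~')         AS name
              FROM DELETED) cols
        UNPIVOT (Value FOR FieldName IN (businessname, sortcode, name)) colvals
    )
    INSERT INTO AuditTable (PKfield, DateModified, UserID, FieldName, OldValue, NewValue)
    SELECT ins.PKfield, CURRENT_TIMESTAMP, SUSER_SNAME(), ins.FieldName,
           NULLIF(del.Value, '~~~~'), NULLIF(ins.Value, '~~~~')
    FROM ins
    INNER JOIN del
            ON del.PKfield = ins.PKfield
           AND del.FieldName = ins.FieldName
    WHERE del.Value <> ins.Value COLLATE Latin1_General_BIN;
END;
```

Note that the CONVERT inside the derived tables also satisfies UNPIVOT's requirement that all unpivoted columns share one data type, and the whole approach depends on PKfield uniquely identifying a row so that INSERTED and DELETED can be joined.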
I am writing some sql in Postgres to update an audit table. My sql will update the table being audited based on some criteria and then select that updated record to update information in an audit table. This is what I have so far:
DO $$
DECLARE
jsonValue json;
revId int;
template RECORD;
BEGIN
jsonValue = '...Some JSON...'
UPDATE projectTemplate set json = jsonValue where type='InstallationProject' AND account_id IS NULL;
template := (SELECT pt FROM ProjectTemplate pt WHERE pt.type='InstallationProject' AND pt.account_id IS NULL);
IF EXISTS (template) THEN
(
revId := nextval('hibernate_sequence');
insert into revisionentity (id, timestamp) values(revId, extract(epoch from CURRENT_TIMESTAMP));
insert into projectTemplate_aud (rev, revtype, id, name, type, category, validfrom, json, account_id)
VALUES (revId, 1, template.id, template.name, template.type, template.category, template.validfrom, jsonValue, template.account_id);
)
END $$;
My understanding is that template will be undefined if there is nothing in the table that matches that query (and there isn't currently). I want to make it so my query will not attempt to update the audit table if template doesn't exist.
What can I do to update this sql to match what I am trying to do?
You cannot use EXISTS like that, it expects a subquery expression. Plus some other issues with your code.
This single SQL DML statement with data-modifying CTEs should replace your DO command properly. And faster, too:
WITH upd AS (
UPDATE ProjectTemplate
SET json = '...Some JSON...'
WHERE type = 'InstallationProject'
AND account_id IS NULL
RETURNING *
)
, ins AS (
INSERT INTO revisionentity (id, timestamp)
SELECT nextval('hibernate_sequence'), extract(epoch FROM CURRENT_TIMESTAMP)
WHERE EXISTS (SELECT FROM upd) -- minimal valid EXISTS expression!
RETURNING id
)
INSERT INTO ProjectTemplate_aud
(rev , revtype, id, name, type, category, validfrom, json, account_id)
SELECT i.id, 1 , u.id, u.name, u.type, u.category, u.validfrom, u.json, u.account_id
FROM upd u, ins i;
Inserts a single row into revisionentity if the UPDATE found any rows.
Inserts as many rows into projectTemplate_aud as rows were updated.
About data-modifying CTEs:
Insert data in 3 tables at a time using Postgres
Aside: I see a mix of CaMeL-case, some underscores, or just lowercased names. Consider legal, lower-case names exclusively (and avoid basic type names as column names). Most importantly, though, be consistent. Related:
Are PostgreSQL column names case-sensitive?
Misnamed field in subquery leads to join
I have a requirement to keep a history of changes made to a specific table when there is an UPDATE called, but only care about specific columns.
So, I have created a History table:
CREATE TABLE [dbo].[SourceTable_History](
[SourceTable_HistoryID] [int] IDENTITY(1,1) NOT NULL,
[SourceTableID] [int] NOT NULL,
[EventDate] [date] NOT NULL,
[EventUser] [date] NOT NULL,
[ChangedColumn] VARCHAR(50) NOT NULL,
[PreviousValue] VARCHAR(100) NULL,
[NewValue] VARCHAR(100) NULL
CONSTRAINT pk_SourceTable_History PRIMARY KEY ([SourceTable_HistoryID]),
CONSTRAINT fk_SourceTable_HistoryID_History_Source FOREIGN KEY ([SourceTableID]) REFERENCES SourceTable (SourceTableId)
)
And my plan is to create an UPDATE trigger on the SourceTable. The business only cares about changes to certain columns, so, in pseudo-code, I was planning to do something like
If source.Start <> new.start
Insert into history (PrimaryKey, EventDate, EventUser, ColumnName, OldValue, NewValue)
(PK, GETDATE(), updateuser, "StartDate", old.value, new.value)
And there would be a block like that per column we want history on.
We're NOT allowed to use CDC, so we have to roll our own, and this is my plan so far.
Does this seem a suitable plan?
There are 7 tables we need to monitor, with a column count of between 2 and 5 columns per table.
I just need to work out how to get a trigger to first compare the before and after values of a specific column, and then write a new row.
I thought it was something as simple as:
CREATE TRIGGER tr_PersonInCareSupportNeeds_History
ON PersonInCareSupportNeeds
FOR UPDATE
AS
BEGIN
IF(inserted.StartDate <> deleted.StartDate)
BEGIN
INSERT INTO [dbo].[PersonInCareSupportNeeds_History]
([PersonInCareSupportNeedsID], [EventDate], [EventUser], [ChangedColumn], [PreviousValue], [NewValue])
VALUES
(inserted.[PersonInCareSupportNeedsID], GETDATE(), [LastUpdateUser], 'StartDate', deleted.[StartDate], deleted.[StartDate])
END
END
We have a trigger-based auditing system and we basically created it by analyzing how a third-party tool for generating audit triggers, ApexSQL Audit, creates triggers and manages storage, and then developed our own system based on that.
I think your solution is generally ok but that you need to think about modifying storage a bit and plan for scaling.
What if business decides to keep track of all columns in all tables? What if they decide to track inserts and deletes as well? Will your solution be able to accommodate this?
Storage: Use two tables to hold your data. One table for holding all info about transactions (when, who, application name, table name, schema name, affected rows, etc… ). And another table to hold the actual data (before and after values, primary key, etc..).
Triggers: We ended up with a template for insert, update and delete triggers and very simple C# app where we enter tables and columns so application outputs DDL. This saved us a lot of time.
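As a rough illustration of that two-table storage layout (all names and types below are made up for the sketch; this is not the actual ApexSQL schema), the transaction header and the per-column detail could look like:

```sql
-- Hypothetical layout: one header row per audited statement...
CREATE TABLE AuditTransaction (
    AuditTransactionID INT IDENTITY(1,1) PRIMARY KEY,
    EventDate       DATETIME      NOT NULL DEFAULT GETDATE(),
    EventUser       NVARCHAR(128) NOT NULL DEFAULT SUSER_SNAME(),
    ApplicationName NVARCHAR(128) NULL,
    SchemaName      NVARCHAR(128) NOT NULL,
    TableName       NVARCHAR(128) NOT NULL,
    Operation       CHAR(1)       NOT NULL,  -- 'I', 'U' or 'D'
    AffectedRows    INT           NOT NULL
);

-- ...and one detail row per changed column per affected row.
CREATE TABLE AuditDetail (
    AuditDetailID      INT IDENTITY(1,1) PRIMARY KEY,
    AuditTransactionID INT NOT NULL
        REFERENCES AuditTransaction (AuditTransactionID),
    PrimaryKeyValue NVARCHAR(100) NOT NULL,
    ColumnName      NVARCHAR(128) NOT NULL,
    OldValue        NVARCHAR(MAX) NULL,
    NewValue        NVARCHAR(MAX) NULL
);
```

Because the table and column names are data rather than schema, the same two tables absorb new audited tables, new columns, and INSERT/DELETE tracking without further DDL.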
Depending on your requirements, I think history tables should mirror the table you want to capture, plus the extra audit details (who, when, why).
That can make it easier to use the same existing logic (sql, data classes, screens etc) to view historical data.
With your design getting the data in will be ok, but how easy will it be to pull the data out in a usable format?
Well, I think your idea is not so bad. Actually, I have a similar system in production. I will not give you my complete code (with asynchronous history saving), but I can give you some guidelines.
The main idea is to turn your data from the relational model into the Entity-Attribute-Value model. Also, we want our triggers to be as generic as we can, which means: do not write column names explicitly. This can be done in different ways, but the most general one I know of in SQL Server is to use FOR XML and then select from the xml:
declare @Data xml
select @Data = (select * from Test for xml raw('Data'))
select
T.C.value('../@ID', 'bigint') as ID,
T.C.value('local-name(.)', 'nvarchar(128)') as Name,
T.C.value('.', 'nvarchar(max)') as Value
from @Data.nodes('Data/@*') as T(C)
To get the rows that differ between two tables, you could use EXCEPT (parenthesised so the set operators combine as intended):
(select * from Test1 except select * from Test2)
union all
(select * from Test2 except select * from Test1)
and, finally, your trigger could be something like this:
create trigger utr_Test_History on Test
after update
as
begin
declare @Data_Inserted xml, @Data_Deleted xml
select @Data_Inserted =
(
select *
from (select * from inserted except select * from deleted) as a
for xml raw('Data')
)
select @Data_Deleted =
(
select *
from (select * from deleted except select * from inserted) as a
for xml raw('Data')
)
;with CTE_Inserted as (
select
T.C.value('../@ID', 'bigint') as ID,
T.C.value('local-name(.)', 'nvarchar(128)') as Name,
T.C.value('.', 'nvarchar(max)') as Value
from @Data_Inserted.nodes('Data/@*') as T(C)
), CTE_Deleted as (
select
T.C.value('../@ID', 'bigint') as ID,
T.C.value('local-name(.)', 'nvarchar(128)') as Name,
T.C.value('.', 'nvarchar(max)') as Value
from @Data_Deleted.nodes('Data/@*') as T(C)
)
)
insert into History (Table_Name, Record_ID, Event_Date, Event_User, Column_Name, Value_Old, Value_New)
select 'Test', isnull(I.ID, D.ID), getdate(), system_user, isnull(D.Name, I.Name), D.Value, I.Value
from CTE_Inserted as I
full outer join CTE_Deleted as D on D.ID = I.ID and D.Name = I.Name
where
not
(
I.Value is null and D.Value is null or
I.Value is not null and D.Value is not null and I.Value = D.Value
)
end
I have one very large, un-normalized table which I am in the process of fixing. From that large table I'm normalizing the data. I used the SQL statement
INSERT INTO smallTable(patientID, admissionDate, dischargeDate)
select distinct patientID, admissionDate, dischargeDate
FROM largeTable
So my smallTable is populated with the correct number of rows. There's another column, drgCode that I want to add to my smallTable. I tried the following query to do that
INSERT INTO smallTable(drgCode)
select drgCode from
(
SELECT DISTINCT patientID, admissionDate, dischargeDate, drgCode from largeTable) as t
The error I was given reads: "Cannot insert the value NULL into patientID, column does not allow nulls, insert fails."
The only way that the drgCode will be chosen correctly is if some variant of the select distinct query is used. How can I insert only one field, when the other fields must be included to narrow down the search.
I know I could do this if I emptied out my smallTable, but I figured there's gotta be a way around it.
with drg as (SELECT DISTINCT patientID, admissionDate, dischargeDate, drgCode from largeTable)
update s
set s.drgCode = l.drgCode
from smallTable s join drg l on
s.patientId = l.patientId and
s.admissionDate = l.admissionDate and
s.dischargeDate = l.dischargeDate
As per my understanding, if "PatientID" is unique in both tables, you can do something like below.
Update S
SET S.drgCode = L.drgCode
FROM
SmallTable S
INNER JOIN
LargeTable T
ON S.PatientID = T.PatientID
Hope this Helps!!
When you perform an insert into a table, any columns not specified in the query are populated with the default value for the column. If there is no default value on the column, NULL will be used. You received that particular error message because your column does not allow NULL and does not have a default.
Given your reply to Praveen, perhaps you should be further normalizing and put the drgCodes into a separate table.
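For example (hypothetical names here; adjust to your actual schema), the codes could live in their own table and be referenced by key instead of being stored inline:

```sql
-- Hypothetical normalization: one row per distinct code
CREATE TABLE drgCodes (
    drgCodeID INT IDENTITY(1,1) PRIMARY KEY,
    drgCode   VARCHAR(10) NOT NULL UNIQUE
);

-- smallTable then references the code by key
ALTER TABLE smallTable ADD drgCodeID INT NULL
    REFERENCES drgCodes (drgCodeID);
```

The backfill would then be an UPDATE joining smallTable to largeTable (as in the answers above) to set drgCodeID, rather than a second INSERT.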
When using this statement
create table demo (
ts timestamp
)
insert into demo select current_timestamp
I get the following error:
Cannot insert an explicit value into a timestamp column. Use INSERT with a column list to exclude the timestamp column, or insert a DEFAULT into the timestamp column
How do I insert the current time to a timestamp column?
According to MSDN, timestamp
Is a data type that exposes automatically generated, unique binary
numbers within a database. timestamp is generally used as a mechanism
for version-stamping table rows. The storage size is 8 bytes. The
timestamp data type is just an incrementing number and does not
preserve a date or a time. To record a date or time, use a datetime
data type.
You're probably looking for the datetime data type instead.
If you have a need to copy the exact same timestamp data, change the data type in the destination table from timestamp to binary(8). I used varbinary(8) and it worked fine.
This obviously breaks any timestamp functionality in the destination table, so make sure you're ok with that first.
You can't insert the values into timestamp column explicitly. It is auto-generated. Do not use this column in your insert statement. Refer http://msdn.microsoft.com/en-us/library/ms182776(SQL.90).aspx for more details.
You could use a datetime instead of a timestamp like this:
create table demo (
ts datetime
)
insert into demo select current_timestamp
select ts from demo
Returns:
2014-04-04 09:20:01.153
How to insert current time into a timestamp with SQL Server:
In newer versions of SQL Server, timestamp is renamed to rowversion. Rightly so, because the name timestamp is misleading.
SQL Server's timestamp IS NOT set by the user and does not represent a date or a time. timestamp is only good for making sure a row hasn't changed since it was read.
If you want to store a date or a time, do not use timestamp; you must use one of the other datatypes, for example datetime, smalldatetime, date, time or datetime2.
For example:
create table foo (
id INT,
leet timestamp
)
insert into foo (id) values (15)
select * from foo
15 0x00000000000007D3
'timestamp' in MSSQL is some kind of internal datatype. Casting that number to datetime produces a meaningless value, not a real date.
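A quick sketch (the table here is made up for the demo) of why the value is not a date: SQL Server maintains the column itself, and every write simply bumps the row's version counter.

```sql
-- Demo table; rv is maintained entirely by SQL Server and cannot be set explicitly
CREATE TABLE RvDemo (
    id  INT PRIMARY KEY,
    val VARCHAR(10),
    rv  ROWVERSION
);

INSERT INTO RvDemo (id, val) VALUES (1, 'a');
SELECT rv FROM RvDemo WHERE id = 1;  -- some binary counter, no date in it

UPDATE RvDemo SET val = 'b' WHERE id = 1;
SELECT rv FROM RvDemo WHERE id = 1;  -- a new, larger counter value
```

Comparing the two SELECTs shows the value changed even though no date or time was ever stored, which is exactly what makes it useful for optimistic concurrency checks and useless as a timestamp.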
Assume Table1 and Table2 have three columns A, B and TimeStamp. I want to insert from Table1 into Table2.
This fails with the timestamp error:
Insert Into Table2
Select Table1.A, Table1.B, Table1.TimeStamp From Table1
This works:
Insert Into Table2
Select Table1.A, Table1.B, null From Table1
There is some good information in these answers. Suppose you are dealing with databases which you can't alter, and that you are copying data from one version of the table to another, or from the same table in one database to another. Suppose also that there are lots of columns, and you either need data from all the columns, or the columns which you don't need don't have default values. You need to write a query with all the column names.
Here is a query which returns all the non-timestamp column names for a table, which you can cut and paste into your insert query. FYI: 189 is the type ID for timestamp.
declare @TableName nvarchar(50) = 'Product';
select stuff(
(select
', ' + columns.name
from
(select id from sysobjects where xtype = 'U' and name = @TableName) tables
inner join syscolumns columns on tables.id = columns.id
where columns.xtype <> 189
for xml path('')), 1, 2, '')
Just change the name of the table at the top from 'Product' to your table name. The query will return a list of column names:
ProductID, Name, ProductNumber, MakeFlag, FinishedGoodsFlag, Color, SafetyStockLevel, ReorderPoint, StandardCost, ListPrice, Size, SizeUnitMeasureCode, WeightUnitMeasureCode, Weight, DaysToManufacture, ProductLine, Class, Style, ProductSubcategoryID, ProductModelID, SellStartDate, SellEndDate, DiscontinuedDate, rowguid, ModifiedDate
If you are copying data from one database (DB1) to another database(DB2) you could use this query.
insert DB2.dbo.Product (ProductID, Name, ProductNumber, MakeFlag, FinishedGoodsFlag, Color, SafetyStockLevel, ReorderPoint, StandardCost, ListPrice, Size, SizeUnitMeasureCode, WeightUnitMeasureCode, Weight, DaysToManufacture, ProductLine, Class, Style, ProductSubcategoryID, ProductModelID, SellStartDate, SellEndDate, DiscontinuedDate, rowguid, ModifiedDate)
select ProductID, Name, ProductNumber, MakeFlag, FinishedGoodsFlag, Color, SafetyStockLevel, ReorderPoint, StandardCost, ListPrice, Size, SizeUnitMeasureCode, WeightUnitMeasureCode, Weight, DaysToManufacture, ProductLine, Class, Style, ProductSubcategoryID, ProductModelID, SellStartDate, SellEndDate, DiscontinuedDate, rowguid, ModifiedDate
from DB1.dbo.Product
create table demo (
id int,
ts timestamp
)
insert into demo(id,ts)
values (1, DEFAULT)
If I create a VIEW using this pivot table query, it isn't editable. The cells are read-only and give me the SQL2005 error: "No row was updated. The data in row 2 was not committed. Update or insert of view or function 'VIEWNAME' failed because it contains a derived or constant field."
Any ideas on how this could be solved OR is a pivot like this just never editable?
SELECT n_id,
MAX(CASE field WHEN 'fId' THEN c_metadata_value ELSE ' ' END) AS fId,
MAX(CASE field WHEN 'sID' THEN c_metadata_value ELSE ' ' END) AS sID,
MAX(CASE field WHEN 'NUMBER' THEN c_metadata_value ELSE ' ' END) AS NUMBER
FROM metadata
GROUP BY n_id
Assuming you have a unique constraint on n_id, field which means that at most one row can match you can (in theory at least) use an INSTEAD OF trigger.
This would be easier with MERGE (but that is not available until SQL Server 2008) as you need to cover UPDATES of existing data, INSERTS (Where a NULL value is set to a NON NULL one) and DELETES where a NON NULL value is set to NULL.
One thing you would need to consider here is how to cope with UPDATEs that set all of the columns in a row to NULL. I did this while testing the code below and was quite confused for a minute or two until I realised that it had deleted all the rows in the base table for that n_id (which meant the operation was not reversible via another UPDATE statement). This issue could be avoided by having the VIEW definition OUTER JOIN onto whatever table n_id is the PK of.
An example of the type of thing is below. You would also need to consider potential race conditions in the INSERT/DELETE code indicated and whether you need some additional locking hints in there.
CREATE TRIGGER trig
ON pivoted
INSTEAD OF UPDATE
AS
BEGIN
SET nocount ON;
DECLARE @unpivoted TABLE (
n_id INT,
field VARCHAR(10),
c_metadata_value VARCHAR(10))
INSERT INTO @unpivoted (n_id, field, c_metadata_value)
SELECT n_id, field, c_metadata_value
FROM inserted UNPIVOT (c_metadata_value FOR field IN (fid, sid, NUMBER) ) AS unpvt
WHERE c_metadata_value IS NOT NULL
UPDATE m
SET m.c_metadata_value = u.c_metadata_value
FROM metadata m
JOIN @unpivoted u
ON u.n_id = m.n_id
AND u.field = m.field;
/*You need to consider race conditions below*/
DELETE FROM metadata
WHERE NOT EXISTS(SELECT *
FROM @unpivoted u
WHERE metadata.n_id = u.n_id
AND u.field = metadata.field)
INSERT INTO metadata
SELECT u.n_id,
u.field,
u.c_metadata_value
FROM @unpivoted u
WHERE NOT EXISTS (SELECT *
FROM metadata m
WHERE m.n_id = u.n_id
AND u.field = m.field)
END
You'll have to create a trigger on the view, because a direct update is not possible:
CREATE TRIGGER TrMyViewUpdate on MyView
INSTEAD OF UPDATE
AS
BEGIN
SET NOCOUNT ON;
UPDATE MyTable
SET ...
FROM INSERTED...
END