T-SQL update trigger: joining inserted and deleted

I have an ON UPDATE trigger in SQL Server 2008. I only need to perform the trigger action if certain columns have been modified, so I'd like to check what has changed.
T-SQL offers an "IF UPDATE(columnName)" construct. However, if many rows have been updated and only a single one of them has the particular column value changed, "IF UPDATE()" will still return true. This will make me perform the trigger action for far more rows than required.
So instead of using "IF UPDATE()" I thought I'd just join the virtual deleted and inserted tables (the rows before and after the update) and compare the relevant columns myself. But how can I join the two tables? I cannot use the table's primary key, since that may itself have been modified by the update. The only thing I can think of is joining by ROW_NUMBER(), i.e. implicit table ordering. This feels very wrong, though, and I don't know whether SQL Server actually offers any guarantee that rows in inserted are ordered the same as in deleted.

With your design (which allows changing primary keys) it seems very hard to build consistent logic.
Say you have this table:
id value
1 2
2 1
and issue this operation:
UPDATE mytable
SET id = CASE WHEN id = 1 THEN 2 ELSE 1 END,
value = CASE WHEN value = 1 THEN 2 ELSE 1 END
which updates both records but leaves the table looking like this:
id value
2 1
1 2
which, from a relational point of view, is the same as not changing the table at all.
The whole point of primary keys is that they never change.

If you use IDENTITY columns as Primary Keys, you don't have the problem of updated PK columns.

To prevent a PK from changing, add this to the top of your trigger:
IF (UPDATE(yourPKcol1) OR UPDATE(yourPKcol2))
BEGIN
    RAISERROR('Primary Key change not permitted!', 16, 1)
    ROLLBACK
    RETURN
END

Your best bet might be to (as I mentioned in a comment above) create a new table, if possible, that includes all the data in the original but also adds an immutable primary key (IDENTITY works, or you can use something else if you prefer). You can then expose a view of this new table that mimics the name and schema of the original table. This gives you a fixed ID that you can use to track changes as you wish.
All this assumes that a view works adequately in your particular case -- if the app is doing something very weird it might not work properly, but if it's just throwing standard CRUD-style SQL at it, it should work fine.
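A minimal sketch of that idea (all names here are invented for illustration): the real table carries the immutable surrogate key, and the app keeps talking to a view with the original name.
CREATE TABLE dbo.MyTable_Base
(
    RowId INT IDENTITY PRIMARY KEY,  -- immutable key for change tracking
    Id    INT NOT NULL,              -- the original, mutable "primary key"
    Value INT NOT NULL
);
GO
CREATE VIEW dbo.MyTable AS
    SELECT Id, Value FROM dbo.MyTable_Base;
Simple single-table views like this are updatable in SQL Server, so standard CRUD statements against dbo.MyTable keep working, and triggers on dbo.MyTable_Base can join inserted and deleted on RowId.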

Your trade-off is simplicity and maintainability vs. performance.
If performance is not a priority, use IF UPDATE(YourTriggerActionColumn) directly.
If performance is a priority, then first check IF UPDATE(PrimaryKeyColumn): if the primary key didn't change, use the inserted-deleted join; if the primary key did change, fall back to checking IF UPDATE(YourTriggerActionColumn). A sketch follows below.
Since PKs don't often change, most of the time the inserted-deleted join method will be used, which solves your performance problem.
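Something like this minimal sketch, assuming a table dbo.MyTable with PK column Id and a watched column WatchedColumn (all placeholder names):
CREATE TRIGGER trg_MyTable_Update ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF UPDATE(Id)
    BEGIN
        -- PK may have changed: the inserted-deleted join is unreliable,
        -- so fall back to the coarse IF UPDATE() check for all rows.
        IF UPDATE(WatchedColumn)
            PRINT 'perform the trigger action for all rows in inserted';
    END
    ELSE
    BEGIN
        -- PK unchanged: act only on rows whose watched column really changed.
        SELECT i.Id
        FROM inserted AS i
        JOIN deleted AS d ON d.Id = i.Id
        WHERE i.WatchedColumn <> d.WatchedColumn
           OR (i.WatchedColumn IS NULL AND d.WatchedColumn IS NOT NULL)
           OR (i.WatchedColumn IS NOT NULL AND d.WatchedColumn IS NULL);
        -- ...perform the trigger action for the rows returned above...
    END
END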
little late but just my 2 cents :)

Related

Informix select trigger to update column

Is it possible to increase the value of a number in a column with a trigger every time it gets selected? We have special tables where we store the new id, and when we update it in the app, we tend to get conflicts before the update happens, even though it all takes less than a second. So I was wondering: is it not possible to set the database to increase the value after every select on that column? Do not ask me why we do not use autoincrement for ids, because I do not know.
Informix provides the SERIAL and BIGSERIAL types (and also SERIAL8, but don't use that), which provide autoincrement support. It also provides sequences, with more sophisticated autoincrement behaviour. You should aim to use one of those.
Trying to use a SELECT trigger to update the table being selected from is, at best, fraught with problems about transactions and the like (problems which both the types and sequences carefully avoid).
If your design team needs help making effective use of these, ask a new question outlining what you want to achieve.
Normally, the correct way to proceed is to make the ID column in each table that defines 'something' (the Orders table, the Customer table, …) into a SERIAL column and either not insert a value into the ID column or insert 0 into it. The generated value can be retrieved and used when creating auxiliary information — order items, etc.
Note that you could think about using:
CREATE TABLE xyz_sequence
(
xyz SERIAL NOT NULL PRIMARY KEY
);
and using:
INSERT INTO xyz_sequence VALUES(0);
and then retrieving the inserted value — in Informix ESQL/C you'd use sqlca.sqlerrd[1]; other languages offer other techniques. You can also delete the newly inserted record, or even all the records in the table. You can afford to ignore errors from the DELETE statement; sooner or later, the rows will be deleted. The next value inserted will continue where the prior ones left off.
In a stored procedure, you'd use DBINFO('sqlca.sqlerrd1') to get the inserted value. You'd use DBINFO('bigserial') to get the value if you use a BIGSERIAL type.
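Putting that together, a minimal stored-procedure sketch in Informix SPL (the procedure name is invented; it leans on the xyz_sequence table above):
CREATE PROCEDURE next_xyz() RETURNING INT;
    DEFINE new_id INT;
    INSERT INTO xyz_sequence VALUES (0);
    LET new_id = DBINFO('sqlca.sqlerrd1');
    DELETE FROM xyz_sequence;  -- clean-up; leftover rows are harmless (see note above)
    RETURN new_id;
END PROCEDURE;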
I found a possible answer in this question: update with return value. Instead of doing it with a select, it seems better to return the value directly from the update; since an update takes locks, it should be safer even in a multithreaded application. But these are just my assumptions. Hopefully it will help someone.

Insert & Delete from SQL best practice

I have a database with 2 tables: CurrentTickets and ClosedTickets. When a user creates a ticket via the web application, a new row is created. When the user closes a ticket, the row from CurrentTickets is inserted into ClosedTickets and then deleted from CurrentTickets. If a user reopens a ticket, the same thing happens, only in reverse.
The catch is that one of the columns being copied back to CurrentTickets is the PK column (TicketID), which has IDENTITY set to ON.
I know I can set IDENTITY_INSERT to ON, but as I understand it, this is generally frowned upon. I'm assuming that my database is a bit poorly designed. Is there a way for me to accomplish what I need without using IDENTITY_INSERT? How would I keep the TicketID column autoincremented without making it an identity column? I figure I could add another column RowID and make that the PK, but I still want the TicketID column to autoincrement if possible while not being an identity column.
This just seems like bad design with 2 tables. Why not just have a single tickets table that stores all tickets? Then add a column called IsClosed, which is false by default. Once a ticket is closed you simply update the value to true, and you don't have to do any copying to and from other tables.
All of your code around this part of your application will be much simpler and easier to maintain with a single table for tickets.
The simple answer is: DO NOT make it an identity column if you want to influence the next Id generated in that column.
Also, I think you have a really poor schema. Rather than having two tables, just add another column to your CurrentTickets table, something like Open BIT, with its value set to 1 by default and changed to 0 when the client closes the ticket.
You can then turn it on/off as many times as the client changes his mind, without having to go through all the trouble of identity inserts and managing a whole separate table.
Update
Since you have now mentioned it's SQL Server 2014, you have access to something called a sequence object.
You define the object once, and then every time you want a sequential number you just select the next value from it; it is a kind of hybrid between an identity column and a simple INT column.
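A minimal sketch (the sequence name and the Subject column are invented for illustration; TicketID here is a plain INT column, not an identity):
CREATE SEQUENCE dbo.TicketIdSeq AS INT START WITH 1 INCREMENT BY 1;
INSERT INTO dbo.CurrentTickets (TicketID, Subject)
VALUES (NEXT VALUE FOR dbo.TicketIdSeq, 'example ticket');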
To achieve this in recent versions of SQL Server, use the OUTPUT clause (definition on MSDN).
OUTPUT clause used with a table variable:
DECLARE @MyTableVar TABLE (...);
DELETE FROM dbo.CurrentTickets
OUTPUT DELETED.* INTO @MyTableVar
WHERE <...>;
INSERT INTO dbo.ClosedTickets
SELECT * FROM @MyTableVar;
The second table should have an ID column, but without the IDENTITY property; its values come from the other table.

Avoiding a two step insert in SQL

Let's say I have a table defined as follows:
CREATE TABLE SomeTable
(
    P_Id int PRIMARY KEY IDENTITY,
    CompoundKey varchar(255) NOT NULL
)
CompoundKey is a string with the primary key P_Id concatenated to the end, like Foo00000001, which comes from "Foo" + 00000001. At the moment, insertions into this table happen in two steps:
Insert a dummy record with a placeholder string for CompoundKey.
Update the CompoundKey column with the generated compound key.
I'm looking for a way to avoid the second update entirely and do it all with one insert statement. Is this possible? I'm using MS SQL Server 2005.
p.s. I agree that this is not the most sensible schema in the world, and this schema will be refactored (and properly normalized) but I'm unable to make changes to the schema for now.
You could use a computed column; change the schema to read:
CREATE TABLE SomeTable
(
    P_Id int PRIMARY KEY IDENTITY,
    CompoundKeyPrefix varchar(255) NOT NULL,
    CompoundKey AS CompoundKeyPrefix + CAST(P_Id AS VARCHAR(10))
)
This way, SQL Server will automagically give you your compound key in a new column, and will automatically maintain it for you. You may also want to look into the PERSISTED keyword for computed columns, which causes SQL Server to materialise the value in the data files rather than having to compute it on the fly. You can also add an index against the column should you so wish.
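For example, the persisted, indexed variant would look like this (same schema as above; the index name is invented):
CREATE TABLE SomeTable
(
    P_Id int PRIMARY KEY IDENTITY,
    CompoundKeyPrefix varchar(255) NOT NULL,
    CompoundKey AS CompoundKeyPrefix + CAST(P_Id AS VARCHAR(10)) PERSISTED
);
CREATE INDEX IX_SomeTable_CompoundKey ON SomeTable (CompoundKey);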
A trigger would easily accomplish this
This is simply not possible.
The "next ID" doesn't exist and thus cannot be read to fulfill the UPDATE until the row is inserted.
Now, if you were sourcing your autonumbers from somwhere else you could, but I don't think that's a good answer to your question.
Even if you use triggers, an UPDATE is still executed; you just don't execute it manually.
You can obscure the population of the CompoundKey, but at the end of the day it's still going to be an UPDATE.
I think your safest bet is just to make sure the UPDATE is in the same transaction as the INSERT, or use a trigger. But, for the academic argument of it, an UPDATE still occurs.
Two things:
1) If you end up using two statements, you must use a transaction! Otherwise other processes may see the database in an inconsistent state (i.e. see the record without its CompoundKey).
2) I would refrain from trying to append the Id to the end of CompoundKey in a transaction, trigger, etc. It is much cleaner to do it at the output if you need it, e.g. in queries (select concat(CompoundKey, Id) as CompoundKeyId ...). If you need it as a foreign key in other tables, just use the pair (CompoundKey, Id).

SQL Schema design question - delete flags

In our database schema, we like to use delete flags. When a record is deleted, we update that field rather than run a DELETE statement. The rest of our queries then check the delete flag when returning data.
Here is the problem:
The delete flag is a date, with a default value of NULL. This is convenient because when a record is deleted, we can easily see the date it was deleted on.
However, to enforce unique constraints properly, we need to include the delete flag in the unique constraint. The problem is that MS SQL behaves the way we want (for this design), but in PostgreSQL, if any column in a multi-column unique constraint is NULL, the row is allowed. That behavior fits the SQL standard, but it breaks our design.
The options we are considering are:
make a default value for the deleted field to be some hardcoded date
add a bit flag for deleted, then each table would have 2 delete related fields - date_deleted and is_deleted (for example)
change the date_deleted to is_deleted (bit field)
I suspect option 1 is a performance hit: each query would have to check against the hardcoded date rather than just checking IS NULL. Plus, it feels wrong.
Option 2 also feels wrong - two fields for "deleted" is not DRY.
With option 3, we lose the "date" information. There is a modified field which would, in theory, reflect the date deleted, but only assuming the last update to the row was the update to the delete bit.
So, any suggestions? What have you done in the past to deal with "delete flags"?
Update
Thanks to everyone for the super quick, and thoughtful responses.
We ended up going with a simple boolean field and a modified date field (with a trigger). I just noticed the partial index suggestion, and that looks like the perfect solution for this problem (but I haven't actually tried it).
If just retaining the deleted records is important to you, have you considered just moving them to a history table?
This could easily be achieved with a trigger.
Application logic doesn't need to account for this deleted flag.
Your tables would stay lean and mean when selecting from them.
It would solve your problem with unique indexes.
Option 3, we lose the "date" information. There is a modified field, which would, in theory, reflect the date deleted, but only assuming the last update to the row was the update to the delete bit.
Is there a business reason that the record would be modified after it was deleted? If not, are you worrying about something that's not actually an issue? =)
In the system I currently work on we have the following "metadata" columns: _Deleted, _CreatedStamp, _UpdatedStamp, _UpdatedUserId, _CreatedUserId ... quite a bit, but it's important for this system to carry that much data. I'd suggest going down the road of having a separate Deleted flag alongside a Modified Date / Deleted Date. "Disk space is cheap", and having two fields to represent a deleted record isn't world-ending, if that's what you have to do for the RDBMS you're using.
What about triggers? When a record is deleted, a post-update trigger copies the row into an archive table which has the same structure plus any additional columns, and an additional column of the date/time and perhaps the user that deleted it.
That way your "live" table only has records that are actually live, so is better performance-wise, and your application doesn't have to worry about whether a record has been deleted or not.
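A minimal sketch of such a trigger (all names are invented; the archive table mirrors the live table plus the audit columns):
CREATE TRIGGER trg_LiveTable_Archive ON dbo.LiveTable
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- copy each deleted row, stamping when and by whom it was deleted
    INSERT INTO dbo.LiveTable_Archive (Id, Data, DeletedAt, DeletedBy)
    SELECT Id, Data, GETDATE(), SUSER_SNAME()
    FROM deleted;
END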
One of my favourite solutions is an is_deleted bit flag, and a last_modified date field.
The last_modified field is updated automatically every time the row is modified (using any technique supported by your DBMS.) If the is_deleted bit flag is TRUE, then the last_modified value implies the time when the row was deleted.
You will then be able to set the default value of last_modified to GETDATE(). No more NULL values, and this should work with your unique constraints.
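A sketch of that approach in T-SQL (all names invented; the touch trigger is just one way your DBMS might keep last_modified current):
CREATE TABLE dbo.Widget
(
    WidgetId      INT IDENTITY PRIMARY KEY,
    Name          NVARCHAR(100) NOT NULL,
    is_deleted    BIT NOT NULL DEFAULT 0,
    last_modified DATETIME NOT NULL DEFAULT GETDATE()
);
GO
CREATE TRIGGER trg_Widget_Touch ON dbo.Widget
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- refresh last_modified on every update, including soft deletes
    UPDATE w
    SET last_modified = GETDATE()
    FROM dbo.Widget AS w
    JOIN inserted AS i ON i.WidgetId = w.WidgetId;
END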
Just create a conditional unique constraint:
CREATE UNIQUE INDEX i_bla ON yourtable (colname) WHERE date_deleted IS NULL;
Would creating a multi-column unique index that includes the deleted date achieve the constraint you need?
http://www.postgresql.org/docs/current/interactive/indexes-unique.html
Alternatively, could you store a non-NULL value, defaulting the deleted date to the minimum SQL date ('1/1/1753') instead of NULL for undeleted records?
Is it possible to exclude the deleted date field from your unique index? In what way does this field contribute to the uniqueness of each record, especially if the field is usually null?

SQL Server Database - Hidden Fields?

I'm implementing CRUD in my Silverlight application; however, I don't want to implement the Delete functionality in the traditional way. Instead, I'd like to mark the data as hidden inside the database.
Does anyone know of a way of doing this with an SQL Server Database?
Help greatly appreciated.
You can add another column, "deleted", to the table, which has value 0 or 1, and display only those records with deleted = 0.
ALTER TABLE TheTable ADD deleted BIT NOT NULL DEFAULT 0
You can also create view which takes only undeleted rows.
CREATE VIEW undeleted AS SELECT * FROM TheTable WHERE deleted = 0
And your delete command would look like this:
UPDATE TheTable SET deleted = 1 WHERE id = ...
Extending Lukasz' idea, a datetime column is useful too:
NULL = current
value = when soft-deleted
This adds simple versioning that a bit column cannot, which may work better.
In most situations I would rather archive the deleted rows to an archive table with a delete trigger. This way I can also capture who deleted each row and the deleted rows don't impact my performance. You can then create a view that unions both tables together when you want to include the deleted ones.
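Such a union view might look like this (a sketch; it assumes an archive twin of TheTable with matching Id and Data columns):
CREATE VIEW dbo.AllRows AS
    SELECT Id, Data, CAST(0 AS BIT) AS IsDeleted FROM dbo.TheTable
    UNION ALL
    SELECT Id, Data, CAST(1 AS BIT) FROM dbo.TheTable_Archive;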
You could do as Lukasz Lysik suggests, and have a field that serves as a flag for "deleted" rows, filtering them out when you don't want them showing up. I've used that in a number of applications.
An alternate suggestion would be to add an extra status assignment if there's a pre-existing status code. For example, in a class-attendance app we use internally, an attendance record could be "Imported", "Registered", "Completed", "Incomplete", etc.* We added a "Deleted" option for times when there are unintentional duplicates. That way we have a record, and we're not just throwing a new column at the problem.
*That is the display name for a numeric code used behind the scenes. Just clarifying. :)
Solution with triggers
If you are friends with DB triggers, then you might consider:
add DeletedAt and DeletedBy columns to your tables
create a view for each table (e.g. for table Customer, a CustomerView view) which filters out rows whose DeletedAt is not null (gbn's idea with date columns)
perform all your CRUD operations as usual, but not on the Customer table - on the CustomerView instead
add an INSTEAD OF DELETE trigger that marks the row as deleted instead of physically deleting it
you may want to do more complex stuff there, like ensuring that all FK references to the row are also "logically" deleted in order to preserve logical referential integrity
If you choose to use this pattern, I would probably name my tables differently, like TCustomer, with the views just named Customer, for clarity of client code. A sketch follows below.
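Putting the pieces together, a minimal sketch (column names are invented; TCustomer/Customer naming as suggested above):
CREATE TABLE dbo.TCustomer
(
    CustomerId INT IDENTITY PRIMARY KEY,
    Name       NVARCHAR(100) NOT NULL,
    DeletedAt  DATETIME NULL,
    DeletedBy  SYSNAME NULL
);
GO
CREATE VIEW dbo.Customer AS
    SELECT CustomerId, Name
    FROM dbo.TCustomer
    WHERE DeletedAt IS NULL;
GO
CREATE TRIGGER trg_Customer_InsteadOfDelete ON dbo.Customer
INSTEAD OF DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- mark the row deleted instead of physically removing it
    UPDATE t
    SET DeletedAt = GETDATE(),
        DeletedBy = SUSER_SNAME()
    FROM dbo.TCustomer AS t
    JOIN deleted AS d ON d.CustomerId = t.CustomerId;
END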
Be careful with this kind of implementation, because soft deletes break referential integrity, and you have to enforce integrity in your entities using custom logic.