I've got a trigger attached to a table.
ALTER TRIGGER [dbo].[UpdateUniqueSubjectAfterInsertUpdate]
ON [dbo].[Contents]
AFTER INSERT,UPDATE
AS
BEGIN
-- Grab the Id of the row just inserted/updated
DECLARE @Id INT
SELECT @Id = Id
FROM INSERTED
END
Every time a new entry is inserted or modified, I wish to update a single field (in this table). For the sake of this question, imagine I'm updating a LastModifiedOn (datetime) field.
OK, so what I've got is a batch insert thingy...
INSERT INTO [dbo].[Contents]
SELECT Id, a, b, c, d, YouDontKnowMe
FROM [dbo].[CrapTable]
Now all the rows are correctly inserted. The LastModifiedOn field defaults to null. So all the entries for this are null -- EXCEPT the first row.
Does this mean that the trigger is NOT called for each row that is inserted into the table, but once AFTER the insert query is finished, ie. ALL the rows are inserted? Which means the INSERTED table (in the trigger) has not one, but 'n' rows?!
If so .. er.. :( Would that mean I would need a cursor in this trigger (if I need to apply some unique logic to each single row, which I currently do)?
UPDATE
I'll add the full trigger code, to see if it's possible to do it without a cursor.
BEGIN
SET NOCOUNT ON
DECLARE @ContentId INTEGER,
@ContentTypeId TINYINT,
@UniqueSubject NVARCHAR(200),
@NumberFound INTEGER
-- Grab the Id. Also, convert the subject to a (first pass, untested)
-- unique subject.
-- NOTE: ToUriCleanText just replaces bad uri chars with a ''.
-- eg. an '#' -> ''
SELECT @ContentId = ContentId, @ContentTypeId = ContentTypeId,
@UniqueSubject = [dbo].[ToUriCleanText]([Subject])
FROM INSERTED
-- Find out how many items we have, for these two keys.
SELECT @NumberFound = COUNT(ContentId)
FROM [dbo].[Contents]
WHERE ContentId = @ContentId
AND UniqueSubject = @UniqueSubject
-- If we have at least one identical subject, then we need to make it
-- unique by appending the current found number.
-- Eg. The first instance has no number.
-- Second instance has subject + '1',
-- Third instance has subject + '2', etc...
IF @NumberFound > 0
SET @UniqueSubject = @UniqueSubject + CAST(@NumberFound AS NVARCHAR(10))
-- Now save this change.
UPDATE [dbo].[Contents]
SET UniqueSubject = @UniqueSubject
WHERE ContentId = @ContentId
END
Why not change the trigger to deal with multiple rows?
No cursor or loops needed: it's the whole point of SQL ...
UPDATE
dbo.SomeTable
SET
LastModifiedOn = GETDATE()
WHERE
EXISTS (SELECT * FROM INSERTED I WHERE I.[ID] = dbo.SomeTable.[ID])
Edit: Something like...
INSERT @ATableVariable
(ContentId, ContentTypeId, UniqueSubject)
SELECT
ContentId, ContentTypeId, [dbo].[ToUriCleanText]([Subject])
FROM
INSERTED
UPDATE
C
SET
UniqueSubject = UniqueSubject + CAST(foo.NumberFound AS NVARCHAR(10))
FROM
--Your original COUNT feels wrong and/or trivial
--Do you expect 0, 1 or many rows.
--Edit2: I assume 0 or 1 because of original WHERE so COUNT(*) will suffice
-- .. although, this implies an EXISTS could be used but let's keep it closer to OP post
(
SELECT ContentId, UniqueSubject, COUNT(*) AS NumberFound
FROM @ATableVariable
GROUP BY ContentId, UniqueSubject
HAVING COUNT(*) > 0
) foo
JOIN
[dbo].[Contents] C ON C.ContentId = foo.ContentId AND C.UniqueSubject = foo.UniqueSubject
Edit 2: and again with RANKING
UPDATE
C
SET
UniqueSubject = UniqueSubject + CAST(foo.Ranking - 1 AS NVARCHAR(10))
FROM
(
SELECT
ContentId, --not needed? UniqueSubject,
ROW_NUMBER() OVER (PARTITION BY ContentId ORDER BY UniqueSubject) AS Ranking
FROM
@ATableVariable
) foo
JOIN
dbo.Contents C ON C.ContentId = foo.ContentId
/* not needed? AND C.UniqueSubject = foo.UniqueSubject */
WHERE
foo.Ranking > 1
The trigger will be run only once for an INSERT INTO query. The INSERTED table will contain multiple rows.
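If you want to see this for yourself, here's a quick throwaway sketch (the log table and trigger name are made up for the demo) that records one row per statement, with the number of rows that arrived in INSERTED:
CREATE TABLE dbo.TriggerFireLog (
    FiredAt DATETIME NOT NULL DEFAULT GETDATE(),
    RowsInInserted INT NOT NULL)
GO
CREATE TRIGGER dbo.Demo_CountInserted
ON dbo.Contents
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON
    -- One log row per triggering statement, recording how many rows are in INSERTED.
    INSERT INTO dbo.TriggerFireLog (RowsInInserted)
    SELECT COUNT(*) FROM INSERTED
END
GO
A single multi-row INSERT ... SELECT produces exactly one row in the log, with RowsInInserted equal to the number of rows inserted.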
OK folks, I think I figured it out myself. Inspired by the previous answers and comments, I've done the following. (Can you folks have a quick look over to see if I've over-engineered this baby?)
1. Created an indexed view representing the 'Subject' field, which needs to be cleaned. This is the field that has to be unique .. but before we can make it unique, we need to group by it.
-- Create the view.
CREATE VIEW ContentsCleanSubjectView with SCHEMABINDING AS
SELECT ContentId, ContentTypeId,
[dbo].[ToUriCleanText]([Subject]) AS CleanedSubject
FROM [dbo].[Contents]
GO
-- Index the view with three indexes: a clustered PK and a non-clustered index,
-- which is what most of the joins will be done against.
-- The last one is because the execution plan reckons I was missing statistics
-- against one of the fields, so I added that index and the stats got generated.
CREATE UNIQUE CLUSTERED INDEX PK_ContentsCleanSubjectView ON
ContentsCleanSubjectView(ContentId)
CREATE NONCLUSTERED INDEX IX_BlahBlahSnipSnip_A ON
ContentsCleanSubjectView(ContentTypeId, CleanedSubject)
CREATE INDEX IX_BlahBlahSnipSnip_B ON
ContentsCleanSubjectView(CleanedSubject)
2. Create the trigger code, which now
a) grabs all the items 'changed' (nothing new/hard about that)
b) orders all the inserted rows, row-numbered with partitioning by the cleaned subject
c) updates each affected row in the main update clause.
Here's the code...
ALTER TRIGGER [dbo].[UpdateUniqueSubjectAfterInsertUpdate]
ON [dbo].[Contents]
AFTER INSERT,UPDATE
AS
BEGIN
SET NOCOUNT ON
DECLARE @InsertRows TABLE (ContentId INTEGER PRIMARY KEY,
ContentTypeId TINYINT,
CleanedSubject NVARCHAR(300))
DECLARE @UniqueSubjectRows TABLE (ContentId INTEGER PRIMARY KEY,
UniqueSubject NVARCHAR(350))
-- Grab all the records that have been updated/inserted.
INSERT INTO @InsertRows(ContentId, ContentTypeId, CleanedSubject)
SELECT ContentId, ContentTypeId, [dbo].[ToUriCleanText]([Subject])
FROM INSERTED
-- Determine the correct unique subject by using ROW_NUMBER partitioning.
INSERT INTO @UniqueSubjectRows
SELECT SubResult.ContentId, UniqueSubject = CASE SubResult.RowNumber
WHEN 1 THEN SubResult.CleanedSubject
ELSE SubResult.CleanedSubject + CAST(SubResult.RowNumber - 1 AS NVARCHAR(5)) END
FROM (
-- Order all the cleaned subjects, partitioned by the cleaned subject.
SELECT a.ContentId, a.CleanedSubject, ROW_NUMBER() OVER (PARTITION BY a.CleanedSubject ORDER BY a.ContentId) AS RowNumber
FROM ContentsCleanSubjectView a
INNER JOIN @InsertRows b ON a.ContentTypeId = b.ContentTypeId AND a.CleanedSubject = b.CleanedSubject
GROUP BY a.contentId, a.cleanedSubject
) SubResult
INNER JOIN [dbo].[Contents] c ON c.ContentId = SubResult.ContentId
INNER JOIN @InsertRows d ON c.ContentId = d.ContentId
-- Now update all the affected rows.
UPDATE a
SET a.UniqueSubject = b.UniqueSubject
FROM [dbo].[Contents] a INNER JOIN #UniqueSubjectRows b ON a.ContentId = b.ContentId
END
Now, the subquery correctly returns all the cleaned subjects, partitioned correctly and numbered correctly. I never knew about the PARTITION BY clause, so that trick was the big answer here :)
Then I just joined the subquery with the row that is being updated in the parent query. The row number is correct, so now I just do a CASE: if this is the first time the cleaned subject exists (eg. row_number = 1), don't modify it; otherwise, append the row number minus one. This means that for the 2nd instance of the same subject, the unique subject will be cleanedSubject + '1'.
The reason why I believe I need an indexed view is that if I have two very similar subjects, then once you have stripped out (ie. cleaned) all the bad chars (which I've determined are bad), it's possible that the two cleaned subjects are the same. As such, I need to do all my joins on CleanedSubject instead of Subject. Now, for the massive number of rows I have, this is terrible for performance when I don't have the view. :)
So .. is this over engineered?
Edit 1:
Refactored the trigger code so it's way more performant.
Related
I have something like the table below:
CREATE TABLE updates (
id INT PRIMARY KEY IDENTITY (1, 1),
name VARCHAR (50) NOT NULL,
updated DATETIME
);
And I'm updating it like so:
INSERT INTO updates (name, updated)
VALUES
('fred', '2020-11-11'),
('fred', '2020-11-11'),
...
('bert', '2020-11-11');
I need to write an after update trigger and enumerate all the name(s) that were added and add each one to another table, but I can't work out how to enumerate each one.
EDIT: - thanks to those who pointed me in the right direction, I know very little SQL.
What I need to do is something like this
foreach name in inserted
look it up in another table and
retrieve a count of the updates a 'name' has done
add 1 to the count
and update it back into the other table
I can't get to my laptop at the moment, but presumably I can do something like:
BEGIN
SET @count = (SELECT UCount from OTHERTAB WHERE name = ins.name)
SET @count = @count + 1
UPDATE OTHERTAB SET UCount = @count WHERE name = ins.name
SELECT ins.name
FROM inserted ins;
END
and that would work for each name in the update?
Obviously I'll have to read up on set based SQL processing.
Thanks all for the help and pointers.
Based on your edits you would do something like the following... set-based is a mindset, so you don't need to compute the count in advance (in fact you can't). It's not clear whether you are counting in the same table or another table - but I'm sure you can work it out.
Points:
Use the Inserted table to determine what rows to update
Use a sub-query to calculate the new value if it's a second table, taking into account the possibility of null
If you are really using the same table, then this should work
BEGIN
UPDATE OTHERTAB SET
UCount = COALESCE(UCount,0) + 1
WHERE [name] in (
SELECT I.[name]
FROM Inserted I
);
END;
If however you are using a second table then this should work:
BEGIN
UPDATE OTHERTAB SET
UCount = COALESCE((SELECT UCount+1 from OTHERTAB T2 WHERE T2.[name] = OTHERTAB.[name]),0)
WHERE [name] in (
SELECT I.[name]
FROM Inserted I
);
END;
Using inserted and a set-based approach (no need for a loop):
CREATE TRIGGER trg
ON updates
AFTER INSERT
AS
BEGIN
INSERT INTO tab2(name)
SELECT name
FROM inserted;
END
I have a main table. I will get some real-time records added to that table. I want to fetch all records which have been added, as well as alterations or changes to previously existing records.
How can I achieve this?
You can use 2 commonly used approaches:
Track changes with another table through a trigger.
Should be something similar to this:
CREATE TABLE Tracking (
ID INT,
-- Your original table columns
TrackDate DATETIME DEFAULT GETDATE(),
TrackOperation VARCHAR(100))
GO
CREATE TRIGGER TrackingTrigger ON OriginalTable AFTER UPDATE, INSERT, DELETE
AS
BEGIN
INSERT INTO Tracking(
ID,
TrackOperation
-- Other columns
)
SELECT
ID = ISNULL(I.ID, D.ID),
TrackOperation = CASE
WHEN I.ID IS NOT NULL AND D.ID IS NOT NULL THEN 'Update'
WHEN I.ID IS NOT NULL THEN 'Insert'
ELSE 'Delete' END
-- Other columns
FROM
inserted AS I
FULL JOIN deleted AS D ON I.ID = D.ID -- ID is primary key
END
GO
Include CreatedDate, ModifiedDate and IsDeleted columns on your table (a sketch follows below). CreatedDate should have a default with the current date, ModifiedDate should be updated each time your data is updated, and IsDeleted should be flagged when you are deleting (and not actually deleting the row). This option requires a lot more handling than the previous one, and you won't be able to track consecutive updates.
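A minimal sketch of that second option, with illustrative names; keeping ModifiedDate current would typically be done by the application or by a simple AFTER UPDATE trigger:
CREATE TABLE OriginalTable (
    ID INT PRIMARY KEY,
    -- ... your other columns ...
    CreatedDate DATETIME NOT NULL DEFAULT GETDATE(),
    ModifiedDate DATETIME NULL,
    IsDeleted BIT NOT NULL DEFAULT 0)
GO
-- "Deleting" becomes flagging the row instead of removing it:
UPDATE OriginalTable
SET IsDeleted = 1, ModifiedDate = GETDATE()
WHERE ID = 42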
You have to look your table up in sys.objects first and grab its object id before using the usage stats view.
declare @objectid int
select @objectid = object_id from sys.objects where name = 'YOURTABLENAME'
select top 1 * from sys.dm_db_index_usage_stats where object_id = @objectid
and last_user_update is not null
order by last_user_update
If you have an identity column in your table, you may find the last inserted row information through a SQL query. And for that, we have multiple options like:
@@IDENTITY
SCOPE_IDENTITY
IDENT_CURRENT
All three functions return last-generated identity values. However, the scope and session over which "last" is defined differ between them.
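To make the difference concrete, a small sketch against a hypothetical dbo.Orders table with an identity column:
-- @@IDENTITY      : last identity generated in this session, in ANY scope
--                   (a trigger inserting into an audit table can change it).
-- SCOPE_IDENTITY(): last identity generated in this session AND this scope.
-- IDENT_CURRENT() : last identity generated for a given table, by ANY session.
INSERT INTO dbo.Orders (CustomerName) VALUES ('fred')

SELECT SCOPE_IDENTITY()             AS LastIdThisScope,
       @@IDENTITY                   AS LastIdThisSession,
       IDENT_CURRENT('dbo.Orders')  AS LastIdAnySession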
I am trying to create a simple insert trigger that gets the count from a table and adds it to another, like this:
CREATE TABLE [poll-count](
id VARCHAR(100),
altid BIGINT,
option_order BIGINT,
uip VARCHAR(50),
[uid] VARCHAR(100),
[order] BIGINT,
PRIMARY KEY NONCLUSTERED([order]),
FOREIGN KEY ([order]) REFERENCES ord ([order])
)
GO
CREATE TRIGGER [get-poll-count]
ON [poll-count]
FOR INSERT
AS
BEGIN
DECLARE @count INT
SET @count = (SELECT COUNT (*) FROM [poll-count] WHERE option_order = i.option_order)
UPDATE [poll-options] SET [total] = @count WHERE [order] = i.option_order
END
GO
Whenever I try to run this I get this error:
The multi-part identifier "i.option_order" could not be bound
What is the problem?
Thanks
Your trigger currently assumes that there will always be one-row inserts. Have you tried your trigger with anything like this?
INSERT dbo.[poll-options](option_order --, ...)
VALUES(1 --, ...),
(2 --, ...);
Also, you say that SQL Server "cannot access inserted table" - yet your statement says this. Where do you reference inserted (even if this were a valid subquery structure)?
SET @count = (SELECT COUNT (*) FROM [poll-count]
WHERE option_order = i.option_order)
-----------------------^ "i" <> "inserted"
Here is a trigger that properly references inserted and also properly handles multi-row inserts:
CREATE TRIGGER dbo.pollupdate
ON dbo.[poll-options]
FOR INSERT
AS
BEGIN
SET NOCOUNT ON;
;WITH x AS
(
SELECT option_order, c = COUNT(*)
FROM dbo.[poll-options] AS p
WHERE EXISTS
(
SELECT 1 FROM inserted
WHERE option_order = p.option_order
)
GROUP BY option_order
)
UPDATE p SET total = x.c
FROM dbo.[poll-options] AS p
INNER JOIN x
ON p.option_order = x.option_order;
END
GO
However, why do you want to store this data on every row? You can always derive the count at runtime, know that it is perfectly up to date, and avoid the need for a trigger altogether. If it's about the performance aspect of deriving the count at runtime, a much easier way to implement this write-ahead optimization for about the same maintenance cost during DML is to create an indexed view:
CREATE VIEW dbo.[poll-options-count]
WITH SCHEMABINDING
AS
SELECT option_order, c = COUNT_BIG(*)
FROM dbo.[poll-options]
GROUP BY option_order;
GO
CREATE UNIQUE CLUSTERED INDEX oo ON dbo.[poll-options-count](option_order);
GO
Now the index is maintained for you and you can derive very quick counts for any given (or all) option_order values. You'll have to test, of course, whether the improvement in query time is worth the increased maintenance (though you are already paying that price with the trigger, except that it can affect many more rows in any given insert, so...).
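For example, reading the pre-aggregated count back out might look like this (the NOEXPAND hint forces the view's index to be used, which matters on editions that don't match indexed views automatically):
SELECT option_order, c
FROM dbo.[poll-options-count] WITH (NOEXPAND)
WHERE option_order = 1;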
As a final suggestion, don't use special characters like - in object names. It just forces you to always wrap it in [square brackets] and that's no fun for anyone.
Typically when you specify an identity column you get a convenient interface in SQL Server for asking for a particular row.
SELECT * FROM TableName WHERE $IDENTITY = @pID
You don't really need to concern yourself with the name of the identity column because there can only be one.
But what if I have a table which mostly consists of temporary data, with lots of inserts and lots of deletes? Is there a simple way for me to reuse the identity values?
Preferably I would want to be able to write a function that would return, say, NEXT_SMALLEST($IDENTITY) as the next identity value and do so in a fail-safe manner.
Basically find the smallest value that's not in use. That's not entirely trivial to do, but what I want is to be able to tell SQL Server that this is my function that will generate the identity values. But what I know is that no such function exists...
I want to...
Implement global database IDs; I need to provide a default value that I'm in control of.
My idea was based on having a table with all known IDs, and then every row ID from some other table that needed a global ID would reference that table. The default value would be provided by something like
INSERT INTO GlobalID
RETURN SCOPE_IDENTITY()
No; it's not unique if it can be reused.
Why do you want to re-use them? Why do you concern yourself with this field? If you want to be in control of it, don't make it an identity; create your own scheme and use that.
Don't reuse identities, you'll just shoot yourself in the foot. Use a large enough value so that it never rolls over (64 bit big int).
To find missing gaps in a sequence of numbers join the table against itself with a +/- 1 difference:
SELECT a.id
FROM table AS a
LEFT OUTER JOIN table AS b ON a.id = b.id+1
WHERE b.id IS NULL;
This query will find the numbers in the id sequence for which id-1 is not in the table, i.e. contiguous sequence start numbers. You can then use SET IDENTITY_INSERT ... ON to insert a specific id and reuse a number. The cost of doing so is overwhelming (both runtime and code complexity) compared with an ordinary identity-based insert.
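If you do go down that road, the reuse itself would look something like this (table and column names are made up):
SET IDENTITY_INSERT dbo.MyTable ON;
-- The identity column must be listed explicitly in the column list.
INSERT INTO dbo.MyTable (id, name) VALUES (42, 'reused value');
SET IDENTITY_INSERT dbo.MyTable OFF;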
If you really want to reset the identity value to the lowest,
here is a trick you can use with DBCC CHECKIDENT.
Basically, the following SQL statements reset the identity value so that it restarts from the lowest possible number.
create table TT (id int identity(1, 1))
GO
insert TT default values
GO 10
select * from TT
GO
delete TT where id between 5 and 10
GO
--; At this point, next ID will be 11, not 5
select * from TT
GO
insert TT default values
GO
--; as you can see here, next ID is indeed 11
select * from TT
GO
--; Now delete ID = 11
--; so that we can reseed next highest ID to 5
delete TT where id = 11
GO
--; Now, let's reseed the identity value to the lowest possible identity number
declare @seedID int
select @seedID = max(id) from TT
print @seedID --; 4
--; We reseed the identity column with "DBCC CHECKIDENT" and pass a new seed value.
--; But we can't pass a variable as the seed argument, so let's use dynamic sql.
declare @sql nvarchar(200)
set @sql = 'dbcc checkident(TT, reseed, ' + cast(@seedID as varchar) + ')'
exec sp_sqlexec @sql
GO
--; Now the next insert picks up the reseeded value
insert TT default values
GO
--; as you can see here, next ID is indeed 5
select * from TT
GO
I guess we would really need to know why you want to reuse your identity column. The only reason I can think of is that, because of the temporary nature of your data, you might exhaust the possible values for the identity. That is not really likely, but if that is your concern, you can use uniqueidentifiers (GUIDs) as the primary key in your table instead.
The function newid() will create a new guid and can be used in insert statements (or other statements). Then when you delete the row, you don't have any "holes" in your key because guids are not created in that order anyway.
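For example, a sketch of such a table (names are illustrative):
CREATE TABLE dbo.TempData (
    id UNIQUEIDENTIFIER NOT NULL PRIMARY KEY DEFAULT NEWID(),
    payload NVARCHAR(100) NOT NULL)
GO
-- The key is generated for you, so deletes never leave "holes" you care about.
INSERT INTO dbo.TempData (payload) VALUES (N'some temporary row')
If index fragmentation from random GUIDs becomes a concern, NEWSEQUENTIALID() can be used as the column default instead.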
[Syntax assumes SQL2008....]
Yes, it's possible. You need two management tables, and two triggers on each participating table.
First, the management tables:
-- this table should only ever have one row
CREATE TABLE NextId (Id INT)
INSERT NextId VALUES (1)
GO
CREATE TABLE RecoveredIds (Id INT NOT NULL PRIMARY KEY)
GO
Then, the triggers, two on each table:
CREATE TRIGGER tr_TableName_RecoverId ON TableName
FOR DELETE AS BEGIN
IF @@ROWCOUNT = 0 RETURN
INSERT RecoveredIds (Id) SELECT Id FROM deleted
END
GO
CREATE TRIGGER tr_TableName_AssignId ON TableName
INSTEAD OF INSERT AS BEGIN
DECLARE @rowcount INT = @@ROWCOUNT
IF @rowcount = 0 RETURN
DECLARE @required INT = @rowcount
DECLARE @new_ids TABLE (Id INT PRIMARY KEY)
DELETE TOP (@required) OUTPUT DELETED.Id INTO @new_ids (Id) FROM RecoveredIds
SET @rowcount = @@ROWCOUNT
IF @rowcount < @required BEGIN
DECLARE @output TABLE (Id INT)
UPDATE NextId SET Id = Id + (@required-@rowcount)
OUTPUT DELETED.Id INTO @output
-- this assumes you have a numbers table around somewhere
INSERT @new_ids (Id)
SELECT n.Number+o.Id-1 FROM Numbers n, @output o
WHERE n.Number BETWEEN 1 AND @required-@rowcount
END
SET IDENTITY_INSERT TableName ON
;WITH inserted_CTE AS (SELECT _no = ROW_NUMBER() OVER (ORDER BY Id), * FROM inserted)
, new_ids_CTE AS (SELECT _no = ROW_NUMBER() OVER (ORDER BY Id), * FROM @new_ids)
INSERT TableName (Id, Attr1, Attr2)
SELECT n.Id, i.Attr1, i.Attr2
FROM inserted_CTE i JOIN new_ids_CTE n ON i._no = n._no
SET IDENTITY_INSERT TableName OFF
END
You could script the triggers out easily enough from system tables.
You would want to test this for concurrency. It should work as is, syntax errors notwithstanding: The OUTPUT clause guarantees atomicity of id lookup->increment as one step, and the entire operation occurs within a transaction, thanks to the trigger.
TableName.Id is still an identity column. All the common idioms like $IDENTITY and SCOPE_IDENTITY() will still work.
There is no central table of ids by table, but you could create one easily enough.
I don't have any help for finding the values not in use, but if you really want to find them and set them yourself, you can use
SET IDENTITY_INSERT <your table> ON ...
in your code to do so.
I'm with everyone else though. Why bother? Don't you have a business problem to solve?
If there's:
IF UPDATE (col1)
...in the SQL server trigger on a table, does it return true only if col1 has been changed or been updated?
I have a regular update query like
UPDATE table-name
SET col1 = 'x',
col2 = 'y'
WHERE id = 999
Now my concern is: if col1 was 'x' previously and we then update it to 'x' again,
would IF UPDATE(col1) in the trigger return true or not?
I am facing this problem because my save query is generic for all columns, but when I add this condition it returns true even if the value hasn't changed... So what should I do in this case if I want to add a condition like that?
It returns true if a column was updated. An update means that the query has SET the value of the column. Whether the previous value was the same as the new value is largely irrelevant.
UPDATE table SET col = col
it's an update.
UPDATE table SET col = 99
when the col already had the value 99, is also an update.
Within the trigger, you have access to two internal tables that may help. The 'inserted' table includes the new version of each affected row, The 'deleted' table includes the original version of each row. You can compare the values in these tables to see if your field value was actually changed.
Here's a quick way to scan the rows to see if ANY column changed before deciding to run the contents of a trigger. This can be useful for example when you want to write a history record, but you don't want to do it if nothing really changed.
We use this all the time in ETL importing processes where we may re-import data but if nothing really changed in the source file we don't want to create a new history record.
CREATE TRIGGER [dbo].[TR_my_table_create_history]
ON [dbo].[my_table] FOR UPDATE AS
BEGIN
--
-- Insert the old data row if any column data changed
--
INSERT INTO [my_table_history]
SELECT d.*
FROM deleted d
INNER JOIN inserted i ON i.[id] = d.[id]
--
-- Use INTERSECT to see if anything REALLY changed
--
WHERE NOT EXISTS( SELECT i.* INTERSECT SELECT d.* )
END
Note that this particular trigger assumes that your source table (the one triggering the trigger) and the history table have identical column layouts.
What you do is check for different values in the inserted and deleted tables rather than use UPDATE() (don't forget to account for NULLs). Or you could stop doing unneeded updates.
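One NULL-safe way to do that comparison is to let EXCEPT handle the NULL logic for you; a sketch with made-up table and column names:
CREATE TRIGGER dbo.tr_MyTable_RealChanges
ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    SELECT i.id
    FROM inserted AS i
    JOIN deleted AS d ON d.id = i.id
    -- EXCEPT treats two NULLs as equal, so NULL -> NULL is "no change",
    -- while NULL -> value and value -> NULL both count as changes.
    WHERE EXISTS (SELECT i.col1 EXCEPT SELECT d.col1);
    -- ...act on those ids here
END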
Trigger:
CREATE TRIGGER boo ON status2 FOR UPDATE AS
IF UPDATE (id)
BEGIN
SELECT 'DETECT';
END;
Usage:
UPDATE status2 SET name = 'K' WHERE name= 'T' --no action
UPDATE status2 SET name = 'T' ,id= 8 WHERE name= 'K' --detect
To shortcut the "No actual update" case, you also need to check at the beginning whether your query affected any rows at all:
set nocount on; -- this must be the first statement!
if not exists (select 1 from inserted) and not exists (select 1 from deleted)
return;
SET NOCOUNT ON;
declare @countTemp int
select @countTemp = Count (*) from (
select City,PostCode,Street,CountryId,Address1 from Deleted
union
select City,PostCode,Street,CountryId,Address1 from Inserted
) tempTable
IF ( @countTemp > 1 )
Begin
-- Your Code goes Here
End
-- If any of City, PostCode, Street, CountryId or Address1 got updated, the code
-- inside the IF ( @countTemp > 1 ) block will run.
This worked for me
DECLARE @LongDescDirty bit = 0
DECLARE @old varchar(4000) = (SELECT LongDescription FROM deleted)
DECLARE @new varchar(4000) = (SELECT LongDescription FROM inserted)
IF (@old <> @new)
BEGIN
SET @LongDescDirty = 1
END
UPDATE table
SET LongDescUpdated = @LongDescDirty
.....