I have a SQL MERGE script that updates a table on which an UPDATE trigger exists.
When the MERGE statement produces only one update to the table, the trigger works fine. When it produces multiple updates to the table, the trigger returns an error.
Here is the trigger:
ALTER TRIGGER [dbo].[userupd]
ON [dbo].[users]
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON;
DECLARE @navn varchar(255), @fornavn varchar(255), @efternavn varchar(255), @initialer varchar(255), @areagroups varchar(255)
SET @fornavn = (SELECT Fornavn FROM DELETED)
SET @efternavn = (SELECT Efternavn FROM DELETED)
SET @initialer = (SELECT Initialer FROM DELETED)
IF @initialer IS NULL SET @initialer = 'Extern'
SET @navn = @fornavn + ' ' + @efternavn + ' (' + @initialer + ')'
SET @areagroups = (SELECT AddedAreaGroups FROM NOX.dbo.simscodesusers WHERE Username = @navn)
SELECT @areagroups OriginalString, RTRIM(LTRIM(@areagroups)) TrimmedValue
SET @areagroups = ' ' + @areagroups
INSERT INTO NOX.dbo.SIMScodesAutoUpdate
( Action ,
Username
)
SELECT 'DELETE' ,
D.Fornavn + ' ' + D.Efternavn + ' (' + D.Initialer + ')'
FROM DELETED D;
INSERT INTO NOX.dbo.SIMScodesAutoUpdate
( Action ,
Username ,
NoxAutoCode ,
NoxAutoCodePIN ,
UserGroup ,
Startdate ,
EndDate ,
AddedAreaGroups
)
SELECT 'ADD' ,
I.Fornavn + ' ' + I.Efternavn + ' (' + I.Initialer + ')' ,
I.Kortnummer ,
I.PINkode ,
I.Brugerniveau ,
I.Startdato ,
I.Slutdato,
@areagroups
FROM INSERTED I
END
This is the error returned from the SQL job that contains the MERGE script:
Executed as user: CPCORP\SQDKRTV96service. Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression. [SQLSTATE 21000] (Error 512) The statement has been terminated. [SQLSTATE 01000] (Error 3621). The step failed.
Can the trigger be edited to handle multiple values?
Thanks in advance.
In the second case ("When the MERGE command comes with multiple updates...") the DELETED pseudo-table contains MANY rows, and you cannot assign a multi-row result to ONE SCALAR variable. This line is the source of the error 'Subquery returned more than 1 value....':
SET @fornavn = (SELECT Fornavn FROM DELETED)
Microsoft: Create DML Triggers to Handle Multiple Rows of Data
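The fix, then, is to make the whole trigger body set-based rather than scalar. A minimal sketch of such a rewrite follows; it assumes dbo.users has a key column Id for pairing INSERTED with DELETED rows (the question does not show the key, so substitute the table's real primary key):

```sql
-- Sketch only: set-based rewrite of the trigger, valid for any number of rows.
-- The Id join column is an assumption; use the table's real primary key.
ALTER TRIGGER [dbo].[userupd]
ON [dbo].[users]
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- One 'DELETE' action row per updated user, built from the old values
    INSERT INTO NOX.dbo.SIMScodesAutoUpdate (Action, Username)
    SELECT 'DELETE',
           D.Fornavn + ' ' + D.Efternavn + ' (' + ISNULL(D.Initialer, 'Extern') + ')'
    FROM DELETED D;

    -- One 'ADD' action row per updated user, built from the new values,
    -- with the area groups looked up per row instead of once into a scalar
    INSERT INTO NOX.dbo.SIMScodesAutoUpdate
        (Action, Username, NoxAutoCode, NoxAutoCodePIN, UserGroup,
         Startdate, EndDate, AddedAreaGroups)
    SELECT 'ADD',
           I.Fornavn + ' ' + I.Efternavn + ' (' + ISNULL(I.Initialer, 'Extern') + ')',
           I.Kortnummer, I.PINkode, I.Brugerniveau, I.Startdato, I.Slutdato,
           ' ' + S.AddedAreaGroups
    FROM INSERTED I
    INNER JOIN DELETED D
        ON D.Id = I.Id
    LEFT JOIN NOX.dbo.simscodesusers S
        ON S.Username = D.Fornavn + ' ' + D.Efternavn + ' (' + ISNULL(D.Initialer, 'Extern') + ')'
END
```

The per-row lookup of AddedAreaGroups replaces the single scalar assignment, so the trigger behaves the same whether the MERGE touches one row or many.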
I am creating a query where I select data from a table, then select a number of rows from that table, insert those rows into an identical table in another database, and then repeat the process to select the next batch of rows from the original table.
For reference, this is what I am trying to do (already built for Oracle):
$" INSERT INTO {destination-table}
SELECT * FROM {original-table}
WHERE ROWID IN (SELECT B.RID
FROM (SELECT ROWID AS RID, rownum as RID2
FROM {original-table}
WHERE {Where Claus}
AND ROWNUM <= {recordsPerStatement * iteration}
) B WHERE RID2 > {recordsPerStatement * (iteration - 1)})"
This is put through a loop in .NET.
For SQL Server, however, I fail to get this done. I retrieve the data with:
$" Select B.* from (Select A.* from (Select Row_NUMBER()
OVER (order by %%physloc%%) As RowID, {original-table}.* FROM
{original-table} where {where-claus})
A Where A.RowID between {recordsPerStatement * (iteration - 1)}
AND {recordsPerStatement * iteration} B"
The problem here is that the above select produces an extra column (RowID), which prevents me from inserting the data into the destination table.
I have been looking for ways to get rid of the RowID column in the top select, or to insert data from the original table based on the data retrieved
(something like insert into destination-table select * from original-table where exists in (rest of select query))... but to no avail.
TL;DR: Get rid of a RowID column used in calculations, so the rows can be inserted into an identical table.
Specifications:
A LOT of data (millions of rows), therefore processed in batches
Unknown tables (so I cannot reference specific column names; they are unknown)
Needs an order (hence the row_number) so the same data is not copied twice
Insert using a select query (first retrieving the data and processing it locally would severely impact performance)
If necessary, additional variables can be added here (like an order-clause variable); however, any reference to data in the query will ALWAYS be a variable, and if I can find a way to not add more variables to the query, that would be preferable.
I hope someone has an idea of what I could look at further.
This approach uses a temporary table to save the paginated data before processing it page by page. It has worked for me, but I am not sure whether you might have problems with very large data sets. You could put the whole thing in an SP, then call the SP with parameters from .NET. You will need to add a parameter for the destination table name and construct/execute an INSERT statement in the final loop.
-- Parameters
DECLARE @PageSize integer = 100;
DECLARE @TableName nVarchar(200) = 'WRD_WordHits';
DECLARE @OrderBy nVarchar(3000) = 'WordID'
STEP_010: BEGIN
-- Get the column definitions for the table
DECLARE @Cols int;
SELECT TABLE_NAME, ORDINAL_POSITION, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
, IS_NULLABLE
INTO #Tspec
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @TableName;
-- Number of columns
SET @Cols = @@ROWCOUNT;
END;
STEP_020: BEGIN
-- Create the temporary table that will hold the paginated data
CREATE TABLE #TT2 ( PageNumber int, LineNumber int, SSEQ int )
DECLARE @STMT nvarchar(3000);
END;
STEP_030: BEGIN
-- Add columns to #TT2 using the column definitions
DECLARE @Ord int = 0;
DECLARE @Colspec nvarchar(3000) = '';
DECLARE @AllCols nvarchar(3000) = '';
DECLARE @ColName nvarchar(200) = '';
WHILE @Ord < @Cols BEGIN
SELECT @Ord = @Ord + 1;
-- Get the column name and specification
SELECT @ColName = Column_Name
, @Colspec =
Column_Name + ' ' + DATA_TYPE + CASE WHEN CHARACTER_MAXIMUM_LENGTH IS NULL THEN ''
ELSE '(' + CAST(CHARACTER_MAXIMUM_LENGTH AS varchar(30) ) + ')' END
FROM #Tspec WHERE ORDINAL_POSITION = @Ord;
-- Create and execute statement to add the column and the columns list used later
SELECT @STMT = ' ALTER TABLE #TT2 ADD ' + @Colspec + ';'
, @AllCols = @AllCols + ', ' + @ColName ;
EXEC sp_executesql @STMT;
END;
-- Remove leading comma from columns list
SELECT @AllCols = SUBSTRING(@AllCols, 3, 3000);
PRINT @AllCols
-- Finished with the source table spec
DROP TABLE #Tspec;
END;
STEP_040: BEGIN -- Create and execute the statement used to fill #TT2 with the paginated data from the source table
-- The first two cols are the page number and row number within the page
-- The sequence is arbitrary but could use a key list for the order by clause
SELECT @STMT =
'INSERT #TT2
SELECT FLOOR( CAST( SSEQ as float) /' + CAST(@PageSize as nvarchar(10)) + ' ) + 1 PageNumber, (SSEQ) % ' + CAST(@PageSize as nvarchar(10)) + ' + 1 LineNumber, * FROM
(
SELECT ROW_NUMBER() OVER ( ORDER BY ' + @OrderBy + ' ) - 1 AS SSEQ, * FROM ' + @TableName + '
)
A; ' ;
EXEC sp_executesql @STMT;
-- *** Test only to show that the table contains the data
--SELECT * FROM #TT2;
--SELECT @STMT = 'SELECT NULL AS EXECSELECT, ' + @AllCols + ' FROM #TT2;' ;
--EXEC sp_executesql @STMT;
-- ***
END;
STEP_050: BEGIN -- Loop through paginated data, one page at a time.
-- Variables to control the paginated loop
DECLARE @PageMAX int;
SELECT @PageMAX = MAX(PageNumber) FROM #TT2;
PRINT 'Generated ' + CAST( @PageMAX AS varchar(10) ) + ' pages from table';
DECLARE @Page int = 0;
WHILE @Page < @PageMax BEGIN
SELECT @Page = @Page + 1;
-- Create and execute the statement to get one page of data - this could be any statement to process data page by page
SELECT @STMT = 'SELECT ' + @AllCols + ' FROM #TT2 WHERE PageNumber = ' + CAST(@Page AS Varchar(10 )) + ' ORDER BY LineNumber '
-- Execute the statement.
PRINT @STMT -- For testing
--EXEC sp_executesql @STMT;
END;
-- Finished with Paginated data
DROP TABLE #TT2;
END;
The solution I came up with:
First read the column names from the database and store them locally, then use them to build up the insert/select query, selecting only those columns from the view (which are all of them apart from RowID).
commandText = $"SELECT column_name
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = N'{table}'"
columnNames = "executionfunction with commandText"
columnNamesCount = columnNames.Rows.Count
Dim counter As Int16 = 0
commandText = String.Empty
commandText = $"INSERT INTO {destination} SELECT "
For Each row As DataRow In columnNames.Rows
If counter = columnNamesCount - 1 Then
commandText += $"B.{row("column_name")} "
Else
commandText += $"B.{row("column_name")}, "
End If
counter = counter + 1
Next
commandText += $"FROM
(Select A.* FROM (Select Row_NUMBER()
OVER(order by %%physloc%%) AS RowID, {table}.*
FROM {table} where {filter}) A
WHERE A.RowID between ({recordsPerStatement} * ({iteration}-1)) + 1
AND ({recordsPerStatement} * {iteration})) B"
EDIT: To remove the %%physloc%% clause, an OFFSET ... FETCH NEXT part has been built in. New approach:
commandText += $"INSERT INTO {destination} SELECT * FROM {table} WHERE {filter} "
For i As Int16 = 1 To columnNamesCount
If i = 1 Then
commandText += $"ORDER BY {columnNames.Rows(i - 1)("column_name")} ASC"
Else
commandText += $"{columnNames.Rows(i - 1)("column_name")} ASC"
End If
If i <> columnNamesCount Then
commandText += ", "
End If
Next
commandText += $" OFFSET ({recordsPerStatement} * ({iteration} -1)) ROWS FETCH Next {recordsPerStatement} ROWS ONLY"
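Stripped of the VB string building, the generated pagination statement is plain OFFSET ... FETCH. A standalone sketch of what one iteration produces (table, filter, and column names are placeholders):

```sql
-- Sketch: copy page 3 (rows 201-300) from a source table into an identical
-- destination table, using OFFSET ... FETCH over a deterministic order.
DECLARE @recordsPerStatement int = 100;
DECLARE @iteration int = 3;

INSERT INTO dbo.DestinationTable          -- placeholder name
SELECT *
FROM dbo.OriginalTable                    -- placeholder name
WHERE SomeFilter = 1                      -- placeholder filter
ORDER BY Col1 ASC, Col2 ASC               -- order by every column for a stable sequence
OFFSET (@recordsPerStatement * (@iteration - 1)) ROWS
FETCH NEXT @recordsPerStatement ROWS ONLY;
```

Ordering by every column is what makes the paging repeatable when no key is known, at the cost of a sort per iteration.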
I have the following SQL:
DECLARE @HospitalReport TABLE (Registrator VARCHAR (20))
INSERT INTO @HospitalReport (Registrator)
VALUES("64")
SELECT
@HospitalReport.Registrator
FROM
@HospitalReport
IF Registrator > 0
BEGIN
SELECT
Database.dbo.Users.Firstname, Database.dbo.Users.Lastname
FROM
StradaAnv.dbo.Anvandare
WHERE
Id = Registrator
IF Firstname != NULL AND Lastname != NULL
BEGIN
UPDATE @HospitalReport
SET Registrator = Firstname + ' ' + Lastname
WHERE Registrator = Registrator
END
END
SELECT * FROM @HospitalReport
When I run this code, I get the following error:
Msg 137, Level 16, State 1, Line 9
Must declare the scalar variable "@HospitalReport"
From what I can see, I have already declared @HospitalReport as a table?
Don't split everything out into procedural steps. Tell the system what you want, not how to do it:
DECLARE @HospitalReport TABLE (Registrator VARCHAR (20))
INSERT INTO @HospitalReport (Registrator)
VALUES('64')
UPDATE H
SET Registrator = Firstname + ' ' + Lastname
FROM
@HospitalReport H
INNER JOIN
StradaAnv.dbo.Anvandare A
ON
H.Registrator = A.Id
WHERE A.Firstname IS NOT NULL AND
A.Lastname IS NOT NULL
SELECT * FROM @HospitalReport
I.e. I'm not first querying the table, then seeing whether particular columns are not null1, then deciding whether or not to perform an update. I'm describing the entire operation in a single query and letting the optimizer work out how best to perform the task.
1Which, as shown above, should be done using the IS NOT NULL operator rather than !=, since NULL is neither equal nor not equal to NULL.
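The three-valued logic behind that footnote can be demonstrated directly:

```sql
-- NULL = NULL and NULL != NULL both evaluate to UNKNOWN, so neither
-- of these SELECTs returns a row:
SELECT 'equal'     WHERE NULL = NULL;
SELECT 'not equal' WHERE NULL != NULL;
-- Only IS NULL / IS NOT NULL test for NULL reliably:
SELECT 'is null'   WHERE NULL IS NULL;   -- returns one row
```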
I'm trying to figure out a way to update a record without having to list every column name that needs to be updated.
For instance, it would be nice if I could use something similar to the following:
// the parts inside braces are what I am trying to figure out
UPDATE Employee
SET {all columns, without listing each of them}
WITH {this record with id of '111' from other table}
WHERE employee_id = '100'
If this can be done, what would be the most straightforward/efficient way of writing such a query?
It's not possible.
What you're trying to do is not part of the SQL specification and is not supported by any database vendor. See the specifications of the SQL UPDATE statement for MySQL, PostgreSQL, MSSQL, Oracle, Firebird, Teradata. Every one of them supports only the syntax below:
UPDATE table_reference
SET column1 = {expression} [, column2 = {expression}] ...
[WHERE ...]
This is not possible as a plain UPDATE, but you can do it:
begin tran
delete from table where CONDITION
insert into table select * from EqualDesignTableToTable where CONDITION
commit tran
Be careful with identity fields.
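On that identity caveat: if the table has an IDENTITY column, the re-insert must preserve the original values explicitly, or new identities will be generated. A sketch with placeholder table and column names:

```sql
-- Sketch: delete-and-reinsert while preserving IDENTITY values.
-- Table and column names are placeholders.
BEGIN TRAN;
DELETE FROM dbo.TargetTable WHERE SomeCondition = 1;
SET IDENTITY_INSERT dbo.TargetTable ON;   -- allow explicit identity values
INSERT INTO dbo.TargetTable (Id, Col1, Col2)   -- explicit column list is required
SELECT Id, Col1, Col2
FROM dbo.SourceTable
WHERE SomeCondition = 1;
SET IDENTITY_INSERT dbo.TargetTable OFF;
COMMIT TRAN;
```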
Here's a hardcore way to do it with SQL SERVER. Carefully consider security and integrity before you try it, though.
This uses schema to get the names of all the columns and then puts together a big update statement to update all columns except ID column, which it uses to join the tables.
This only works for a single column key, not composites.
usage: EXEC UPDATE_ALL 'source_table','destination_table','id_column'
CREATE PROCEDURE UPDATE_ALL
@SOURCE VARCHAR(100),
@DEST VARCHAR(100),
@ID VARCHAR(100)
AS
DECLARE @SQL VARCHAR(MAX) =
'UPDATE D SET ' +
-- Google 'for xml path stuff'. This gets the rows from query results and
-- turns them into a comma separated list.
STUFF((SELECT ', D.'+ COLUMN_NAME + ' = S.' + COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @DEST
AND COLUMN_NAME <> @ID
FOR XML PATH('')),1,1,'')
+ ' FROM ' + @SOURCE + ' S JOIN ' + @DEST + ' D ON S.' + @ID + ' = D.' + @ID
--SELECT @SQL
EXEC (@SQL)
In Oracle PL/SQL, you can use the following syntax:
DECLARE
r my_table%ROWTYPE;
BEGIN
r.a := 1;
r.b := 2;
...
UPDATE my_table
SET ROW = r
WHERE id = r.id;
END;
Of course that just moves the burden from the UPDATE statement to the record construction, but you might already have fetched the record from somewhere.
How about using Merge?
https://technet.microsoft.com/en-us/library/bb522522(v=sql.105).aspx
It gives you the ability to run Insert, Update, and Delete. One other piece of advice is if you're going to be updating a large data set with indexes, and the source subset is smaller than your target but both tables are very large, move the changes to a temporary table first. I tried to merge two tables that were nearly two million rows each and 20 records took 22 minutes. Once I moved the deltas over to a temp table, it took seconds.
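A minimal sketch of such a MERGE for the Employee scenario from the question (OtherTable, col1 and col2 are placeholders; note you still list the columns once in the SET clause):

```sql
-- Sketch: apply row id '111' from another table onto employee '100'.
-- Column names (col1, col2) and OtherTable are placeholders.
MERGE INTO Employee AS target
USING (SELECT col1, col2 FROM OtherTable WHERE id = '111') AS source
    ON (target.employee_id = '100')
WHEN MATCHED THEN
    UPDATE SET col1 = source.col1,
               col2 = source.col2
WHEN NOT MATCHED THEN
    INSERT (employee_id, col1, col2)
    VALUES ('100', source.col1, source.col2);
```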
If you are using Oracle, you can use rowtype
declare
var_x TABLE_A%ROWTYPE;
Begin
select * into var_x
from TABLE_B where rownum = 1;
update TABLE_A set row = var_x
where ID = var_x.ID;
end;
/
given that TABLE_A and TABLE_B are of same schema
It is possible. As npe said, it's not standard practice. But if you really have to:
1. First a scalar function
CREATE FUNCTION [dte].[getCleanUpdateQuery] (@pTableName varchar(40), @pQueryFirstPart VARCHAR(200) = '', @pQueryLastPart VARCHAR(200) = '', @pIncludeCurVal BIT = 1)
RETURNS VARCHAR(8000) AS
BEGIN
DECLARE @pQuery VARCHAR(8000);
WITH cte_Temp
AS
(
SELECT
C.name
FROM SYS.COLUMNS AS C
INNER JOIN SYS.TABLES AS T ON T.object_id = C.object_id
WHERE T.name = @pTableName
)
SELECT @pQuery = (
CASE @pIncludeCurVal
WHEN 0 THEN
(
STUFF(
(SELECT ', ' + name + ' = ' + @pQueryFirstPart + @pQueryLastPart FROM cte_Temp FOR XML PATH('')), 1, 2, ''
)
)
ELSE
(
STUFF(
(SELECT ', ' + name + ' = ' + @pQueryFirstPart + name + @pQueryLastPart FROM cte_Temp FOR XML PATH('')), 1, 2, ''
)
) END)
RETURN 'UPDATE ' + @pTableName + ' SET ' + @pQuery
END
2. Use it like this
DECLARE @pQuery VARCHAR(8000) = dte.getCleanUpdateQuery(<your table name>, <query part before current value>, <query part after current value>, <1 if current value is used. 0 if updating everything to a static value>);
EXEC (@pQuery)
Example 1: make all employee columns 'Unknown' (you need to make sure the column type matches the intended value):
DECLARE @pQuery VARCHAR(8000) = dte.getCleanUpdateQuery('employee', '', 'Unknown', 0);
EXEC (@pQuery)
Example 2: remove an undesired text qualifier (e.g. #)
DECLARE @pQuery VARCHAR(8000) = dte.getCleanUpdateQuery('employee', 'REPLACE(', ', ''#'', '''')', 1);
EXEC (@pQuery)
This query can be improved; it's just the one I saved and sometimes use. You get the idea.
Similar to an upsert, you could check whether the item exists in the table; if so, delete it and insert it with the new values (technically updating it). But you would lose your rowid, if that is something sensitive in your case.
Behold, the updelsert:
IF NOT EXISTS (SELECT * FROM Employee WHERE ID = @SomeID)
INSERT INTO Employee VALUES(@SomeID, @Your, @Vals, @Here)
ELSE
BEGIN
DELETE FROM Employee WHERE ID = @SomeID
INSERT INTO Employee VALUES(@SomeID, @Your, @Vals, @Here)
END
You could do it by dropping the column from the table, then adding the column back with a default value of whatever you need it to be. Saving this change will require rebuilding the table.
I have written a trigger that sends email once a row INSERT is performed.
ALTER TRIGGER TR_SendMailOnDataRequest
ON DataRequest
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON;
DECLARE
@DR_Id INT,
@DR_FullName VARCHAR(200),
@DR_Email VARCHAR(200),
@DR_Phone VARCHAR(20),
@UT_Name VARCHAR(50),
@DR_UserTypeOther VARCHAR(50) = NULL,
@D_Name VARCHAR(200),
@DR_RequestDate DATETIME,
@UF_LinkedFiles VARCHAR(MAX),
@DRN_Names VARCHAR(200),
@DR_Description VARCHAR(1200),
@DR_CreatedOn DATETIME,
@analystMailList VARCHAR(MAX),
@tableHtml NVARCHAR(MAX),
@downloadLink VARCHAR(MAX) = N'NONE'
SELECT @DR_Id = MAX(DR_Id) FROM dbo.DataRequest
SELECT
@DR_FullName = DR_FullName,
@DR_Email = DR_Email,
@DR_Phone = DR_Phone,
@UT_Name = UT_Name,
@DR_UserTypeOther = DR_UserTypeOther,
@D_Name = D_Name,
@DR_RequestDate = DR_RequestDate,
@UF_LinkedFiles = UF_LinkedFiles,
@DRN_Names = DRN_Names,
@DR_Description = DR_Description,
@DR_CreatedOn = DR_CreatedOn
FROM
dbo.FN_GetDataRequest(@DR_Id)
SELECT @analystMailList = dbo.FN_GetAnalystsMailList()
IF (LEN(@UF_LinkedFiles) > 0)
BEGIN
SET @downloadLink = N'Downloads'
END
SET @tableHTML =
N'<H1>Data Request</H1>' +
N'<UL>' +
N'<LI>Full Name: ' + @UF_LinkedFiles + N'</LI>' +
N'<LI>Email: ' + @DR_Email + N'</LI>' +
N'<LI>Phone: ' + CAST(@DR_Phone AS VARCHAR(20)) + N'</LI>' +
N'<LI>User Type: ' + @UT_Name + N'</LI>' +
N'<LI>User Type Other: ' + COALESCE(@DR_UserTypeOther, N'NONE') + N'</LI>' +
N'<LI>Request Date: ' + CONVERT(VARCHAR(20), @DR_RequestDate, 107) + N'</LI>' +
N'<LI>Downloads: ' + @downloadLink + N'</LI>' +
N'</UL>';
BEGIN
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'Example',
@recipients = 'John Doe<jdoe@example>',
--@recipients = @analystMailList,
@reply_to = @DR_Email,
@subject = 'Email Test',
@body_format = 'HTML',
@body = @tableHtml
END
END
END
GO
The above trigger fires when a ROW INSERT operation occurs on the table DataRequest. After the insert, I take the IDENTITY value generated by the INSERT, use it as the foreign key, and INSERT the other values into a different table. Finally, I use the values from both tables to compose the email to be sent.
I wasn't getting the values from the other tables (e.g. @UF_LinkedFiles), so I realized that the TRIGGER fires just after the INSERT into the FIRST table but before the INSERT into the SECOND table; thus the values are not available when SENDING the EMAIL.
So how do I make sure the TRIGGER fires only after the SPROC that performs all the INSERT activity in multiple tables has completed its transaction?
Here is the table diagram -
Instead of using a trigger, I have included the EMAIL SENDING code in the SPROC where rows are being inserted.
Not sure if this is your case, because you don't explain the relationship between the tables. But I had a scenario where I tried to execute a SELECT during a series of inserts, and I couldn't find the row because the transaction hadn't finished yet.
What I did was create an additional table:
tblProgress
id integer,
fieldA integer,
fieldB integer,
fieldC integer
So if you have three tables TableA, TableB and TableC, each table will have one INSERT trigger that does some work and then touches tblProgress:
TableA creates a row;
TableB and TableC update it.
tblProgress then also has an AFTER UPDATE trigger, where you validate that all three fields have NOT NULL values.
When you have all three values, you can send the email.
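A sketch of that coordinating AFTER UPDATE trigger on tblProgress (the mail profile and recipient are placeholders):

```sql
-- Sketch: fire the email only once all three tables have reported in.
-- Profile name and recipient are placeholders.
CREATE TRIGGER TR_tblProgress_Complete
ON tblProgress
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS (SELECT 1
               FROM INSERTED
               WHERE fieldA IS NOT NULL
                 AND fieldB IS NOT NULL
                 AND fieldC IS NOT NULL)
    BEGIN
        -- All three inserts have completed; safe to send the email here
        EXEC msdb.dbo.sp_send_dbmail
            @profile_name = 'Example',
            @recipients   = 'someone@example.com',
            @subject      = 'All inserts finished',
            @body         = 'Data is ready.';
    END
END
```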
I would like to create a trigger based on a column, but only for those records whose value ends in _ess. How can I set up an audit trigger to do this?
Here is the current trigger. It records all changes to username, whereas I just want it to record changes where username is updated to or from a username ending in _ess.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TRIGGER [dbo].[AUDIT_UPD_HRPERSONS_USERNAME] ON [dbo].[HRPersons] FOR UPDATE NOT FOR REPLICATION As
BEGIN
DECLARE
@OperationNum int,
@DBMSTransaction VARCHAR(255),
@OSUSER VARCHAR(50),
@DBMSUSER VARCHAR(50),
@HostPhysicalAddress VARCHAR(17),
@contexto varchar(128),
@ApplicationModifierUser varchar(50),
@SessionInfo_OSUser varchar(50),
@HostLogicalAddress varchar(30)
Set NOCOUNT On
IF @@trancount>0
BEGIN
EXECUTE sp_getbindtoken @DBMSTransaction OUTPUT
END
ELSE BEGIN
SET @DBMSTransaction = NULL
END
IF PatIndex( '%\%',SUSER_SNAME()) > 0
BEGIN
set @OSUSER = SUSER_SNAME()
set @DBMSUSER = NULL
END
ELSE BEGIN
SET @OSUSER = NULL
SET @DBMSUSER = SUSER_SNAME()
END
set @HostPhysicalAddress = (SELECT net_address FROM master..sysprocesses where spid=@@spid )
set @HostPhysicalAddress = substring (@HostPhysicalAddress,1,2) + '-' + substring (@HostPhysicalAddress,3,2) + '-' + substring (@HostPhysicalAddress,5,2) + '-' + substring (@HostPhysicalAddress,7,2) + '-' + substring (@HostPhysicalAddress,9,2) + '-' + substring (@HostPhysicalAddress,11,2)
SELECT @contexto=CAST(context_info AS varchar(128)) FROM master..sysprocesses WHERE spid=@@SPID
IF (PatIndex( '%APPLICATION_USER=%',@contexto) is not null) and (PatIndex( '%APPLICATION_USER=%',@contexto) > 0)
set @ApplicationModifierUser=substring(ltrim(substring(@contexto,PatIndex( '%APPLICATION_USER=%',@contexto)+17,128)),1, charIndex( '///',ltrim(substring(@contexto,PatIndex( '%APPLICATION_USER=%',@contexto)+17,128) ) ) - 1 )
ELSE
set @ApplicationModifierUser=NULL
IF (PatIndex( '%OS_USER=%',@contexto) is not null) and ( PatIndex( '%OS_USER=%',@contexto)>0 )
set @SessionInfo_OSUser=substring(ltrim(substring(@contexto,PatIndex( '%OS_USER=%',@contexto)+8,128)),1, charIndex( '///',ltrim(substring(@contexto,PatIndex( '%OS_USER=%',@contexto)+8,128) ) ) - 1 )
ELSE
set @SessionInfo_OSUser=NULL
IF (PatIndex( '%LOGICAL_ADDRESS=%',@contexto) is not null) and (PatIndex( '%LOGICAL_ADDRESS=%',@contexto)>0)
set @HostLogicalAddress=substring(ltrim(substring(@contexto,PatIndex( '%LOGICAL_ADDRESS=%',@contexto)+16,128)),1, charIndex( '///',ltrim(substring(@contexto,PatIndex( '%LOGICAL_ADDRESS=%',@contexto)+16,128) ) ) - 1 )
ELSE
set @HostLogicalAddress=NULL
INSERT INTO AuditedOperations ( Application, Object, OperationType, ModifiedDate, ApplicationModifierUser, OSModifierUser, DBMSModifierUser, Host, HostLogicalAddress, HostPhysicalAddress, DBMSTransaction)
VALUES (APP_NAME(), 'HRPERSONS', 'U', GETDATE(), @ApplicationModifierUser, @OSUSER, @DBMSUSER, HOST_NAME(), @HostLogicalAddress, @HostPhysicalAddress, @DBMSTransaction)
Set @OperationNum = @@IDENTITY
INSERT INTO AuditedRows (OperationNum, RowPK)
SELECT @OperationNum, ISNULL(CAST(INSERTED.ID as nvarchar),CAST(DELETED.ID as nvarchar))
FROM INSERTED FULL OUTER JOIN DELETED ON INSERTED.ID=DELETED.ID
INSERT INTO AuditedRowsColumns (OperationNum, RowPK, ColumnName, ColumnAudReg, OldValue, NewValue)
SELECT @OperationNum, ISNULL(CAST(INSERTED.ID as nvarchar),CAST(DELETED.ID as nvarchar)), 'USERNAME','A', CONVERT( VARCHAR(3500),DELETED.USERNAME), CONVERT( VARCHAR(3500),INSERTED.USERNAME)
FROM INSERTED FULL OUTER JOIN DELETED ON INSERTED.ID=DELETED.ID
END
GO
Just add this:
INSERT INTO AuditedRows (OperationNum, RowPK)
SELECT @OperationNum, ISNULL(CAST(INSERTED.ID as nvarchar),CAST(DELETED.ID as nvarchar))
FROM INSERTED FULL OUTER JOIN DELETED ON INSERTED.ID=DELETED.ID
-- Restrict it to only those where the username is changing from or to %_ess
WHERE (deleted.username like '%_ess' or inserted.username like '%_ess')
INSERT INTO AuditedRowsColumns (OperationNum, RowPK, ColumnName, ColumnAudReg, OldValue, NewValue)
SELECT @OperationNum, ISNULL(CAST(INSERTED.ID as nvarchar),CAST(DELETED.ID as nvarchar)), 'USERNAME','A', CONVERT( VARCHAR(3500),DELETED.USERNAME), CONVERT( VARCHAR(3500),INSERTED.USERNAME)
FROM INSERTED FULL OUTER JOIN DELETED ON INSERTED.ID=DELETED.ID
-- Restrict it to only those where the username is changing from or to %_ess
WHERE (deleted.username like '%_ess' or inserted.username like '%_ess')
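One caveat worth noting: _ is itself a single-character wildcard in LIKE, so '%_ess' also matches values like 'princess'. If the intent is a literal underscore before "ess", escape it:

```sql
-- '[_]' matches only a literal underscore, not "any single character":
WHERE (deleted.username LIKE '%[_]ess' OR inserted.username LIKE '%[_]ess')
-- equivalently, with an explicit escape character:
WHERE (deleted.username LIKE '%\_ess' ESCAPE '\'
    OR inserted.username LIKE '%\_ess' ESCAPE '\')
```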