SQL Statement from DML Trigger

How can I find out which SQL statement fired a trigger for SELECT, INSERT, UPDATE, and DELETE on a table?

As Jonas says, Profiler is your best option (and the only option for SELECT queries). For INSERTs, UPDATEs and DELETEs, the closest you can get without Profiler may be to look at the input buffer via DBCC INPUTBUFFER(@@SPID). This will only work for ad-hoc language events, not RPC calls, and will only show you the first 256 characters of the SQL statement (depending on version, I believe). Some example code (run as dbo):
CREATE TABLE TBL (a int, b varchar(50))
go
INSERT INTO TBL SELECT 1,'hello'
INSERT INTO TBL SELECT 2,'goodbye'
go
GRANT SELECT, UPDATE ON TBL TO guest
go
CREATE TABLE AUDIT ( audittime datetime default(getdate())
, targettable sysname
, loginname sysname
, spid int
, sqltext nvarchar(max))
go
CREATE TRIGGER TR_TBL ON TBL FOR INSERT, UPDATE, DELETE
AS BEGIN
CREATE TABLE #DBCC (EventType varchar(50), Parameters varchar(50), EventInfo nvarchar(max))
INSERT INTO #DBCC
EXEC ('DBCC INPUTBUFFER(@@SPID)')
INSERT INTO AUDIT (targettable, loginname, spid, sqltext)
SELECT targettable = 'TBL'
, loginname = suser_name()
, spid = @@SPID
, sqltext = EventInfo
FROM #DBCC
END
GO
/* Test the Audit Trigger (can be run as guest) */
UPDATE TBL SET a = 3 WHERE a = 2

First, there are no SELECT DML triggers; triggers only fire on INSERT, UPDATE, or DELETE.
Secondly, you can't know which SQL statement caused the trigger to fire (at least not from within the trigger itself). However, you can use Profiler to debug what's happening in the database. There's a decent explanation of this here.
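On newer versions of SQL Server, an Extended Events session is a lighter-weight alternative to Profiler for capturing the statements (including SELECTs) that touch a table. A hedged sketch, assuming SQL Server 2012+ and the TBL table from the example above; the session name and output file name are made up:

```sql
-- Capture completed statements whose text mentions TBL,
-- along with the session id and login that ran them.
CREATE EVENT SESSION [audit_tbl_statements] ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.session_id, sqlserver.username)
    WHERE sqlserver.like_i_sql_unicode_string(sqlserver.sql_text, N'%TBL%'))
ADD TARGET package0.event_file (SET filename = N'audit_tbl.xel');
GO
ALTER EVENT SESSION [audit_tbl_statements] ON SERVER STATE = START;
```

The captured events can then be read back from the .xel file with sys.fn_xe_file_target_read_file.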

Related

How do I update triggers across multiple databases?

I have a query that selects the databases from sys.databases that contain the triggers I wish to update. From there I can create a cursor. However, when I go into the cursor to update my triggers using a dynamic database name @DatabaseExecuteName that is set to MyDatabaseName.dbo, I receive the error "'CREATE/ALTER TRIGGER' does not allow specifying the database name as a prefix to the object name." Because I am in a cursor, I am not able to execute a USE MyDatabaseName ... GO; the GO statement is not allowed inside the cursor. I have tried SQLCMD mode with :setvar DatabaseName "MyDatabaseName" and USE [$(DatabaseName)]; to try to set the database. I feel I am very close, however my strength is not SQL queries. I could use some assistance on what I am missing.
You can nest EXEC calls so that you can issue a USE and then execute a further statement, and you don't need GO to separate the batches. This is a complete script to demonstrate the technique:
create database DB1
go
create database DB2
go
use DB2
go
create table T1 (ID int not null)
go
create table T2 (ID int not null)
go
use DB1
go
exec('use DB2; exec(''create trigger T_T on T1 after insert as
insert into T2(ID) select i.ID from inserted i'')');
select DB_NAME()
insert into DB2..T1(ID) values (1),(2);
select * from DB2..T2
Which then shows that this connection is still in the DB1 database, but the trigger was successfully created on the T1 table within the DB2 database.
What you have to watch for is getting your quote-escaping correct.
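As a hedged illustration of that quote-escaping pitfall (the second trigger and its body here are made up, not part of the original script): each level of EXEC nesting doubles the single quotes, so a string literal two levels deep needs its quotes quadrupled at the outermost level.

```sql
-- Outer level: one pair of quotes delimits the whole string.
-- Inner EXEC: its string is delimited by '' (doubled once).
-- The literal inside the trigger body uses '''' (doubled twice).
DECLARE @sql nvarchar(max) = N'use DB2; exec(''
    create trigger T_T2 on T1 after delete as
        print ''''row deleted from T1'''' '')';
EXEC (@sql);
```

Unwinding it: the outer parser turns each '' into ', handing the inner EXEC a string in which each '''' has become '', which that level in turn reads as a literal quote.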

SQL Server trigger: identify specific Update statement use

I need to use a series of relatively simple update statements on a large table, for example as below:
UPDATE Table1
SET Col1 = 'A'
WHERE Col2 = '1'
UPDATE Table1
SET Col1 = 'A'
WHERE Col3 = 'X'
UPDATE Table1
SET Col1 = 'B'
WHERE Col2 = '2'
I am using a trigger to track which records are updated. How would I go about identifying which specific update statement had resulted in the update in the table output from the trigger?
Would it be possible to reference a variable set next to the update statement in the trigger script?
Sometimes you may want to find out the exact statement that updated your table, or what the WHERE clause of a DELETE statement (executed by someone) looked like.
DBCC INPUTBUFFER can provide you with this kind of information. You can create a trigger on your table, that uses DBCC INPUTBUFFER command to find out the exact command that caused the trigger to fire.
The following trigger code works in SQL Server 2000. (In SQL Server 7.0, you can't create tables inside a trigger, so you'll have to create a permanent table beforehand and use that inside the trigger.) This code only displays the SQL statement, login name, user name and current time, but you can alter it so that this information gets logged in a table for tracking/auditing purposes.
CREATE TRIGGER TriggerName
ON TableName
FOR INSERT, UPDATE, DELETE AS
BEGIN
SET NOCOUNT ON
DECLARE @ExecStr varchar(50), @Qry nvarchar(255)
CREATE TABLE #inputbuffer
(
EventType nvarchar(30),
Parameters int,
EventInfo nvarchar(255)
)
SET @ExecStr = 'DBCC INPUTBUFFER(' + STR(@@SPID) + ')'
INSERT INTO #inputbuffer
EXEC (@ExecStr)
SET @Qry = (SELECT EventInfo FROM #inputbuffer)
SELECT @Qry AS 'Query that fired the trigger',
SYSTEM_USER as LoginName,
USER AS UserName,
CURRENT_TIMESTAMP AS CurrentTime
END
In the above code, replace TriggerName and TableName with your trigger name and table name respectively; you can test the trigger by creating it first and then inserting/updating/deleting data.
Taken from here!
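To the original question of referencing a variable set next to each UPDATE: rather than parsing the input buffer, one hedged alternative (SQL Server 2016+; the key name and label values are assumptions for illustration) is to tag each statement with sp_set_session_context and read the tag back inside the trigger:

```sql
-- Before each UPDATE, label the statement about to run:
EXEC sp_set_session_context @key = N'audit_tag', @value = N'update-col2-1';
UPDATE Table1 SET Col1 = 'A' WHERE Col2 = '1';

EXEC sp_set_session_context @key = N'audit_tag', @value = N'update-col3-X';
UPDATE Table1 SET Col1 = 'A' WHERE Col3 = 'X';

-- Inside the trigger, recover the label into the audit output:
-- SELECT CAST(SESSION_CONTEXT(N'audit_tag') AS nvarchar(128));
```

On versions older than 2016, SET CONTEXT_INFO serves the same purpose, though it only carries 128 bytes of binary data.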

From within a TSQL block, can I retrieve the originating SQL statement?

I am wondering if I can retrieve the original SQL statement which fired of a particular SQL block.
Say I have a table with an AFTER INSERT, UPDATE trigger on it. From within the trigger, I would like to get the full text of the original INSERT or UPDATE statement that fired the trigger.
Is this possible? Mainly I want to be able to do this for logging/debugging purposes.
I haven't tried to do something like this in a trigger (nor would I necessarily) but you might try something like this.
select top 100
q.[text]
from sys.dm_exec_requests r
outer apply sys.dm_exec_sql_text(r.sql_handle) q
where r.session_id = @@spid
Great question! I use EVENTDATA all the time with DDL triggers, but hadn't thought about what to use for DML triggers. I think this is what you're looking for. It's definitely going into my toolbox!
Please note that the code below is for demonstration only. Returning output from a trigger is deprecated. In practice, you'd insert the output of the dbcc command into a log table.
if schema_id(N'log') is null execute (N'create schema log');
go
if object_id(N'[log].[data]'
, N'U') is not null
drop table [log].[data];
go
create table [log].[data] (
[id] [int] identity(1, 1)
, [flower] [sysname]);
go
if object_id(N'[log].[get_log_dml]', N'TR') is not null
drop trigger [log].[get_log_dml];
go
create trigger [get_log_dml] on [log].[data]
after insert, update
as
declare @dbcc table (
[event_type] [sysname]
, [parameters] [int]
, [event_info] [nvarchar](max)
);
select *
from inserted;
insert into @dbcc
([event_type],[parameters],[event_info])
execute (N'dbcc inputbuffer(@@spid)');
select [event_type]
, [parameters]
, [event_info]
from @dbcc;
go
insert into [log].[data]
([flower])
values (N'rose');

TSQL implementing double check locking

I have an arbitrary stored procedure usp_DoubleCheckLockInsert that does an INSERT for multiple clients, and I want to give the stored procedure exclusive write access to a table SomeTable while it is within the critical section between the Begin lock and End lock comments.
CREATE PROCEDURE usp_DoubleCheckLockInsert
@Id INT
,@SomeValue INT
AS
BEGIN
IF (EXISTS(SELECT 1 FROM SomeTable WHERE Id = @Id AND SomeValue = @SomeValue)) RETURN
BEGIN TRAN
--Begin lock
IF (EXISTS(SELECT 1 FROM SomeTable WHERE Id = @Id AND SomeValue = @SomeValue)) ROLLBACK
INSERT INTO SomeTable(Id, SomeValue)
VALUES(@Id,@SomeValue);
--End lock
COMMIT
END
I have seen how isolation levels relate to updates, but is there a way to implement locking in the critical section, giving the transaction the write lock, or does TSQL not work this way?
Obtain Update Table Lock at start of Stored Procedure in SQL Server
A second approach, which works for me, is to combine the INSERT and the SELECT into a single operation.
This index is needed only for querying SomeTable efficiently. Note that there is NOT a uniqueness constraint; however, if I were taking this approach, I would actually make the index unique.
CREATE INDEX [IX_SomeTable_Id_SomeValue_IsDelete] ON [dbo].[SomeTable]
(
[Id] ASC,
[SomeValue] ASC,
[IsDelete] ASC
)
The stored proc, which combines the INSERT/ SELECT operations:
CREATE PROCEDURE [dbo].[usp_DoubleCheckLockInsert]
@Id INT
,@SomeValue INT
,@IsDelete bit
AS
BEGIN
-- Don't allow dirty reads
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
BEGIN TRAN
-- insert only if data not existing
INSERT INTO dbo.SomeTable(Id, SomeValue, IsDelete)
SELECT @Id, @SomeValue, @IsDelete
where not exists (
select * from dbo.SomeTable WITH (HOLDLOCK, UPDLOCK)
where Id = @Id
and SomeValue = @SomeValue
and IsDelete = @IsDelete)
COMMIT
END
I did try this approach using multiple processes to insert data (I admit, though, that I didn't exactly put a lot of stress on SQL Server). There were never any duplicates or failed inserts.
It seems all you are trying to do is prevent duplicate rows from being inserted. You can do this by adding a unique index with the option IGNORE_DUP_KEY = ON:
CREATE UNIQUE INDEX [IX_SomeTable_Id_SomeValue_IsDelete]
ON [dbo].[SomeTable]
(
[Id] ASC,
[SomeValue] ASC,
[IsDelete] ASC
) WITH (IGNORE_DUP_KEY = ON)
Any inserts with duplicate keys will be ignored by SQL Server. Running the following:
INSERT INTO [dbo].[SomeTable] ([Id],[SomeValue],[IsDelete])
VALUES(0,0,0)
INSERT INTO [dbo].[SomeTable] ([Id],[SomeValue],[IsDelete])
VALUES(1,1,0)
INSERT INTO [dbo].[SomeTable] ([Id],[SomeValue],[IsDelete])
VALUES(2,2,0)
INSERT INTO [dbo].[SomeTable] ([Id],[SomeValue],[IsDelete])
VALUES(0,0,0)
Results in:
(1 row(s) affected)
(1 row(s) affected)
(1 row(s) affected)
Duplicate key was ignored.
(0 row(s) affected)
I did not test the above using multiple processes (threads), but the results in that case should be the same - SQL Server should still ignore any duplicates, no matter which thread is attempting the insert.
See also Index Options at MSDN.
I think I may not understand the question but why couldn't you do this:
begin tran
if ( not exists ( select 1 from SomeTable where Id = @ID and SomeValue = @SomeValue ) )
insert into SomeTable ( Id, SomeValue ) values ( @ID, @SomeValue )
commit
Yes, you have a transaction every time you do this, but as long as you are fast that shouldn't be a problem.
I have a feeling I'm not understanding the question.
Jeff.
As soon as you start overriding SQL Server's preferred lock management, you take the burden on yourself. But if you're certain this is what you need, update your proc to select into a test variable and replace your EXISTS check with that variable. When you query the variable, take an exclusive table lock, and the table is yours till you're done.
CREATE PROCEDURE usp_DoubleCheckLockInsert
@Id INT
,@SomeValue INT
AS
BEGIN
IF (EXISTS(SELECT 1 FROM SomeTable WHERE Id = @Id AND SomeValue = @SomeValue)) RETURN
BEGIN TRAN
--Begin lock
DECLARE @tId INT
-- You already checked and the record doesn't exist, so lock the table
SELECT @tId = Id
FROM SomeTable WITH (TABLOCKX)
WHERE Id = @Id AND SomeValue = @SomeValue
IF @tId IS NULL
BEGIN
-- no one snuck in between the first and second checks, so insert
INSERT INTO SomeTable(Id, SomeValue)
VALUES(@Id,@SomeValue);
--End lock
END
-- always close the transaction so the lock is released
COMMIT
END
If you execute this as a query, but don't hit the commit, then try selecting from the table from a different context, you will sit and wait till the commit is enacted.
Romoku, the answers you're getting are basically right, except that you don't even need BEGIN TRAN, and you don't need to worry about isolation levels.
All you need is a simple insert ... select ... where not exists (select ...) as suggested by Jeff B and Chue X.
Your concerns about concurrency ("I'm talking about concurrency and your answer will not work.") reveal a profound misunderstanding of how SQL works.
SQL INSERT is atomic. You don't have to lock the table; that's what the DBMS does for you.
Instead of offering a bounty for misbegotten questions based on erroneous preconceived notions -- and then summarily dismissing right answers as wrong -- I recommend sitting down with a good book. On SQL. I can suggest some titles if you like.

Updating a Table after some values are inserted into it in SQL Server 2008

I am trying to write a stored procedure in SQL Server 2008 which updates a table after some values are inserted into it.
My stored procedure takes the values from a DMV and stores them in a table. In the same procedure, after the insert query, I have written an update query for the same table.
The insert results are populated fine, but the results of the updates are getting lost.
But when the stored procedure does only the inserts and I execute the update query manually, everything is fine.
Why is it happening like this?
There should not be a problem with this.
The code below works as expected.
create procedure dbo.test
as
begin
create table #temp (
name varchar(100) ,
id int
)
insert #temp
select name ,
id
from master..sysobjects
update #temp
set name='ALL Same'
from #temp
select * from #temp
drop table #temp
end
go
The best approach is to use a trigger; a sample AFTER UPDATE trigger is below:
ALTER TRIGGER [dbo].[tr_MyTriggerName]
ON [dbo].[MyTableName] AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON;
-- if MyColumnName is updated then do..
IF UPDATE (MyColumnName)
BEGIN
UPDATE MyTableName
SET AnotherColumnInMyTable = someValue
FROM MyTableName
INNER JOIN Inserted
ON MyTableName.PrimaryKeyColumn = Inserted.PrimaryKeyColumn
END
END