Get job that ran SQL query on UPDATE trigger

I am trying to create an audit trail for actions that are performed within a web application, by SQL Server Agent jobs, and by manually run queries against the database. I am using triggers to catch updates, inserts and deletes on certain tables.
On the whole, this process is working. For example, a user performs an update in the web application and the trigger writes the updated data to an audit trail table I have defined, including the username of the person who performed the action. This works fine from the web application or manual query perspective, but we also have dozens of SQL Server Agent jobs, and I would like to capture which one ran a specific query. Each of the agent jobs runs under the same username. The trigger works here too and inserts that username correctly into the table, but I can't tell which job issued the query.
My current "solution" was to find which jobs are currently running at the time of the trigger, as one of them must be the correct one. Using:
CREATE TABLE #xp_results
(
job_id UNIQUEIDENTIFIER NOT NULL,
last_run_date INT NOT NULL,
last_run_time INT NOT NULL,
next_run_date INT NOT NULL,
next_run_time INT NOT NULL,
next_run_schedule_id INT NOT NULL,
requested_to_run INT NOT NULL, -- BOOL
request_source INT NOT NULL,
request_source_id sysname COLLATE database_default NULL,
running INT NOT NULL, -- BOOL
current_step INT NOT NULL,
current_retry_attempt INT NOT NULL,
job_state INT NOT NULL
)
INSERT INTO #xp_results
EXECUTE master.dbo.xp_sqlagent_enum_jobs 1, 'sa'
DECLARE @runningJobs NVARCHAR(MAX); -- declared here so the snippet runs stand-alone
SELECT @runningJobs = STUFF((SELECT ',' + j.name
FROM #xp_results r
INNER JOIN msdb..sysjobs j ON r.job_id = j.job_id
WHERE running = 1
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '')
DROP TABLE #xp_results
I ran a specific job to test, and it seems to work in that any OTHER job which is running will be listed in @runningJobs, but it doesn't record the job that fired the trigger itself. I assume that by the time the trigger runs, the job has finished.
Is there a way I can find out what job calls the query that kicks off the trigger?
EDIT: I tried changing the SELECT query above to get any job that ran within the past 2 mins or is currently running. The SQL query is now:
SELECT @runningJobs = STUFF((SELECT ',' + j.name
FROM #xp_results r
INNER JOIN msdb..sysjobs j ON r.job_id = j.job_id
WHERE (last_run_date = CAST(REPLACE(LEFT(CONVERT(VARCHAR, getdate(), 120), 10), '-', '') AS INT)
AND last_run_time > CAST(REPLACE(LEFT(CONVERT(VARCHAR,getdate(),108), 8), ':', '') AS INT) - 200)
OR running = 1
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '')
When I run a job, then run the above query while the job is running, the correct jobs are returned. But when the SSIS package is run, whether via the SQL Server Agent job or manually in SSIS, @runningJobs is not populated and just returns NULL.
So I am now thinking it is a problem with permissions of SSIS and master.dbo.xp_sqlagent_enum_jobs. Any other ideas?
EDIT #2: Actually, I don't think it is a permissions error. There is an INSERT statement below this code; if it were a permissions error, the INSERT statement would not run and the audit line would not be added to the database. Since a line IS added to the database, just without the runningJobs field populated, permissions don't seem to be the problem. Strange times.
EDIT #3: I just want to clarify, I am searching for a solution which DOES NOT require me to go into each job and change anything. There are too many jobs to make this a feasible solution.

WORKING CODE IS IN FIRST EDIT - (anothershrubery)
Use the app_name() function http://msdn.microsoft.com/en-us/library/ms189770.aspx in your audit trigger to get the name of the app running the query.
For SQL Agent jobs, app_name() includes the job id and step number in the application name (if it is a T-SQL step). We do this in our audit triggers and it works great. An example of the app_name() result when read from within an audit trigger:
SQLAgent - TSQL JobStep (Job 0x96EB56A24786964889AB504D9A920D30 : Step 1)
This job can be looked up via the job_id column in msdb.dbo.sysjobs_view.
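For example, here is a minimal sketch of resolving that name inside a trigger. It assumes the exact "Job 0x... : Step n" format shown above, so treat the string parsing as fragile:
DECLARE @app nvarchar(128) = APP_NAME();
DECLARE @jobName sysname;

IF @app LIKE 'SQLAgent - TSQL JobStep (Job 0x%'
BEGIN
    -- The 34 characters starting at '0x' are the job_id rendered as binary;
    -- CONVERT style 1 turns the '0x...' string back into varbinary.
    DECLARE @jobIdHex varchar(34) = SUBSTRING(@app, CHARINDEX('0x', @app), 34);

    SELECT @jobName = j.name
    FROM msdb.dbo.sysjobs_view AS j
    WHERE j.job_id = CONVERT(uniqueidentifier, CONVERT(varbinary(16), @jobIdHex, 1));
END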
Since SSIS packages initiate the SQL connection outside of the SQL Agent job engine, those connections will have their own application name, so you would need to set the application name within the connection strings of the SSIS packages. In SSIS packages, web apps, WinForms, or any client that connects to SQL Server, you can set the value returned by the app_name() function by putting this in your connection string:
"Application Name=MyAppNameGoesHere;"
http://www.connectionstrings.com/use-application-name-sql-server/
If the "Application Name" is not set within a .NET connection string, then the default value when using the System.Data.SqlClient.SqlConnection is ".Net SqlClient Data Provider".
Some other fields that are commonly used for auditing:
HOST_NAME(): http://technet.microsoft.com/en-us/library/ms178598.aspx Returns the name of the client computer that is connecting. This is helpful if you have an intranet app.
CONNECTIONPROPERTY('client_net_address'): For getting the client IP address ('local_net_address' would return the server's own address instead).
CONTEXT_INFO(): http://technet.microsoft.com/en-us/library/ms187768.aspx You can use this to store information for the duration of the connection/session. Context_Info is a binary 128 byte field, so you might need to do conversions to/from strings when using it.
Here are SQL helper methods for setting/getting context info:
CREATE PROC dbo.usp_ContextInfo_SET
    @val varchar(128)
as
begin
    set nocount on;
    DECLARE @c varbinary(128);
    SET @c = cast(@val as varbinary(128));
    SET CONTEXT_INFO @c;
end
GO
CREATE FUNCTION [dbo].[ufn_ContextInfo_Get] ()
RETURNS varchar(128)
AS
BEGIN
    -- context_info is a binary data type, so it pads values with CHAR(0) out to
    -- the full 128 bytes; those padding bytes need to be replaced with empty string.
    RETURN REPLACE(CAST(CONTEXT_INFO() AS varchar(128)), CHAR(0), '')
END
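A hedged usage sketch built on the two helpers above (the job name is illustrative): the first T-SQL step of a job tags the session, and the audit trigger reads the tag back:
-- In a T-SQL job step, before the statements you want attributed:
EXEC dbo.usp_ContextInfo_SET 'Nightly Import Job'; -- illustrative job name

-- Inside the audit trigger, when building the audit row:
DECLARE @source varchar(128) = dbo.ufn_ContextInfo_Get();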
EDIT:
The app_name() is the preferred way to identify the application that issued the query; however, since you do not want to update any of the SSIS packages, here is an updated query to get currently executing jobs using the following documented SQL Agent tables. You may have to adjust the GRANTs for SELECT on these tables in the msdb database in order for the query to succeed, or create a view using this query and adjust the grants on that view.
msdb.dbo.sysjobactivity http://msdn.microsoft.com/en-us/library/ms190484.aspx
msdb.dbo.syssessions http://msdn.microsoft.com/en-us/library/ms175016.aspx
msdb.dbo.sysjobs http://msdn.microsoft.com/en-us/library/ms189817.aspx
msdb.dbo.sysjobhistory http://msdn.microsoft.com/en-us/library/ms174997.aspx
Query:
;with cteSessions as
(
--each time that SQL Agent is started, a new record is added to this table.
--The most recent session is the current session, and prior sessions can be used
--to identify the job state at the time that SQL Agent is restarted or stopped unexpectedly
select top 1 s.session_id
from msdb.dbo.syssessions s
order by s.agent_start_date desc
)
SELECT runningJobs =
STUFF(
( SELECT N', [' + j.name + N']'
FROM msdb.dbo.sysjobactivity a
inner join cteSessions s on s.session_id = a.session_id
inner join msdb.dbo.sysjobs j on a.job_id = j.job_id
left join msdb.dbo.sysjobhistory h2 on h2.instance_id = a.job_history_id
WHERE
--currently executing jobs:
h2.instance_id is null
AND a.start_execution_date is not null
AND a.stop_execution_date is null
ORDER BY j.name
FOR XML PATH(''), ROOT('root'), TYPE
).query('root').value('.', 'nvarchar(max)') --convert the xml to nvarchar(max)
, 1, 2, '') -- replace the leading comma and space with empty string.
;
EDIT #2:
Also, if you are on SQL 2012 or higher, then check out the SSISDB.catalog.executions view http://msdn.microsoft.com/en-us/library/ff878089(v=sql.110).aspx to get the list of currently running SSIS packages, regardless of whether they were started from within a scheduled job. I have not seen an equivalent view in SQL Server versions prior to 2012.
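For instance, a minimal sketch against that view (assuming the documented columns; a status of 2 means the execution is currently running):
SELECT e.execution_id,
       e.folder_name,
       e.project_name,
       e.package_name,
       e.start_time
FROM SSISDB.catalog.executions AS e
WHERE e.status = 2; -- 2 = running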

I would add an extra column to your table, e.g. Update_Source, and get all the source apps (including SSIS) to set it when they update the table.
You could use USER as a DEFAULT for that column to minimize the changes needed.
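A minimal sketch of that idea (the table name dbo.AuditTrail and the constraint name are hypothetical):
-- Callers that know their identity set Update_Source explicitly;
-- anything else falls back to the USER default.
ALTER TABLE dbo.AuditTrail
    ADD Update_Source nvarchar(128) NOT NULL
        CONSTRAINT DF_AuditTrail_Update_Source DEFAULT (USER);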

You could try using CONTEXT_INFO
Try adding a T-SQL step with SET CONTEXT_INFO 'A Job' into your job
Then try reading that in your trigger using sys.dm_exec_sessions
I'm curious to see if it works - please post your findings.
http://msdn.microsoft.com/en-us/library/ms187768(v=sql.105).aspx
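A rough sketch of this suggestion (note that SET CONTEXT_INFO takes a binary value, so the string must be cast first; reading it back via sys.dm_exec_sessions is one way to do what the answer describes):
-- In a T-SQL job step, tag the session:
DECLARE @ctx varbinary(128) = CAST('A Job' AS varbinary(128));
SET CONTEXT_INFO @ctx;

-- In the trigger, read the tag back for the current session:
DECLARE @tag varchar(128);
SELECT @tag = REPLACE(CAST(context_info AS varchar(128)), CHAR(0), '')
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;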

SSIS Job, passing the value in job steps

I have an SSIS job with multiple steps; each step runs a different package or project.
When I execute the job, I write a log row to the database with a GUID, but each GUID is created within its own package or project.
I need the same value across all the steps, to link them together with one value/ID.
Any suggestions on how to do this?
SQL Agent jobs do not allow values to be passed into job steps.
But you can rethink how your current invocation of SSIS packages works to meet your goals.
What if you added a precursor step to your SQL Agent job? Make it a T-SQL step and use it to generate the GUID you'd like your packages to share. Store it in either a one-row table or a key/value style historical table.
CREATE TABLE dbo.CorrelateAgentToSSIS
(
jobid uniqueidentifier NOT NULL
, runid uniqueidentifier NOT NULL
, insert_date datetime NOT NULL CONSTRAINT DF__CorrelateAgentToSSIS__insert_date DEFAULT (GETDATE())
);
Three columns there. The first will be the guid an instance of SQL Server Agent generates for the job. The second column is your tracking guid; the third simply records when the row was inserted.
Step 0 would look something like
declare @jobid uniqueidentifier = CONVERT(uniqueidentifier, $(ESCAPE_NONE(JOBID)))
-- Populate this however it needs to be done
, @myguid uniqueidentifier = newid()
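The answer leaves the actual insert implicit; a one-statement completion of step 0 might look like this (a sketch, not from the original answer):
-- Record the correlation row that later job steps look up by job id.
INSERT INTO dbo.CorrelateAgentToSSIS (jobid, runid)
VALUES (@jobid, @myguid);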
Your job steps for SSIS will change a bit. Instead of using the native SSIS job step type, you're going to use the T-SQL type and do something like this.
DECLARE @execution_id bigint
      , @jobid uniqueidentifier = CONVERT(uniqueidentifier, $(ESCAPE_NONE(JOBID)));
DECLARE @runid uniqueidentifier = (SELECT TOP 1 runid FROM dbo.CorrelateAgentToSSIS AS CATS WHERE CATS.jobid = @jobid);

EXEC SSISDB.catalog.create_execution
    @package_name = N'SomePackage.dtsx'
  , @execution_id = @execution_id OUTPUT
  , @folder_name = N'MyFolder'
  , @project_name = N'MyProject'
  , @use32bitruntime = False
  , @reference_id = NULL;

-- ddl left as exercise to the reader
INSERT INTO dbo.RunToSSIS
SELECT
    @runid
  , @execution_id;

DECLARE @var0 smallint = 1;
EXEC SSISDB.catalog.set_execution_parameter_value
    @execution_id
  , @object_type = 50
  , @parameter_name = N'LOGGING_LEVEL'
  , @parameter_value = @var0;

-- This assumes you have a parameter defined in SSIS packages to receive the
-- runid guid
EXEC SSISDB.catalog.set_execution_parameter_value
    @execution_id
  , @object_type = 50
  , @parameter_name = N'RUN_ID'
  , @parameter_value = @runid;

EXEC SSISDB.catalog.start_execution
    @execution_id;
GO
Finally, while you're collecting metrics, you might also want to think about linking a job run to the data the packages record in the SSISDB. You can bridge that gap by recording the jobid/runid against the bigint execution_id. If you're running packages from the SSISDB, you can plumb in the system variable ServerExecutionID. I do this in the first step of every package with an Execute SQL Task. In packages run from Visual Studio, the value is 0; otherwise, it's the value you see in SSISDB.catalog.operations. Knowing those three things will allow you to see how the Agent job did, correlate it to your custom guid and whatever metrics you collect, and pull apart performance data from the SSIS catalog.
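As a sketch, the Execute SQL Task statement for that first step could be as simple as the following (dbo.PackageRunLog is a hypothetical table; the ? placeholder would be mapped to System::ServerExecutionID in the task's parameter mapping):
-- Hypothetical logging statement for the package's first Execute SQL Task.
INSERT INTO dbo.PackageRunLog (server_execution_id, logged_at)
VALUES (?, GETDATE());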
https://dba.stackexchange.com/questions/13347/get-job-id-or-job-name-from-within-executing-job
https://dba.stackexchange.com/questions/38808/relating-executioninstanceguid-to-the-ssisdb

SQL: CHANGE TRACKING FOR ENTITY

QUESTION:
What approach should I use to notify one database about the changes made to a table in another database? Note: I need one notification per statement-level event; this includes the MERGE statement, which performs an insert, update and delete in one.
BACKGROUND:
We're working with a third party to transfer some data from one system to another. There are two databases of interest here: one which the third party populates with normalised staging data, and a second which will be populated with the de-normalised, post-processed data. I've created MERGE scripts which do the heavy lifting of processing and transferring the data from these staging tables into our shiny denormalised version, and I've written a framework which manages the data dependencies such that look-up tables are populated prior to the main data, etc.
I need a reliable way to be notified of when the staging tables are updated so that my import scripts are run autonomously.
METHODS CONSIDERED:
SQL DML Triggers
I initially created a generic trigger which sends change information to the denormalised database via Service Broker; however, this trigger fires three times, once each for insert, update and delete, and thus sends three distinct messages, which causes the import process to run three times for a single data change. It should be noted that these staging tables are also updated using the MERGE functionality within SQL Server, so the change is issued in a single statement.
SQL Query Notification
This appears to be perfect for what I need; however, there doesn't appear to be any way to subscribe to notifications from within SQL Server, as this can only be used to notify of changes at an application layer written in .NET. I guess I may be able to manage this via CLR integration, however I'd still need to drive the notification down to the processing database to trigger the import process. This appears to be my best option although it will be long-winded, difficult to debug and monitor, and probably over-complicates an otherwise simple issue.
SQL Event Notification
This would be perfect although it doesn't appear to function for DML, regardless of what you might find in the MS documentation. The CREATE EVENT NOTIFICATION command takes a single parameter for event_type, so it can be thought of as operating at the database level. DML operates at an entity level, and there doesn't appear to be any way to target a specific entity using the defined syntax.
SQL Change Tracking
This appears to capture changes on a database table, but at a row level, and this seems too heavy-handed for what I require. I simply need to know that a change has happened; I'm not really interested in which rows or how many. Besides, I'd still need to convert this into an event to trigger the import process.
SQL Change Data Capture
This is an extension of Change Tracking and records both the change and the history of the change at the row level. This is again far too detailed and still leaves me with the issue of turning this into a notification of some kind so that import process can be kicked off.
SQL Server Default Trace / Audit
This appears to require a target, which must be either the Windows Application/Security event log or a file on disk, which I'd struggle to monitor and hook into for changes.
ADDITIONAL
My trigger-based method would work wonderfully if only the trigger fired once. I have considered creating a table to record the first of the three DML commands, which could then be used to suspend the posting of information within the other two trigger operations; however, I'm reasonably sure that all three DML triggers (insert, update, delete) will fire in parallel, rendering this method futile.
Can anyone please advise on a suitable approach that ideally doesn't use a scheduled job to check for changes. Any suggestions gratefully received.
The simplest approach has been to create a secondary table to record when the trigger code is run.
CREATE TABLE [service].[SuspendTrigger]
(
[Index] [int] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](200) NOT NULL,
[DateTime] [datetime] NOT NULL,
[SPID] [int] NOT NULL,
CONSTRAINT [pk_suspendtrigger_index] PRIMARY KEY CLUSTERED
(
[Index] ASC
) ON [PRIMARY]
) ON [PRIMARY]
Triggers run sequentially, so even when a MERGE statement is applied to an existing table, the insert, update and delete trigger code runs one after the other.
The first time we enter the trigger we can therefore write to this suspension table to record the event and then execute what ever code needs to be executed.
The second time we enter the trigger we can check to see if a record already exists and therefore prevent execution of any further statements.
alter trigger [dbo].[trg_ADDRESSES]
on [dbo].[ADDRESSES]
after insert, update, delete
as
begin
    set nocount on;

    -- determine the trigger action - note the trigger may fire
    -- when nothing is in either the inserted or deleted table
    ------------------------------------------------------
    declare @action as nvarchar(6) = (case when ( exists ( select top 1 1 from inserted )
                                            and exists ( select top 1 1 from deleted )) then N'UPDATE'
                                           when exists ( select top 1 1 from inserted ) then N'INSERT'
                                           when exists ( select top 1 1 from deleted ) then N'DELETE'
                                      end)

    -- check for valid action
    -------------------------
    if @action is not null
    begin
        if not exists ( select *
                        from [service].[SuspendTrigger] as [suspend]
                        where [suspend].[SPID] = @@SPID
                        and [suspend].[DateTime] >= dateadd(millisecond, -300, getdate())
                      )
        begin
            -- insert a suspension event
            -----------------------------
            insert into [service].[SuspendTrigger]
            (
                [Name] ,
                [DateTime] ,
                [SPID]
            )
            select object_name(@@procid) as [Name] ,
                   getdate() as [DateTime] ,
                   @@SPID as [SPID]

            -- determine the message content to send
            ----------------------------------------
            declare @content xml = (
                select getdate() as [datetime] ,
                       db_name() as [source/catelogue] ,
                       'ADDRESSES' as [source/table] ,
                       'DBO' as [source/schema] ,
                       (select [sessions].[session_id] as [@id] ,
                               [sessions].[login_time] as [login_time] ,
                               case when ([sessions].[total_elapsed_time] >= 864000000000) then
                                   formatmessage('%02i DAYS %02i:%02i:%02i.%04i',
                                       (([sessions].[total_elapsed_time] / 10000 / 1000 / 60 / 60 / 24)),
                                       (([sessions].[total_elapsed_time] / (1000*60*60)) % 24),
                                       (([sessions].[total_elapsed_time] / (1000*60)) % 60),
                                       (([sessions].[total_elapsed_time] / (1000*01)) % 60),
                                       (([sessions].[total_elapsed_time] ) % 1000))
                               else
                                   formatmessage('%02i:%02i:%02i.%i',
                                       (([sessions].[total_elapsed_time] / (1000*60*60)) % 24),
                                       (([sessions].[total_elapsed_time] / (1000*60)) % 60),
                                       (([sessions].[total_elapsed_time] / (1000*01)) % 60),
                                       (([sessions].[total_elapsed_time] ) % 1000))
                               end as [duration] ,
                               [sessions].[row_count] as [row_count] ,
                               [sessions].[reads] as [reads] ,
                               [sessions].[writes] as [writes] ,
                               [sessions].[program_name] as [identity/program_name] ,
                               [sessions].[host_name] as [identity/host_name] ,
                               [sessions].[nt_user_name] as [identity/nt_user_name] ,
                               [sessions].[login_name] as [identity/login_name] ,
                               [sessions].[original_login_name] as [identity/original_name]
                        from [sys].[dm_exec_sessions] as [sessions]
                        where [sessions].[session_id] = @@SPID
                        for xml path('session'), type)
                for xml path('persistence_change'), root('change_tracking'))

            -- holds the current procedure name
            -----------------------------------
            declare @procedure_name nvarchar(200) = object_name(@@procid)

            -- send a message to any remote listeners
            -----------------------------------------
            exec [service].[usp_post_content_via_service_broker] @MessageContentType = 'Source Data Change', @MessageContent = @content, @CallOriginator = @procedure_name
        end
    end
end
GO
All we need to do now is create an index on the [DateTime] field within the suspension table so that it is used during the check. I'll probably also create a job to clear down any entries older than a couple of minutes to keep the table small.
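A sketch of that index (the index name and column order are illustrative; the trigger's existence check filters on SPID and DateTime):
CREATE NONCLUSTERED INDEX [ix_suspendtrigger_spid_datetime]
ON [service].[SuspendTrigger] ([SPID], [DateTime]);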
Either way, this provides a way of ensuring that only one notification is generated per table-level modification.
If you're interested, the message contents will look something like this ...
<change_tracking>
<persistence_change>
<datetime>2016-08-01T16:08:10.880</datetime>
<source>
<catelogue>[MY DATABASE NAME]</catelogue>
<table>ADDRESSES</table>
<schema>DBO</schema>
</source>
<session id="1014">
<login_time>2016-08-01T15:03:01.993</login_time>
<duration>00:00:01.337</duration>
<row_count>1</row_count>
<reads>37</reads>
<writes>68</writes>
<identity>
<program_name>Microsoft SQL Server Management Studio - Query</program_name>
<host_name>[COMPUTER NAME]</host_name>
<nt_user_name>[MY ACCOUNT]</nt_user_name>
<login_name>[MY DOMAIN]\[MY ACCOUNT]</login_name>
<original_name>[MY DOMAIN]\[MY ACCOUNT]</original_name>
</identity>
</session>
</persistence_change>
</change_tracking>
I could send over the action that triggered the notification but I'm only really interested in the fact that some data has changed in this table.

How to find out what is locking my tables?

I have a SQL table that all of a sudden cannot return data unless I include with (nolock) on the end, which indicates some kind of lock left on my table.
I've experimented a bit with sys.dm_tran_locks to identify that there are in fact a number of locks on the table, but how do I identify what is locking them (i.e. the request element of sys.dm_tran_locks)?
EDIT: I know about sp_lock for pre-2005 SQL Server, but now that that sp is deprecated, AFAIK the right way to do this is with sys.dm_tran_locks. I'm using SQL Server 2008 R2.
Take a look at the following system stored procedures, which you can run in SQL Server Management Studio (SSMS):
sp_who
sp_lock
Also, in SSMS, you can view locks and processes graphically. Different versions of SSMS put Activity Monitor in different places; for example, SSMS 2008 and 2012 have it in the context menu when you right-click on a server node.
For getting straight to "who is blocked/blocking" I combined/abbreviated sp_who and sp_lock into a single query which gives a nice overview of who has what object locked to what level.
--Create Procedure WhoLock
--AS
set nocount on
if object_id('tempdb..#locksummary') is not null Drop table #locksummary
if object_id('tempdb..#lock') is not null Drop table #lock
create table #lock ( spid int, dbid int, objId int, indId int, Type char(4), resource nchar(32), Mode char(8), status char(6))
Insert into #lock exec sp_lock
if object_id('tempdb..#who') is not null Drop table #who
create table #who ( spid int, ecid int, status char(30),
loginame char(128), hostname char(128),
blk char(5), dbname char(128), cmd char(16)
--
, request_id INT --Needed for SQL 2008 onwards
--
)
Insert into #who exec sp_who
Print '-----------------------------------------'
Print 'Lock Summary for ' + @@servername + ' (excluding tempdb):'
Print '-----------------------------------------' + Char(10)
Select left(loginame, 28) as loginame,
left(db_name(dbid),128) as DB,
left(object_name(objID),30) as object,
max(mode) as [ToLevel],
Count(*) as [How Many],
Max(Case When mode= 'X' Then cmd Else null End) as [Xclusive lock for command],
l.spid, hostname
into #LockSummary
from #lock l join #who w on l.spid= w.spid
where dbID != db_id('tempdb') and l.status='GRANT'
group by dbID, objID, l.spid, hostname, loginame
Select * from #LockSummary order by [ToLevel] Desc, [How Many] Desc, loginame, DB, object
Print '--------'
Print 'Who is blocking:'
Print '--------' + char(10)
SELECT p.spid
,convert(char(12), d.name) db_name
, program_name
, p.loginame
, convert(char(12), hostname) hostname
, cmd
, p.status
, p.blocked
, login_time
, last_batch
, p.spid
FROM master..sysprocesses p
JOIN master..sysdatabases d ON p.dbid = d.dbid
WHERE EXISTS ( SELECT 1
FROM master..sysprocesses p2
WHERE p2.blocked = p.spid )
Print '--------'
Print 'Details:'
Print '--------' + char(10)
Select left(loginame, 30) as loginame, l.spid,
left(db_name(dbid),15) as DB,
left(object_name(objID),40) as object,
mode ,
blk,
l.status
from #lock l join #who w on l.spid= w.spid
where dbID != db_id('tempdb') and blk <>0
Order by mode desc, blk, loginame, dbID, objID, l.status
(For what the lock level abbreviations mean, see e.g. https://technet.microsoft.com/en-us/library/ms175519%28v=sql.105%29.aspx)
Copied from: sp_WhoLock – a T-SQL stored proc combining sp_who and sp_lock...
NB the [Xclusive lock for command] column can be misleading -- it shows the current command for that spid; but the X lock could have been triggered by an earlier command in the transaction.
exec sp_lock
This query should give you existing locks.
exec sp_who SPID -- will give you some info
Having the SPIDs, you can check Activity Monitor (Processes tab) to find out which processes are locking the tables ("Details" for more info and "Kill Process" to kill one).
I have a stored procedure that I have put together that deals not only with locks and blocking, but also shows what is running on a server.
I have put it in master.
I will share it with you, the code is below:
USE [master]
go
CREATE PROCEDURE [dbo].[sp_radhe]
AS
BEGIN
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
-- the current_processes
-- marcelo miorelli
-- CCHQ
-- 04 MAR 2013 Wednesday
SELECT es.session_id AS session_id
,COALESCE(es.original_login_name, '') AS login_name
,COALESCE(es.host_name,'') AS hostname
,COALESCE(es.last_request_end_time,es.last_request_start_time) AS last_batch
,es.status
,COALESCE(er.blocking_session_id,0) AS blocked_by
,COALESCE(er.wait_type,'MISCELLANEOUS') AS waittype
,COALESCE(er.wait_time,0) AS waittime
,COALESCE(er.last_wait_type,'MISCELLANEOUS') AS lastwaittype
,COALESCE(er.wait_resource,'') AS waitresource
,coalesce(db_name(er.database_id),'No Info') as dbid
,COALESCE(er.command,'AWAITING COMMAND') AS cmd
,sql_text=st.text
,transaction_isolation =
CASE es.transaction_isolation_level
WHEN 0 THEN 'Unspecified'
WHEN 1 THEN 'Read Uncommitted'
WHEN 2 THEN 'Read Committed'
WHEN 3 THEN 'Repeatable'
WHEN 4 THEN 'Serializable'
WHEN 5 THEN 'Snapshot'
END
,COALESCE(es.cpu_time,0)
+ COALESCE(er.cpu_time,0) AS cpu
,COALESCE(es.reads,0)
+ COALESCE(es.writes,0)
+ COALESCE(er.reads,0)
+ COALESCE(er.writes,0) AS physical_io
,COALESCE(er.open_transaction_count,-1) AS open_tran
,COALESCE(es.program_name,'') AS program_name
,es.login_time
FROM sys.dm_exec_sessions es
LEFT OUTER JOIN sys.dm_exec_connections ec ON es.session_id = ec.session_id
LEFT OUTER JOIN sys.dm_exec_requests er ON es.session_id = er.session_id
LEFT OUTER JOIN sys.server_principals sp ON es.security_id = sp.sid
LEFT OUTER JOIN sys.dm_os_tasks ota ON es.session_id = ota.session_id
LEFT OUTER JOIN sys.dm_os_threads oth ON ota.worker_address = oth.worker_address
CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) AS st
where es.is_user_process = 1
and es.session_id <> @@spid
and es.status = 'running'
ORDER BY es.session_id
end
GO
This procedure has served me very well over the last couple of years.
To run it, just type sp_radhe.
Regarding putting sp_radhe in the master database,
I use the following code to make it a system stored procedure:
exec sys.sp_MS_marksystemobject 'sp_radhe'
as you can see on the link below
Creating Your Own SQL Server System Stored Procedures
Regarding the transaction isolation level
Questions About T-SQL Transaction Isolation Levels You Were Too Shy to Ask
Jonathan Kehayias
Once you change the transaction isolation level it only changes when
the scope exits at the end of the procedure or a return call, or if
you change it explicitly again using SET TRANSACTION ISOLATION LEVEL.
In addition the TRANSACTION ISOLATION LEVEL is only scoped to the
stored procedure, so you can have multiple nested stored procedures
that execute at their own specific isolation levels.
This should give you all the details of the existing locks.
DECLARE @tblVariable TABLE(SPID INT, Status VARCHAR(200), [Login] VARCHAR(200), HostName VARCHAR(200),
    BlkBy VARCHAR(200), DBName VARCHAR(200), Command VARCHAR(200), CPUTime INT,
    DiskIO INT, LastBatch VARCHAR(200), ProgramName VARCHAR(200), _SPID INT,
    RequestID INT)
INSERT INTO @tblVariable
EXEC Master.dbo.sp_who2
SELECT v.*, t.TEXT
FROM @tblVariable v
INNER JOIN sys.sysprocesses sp ON sp.spid = v.SPID
CROSS APPLY sys.dm_exec_sql_text(sp.sql_handle) AS t
ORDER BY BlkBy DESC, CPUTime DESC
You can then kill, with caution, the SPID that blocks your table.
kill 104 -- Your SPID
You can also use sp_who2 which gives more information
Here is some info http://dbadiaries.com/using-sp_who2-to-help-with-sql-server-troubleshooting
As per the official docs, sp_lock is marked as deprecated:
This feature is in maintenance mode and may be removed in a future
version of Microsoft SQL Server. Avoid using this feature in new
development work, and plan to modify applications that currently use
this feature.
and it is recommended to use sys.dm_tran_locks instead. This dynamic management object returns information about currently active lock manager resources. Each row represents a currently active request to the lock manager for a lock that has been granted or is waiting to be granted.
It generally returns more detail, in friendlier syntax, than sp_lock does.
The whoisactive routine written by Adam Machanic is very good to check the current activity in your environment and see what types of waits/locks are slowing your queries. You can very easily find what is blocking your queries and tons of other handy information.
For example, let's say we have the following queries running under the default SQL Server isolation level, Read Committed. Each query is executed in a separate query window:
-- creating sample data
CREATE TABLE [dbo].[DataSource]
(
[RowID] INT PRIMARY KEY
,[RowValue] VARCHAR(12)
);
INSERT INTO [dbo].[DataSource]([RowID], [RowValue])
VALUES (1, 'sample data');
-- query window 1
BEGIN TRANSACTION;
UPDATE [dbo].[DataSource]
SET [RowValue] = 'new data'
WHERE [RowID] = 1;
--COMMIT TRANSACTION;
-- query window 2
SELECT *
FROM [dbo].[DataSource];
Then execute sp_whoisactive (its output screenshot, showing only part of the columns, is not reproduced here).
You can easily see the session which is blocking the SELECT statement, and even its T-SQL code. The routine has a lot of parameters, so you can check the docs for more details.
If we query the sys.dm_tran_locks view, we can see that one of the sessions is waiting for a shared lock on a resource that is exclusively locked by the other session:
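A sketch of such a query (using only documented sys.dm_tran_locks columns) that pairs the waiting request with the granted lock on the same resource:
SELECT tl.request_session_id,
       tl.resource_type,
       tl.resource_database_id,
       tl.resource_associated_entity_id,
       tl.request_mode,   -- e.g. S (shared) vs X (exclusive)
       tl.request_status  -- GRANT or WAIT
FROM sys.dm_tran_locks AS tl
WHERE tl.resource_database_id = DB_ID()
ORDER BY tl.resource_associated_entity_id, tl.request_status;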
Plot twist!
You can have orphaned distributed transactions holding exclusive locks, and you will not see them if your script assumes there is a session associated with the transaction (there isn't one!). Run the script below to identify these transactions:
;WITH ORPHANED_TRAN AS (
SELECT
dat.name,
dat.transaction_uow,
ddt.database_transaction_begin_time,
ddt.database_transaction_log_bytes_reserved,
ddt.database_transaction_log_bytes_used
FROM
sys.dm_tran_database_transactions ddt,
sys.dm_tran_active_transactions dat,
sys.dm_tran_locks dtl
WHERE
ddt.transaction_id = dat.transaction_id AND
dat.transaction_id = dtl.request_owner_id AND
dtl.request_session_id = -2 AND
dtl.request_mode = 'X'
)
SELECT DISTINCT * FROM ORPHANED_TRAN
Once you have identified the transaction, use the transaction_uow column to find it in MSDTC and decide whether to abort or commit it. If the transaction is marked as In Doubt (with a question mark next to it) you will probably want to abort it.
You can also kill the Unit Of Work (UOW) by specifying the transaction_uow in the KILL command:
KILL '<transaction_uow>'
References:
https://learn.microsoft.com/en-us/sql/t-sql/language-elements/kill-transact-sql?view=sql-server-2017#arguments
https://www.mssqltips.com/sqlservertip/4142/how-to-kill-a-blocking-negative-spid-in-sql-server/
A colleague and I have created a tool just for this.
It's a visual representation of all the locks that your sessions produce.
Give it a try (http://www.sqllockfinder.com); it's open source (https://github.com/LucBos/SqlLockFinder)

How can I conditionally use a linked server depending on the environment a stored proc is currently running in?

Here's the issue I'm having. I am trying to create a stored proc that will be deployed to DEV, QA, and PROD environments. Because of the strict requirements on the deployment process, I have to make sure my proc is the same across all three environments and has to work (of course!). The problem is that this proc references a table in a different database. In DEV and QA this is OK, because that database is on the same server; however, in PROD the database in question is located on a separate server. The following is a code snippet from my proc that tries to deal with the different environment issues:
IF @@SERVERNAME<>'Production'
BEGIN
select distinct m.acct_id
from l_map m (nolock)
join #llist ll on ll.acct_id = m.acct_id
where ll.acct_id not in (select l_number from [OTHERDATABASE].[dbo].[OTHERTABLE] where lmi_status_code not in (select item from #ruleItems))
END
ELSE
BEGIN
select distinct m.acct_id
from l_map m (nolock)
join #llist ll on ll.acct_id = m.acct_id
where ll.acct_id not in (select l_number from [OTHERSERVER].[OTHERDATABASE].[dbo].[OTHERTABLE] where lmi_status_code not in (select item from #ruleItems))
END
My proc is called from within a different proc. When I test the above logic directly, I get the results I expect. However, when I try to test it in context in DEV or QA (from the top level proc), I get an error saying that [OTHERSERVER] could not be found. I cannot (and don't need to) create a linked server in DEV and QA, but I need to be able to use the linked server in the PROD environment. Does anyone know how to accomplish this?
Use synonyms.
Synonym definition on each server may be (is) different, but the code (stored procedure) does not change.
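A sketch using the object names from the question (the synonym name itself is illustrative):
-- On PROD, the synonym points through the linked server:
CREATE SYNONYM dbo.syn_OtherTable
FOR [OTHERSERVER].[OTHERDATABASE].[dbo].[OTHERTABLE];

-- On DEV/QA, the same synonym name points at the local database:
-- CREATE SYNONYM dbo.syn_OtherTable FOR [OTHERDATABASE].[dbo].[OTHERTABLE];

-- The proc then references dbo.syn_OtherTable in both branches, unchanged per environment.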
My suggestion is to create a view on the table in the linked server. On your test server you can create a view onto a local table with test data.
In this way information about the linked server is isolated to the view. Then you can write your stored proc or other queries referencing the view, rather than referencing the linked server directly.
Note that this will not enable you to test the security and permissions you need, only that the query works with the schema.
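For example (a sketch; the view name is illustrative and the columns come from the question's snippet):
-- On PROD the view goes through the linked server; on DEV/QA the same view
-- name would select from the local database or a table of test data.
CREATE VIEW dbo.vw_OtherTable
AS
SELECT l_number, lmi_status_code
FROM [OTHERSERVER].[OTHERDATABASE].[dbo].[OTHERTABLE];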
I have the same situation.
Using a synonym, I could not use OPENQUERY, which I needed in order to execute functions with parameters on the destination server, where a simple SELECT INTO or EXECUTE was not possible.
Using EXEC returned (with my configuration) error Msg 7411:
Server 'linked_server_name' is not configured for RPC.
Here is an example of my approach using a query string. Note that in testing I don't use a linked server, but you can use one if you need to:
-- @SourceServer, @FunctionName, @param_1 and @param_2 are assumed to be
-- declared and populated earlier in the procedure.
DECLARE @SelectQuery NVARCHAR(MAX), @Destination NVARCHAR(400);

-- Prepare Source Query Fragment
IF @@SERVERNAME = 'production_server'
SET @SelectQuery = ' OPENQUERY (['
+ @SourceServer + '],''EXEC [production_source_db].[schema_name].['
+ @FunctionName + '] '''''
+ @param_1 + ''''', '''''
+ @param_2 + ''''''')';
ELSE
SET @SelectQuery = ' EXEC [testing_schema].['
+ @FunctionName + '] '''
+ @param_1 + ''', '''
+ @param_2 + ''')';
-- Prepare Destination Query Fragment
IF @@SERVERNAME = 'production_server'
SET @Destination = '[production_destination_server].[production_destination_db].[schema_name]';
ELSE
SET @Destination = '[testing_schema]';
-- Execute the data transfer
EXEC ('
INSERT INTO ' + @Destination + '.[Destination_Table] (
[Col1]
, [Col2])
SELECT
[Col1]
, [Col2]
FROM ' + @SelectQuery )

Is my stored procedure executing out of order?

Brief history:
I'm writing a stored procedure to support a legacy reporting system (using SQL Server Reporting Services 2000) on a legacy web application.
In keeping with the original implementation style, each report has a dedicated stored procedure in the database that performs all the querying necessary to return a "final" dataset that can be rendered simply by the report server.
Due to the business requirements of this report, the returned dataset has an unknown number of columns (it depends on the user who executes the report, but may have 4-30 columns).
Throughout the stored procedure, I keep a column UserID to track the user's ID to perform additional querying. At the end, however, I do something like this:
UPDATE #result
SET Name = ppl.LastName + ', ' + ppl.FirstName
FROM #result r
LEFT JOIN Users u ON u.id = r.userID
LEFT JOIN People ppl ON ppl.id = u.PersonID
ALTER TABLE #result
DROP COLUMN [UserID]
SELECT * FROM #result r ORDER BY Name
Effectively I set the Name varchar column (that was previously left NULL while I was performing some pivot logic) to the desired name format in plain text.
When finished, I want to drop the UserID column as the report user shouldn't see this.
Finally, the data set returned has one column for the username, and an arbitrary number of INT columns with performance totals. For this reason, I can't simply exclude the UserID column since SQL doesn't support "SELECT * EXCEPT [UserID]" or the like.
With this known (any style pointers are appreciated but not central to this problem), here's the problem:
When I execute this stored procedure, I get an execution error:
Invalid column name 'userID'.
However, if I comment out my DROP COLUMN statement and retain the UserID, the stored procedure performs correctly.
What's going on? It certainly looks like the statements are executing out of order and it's dropping the column before I can use it to set the name strings!
[Edit 1]
I defined UserID previously (the whole stored procedure is about 200 lines of mostly irrelevant logic, so I'll paste snippets):
CREATE TABLE #result ([Name] NVARCHAR(256), [UserID] INT);
Case sensitivity isn't the problem, but it did point me to the right line - there was one place in which I had userID instead of UserID. Now that I've fixed the case, the error message complains about UserID.
My "broken" stored procedure also works properly in SQL Server 2008 - this is either a 2000 bug or I'm severely misunderstanding how SQL Server used to work.
Thanks everyone for chiming in!
For anyone searching this in the future, I've added an extremely crude workaround to be 2000-compatible until we update our production version:
DECLARE @workaroundTableName NVARCHAR(256), @workaroundQuery NVARCHAR(2000)
SET @workaroundQuery = 'SELECT [Name]';
DECLARE cur_workaround CURSOR FOR
SELECT COLUMN_NAME FROM [tempdb].INFORMATION_SCHEMA.Columns WHERE TABLE_NAME LIKE '#result%' AND COLUMN_NAME <> 'UserID'
OPEN cur_workaround;
FETCH NEXT FROM cur_workaround INTO @workaroundTableName
WHILE @@FETCH_STATUS = 0
BEGIN
SET @workaroundQuery = @workaroundQuery + ',[' + @workaroundTableName + ']'
FETCH NEXT FROM cur_workaround INTO @workaroundTableName
END
CLOSE cur_workaround;
DEALLOCATE cur_workaround;
SET @workaroundQuery = @workaroundQuery + ' FROM #result ORDER BY Name ASC'
EXEC(@workaroundQuery);
Thanks everyone!
A much easier solution would be to not drop the column, but don't return it in the final select.
There are all sorts of reasons why you shouldn't be returning select * from your procedure anyway.
EDIT: I see now that you have to do it this way because of an unknown number of columns.
Based on the error message, is the database case sensitive, and so there's a difference between userID and UserID?
This works for me:
CREATE TABLE #temp_t
(
myInt int,
myUser varchar(100)
)
INSERT INTO #temp_t(myInt, myUser) VALUES(1, 'Jon1')
INSERT INTO #temp_t(myInt, myUser) VALUES(2, 'Jon2')
INSERT INTO #temp_t(myInt, myUser) VALUES(3, 'Jon3')
INSERT INTO #temp_t(myInt, myUser) VALUES(4, 'Jon4')
ALTER TABLE #temp_t
DROP Column myUser
SELECT * FROM #temp_t
DROP TABLE #temp_t
It says invalid column for you. Did you check the spelling and ensure that the column even exists in your temp table?
You might try wrapping everything preceding the DROP COLUMN in a BEGIN TRANSACTION...COMMIT block.
At compile time, SQL Server is probably expanding the * into the full list of columns. Thus, at run time, SQL Server executes "SELECT UserID, Name, LastName, FirstName, ..." instead of "SELECT *". Dynamically assembling the final SELECT into a string and then EXECing it at the end of the stored procedure may be the way to go.