Please forgive me; I am fairly new to the art of crafting SQL Server triggers.
I've written a SQL Server trigger that executes a PowerShell script (to send a JSON message to entity X) after a particular table is updated. The script ran successfully on its own in DEV, as expected. However, when invoked from the trigger it caused an error in the front-end UI after the user submitted an update. The user's update did not post, and consequently the trigger never fired.
I'm guessing it has something to do with table locks during the posting of the user's input via the web UI, but that's just a guess. Is there something I should consider in the trigger so that it does not interfere with the front-end UI's process of updating the table before my trigger runs?
This is my (rather primitive) trigger, for everyone's perusal:
USE [Hamburger_Chefs32];
GO
SET ANSI_NULLS ON;
GO
SET QUOTED_IDENTIFIER ON;
GO
CREATE TRIGGER [dbo].[WD_SendIngredientsMessageOnScoreOverrideUPD]
ON dbo.DeliciousBurgers
AFTER UPDATE
AS
BEGIN
DECLARE @cmd sysname
SET @cmd = 'powershell -File "E:\Program Files (x86)\TheWhopperCorporation\Burgers\v1.0.0\service\SendIngredients.ps1"'
EXEC xp_cmdshell @cmd
END
GO
My humble thanks in advance for any help provided.
Update: I had a suggestion not to run the script from within the trigger, since the trigger has to wait for it to finish. Good point. Is there a way to simply launch the script without having to wait for a success (1) or failure (0) return from it? It runs perfectly 100% of the time, but I don't want the UPDATE rolled back because of timing and/or a dependency on the script.
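One common way to avoid waiting is to decouple the trigger from the script entirely: the trigger only records that work is needed, and a SQL Server Agent job picks it up and runs the PowerShell script outside the user's transaction. A minimal sketch follows; the queue table, trigger name, and job schedule here are hypothetical, not part of the original setup:

```sql
-- Queue table the trigger writes to (hypothetical name).
CREATE TABLE dbo.IngredientsMessageQueue
(
    QueueId   int IDENTITY(1,1) PRIMARY KEY,
    QueuedAt  datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
    Processed bit NOT NULL DEFAULT 0
);
GO

-- The trigger now does nothing slow: it just enqueues a row.
CREATE TRIGGER dbo.WD_QueueIngredientsMessageUPD
ON dbo.DeliciousBurgers
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.IngredientsMessageQueue DEFAULT VALUES;
END
GO

-- A SQL Server Agent job step, scheduled (say) every minute, then processes
-- unprocessed rows and calls:
--   powershell -File "E:\Program Files (x86)\TheWhopperCorporation\Burgers\v1.0.0\service\SendIngredients.ps1"
-- and marks the rows Processed = 1 afterwards.
```

With this design the user's UPDATE commits immediately, and a failure in the script can never roll back the user's transaction.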
Change your trigger this way:
CREATE TRIGGER [dbo].[WD_SendIngredientsMessageOnScoreOverrideUPD]
ON dbo.DeliciousBurgers
AFTER UPDATE
AS
BEGIN
set xact_abort off;
begin try
DECLARE @cmd sysname
SET @cmd = 'powershell -File "E:\Program Files (x86)\TheWhopperCorporation\Burgers\v1.0.0\service\SendIngredients.ps1"'
EXEC xp_cmdshell @cmd
end try
begin catch
print ERROR_MESSAGE()
end catch
END
This way you'll catch the error.
The most probable error here is
The EXECUTE permission was denied on the object 'xp_cmdshell',
database 'mssqlsystemresource', schema 'sys'.
Unless the user that runs your app is a sysadmin, or is explicitly granted this permission, the error will occur.
And the whole transaction is rolled back, which is why "The user's update did not post".
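For completeness, enabling xp_cmdshell and granting it to a non-sysadmin caller looks roughly like the sketch below. This must be run as sysadmin; the Windows account and database user names are hypothetical, and note that non-sysadmin callers additionally need a proxy account:

```sql
-- Enable the feature (it is off by default).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;

-- Non-sysadmin callers run the shell under a proxy Windows account.
EXEC sp_xp_cmdshell_proxy_account 'DOMAIN\SomeWindowsAccount', 'password';

-- Grant EXECUTE to the app's login/user (hypothetical name); xp_cmdshell
-- lives in master, so the grant is done there.
USE master;
GRANT EXECUTE ON xp_cmdshell TO [AppUser];
```

Be aware that xp_cmdshell is a significant attack surface; many shops prefer the queue-table / Agent-job approach instead of granting it at all.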
Related
I have the following trigger:
USE SomeDB
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER TRIGGER [Staging].[RunPivot15]
ON [Staging].[UriData]
AFTER INSERT, UPDATE
AS
BEGIN
SET NOCOUNT ON
EXEC [Staging].[PivotData]
END
It will not fire. The table concerned receives about 30 rows every five minutes. From that point I am stuck. I have been reading that, because more than one row is being inserted, I have to use a cursor. I have tried a cursor and cannot get that to work either.
Can you advise what the best approach here is?
TIA
It's highly unlikely that the trigger is not firing. Add a couple of PRINT statements around the procedure call in your trigger, and possibly in your stored procedure too. This will help you trace the execution when you run an update statement in Management Studio to fire the trigger.
USE SomeDB
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER TRIGGER [Staging].[RunPivot15]
ON [Staging].[UriData]
AFTER INSERT, UPDATE
AS
BEGIN
SET NOCOUNT ON
PRINT 'Before call.'
EXEC [Staging].[PivotData]
PRINT 'After call.'
END
Then run an update statement in Management Studio and check the Messages tab to see whether your messages are printed.
update Staging.UriData set SomeColumn = SomeColumn where SomeColumn = SomeValue
No, you do not need cursors. When your trigger executes, if more than one row is affected there will be multiple rows in the inserted / deleted pseudo-tables too. In your case you do not read which rows are updated anyway, so just run the procedure. If you need to know exactly which rows were modified, then write code to process them in a set-based way (all rows at once). Looping with cursors is practically never a good idea in a database.
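If the procedure ever does need to know which rows changed, a set-based version of the trigger can read the inserted pseudo-table directly; a sketch, where `UriId` and the `Staging.ChangedUris` staging table are hypothetical names, not from the original post:

```sql
ALTER TRIGGER [Staging].[RunPivot15]
ON [Staging].[UriData]
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Capture every affected row in one statement; no cursor needed.
    -- (UriId is a hypothetical key column on Staging.UriData.)
    INSERT INTO Staging.ChangedUris (UriId)
    SELECT i.UriId
    FROM inserted AS i;

    EXEC [Staging].[PivotData];
END
```

Whether one row or thirty rows are updated, the single INSERT ... SELECT above handles them all at once, which is exactly the set-based approach the answer recommends.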
I am trying to run a batch file from a SQL Server trigger to get information from a query and put it in a text file. I want to do this because I want to do things with that information in the same batch file later on. The problem is that when the trigger is called, it stays stuck on executing.
I have tested the query and the batch-file call in Microsoft SQL Server Management Studio and they both work, but when I call the batch file from the trigger it stays stuck at executing.
Here is my code: first the batch file, then the trigger, the query the batch file is calling, and my query to test the trigger.
@echo off
echo start
sqlcmd -S AZ7GH2\SQLEXPRESS -h -1 -i C:\Users\user1\Documents\test3.sql -o C:\Users\user1\Documents\test.txt
echo end
exit
SQL Server trigger
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER TRIGGER [dbo].[ffupdate]
ON [dbo].[feat]
AFTER UPDATE
AS
BEGIN
IF UPDATE (act)
BEGIN
EXEC xp_CMDShell 'C:\Users\user1\Documents\ffscript.bat'
END
END
test3.sql (query being called by batch file)
:setvar SQLCMDERRORLEVEL 1
SET NOCOUNT ON
USE [dev]
DECLARE @ver INT
SET @ver = CHANGE_TRACKING_CURRENT_VERSION() - 1
CREATE TABLE #ctb(fuid INT)
INSERT INTO #ctb
SELECT featid
FROM CHANGETABLE(CHANGES feat, @ver) AS tb
SELECT fl.flg
FROM fl, #ctb
WHERE fl.fid = #ctb.fuid
GO
:setvar SQLCMDERRORLEVEL 0
SET NOCOUNT OFF
Query to test trigger
USE [dev]
UPDATE feat
SET act = 0
WHERE featid = 1;
I don't know what is wrong. I have looked for an answer and can't find one. Like I said, everything works fine by itself but when put together it stays stuck at executing. Any help would be greatly appreciated.
You are locking yourself!
You have a trigger on the table feat, and when you update a row (session A), you spawn a new session (session B, via the batch file) that tries to read change information on the same table feat. Session B's read blocks behind session A's uncommitted update, but session A cannot commit until the trigger (and therefore the batch file) finishes, so each session waits on the other forever.
For future reference, you might also want to consider adding COMMIT at the end of your input file to make sure no tables are left locked for changes.
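While the trigger appears hung, a third session can confirm the self-blocking with a standard dynamic-management-view query (a general diagnostic, not from the original post):

```sql
-- Shows which session is blocked and which session is blocking it.
-- Expect to see the sqlcmd session (B) blocked by the updating session (A).
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;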
I am trying to build an SSIS package that dynamically rebuilds the indexes for all the tables in my database. The general idea is that the package will make sure that the table is not being updated and then execute a stored procedure that drops the old index, if it exists, and then recreates it. The logic behind the package seems to be sound. The problem I am having is that when I execute the package I keep getting the error:
Cannot find object...because it does not exist or you do not have permission...
The index existing should be irrelevant due to the IF EXISTS part.
The procedure looks like this:
@REFERENCE_NAME AS VARCHAR(50),
@COLUMN_NAME AS VARCHAR(50),
@INDEX_NAME AS VARCHAR(50)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @sql NVARCHAR(MAX)
SET @sql = 'IF EXISTS (SELECT name FROM sysindexes WHERE name = '+CHAR(39)+@INDEX_NAME+CHAR(39)+') '+
'DROP INDEX '+@INDEX_NAME+' ON '+@REFERENCE_NAME+' '+
'CREATE INDEX '+@INDEX_NAME+' ON '+@REFERENCE_NAME+'('+@COLUMN_NAME+') ON [INDEX]'
EXEC sp_executesql @sql
END
GO
I am able to execute the procedure through SSMS just fine; no error, and it builds the index. When I execute the package in SSIS, it errors out the minute it gets to the task that executes the stored procedure. I have made sure that SSIS is passing the variables to the Execute SQL Task, and I have verified that I have db_ddladmin rights. Outside of that I am at a loss, and have been beating my head against the wall for a day and a half on this.
Is there something I am missing, some permissions I need to request, or some work around for the issue?
Any information would be much appreciated.
Bartover, it's definitely not looking at the wrong database. I have checked that the proc is there, and the only connection in the package is to that specific database. Yes, I am executing the package manually with Visual Studio 2010 Shell Data Tools.
Sorrel, I tried your idea of a sanity check on the @sql statement on the drop, on both the drop and create, and on the whole @sql statement; no joy.
Gnackenson, I had that same thought, but the connection authentication method is set to Windows Authentication, same as SSMS. Do you have any ideas as to why it might use different permissions?
It looks like IF EXISTS is being ignored by SSIS SQL Task. To fix my problem, I altered my SQL tasks from DROP - CREATE to DISABLE - ENABLE.
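For reference, the DISABLE / re-enable pattern looks roughly like this sketch (index and table names are hypothetical; note that a disabled index is re-enabled with REBUILD, not ENABLE):

```sql
-- Disable the index: metadata is kept, but the index data is dropped
-- and the optimizer stops using it.
ALTER INDEX IX_MyTable_MyColumn ON dbo.MyTable DISABLE;

-- ...load data, run the rest of the package, etc...

-- Re-enable by rebuilding, which recreates the index data from scratch.
ALTER INDEX IX_MyTable_MyColumn ON dbo.MyTable REBUILD;
```

Because both statements reference an index that always exists in metadata, this avoids the "Cannot find object" error that the DROP - CREATE sequence can hit.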
As a follow up to my previous question where I ask about storedproc_Task1 calling storedproc_Task2, I want to know if SQL (SQL Server 2012) has a way to check if a proc is currently running, before calling it.
For example, if storedproc_Task2 can be called by both storedproc_Task1 and storedproc_Task3, I don't want storedproc_Task1 to call storedproc_Task2 only 20 seconds after storedproc_Task3. I want the code to look something like the following:
declare @MyRetCode_Recd_In_Task1 int
if storedproc_Task2 is running then
--wait for storedproc_Task2 to finish
else
execute @MyRetCode_Recd_In_Task1 = storedproc_Task2 (with calling parameters if any).
end
The question is how do I handle the if storedproc_Task2 is running boolean check?
UPDATE: I initially posed the question using general names for my stored procedures, (i.e. sp_Task1) but have updated the question to use names like storedproc_Task1 instead. Per srutzky's reminder, the prefix sp_ is reserved for system procs in the [master] database.
Given that the desire is to have any process calling sp_Task2 wait until sp_Task2 completes if it is already running, that is essentially making sp_Task2 single-threaded.
This can be accomplished through the use of Application Locks (see sp_getapplock and sp_releaseapplock). Application Locks let you create locks around arbitrary concepts. Meaning, you can define @Resource as "Task2", which forces each caller to wait their turn. It would follow this structure:
BEGIN TRANSACTION;
EXEC sp_getapplock @Resource = 'Task2', @LockMode = 'Exclusive';
...single-threaded code...
EXEC sp_releaseapplock @Resource = 'Task2';
COMMIT TRANSACTION;
You need to manage errors / ROLLBACK yourself (as stated in the linked MSDN documentation) so put in the usual TRY / CATCH. But, this does allow you to manage the situation.
This code can be placed either in sp_Task2 at the beginning and end, as follows:
CREATE PROCEDURE dbo.Task2
AS
SET NOCOUNT ON;
BEGIN TRANSACTION;
EXEC sp_getapplock @Resource = 'Task2', @LockMode = 'Exclusive';
{current logic for Task2 proc}
EXEC sp_releaseapplock @Resource = 'Task2';
COMMIT TRANSACTION;
Or it can be placed in all of the locations that calls sp_Task2, as follows:
CREATE PROCEDURE dbo.Task1
AS
SET NOCOUNT ON;
BEGIN TRANSACTION;
EXEC sp_getapplock @Resource = 'Task2', @LockMode = 'Exclusive';
EXEC dbo.Task2 (with calling parameters if any);
EXEC sp_releaseapplock @Resource = 'Task2';
COMMIT TRANSACTION;
I would think that the first choice -- placing the logic in sp_Task2 -- would be the cleanest since a) it is in a single location and b) cannot be avoided by someone else calling sp_Task2 outside of the currently defined paths (ad hoc query or a new proc that doesn't take this precaution).
Please see my answer to your initial question regarding not using the sp_ prefix for stored procedure names and not needing the return value.
Please note: sp_getapplock / sp_releaseapplock should be used sparingly; Application Locks can definitely be very handy (such as in cases like this one) but they should only be used when absolutely necessary.
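Putting the pieces together, the TRY / CATCH handling the answer mentions could look like this sketch (the procedure body is a placeholder; THROW requires SQL Server 2012+, which matches the question):

```sql
CREATE PROCEDURE dbo.Task2
AS
SET NOCOUNT ON;

BEGIN TRANSACTION;
BEGIN TRY
    -- Serialize all callers of Task2 behind one named lock.
    EXEC sp_getapplock @Resource = 'Task2', @LockMode = 'Exclusive';

    -- {current logic for Task2 proc}

    EXEC sp_releaseapplock @Resource = 'Task2';
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Rolling back the transaction also releases a transaction-owned
    -- app lock, so no explicit sp_releaseapplock is needed here.
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;
END CATCH;
```

Because the lock's default owner is the transaction, any error path that rolls back cannot leave the lock stranded.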
If you are using a global temp table, as stated in the answer to your previous question, then just drop the global table at the end of the procedure; to check whether the procedure is still running, check for the existence of the table:
IF OBJECT_ID('tempdb..##temptable') IS NULL -- procedure is not running
    --do something
ELSE
    --do something else
I would like to get to the bottom of this because it's confusing me. Can anyone explain when I should use the GO statement in my scripts?
As I understand it, the GO statement is not part of the T-SQL language; instead it is used to send a batch of statements to SQL Server for processing.
When I run the following script in Query Analyser it appears to run fine. Then I close the window and it displays a warning:
"There are uncommitted transactions. Do you wish to commit these transactions before closing the window?"
BEGIN TRANSACTION;
GO
ALTER PROCEDURE [dbo].[pvd_sp_job_xxx]
@jobNum varchar(255)
AS
BEGIN
SET NOCOUNT ON;
UPDATE tbl_ho_job SET delete='Y' WHERE job = @job;
END
COMMIT TRANSACTION;
GO
However if I add a GO at the end of the ALTER statement it is OK (as below). How come?
BEGIN TRANSACTION;
GO
ALTER PROCEDURE [dbo].[pvd_sp_xxx]
@jobNum varchar(255)
AS
BEGIN
SET NOCOUNT ON;
UPDATE tbl_ho_job SET delete='Y' WHERE job = @job;
END
GO
COMMIT TRANSACTION;
GO
I thought about removing all of the GOs, but then it complains that the ALTER PROCEDURE statement must be the first statement in a query batch. Is this just a requirement that I must adhere to?
It seems odd, because if I BEGIN TRANSACTION and GO... that statement is sent to the server for processing and I begin a transaction.
Next comes the ALTER PROCEDURE, a COMMIT TRANSACTION and a GO (thus sending those statements to the server for processing, with a commit to complete the transaction started earlier). So why does it still complain when I close the window? Surely I have satisfied the requirement that the ALTER PROCEDURE statement is the first in its batch. Why does it complain about uncommitted transactions?
Any help will be most appreciated!
In your first script, COMMIT is part of the stored procedure...
The BEGIN and END in the stored proc do not define its scope (the start and finish of the stored proc body): the batch does, and the batch runs to the next GO (or the end of the script).
So, changing spacing and adding comments
BEGIN TRANSACTION;
GO
--start of batch. This comment is part of the stored proc too
ALTER PROCEDURE [dbo].[pvd_sp_job_xxx]
@jobNum varchar(255)
AS
BEGIN --not needed
SET NOCOUNT ON;
UPDATE tbl_ho_job SET delete='Y' WHERE job = @job;
END --not needed
--still in the stored proc
COMMIT TRANSACTION;
GO -- end of batch and stored procedure
To check, run
SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.pvd_sp_job_xxx'))
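A quick way to convince yourself that GO is a client-side batch separator rather than a T-SQL statement is that local variables do not survive it; a minimal sketch:

```sql
DECLARE @x int = 1;
PRINT @x;   -- prints 1: @x is in scope within this batch
GO
PRINT @x;   -- fails: "Must declare the scalar variable \"@x\"" -
            -- the new batch knows nothing about the previous one
```

The same batch-scoping rule is why ALTER PROCEDURE must be first in its batch, and why everything up to the next GO becomes part of the procedure body.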
Although this is an old post, the question was still on my mind after I compiled one of my procedures successfully without any BEGIN TRANSACTION, COMMIT TRANSACTION, or GO, and the procedure could be called and produced the expected result as well.
I am working with SQL Server 2012. Does that make a difference?
I know this is posted as an answer, but it is too long to be noticed in the comment section.