So this is something I already do with Stored Procedures, and a bunch of other database items, and now I'm trying to do it with jobs. I write a bunch of items to a single .sql file. Other programs I use require this format. It looks clean, and it works.
I'm having an issue trying this with jobs: it seems the variable values are not being discarded when I start a new transaction. For example:
USE msdb;
BEGIN TRANSACTION
DECLARE @JobName SYSNAME = 'MyJob'
/*blah blah blah*/
COMMIT TRANSACTION
USE msdb;
BEGIN TRANSACTION
DECLARE @JobName SYSNAME = 'MySecondJob'
/*blah blah blah*/
COMMIT TRANSACTION
But when I run this file I get an error:
The variable name '@JobName' has already been declared. Variable names
must be unique within a query batch or stored procedure.
I don't see how this is possible, as they are separate transactions. I tried clearing the IntelliSense cache, as I know that can cause issues, but so far no minor fixes have helped. This is in SQL Server 2014.
Try using GO statements between each execution block. For example:
USE msdb;
BEGIN TRANSACTION
DECLARE @JobName SYSNAME = 'MyJob'
/*blah blah blah*/
COMMIT TRANSACTION
GO
USE msdb;
BEGIN TRANSACTION
DECLARE @JobName SYSNAME = 'MySecondJob'
/*blah blah blah*/
COMMIT TRANSACTION
GO
Per the Microsoft SQL Server documentation, GO signals the end of a batch of Transact-SQL statements to the SQL Server utilities. Since local variables are scoped to a single batch, each DECLARE @JobName now lives in its own batch and no longer collides with the previous one.
As a follow-up to my previous question, where I asked about storedproc_Task1 calling storedproc_Task2, I want to know whether SQL Server 2012 has a way to check if a proc is currently running before calling it.
For example, if storedproc_Task2 can be called by both storedproc_Task1 and storedproc_Task3, I don't want storedproc_Task1 to call storedproc_Task2 only 20 seconds after storedproc_Task3 did. I want the code to look something like the following:
declare @MyRetCode_Recd_In_Task1 int
if storedproc_Task2 is running then
    --wait for storedproc_Task2 to finish
else
    execute @MyRetCode_Recd_In_Task1 = storedproc_Task2 (with calling parameters if any).
end
The question is how do I handle the if storedproc_Task2 is running boolean check?
UPDATE: I initially posed the question using general names for my stored procedures (e.g., sp_Task1), but have updated the question to use names like storedproc_Task1 instead. Per srutzky's reminder, the prefix sp_ is reserved for system procs in the [master] database.
Given that the desire is to have any process calling sp_Task2 wait until sp_Task2 completes if it is already running, that is essentially making sp_Task2 single-threaded.
This can be accomplished through the use of Application Locks (see sp_getapplock and sp_releaseapplock). Application Locks let you create locks around arbitrary concepts. Meaning, you can define the @Resource as "Task2", which will force each caller to wait their turn. It would follow this structure:
BEGIN TRANSACTION;
EXEC sp_getapplock @Resource = 'Task2', @LockMode = 'Exclusive';
...single-threaded code...
EXEC sp_releaseapplock @Resource = 'Task2';
COMMIT TRANSACTION;
You need to manage errors / ROLLBACK yourself (as stated in the linked MSDN documentation) so put in the usual TRY / CATCH. But, this does allow you to manage the situation.
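A minimal sketch of that error handling around the same structure (the THROW re-raise assumes SQL Server 2012 or later) might look like this:
BEGIN TRY
    BEGIN TRANSACTION;

    -- Waits here until no other session holds the 'Task2' lock
    EXEC sp_getapplock @Resource = 'Task2', @LockMode = 'Exclusive';

    -- ...single-threaded code...

    EXEC sp_releaseapplock @Resource = 'Task2';
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Rolling back also releases a transaction-owned applock
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW; -- re-raise the original error (SQL Server 2012 and later)
END CATCH;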
This code can be placed either in sp_Task2 at the beginning and end, as follows:
CREATE PROCEDURE dbo.Task2
AS
SET NOCOUNT ON;
BEGIN TRANSACTION;
EXEC sp_getapplock @Resource = 'Task2', @LockMode = 'Exclusive';
{current logic for Task2 proc}
EXEC sp_releaseapplock @Resource = 'Task2';
COMMIT TRANSACTION;
Or it can be placed in all of the locations that call sp_Task2, as follows:
CREATE PROCEDURE dbo.Task1
AS
SET NOCOUNT ON;
BEGIN TRANSACTION;
EXEC sp_getapplock @Resource = 'Task2', @LockMode = 'Exclusive';
EXEC dbo.Task2 (with calling parameters if any);
EXEC sp_releaseapplock @Resource = 'Task2';
COMMIT TRANSACTION;
I would think that the first choice -- placing the logic in sp_Task2 -- would be the cleanest since a) it is in a single location and b) cannot be avoided by someone else calling sp_Task2 outside of the currently defined paths (ad hoc query or a new proc that doesn't take this precaution).
Please see my answer to your initial question regarding not using the sp_ prefix for stored procedure names and not needing the return value.
Please note: sp_getapplock / sp_releaseapplock should be used sparingly; Application Locks can definitely be very handy (such as in cases like this one) but they should only be used when absolutely necessary.
If you are using a global temporary table, as stated in the answer to your previous question, then just drop the global table at the end of the procedure; to check whether the procedure is still running, check for the existence of the table:
IF OBJECT_ID('tempdb..##temptable') IS NULL -- procedure is not running
BEGIN
    --do something
END
ELSE
BEGIN
    --do something else
END
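A minimal sketch putting the two halves together (the marker table name ##Task2_Running is hypothetical, and error handling is omitted for brevity):
CREATE PROCEDURE dbo.Task2
AS
SET NOCOUNT ON;

-- Marker table signals that this procedure is currently running
CREATE TABLE ##Task2_Running (StartedAt DATETIME NOT NULL DEFAULT GETDATE());

-- ...existing logic for Task2 (if the proc fails before the DROP,
-- the marker lingers until the creating session closes)...

-- Dropping the marker signals that the procedure has finished
DROP TABLE ##Task2_Running;
GO

-- Caller: only run the procedure if the marker does not exist
IF OBJECT_ID('tempdb..##Task2_Running') IS NULL
    EXEC dbo.Task2;
ELSE
    PRINT 'Task2 is already running';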
I have an existing stored procedure that now needs to be used as an inline SQL statement in my VB console application. How do I change it?
Stored Procedure:
:setvar CUSTOMDBNAME "My_DB"
USE [$(CUSTOMDBNAME)]
GO
DECLARE @TranName VARCHAR(25)
DECLARE @TranCounter INT
SET @TranName = 'MyTransaction';
SET @TranCounter = @@TRANCOUNT;
BEGIN TRANSACTION @TranName;
BEGIN TRY
    UPDATE tbl.FileUpload
    SET UserCreate = 1
    WHERE ID = 10
    IF @TranCounter = 0
        COMMIT TRANSACTION @TranName;
END TRY
BEGIN CATCH
    IF @TranCounter = 0
        ROLLBACK TRANSACTION;
    ELSE
        IF XACT_STATE() <> -1
            ROLLBACK TRANSACTION @TranName;
END CATCH
GO
You can access the text of an SQL Server Stored Procedure by querying the sys.syscomments view. From there you can extract the text and do whatever you want with it.
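For example, a quick sketch (dbo.MyProc is a hypothetical procedure name):
-- Legacy compatibility view mentioned above (text is split into 4000-character chunks)
SELECT c.text
FROM sys.syscomments AS c
WHERE c.id = OBJECT_ID('dbo.MyProc');

-- Current equivalents that return the full definition in one value
SELECT m.definition
FROM sys.sql_modules AS m
WHERE m.object_id = OBJECT_ID('dbo.MyProc');

SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.MyProc')) AS [definition];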
Extensive documentation on how to access data related to different aspects of Stored Procedures is available on MSDN at Viewing Stored Procedures. It explains how to:
See the definition of the stored procedure. That is, the Transact-SQL statements used to create a stored procedure. This can be useful if you do not have the Transact-SQL script files used to create the stored procedure.
Get information about a stored procedure such as its schema, when it was created, and its parameters.
List the objects used by the specified stored procedure, and the procedures that use the specified stored procedure. This information can be used to identify the procedures affected by the changing or removal of an object in the database.
Using these data sources, you should be able to obtain the data you need; the remaining work is implementation-specific.
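For instance, a few example queries for that metadata, again using the hypothetical dbo.MyProc:
-- Schema, creation date, and last modification date
SELECT SCHEMA_NAME(p.schema_id) AS [schema], p.name, p.create_date, p.modify_date
FROM sys.procedures AS p
WHERE p.object_id = OBJECT_ID('dbo.MyProc');

-- Parameters of the procedure
SELECT pa.name, TYPE_NAME(pa.user_type_id) AS type_name, pa.max_length, pa.is_output
FROM sys.parameters AS pa
WHERE pa.object_id = OBJECT_ID('dbo.MyProc');

-- Objects the procedure depends on, and objects that reference it
SELECT referenced_schema_name, referenced_entity_name
FROM sys.dm_sql_referenced_entities('dbo.MyProc', 'OBJECT');

SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities('dbo.MyProc', 'OBJECT');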
In addition, here is an article demonstrating a use case for querying SPROC text: http://blog.sqlauthority.com/2007/09/03/sql-server-2005-search-stored-procedure-code-search-stored-procedure-text/
I need to provide an auto-update feature for my application.
I am having a problem applying the SQL updates. I have the update SQL statements in a .sql file, and what I want to achieve is that if one statement fails, then the entire script file must be rolled back.
Ex.
create procedure [dbo].[test1]
    @P1 varchar(200),
    @C1 int
as
begin
    Select 1
end
GO
Insert into test (name) values ('vv')
GO
alter procedure [dbo].[test2]
    @P1 varchar(200),
    @C1 int
as
begin
    Select 1
end
GO
Now in the above example, if I get an error in the third statement ("alter procedure [dbo].[test2]"), then I want to roll back the first two changes as well: creating the test1 stored procedure and inserting data into the test table.
How should I approach this task? Any help will be much appreciated.
If you need any more info then let me know.
Normally, you would want to add a BEGIN TRAN at the beginning, remove the GO statements, and then handle the ROLLBACK TRAN/COMMIT TRAN with a TRY..CATCH block.
When dealing with DDL, though, there are often statements (such as CREATE PROCEDURE and ALTER PROCEDURE) that have to be the first statement in a batch, so you can't wrap them in a TRY..CATCH block. In that case you need to put together a system that knows how to roll itself back.
A simple system would be just to back up the database at the start and restore it if anything fails (assuming that you are the only one accessing the database the whole time). Another method would be to log each batch that runs successfully and to have corresponding rollback scripts which you can run to put everything back should a later batch fail. This obviously requires much more work (writing an undo script for every script PLUS fully testing the rollbacks) and can also be a problem if people are still accessing the database while the upgrade is happening.
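As a rough sketch of that batch-logging idea (the DeploymentLog table and script name are hypothetical):
-- Hypothetical log of upgrade batches that completed successfully
CREATE TABLE dbo.DeploymentLog
(
    BatchName   VARCHAR(128) NOT NULL PRIMARY KEY,
    CompletedAt DATETIME     NOT NULL DEFAULT GETDATE()
);
GO

-- At the end of each upgrade batch, record that it ran
INSERT INTO dbo.DeploymentLog (BatchName) VALUES ('001_create_test1');
GO

-- If a later batch fails, run the matching undo script for every batch
-- recorded in dbo.DeploymentLog, in reverse order, then clear the log.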
EDIT:
Here's an example of a simple TRY..CATCH block with transaction handling:
BEGIN TRY
BEGIN TRANSACTION
-- All of your code here, with `RAISERROR` used for any of your own error conditions
COMMIT TRANSACTION
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
END CATCH
However, the TRY..CATCH block cannot span batches (maybe that's what I was thinking of when I said transactions couldn't), so in your case it would probably be something more like:
IF (OBJECT_ID('dbo.Error_Happened') IS NOT NULL)
DROP TABLE dbo.Error_Happened
GO
BEGIN TRANSACTION
<Some line of code>
IF (@@ERROR <> 0)
CREATE TABLE dbo.Error_Happened (my_id INT)
IF (OBJECT_ID('dbo.Error_Happened') IS NULL)
BEGIN
<Another line of code>
IF (@@ERROR <> 0)
CREATE TABLE dbo.Error_Happened (my_id INT)
END
...
IF (OBJECT_ID('dbo.Error_Happened') IS NOT NULL)
BEGIN
ROLLBACK TRANSACTION
DROP TABLE dbo.Error_Happened
END
ELSE
COMMIT TRANSACTION
Unfortunately, because of the separate batches from the GO statements you can't use GOTO, you can't use the TRY..CATCH, and you can't persist a variable across the batches. This is why I used the very kludgy trick of creating a table to indicate an error.
A better way would be to simply have an error table and look for rows in it. Just keep in mind that your ROLLBACK will remove those rows at the end as well.
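A rough sketch of that error-table variant (the table name and messages are hypothetical):
-- Created outside the transaction so the table itself survives a ROLLBACK
CREATE TABLE dbo.UpgradeErrors (ErrorMsg VARCHAR(4000));
GO
BEGIN TRANSACTION
<Some line of code>
IF (@@ERROR <> 0)
    INSERT INTO dbo.UpgradeErrors (ErrorMsg) VALUES ('step 1 failed');
IF NOT EXISTS (SELECT 1 FROM dbo.UpgradeErrors)
BEGIN
    <Another line of code>
    IF (@@ERROR <> 0)
        INSERT INTO dbo.UpgradeErrors (ErrorMsg) VALUES ('step 2 failed');
END
...
IF EXISTS (SELECT 1 FROM dbo.UpgradeErrors)
    ROLLBACK TRANSACTION -- note: the rollback removes these error rows as well
ELSE
    COMMIT TRANSACTION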
I would like to get to the bottom of this because it's confusing me. Can anyone explain when I should use the GO statement in my scripts?
As I understand it the GO statement is not part of the T-SQL language, instead it is used to send a batch of statements to SQL server for processing.
When I run the following script in Query Analyser it appears to run fine. Then I close the window and it displays a warning:
"There are uncommitted transactions. Do you wish to commit these transactions before closing the window?"
BEGIN TRANSACTION;
GO
ALTER PROCEDURE [dbo].[pvd_sp_job_xxx]
@jobNum varchar(255)
AS
BEGIN
SET NOCOUNT ON;
UPDATE tbl_ho_job SET [delete]='Y' WHERE job = @jobNum;
END
COMMIT TRANSACTION;
GO
However if I add a GO at the end of the ALTER statement it is OK (as below). How come?
BEGIN TRANSACTION;
GO
ALTER PROCEDURE [dbo].[pvd_sp_xxx]
@jobNum varchar(255)
AS
BEGIN
SET NOCOUNT ON;
UPDATE tbl_ho_job SET [delete]='Y' WHERE job = @jobNum;
END
GO
COMMIT TRANSACTION;
GO
I thought about removing all of the GOs, but then it complains that the ALTER PROCEDURE statement must be the first statement in a query batch. Is this just a requirement that I must adhere to?
It seems odd, because if I BEGIN TRANSACTION and GO, that statement is sent to the server for processing and I begin a transaction.
Next comes the ALTER PROCEDURE, a COMMIT TRANSACTION and a GO (thus sending those statements to the server for processing, with a commit to complete the transaction started earlier), so why does it still complain when I close the window? Surely I have satisfied the requirement that the ALTER PROCEDURE statement is the first in the batch. Why does it complain about uncommitted transactions?
Any help will be most appreciated!
In your first script, COMMIT is part of the stored procedure...
The BEGIN and END in the stored proc do not define the scope (the start and finish of the stored proc body): the batch does, which is everything up to the next GO (or the end of the script).
So, changing spacing and adding comments:
BEGIN TRANSACTION;
GO
--start of batch. This comment is part of the stored proc too
ALTER PROCEDURE [dbo].[pvd_sp_job_xxx]
@jobNum varchar(255)
AS
BEGIN --not needed
SET NOCOUNT ON;
UPDATE tbl_ho_job SET [delete]='Y' WHERE job = @jobNum;
END --not needed
--still in the stored proc
COMMIT TRANSACTION;
GO -- end of batch and stored procedure
To check, run
SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.pvd_sp_job_xxx'))
Although this is an old post, the question is still on my mind: I compiled one of my procedures successfully without any BEGIN TRANSACTION, COMMIT TRANSACTION, or GO, and the procedure can be called and produces the expected result as well.
I am working with SQL Server 2012. Does that make a difference?
I know this section is meant for an answer, but this would be too easy to miss in the comment section.
I have a SQL Server 2005 stored procedure. Someone is calling my stored procedure within a transaction. In my stored proc I'm logging some information (an insert into a table). When the higher-level transaction rolls back, it removes my insert.
Is there anyway I can commit my insert and prevent the higher level rollback from removing my insert?
Thanks
Even if you start a new transaction, it will be nested within the outer transaction. SQL Server guarantees that a rollback will result in an unmodified database state. So there is no way you can insert a row inside an aborted transaction.
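A quick sketch illustrating why a nested BEGIN TRANSACTION does not help (the temp table is just a hypothetical scratch target):
CREATE TABLE #demo (msg VARCHAR(100));

BEGIN TRANSACTION;          -- outer transaction (the caller)
    BEGIN TRANSACTION;      -- "nested" transaction only increments @@TRANCOUNT
        INSERT INTO #demo (msg) VALUES ('logged inside nested tran');
    COMMIT TRANSACTION;     -- only decrements @@TRANCOUNT, nothing is persisted yet
ROLLBACK TRANSACTION;       -- outer rollback undoes everything, including the insert

SELECT msg FROM #demo;      -- returns no rows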
Here's a way around it; it's a bit of a trick. Create a linked server with rpc out = true and remote proc transaction promotion = false. The linked server can point to the same server your procedure is running on. Then you can use execute (<query>) at <server> to execute something in a new transaction.
if OBJECT_ID('logs') is not null drop table logs
create table logs (id int primary key identity, msg varchar(max))
if OBJECT_ID('TestSp') is not null drop procedure TestSp
go
create procedure TestSp as
execute ('insert into dbo.logs (msg) values (''test message'')') at LINKEDSERVER
go
begin transaction
exec TestSp
rollback transaction
select top 10 * from logs
This will end with a row in the log table, even though the transaction was rolled back.
Here's example code to create such a linked server:
IF EXISTS (SELECT srv.name FROM sys.servers srv WHERE srv.server_id != 0 AND
    srv.name = N'LINKEDSERVER')
    EXEC master.dbo.sp_dropserver @server=N'LINKEDSERVER',
        @droplogins='droplogins'
EXEC master.dbo.sp_addlinkedserver @server = N'LINKEDSERVER',
    @srvproduct=N'LOCALHOST', @provider=N'SQLNCLI', @datasrc=N'LOCALHOST',
    @catalog=N'DatabaseName'
EXEC master.dbo.sp_serveroption @server=N'LINKEDSERVER', @optname=N'rpc out',
    @optvalue=N'true'
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname=N'LINKEDSERVER',
    @useself=N'True', @locallogin=NULL, @rmtuser=NULL, @rmtpassword=NULL
EXEC master.dbo.sp_serveroption @server=N'LINKEDSERVER',
    @optname=N'remote proc transaction promotion', @optvalue=N'false'
In Oracle you would use autonomous transactions for that; however, SQL Server does not support them.
It is possible to declare a table variable and return its contents from your stored procedure.
Table variables survive the ROLLBACK; however, the upper-level code would need to be modified to read the variable and store its data permanently.
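A minimal sketch of that behaviour:
DECLARE @log TABLE (msg VARCHAR(100));

BEGIN TRANSACTION;
    INSERT INTO @log (msg) VALUES ('something went wrong');
ROLLBACK TRANSACTION;

-- The row is still there: table variables are not affected by ROLLBACK.
-- The calling code would need to read this result set and persist it.
SELECT msg FROM @log;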
Depending on permissions, you could call out using xp_cmdshell to OSQL thereby creating an entirely separate connection. You might be able to do something similar with the CLR, although I've never tried it. However, I strongly advise against doing something like this.
Your best bet is to establish what the conventions are for your code and the calling code - what kind of a contract is supported between the two. You could make it a rule that your code is never called within another transaction (probably not a good idea) or you could give requirements on what the calling code is responsible for when an error occurs.
Anything inside of a transaction will be part of that transaction. If you don't want it to be part of that transaction then do not put it inside.