Here is a scenario I came across:
- I have a SQL job which has around four stored procedures
- These are executed sequentially, one after another
- Now the case is: if any of the stored procedures fails or raises an exception, the entire job should halt
How can I do that?
Make yourself a little procedure with something like this:
BEGIN TRY
    DECLARE @Return INT

    -- Run first procedure
    EXEC @Return = firstProcedure
    IF (@Return <> 0)
    BEGIN
        -- Do some error handling, then stop
        RETURN @Return
    END

    -- Run second procedure
    EXEC @Return = secondProcedure
    IF (@Return <> 0)
    BEGIN
        -- Do some error handling, then stop
        RETURN @Return
    END

    -- etc...
END TRY
BEGIN CATCH
    -- Do some error handling
    RETURN ERROR_NUMBER()
END CATCH
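Note that a plain RETURN only stops the wrapper procedure; for the Agent job itself to halt, the job step running the wrapper must actually fail. One way to do that (my suggestion, not part of the original answer) is to re-raise the error in the CATCH block:

BEGIN CATCH
    -- Re-raise so the Agent step, and therefore the job, reports failure
    DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
    RAISERROR(@msg, 16, 1);
END CATCH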
Although there are several different ways to do this, I would suggest that you use the facilities in SQL Server Agent. Make each of the calls a separate step in the job.
This will allow you to move from one step to the next when successful. You'll also be able to use SQL Server Agent's logging and error handling mechanisms to determine the error and handle it.
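For illustration, here is a sketch of what that looks like when scripting the job; the job and step names are placeholders. sp_add_jobstep's @on_success_action = 3 means "go to the next step" and @on_fail_action = 2 means "quit the job reporting failure":

EXEC msdb.dbo.sp_add_jobstep
    @job_name          = N'MyJob',                  -- placeholder job name
    @step_name         = N'Run firstProcedure',
    @subsystem         = N'TSQL',
    @command           = N'EXEC dbo.firstProcedure;',
    @on_success_action = 3,   -- go to the next step
    @on_fail_action    = 2;   -- quit the job reporting failure
-- ...repeat for the remaining procedures; the last step should use
-- @on_success_action = 1 (quit reporting success).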
I am trying to explore the possibility of selecting from a stored procedure.
Something like this
SELECT name
FROM exec msdb..sp_help_job
WHERE name = 'SampleJob'
I understand from SQL Server - SELECT FROM stored procedure that a user-defined function or view could be used, but these are not options for me.
The reason is that I am not able to run the following SQL statement, due to permission limitations on AWS RDS.
SELECT name as Jobs
FROM msdb..sysjobs
This leaves me with no choice but to use msdb..sp_help_job.
What I am ultimately trying to achieve is this: "if the job does not exist, then run the create job script". The reason I need to select from the stored procedure is to check whether the job exists.
Appreciate any advice / directions.
If you want to create something, but are concerned that it might already exist, then use try/catch blocks.
begin try
exec dbo.sp_add_job . . .
end try
begin catch
print 'Error encountered . . . job probably already exists'
end catch;
To be honest, I haven't done this with jobs/job steps. However, this is one way of re-creating tables, views, and so on.
According to the documentation for sp_help_job on MSDN, this stored procedure has a @job_name parameter and a simple return code (0 = success, 1 = failure).
If you set the @job_name parameter on your call to sp_help_job and capture the return code, you should be able to test the value of the return code to accomplish what you want.
Something like this should work:
DECLARE @return_value int

EXEC @return_value = msdb..sp_help_job @job_name = 'MyJobName'

-- @return_value = 1 means the specified @job_name does not exist
IF @return_value = 1
BEGIN
    -- run create job script
    PRINT 'create the job here'
END
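One caveat: in my experience sp_help_job may also raise an error, not just return 1, when the job name is not found, so a TRY/CATCH around the call keeps the check quiet. A sketch combining both answers ('SampleJob' is the name from the question):

DECLARE @return_value int;

BEGIN TRY
    EXEC @return_value = msdb.dbo.sp_help_job @job_name = 'SampleJob';
END TRY
BEGIN CATCH
    SET @return_value = 1;  -- treat a lookup error as "job does not exist"
END CATCH

IF @return_value = 1
BEGIN
    PRINT 'Job not found; run the create job script here.';
END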
In my application I need to call a stored proc asynchronously.
For this I am using SQL Service Broker.
These are the steps involved in setting up the asynchronous call:
1) I created the message type, contract, queue, and service,
and I am sending messages. I can see my messages in 'ReceiveQueue1'.
2) I created a stored proc and a queue.
When I execute the stored proc (proc_AddRecord), it executes only once.
It reads all the records in the queue and adds them to the table.
Up to this point it works fine.
But when I add new messages to 'ReceiveQueue1', my stored proc does not add those
records to the table automatically. I have to re-execute the stored proc (proc_AddRecord)
in order to add the new messages. Why is the stored proc not getting executed?
What am I supposed to do in order to call the stored proc asynchronously?
The whole point of using Service Broker is to call stored procs asynchronously.
I am totally new to SQL Server Service Broker.
Appreciate any help.
Here is my code for the stored proc:
--exec proc_AddRecord
ALTER PROCEDURE proc_AddRecord
AS
DECLARE
    @Conversation uniqueidentifier,
    @msgTypeName  nvarchar(200),
    @msg          nvarchar(max)

WHILE (1 = 1)
BEGIN
    BEGIN TRANSACTION;

    WAITFOR
    (
        RECEIVE TOP (1)
            @Conversation = conversation_handle,
            @msgTypeName  = message_type_name,
            @msg          = CAST(message_body AS nvarchar(max))
        FROM dbo.ReceiveQueue1
    ), TIMEOUT 5000

    IF @@ROWCOUNT = 0
    BEGIN
        ROLLBACK TRANSACTION
        BREAK
    END

    PRINT @msg

    IF @msg = 'Sales'
    BEGIN
        INSERT INTO TableCity (deptNo, Manager, [Group], EmpCount) VALUES (101, 'Reeves', 51, 29)
        COMMIT TRANSACTION
        CONTINUE
    END

    IF @msg = 'HR'
    BEGIN
        INSERT INTO TableCity (deptNo, Manager, [Group], EmpCount) VALUES (102, 'Cussac', 55, 14)
        COMMIT TRANSACTION
        CONTINUE
    END

    BEGIN
        PRINT 'Process end of dialog messages here.'
        END CONVERSATION @Conversation
        COMMIT TRANSACTION
        CONTINUE
    END
END

ROLLBACK TRANSACTION
ALTER QUEUE AddRecorQueue
WITH ACTIVATION (
PROCEDURE_NAME=proc_AddRecord,
MAX_QUEUE_READERS = 1,
STATUS = ON,
EXECUTE AS 'dbo');
You say you are executing the stored procedure; you shouldn't need to do that, not even once. It should always be invoked by the activation.
Shouldn't your activation be on 'ReceiveQueue1' instead of 'AddRecorQueue'? I can't see the rest of your code, but the names suggest it.
Where does your stored procedure begin and end? Generally I'd put BEGIN just after the AS and END where the stored procedure should end. If you don't have these, you'd need a GO statement to separate the batches; otherwise your ALTER QUEUE statement becomes part of the stored procedure.
You also have "Rollback Transaction" at the end, so even if the activation were working, everything would get rolled back, or it would raise an error saying there is no transaction, had one of the IF statements been triggered.
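A minimal sketch of those two fixes, using the object names from the question:

-- Close the procedure body explicitly and end the batch...
ALTER PROCEDURE proc_AddRecord
AS
BEGIN
    -- ... the RECEIVE loop from the question, unchanged ...
    RETURN;  -- placeholder so this sketch compiles
END
GO

-- ...and attach activation to the queue the messages actually arrive on.
ALTER QUEUE ReceiveQueue1
WITH ACTIVATION (
    PROCEDURE_NAME = proc_AddRecord,
    MAX_QUEUE_READERS = 1,
    STATUS = ON,
    EXECUTE AS 'dbo');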
I suggest you follow this tutorial for Service Broker in general and this one about internal activation; they should get you started.
I have 10 stored procedures.
For example -
the first stored procedure fetches the rows from table A,
then the second stored procedure runs, and then the third...
How can I do error handling for this? I have to check whether the first stored procedure executed successfully: if it did, run the second; if the second runs successfully, run the third; otherwise throw an error.
ALTER PROCEDURE [dbo].[MASTER_PROCEDURE] AS
EXEC QRY_STEP3
EXEC QRY_STEP_3_1_1
EXEC OQRY_STEP_3_1_1
I would add logic to each of your subsidiary stored procedures to determine whether they have succeeded or not, e.g. test for the existence of the temporary table. Then use a return value to indicate the success of the proc; typically this would be 0 for success and non-zero for failure.
You would then call the procs from your master proc like this:
DECLARE @ReturnValue INT

EXEC @ReturnValue = QRY_STEP1
IF (@ReturnValue = 0)
BEGIN
    EXEC @ReturnValue = QRY_STEP2
END
ELSE
BEGIN
    -- REPORT ERROR
    RAISERROR('QRY_STEP1 failed', 16, 1)
END
Using this approach, your master proc doesn't need to know about the inner workings of each child proc, and your master proc code will be cleaner and more readable.
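For completeness, a sketch of what a child proc's success signal might look like; the temp-table check is an assumption based on the "test for existence" suggestion above:

ALTER PROCEDURE QRY_STEP1 AS
BEGIN
    BEGIN TRY
        -- ... do the step's real work here ...

        -- Hypothetical success check: the temp table this step should have produced
        IF OBJECT_ID('tempdb..##Step1Results') IS NULL
            RETURN 1   -- failure: expected output missing

        RETURN 0       -- success
    END TRY
    BEGIN CATCH
        RETURN 1       -- failure: an error was raised
    END CATCH
END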
Use @@ERROR. It can be done like this:
ALTER PROCEDURE [dbo].[MASTER_PROCEDURE] AS
BEGIN
    EXEC QRY_STEP3
    -- @@ERROR is reset after every statement, so test it immediately
    IF @@ERROR <> 0
    BEGIN
        PRINT 'error in QRY_STEP3'
        RETURN 1
    END

    EXEC QRY_STEP_3_1_1
    IF @@ERROR <> 0
    BEGIN
        PRINT 'error in QRY_STEP_3_1_1'
        RETURN 1
    END

    EXEC OQRY_STEP_3_1_1
END
First, to do this correctly you should use TRY...CATCH blocks in the child procs. Those should return to the calling proc if there is an error. This way you can also return an error code to the calling proc when results are unexpected, such as a temp table with zero records, which is not an error but which might make the subsequent procs fail.
Next, why are you using child procs at all? Honestly, this is probably better done in one proc. You say, for instance, that you are creating temp tables in one proc that you use in subsequent procs. To do this you need global temp tables. The problem is that global temp tables are not specific to the original connection that created them, and thus two people trying to do this simultaneously might have their data mixed up; whereas if you use one proc and local temp tables, that can't happen.
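As a sketch of that single-proc approach (the table and column names are made up), local temp tables keep concurrent callers isolated:

CREATE PROCEDURE dbo.MASTER_PROCEDURE AS
BEGIN
    BEGIN TRY
        -- Step 1: stage rows into a local temp table, visible only to this session
        SELECT *
        INTO #work
        FROM dbo.TableA          -- hypothetical source table

        IF NOT EXISTS (SELECT 1 FROM #work)
            RAISERROR('Step 1 produced no rows', 16, 1)

        -- Steps 2, 3, ... operate on #work directly here
        UPDATE #work SET Processed = 1   -- hypothetical column
    END TRY
    BEGIN CATCH
        DECLARE @msg nvarchar(2048) = ERROR_MESSAGE()
        RAISERROR(@msg, 16, 1)   -- report the failure to the caller
    END CATCH
END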
SQL Server 2008 R2
Here is a simplified example:
EXECUTE sp_executesql N'PRINT ''1st '' + convert(varchar, getdate(), 126) WAITFOR DELAY ''000:00:10'''
EXECUTE sp_executesql N'PRINT ''2nd '' + convert(varchar, getdate(), 126)'
The first statement will print the date and delay 10 seconds before proceeding.
The second statement should print immediately.
The way T-SQL works, the 2nd statement won't be evaluated until the first completes. If I copy and paste it to a new query window, it will execute immediately.
The issue is that I have other, more complex things going on, with variables that need to be passed to both procedures.
What I am trying to do is:
Get a record
Lock it for a period of time
while it is locked, execute some other statements against this record and the table itself
Perhaps there is a way to dynamically create a couple of jobs?
Anyway, I am looking for a simple way to do this without having to manually print statements and copy/paste them into another session.
Is there a way to EXEC without wait / in parallel?
Yes, there is a way, see Asynchronous procedure execution.
However, chances are this is not what you need. T-SQL is a data access language, and when you take transactions, locking, and commit/rollback semantics into consideration, it is almost impossible to have a parallel job. Parallel T-SQL works, for instance, with request queues, where each request is independent and there is no correlation between jobs.
What you describe doesn't sound at all like something that can, or should, actually be parallelized.
If you want to lock a record so you can execute statements against it, you may want to execute those statements as a transaction.
To execute SQL in parallel, you need to parallelize the SQL calls by executing your SQL from separate threads/processes in Java, C++, Perl, or any other programming language (hell, launching "isql" in a shell script in the background will work).
If, after reading all of the above about the potential problems, you still want to run things in parallel, you can try SQL jobs: put your queries in different jobs, then execute them by starting the jobs like this:
EXEC msdb..sp_start_job 'Job1'
EXEC msdb..sp_start_job 'Job2'
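sp_start_job returns as soon as the job has been requested to start, so control comes back immediately. If the caller then needs to wait for both jobs to finish, one rough option (assuming polling msdb is acceptable here) is:

-- Poll until neither job has a started-but-not-stopped activity row
WHILE EXISTS (
    SELECT 1
    FROM msdb.dbo.sysjobactivity ja
    JOIN msdb.dbo.sysjobs j ON j.job_id = ja.job_id
    WHERE j.name IN ('Job1', 'Job2')
      AND ja.start_execution_date IS NOT NULL
      AND ja.stop_execution_date IS NULL
)
BEGIN
    WAITFOR DELAY '00:00:05'   -- check every 5 seconds
END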
SQL Agent Jobs can run in parallel and be created directly from TSQL. The answer by Remus Rusanu contains a link that mentions this along with some disadvantages.
Another disadvantage is that additional security permissions are required to create the job. Also, for the implementation below, the job must run as a specific user+login with additional job management privileges.
It is possible to run the arbitrary SQL as a different (safer) user; however, I believe it requires sysadmin privilege to designate the job to run as that user.
The returned @pJobIdHexOut can be used to stop the job if needed.
create function Common.ufn_JobIdFromHex(
    @pJobIdBinary binary(16)
)
returns varchar(100) as
/*---------------------------------------------------------------------------------------------------------------------
Purpose: Convert the binary representation of the job_id into the job_id string that can be used in queries
         against msdb.dbo.sysjobs.

         http://stackoverflow.com/questions/68677/how-can-i-print-a-binary-value-as-hex-in-tsql
         http://stackoverflow.com/questions/3604603
         MsgBoards

Modified    By              Description
----------  --------------  ---------------------------------------------------------------------------------------
2014.08.22  crokusek        Initial version, http://stackoverflow.com/questions/3604603 and MsgBoards.
---------------------------------------------------------------------------------------------------------------------*/
begin
    -- Convert from binary and strip off the '0x'.
    --
    declare
        @jobIdHex varchar(100) = replace(convert(varchar(300), @pJobIdBinary, 1), '0x', '');

    -- The endianness appears to be backwards and there are dashes needed.
    --
    return
        substring(@jobIdHex,7,2) +
        substring(@jobIdHex,5,2) +
        substring(@jobIdHex,3,2) +
        substring(@jobIdHex,1,2) +
        '-' +
        substring(@jobIdHex,11,2) +
        substring(@jobIdHex,9,2) +
        '-' +
        substring(@jobIdHex,15,2) +
        substring(@jobIdHex,13,2) +
        '-' +
        substring(@jobIdHex,17,4) +
        '-' +
        substring(@jobIdHex,21,12);
end
go
create proc [Common].[usp_CreateExecuteOneTimeBackgroundJob]
    @pJobNameKey varchar(100),              -- Caller should ensure uniqueness to avoid a violation
    @pJobDescription varchar(1000),
    @pSql nvarchar(max),
    @pJobIdHexOut varchar(100) = null out,  -- JobId as hex string. For SqlServer 2014 binary(16) = varchar(64)
    @pDebug bit = 0                         -- True to include print messages
--
with execute as 'TSqlBackgroundJobOwner'    -- requires special permissions (see below)
as
/*---------------------------------------------------------------------------------------------------------------------
Purpose: Create a one-time background job and launch it immediately. The job is owned by the "execute as" UserName.
         Caller must ensure the @pSql argument is safe.

Required permissions for the "execute as" user:

    -- User must be created with associated login (w/ deny connect).
    use [msdb];
    create user [$UserName$] for login [$LoginName$];
    alter role [SQLAgentUserRole] add member [$UserName$];
    alter role [SQLAgentReaderRole] add member [$UserName$];
    alter role [SQLAgentOperatorRole] add member [$UserName$];
    grant select on dbo.sysjobs to [$UserName$];
    grant select on dbo.sysjobactivity to [$UserName$];

    use [Master];
    create user [$UserName$] for login [$LoginName$];
    grant execute on xp_sqlagent_is_starting to [$UserName$];
    grant execute on xp_sqlagent_notify to [$UserName$];

Modified    By           Description
----------  -----------  ------------------------------------------------------------------------------------------
2014.08.22  crokusek     Initial version
2015.12.22  crokusek     Use the SP caller as the job owner (removed the explicit setting of the job owner).
---------------------------------------------------------------------------------------------------------------------*/
begin try

    declare
        @usp varchar(100) = object_name(@@procid),
        @currentDatabase nvarchar(100) = db_name(),
        @jobId binary(16),
        @jobOwnerLogin nvarchar(100);

    set xact_abort on; -- ensure transaction is aborted on non-catchables like client timeout, etc.

    begin transaction

    exec msdb.dbo.sp_add_job
        @job_name=@pJobNameKey,
        @enabled=1,
        @notify_level_eventlog=0,
        @notify_level_email=2,
        @notify_level_netsend=2,
        @notify_level_page=2,
        @delete_level=3,
        @description=@pJobDescription,
        @category_name=N'Database Maintenance',
        -- If not overridden then the current login is the job owner
        --@owner_login_name=@jobOwnerLogin, -- Requires sysadmin to set this so avoiding.
        @job_id = @jobId output;

    -- Get the job_id string of the jobId (special format)
    --
    set @pJobIdHexOut = Common.ufn_JobIdFromHex(@jobId);

    if (@pDebug = 1)
    begin
        print 'JobId: ' + @pJobIdHexOut;
        print 'Sql: ' + @pSql;
    end

    exec msdb.dbo.sp_add_jobserver @job_id=@jobId; -- default is local server

    exec msdb.dbo.sp_add_jobstep
        @job_id=@jobId,
        @step_name=N'One-Time Job Step 1',
        @step_id=1,
        @command=@pSql,
        @database_name=@currentDatabase,
        @cmdexec_success_code=0,
        @on_success_action=1,
        @on_fail_action=2,
        @retry_attempts=0,
        @retry_interval=0,
        @os_run_priority=0,
        @subsystem=N'TSQL',
        @flags=0;

    declare
        @startResult int;

    exec @startResult = msdb.dbo.sp_start_job
        @job_id = @jobId;

    -- End the transaction
    --
    if (@startResult != 0)
        raiserror('Unable to start the job', 16, 1); -- causes rollback in catch block
    else
        commit; -- Success

end try
begin catch

    declare
        @CatchingUsp varchar(100) = object_name(@@procid);

    if (xact_state() = -1)
        rollback;

    --exec Common.usp_Log
    --    @pMethod = @CatchingUsp;

    --exec Common.usp_RethrowError
    --    @pCatchingMethod = @CatchingUsp;
end catch
go
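A hypothetical call, where the SQL payload and naming scheme are placeholders:

declare
    @jobName  varchar(100) = 'BackgroundWork-' + convert(varchar(36), newid()),  -- unique name
    @jobIdHex varchar(100);

exec Common.usp_CreateExecuteOneTimeBackgroundJob
    @pJobNameKey     = @jobName,
    @pJobDescription = 'One-time background work',
    @pSql            = N'exec dbo.SomeLongRunningProc;',  -- hypothetical payload
    @pJobIdHexOut    = @jobIdHex out,
    @pDebug          = 1;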
It might be worth checking out the article Asynchronous T-SQL Execution Without Service Broker.
You can create an SSIS package that has two tasks that run in parallel, then create an unscheduled Agent job to call the package. You can then execute this unscheduled Agent job using sp_start_job.
Picking up one answer above: it is possible to produce something similar to multi-threaded execution using SQL Agent jobs, some auxiliary tables, and SQL Server metadata. I have already built this, and was able to call the same procedure 32 times on a server, with each call processing 1/32 of the data.
Of course, one needs to pay close attention to the data partitioning logic so the datasets do not overlap. The best way is to use the modulo operator over a numeric field, as sketched below.
This logic even allows different partitioning sets between steps of the procedure: on one step you can use field A, on the next step field B.
As mentioned above, you need to be very careful with table locks, and something I noticed is that partitioning the tables will also speed up data insertion and updates.
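A sketch of the modulo partitioning idea; the table, column, and worker count are assumptions:

-- Worker @i of 32 processes only rows whose numeric key mod 32 equals @i.
declare @i int = 0;            -- this worker's index, 0..31

select *
from dbo.SourceTable           -- hypothetical table
where EmpId % 32 = @i;         -- hypothetical numeric field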
I built a master job generator engine in T-SQL that triggered the requested number of procedures using jobs. All of this was called from an SSIS job.
The process was far from simple to develop, but it mimics C# or Java multi-thread logic quite well.
I also had to build some auxiliary tables to hold each job's status so the master T-SQL job engine procedure was able to check on each job.
I used SQL Server metadata, but each job that was created and started knew how to update its own status: job X updates its status on the main status monitor table when it starts, while it is running, and when it finishes. The main job procedure keeps checking those auxiliary tables for jobs in running status and only ends when all of them have the status finished.
Microsoft could consider developing something similar in SSIS.