Async calling a stored proc using Service Broker - sql-server-2005

In my application I need to call a stored procedure asynchronously. For this I am using SQL Service Broker.
These are the steps involved in setting up the asynchronous call:
1) I created the message, contract, queue, and service, and I am sending messages. I can see my messages in 'ReceiveQueue1'.
2) I created a stored procedure and a queue.
When I execute the stored procedure (proc_AddRecord), it executes only once: it reads all the records in the queue and adds them to the table. Up to this point everything works fine.
But when I add new messages to 'ReceiveQueue1', my stored procedure does not add those records to the table automatically; I have to re-execute the stored procedure (proc_AddRecord) in order to add the new messages. Why is the stored procedure not getting executed? What am I supposed to do in order to call the stored procedure asynchronously? The whole point of using Service Broker is to call stored procedures asynchronously.
I am totally new to SQL Server Service Broker. I appreciate any help.
Here is my code for the stored procedure:
--exec proc_AddRecord
ALTER PROCEDURE proc_AddRecord
AS
DECLARE
    @Conversation uniqueidentifier,
    @msgTypeName nvarchar(200),
    @msg nvarchar(max)  -- cast from the varbinary(max) message_body below so PRINT/compare work (assumes messages were sent as nvarchar)
WHILE (1 = 1)
BEGIN
    BEGIN TRANSACTION;
    WAITFOR
    (
        RECEIVE TOP (1)
            @Conversation = conversation_handle,
            @msgTypeName = message_type_name,
            @msg = CAST(message_body AS nvarchar(max))
        FROM dbo.ReceiveQueue1
    ), TIMEOUT 5000
    IF @@ROWCOUNT = 0
    BEGIN
        ROLLBACK TRANSACTION
        BREAK
    END
    PRINT @msg
    IF @msg = 'Sales'
    BEGIN
        INSERT INTO TableCity (deptNo, Manager, [Group], EmpCount) VALUES (101, 'Reeves', 51, 29)
        COMMIT TRANSACTION
        CONTINUE
    END
    IF @msg = 'HR'
    BEGIN
        INSERT INTO TableCity (deptNo, Manager, [Group], EmpCount) VALUES (102, 'Cussac', 55, 14)
        COMMIT TRANSACTION
        CONTINUE
    END
    BEGIN
        PRINT 'Process end of dialog messages here.'
        END CONVERSATION @Conversation
        COMMIT TRANSACTION
        CONTINUE
    END
    ROLLBACK TRANSACTION
END
ALTER QUEUE AddRecorQueue
WITH ACTIVATION (
PROCEDURE_NAME=proc_AddRecord,
MAX_QUEUE_READERS = 1,
STATUS = ON,
EXECUTE AS 'dbo');

You say you are executing the stored procedure; you shouldn't need to do that, not even once. It should always be invoked by the activation.
Should your activation be on your 'ReceiveQueue1' instead of your 'AddRecorQueue'? I can't see the rest of your code, but the names suggest it.
Where does your stored procedure begin and end? Generally I'd put BEGIN just after the AS statement and END where the stored procedure should end. If you don't have these, you'd need a GO statement to separate it off; otherwise your ALTER QUEUE statement would become part of the stored procedure.
You also have a trailing ROLLBACK TRANSACTION, so even if the activation were working, everything would get rolled back, or an error would be raised saying there was no transaction, had one of the IF statements been triggered.
I suggest you follow this tutorial for Service Broker in general and this one about internal activation. They should get you started.
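For reference, a minimal sketch of what the activation might look like if it were attached to the queue the messages actually arrive on (assuming dbo.ReceiveQueue1 is the target queue of the service):

-- Sketch: activation attached to the queue the messages arrive on
ALTER QUEUE dbo.ReceiveQueue1
WITH ACTIVATION (
    STATUS = ON,
    PROCEDURE_NAME = proc_AddRecord,
    MAX_QUEUE_READERS = 1,
    EXECUTE AS 'dbo');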

Related

How can we avoid Stored Procedures being executed in parallel?

We have the following situation:
A stored procedure is invoked by a middleware component and is given an XML file as a parameter. The procedure then parses the XML file and inserts values into temporary tables inside a loop. After looping, the values in the temporary tables are inserted into physical tables.
The problem is that the stored procedure has a relatively long run time (about 5 minutes). In this period, it is likely to be invoked a second time, which would cause both processes to be suspended.
Now my question:
How can we avoid a second execution of a Stored Procedure if it is already running?
Best regards
I would recommend designing your application layer to prevent multiple instances of this process being run at once. For example, you could move the logic into a queue that is processed 1 message at a time. Another option would be locking at the application level to prevent the database call from being executed.
SQL Server does have a locking mechanism to ensure a block of code is not run multiple times: an "app lock". This is similar in concept to the lock statement in C# or other semaphores you might see in other languages.
To acquire an application lock, call sp_getapplock. For example:
begin tran
exec sp_getapplock @Resource = 'MyExpensiveProcess', @LockMode = 'Exclusive', @LockOwner = 'Transaction'
This call will block if another process has acquired the lock. If a second RPC call tries to run this process, and you would rather have the process return a helpful error message, you can pass in a @LockTimeout of 0 and check the return code.
For example, the code below raises an error if it could not acquire the lock. Your code could return something else that the application interprets as "process is already running, try again later":
begin tran
declare @result int
exec @result = sp_getapplock @Resource = 'MyExpensiveProcess', @LockMode = 'Exclusive', @LockOwner = 'Transaction', @LockTimeout = 0
if @result < 0
begin
    rollback
    raiserror (N'Could not acquire application lock', 16, 1)
end
To release the lock, call sp_releaseapplock.
exec sp_releaseapplock @Resource = 'MyExpensiveProcess'
Stored procedures are meant to be run multiple times, and in parallel as well; the idea is to reuse the code.
If you want to avoid multiple runs for the same input, you need to handle it manually, by implementing a condition check on the input or by using some locking mechanism.
If you don't want your procedure to run in parallel at all (regardless of input), the best strategy is to acquire a lock using an entry in a DB table, or using global variables, depending on the DBMS you are using.
You can check whether the stored procedure is already running using exec sp_who2. This may be an approach to consider: in your SP, check this first and simply exit if it is already running; it will run again the next time the job executes.
You would need to filter out the current thread and make sure the count for that SP is 1 (1 is the current process; 2 means it is already running), or have a helper SP that is called first.
Here are other ideas: Check if stored procedure is running
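As a rough illustration of that kind of check, a sketch using the sys.dm_exec_requests and sys.dm_exec_sql_text DMVs (the procedure name is hypothetical):

-- Sketch: bail out if another session is already executing this procedure
IF EXISTS (SELECT 1
           FROM sys.dm_exec_requests r
           CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
           WHERE t.objectid = OBJECT_ID('dbo.MyLongRunningProc')  -- hypothetical name
             AND r.session_id <> @@SPID)                          -- ignore the current session
BEGIN
    PRINT 'Already running; exiting.';
    RETURN;
END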

Trying to run a SQL Server stored procedure every n seconds using Activation and Timers not working

I'm trying to run a stored procedure every few seconds that will do some maintenance (move some rows from a staging table to a production table). I've looked at this answer on another SO question and have not been able to get it to work on SQL Server 2014 Enterprise Edition. I read through the comments and found this question from another user, where the original answerer suggested he ask a separate question. I didn't find a separate question from him about it.
I copied the example directly from the other question into SQL Server but it would always return zero rows on the last statement. Here is the SQL provided by the other question:
-- create a table to store the results of some dummy procedure
create table Activity (
InvokeTime datetime not null default getdate()
, data float not null);
go
-- create a dummy procedure
create procedure createSomeActivity
as
begin
insert into Activity (data) values (rand());
end
go
-- set up the queue for activation
create queue Timers;
create service Timers on queue Timers ([DEFAULT]);
go
-- the activated procedure
create procedure ActivatedTimers
as
begin
declare @mt sysname, @h uniqueidentifier;
begin transaction;
receive top (1)
    @mt = message_type_name
    , @h = conversation_handle
from Timers;
if @@rowcount = 0
begin
    commit transaction;
    return;
end
if @mt in (N'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
    , N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog')
begin
    end conversation @h;
end
else if @mt = N'http://schemas.microsoft.com/SQL/ServiceBroker/DialogTimer'
begin
    exec createSomeActivity;
    -- set a new timer after 2s
    begin conversation timer (@h) timeout = 2;
end
commit
end
go
-- attach the activated procedure to the queue
alter queue Timers with activation (
    status = on
    , max_queue_readers = 1
    , execute as owner
    , procedure_name = ActivatedTimers);
go
-- seed a conversation to start activating every 2s
declare @h uniqueidentifier;
begin dialog conversation @h
    from service [Timers]
    to service N'Timers', N'current database'
    with encryption = off;
begin conversation timer (@h) timeout = 1;
-- wait 15 seconds
waitfor delay '00:00:15';
-- end the conversation, will stop activating
end conversation @h;
go
go
-- check that the procedure executed
select * from Activity;
The begin dialog conversation @h statement returns a proper uniqueidentifier that I'm able to use in the end conversation call, but it seems like nothing is ever placed into the queue.
Based on the comment by shurik, I tested this on a database that I created simply with
CREATE DATABASE TestDB
and everything worked. I took a look at the CREATE script for the database I was developing against (which I created through the SSMS UI) and noticed that the script contained
ALTER DATABASE [DATABASENAME] SET DISABLE_BROKER
GO
meaning, obviously, that the broker is disabled for my database. I found it odd because I didn't explicitly disable the broker when creating my database in the UI.
I then scripted the CREATE for the database I had created with the plain CREATE DATABASE statement and noticed that the option
ALTER DATABASE [TestDB] SET ENABLE_BROKER
GO
was in the script.
Basically, any database created through the SSMS UI will have the broker disabled by default (I checked with a new database), and any database created through a CREATE statement will have it enabled by default.
I'm surprised that through trying to get it to work I never once got a notification that the broker was disabled.
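If you hit the same situation, a quick way to verify and fix it (a sketch, using the example database name from above):

-- Check whether Service Broker is enabled for the database
SELECT name, is_broker_enabled FROM sys.databases WHERE name = 'TestDB';

-- Enable it; WITH ROLLBACK IMMEDIATE kicks out open connections if the ALTER blocks
ALTER DATABASE TestDB SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;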

SQL Service Broker Internal Activation Questions

I set up Internal Activation for two stored procedures. One inserts one or more records; the other updates one or more records in the same table. So I have two initiator and two target queues.
It works fine in development so far, but I wonder what types of problems I might encounter when we move it to prod, where these two stored procedures are frequently called. We are already experiencing deadlock issues caused by these two stored procedures. Asynchronous execution is my main goal with this implementation.
Questions :
Is there a way to use one target queue for both stored procedures to prevent any chance of deadlocks?
Is there anything I can do to make it more reliable? For example, one execution error should not stop incoming requests to the queue.
Tips to improve scalability (high number of execution per second)?
Can I set RETRY if there is a deadlock?
Here is the partial code of the insert stored procedure:
CREATE QUEUE [RecordAddUsersQueue];
CREATE SERVICE [RecordAddUsersService] ON QUEUE [RecordAddUsersQueue];
ALTER QUEUE [AddUsersQueue] WITH ACTIVATION
( STATUS = ON,
MAX_QUEUE_READERS = 1, --or 10?
PROCEDURE_NAME = usp_AddInstanceUsers,
EXECUTE AS OWNER);
CREATE PROCEDURE [dbo].[usp_AddInstanceUsers] @UsersXml xml
AS
BEGIN
    DECLARE @Handle uniqueidentifier;
    BEGIN DIALOG CONVERSATION @Handle
        FROM SERVICE [RecordAddUsersService]
        TO SERVICE 'AddUsersService'
        ON CONTRACT [AddUsersContract]
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @Handle
        MESSAGE TYPE [AddUsersXML] (@UsersXml);
END
GO
CREATE PROCEDURE [dbo].[usp_SB_AddInstanceUsers]
AS
BEGIN
    DECLARE @Handle uniqueidentifier;
    DECLARE @MessageType sysname;
    DECLARE @UsersXML xml;
    WHILE (1 = 1)
    BEGIN
        BEGIN TRANSACTION;
        WAITFOR
            (RECEIVE TOP (1)
                @Handle = conversation_handle,
                @MessageType = message_type_name,
                @UsersXML = message_body
            FROM [AddUsersQueue]), TIMEOUT 5000;
        IF (@@ROWCOUNT = 0)
        BEGIN
            ROLLBACK TRANSACTION;
            BREAK;
        END
        IF (@MessageType = 'ReqAddUsersXML')
        BEGIN
            --<INSERT>....
            DECLARE @ReplyMsg nvarchar(100);
            SELECT @ReplyMsg = N'<ReplyMsg>Message for AddUsers Initiator service.</ReplyMsg>';
            SEND ON CONVERSATION @Handle
                MESSAGE TYPE [RepAddUsersXML] (@ReplyMsg);
        END
        ELSE IF @MessageType = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
        BEGIN
            END CONVERSATION @Handle;
        END
        ELSE IF @MessageType = N'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
        BEGIN
            END CONVERSATION @Handle;
        END
        COMMIT TRANSACTION;
    END
END
GO
Thank you,
Kuzey
Is there a way to use one target queue for both stored procedures to prevent any chance of deadlocks?
You can and you should. There is no reason for having two target services/queues/procedures. Send, to the same service, two different message types for the two operations you desire. The activated procedure should then execute logic for Add or logic for Update, depending on message type.
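A sketch of what that dispatch could look like inside the single activated procedure (the second message type name is illustrative, modeled on the ones in the question):

-- Inside the RECEIVE loop of the one activated procedure:
IF (@MessageType = 'ReqAddUsersXML')
BEGIN
    -- insert logic here
    EXEC usp_DoAddUsers @UsersXML;       -- hypothetical helper
END
ELSE IF (@MessageType = 'ReqUpdateUsersXML')  -- a second message type on the same contract
BEGIN
    -- update logic here
    EXEC usp_DoUpdateUsers @UsersXML;    -- hypothetical helper
END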
Is there anything I can do to make it more reliable? like one execution error should not stop incoming requests to the queue?
SSB activation will be very reliable, that's not going to be a problem. As long as you adhere strictly to transaction boundaries (do not commit dequeue operations before processing is complete), you'll never lose a message/update.
Tips to improve scalability (high number of execution per second)?
Read Writing Service Broker Procedures and Reusing Conversations. To achieve high-throughput processing, you will have to dequeue and process in batches (TOP (1000)) into @table variables. See Exception handling and nested transactions for a pattern that can be applied to process a batch of messages. You'll need to read and understand Conversation Group Locks.
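A sketch of that batched dequeue (the queue name and column list follow the code in the question; TOP (1000) and the table variable shape are illustrative):

DECLARE @batch TABLE (
    conversation_handle uniqueidentifier,
    message_type_name sysname,
    message_body varbinary(max));

WAITFOR (
    RECEIVE TOP (1000)
        conversation_handle,
        message_type_name,
        message_body
    FROM [AddUsersQueue]
    INTO @batch), TIMEOUT 5000;

-- then process the whole batch set-based, or iterate over @batch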
Can I set RETRY if there is a deadlock?
No need to; SSB activation will retry for you. As you roll back, the dequeue (RECEIVE) rolls back too, making the messages available for activation again, and the procedure will automatically be retried. Note that 5 rollbacks in a row will trigger the poison message trap and disable the queue.
MAX_QUEUE_READERS = 1, --or 10?
If 1 cannot handle the load, add more. As long as you understand proper conversation group locking, the parallel activated procedures should handle unrelated business items and never deadlock. If you encounter deadlocks between instances of activated procedure on the same queue, it means you have a flaw in the conversation group logic and you allow messages seen by SSB as uncorrelated (different groups) to modify the same database records (same business entities) and lead to deadlocks.
BTW, you must have an activated procedure on the initiator service queue as well. See How to prevent conversation endpoint leaks.
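A sketch of such an initiator-side activated procedure, which drains EndDialog/Error messages and closes the conversation (the procedure name is hypothetical; the queue name follows the question's code):

CREATE PROCEDURE [dbo].[usp_RecordAddUsers_InitiatorActivated]  -- hypothetical name
AS
BEGIN
    DECLARE @Handle uniqueidentifier, @MessageType sysname;
    WHILE (1 = 1)
    BEGIN
        BEGIN TRANSACTION;
        WAITFOR
            (RECEIVE TOP (1)
                @Handle = conversation_handle,
                @MessageType = message_type_name
            FROM [RecordAddUsersQueue]), TIMEOUT 5000;
        IF (@@ROWCOUNT = 0)
        BEGIN
            ROLLBACK TRANSACTION;
            BREAK;
        END
        -- replies could be processed here; EndDialog/Error close the conversation
        IF @MessageType IN (N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog',
                            N'http://schemas.microsoft.com/SQL/ServiceBroker/Error')
            END CONVERSATION @Handle;
        COMMIT TRANSACTION;
    END
END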

How to log events in a transaction

I have a SQL Server 2008 R2 stored procedure that runs a few INSERTs and UPDATEs in a TRANSACTION. After each statement, I need to log what just happened before doing the next step.
Here is my code:
BEGIN TRY
BEGIN TRANSACTION
INSERT INTO... -- 1st statement
INSERT INTO MyEventLog (EventDescription) VALUES ('Did Step 1') -- log
UPDATE... -- 2nd statement
INSERT INTO MyEventLog (EventDescription) VALUES ('Did Step 2') -- log
COMMIT TRANSACTION
END TRY
BEGIN CATCH
IF (@@TRANCOUNT <> 0) ROLLBACK TRANSACTION
EXEC LogError 'I got an error'
END CATCH
Problem is: if there is an error, the transaction rolls back all statements, including the logging, which I need. In the event of an error, how do I roll back the transaction but keep the logging?
I was going to ask why you would want to log an event that technically didn't happen, since the transaction would have been rolled back and the database would be in the state it was in before the transaction. But then it occurred to me that you probably just want to log it in order to know WHERE it failed so you can fix the underlying issue, which is a smart thing to do.
If that is indeed the case, the best thing to do is to rollback the entire transaction as you are currently doing, and to use your LogError SP to log the error message in another table. This is what I use:
CREATE PROCEDURE [dbo].[Error_Handler]
    @returnMessage bit = 'False'
WITH EXEC AS CALLER
AS
BEGIN
    DECLARE @number int,
        @severity int,
        @state int,
        @procedure varchar(100),
        @line int,
        @message varchar(4000)
    INSERT INTO Errors (Number, Severity, State, [Procedure], Line, [Message])
    VALUES (
        ERROR_NUMBER(),
        ERROR_SEVERITY(),
        ERROR_STATE(),
        isnull(ERROR_PROCEDURE(), 'Ad-Hoc Query'),
        isnull(ERROR_LINE(), 0),
        ERROR_MESSAGE())
    IF (@returnMessage = 'True')
    BEGIN
        select *
        from Errors
        where ErrorID = scope_identity()
    END
END
The error message should let you know what went wrong in what table, and that should be enough info to fix the problem.
See Logging messages during a transaction. It is a bit convoluted:
use sp_trace_generateevent to generate the logged event
use event notifications to capture the custom trace event into a message
use internal activation to process the message and write it into the logging table
But it does allow you to log messages during a transaction, and the messages are persisted even if the transaction rolls back. The order of logging is preserved.
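The first step of that pattern looks roughly like this (a sketch; event ids 82-91 correspond to the UserConfigurable:0-9 trace events):

-- Inside the transaction, instead of INSERTing into the log table directly:
EXEC sp_trace_generateevent
    @eventid = 82,              -- UserConfigurable:0
    @userinfo = N'Did Step 1';  -- the message to log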
You also need to make your transaction and stored procedure play nice when one procedure fails but the transaction can continue (e.g. when processing a batch and one item fails, you want to continue with the rest of the batch). See Exception handling and nested transactions.
How about putting the logging statements into a separate transaction?
I'd put it down in the CATCH block:
BEGIN CATCH
    IF (@@TRANCOUNT <> 0)
        ROLLBACK TRANSACTION
    EXEC LogError 'I got an error'
    BEGIN TRANSACTION
    INSERT INTO MyEventLog (EventDescription) VALUES ('Error Updating') -- log
    COMMIT TRANSACTION
END CATCH
As it turns out, table variables don't obey transaction semantics, so you could insert into a table variable and then insert from it into your logging table after the CATCH block.
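A sketch of that table-variable pattern, based on the code in the question (the two business statements are elided as comments):

DECLARE @log TABLE (EventDescription varchar(100));

BEGIN TRY
    BEGIN TRANSACTION
    -- 1st statement here
    INSERT INTO @log (EventDescription) VALUES ('Did Step 1')
    -- 2nd statement here
    INSERT INTO @log (EventDescription) VALUES ('Did Step 2')
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    IF (@@TRANCOUNT <> 0) ROLLBACK TRANSACTION
    EXEC LogError 'I got an error'
END CATCH

-- @log keeps its rows even if the transaction rolled back
INSERT INTO MyEventLog (EventDescription)
SELECT EventDescription FROM @log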

Confusion with the GO statement, uncommitted transactions and alter procedure

I would like to get to the bottom of this because it's confusing me. Can anyone explain when I should use the GO statement in my scripts?
As I understand it, the GO statement is not part of the T-SQL language; instead, it is used to send a batch of statements to SQL Server for processing.
When I run the following script in Query Analyser it appears to run fine. Then I close the window and it displays a warning:
"There are uncommitted transactions. Do you wish to commit these transactions before closing the window?"
BEGIN TRANSACTION;
GO
ALTER PROCEDURE [dbo].[pvd_sp_job_xxx]
    @jobNum varchar(255)
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE tbl_ho_job SET [delete]='Y' WHERE job = @jobNum;
END
COMMIT TRANSACTION;
GO
However, if I add a GO at the end of the ALTER statement, it is OK (as below). How come?
BEGIN TRANSACTION;
GO
ALTER PROCEDURE [dbo].[pvd_sp_xxx]
    @jobNum varchar(255)
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE tbl_ho_job SET [delete]='Y' WHERE job = @jobNum;
END
GO
COMMIT TRANSACTION;
GO
I thought about removing all of the GOs, but then it complains that the ALTER PROCEDURE statement must be the first statement in a query batch. Is this just a requirement that I must adhere to?
It seems odd, because if I BEGIN TRANSACTION and GO... that statement is sent to the server for processing, and I begin a transaction.
Next comes the ALTER PROCEDURE, a COMMIT TRANSACTION, and a GO (thus sending those statements to the server for processing, with a commit to complete the transaction started earlier). So why does it still complain when I close the window? Surely I have satisfied the requirement that the ALTER PROCEDURE statement is the first in the batch. Why does it complain about uncommitted transactions?
Any help will be most appreciated!
In your first script, COMMIT is part of the stored procedure...
The BEGIN and END in the stored proc do not define its scope (the start and finish of the stored proc body): the batch does, and the batch runs to the next GO (or the end of the script).
So, changing spacing and adding comments:
BEGIN TRANSACTION;
GO
--start of batch. This comment is part of the stored proc too
ALTER PROCEDURE [dbo].[pvd_sp_job_xxx]
    @jobNum varchar(255)
AS
BEGIN --not needed
    SET NOCOUNT ON;
    UPDATE tbl_ho_job SET [delete]='Y' WHERE job = @jobNum;
END --not needed
--still in the stored proc
COMMIT TRANSACTION;
GO --end of batch and stored procedure
To check, run
SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.pvd_sp_job_xxx'))
Although this is an old post, the question is still on my mind: I compiled one of my procedures successfully without any BEGIN TRANSACTION, COMMIT TRANSACTION, or GO, and the procedure can be called and produces the expected result as well.
I am working with SQL Server 2012. Does that make a difference?
I know this is posted as an answer, but it would have been too long to notice in the comment section.