SQL Server, Service Broker for asynchronous execution of stored procedures - sql

I followed this tutorial:
https://gallery.technet.microsoft.com/scriptcenter/Using-Service-Broker-for-360c961a
and it is working for me. However, there are some things I don't understand:
In PROCEDURE proc_BrokerTargetActivProc there is an infinite loop: WHILE (1=1). Why? After all, when creating the queue we bind messages to this procedure: PROCEDURE_NAME = proc_BrokerTargetActivProc.
In addition, I am not sure whether I correctly understand how it works:
ExecuteProcedureAsync pushes a message onto the queue with the name of the procedure to execute.
What happens next? How does it work that BrokerTargetActivProc is called with exactly one message?
What about the parameter MAX_QUEUE_READERS = 5?
Thanks in advance,
Regards

You have three questions here.
Q. "Why do we have an infinite loop in the activation procedure?"
A. The idea here is that there is some non-zero cost for starting a procedure. If you have a bunch of messages on the queue, having your already executing procedure handle them is cheaper than executing the procedure for each message.
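To make the pattern concrete, here is a minimal sketch of the general shape such an activation procedure usually has (the queue name TargetQueue and the processing step are placeholders, not necessarily what the tutorial uses):

CREATE PROCEDURE proc_BrokerTargetActivProc
AS
BEGIN
    DECLARE @handle UNIQUEIDENTIFIER,
            @message_type SYSNAME,
            @body VARBINARY(MAX);

    WHILE (1 = 1)
    BEGIN
        -- Wait up to 5 seconds for the next message; if nothing arrives,
        -- exit and let activation start the procedure again later.
        WAITFOR (
            RECEIVE TOP (1)
                @handle = conversation_handle,
                @message_type = message_type_name,
                @body = message_body
            FROM TargetQueue
        ), TIMEOUT 5000;

        IF (@@ROWCOUNT = 0)
            BREAK;

        -- Process @body here (e.g. extract the procedure name and EXEC it),
        -- then loop to pick up the next message without a new activation.
    END
END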
Q. How will the activation procedure be called with only one message?
A. That is an implementation detail of the way that BrokerTargetActivProc is written. Specifically, the RECEIVE TOP(1) statement. In my environment, I receive multiple messages off of the queue at once (e.g. RECEIVE TOP(1000)). That choice (and the implications of that choice) is up to you.
Q. What about parameter MAX_QUEUE_READERS = 5?
A. In order to fully appreciate this, a reading of this article is useful. It outlines when activation occurs on a service broker queue. Having MAX_QUEUE_READERS be greater than one says that you're allowing the server to have more than one process getting messages at a time. This would be useful in the case where you have a bunch of messages come in in a short period of time and you want to increase throughput by having multiple executions of your activation procedure active at once to run through those messages.
Follow-up questions from the comments:
Q: Who is calling the BrokerTargetActivProc procedure?
A: The procedure is called when activation is deemed to be necessary (see the article linked to above). As for the execution context, you set that when you set the procedure as the activation procedure for the queue. For instance, if I wanted it to execute as foo_user, I'd do:
alter queue [TASK_QUEUE] with activation (
    procedure_name = [BrokerTargetActivProc],
    execute as 'foo_user'
);
Q: How do you pass parameters to the activation procedure?
A: You don't. The point of the activation procedure is to de-queue messages and process them. So, all of the information should be in the message (which may drive queries etc).
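In other words, any "parameters" travel inside the message body. As a hedged sketch (the service, contract and message type names below are made up for illustration), the sender might do something like:

DECLARE @handle UNIQUEIDENTIFIER;
DECLARE @payload XML =
    N'<request procedure="dbo.usp_DoWork"><param name="CustomerId">42</param></request>';

BEGIN DIALOG CONVERSATION @handle
    FROM SERVICE [InitiatorService]
    TO SERVICE 'TargetService'
    ON CONTRACT [AsyncContract]
    WITH ENCRYPTION = OFF;

-- The XML body carries both the procedure to run and its parameters.
SEND ON CONVERSATION @handle
    MESSAGE TYPE [AsyncRequest] (@payload);

The activation procedure then pulls that XML out of message_body and decides what to execute.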
Q: What about error handling?
A: You have to be careful here. Anything that causes a receive statement to rollback can trigger what is called poison message handling. That said, what I do is I wrap the receive and subsequent processing of a message in a try/catch block in the activation stored procedure and in the catch, I put the message into a table for later investigation. But how you handle errors and what you do with the messages that caused them is up to you!
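Building on the loop sketch above, the shape of that try/catch is roughly this (dbo.BrokerErrors and usp_ProcessMessage are hypothetical names):

BEGIN TRY
    -- ... the RECEIVE from the queue and the processing of @body go here ...
    EXEC dbo.usp_ProcessMessage @body;
END TRY
BEGIN CATCH
    -- Don't rethrow and roll back the RECEIVE: five consecutive rollbacks
    -- disable the queue via poison message handling. Log instead.
    INSERT INTO dbo.BrokerErrors (conversation_handle, message_body, error_message, logged_at)
    VALUES (@handle, @body, ERROR_MESSAGE(), SYSUTCDATETIME());
END CATCH;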

Related

How to queue up calls to stored procedures in Oracle?

I have a stored procedure in Oracle (which schedules a one-time job to run another procedure, if this is relevant). The job calls another stored procedure which runs for a few minutes, performs inserts, updates and deletes, and also uses loops. Now, while the long procedure is running, if there is another call for it to run, is it possible to prevent them from executing simultaneously? And even better, to make the second one execute once the previous one has finished, like queuing them?
To prevent two stored procedures from running at the same time, you could use DBMS_LOCK to get an exclusive lock (or just try to update the same row in a given table).
For your purpose the procedure DBMS_LOCK.ALLOCATE_UNIQUE was designed.
Assign some unique lockname string and call the procedure at the beginning of the critical sequence in your procedure. You will get a lockhandle as an output.
Then call DBMS_LOCK.REQUEST to start the exclusive processing:
DBMS_LOCK.ALLOCATE_UNIQUE( v_lockname, v_lockhandle);
v_res := DBMS_LOCK.REQUEST( lockhandle=>v_lockhandle, release_on_commit => TRUE);
At the end you must release the handle to be able to process the next run
v_res := DBMS_LOCK.RELEASE (v_lockhandle);
A good practice is to release it in the EXCEPTION section as well, so that the lock is not left held after a failure.
Please check the possible options in the documentation such as for release_on_commit and adjust it for your need.
Some care should be taken with the return values of the REQUEST and RELEASE functions.
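Putting those pieces together, a rough PL/SQL sketch could look like this (the lock name, the timeout and the body of the critical section are placeholders; adjust release_on_commit and the status-code handling to your needs):

CREATE OR REPLACE PROCEDURE long_running_job AS
  v_lockname   VARCHAR2(128) := 'LONG_RUNNING_JOB_LOCK';
  v_lockhandle VARCHAR2(128);
  v_res        INTEGER;
BEGIN
  DBMS_LOCK.ALLOCATE_UNIQUE(v_lockname, v_lockhandle);

  -- Wait (here up to one hour) for an exclusive lock; 0 means it was granted.
  v_res := DBMS_LOCK.REQUEST(lockhandle        => v_lockhandle,
                             lockmode          => DBMS_LOCK.X_MODE,
                             timeout           => 3600,
                             release_on_commit => FALSE);
  IF v_res NOT IN (0, 4) THEN   -- 4 = this session already holds the lock
    RAISE_APPLICATION_ERROR(-20001, 'Could not obtain lock, status ' || v_res);
  END IF;

  -- ... the long-running inserts/updates/deletes go here ...

  v_res := DBMS_LOCK.RELEASE(v_lockhandle);
EXCEPTION
  WHEN OTHERS THEN
    v_res := DBMS_LOCK.RELEASE(v_lockhandle);
    RAISE;
END;
/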

How do I execute a stored procedure whenever an item is added to a service broker queue?

I am using SQL Service Broker. I have a queue that another process is adding items to. I want to run a stored procedure whenever items are added to the queue. The procedure will receive the top item from the queue and use its information within the stored procedure. What is the correct syntax for doing something like this? Do I use a typical SQL Trigger or is there something special to use when working with Service Broker queues?
An activation stored procedure can be specified as part of the queue definition.
See the documentation for CREATE QUEUE - specifically the ACTIVATION clause.
An example from the documentation:
The following example creates a queue that is available to receive messages. The queue starts the stored procedure expense_procedure when a message enters the queue. The stored procedure executes as the user ExpenseUser. The queue starts a maximum of 5 instances of the stored procedure.
CREATE QUEUE ExpenseQueue
WITH STATUS=ON,
ACTIVATION (
PROCEDURE_NAME = expense_procedure,
MAX_QUEUE_READERS = 5,
EXECUTE AS 'ExpenseUser' ) ;

Can we call a stored procedure based on an event happening, like triggers?

Can we call a stored procedure based on an event happening, like triggers?
You can use Service Broker and set up an Activation Procedure to be run when an Event occurs.
You can query sys.event_notification_event_types to see the types of events available.
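For example, a hedged sketch of wiring database DDL events to a Service Broker queue might look like this (the queue, service and notification names are made up; the contract is the built-in PostEventNotification one):

CREATE QUEUE EventNotificationQueue;

CREATE SERVICE EventNotificationService
    ON QUEUE EventNotificationQueue
    ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);

CREATE EVENT NOTIFICATION CaptureDDLEvents
    ON DATABASE
    FOR DDL_TABLE_EVENTS
    TO SERVICE 'EventNotificationService', 'current database';

An activation procedure attached to EventNotificationQueue then receives an XML payload describing the event whenever one fires.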

Is having a stored procedure that calls other stored procedures bad?

I'm trying to make a long stored procedure a little more manageable. Is it wrong to have stored procedures that call other stored procedures? For example, I want to have a sproc that inserts data into a table and, depending on the type, inserts additional information into a table for that type, something like:
BEGIN TRANSACTION
INSERT INTO dbo.ITSUsage (
    Customer_ID,
    [Type],
    Source
) VALUES (
    @Customer_ID,
    @Type,
    @Source
)
SET @ID = SCOPE_IDENTITY()
IF @Type = 1
BEGIN
    exec usp_Type1_INS @ID, @UsageInfo
END
IF @Type = 2
BEGIN
    exec usp_Type2_INS @ID, @UsageInfo
END
IF (@@ERROR <> 0)
    ROLLBACK TRANSACTION
ELSE
    COMMIT TRANSACTION
Or is this something I should be handling in my application?
We call procs from other procs all the time. It's hard/impossible to segment a database-intensive (or database-only) application otherwise.
Calling a procedure from inside another procedure is perfectly acceptable.
However, in Transact-SQL relying on @@ERROR is prone to failure. Case in point, your code. It will fail to detect an insert failure, as well as any error produced inside the called procedures. This is because @@ERROR is reset with each statement executed and only retains the result of the very last statement. I have a blog entry that shows a correct template of error handling in Transact-SQL and transaction nesting. Also, Erland Sommarskog has an article that has for a long time now been the reference read on error handling in Transact-SQL.
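As a hedged illustration of the alternative (keeping the original table and parameter names; THROW needs SQL Server 2012 or later), the dispatch could be protected with TRY/CATCH instead of @@ERROR:

BEGIN TRY
    BEGIN TRANSACTION;

    INSERT INTO dbo.ITSUsage (Customer_ID, [Type], Source)
    VALUES (@Customer_ID, @Type, @Source);

    SET @ID = SCOPE_IDENTITY();

    IF @Type = 1
        EXEC usp_Type1_INS @ID, @UsageInfo;
    ELSE IF @Type = 2
        EXEC usp_Type2_INS @ID, @UsageInfo;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;  -- re-raise so the caller still sees the original error
END CATCH;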
No, it is perfectly acceptable.
Definitely, no.
I've seen ginormous stored procedures doing 20 different things that would have really benefited from being refactored into smaller, single purposed ones.
As long as it is within the same DB schema it is perfectly acceptable in my opinion. It is reuse which is always favorable to duplication. It's like calling methods within some application layer.
Not at all. I would even say it's recommended, for the same reasons that you create methods in your code.
One stored procedure calling another stored procedure is fine. Just be aware that there is a limit on the nesting level you can reach (32 levels in SQL Server).
In SQL Server the current nesting level is returned by the @@NESTLEVEL function.
Please check the Stored Procedure Nesting section here: http://msdn.microsoft.com/en-us/library/aa258259(SQL.80).aspx
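As a small illustrative guard (the threshold is arbitrary), a procedure can check its own depth before recursing further:

IF @@NESTLEVEL >= 30
BEGIN
    RAISERROR('Nesting level too deep, aborting.', 16, 1);
    RETURN;
END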
cheers
No. It promotes reuse and allows for functionality to be componentized.
As others have pointed out, this is perfectly acceptable and necessary to avoid duplicating functionality.
However, in Transact-SQL watch out for transactions in nested stored procedure calls: you need to check @@TRANCOUNT before issuing ROLLBACK TRANSACTION because it rolls back all nested transactions. Check this article for an in-depth explanation.
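A sketch of a nesting-safe pattern (the variable name and structure are illustrative): the procedure only commits or rolls back a transaction it started itself, otherwise it leaves that decision to the outermost caller.

DECLARE @startedTran BIT = 0;
IF @@TRANCOUNT = 0
BEGIN
    BEGIN TRANSACTION;
    SET @startedTran = 1;
END

BEGIN TRY
    -- ... the procedure's work goes here ...
    IF @startedTran = 1
        COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- ROLLBACK undoes every nested level, so only issue it if we own the transaction.
    IF @startedTran = 1 AND @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;
END CATCH;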
Yes, it is bad. While SQL Server does support and allow one stored procedure to call another, I would generally try to avoid this design if possible. My reason?
The single responsibility principle.
In our IT area we use stored procedures to consolidate common code for both stored procedures and triggers (where applicable). It's also virtually mandatory for avoiding SQL source duplication.
The general answer to this question is, of course, No - it's normal and even preferred way of coding SQL stored procedures.
But it could be that in your specific case it is not such a good idea.
If you maintain a set of stored procedures that support the data access tier (DAO) in your application (Java, .Net, etc.), then keeping the database tier (let's call the stored procedures that) streamlined and relatively thin benefits your overall design. Thus, having an extensive graph of stored procedure calls may indeed be bad for maintaining and supporting the overall data access logic in such an application.
I would lean toward a more uniform distribution of logic between the DAO and the database tier, so that the stored procedure code fits inside a single functional call.
Adding to the correct comments of other posters, there is nothing wrong in principle, but you need to watch out for the execution time in case the procedure is being called by, for instance, an external application that is bound to a specific timeout.
A typical example is calling the stored procedure from a web application: if your chain of executions takes longer than the default timeout, you get a failure in the web application even when the stored procedure commits correctly.
The same happens if you call it from an external service.
This can lead to inconsistent behaviour in your application, triggering error management routines in external services, etc.
If you are in a situation like this, what I do is break the chain of calls by redirecting the long-running child calls to different processes using Service Broker.

When ASUTIME LIMIT is met for DB2 stored procedures, will the process ROLLBACK or COMMIT?

Can't find any answers on the net, has anyone tried this?
My stored proc processes deletion of tokens found across multiple tables and it is set to run for a specified number of "cpu service units" by using the ASUTIME LIMIT integer stored proc definition/characteristic. When this limit is met, will the stored proc commit or rollback the statements executed for the current token?
Thanks.
Thanks. Anyway, a co-worker of mine says that:
"With ASUTIME, we are controlling the number of CPU units that the stored procedure will use in its lifetime. If the stored procedure will use more service units than is allowed, the stored procedure will cancel. This means that as long as the app stays within its bounds, the SP will go on.
It can happen that another application, like a report run, will be kicked off while the stored procedure is running. There is no guarantee that the SP will stop at this point, or any point thereafter, because as long as both apps stay within their allowed range, the SP will not terminate. This may not be the intended behavior - most SPs of this nature are run on off days (i.e. Sunday) so that they will not interfere or compete with higher priority jobs during the regular day. In short, they are meant to end, and not to co-run with other jobs.
ASUTIME is designed for runaway stored procedures, not as a tight control on looping within a procedure. WLM priorities and service goals should be used for this, not ASUTIME.
There may not be significant savings with using ASUTIME, since the stored procedure would also have to check against system resources and the RLST tables."
Posting because it might be helpful to someone else.