What is the performance impact of having a WAITFOR RECEIVE with no TIMEOUT?
We have built a .NET service that receives messages from our SQL Server Service Broker queue and then forwards them to ActiveMQ.
Instead of having the service poll the Service Broker queue every 5 seconds, what is the performance impact of doing a WAITFOR RECEIVE on the queue with no TIMEOUT?
From the documentation on RECEIVE:
The statement waits until at least one message becomes available then
returns a result set that contains all message columns.
What will happen is that the WAITFOR RECEIVE will suspend and wait for a message to come in and only return when a message does come in. If it never does, it will sit there forever.
This does not consume server resources (beyond tying up a listener), but it makes it difficult to terminate the program that made the call if messages arrive infrequently. The nice thing about the TIMEOUT clause is that it gives your application a way to periodically check whether someone, say, requested that the program terminate. Without a timeout that returns control to the calling thread and lets it check whether it should exit, your only option is to forcibly terminate the thread from the outside.
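A minimal sketch of that receive loop, assuming a placeholder connection string, queue name, and cancellation token (the service would forward each message body to ActiveMQ where the comment indicates):

    using System;
    using System.Data.SqlClient;
    using System.Threading;

    class BrokerReader
    {
        // WAITFOR returns an empty result set if no message arrives
        // within the TIMEOUT (milliseconds).
        const string ReceiveSql = @"
            WAITFOR (
                RECEIVE TOP (1)
                    conversation_handle,
                    message_type_name,
                    message_body
                FROM dbo.TargetQueue
            ), TIMEOUT 5000;";

        public static void Run(string connectionString, CancellationToken token)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                while (!token.IsCancellationRequested)
                {
                    using (var cmd = new SqlCommand(ReceiveSql, conn))
                    {
                        // Must exceed the WAITFOR timeout, or ADO.NET
                        // aborts the command before RECEIVE returns.
                        cmd.CommandTimeout = 10;
                        using (var reader = cmd.ExecuteReader())
                        {
                            while (reader.Read())
                            {
                                // Null for bodiless messages (e.g. EndDialog).
                                var body = reader["message_body"] as byte[];
                                // forward body to ActiveMQ here
                            }
                        }
                    }
                    // An empty result set means the timeout elapsed;
                    // loop around, re-check the token, and wait again.
                }
            }
        }
    }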
The difference in impact on the server by cycling the call every 5 seconds as opposed to holding on indefinitely is so small as to be unmeasurable.
I am using NSB 4.4.2
I want to have something like heartbeats on my saga to show processing statistics.
When I request a timeout, it is sent to the saga's input queue.
If there are many messages ahead of that timeout message, IHandleTimeouts may not be fired at the expected time.
Is this a bug? Or how can I use a separate queue for timeout messages?
Thanks
You are correct - when a timeout is ready to be dispatched, it is sent to the incoming queue of the endpoint, and if there are already many other messages in there, it will have to wait its turn to be processed.
Another thing you might want to consider is that the endpoint may be down at that time.
If you want to guarantee that your saga code will be invoked at (or very close to) the time of the timeout, you'll need to set up a high availability deployment first. Then, you should look at setting the SLA required of that endpoint - how quickly messages should be processed, and then monitor the time to breach SLA performance counter.
See here for more information: http://docs.particular.net/nservicebus/monitoring-nservicebus-endpoints
You should be prepared to scale out your endpoint as needed to guarantee enough processing power to keep up with the load coming in.
NOTE: The reason we use the same incoming queue for processing these timeouts is by design. A timeout message is almost always the same priority or lower than the other business messages being processed by a saga. As such, it doesn't make sense to have them cut ahead of other messages in line.
Timeouts are sent to the [endpointname].timeouts queue; when they are due, they are dispatched to the endpoint's input queue.
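For reference, a minimal saga using this API (NSB 4.x) might look like the sketch below; the message and saga types are illustrative, and the saga-to-message mapping is omitted for brevity:

    using System;
    using NServiceBus;
    using NServiceBus.Saga;

    public class HeartbeatTimeout { }

    public class StatisticsSagaData : IContainSagaData
    {
        public Guid Id { get; set; }
        public string Originator { get; set; }
        public string OriginalMessageId { get; set; }
    }

    public class StatisticsSaga : Saga<StatisticsSagaData>,
        IAmStartedByMessages<StartProcessing>,
        IHandleTimeouts<HeartbeatTimeout>
    {
        public void Handle(StartProcessing message)
        {
            // Delivered via the endpoint's input queue when due, behind
            // whatever business messages are already waiting there.
            RequestTimeout<HeartbeatTimeout>(TimeSpan.FromSeconds(30));
        }

        public void Timeout(HeartbeatTimeout state)
        {
            // Emit heartbeat/statistics here, then schedule the next one.
            RequestTimeout<HeartbeatTimeout>(TimeSpan.FromSeconds(30));
        }
    }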
I am using MSMQ 4 with WCF. We have a Microsoft Dynamics plugin putting a message on a queue. A service picks up the message and makes an HTTP request to another web server. The web server responds by putting another message on a different queue. A second service picks up the messages and sends the response back to Dynamics...
We have our retry queue set up to retry 3 times and then wait for 5 minutes before retrying again. The Dynamics system sometimes takes so long (due to other plugins) that we can round-trip before the database transaction commits. The users then don't see the update come through for another 5 minutes.
I am curious if there is a way to configure the retry mechanism to retry incrementally. So, the first time it fails, it only waits a few seconds. If it fails a second time, it waits twice that. And the time between retries just keeps growing.
The problem with just reducing the time between retries is that a bad message could easily fill up a log file.
It turns out there is no built-in way of doing this. One slightly involved option is to create multiple queues, each with its own retry/poison sub-queues, each with a growing retry delay. You can reuse the same handler for each queue - the only thing that changes is the configuration. You also need a handler that can read the poison sub-queues (service) and move the message to the next queue in the chain (client).
So, you set receiveErrorHandling to Move. The maxRetryCycles and receiveRetryCount are just 1. Each queue will use a growing retryCycleDelay. Each queue you create will have a poison sub-queue created for it automatically. You simply read from each poison sub-queue and use a client to move it to the next queue.
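As an illustrative sketch of two links in such a chain (binding names and delays are placeholders), the netMsmqBinding configuration might look like this:

    <netMsmqBinding>
      <!-- First queue: one cycle, short delay, then move to its poison sub-queue -->
      <binding name="retryTier1"
               receiveErrorHandling="Move"
               receiveRetryCount="1"
               maxRetryCycles="1"
               retryCycleDelay="00:00:05" />
      <!-- Second queue: the mover service re-sends here; longer delay -->
      <binding name="retryTier2"
               receiveErrorHandling="Move"
               receiveRetryCount="1"
               maxRetryCycles="1"
               retryCycleDelay="00:00:20" />
    </netMsmqBinding>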
I am sure someone could write some code that would automatically create N queues with a growing retryCycleDelay and hook it up all programmatically. Since it is the same handler/client for every queue, it wouldn't be a big deal.
I have a high-rate UDP server using Netty (3.6.6-Final), but the back-end servers it calls can take 1 to 10 seconds to respond - I have no control over those, so I cannot improve latency there.
What happens is that all handler worker threads end up busy waiting for responses, so any new request must wait to be processed, and over time the responses come very late. Is it possible to detect, for a given request, that the thread pool is exhausted, so as to intercept the request early and issue a server-busy response?
I would use an ExecutionHandler configured with an appropriate ThreadPoolExecutor, with a max thread count and a bounded task queue. By choosing different RejectedExecutionHandler policies, you can either catch the RejectedExecutionException to answer with a "server busy", or use a "caller runs" policy, in which case the IO worker thread will execute the task and create push-back (but that is what you wanted to avoid).
Either way, an execution handler with a limited capacity is the way forward.
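A sketch of that setup against the Netty 3.x API (pool sizes and handler names are illustrative):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    import org.jboss.netty.handler.execution.ExecutionHandler;

    // Bounded pool: at most 32 workers and 1000 queued tasks. With
    // AbortPolicy, a saturated pool throws RejectedExecutionException,
    // which should surface through the pipeline's exceptionCaught,
    // where a "server busy" response can be written.
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
            8, 32, 60L, TimeUnit.SECONDS,
            new ArrayBlockingQueue<Runnable>(1000),
            new ThreadPoolExecutor.AbortPolicy());

    ExecutionHandler executionHandler = new ExecutionHandler(pool);

    // In the pipeline factory, ahead of the business-logic handler:
    // pipeline.addLast("execution", executionHandler);
    // pipeline.addLast("handler", new BackendForwardingHandler());

Swapping AbortPolicy for ThreadPoolExecutor.CallerRunsPolicy gives the push-back behaviour instead.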
Say I have the following chain of execution in my WCF Service:
ServiceMethod calls and waits for Method1, then calls and waits for Method2, which calls and waits for Method3. Finally ServiceMethod calls and waits for Method4 before returning.
What happens if the service's configured timeout is hit during the execution of Method3 (or any of those methods)? Does the thread executing ServiceMethod just get terminated immediately, with no further execution? Or does the process allow the thread to continue to the end, without returning any result?
My concern is knowing how far processing got before the timeout was encountered. If the thread is allowed to complete, then one knows that everything completed anyway (even though no result was returned). But if the thread just gets terminated immediately, one would have to design ServiceMethod so that one can trace how far it got, and then try again from there.
The operation is allowed to run to completion on the server - it's the WCF channel that times out. In fact, some people have asked here for a way to force the server side processing to abort when a timeout occurs, and it's generally agreed that doing that cleanly would be difficult:
Why doesn’t WCF support service-side timeouts?
After the service instance has been created by the host, it will continue to run until either the process is terminated, your logic completes and exits the entry point (the operation), or an uncaught exception is thrown. The timeout you are concerned with is between the client and the host.
The client will get an exception on the channel signalling the timeout, and the channel will fault. This tells the client that the channel is not safe for use and will have to be re-created.
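On the client side, that typically looks like the following sketch (the contract and endpoint name are placeholders):

    using System;
    using System.ServiceModel;

    // Placeholder contract for the service described above.
    [ServiceContract]
    public interface IMyService
    {
        [OperationContract]
        void ServiceMethod();
    }

    // ...

    var factory = new ChannelFactory<IMyService>("myServiceEndpoint");
    var proxy = factory.CreateChannel();
    try
    {
        proxy.ServiceMethod();
    }
    catch (TimeoutException)
    {
        // The server-side operation may still run to completion;
        // only the channel has given up waiting.
        ((ICommunicationObject)proxy).Abort(); // timed-out channel is unusable
        proxy = factory.CreateChannel();       // re-create before any retry
    }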
A small comment on the call chain: you are better off encapsulating your step-by-step logic in a single workflow or manager, which can help you with requirements for re-startability or compensation logic. Have a single entry point in your service which then executes the workflow.
+1 on the answer for Why doesn't WCF support service-side timeouts from "500-internal server"
boost::asio provides an example of how to use the library to implement asynchronous timeouts: the client sends periodic heartbeat messages to the server, which echoes the heartbeat back to the client. Failure to respond within N seconds causes a disconnect. See boost_asio/example/timeouts/server.cpp.
The pattern outlined in these examples would be a good starting point for part of a project I will be working on shortly, but for one wrinkle:
in addition to heartbeats, both client and server need to send messages to each other.
The timeouts example pushes heartbeat echo messages onto a queue, and a subsequent timeout causes an asynchronous handler for the timeout to actually write the data to the socket.
Introducing data for the socket to write cannot be done on the thread running io_service, because it is blocked on run(). run_one() doesn't help: you still block until there is a handler to run, and you introduce the complexity of managing work for the io_service.
In asio, asynchronous handlers - writes to the socket being one of them - are called on the thread running io_service.
Therefore, to introduce messages at arbitrary times, the data to be sent is pushed onto a queue from a thread other than the io_service thread, which implies protecting the queue and notification timer with a mutex. That means two mutex acquisitions per message: one to push the data onto the queue, and one in the handler which dequeues the data to write to the socket.
This is actually a more general question than asio timeouts alone: is there a pattern, when the io_service thread is blocked on run(), by which data can be asynchronously written to the socket without taking a mutex twice per message?
The following things could be of interest. boost::asio strands are a mechanism for synchronising handlers, though AFAIK you only need them if you are calling io_service::run from multiple threads.
Also useful is the io_service::post method, which lets you execute code on the thread that has invoked io_service::run.
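A minimal sketch of the post() approach (class and member names are illustrative): any thread may call send(), which posts the work onto the io_service thread; the queue is then only ever touched on that thread, so no mutex is needed around it.

    #include <deque>
    #include <string>
    #include <boost/asio.hpp>
    #include <boost/bind.hpp>

    class sender
    {
    public:
        sender(boost::asio::io_service& io, boost::asio::ip::tcp::socket& sock)
            : io_(io), socket_(sock) {}

        // Safe to call from any thread: post() hands the work to the
        // io_service thread instead of touching the queue directly.
        void send(const std::string& msg)
        {
            io_.post(boost::bind(&sender::do_send, this, msg));
        }

    private:
        // Everything below runs on the io_service thread only.
        void do_send(std::string msg)
        {
            bool idle = queue_.empty();
            queue_.push_back(msg);
            if (idle)              // no write in flight, so start one
                write_next();
        }

        void write_next()
        {
            boost::asio::async_write(
                socket_,
                boost::asio::buffer(queue_.front()),
                boost::bind(&sender::handle_write, this,
                            boost::asio::placeholders::error));
        }

        void handle_write(const boost::system::error_code& ec)
        {
            if (ec) return;        // real code would report/close here
            queue_.pop_front();
            if (!queue_.empty())   // keep draining the queue
                write_next();
        }

        boost::asio::io_service& io_;
        boost::asio::ip::tcp::socket& socket_;
        std::deque<std::string> queue_;  // touched only on the io_service thread
    };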