I am reading Programming WCF Services, 3rd Ed., by Juval Lowy. In the chapter "Queued Services", which covers NetMsmqBinding, on page 473 he says: "... you should keep the service's processing of the queued call relatively short, or risk aborting the playback transaction. An important observation here is that it is wrong to equate queued calls with lengthy asynchronous calls."
1) What is a short call? 2) What is the best practice for long-running operations; send them off to the ThreadPool?
This article ran into the same problem in practice:
WCF & MSMQ & TransactionScope long process
I have looked and looked, and I cannot find any best practices regarding this matter on the internet.
3) Does this rule apply if I have no transaction?
1) By default, a short process with a transaction is anything that takes less than 10 minutes (the default transaction timeout).
2) It can be, but if you do that you lose the transactional behavior (if the server goes down, the message will be lost).
3) Yes. The transaction scope has a default timeout that will abort your transaction.
The good news is that you can override that timeout in the machine.config file: http://blogs.inkeysolutions.com/2012/01/managing-timeouts-while-using.html
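For reference, and only as a sketch, the override looks roughly like this (maxTimeout lives in machine.config only and is the ceiling any TransactionScope can reach; the one-hour value is just an illustration):

    <!-- machine.config: system-wide ceiling for transaction timeouts -->
    <configuration>
      <system.transactions>
        <!-- raise the default 10-minute maximum to 1 hour -->
        <machineSettings maxTimeout="01:00:00" />
      </system.transactions>
    </configuration>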
I want to add server-side retry behavior if something specific happens during operation.
Custom IOperationInvoker looks like a good candidate for this functionality, but...
Unfortunately, the instance on which the operation should be performed has already been created/resolved, so to correctly implement retry logic everyone would have to write stateless service implementations, which is quite limiting; sometimes it's nice to have InstanceContextMode.PerCall and state that lives only for the duration of the request.
Is there any place or possibility to force WCF to re-create/resolve service and invoke operation again?
It's not so easy to implement retry logic on your own. You will have to deal with a number of issues like these:
What if all attempts failed?
Should there be an interval between attempts?
What if your server is down between the first and second attempt?
etc.
Fortunately there is an out of the box solution which already does everything for you. Take a look at the MSMQ WCF binding.
It will put your message (the serialized contract object) in the dead-letter queue if you wish, so you can easily keep track of such messages and even force them to be processed again.
You can set an interval between attempts, so if your server is down for 10 minutes not all of the attempts are used up.
You can configure it to be durable: it will store the messages on disk, so when your server comes back up it will try to process the message again.
And so on; take a look at the MSMQ binding and you will find a lot of useful features.
Here's a good article about queuing in WCF.
So in the long run your design will be something like that:
Client Call -> MSMQ -> Service
Please note that you don't have to make any changes to your existing code at all; you just change the binding configuration and set up MSMQ, which is fairly easy.
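As a sketch only, here are the NetMsmqBinding knobs for the retry, dead-letter and durability behavior mentioned above, set in code (they can equally be set in the <netMsmqBinding> config section); the values are illustrative, not recommendations:

    using System;
    using System.ServiceModel;

    static class MsmqBindingSample
    {
        // Illustrative values only: tune retry counts and delays to your environment.
        public static NetMsmqBinding CreateBinding()
        {
            return new NetMsmqBinding(NetMsmqSecurityMode.Transport)
            {
                Durable = true,                                   // messages survive a restart
                ExactlyOnce = true,                               // requires a transactional queue
                ReceiveRetryCount = 3,                            // immediate retries per cycle
                MaxRetryCycles = 2,                               // additional retry cycles...
                RetryCycleDelay = TimeSpan.FromMinutes(10),       // ...with a delay between them
                ReceiveErrorHandling = ReceiveErrorHandling.Move, // move poison messages aside
                DeadLetterQueue = DeadLetterQueue.System          // keep undeliverable messages
            };
        }
    }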
Just in case you can't use the MSMQ binding directly from your client, you can always add a special server-side service which just puts the messages into the queue. The design will then be:
Client service -> HTTP -> QueueService (one method which puts messages to a queue) -> MSMQ -> ProcessService
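A rough sketch of that façade, with hypothetical contract and endpoint names (the "msmqProcessService" endpoint is assumed to be configured with netMsmqBinding and the queue address of ProcessService):

    using System.ServiceModel;

    [ServiceContract]
    public interface IOrderSubmission
    {
        [OperationContract(IsOneWay = true)]   // queued calls must be one-way
        void SubmitOrder(string orderXml);
    }

    // Exposed to clients over HTTP; all it does is re-send the call onto the queue.
    public class QueueService : IOrderSubmission
    {
        public void SubmitOrder(string orderXml)
        {
            var factory = new ChannelFactory<IOrderSubmission>("msmqProcessService");
            var proxy = factory.CreateChannel();
            proxy.SubmitOrder(orderXml);            // this only enqueues an MSMQ message
            ((IClientChannel)proxy).Close();
            factory.Close();
        }
    }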
Hope it helps!
I have some queries about WCF and multithreading.
My plan is to place items onto the Thread Pool and for it to process messages from the MSMQ queue.
I also will be hosting WCF in WAS.
I am wondering how the threading will work at this point. For example, messages will be picked up by the WCF binding to the MSMQ queue, and I know that WAS will spin up the service as and when required. But let's say we have 100 messages to process (100 messages per second, for example) - would these be delivered in a threaded way or on a single thread?
If in a threaded manner then how best to commit or abort transactions? Any special considerations?
Sorry for the questions - just need to clarify this.
It's not clear what "placing items onto the Thread Pool" does, but on the WCF side, a service using the netMsmqBinding handles "calls" in a similar way to other WCF bindings. The difference is that a "call" is actually an MSMQ message in a queue.
This article on netMsmqBinding gives a very clear explanation of how the binding works. If you configure the WCF service with its default InstanceContext setting (per call or per session, depending on the .NET version), the service instances will pick up messages off the queue as if each were a standard call. There are settings in MSMQ and WCF that can affect this behavior so that the messages are processed sequentially, but that's not the default.
Let WCF handle multi-threading for you by leaving the service set to per call (or per session); for transactions, look at the code in this sample on MSDN to see how to work with them.
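As a minimal sketch (the contract and type names are hypothetical, not from the sample), the usual shape of such a service is a per-call instance whose one-way operation joins the playback transaction and auto-completes it, so a thrown exception aborts the transaction and MSMQ retries or dead-letters the message:

    using System.ServiceModel;

    [ServiceContract]
    public interface IOrderProcessor
    {
        [OperationContract(IsOneWay = true)]     // queued operations must be one-way
        void ProcessOrder(string orderXml);
    }

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
    public class OrderProcessor : IOrderProcessor
    {
        // Enlists in the MSMQ playback transaction: returning normally commits it,
        // throwing aborts it and the message goes back to the queue.
        [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
        public void ProcessOrder(string orderXml)
        {
            // keep the transactional work short, e.g. a single database write
        }
    }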
I have a simple WCF need - basically clients running in isolation and a server, so really client/server initially.
WCF helps us decouple the service layer and practise a SOA approach for scale.
All we are doing on the server (per call/multiple concurrency) is writing to a db and then performing some IO for another system which will have immediate use for it - but this might change as (unknown) requirements build.
Speed: We need the service to be literally as quick as possible: 1 second is OK - 2 is slow - and some errors need to be sent back immediately.
I was considering using server async patterns, queues (MSMQ), or Azure to allow the service method to queue the work and return quickly. NB: however, some processing might be 'online' in the WCF service (db write) with an immediate return of a response/error, while other processing could be offline (IO). Disadvantage: this requires a means to call back the client if there is a show-stopper error, and design and development scale accordingly.
i) Although WCF allows for the service itself, I see the technology as providing an interprocess comm channel, and perhaps the actual service operations should run in Windows services, e.g. WCF writes to a db which a long-running service polls and picks up. As the system gets bigger and bigger, some operations may be genuinely fire-and-forget and long-running - they complete, or are needed, hours later. We can take these out of the immediate loop. This is true decoupling, even if it slows us down. A WCF method can't hand work off to a service unless it is calling another WCF service, and it can't call a Windows service!
From an architectural viewpoint, is it OK to have some operations complete and return, and others pass to a true bus or service (by some mechanism)? Am I over-engineering this?
ii) As all the db and IO operations will take, say, 1 or 2 seconds max, I feel I might just call the service async from the client, wait for it to return, and then marshal back to the client UI. This is also simple. This might prove a wrong decision in the long run, but having said that, all service-layer ops would be in a separate dll so that they could be called by another service for later scale. A method call could be marked as immediate, or queued for processing, say.
Thoughts?
From an architectural viewpoint, is it OK to have some operations complete and return, and others pass to a true bus or service (by some mechanism)? Am I over-engineering this?
It is OK; it all depends on the detailed requirements.
Thought #1
Have operations return a unique request ID, and provide one operation that reports status by request ID (sketched below).
Thought #2
Have operations return the result if they finish within X seconds, or otherwise return a request ID.
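A minimal sketch of Thought #1, with hypothetical contract and type names:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface ILongRunningWork
    {
        [OperationContract]
        Guid Submit(string payload);           // returns immediately with a request ID

        [OperationContract]
        WorkStatus GetStatus(Guid requestId);  // client polls for progress/result later
    }

    public enum WorkStatus { Pending, InProgress, Completed, Failed }

For Thought #2, Submit would instead return either the finished result or, if the X-second budget is exceeded, a request ID to poll with.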
I have a web tier that forwards calls onto an application tier. The web tier uses a shared, cached channel to do so. The application tier services in question are stateless and have concurrency enabled.
But they are not being called concurrently.
If I alter the web tier to create a new channel on every call, then I do get concurrent calls onto the application tier. But I want to avoid that cost since it is functionally unnecessary for my scenario. I have no session state, nor do I need to re-authenticate the caller each time. I understand that the creation of the channel factory is far more expensive than the creation of the channels, but it is still a cost I'd like to avoid if possible.
I found this article on MSDN that states:
While channels and clients created by the channels are thread-safe, they might not support writing more than one message to the wire concurrently. If you are sending large messages, particularly if streaming, the send operation might block waiting for another send to complete.
Firstly, I'm not sending large messages (just a lot of small ones, since I'm doing load testing) but am still seeing the blocking behavior. Secondly, this is rather open-ended and unhelpful documentation: it says they "might not" support writing more than one message, but doesn't explain the scenarios under which they would support concurrent messages.
Can anyone shed some light on this?
Addendum: I am also considering creating a pool of channels that the web server uses to fulfill requests. But again, I see no reason why my existing approach should block and I'd rather avoid the complexity if possible.
After much ado, this all came down to the fact that I wasn't calling Open explicitly on the channel before using it. Apparently an implicit Open can preclude concurrency in some scenarios.
You can cache the WCF proxy but still create a channel for each service call - this ensures concurrency, is not very expensive compared with creating the proxy from scratch, and makes re-authentication on each call unnecessary. This is explained on Wenlong Dong's blog - "Performance Improvement for WCF Client Proxy Creation in .NET 3.5 and Best Practices" (a much better source of WCF information and guidance than MSDN).
Just for completeness: Here is a blog entry explaining the observed behavior of request serialization when not opening the channel explicitly:
http://blogs.msdn.com/b/wenlong/archive/2007/10/26/best-practice-always-open-wcf-client-proxy-explicitly-when-it-is-shared.aspx
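Putting both recommendations together, a sketch of the client-side pattern (the contract and endpoint names are hypothetical): cache the ChannelFactory once, create a channel per call, and Open it explicitly before first use so calls don't get serialized behind an implicit Open:

    using System.ServiceModel;

    [ServiceContract]
    public interface IAppTierService
    {
        [OperationContract]
        string DoWork(string input);
    }

    public static class AppTierClient
    {
        // Creating the factory is the expensive part, so cache it once per app domain.
        private static readonly ChannelFactory<IAppTierService> Factory =
            new ChannelFactory<IAppTierService>("appTierEndpoint");

        public static string DoWork(string input)
        {
            var channel = Factory.CreateChannel();   // cheap relative to the factory
            ((IClientChannel)channel).Open();        // explicit Open avoids serialized calls
            try
            {
                return channel.DoWork(input);
            }
            finally
            {
                ((IClientChannel)channel).Close();
            }
        }
    }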
What is the optimal way to configure/code NServiceBus to delay retrying messages?
In its default configuration retry happens almost immediately up to the number of attempts defined in the configuration file. I'd ideally like to retry again after an hour, etc.
Also, how does HandleCurrentMessageLater() work? What does the Later aspect refer to?
The NSB retry mechanism is there to remedy temporary problems like deadlocks, etc. Longer retry intervals are better handled by creating another process that monitors the error queue and puts the messages back into the source queue at whatever interval you like. Take a look at the ReturnToSourceQueue.exe that comes with NSB for reference.
Edit: NServiceBus now supports this; we call it Second Level Retries. See http://docs.particular.net/ for more details.
Here is a blog post on why NServiceBus doesn't include a retry delay that I wrote after asking Udi this very same question in his distributed systems architecture course:
NServiceBus Retries: Why no back-off delay?
And here is a discussion thread covering some of the points involved in building an error queue monitor/retry endpoint:
http://tech.groups.yahoo.com/group/nservicebus/message/10964
As for HandleCurrentMessageLater(), all that does is put the current message back at the end of the queue. If there are no other messages waiting, it's going to be processed again immediately.
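For illustration (the message and handler names are hypothetical), this is roughly how it is used inside an NServiceBus 3.x handler:

    using NServiceBus;

    public class ProcessPayment : IMessage { }

    public class PaymentHandler : IHandleMessages<ProcessPayment>
    {
        public IBus Bus { get; set; }   // injected by NServiceBus

        public void Handle(ProcessPayment message)
        {
            if (!DependencyIsReady())
            {
                // Puts this message back at the end of the same queue; if the queue
                // is otherwise empty it will be picked up again almost immediately.
                Bus.HandleCurrentMessageLater();
                return;
            }

            // ...normal processing...
        }

        private static bool DependencyIsReady() { return true; }  // placeholder
    }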
As of NServiceBus 3.2.1, there is an out-of-the-box solution to handle back-off delays in the event of consecutive message failures. The pre-existing retry mechanism still retries failures without a delay, to handle cases like database deadlocks, quickly self-healing network issues, etc.
Once a message has been retried the configured number of times, it is moved to a "Second Level Retry" queue. This queue, as configured below, will retry after delays of 10, 20, and 30 seconds, after which the message is moved to the configured error queue. You're free to change these values to something that better suits your environment.
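For reference, a sketch of the app.config section being described, assuming NServiceBus 3.2.x (double-check the section type against your version; the values below give the 10/20/30 second delays mentioned above):

    <configSections>
      <section name="SecondLevelRetriesConfig"
               type="NServiceBus.Config.SecondLevelRetriesConfig, NServiceBus.Core" />
    </configSections>

    <!-- 3 retries with a 10-second increase each time: ~10s, 20s, 30s, then the error queue -->
    <SecondLevelRetriesConfig Enabled="true"
                              TimeIncrease="00:00:10"
                              NumberOfRetries="3" />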
You can also check out this link:
http://docs.particular.net/nservicebus/second-level-retries