My WCF service method needs to perform concurrent tasks inside a transaction that is flowed from the client to the service. To enable the transaction scope to flow across threads, I enabled TransactionScopeAsyncFlowOption in the TransactionScope constructor before making the call to the service.
using (var transaction = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    // service call here. Async flow option has no effect on the service side.
}
The transaction flows from the client to the service, but it does not flow into the sub-tasks. If, however, I create a new transaction scope in the service and enable its async flow there, then the transaction does flow into the sub-tasks. So my question is: why does TransactionScopeAsyncFlowOption have no effect on the transaction at the service end? Shouldn't the service pick up the client's transaction scope settings, so there would be no need to create a new transaction scope in the service just to enable async flow?
I found a similar question here as well. What actually happens is that WCF loses nearly all of its context upon crossing the threshold of the first await, and we have to explicitly opt in to flow any ambient context, such as the ambient transaction, to subsequent async calls. There are some workarounds mentioned in this book; check them out if you need more detail. I'll go with creating a new scope before the async call in the service operation.
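For illustration, a minimal sketch of that approach on the service side; the operation and sub-task names are hypothetical, and the default scope option (Required) makes the new scope join the transaction flowed from the client:

// requires .NET 4.5.1+, using System.Threading.Tasks; and using System.Transactions;
public async Task DoWorkAsync()
{
    // Re-enable async flow on the service side; with the default scope option
    // (Required) this scope joins the ambient transaction flowed from the client.
    using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
    {
        // The sub-tasks now see Transaction.Current across awaits.
        await Task.WhenAll(SubTaskA(), SubTaskB());
        scope.Complete();
    }
}

// hypothetical sub-tasks standing in for the concurrent work
Task SubTaskA() { return Task.FromResult(0); }
Task SubTaskB() { return Task.FromResult(0); }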
My requirement is to make the Subscriber pause processing the messages depending on whether a web service is up or not. So, when the web service is down, the messages should keep coming to the subscriber queue from Publisher and keep piling up until the web service is up again. (These messages should not go to the error queue, but stay on the Subscriber queue.)
I tried using Unsubscribe, but the publisher stops sending messages, since unsubscribing seems to clear the subscription info in RavenDB. I have also tried setting MaxConcurrencyLevel on the Transport class: if I set the worker threads to 0, the messages coming to the Subscriber go directly to the error queue. Finally, I tried Defer, which seems to put the current message in the audit queue, create a clone of the message, and send it locally to the subscriber queue when the timeout completes. Also, since I have to keep checking the status of the service and keep deferring, I cannot control the order of the messages, as I cannot predict when the web service will be up.
What is the best way to achieve the behavior I have explained? I am using NServiceBus version 4.5.
It sounds like you want to keep trying to handle a message until it succeeds, and not shuffle it back in the queue (keep it at the top and keep trying it)?
I think your only pure-NSB option is to tinker with the MaxRetries setting, which controls First Level Retries: http://docs.particular.net/nservicebus/msmqtransportconfig. Setting MaxRetries to a very high number may do what you are looking for, but I can't imagine doing so would be a good practice.
Second Level Retries will defer the message for a configurable amount of time, but IIRC will allow other messages to be handled from the main queue.
I think your best option is to put retry logic into your own code. So the handler can try to access the service x number of times in a loop (maybe on a delay) before it throws an exception and NSB's retry features kick in.
Edit:
Your requirement seems to be something like:
"When an MyEvent comes in, I need to make a webservice call. If the webservice is down, I need to keep trying X number of times at Y intervals, at which point I will consider it a failure and handle a failure condition. Until I either succeed or fail, I will block other messages from being handled."
You have some potentially complex logic on handling a message (retry, timeout, error condition, blocking additional messages, etc.). Keep in mind the role that NSB is intended to play in your system: communication between services via messaging. While NSB does have some advanced features that allow message orchestration (e.g. sagas), it's not really intended to be used to replace Domain or Application logic.
Bottom line, you may need to write custom code to handle your specific scenario. A naive solution would be a loop with a delay in your handler, but you may need to create a more robust in-memory collection/queue that holds messages while the service is down and processes them serially when it comes back up.
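By way of illustration, here is a minimal sketch of that naive in-handler retry, assuming a hypothetical CallWebService helper and illustrative values for the retry count ("X") and delay ("Y"):

// requires: using System; using System.Net; using System.Threading; using NServiceBus;
public class MyEventHandler : IHandleMessages<MyEvent>
{
    public void Handle(MyEvent message)
    {
        const int maxAttempts = 5;            // "X number of times" (illustrative)
        var delay = TimeSpan.FromSeconds(10); // "Y intervals" (illustrative)

        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try
            {
                CallWebService(message); // hypothetical helper wrapping the web service call
                return;                  // success: let NSB complete the message
            }
            catch (WebException)
            {
                if (attempt == maxAttempts)
                    throw;               // give up: NSB's retries / error queue take over
                Thread.Sleep(delay);     // blocks this worker thread, so later messages wait
            }
        }
    }

    void CallWebService(MyEvent message)
    {
        // ... call the web service here ...
    }
}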
The easiest way to get close to the required behavior is the following:
Define one message handler that checks whether the service is available and, if not, calls HandleCurrentMessageLater, and a second handler that does the actual message processing. Then specify the message handler order so that the availability check runs first.
public interface ISomeCommand {}

public class ServiceAvailabilityChecker : IHandleMessages<ISomeCommand>
{
    public IBus Bus { get; set; }

    public void Handle(ISomeCommand message)
    {
        try
        {
            // check service
        }
        catch (SpecificException ex)
        {
            this.Bus.HandleCurrentMessageLater();
        }
    }
}

public class ActualHandler : IHandleMessages<ISomeCommand>
{
    public void Handle(ISomeCommand message)
    {
    }
}

public class SomeCommandHandlerOrdering : ISpecifyMessageHandlerOrdering
{
    public void SpecifyOrder(Order order)
    {
        order.Specify(First<ServiceAvailabilityChecker>.Then<ActualHandler>());
    }
}
With that design you gain the following:
You can check the availability before the actual business code is invoked
If the service is not available the message is put back into the queue
If the service is available when the check runs, but becomes unavailable just before the ActualHandler is invoked, you get First and Second Level Retries (and the availability check runs again in the pipeline)
In a client-server WCF duplex scenario, what is the recommended way to let the server know that an error occurred on the client side? Let's say the server notifies one of the clients that it needs to perform a certain operation, and an exception is thrown on the client side.
On the callback interface, I have something like this
[OperationContract(IsOneWay = true)]
void Work(...);
What's the best approach:
Implement a NotifyServer(int clientId, string message) operation that the client can call to let the server know that the requested operation failed,
If I set IsOneWay = false on the operation contract, would I have to call every client on a BackgroundWorker thread in order to keep the UI responsive?
Implementing async operations on the server? How will this work? I can generate async operations on the client; will I have to use the same pattern (BeginWork, EndWork) for the client callback method?
Can't think of anything else, because throwing a FaultException on the client side when IsOneWay = true will not work.
Any advice?
Thank you in advance!
Ad 1. That is one way of doing it... recommended if Work() may take an unpredictable amount of time and you do not want your server thread hanging on that call.
Ad 2. You should always perform WCF operations on a background worker thread and never on the UI thread. If you set IsOneWay = false, then obviously the Work() method will block on the server until it has finished executing on the remote client and returns its results. However, even if you set IsOneWay = true, the method can still block in the low-level WCF communication; if the WCF connection is dropped, it can take a long time before you get notified.
Ad 3.
The pattern is up to you.
Example: MSDN: OperationContractAttribute.AsyncPattern Property
No best solution exists. It all depends on your setup (classes, threads, etc.). The WCF layer you code should be easy and convenient to use - that is the main guideline.
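As a hedged sketch of option 1 (contract and type names here are assumptions, and the Work parameters are omitted): a one-way NotifyServer operation on the service contract, called from a catch block in the client's callback implementation.

// requires: using System; using System.ServiceModel;
[ServiceContract(CallbackContract = typeof(IMyClientCallback))]
public interface IMyService
{
    // Option 1: the client reports a failure back to the server without blocking.
    [OperationContract(IsOneWay = true)]
    void NotifyServer(int clientId, string message);
}

public interface IMyClientCallback
{
    [OperationContract(IsOneWay = true)]
    void Work();
}

// Client-side callback implementation: do the work, report failures back.
public class MyClientCallback : IMyClientCallback
{
    private readonly IMyService service; // duplex channel back to the server
    private readonly int clientId;

    public MyClientCallback(IMyService service, int clientId)
    {
        this.service = service;
        this.clientId = clientId;
    }

    public void Work()
    {
        try
        {
            // ... perform the requested operation ...
        }
        catch (Exception ex)
        {
            service.NotifyServer(clientId, ex.Message);
        }
    }
}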
The below text is an effort to expand and add color to this question:
How do I prevent a misbehaving client from taking down the entire service?
I have essentially this scenario: a WCF service is up and running with a client callback using straightforward, simple one-way communication, not very different from this one:
public interface IMyClientContract
{
    [OperationContract(IsOneWay = true)]
    void SomethingChanged(simpleObject myObj);
}
I'm calling this method potentially thousands of times a second from the service to what will eventually be about 50 concurrently connected clients, with as low latency as possible (<15 ms would be nice). This works fine until I set a breakpoint on one of the client apps connected to the server; after maybe 2-5 seconds the service hangs and none of the other clients receive any data for about 30 seconds or so, until the service registers a connection fault event and disconnects the offending client. After that, all the other clients continue on their merry way receiving messages.
I've done research on serviceThrottling, concurrency tweaking, setting ThreadPool minimum threads, WCF secret sauces and the whole nine yards, but at the end of the day the article MSDN - WCF Essentials: One-Way Calls, Callbacks and Events describes exactly the issue I'm having without really making a recommendation.
The third solution that allows the service to safely call back to the client is to have the callback contract operations configured as one-way operations. Doing so enables the service to call back even when concurrency is set to single-threaded, because there will not be any reply message to contend for the lock.
but earlier in the article it describes the issue I'm seeing, only from a client perspective
When one-way calls reach the service, they may not be dispatched all at once and may be queued up on the service side to be dispatched one at a time, all according to the service configured concurrency mode behavior and session mode. How many messages (whether one-way or request-reply) the service is willing to queue up is a product of the configured channel and the reliability mode. If the number of queued messages has exceeded the queue's capacity, then the client will block, even when issuing a one-way call
I can only assume that the reverse is true: the number of messages queued to the client has exceeded the queue capacity, and the thread pool is now filled with threads attempting to call this client, all of which are blocked.
What is the right way to handle this? Should I research a way to check how many messages are queued at the service communication layer per client and abort their connections after a certain limit is reached?
It almost seems that if the WCF service itself is blocking on a queue filling up then all the async / oneway / fire-and-forget strategies I could ever implement inside the service will still get blocked whenever one client's queue gets full.
I don't know much about client callbacks, but this sounds similar to generic WCF blocking issues. I often solve these problems by spawning a BackgroundWorker and performing the client call in that thread. Meanwhile, the main thread counts how long the child thread is taking; if the child has not finished within a few milliseconds, the main thread just moves on and abandons the thread (it eventually dies by itself, so there is no memory leak). This is basically what Mr. Graves suggests with the phrase "fire-and-forget".
Update:
I implemented a fire-and-forget setup to call the clients' callback channels, and the server no longer blocks once the buffer to a client fills up.
MyEvent is an event with a delegate that matches one of the methods defined in the WCF client contract; when clients connect, I'm essentially adding their callback to the event:
MyEvent += OperationContext.Current.GetCallbackChannel<IFancyClientContract>().SomethingChanged
etc... and then to send this data to all clients, I'm doing the following
// serialize using protobuf-net
using (var ms = new MemoryStream())
{
    ProtoBuf.Serializer.Serialize(ms, new SpecialDataTransferObject(inputData));
    byte[] data = ms.ToArray(); // ToArray() rather than GetBuffer(): GetBuffer() returns the whole internal buffer, including unused trailing bytes
    Parallel.ForEach(MyEvent.GetInvocationList(), p => ThreadUtil.FireAndForget(p, data));
}
In the ThreadUtil class, I made essentially the following change to the code defined in the fire-and-forget article:
static void InvokeWrappedDelegate(Delegate d, object[] args)
{
    try
    {
        d.DynamicInvoke(args);
    }
    catch (Exception ex)
    {
        // This will eventually throw once the client's WCF callback channel has filled up
        // and timed out, and it will throw once for every payload you ever tried to send
        // them, so do some smarter logging here!
        Console.WriteLine("Error calling client, attempting to disconnect.");
        try
        {
            // d.Target resolves to an IContextChannel object, kept in a dictionary of active
            // connections cross-referenced by hashcode for exactly this occasion.
            MyService.SingletonServiceController.TerminateClientChannelByHashcode(d.Target.GetHashCode());
        }
        catch (Exception ex2)
        {
            Console.WriteLine("Attempt to disconnect client failed: " + ex2.ToString());
        }
    }
}
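For context, a FireAndForget helper along these lines (a sketch under my own assumptions, not the article's exact code) would simply queue the wrapped invocation onto the thread pool so the broadcasting loop never waits on it:

// requires: using System; using System.Threading; (inside the same ThreadUtil class as InvokeWrappedDelegate)
public static void FireAndForget(Delegate d, params object[] args)
{
    // Queue the call and return immediately; InvokeWrappedDelegate (above)
    // swallows and logs any failure so nothing propagates back to the caller.
    ThreadPool.QueueUserWorkItem(_ => InvokeWrappedDelegate(d, args));
}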
I don't have any good ideas for how to go and kill all the pending calls the server is still waiting to deliver. Once I get the first exception, I should in theory be able to go and terminate all the other requests queued up somewhere, but this setup is functional and meets the objectives.
I imagine the following usage of a WCF service (for a cash acceptor):
Service Consumer 1:
cashAcceptorService.BeginTransaction();
cashAcceptorService.AcceptMoney();
cashAcceptorService.EndTransaction();

Service Consumer 2 (while Consumer 1's transaction is open):
cashAcceptorService.StopDevice(); // this should throw an exception: device is locked / used in a transaction
Service Consumers 1 and 2 use the same WCF singleton instance. I wonder if this functionality is already implemented. Do WCF transactions offer this?
How do you see this done?
If the following is true:
The service is interacting with a transactional object (e.g. the database)
The service has transaction flow enabled
Then WCF does indeed offer this.
The client can then use the TransactionScope class. Any transactions initiated on the client will flow through to the service automatically.
using (TransactionScope transactionScope = new TransactionScope())
{
    // Do stuff with the service here
    cashAcceptorService.AcceptMoney();

    transactionScope.Complete();
}
Handling transactions in WCF tends to be an entire chapter of a book, but this should be enough information to get you on the right track.
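As a hedged sketch of the service side (the contract and type names are assumptions, not from the question): the operation allows transaction flow and the implementation requires a transaction scope, so the client's flowed transaction becomes the ambient transaction inside the singleton service.

// requires: using System.ServiceModel;
[ServiceContract]
public interface ICashAcceptorService
{
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void AcceptMoney();
}

// Single instance shared by both consumers, as described in the question.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class CashAcceptorService : ICashAcceptorService
{
    // The client's flowed transaction becomes the ambient transaction here;
    // work done against transactional resources enlists in it automatically.
    [OperationBehavior(TransactionScopeRequired = true)]
    public void AcceptMoney()
    {
        // ... drive the device / update the transactional store ...
    }
}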
It is always better to understand the concept of distributed transactions. I recommend reading this article: http://www.codeproject.com/Articles/35087/Truly-Understanding-NET-Transactions-and-WCF-Imple
I have a WCF logging service that operates over MSMQ. Items are logged to a SQL Server 2005 database. Everything functions correctly when used outside a TransactionScope. When used inside a TransactionScope instance, the call always causes the transaction to be aborted, with the message "The transaction has aborted".
What do I need to do to get this call to work inside a transaction? Is it even possible? I've read that for a client transaction to flow across a service boundary the binding must support transaction flow, which immediately limits the bindings to NetNamedPipeBinding, NetTcpBinding, WSHttpBinding, WSDualHttpBinding and WSFederationHttpBinding.
I'm not intimately knowledgeable about MSMQ, but there's a really good blog post series by Tom Hollander on MSMQ, IIS and WCF: Getting them to play nicely - in part 3 (the link provided), Tom talks about getting transactional.
MSMQ can be transactional - or not. And in WCF, you can decorate both the service contract as well as individual operation contracts (methods) with transaction-related attributes, such as whether to allow, disallow, or require a transaction context.
As far as I understand, in your setup you don't want the MSMQ part to be transactional - but you should be able to use it even if an ambient transaction is present. In this case, you need to add the TransactionFlow attribute with TransactionFlowOption.Allowed to your operation contract, like this:
[ServiceContract]
public interface ILoggingService
{
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void LogData(......);
}
That should do it!
Marc
Sorry for the needless question...
I have solved my problem. I needed to place
[TransactionFlow(TransactionFlowOption.Allowed)]
on the operation in the service contract and then
[OperationBehavior(TransactionScopeRequired=true)]
on the implementation of the contract (the service itself).
Works a treat.
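For illustration, a minimal sketch of that pairing (the implementation class name is an assumption, and the LogData parameters are elided as in the contract above):

// requires: using System.ServiceModel;
public class LoggingService : ILoggingService
{
    // TransactionScopeRequired = true makes the flowed client transaction the
    // ambient transaction for this operation (or creates one if none was flowed).
    [OperationBehavior(TransactionScopeRequired = true)]
    public void LogData(/* ... */)
    {
        // ... write to SQL Server; the insert enlists in the ambient transaction ...
    }
}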