I have a WCF service hosted in a Windows service on a Windows Server 2003 machine, listening on an MSMQ queue. I set ReceiveRetryCount = 2 on the netMsmqBinding. The service was set up to use transactions ([OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]), and it was functioning great.
I needed to turn off the transactions because of a database call that couldn't support MSDTC, so I switched the service properties to
[OperationBehavior(TransactionScopeRequired = false)]
Now, when an exception or fault is thrown, no retry occurs and the fault handler for the service never fires; the original message ends up in the system DLQ. I would like the fault handler to handle the faults after two retries. Any ideas?
Switch things back to the way they were before.
Around the database call, add the following (code is done from memory - let me know if I need to fix this up a bit):
// using System.Transactions;
using (var ts = new TransactionScope(TransactionScopeOption.Suppress))
{
    // The suppressed scope means there is no ambient transaction here,
    // so the database call never tries to enlist with MSDTC.
    // Call DB stuff
    ts.Complete();
}
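Putting it together, the operation ends up looking something like this (a sketch; ProcessMessage, OrderMessage, and SaveToDatabase are placeholder names, not from the question):

[OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
public void ProcessMessage(OrderMessage msg)
{
    // The MSMQ receive enlists in the ambient transaction, so an unhandled
    // exception rolls the message back for redelivery (up to ReceiveRetryCount
    // times) and the fault path works again.
    using (var ts = new TransactionScope(TransactionScopeOption.Suppress))
    {
        SaveToDatabase(msg); // placeholder for the call that can't use MSDTC
        ts.Complete();
    }
}

One caveat: if the suppressed database call succeeds but the message transaction later rolls back, the database work is not undone, so that call may execute more than once across retries.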
Related
I created a WCF service contract that works against MSMQ. Since it is MSMQ, I use one-way communication:
[OperationContract(IsOneWay = true)]
In my service implementation, I have OperationBehavior to automatically commit transactions:
[OperationBehavior(TransactionAutoComplete = true, TransactionScopeRequired = true)]
This makes sure WCF handles any exceptions by putting the message on a retry queue (per my configuration). It seems odd to be throwing an exception from a one-way operation. Is this the correct way to tell WCF not to commit the transaction?
After some research, I found that throwing an exception is the only way to tell WCF to use the built-in retry sub-queue. This exception is swallowed by WCF. It is also used to make sure the transaction isn't auto-completed.
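For illustration, a minimal sketch of the pattern (the operation and the TryHandle step are hypothetical, not from the question):

[OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
public void ProcessMessage(MyMessage msg)
{
    if (!TryHandle(msg)) // hypothetical processing step
    {
        // Throwing prevents TransactionAutoComplete from committing; the receive
        // transaction rolls back and WCF moves the message to the retry subqueue.
        // Because the operation is one-way, the caller never sees this exception.
        throw new InvalidOperationException("Processing failed; roll back for retry.");
    }
}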
Problem
I have this strange problem. I am hosting a WCF server in a console app:
Console.WriteLine("Press 'q' to quit.");
var serviceHost = new ServiceHost(typeof(MessageService));
serviceHost.Open();
while (Console.ReadKey().KeyChar != 'q')
{
}
serviceHost.Close();
It exposes two endpoints, one for publish and one for subscribe (duplex binding).
When I stop or exit the console app, I never receive a channel faulted event at the client end. I would like the client to be informed if the server is down. Any idea what is going wrong here?
All I want is for either of the following events to be raised when the console app goes down:
msgsvc.InnerDuplexChannel.Faulted += InnerDuplexChannelOnFaulted;
msgsvc.InnerChannel.Faulted += InnerChannelOnFaulted;
From MSDN:
The duplex model does not automatically detect when a service or client closes its channel. So if a service unexpectedly terminates, by default the service will not be notified, or if a client unexpectedly terminates, the service will not be notified. Clients and services can implement their own protocol to notify each other if they so choose.
AFAIK the TCP channel is quite responsive to (persistent) connection problems, but you can use a callback to notify clients before the server becomes unavailable. From the client side you can use some dummy ping/poke method 'OnTimer' to get the actual connection state and keep the channel alive. It's a good point to recover the client proxy (reconnect), I suppose. If the service provides a metadata endpoint, you can also call svcutil for a trial connection or retrieve the metadata programmatically:
Uri mexAddress = new Uri("http://localhost:5000/myservice");
var mexClient = new MetadataExchangeClient(mexAddress, MetadataExchangeClientMode.HttpGet);
MetadataSet metadata = mexClient.GetMetadata();

// ImportAllEndpoints is defined on WsdlImporter, not on the MetadataImporter base class
WsdlImporter importer = new WsdlImporter(metadata);
ServiceEndpointCollection endpoints = importer.ImportAllEndpoints();

ContractDescription description = ContractDescription.GetContract(typeof(IMyContract));
bool contractSupported = endpoints.Any(endpoint =>
    endpoint.Contract.Namespace == description.Namespace &&
    endpoint.Contract.Name == description.Name);
or
ServiceEndpointCollection endpoints = MetadataResolver.Resolve(typeof(IMyContract), mexAddress, MetadataExchangeClientMode.HttpGet);
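And here's a minimal sketch of the 'OnTimer' ping idea mentioned above, assuming your contract exposes a trivial Ping() operation and you have some CreateProxy() helper for reconnecting (both are assumptions, not part of the question's code):

private void OnTimer(object state)
{
    try
    {
        // assumes 'proxy' was created via ChannelFactory<IMyContract>;
        // for a ClientBase-derived proxy, use proxy.InnerChannel instead
        var channel = (IClientChannel)proxy;
        if (channel.State == CommunicationState.Faulted)
        {
            channel.Abort();       // a faulted channel can only be aborted
            proxy = CreateProxy(); // reconnect with a fresh proxy
        }
        else
        {
            proxy.Ping(); // surfaces connection failures and keeps the channel alive
        }
    }
    catch (CommunicationException)
    {
        proxy = CreateProxy(); // recreate the proxy on failure
    }
}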
If you're the client and subscribe to all three of IClientChannel's events (Closing, Closed, and Faulted), you should see at the very least the Closing and Closed events when you close the service console app. I'm not sure why; if you're closing up correctly, you would think you'd see Faulted. I'm using net.tcp full duplex channels and I see these events just fine. Keep in mind this was in 2017, so things might have changed since then, but I rely on these events all the time in my code. For those people who say the duplex model doesn't see the closing of channels, I'm not understanding that.
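For reference, wiring up all three events looks something like this (proxy here stands for your ClientBase<> or DuplexClientBase<> derived client):

proxy.InnerChannel.Closing += (s, e) => Console.WriteLine("Channel closing");
proxy.InnerChannel.Closed  += (s, e) => Console.WriteLine("Channel closed");
proxy.InnerChannel.Faulted += (s, e) => Console.WriteLine("Channel faulted");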
So I have been tasked with setting up MSMQ so that if our mail server goes down (which it seems to do often), the messages just end up in the queue and will be delivered when it comes back up. With that said, I have to say I don't know much about this except what I have learned in the past 24 hours. However, I believe I know enough to take the right approach, but I wanted to ask the community because there is some confusion amongst my colleagues given some existing setup in our WCF application.
Currently we have some services that use MSMQ as the protocol for the endpoint. The endpoint looks like this:
<endpoint address="net.msmq://localhost/private/Publisher"
behaviorConfiguration="BatchBehaviour"
binding="netMsmqBinding"
bindingConfiguration="MSMQNoSecurity"
contract="HumanArc.Compass.Shared.Publisher.Interfaces.Service.IPublisherSubscriber"
name="PublishSubscriber"/>
This of course lets the client make a service call, and if for some reason the service wasn't up, it ensures that the call will be processed when the service comes back up. What I don't think it will do is retry when you have something like the following in your service method.
try
{
    smtp.Send(mail);
    return true;
}
catch (System.Net.Mail.SmtpFailedRecipientException ex)
{
    throw new Exception("User credentials for sending the email are invalid", ex);
}
catch (System.Net.Mail.SmtpException smtpEx)
{
    throw new Exception(string.Format("Application encountered a problem sending a mail message to {0}", smtpHostName), smtpEx);
}
WCF isn't going to retry and send the message again somehow, am I correct about this assumption?
What I think we should have is something that looks like the following in place of the call to smtp.Send() above (from http://www.bowu.org/it/microsoft/net/email-asp-net-mvc-msmq-2.html):
// using System.Messaging;
string queuePath = @".\private$\WebsiteEmails";
MessageQueue msgQ;

// if this queue doesn't exist we will create it
if (!MessageQueue.Exists(queuePath))
    MessageQueue.Create(queuePath);

msgQ = new MessageQueue(queuePath);
msgQ.Formatter = new BinaryMessageFormatter();
msgQ.Send(msg);
Then, somewhere in the startup of the service (I am not sure where yet), we set up an event handler that will actually call Send() on the SmtpClient object. Something like this:
msgQ.ReceiveCompleted += new ReceiveCompletedEventHandler(msgQ_ReceiveCompleted);
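Note that ReceiveCompleted only fires once an asynchronous receive has been started with BeginReceive(), and you have to call BeginReceive() again inside the handler to keep listening. A rough sketch (QueuedEmail and its conversion to a MailMessage are hypothetical; whatever type you queue must be serializable by the formatter you chose):

msgQ.BeginReceive(); // start listening; ReceiveCompleted fires when a message arrives

void msgQ_ReceiveCompleted(object sender, ReceiveCompletedEventArgs e)
{
    var queue = (MessageQueue)sender;
    var message = queue.EndReceive(e.AsyncResult);
    var email = (QueuedEmail)message.Body; // hypothetical [Serializable] DTO
    smtp.Send(email.ToMailMessage());      // hypothetical conversion helper
    queue.BeginReceive();                  // listen for the next message
}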
So to sum it all up, my first question is: which way is better? Create a service that uses net.msmq as the protocol, or just change the email method to put messages in the queue and set up a handler for it? The next question: if my assumption about changing the method that calls SmtpClient.Send() is correct, then where in the program should I wire up ReceiveCompleted? Our WCF service is hosted in a Windows service, meaning there is actually a call to ServiceBase.Run(servicesToRun). Is there a place I could wire it up there? My experience with WCF is with much simpler IIS-hosted services, so I am not 100% sure.
Thanks - I realize this is a long question, but I have been trying to research it and there is a lot of information; I can't seem to find a clear explanation of the benefits of doing things one way vs. another.
Your approach of using MSMQ to address availability in a downstream dependency (in this case your SMTP server) is valid. However, there are a couple of things you should understand about MSMQ first.
If you create a queue in MSMQ, then by default it is non-transactional. In this mode the queue will not provide the kind of guaranteed-delivery semantics you require, so create your queues as transactional.
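With System.Messaging this is just the second argument to MessageQueue.Create; adapting the snippet from the question:

// if this queue doesn't exist, create it as transactional
if (!MessageQueue.Exists(queuePath))
    MessageQueue.Create(queuePath, true); // true = create a transactional queue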
Then you can tell WCF that your service operation will enlist in the transaction when it receives a message for processing. You do this by defining a behavior on your service operation implementation:
[OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
public void SendEmail(Something mail)
{
    ....
    smtp.Send(mail);
}
TransactionScopeRequired tells WCF that the service operation should enlist in the same transaction used to transmit the message from sender to receiver. TransactionAutoComplete states that the service method should commit the transaction once the operation has successfully completed. So in answer to your query above, a failure in the service operation will cause the transaction to roll back.
What happens at this point depends on your service bindings configuration.
<netMsmqBinding>
  <binding name="netMsmqBinding_IMyServiceInterface"
           exactlyOnce="true"
           maxRetryCycles="3"
           retryCycleDelay="00:01:00"
           receiveErrorHandling="Move"> <!-- this defines behavior after failure -->
    ...
  </binding>
</netMsmqBinding>
When, for whatever reason, the transaction is not committed (for example, an unhandled exception occurs), WCF will roll the message back onto the queue and retry processing, once per minute, up to 3 times (defined by retryCycleDelay and maxRetryCycles).
If the message still fails processing after this, the receiveErrorHandling attribute tells WCF what to do next (the above binding specifies that the message be moved to the system poison message queue).
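Note that receiveErrorHandling="Move" relies on subqueue support in MSMQ 4.0 (Windows Vista / Server 2008 and later). The poison messages land in a subqueue of the application queue, which you can process yourself by pointing a service at the ;poison address; a sketch reusing the endpoint names from the question:

<endpoint address="net.msmq://localhost/private/Publisher;poison"
          binding="netMsmqBinding"
          bindingConfiguration="MSMQNoSecurity"
          contract="HumanArc.Compass.Shared.Publisher.Interfaces.Service.IPublisherSubscriber"
          name="PoisonHandler"/>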
Note: exactlyOnce tells WCF that we require transactions and that messages will be delivered exactly once, in the order they were sent.
So your original approach is in fact correct and you just need to configure your service correctly to implement the behavior you want.
The below text is an effort to expand and add color to this question:
How do I prevent a misbehaving client from taking down the entire service?
I have essentially this scenario: a WCF service is up and running with a client callback using straightforward, simple one-way communication, not very different from this one:
public interface IMyClientContract
{
[OperationContract(IsOneWay = true)]
void SomethingChanged(simpleObject myObj);
}
I'm calling this method potentially thousands of times a second from the service to what will eventually be about 50 concurrently connected clients, with as low latency as possible (<15 ms would be nice). This works fine until I set a breakpoint on one of the client apps connected to the server. Then, after maybe 2-5 seconds, the service hangs and none of the other clients receive any data for about 30 seconds or so, until the service registers a connection fault event and disconnects the offending client. After this, all the other clients continue on their merry way receiving messages.
I've done research on serviceThrottling, concurrency tweaking, setting thread pool minimum threads, WCF secret sauces, and the whole nine yards, but at the end of the day the article MSDN - WCF essentials, One-Way Calls, Callbacks and Events describes exactly the issue I'm having without really making a recommendation.
The third solution that allows the service to safely call back to the client is to have the callback contract operations configured as one-way operations. Doing so enables the service to call back even when concurrency is set to single-threaded, because there will not be any reply message to contend for the lock.
but earlier in the article it describes the issue I'm seeing, only from a client perspective
When one-way calls reach the service, they may not be dispatched all at once and may be queued up on the service side to be dispatched one at a time, all according to the service configured concurrency mode behavior and session mode. How many messages (whether one-way or request-reply) the service is willing to queue up is a product of the configured channel and the reliability mode. If the number of queued messages has exceeded the queue's capacity, then the client will block, even when issuing a one-way call
I can only assume that the reverse is true: the number of messages queued to the client has exceeded the queue's capacity, and the thread pool is now filled with threads attempting to call this client, all of which are now blocked.
What is the right way to handle this? Should I research a way to check how many messages are queued at the service communication layer per client and abort their connections after a certain limit is reached?
It almost seems that if the WCF service itself is blocking on a queue filling up, then all the async / one-way / fire-and-forget strategies I could ever implement inside the service will still get blocked whenever one client's queue gets full.
Don't know much about the client callbacks, but it sounds similar to generic WCF blocking issues. I often solve these problems by spawning a BackgroundWorker and performing the client call in that thread. During that time, the main thread counts how long the child thread is taking. If the child has not finished within a few milliseconds, the main thread just moves on and abandons the thread (it eventually dies by itself, so there's no memory leak). This is basically what Mr. Graves suggests with the phrase "fire-and-forget".
Update:
I implemented a fire-and-forget setup to call the client's callback channel, and the server no longer blocks once the client's buffer fills up.
MyEvent is an event with a delegate that matches one of the methods defined in the WCF client contract. When clients connect, I'm essentially adding their callback to the event:
MyEvent += OperationContext.Current.GetCallbackChannel<IFancyClientContract>().SomethingChanged;
etc... and then to send this data to all clients, I'm doing the following
// serialize using protobuf
using (var ms = new MemoryStream())
{
    ProtoBuf.Serializer.Serialize(ms, new SpecialDataTransferObject(inputData));
    // ToArray(), not GetBuffer(): GetBuffer() returns the whole internal
    // buffer, including unused trailing bytes
    byte[] data = ms.ToArray();
    Parallel.ForEach(MyEvent.GetInvocationList(), p => ThreadUtil.FireAndForget(p, data));
}
In the ThreadUtil class I made essentially the following change to the code defined in the fire-and-forget article:
static void InvokeWrappedDelegate(Delegate d, object[] args)
{
    try
    {
        d.DynamicInvoke(args);
    }
    catch (Exception ex)
    {
        // THIS will eventually throw once the client's WCF callback channel has
        // filled up and timed out, and it will throw once for every single time you
        // ever tried sending them a payload, so do some smarter logging here!!
        Console.WriteLine("Error calling client, attempting to disconnect.");
        try
        {
            // d.Target is an IContextChannel object, kept in a dictionary of active
            // connections, cross-referenced by hashcode just for this exact occasion
            MyService.SingletonServiceController.TerminateClientChannelByHashcode(d.Target.GetHashCode());
        }
        catch (Exception ex2)
        {
            Console.WriteLine("Attempt to disconnect client failed: " + ex2.ToString());
        }
    }
}
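For completeness, FireAndForget itself can be as simple as pushing the wrapped invocation onto the thread pool so the broadcast loop never waits on a slow channel (a minimal sketch, not necessarily what the article does):

static class ThreadUtil
{
    public static void FireAndForget(Delegate d, params object[] args)
    {
        // Hand the invocation to the thread pool and return immediately;
        // InvokeWrappedDelegate (above) catches and handles any failure.
        ThreadPool.QueueUserWorkItem(_ => InvokeWrappedDelegate(d, args));
    }
}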
I don't have any good ideas on how to go and kill all the pending packets the server is still waiting on to see if they'll get delivered. Once I get the first exception, I should in theory be able to go and terminate all the other requests in some queue somewhere, but this setup is functional and meets the objectives.
I'm currently trying to set up something like this:
A server-side Windows WCF service hangs out and listens via TCP for connections from a client-side Windows service.
When a connection is received (the client calls the CheckIn method on the service), the service obtains a callback channel via OperationContext.Current.GetCallbackChannel<T>.
This channel is stored in a collection along with a unique key (specifically, I store the callback interface, the channel, and the key in a List<ClientConnection> where each of those is a property).
Calls should now be able to be passed to that client service based on said unique key.
This works at first, but after a while it stops; I'm no longer able to pass calls to the client. I'm assuming it's because the connection has been dropped internally and I'm trying to work with a dead connection.
With that in mind, I've got the following questions:
How do I tell WCF I want to keep those TCP connections open indefinitely (or for as long as possible)?
How do I check, from the client side, whether or not my connection to the server is still valid, so I can drop it and check in with the server again if my connection is fried?
I can think of gimpy solutions, but I'm hoping someone here will tell me the RIGHT way.
When you establish the connection from the client, you should set two timeout values in your tcp binding (the binding that you will pass to ClientBase<> or DuplexClientBase<>):
NetTcpBinding binding = new NetTcpBinding();
binding.ReceiveTimeout = TimeSpan.FromHours(20f);
binding.ReliableSession.InactivityTimeout = TimeSpan.FromHours(20f);
My sample uses 20 hours for the timeout; you can use whatever value makes sense for you. WCF will then attempt to keep your client and server connected for this period of time. The defaults are relatively brief (10 minutes for both), which could explain why your connection is dropped.
Whenever there is a communication problem between the client and server (including WCF itself dropping the channel), WCF will raise a Faulted event in the client, which you can handle to do whatever you feel is appropriate. In my project, I cast my DuplexClientBase<> derived object to ICommunicationObject to get hold of the Faulted event and forward it to an event called OnFaulted exposed in my class:
ICommunicationObject communicationObject = this as ICommunicationObject;
communicationObject.Faulted +=
new EventHandler((sender, e) => { OnFaulted(sender, e); });
In the above code snippet, this is an instance of my WCF client type, which in my case is derived from DuplexClientBase<>. What you do in this event is specific to your application. In my case, the application is a non-critical UI, so if there is a WCF fault I simply display a message box to the end-user and shut down the app - it'd be a nice world if it were always this easy!
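As a usage example, a subscriber to that OnFaulted event in a WinForms app might look like this (mainForm and the shutdown behavior are just illustrations of the approach described above; Faulted can fire on a non-UI thread, hence the BeginInvoke):

private void HandleServiceFault(object sender, EventArgs e)
{
    // Marshal to the UI thread before touching any UI.
    mainForm.BeginInvoke((Action)(() =>
    {
        MessageBox.Show("Lost connection to the service. The application will now close.");
        Application.Exit();
    }));
}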