Problem
I have this strange problem. I am hosting a WCF server in a console app:
Console.WriteLine("Press 'q' to quit.");
var serviceHost = new ServiceHost(typeof(MessageService));
serviceHost.Open();
while (Console.ReadKey().KeyChar != 'q')
{
}
serviceHost.Close();
It exposes two endpoints for publish and subscribe (duplex binding).
When I stop or exit the console app, I never receive a channel faulted event at the client end. I would like the client to be informed when the server goes down. Any idea what is going wrong here?
All I want is either of the following events to be raised when the console app goes down:
msgsvc.InnerDuplexChannel.Faulted += InnerDuplexChannelOnFaulted;
msgsvc.InnerChannel.Faulted += InnerChannelOnFaulted;
From MSDN:
The duplex model does not automatically detect when a service or client closes its channel. So if a service unexpectedly terminates, by default the service will not be notified, or if a client unexpectedly terminates, the service will not be notified. Clients and services can implement their own protocol to notify each other if they so choose.
AFAIK the TCP channel is quite responsive to (persistent) connection problems, but you can also use a callback to notify clients before the server becomes unavailable. From the client side you can use a dummy ping/poke method on a timer to get the actual connection state and keep the channel alive; that's also a good point to recover (reconnect) the client proxy, I suppose (a rough timer sketch appears after the metadata snippets below). If the service provides a metadata endpoint, you can also run svcutil as a trial connection, or retrieve the metadata programmatically:
Uri mexAddress = new Uri("http://localhost:5000/myservice");
var mexClient = new MetadataExchangeClient(mexAddress, MetadataExchangeClientMode.HttpGet);
MetadataSet metadata = mexClient.GetMetadata();
MetadataImporter importer = new WsdlImporter(metadata);
ServiceEndpointCollection endpoints = importer.ImportAllEndpoints();
ContractDescription description = ContractDescription.GetContract(typeof(IMyContract));
bool contractSupported = endpoints.Any(endpoint =>
endpoint.Contract.Namespace == description.Namespace &&
endpoint.Contract.Name == description.Name);
or
ServiceEndpointCollection endpoints = MetadataResolver.Resolve(typeof(IMyContract), mexAddress, MetadataExchangeClientMode.HttpGet);
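As a rough illustration of the timer-based ping mentioned above (a sketch only: Ping() is a hypothetical operation you would add to your service contract, _proxy is your duplex proxy, and RecreateProxy() stands in for whatever reconnect logic you use):

// using System.Timers;
private readonly Timer _pingTimer = new Timer(5000); // poke the service every 5 seconds

private void StartPing()
{
    _pingTimer.Elapsed += (s, e) =>
    {
        try
        {
            _proxy.Ping(); // dummy call just to exercise the channel / keep it alive
        }
        catch (Exception ex) when (ex is CommunicationException || ex is TimeoutException)
        {
            ((ICommunicationObject)_proxy).Abort(); // the channel is dead, don't try to Close it
            _proxy = RecreateProxy();               // reconnect with a fresh channel
        }
    };
    _pingTimer.Start();
}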
If you're the client and you subscribe to all three of IClientChannel's events, Closing, Closed, and Faulted, you should see at the very least the Closing and Closed events when you close the service console app. I'm not sure why; if you're closing up correctly, you would think you'd see Faulted. I'm using net.tcp full duplex channels and I see these events just fine. Keep in mind this was in 2017, so things might have changed since then, but I rely on these events all the time in my code. As for those who say the duplex model doesn't see channels closing, I don't understand that.
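For reference, wiring up all three notifications on the proxy's channel looks roughly like this (a sketch only, reusing the msgsvc proxy from the question):

// Subscribe to Closing, Closed and Faulted on the client channel.
ICommunicationObject channel = msgsvc.InnerChannel;
channel.Closing += (s, e) => Console.WriteLine("Channel closing");
channel.Closed += (s, e) => Console.WriteLine("Channel closed");
channel.Faulted += (s, e) => Console.WriteLine("Channel faulted");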
Related
We have a (multiple) clients to (one) server architecture for a desktop poker game. We use callback notifications over callback channels.
But sometimes, because of internet connection drops, a particular client gets disconnected from the server, that client's WCF channel goes to the faulted state, and its callback channel held on the server is faulted as well.
Scenario:
A client is playing a game when his/her internet connection drops; the game stops but the game window stays open. When the connection comes back, the client has already been dropped from the server, yet the game window is still open and the player can't do anything because his/her WCF channel is gone.
We want to close that particular client's window when he/she is dropped from the server, throwing a CommunicationObjectAbortedException.
We can't use the previous WCF channel's callback channel as it's in the faulted state.
So we have tried to create a new callback channel on the server at the moment the client drops, using the code below:
OperationContext.Current.GetCallbackChannel<IMyClientCallback>(); // IMyClientCallback stands in for the callback contract type
But here Current is null because that player's WCF channel has been aborted, so it throws "Object reference not set to an instance of an object".
So is there any solution to use the aborted WCF channel's callback channel, or to recover that WCF channel without reinitializing it, or to call that client over a new channel?
I'd try the following:
On the server side, when you try to communicate over a faulted/aborted channel, the call will fail.
Catch that failure and remove the callback from the list (I suppose you manage some list of callbacks; see the sketch after this answer).
On the client side, when the channel's Faulted (or Closed) event is handled, try to open a new channel to the server. Once this new channel is open, the server should put the new callback back into its "valid callbacks" list.
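A minimal sketch of that server-side bookkeeping (assumptions: _callbacks is your own list of registered callback channels, and IMyClientCallback / OnNotification are placeholders for your callback contract and one of its operations):

// Broadcast to registered callbacks and drop the dead ones.
private readonly List<IMyClientCallback> _callbacks = new List<IMyClientCallback>();

public void NotifyAll(string message)
{
    foreach (var callback in _callbacks.ToArray()) // copy so we can remove while iterating
    {
        try
        {
            callback.OnNotification(message); // hypothetical callback operation
        }
        catch (Exception ex) when (ex is CommunicationException || ex is TimeoutException)
        {
            _callbacks.Remove(callback); // faulted/aborted channel: the client must reconnect and re-register
        }
    }
}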
The below text is an effort to expand and add color to this question:
How do I prevent a misbehaving client from taking down the entire service?
I have essentially this scenario: a WCF service is up and running with a client callback using straightforward, simple one-way communication, not very different from this one:
public interface IMyClientContract
{
[OperationContract(IsOneWay = true)]
void SomethingChanged(simpleObject myObj);
}
I'm calling this method potentially thousands of times a second from the service to what will eventually be about 50 concurrently connected clients, with as low latency as possible (<15 ms would be nice). This works fine until I set a breakpoint on one of the client apps connected to the server. After maybe 2-5 seconds the service hangs, and none of the other clients receive any data for about 30 seconds or so, until the service registers a connection fault event and disconnects the offending client. After this all the other clients continue on their merry way receiving messages.
I've done research on serviceThrottling, concurrency tweaking, setting threadpool minimum threads, WCF secret sauces and the whole 9 yards, but at the end of the day this article MSDN - WCF essentials, One-Way Calls, Callbacks and Events describes exactly the issue I'm having without really making a recommendation.
The third solution that allows the service to safely call back to the client is to have the callback contract operations configured as one-way operations. Doing so enables the service to call back even when concurrency is set to single-threaded, because there will not be any reply message to contend for the lock.
but earlier in the article it describes the issue I'm seeing, only from a client perspective
When one-way calls reach the service, they may not be dispatched all at once and may be queued up on the service side to be dispatched one at a time, all according to the service configured concurrency mode behavior and session mode. How many messages (whether one-way or request-reply) the service is willing to queue up is a product of the configured channel and the reliability mode. If the number of queued messages has exceeded the queue's capacity, then the client will block, even when issuing a one-way call
I can only assume that the reverse is true, the number of queued messages to the client has exceeded the queue capacity and the threadpool is now filled with threads attempting to call this client that are now all blocked.
What is the right way to handle this? Should I research a way to check how many messages are queued at the service communication layer per client and abort their connections after a certain limit is reached?
It almost seems that if the WCF service itself is blocking on a queue filling up then all the async / oneway / fire-and-forget strategies I could ever implement inside the service will still get blocked whenever one client's queue gets full.
I don't know much about client callbacks, but this sounds similar to generic WCF blocking issues. I often solve these problems by spawning a BackgroundWorker and performing the client call in that thread. Meanwhile, the main thread counts how long the child thread is taking. If the child has not finished in a few milliseconds, the main thread just moves on and abandons the thread (it eventually dies by itself, so there is no memory leak). This is basically what Mr.Graves suggests with the phrase "fire-and-forget".
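A rough sketch of that pattern, here with a Task rather than a BackgroundWorker (same idea: make the callback on another thread, wait only a few milliseconds, then move on; clientCallback, myObj and the 15 ms value are placeholders):

// using System.Threading.Tasks;
var callTask = Task.Run(() => clientCallback.SomethingChanged(myObj));
try
{
    // Wait only briefly; if the client is stuck (e.g. sitting on a breakpoint), abandon the call.
    if (!callTask.Wait(TimeSpan.FromMilliseconds(15)))
    {
        // The call is still running in the background; optionally flag this client for disconnection.
    }
}
catch (AggregateException)
{
    // The callback threw within the timeout window (e.g. a faulted channel); treat it as a dead client.
}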
Update:
I implemented a fire-and-forget setup to call the clients' callback channels, and the server no longer blocks once the buffer to a client fills up.
MyEvent is an event with a delegate that matches one of the methods defined in the WCF client contract; when clients connect, I'm essentially adding their callback to the event:
MyEvent += OperationContext.Current.GetCallbackChannel<IFancyClientContract>().SomethingChanged;
etc... and then to send this data to all clients, I'm doing the following
// serialize using protobuf-net
using (var ms = new MemoryStream())
{
    ProtoBuf.Serializer.Serialize(ms, new SpecialDataTransferObject(inputData));
    byte[] data = ms.ToArray(); // ToArray() rather than GetBuffer(), so we don't send the buffer's unused tail
    Parallel.ForEach(MyEvent.GetInvocationList(), p => ThreadUtil.FireAndForget(p, data));
}
In the ThreadUtil class I made essentially the following change to the code defined in the fire-and-forget article:
static void InvokeWrappedDelegate(Delegate d, object[] args)
{
    try
    {
        d.DynamicInvoke(args);
    }
    catch (Exception ex)
    {
        // This will eventually throw once the client's WCF callback channel has filled up and timed out,
        // and it will throw once for every payload you ever tried to send them, so do some smarter logging here!
        Console.WriteLine("Error calling client, attempting to disconnect: " + ex.Message);
        try
        {
            // TerminateClientChannelByHashcode aborts an IContextChannel object kept in a dictionary of
            // active connections, cross-referenced by hash code just for this exact occasion.
            MyService.SingletonServiceController.TerminateClientChannelByHashcode(d.Target.GetHashCode());
        }
        catch (Exception ex2)
        {
            Console.WriteLine("Attempt to disconnect client failed: " + ex2.ToString());
        }
    }
}
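For completeness, a sketch of what the FireAndForget side of ThreadUtil might look like; this is an assumption on my part, since the original helper comes from the referenced fire-and-forget article and isn't shown here:

// using System.Threading;
public static void FireAndForget(Delegate d, params object[] args)
{
    // Queue the wrapped invocation onto the thread pool and return immediately.
    ThreadPool.QueueUserWorkItem(_ => InvokeWrappedDelegate(d, args));
}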
I don't have any good ideas on how to kill all the pending calls the server is still waiting to deliver. Once I get the first exception I should, in theory, be able to go and terminate all the other requests queued up somewhere, but this setup is functional and meets the objectives.
I have a WCF service that I am hosting within a Windows service on a Windows 2003 server, listening on an MSMQ queue. I set ReceiveRetryCount = 2 on the netMsmqBinding. The service was set up to use transactions ([OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]). The service was functioning great.
I needed to turn off the transactions due to a database call that couldn't support MSDTC, so I switched the service properties to
[OperationBehavior(TransactionScopeRequired = false)]
Now, when an exception or fault is thrown, no retry occurs and the fault handler for the service never fires. The original message ends up in the system DLQ. I would like the fault handler to handle the faults after two retries. Any ideas?
Switch things back to the way they were before.
Around the database call, add the following (code is done from memory; let me know if I need to fix this up a bit):
// using System.Transactions;
using( var ts = new TransactionScope( TransactionScopeOption.Suppress ) )
{
// Call DB stuff
ts.Complete();
}
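Putting the two pieces together, the service operation keeps its original transactional behavior while the non-DTC database call sits inside the suppressed scope. A sketch (ProcessMessage, MyMessage and CallDatabase are placeholders for your own operation and helper):

[OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
public void ProcessMessage(MyMessage message)
{
    // ...work that should stay inside the queue transaction...

    using (var ts = new TransactionScope(TransactionScopeOption.Suppress))
    {
        CallDatabase(message); // the call that can't enlist in MSDTC (placeholder)
        ts.Complete();
    }
}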
I'm currently trying to set up something like this:
A server-side Windows WCF service hangs out and listens via TCP for connections from a client-side Windows service.
When a connection is received (the client calls the CheckIn method on the service), the service obtains a callback channel via OperationContext.Current.GetCallbackChannel<T>.
This channel is stored in a collection along with a unique key (specifically, I store the callback interface, the channel, and the key in a List<ClientConnection> where each of those is a property).
Calls should now be able to be passed to that client service based on said unique key.
This works at first, but after a while it stops: I'm no longer able to pass calls to the client. I'm assuming it's because the connection has been dropped internally and I'm trying to work with a dead connection.
With that in mind, I've got the following questions:
How do I tell WCF I want to keep those TCP connections indefinitely (or for as long as possible)?
How do I check, from the client side, whether or not my connection to the server is still valid, so I can drop it and check in with the server again if my connection is fried?
I can think of gimpy solutions, but I'm hoping someone here will tell me the RIGHT way.
When you establish the connection from the client, you should set two timeout values in your tcp binding (the binding that you will pass to ClientBase<> or DuplexClientBase<>):
NetTcpBinding binding = new NetTcpBinding();
binding.ReceiveTimeout = TimeSpan.FromHours(20f);
binding.ReliableSession.InactivityTimeout = TimeSpan.FromHours(20f);
My sample uses 20 hours for the timeout; you can use whatever value makes sense for you. WCF will then attempt to keep your client and server connected for that period of time. The default is relatively brief (10 minutes) and could explain why your connection is dropped.
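For illustration, a sketch of handing the binding to a DuplexClientBase<>-derived proxy (IMyContract, CheckIn and callbackObject are placeholders; CheckIn is borrowed from the question above):

public class MyServiceClient : DuplexClientBase<IMyContract>, IMyContract
{
    public MyServiceClient(InstanceContext callbackContext, NetTcpBinding binding, EndpointAddress address)
        : base(callbackContext, binding, address)
    {
    }

    // Forward a contract operation to the inner channel.
    public void CheckIn()
    {
        Channel.CheckIn();
    }
}

// Usage: the binding configured above, plus an InstanceContext wrapping your callback object.
var client = new MyServiceClient(new InstanceContext(callbackObject), binding,
    new EndpointAddress("net.tcp://localhost:5000/myservice"));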
Whenever there is a communication problem between the client and server (including WCF itself dropping the channel), WCF will raise a Faulted event in the client, which you can handle to do whatever you feel appropriate. In my project, I cast my DuplexClientBase<> derived object to ICommunicationObject to get a hold of the Faulted event and forward it to an event called OnFaulted exposed in my class:
ICommunicationObject communicationObject = this as ICommunicationObject;
communicationObject.Faulted +=
new EventHandler((sender, e) => { OnFaulted(sender, e); });
In the above code snippet, this is an instance of my WCF client type, which in my case is derived from DuplexClientBase<>. What you do in this event is specific to your application. In my case, the application is a non-critical UI, so if there is a WCF fault I simply display a message box to the end-user and shut down the app - it'd be a nice world if it were always this easy!
I have a WCF self-hosted service with a net.tcp DuplexChannel. On the server I run the following to disconnect a client:
((ICommunicationObject)client.CallbackChannel).Close();
This works fine but how do I detect on the client that it has been disconnected?
I've hooked up to the Closed and Faulted events on both the InstanceContext of the callback and the channel to the server:
InstanceContext callback = new InstanceContext(callbackImp);
callback.Closed += new EventHandler(callback_Closed);
and
((ICommunicationObject)Channel).Closed += new EventHandler(Channel_Closed);
But nothing works; I never get notified. The workaround I'm using now is to have a method in the callback that triggers a disconnect from the client side instead, but I'd rather not do it this way. I especially don't want to let the server wait for a user to disconnect.
EDIT
I just realized that when disconnecting from the client side I call a method on the service contract which is marked with IsTerminating = true:
[OperationContract(IsTerminating = true)]
void Disconnect();
I figured it would be the same on the callback contract then? I tried adding the same method to my callback contract and it did terminate the callback channel from the server's point of view, but I still didn't get notified on the client side... weird.
EDIT
I found out some more info about this:
When the server aborts the callback channel, a fault travels back to the client, the client faults and we get the Faulted event on the client. When the server closes the callback channel, the session is still open until the client issues the close. Once the client closes the channel you'll see the Closed event.
According to this statement the Closed event is not triggered merely by closing the callback channel from the server; the client has to close it as well. So I could call Close on the client in the terminating Disconnect method of the callback. Or I could use the Abort method on the callback server-side and skip the Disconnect method on the callback altogether. I don't know which one I prefer, honestly. Hmmmm.
EDIT
I went with the Abort approach. It seemed like the most logical method and it works really well. The client gets notified via the Faulted event on the callback InstanceContext. Nice.
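For reference, the server-side disconnect then becomes a single Abort on the stored callback channel instead of the Close shown earlier:

// Abort the callback channel server-side; the client sees Faulted on its callback InstanceContext.
((ICommunicationObject)client.CallbackChannel).Abort();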
You can simply make a callback just before closing the callback channel, telling the client you're closing the channel.
So just before this line of code:
((ICommunicationObject)client.CallbackChannel).Close();
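You would first invoke a notification operation on the callback contract, roughly like this (a sketch; OnServerDisconnecting is a hypothetical one-way operation you would add to the callback contract):

client.CallbackChannel.OnServerDisconnecting(); // hypothetical "I'm about to close you" notification
((ICommunicationObject)client.CallbackChannel).Close();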