Memory leak using WCF GetCallbackChannel over named pipe

We have a simple WPF application that connects to a service running on the local machine. We use a named pipe for the connection and then register a callback so that the service can later send updates to the client.
The problem is that with each invocation of the callback we get a build-up of memory in the client application.
This is how the client connects to the service.
const string url = "net.pipe://localhost/radal";
_channelFactory = new DuplexChannelFactory<IRadalService>(this, new NetNamedPipeBinding(), url);
and then in a threadpool thread we loop doing the following until we are connected
var service = _channelFactory.CreateChannel();
service.Register();
service.Register looks like this on the server side
public void Register()
{
    _callback = OperationContext.Current.GetCallbackChannel<IRadalCallback>();
    OperationContext.Current.Channel.Faulted += (sender, args) => Dispose();
    OperationContext.Current.Channel.Closed += (sender, args) => Dispose();
}
This callback is stored and when new data arrives we invoke the following on the server side.
void Sensors_OnSensorReading(object sender, SensorReadingEventArgs e)
{
    _callback.OnReadingReceived(e.SensorId, e.Count);
}
Where the parameters are an int and a double. On the client this is handled as follows.
public void OnReadingReceived(int sensorId, double count)
{
    _events.Publish(new SensorReadingEvent(sensorId, count));
}
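For reference, the duplex contracts implied by the snippets above are shaped roughly like this (a simplified sketch; the IsOneWay setting is an assumption, not necessarily what we use):
[ServiceContract(CallbackContract = typeof(IRadalCallback))]
public interface IRadalService
{
    [OperationContract]
    void Register();
}

public interface IRadalCallback
{
    [OperationContract(IsOneWay = true)] // assumed; fire-and-forget callback
    void OnReadingReceived(int sensorId, double count);
}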
But we have found that commenting out the _events.Publish(...) call makes no difference to the memory usage. Does anyone see any logical reason why this might be leaking memory? We have used a profiler to track the problem to this point, but cannot find what type of object is building up.

Well, I can partially answer this now. The problem is partly caused by us trying to be clever: opening the connection on another thread and then passing it back to the main GUI thread. The solution was to not use a thread but instead use a DispatcherTimer. It does have the downside that the initial data load now happens on the GUI thread, but we are not loading all that much anyway.
However, this was not the entire solution (actually we don't have an entire solution). Once we moved over to a better profiler we found out that the objects building up were timeout handlers, so we disabled that feature. That's OK for us as we are always running against localhost, but I can imagine it would be an issue for people working with remote services.
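Roughly, the timer-based connect loop now looks like this (a simplified sketch; the names and interval are illustrative):
// runs on the GUI thread via System.Windows.Threading.DispatcherTimer
var connectTimer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(1) };
connectTimer.Tick += (s, e) =>
{
    try
    {
        _service = _channelFactory.CreateChannel();
        _service.Register();
        connectTimer.Stop(); // connected, stop retrying
    }
    catch (CommunicationException)
    {
        // service not up yet, try again on the next tick
    }
};
connectTimer.Start();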

Related

Inferior behavior of XSockets under Mono compared to MS.NET

Given
I am using XSockets 3.0.6, which I think is the latest stable version. Under MS.NET the behavior is as expected. On Ubuntu 14.04 with Mono 3.6.1, though, the server delays before sending messages to clients.
Problem
On MS.NET, when I type a string in the client and send it, all clients are immediately notified. On Mono, though, the message is received by the server but the clients are not notified immediately. With only 1 message I waited for 5 minutes and the clients were still not notified. Once 5-6 messages have been sent, all clients are notified about all of them at once. It seems like the server uses some kind of buffering, but conditionally, depending on the .NET runtime, which is very strange.
Question
Am I doing something wrong? How to change the code so that all clients are immediately notified as in MS.NET?
Code
I followed (and slightly modified) the quick start example as follows...
Server
Initialization
using (var container = Composable.GetExport<IXSocketServerContainer>())
{
    container.StartServers();
    foreach (var server in container.Servers)
    {
        Console.WriteLine(server.ConfigurationSetting.Endpoint);
    }
    Console.Write("Started! Hit 'Enter' to quit.");
    Console.ReadLine();
    container.StopServers();
}
Custom controller
public class CustomController : XSocketController
{
    public override void OnMessage(ITextArgs textArgs)
    {
        Console.WriteLine("No delay = {0}", this.Socket.Socket.NoDelay);
        if (!this.Socket.Socket.NoDelay)
        {
            Socket.Socket.NoDelay = true;
        }
        Console.WriteLine("Received {0} about {1}.", textArgs.data, textArgs.@event);
        this.SendToAll(textArgs);
    }
}
Client
var client = new XSocketClient("ws://127.0.0.1:4502/CustomController", "*");
client.OnOpen += (sender, eventArgs) => System.Console.WriteLine("OPEN");
client.Bind("foo", message => System.Console.WriteLine(message.data));
Thread.Sleep(1000);
client.Open();
string input;
System.Console.WriteLine("Type 'quit' to quit and any other string to send a message:");
do
{
    input = System.Console.ReadLine();
    if (input != "quit")
    {
        client.Send(input, "foo");
    }
} while (input != "quit");
I experienced this myself when running XSockets on a Raspberry Pi.
After some investigation I realized it had to do with the fact that the Pi is single-core and that the internal queue did not send messages out until 5 messages had been queued up... then all of them were sent at once.
How many cores does your computer have?
This issue is resolved in 4.0 (in alpha right now).
Edit: I have only had this issue on single-core machines with Mono; on my MacBook Air everything works great on Mono.
It looks like the Nagle algorithm is not disabled in XSockets. In System.Net.Sockets you can disable the Nagle algorithm by setting the Socket.NoDelay property to true.
I'm not familiar with XSockets, but if you can get the underlying System.Net.Sockets.Socket from XSockets, you can set this property to true and avoid the sending delay.
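For a plain System.Net.Sockets socket the change looks like this (a general sketch of disabling Nagle, not XSockets-specific code; the endpoint is just the port from the question's client):
using System.Net;
using System.Net.Sockets;
using System.Text;

var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
socket.NoDelay = true; // disable the Nagle algorithm so small writes go out immediately
socket.Connect(new IPEndPoint(IPAddress.Loopback, 4502));
socket.Send(Encoding.UTF8.GetBytes("hello"));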

Is there an easy way to subscribe to the default error queue in EasyNetQ?

In my test application I can see messages that were processed with an exception being automatically inserted into the default EasyNetQ_Default_Error_Queue, which is great. I can then successfully dump or requeue these messages using the Hosepipe, which also works fine, but requires dropping down to the command line and calling against both Hosepipe and the RabbitMQ API to purge the queue of retried messages.
So I'm thinking the easiest approach for my application is to simply subscribe to the error queue, so I can re-process the messages using the same infrastructure. But in EasyNetQ the error queue seems to be special. We need to subscribe using a proper type and routing ID, so I'm not sure what these values should be for the error queue:
bus.Subscribe<WhatShouldThisBe>("and-this", ReprocessErrorMessage);
Can I use the simple API to subscribe to the error queue, or do I need to dig into the advanced API?
If the type of my original message was TestMessage, then I'd like to be able to do something like this:
bus.Subscribe<ErrorMessage<TestMessage>>("???", ReprocessErrorMessage);
where ErrorMessage is a class provided by EasyNetQ to wrap all errors. Is this possible?
You can't use the simple API to subscribe to the error queue because it doesn't follow EasyNetQ queue type naming conventions - maybe that's something that should be fixed ;)
But the Advanced API works fine. You won't get the original message back, but it's easy to get the JSON representation which you could de-serialize yourself quite easily (using Newtonsoft.JSON). Here's an example of what your subscription code should look like:
[Test]
[Explicit("Requires a RabbitMQ server on localhost")]
public void Should_be_able_to_subscribe_to_error_messages()
{
    var errorQueueName = new Conventions().ErrorQueueNamingConvention();
    var queue = Queue.DeclareDurable(errorQueueName);
    var autoResetEvent = new AutoResetEvent(false);

    bus.Advanced.Subscribe<SystemMessages.Error>(queue, (message, info) =>
    {
        var error = message.Body;

        Console.Out.WriteLine("error.DateTime = {0}", error.DateTime);
        Console.Out.WriteLine("error.Exception = {0}", error.Exception);
        Console.Out.WriteLine("error.Message = {0}", error.Message);
        Console.Out.WriteLine("error.RoutingKey = {0}", error.RoutingKey);

        autoResetEvent.Set();
        return Task.Factory.StartNew(() => { });
    });

    autoResetEvent.WaitOne(1000);
}
I had to fix a small bug in the error message writing code in EasyNetQ before this worked, so please get a version >= 0.9.2.73 before trying it out. You can see the code example here
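If you do want the original TestMessage back, you could deserialize the stored body yourself; a minimal sketch, assuming the error's Message property holds the original JSON:
var error = message.Body;
// error.Message contains the JSON body of the message that failed
var original = JsonConvert.DeserializeObject<TestMessage>(error.Message);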
Code that works (I took a guess):
The screwiness with the 'foo' variable is because if I just pass the HandleErrorMessage2 method directly into the Consume call, the compiler can't figure out that it returns void rather than a Task, so it can't pick an overload (VS 2012). Assigning it to a variable first makes it happy.
You will want to capture the return value of the Consume call so you can unsubscribe later by disposing it (see the sketch after the code below).
Also note that the library authors used the name Queue (which collides with a System type) instead of something like EasyNetQueue, so you have to add a using alias for the compiler or fully qualify it.
using Queue = EasyNetQ.Topology.Queue;
private const string QueueName = "EasyNetQ_Default_Error_Queue";
public static void Should_be_able_to_subscribe_to_error_messages(IBus bus)
{
    Action<IMessage<Error>, MessageReceivedInfo> foo = HandleErrorMessage2;
    IQueue queue = new Queue(QueueName, false);
    bus.Advanced.Consume<Error>(queue, foo);
}

private static void HandleErrorMessage2(IMessage<Error> msg, MessageReceivedInfo info)
{
}
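For the unsubscribe part mentioned above, a rough sketch (assuming the Consume call returns a disposable handle, as it did in the version I used):
IDisposable consumer = bus.Advanced.Consume<Error>(queue, foo);
// ... later, when you no longer want to read from the error queue:
consumer.Dispose();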

WCF Streaming - who closes the file?

According to Microsoft's samples, here's how one would go about streaming a file through WCF:
// Service class which implements the service contract
public class StreamingService : IStreamingSample
{
    public System.IO.Stream GetStream(string data)
    {
        // this file path assumes the image is in
        // the Service folder and the service is executing
        // in service/bin
        string filePath = Path.Combine(
            System.Environment.CurrentDirectory,
            ".\\image.jpg");

        // open the file, this could throw an exception
        // (e.g. if the file is not found)
        // having includeExceptionDetailInFaults="True" in config
        // would cause this exception to be returned to the client
        try
        {
            FileStream imageFile = File.OpenRead(filePath);
            return imageFile;
        }
        catch (IOException ex)
        {
            Console.WriteLine(
                String.Format("An exception was thrown while trying to open file {0}", filePath));
            Console.WriteLine("Exception is: ");
            Console.WriteLine(ex.ToString());
            throw ex;
        }
    }
...
Now, how do I know who's responsible for releasing the FileStream when the transfer is done?
EDIT: If the code is put inside a "using" block the stream gets shut down before the client receives anything.
The service should clean up, not the client. WCF's default for OperationBehaviorAttribute.AutoDisposeParameters seems to be true, so it should do the disposing for you, although there doesn't seem to be a definitive answer on this.
You could try using the OperationContext.OperationCompleted Event:
OperationContext clientContext = OperationContext.Current;
clientContext.OperationCompleted += new EventHandler(delegate(object sender, EventArgs args)
{
    if (fileStream != null)
        fileStream.Dispose();
});
Put that before your return.
Check this blog
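Putting that together, the service method would look roughly like this (a sketch based on the sample above, with the error handling stripped out):
public System.IO.Stream GetStream(string data)
{
    string filePath = Path.Combine(System.Environment.CurrentDirectory, "image.jpg");
    FileStream imageFile = File.OpenRead(filePath);

    // dispose the stream only after WCF has finished sending it to the client
    OperationContext clientContext = OperationContext.Current;
    clientContext.OperationCompleted += (sender, args) =>
    {
        if (imageFile != null)
            imageFile.Dispose();
    };

    return imageFile;
}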
Short answer: the calling code, via a using block.
Long answer: sample code should never be held up as an exemplar of good practice, it's only there to illustrate one very specific concept. Real code would never have a try block like that, it adds no value to the code. Errors should be logged at the topmost level, not down in the depths. Bearing that in mind, the sample becomes a single expression, File.OpenRead(filePath), that would be simply plugged into the using block that requires it.
UPDATE (after seeing more code):
Just return the stream from the function, WCF will decide when to dispose it.
The stream needs to be closed by the party responsible for reading it. For example, if the service returns the stream to the client, it is the client application's responsibility to close it, as the service doesn't know or have control over when the client finishes reading. Also, WCF will not take care of closing the stream, again because it doesn't know when the receiving party has finished reading. :)
HTH,
Amit Bhatia

WCF Proxy Client taking time to create, any cache or singleton solution for it

We have more than a dozen WCF services called via TCP binding. There are a lot of calls to the same WCF service at various places in the code.
AdminServiceClient client = FactoryS.AdminServiceClient(); // this takes significant time
client.GetSomeThing(param1);
client.Close();
I want to cache the client or produce it from a singleton so that I can save some time. Is that possible?
Thx
Yes, this is possible. You can make the proxy object visible to the entire application, or wrap it in a singleton class for neatness (my preferred option). However, if you are going to reuse a proxy for a service, you will have to handle channel faults.
First create your singleton class / cache / global variable that holds an instance of the proxy (or proxies) that you want to reuse.
When you create the proxy, you need to subscribe to the Faulted event on the inner channel
proxyInstance.InnerChannel.Faulted += new EventHandler(ProxyFaulted);
and then put some reconnect code inside the ProxyFaulted event handler. The Faulted event will fire if the service drops, or the connection times out because it was idle. The Faulted event will only fire if you have reliableSession enabled on your binding in the config file (note that it is not enabled by default on netTcpBinding, so you need to turn it on explicitly).
Edit: If you don't want to keep your proxy channel open all the time, you will have to test the state of the channel before every time you use it, and recreate the proxy if it is faulted. Once the channel has faulted there is no option but to create a new one.
Edit2: The only real difference in load between keeping the channel open and closing it every time is a keep-alive packet being sent to the service and acknowledged every so often (which is what is behind your channel fault event). With 100 users I don't think this will be a problem.
The other option is to put your proxy creation inside a using block where it will be closed / disposed at the end of the block (which is considered bad practice). Closing the channel after a call may result in your application hanging because the service is not yet finished processing. In fact, even if your call to the service was async or the service contract for the method was one-way, the channel close code will block until the service is finished.
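For completeness, the usual per-call pattern (if you decide against caching) avoids the using block and looks roughly like this, reusing the AdminServiceClient from your question:
AdminServiceClient client = FactoryS.AdminServiceClient();
try
{
    client.GetSomeThing(param1);
    client.Close();
}
catch (CommunicationException)
{
    client.Abort(); // the channel is faulted, Close() would throw
}
catch (TimeoutException)
{
    client.Abort();
}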
Here is a simple singleton class that should have the bare bones of what you need:
public static class SingletonProxy
{
    // lock on a dedicated object: proxyInstance may be null and is replaced on reconnect
    private static readonly object syncRoot = new object();
    private static CupidClientServiceClient proxyInstance = null;

    public static CupidClientServiceClient ProxyInstance
    {
        get
        {
            if (proxyInstance == null)
            {
                AttemptToConnect();
            }
            return proxyInstance;
        }
    }

    private static void ProxyChannelFaulted(object sender, EventArgs e)
    {
        bool connected = false;
        while (!connected)
        {
            // you may want to put timer code around this, or
            // other code to limit the number of retries if
            // the connection keeps failing
            connected = AttemptToConnect();
        }
    }

    public static bool AttemptToConnect()
    {
        // this whole process needs to be thread safe
        lock (syncRoot)
        {
            try
            {
                if (proxyInstance != null)
                {
                    // deregister the event handler from the old instance
                    proxyInstance.InnerChannel.Faulted -= ProxyChannelFaulted;
                }

                // (re)create the instance
                proxyInstance = new CupidClientServiceClient();

                // always open the connection
                proxyInstance.Open();

                // the Faulted handler is attached here (after the Open)
                // because we don't want the instance to keep raising Faulted
                // as soon as the Open call fails
                proxyInstance.InnerChannel.Faulted += ProxyChannelFaulted;
                return true;
            }
            catch (EndpointNotFoundException)
            {
                // do something here (log, show user message etc.)
                return false;
            }
            catch (TimeoutException)
            {
                // do something here (log, show user message etc.)
                return false;
            }
        }
    }
}
I hope that helps :)
In my experience, creating/closing the channel on a per call basis adds very little overhead. Take a look at this Stackoverflow question. It's not a Singleton question per se, but related to your issue. Typically you don't want to leave the channel open once you're finished with it.
I would encourage you to use a reusable ChannelFactory implementation if you're not already and see if you still are having performance problems.
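A rough sketch of what I mean by a reusable ChannelFactory (the IAdminService contract name, endpoint configuration name, and parameter type are assumptions based on your snippet):
public static class AdminServiceProxy
{
    // creating the ChannelFactory is the expensive part, so create it once and reuse it;
    // "AdminServiceEndpoint" is an assumed endpoint name from your config
    private static readonly ChannelFactory<IAdminService> Factory =
        new ChannelFactory<IAdminService>("AdminServiceEndpoint");

    public static void GetSomeThing(int param1)
    {
        IAdminService channel = Factory.CreateChannel();
        try
        {
            channel.GetSomeThing(param1);
            ((IClientChannel)channel).Close();
        }
        catch (CommunicationException)
        {
            ((IClientChannel)channel).Abort();
        }
        catch (TimeoutException)
        {
            ((IClientChannel)channel).Abort();
        }
    }
}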

What is the proper life-cycle of a WCF service client proxy in Silverlight 3?

I'm finding mixed answers to my question out in the web. To elaborate on the question:
Should I instantiate a service client proxy once per asynchronous invocation, or once per Silverlight app?
Should I close the service client proxy explicitly (as I do in my ASP.NET MVC application calling WCF services synchronously)?
I've found plenty of bloggers and forum posters out contradicting each other. Can anyone point to any definitive sources or evidence to answer this once and for all?
I've been using Silverlight with WCF since V2 (working with V4 now), and here's what I've found. In general, it works very well to open one client and just use that one client for all communications. And if you're not using the DuplexHttpBinding, it also works fine to do just the opposite: open a new connection each time and then close it when you're done. Because of how Microsoft has architected the WCF client in Silverlight, you're not going to see much performance difference between keeping one client open all the time and creating a new client with each request. (But if you're creating a new client with each request, make darned sure you're closing it as well.)
Now, if you're using the DuplexHttpBinding, i.e., if you want to call methods on the client from the server, it's of course important that you don't close the client with each request. That's just common sense. However, what none of the documentation tells you, but which I've found to be absolutely critical, is that if you're using the DuplexHttpBinding, you should only ever have one instance of the client open at once. Otherwise, you're going to run into all sorts of nasty timeout problems that are going to be really, really hard to troubleshoot. Your life will be dramatically easier if you just have one connection.
The way that I've enforced this in my own code is to run all my connections through a single static DataConnectionManager class that throws an Assert if I try to open a second connection before closing the first. A few snippets from that class:
private static int clientsOpen;
public static int ClientsOpen
{
    get
    {
        return clientsOpen;
    }
    set
    {
        clientsOpen = value;
        Debug.Assert(clientsOpen <= 1,
            "Bad things seem to happen when there's more than one open client.");
    }
}

public static RoomServiceClient GetRoomServiceClient()
{
    ClientsCreated++;
    ClientsOpen++;
    Logger.LogDebugMessage("Clients created: {0}; Clients open: {1}", ClientsCreated, ClientsOpen);
    return new RoomServiceClient(GetDuplexHttpBinding(), GetDuplexHttpEndpoint());
}

public static void TryClientClose(RoomServiceClient client, bool waitForPendingCalls, Action<Exception> callback)
{
    if (client != null && client.State != CommunicationState.Closed)
    {
        client.CloseCompleted += (sender, e) =>
        {
            ClientsClosed++;
            ClientsOpen--;
            Logger.LogDebugMessage("Clients closed: {0}; Clients open: {1}", ClientsClosed, ClientsOpen);
            if (e.Error != null)
            {
                Logger.LogDebugMessage(e.Error.Message);
                client.Abort();
            }
            closingIntentionally = false;
            if (callback != null)
            {
                callback(e.Error);
            }
        };
        closingIntentionally = true;
        if (waitForPendingCalls)
        {
            WaitForPendingCalls(() => client.CloseAsync());
        }
        else
        {
            client.CloseAsync();
        }
    }
    else
    {
        if (callback != null)
        {
            callback(null);
        }
    }
}
The annoying part, of course, is that if you only have one connection, you need to trap for when that connection closes unintentionally and try to reopen it. And then you need to reinitialize all the callbacks that your different classes were registered to handle. It's not really all that difficult, but it's annoying to make sure it's done right. And of course, automated testing of that part is difficult, if not impossible...
You should open your client per call and close it immediately after. If you are in doubt, browse to a .svc file using IE and look at the example code it shows there.
WCF has configuration settings that tell it how long to wait for a call to return; my thinking is that when the call does not complete in the allowed time, CloseAsync will close it. Therefore call client.CloseAsync().