wcf slow connection and number of connections in the pool

I have a WCF client.
The client calls a function and then closes.
If I use netstat there is only one connection.
I ran an experiment: in the server function I put Thread.Sleep(10000) and then started the client again.
With netstat I found that there are 5 connections.
Why does the client open more than one connection when the response is slow?
Regards

Net.tcp connections are pooled, so if your process has been running for a while you will see that existing connections are reused, and a new one is created only when an existing one is in use; it is returned to the pool afterwards. Your usage pattern therefore determines how the pool behaves.
http://kennyw.com/work/indigo/173
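To see the pooling behaviour described above, here is a minimal sketch (the contract, address and operation names are made up) where several calls are in flight at the same time while the server is slow, so the pool has to open an extra net.tcp connection for each overlapping call:

using System.ServiceModel;
using System.Threading.Tasks;

[ServiceContract]
interface IMyService
{
    [OperationContract]
    void DoWork();   // the server implementation sleeps 10 s here
}

class PoolDemo
{
    static void Main()
    {
        // Hypothetical endpoint; binding and address are assumptions.
        var factory = new ChannelFactory<IMyService>(
            new NetTcpBinding(), "net.tcp://localhost:9000/MyService");

        var tasks = new Task[5];
        for (int i = 0; i < tasks.Length; i++)
        {
            tasks[i] = Task.Run(() =>
            {
                var proxy = factory.CreateChannel();
                proxy.DoWork();                          // slow response keeps the channel busy
                ((ICommunicationObject)proxy).Close();
            });
        }
        Task.WaitAll(tasks);   // while the calls overlap, netstat shows several connections
    }
}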

Related

Questions about SignalR Connection

All,
I am using SignalR (.NET 6) and have a couple of questions about SignalR connections (specifically SignalR connections that use WebSockets):
Q #1)
If the SignalR client crashes, will the SignalR server dispose the underlying connection automatically for me (and will the OnDisconnectedAsync() event be fired)?
The idea is to dispose client resources on the server (for example, an NHibernate session) that are tied to each connection.
My tests indicate (on a local machine, both server and client):
I tried to simulate this scenario: I had a running client which I then shut down with Task Manager, and the minute Windows released the resources for the process, the SignalR server somehow detected that the connection was lost, released the connection, and OnDisconnectedAsync() was called. I am not sure my test was sufficient for this use case (client crash). I am curious how the server knew; was it perhaps because the finalizer for the client connection ran?
Q #2) If the current connection between client and server is broken or interrupted and SignalR needs to reconnect, and it successfully reconnects, does it use the same connection (with the same connection ID/same WebSocket) or does it attempt to create a new connection (tied to a new WebSocket)?
https://learn.microsoft.com/en-us/aspnet/core/signalr/configuration?view=aspnetcore-6.0&tabs=dotnet
From the documentation (ClientTimeoutInterval): The server considers the client disconnected if it hasn't received a message (including keep-alive) in this interval. It could take longer than this timeout interval for the client to be marked disconnected due to how this is implemented. The recommended value is double the KeepAliveInterval value.
It assigns a new connection ID. Consider using other data to track which user it is, e.g. checking in the OnConnectedAsync and OnDisconnectedAsync methods.
https://learn.microsoft.com/en-us/aspnet/core/signalr/groups?view=aspnetcore-6.0
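As a rough illustration of that approach (the hub name and the storage choice are made up; this is a sketch, not the only way), you can track users yourself in the hub and release per-connection resources in OnDisconnectedAsync:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class NotificationsHub : Hub
{
    // Hypothetical store mapping a user identifier to that user's current connection IDs.
    private static readonly ConcurrentDictionary<string, HashSet<string>> Connections = new();

    public override async Task OnConnectedAsync()
    {
        var user = Context.UserIdentifier ?? Context.ConnectionId;
        Connections.AddOrUpdate(user,
            _ => new HashSet<string> { Context.ConnectionId },
            (_, set) => { lock (set) { set.Add(Context.ConnectionId); } return set; });
        await base.OnConnectedAsync();
    }

    public override async Task OnDisconnectedAsync(Exception? exception)
    {
        var user = Context.UserIdentifier ?? Context.ConnectionId;
        if (Connections.TryGetValue(user, out var set))
        {
            lock (set) { set.Remove(Context.ConnectionId); }
        }
        // Dispose per-connection resources here (e.g. an NHibernate session keyed by ConnectionId).
        await base.OnDisconnectedAsync(exception);
    }
}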

If IIS and/or SQL Server queue up requests, where can those queues be monitored?

I have a function exposed through an ASMX web service that upserts data into a SQL Server database. I call it asynchronously like this from a console app:
theClient.DoMyFunctionAsync(myParam);
It works great, but I have some questions ...
If I call the function 100 or 500 or 1000 times as fast as I can, where would the bottlenecks lie and how would I monitor them?
I assume IIS would either stop receiving or queue up the requests? If it queues them up, where is that queue and how can I monitor it?
I know the .NET web service handles database connection pooling. Does the connection pool have a queue mechanism that stores the requests for a while until a connection from the pool is available? And if so, how would I monitor that queue?
For ASP.NET, check the "Requests Queued" counter using Performance Monitor.
Details and other counters here: http://msdn.microsoft.com/en-us/library/fxk122b4(v=vs.100).aspx
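If you would rather watch it from code than from the Performance Monitor UI, a small sketch like the following reads the same counter (the category and counter names assume classic ASP.NET on the web server). For the connection pool side, the ADO.NET provider exposes its own counters under ".NET Data Provider for SqlServer", e.g. NumberOfPooledConnections.

using System;
using System.Diagnostics;
using System.Threading;

class QueueMonitor
{
    static void Main()
    {
        // Instance-less ASP.NET counter; run this on the web server.
        using (var requestsQueued = new PerformanceCounter("ASP.NET", "Requests Queued", true))
        {
            for (int i = 0; i < 30; i++)
            {
                Console.WriteLine("Requests Queued: " + requestsQueued.NextValue());
                Thread.Sleep(1000);
            }
        }
    }
}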

WCF client with changing IP addresses and interfaces during connection

Imagine situation (this is real situation):
There is a WCF client application on laptop.
Laptop is connected by WiFi to internet.
User is doing some stuff (request reply operations) on his laptop at work connected to WCF service.
Then the user's laptop goes to sleep and the user goes home. At home the user wakes the laptop up, connects an HSDPA/3G modem (different interface and IP) and wants to continue working in the client application. Note that the application hasn't been closed.
The user (client application) should be authenticated and, if possible, communication should be encrypted.
What are the best practices?
Create a new proxy for each operation? This would be very slow when initializing a net.tcp connection with authentication.
Is the solution a basicHttp connection (+ HTTPS) with InstanceContextMode.PerCall? Note that speed and the higher payload are a problem.
Or is the best solution something like a wrapper(Func<>) that loops until the operation finishes successfully (on failure, a new connection is created and the function is called again)?
Thank you for suggestions
I've always kept the connection open only for as long as the unit of work requires. Basically, the connection is open and available only while the application is performing some processing (and that processing requires a WCF connection). It may be more overhead to keep reconnecting (and, depending on connection speed, it may add latency), but it is also safer to only hold a connection while you are actually working with it (least probability of failure), and I'm generally saving those resources for other purposes.
However, this all depends on what the application does; if the client is dumb and the service is doing all the work, it may make sense to keep the connection open, since every function executes a method on the service. With that, though, comes some failure checking and re-establishing should the connection be unexpectedly severed.
Also, net.tcp is going to be a lot faster than wsHttp. And I personally haven't seen a lot of latency when establishing a net.tcp connection (though I don't know what kind of authentication you're doing; mine have generally used Windows authentication).
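For the wrapper idea from the question (option 3), a minimal sketch could look like the following (the class and member names are made up and error handling is reduced to the essentials): the call is retried on a fresh channel when the existing one has faulted, e.g. after the network interface changed.

using System;
using System.ServiceModel;

public class ResilientClient<TChannel> where TChannel : class
{
    private readonly ChannelFactory<TChannel> _factory;
    private TChannel _channel;

    public ResilientClient(ChannelFactory<TChannel> factory)
    {
        _factory = factory;
    }

    public TResult Call<TResult>(Func<TChannel, TResult> operation, int maxAttempts = 2)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                if (_channel == null)
                    _channel = _factory.CreateChannel();
                return operation(_channel);
            }
            catch (CommunicationException) when (attempt < maxAttempts)
            {
                // The channel is faulted (network change, timeout, ...):
                // abort it and retry on a brand new channel.
                (_channel as ICommunicationObject)?.Abort();
                _channel = null;
            }
        }
    }
}

Usage would then be something like var data = client.Call(proxy => proxy.GetData(42)); where GetData stands for whatever operation your contract defines.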

closing WCF proxy

I have always followed the guidance of try/Close/catch/Abort when it comes to a WCF proxy. I am facing a code base now that creates proxies in MVC controllers and just lets them go out of scope. I'm arguing the case that we need to edit the code base to use try/Close/catch/Abort but there is resistance.
Does anyone know a metric (e.g. perfmon) I can capture to illustrate the problem/benefit. Or a definitive reference that spells out the problem/benefit no one can dispute?
You can create a sample application to mimic the problem. Though I haven't tried it, you can try this:
Create a simple service and limit maxConcurrentCalls and maxConcurrentSessions to 5 (see the throttling sketch below).
Create a client application and, in it, call the service method without closing the connection.
Fire up 6 or more clients.
See what happens when you open a new connection from a client. Probably the client will wait for a certain time and you will get an exception.
If clients don't close their connections properly, the connections remain open on the service. So what happens if thousands of clients connect to the service at a time and leave their connections open? The service can only serve 'n' connections at a time, so it can't handle any new requests from clients; that's why closing connections is very important.
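For the first step, the throttling limits can be set in config (<serviceThrottling>) or programmatically; a rough self-hosted sketch (the contract, service and address are made up) could be:

using System;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
public interface ISampleService
{
    [OperationContract]
    string Ping();
}

public class SampleService : ISampleService
{
    public string Ping() => "pong";
}

class HostProgram
{
    static void Main()
    {
        var host = new ServiceHost(typeof(SampleService),
            new Uri("net.tcp://localhost:9001/SampleService"));
        host.AddServiceEndpoint(typeof(ISampleService), new NetTcpBinding(), "");

        // Throttle to 5 concurrent calls/sessions, matching the experiment above.
        host.Description.Behaviors.Add(new ServiceThrottlingBehavior
        {
            MaxConcurrentCalls = 5,
            MaxConcurrentSessions = 5
        });

        host.Open();
        Console.WriteLine("Service running; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}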
I think you are aware of the problem with wrapping a WCF proxy in a using block. In my applications I close WCF connections using an extension method, as described in this thread.
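That extension method typically looks something like this (a sketch; the exact code in the linked thread may differ):

using System;
using System.ServiceModel;

public static class ServiceClientExtensions
{
    public static void CloseOrAbort(this ICommunicationObject client)
    {
        try
        {
            if (client.State != CommunicationState.Faulted)
                client.Close();   // graceful shutdown
            else
                client.Abort();   // channel already faulted, just tear it down
        }
        catch (CommunicationException)
        {
            client.Abort();
        }
        catch (TimeoutException)
        {
            client.Abort();
        }
    }
}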
Have you tried a simple 'netstat -n' from the command prompt on both the server and the client? You are likely to see a lot of waiting/pending connections, which might exhaust your server resources for no reason.

wcf and duplex communication

I have a lot of client programs and one service.
These client programs communicate with the server over an HTTP channel with WCF.
The clients have dynamic IP.
They are online 24h/day.
I need the following:
The server should notify all the clients at a 3-minute interval. If a client is new (has just started), it should be notified immediately.
But because the clients have dynamic IPs, they are working 24h/day, and sometimes the connection is unstable, is it a good idea to use WCF duplex?
What happens when the connection goes down? Will it automatically recover?
Is it a good idea to use remote MSMQ for this type of notification?
Regards,
WCF duplex is very resource-hungry, and as a rule of thumb you should not use more than 10 duplex channels. There is a lot of overhead involved with duplex channels. Also, there is no auto-recovery.
If you know the interval is 3 minutes and you want the client to get information when it starts, why not let the client poll the information from the server?
When the connection goes down the callback will throw an exception and the channel will close.
I am not sure MSMQ will work for you, unless each client creates an MSMQ queue and you push messages to each one of them. Again, with an unreliable connection it will not help. I don't think you can "push" the data if you lose the connection to a client, the client goes offline, or it changes IP without notifying your system.
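A minimal sketch of the polling alternative (the contract, operation and address are made up): the client asks for pending notifications immediately at start and then every 3 minutes, recreating the channel if the unstable connection faults it.

using System;
using System.ServiceModel;
using System.Threading;

[ServiceContract]
interface INotificationService
{
    [OperationContract]
    string[] GetPendingNotifications();
}

class PollingClient
{
    static void Main()
    {
        // Hypothetical HTTP endpoint, matching the question's HTTP channel.
        var factory = new ChannelFactory<INotificationService>(
            new BasicHttpBinding(), "http://server/notifications");

        while (true)
        {
            var proxy = factory.CreateChannel();
            try
            {
                foreach (var note in proxy.GetPendingNotifications())
                    Console.WriteLine(note);
                ((ICommunicationObject)proxy).Close();
            }
            catch (CommunicationException)
            {
                // Unstable connection: abort this channel and try again next cycle.
                ((ICommunicationObject)proxy).Abort();
            }
            Thread.Sleep(TimeSpan.FromMinutes(3));
        }
    }
}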