In the current version of our project, HttpClient has been used for creating requests. But at peak request times, most of our TCP ports end up in a TIME_WAIT state, and they stay that way for about two minutes even after the task completes. I read some articles about IHttpClientFactory.
But I'm not sure how this solution can solve our problems. Any ideas will be appreciated.
There are tons of articles out there that explain why you should not dispose of HttpClient per request (which leads to exactly what you describe and ends in socket exhaustion), and why you should instead use IHttpClientFactory to manage the lifecycle of HttpClient-related services.
This is because each time you make a request with an HttpClient and dispose of it afterwards, the underlying socket is left in a TIME_WAIT state; if you make a few thousand requests in a few seconds, you will run out of sockets. IHttpClientFactory is a contract for better managing your HTTP services and reusing sockets from the connection pool without you having to manage them yourself.
As a start, go through this article; I think it provides sufficient info about what you want to achieve:
https://learn.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests
One of the key points in the article above, which answers your question about how IHttpClientFactory can solve your issue:
Each time you get an HttpClient object from the IHttpClientFactory, a new instance is returned. But each HttpClient uses an HttpMessageHandler that's pooled and reused by the IHttpClientFactory to reduce resource consumption, as long as the HttpMessageHandler's lifetime hasn't expired.
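In practice that looks roughly like this. This is only a minimal sketch assuming an ASP.NET Core-style host; the "pricing" client name and the PricingService class are illustrative, not from your project:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;

public static class HttpClientSetup
{
    // Registration, e.g. in Startup.ConfigureServices or Program.cs.
    public static void Register(IServiceCollection services)
    {
        services.AddHttpClient("pricing", c => c.BaseAddress = new Uri("https://example.com/"));
    }
}

public class PricingService
{
    private readonly IHttpClientFactory _factory;

    public PricingService(IHttpClientFactory factory) => _factory = factory;

    public async Task<string> GetQuoteAsync()
    {
        // A new HttpClient instance is returned on each call, but the underlying
        // HttpMessageHandler (and its sockets) is pooled and reused by the factory.
        HttpClient client = _factory.CreateClient("pricing");
        return await client.GetStringAsync("api/quote");
    }
}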
Hope this helps!
Related
In the project I'm currently working on, we're using WCF.
Company policy forces us to use async calls, and the reason given is security.
I've asked why this is so much more secure, but I don't get clear answers.
Can someone explain why this is so much more secure?
They are not. The same security (authentication, encryption) mechanisms and considerations apply whether a call blocks until it gets a response or it uses a callback.
The only way someone might be confused into thinking that async calls are more "safe/secure" is if they think that unhandled WCF exceptions will not bring down the main thread when the calls are asynchronous, since the exceptions will be raised inside the callback.
In this case, I would advise extreme caution when approaching the owner of this policy, to avoid career-limiting consequences. Some people can get emotionally attached to their policies.
There is no reason why an async call would be more secure than a sync call. I think you should talk to the owner of the policy about it.
No, they are not more or less secure than synchronous calls. The only difference is that the client waits for a response on synchronous calls, whereas with async it is notified of a response.
Are they coming at it from the angle that synchronous calls leave the connection open longer, or something like that?
Just exposing a WCF operation using an async signature (BeginBlah/EndBlah) doesn't actually affect the exposed operation at all. When you view the metadata, an operation like
[OperationContract(AsyncPattern = true)]
IAsyncResult BeginSomething(AsyncCallback callback, object state);
void EndSomething(IAsyncResult result);
...actually still ends up being represented as an operation called 'Something'. And actually this is one of the nice things about WCF: the client and server can differ in whether they choose to implement/consume an operation synchronously.
So if you are generating WCF proxies (e.g. through Add Service Reference), you will get synchronous versions of each operation whether they are implemented asynchronously or not, unless you tick the little checkbox to generate the async overloads. And when you do, you then get async versions of operations that might only be declared synchronously on the server.
All WCF is doing, on both the client and the server, is giving you a choice about your threading model: do you want WCF to wait for the result, or are you going to signal it when you've finished? How the actual transport connection is managed is, to the best of my knowledge, totally unaffected. E.g., for a NetTcpBinding the socket still stays open for the duration of the call, either way.
So, to get to the point, I really struggle to imagine how this could possibly make any difference to the security envelope of a WCF service. If a service is exposed using an async pattern, and is genuinely implemented in an async way (async for outbound IO, or queuing work via the thread pool or something), then there's probably an argument that it would be harder to DoS the service (by exhausting the pool of WCF IO threads), but that'd be about it.
See Synchronous and Asynchronous Operations on MSDN.
NB: If you are sharing the contract interface between the client and server, then obviously the synchronicity of the two ends matches (because they are both using the same interface type), but that's just a limitation of using a shared interface. If you made another equivalent interface, differing only by the async pattern, you could still create a ChannelFactory against it just fine.
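To illustrate that last point with a rough sketch (the interface names and address below are made up, not from the original post): the server can implement the synchronous shape while the client builds a channel against an equivalent async-pattern interface.

using System;
using System.ServiceModel;

// Server-side contract: a plain synchronous operation called "Something".
[ServiceContract(Name = "IMyService")]
public interface IMyService
{
    [OperationContract]
    string Something(string input);
}

// Client-side equivalent contract: the same wire operation, async pattern.
[ServiceContract(Name = "IMyService")]
public interface IMyServiceAsync
{
    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginSomething(string input, AsyncCallback callback, object state);
    string EndSomething(IAsyncResult result);
}

class Client
{
    static void Main()
    {
        // Build a channel against the async-shaped interface even though the
        // server only implements the synchronous one.
        var factory = new ChannelFactory<IMyServiceAsync>(
            new NetTcpBinding(), "net.tcp://localhost:8000/my");
        IMyServiceAsync proxy = factory.CreateChannel();
        IAsyncResult ar = proxy.BeginSomething("hello", null, null);
        Console.WriteLine(proxy.EndSomething(ar));
    }
}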
I agree with the other answers - definitely not more secure.
Fire up Fiddler and watch a synchronous request vs. an asynchronous request. You'll basically see the same type of traffic (although the sync may send and receive more data since it's probably a postback). But you can intercept both of those requests, manipulate them, and resend them and cause havoc on your server.
Fiddler's a great tool, by the way. It's an eye-opener in terms of what kind of data and how much data you're sending to the server.
I'm writing an asynchronous socket program in VB.NET. I've used the asynchronous client and server code from the following links:
http://msdn.microsoft.com/en-us/library/fx6588te.aspx for the server program.
The client program is as per http://msdn.microsoft.com/en-us/library/bew39x2a.aspx.
When I try to connect with more than one client, the second client always waits until the first client completes its call. I want the server to handle calls from multiple clients at the same time.
Does WCF help the server handle calls from multiple clients at the same time? If so, what is WCF and how will it help? Or is there another concept that can help?
Yes, WCF can help you there. But it implements only well-known protocols like SOAP, WS-*, and JSON, plus a few proprietary ones like its binary TCP binding.
You'd only use async socket programming if you need:
- High scalability (more than 20 simultaneous clients)
- A custom protocol
If you build on top of HTTP, I recommend the HttpListener class
If you need a custom protocol with a few clients, use synchronous socket programming with multiple threads.
If you still want to implement a server with async sockets, then you need a continuous accept loop (after EndAccept(), immediately call BeginAccept() again) and then a BeginReceive() on the accepted socket.
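A rough C# sketch of that accept loop (the port and buffer size are arbitrary, and error handling is omitted):

using System;
using System.Net;
using System.Net.Sockets;

class AsyncServer
{
    static Socket listener;

    static void Main()
    {
        listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Any, 11000));
        listener.Listen(100);
        // Start the first accept; every completed accept immediately posts another.
        listener.BeginAccept(OnAccept, null);
        Console.WriteLine("Listening; press Enter to quit.");
        Console.ReadLine();
    }

    static void OnAccept(IAsyncResult ar)
    {
        Socket client = listener.EndAccept(ar);
        listener.BeginAccept(OnAccept, null);   // keep accepting further clients

        var buffer = new byte[1024];
        client.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, recvAr =>
        {
            int read = client.EndReceive(recvAr);
            // Handle the 'read' bytes in 'buffer' here, then post the next BeginReceive.
        }, null);
    }
}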
I can tell you from experience though that debugging such a server is not easy. It's quite hard to follow the chain of events even through a detailed log file. Good luck with that :)
I'm currently playing around a little with WCF, and while doing so I came across a question where I'm not sure if I'm on the right track.
Let's assume a simple setup that looks like this: client -> service1 -> service2.
The communication is tcp-based.
Where I'm not sure is whether it makes sense for service1 to cache the client proxy for service2. That means the proxy might be accessed from multiple threads, and I have to deal with that.
I'd like to take advantage of the TCP session to get better performance, but I'm not sure whether this "architecture" is supported by WCF/the network/whatever at all. The problem I see is that all of the communication goes over the same channel if I'm not using locks or some other synchronization.
I guess the better idea is to cache the proxy in a threadstatic variable.
But before I do that, I wanted to confirm that it's really not a good idea to have only one proxy instance.
tia
Martin
If you don't know that you have a performance problem, then why worry about caching? You're opening yourself to the risk of improperly implementing multithreading code, and without any clear, measurable benefit.
Have you measured performance yet, or profiled the application to see where it's spending its time? If not, then when you do, you may well find that the overhead of multiple TCP sessions is not where your performance problems lie. You may wish you had the time to optimize some other part of your application, but you will have spent that time optimizing something that didn't need to be optimized.
I am already using such a structure. I have one service that collaborates with some other services to provide its implementation. Of course, in my case the client calls a one-way method on the first service. I am getting very good benefit from it. I have also configured it to limit the number of concurrent calls in some cases.
Yes, that architecture is supported by WCF. I deal with applications every day that use similar structures, using NetTCPBinding.
The biggest thing to worry about is the ConcurrencyMode of the various services involved, and making sure that they do not block unnecessarily. It is very easy to get into a scenario where you will be guaranteed timeouts, or at the least have poor performance due to multiple, synchronous calls across service boundaries. Even OneWay calls are not guaranteed to immediately return.
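For example, a minimal sketch (the contract and names are illustrative) of a singleton service configured to dispatch calls concurrently instead of serializing them:

using System.ServiceModel;

[ServiceContract]
public interface IService1
{
    [OperationContract]
    string DoWork(string input);
}

// A single service instance that handles calls from many clients concurrently.
// With ConcurrencyMode.Single (the default), every call would be serialized,
// so one slow downstream call to service2 would block all other clients.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class Service1 : IService1
{
    public string DoWork(string input)
    {
        // Call service2 here; any shared state must be protected by the caller,
        // since multiple threads can be inside this method at once.
        return input;
    }
}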
Be careful with ThreadStatic: .NET can switch the thread, so the variable can end up null.
For sessions, perhaps you could use session-enabled calls:
http://msdn.microsoft.com/en-us/library/ms733040.aspx
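For example, a minimal sketch of a session-enabled contract (the name is just a placeholder); the binding used must support sessions, which netTcpBinding does:

using System.ServiceModel;

// SessionMode.Required means every call made through one proxy instance flows
// over the same channel session on the server.
[ServiceContract(SessionMode = SessionMode.Required)]
public interface IForwardingService
{
    [OperationContract]
    string Forward(string message);
}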
But I would not recommend using it if you do not have a performance issue. I would use the normal way, or, if service1 is just for forwarding, you could use that functionality easily with WCF 4.0:
http://www.sdn.nl/SDN/Artikelen/tabid/58/view/View/ArticleID/2979/Whats-New-in-WCF-40.aspx
Regards
Firstly, make sure you know about the behaviour of ThreadStatic in ASP.NET applications:
http://piers7.blogspot.com/2005/11/threadstatic-callcontext-and_02.html
The thread that started your request may not be the thread that finishes it. Basically, the only safe place to keep per-request state in ASP.NET applications is inside HttpContext. The next obvious approach would be to create a wrapper around your WCF client proxy and ensure each IO request is made thread-safe using locks.
Although my personal preference would be to use a pool of proxy clients: whenever you need one, pop it off the pool queue, and when you're finished with it, put it back on.
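A very rough sketch of that pooling idea (the downstream contract, address, and names are made up for illustration; error handling is minimal):

using System.Collections.Concurrent;
using System.ServiceModel;

// Hypothetical downstream contract, for illustration only.
[ServiceContract]
public interface IService2
{
    [OperationContract]
    string Lookup(string key);
}

public class Service2ProxyPool
{
    private readonly ConcurrentBag<IService2> _pool = new ConcurrentBag<IService2>();
    private readonly ChannelFactory<IService2> _factory =
        new ChannelFactory<IService2>(new NetTcpBinding(), "net.tcp://localhost:9000/service2");

    public IService2 Take()
    {
        // Reuse an idle proxy if one is available, otherwise open a new channel.
        return _pool.TryTake(out IService2 proxy) ? proxy : _factory.CreateChannel();
    }

    public void Return(IService2 proxy)
    {
        var channel = (IClientChannel)proxy;
        // Only put healthy channels back; faulted or closed ones must be aborted.
        if (channel.State == CommunicationState.Faulted || channel.State == CommunicationState.Closed)
            channel.Abort();
        else
            _pool.Add(proxy);
    }
}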
Using techniques as hinted at in:
http://msdn.microsoft.com/en-us/library/system.servicemodel.servicecontractattribute.callbackcontract.aspx
I am implementing a server-push setup for my API to get realtime notifications of events from a server (no polling). Basically, the server has a RegisterMe() and UnregisterMe() method, and the client has a callback method called Announcement(string message) that, through the CallbackContract mechanisms in WCF, the server can call. This seems to work well.
Unfortunately, in this setup, if the Server were to crash or is otherwise unavailable, the Client won't know since it is only listening for messages. Silence on the line could mean no Announcements or it could mean that the server is not available.
Since my goal is to reduce polling rather than to achieve immediacy, I don't mind adding a void Ping() method on the server alongside RegisterMe() and UnregisterMe() that exists merely to test connectivity to the server. Periodically calling this method would, I believe, ensure that we're still connected (and also that no Announcements have been dropped by the transport, since this is TCP).
But is the Ping() method necessary, or is this connectivity test already available as part of WCF by default - something like serverProxy.IsStillConnected()? As I understand it, the channel's State would only return Faulted or Closed AFTER a failed Ping(), not instead of one.
2) From a broader perspective, is this callback approach solid? This is not for HTTP or AJAX - the number of connected clients will be small (tens of clients, max). Are there serious problems with this approach? Since this seems to be a mild risk, how can I prevent a slow or malicious client from blocking the server by not processing its callback queue fast enough? Is there a kind of timeout specific to the callback that I can set without affecting other operations?
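For reference, a bare-bones sketch of the contract shape described above, with the optional Ping() included (the interface names are placeholders):

using System.ServiceModel;

// Callback contract: the server pushes announcements to clients through this.
public interface IAnnouncementCallback
{
    [OperationContract(IsOneWay = true)]
    void Announcement(string message);
}

[ServiceContract(CallbackContract = typeof(IAnnouncementCallback))]
public interface IAnnouncementService
{
    [OperationContract]
    void RegisterMe();

    [OperationContract]
    void UnregisterMe();

    // Cheap round trip whose only purpose is to prove the channel is still alive.
    [OperationContract]
    void Ping();
}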
Your approach sounds reasonable, here are some links that may or may not help (they are not quite exactly related):
Detecting Client Death in WCF Duplex Contracts
http://tomasz.janczuk.org/2009/08/performance-of-http-polling-duplex.html
Having some health check built into your application protocol makes sense.
If you are worried about malicious clients, then add authorization.
The second link I shared above has a sample pub/sub server; you might be able to use that code. A couple of things to watch out for: consider pushing notifications via async calls or on a separate thread, and set the sendTimeout on the TCP binding.
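As a sketch, setting that timeout in code rather than config might look like this (the five-second value is arbitrary):

using System;
using System.ServiceModel;

class CallbackBindingSetup
{
    static NetTcpBinding CreateBinding()
    {
        // A callback send that does not complete within five seconds throws a
        // TimeoutException instead of holding the publisher indefinitely.
        return new NetTcpBinding
        {
            SendTimeout = TimeSpan.FromSeconds(5)
        };
    }
}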
HTH
I wrote a WCF application and encountered a similar problem. My server checked that clients had not 'pulled the plug' by periodically sending them a ping. The actual send method (asynchronous, since it was a server) had a timeout of 30 seconds. The client simply checked that it received the data every 30 seconds, while the server would catch an exception if the timeout was reached.
Authorisation was required to connect to the server (using the built-in WCF feature that forces the connecting party to call a particular method first), so from a malicious-client perspective you could easily add code to check for suspicious behaviour and ban their account, while disconnecting users who do not authenticate.
As the server I wrote was asynchronous, there wasn't really any way to block it. I guess that addresses your last point: the asynchronous send method fires off the ping (and any other data) and returns immediately. In the SendEnd method it would catch the timeout exception (sometimes several for one client) and disconnect them, without any blocking or freezing of the server.
Hope that helps.
You could use a publisher / subscriber service similar to the one suggested by Juval:
http://msdn.microsoft.com/en-us/magazine/cc163537.aspx
This would allow you to persist the subscribers if losing the server is a typical scenario. The publish method in this example also calls each subscriber on a separate thread, so a few dead subscribers will not block the others.
I have a Windows service that logs speed readings from a radar gun to a database. In addition, I made the service a WCF server. I have a Forms and a CF client that subscribe to the service and get called back whenever there is a reading that satisfies certain criteria.
This works in principle, but after some time the channel times out. It seems that there are some fundamental issues with long-running connections (see
http://blogs.msdn.com/drnick/archive/2007/11/05/custom-transport-retry-logic.aspx) and a duplex HTTP callback may not be the right solution. Are there any other ways I can realize a publish/subscribe pattern with WCF?
Edit: Even with a 2 hour timeout the channel is eventually compromised. I get this error:
The operation 'SignalSpeedData' could not be completed because the sessionful channel timed out waiting to receive a message. To increase the timeout, either set the receiveTimeout property on the binding in your configuration file, or set the ReceiveTimeout property on the Binding directly.
This happened 15 minutes after the last successful call. I am wondering if instead of keeping the session open, it is possible to re-establish a fresh session for every call.
Reliable messaging will take care of your needs: the channel re-establishes itself if there is a problem. WSDualHttpBinding provides this for the HTTP binding, and there is also a TCP binding that allows it. If you are on the same machine, the named pipe binding provides this by default.
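As a rough sketch (the timeout values are arbitrary), enabling a reliable session and raising the receive/inactivity timeouts on the TCP binding in code might look like this:

using System;
using System.ServiceModel;

class DuplexBindingSetup
{
    static NetTcpBinding CreateBinding()
    {
        var binding = new NetTcpBinding(SecurityMode.Transport)
        {
            // How long the server keeps the session open while no message arrives.
            ReceiveTimeout = TimeSpan.FromHours(2)
        };
        // Reliable session: allows the channel to recover from transient transport
        // problems; InactivityTimeout controls how long it may sit completely idle.
        binding.ReliableSession.Enabled = true;
        binding.ReliableSession.InactivityTimeout = TimeSpan.FromHours(2);
        return binding;
    }
}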