How to implement WCF proxy pooling? - wcf

What is the best practice around implementing WCF proxy pooling? What precautions should be taken in the design?
Any pointers in this direction are greatly appreciated.

If you want to go down that path, from Performance Improvement for WCF Client Proxy Creation in .NET 3.5 and Best Practices:
- You need to implement the right synchronization logic for managing the proxies.
- You need to make sure the proxies are used equally. Sometimes, you may want to implement a round-robin pattern for the proxies.
- You need to handle exceptions and retries for the pool.
- The pool size needs to be limited and configurable.
- You may need to be able to create proxies even when no proxy is available from the pool.
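If you do go this route, here is a minimal sketch of what such a pool might look like. It assumes a hypothetical generated proxy type called ServiceClient and uses ConcurrentBag (so .NET 4 or later); it covers synchronization, a bounded and configurable size, and falling back to creating a new proxy when the pool is empty, but leaves round-robin balancing and retry logic out:

    using System;
    using System.Collections.Concurrent;
    using System.ServiceModel;

    // Hypothetical pool for a generated WCF client type (here called ServiceClient).
    public class ProxyPool : IDisposable
    {
        private readonly ConcurrentBag<ServiceClient> _pool = new ConcurrentBag<ServiceClient>();
        private readonly int _maxSize;

        public ProxyPool(int maxSize = 10)   // pool size is limited and configurable
        {
            _maxSize = maxSize;
        }

        public ServiceClient Take()
        {
            ServiceClient proxy;
            while (_pool.TryTake(out proxy))
            {
                if (proxy.State == CommunicationState.Opened ||
                    proxy.State == CommunicationState.Created)
                    return proxy;
                proxy.Abort();   // discard faulted or closed proxies
            }
            // No usable proxy available: create a new one rather than block the caller.
            return new ServiceClient();
        }

        public void Return(ServiceClient proxy)
        {
            // Faulted proxies must not be reused; abort and drop them.
            if (proxy.State == CommunicationState.Faulted)
            {
                proxy.Abort();
                return;
            }

            if (_pool.Count < _maxSize)
                _pool.Add(proxy);   // note: the Count check is not atomic; good enough for a sketch
            else
                proxy.Close();      // pool is full, discard the surplus proxy
        }

        public void Dispose()
        {
            ServiceClient proxy;
            while (_pool.TryTake(out proxy))
                proxy.Abort();
        }
    }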

Why do you want to pool proxies?
Pools usually only exist when a resource (like a database connection) is scarce, expensive to build and possibly costly to maintain.
This should not be the case with WCF proxies, really - you create them as needed, and discard them when no longer needed.
I don't see any benefit or real use in trying to pool WCF proxies - what problem or issue are you trying to solve?
OK, thanks for your reply - I do understand what you're trying to accomplish - but I'm afraid you're pretty much on your own, since I don't think there are any bits and pieces in the .NET Framework and/or the WCF subsystem that would aid in creating proxy pools.
Marc
PS: as the article that Tuzo linked to shows, maybe you can get away with just caching the ChannelFactory objects. Creating those is indeed quite expensive, and if you can cache them for the lifetime of the app, maybe that'll be good enough for your needs. Check it out!
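For illustration, a minimal sketch of that approach, assuming a hypothetical service contract IMyService and an endpoint named "MyServiceEndpoint" in config; the expensive ChannelFactory is built once, and cheap per-call channels are created from it:

    using System;
    using System.ServiceModel;

    public static class MyServiceCaller
    {
        // The expensive part: the ChannelFactory is built once and reused
        // for the lifetime of the application.
        private static readonly ChannelFactory<IMyService> Factory =
            new ChannelFactory<IMyService>("MyServiceEndpoint");   // endpoint name from config

        public static TResult Call<TResult>(Func<IMyService, TResult> operation)
        {
            // Channels themselves are cheap to create per call.
            IMyService channel = Factory.CreateChannel();
            var clientChannel = (IClientChannel)channel;
            try
            {
                TResult result = operation(channel);
                clientChannel.Close();
                return result;
            }
            catch
            {
                clientChannel.Abort();   // never Close a faulted channel
                throw;
            }
        }
    }

Usage would then be something like MyServiceCaller.Call(svc => svc.GetData(42)), where GetData stands in for whatever your contract actually defines.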

Related

Consume WCF service from Go application

Is it even possible to more or less natively consume a WCF service from a Go application?
I can imagine it should be possible to execute SOAP calls in Go, but WCF is a bit more than just that; authorization, for example, will probably also be a problem...
Has anyone at least approached this area, or can someone give me useful advice on this "wheel reinvention" task?
Thank you in advance for all your input, ideas and suggestions.
I think you should expose a RESTful service. I have had the same problem myself, exposing a WCF service to many clients using PHP, Go, Ruby and all kinds of languages. We never got automatic proxy generation to work right.
Perhaps the simplest way is to stay with WCF, as described in this example:
https://www.codeproject.com/Articles/105273/Create-RESTful-WCF-Service-API-Step-By-Step-Guide
But I recommend switching to ASP.NET Core (the migration is not that hard), or if you have the budget, I would consider https://servicestack.net/
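For a rough idea of what the WCF REST route in that article looks like, here is a sketch (the contract and types are placeholder names). It still needs an endpoint configured with webHttpBinding and the webHttp behavior, but any client that can issue HTTP requests and parse JSON, Go included, can call it:

    using System.Runtime.Serialization;
    using System.ServiceModel;
    using System.ServiceModel.Web;

    [ServiceContract]
    public interface IProductService
    {
        // GET /products/42 returns JSON; no SOAP envelope or WCF-aware client required.
        [OperationContract]
        [WebGet(UriTemplate = "products/{id}", ResponseFormat = WebMessageFormat.Json)]
        Product GetProduct(string id);
    }

    [DataContract]
    public class Product
    {
        [DataMember] public string Id { get; set; }
        [DataMember] public string Name { get; set; }
    }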
This may come well after the question was asked; however, here is something really interesting that could help. The situation the authors found themselves in is still relevant today in some organizations.
https://github.com/khoad/msbingo
Here's the motivation provided by the authors:
Application/soap+msbin1 encoding was a blocking issue for modernizing services from WCF to platform-agnostic technologies such as Go. We needed to be able to make calls to dependency services that spoke msbin1 and were not going to be updated or even reconfigured, but we did not want to introduce unnecessary complexity such as workarounds like .NET-based WCF request translator proxies or deploying Mono with our service instances. Initially we tried the Mono deployment route, which, while it would have worked well enough, significantly complicated our deployment pipeline, thus erasing one of the major advantages of golang.
I found it a useful starting point to begin experimentation.

What is the best way of pulling json data in terms of performance?

Currently I am using HttpWebRequest to pull JSON data from an external site, and the performance is not good. Is WCF much better?
I need expert advice on this.
Probably not, but that's not the right question.
To answer it: WCF, which certainly supports JSON, is ultimately going to use HttpWebRequest at the bottom level, and it will certainly have the same network latency. Even more importantly, it will use the same server to get the JSON. WCF has a lot of advantages in building, maintaining, and configuring web services and clients, but it's not magically faster. It's possible that your method of deserializing JSON is really slow compared to what WCF would use by default, but I doubt it.
And that brings up the really important point: find out why the performance is bad. Changing frameworks is only a sensible optimization option if you know what's slow and, by extension, how doing something different would make it less slow. Is it the server? Is it deserialization? Is it the network? Is it authentication or some other request overhead detail? And so on.
So the real answer is: profile! Once you know what the performance issue really is, you can make an informed decision about whether a framework like WCF would help.
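As a quick first cut at that profiling, something like the following sketch (the URL is a placeholder, and Json.NET is assumed as the deserializer) at least separates network/server time from deserialization time:

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Net;
    using Newtonsoft.Json;   // assumed deserializer (Json.NET)

    class Program
    {
        static void Main()
        {
            const string url = "https://example.com/data.json";   // placeholder URL

            var network = Stopwatch.StartNew();
            var request = (HttpWebRequest)WebRequest.Create(url);
            string json;
            using (var response = request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                json = reader.ReadToEnd();   // DNS + connect + server time + download
            }
            network.Stop();

            var parse = Stopwatch.StartNew();
            var data = JsonConvert.DeserializeObject(json);   // local CPU work only
            parse.Stop();

            Console.WriteLine("Network/server: {0} ms, deserialization: {1} ms",
                network.ElapsedMilliseconds, parse.ElapsedMilliseconds);
        }
    }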
The short answer is: no.
The longer answer is that WCF is an API which doesn't specify a communication method, but supports multiple methods. However, those methods are normally built on SOAP, which involves more overhead than plain JSON over HTTP, and it would seem the world has decided to move on from SOAP.
What sort of performance are you looking for and what are you getting? It may be that you are simply facing physical limitations of network locations, in which case you might look towards making your interface feel more responsive, even if the data is sluggish.
It'd be worth it to see if most of the latency is just in reaching the remote site (e.g. response times are comparable to ping times). Or, perhaps, the problem is the time it takes for the remote site to generate and serve the page. If so, some intermediate caching might be best.
+1 on what Isaac said, but one thing I'd add: if you do use WCF here, it'll internally use HttpWebRequest in most places, so you're definitely not gaining performance at all. One way you may unintentionally gain performance, however, is in how WCF recycles, reuses, pools, and caches most transport objects internally. So it ultimately goes back to Isaac's advice on profiling.

WCF NetTCP Binding Over Internet

I have a question. I would like to serve a series of services made with WCF. The client that consumes the services is also .NET with WCF. I would like high speed of access, fast responses, and transport of medium-to-small Data Contracts (primarily basic .NET data types). The distribution will be over the internet, and I'm looking for reliability, availability and basic security.
I don't want to use WsHttp, because my clients are all .NET-based and I will have almost 150 clients requesting the services.
What do you suggest to use for binding? Are there any disadvantages, risks, etc?
Thanks in advance!
Since you plan to use simple types and small data contracts, the binding you use is nearly irrelevant compared to the latency introduced by going over the Internet. So, the right answer is to use the one which is easiest to manage and the most secure.
I recommend that you host the app in IIS and use a wsHttpBinding and take all the manageability goodness that goes along with it. It will also happen to be interoperable, and while that is irrelevant today, it is just free, so why not?
And, please consider the granularity of your service. You know your customers better, but on the wide open Internet, stuff happens. Because the round trip time over the Internet is variable and impossible to control, it could take milliseconds or seconds or may not get there at all. So, you should take fewer trips with larger payloads if possible, and use all sorts of caching and async operations to make the app appear "fast".
There is a good article on choosing a binding by Juval Lowy here:
http://www.code-magazine.com/article.aspx?quickid=0605051&page=3
Generally the advice is not to use netTcpBinding over the internet; I have not heard of anyone doing it, although it may work if the ports are open all the way and nothing blocks the calls.
Test it with netTcp; if it does not work, you just need to change the configuration.
The most important thing is to consider your security needs. If you just need point-to-point security, use basicHttp over SSL; if you need end-to-end security, use wsHttp with message encryption.
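As a rough illustration, those two options look like this when set up in code (the contract and address are placeholders):

    using System;
    using System.ServiceModel;

    static class SecureBindings
    {
        // Point-to-point security: basicHttpBinding over SSL (transport security).
        public static BasicHttpBinding PointToPoint()
        {
            return new BasicHttpBinding(BasicHttpSecurityMode.Transport);
        }

        // End-to-end security: wsHttpBinding with message-level encryption.
        public static WSHttpBinding EndToEnd()
        {
            return new WSHttpBinding(SecurityMode.Message);
        }

        // Hypothetical client-side use with a placeholder contract and address.
        public static IMyService CreateClient()
        {
            var factory = new ChannelFactory<IMyService>(
                EndToEnd(), new EndpointAddress("http://services.example.com/MyService"));
            return factory.CreateChannel();
        }
    }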
For your scenario, NetTcpBinding is the binding of choice. As you are sure the client will be WCF, there is no need for interoperability.
Have a look at the Programming WCF Services book.
The only thing I'm not sure about is firewalls. If you have to get through one of these, maybe a WS-* binding would be more appropriate.

Improving WCF performance

Could someone suggest ways to improve the performance of my .NET WCF service?
Right now it's pretty slow, and sometimes it gets clogged and eventually stops responding.
What kind of InstanceContextMode and ConcurrencyMode are you using on your service class?
If it's PerCall instances, you might want to check if you can reduce the overhead of creating a server instance for each call.
If it's Single instances (singleton) - do you really need that? :-) Try using PerCall instead.
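For reference, both settings go on the ServiceBehavior attribute of the service class; a per-call setup might look like this sketch (MyService/IMyService are placeholder names):

    using System.ServiceModel;

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                     ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class MyService : IMyService
    {
        // With PerCall, each request gets its own service instance, so there is
        // no shared instance state to lock; ConcurrencyMode matters most for
        // Single or PerSession instancing.
    }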
Marc
Well, what sort of data are you sending, and over what binding?
Is the problem the size of requests (bandwidth), or the quantity of requests (latency)? If latency, then simply make fewer, but bigger, requests ;-p
For bandwidth: if you are sending binary data over HTTP, you can enable MTOM - that'll save you a few bytes. You can enable compression support at the server, but this isn't guaranteed.
If you are using .NET to .NET, you might want to consider protobuf-net; this has WCF hooks for swapping the formatter (DataContractSerializer) to use Google's "protocol buffers" binary format, which is very small and fast. I can advise on how to set this up on request.
Other than that: send less data ;-p
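Regarding the MTOM point above, the encoding can be switched on directly on the binding; a sketch (the same setting is also available in config, and the size limit below is purely illustrative):

    using System.ServiceModel;

    static class Bindings
    {
        // Binary data is transmitted as MTOM attachments instead of base64-encoded text.
        public static BasicHttpBinding CreateMtomBinding()
        {
            return new BasicHttpBinding
            {
                MessageEncoding = WSMessageEncoding.Mtom,
                MaxReceivedMessageSize = 4 * 1024 * 1024   // raise limits if the payloads are large
            };
        }
    }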
What binding are you using? If you're using HTTP you could get better performance with TCP.
In all likelihood though the bottleneck is going to be higher up in the WCF pipeline and possibly in your hosted objects.
We'd need some more details about your WCF set up to be able to help much.
The symptoms you describe could be caused by anything at all. You'll need to narrow it down by using a profiler such as JetBrains' dotTrace or AutomatedQA's AQTime.
Or you could do it the old-fashioned way by instrumenting your code (which is what the profilers do for you). Collect the start time before your operation starts. When it finishes, subtract the start time from the current time to determine the elapsed time, then print it out or log it or whatever. Do the same around the methods that this operation calls. You'll quickly see which methods are taking the longest time. Then, move into those methods and do the same, finding out what makes them take so long, and so on.
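A bare-bones version of that instrumentation, using Stopwatch around a hypothetical operation (the Order type, repository call and logging target are all placeholders):

    using System.Diagnostics;

    public Order[] GetOrders(int customerId)   // hypothetical service operation
    {
        var sw = Stopwatch.StartNew();
        try
        {
            return _repository.LoadOrders(customerId);   // the work being measured
        }
        finally
        {
            sw.Stop();
            Trace.WriteLine(string.Format("GetOrders({0}) took {1} ms",
                customerId, sw.ElapsedMilliseconds));
        }
    }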
"Improve performance of my .Net WCF service" - its very generic term you are asking, different ways we can improve performance and at the sametime you need to find which one causing performance hit like DB access in WCF methods.
Please try to know available features in WCF like oneWay WCF method it will help you in finding ways to improve performance.
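For example, a fire-and-forget operation can be marked one-way, so the client does not block waiting for a reply (a sketch; note that one-way operations cannot return data or faults to the caller):

    using System.ServiceModel;

    [ServiceContract]
    public interface ILoggingService
    {
        // The client returns as soon as the message has been handed to the transport.
        [OperationContract(IsOneWay = true)]
        void LogEvent(string message);
    }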
Thanks
Venkat
Here is an article with some statistics from real production systems; you can use these to compare and benchmark your performance.
WCF Service Performance
Microsoft recently released a knowledge base article:
WCF Performance and Stability Issues - http://support.microsoft.com/kb/982897
These issues include the following:
Application crashes
Hangs
General performance of the application when calling a WCF service.

WCF in the enterprise, any pointers from your experience?

Looking to hear from people who are using WCF in an enterprise environment.
What were the major hurdles with the rollout?
Performance issues?
Any and all tips appreciated!
Please provide some general statistics and server configs if you can!
WCF can be configuration hell. Be sure to familiarize yourself with its diagnostics and SvcTraceViewer, lest you get maddeningly cryptic, useless exceptions. And watch out for the generated client's broken implementation of the dispose pattern.
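The usual workaround for that dispose problem is to skip using(...) on the generated client and close or abort it explicitly, roughly like this (MyServiceClient and DoWork stand in for whatever svcutil generated for you):

    using System;
    using System.ServiceModel;

    static class SafeCaller
    {
        public static void CallService()
        {
            var client = new MyServiceClient();   // hypothetical generated proxy
            try
            {
                client.DoWork();
                client.Close();
            }
            catch (CommunicationException)
            {
                client.Abort();   // Close/Dispose would throw again on a faulted channel
                throw;
            }
            catch (TimeoutException)
            {
                client.Abort();
                throw;
            }
        }
    }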
I was recently hired by a company that previously handled their client/server communication with traditional ASP.NET web services, passing DataSets back and forth.
I rewrote the core so that now there is a Net.Tcp "connected" client... and everything is done through there. It was a week's worth of "in-production discoveries"... but well worth it.
The pain points we had to find out late in the game were (a configuration sketch addressing them follows below):
1) The default throttling blocked the 11th user onward (it defaults to allowing only 10).
2) The default "maxBufferSize" was set to 65 KB, so the first bitmap that needed to be downloaded crashed the server :)
3) Other default configuration values (max concurrent connections, max concurrent calls, etc.).
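For reference, those defaults can be raised either in config or in code when the host is built; here is a sketch of the code route, with purely illustrative numbers and placeholder service types (MyService/IMyService):

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    class Host
    {
        static void Main()
        {
            // 2) Raise the 64 KB default buffer/message size.
            var binding = new NetTcpBinding
            {
                MaxBufferSize = 10 * 1024 * 1024,
                MaxReceivedMessageSize = 10 * 1024 * 1024
            };

            var host = new ServiceHost(typeof(MyService), new Uri("net.tcp://localhost:9000"));
            host.AddServiceEndpoint(typeof(IMyService), binding, "MyService");

            // 1) and 3) Raise the throttling defaults (the old default of 10 sessions
            // is what blocked the 11th user).
            host.Description.Behaviors.Add(new ServiceThrottlingBehavior
            {
                MaxConcurrentSessions = 200,
                MaxConcurrentCalls = 64,
                MaxConcurrentInstances = 200
            });

            host.Open();
            Console.WriteLine("Service running. Press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }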
All in all, it was absolutely worth it... the app is a lot faster just from changing the infrastructure, and now that we have "connected" users... the server can send messages down to the clients.
Another nice gain is that, since we know 100% who is connected, we can actually enforce our licensing policy at the application level. Before (and before I was hired), my company had to simply log usage and then, at the end of the month, bill clients extra for connecting too many times.
As already stated, configuration can be a nightmare and exceptions can be cryptic. You can enable tracing and use the trace log viewer to troubleshoot a problem, but it's definitely a shift of gears to troubleshoot a WCF service, especially once you've deployed it and you are experiencing problems before your code even executes.
For communication between components within my organization I ended up using [NetDataContract] on my services and proxies, which is generally recommended against (you can't integrate with platforms outside of .NET, and to integrate you need the assembly that contains the contracts), though I found the performance to be stellar and my overall development time was reduced by using it. For us it was the right solution.
WCF is definitely great for enterprise stuff, as it is designed with scalability, extensibility, security, etc. in mind.
As maxidad said, it can be very hard, though, as exceptions often tell you nearly nothing, and if you use security (obviously relevant for enterprise scenarios) you have to deal with certificates, meaningless MessageSecurityExceptions and so on.
Dealing with WCF services is definitely harder than with old ASMX services, but it's worth the effort once you're in.
Supplying server configs would not be useful to you, as they have to fit your scenario. Using the right bindings is very important, as is getting security and concurrency right. There is no single way to go when using WCF; just think about your requirements. Do you need callbacks? Who are your users? What kind of security do you need?
However, WCF is definitely the right technology for enterprise-scale applications.