Communication between reliable services - WCF

Microsoft offers several options for communication between reliable services: the default stack (strongly typed RPC), HTTP, WCF, or a custom protocol. I think the easiest way is to use the default communication stack, but which communication stack offers the best performance?

How are you defining performance? Highest number of request-responses in a given amount of time? Shortest amount of time for a single request-response? Memory or CPU overhead on the sender and receiver?
The stack with the best performance will usually be the custom one you write yourself specifically for your service's communication characteristics. Every other stack is going to behave differently depending on the situation.
Here's an example. Say you're streaming large amounts of data between services. HTTP might be a better choice than RPC here because you can open a connection and stream data as it arrives, whereas RPC will send it all in one big payload which leaves a big memory footprint. An even better option might be to open a WebSocket. An even better option than that would be to just use a regular socket.
Btw, we're no longer calling it the "default" stack because there isn't really a default - you have to choose one. The strongly typed RPC stack is now simply referred to as "Remoting."
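To make the Remoting option concrete, here's a minimal sketch of a strongly typed proxy call. It assumes the Microsoft.ServiceFabric.Services.Remoting package; the IInventoryService interface and the fabric:/ address are placeholders, and the listener side of the service is omitted.

```csharp
// Hedged sketch: strongly typed RPC ("Remoting") between reliable services.
// Assumes Microsoft.ServiceFabric.Services.Remoting is referenced.
// IInventoryService and the fabric:/ address are placeholders.
using System;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Remoting;
using Microsoft.ServiceFabric.Services.Remoting.Client;

public interface IInventoryService : IService   // IService marks the contract for remoting
{
    Task<int> GetStockAsync(string sku);
}

public static class InventoryClient
{
    public static Task<int> GetStockAsync(string sku)
    {
        // ServiceProxy resolves the service endpoint via the naming service
        // and issues the strongly typed RPC call.
        var proxy = ServiceProxy.Create<IInventoryService>(
            new Uri("fabric:/MyApp/InventoryService"));
        return proxy.GetStockAsync(sku);
    }
}
```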

Related

What is the best way of pulling json data in terms of performance?

Currently I am using HttpWebRequest to pull JSON data from an external site, and the performance is not good. Is WCF much better?
I need expert advice on this.
Probably not, but that's not the right question.
To answer it: WCF, which certainly supports JSON, is ultimately going to use HttpWebRequest at the bottom level, and it will certainly have the same network latency. Even more importantly, it will use the same server to get the JSON. WCF has a lot of advantages in building, maintaining, and configuring web services and clients, but it's not magically faster. It's possible that your method of deserializing JSON is really slow compared to what WCF would use by default, but I doubt it.
And that brings up the really important point: find out why the performance is bad. Changing frameworks is only a sensible optimization if you know what's slow and, by extension, how doing something different would make it less slow. Is it the server? Is it deserialization? Is it the network? Is it authentication or some other request overhead? And so on.
So the real answer is: profile! Once you know what the performance issue really is, you can make an informed decision about whether a framework like WCF would help.
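As a rough starting point for that profiling, here is a hedged sketch that times the network fetch separately from the deserialization, so you can see which one dominates. The URL is a placeholder and the deserialization step is left to whatever serializer you currently use.

```csharp
// Hedged sketch: measure the fetch and the parse separately.
// The URL is a placeholder; plug in your own serializer where indicated.
using System;
using System.Diagnostics;
using System.IO;
using System.Net;

class JsonTiming
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/data.json");
        string json;
        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            json = reader.ReadToEnd();
        }
        Console.WriteLine("Network + server time: {0} ms", sw.ElapsedMilliseconds);

        sw.Restart();
        // Deserialize with whatever you use today (JavaScriptSerializer,
        // DataContractJsonSerializer, Json.NET, ...) and time it separately:
        // var data = new JavaScriptSerializer().Deserialize<MyType[]>(json);
        Console.WriteLine("Deserialization time: {0} ms", sw.ElapsedMilliseconds);
    }
}
```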
The short answer is: no.
The longer answer is that WCF is an API which doesn't specify a single communication method, but supports several. However, those methods normally go over SOAP, which involves more overhead than plain JSON, and it would seem the world has decided to move on from SOAP.
What sort of performance are you looking for and what are you getting? It may be that you are simply facing physical limitations of network locations, in which case you might look towards making your interface feel more responsive, even if the data is sluggish.
It'd be worth it to see if most of the latency is just in reaching the remote site (e.g. response times are comparable to ping times). Or, perhaps, the problem is the time it takes for the remote site to generate and serve the page. If so, some intermediate caching might be best.
+1 on what Isaac said, but one thing I'd add: if you do use WCF here, it'll internally use HttpWebRequest in most places, so you're not gaining performance there at all. One way you may unintentionally gain performance, however, is in how WCF recycles, reuses, pools, and caches most transport objects internally. So it ultimately goes back to Isaac's advice on profiling.

WCF NetTCP Binding Over Internet

I have a question. I would like to serve a series of services made with WCF. The client that consumes the services is also .NET with WCF. I would like high speed of access, fast response, and transport of small to medium Data Contracts (primarily basic .NET data types). The distribution will be over the Internet; I'm looking for reliability, availability, and basic security.
I don't want to use WsHttp, because my clients are all .NET-based and I will have almost 150 clients requesting the services.
What do you suggest to use for binding? Are there any disadvantages, risks, etc?
Thanks in advance!
Since you plan to use simple types and small data contracts, the binding you use is nearly irrelevant compared to the latency introduced by going over the Internet. So, the right answer is to use the one which is easiest to manage and the most secure.
I recommend that you host the app in IIS and use a wsHttpBinding and take all the manageability goodness that goes along with it. It will also happen to be interoperable, and while that is irrelevant today, it is just free, so why not?
And, please consider the granularity of your service. You know your customers better, but on the wide open Internet, stuff happens. Because the round trip time over the Internet is variable and impossible to control, it could take milliseconds or seconds or may not get there at all. So, you should take fewer trips with larger payloads if possible, and use all sorts of caching and async operations to make the app appear "fast".
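As one illustration of the caching point, here is a hedged sketch that keeps a local copy of a service response so the app stays responsive between round trips. The cache key, the five-minute lifetime, and the Product type are arbitrary placeholders, not recommendations.

```csharp
// Hedged sketch: cache a service response locally so the UI stays responsive
// even when the Internet round trip is slow. Key name, lifetime, and Product
// type are arbitrary placeholders.
using System;
using System.Runtime.Caching;

public class Product { public string Name { get; set; } public decimal Price { get; set; } }

public static class CatalogCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static Product[] GetProducts(Func<Product[]> fetchFromService)
    {
        var cached = Cache.Get("products") as Product[];
        if (cached != null)
            return cached;

        var fresh = fetchFromService();   // the actual WCF call
        Cache.Set("products", fresh, DateTimeOffset.Now.AddMinutes(5));
        return fresh;
    }
}
```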
There is a good article on choosing a binding by Juval Lowy here:
http://www.code-magazine.com/article.aspx?quickid=0605051&page=3
Generally, the advice is not to use the netTcp binding over the Internet. I haven't heard of anyone doing it, although it may work if the ports are open all the way through and nothing blocks the calls.
Test it with netTcp; if it does not work, you just need to change the configuration.
The most important thing is to consider your security needs. If you just need point-to-point security, use basicHttp over SSL; if you need end-to-end security, use wsHttp with message encryption.
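To make those two security options concrete, here is a minimal sketch of building each binding in code. Configuration via app.config works equally well; the SecurityMode choices simply mirror the advice above.

```csharp
// Hedged sketch: point-to-point transport security vs. end-to-end message security.
using System.ServiceModel;
using System.ServiceModel.Channels;

static class Bindings
{
    static Binding PointToPoint()
    {
        // basicHttp over SSL: transport-level security, terminated at each hop.
        return new BasicHttpBinding(BasicHttpSecurityMode.Transport);
    }

    static Binding EndToEnd()
    {
        // wsHttp with message encryption: the message itself is protected
        // end to end, regardless of intermediaries.
        return new WSHttpBinding(SecurityMode.Message);
    }
}
```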
Given your scenario, NetTcpBinding is the binding of choice. As you are sure the client will be WCF, there is no need for interoperability.
Have a look at the Programming WCF Services book.
The only thing I'm not sure about is firewalls. If you have to get through one of these, maybe one of the WS bindings would be more appropriate.

Improving WCF performance

What are some ways to improve the performance of my .NET WCF service?
Right now it's pretty slow, and sometimes it gets clogged and eventually stops responding.
What kind of InstanceContextMode and ConcurrencyMode are you using on your service class?
If it's PerCall instances, you might want to check if you can reduce the overhead of creating a server instance for each call.
If it's Single instances (singleton) - do you really need that? :-) Try using PerCall instead.
Marc
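For reference, a hedged sketch of where the instancing and concurrency settings Marc mentions live. The PerCall/Multiple combination shown here is just one option, not a recommendation.

```csharp
// Hedged sketch: instancing and concurrency are set on the service implementation.
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string PlaceOrder(string item);
}

// PerCall: a new instance per request (no shared state to lock).
// ConcurrencyMode matters mainly for Single/PerSession instances.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class OrderService : IOrderService
{
    public string PlaceOrder(string item)
    {
        return "Order accepted: " + item;
    }
}
```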
Well, what sort of data are you sending, and over what binding?
Is the problem the size of requests (bandwidth), or the quantity of requests (latency)? If latency, then simply make fewer, but bigger, requests ;-p
For bandwidth: if you are sending binary data over http, you can enable MTOM - that'll save you a few bytes. You can enable compression support at the server, but this isn't guaranteed.
If you are using .NET to .NET, you might want to consider protobuf-net; this has WCF hooks for swapping the formatter (DataContractSerializer) for Google's "protocol buffers" binary format, which is very small and fast. I can advise on how to do this on request.
Other than that: send less data ;-p
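A hedged sketch of what the protobuf-net route can look like on the client side. It assumes the protobuf-net package and its ProtoEndpointBehavior; the exact hook-up varies by version, and the contract and types here are placeholders.

```csharp
// Hedged sketch: swap DataContractSerializer for protobuf-net on a client endpoint.
// Assumes the protobuf-net package; ProtoEndpointBehavior hook-up may differ by version.
using System.ServiceModel;
using ProtoBuf;
using ProtoBuf.ServiceModel;

[ProtoContract]
public class Order
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public string Item { get; set; }
}

[ServiceContract]
public interface IOrderLookup
{
    [OperationContract]
    Order GetOrder(int id);
}

static class ProtoClientSetup
{
    static void Configure(ChannelFactory<IOrderLookup> factory)
    {
        // Replaces the default formatter with protobuf-net's binary format.
        factory.Endpoint.Behaviors.Add(new ProtoEndpointBehavior());
    }
}
```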
What binding are you using? If you're using HTTP you could get better performance with TCP.
In all likelihood though the bottleneck is going to be higher up in the WCF pipeline and possibly in your hosted objects.
We'd need some more details about your WCF set up to be able to help much.
The symptoms you describe could be caused by anything at all. You'll need to narrow it down by using a profiler such as JetBrains' dotTrace or AutomatedQA's AQTime.
Or you could do it the old-fashioned way by instrumenting your code (which is what the profilers do for you). Record the start time before your operation starts. When it finishes, subtract the start time from the current time to get the elapsed time, then print it out or log it. Do the same around the methods that this operation calls. You'll quickly see which methods are taking the longest. Then move into those methods and repeat, finding out what makes them take so long, and so on.
"Improve performance of my .Net WCF service" - its very generic term you are asking, different ways we can improve performance and at the sametime you need to find which one causing performance hit like DB access in WCF methods.
Please try to know available features in WCF like oneWay WCF method it will help you in finding ways to improve performance.
Thanks
Venkat
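For the one-way suggestion, a minimal sketch. One-way calls are fire-and-forget: the caller returns as soon as the message is accepted by the transport, so server processing time is hidden, but no return value or fault can reach the caller.

```csharp
// Hedged sketch: a one-way operation returns to the caller as soon as the
// message is accepted by the transport, which can hide server processing time.
using System.ServiceModel;

[ServiceContract]
public interface IAuditService
{
    // One-way operations must return void and cannot report faults.
    [OperationContract(IsOneWay = true)]
    void LogEvent(string message);
}
```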
Here is an article with some statistics from real production systems; you could use these to compare/benchmark your performance.
WCF Service Performance
Microsoft recently released a knowledge base article:
WCF Performance and Stability Issues - http://support.microsoft.com/kb/982897
These issues include the following:
Application crashes
Hangs
General performance problems in the application when calling a WCF service

WCF Binding Performance

I am using basic HTTP binding.
Does anybody know which is the best binding in terms of performance, as that's the key issue for our site?
Depends on where the services are located.
If they're on the same machine, NetNamedPipeBinding should give you the maximum performance.
Otherwise you'll have to choose depending on where they are located, whether they have to communicate over the Internet, interoperability requirements, etc.
Soledad Pano's blog has a good flow chart to help with choosing the appropriate binding depending on the situation.
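If the client and service really are on the same machine, here is a hedged sketch of the named-pipe option; the address and the ITimeService contract are placeholders.

```csharp
// Hedged sketch: NetNamedPipeBinding for same-machine calls.
// The net.pipe address and ITimeService contract are placeholders.
using System;
using System.ServiceModel;

[ServiceContract]
public interface ITimeService
{
    [OperationContract]
    DateTime GetServerTime();
}

class LocalClient
{
    static void Main()
    {
        var factory = new ChannelFactory<ITimeService>(
            new NetNamedPipeBinding(),
            new EndpointAddress("net.pipe://localhost/TimeService"));
        ITimeService proxy = factory.CreateChannel();
        Console.WriteLine(proxy.GetServerTime());
        factory.Close();
    }
}
```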
This is comparing apples to oranges. If you are using the basic HTTP binding, then there is a basic set of services and whatnot that it is providing, which is different from the services that the WsHttpBinding offers, for example.
Given that, the performance metrics are going to be different, but you also aren't going to get the same functionality, and if you need that particular set of functionality, then the comparison isn't worth doing at all.
Additionally, there are bindings (like the net tcp and named pipe bindings) which might not be applicable at all, but have better performance characteristics.
Finally, your statement about "best performance" indicates that you really aren't looking at it the right way. You have expectations of what your load is during peak and non-peak times, as well as the response times that are acceptable for your product. You need to determine whether WCF falls within those parameters, and then work from there, not just say "I'm looking for the best performance."
You will have to give more requirements for what you are trying to do, and then more light can be shed on it.
A good resource for WCF info:
http://www.codeplex.com/WCFSecurity/Wiki/View.aspx?title=Questions%20and%20Answers&referringTitle=Home
Has a section on choosing bindings for your particular scenario. Is security not an issue? If not then you have more choices available to you.
It's hard to tell what the performance will be without other known factors (server HW, amount of concurrent users, etc.).
The HTTP binding will perform slightly better than HTTPS, for example, but binary WCF-to-WCF communication will be quicker than HTTP at the price of reduced compatibility.
I think you need to provide more details - what is the desired functionality (do you need SOAP messages exchange, or Ajax with JSON?) and expected server load.

Has anybody compared WCF and ZeroC ICE?

ZeroC's ICE (www.zeroc.com) looks interesting and I am interested in looking at it and comparing it to our existing software that uses WCF. In particular, our WCF app uses server callbacks (via HTTP).
Anybody who's compared them? How did it go? I'm particularly interested in the performance aspect, since interoperability isn't much of a concern for us right now. Thanks!
I did a very terse review of ICE a few years ago, and although I haven't compared them directly, I have reasonable knowledge of WCF, so my thoughts might have some relevance.
Firstly, it's not entirely fair to compare WCF with ICE, as ICE is a specific remote communication mechanism and WCF is a higher-level remote communications framework.
While WCF is often thought of as an implementation of SOAP web services, and that is indeed its main use to date, it can also be used for implementing remote services with all manner of encodings and transport channels, which means it can theoretically be used for performant communication between applications.
In comparison, ICE is a cross-platform remote communication mechanism that uses binary encoding for performant communication between applications. It's something of a simplified evolution of CORBA and is more directly comparable to CORBA, DCOM, .NET Remoting, and JNI.
However, even though there's no direct correspondence between ICE and WCF, if you need your .NET app to communicate remotely then they're both contenders. Some of the decision points you might want to consider include:
Resourcing. It'll be easier to find developers with WCF experience than ICE experience.
Performance. ICE is fast, but WCF can also be used in a performant configuration. Alternatively, .NET Remoting can provide very good performance, and whatever the MS-sponsored benchmarks say, I've seen it outperform WCF by 10%.
Cross-platform. If you need to communicate with non-Windows applications then you're limited in the WCF options you can use. In addition, since every SOAP stack seems to implement the standards differently, it can be a pain to create truly generic Web Services (though WS-I helps).
If you don't need every ounce of performance from day one, then I'd personally plump for WCF to start with, and then consider ICE if performance ever becomes critical. Even then it might be cheaper to scale out your service boxes than to move to ICE, and if you don't have any exotic cross-platform needs you could always look at reconfiguring WCF for binary encoding, etc. (sketched below).
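As a rough idea of what "reconfiguring WCF for binary encoding" can look like, here is a hedged sketch of a custom binding that keeps HTTP transport but swaps the text encoder for WCF's binary message encoder; the element choices are illustrative only.

```csharp
// Hedged sketch: a custom binding that keeps HTTP transport but uses WCF's
// binary message encoder instead of text. Element choices are illustrative.
using System.ServiceModel.Channels;

static class BinaryOverHttp
{
    static Binding Create()
    {
        return new CustomBinding(
            new BinaryMessageEncodingBindingElement(),
            new HttpTransportBindingElement());
    }
}
```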
Michi Henning from ZeroC has recently published a white paper on just this topic -- "Choosing Middleware: Why Performance and Scalability do (and do not) Matter". It compares Ice, WCF (binary & SOAP), and RMI with various performance metrics, platforms, languages, etc. There's more information on Michi's blog, but the white paper is also quite readable, with all the standard caveats of any benchmark.
Disclaimer: I've used Ice and RMI extensively, but never WCF.
Apache Thrift is another contender to ICE and WCF. It was developed and open-sourced by Facebook. Apache Thrift is nice in some ways because it's not only extremely efficient on the encoding side, it also supports adding fields to structures without breaking all of the clients (something we found extremely useful for our projects).
Google Protocol Buffers might not seem to be a real contender, as it doesn't mention .NET support on its home page. However, some community add-ons support C#. In addition, ICE provides emulation for Google Protocol Buffers if you're working with existing services.
Data point: we just converted a callback multi-platform and multi-language project from Ice to Thrift with pretty good results. Ice does a lot for you, so we had to implement disconnection listeners, connection events, etc. ourselves. And in one case we got bit in the proverbial with a big object lock that Ice was letting us get away with -- this caused a deadlock in the Thrift server but it was easily fixed by less lazy coding on the C# side.
I've just finished benchmarking, and in our application anything that pushes large amounts of data is faster than, or on par with, Ice. Shorter messages with more overhead (e.g., a "heartbeat" that updates a status over the protocol) are a bit slower.
The most important bit was that in order to implement the callback service correctly we had to extend the Thrift interfaces and define our own protocol, along with a Thrift "Processor" and a callback client-server. But I freely admit our application is /very/ special. For most uses the existing protocols and servers should be sufficient. But extending them, even to use multiplexed sockets from .NET, was not terribly difficult.
We are using ICE to integrate modules written in C++, Java, and C#. The nice thing is that our server can access components on remote machines as well, so if we need more performance we can shift processing to different machines.
I've used both WCF and ICE, and I'd say that ICE is cleaner on the implementation side. ICE also has very detailed and readable documentation.
ICE supports some things that WCF cannot do, including load balancing, automated remote client updates, etc.