WCF - what is the fastest binding?

I currently have a WCF service which uses BasicHTTP binding, and is wrapped with a secure router/firewall (PFSense).
I have heard that there is a faster binding than BasicHTTP binding, but I do not know what it is.
Does anyone know?
Update: ok, two great answers for intranet/localhost. Thank you!
What about for internet deployed apps? Is there a faster internet centric solution?

If your solution is deployed to an intranet, you can use NetTcpBinding.
http://msdn.microsoft.com/en-us/library/system.servicemodel.nettcpbinding.aspx
While perhaps not authoritative, this post covers some benchmarking with these results, which are consistent with my answer and parapura's:
WSDualHttpBinding: Processed 1602 calls in 10 seconds
WSHttpBinding: Processed 2531 calls in 10 seconds
BasicHttpBinding: Processed 17913 calls in 10 seconds
NetTcpBinding: Processed 39957 calls in 10 seconds
NetNamedPipeBinding: Processed 48255 calls in 10 seconds
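If you are self-hosting, swapping the endpoint over to net.tcp only takes a few lines. A rough sketch follows; the IEcho contract, EchoService class, port, and address are placeholders, and the binding is left at its defaults:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IEcho
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEcho
{
    public string Echo(string text) { return text; }
}

class Program
{
    static void Main()
    {
        // Same contract as before, just exposed over the TCP transport instead of basicHttpBinding.
        var host = new ServiceHost(typeof(EchoService),
            new Uri("net.tcp://localhost:8523/echo"));
        host.AddServiceEndpoint(typeof(IEcho), new NetTcpBinding(), "");
        host.Open();

        Console.WriteLine("Listening on net.tcp://localhost:8523/echo");
        Console.ReadLine();
        host.Close();
    }
}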

On the same machine you can use NetNamedPipeBinding for maximum performance.
Decision Points for Choosing a Transport
Throughput measures the amount of data that can be transmitted and processed in a specified period of time. Like latency, the chosen transport can affect the throughput for service operations. Maximizing throughput for a transport requires minimizing both the overhead of transmitting content as well as minimizing the time spent waiting for message exchanges to complete. Both the TCP and named pipe transports add little overhead to the message body and support a native duplex shape that reduces the wait for message replies.
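For a same-machine client, a named-pipe endpoint looks almost identical. This hypothetical snippet reuses the IEcho/EchoService types from the sketch above and changes only the binding and the address scheme:

using System;
using System.ServiceModel;

class PipeHostProgram
{
    static void Main()
    {
        // Named pipes never leave the machine, which is what makes them the fastest option here.
        var host = new ServiceHost(typeof(EchoService),
            new Uri("net.pipe://localhost/echo"));
        host.AddServiceEndpoint(typeof(IEcho), new NetNamedPipeBinding(), "");
        host.Open();

        Console.WriteLine("Listening on net.pipe://localhost/echo");
        Console.ReadLine();
        host.Close();
    }
}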

Related

Large RabbitMQ message in Slow network

I am using RabbitMQ with Spring AMQP
large message (>100MB, 102400KB)
small bandwidth (<512Kbps)
low heartbeat interval (10 seconds)
single broker
It will take >= 200*8 seconds to consume the message, which is more than my heartbeat interval. From https://stackoverflow.com/a/42363685/418439
If the message transfer time between nodes (60 seconds?) > heartbeat time between nodes, it will cause the cluster to disconnect and lose the message
Will I also face the disconnection issue even if I am using a single broker?
Do the heartbeat and the consumer use the same thread, such that while the consumer is consuming it is not possible to perform a heartbeat?
If so, what can I do to consume the message without increasing the heartbeat interval or reducing my message size?
Update:
I have received another answer and comments after I posted my own answer. Thanks for the feedback. Just to clarify, I do not use AMQP for file transfer. The data is actually JSON messages; some are simple and small, but some contain complex information, including freehand drawings. Besides saving the data at the Data Center, we also save a copy of each message at the branch level via AMQP, in case connectivity to the Data Center is not available.
So, the real questions here are a bit more fundamental, and those are: (1) is it appropriate to perform a large file transfer via AMQP, and (2) what purpose does the heartbeat serve?
Heartbeats
First off, let's address the heartbeat question. As the RabbitMQ documentation clearly states, the purpose of the heartbeat is "to ensure that the application layer promptly finds out about disrupted connections."
The reason for this is simple. In ordinary AMQP usage, there may be several seconds, even minutes, between the arrival of successive messages. When no data is being exchanged across a TCP session, many firewalls and other pieces of networking equipment will automatically close the idle connection to reduce exposure of the enterprise network. Heartbeats further help mitigate a fundamental weakness in TCP, which is the difficulty of detecting a dropped connection. Networks experience failure, and TCP is not always able to detect that on its own.
So, the bottom line here is that, while you're transferring a large message, the connection is active and the heartbeat function serves no useful purpose, and can cause you trouble. It's best to turn it off in such cases.
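The question uses Spring AMQP, where the underlying Java client exposes a requested-heartbeat setting; purely as an illustration (and in C# to match the other examples in this document), here is roughly how the RabbitMQ .NET client would disable it. The host name is a placeholder, and the exact property type varies between client versions:

using System;
using RabbitMQ.Client;

class ConsumerSetup
{
    static void Main()
    {
        var factory = new ConnectionFactory
        {
            HostName = "broker.example.local",   // placeholder broker address
            // Zero disables heartbeats entirely; recent .NET client versions take a TimeSpan,
            // older ones take a number of seconds.
            RequestedHeartbeat = TimeSpan.Zero
        };

        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();
        // ... declare queues and consumers as usual; a long transfer no longer races the heartbeat.
    }
}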
AMQP For Moving Large Files?
The second issue, and I believe the more important question, is how large files should be dealt with. To answer this, let's first consider what a message queue does: it sends messages -- small bits of data which communicate something to another computer system. The operative word here is small. Messages typically contain one of four things: 1. commands (go do something), 2. events (something happened), 3. requests (give me some data), and 4. responses (here is your data). A full discussion of these is beyond the scope of this answer, but suffice it to say that each of them can generally be conveyed in a small message of less than 100 kB.
Indeed, the AMQP protocol, which underlies RabbitMQ, is a fairly chatty protocol. It requires large messages be divided into multiple segments of no more than 131kB. This can add a significant amount of overhead to a large file transfer, especially when compared to other file transfer mechanisms (FTP, for instance). Secondly, the message has to be fully processed by the broker before it is made available in a queue, and it ties up valuable resources on the broker while this is being done. For one, the whole message must fit into RAM on the broker due to its architecture. This solution may work for one client and one broker, but it will break quickly when scaling out is attempted.
Finally, compression is often desirable when transferring files; HTTP supports gzip compression automatically, while AMQP does not. It is quite common in message-oriented applications to send a message containing a resource locator (e.g. a URL) pointing to the larger data file, which is then accessed via appropriate means.
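As a rough sketch of that resource-locator pattern (the queue name, URL, and JSON shape are all made up), the publisher sends only a small pointer and the consumer fetches the heavy payload out-of-band:

using System.Text;
using RabbitMQ.Client;

class ClaimCheckPublisher
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "broker.example.local" };
        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();

        channel.QueueDeclare("drawing-events", durable: true, exclusive: false, autoDelete: false);

        // The message itself stays tiny; the drawing lives on a file store reachable by URL.
        var body = Encoding.UTF8.GetBytes(
            "{\"type\":\"drawing-saved\",\"url\":\"https://files.example.com/drawings/1234\"}");
        channel.BasicPublish(exchange: "", routingKey: "drawing-events",
            basicProperties: null, body: body);
    }
}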
The moral of the story
As the adage goes: "to the man with a hammer, everything looks like a nail." AMQP is not a hammer; it's a precision scalpel. It has a very specific purpose and narrow applicability within that purpose. Using it for something other than its intended purpose will lead to stability and reliability problems in whatever it is you are designing, and to overall dissatisfaction with your end product.
Will I also face the disconnection issue even if I am using a single broker?
Yes
Do the heartbeat and the consumer use the same thread, such that while the consumer is consuming it is not possible to perform a heartbeat?
I can't confirm the threading, but from what I observe, while the Java RabbitMQ consumer is consuming a message it does not send heartbeats. If consuming takes longer than 3 x the heartbeat timeout (due to a large message and/or low bandwidth), the MQ server will close the AMQP connection.
If so, what can I do to consume the message without increasing the heartbeat interval or reducing my message size?
I resolved my issue by increasing the heartbeat timeout. No further code change was required.

Strategy for busy WCF service

I've got a really busy self-hosted WCF server that requires 2000+ clients to update their status on a frequent basis. What I'm finding is that the CPU utilization of the server sits at around 70% constantly, and clients have only a 50% chance of actually getting a connection to the server. They time out after 60 seconds. This is problematic because if the server doesn't hear back from a client, it assumes the client is offline.
I've implemented throttling so I can adjust concurrent connections/sessions/etc., but if I'm not mistaken, increasing this will only lead to higher CPU utilization and worse connectivity problems. Right?
Will increasing the timeout to something more than 60 seconds help? I'm not exactly sure how it works, but will a client sit in a type of queue until the server can field the request? Or is it best to set the timeout to something smaller and make the client check in more often if it can't get connected (this seems like it could only make the problem worse in a sense)?
If it's really important for the server to know if the client is still connected, I don't think relying solely on WCF is your best bet for that.
Maybe your server should have some sort of ping mechanism that either allows it to ping client machines based on some sort of timer or vice versa.
If you're super concerned about the messages always getting through, no matter what, then I suggest exploring reliable sessions. Check out the reliableSession setting on the binding. I suggest reading through at least the first chapter of Juval Lowy's Programming WCF Services, which is available for free as the Kindle sample of the book.
Increasing the timeout may help, but probably not much, and the Amazing Ever-Increasing Timeout is kind of a motif on http://www.thedailywtf.com. Making the client hammer the server if it can't get through the first time is guaranteed to cause pain.
If all that you care about is knowing whether the client is there, might it be practical to go down a layer or two, and have the client send you an HTTP POST once in a while? WCF requires some active back-and-forth, but a POST can just lay there until your server has time to deal with it, and the client can just send it and forget about it.
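A minimal sketch of that fire-and-forget check-in, assuming a hypothetical /clients/{id}/ping endpoint on the server and a one-minute interval:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class LivenessPinger
{
    private static readonly HttpClient Http = new HttpClient();

    static async Task Main()
    {
        var clientId = Environment.MachineName;   // stand-in for whatever identifies the client

        while (true)
        {
            try
            {
                // The server just records the timestamp; nothing waits on the response body.
                await Http.PostAsync(
                    $"http://server.example.com/clients/{clientId}/ping",
                    new StringContent(string.Empty));
            }
            catch (HttpRequestException)
            {
                // A missed ping is fine; the next one goes out a minute later.
            }

            await Task.Delay(TimeSpan.FromMinutes(1));
        }
    }
}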

Service instances in WCF

I'm using perfmon to examine my service's behaviour. What I do is launch 6 instances of the client application on separate machines and send requests to the server from 120 threads (20 threads per client application).
I have examined the counters, and the maximum number of instances (I use the PerSession model and set the instance limit to 100) is 12, which I find strange, as my response times from the service hover around 120 seconds... I thought that allowing more instances would cause WCF to create more of them and, as a result, response times would be quicker.
Any idea why WCF doesn't create even more instances of service?
Thanks Pawel
WCF services are throttled by default - it's a service behavior, which you can tweak easily.
See the MSDN docs on ServiceThrottling.
Here are the defaults:
<serviceThrottling
maxConcurrentCalls="16"
maxConcurrentInstances="Int.MaxValue"
maxConcurrentSessions="10" />
With these settings, you can easily control how many sessions or concurrent calls can be handled, and you can make sure your server isn't overwhelmed by (fraudulent) requests and brought to its knees.
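If you prefer to set this in code rather than config, the same behavior can be attached to the host before it opens. A rough sketch, where MyService/IMyService and the numbers are placeholders to size against your own hardware:

using System;
using System.ServiceModel;
using System.ServiceModel.Description;

class ThrottledHost
{
    static void Main()
    {
        // MyService / IMyService stand in for your own service and contract.
        var host = new ServiceHost(typeof(MyService),
            new Uri("net.tcp://localhost:8523/svc"));
        host.AddServiceEndpoint(typeof(IMyService), new NetTcpBinding(), "");

        var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
        if (throttle == null)
        {
            throttle = new ServiceThrottlingBehavior();
            host.Description.Behaviors.Add(throttle);
        }

        // Raise the limits that matter for many concurrent PerSession clients.
        throttle.MaxConcurrentCalls = 64;
        throttle.MaxConcurrentSessions = 200;
        throttle.MaxConcurrentInstances = 264;

        host.Open();
        Console.WriteLine("Host open with relaxed throttling.");
        Console.ReadLine();
    }
}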
Ufff, last attempt to understand that silly WCF.
What I did now is:
Create a client that starts 20 threads; every thread sends requests to the service in a loop. The performance counter on the server shows that only 2 instances of the service object exist the whole time. The average request time is about 40 seconds (I start measuring before the proxy call and stop after the call returns).
Modify that client to start 5 threads and launch 4 instances of the client (to simulate the 20-thread behaviour of the previous example). Performance Monitor shows that 8 instances of the service object exist the whole time. The average request time is 20 seconds.
Could somebody tell me what is going on? I thought the problem was with the server not wanting to handle more requests concurrently, but apparently it is the client that causes the stir and doesn't want to send more requests concurrently... Maybe there is some kind of configuration option that limits the client from sending more than two requests at a time... (buffer, throttling, etc.)
Channel factory is created in every thread.
You might want to refer to this article and make adjustments to your WCF configuration (specifically maxConnections) to get the number of connections you want.
Consider using something like http://www.codeplex.com/WCFLoadTest to hit the service.
Also, perfmon will only get you so far. If you want to debug WCF service you should look at the SvcTraceViewer and SvcConfigEditor in the Windows SDK.
On your service binding, what have you set maxConnections to? Calls to connect will block once the limit is reached.
The default is 10, I think.
http://msdn.microsoft.com/en-us/library/ms731379.aspx
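For reference, a hedged snippet of what that looks like on a net.tcp binding in code; the figure of 50 is arbitrary and MyService/IMyService are placeholder types:

using System;
using System.ServiceModel;

class ConnectionLimitExample
{
    static void Main()
    {
        // Default MaxConnections is 10; pooled connections beyond the limit wait (and can time out).
        var binding = new NetTcpBinding { MaxConnections = 50 };

        var host = new ServiceHost(typeof(MyService));
        host.AddServiceEndpoint(typeof(IMyService), binding,
            "net.tcp://localhost:8523/svc");
        host.Open();
        Console.ReadLine();
    }
}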

WCF Service Design example

I have to create a WCF service that will accept thousands of requests every 5 minutes, with each request passing a small (1-5 KB) text file.
The service will pass the file contents to another assembly, which will process the lines and insert some records into the database. Nothing too heavy on this side.
I need help on the following aspects:
Which WCF configuration should I use to get the best performance? The calls to the service will come from the internet, not an internal LAN.
The service will accept requests every 5 minutes, which means I have only 5 minutes to process all the requests before the next cycle. Is MSMQ the best solution here?
Any examples online I can read?
For best performance (I'll assume you mean lower latency), you should pick a TCP transport such as net.tcp. This document can help you decide: Choosing a Transport
About the MSMQ part: you'll receive a lot of requests and only start processing them after 5 minutes? If so, your choice is correct: MSMQ will keep that request queue and you can work through it asynchronously.
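A hedged sketch of what that could look like with WCF's netMsmqBinding; the contract, queue name, and the decision to turn transport security off are all assumptions for illustration:

using System;
using System.ServiceModel;

// One-way contract: the caller drops the file contents into the queue and returns immediately.
[ServiceContract]
public interface IFileIntake
{
    [OperationContract(IsOneWay = true)]
    void Submit(string fileName, string contents);
}

public class FileIntakeService : IFileIntake
{
    public void Submit(string fileName, string contents)
    {
        // Hand off to the processing assembly; work drains from the queue
        // at whatever pace the database can sustain within the 5-minute window.
    }
}

class QueuedHost
{
    static void Main()
    {
        var binding = new NetMsmqBinding();
        binding.Security.Mode = NetMsmqSecurityMode.None;   // simplification for the sketch

        var host = new ServiceHost(typeof(FileIntakeService));
        host.AddServiceEndpoint(typeof(IFileIntake), binding,
            "net.msmq://localhost/private/FileIntakeQueue");   // queue name is made up
        host.Open();

        Console.WriteLine("Draining FileIntakeQueue...");
        Console.ReadLine();
    }
}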
Use NetTcpBinding.
Optimizing WCF Web Service Performance
Creating high performance WCF services

Performance of WCF with net.tcp

I have a WCF net.tcp service hosted with the built-in ServiceHost, and when doing some stress tests I get strange behavior. The first time I send a bunch of requests, 5 to 10 requests are answered quickly, and the rest return at about 2-second intervals. The second time I send the requests, 10-20 return quickly, and the rest at 2-second intervals.
The above repeats until I can get over 100 requests returned quickly, but if I wait a minute or so the memory usage of the service goes down and the requests go back to only 5-10 returning quickly.
The service I am testing has a small delay so that I can get many open connections at the same time; if this delay is removed, the requests return so quickly that I have perhaps 2-5 connections open at the same time. The delay is there to simulate DB connections and other outgoing work.
From the behavior it looks like the ServiceHost is allocating something (threads, class instances), but I cannot figure out what it is.
I could have a timer in the client that calls the service to keep it working, but that seems like a bad solution.
If I have a high sustained load on the service it will crunch through all requests quickly, but if I have a period of low activity and then a surge of connections comes in, the service will be slow.
I guess my question is: WHAT gets allocated during high load on the WCF service, and HOW can I configure the service to preallocate more of whatever that is?
EDIT:
I did some more testing, and looking at Task Manager for the process I can see that when the ServiceHost is 'resting' there are 10 threads open, but when I start sending requests, the thread count goes up. As long as the thread count is high the ServiceHost can process incoming requests quickly, but if I pause sending requests, the open thread count decreases, and subsequent requests start taking longer to process.
Now, how can I tell the ServiceHost to keep a bunch of threads open? Or more than the 10-12 that it keeps by default?
Well, after lots of googling, it seems that the problem is the thread pool. The CLR thread pool allocates a few threads and, once they are in use, throttles the creation of new ones; after a while it also deallocates unused threads.
There is some confusion about a bug that meant that the ThreadPool did not honor the SetMinThreads call.
http://www.michaelckennedy.net/blog/PermaLink,guid,708ee9c0-a1fd-46e5-8fa0-b1894ad6ce0f.aspx
I am not sure if this bug is solved, or what, because when I modify the ThreadPool settings, the problem persists.
The thing that determines how many requests are handled simultaneously is the ServiceThrottlingBehavior. There are a number of different thresholds that will limit the number of requests being processed. This also depends on the binding you are using; for example, wsHttpBinding defaults to sessions being on, while basicHttpBinding uses no sessions, so the default session limit of 10 is no problem.
See http://msdn.microsoft.com/en-us/library/ms735114.aspx for more details.
The bug you referenced is fixed in .NET 3.5 SP1. That may have had something to do with the problem, but I think it's more likely (much more likely) that throttling is your problem rather than threading, as Maurice keyed in on.
<system.serviceModel>
<service name="???" >
<endpoint ... />
</service>
</system.serviceModel>
What's the throttle limit for this "empty" config? 10 sessions, 16 concurrent calls! Beware.
Here's more on the threading:
http://www.michaelckennedy.net/blog/2008/08/20/ThreadPoolBugInNET20SP1IsFixed.aspx
This feels like a hack, but it seems to solve your issue. The problem is that the thread pool takes time to start up new threads, so what you really need is threads waiting on standby. Add a constructor to your service and set the minimum number of worker threads you would like.
public YourService()
{
    int workerThreads;
    int portThreads;

    // Read the current minimums so the existing completion-port value is preserved,
    // then raise the worker-thread floor so bursts don't wait for thread-pool injection.
    ThreadPool.GetMinThreads(out workerThreads, out portThreads);
    ThreadPool.SetMinThreads(200, portThreads);
}