Performance tuning a WCF service with netTcpBinding

We have a WCF service consumed by an ASP.NET application, and its performance is not where we would like it to be. The slowness is at the networking level, not in the processing: processing takes about 100 ms, but transporting 27,000 bytes of data from one machine to the other takes 600 ms (confirmed via Wireshark).
We have disabled the secure connection. The test is run on a machine under no load, and the two machines are on the same LAN.
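The post doesn't show the actual binding, but for reference, a netTcpBinding with security disabled and raised message-size quotas might look like the following sketch (the binding name and quota values are illustrative assumptions, not the poster's configuration):

```xml
<bindings>
  <netTcpBinding>
    <!-- Hypothetical binding: the name and sizes below are examples only -->
    <binding name="fastTcp"
             maxReceivedMessageSize="1048576"
             maxBufferSize="1048576"
             maxBufferPoolSize="1048576">
      <security mode="None"/>
    </binding>
  </netTcpBinding>
</bindings>
```

In buffered transfer mode (the default), maxBufferSize must equal maxReceivedMessageSize, which is why both are set to the same value here.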

Related

Maximum concurrent calls to WCF service

We have a WCF service hosted in IIS (Windows Server 2012). We expect thousands of messages throughout the day, with a peak of around 4K concurrent messages. All of these are one-way (async) requests, and the WCF service performs some processing that takes several seconds per request.
- Are there any limitations on the maximum concurrent requests that can be sent to a WCF service hosted in IIS? Does it depend on thread availability on the server? Any settings that need tweaking would be useful.
- In another scenario, we have fewer async requests coming to the WCF service, but for each request the service performs a few things in parallel (Parallel.ForEach). What would be the maximum number of parallel threads available in this scenario, and does it depend on any other factors? Any settings that need tweaking would be useful.
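For the first question, the server-side limits that usually matter are the WCF throttling settings (in addition to the IIS/ASP.NET request queue). A minimal sketch of raising them in web.config; the behavior name and the limit values here are illustrative, not recommendations:

```xml
<behaviors>
  <serviceBehaviors>
    <!-- Hypothetical behavior name; attach it to the service via behaviorConfiguration -->
    <behavior name="highThroughput">
      <serviceThrottling
        maxConcurrentCalls="4000"
        maxConcurrentSessions="4000"
        maxConcurrentInstances="4000"/>
    </behavior>
  </serviceBehaviors>
</behaviors>
```

On .NET 4 and later the defaults already scale with processor count (for example, maxConcurrentCalls defaults to 16 × CPU count), so measure before assuming throttling is the bottleneck.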

Why doesn't my azure hosted WCF service scale when I add more machines?

We have a WCF service which we are hosting on azure. It takes some xml and processes it in memory (no external calls/db etc and it takes about 150ms) and returns some xml.
We have been load testing it, and on 1-, 2-, and 4-core machines we can max out the processors and get a maximum of around 40 calls per second throughput (on the 4-core machine). However, when we switch to an 8-core machine, or two 4-core machines, we still only get around 40 calls per second.
Why might I not be able to get more throughput when I scale up the number of machines doing the processing? I would expect adding more machines would increase my throughput fairly linearly, but it doesn't. Why not?
I'm not sure if Azure has specific throttling, but the .NET Framework has a limit on the number of outgoing connections to the same address that can be active at a time. The MSDN article Improving Web Services Performance mentions that the default value for this is 2.
Configure The maxconnection Attribute
The maxconnection attribute in Machine.config limits the number of concurrent outbound calls.
Note This setting does not apply to local requests (requests that originate from ASP.NET applications on the same server as the Web service). The setting applies to outbound connections from the current computer, for example, to ASP.NET applications and Web services calling other remote Web services.
The default setting for maxconnection is two per connection group. For desktop applications that call Web services, two connections may be sufficient. For ASP.NET applications that call Web services, two is generally not enough. Change the maxconnection attribute from the default of 2 to (12 times the number of CPUs) as a starting point.
<connectionManagement>
<add address="*" maxconnection="12"/>
</connectionManagement>
Note that 12 connections per CPU is an arbitrary number, but empirical evidence has shown that it is optimal for a variety of scenarios when you also limit ASP.NET to 12 concurrent requests (see the "Threading" section later in this chapter). However, you should validate the appropriate number of connections for your situation.
These limits are in place to prevent a single user from monopolizing all the resources on a remote server (a denial-of-service attack). Since this is a service running in Azure, I would guess that they throttle on their end to prevent a user from consuming all of their incoming connections from a single IP.
My next step would be to check whether there is a concurrent connection limit for Azure web roles (this thread suggests there is, and that it's configurable) and, if so, increase it. Otherwise I would try to perform my load test from multiple sources and see if you still hit the same limit.

netTcp async callbacks executing very slowly over the internet?

I have a duplex voice and video chatting service hosted in IIS 7 with netTcpBinding. The consumer of the service is a Silverlight client. The client invokes a method on the service, SendVoiceAsync(byte[] buffer), 10 times a second (every 100 ms), and the service should call the client back, also 10 times a second (in response to every call). I've tested my service rigorously in a LAN and it works very well: for every 100 calls sending the voice/video buffer to the service, the service calls the other clients back 100 times with received voice buffers in that period. However, when I consume the service over the internet it becomes horrifically slow, lags terribly, and often gives me a timeout error. I've noticed that over HTTP the callbacks are received at a rate of about 1/10th of the calls to the server: for every 100 calls to the service, the server calls the clients back 10 times, when this number should be 100 (as it is in the LAN) or something very close to it.
So what could be causing the service to become so laggy over HTTP? I've followed the general guidelines on configuring netTcpBinding for optimised performance, and while that seems to pay dividends in a LAN, it's terrible over the internet. It feels as if something is blocking the clients from sending their replies to the service, though I've disabled all firewalls and forwarded ports 4502-4535, as well as port 80 on which the website hosting the service resides, to the server computer. If it helps, my service has ConcurrencyMode set to Multiple and its InstanceContextMode set to Single. Also, my service operations are all one-way, not request-reply.
Thanks for any help.
The internet is a much noisier and more difficult network than a LAN: packets may get lost or re-routed via different routers/switches, and the latency is generally pretty bad.
That's why TCP exists: it is an ordered, reliable protocol, so every packet is acknowledged by the receiver and re-sent if it didn't make it.
The problem with that is that it does not try to be fast; it tries to get all of the data across, in the order it was sent.
So I'm not surprised that your setup works in a LAN, where Round Trip Times (RTT) are usually about 5-80 ms, but fails over the internet, where an RTT of 250 ms is normal.
You can try to send your data less often, and switch to UDP, which is unordered and unreliable but faster than TCP. I think UDP is standard for voice/video over the internet, as you can compensate for lost packets with a minor degradation of the voice/video quality. Online games suffer from the same issue; for example, the original Quake was unplayable over the internet.

Long communication time of WCF Web Services within the same server

Even though this question is a year old, I am still searching for a good answer to it. I would appreciate any information that helps me fully understand this issue of poor performance between communicating web services hosted on the same machine.
I am currently developing a system with several WCF Web Services that communicate intensively.
They are running under IIS7 on the same machine, each service in a different Application Pool with multiple worker processes in a Web Garden.
When evaluating each Web Service individually, I can serve 10,000-20,000 requests per minute, quickly and without any resource-consumption issues (processor and memory).
When I test the whole system, or just a subsystem formed by two Web Services, I can't serve more than 2,000 requests/minute.
I also observed that communication time between the Web Services is a big issue (sometimes more than 10 seconds). But when testing with only 1,000 requests per minute everything goes smoothly (connection times of no more than 60 ms).
I have tested the system with both SOAPUI and JMeter, but the times were computed from system logs, not from the testing tools.
Memory and network aren't an issue (they are used very little).
Later on, I tested the performance of two communicating WCF web services hosted on two servers, and then on the same server. Again there seems to be a bottleneck when the services are on the same machine, lowering the number of connections from tens of thousands to thousands; again, no memory or processor limit was hit.
As a note, I am working with quite big data in some cases and some of the operations needed are long ones.
I used Performance Monitor to see what's going on with memory, processes, WebService, ASP.NET counters, etc., but I didn't see anything that could indicate what is going wrong.
I also tried all the performance settings and tuning options I could find on the Internet.
Does someone know what could be wrong? Why does communication between the Web Services take so long? Why can the Web Service that serves as the entry point to the system accept 10,000 requests/minute when tested alone, but barely 2,000 when communicating with another Web Service?
Is it an IIS7 problem? Would my system perform better if each Web Service were deployed on a different server?
I want to understand better how things internally function (IIS and WCF services) to improve performances for current and future systems.
You could try to collect data from the WCF performance counters: concurrent calls, instances, duration, and so on. In addition, WCF throttling provides properties you can use to limit how many instances or sessions are created at the application level. Performance of a WCF service can also be improved by choosing an appropriate instancing mode.
Finally, in load testing there are many configurations to apply to the different components: maximum concurrent HTTP connections, IIS limits, having enough load clients, and so on. Your load test may be invalidated if these are not accounted for.
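Note that the WCF counters mentioned above (Calls, Instances, Call Duration, and so on) are not published unless they are enabled in configuration; a minimal sketch:

```xml
<system.serviceModel>
  <!-- "ServiceOnly" publishes per-service counters; "All" also adds endpoint and operation counters -->
  <diagnostics performanceCounters="ServiceOnly"/>
</system.serviceModel>
```

Once enabled, the counters appear in Performance Monitor under the ServiceModelService category.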

Client CPU goes to almost 100% when stopping the service

We have a service hosted as a Windows service, using netTcpBinding with message security and without a reliable session.
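A binding configuration matching that description (message security, no reliable session) might look like the following sketch; the binding name is illustrative:

```xml
<netTcpBinding>
  <!-- Hypothetical binding name; reliableSession is off by default for netTcpBinding,
       but it is shown explicitly here to match the description -->
  <binding name="tcpMessageSecurity">
    <reliableSession enabled="false"/>
    <security mode="Message"/>
  </binding>
</netTcpBinding>
```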
On the client side we cache a collection of proxies in a list, because channel creation and disposal are costly operations. The client connects to the server and retrieves data from it.
Now if I stop the server, the CPU usage jumps up. The worker thread that consumes the CPU is executing
void System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32, UInt32, NativeOverlapped *)
When I dispose all the proxies, the client application's CPU consumption drops to nothing. I need to know how to fix this issue in WCF.
One question: why do you have a collection of proxies on the client for a single WCF service? Say you have 20 proxies and the WCF service instancing is per-session; then the server will create 20 instances of the service, each with memory allocated to it. If you use per-call instancing, you will have even more instances. Instead of a list of proxies, can't you reuse one proxy?
I suppose that when you stop the service, the CPU has to clean up (garbage collect) too many service instances in a short time, hence the spike.
Unless you close the proxies, their respective instances on the server won't be released. Try making the instancing Singleton.