I am a newbie to WCF programming in .NET. I recently worked on a WCF project that returns the bytes of an image file to the client. Everything worked fine except the performance: although the service is built with a concurrency mode that allows parallel calls, it processes all requests in a queue. Thus, if 5 requests are queued, the last request has to wait 5x as long (15 sec instead of 3 sec). This MSDN forum thread: http://social.msdn.microsoft.com/Forums/en/wcf/thread/861ea6f7-6c4e-4c3f-abde-ae60228244ea describes a similar problem, but the solution there was not helpful to me. Thanks in advance for any suggestions or help.
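For context, here is a minimal sketch of the attributes involved (the contract, class name, and path are invented for illustration, not taken from the real project). With WCF's default ConcurrencyMode.Single, calls to a service instance are serialized, which produces exactly this kind of queuing, so parallel dispatch needs something like:

```csharp
using System.IO;
using System.ServiceModel;

// Hypothetical contract, for illustration only.
[ServiceContract]
public interface IImageService
{
    [OperationContract]
    byte[] GetImage(string imageId);
}

// PerCall instancing plus ConcurrencyMode.Multiple lets requests run in
// parallel; the default ConcurrencyMode.Single serializes calls per instance.
[ServiceBehavior(
    InstanceContextMode = InstanceContextMode.PerCall,
    ConcurrencyMode = ConcurrencyMode.Multiple)]
public class ImageService : IImageService
{
    public byte[] GetImage(string imageId)
    {
        // Illustrative path only.
        return File.ReadAllBytes(Path.Combine(@"C:\images", imageId + ".jpg"));
    }
}
```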
Firstly, I recommend using IIS7 on Server 2008+ if at all possible. Its capabilities far exceed IIS6's. If you're unable to use IIS7 ...
Be sure you've configured the website hosting your WCF services as a web garden. This allows multiple worker processes to handle incoming requests, overcoming situations where the ASP.NET thread pool saturates or blocks and requests queue up while a single worker process churns through them in sequence.
Secondly, as the article you point to states, be sure to boost the number of concurrent threads ASP.NET is configured to handle.
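As a rough illustration, one way to raise the thread pool's floor in code is shown below (the numbers are arbitrary placeholders and need tuning for your workload; in ASP.NET the classic place for these knobs is the <processModel> element in machine.config, e.g. maxWorkerThreads):

```csharp
using System.Threading;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Start()
    {
        // Arbitrary example values - tune for your workload. Raising the
        // minimum avoids the thread pool's slow ramp-up under request bursts.
        ThreadPool.SetMinThreads(workerThreads: 100, completionPortThreads: 100);
    }
}
```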
Note: if your code calls into code that serializes work onto a single blocking thread (e.g. COM objects written in VB6 that perform ANY string manipulation), then it doesn't matter how many worker threads you configure - they'll all be serialized down to one thread (since VB6's string routines are single-threaded)! This is why the web-garden / multiple-worker-process configuration is so important.
HTH.
Related
I've encountered a strange problem with an application I've developed. The application is a Windows service hosting ASP.NET Core 2.0 running on Kestrel, and it receives requests through an IIS site acting as a proxy.
In this application I also use SignalR 2.2.2, integrated via Microsoft.AspNetCore.Owin. All worked well until I noticed that the application was not responding to requests.
Other applications on the same machine, using the same IIS server as a proxy, were working fine. Restarting the application pool serving the site solved the problem temporarily.
The problem resurfaced, and digging through monitoring information the application seems to hang when there are 400 SignalR SSE connections on the machine. This seems plausible, as I've found that by default OWIN limits the number of concurrent requests to 100 * number of CPUs. (Note that a site on the same machine serves 5,000 requests per minute without breaking a sweat, but those are not long-lived requests like the SignalR ones.)
The problem is that I can't find the same option when hosting OWIN inside ASP.NET Core. Does anyone know whether this could be the solution, and what the correct setting is?
EDIT: I'm fairly certain the issue is caused by the number of concurrently open SignalR connections, because disabling SignalR in the JavaScript client made the problem vanish.
2nd EDIT: SignalR does not seem to be the culprit after all; load testing the site with Crank, both in test and in production, worked up to 5,000 concurrent connections, which is the default IIS limit and is fine by me.
After some trial and error I was able to identify and correct the problem, but it was no easy task, so I'm leaving this answer behind in case someone else stumbles upon the same problem.
Disabling SignalR did not solve the problem but it made it appear less often.
Thanks to the monitoring in place on the server and on IIS, I observed that the problem appeared when the number of connections to the site started growing rapidly. This system primarily makes requests to other services, so it has neither a database nor expensive computations of its own.
Examining the code, I found three problems:
a new HttpClient was created for every request, which can exhaust sockets because connections are not reused between requests (blog, blog2, blog3)
by default an HttpClient has a maximum number of concurrent connections to a single domain, and that limit defaults to 2 (!!!) (blog4)
the code waited synchronously on every web request to another system (the program was ported from an MVC 4 site, which never showed this problem). That worked fine in MVC, but ASP.NET Core is very sensitive to it: blocking rapidly exhausts all available threads, and because the thread pool starts with only as many threads as there are cores, they run out quickly and every request ends up waiting. The pool's floor can be raised as a temporary stop-gap with ThreadPool.SetMinThreads(Int32, Int32), but the only real solution is to make all the calls async.
Once all calls were made async, the problem never returned. Basically the problem was thread-pool starvation, and ASP.NET Core's sensitivity to it compared with MVC. Here you can find a nice explanation and a detection method using PerfView.
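For anyone landing here, a sketch of the three fixes combined (the class and URLs are invented for illustration; MaxConnectionsPerServer is the per-host connection cap on HttpClientHandler):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public class DownstreamClient
{
    // One shared instance for the application's lifetime: avoids socket
    // exhaustion, and raises the default per-host connection cap.
    private static readonly HttpClient Client = new HttpClient(
        new HttpClientHandler { MaxConnectionsPerServer = 50 });

    public async Task<string> GetDataAsync(string url)
    {
        // await instead of .Result/.Wait(): the thread-pool thread is
        // released while the downstream service responds.
        using (var response = await Client.GetAsync(url))
        {
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```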
This could be the issue, but it's unlikely. When hosting on .NET Core you're probably using Kestrel as the web server; to change limits such as the number of concurrent connections you can use the KestrelServerLimits class, as described in this Microsoft article.
That said, KestrelServerLimits should not be causing you any problems, since the default value of MaxConcurrentConnections is unlimited.
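If you do want to change those limits, a minimal sketch for ASP.NET Core 2.0 follows (in Program.cs; Startup stands in for your existing startup class, and the values are arbitrary examples):

```csharp
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args) => BuildWebHost(args).Run();

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseKestrel(options =>
            {
                // null means unlimited; set explicit caps only if needed.
                options.Limits.MaxConcurrentConnections = 1000;
                options.Limits.MaxConcurrentUpgradedConnections = 1000; // e.g. WebSockets
            })
            .UseStartup<Startup>()
            .Build();
}
```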
I have a Windows Phone 8 application that communicates with a WCF service using basicHttpBinding. The service is hosted on IIS7 (not on Windows Azure).
As the service may go down for any reason, I am exploring the use of message queues to increase the reliability of the system.
I have looked at the NetMsmqBinding provided in WCF, but it appears this binding is not supported by a WP8 client.
I am also looking at using RabbitMQ, but cannot find any working example of a WP8 client using it with WCF.
Can anyone please suggest the best way forward? Any sample code (or links) would be much appreciated.
Thanks
First off, netMsmqBinding cannot be used across the internet, because it uses MSMQ, which is not exposed over HTTP.
When you're making calls to a resource across the internet, unreliability is something you need to factor into your application. Given the number of possible problems you can encounter, it's generally not a case of if but when a failure occurs, and how your application deals with it is what matters.
Even so, there are things you can do to minimize the reliability issues you experience, one of which does involve queuing.
Where queuing can be useful is in taking large, complex, long-running processes offline. Because synchronous calls to such processes often time out, you can gain a lot of reliability by making the actual processing call asynchronous.
As an example, it is fairly common to have the web server invoke an offline process via message queuing and return to the client that the request is being processed. Because the enqueue itself is inexpensive, such calls are far less likely to fail. Your problem then becomes how to return the response to the client once the offline processing has completed.
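A hedged sketch of that pattern (all names are invented for illustration, and it assumes the MSMQ queue already exists): the client-facing operation does only the cheap enqueue and returns a tracking id, while a separate worker process drains the queue and runs the expensive part.

```csharp
using System;
using System.Messaging;
using System.ServiceModel;

[ServiceContract]
public interface IOrderIntake
{
    [OperationContract]
    string SubmitOrder(string orderPayload); // returns a tracking id
}

public class OrderIntakeService : IOrderIntake
{
    public string SubmitOrder(string orderPayload)
    {
        var trackingId = Guid.NewGuid().ToString();

        // Cheap and fast: just enqueue. A separate worker reads the queue
        // and performs the long-running work, so this call is unlikely to
        // time out.
        using (var queue = new MessageQueue(@".\private$\orders"))
        {
            queue.Send(new Message(orderPayload) { Label = trackingId });
        }

        return trackingId; // the client polls (or is notified) using this id
    }
}
```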
Even though this question is a year old, I am still searching for a good answer. I would appreciate any information that helps me fully understand this issue of poor performance between communicating web services hosted on the same machine.
I am currently developing a system with several WCF Web Services that communicate intensively.
They run under IIS7 on the same machine, each service in its own application pool with multiple worker processes (a web garden).
When evaluating each Web Service individually, I can serve 10,000-20,000 requests per minute quickly and without any issues in resource consumption (processor and memory).
When I test the whole system, or just a subsystem formed by two Web Services, I can't serve more than 2,000 requests/minute.
I also observed that communication time between the Web Services is a big issue (sometimes more than 10 seconds). Yet when testing with only 1,000 requests per minute everything goes smoothly (connection times of no more than 60 ms).
I have tested the system with both SoapUI and JMeter, but the times were computed from the system logs, not from the testing tools.
Memory and network aren't an issue (they are used very little).
Later on, I tested the performance of two communicating WCF web services hosted on two servers versus on the same server. Again there seems to be a bottleneck when the services are on the same machine, dropping throughput from tens of thousands of connections to thousands; again, neither memory nor processor was the limiting factor.
As a note, in some cases I am working with quite large data, and some of the required operations are long-running.
I used Performance Monitor to watch memory, processes, and the WebService and ASP.NET counters, but I didn't see anything that indicated what was going wrong.
I also tried all the performance settings and tuning options I could find on the Internet.
Does anyone know what could be wrong? Why does communication between the Web Services take so long? Why can the Web Service that serves as the system's entry point accept 10,000 requests/minute when tested alone, but barely 2,000 when communicating with another Web Service?
Is this an IIS7 problem? Would my system perform better if each Web Service were deployed on a different server?
I want to better understand how things function internally (IIS and WCF services) so that I can improve performance for current and future systems.
You could start by collecting data from the WCF performance counters: concurrent calls, instances, call duration, and so on. In addition, WCF throttling provides properties you can use to limit how many instances or sessions are created at the application level, and performance can often be improved by choosing an appropriate instancing mode.
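As an illustration of those throttling knobs (MyService is a placeholder for your service type, and the numbers are placeholders to be derived from your counters; the same values can also be set in config via the serviceThrottling behavior element):

```csharp
using System.ServiceModel;
using System.ServiceModel.Description;

public static class ThrottledHost
{
    public static ServiceHost Open()
    {
        var host = new ServiceHost(typeof(MyService)); // MyService: your service type

        // Reuse the behavior if config already added one, otherwise create it.
        var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
        if (throttle == null)
        {
            throttle = new ServiceThrottlingBehavior();
            host.Description.Behaviors.Add(throttle);
        }

        // Placeholder values - derive real ones from your performance counters.
        throttle.MaxConcurrentCalls = 64;
        throttle.MaxConcurrentSessions = 200;
        throttle.MaxConcurrentInstances = 264;

        host.Open();
        return host;
    }
}
```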
Finally, in load testing there are many configurations to apply to the different components: maximum concurrent HTTP connections, IIS limits, using multiple load clients, and so on. Without controlling these, your load test results may not be valid.
In order to optimize some server-side database calls, I decided to use System.Threading.Tasks.Task to parallelize several database calls, then use Task.WaitAll() to gather the results, package them up, and send them to the client via WCF. This seems to work fine when testing on the Visual Studio development web server (Cassini), but does not work when deployed to IIS. Profiling the client calls (with Firebug) shows that the calls reach IIS, but no corresponding queries are submitted to SQL Server.
Anyone experienced this? Is there a limitation in using Tasks within IIS?
There is no direct limitation. However, when you use a Task it is scheduled on the ThreadPool, and IIS by default shares a single thread pool for the entire worker process, which can (especially on a busy server) cause thread starvation. This means the usual guidance about using the ThreadPool also applies when working with Tasks. See this post for details.
To see whether this is the problem, you could, at least as a test, create all of your Task instances with the TaskCreationOptions.LongRunning hint. This causes the default TaskScheduler to run each task on its own dedicated (new) thread instead of a ThreadPool thread. While I don't think this is a good long-term solution, it would let you verify that thread-pool starvation is causing your problem. If it is, you can look at other options, such as a custom TaskScheduler to manage the threads/tasks for this operation.
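A sketch of that test (PortfolioDto, GetPortfolio, LoadCustomers, LoadOrders, and Package are placeholders for your existing operation and synchronous database calls):

```csharp
using System.Threading.Tasks;

public PortfolioDto GetPortfolio() // hypothetical WCF operation
{
    // LongRunning => each task gets its own dedicated thread, bypassing the
    // (possibly starved) ThreadPool. Diagnostic use only.
    var customersTask = Task.Factory.StartNew(
        () => LoadCustomers(), TaskCreationOptions.LongRunning);
    var ordersTask = Task.Factory.StartNew(
        () => LoadOrders(), TaskCreationOptions.LongRunning);

    Task.WaitAll(customersTask, ordersTask);
    return Package(customersTask.Result, ordersTask.Result);
}
```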
I'm developing a web application that needs to perform a task that consumes a lot of CPU and memory and may last several minutes. To provide a better user experience, I also developed a Windows service hosting a WCF service that performs this "high-cost" task and communicates with the web app using MSMQ (message queues).
This worked great until I ran a load test... The Windows service starts consuming a lot of resources, driving the CPU to 100% and using more than 1 GB of memory. I've looked for optimizations and made a lot of tweaks to the code, and I think it is quite efficient; the task simply requires a lot of resources.
The problem is that while the WCF service is working, the CPU sits at 100% and the web app becomes INCREDIBLY SLOW! I don't mind if the WCF service's task takes a couple of minutes longer, but I want the web app to perform well for users.
So I'm wondering if there is a way to limit the resources that the WCF service can consume, giving priority to the web app.
Thanks in advance.
Juan
The easy solution would be to place the WCF service on a different machine.
The fact that the service uses a lot of CPU is probably not related to your use of WCF.
There are some ways that you may be able to improve the performance of your web app:
Process only one message at a time.
Break the jobs into smaller parts.
Set the priority of the Windows service to Below Normal in Task Manager (or do it programmatically; see the sketch after this list).
Install more RAM on the server.
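For the priority suggestion above, a small sketch: instead of changing it by hand in Task Manager (which does not survive a service restart), the service can lower its own priority at startup.

```csharp
using System.Diagnostics;

public static class ServicePriority
{
    // Call this once when the Windows service starts.
    public static void LowerOwnPriority()
    {
        // BelowNormal lets the web app's worker process win CPU contention
        // while the background job still uses whatever CPU is left over.
        Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.BelowNormal;
    }
}
```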
I suspect this is a problem with your Windows service design. When you decide to host WCF in a Windows service, you have to control resource utilization yourself, which means you have to control throttling. Build configurable control over the service's internal processing so that you can adjust the load based on available resources. If you host WCF in IIS, it already provides such control at the application-pool level.
There are some freeware tools that can limit CPU usage for a given process, but that is not something I would recommend for production use.
Best regards, Ladislav