I'm developing a web application that needs to perform a task that consumes a lot of CPU and memory, and that may also last several minutes. To give a better user experience, I also developed a Windows service that hosts a WCF service which performs this "high cost" task and communicates with the web app using MSMQ (message queues).
This worked great until I ran a load test... The Windows service started consuming a lot of resources, pushing the CPU to 100% and using more than 1 GB of memory. I've looked for optimizations and made a lot of tweaks to the code, and I think it is quite efficient now, but the task simply requires a lot of resources.
The problem is that while the WCF service is working, the CPU sits at 100% and the web app becomes INCREDIBLY slow! I don't mind if the task the WCF service performs takes a couple of minutes more, but I want the web app to perform well for users.
So I'm wondering if there is a way to limit the resources that the WCF service can consume, giving priority to the web app.
Thanks in advance.
Juan
The easy solution would be to place the WCF service on a different machine.
The fact that the service is using a lot of CPU is probably not related to your use of WCF.
There are some ways that you may be able to improve the performance of your web app:
Process only one message at a time.
Break the jobs into smaller parts.
Set the priority of the Windows service to Below Normal, either in Task Manager or programmatically (see the sketch after this list).
Install more RAM on the server
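If it suits your setup, the priority tweak can also be done from inside the service itself so it survives restarts. A minimal sketch, assuming .NET and that this line runs in the service's OnStart; note it only deprioritizes CPU scheduling and does nothing about memory:

```csharp
using System.Diagnostics;

// Lower this process's scheduling priority so the web app's worker
// process wins the CPU whenever both are busy. Memory is unaffected.
Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.BelowNormal;
```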
I would guess this is a problem with your Windows service design. When you decide to host WCF in a Windows service, you have to control resource utilization yourself, i.e. you have to control throttling. You have to build configurable control over the service's internal processing so that you can adjust the load based on the available resources. If you host WCF in IIS, it already provides such control at the AppPool level.
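WCF has a built-in knob for exactly this. A sketch of capping concurrency with ServiceThrottlingBehavior (HighCostService and the limits are made up; it's shown as a console host for brevity, but the same lines go in a Windows service's OnStart):

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Description;

class Program
{
    static void Main()
    {
        // HighCostService is a placeholder for your actual service class.
        var host = new ServiceHost(typeof(HighCostService));

        // Cap how much work the service accepts at once; queued MSMQ
        // messages simply wait their turn instead of saturating the box.
        host.Description.Behaviors.Add(new ServiceThrottlingBehavior
        {
            MaxConcurrentCalls = 2,
            MaxConcurrentInstances = 2,
            MaxConcurrentSessions = 2
        });

        host.Open();
        Console.ReadLine(); // keep the host alive
        host.Close();
    }
}
```

The same limits can also be set in app.config under serviceBehaviors if you prefer not to recompile.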
There are some freeware tools that allow limiting CPU usage for a given process, but that is not something I would recommend for production use.
Best regards, Ladislav
I have a Windows Phone 8 application which communicates with a WCF service using basicHttpBinding. The service is hosted on IIS 7 (not on Windows Azure).
As the service may go down for any reason, I am exploring the use of message queues to increase the reliability of the system.
I have looked at the NetMsmqBinding provided in WCF, but it looks like this binding is not supported by the WP8 client.
I am also looking at RabbitMQ, but cannot find any working example of a WP8 client using it with WCF.
Please can anyone suggest what is the best way forward? Any sample code (or links) will be much appreciated.
Thanks
First off, netMsmqBinding cannot be used across the internet. This is because it uses MSMQ, which is not exposed over HTTP.
When you're making calls to a resource across the internet, unreliability is something you need to factor into your application. Because of the number of possible problems you can encounter, it's generally not a case of if, but when, there is a failure, and how your application deals with failure is what's important.
Even so, there are things you can do to minimize the reliability issues you experience, one of which does involve queuing.
Where queuing can be useful is in taking large, complex, long-running processes offline. Because calls to such processes implemented synchronously often time out, you can gain a lot of reliability by making the actual processing call asynchronous.
As an example, it would be fairly common to have the web server invoke some offline process via message queuing and return to the client immediately with a note that the request is being processed. Because enqueuing is inexpensive, calls are far less likely to fail. Your problem then becomes one of how to return the response to the client once the offline processing has been done.
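A minimal sketch of that fire-and-forget step using the classic System.Messaging API (the queue path and payload are made up for illustration; this is the server-side piece, not something the WP8 client calls directly):

```csharp
using System.Messaging;

public static class JobGateway
{
    // Hypothetical local private queue for pending work items.
    const string QueuePath = @".\private$\pendingJobs";

    public static void Enqueue(string jobPayload)
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        using (var queue = new MessageQueue(QueuePath))
        {
            // Recoverable = persisted to disk, so the job survives
            // a service or machine restart.
            queue.Send(new Message(jobPayload) { Recoverable = true });
        }
    }
}
```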
Normally I host my WCF services in IIS, but I've been told by a colleague that services run better (performance-wise) when self-hosted in local Windows services.
Is this true? What are the pros and cons for each?
Without knowing how your services are designed, how much CPU/memory they consume, how many concurrent clients they support, how they are accessed, etc., it's hard to make a general statement about which method is better/faster. So I'll share my previous experience.
Initially we hosted our WCF services in local Windows services, after doing some rudimentary performance testing of the two WCF deployment methods. Hosting them in Windows services was slightly (not noticeably) faster. Our setup: .NET 4.0 / REST / wsHttpBinding / 3,000 total concurrent calls / load-balanced server farm / memory intensive / default WCF settings (we didn't tweak them initially).
Then we noticed that memory on our WCF servers was maxing out several days after starting the service, and when that happened our services sporadically threw strange exceptions. We turned on perf counters on our servers. That's when we learned that profiling WCF services hosted in a Windows service didn't give us much insight, because many perf counters simply didn't return any information at all, which was confirmed by the Microsoft tech support team. We also used ANTS to look for memory leaks but didn't find any major issues in our code. We then started tweaking WCF settings (e.g. maxBufferPoolSize) carefully with help from Microsoft consultants. Ultimately we came to the conclusion that GC wasn't happening frequently enough to free allocated memory. We even tried switching from workstation mode GC to server mode, which actually made the problem worse.
As a last resort, we switched to IIS. The performance of the service didn't get any better, which was fully expected. However, some of the IIS-specific perf counters confirmed our suspicion about GC not happening frequently enough. We then found the wonderful setting in IIS that lets you specify when and how often to recycle an app pool. Yes, we could have developed a simple custom solution to restart our WCF services, but why reinvent the wheel, we thought. Additionally, when you recycle an app pool, IIS doesn't kill it abruptly. Instead, it creates a new worker process to handle subsequent requests while the old one stays alive for a configurable amount of time to finish processing all outstanding requests. That built-in capability allowed us to maintain our uptime SLA.
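For what it's worth, that recycling schedule can also be scripted instead of set in the IIS Manager UI. A sketch using the Microsoft.Web.Administration API (the pool name and times are made up; run it elevated on the IIS box):

```csharp
using System;
using Microsoft.Web.Administration; // reference Microsoft.Web.Administration.dll

class Program
{
    static void Main()
    {
        using (var manager = new ServerManager())
        {
            // "WcfServicePool" is a made-up pool name.
            ApplicationPool pool = manager.ApplicationPools["WcfServicePool"];

            // Recycle at a fixed time each day instead of after an uptime interval.
            pool.Recycling.PeriodicRestart.Time = TimeSpan.Zero; // disable interval recycle
            pool.Recycling.PeriodicRestart.Schedule.Clear();
            pool.Recycling.PeriodicRestart.Schedule.Add(new TimeSpan(3, 0, 0)); // 03:00

            manager.CommitChanges();
        }
    }
}
```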
Based on my experience, I suggest you keep them in IIS unless you really, really need to squeeze that last bit of juice out of your servers.
Even though this question is a year old, I am still searching for a good answer. I'd appreciate any information that helps me fully understand this issue of low performance between communicating web services hosted on the same machine.
I am currently developing a system with several WCF Web Services that communicate intensively.
They run under IIS 7 on the same machine, each service in a different application pool, with multiple worker processes in a web garden.
When evaluating each Web Service individually, I can serve 10,000-20,000 requests per minute, quickly and without any issues in resource consumption (processor and memory).
When I test the whole system, or just a subsystem formed by two Web Services, I can't serve more than 2,000 requests/minute.
I also observed that communication time between the Web Services is a big issue (sometimes more than 10 seconds). But when testing with only 1,000 requests per minute, everything goes smoothly (connection times of no more than 60 ms).
I have tested the system with both SoapUI and JMeter, but the times were computed from the system logs, not from the testing tools.
Memory and network aren't an issue (they are used very little).
Later on, I tested the performance of two communicating WCF web services hosted on two servers versus on the same server. Again, there seems to be a bottleneck when the services are on the same machine, dropping throughput from tens of thousands of requests to thousands; and again, neither memory nor processor is the limit.
As a note, I am working with quite large data in some cases, and some of the operations are long-running.
I used Performance Monitor to watch memory, process, web service, ASP.NET, and other counters, but I didn't see anything that would indicate what was going wrong.
I also tried all the performance settings and tuning options I could find on the Internet.
Does anyone know what could be wrong? Why does communication between the Web Services take so long? Why can the Web Service that serves as the entry point to the system accept 10,000 requests/minute when tested alone, but barely 2,000 when communicating with another Web Service?
Is it an IIS 7 problem? Would my system perform better if each Web Service were deployed on a different server?
I want to better understand how things work internally (IIS and WCF services) so I can improve performance in current and future systems.
You could try to collect data from the WCF performance counters: concurrent calls, instances, duration, and so on. In addition, WCF throttling provides properties you can use to limit how many instances or sessions are created at the application level. The performance of a WCF service can also be improved by choosing the proper instancing mode.
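The WCF counters are off by default; they must be enabled in config, after which they can be read like any other Windows counter. A sketch (the category and instance names below are assumptions for a .NET 4.x service; copy the exact names from what perfmon lists on your box):

```csharp
// Enable the counters in config first:
//   <system.serviceModel>
//     <diagnostics performanceCounters="ServiceOnly" />
//   </system.serviceModel>
using System;
using System.Diagnostics;
using System.Threading;

class Program
{
    static void Main()
    {
        // Category and instance names are assumptions; verify in perfmon.
        var outstanding = new PerformanceCounter(
            "ServiceModelService 4.0.0.0",
            "Calls Outstanding",
            "MyService@http:||localhost|myservice");

        while (true)
        {
            Console.WriteLine("Calls outstanding: {0}", outstanding.NextValue());
            Thread.Sleep(1000);
        }
    }
}
```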
Finally, in load testing there are many configurations to apply to the different components: the maximum number of concurrent HTTP connections, IIS limits, using enough load clients, and so on. Your load test may well be invalidated by one of these.
I am looking for suggestions for hosting my WCF enterprise application.
The app needs to run on the server without stopping. It also uses TCP to get the best performance in an intranet environment.
I am thinking of hosting it in a Windows service, because IIS recycles the process and has timeouts.
However, I found this on MSDN (http://msdn.microsoft.com/en-us/library/ff649818.aspx):
Windows service... Lack of enterprise features. Windows services do not have the security, manageability, scalability, and administrative features that are included in IIS.
Does this mean a Windows service is not suitable for an enterprise application? But what about MS SQL, Oracle, MySQL, etc.? They are all hosted as Windows services, right?
Regards
Bryan
A Windows service is suitable for an enterprise application! The quoted text actually means that IIS has a lot of built-in management features which are not available with custom hosting (like a Windows service) unless you implement them on your own.
One such feature is the recycling you want to avoid, which helps the application keep resource consumption low (keeps the server in a healthy state). Another is IIS's health checking of worker processes: if a worker process looks stuck (not processing requests for any reason), IIS automatically starts another process and routes new requests to it.
IIS + WAS + AppFabric provide a very big feature set, but they are not good for every scenario. If you have a service that requires continuous background, scheduled, or multi-threaded processing, it is probably better to move to a self-hosted scenario.
We're planning a system running on Windows/.NET 3.5 that has a number of "services" that need to run in the background. Some will be active all of the time, but some will only be called occasionally and can be stood up on demand.
As far as I can see, my options are:
Windows Services - always running(?)
IIS hosted something - called on demand
COM+ / .NET Enterprise Services - most complex option, but most powerful?
Distributed transactions are not a requirement; these are mainly computation engines rather than transaction processors.
Does anyone have any experience of working with all of these and what further pros & cons can be claimed for each technology?
EDIT
I suppose there are multiple ways of hosting code in IIS: web services, WCF (as pointed out below), any others? Relative pros/cons?
WCF feels like the right way to go. There are still many choices to make. WCF provides a number of communication mechanisms and hosting environments:
WCF combines the following technologies under one set of APIs:
ASMX;
WSE;
Remoting;
COM+;
MSMQ.
So, for instance, you can use persistent messages from MSMQ for occasionally connected clients, or standard XML-encoded SOAP messages over an HTTP transport. You can also use new features in 3.5, like binary encoding of XML, or JSON encoding over HTTP.
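To make the "one set of APIs" point concrete, here's a sketch of one service exposing both a durable MSMQ endpoint and a plain HTTP endpoint, where only the binding differs (ICalc, CalcService, and the addresses are hypothetical names for illustration):

```csharp
using System;
using System.ServiceModel;

class Program
{
    static void Main()
    {
        // ICalc / CalcService are hypothetical contract and implementation names.
        var host = new ServiceHost(typeof(CalcService),
            new Uri("http://localhost:8000/calc"));

        // Durable, queued delivery for occasionally connected clients.
        host.AddServiceEndpoint(typeof(ICalc),
            new NetMsmqBinding { Durable = true },
            "net.msmq://localhost/private/calcQueue");

        // Ordinary request/reply SOAP over HTTP for online clients.
        host.AddServiceEndpoint(typeof(ICalc), new BasicHttpBinding(), "");

        host.Open();
        Console.ReadLine(); // keep the host alive
        host.Close();
    }
}
```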
Hosting environments include:
Console applications
Windows services
WCF services inside IIS 7.0
and, on Windows Vista or Windows Server 2008, you can use WAS (Windows Activation Service) to host WCF services.
Different hosting environments have pros and cons. I suggest you look at MSDN for more details (e.g. http://msdn.microsoft.com/en-us/library/bb332338.aspx).
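As an example of the Windows-service option from the list above, a minimal sketch of a service class that owns a ServiceHost (CalcService is the same hypothetical type as before; endpoints and bindings would come from app.config, and the class is started via ServiceBase.Run in Main, omitted here):

```csharp
using System.ServiceModel;
using System.ServiceProcess;

public class WcfHostService : ServiceBase
{
    private ServiceHost host;

    protected override void OnStart(string[] args)
    {
        host = new ServiceHost(typeof(CalcService));
        host.Open(); // endpoints and bindings are read from app.config
    }

    protected override void OnStop()
    {
        if (host != null)
            host.Close();
    }
}
```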
Because WCF encompasses a lot of functionality it is more difficult to learn than any one of the technologies it replaces. I still think it pays for itself in the long run.
It depends on what the software will do, and how (and whether) users or systems need to interact with it. Depending on those things, there may be one more, often overlooked, option: set it up as a scheduled task. This is often a very good alternative to a Windows service if the software acts on a time interval (checking for a change in a database and sending the changed data somewhere, for instance).
If other systems will talk directly to your software, I would imagine that a WCF application hosted in IIS would be a rather straightforward choice. We use both approaches in my current assignment: WCF services for looking up and storing data, and scheduled tasks for data calculations that run on a regular basis.
The scheduled task has one specific upside compared to the others: it uses system resources only when running.
You mentioned starting a process "on demand". WAS - Windows Activation Service, sometimes called the Windows Process Activation Service, though it is never abbreviated "WPAS" - is the part of Windows that provides on-demand process activation. The way it works: when a message arrives, WAS can start a worker process to handle it. Prior to IIS 7, WAS was fairly tightly integrated into IIS; it was used primarily to activate processes that did web work, like an ASP.NET worker process. With IIS 7, WAS was generalized so that it can activate worker processes based on non-HTTP as well as HTTP messages. If you write your app to receive messages through WCF, you get activation essentially "for free". That applies whether the transport is HTTP, TCP, or MSMQ, SOAP or otherwise.
The key thing with this on-demand startup, though, is that it is tied to communication. In fact, the process lifecycle model for WAS is tied to communication as well: by default, if no messages come in for a while, WAS shuts the process down. That may or may not be what you want.
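If it isn't what you want, the idle shutdown can be turned off per application pool. A sketch using the Microsoft.Web.Administration API (the pool name is made up; run it elevated on the server):

```csharp
using System;
using Microsoft.Web.Administration; // reference Microsoft.Web.Administration.dll

class Program
{
    static void Main()
    {
        using (var manager = new ServerManager())
        {
            // "ComputePool" is a made-up app pool name.
            ApplicationPool pool = manager.ApplicationPools["ComputePool"];
            pool.ProcessModel.IdleTimeout = TimeSpan.Zero; // never shut down when idle
            manager.CommitChanges();
        }
    }
}
```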
As for process hosting: COM+ offers a hosting environment, but it is primarily intended as a host for processes that communicate. It may not be the perfect fit for you.
If you have compute engines, you may just want to run a Windows service. Such a service can be started and stopped either administratively or programmatically. In the latter case, you could imagine a WAS-activated worker process programmatically starting a Windows service.
You could also imagine writing a simple Windows service that watches a location (filesystem, message queue, etc.) for messages, and when a file or message arrives, starts up a compute engine process, which itself is NOT a Windows service but just an ordinary process.
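A sketch of that watcher idea, assuming an MSMQ queue as the watched location (the queue path and engine executable are made up; the service is started via ServiceBase.Run in Main, omitted):

```csharp
using System.Diagnostics;
using System.Messaging;
using System.ServiceProcess;

public class ComputeWatcher : ServiceBase
{
    private MessageQueue queue;

    protected override void OnStart(string[] args)
    {
        // Hypothetical queue that receives work requests.
        queue = new MessageQueue(@".\private$\computeJobs");
        queue.PeekCompleted += OnWorkArrived;
        queue.BeginPeek();
    }

    private void OnWorkArrived(object sender, PeekCompletedEventArgs e)
    {
        queue.EndPeek(e.AsyncResult);

        // Launch the compute engine, a plain process, not a service;
        // the engine itself drains the queue and exits when done.
        Process.Start(@"C:\apps\ComputeEngine.exe");

        queue.BeginPeek(); // keep watching for the next batch
    }

    protected override void OnStop()
    {
        queue.Dispose();
    }
}
```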
Speaking of MSMQ: that is basically the same model as MSMQ triggers. You can configure MSMQ to start a program when a message arrives on a particular queue.
There are lots of options.