Long communication time of WCF Web Services within the same server - wcf

Even though this question is a year old, I am still looking for a good answer. I would appreciate any information that helps me fully understand this issue of poor performance between communicating web services hosted on the same machine.
I am currently developing a system with several WCF Web Services that communicate intensively.
They are running under IIS7 on the same machine, each service in a different Application Pool with multiple worker processes (a Web Garden).
When evaluating each Web Service individually, I can serve 10,000-20,000 requests per minute quickly and without any resource problems (processor or memory).
When I test the whole system, or just a subsystem formed by two Web Services, I can't serve more than 2,000 requests/minute.
I also observed that communication time between Web Services is a big issue (sometimes more than 10 seconds). But when testing with only 1,000 requests per minute, everything goes smoothly (connection times of no more than 60 ms).
I have tested the system both with SOAPUI and JMETER, but the times were computed based on system logs, not from the testing tools.
Memory and network aren't an issue (they are used very little).
Later on, I tested the performance of two communicating WCF web services, hosted on two servers and then on the same server. Again there seems to be a bottleneck when the services are on the same machine, lowering the number of connections from tens of thousands to thousands; again, neither memory nor processor is the limit.
As a note, in some cases I am working with quite large payloads, and some of the required operations are long-running.
I used Performance Monitor (perfmon) to watch memory, process, Web Service, ASP.NET and other counters, but I didn't see anything that indicated what was going wrong.
I also tried all the performance settings and tuning options I could find on the Internet.
Does anyone know what could be wrong? Why could the communication between Web Services take so long? Why can the Web Service that serves as the entry point to the system accept 10,000 requests/minute when tested alone, but barely 2,000 when it communicates with another Web Service?
Is it an IIS7 problem? Could my system perform better if each Web Service were deployed on a different server?
I want to better understand how things work internally (IIS and WCF services) so I can improve performance for current and future systems.

You could try to collect data from the WCF performance counters: concurrent calls, instances, duration, and so on. In addition, WCF throttling provides properties you can use to limit how many instances or sessions are created at the application level. The performance of a WCF service can also be improved by choosing an appropriate instancing mode.
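A minimal sketch of adjusting the throttling values in code, assuming a self-hosted ServiceHost for brevity (under IIS the same knobs are exposed through the serviceThrottling behavior in config); the service type name and the numbers are placeholders to tune:

using System;
using System.ServiceModel;
using System.ServiceModel.Description;

// MyService is a placeholder for your service implementation type.
var host = new ServiceHost(typeof(MyService));

// Reuse the existing throttling behavior if one is configured, otherwise add one.
var throttling = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
if (throttling == null)
{
    throttling = new ServiceThrottlingBehavior();
    host.Description.Behaviors.Add(throttling);
}

throttling.MaxConcurrentCalls = 64;       // messages processed simultaneously
throttling.MaxConcurrentSessions = 200;   // sessionful channels allowed at once
throttling.MaxConcurrentInstances = 264;  // service objects alive at once

host.Open();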
Finally, in load testing there are many configurations to apply to the different components: maximum concurrent HTTP connections, IIS limits, having enough load-generating clients, and so on. Your load test may be invalidated by any of these.

Related

ASP .NET Core Application Process Isolation for IIS hosted Kestrel Services

I'm migrating a service-based integration platform from .NET Framework to .NET Core. The original versions of the integration platform have proven very successful, and compared to replacing it with an 'off the shelf' integration solution, it has a far better ROI.
So after redeveloping the code, all tests have been working very well, and I have achieved higher levels of performance with a single IIS server than I could with two IIS servers running the original versions.
Except... if I go over ~3 messages/sec with multiple clients, I start seeing duplicate GUID key errors when trying to save instrumentation data to my DB. All these errors are generated by the on-ramp service. The on-ramp places the message on a queue. The messages are then consumed by an off-ramp service and sent to the destination (for this load test the destination is a file folder).
Even though the off-ramp is also running on the same server as the on-ramp, we do not see any duplication errors generated by the off-ramp. I suspect this is because the queue creates a linear process, so only one instance of the off-ramp is running at any time, versus the on-ramp, which has up to 4 clients firing concurrent messages at its API.
Initially I thought the issue was caused by a static global variable class I had implemented crossing process boundaries. But I would expect that the issue would then be seen with the off-ramp as well, as the service architecture of both is virtually identical.
Summary of thoughts on issue:
If it is a pure coding issue, then errors would happen at low messaging rates.
The error would also be seen on the off-ramp if the GUID duplication were down to chance.
The on- and off-ramps are both running on the same server, but duplication is only seen on the on-ramp, i.e. the on-ramp is not impacting the off-ramp and vice versa.
Duplication therefore has to be due to shared memory between concurrently running on-ramp instances, triggered by the multiple-client scenario.
To try and resolve the issue I removed the static global variable class but I'm still seeing the duplication errors.
This issue was never observed in the original IIS implementation (after millions of messages processed). I suspect the issue is with process isolation in the IIS-hosted Kestrel .NET Core service host. From what I have read, there is good isolation between different apps (based on IIS path) but not within the same app, i.e. within the same IIS app pool. This could explain why .NET Core does not support multiple apps running in the same IIS app pool.
If anyone has a good idea how I can achieve process isolation between instances of the same app running in the same IIS app pool, I would appreciate your thoughts/suggestions.
After running more tests I was able to resolve the issue. The problem was with the scope of the instrumentation variable. At low rates there was never a problem, but at high throughput, the same instrumentation object was being accessed by a second instance of the process.
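For anyone hitting the same symptom, here is a hypothetical sketch of this kind of scoping bug (all type and member names are invented for illustration): a single shared instrumentation object reused across concurrent requests, versus a per-request instance.

using System;

// BUG: one record shared by every request in the process; two concurrent
// requests end up persisting the same Id, producing duplicate-key errors.
public static class Instrumentation
{
    public static readonly TrackingRecord Current = new TrackingRecord();
}

public class TrackingRecord
{
    public Guid Id { get; } = Guid.NewGuid();
}

// Fix: create (or inject with a scoped/transient lifetime) a new record per
// request, so each concurrent call writes its own key.
public class OnRampHandler
{
    public void Handle(string message)
    {
        var record = new TrackingRecord();   // per-request instance
        // ... save record.Id and the message to the instrumentation DB ...
    }
}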
The issue was difficult to track down due to the short lived nature of the integration services.
Thanks to anyone who reviewed the question.
Martin

WCF performance: Having multiple instances (separate websites) in IIS of same service make sense?

In order to improve performance* of a WCF service, one of the following can be done:
Use WCF inbuilt features (throttling etc)
Install multiple instances (separate websites in IIS) of the same service in the same machine.
I understand that these things are better tested than discussed but just wanted to get an opinion if someone has already tried both these approaches.
This service uses InstanceContextMode.PerSession and ConcurrencyMode.Multiple.
*Performance: this service handles data (MTOM encoded). There should not be any timeouts, since clients will make synchronous calls to this service.
No, multiple endpoints from a single service won't help, as you describe it.
Yes, you can have a WCF service running in IIS with multiple endpoints, but the same service processes the requests whether they come in on endpoint 1, 2, 3 or n. And since WCF requests are processed on their own threads, there's no benefit to adding extra endpoints.
Think of it this way: 10 requests come into a WCF service. Each request is processed on its own thread whether there are 10 endpoints or just 1. So there's no speed advantage gained by adding endpoints.
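For context, the instancing/concurrency combination from the question is declared on the service implementation itself, and that is what governs how calls are dispatched, regardless of how many endpoints exist; a minimal sketch (the contract and type names are placeholders):

using System.ServiceModel;

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string Echo(string value);
}

// One instance per client session, and WCF may run multiple calls on that
// instance concurrently, so the implementation must be thread-safe. Adding
// extra endpoints does not change this dispatching model.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class MyService : IMyService
{
    public string Echo(string value) => value;
}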
I've spent 2 years building industrial-scale WCF services. If you're worried about performance, the WCF service is the least of your worries. I've load tested a WCF service, sending 1,000 concurrent users (each uploading multiple 157 KB files) at a medium-size (4-core) server; the server barely breaks a sweat while uploading 160 files/second.
If you're planning to build a huge web service, the way to spread out the processing load is to have 1-n WCF web services fronted by a load balancer like F5. Then you can scale up to Amazon.com size if you like.

Hosting dataservice in IIS or Local Services

Normally I host my WCF services in IIS, but a colleague has told me that services run better (performance-wise) when self-hosted in a local Windows Service.
Is this true? What are the pros and cons for each?
Without knowing how your services are designed, how much CPU/memory they consume, how many concurrent clients they support, how they are accessed, etc., it's hard to make a general statement about which method is better/faster. So I'll share my previous experience.
Initially we hosted our WCF services in local Windows Services, after doing some rudimentary performance testing against the two WCF deployment methods. Hosting them in Windows Services was slightly (not noticeably) faster. The setup: .NET 4.0 / REST / wsHttpBinding / 3,000 total concurrent calls / load-balanced server farm / memory intensive / default WCF settings (we initially didn't tweak them).
Then we noticed the memory on our WCF servers was maxing out several days after starting the service, and when it happened our services sporadically generated strange exceptions. We turned on perf counters on our servers. That's when we learned that perf profiling WCF services hosted in a Windows Service didn't give us a whole lot of insight, because many perf counters simply didn't return any information at all, which was confirmed by the Microsoft Tech Support team. We also used ANTS to look for memory leaks but didn't find any major issue in our code. We then started tweaking WCF settings (e.g. maxBufferPoolSize) attentively with help from Microsoft consultants. Ultimately we came to the conclusion that GC wasn't happening frequently enough to free up allocated memory. We even tried switching from workstation-mode GC to server mode, which actually ended up worsening the problem.
As a last resort, we switched to IIS. The performance of the service didn't get any better, which was fully expected. However, some of the IIS-specific perf counters confirmed our suspicion about GC not happening frequently enough. We then found a wonderful setting in IIS that allowed us to specify when and how often to recycle an app pool. Yes, we could have developed a simple custom solution to restart our WCF services, but why reinvent the wheel, we thought. Additionally, when you recycle an app pool in IIS, IIS doesn't kill it abruptly. Instead, it creates a new one to handle subsequent requests while the old one stays alive for a configurable amount of time to finish processing all outstanding requests. That built-in capability allowed us to maintain our uptime SLA.
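For completeness, the recycling schedule can also be set from code rather than through IIS Manager; a sketch under the assumption that Microsoft.Web.Administration is available and that the pool is named "WcfServicesPool" (a placeholder):

using System;
using Microsoft.Web.Administration;

using (var serverManager = new ServerManager())
{
    var pool = serverManager.ApplicationPools["WcfServicesPool"];

    // Recycle at 03:00 every day instead of the default rolling interval.
    pool.Recycling.PeriodicRestart.Time = TimeSpan.Zero;   // disable interval-based recycling
    pool.Recycling.PeriodicRestart.Schedule.Clear();
    pool.Recycling.PeriodicRestart.Schedule.Add(new TimeSpan(3, 0, 0));

    serverManager.CommitChanges();
}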
Based on my experience, I would suggest you keep them in IIS unless you really, really, really need to squeeze that last bit of juice out of your servers.

Why doesn't my azure hosted WCF service scale when I add more machines?

We have a WCF service which we are hosting on azure. It takes some xml and processes it in memory (no external calls/db etc and it takes about 150ms) and returns some xml.
We have been load testing it, and when we run it on 1-, 2- and 4-core machines we can max out the processors and get a maximum of around 40 calls per second throughput (on the 4-core machine). However, when we switch to an 8-core machine or two 4-core machines, we still only get around 40 calls per second.
Why might I not be able to get more throughput when I scale up the number of machines doing the processing? I would expect adding more machines would increase my throughput fairly linearly, but it doesn't. Why not?
Not sure if Azure has specific throttling, but the .NET Framework has a limit on the number of outgoing connections to the same address that can be active at a time. The MSDN article "Improving Web Services Performance" mentions that the default value for this is 2.
Configure The maxconnection Attribute
The maxconnection attribute in Machine.config limits the number of concurrent outbound calls.
Note This setting does not apply to local requests (requests that originate from ASP.NET applications on the same server as the Web service). The setting applies to outbound connections from the current computer, for example, to ASP.NET applications and Web services calling other remote Web services.
The default setting for maxconnection is two per connection group. For desktop applications that call Web services, two connections may be sufficient. For ASP.NET applications that call Web services, two is generally not enough. Change the maxconnection attribute from the default of 2 to (12 times the number of CPUs) as a starting point.
<connectionManagement>
  <add address="*" maxconnection="12"/>
</connectionManagement>
Note that 12 connections per CPU is an arbitrary number, but empirical evidence has shown that it is optimal for a variety of scenarios when you also limit ASP.NET to 12 concurrent requests (see the "Threading" section later in this chapter). However, you should validate the appropriate number of connections for your situation.
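For an ASP.NET or WCF client, the same limit can also be raised in code at process startup instead of (or in addition to) Machine.config; a minimal sketch, using the 12-per-CPU starting point quoted above:

using System;
using System.Net;

// Raise the per-host outbound connection limit before any outgoing calls are made.
ServicePointManager.DefaultConnectionLimit = 12 * Environment.ProcessorCount;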
These limits are in place to prevent a single user from monopolizing all the resources on a remote server (a DoS attack). Since this is a service running in Azure, I would guess that they have throttling on their end to prevent a user from consuming all of their incoming connections from a single IP.
My next step would be to check whether there is a concurrent connection limit for Azure web roles (this thread suggests there is and that it's configurable) and, if so, increase it. Otherwise, I would try to perform the load test from multiple sources and see if you still hit the same limits.

Limit WCF service resources usage

I'm developing a web application that needs to perform a task that consumes a lot of CPU and memory, and that may also last several minutes. In order to give a better user experience, I developed a Windows service that hosts a WCF service that performs this "high cost" task and communicates with the web app using MSMQ (message queues).
This worked great until I tried to run a load test... The Windows service starts consuming a lot of resources, putting the CPU at 100% and using more than 1 GB of memory. I've looked for optimizations and done a lot of tweaks to the code, and I think it is very efficient, but the task just requires a lot of resources.
The problem is that while the WCF service is working, the CPU is at 100% and the web app becomes INCREDIBLY SLOW! I don't mind if the task that the WCF service performs takes a couple of minutes more, but I want the web app to perform well for users.
So I'm wondering if there is a way to limit the resources that the WCF service can consume, giving priority to the web app.
Thanks in advance.
Juan
The easy solution would be to place the WCF service on a different machine.
The fact that the service is using a lot of CPU is probably not related to your use of WCF.
There are some ways that you may be able to improve the performance of your web app:
Process only one message at a time.
Break the jobs into smaller parts.
Set the priority of the Windows service to Below Normal, either in Task Manager or programmatically at startup (see the sketch after this list)
Install more RAM on the server
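If the priority route helps, it can also be applied by the service itself at startup rather than by hand in Task Manager; a minimal sketch:

using System.Diagnostics;

// Run this worker process below normal priority so the web app's worker
// process wins the CPU whenever both are busy.
Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.BelowNormal;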
I guess this is a problem with your Windows service design. When you decide to host WCF in a Windows service, you have to control resource utilization yourself, i.e. you have to control throttling. You have to create configurable control over internal service processing so that you can change the load based on the available resources. If you host WCF in IIS, it already provides such control at the AppPool level.
There are some freeware tools that allow limiting CPU usage for a given process, but that is not something I would recommend for production use.
Best regards, Ladislav