I'm setting up an AppFabric caching cluster on a small webfarm (5 web servers).
The caching cluster is installed on the same servers that run IIS, if that matters.
I only use the AppFabric cache for my Model layer, meaning mostly business logic objects created from database queries. No page caching or similar.
This works just fine when enabled on the main website.
However, on one of the 5 web servers there is a second IIS site, which hosts a couple of services: among others, 3 WCF endpoints as well as 2 old-school ASMX web services.
When I enable AppFabric caching for this site, it tears the whole cluster down. A call to Get-CacheClusterHealth shows all 5 hosts are completely gone (100% in unallocated named cache fractions).
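For reference, this is roughly what I run from the Caching Administration Windows PowerShell to check the cluster state (nothing here is specific to the second site):

    Import-Module DistributedCacheAdministration   # already loaded in the Caching Administration shell
    Use-CacheCluster                                # connect to the cluster from the local configuration
    Get-CacheHost                                   # service status of each of the 5 cache hosts
    Get-CacheClusterHealth                          # named cache fractions, including Unallocated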
The Model code is actually the exact same DLLs as we use for the main website, so I doubt it's anything in the code (since the main site works).
I noticed this error in IIS -> AppFabric Dashboard: "Error occurs while parsing service file myendpoint.svc".
So that got me thinking: could this be caused by the WCF endpoints somehow?
There is a related question to this here:
AppFabric Cache server and web application on same physical machine
Microsoft doesn't recommend having cache nodes be dual-use (also hosting applications). This could be the cause of your problem. We use an AppFabric cache cluster, but we dedicate those servers to AppFabric and nothing else. See the article from MS here:
AppFabric Caching Physical Architecture
I'm migrating a service-based integration platform from .NET Framework to .NET Core. The original versions of the integration platform have proven very successful and, compared to replacing it with an 'off the shelf' integration solution, it has a far better ROI.
After redeveloping the code, all tests have been going very well, and I have achieved higher levels of performance with a single IIS server than I could with 2 IIS servers running the original versions.
Except... if I go over ~3 messages/sec with multiple clients, I start seeing duplicate GUID key errors when trying to save instrumentation data to my DB. All these errors are generated from the on-ramp service. The on-ramp places the message on a queue. The messages are then consumed by an off-ramp service and sent to the destination (for this load test the destination is a file folder).
Even though the off-ramp is also running on the same server as the on-ramp, we do not see any duplication errors generated by the off-ramp. I suspect this is because the queue creates a linear process, so only one instance of the off-ramp is running at any time, versus the on-ramp, which has up to 4 clients firing concurrent messages at its API.
Initially I thought the issue was caused by a static global variable class I had implemented crossing process boundaries. But then I would expect the issue to show up in the off-ramp as well, since the service architecture of both is virtually identical.
Summary of thoughts on issue:
If it were a pure coding issue, the errors would also happen at low messaging rates.
The errors would also be seen on the off-ramp if the GUID duplication were down to chance.
The on- and off-ramps are both running on the same server, but duplication is only seen on the on-ramp, i.e. the on-ramp is not impacting the off-ramp and vice versa.
The duplication therefore has to be due to memory shared between concurrently running on-ramp instances, produced by the multiple-client scenario.
To try to resolve the issue I removed the static global variable class, but I'm still seeing the duplication errors.
This issue was never observed in the original IIS implementation (after millions of messages processed). I suspect the issue is with process isolation in the IIS-hosted Kestrel .NET Core service host. From what I have read, there is good isolation between different apps (based on the IIS path) but not within the same app, so basically within the same IIS app pool. This could also explain why .NET Core does not support multiple apps running in the same IIS app pool.
If anyone has a good idea how I can achieve process isolation between instances of the same app running in the same IIS app pool, I would appreciate your thoughts/suggestions.
After running more tests I was able to resolve the issue. The problem was with the scope of the instrumentation variable. At low rates there was never a problem, but at high throughput, the same instrumentation object was being accessed by a second instance of the process.
The issue was difficult to track down due to the short-lived nature of the integration services.
Thanks to anyone who reviewed the question.
Martin
I'm using Simple Injector in my WCF service. While running it from VS2010, everything is fine. However, when I publish it to my server using IIS 7, after some time (20 minutes, I counted) my WCF service loses all the assemblies, modules, and classes registered in the container.
I guess IIS recycles the WCF Service Application Pool and drops my container registrations.
Can anyone help me on this?
While there are many legitimate cases for self-hosting WCF services, turning to self-hosting just because of IIS recycling may be counterproductive.
Hosting in IIS gives you a lot of benefits during development and daily operations, and I am not going to repeat them here, since you can easily find them with a Google search.
When IIS receives the first request for your application, it launches a worker process named "w3wp.exe" according to the settings of the application pool associated with your web app. By default, IIS shuts that process down after 20 minutes of idle time. Check the Advanced Settings of the application pool and you will find a lot of settings controlling the process life cycle. You won't get such flexibility and robustness out of the box with self-hosting.
So basically you have a few options, provided you decide to stay with IIS hosting (a sketch of the first two follows the list):
Change the Idle Time-out to 24 hours or even a month.
Write a small program or use curl to ping your application every 10 minutes.
Leave it as it is.
If you want to keep state during operations, save it to disk, then load it again during the next launch triggered by a request.
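A minimal sketch of the first two options using the WebAdministration module; the pool name and service URL are placeholders for your own:

    Import-Module WebAdministration

    # Option 1: raise the idle time-out of the application pool hosting the WCF service
    Set-ItemProperty "IIS:\AppPools\MyWcfAppPool" -Name processModel.idleTimeout -Value ([TimeSpan]::FromHours(24))

    # Option 2: a keep-alive request that a scheduled task can fire every 10 minutes
    # (Invoke-WebRequest needs PowerShell 3.0+; use System.Net.WebClient on older hosts)
    Invoke-WebRequest "http://localhost/MyWcfApp/Service.svc" -UseBasicParsing | Out-Null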
I'm working on my fourth or fifth implementation of a WCF service over MSMQ with IIS/WAS activation, and I was never able to make it work properly. It's always the same story: my services are only activated if the IIS web site has been touched in some other way (like serving the service metadata page at /somewhere/myService.svc). As soon as the only thing happening is messages being sent into the queue, my services stop processing messages, and they start again as soon as I visit the .svc page...
This is such a common pattern for me that I have also come to a common workaround: scheduling a job (every few minutes) that runs a PowerShell script which accesses that page. Quite simple, but not very elegant. And, furthermore, unnecessary in theory.
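For reference, the workaround is nothing more than a one-line script plus a scheduled task; the file path and schedule below are just what I happen to use:

    # keep-alive.ps1 - touches the .svc page so WAS (re)activates the MSMQ-bound service
    (New-Object System.Net.WebClient).DownloadString("http://localhost/somewhere/myService.svc") | Out-Null

    # registered with Task Scheduler, e.g.:
    # schtasks /Create /SC MINUTE /MO 5 /TN "MsmqSvcKeepAlive" /TR "powershell.exe -File C:\scripts\keep-alive.ps1"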
This has happened over different IIS versions (7.0 and 7.5), over various Win 2008 service packs and releases, and with servers in AD domains or workgroups. I think I've read every bit on the web about this, especially MSDN and Microsoft employees' blogs, so binding configuration, MSMQ permissions, and all the other small details you can discover here and there are set up.
So the question: has anybody been successful with WAS activation over MSMQ?
Even though this question is a year old, I am still searching for a good answer to it. I appreciate any information that will lead me to fully understand this issue regarding the low performance of communicating web services hosted on the same machine.
I am currently developing a system with several WCF Web Services that communicate intensively.
They are running under IIS7, on the same machine, each service in a different application pool, with multiple worker processes in a Web Garden.
During the individual evaluation of each Web Service, I can serve 10,000-20,000 requests per minute, quickly and without any issues in resource consumption (processor and memory).
When I test the whole system, or just a subsystem formed by two Web Services, I can't serve more than 2,000 requests/minute.
I also observed that the communication time between Web Services is a big issue (sometimes more than 10 seconds). But when testing with only 1,000 requests per minute, everything goes smoothly (connection times of no more than 60 ms).
I have tested the system both with SOAPUI and JMETER, but the times were computed based on system logs, not from the testing tools.
Memory and network aren't an issue (they are used very little).
Later on, I tested the performance of 2 communicating WCF web services, hosted first on two servers and then on the same server. Again, there seems to be a bottleneck when the services are on the same machine, lowering the number of connections from tens of thousands to thousands; again, no memory or processor limit is hit.
As a note, I am working with quite big data in some cases and some of the operations needed are long ones.
I used perfmon to see what's going on with memory, processes, web services, ASP.NET, etc., but I didn't see anything that could indicate what is going wrong.
I also tried all the performance settings and tuning options I could find on the Internet.
Does someone know what can be wrong? Why does the communication between Web Services take so long? Why can the Web Service which serves as the entry point into the system accept 10,000 requests/minute when tested alone, but barely 2,000 when communicating with another Web Service?
Is it an IIS7 problem? Would my system perform better if each Web Service were deployed on a different server?
I want to understand better how things work internally (IIS and WCF services) to improve performance for current and future systems.
You could try to collect data from the WCF performance counters: concurrent calls, instances, call duration, and so on. In addition, WCF throttling provides some properties that you can use to limit how many instances or sessions are created at the application level; the performance of a WCF service can also be improved by choosing an appropriate instancing mode.
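For example, here is a quick way to sample a few of those counters from PowerShell; this assumes the WCF counters are enabled in the service configuration (<diagnostics performanceCounters="ServiceOnly" />) and uses the .NET 4.x category name:

    # Sample the per-service WCF counters every 5 seconds for one minute
    Get-Counter -Counter @(
        '\ServiceModelService 4.0.0.0(*)\Calls Per Second',
        '\ServiceModelService 4.0.0.0(*)\Calls Outstanding',
        '\ServiceModelService 4.0.0.0(*)\Calls Duration'
    ) -SampleInterval 5 -MaxSamples 12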
Finally, in load testing there are many configurations to apply to the different components: max concurrent HTTP connections, IIS limits, having enough load clients, and so on. Your load test may be invalidated because of this.
I am looking for suggestions for hosting my WCF enterprise application.
The app needs to run on the server without stopping. It also uses TCP to get the best performance in the intranet environment.
I am thinking of hosting it in a Windows service, because IIS recycles the process and has idle timeouts.
However, I found this on MSDN (http://msdn.microsoft.com/en-us/library/ff649818.aspx):
"Windows service... Lack of enterprise features. Windows services do not have the security, manageability, scalability, and administrative features that are included in IIS."
Does this mean a Windows service is not suitable for an enterprise application? But how about MS SQL Server, Oracle, MySQL, etc.? They are all hosted as Windows services, right?
Regards
Bryan
A Windows service is suitable for an enterprise application! The quoted text actually means that IIS has a lot of built-in management features which are not available with custom hosting (like a Windows service) unless you implement them on your own.
One such feature is the very recycling you want to avoid, which helps the application keep its resource consumption low (and the server in a healthy state). Another such feature is IIS monitoring of the worker process state: if the worker process looks stuck (not processing requests for any reason), IIS will automatically start another process and route new requests to that process.
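If you do stay with IIS and the recycling is what worries you, both of those knobs can be turned off per application pool; a minimal sketch, with a placeholder pool name:

    Import-Module WebAdministration

    # Disable the idle shutdown and the default periodic recycle for the pool hosting the WCF app
    Set-ItemProperty "IIS:\AppPools\MyEnterprisePool" -Name processModel.idleTimeout -Value ([TimeSpan]::Zero)
    Set-ItemProperty "IIS:\AppPools\MyEnterprisePool" -Name recycling.periodicRestart.time -Value ([TimeSpan]::Zero)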
IIS + WAS + AppFabric can provide a very big feature set, but they are not good for every scenario. If you have a service which requires continuous background, scheduled, or multi-threaded processing, it is probably better to move to a self-hosted scenario.