I have a new .NET Core web app deployed to a Windows Server 2008 R2 IIS instance, and after the app has been idle for a while it's slow on the first load, and then subsequent requests are super fast.
In previous versions of .NET I changed the Idle Time-out property of the app pool in IIS to 0 to fix this issue. Is there a similar setting somewhere with .NET Core, or perhaps something I've missed that I could add to the Startup.cs file?
How about setting the Application Pool Idle Time-out in IIS?
Also make sure the recycling settings are unchecked.
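If you'd rather script those two settings than click through IIS Manager, here's a minimal sketch using the Microsoft.Web.Administration API (run it on the server with admin rights; "MyAppPool" is a placeholder for your pool name):

```csharp
using System;
using Microsoft.Web.Administration; // reference Microsoft.Web.Administration.dll shipped with IIS

class ConfigureAppPool
{
    static void Main()
    {
        using (var serverManager = new ServerManager())
        {
            // "MyAppPool" is a placeholder - use the pool your site actually runs under.
            var pool = serverManager.ApplicationPools["MyAppPool"];

            // Idle Time-out = 0 keeps the worker process alive when no requests arrive.
            pool.ProcessModel.IdleTimeout = TimeSpan.Zero;

            // Disable the periodic recycle (the default is every 1740 minutes).
            pool.Recycling.PeriodicRestart.Time = TimeSpan.Zero;

            serverManager.CommitChanges();
        }
    }
}
```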
Barring a more legitimate fix (I am unfamiliar with .NET Core), writing a route that returns an HTTP 200 as a heartbeat and then calling it from a scheduled task every X minutes (idle time-out minus one) would prevent the application from ever idling, and thus keep it from slowing down requests.
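For what it's worth, such a heartbeat route can be tiny; a sketch for an ASP.NET Core MVC project (the controller name and route are arbitrary):

```csharp
using Microsoft.AspNetCore.Mvc;

// Hypothetical heartbeat endpoint: GET /api/heartbeat returns 200 OK.
// A scheduled task (curl, PowerShell, etc.) can hit it every (idle time-out - 1) minutes.
[Route("api/[controller]")]
public class HeartbeatController : Controller
{
    [HttpGet]
    public IActionResult Get() => Ok("alive");
}
```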
I am using SqlTableDependency and SignalR in my ASP.NET WebAPI project for dispatching real-time notifications to the client app. Everything is working just fine. However, SqlTableDependency stops listening to database changes after a while.
My WebAPI project is running in local IIS. I am not sure, but I think the Idle Time-out in the IIS application pool is the root cause of the issue: IIS might be terminating the application after the idle time-out, which causes the SqlTableDependency watchdog to run and remove all of its database objects (triggers, queues, etc.).
What would be a good strategy to tackle this problem? Is there a way to check whether SqlTableDependency is still watching for table changes, so that if it's not, I can create a new instance of it? Or is it good practice to just turn off the idle time-out for my WebAPI project?
Any ideas or suggestions are welcome.
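To make the "create a new instance of it" idea concrete, this is roughly what I have in mind: an untested sketch that re-creates the dependency whenever the library raises OnError (I'd still need to confirm that OnError actually fires in this scenario; "Order", the table name and the connection string are placeholders):

```csharp
using System;
using TableDependency.SqlClient; // SqlTableDependency NuGet package

// Untested sketch: wraps SqlTableDependency so it is re-created whenever the
// library raises OnError (for example after the watchdog removed its triggers/queues).
public class OrderChangeListener : IDisposable
{
    private readonly string _connectionString;
    private SqlTableDependency<Order> _dependency;

    public OrderChangeListener(string connectionString)
    {
        _connectionString = connectionString;
        Start();
    }

    private void Start()
    {
        _dependency = new SqlTableDependency<Order>(_connectionString, "Orders");
        _dependency.OnChanged += (s, e) => NotifyClients(e.Entity); // push to SignalR clients here
        _dependency.OnError   += (s, e) => Restart();               // listening broke down, start over
        _dependency.Start();
    }

    private void Restart()
    {
        _dependency.Stop();
        _dependency.Dispose();
        Start();
    }

    private void NotifyClients(Order changed)
    {
        // e.g. hubContext.Clients.All.orderChanged(changed);
    }

    public void Dispose() => _dependency?.Dispose();

    public class Order { public int Id { get; set; } }
}
```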
I've encountered a strange problem with an application I've developed. The application is a Windows service hosting ASP.NET Core 2.0 on Kestrel. It receives requests through an IIS site acting as a proxy.
In this application I also use SignalR 2.2.2, integrated using Microsoft.AspNetCore.Owin. All worked well until I noticed that the application was not responding to requests.
Other applications on the same machine and using the same IIS server as proxy were working fine. Restarting the application pool serving the site solved the problem temporarily.
The problem resurfaced again, and digging through the monitoring information the application seems to hang when there are about 400 SignalR SSE connections on the machine. This seems plausible, as I've found that by default OWIN limits the number of concurrent requests to 100 * number of CPUs. (Note that a site on the same machine serves 5000 requests per minute without breaking a sweat, but those are not long-lived requests like the SignalR ones.)
The problem is that I seem unable to find the same option when hosting OWIN inside ASP.NET Core. Does someone know if this could be the solution, and what the correct setting is?
EDIT: I'm fairly certain the issue is caused by the number of SignalR connections opened concurrently, because disabling SignalR in the JavaScript made the problem vanish.
2nd EDIT: SignalR does not seem to be the culprit, as load testing the site with crank, both in test and in production, worked fine up to 5000 concurrent connections, which is the default IIS limit and is fine by me.
After some trial and error I've been able to identify and correct the problem, but it was no easy task, so I'm leaving this answer behind in case someone else stumbles upon the same problem.
Disabling SignalR did not solve the problem but it made it appear less often.
Thanks to the monitoring in place on the server and on IIS, I observed that the problem appeared when the number of connections to the site started growing rapidly. This system primarily makes requests to other services, so it has no database and no expensive computations.
Examining the code, I found three problems:
a new HttpClient was created for every request, which can exhaust sockets because they are not reused between requests (blog, blog2, blog3)
by default there is a maximum number of concurrent connections from HttpClient to a single domain, and that limit defaults to 2 (!!!) (blog4)
the code was waiting synchronously on every web request to another system (this program was ported from an MVC 4 site, which never showed this problem). That worked fine in MVC, but ASP.NET Core is very sensitive to it: it rapidly exhausts the available threads, and because the thread pool starts with the number of cores, they run out quickly and all requests end up waiting. The limit can be raised as a temporary stopgap with ThreadPool.SetMaxThreads(Int32, Int32), but the only real solution is to turn all the calls into async calls.
Once all calls were made async the problem never returned. Basically the problem was thread-pool starvation and ASP.NET Core's sensitivity to it compared with MVC. Here you can find a nice explanation and a detection method using PerfView; a short sketch of these fixes is below.
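For reference, a minimal sketch of those fixes (shared HttpClient, a higher per-domain connection limit, async all the way); the controller and the downstream URL are made-up examples, not the real code:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public class DownstreamController : Controller
{
    // One HttpClient shared by the whole application instead of one per request,
    // so sockets are reused; MaxConnectionsPerServer lifts the per-domain limit.
    private static readonly HttpClient Client = new HttpClient(
        new HttpClientHandler { MaxConnectionsPerServer = 100 });

    [HttpGet("downstream")]
    public async Task<IActionResult> Get()
    {
        // The ported MVC code did the equivalent of Client.GetStringAsync(url).Result,
        // which parks a thread-pool thread for the whole call.
        // Awaiting frees the thread while the downstream request is in flight.
        var body = await Client.GetStringAsync("http://other-service/api/data");
        return Content(body);
    }
}
```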
This could be the issue, but it's unlikely. When hosting on .NET Core you're probably using Kestrel as the web server implementation; to change limits such as the number of concurrent connections you can use the KestrelServerLimits class, as described in this Microsoft article.
KestrelServerLimits should not be causing you any problems, since the default value for MaxConcurrentConnections is unlimited.
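If you do want to set the limits explicitly, they live on KestrelServerOptions.Limits; a sketch for an ASP.NET Core 2.0 Program.cs (the numbers are arbitrary examples and Startup is assumed to be your existing startup class):

```csharp
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()          // your existing Startup class
            .UseKestrel(options =>
            {
                // null (the default) means unlimited; set a value only if you want a cap.
                options.Limits.MaxConcurrentConnections = 1000;
                options.Limits.MaxConcurrentUpgradedConnections = 1000; // e.g. WebSockets
            })
            .Build()
            .Run();
}
```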
I have an ASP.NET Core Web API deployed on IIS 7.5 (Windows 2008 R2). I have controllers as well as listener classes (which wait for messages to arrive on a RabbitMQ queue) that perform the same functionality.
The problem is that whenever the Web API is first deployed on IIS, or has had some idle time, the RabbitMQ messages don't get picked up. Only if I make an API call to a controller does the application 'wake up' and pick up the messages.
Tweaks I have tried:
In the application pool,
set 'Idle Time-out' to 0.
set 'Disable Overlapped Recycle' to true.
set 'Disable Recycling for Configuration Changes' to true.
I have no idea what is causing this. I need the application to pick up messages immediately and have no idle time. Could anyone please point me in the right direction?
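In case it matters for the answers: one change I'm considering on the code side is hosting the listener in an IHostedService, so it starts with the application instead of with the first controller hit. A rough sketch with placeholder names follows; it would of course still need the idle time-out / preload settings so the process actually stays alive:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

// Rough sketch, not my current code: an IHostedService ties the listener's
// lifetime to the application itself rather than to the first controller hit.
// "RabbitMqListener" is a placeholder for the class that actually subscribes
// to the queue; a stub is included so the sketch stands on its own.
public class RabbitListenerHostedService : IHostedService
{
    private readonly RabbitMqListener _listener = new RabbitMqListener();

    public Task StartAsync(CancellationToken cancellationToken)
    {
        _listener.StartConsuming(); // begin waiting for messages as soon as the app starts
        return Task.CompletedTask;
    }

    public Task StopAsync(CancellationToken cancellationToken)
    {
        _listener.Stop();
        return Task.CompletedTask;
    }
}

// Registered in Startup.ConfigureServices:
//   services.AddSingleton<IHostedService, RabbitListenerHostedService>();

public class RabbitMqListener
{
    public void StartConsuming() { /* subscribe to the RabbitMQ queue here */ }
    public void Stop() { }
}
```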
As a complete workaround, you can keep your app alive by sending requests to it all the time. In my case, I don't even have access to change the IIS settings.
To send the requests I use the Availability feature in Application Insights; it lets you create tests that send GET requests to your app as often as every 5 minutes. You can read more about it here.
I have a WCF service hosted in IIS 7.5 that responds to the first SOAP message posted to it after a period of inactivity with a 404 error. (It works about 15 seconds after that... it is likely waking up after that initial ping.)
In investigating this issue I have:
- Prevented app pool recycling by setting the Idle Time-out to 0 and the recycling time interval to 0.
- Attempted to enable app warm-up by installing Microsoft's Application Initialization module and amehrot's app initializer UI for IIS 7.5. Using this I set the application pool to Always Running and preloaded/pre-initialized my service.
- Installed http://keepalive.codeplex.com/ to run through the metabase and hit the service with activity.
While the service is active following an IIS restart, it still appears to go to sleep after a period of inactivity. I am currently looking into reliable sessions and whether tweaks can be made to the web.config. Any further guidance would be appreciated.
There is an idle time-out setting on the application pool.
The default is 20 minutes: if there is no activity for 20 minutes, the app pool is released from memory. The first call after that triggers a load and a JIT compile of the code.
You can stop the shutdown by setting the idle time-out to 0.
I decided to give up on my attempts at an elegant solution and ended up adding a Windows service that sends a web request to each of the URLs I needed to keep alive.
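The core of that service is nothing more than a loop that sends a GET to each URL and then sleeps; a stripped-down sketch (the URLs and the interval are placeholders):

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Stripped-down keep-alive loop; a Windows service would kick this off from OnStart.
// The URLs and the 5-minute interval are placeholders - the interval just has to be
// shorter than the app pool idle time-out (20 minutes by default).
public class KeepAlivePinger
{
    private static readonly HttpClient Client = new HttpClient();

    private static readonly string[] Urls =
    {
        "http://localhost/MyWcfService/Service.svc",
        "http://localhost/MyOtherApp/"
    };

    public static async Task RunAsync(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            foreach (var url in Urls)
            {
                try { await Client.GetAsync(url, token); }
                catch (Exception) { /* log and keep going - a failed ping must not kill the service */ }
            }

            await Task.Delay(TimeSpan.FromMinutes(5), token);
        }
    }
}
```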
I have a WCF service deployed on IIS (BasicHttpBinding with [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]).
I have built custom in-memory session management, and now I am facing a strange problem: IIS 7 restarts automatically without throwing any kind of warning or error, not even in the event log. This destroys all of the available sessions.
I discovered the issue after logging the Application_Start and Application_End methods using log4net; I also put a breakpoint in Application_Start and it was hit in the middle of a test run.
This happens rarely, but I need to know why it happens and whether it is normal and acceptable. If it is not, what might the possible reasons be?
Could it be the app pool being recycled? IIS 6 has this set by default to 1740 minutes. As for IIS 7, I guess you would have the same kind of setting? I know that in IIS 6 this "event" is not logged as an error.
IIS recycles worker processes either when it detects an "unhealthy" process, or after certain operator-configurable limits are reached.
Among the limits are:
a memory threshold
a configured number of requests
elapsed time
time of day
more info
The session timeout (which is separate from app pool recycling) is set to 90 minutes by default and is set at the application level. This also means anything held in Session will be blown away at that time. You can set it via the properties of the virtual directory/application in IIS 6, and via Session State -> Open Feature in IIS 7 (with the application selected).
Also note that the session timeout can be set via the web.config of an ASP.NET application, should your web services be hosted in one of those.