We're running a Sitefinity 6.2 site on IIS 7.5. For some reason, the site is extremely slow on first load (over 90 seconds). There are not many images (only 4 PNGs, the largest being 163 KB) that could slow down the site. We've tried rebuilding the database indexes, to no avail.
There are a couple of older-version Sitefinity websites on the same web server, and we've not had this problem with them.
Any help is greatly appreciated.
By default, an IIS application pool is set to go to sleep if the site is not being used. This ensures that resources are returned to the system for other sites.
Busy sites therefore don't experience this lag on 'waking up'.
This video illustrates how you can make the Sitefinity site 'always' available:
http://www.youtube.com/watch?feature=player_embedded&v=zRqMAVnOUhw
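If you prefer to configure this directly, the idle shutdown can also be disabled in applicationHost.config; a minimal sketch, where "MySitefinityPool" is a placeholder for your site's application pool name:

<applicationPools>
  <add name="MySitefinityPool">
    <!-- 00:00:00 disables the default 20-minute idle shutdown -->
    <processModel idleTimeout="00:00:00" />
  </add>
</applicationPools>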
Alon
Firewater Interactive
http://www.firewater.net
We've had this issue with all our Sitefinity sites: the first hit takes a long time for the site to get going. To combat this, we run a task in Task Scheduler every five minutes that runs a C# exe which sends a web request to each site:
using System;
using System.Collections.Generic;
using System.Net;

static void Main(string[] args)
{
    var sitefinitySites = new List<Uri>
    {
        new Uri("http://www.example.com")
    };

    using (var client = new WebClient())
    {
        foreach (var site in sitefinitySites)
        {
            try
            {
                // Request the home page so the app pool spins up; the response body is discarded.
                client.DownloadString(site);
            }
            catch (WebException)
            {
                // Send an email or something, because the site might be down.
            }
        }
    }
}
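For reference, the five-minute schedule itself can be created from an elevated command prompt along these lines (the task name and exe path are placeholders):

schtasks /Create /SC MINUTE /MO 5 /TN "SitefinityKeepAlive" /TR "C:\Tools\KeepAlive.exe"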
What about the IIS "Always Running" feature?
http://developers.de/blogs/damir_dobric/archive/2009/10/11/iis-7-5-and-always-running-web-applications.aspx
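For IIS 7.5, that approach boils down to two applicationHost.config settings; a rough sketch with placeholder pool and site names:

<applicationPools>
  <add name="MyAppPool" startMode="AlwaysRunning" />
</applicationPools>
<sites>
  <site name="MySite">
    <application path="/" applicationPool="MyAppPool" serviceAutoStartEnabled="true" />
  </site>
</sites>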
I ran into an issue with my API generating a huge CPU load from lsass.exe.
The environment:
Windows Server 2016
.NET Core 2.2 (also tested with .NET Core 3.0)
In order to investigate, I created a new ASP.NET Core website using the default template (dotnet new web). I updated the Kestrel configuration to look like this:
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.ConfigureKestrel((context, options) =>
            {
                options.AddServerHeader = false;
                options.Listen(IPAddress.Any, 5001, listenOptions =>
                {
                    // Load the TLS certificate from the LocalMachine\My store.
                    listenOptions.UseHttps(StoreName.My, "*.mycertificate.domain", false, StoreLocation.LocalMachine);
                });
            })
            .UseStartup<Startup>();
        });
Alongside this website, I created a load test using JMeter in order to hit the website.
When running the test against the homepage of the website, the result is that the lsass.exe process heavily uses the CPU, close to 100%.
I ran other tests using the following configurations, and the result was still the same:
Kestrel using different ways to load the certificate
IIS using InProcess website with a https binding on the certificate
HTTP.sys
Any ideas on how to properly configure HTTPS in ASP.NET Core for a heavily loaded API?
Thanks for your help.
Thanks for the reply, but I tried the process below and it worked. Now my idle CPU usage is always less than 5%.
Go to Settings > System > Notifications & actions
Turn off 'Show me tips about Windows'
Restart
I'm hosting an ASP.NET Core app in a Windows Azure Web Site. I'm wondering how to get the details of an exception occurring in the Startup.Configure() method? All I see is "An error occurred while starting the application."
One thing that DOES work is adding an app setting of ASPNETCORE_ENVIRONMENT="Development".
Then I get System.Exception... at X.Startup.Configure() as expected.
But this is not a feasible solution. Azure is my staging environment, and I'm already using the environment concept to substitute my connection strings (as suggested in almost every piece of ASP.NET Core documentation I have ever read).
Things I have tried without any effect:
Adding app.UseDeveloperExceptionPage() (not surrounded by any if statement).
Adding <customErrors mode="Off"/> to Web.config, as suggested here https://stackoverflow.com/a/29539669/268091
Adding ASPNET_DETAILED_ERRORS="true" to Web.config, as suggested here https://stackoverflow.com/a/32094245/268091
Enabling Detailed error messages in Azure portal / Diagnostics logs
Adding a try-catch, writing a manual response, as suggested here https://stackoverflow.com/a/29524042/268091
Deleting everything and redeploying.
Is there really no other way to achieve this than hijacking the environment concept altogether?
I don't know if this would work for you, but we've decided to report these using Application Insights.
public void Configuration(IAppBuilder app)
{
    var ai = new Microsoft.ApplicationInsights.TelemetryClient();
    ai.TrackEvent("Application Starts");
    try
    {
        // Amazing code here
    }
    catch (Exception ex)
    {
        // Track the wrapped exception, then rethrow the original.
        ex = new Exception("Application start up failed.", ex);
        ai.TrackException(ex);
        throw;
    }
}
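For an ASP.NET Core app like the one in the question, a rough equivalent is to wrap host startup in Program.Main; a sketch, assuming the Microsoft.ApplicationInsights package is installed (CaptureStartupErrors and the detailed-errors setting are additional hints that can help surface the original exception):

public static void Main(string[] args)
{
    var ai = new Microsoft.ApplicationInsights.TelemetryClient();
    try
    {
        WebHost.CreateDefaultBuilder(args)
            .CaptureStartupErrors(true) // don't let the host swallow startup failures
            .UseSetting(WebHostDefaults.DetailedErrorsKey, "true")
            .UseStartup<Startup>()
            .Build()
            .Run();
    }
    catch (Exception ex)
    {
        ai.TrackException(new Exception("Application start up failed.", ex));
        ai.Flush(); // flush the telemetry buffer before the process exits
        throw;
    }
}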
My site is built on a WebAPI back end...
The issue occurs on deployment: my Uri wasn't formatted correctly due to our IIS deployment/site structure.
WRONG
http://itil.mysite.com/api/Building
RIGHT
http://itil.mysite.com/TestSite/api/building
So I modified my http helper to include a baseUri, like so:
define(function () {
    var baseUri = window.AppPath;
    return {
        baseUri: baseUri,
        defaultJSONPCallbackParam: 'callback',
        get: function (url, query) {
            return $.ajax(baseUri + url, { data: query });
        },
        ...
    };
});
And in my Index.cshtml, I added the following to set the root/baseUri path:
var AppPath = '@string.Format("{0}://{1}{2}", Request.Url.Scheme, Request.Url.Authority, Url.Content("~"))';
console.log('AppPath: ' + AppPath);
The baseUri path is correct when I log it to the console from Index.cshtml, e.g.:
AppPath: http://itil.mysite.com/TestSite/
But when I do the actual API call (from my deployed instance), it still uses the old Uri:
http.get('api/building').done(viewInit);
STILL WRONG
http://itil.mysite.com/api/building
My next thought was that the files must be cached somehow, so I tried the following:
Restarted IIS numerous times
Deleted and redeployed the files
Disabled caching in Chrome
Disabled .js caching in IIS (user mode & kernel mode)
Restarted my PC
Modified the ScriptBundle to try and force it to (for lack of a better word) go out of sync, then added my code back
The code works when I use the Visual Studio dev server, but I'm getting the same issue on my local IIS & the Alpha test site... with no luck.
How the hell do I clear the cache on a deployed site? :/ This is getting to the point where things seem a bit ridiculous. Either I'm losing it, or the "big guy" hates me.
Sigh.. Second time I've been caught out by this. I thought my issue was MVC related; it was Durandal deployment related :P
Note to everyone reading this:
Once you deploy a Durandal project, if you modify ANY of the existing JavaScript files or main.js, remember to run optimizer.exe:
...\App\durandal\amd\optimizer.exe
I'm having a hard time trying to get my task to stay persistent and run indefinitely from a WCF service. I may be doing this the wrong way and am willing to take suggestions.
I have a task that starts to process any incoming requests that are dropped into a BlockingCollection. From what I understand, the GetConsumingEnumerable() method is supposed to allow me to persistently pull data as it arrives. It works with no problem by itself: I was able to process dozens of requests without a single error or flaw, using a Windows Forms app to fill out the requests and submit them. Once I was confident in this process, I wired it up to my site via an .asmx web service and used jQuery AJAX calls to submit requests.
The site submits requests based on a URL that is submitted; the web service downloads the HTML content from that URL and looks for other URLs within the content. It then creates a request for each URL it finds and submits it to the BlockingCollection. Within the WCF service, if the application is online (i.e. the task has started), it pulls the requests via GetConsumingEnumerable inside a Parallel.ForEach and processes them.
This works for the first few submissions, but then the task just stops unexpectedly. Of course, this is doing 10x more requests than I could simulate in testing, but I expected it to just throttle. I believe the issue is in my method that starts the task:
public void Start()
{
    Online = true;
    Task.Factory.StartNew(() =>
    {
        tokenSource = new CancellationTokenSource();
        CancellationToken token = tokenSource.Token;

        ParallelOptions options = new ParallelOptions();
        options.MaxDegreeOfParallelism = 20;
        options.CancellationToken = token;

        try
        {
            // Blocks on the collection and processes requests as they arrive.
            Parallel.ForEach(FixedWidthQueue.GetConsumingEnumerable(token), options, (request) =>
            {
                Process(request);
                options.CancellationToken.ThrowIfCancellationRequested();
            });
        }
        catch (OperationCanceledException e)
        {
            Console.WriteLine(e.Message);
            return;
        }
    }, TaskCreationOptions.LongRunning);
}
I've thought about moving this into a WF4 Service and just wire it up in a Workflow and use Workflow Persistence, but am not willing to learn WF4 unless necessary. Please let me know if more information is needed.
The code you have shown is correct by itself.
However there are a few things that can go wrong:
If an exception occurs, your task stops (of course). Try adding a try-catch and logging the exception (see the sketch after this list).
If you start worker threads in a hosted environment (ASP.NET, WCF, SQL Server), the host can arbitrarily decide to shut down any worker process. For example, if your ASP.NET site is inactive for some time, the app is shut down. The hosts I just mentioned are not designed to have custom threads running. You will probably have more success using a dedicated application (.exe) or even a Windows Service.
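A minimal sketch of the first suggestion, applied to the Start() method above (Trace.TraceError is just a placeholder for whatever logging you use):

try
{
    Parallel.ForEach(FixedWidthQueue.GetConsumingEnumerable(token), options, Process);
}
catch (OperationCanceledException)
{
    // Expected when the token is cancelled; nothing to log.
}
catch (Exception e)
{
    // Without this, any other exception silently kills the long-running task.
    Trace.TraceError("Queue processing task died: " + e);
    throw;
}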
It turns out the cause of this issue was the WCF binding configuration. The task suddenly stopped because WCF killed the connection due to an open timeout. The open timeout setting is the time a request will wait for the service to open a connection before timing out. In certain situations, it reached the limit of 10 max connections, which caused the incoming connections to get backed up waiting for a connection. I made sure that I closed all connections to the host after the transactions were complete, so I gave in to upping the max connections and the open timeout period. After this, it ran flawlessly.
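In code, that change amounts to raising those limits on the binding; a minimal sketch, assuming netTcpBinding (whose MaxConnections defaults to 10) and illustrative numbers:

var binding = new NetTcpBinding
{
    // How long callers wait for a channel to open before timing out.
    OpenTimeout = TimeSpan.FromMinutes(5),
    // Default is 10, which let incoming connections back up.
    MaxConnections = 100
};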
Calling a WCF Service in my application throws EndpointNotFoundException after one minute. All timeouts are more than one minute.
var binding = new BasicHttpBinding
{
    OpenTimeout = TimeSpan.FromMinutes(3),
    CloseTimeout = TimeSpan.FromMinutes(6),
    ReceiveTimeout = TimeSpan.FromMinutes(2),
    SendTimeout = TimeSpan.FromMinutes(5)
};
client = new ServiceClient(binding, new EndpointAddress("http://..."));
client.InnerChannel.OperationTimeout = TimeSpan.FromMinutes(4);
I found a thread on Microsoft's forum, but there is no solution.
http://social.msdn.microsoft.com/Forums/ar/windowsphone7series/thread/cba9c633-6d79-4c04-8c08-cd0b5b33d8c6
The problem occurs only with service operations that take more than one minute to complete.
Invoking this operation throws EndpointNotFoundException:
public string Test() {
    Thread.Sleep(60000);
    return "test";
}
But invoking this one works correctly:
public string Test() {
    Thread.Sleep(58000);
    return "test";
}
It is not clear from the question whether the problem occurs on the emulator or on the device.
If it is occurring on the emulator, do you have network access, i.e. can you see external sites from IE? If not, check the proxy settings on your host machine, as a LAN proxy will prevent the emulator from communicating.
What are the server-side timeouts set to? It sounds like the issue may be at the other end of the wire.
I downloaded the .NET Framework libraries from a Windows Phone device and decompiled them.
HttpWebRequest has an unchangeable timeout of one minute.
To confirm, I created an .aspx page: if I put Thread.Sleep(60000) in Page_Load, HttpWebRequest is not able to get a response.