How to track down long running calls to IIS? - asp.net-mvc-4

Our users are restless. They keep complaining about woolly, unmeasurable stuff, particularly slowness, without giving specifics, which of course makes it very difficult to track down.
Nonetheless, it is quite possible that they are right, that there are server calls that are taking way too long to come back. So I want to put some kind of sniffer on the web site (we're using ASP.NET MVC 4 on IIS7) that will log any call that takes more than n seconds to turn around, or that returns more than x megabytes of data, along with all request parameters, the response size, and maybe a certain amount of response data.
I haven't a clue how to do this, though. Any suggestions?

Here is my take on this:
FRT
While you can use Failed Request Tracing to log slow requests, in my experience it is more useful for finding out why a request fails before it hits your application, rather than why it's running slowly. 9 times out of 10 it's simply going to show you that the slowdown is somewhere in your code.
Log Parser
Yes, you can download and analyze the IIS logs. I use Log Parser Lizard to do the analysis - it's a great GUI over Log Parser. Here's a sample of how you might query for requests slower than 1000 ms:
SELECT
To_String(To_timestamp(date, time), 'dd/MM/yyyy hh:mm:ss') As Time,
cs-uri-stem, cs-uri-query, cs-method, time-taken, cs-bytes, sc-status
FROM
'C:\inetpub\logs\LogFiles\W3SVC1\u_ex140721.log'
WHERE
time-taken > 1000
ORDER BY time-taken desc
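Since the question also asks about responses over a certain size, a variant along these lines should work too. Note that sc-bytes (bytes sent by the server) is not in the default W3C logging field set, so it may need to be enabled in the site's logging settings first; the 1 MB threshold is just an example:
SELECT
To_String(To_timestamp(date, time), 'dd/MM/yyyy hh:mm:ss') As Time,
cs-uri-stem, cs-uri-query, cs-method, time-taken, sc-bytes, sc-status
FROM
'C:\inetpub\logs\LogFiles\W3SVC1\u_ex140721.log'
WHERE
sc-bytes > 1048576
ORDER BY sc-bytes DESC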
New Relic
My recommendation - go easy on yourself and sign up for a free trial. No, I don't work for them, but I've used their APM product a lot. Install the agent on the server and set it up; within 10 minutes you will be amazed at the data you see about the site. Trust me.
It's designed to work in production environments and gives you amazing depth of information on what's running slowly, down to the database query and stack traces. It's pure awesome. Once it's set up, wait for the next user complaint, log in, and look at the traces for that time frame.
When your pro trial ends you can still get valuable data on the free tier, but it only keeps the last 24 hours. We purchased licenses - expensive, yes, but worth every cent. Why? The time taken to identify root causes dropped by an order of magnitude; we can get proactive by looking at what is number 2, 3 and 4 on the slow-requests list and working on those before they become big problems; and the alerting makes us much more responsive when things go wrong.
Code it
You could roll your own. This blog post uses MVC ActionFilters to do the logging; you could also use an HttpModule, similar to this post. The nice thing about the HttpModule approach is that you can compile and implement the module separately from your application, then just drop in the DLL and update web.config to wire up the module. I would be wary of these approaches on a very busy site, and getting the right level of detail to fully identify the root cause is challenging. A minimal sketch of the ActionFilter approach is shown below.
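To make the idea concrete, here is a minimal, hedged sketch of a slow-request logging filter for MVC 4. The class name, the 1000 ms threshold and the use of System.Diagnostics.Trace are illustrative, not taken from the linked posts - swap in whatever logging library you actually use. Note it only times action and result execution, not the whole IIS pipeline (which is where the HttpModule approach has the edge).
using System.Diagnostics;
using System.Web.Mvc;

public class SlowRequestLogAttribute : ActionFilterAttribute
{
    private const long ThresholdMs = 1000; // log anything slower than this (illustrative)

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // start a stopwatch and stash it in the per-request items collection
        filterContext.HttpContext.Items["RequestStopwatch"] = Stopwatch.StartNew();
    }

    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        var stopwatch = filterContext.HttpContext.Items["RequestStopwatch"] as Stopwatch;
        if (stopwatch == null) return;

        stopwatch.Stop();
        if (stopwatch.ElapsedMilliseconds > ThresholdMs)
        {
            var request = filterContext.HttpContext.Request;
            Trace.TraceWarning(
                "Slow request: {0} {1}{2} took {3} ms",
                request.HttpMethod,
                request.Path,
                request.Url != null ? request.Url.Query : string.Empty,
                stopwatch.ElapsedMilliseconds);
        }
    }
}
Register it globally (e.g. GlobalFilters.Filters.Add(new SlowRequestLogAttribute()); in Global.asax / FilterConfig) so every controller action is covered.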
View Requests
As touched on by Appleman1234, IIS has a little-known feature for looking at the requests that are currently executing. It's handy for the 'hey, it's running slow right now' situation. You can use appcmd.exe or the IIS GUI to do it; you will need to install the 'Request Monitor' IIS feature for it to work (see the example command below). This approach is OK for rudimentary narrowing of the problem, but it does not show you what's running slowly inside your controller.
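For example, with Request Monitor installed, something along these lines from an elevated prompt lists every request that has been executing for longer than three seconds (the threshold is in milliseconds and is illustrative):
%windir%\system32\inetsrv\appcmd.exe list requests /elapsed:3000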

There are various ways you can do this:
Failed Request Tracing (FRT) – formerly known as Failed Request Event Buffering (FREB) – with a custom failure condition of taking over a certain time to load/run
Logging request information with IIS logging functionality and then using a tool like LogParserStudio
Using tools like Fiddler or IISMonitor on the IIS server to capture request information
For FRT the official documentation is available here, and information on how to capture dumps for a long-running process is available here.
For logging request information in IIS, information about log file analysis is located here.
For information on configuring Fiddler to capture IIS requests, see here.
A summary of the steps in the linked resources is provided below.
For FRT
From IIS Manager for a given site, in the Actions pane, under Configure, click Failed Request Tracing and enter the desired values in the dialog box to enable Failed Request Tracing.
From IIS Manager for a given site, under IIS, click Failed Request Tracing Rules in order to define the failure rules for a given request. In the Actions pane, click Add and follow the wizard.
The logs will go in the directory you specify and are viewable in a web browser.
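For reference, the equivalent rule can also be set directly in the site's web.config; a rough, illustrative sketch with a ten-second timeTaken threshold (the trace areas and verbosity are examples, adjust to taste):
<system.webServer>
  <tracing>
    <traceFailedRequests>
      <add path="*">
        <traceAreas>
          <add provider="ASPNET" areas="Infrastructure,Module,Page,AppServices" verbosity="Verbose" />
        </traceAreas>
        <failureDefinitions timeTaken="00:00:10" />
      </add>
    </traceFailedRequests>
  </tracing>
</system.webServer>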
For IIS logging
Logging is enabled by default on IIS.
From IIS Manager for a given site, under IIS, click Logging, and in the Actions pane click Enable if logging isn't already enabled.
From IIS Manager for a given site, under IIS, click Logging, configure as desired, and click Apply.
Install LogParser, .NET 4.x and LogParserStudio (if you need additional steps, see here).
Open LogParserStudio and add your logs to it; you can then use SQL queries to pull information out of the log files.
For Fiddler
You need to change the user that IIS runs as to one that can launch applications such as Fiddler (instead of Network Service), and then launch Fiddler as that user.
Also see Monitor Activity on a Web Server (IIS 7) for further information.

Related

ColdFusion 2018 - Requests Multiply Executed

With a new project we encountered some strange behaviour in our ColdFusion application. Whenever a single request is initiated from the browser, the code of the CFML templates is executed multiple times. Upon viewing the corresponding log files we found that the same request does indeed fire the evaluation in our application multiple times: one request generates several entries. This is especially the case for long-running requests, such as database imports.
The ColdFusion application implements a REST service, but even when manually requesting a resource, such as a certain CFML page, in the same application, the code gets executed an unknown number of times (variable initializations, database write operations etc. take place), and if the request runs too long (capped at around ~4-6 seconds) there is no response to the browser.
About the infrastructure:
The application is ColdFusion 2018 Standard Edition with Tomcat
The web server is Apache (2.4.6).
Everything runs on a Linux machine with CentOS 7.7
The corresponding Java version is 11.0.4
Our best guess is that there might be some miscommunication between the ColdFusion connector and the Apache web server. We searched for configuration parameters that could cause the problem, without success. On a Windows installation we did not encounter this error.
Anyone got any idea?
We just found our answer in the following post:
Link to Solution

"The search engine appears to be down or failing to respond to the search query"

I've installed FusionAuth (awesome product) into a Docker Swarm cluster using the official docker-compose.yml file and everything seems to work brilliantly.
EXCEPT
Periodically, when a user goes to login they will be presented with the above error stating that the search engine is not available. If they try again immediately then everything works correctly! I would, obviously, prefer that they never saw the error.
Elasticsearch is definitely running and is responding to API calls correctly, and I can see the fusionauth_user index is present and populated with docs.
I guess my question is two fold:
1) What role does the ElasticSearch engine play in the FusionAuth ecosystem and can it be disabled?
2) Is there a configurable timeout somewhere that is causing the error message and, if so, where can I change it?
I've searched the docs for answers to the above but I can't seem to find anything :-(
Thanks for the kind feedback.
1) What role does the ElasticSearch engine play in the FusionAuth ecosystem and can it be disabled?
Elasticsearch provides full text search of user data. Each time a user is created or updated the user is re-indexed. In this case during login, we are updating the search index with the last login instant.
This service is required and cannot be disabled. We have had clients request to make this service optional for embedded applications or small scale scenarios where Elasticsearch may not be required. While this is not currently in plan, it is possible we may revisit this option in the future.
2) Is there a configurable timeout somewhere that is causing the error message and, if so, where can I change it?
Not currently.
Full disclosure, I am not a Docker or Docker Swarm expert at all - perhaps there are some nuances to Swarm and response time due to spin up and spin down of resources?
Do you see any exceptions in the log when a user sees this error on the login?

Want to deploy WCF web service on Azure platform

I want to deploy my WCF web service on the Azure platform.
I have created a storage account for my website, and also created a cloud service and uploaded my package file and config file to the staging site.
But after uploading, the following message is displayed:
'Your staging deployment is starting. Hang on, the page will refresh once the deployment begins.'
I have been waiting for 2-3 hours and am not getting the desired output.
Am I doing this correctly? Or is there anything that I forgot?
Please Help...!
Most likely there is a problem in your code or the packaging that is causing the role to continuously restart. This is a fairly common problem, but there are a lot of possible causes (missing an assembly reference, an uncaught exception, the Run() method is exiting, a Startup Task is failing, or many other things). You need to gather more information to know exactly what the problem is and how to fix it.
There are many threads here on SO about this topic. There's also a Microsoft post discussing how to diagnose this type of issue. Those are good places to start.

Debugging HTTP 500 (Internal Server Error) in WCF Service / Staging

I have some WCF services which are running great locally; client can consume them and the server is putting data in the DB as expected. The problem is that when I deploy these to a staging machine, all I can see are HTTP 500 errors.
How do I start debugging the problem?
Given that it's only on staging and not on my local dev machine, I assume it's an IIS configuration problem somewhere.
When I use Fiddler to see what's being sent and what the response is, I can see (as expected) correct request data, and only a 500 as the response -- no further details.
I'm pretty green with WCF and IIS, so it's probably something obvious; I've used aspnet_regiis, deployed my .svc file and all the built DLLs/files from bin; maybe I missed something.
I looked in the IIS logs, but they're pretty skimpy; no error information there, either (or maybe I'm looking in the wrong place?)
(More important than solving the specific problem is figuring out how to see enough details about errors so I can work through problems myself.)
Edit: I of course checked the event logs first -- and surprisingly, didn't find any mention of the exceptions. So I assume that the service is at least being invoked, and that something is faulting in the middle.
The first place to look for errors is the event log on the server. There should be basic information about why the request was not processed. If it is WCF related, you can turn on WCF tracing and check the generated logs for more details.
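For reference, WCF tracing is typically switched on with a system.diagnostics section along these lines in the service's web.config (the log path and switch value are illustrative); the resulting .svclog file can be opened with the Service Trace Viewer (SvcTraceViewer.exe):
<system.diagnostics>
  <sources>
    <source name="System.ServiceModel" switchValue="Warning, ActivityTracing" propagateActivity="true">
      <listeners>
        <add name="xmlTraceListener" type="System.Diagnostics.XmlWriterTraceListener" initializeData="C:\logs\WcfTraces.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>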
In web.config, under <system.webServer>, add:
<httpErrors errorMode="Detailed" />
and you will see what's happening in more detail.
'HTTP 500 Internal Server Error' might also occur if your service account's password has expired. Please make sure that there are no issues with the service account that runs the app pool in IIS.
It turns out that the server was returning a 500 because of a huge dataset being returned; WCF puts some limitations on the size of data (and strings) you can return, to prevent DoS attacks. I solved the problem by increasing the limits and decreasing the size of the data returned (where applicable).
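For anyone hitting the same thing, the limits in question are typically the binding quotas. A hedged sketch for basicHttpBinding (the binding name and 10 MB values are illustrative, and the binding type must match the one your endpoint actually uses); note that maxReceivedMessageSize limits incoming messages, so for large responses it is the client's binding configuration that usually needs raising:
<bindings>
  <basicHttpBinding>
    <binding name="LargeMessageBinding" maxReceivedMessageSize="10485760">
      <readerQuotas maxStringContentLength="10485760" maxArrayLength="10485760" />
    </binding>
  </basicHttpBinding>
</bindings>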

WCF Services not working from Silverlight Application after Deploying

Okay I have seen some very similar questions here but none seem to be answered to my liking. I have created a Silverlight application that calls a couple of services to populate various comboboxes from the database. I got this working without too much trouble on my local machine.
So now I want to deploy it to our web server. It was relatively straightforward to get IIS7 to load the Silverlight application. However, none of my services seem to be working properly, in that the comboboxes are empty. In IE I get the following error:
Message: Unhandled Error in Silverlight Application An exception occurred during the operation, making the result invalid. Check InnerException for exception details. at System.ComponentModel.AsyncCompletedEventArgs.RaiseExceptionIfNecessary()
at MyTestPage.ViewModel.MyService.GetInfoCompletedEventArgs.get_Result()
at MyTestPage.ViewModel.MainPageViewModel.b__2(Object s, GetInfoCompletedEventArgs ea)
at MyTestPage.ViewModel.MyService.MyServiceClient.OnGetInfoCompleted(Object state)
Line: 1
Char: 1
Code: 0
URI: http://www.mywebsite.com/MyTestPage.aspx
My problem is that this error only occurs when deploying on the webserver and I have no clue how to debug this problem. The error says to check the InnerException but I haven't found an answer yet (after hours of searching) that tells me how I should do this.
I have tried browsing to the services and I am able to do so using the domain name, i.e. http://test.myserver.com/Services/MyService.svc. However, when logged onto the server and using http://localhost:3456/Services/MyService.svc - which is the path in the ServiceReferences.ClientConfig file - it cannot be found.
Some answers here seem to suggest using a clientaccesspolicy.xml file, but I don't understand why this should be necessary if the services are hosted on the same server as the application - it isn't required when debugging on my local machine. Despite my reservations I have tried adding a clientaccesspolicy.xml file to the root of the application, but this doesn't make any difference.
So I have a couple of questions:
1) How do I get access to the InnerException when I am running the application on the webserver? Is there a specific log file I can view or turn on?
2) If, for some reason, I am trying to access the service in a cross-domain fashion (even though the services are located on the same server), how do I configure the application so that this isn't required?
UPDATE:
Ok, I was able to get the tracing to work. I can now see the trace details on the page when it loads, but it doesn't really tell me anything useful. I have also added the option to write the details to disk. Initially this file wasn't being written and I couldn't understand why. Then I noticed that refreshing my Silverlight application was not triggering a write to the log; it was only when I manually browsed to the services that the log file was updated. This seems to indicate that my Silverlight application is not hitting the services at all (for some reason). I tried cutting out the ViewModel object and hitting the service directly from the XAML code-behind file, but this didn't make any difference either.
At this point after spending more than two days trying to figure this out, I am thinking about starting again from scratch.
To my mind it shouldn't be this difficult to deploy something that works on a development machine to a web server.
I pretty much gave up on my initial approach. I had another go, following along with this video: http://www.silverlight.net/learn/videos/all/net-ria-services-intro/. It uses Domain Services instead of WCF services, and it was actually fairly straightforward to get it going on the web server. The example is two years old now, so maybe there are better ways to do this (I am open to suggestions), but at least it worked within an hour of trying it (compared to 2.5 days and getting nowhere).