So I've recently figured out how to turn on WCF tracing, which creates an svclog file on my server, and this is awesome. The problem is that I don't have direct access to the server where my web service runs, so every time I need to look at the trace file I have to get someone else at my office to log onto the server for me, then upload the trace file to a cloud server because it's usually too big to email, and then dig through weeks' worth of requests to find what I want to analyze.
Long story short: is there a way with trace logging to have every trace that gets stored emailed to me?
The answer appears to be no: there is no built-in way to email traces from the svclog. There are other ways to accomplish the same goal, though.
You can set up a custom trace listener to log to a database or send email. log4net and NLog have some of those listeners already built in. I've used both and prefer NLog over log4net because NLog has newer functionality and better support.
If you want to create your own from scratch, look at this guy's solution for an example of how to do it.
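To give a flavour of the from-scratch approach, here is a minimal sketch (my own, not the linked solution) of a custom trace listener that emails each trace entry. The SMTP host and addresses are placeholders, and you still have to register the listener under system.diagnostics in web.config:

using System.Diagnostics;
using System.Net.Mail;

public class EmailTraceListener : TraceListener
{
    public override void Write(string message)
    {
        WriteLine(message);
    }

    public override void WriteLine(string message)
    {
        // Placeholder SMTP host and addresses - replace with your own values.
        var client = new SmtpClient("smtp.example.com");
        client.Send(new MailMessage(
            "traces@example.com",   // from
            "me@example.com",       // to
            "WCF trace entry",      // subject
            message));              // body is the raw trace text
    }
}

Be aware that mailing every single trace entry will flood an inbox very quickly; batching entries, or only mailing warnings and errors, is usually more practical.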
What would be the best way to debug Parse Cloud Code? Currently it's a mess of logging to the console and checking logs. Does anyone have a good workable solution?
During development, you should begin by testing against a locally hosted server. For example, I use VS Code, where you can set breakpoints and watch variables for their values. You can also set up a tool like ngrok to get a remote URL for your local endpoint, so you can test with non-locally-hosted clients if you'd like.
We also use Slack extensively. We've created our own Slack bot, and it has several channels it reports relevant information to, triggered from our parse-server. One of these is a dev error channel. Instead of console.logs, which are hard to sift through to find what you're looking for, we push the important information to Slack. We don't switch every single console.log to a Slack message, just the important "hey, something went wrong, here's the information" messages. This brings them to our attention so we can identify and resolve them much faster. Slack is awesome; I recommend it even on a solo project.
At the moment you can access your logs in two ways. console.log() and console.error() go to the general server log of everything that happens with your app; at Back4App you can access it via Server Settings -> Logs -> Settings -> Server System Log.
For the logs that Parse Server generates from within your functions, use request.log.info() and request.log.error(); at Back4App you can access those via Dashboard -> Logs.
Our users are restless. They keep complaining about woolly, unmeasurable stuff, particularly slowness, without giving specifics, which of course makes it very difficult to track down.
Nonetheless, it is quite possible that they are right, that there are server calls that are taking way too long to come back. So I want to put some kind of sniffer on the web site (we're using ASP.NET MVC 4 on IIS7) that will log any call that takes more than n seconds to turn around, or that returns more than x megabytes of data, along with all request parameters, the response size, and maybe a certain amount of response data.
I haven't a clue how to do this, though. Any suggestions?
Here is my take on this:
FRT
While you can use Failed Request Tracing to log slow requests, in my experience it is more useful for finding out why a request fails before it hits your application than for finding out why it's running slowly. Nine times out of ten it will simply show you that the slowdown is somewhere in your code.
Log Parser
Yes, you can download and analyze the IIS logs. I use Log Parser Lizard to do the analysis - it's a great GUI over Log Parser. Here's a sample of how you might query for requests with a time-taken of more than 1000 ms (time-taken is recorded in milliseconds):
SELECT
To_String(To_timestamp(date, time), 'dd/MM/yyyy hh:mm:ss') As Time,
cs-uri-stem, cs-uri-query, cs-method, time-taken, cs-bytes, sc-status
FROM
'C:\inetpub\logs\LogFiles\W3SVC1\u_ex140721.log'
WHERE
time-taken > 1000
ORDER BY time-taken desc
New Relic
My recommendation - go easy on yourself and sign up for a free trial. No, I don't work for them, but I've used their APM product a lot. Install the agent on the server and set it up. In 10 minutes you will be amazed at the data you see about the site. Trust me.
It's designed to work in production environments and gives you amazing depth of information on what's running slowly, down to the database query and stack traces. It's pure awesome. Once it's set up, wait for the next user complaint, log in and look at the traces for that time frame.
When your pro trial ends you can still get valuable data on the free tier, but it will only keep the last 24 hours. We purchased licenses - expensive, yes, but worth every cent. Why? The time taken to identify root causes was reduced by an order of magnitude, we can be proactive by looking at what is number 2, 3 and 4 on the slow-requests list and working on those before they become big problems, and the alerting makes us much more responsive when things go wrong.
Code it
You could roll your own. This blog uses MVC ActionFilters to do the logging; you could also use an HttpModule similar to this post. The nice thing about the HttpModule approach is that you can compile and implement the module separately from your application, then just drop in the DLL and update web.config to wire it up. I would be wary of these approaches on a very busy site, and getting the right level of detail to fully identify the root cause is challenging. A minimal sketch of the action-filter idea is shown below.
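To illustrate the action-filter approach, here is a minimal sketch of my own (not the blog's code); the 1000 ms threshold and the attribute name are arbitrary:

using System.Diagnostics;
using System.Web.Mvc;

public class SlowRequestLogAttribute : ActionFilterAttribute
{
    private const int ThresholdMs = 1000; // arbitrary definition of "slow"

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Start timing when the action begins.
        filterContext.HttpContext.Items["RequestStopwatch"] = Stopwatch.StartNew();
    }

    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        var stopwatch = filterContext.HttpContext.Items["RequestStopwatch"] as Stopwatch;
        if (stopwatch == null) return;

        stopwatch.Stop();
        if (stopwatch.ElapsedMilliseconds > ThresholdMs)
        {
            // Log to whatever sink you prefer; Trace is used here for simplicity.
            Trace.TraceWarning("Slow request: {0} took {1} ms",
                filterContext.HttpContext.Request.RawUrl,
                stopwatch.ElapsedMilliseconds);
        }
    }
}

Register it once in Global.asax with GlobalFilters.Filters.Add(new SlowRequestLogAttribute()); and every action gets timed without touching the controllers.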
View Requests
As touched on by Appleman1234, IIS has a little-known feature for looking at the requests that are currently executing. It's handy for the 'hey, it's running slow right now' situation. You can use appcmd.exe (for example, appcmd list requests /elapsed:3000 lists requests that have been executing for more than three seconds) or the IIS GUI to do it. You will need to install the 'Request Monitor' IIS feature for this to work. This approach is OK for rudimentary narrowing of the problem, but it does not show you what is running slowly inside your controller.
There are various ways you can do this:
Failed Request Tracing (FRT) – formerly known as Failed Request Event Buffering (FREB) – with a custom failure condition of taking longer than a certain time to run
Logging request information with IIS logging functionality and then using a tool like LogParserStudio
Using tools like Fiddler or IISMonitor on the IIS server to capture request information
For FRT the official documentation is available here, and information on how to capture dumps for long-running processes is available here.
For logging request information in IIS, information about log file analysis is located here.
For information on configuring Fiddler to capture IIS requests, see here.
A summary of the steps in the linked resources is provided below.
For FRT
From IIS Manager for a given site, in the Actions pane, under Configure, click Failed Request Tracing and enter the desired values in the dialog box to enable Failed Request Tracing.
From IIS Manager for a given site, under IIS, click Failed Request Tracing Rules to define the failure rules for a given request. In the Actions pane, click Add and follow the wizard.
The logs will go into the directory you specify and are viewable in a web browser.
For IIS logging
Logging is enabled by default on IIS
From IIS Manager for a given site, under IIS, click Logging, and in the Actions pane click Enable to turn on logging if it isn't already.
From IIS Manager for a given site, under IIS, click Logging, then configure as desired and click Apply.
Install Log Parser, .NET 4.x and LogParserStudio (if you need additional steps, see here).
Open LogParserStudio and add your logs to it; you can then use SQL queries to pull information from the log files.
For Fiddler
You need to change the user that IIS runs as to a user that can launch applications, like Fiddler (instead of Network Service), and then launch Fiddler with that user.
Also see Monitor Activity on a Web Server (IIS 7) for further information.
I want to deploy my WCF web service on the Azure platform.
I have created a storage account for my website, and also created a cloud service and uploaded my package file and config file to the staging site.
But while uploading, the message displays:
'Your staging deployment is starting. Hang on, the page will refresh once the deployment begins.'
I have been waiting for 2-3 hours and am not getting the desired output.
Am I doing this correctly? Or is there anything that I forgot?
Please Help...!
Most likely there is a problem in your code or the packaging that is causing the role to continuously restart. This is a fairly common problem, but there are a lot of possible causes (a missing assembly reference, an uncaught exception, the Run() method exiting, a startup task failing, or many other things). You need to gather more information to know exactly what the problem is and how to fix it.
There are many threads here on SO about this topic. There's also a Microsoft post discussing how to diagnose this type of issue. Those are good places to start.
Okay I have seen some very similar questions here but none seem to be answered to my liking. I have created a Silverlight application that calls a couple of services to populate various comboboxes from the database. I got this working without too much trouble on my local machine.
So now I want to deploy it to our web server. It was relatively straightforward to get IIS7 to load the Silverlight application. However, none of my services seem to be working properly, in that the comboboxes are empty. In IE I get the following error:
Message: Unhandled Error in Silverlight Application An exception occurred during the operation, making the result invalid. Check InnerException for exception details. at System.ComponentModel.AsyncCompletedEventArgs.RaiseExceptionIfNecessary()
at MyTestPage.ViewModel.MyService.GetInfoCompletedEventArgs.get_Result()
at MyTestPage.ViewModel.MainPageViewModel.b__2(Object s, GetInfoCompletedEventArgs ea)
at MyTestPage.ViewModel.MyService.MyServiceClient.OnGetInfoCompleted(Object state)
Line: 1
Char: 1
Code: 0
URI: http://www.mywebsite.com/MyTestPage.aspx
My problem is that this error only occurs when deploying on the webserver and I have no clue how to debug this problem. The error says to check the InnerException but I haven't found an answer yet (after hours of searching) that tells me how I should do this.
I have tried browsing to the services and I am able to do so using the domain name, i.e. http://test.myserver.com/Services/MyService.svc. However, when logged onto the server and using http://localhost:3456/Services/MyService.svc - which is the path in the ServiceReferences.ClientConfig file - it cannot be found.
Some answers here seem to suggest using a clientaccesspolicy.xml file, but I don't understand why this should be necessary if the services are hosted on the same server as the application - it isn't required when debugging on my local machine. Despite my reservations I have tried adding a clientaccesspolicy.xml file to the root of the application, but this still doesn't make any difference.
So I have a couple of questions:
1) How do I get access to the InnerException when I am running the application on the webserver? Is there a specific log file I can view or turn on?
2) If, for some reason, I am trying to access the service in a cross domain fashion (even though they are located on the same server) how do I configure the application so that this isn't required?
UPDATE:
OK, I was able to get the tracing to work. I can now see the trace details on the page when it loads, but it doesn't really tell me anything useful. I have also added the option to write the details to disk. Initially this file wasn't being written and I couldn't understand why. Then I noticed that refreshing my Silverlight application was not triggering a write to the log; it was only when I manually browsed to the services that the log file was updated. This seems to indicate that my Silverlight application is not hitting the services at all (for some reason). I tried cutting out the view model object and hitting the service directly from the XAML code-behind file, but this didn't make any difference either.
At this point after spending more than two days trying to figure this out, I am thinking about starting again from scratch.
To my mind it shouldn't be this difficult to deploy something that works on a development machine to a web server.
I pretty much gave up on my initial approach. I had another go following along with this video: http://www.silverlight.net/learn/videos/all/net-ria-services-intro/. It uses Domain Services instead of WCF services, and it was actually fairly straightforward to get going on the web server. The example is two years old now, so maybe there are better ways to do this (I am open to suggestions), but at least it worked within an hour of trying it (compared to 2.5 days of getting nowhere).
I have a silverlight project that calls into a wcf service. Everything works fine on my local machine.
However when I deploy to a virtual machine, with the exact same query the wcf service returns, but the result is empty.
I've tried debugging, but have not been able to get it to break in the wcf service.
Any ideas what the problem could be, or how I could go about debugging it?
Thanks
I figured out what the problem is, but am not sure what the solution is.
In my silverlight project the wcf service I am referencing is http://localhost/.../SilverlightApiService.svc
I used Fiddler on my VM to see the request that was made, and instead of trying to contact the above service, it was trying to contact:
http://<my-machine-name>/.../SilverlightApiService.svc
So, for some reason my machine name is getting inserted in there instead of localhost. Any thoughts on this would be appreciated.
I had this exact problem when deploying to Amazon EC2 - the machine name for the service was being returned in the WSDL rather than the DNS name.
There were a couple of solutions (one involved creating a static WSDL - yuck!).
The other was creating a sort of factory pattern for the service.
See this thread (you can read it all, but the answers are at the bottom):
http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/c7fd51a2-773e-41d4-95a0-244e925597fe/
The slight downfall with this is that although it works, if you change the location of the server you will need to remember to update your config - which, although not hard, is easy to forget to do.
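For illustration, here is roughly how such a factory might look. This is my own sketch, not the thread's exact code, and the "PublicHostName" appSetting is just a made-up name for wherever you keep the public DNS name:

using System;
using System.Configuration;
using System.ServiceModel;
using System.ServiceModel.Activation;

public class PublicHostServiceHostFactory : ServiceHostFactory
{
    protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
    {
        // Swap the machine-name host in each base address for the public DNS name
        // taken from config, so the generated WSDL advertises the right address.
        string publicHost = ConfigurationManager.AppSettings["PublicHostName"];
        for (int i = 0; i < baseAddresses.Length; i++)
        {
            var builder = new UriBuilder(baseAddresses[i]) { Host = publicHost };
            baseAddresses[i] = builder.Uri;
        }
        return base.CreateServiceHost(serviceType, baseAddresses);
    }
}

You then point the .svc file at it via its Factory attribute, and the config value is exactly the thing you have to remember to update when the server moves.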
Can you give us a bit more information? What kind of binding are you using? What do the service config and the client config look like? Where does the data that gets returned come from? Could it be that the service on the VM just doesn't get any data (e.g. it queries a database that doesn't have the requested data)?
Marc
I have had that happen before. I would try this: set your start page to the web service file and run the app, then set the start page back to your default page. Then update all the service references in your SL project, recompile everything and republish. This has helped me a bunch of times in the past.
I figured it out.
Basically my machine name was hard coded in my ServiceReferences.ClientConfig file in my silverlight project.
What I had to do was specify programmatically which URL to use for the service reference when instantiating my service client:
// Build the endpoint address relative to the URL the XAP was downloaded from
// (Application.Current.Host.Source), so no machine name is hard-coded.
System.ServiceModel.EndpointAddress address = new System.ServiceModel.EndpointAddress(
    new Uri(Application.Current.Host.Source, "../WebServices/SilverlightService.svc"));
ServiceClient serviceClient = new ServiceClient("BasicHttpBinding_IService", address);