I've configured SiteScope to monitor counters on a WCF web service (.NET 3.5). This raises a warning when the main service call exceeds a predefined threshold.
I'm trying to figure out the best way to log the service with greater granularity: I need some way to record the timing of each call made within the web service so I can see what may be causing the problem.
One way of doing this is to use log4net to log the timing of each call (e.g. to the database) made within the web service. Or maybe I should write the timings to a database? Whatever the solution, I don't want it to noticeably affect the timing of the web service itself.
Has anyone had to implement a similar solution? How should I design it so that it logs/saves only a sample of detailed web service calls rather than every call?
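For what it's worth, here is a minimal sketch of the sampling idea I have in mind, assuming log4net; the CallTimer helper, logger name, and sample rate are all made up:

    using System;
    using System.Diagnostics;
    using log4net;

    public static class CallTimer
    {
        private static readonly ILog Log = LogManager.GetLogger("ServiceTimings");
        private static readonly Random Rng = new Random(); // note: Random isn't thread-safe; illustrative only
        private const double SampleRate = 0.01; // log roughly 1% of calls

        public static T Time<T>(string callName, Func<T> call)
        {
            // Decide up front whether this call is in the sample, so unsampled
            // calls pay only for the Stopwatch, never for logging I/O.
            bool sampled = Rng.NextDouble() < SampleRate;
            var sw = Stopwatch.StartNew();
            try
            {
                return call();
            }
            finally
            {
                sw.Stop();
                if (sampled)
                    Log.InfoFormat("{0} took {1} ms", callName, sw.ElapsedMilliseconds);
            }
        }
    }

    // Usage inside the service, e.g.:
    // var customer = CallTimer.Time("GetCustomer", () => repository.GetCustomer(id));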
Thanks for your time.
I have a Windows Phone 8 application that communicates with a WCF service using basicHttpBinding. The service is hosted on IIS 7 (not on Windows Azure).
As the service may go down for any reason, I am exploring the use of message queues to increase the reliability of the system.
I have looked at the NetMsmqBinding provided in WCF, but it looks like this binding is not supported by WP8 clients.
I am also looking at using RabbitMQ, but cannot find any working example of a WP8 client using it with WCF.
Please can anyone suggest the best way forward? Any sample code (or links) would be much appreciated.
Thanks
First off, netMsmqBinding cannot be used across the internet, because it relies on MSMQ, which is not exposed over HTTP.
When you're making calls to a resource across the internet, unreliability is something you need to factor into your application. Given the number of possible failure modes, it's generally not a case of if but when a failure occurs, and how your application deals with it is what matters.
Even so, there are things you can do to minimize the reliability issues you experience, one of which does involve queuing.
Where queuing is useful is in taking large, complex, long-running processes offline. Because synchronous calls to such processes often time out, you can gain a lot of reliability by making the actual processing call asynchronous.
As an example, it is fairly common to have the web server invoke the offline process via message queuing and immediately tell the client that its request is being processed. Because posting to a queue is inexpensive, these calls are far less likely to fail. Your problem then becomes how to return the response to the client once the offline processing has been done.
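As a rough sketch of that hand-off, assuming a local private MSMQ queue and System.Messaging (the queue path, string payload, and the client-side polling story are all assumptions):

    using System;
    using System.Messaging;

    public class JobSubmitter
    {
        // Assumed local private queue; create it at install time.
        private const string QueuePath = @".\private$\offlineJobs";

        public Guid Submit(string payload)
        {
            var jobId = Guid.NewGuid();
            using (var queue = new MessageQueue(QueuePath))
            {
                // Sending to a local queue is cheap and succeeds even when the
                // downstream processor is busy or temporarily down.
                queue.Send(new Message { Body = payload, Label = jobId.ToString() });
            }
            // Hand the correlation id back so the client can poll for
            // (or be notified of) the result later.
            return jobId;
        }
    }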
I am working on a project in which I want to use a Windows Workflow 4 state machine. The Visual Studio solution templates and most guidance seem to steer everything towards hosting the workflow as a service in IIS, created dynamically from send and receive activities within the workflow.
However, I would prefer not to use the send and receive activities and instead host the workflow in my own WCF service. That would let me use a Windows service instead of IIS, use other bindings like TCP instead of HTTP, and define my own interface instead of exposing MEX. It would also be portable to any other hosting arrangement, such as a WPF app, a console application, or whatever.
This feels a lot more flexible to me. Having service operations as part of the workflow seems like pretty tight coupling of two things that aren't closely related. Is there any downside to my approach? I'm new to WF, so I might be missing something.
Depending on the kind of workflows you are running, you might need to write quite a bit of plumbing code that workflow services would otherwise provide for you.
Things to consider:
Are your workflows long-lived?
Are you sending multiple messages to the same workflow?
Do your workflows need to survive a host restart?
Are you using Delay activities to respond to timeouts?
Do you need to be able to retry actions after errors?
Lots of these things are taken care of automatically by a WF service and need your attention otherwise. It is certainly doable, and I have done it in the past, but be aware of what you are losing.
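To give a feel for that plumbing, here is a minimal sketch of self-hosting with WorkflowApplication and the SQL instance store; MyStateMachine and the connection string are placeholders:

    using System;
    using System.Activities;
    using System.Activities.DurableInstancing;

    class Program
    {
        static void Main()
        {
            var app = new WorkflowApplication(new MyStateMachine()); // MyStateMachine is your workflow

            // Persistence: WorkflowServiceHost wires this up from config;
            // self-hosting means you configure the store yourself.
            app.InstanceStore = new SqlWorkflowInstanceStore(
                "Server=.;Initial Catalog=WFInstanceStore;Integrated Security=True");

            // Unload idle instances so Delay activities and host restarts are survivable.
            app.PersistableIdle = e => PersistableIdleAction.Unload;

            // Error handling and retry policy are also yours to implement.
            app.OnUnhandledException = e => UnhandledExceptionAction.Abort;

            app.Run();
            Console.ReadLine(); // keep the host alive while the workflow runs
        }
    }

Reloading a persisted instance to deliver a second message (what a correlated receive activity does for you) is similarly manual with this approach.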
I would like to create a service whose job is to monitor other services running within the same process and then report basic information like health or service dependencies. I'm having trouble figuring out how the monitoring service can access detailed information about the other services without requiring each service to publish its metadata or expose some custom endpoint the monitoring service can communicate with. If I load the configuration and read through it I can get most of the way there, but this approach has a few weaknesses:
Getting the absolute URI for each endpoint can be difficult, especially when using IIS hosting or fileless activation.
Any configuration that was done programmatically would not be visible to the monitoring service.
What I'd like is to somehow access the ServiceDescription to get all the information I need about each ServiceHost, without requiring any work on the part of the service designer to hand it to me. Is something like this possible?
If you've checked Channs' links and are convinced you need to roll your own health monitoring infrastructure, you'll probably need to derive from ServiceHost, or go all out and derive from ServiceHostFactoryBase, or possibly both, depending on what you need to implement. These give you access to the ServiceDescription instance for each service as it is spun up.
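As a rough sketch of the ServiceHost route (MonitorRegistry is a hypothetical stand-in for wherever you collect the data):

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Activation;
    using System.ServiceModel.Description;

    public class MonitoredServiceHost : ServiceHost
    {
        public MonitoredServiceHost(Type serviceType, params Uri[] baseAddresses)
            : base(serviceType, baseAddresses) { }

        protected override void OnOpened()
        {
            base.OnOpened();
            // Description exposes the endpoints (with resolved addresses),
            // behaviors, and service type, with no cooperation needed from
            // the service implementation itself.
            ServiceDescription desc = this.Description;
            MonitorRegistry.Register(desc); // hypothetical collection point
        }
    }

    // For IIS hosting or fileless activation, plug the host in via a factory:
    public class MonitoredHostFactory : ServiceHostFactory
    {
        protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
        {
            return new MonitoredServiceHost(serviceType, baseAddresses);
        }
    }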
One alternative would be to use WCF's built-in health monitoring and performance monitoring capabilities, though these work at the individual service level.
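If you go that route, the performance counters are switched on in the host's config file; a minimal sketch (the service-level counters then show up under the WCF categories in Performance Monitor):

    <system.serviceModel>
      <diagnostics performanceCounters="ServiceOnly" />
    </system.serviceModel>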
We have a WCF service using wsHttpBinding. When it receives many requests in a short period of time (25 per second for a few minutes), it stops working and causes our other ASP.NET applications and pages to stop responding as well. Some of them time out, and eventually we see the following in the Event Viewer:
ISAPI 'c:\windows\microsoft.net\framework\v2.0.50727\aspnet_isapi.dll' reported itself as unhealthy for the following reason: 'Deadlock detected'.
Often we get calls about the problem first and restart IIS to solve the problem.
How can we configure our WCF service to handle this volume of requests, or at least configure it not to take down our other applications when it can't handle the load? Our classic ASP applications run without issues during this time; it's only our .NET apps that are affected.
Are you running all your ASP.NET/WCF sites in the same AppPool? If so, I'd suggest creating a new one and running the WCF service in it by itself. That in itself might be enough to solve the problem from a practical perspective.
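If you're on IIS 7 or later, that's a one-off change, e.g. with appcmd (the pool, site, and application names here are placeholders):

    %windir%\system32\inetsrv\appcmd add apppool /name:"WcfPool"
    %windir%\system32\inetsrv\appcmd set app "Default Web Site/MyWcfService" /applicationPool:"WcfPool"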
Also, can you target a more recent version of the framework with your WCF app (and leave the other apps the same)? That will isolate it much better.
I'm developing a web application that needs to perform a task that consumes a lot of CPU and memory and may also last several minutes. To provide a better user experience, I developed a Windows service hosting a WCF service that performs this "high cost" task and communicates with the web app using MSMQ (message queues).
This worked great until I ran a load test... the Windows service starts consuming a lot of resources, putting the CPU at 100% and using more than 1 GB of memory. I've looked for optimizations and done a lot of tweaks to the code, and I think it is quite efficient, but the task simply requires a lot of resources.
The problem is that while the WCF service is working, the CPU is pegged at 100% and the web app becomes INCREDIBLY SLOW! I don't mind if the task the WCF service performs takes a couple of minutes longer, but I want the web app to perform well for users.
So I'm wondering if there is a way to limit the resources that the WCF service can consume, giving priority to the web app.
Thanks in advance.
Juan
The easy solution would be to place the WCF service on a different machine.
The fact that the service is using a lot of CPU is probably not related to your use of WCF.
There are some ways you may be able to improve the performance of your web app:
Process only one message at a time.
Break the jobs into smaller parts.
Set the priority of the Windows service to Below Normal in Task Manager (or from code, as in the sketch after this list).
Install more RAM on the server.
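For the priority suggestion, a sketch of setting it from the service itself at startup, so it sticks across restarts (WorkerService is a placeholder name):

    using System.Diagnostics;
    using System.ServiceProcess;

    public partial class WorkerService : ServiceBase
    {
        protected override void OnStart(string[] args)
        {
            // Same effect as the Task Manager tweak, but applied automatically
            // on every start: when both processes are busy, the web app's
            // worker process wins the CPU.
            Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.BelowNormal;

            // ... start the WCF/MSMQ listener here (omitted)
        }
    }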
I think this is a problem with your Windows service design. When you decide to host WCF in a Windows service, you have to control resource utilization yourself, which means you have to control throttling. You need to build configurable control over the service's internal processing so that you can adjust the load based on available resources. If you host WCF in IIS, it already provides such control at the AppPool level.
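A minimal sketch of that kind of control when self-hosting, using ServiceThrottlingBehavior (HighCostService is a placeholder, and the single-call limit is just an example; in practice you would read these values from config so they stay adjustable):

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    class Host
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(HighCostService)); // placeholder service type

            var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
            if (throttle == null)
            {
                throttle = new ServiceThrottlingBehavior();
                host.Description.Behaviors.Add(throttle);
            }

            // Process one message at a time so a burst of queued work
            // can't saturate the box.
            throttle.MaxConcurrentCalls = 1;
            throttle.MaxConcurrentInstances = 1;

            host.Open();
            Console.ReadLine();
        }
    }

In the Windows service scenario this setup would live in OnStart rather than Main, but the behavior wiring is the same.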
There are some freeware tools that allow limiting CPU usage for a given process, but that is not something I would recommend for production use.
Best regards, Ladislav