I'm working on my fourth or fifth implementation of a WCF service over MSMQ with IIS/WAS activation, and I have never been able to make it work properly. It's always the same story: my services are activated only if the IIS web site is interacted with in some other way (like serving the service metadata page at /somewhere/myService.svc). If the only thing happening is messages being sent into the queue, my services stop processing messages, and restart as soon as I visit the .svc page...
This pattern is so common for me that I have also come to a common workaround: scheduling a job (every few minutes) that runs a PowerShell script which accesses that page. Quite simple, but not very elegant, and, furthermore, unnecessary in theory.
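To be concrete, a minimal sketch of such a keep-alive script, assuming the placeholder path above; System.Net.WebClient keeps it usable on the older PowerShell versions shipped with the Windows Server 2008-era machines mentioned below, and a scheduled task runs it every few minutes.

    # Keep-alive sketch: request the service metadata page so IIS/WAS spins the host back up.
    # The URL is a placeholder matching the example path above.
    $url = "http://localhost/somewhere/myService.svc"
    $client = New-Object System.Net.WebClient
    $client.UseDefaultCredentials = $true
    try {
        [void]$client.DownloadString($url)
        Write-Output ("{0:s} keep-alive OK" -f (Get-Date))
    }
    catch {
        Write-Output ("{0:s} keep-alive failed: {1}" -f (Get-Date), $_.Exception.Message)
    }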
This happened across different IIS versions (7.0 and 7.5), various Windows Server 2008 service packs and releases, and with servers in AD domains or workgroups. I think I've read every bit on the web about this, especially MSDN and Microsoft employees' blogs, so binding configuration, MSMQ permissions, and all the other small details you can discover here and there are set up correctly.
So the question: has anybody been successful with WAS activation over MSMQ?
Related
I'm using Simple Injector in my WCF service. While running it from VS2010 everything is fine. However, when I publish it to my server under IIS 7, after some time (20 minutes, I counted) my WCF service loses all the assemblies, modules, and classes registered in the container.
I guess IIS recycles the WCF Service Application Pool and drops my container registrations.
Can anyone help me on this?
While there are many legitimate cases for self-hosting WCF services, turning to self-hosting just because of IIS recycling may be counterproductive.
Hosting in IIS gives you a lot of benefits during development and daily operations, and I am not going to repeat benefits you could easily find with a Google search.
When IIS receives the first request to your application, it launches a worker process named "w3wp.exe" according to the settings of the application pool associated with your web app. By default IIS will shut that process down after 20 minutes of idle time. Check the Advanced Settings of the application pool and you will find a lot of settings for the life cycle. You won't get such flexibility and robustness through self-hosting out of the box.
So basically you have a few options, provided you decide to stay with IIS hosting.
Change the Idle Time-out to 24 hours or even a month (see the sketch after this list).
Write a small program or use cURL to ping your application every 10 minutes.
Leave it as it is.
If you want to keep state across operations, save it to disk, then load it during the next launch triggered by a request.
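For the first option, a sketch using the WebAdministration module; the pool name is a placeholder, and a value of 0 would disable the idle time-out entirely.

    Import-Module WebAdministration

    # Raise the idle time-out so the worker process is not shut down after the
    # default 20 minutes of inactivity. "MyServicePool" is a placeholder name.
    Set-ItemProperty "IIS:\AppPools\MyServicePool" `
        -Name processModel.idleTimeout `
        -Value ([TimeSpan]::FromHours(24))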
I have a .NET 4.5 WCF web service that consumes messages from a local private MSMQ queue, running on Windows Server 2008 R2 with AppFabric installed.
This service reads the messages off the queue and processes the files referenced in each message. I have used AppFabric to throttle the service to 16 concurrent messages, 8 on each AppPool worker process.
The AppPool uses a domain account that has full privileges on the network share where the files to be processed are stored.
This service had been working fine for years, except that in the last week ~90% of the files it has been asked to process have failed with an UnauthorizedAccessException.
This behaviour was exhibited across all of the services on that application server, no matter which file server the service was asked to process files from. Even files that had previously been processed successfully were now failing.
After a long and fruitless weekend of searching and hacking at various things, including:
Shared Folder Permissions and Quotas
Windows Licensing (CALs etc.)
Firewalls
Various software patches to the Web app
I eventually discovered the actual issue by accident. Whilst redeploying the web app I noticed something odd: when I stopped the web app via the WCF menu in IIS, the messages continued to be consumed, so I stopped the app pool running the web service, but the messages continued to be consumed. I thought this might be due to the large latency added to MSMQ message state by the distributed transaction service when lots of messages are rolled back to the poison message queue, so I went to lunch. When I came back the messages were still being consumed, and Process Explorer confirmed that the app pool running my service was no longer executing.
Something was clearly up, but it was uncertain whether this was the cause, a symptom or a coincidence. The clincher came when I throttled my service back to processing only one message at a time, to see if access to the share was hitting some sort of limit: the failure rate went up to ~98%. This suggested that something else was processing the messages and failing, but also reporting those failures into my reporting system in a way only my application could.
A little further investigation revealed that the default application pool used to serve the default web site was also executing my WCF web service, but failing to access the files on the file server because the identity running the default application pool had no privileges. The failures took less time than the successful file processes, therefore the slower I made my service go, the more messages were failed by the default app pool.
The Cause
Whilst I was adjusting the throttling on my web app, I inadvertently set the throttling on the default web site that was the parent of the web application. I noticed this straight away and reset the values back to their defaults. What I hadn't realised at the time was that this had added a <system.serviceModel> section to the web.config of the default website. The outcome was that my default web site started to behave like a web application and, for reasons I have yet to understand, it started to execute the functionality of its child web application. It may be related to WAS activation; all I know is that it was most certainly not the desired behaviour.
The Fix
I removed the <system.serviceModel> section and its contents from the web.config of the default website, removed net.msmq from its list of enabled protocols, and everything is back to normal.
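For anyone hitting the same thing, a sketch of the protocol half of that fix, assuming the parent is the standard "Default Web Site" and the stray <system.serviceModel> section has already been deleted from its web.config:

    Import-Module WebAdministration

    # Take net.msmq back out of the parent site's enabled protocols so WAS stops
    # activating the child application's queues through the default site.
    Set-ItemProperty "IIS:\Sites\Default Web Site" -Name enabledProtocols -Value "http"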
For a few weeks now I've been having a really weird problem. I have a couple of services which work just fine when self-hosted in a command-line app. However, in IIS + AppFabric I cannot access one of the services: I get a TimeoutException and am pretty sure that the call doesn't even make it to the service (all services have an aspect that logs all calls before doing anything). Note that both services are configured identically with regard to bindings and behaviors, in code. I tried many things, like putting them in different app pools and disabling some of the transports... And what is really strange is that if both services are in one app pool, one of the services works, but if I put them on separate threads, the other service times out. It really drives me nuts...
Also, I see pretty often events in the system event log: "A process serving application pool 'Authorization Management' suffered a fatal communication error with the Windows Process Activation Service. The process id was '11852'. The data field contains the error number." The error number is 0x80070218. After the event the service host initializes without problems (I can see my own info log messages); however, the service is unreachable.
Does this ring a bell to anyone?
Thanks!
It turned out that I had a bug in the initialization of the services' hosts. I was trying something, and when I removed the try code, apparently I didn't delete the first line, which was locking some resource.
Anyway, it is a good lesson: if your services do not work, your initialization might be buggy...
Sorry about the noise.
We have been wrestling, for more than a month now, with an issue where a WCF MSMQ service hosted in IIS 7.5 (WAS) will stop processing messages from the queue.
We have been unable to narrow it down any further than that "at some point" it will stop processing messages from the queue. Calling the .svc through an HTTP browser request will start the processing again.
After reading a great many articles, blogs and forum posts about this issue we have verified the following: security settings, protocol bindings, and MSMQ/service naming. But alas: the service will still stop processing messages (at some point).
Encouraged by this article http://www.daczkowski.net/2010/11/19/leveraging-msmq-in-asp-net-application-through-wcf-service/ we now seem to have finally (almost) eliminated the problem on Windows Server 2008 R2 SP1 64-bit, but it still appears on Windows 7 32-bit.
Now to get to my question: can anyone tell me whether there actually exists a guarantee (documentation on this would be appreciated) that an MSMQ WAS-hosted WCF service will actually restart (under all conditions) on IIS 7.5 NOT running the AppFabric extension?
I am aware that this question is rather compound, but I'm hard pressed to find documentation on why we should extend our OTS package with AppFabric to resolve this restart problem.
Best regards,
Are your net.msmq endpoints actually using addresses that IIS can bind to a queue name? It's possible to use non-IIS-compatible names in the config, in which case WAS won't ever be able to wake your application up, because WAS only registers for queues whose path names it can resolve. In that case you need something like AppFabric or a "startup" script to activate your services so that they bind to the queues on their own.
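To illustrate what an IIS-compatible name looks like, a hedged sketch with made-up names: for WAS activation the private queue name has to mirror the service's virtual path (application path plus .svc file), and the endpoint address follows the net.msmq://<host>/private/<app>/<service>.svc pattern.

    # Hypothetical names throughout: the queue "MyApp/OrderService.svc" mirrors the
    # virtual path of the service, which is the naming the net.msmq listener adapter
    # expects when it registers queues on behalf of WAS.
    Add-Type -AssemblyName System.Messaging
    [System.Messaging.MessageQueue]::Create('.\private$\MyApp/OrderService.svc', $true) | Out-Null

    # The matching endpoint address in config would then be
    # net.msmq://localhost/private/MyApp/OrderService.svc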
We're planning a system running on Windows/.NET 3.5 that has a number of "services" that need to run in the background. Some will be active all of the time, but some will only be called occasionally and can be stood up on demand.
As far as I can see, my options are:
Windows Services - always running(?)
IIS hosted something - called on demand
COM+/.NET Enterprise Services - most complex option, but most powerful?
Distributed transactions are not a requirement; these are mainly computation engines rather than transaction processors.
Does anyone have any experience of working with all of these and what further pros & cons can be claimed for each technology?
EDIT
I suppose there are multiple ways of hosting code in IIS: web services, WCF (as pointed out below), any others? Relative pros/cons?
WCF feels like the right way to go. There are still many choices to make. WCF provides a number of communication mechanisms and hosting environments:
WCF combines the following technologies under one set of APIs:
ASMX;
WSE;
Remoting;
COM+;
MSMQ.
So, for instance, you can use persistent messages over MSMQ for occasionally connected clients, or standard XML-encoded SOAP messages over an HTTP transport. You can also use new features in 3.5 like binary encoding of XML or JSON encoding over HTTP.
Hosting environments include:
Console applications
Windows services
WCF services inside IIS 7.0
and on Windows Vista or Windows Server 2008 you can use WAS (Windows Activation Service) to host WCF services.
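As a concrete example of that last option, a sketch (site and application names are placeholders) of wiring a net.msmq binding into IIS 7 so WAS can activate the service, using appcmd:

    # Add a net.msmq binding to the site and enable the protocol on the application;
    # "Default Web Site" and "MyApp" are placeholder names.
    $appcmd = "$env:windir\system32\inetsrv\appcmd.exe"
    & $appcmd set site "Default Web Site" "-+bindings.[protocol='net.msmq',bindingInformation='localhost']"
    & $appcmd set app "Default Web Site/MyApp" "/enabledProtocols:http,net.msmq"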
Different hosting environments have pros and cons. I suggest you look at MSDN for more details (e.g. http://msdn.microsoft.com/en-us/library/bb332338.aspx).
Because WCF encompasses a lot of functionality it is more difficult to learn than any one of the technologies it replaces. I still think it pays for itself in the long run.
It depends on what the software will do, and how (and if) users or systems need to interact with it. Depending on those things, there may be one more, often overlooked, option: set it up as a scheduled task. This is often a very good alternative to a Windows service, if the software is of the kind that acts on certain time intervals (check for a change in a database, act on the changed data and send it somewhere, for instance).
If you will have other systems talking directly to your software, I would imagine that a WCF application hosted in IIS would be a rather straightforward way to go. We use both of those approaches in my current assignment: WCF services for looking up and storing data, and scheduled tasks for data calculations that run on a regular basis.
The scheduled task has one upside compared to the others: it uses system resources only while it is running.
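A sketch of that approach, with placeholder task name, executable and interval; schtasks is used here because it is available on the Windows Server 2008-era systems discussed above.

    # Register the compute job as a scheduled task that runs every 15 minutes.
    # Task name, executable path and interval are placeholders.
    schtasks /Create /TN "RecalcAggregates" /TR "C:\Jobs\RecalcAggregates.exe" /SC MINUTE /MO 15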
You mentioned starting up a process "on demand". WAS - Windows Activation Service, sometimes called the Windows Process Activation Service, though it is never abbreviated "WPAS" - is the thing inside Windows that provides on-demand process activation. The way it works is that when a message arrives, WAS can start a worker process to handle it. Prior to IIS 7, WAS was fairly tightly integrated into IIS and was used primarily to activate processes that did web work, like an ASP.NET worker process. With IIS 7, WAS is generalized so that it can activate worker processes based on non-HTTP as well as HTTP messages. If you write your app to receive messages through WCF, you can get activation essentially "for free". That applies whether the transport is HTTP, TCP or MSMQ, SOAP or otherwise.
The key thing with this on-demand startup though, is that it is tied to the communication. In fact the process lifecycle model for WAS is tied to communication as well. By default if there are no incoming messages after a while, the process will be shut down by WAS. That may or may not be what you want.
As for process hosting - COM+ offers a hosting environment but it is primarily intended for use as a host for processes that communicate. This may not be the perfect fit for you.
If you have compute engines, you may just want to run a Windows Service. A service like that can be started and stopped either administratively or programmatically. In the latter case, you could imagine a WAS-activated worker process programmatically starting a Windows Service.
You could also imagine writing a simple Windows Service that watches a location (filesystem, message queue, etc) for a message, and when that file or message arrives, the Windows Service starts up a compute engine process, which itself is NOT a Windows Service, but is just a process.
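A minimal sketch of that watcher idea, with hypothetical paths; in practice this logic would live inside the Windows Service's code rather than in an interactive PowerShell session, but the shape is the same.

    # Watch a drop folder and launch the compute engine (a plain process, not a
    # service) whenever a new work item appears. Paths are placeholders.
    $watcher = New-Object System.IO.FileSystemWatcher "C:\Jobs\incoming", "*.work"
    $watcher.EnableRaisingEvents = $true

    Register-ObjectEvent -InputObject $watcher -EventName Created -Action {
        Start-Process "C:\Jobs\ComputeEngine.exe" -ArgumentList $Event.SourceEventArgs.FullPath
    } | Out-Null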
Speaking of MSMQ: that is basically the same model as MSMQ triggers. You can configure MSMQ to start a process when a message arrives on a particular queue.
There are lots of options.