I'd like to host a WCF web service in IIS. The service needs to keep a certain set of data in memory at all times; that data must never be lost.
My colleague told me this is impossible because IIS closes down the service after a certain time (I assume without any activity). Is that true? How do I prevent that behavior?
In case it matters, both IIS 6 and 7 are available.
By default, IIS recycles the worker process after a certain period of inactivity (20 minutes, if I recall correctly). This causes your in-memory data to be lost.
You can turn off this behavior on the Properties page of the application pool under which your app is running.
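On IIS 7 you can also do the same thing programmatically with the Microsoft.Web.Administration API. Here is a minimal sketch; the pool name "MyAppPool" is a placeholder, and on IIS 6 the equivalent settings are on the app pool's Performance and Recycling tabs:

using System;
using Microsoft.Web.Administration; // IIS 7+ management API (reference Microsoft.Web.Administration.dll)

class DisableAppPoolShutdown
{
    static void Main()
    {
        using (ServerManager serverManager = new ServerManager())
        {
            ApplicationPool pool = serverManager.ApplicationPools["MyAppPool"]; // placeholder pool name

            // Zero disables the idle shutdown of the worker process (default is 20 minutes).
            pool.ProcessModel.IdleTimeout = TimeSpan.Zero;

            // Zero disables the periodic recycle as well (default is 29 hours).
            pool.Recycling.PeriodicRestart.Time = TimeSpan.Zero;

            serverManager.CommitChanges();
        }
    }
}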
EDIT: having said that, if it is really important that this data is never lost, I would consider storing it in a database or some other form of storage.
"My colleague told me this is impossible because IIS closes down the service after a certain time (I assume without any activity). Is that true? How do I prevent that behavior?"
This is true, but you can get around it by using an out-of-process state server.
Here are three links describing session state and how to set it up in IIS:
http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/0d9dc063-dc1a-46be-8e84-f05dbb402221.mspx?mfr=true
http://www.eggheadcafe.com/articles/20021016.asp
http://msdn.microsoft.com/en-us/library/ms178586.aspx
Please excuse the obvious self-Q&A, but this information is widely misunderstood and almost always answered incorrectly, so I wanted to place it here for people searching for a definitive answer to this problem.
Even so, there's still some information I haven't been able to nail down. I will put this towards the end of the question (skip to that if you are not interested in the preamble).
How do I correctly configure a WCF NetTcp Duplex Reliable Session?
There are many questions and answers on this topic, and nearly all of them suggest setting inactivityTimeout="Infinite" in your configuration. This doesn't really seem to work correctly, particularly for NetTcp (it may work correctly for WSDualHttp bindings, but I have never used those).
A number of other issues are often associated with this, including: the channel not faulting after the client or server unexpectedly disconnects, the channel disconnecting after 10 minutes, the channel randomly disconnecting, the channel throwing an exception when trying to open, and being unable to configure metadata on the same endpoint.
Please note: There are two concepts that are important below. Infrastructure messages are internal to the way WCF communicates, and are used by the framework to keep things running smoothly. Operation messages are messages that occur because your app has done something, like send a message across the wire. Infrastructure messages are largely invisible to your app (but they still occur in the background) while operation messages are the result of an action your app has taken.
Information I have figured out through hard-won trial and error:
Infinite does not appear to be a valid configuration setting in all situations (and certainly, the Visual Studio validation schema doesn't know about it).
There are two special configuration converters, called InfiniteIntConverter and InfiniteTimeSpanConverter, which will sometimes convert the value Infinite to either Int.MaxValue or TimeSpan.MaxValue, but I haven't yet figured out exactly when this is valid: sometimes it works and sometimes it doesn't. What's more, some libraries appear to allow Infinite in the config while others do not, so you can succeed in one part of a configuration but fail in another.
You must configure BOTH inactivityTimeout and receiveTimeout, on both the client and the server. While these values do not HAVE to be the same, they probably should be, as differing values are likely to cause confusion. (Technically, you can leave inactivityTimeout at its default value if you want, but you should be aware of its value and what it does.)
inactivityTimeout should NEVER be set to a large value, much less Infinite or TimeSpan.MaxValue.
inactivityTimeout has two functions (and this is not widely understood). First, it defines the maximum amount of time that can elapse on a channel without receiving any "infrastructure" or "operation" messages. Second, it defines the interval at which infrastructure keep-alive messages are sent (half the specified value). If no infrastructure or operation messages have been received during the timeout period, the connection is aborted.
receiveTimeout specifies the maximum amount of time that can elapse between operation messages only. This value can be set to a large value, such as TimeSpan.MaxValue (particularly if your channel runs over a trusted internal network or a VPN). It defines how long the reliable session will "stay alive" if there is no activity between client and server other than infrastructure messages, i.e. your client does not call any methods on the interface and your server does not call back into the client.
Setting a short inactivityTimeout and a large receiveTimeout keeps your reliable session "tacked up" even when there is no operational activity between your client and server. The short inactivity timeout (I like to keep it at the default of 10 minutes or less) sends infrastructure "ping" messages to keep the TCP connection alive, while the long receive timeout keeps the reliable session active, and you still get a reasonable timeout in case of disconnection.
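To make this concrete, here is roughly how I set that combination up in code (the security mode and the exact timeout values are only examples, not recommendations, and the same settings have to be applied on both the client and the server):

using System;
using System.ServiceModel;

static class ReliableTcpBindingSketch
{
    public static NetTcpBinding Create()
    {
        // Reliable session enabled; the security mode here is just an example.
        NetTcpBinding binding = new NetTcpBinding(SecurityMode.Transport, true);

        // Short inactivity timeout: drives the infrastructure keep-alives
        // (sent at roughly half this interval) and detects dead connections.
        binding.ReliableSession.InactivityTimeout = TimeSpan.FromMinutes(10);

        // Long receive timeout: how long the session survives with no operation messages.
        binding.ReceiveTimeout = TimeSpan.FromDays(7); // example value; could be TimeSpan.MaxValue

        // Turn off ordered delivery if you don't need it (must match on both sides).
        binding.ReliableSession.Ordered = false;

        return binding;
    }
}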
If you set inactivityTimeout to a large value, the reliable session will not actually be reliable, as it has no way to keep the TCP connection alive, nor any way to verify the integrity of the connection. It won't know that a client has disconnected unexpectedly until you try to send a message to that client and find the connection is no longer there. This is why many people who use Infinite for this setting resort to creating a "Ping" method in their service, which is completely unnecessary if you've configured these settings correctly.
If you set inactivityTimeout to a value larger than receiveTimeout, it will likewise be unreliable, as operation messages are still governed by receiveTimeout. I.e., if you forget to set receiveTimeout and leave it at the default 10 minutes, then if the user is idle for 10 minutes the connection will be aborted.
When the client or server unexpectedly disconnects (app crashes, network failure, someone trips over the power cord, etc.), the other side may not notice right away. I have attached various ChannelFaulted event handlers in various test situations, and sometimes the connection faults right away... other times it doesn't seem to fault at all. What I have discovered through trial and error is that when it doesn't seem to fault, it will actually fault after the inactivityTimeout expires on that end (so if it's set to 10 minutes, the ChannelFaulted event fires after 10 minutes).
I have not yet figured out why in some situations it notices the disconnection right away, while in others it waits for the timer to expire. In both cases, I notice internal first-chance communication exceptions thrown and handled by the framework, and there are calls to abort the connection... but somehow the call to the event handler gets lost and it must wait for the timeout. My suspicion is that this is somehow thread related.
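For reference, this is a minimal sketch of how I attach the fault handlers on each side (the contract name and the cleanup bodies are placeholders):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IMyService          // placeholder contract
{
    [OperationContract]
    void DoWork();
}

public static class FaultHookSketch
{
    // Client side: watch the proxy's channel for faults.
    public static IMyService CreateProxy(ChannelFactory<IMyService> factory)
    {
        IMyService proxy = factory.CreateChannel();
        ((ICommunicationObject)proxy).Faulted += (sender, args) =>
        {
            // A faulted channel cannot be reused; abort it and build a new proxy.
            ((ICommunicationObject)proxy).Abort();
        };
        return proxy;
    }

    // Server side: call this from inside a service operation to watch the current session.
    public static void HookCurrentSession()
    {
        OperationContext.Current.Channel.Faulted += (sender, args) =>
        {
            // The client's session is gone; drop any stored callback reference here.
        };
    }
}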
When trying to configure metadata to work across the NetTcp channel, I have had sporadic results. Sometimes it works, sometimes it doesn't. I've read many reports that metadata doesn't work over NetTcp and that you have to use an HTTP channel for the metadata, but I have in fact had it work on several occasions using the net.tcp:// URL to generate the proxy. Then I would change something, recompile, and it would no longer work; changing things back, it wouldn't work again. So it was very confusing what magic incantation was necessary to make metadata function over net.tcp, shared with the endpoint on the same port (obviously with a different address).
When configuring both a NetTcp endpoint and a metadata endpoint on the same service, and specifying non-default settings for connection parameters like listenBacklog and maxConnections, you also need to make sure the metadata endpoint uses the same settings, which typically means you have to define a custom binding, since these settings are not available on the standard TCP mex binding. This includes setting listenBacklog and maxPendingConnections on tcpTransport, and groupName and maxOutboundConnectionsPerEndpoint on connectionPoolSettings.
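To illustrate that last point, this is roughly how I build the matching custom mex binding in code (the numbers and the group name are placeholders; they just need to match whatever the application endpoint uses):

using System.ServiceModel.Channels;
using System.ServiceModel.Description;

public static class MexBindingSketch
{
    public static CustomBinding CreateMatchingMexTcpBinding()
    {
        // Start from the standard mexTcpBinding and rewrap it so the transport can be tuned.
        CustomBinding custom = new CustomBinding(MetadataExchangeBindings.CreateMexTcpBinding());

        TcpTransportBindingElement tcp = custom.Elements.Find<TcpTransportBindingElement>();
        tcp.ListenBacklog = 200;                                            // placeholder
        tcp.MaxPendingConnections = 200;                                    // placeholder
        tcp.ConnectionPoolSettings.GroupName = "MyConnectionPool";          // placeholder
        tcp.ConnectionPoolSettings.MaxOutboundConnectionsPerEndpoint = 200; // placeholder

        return custom;
    }
}

The mex endpoint is then added with this binding and the IMetadataExchange contract at a net.tcp address that shares the application endpoint's port.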
The default for the Ordered setting of reliableSession is true. Ordered delivery uses a lot more overhead than turning it off, so if you don't need ordered messages, I would suggest turning it off (it needs to be set on both sides).
-
Configuration I still need to understand:
How do I correctly configure the shared net.tcp metadata endpoint? (I will add an example when I get a chance.) Currently, I'm specifying an HTTP GET URL to bypass the problem. It's inconsistent as to why it sometimes works and sometimes does not; I kept getting the error "The URI prefix is not recognized" when generating the proxy in Visual Studio.
Why does WCF sometimes fault the channel immediately upon disconnect, and sometimes wait for inactivityTimeout to expire? What controls/causes one behavior vs. the other?
I have a situation where I have two programs (one EXE, and one DLL loaded into the process space of another third-party EXE) communicating requests with each other using a local-machine WCF service (net named pipe binding). A third host EXE starts hosting the service. It all works great (so far anyway... I'm still learning), but I got to thinking about what would happen if the channel faults or the service times out. What would be the best practice for checking and handling faults, as well as keeping the channel alive?
In my case it will be up to the user to keep the applications open or close them and we do have those users who tend to keep them open overnight, over the weekend, etc... It seems to me this could open the possibility of a fault or loss of service and I don't have a clue how to recover. Any help would be greatly appreciated.
Firstly, why would you keep the channel alive indefinitely?
Imagine you are connecting to a database from which you want to read over the course of one day. Would you create the database connection in the morning and then close it in the evening?
It is relatively cheap to construct a channel in WCF for each call, unless you know you are going to be making multiple calls within a few seconds of each other, in which case you should reuse the channel.
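To illustrate, a common pattern (the contract and the endpoint configuration name below are made up) is to keep the ChannelFactory around, since creating it is the expensive part, and create a cheap channel per call:

using System;
using System.ServiceModel;

[ServiceContract]
public interface ICalculator                      // illustrative contract
{
    [OperationContract]
    int Add(int a, int b);
}

public sealed class CalculatorClient
{
    // Creating the factory is the expensive part; keep it for the lifetime of the app.
    private readonly ChannelFactory<ICalculator> factory =
        new ChannelFactory<ICalculator>("calculatorEndpoint"); // endpoint config name is made up

    public int Add(int a, int b)
    {
        // Creating a channel per call is cheap.
        ICalculator channel = factory.CreateChannel();
        try
        {
            int result = channel.Add(a, b);
            ((IClientChannel)channel).Close();
            return result;
        }
        catch
        {
            // Never call Close on a faulted channel; Abort is the safe cleanup path.
            ((IClientChannel)channel).Abort();
            throw;
        }
    }
}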
EDIT
This post explains how to do it. It's pretty complicated and it may be easier to just set a huge timeout value for the binding in code (as suggested at the end of the post):
Do WCF Callbacks TimeOut
EDIT
There's tons of stuff on Google about this: http://bit.ly/10ZPWE2
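For the named-pipe case in the question, a rough sketch combining both suggestions might look like this (the contract, the address, and the timeout values are made up; the idea is a large ReceiveTimeout plus a Faulted handler that throws the proxy away and reconnects):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IIpcService                       // illustrative contract
{
    [OperationContract]
    void Notify(string message);
}

public sealed class IpcClient
{
    private IIpcService proxy;

    public void Connect()
    {
        // Large timeouts so an idle channel is not dropped by the binding itself.
        var binding = new NetNamedPipeBinding
        {
            ReceiveTimeout = TimeSpan.MaxValue,
            SendTimeout = TimeSpan.FromMinutes(1)
        };

        var factory = new ChannelFactory<IIpcService>(
            binding, new EndpointAddress("net.pipe://localhost/MyIpcService")); // address is made up

        proxy = factory.CreateChannel();

        // If the channel faults (host restarted, etc.), throw the proxy away and reconnect.
        ((ICommunicationObject)proxy).Faulted += (s, e) =>
        {
            ((ICommunicationObject)proxy).Abort();
            Connect();
        };
    }
}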
I'm using AppFabric caching in a WCF service hosted in WAS.
I must be doing something wrong, because sometimes GetObjectsInRegion() returns an empty list while objects are indeed present in the region.
Unfortunately, I'm not able to identify the context in which the problem is reproducible.
It seems, though, that if the web service is restarted, existing regions appear empty to the service.
I'm sure that this is not a timeout problem.
I'll update the question if there is any progress on my side.
Any help appreciated.
This one was a bug on my side.
I was not explicitly setting an expiration timeout in some circumstances. The cache cluster was configured with default expiration settings, where the TTL is 10 minutes, so objects were automatically removed from the cache.
The takeaway is: always set an expiration timeout when putting objects in the cache.
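In code, that means passing the TTL explicitly on every Put. A minimal sketch using the AppFabric DataCache API (the key, the region name, and the 12-hour TTL are made up for illustration):

using System;
using Microsoft.ApplicationServer.Caching;   // AppFabric caching client

public static class CacheSketch
{
    public static void StoreOrder(DataCache cache, object order)
    {
        // Pass the TTL explicitly so the item does not silently fall back to the
        // cluster's default expiration (10 minutes in my configuration).
        cache.Put("order-42", order, TimeSpan.FromHours(12), "OrdersRegion");
    }
}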
I have to design and implement a way to deal with long running processes in a client/server application. A typical long running process would/could take 2-3 minutes. I also need to report progress to the UI in the meantime and keep the UI responsive.
With these in mind, I thought of a few solutions:
One async request to start the process, which kicks off the server-side work and returns an assigned LRPID (Long Running Process ID); the client then polls periodically using that LRPID. (Pro: simple to deploy, no messing around with firewalls. Con: inelegant, resource-consuming, etc.)
Use a duplex binding (such as NetTcpBinding) and initiate callbacks from the server as progress is made. (Pro: elegant, efficient. Con: deployment nightmare)
[Your suggestion???]
What would be your take on this?
Here is a post by Dan Wahlin about how to create a WCF Progress Indicator for a Silverlight Application. This should be of some help.
If you do not want to have to worry about the client's firewall, etc., I would probably go with your first solution and use a BackgroundWorker to make the call, to keep from blocking the UI thread. I did this recently for an app where a request to generate a report is put on a queue and the result is retrieved once it is done. It seems to work well.
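A rough sketch of that approach, with a made-up service contract (StartReport, GetProgress and GetResult are placeholders): the WCF call runs on a BackgroundWorker, and the worker polls for progress so the UI thread never blocks.

using System.ComponentModel;
using System.Threading;

public interface IReportService   // illustrative contract; in reality this is your generated WCF proxy
{
    string StartReport();
    int GetProgress(string id);
    object GetResult(string id);
}

public sealed class ReportRunner
{
    private readonly BackgroundWorker worker = new BackgroundWorker
    {
        WorkerReportsProgress = true
    };

    public void Run(IReportService client)
    {
        worker.DoWork += (s, e) =>
        {
            string id = client.StartReport();                  // returns the LRPID
            int progress;
            while ((progress = client.GetProgress(id)) < 100)  // poll periodically
            {
                worker.ReportProgress(progress);
                Thread.Sleep(2000);
            }
            e.Result = client.GetResult(id);
        };

        worker.ProgressChanged += (s, e) =>
        {
            // Raised on the UI thread (when started from it): update a progress bar here.
        };

        worker.RunWorkerCompleted += (s, e) =>
        {
            // Raised on the UI thread: display e.Result or e.Error here.
        };

        worker.RunWorkerAsync();
    }
}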
Another way (without having to change the WCF binding) is to use a WebBrowser control in the WPF client, and SignalR to post progress messages from the server to that control.
Note that to avoid JavaScript errors that happen with the WebBrowser control (because by default it seems to use the Internet Explorer 7 engine, which doesn't seem to be compatible with jQuery.js), you will need to add keys to the registry on the client machine so that the client app defaults to IE10 or later - see http://weblog.west-wind.com/posts/2011/May/21/Web-Browser-Control-Specifying-the-IE-Version.
This could be a deployment nuisance (because admin rights seem to be needed - eg on a 64 bit Windows 8.1 pc - to add the registry keys).
Also, it still seems necessary to call the long running WCF method in a separate thread, otherwise the WebBrowser control doesn't seem to update its display to show the SignalR messages it is receiving. (This makes sense because the UI thread would otherwise have to wait until the WCF call had finished).
But I mention it as an alternative approach using a newer tool (SignalR) :)
I'm using perfmon to examine my service's behaviour. What I do is launch 6 instances of the client application on separate machines and send requests to the server from 120 threads (20 threads per client application).
I have examined the counters, and the maximum number of instances (I use the PerSession model and set the number of instances to 100) is 12, which I consider strange, as my response times from the service are around 120 seconds... I thought that increasing the number of instances would cause WCF to create more instances and, as a result, response times would be quicker.
Any idea why WCF doesn't create even more instances of service?
Thanks Pawel
WCF services are throttled by default - it's a service behavior, which you can tweak easily.
See the MSDN docs on ServiceThrottling.
Here are the defaults:
<serviceThrottling
maxConcurrentCalls="16"
maxConcurrentInstances="Int.MaxValue"
maxConcurrentSessions="10" />
With these settings, you can easily control how many sessions or concurrent calls can be handled, and you can make sure your server isn't overwhelmed by (fraudulent) requests and brought to its knees.
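For instance, when self-hosting you can adjust the throttle in code before opening the ServiceHost (the values below are placeholders sized for the 120-thread test; when hosted in IIS/WAS the same settings go in the serviceThrottling element shown above):

using System.ServiceModel;
using System.ServiceModel.Description;

public static class ThrottlingSketch
{
    public static void RaiseThrottles(ServiceHost host)
    {
        var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
        if (throttle == null)
        {
            throttle = new ServiceThrottlingBehavior();
            host.Description.Behaviors.Add(throttle);
        }

        // Placeholder values sized for the 120-thread test described above.
        throttle.MaxConcurrentCalls = 120;
        throttle.MaxConcurrentSessions = 120;
        throttle.MaxConcurrentInstances = 120;
    }
}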
Ufff, last attempt to understand that silly WCF.
What I did now is:
Create a client that starts 20 threads; every thread sends requests to the service in a loop. The performance counter on the server claims that only 2 instances of the service object exist at any given time. The average request time is about 40 seconds (I start measuring before the proxy call and finish after the call returns).
Modify that client to start 5 threads, and launch 4 instances of that client (to simulate the 20-thread behaviour of the previous example). Performance Monitor shows that 8 instances of the service object exist at any given time. The average request time is 20 seconds.
Could somebody tell me what is going on? I thought there was a problem with the server not wanting to handle more requests concurrently, but apparently it is the client that causes the trouble and doesn't want to send more requests concurrently... Maybe there is some kind of configuration option that limits the client from sending more than two requests at one time (buffers, throttling, etc.).
The channel factory is created in every thread.
You might want to refer to this article and make adjustments to your WCF configuration (specifically maxConnections) to get the number of concurrent connections you want.
Consider using something like http://www.codeplex.com/WCFLoadTest to hit the service.
Also, perfmon will only get you so far. If you want to debug a WCF service, you should look at SvcTraceViewer and SvcConfigEditor in the Windows SDK.
On your service binding, what have you set maxConnections to? Calls to connect will block once the limit is reached.
Default is 10 I think.
http://msdn.microsoft.com/en-us/library/ms731379.aspx
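If it helps, this is the kind of adjustment being suggested, shown in code on the assumption that you are using netTcpBinding (the value 50 is just an example):

using System.ServiceModel;

public static class ClientBindingSketch
{
    public static NetTcpBinding CreateBinding()
    {
        var binding = new NetTcpBinding();

        // MaxConnections limits the pooled/pending connections per endpoint (default is 10),
        // which can cap how many requests a single client process sends concurrently.
        binding.MaxConnections = 50;   // example value

        return binding;
    }
}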