WCF - The message could not be dispatched because the service at the endpoint address ... is unavailable for the protocol of the address

OK, can I vent?? I am so sick and tired of this. I'm working away most of the day and the WCF services are working great. The next time I run my app and make a WCF call, bam! The TCP socket is no longer available. I have searched high and low to solve this and there is no real solution. The only solution I can find is to reboot the machine, which is a huge time-waste and burden. Restarting the WAS (Windows Process Activation) service, the net.tcp services, IIS, etc. does not do a thing. Logging off and back on does not fix it. Only a reboot fixes this issue. I do nothing except run my app again, making a WCF call, and this crap happens. There are no configuration issues with anything. I have been dealing with this for months and cannot find any specific reason or solution as to why this happens. It happens with my firewall on or off; it does not matter.
Any insight from anyone? I think there is truly a bug in the WCF / net.tcp layer that is causing this. I even get it on a production 2008 R2 server, sometimes when making a Web.config change, so I have learned to stop the IIS, WAS, net.tcp, etc. services prior to the change and then restart them. What a pain.
I'm using .NET 4 all around, VS2010, with all service packs, etc. applied. Everything is current.
Excuse me while I reboot.....
Can anyone help with this?

Open a command prompt
Navigate to c:\windows\microsoft.net\framework64\v4.0.30319
Register the service model using the command "ServiceModelReg.exe -r"
Credit goes to http://kumaranbose.blogspot.be/2010/08/cryptic-wcf-nettcp-errors.html

This issue has haunted me for almost 3 years now, but it only happens sporadically. TCPView helped.
I killed the SMSvcHost.exe process and then restarted the Net.Tcp Listener Adapter service. That cleared the issue. Not really a solution, but at least I don't have to resort to rebooting the server anymore.
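For what it's worth, this is roughly the reset I run now instead of rebooting (it must run elevated). It is only a sketch, and it assumes the standard service name "NetTcpActivator" for the Net.Tcp Listener Adapter; check services.msc if your machine uses something else.
    // Rough sketch: kill the stuck SMSvcHost.exe instances and restart the
    // Net.Tcp Listener Adapter (assumed service name "NetTcpActivator").
    using System;
    using System.Diagnostics;
    using System.ServiceProcess;   // reference System.ServiceProcess.dll

    class ResetNetTcpListener
    {
        static void Main()
        {
            // SMSvcHost.exe hosts the net.tcp listener and port-sharing services.
            foreach (var p in Process.GetProcessesByName("SMSvcHost"))
            {
                p.Kill();
                p.WaitForExit();
            }

            // Restart the listener adapter; starting it also brings back the
            // services it depends on (e.g. port sharing).
            using (var listener = new ServiceController("NetTcpActivator"))
            {
                if (listener.Status == ServiceControllerStatus.Running)
                    listener.Stop();
                listener.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromSeconds(30));
                listener.Start();
                listener.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
            }
        }
    }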

I had this issue. It would happen after each IIS reset (which happens as part of our deployment). The issue was resolved after restarting the NetTcpPortSharing service (which also restarts the Net.Tcp Listener Adapter service).

I am not sure I have an answer, but you could identify the process that has the port open; that can help narrow the scope of the problem. I have used the Sysinternals suite, which includes TCPView. This tool was helpful to me.
TCPView - http://technet.microsoft.com/en-us/sysinternals/bb897437

It sounds like the Net.Tcp Listener Adapter service is being killed by some process, or an exception thrown by the web service is putting the channel into a faulted state.
Have you tried setting the startup type of the service to Automatic and the recovery options to restart the service on the first and second failure?
I doubt very much that there is a bug in the WCF net.tcp channel layer. If the listener is running and the TCP socket is no longer available, I would suggest you look into the code, especially around the exception-handling strategy, and have a peek at the IIS request logs.

Related

Ensuring (restart of) MSMQ WCF service hosted on IIS7.5 WAS

We have been wrestling, for more than a month now, with an issue where a WCF MSMQ service hosted in IIS 7.5 (WAS) will stop processing messages from the queue.
We have been unable to narrow it down any further than that "at some point" it will stop processing messages from the queue. Calling the .svc through an HTTP request from a browser will start the processing again.
After reading a great many articles, blog posts, and forum threads about this issue we have checked the following: security settings, protocol bindings, and MSMQ/service naming, but alas: the service will still stop processing messages (at some point).
Encouraged by this article http://www.daczkowski.net/2010/11/19/leveraging-msmq-in-asp-net-application-through-wcf-service/ we seem to have now finally (almost) eliminated the problem on Windows Server 2008 R2 SP1 64-bit, but it still seems to appear on Windows 7 32-bit.
Now to get to my question: can anyone tell me if there actually exists a guarantee (documentation on this would be appreciated) that an MSMQ, WAS-hosted WCF service will actually restart (under all conditions) on IIS 7.5 NOT running the AppFabric extension?
I am aware that this question is rather compound, but I'm hard pressed to find documentation on why we should extend our OTS package with AppFabric to resolve this restart problem.
Best regards,
Are your net.msmq endpoints actually using addresses that IIS can bind to a queue name? It's possible to use non-IIS-compatible names in the config, in which case WAS won't ever be able to wake your application up, because WAS will only register for queues whose path names it can resolve. In that case you need something like AppFabric or a "startup" script to actually activate your services so that they bind to the queues on their own.

WCF service does not respond, how to debug?

Consider a WCF service which is heavily used and behaves normally, but then stops responding. In the service-level message trace you can see the outgoing message on the client, but no incoming message on the server. At the transport level there is an incoming message and then nothing. After 60 seconds the client throws a TimeoutException.
What can cause a behavior like this?
What would you do to debug this behavior?
Is it possible that this behavior is caused by too many concurrent connections/sessions?
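If throttling is a plausible culprit, I assume the limits could be raised in code along these lines to rule it in or out. This is only a sketch: the contract, service type, and address are made up, and the numbers are arbitrary.
    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    [ServiceContract]
    public interface IPing
    {
        [OperationContract]
        string Ping();
    }

    public class PingService : IPing
    {
        public string Ping() { return "pong"; }
    }

    class ThrottleTest
    {
        static void Main()
        {
            // Made-up self-hosted service, purely to show where the throttle lives.
            var host = new ServiceHost(typeof(PingService),
                new Uri("http://localhost:8731/PingService"));

            // The throttling behavior is usually not in the collection unless it
            // was added in config, so create it if necessary.
            var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
            if (throttle == null)
            {
                throttle = new ServiceThrottlingBehavior();
                host.Description.Behaviors.Add(throttle);
            }

            // Deliberately generous limits, just for the test.
            throttle.MaxConcurrentCalls = 64;
            throttle.MaxConcurrentSessions = 400;
            throttle.MaxConcurrentInstances = 464;

            host.Open();   // the throttle must be set before the host is opened
            Console.WriteLine("Running; press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }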
EDIT:
Client and server are on the same machine. Both are .NET apps. When the client is restarted, the problem sometimes does not occur. Also, the problem only appears on a single machine; I was not able to reproduce the behavior on any other machine.
Regards
Michael
I understand you have no problem at the network level, as you have mentioned that you can see the incoming request at the transport level.
So the first thing to check is whether the service is up and whether it works when the client is on the same machine.
You can also analyze the incoming messages; maybe the problem is there.
Here Wireshark will be your friend.
Also check whether you can view the WSDL from the client machine. By the way, are your clients also .NET apps?
You can configure tracing using the application's configuration file (Web.config for web-hosted applications, or Appname.config for self-hosted applications) and then examine the output with the Service Trace Viewer,
or use debugging tools such as Debug Diag.

Strange Problem with Webservice and IIS

I have a problem which confuses me a little bit, or rather, where I don't have any idea what it could be.
The system I'm using is Windows Vista, IIS 7.0, VS2008, Windows Software Factory, Entity Framework, and WCF. The binding for all web services is wsHttpBinding.
I'm using a web service hosted in IIS. This web service uses/calls another web service (also hosted in IIS). If I use a client that calls the first web service (which calls the second web service), it works fine for about 4-10 calls. And then (the problem is repeatable, though sometimes it happens after 4 calls, sometimes after 10, but it always happens) the service and IIS get stuck.
Stuck means that this web service isn't callable anymore and generates a timeout after 1 minute.
Even increasing the timeout doesn't change anything.
If I try to restart IIS I get a timeout error (and this is really confusing me; it seems that the web service has "crashed" somehow and blocks the restart of IIS). So IIS is also "stuck" (it is not really stuck, but I can't restart it). Only if I kill w3wp.exe is IIS restartable, and the web service will work again (until I again call this service several times).
The log files (I'm no expert in things like logging or where to find/enable such logs; so to speak, I'm a newbie), such as HTTP logging, the Event Viewer, or WCF message logging, don't show any hints about the source of the problem.
I don't have this problem when I'm using a web service which doesn't call another service.
Calling a web service is done via a service reference (I'm not using proxy classes), but I think this should not be a problem.
I have no idea what is happening, nor how to solve this problem.
Regards
Rene
Edit: I hope my posting is more readable now :-)
Insert System.Diagnostics.Debugger.Break() into your web service code. When that point is reached, you will be able to step through the service logic. This may help you diagnose the cause of the deadlock.
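A minimal sketch of what that looks like; the contract and operation here are made up, and only the Debugger.Break() call matters.
    using System.ServiceModel;

    [ServiceContract]
    public interface IFirstService
    {
        [OperationContract]
        string GetData(int value);
    }

    public class FirstService : IFirstService
    {
        public string GetData(int value)
        {
            // Execution stops here and Windows offers to attach a debugger, so you
            // can step through the call into the second web service and see where it blocks.
            System.Diagnostics.Debugger.Break();

            // ... the real work, including the call to the second web service, goes here ...
            return value.ToString();
        }
    }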
Another alternative is to turn on WCF Tracing, and diagnose that way.

WCF Client hang on service interruption

I have a fairly straightforward WCF service that performs one-way file synchronization for a bunch of smart clients. I've noticed that when there's a network or service interruption during a call, the client stops being able to communicate with the server until the entire application is restarted.
The service runs with BasicHttpBinding and is hosted with IIS6 (a .svc page), using transferMode="Streamed" and messageEncoding="Mtom". The service is configured to use the default InstanceContextMode (I think it's Per Call?) and ConcurrencyMode=Single. It's using the default throttling behavior, but I'm in an isolated test environment that nobody else is hitting.
Clients are Windows Services. I'm using this ServiceProxyHelper to ensure connections are Close()'d or Abort()'d correctly when Dispose()'d, though there are no sessions so I don't think that even matters. When an error occurs, the Client object is disposed and then goes out of scope. After the exception is detected, the service waits a bit, then creates a new client object and tries again. So it should recover from the failure, but for some reason all subsequent calls to the service fail.
I can reproduce this reliably by starting a client, allowing it to transfer a few files, then iisresetting the server. First the client generally displays a "Service is Too Busy" error (which maps to the IIS 503 error that you get during an app restart). After that, all subsequent calls to the service time out. As far as I can tell the calls are not even being attempted by the client. I have tracing enabled and what I see is: Timeout error, followed by a "Failed to send request message over HTTP" warning, followed by another Timeout error.
The crazy thing is that when I configure the client to use Fiddler (port 8888) as a proxy in app.config, everything works as desired. So somehow Fiddler as the proxy is closing or finalizing some kind of connection that WCF on its own is not.
Thoughts?
Edit 2009-10-30 8:54PM: Changed service attributes to: InstanceContextMode=Single and ConcurrencyMode=Multiple. No difference.
Well that was painful. It took me forever, but I finally zeroed in on the difference between running with a proxy vs. without, and started poking around the <system.net> settings. It turns out that adding this configuration bit to the client fixes the problem:
<system.net>
<settings>
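<!-- stop sending the HTTP "Expect: 100-continue" header on outgoing requests -->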
<servicePointManager expect100Continue="false" />
</settings>
</system.net>
Can somebody explain what's going on? Why should this setting cause WCF clients to hang irreparably when there's a service interruption?
Are you sure this isn't a client-side issue? If your Windows service is making the WCF calls on a separate thread from the main thread, and you have an unhandled exception happening on the child thread, the calling thread may or may not sit there and wait forever because it's waiting for that thread to return.
That would explain why there's an exception inside the Windows service and why it then looks like it makes no more calls to the WCF service: it's hung.
This used to be a huge issue when using Timers to spawn work in .NET 2.0 Windows services.

WCF client application hang -- need repro advice

I have a WCF application with a couple thousand clients connecting to a pair of services running under IIS. What I've noticed is that some of these clients get into a hung state, and I'm trying to reproduce this.
When this problem was first noticed, I had not modified the throttling configuration and the services were set to ConcurrencyMode.Single. One thing I noticed was that an IISReset on the server caused many clients to hang. Yet pulling this same stunt on the client running against IIS on my local machine doesn't seem to cause the problem.
I caught this only once in the wild, but didn't have debugging enabled at the time. The symptom I witnessed was that the client appeared to be trying to open a connection to the web server, but did not succeed. While monitoring with Fiddler, I saw no attempt to reach the service endpoint. Obviously that makes me suspect the client proxy.
I have a very solid hunch as to what's happening -- namely I've been using "Close()" instead of "Abort()" when the service throws an exception, which I believe is causing the channels to become corrupted. But considering the effort to get a new version out there, I need to reproduce this problem by causing a client on my own machine to hang before I can start making changes to the code.
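For reference, the pattern I believe the proxy handling should be enforcing looks roughly like this; the proxy type and operation below are made up.
    using System;
    using System.ServiceModel;

    class SyncWorker
    {
        // FileSyncClient and TransferFile stand in for the generated proxy and operation.
        static void SyncOneFile(string localPath)
        {
            var client = new FileSyncClient();
            try
            {
                client.TransferFile(localPath);
                client.Close();          // graceful shutdown of a healthy channel
            }
            catch (CommunicationException)
            {
                client.Abort();          // the channel is faulted; Close() would throw here
                throw;
            }
            catch (TimeoutException)
            {
                client.Abort();
                throw;
            }
        }
    }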
Where should I start?
Thanks in advance,
roufamatic
Have you got any logging turned on? This could help in diagnosing the problem. It can be done completely in config, so no need to build a new version. Use the Service Configuration Editor tool to set it all up. The Visual Studio 2008 Training Kit has a good tutorial on how to use logging and the log viewer.
I suppose this was too vague a question, though I was mostly curious what people might suggest. As it turns out, there was a nontrivial difference between my workstation and a production environment that, once resolved, allowed me to see the problem. In this case, somehow, using Fiddler to watch the traffic actually prevented the error from occurring! Now to ask another question.