I've developed a WCF Service which is hosted as a Windows Service and exposes a MSMQ endpoint.
I have the client app on SERVER1, and the MSMQ and WCF Service on SERVER2.
When the SERVER1/ClientApp attempts to push a message onto the SERVER2 MSMQ, I get the following error:
System.TypeInitializationException: The type initializer for 'System.ServiceModel.Channels.Msmq' threw an exception. ---> System.DllNotFoundException: Unable to load DLL 'mqrt.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E)
at System.ServiceModel.Channels.UnsafeNativeMethods.MQGetPrivateComputerInformation(String computerName, IntPtr properties)
at System.ServiceModel.Channels.MsmqQueue.GetMsmqInformation(Version& version, Boolean& activeDirectoryEnabled)
at System.ServiceModel.Channels.Msmq..cctor()
--- End of inner exception stack trace ---
at System.ServiceModel.Channels.Msmq.EnterXPSendLock(Boolean& lockHeld, ProtectionLevel protectionLevel)
at System.ServiceModel.Channels.MsmqOutputChannel.OnSend(Message message, TimeSpan timeout)
at System.ServiceModel.Channels.OutputChannel.Send(Message message, TimeSpan timeout)
at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)
Exception rethrown at [7]:
at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
at FacilityManager.Service.NotificationsProcessorServiceReference.INotificationsProcessor.SendNewReactiveTaskNotifications(NewReactiveTaskDataContract newReactiveTaskDataContract)
Both SERVER1 and SERVER2 are running Windows Server 2008 R2 Enterprise (6.1 SP1), and both have had MSMQ installed via the Add Features wizard in Server Manager.
I understand that the DLL is missing (fairly obvious from the error!), but I've no idea what I should install to get the DLL where it should be.
A search in Windows Explorer shows that the DLL is present in the following directories on both servers:
C:\Windows\System32
C:\Windows\SysWOW64
C:\Windows\winsxs\x86_microsoft-windows-msmq-runtime-core_31bf3856ad364e35_6.1.7601.17514_none_5768e2ad17453bd6
C:\Windows\winsxs\amd64_microsoft-windows-msmq-runtime-core_31bf3856ad364e35_6.1.7601.17514_none_b3877e30cfa2ad0c
Any help appreciated.
An obvious aside: if you don't have the Windows feature Microsoft Message Queue (MSMQ) Server installed, then you will get this error. Simply go to Programs and Features and then Turn Windows features on or off.
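On server SKUs you can also enable it from an elevated PowerShell prompt; a minimal sketch, assuming the ServerManager module that ships with Windows Server 2008 R2 (feature names can vary by OS version):

    Import-Module ServerManager
    Add-WindowsFeature MSMQ-Server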
I'm none the wiser but things are working now.
After hours on SO and Google, I ended up just checking that MSMQ was installed on both servers by writing a quick console application with the code grabbed from here...
https://stackoverflow.com/a/16104212/192999
I ran the console app on both Server1 and Server2, and both came back with a result of True for IsMsmqInstalled.
I then ran my application and the "Unable to load DLL 'mqrt.dll'" error was no longer being raised.
I don't know if the call to NativeMethods.LoadLibrary("Mqrt.dll"); registered the DLL or something, but it certainly fixed my problem.
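For reference, the check in the linked answer boils down to something like the following sketch (reconstructed from memory of that code, assuming the usual kernel32 P/Invoke declarations):

    using System;
    using System.Runtime.InteropServices;

    internal static class NativeMethods
    {
        [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        public static extern IntPtr LoadLibrary(string fileName);

        [DllImport("kernel32.dll", SetLastError = true)]
        [return: MarshalAs(UnmanagedType.Bool)]
        public static extern bool FreeLibrary(IntPtr module);
    }

    internal static class Program
    {
        // MSMQ is considered installed if its runtime DLL can be loaded.
        private static bool IsMsmqInstalled()
        {
            IntPtr handle = NativeMethods.LoadLibrary("Mqrt.dll");
            if (handle == IntPtr.Zero)
                return false;

            NativeMethods.FreeLibrary(handle);
            return true;
        }

        private static void Main()
        {
            Console.WriteLine("IsMsmqInstalled: " + IsMsmqInstalled());
        }
    }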
I hope this helps someone in the future!
This can be caused by your service on SERVER2 starting up and finishing its initialization before MSMQ is done initializing itself. The easiest way to test this is to restart the service hosting the WCF MSMQ endpoint. If the WCF service is hosted in IIS, perhaps bouncing the app pool will do the same thing, but I do not know for sure -- I've never dealt with an IIS-hosted MSMQ endpoint.
If restarting the service fixes your problem and your own service is a Windows service, you can then add MSMQ as a dependency to your own service so that it delays its startup until MSMQ is ready. This answer on Server Fault describes how to do it. Incidentally, the service you want to depend on is called "Message Queuing".
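For reference, one way to add that dependency is with sc.exe from an elevated prompt; a sketch where MyWcfHost is a hypothetical service name (note that the space after depend= is required, and MSMQ is the internal name of the Message Queuing service):

    sc config MyWcfHost depend= MSMQ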
I have two BizTalk development machines, one of which I am trying to get into a consistent state with the other. One of my tests checks the content of a SOAP fault response received from an orchestration - this is by design. The problem is that the two machines, which as far as I know are configured identically and have the same application with the same configuration installed, handle faults differently, as indicated by the stack trace of the caught exception within the orchestration.
The expected incoming fault is received from a "Specify Later", request-response port with a SOAP 1.1 Fault operation configured. It is caught by a catch block that simply serializes the exception detail into another fault message and returns it to the caller. I can see that the fault is caught in the same way by the same catch block on both machines.
Baseline machine stack trace:
at Microsoft.BizTalk.Adapter.Wcf.Runtime.BizTalkAsyncResult.End()
at Microsoft.BizTalk.Adapter.Wcf.Runtime.BizTalkServiceInstance.EndOperation(IAsyncResult result)
at Microsoft.BizTalk.Adapter.Wcf.Runtime.BizTalkServiceInstance.Microsoft.BizTalk.Adapter.Wcf.Runtime.ITwoWayAsync.EndTwoWayMethod(IAsyncResult result)
at AsyncInvokeEndEndTwoWayMethod(Object , Object[], IAsyncResult )
at System.ServiceModel.Dispatcher.AsyncMethodInvoker.InvokeEnd(Object instance, Object[]& outputs, IAsyncResult result)
at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeEnd(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage7(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)
Other machine's stack trace:
at Microsoft.BizTalk.Adapter.Wcf.Runtime.BizTalkServiceInstance.EndOperation(IAsyncResult result)
at AsyncInvokeEndEndTwoWayMethod(Object , Object[], IAsyncResult )
at System.ServiceModel.Dispatcher.AsyncMethodInvoker.InvokeEnd(Object instance, Object[]& outputs, IAsyncResult result)
at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeEnd(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage7(MessageRpc& rpc)
at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)
This is the only difference in behaviour that I have noticed. Why are two instances of the same orchestration handling the same fault differently?
This is due to the bitness of the hosting environment when hosting an orchestration in the out-of-process Isolated Host. When the IIS application pool is set to Enable 32-bit Applications = False, the BizTalk adapter's fault-processing logic is slightly different (or at least expressed differently in the stack trace), and this causes the test to fail.
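For reference, the setting can also be flipped from the command line; a sketch assuming the default IIS install path and a hypothetical application pool name:

    %windir%\system32\inetsrv\appcmd.exe set apppool "MyIsolatedHostAppPool" /enable32BitAppOnWin64:false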
I actually set this myself and forgot about it, but the reason for doing so was that a completely unrelated application had added a global module to my IIS configuration that was compiled in 64-bit mode, which caused the application pool to repeatedly error and then shut down whenever it ran in 32-bit mode. This module was UxCertAuthModule.dll, and it is installed, I believe, by one of the components of Windows Azure Pack. I believe this is a bug; removing this global module fixes 32-bit application pools, and my test.
edit:
I have raised this as a possible bug on the Azure Pack forums.
We can't determine why the Azure BasicHttpRelay is throwing an occasional FaultException without any details. We've enabled WCF diagnostic tracing, but the available stack trace information is still the same. It seems like the WCF client channel fails for a brief time and then recovers shortly afterwards.
We do cache the WCF channel (i.e. the proxy returned by CreateChannel), but this is the first time we've experienced this strange behavior. We have other Azure Service Bus relay solutions that work fine with this approach.
Error Message:
There was an error encountered while processing the request.
Stack Trace:
at System.ServiceModel.Channels.ServiceChannel.HandleReply(ProxyOperationRuntime operation, ProxyRpc& rpc)
at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)
Exception rethrown at [0]:
at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
at [our WCF method]...
FaultException - FaultCode Details:
Name: ServerErrorFault
Namespace: http://schemas.microsoft.com/netservices/2009/05/servicebus/relay
IsPredefinedFault: false
IsReceiverFault: false
IsSenderFault: false
SOAP message:

    <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
      <s:Header />
      <s:Body>
        <s:Fault>
          <faultcode xmlns:a="http://schemas.microsoft.com/netservices/2009/05/servicebus/relay">a:ServerErrorFault</faultcode>
          <faultstring xml:lang="en-US">There was an error encountered while processing the request.</faultstring>
          <detail>
            <ServerErrorFault xmlns="http://schemas.microsoft.com/netservices/2009/05/servicebus/relay" xmlns:i="http://www.w3.org/2001/XMLSchema-instance" />
          </detail>
        </s:Fault>
      </s:Body>
    </s:Envelope>
Through debugging, we can see the server properly responds to the message requests (via IDispatchMessageInspector), but the client fails to handle the response appropriately (IClientMessageInspector reports the fault). Subsequent relay requests succeed after the client channel seemingly corrects itself. These failures seem to be intermittent and not load-driven. We never see these FaultException errors when using basicHttpBinding outside the Azure relay.
Does anyone have any suggestions? We are using Azure SDK 1.8.
I've tried configuring a new Service Bus Relay namespace using an owner shared secret, but I'm still seeing the same results.
After reaching out to MS, this issue turned out to be an MS bug in the Relay or the SDK, specifically when using HTTP connectivity mode. At this point, the only workaround is to ensure you have the appropriate outgoing TCP ports opened up to ensure reliable connectivity with the Azure Relay.
Allow Outgoing TCP Ports: 9350 - 9354
MS has told us that they are still working on resolving the root cause. Hopefully this workaround will help others. Our corporate firewall had these TCP ports blocked, which forced all communication over port 80, and that must be what triggers this issue. The positive side is that opening these ports enables faster connectivity to the relay when starting up your listeners (AutoDetect doesn't have to check TCP port availability every time).
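For what it's worth, once the ports are open you can also pin the client to TCP so it skips the AutoDetect probing altogether; a minimal sketch, assuming the Microsoft.ServiceBus assembly from SDK 1.8:

    // Force relay connectivity over TCP (ports 9350-9354) rather than
    // AutoDetect. Set this once at startup, before opening any relay
    // channels or listeners.
    ServiceBusEnvironment.SystemConnectivity.Mode = ConnectivityMode.Tcp;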
First of all, sorry, I'm not fluent.
I'm trying to figure out why my WCF services stop working when we have an environment with a high calls-per-second rate. I'm not sure that just increasing timeouts will solve the issue.
We have two web services:
The first is hosted on IIS 7.5 (with AppFabric and WAS), on Windows Server 2008 R2 Enterprise SP1 x64.
The second is hosted in a Windows Service, on Windows 2003 R2 SP1 x86.
Both web services have a minimal configuration: no authentication, no transactions, no special message handling. Check the binding:
    <netTcpBinding>
      <binding transactionFlow="false">
        <security mode="None">
          <message clientCredentialType="None" />
          <transport clientCredentialType="None"></transport>
        </security>
        <reliableSession enabled="false"/>
      </binding>
    </netTcpBinding>
We are trying to use the Net.Tcp binding because of its reliability and speed.
FACT 1 - The Net.Tcp binding is the primary suspect
When the load is high, the Net.Tcp channel stops working. That's it! But BasicHttp keeps working like a charm.
On the Windows Service host, the net.tcp channel stays down for some minutes (3m-10m) before it starts working again (BY ITSELF, without us changing anything; goblins are working hard).
On AppFabric/IIS/WAS, the net.tcp channel stays down and needs a manual restart.
The BasicHttpBinding configuration is similar to the net.tcp one: no special message handling, no security settings, or anything like that.
FACT 2 - No logging of any kind
We couldn't find any hint, tip, or trick to figure out what's happening. I have tried memory dumps, event logs, and System.Diagnostics, and found nothing relevant. The most relevant clue is an error from SMSvcHost 4.0.0.0:
An error occurred while dispatching a duplicated socket: this handle is now leaked in the process.
ID: 2272
Source: System.ServiceModel.Activation.TcpWorkerProcess/62875109
Exception: System.TimeoutException: This request operation sent to http://schemas.microsoft.com/2005/12/ServiceModel/Addressing/Anonymous did not receive a reply within the configured timeout (00:01:00). The time allotted to this operation may have been a portion of a longer timeout. This may be because the service is still processing the operation or because the service was unable to send a reply message. Please consider increasing the operation timeout (by casting the channel/proxy to IContextChannel and setting the OperationTimeout property) and ensure that the service is able to connect to the client.
Server stack trace:
at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
at System.ServiceModel.Channels.ServiceChannel.SendAsyncResult.End(SendAsyncResult result)
at System.ServiceModel.Channels.ServiceChannel.EndCall(String action, Object[] outs, IAsyncResult result)
at System.ServiceModel.Channels.ServiceChannelProxy.InvokeEndService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)
Exception rethrown at [0]:
at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
at System.ServiceModel.Activation.WorkerProcess.EndDispatchSession(IAsyncResult result)
Process Name: SMSvcHost
Process ID: 1532
Do you have any tips or configuration tricks to help me solve this issue?
What's the best configuration for high-load scenarios?
If you generated a service reference in Visual Studio, or with the svcutil tool, make sure you always call the Close or Abort methods of your proxies. I encountered a similar problem some days ago because I forgot to call these methods.
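The usual pattern looks like this; a sketch where MyServiceClient and DoWork are hypothetical stand-ins for your generated proxy and service operation:

    var client = new MyServiceClient();
    try
    {
        client.DoWork();
        client.Close();   // graceful shutdown of the channel
    }
    catch (CommunicationException)
    {
        client.Abort();   // channel is faulted; Close() would throw here
    }
    catch (TimeoutException)
    {
        client.Abort();
    }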
In case you are calling the Close() and Abort() methods accordingly and still receive this error, consider the following scenario:
You run a Microsoft .NET Framework 3.0-based or .NET Framework 3.5-based Windows Communication Foundation (WCF) service.
The WCF service uses the Net.Tcp Port Sharing Service (Smsvchost.exe) and is hosted on a computer that is running Internet Information Services (IIS).
One of the following conditions is true:
The CPU usage is high on the computer that is running IIS.
A throttle occurs in a service model for the WCF service.
Multiple requests are sent to the WCF service at the same time.
In this scenario, the WCF service takes longer than one minute to process a request from a client application. Additionally, an error message that resembles the following event entry is logged in the event log:
Log Name: System
Source: SMSvcHost 3.0.0.0
Date:
Event ID: 8
Task Category: Sharing Service
Level: Error
Keywords: Classic
User: LOCAL SERVICE
Computer:
Description: An error occurred while dispatching a duplicated socket: this handle is now leaked in the process.
ID: 2620
Source: System.ServiceModel.Activation.TcpWorkerProcess
Exception:
System.TimeoutException: This request operation sent to did not receive a reply within the configured timeout (00:01:00). The time allotted to this operation may have been a portion of a longer timeout. This may be because the service is still processing the operation or because the service was unable to send a reply message. Please consider increasing the operation timeout (by casting the channel/proxy to IContextChannel and setting the OperationTimeout property) and ensure that the service is able to connect to the client.
Note: You must restart IIS to recover the WCF service from this issue.
Cause:
This issue occurs because the Smsvchost.exe process times out after one minute when it tries to transfer an incoming connection request to the W3wp.exe worker process. Additionally, this time-out is not configurable.
When the CPU has a heavy workload, or when many concurrent connection requests are incoming, the Smsvchost.exe process cannot transfer the incoming connection to the W3wp.exe worker process within one minute. Therefore, the Smsvchost.exe process times out and eventually stops responding. When this issue occurs, the Smsvchost.exe process cannot route later requests to the W3wp.exe worker process until IIS is restarted.
Solution:
Microsoft suggests applying hotfix 2504602, which is described in a Microsoft Knowledge Base (KB) article. This hotfix is available for WCF in the .NET Framework 3.0 SP2, in the .NET Framework 3.5 SP1, and in the .NET Framework 4.
In addition, Microsoft claims to have solved this issue in the .NET Framework 4.5, so you should upgrade to the latest version.
If you upgrade to the .NET Framework 4.5 and the problem persists, the workaround is to modify the SMSvcHost.exe.config file to increase the timeout, the number of pending accepts, and various other parameters.
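For example, something along these lines in SMSvcHost.exe.config (the values are illustrative only, not recommendations; the file sits next to SMSvcHost.exe in the .NET Framework directory):

    <configuration>
      <system.serviceModel.activation>
        <!-- Illustrative values: raise the listen backlog, allow more
             concurrent pending connections/accepts, and extend the
             receive timeout. -->
        <net.tcp listenBacklog="20"
                 maxPendingConnections="200"
                 maxPendingAccepts="10"
                 receiveTimeout="00:02:00" />
      </system.serviceModel.activation>
    </configuration>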
I have hosted a WCF service inside a Windows service using C#. It works fine and I was able to communicate with the WCF service from a client application.
But the issue is that if I leave the client idle for 10 minutes or so and then try to connect again, I get the following error:
Server stack trace:
at System.ServiceModel.Channels.CommunicationObject.ThrowIfDisposedOrNotOpen()
at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
It is not the Windows service that is down; it is your client proxy.
You say that you leave the client idle. You should not do this. You should close the client after you have made your request, then open a new one when needed.
This happens when your service binding's receiveTimeout setting is left at its default value (10 minutes).
To set this to "forever", set the receiveTimeout attribute on the binding element in your config file:

    <binding receiveTimeout="Infinite">

or in code:

    binding.ReceiveTimeout = TimeSpan.MaxValue;
I have a WCF service running on Windows Server 2008 R2 with IIS 7 and no firewall. When I try to call it with the netTcpBinding binding, I get this exception:
System.TimeoutException: The open operation did not complete within the allotted timeout of 00:00:30. The time allotted to this operation may have been a portion of a longer timeout. ---> System.TimeoutException: The socket transfer timed out after 00:00:30. You have exceeded the timeout set on your binding. The time allotted to this operation may have been a portion of a longer timeout. ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
at System.ServiceModel.Channels.SocketConnection.ReadCore(Byte[] buffer, Int32 offset, Int32 size, TimeSpan timeout, Boolean closing)...
The method I call just returns a numeric value, nothing else, so the problem is not a timeout. If I use wsHttpBinding, it works without problems. I also added logging to the method I call, so I know that it is not even executed.
I followed all the steps to configure IIS from here. The questions are:
Does anybody know what the problem may be?
How can I troubleshoot/debug this problem?
I am as frustrated as you are with this misleading fault message. If I am not wrong, you get this TimeoutException within the first second. I can assure you your issue is related to either security or serialization problems. I took this issue to Microsoft's WCF product group; they seemed surprised, but I didn't hear anything back.
The first suggestion is to look at the NetTcpBinding security settings. Make sure both client and service have identical bare-bones security settings to start with (such as no message encryption and no transport-layer security). If you can make it work without security, increase the security settings step by step.
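As a starting point, that bare-bones binding might look like this on both sides (the binding name is arbitrary):

    <netTcpBinding>
      <binding name="noSecurity">
        <!-- No transport or message security; re-enable step by step
             once the call succeeds. -->
        <security mode="None" />
      </binding>
    </netTcpBinding>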
The second suggestion is that some serialization problem may be crashing the service: an empty nullable field, overloaded methods mixed in with operations, ambiguous contract names, invalid casts. To debug, set up tracing in your service's config settings. You can do this easily via the WCF Service Configuration Editor (SvcConfigEditor.exe). Run your service until you get the exception, stop it, and open the generated trace logs with the WCF Service Trace Viewer tool. Both tools come with .NET (not Visual Studio) and can be found in the Program Files/Windows SDKs folder.
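For reference, the tracing that SvcConfigEditor.exe generates boils down to config along these lines (the log path is a placeholder):

    <system.diagnostics>
      <sources>
        <source name="System.ServiceModel"
                switchValue="Warning, ActivityTracing"
                propagateActivity="true">
          <listeners>
            <!-- Writes a .svclog file readable by the Service Trace Viewer. -->
            <add name="xmlTrace"
                 type="System.Diagnostics.XmlWriterTraceListener"
                 initializeData="C:\logs\service_traces.svclog" />
          </listeners>
        </source>
      </sources>
    </system.diagnostics>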