I am trying to add a service reference to Axapta 2009. All is working well; it's a simple web method (external web service) that gets executed on the server tier (necessary, otherwise I get a CLR interop error).
But I've run into the following problems:
Is it possible to close the proxy one way or another? This option is not available in the generated service object in AX (only the web methods and a ToString).
At a certain moment I ran into a service in the faulted state. Normally you would just create the service object again, but this didn't solve anything until I restarted the AOS. Is this normal behaviour? Is the service object cached on the server side, or something like that?
Thanks in advance.
This is happening because the WCF service is throwing faults, probably unhandled ones.
Do you have access to the WCF service? If so, have a look at this link: How do I prevent a WCF service from entering a faulted state?
Try to catch any exceptions within the WCF Service and log them.
Unfortunately AX cannot catch FaultExceptions thrown by WCF, so you will be limited to modifying the WCF service to return an object encapsulating the return message, along with a flag indicating whether the method processed successfully or threw an exception.
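A rough sketch of what that could look like on the service side; OperationResult, IMyService, DoWork, and DoRealWork are illustrative names, not part of the original service:

using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class OperationResult
{
    // True when the web method completed without throwing.
    [DataMember]
    public bool Success { get; set; }

    // Filled with the exception message (or an error code) on failure.
    [DataMember]
    public string ErrorMessage { get; set; }

    // The actual payload the AX caller is interested in.
    [DataMember]
    public string ReturnValue { get; set; }
}

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    OperationResult DoWork(string input);
}

public class MyService : IMyService
{
    public OperationResult DoWork(string input)
    {
        try
        {
            // DoRealWork stands in for whatever the method actually does.
            return new OperationResult { Success = true, ReturnValue = DoRealWork(input) };
        }
        catch (Exception ex)
        {
            // Report the failure in the result instead of letting a fault escape to AX.
            return new OperationResult { Success = false, ErrorMessage = ex.Message };
        }
    }

    private string DoRealWork(string input)
    {
        return input.ToUpperInvariant();
    }
}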
Yes, it is normal behavior for a faulted WCF service to stay in the Faulted state. You may have to restart the IIS service or just recycle the AppPool the WCF service is running under.
Related
I have a WCF service that is coded to throw a custom FaultException under certain conditions. When hosted locally and on several servers this executes as expected: the custom fault is thrown by the service and caught by the client. But on the production and UAT servers the custom fault is thrown, yet what the client receives is a ProtocolException (500 error).
Is anyone aware of an IIS or server setting that could be affecting this WCF service? This issue is driving me crazy.
I am having a similar issue.
Our server uses a third-party web service. When a client connects to our server from the same machine, our server can catch the FaultException, but if the client connects over the network our server can't handle the FaultException and only gets the "500 Internal Server Error".
I used a sniffer to inspect the incoming data, and I can see that the web service sends the FaultException in both cases. This is a third-party web service, so I have no control over it.
The clients use .NET Remoting to connect to the server.
The solution: add
RemotingConfiguration.CustomErrorsMode = CustomErrorsModes.Off;
to the Remoting server; for some reason this setting affects the exceptions it (the server) receives.
This is not the ideal solution, because now we are exposing our server's exceptions...
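For context, a minimal sketch of where that line would typically go in a self-hosted Remoting server (the host class and config file name are illustrative):

using System;
using System.Runtime.Remoting;

public static class RemotingHost
{
    public static void Main()
    {
        // Load the usual remoting configuration first.
        RemotingConfiguration.Configure("Server.exe.config", false);

        // Pass the original exceptions through to callers instead of the sanitized
        // "server error" message, so faults from downstream services stay visible.
        RemotingConfiguration.CustomErrorsMode = CustomErrorsModes.Off;

        Console.WriteLine("Server running. Press Enter to exit.");
        Console.ReadLine();
    }
}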
I built an application with 5 WCF services and hosted them in IIS 7.5. I used the default configuration for the net.tcp port (808*).
I am not used to hosting WCF services in IIS (I have always hosted them in Windows services), and I found it interesting that when I call a service (using the TCP binding) I get two different processes on the server.
One is SMSvcHost.exe (the one that is actually using port 808) and the other is w3wp.exe, which I think is handling an instance of the service I am calling.
I have a lot of questions so I will enumerate them:
Should I use IIS instead of a Windows service to host WCF services (TCP binding)?
Does the fact that I have two processes answering my request mean there may be a CPU impact?
Sometimes my services stop answering over the TCP binding. I get a timeout error on my clients, but the mex endpoint answers correctly if I go to http://myServer/Service1.svc. I suspect this problem is caused by faulted connections, but I am throwing exceptions correctly (using fault exceptions) and catching them correctly on my clients. Besides, I am also implementing a partial class for every service to dispose correctly (using either the Close or Abort methods). Is there any way to figure out what's going on when the services stop answering?
Shouldn't the w3wp.exe processes be closed after the client ends the request? They remain in Task Manager even when no one is using the services. I guess this is the reason why my Entity Library logging locks the file after my request is completed.
Ideally it would be better hosted in the Windows Process Activation Service (WAS), which is close to what you think of as IIS, but not quite. Here's a good introductory article in MSDN Magazine:
http://msdn.microsoft.com/en-us/magazine/cc163357.aspx
I am encountering this strange problem while using WCF services together with a LINQ to SQL (L2SQL) DAL.
The server is hosted on localhost and contains an implementation of the corresponding interface. The client knows the interface and occasionally queries the database via the exposed service, using TCP transport.
When the client runs locally, everything's just fine.
But whenever the client runs on another machine, an InvalidOperationException is thrown in System.Data.dll (the data still gets delivered), and over time the channel enters the Faulted state (and nothing gets delivered any more).
I feel I'm missing something very basic in my application.
Could anyone please point out possible reasons for such odd behaviour?
An exception in the service will cause the channel to enter a Faulted state if you do not clean up properly. See http://bloggingabout.net/blogs/erwyn/archive/2006/12/09/WCF-Service-Proxy-Helper.aspx for how to clean up the proxy when the service fails.
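The cleanup pattern that article describes looks roughly like this; ServiceClient and CallMethod stand in for your generated proxy and its operation:

using System;
using System.ServiceModel;

public static class ProxyCleanupExample
{
    public static void CallService()
    {
        var client = new ServiceClient();
        try
        {
            client.CallMethod();
            client.Close();      // graceful close when the call succeeded
        }
        catch (CommunicationException)
        {
            client.Abort();      // the channel is faulted, Close() would throw
            throw;
        }
        catch (TimeoutException)
        {
            client.Abort();
            throw;
        }
        catch (Exception)
        {
            client.Abort();      // play it safe for any other failure too
            throw;
        }
    }
}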
As for your problem with L2SQL, it looks like you already found a solution.
Imagine the following setup: a Silverlight client tunnels a serialized command over the network using a WCF service, which in turn deserializes the command and sends it via NServiceBus to a generic host responsible for processing the command. Upon sending the command, the WCF service registers a callback to be invoked. The generic host validates the command and 'returns' an error code (0 == success, >0 == failure).
Note: the WCF service is modelled after the built-in WCF service. The difference is that this WCF service receives a 'universal command' (not an IMessage), deserializes it into a real command (which does implement IMessage), and then sends the deserialized command off to the bus.
When unexpected exceptions occur, the command gets (after a certain amount of retries) queued in an error queue. At this point, the initiating WCF service sits there idle, unaware of what just happened. At some later point, the Silverlight client will time out according to the WCF client proxy configuration.
Things which are fuzzy in my head:
Does NServiceBus handle this scenario in any way? When does the timeout exception get thrown (if at all)? Or is this something exclusive to sagas?
Presuming I use [OperationContract(AsyncPattern=true)], are there any WCF-related timeout settings that will kill the service operation? Or will the EndXXX method somehow be called? Or will it sit there forever, leaking, waiting for something that will never come?
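For reference, the AsyncPattern contract shape being referred to looks roughly like this (the operation and parameter names are made up):

using System;
using System.ServiceModel;

[ServiceContract]
public interface ICommandService
{
    // WCF invokes BeginProcessCommand when a request arrives and calls
    // EndProcessCommand once the IAsyncResult signals completion.
    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginProcessCommand(string serializedCommand, AsyncCallback callback, object state);

    int EndProcessCommand(IAsyncResult result);
}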
Ways to proceed:
reuse existing timeout mechanisms, provided things don't leak.
build my own timeout mechanism between the WCF service and NServiceBus.
notify the WCF service somehow when the command lands in the error queue.
build my own async notification mechanism using a full-blown callback message handler in the WCF service layer.
Things I've done:
run the example provided with NServiceBus.
spiked the happy case.
Any guidance on how to proceed is welcome, be it blog posts, mailing list entries, ...
Some motivations for picking my current approach
I'm trying to leverage some of the scalability advantages (using distributor in a later phase) of NServiceBus.
I don't want to host a gazillion WCF services (one for each command), that's why I cooked up a bus-like WCF service.
Even though this is somewhat request/response style, I'm mostly concerned with gracefully handling a command reply not coming through.
You can develop any sort of message type you desire; IMessage is simply a marker interface. If you inspect the WSDL file that the service's mex endpoint provides, there is no reference to IMessage, so you can define any command you like in your service. That being the case, you should be able to use the provided WCF host.
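A command in this setup can be as simple as a plain class that carries the marker interface (the type and members below are made up for illustration):

using System;
using NServiceBus;

// IMessage adds no members; it only marks the type as something the bus can transport.
[Serializable]
public class CreateCustomerCommand : IMessage
{
    public Guid CustomerId { get; set; }
    public string Name { get; set; }
}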
I was able to reproduce the issue you describe using the built-in WCF hosting option. When an exception is thrown, the entire transaction is rolled back and this includes the Bus.Return, and therefore the service never gets a response.
I found a hack around this that I could provide, but I recommend reconsidering how you are using the service. If you are truly looking to do some expensive operations in a separate process, then I would recommend that your WCF endpoint do a Bus.Send to a different process altogether. This would assure the client that the command was successfully received and that work is in progress. From there it would be up to the server to complete the command (some up-front validation would help ensure its success). If the command is not completed successfully, this should be made known on another channel (some background polling from the client would do).
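A rough sketch of that shape, assuming the bus is injected into the WCF service; ICommandGateway, CommandSerializer, and the queue name are placeholders, not part of the original code:

using System.ServiceModel;
using NServiceBus;

[ServiceContract]
public interface ICommandGateway
{
    [OperationContract]
    void Submit(string serializedCommand);
}

public class CommandGateway : ICommandGateway
{
    private readonly IBus bus;

    public CommandGateway(IBus bus)
    {
        this.bus = bus;
    }

    public void Submit(string serializedCommand)
    {
        // Up-front validation / deserialization gives the client an immediate failure
        // instead of a command that dies quietly in the error queue later on.
        // CommandSerializer is a placeholder for whatever deserialization you use.
        IMessage command = CommandSerializer.Deserialize(serializedCommand);

        // Hand the work to a separate worker endpoint. Returning from this call only
        // tells the client the command was accepted, not that it completed.
        bus.Send("WorkerEndpointQueue", command);
    }
}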
I have a fairly straightforward WCF service that performs one-way file synchronization for a bunch of smart clients. I've noticed that when there's a network or service interruption during a call, the client stops being able to communicate with the server until the entire application is restarted.
The service runs with BasicHttpBinding and is hosted in IIS 6 (a .svc page), using transferMode="Streamed" and messageEncoding="Mtom". The service is configured to use the default InstanceContextMode (I think it's PerCall?) and ConcurrencyMode=Single. It's using the default throttling behavior, but I'm in an isolated test environment that nobody else is hitting.
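For reference, that binding corresponds roughly to this when built in code (the class and method names are illustrative):

using System.ServiceModel;

public static class BindingExample
{
    // Rough in-code equivalent of the binding configuration described above.
    public static BasicHttpBinding CreateFileSyncBinding()
    {
        return new BasicHttpBinding
        {
            TransferMode = TransferMode.Streamed,       // stream files instead of buffering them
            MessageEncoding = WSMessageEncoding.Mtom    // MTOM encoding for the binary payload
        };
    }
}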
Clients are Windows Services. I'm using this ServiceProxyHelper to ensure connections are Close()'d or Abort()'d correctly when Dispose()'d, though there are no sessions so I don't think that even matters. When an error occurs, the Client object is disposed and then goes out of scope. After the exception is detected, the service waits a bit, then creates a new client object and tries again. So it should recover from the failure, but for some reason all subsequent calls to the service fail.
I can reproduce this reliably by starting a client, allowing it to transfer a few files, then iisresetting the server. First the client generally displays a "Service is Too Busy" error (which maps to the IIS 503 error that you get during an app restart). After that, all subsequent calls to the service time out. As far as I can tell the calls are not even being attempted by the client. I have tracing enabled and what I see is: Timeout error, followed by a "Failed to send request message over HTTP" warning, followed by another Timeout error.
The crazy thing is that when I configure the client to use Fiddler (port 8888) as a proxy in app.config, everything works as desired. So somehow Fiddler as the proxy is closing or finalizing some kind of connection that WCF on its own is not.
Thoughts?
Edit 2009-10-30 8:54PM: Changed service attributes to: InstanceContextMode=Single and ConcurrencyMode=Multiple. No difference.
Well, that was painful. It took me forever, but I finally zeroed in on the difference between running with a proxy vs. without, and started poking around the <system.net> settings. It turns out that adding this configuration bit to the client fixes the problem:
<system.net>
  <settings>
    <servicePointManager expect100Continue="false" />
  </settings>
</system.net>
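The same switch can also be flipped in code at client startup (I have not tried this in my setup, but it is the code equivalent of the config above):

using System.Net;

// Equivalent of <servicePointManager expect100Continue="false" />; it must run
// before the first request creates its service point.
ServicePointManager.Expect100Continue = false;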
Can somebody explain what's going on? Why should this setting cause WCF clients to hang irreparably when there's a service interruption?
Are you sure this isn't a client-side issue? If your Windows service is making the WCF calls on a separate thread from the main thread, and you have an unhandled exception happening on the child thread, the calling thread may or may not sit there and wait forever because it's waiting for that thread to return.
That would explain why there's an exception inside the Windows service and then it looks like it makes no more calls to the WCF service... it's hung.
Used to be a huge issue when using Timers to spawn processes in .NET 2.0 Windows Services.