In our application we have a "standalone" WCF service published on an Azure worker role (Cloud Service). In this service we have modified the throttling settings and limited the maximum number of concurrent calls. When this value is exceeded, WCF queues the incoming requests, but we don't know where they are queued.
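For context, the throttling change looks roughly like this; this is only a sketch, and the limit of 10 and the way the behavior is attached to the host are illustrative rather than our exact setup:
// Sketch: limiting concurrent calls via ServiceThrottlingBehavior (values illustrative).
using System.ServiceModel;
using System.ServiceModel.Description;

var host = new ServiceHost(typeof(MyService));   // MyService is a placeholder

var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
if (throttle == null)
{
    throttle = new ServiceThrottlingBehavior();
    host.Description.Behaviors.Add(throttle);
}

throttle.MaxConcurrentCalls = 10;        // calls beyond this limit are held, not rejected
throttle.MaxConcurrentSessions = 10;
throttle.MaxConcurrentInstances = 10;

host.Open();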
Because of the application's requirements, requests must not be queued, or at least must be queued for as little time as possible.
Can we see these queues?
How can we change the parameters so that requests are not queued?
Can you help me?
Thanks a lot.
I have modified all of the timeouts on the HTTP binding:
binding.ReceiveTimeout = New TimeSpan(0, 0, 1)
binding.OpenTimeout = New TimeSpan(0, 0, 1)
binding.CloseTimeout = New TimeSpan(0, 0, 1)
binding.SendTimeout = New TimeSpan(0, 0, 1)
I have set all the timeouts to 1 second, but the behavior is the same: the requests are still queued.
The changes we made were on the server-side binding.
Related
Scenario: WCF service is running. Client is attempting to connect to it and use a method on the service. Service hangs for 2-3 minutes before returning a timeout.
All timeout values are set to be 10 seconds.
binding.CloseTimeout = TimeSpan.FromSeconds(_timeout);
binding.OpenTimeout = TimeSpan.FromSeconds(_timeout);
binding.SendTimeout = TimeSpan.FromSeconds(_timeout);
binding.ReceiveTimeout = TimeSpan.FromSeconds(_timeout);
Additionally the client has its InnerChannel.OperationTimeout set to TimeSpan.FromSeconds(_timeout) as well.
_timeout comes from the app.config file and is the value "10".
What I expect is that when a timeout occurs it takes 10 seconds to surface, not the 120-180 seconds it is taking now.
The connection appears to be fine; it's the call to the method that intermittently fails and times out 120-180 seconds later.
Upon trying to send the command immediately after the timeout (repeating the call basically) it sends and receives just fine.
What am I missing here? What else needs to be set to make sure the client is not sitting for 2-3 minutes before timing out?
All articles that I am finding deal with these five timeout values (Close, Open, Send, Receive, and Operation).
binding is of type BasicHttpBinding.
It appears that both the service side and the client side require these timeouts. There was no timeout value set on the service end. Once it was set, the problem appears to have resolved itself.
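For anyone hitting the same issue, a minimal sketch of what applying the timeouts on both sides can look like; the service/contract names and the address are placeholders, and the 10-second value simply mirrors the question:
var timeout = TimeSpan.FromSeconds(10);

// Service side: the binding used by the ServiceHost gets the same four timeouts.
var serverBinding = new BasicHttpBinding
{
    OpenTimeout = timeout,
    CloseTimeout = timeout,
    SendTimeout = timeout,
    ReceiveTimeout = timeout
};
var host = new ServiceHost(typeof(MyService));   // placeholder service type
host.AddServiceEndpoint(typeof(IMyService), serverBinding, "http://localhost:8080/MyService");
host.Open();

// Client side: same four timeouts on the binding, plus OperationTimeout on the channel.
var clientBinding = new BasicHttpBinding
{
    OpenTimeout = timeout,
    CloseTimeout = timeout,
    SendTimeout = timeout,
    ReceiveTimeout = timeout
};
var factory = new ChannelFactory<IMyService>(clientBinding,
    new EndpointAddress("http://localhost:8080/MyService"));
var client = factory.CreateChannel();
((IClientChannel)client).OperationTimeout = timeout;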
Problem
I have this strange problem. I am hosting a WCF server in a console app:
Console.WriteLine("Press 'q' to quit.");
var serviceHost = new ServiceHost(typeof(MessageService));
serviceHost.Open();
while (Console.ReadKey().KeyChar != 'q')
{
}
serviceHost.Close();
It exposes two endpoints for publish and subscribe (duplex binding).
When I stop or exit the console app, I never receive a channel faulted event at the client end. I would like the client to be informed if the server is down. Any idea what is going wrong here?
All I want is for either of the following events to be raised when the console app goes down:
msgsvc.InnerDuplexChannel.Faulted += InnerDuplexChannelOnFaulted;
msgsvc.InnerChannel.Faulted += InnerChannelOnFaulted;
From MSDN:
The duplex model does not automatically detect when a service or client closes its channel. So if a service unexpectedly terminates, by default the service will not be notified, or if a client unexpectedly terminates, the service will not be notified. Clients and services can implement their own protocol to notify each other if they so choose.
AFAIK the TCP channel is quite responsive to (persistent) connection problems, but you can use the callback to notify clients before the server becomes unavailable. From the client side you can use a dummy ping/poke method on a timer to get the actual connection state and keep the channel alive (a sketch of this appears after the metadata example below). That's also a good point at which to recover the client proxy (reconnect), I suppose. If the service provides a metadata endpoint, you can also call svcutil as a trial connection, or retrieve the metadata programmatically:
Uri mexAddress = new Uri("http://localhost:5000/myservice");
var mexClient = new MetadataExchangeClient(mexAddress, MetadataExchangeClientMode.HttpGet);
MetadataSet metadata = mexClient.GetMetadata();
MetadataImporter importer = new WsdlImporter(metadata);
ServiceEndpointCollection endpoints = importer.ImportAllEndpoints();
ContractDescription description =
    ContractDescription.GetContract(typeof(IMyContract));
bool contractSupported = endpoints.Any(endpoint =>
    endpoint.Contract.Namespace == description.Namespace &&
    endpoint.Contract.Name == description.Name);
or
ServiceEndpointCollection endpoints = MetadataResolver.Resolve(typeof(IMyContract), mexAddress, MetadataExchangeClientMode.HttpGet);
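The timer-based ping/keep-alive idea mentioned above could look roughly like this on the client; this is only a sketch, and the Ping() operation on IMyContract and the 10-second interval are assumptions:
// Sketch: poll the service with a cheap operation and recreate the proxy when it fails.
using System;
using System.ServiceModel;
using System.Timers;

class KeepAliveClient
{
    private readonly ChannelFactory<IMyContract> _factory;
    private IMyContract _proxy;
    private readonly Timer _timer = new Timer(10000);   // 10-second poll interval (arbitrary)

    public KeepAliveClient(ChannelFactory<IMyContract> factory)
    {
        _factory = factory;
        _proxy = _factory.CreateChannel();
        _timer.Elapsed += (s, e) => PingOrReconnect();
        _timer.Start();
    }

    private void PingOrReconnect()
    {
        try
        {
            _proxy.Ping();   // assumed lightweight operation; any cheap call keeps the channel warm
        }
        catch (CommunicationException)
        {
            Reconnect();
        }
        catch (TimeoutException)
        {
            Reconnect();
        }
    }

    private void Reconnect()
    {
        ((ICommunicationObject)_proxy).Abort();   // drop the dead channel
        _proxy = _factory.CreateChannel();        // create a fresh proxy
    }
}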
If you're the client and you subscribe to all three IClientChannel events, Closing, Closed, and Faulted, you should see at the very least the Closing and Closed events when you close the service console app. I'm not sure why; if you're closing up correctly, you would think you'd see Faulted as well. I'm using full-duplex net.tcp channels and I see these events just fine. Keep in mind this is in 2017, so things might have changed since then, but I rely on these events all the time in my code. For those people who say the duplex model doesn't detect the closing of channels, I don't understand that.
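A minimal sketch of wiring those events up on the client side; the contract, callback class, and address are placeholders:
// Sketch: subscribe to Closing, Closed, and Faulted on the client channel.
using System;
using System.ServiceModel;

var factory = new DuplexChannelFactory<IMyContract>(
    new InstanceContext(new MyCallbackHandler()),    // placeholder callback implementation
    new NetTcpBinding(),
    "net.tcp://localhost:9000/MyService");           // placeholder address

IMyContract proxy = factory.CreateChannel();
var channel = (IClientChannel)proxy;

channel.Closing += (s, e) => Console.WriteLine("Channel closing");
channel.Closed  += (s, e) => Console.WriteLine("Channel closed");
channel.Faulted += (s, e) => Console.WriteLine("Channel faulted");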
I have a WCF service that I am hosting within a Windows service on a Windows 2003 server, listening on an MSMQ queue. I set ReceiveRetryCount = 2 on the netMsmqBinding. The service was set up to use transactions ([OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]) and was functioning great.
I needed to turn off the transactions due to a database call that couldn't support MSDTC, so I switched the operation behavior to
[OperationBehavior(TransactionScopeRequired = false)]
Now, when an exception or fault is thrown, no retry occurs and the fault handler for the service never fires. The original message ends up in the system DLQ. I would like the fault handler to handle the faults after two retries. Any ideas?
Switch things back to the way they were before.
Around the database call, add the following (the code is from memory, so let me know if I need to fix it up a bit):
// using System.Transactions;
using (var ts = new TransactionScope(TransactionScopeOption.Suppress))
{
    // Call DB stuff
    ts.Complete();
}
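Put together, the operation would look roughly like this; the contract and the database call are placeholders, but the idea is that the MSMQ receive stays transactional (so ReceiveRetryCount and the fault handling work again) while the database call is suppressed out of the ambient transaction:
// Sketch: transactions back on for the MSMQ receive, suppressed only around the DB call.
using System.ServiceModel;
using System.Transactions;

[ServiceContract]
public interface IMyQueueContract                    // placeholder contract
{
    [OperationContract(IsOneWay = true)]
    void ProcessMessage(string message);
}

public class MyQueueService : IMyQueueContract
{
    [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
    public void ProcessMessage(string message)
    {
        // Work that should stay inside the MSMQ receive transaction goes here.

        using (var ts = new TransactionScope(TransactionScopeOption.Suppress))
        {
            // The database call that cannot enlist in MSDTC runs outside the ambient transaction.
            SaveToDatabase(message);                 // placeholder DB call
            ts.Complete();
        }
    }

    private void SaveToDatabase(string message) { /* ... */ }
}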
I'm attempting to load test a WCF service with (IIS6/Server2003/BasicHttpBinding). The service is throttled as follows:
<serviceThrottling maxConcurrentCalls="100" maxConcurrentSessions="100" maxConcurrentInstances="100"/>
To assess the number of calls on the server I'm using the ServiceModelService 3.0.0.0 performance counters. If I throttle maxConcurrentCalls down to 20, 15, 10, or anything lower, the Instances performance counter shows that WCF is respecting the throttling. However, if I change maxConcurrentCalls to 30, I'm never able to get Instances to go above 24. Additionally, Calls Outstanding never goes above 24. What else could be limiting WCF?
See Why Only Two Concurrent Requests for WCF Load Testing?
When I looked at this question, my first response was that the client did not really send enough requests to the server. Why is that? Here are the reasons:
1) If you use the synchronous WCF HttpModule/HttpHandler (installed by default), you would get the maximal number of concurrent requests (held by that number of ASP.NET worker threads) as 12 * [number of CPUs for the server].
2) WCF throttling is specified as above.
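One more thing worth ruling out, hinted at by the linked article's title: if the load generator is itself a .NET client, System.Net caps it at two concurrent HTTP connections per host by default, which can masquerade as a server-side limit. A hedged sketch of raising that on the client (the value of 100 simply matches the throttle above):
// Sketch: raise the client-side outgoing connection limit before creating the proxies.
using System.Net;

ServicePointManager.DefaultConnectionLimit = 100;   // default is 2 per host for non-ASP.NET clients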
I wish to test and observe timeout behaviours between a WCF client and service host. For receiveTimeout and sendTimeout, it is probably easy to transmit a large byte stream that takes more than a few seconds and set those timeout attributes to ridiculously low values.
However, since there is nothing that can be done beyond calling the serviceProxy.Open() or .Close() methods, I am wondering what a good way is to delay the opening and closing of WCF connections, so as to cross the openTimeout and closeTimeout thresholds.
Well, if you have exposed your contracts correctly (as interfaces), you can mock an instance of the proxy that throws TimeoutException and pass it to your code for use.
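A minimal sketch of that approach; IMyService, the operation, and the consuming code are placeholders:
// Sketch: a hand-rolled fake proxy that always "times out", for exercising
// the caller's timeout handling without a real service or a slow network.
using System;
using System.ServiceModel;

[ServiceContract]
public interface IMyService                          // placeholder contract
{
    [OperationContract]
    string GetData(int value);
}

public class TimingOutProxy : IMyService
{
    public string GetData(int value)
    {
        // Simulate the exception the real channel would throw past openTimeout/sendTimeout.
        throw new TimeoutException("Simulated WCF timeout.");
    }
}

// Usage: pass the fake to the code under test instead of the real client proxy,
// then assert that the caller handles TimeoutException the way you expect.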