WCF Async working too long

I have a WCF service with a debug method:
public Result DebugMethod(TimeSpan time){
Thread.Sleep(time);
return new Result { Code = new Random().Next()};
}
I wanted to test performance between sync and async calling.
I wrapped the Result in a Response class that uses a ManualResetEvent to wait for the result:
class Response {
public Result result;
bool loaded = false;
public ManualResetEvent _wait = new ManualResetEvent(false);
public Result get(){
if(!loaded){
_wait.WaitOne();
loaded = true;
}
return result;
}
}
And my DebugMethodCompleted event handler:
// r - the matching Response instance, result - the WCF response
r.result = result;
r._wait.Set();
I'm calling DebugMethodAsync 10 times, each with TimeSpan = 1 second.
Then, after all the async calls, I check the results (just to wait for them).
So I expected it to take about 1 second.
But every time it takes 5 seconds.
If I change the number of calls to n, it takes n/2 seconds to get all the responses, as if only two async calls were being processed at a time.
EDIT:
It seems that all the async calls in the client app are made immediately, but the server handles at most two of them concurrently (from one client).
So:
The client makes 4 async calls and waits for the results; the server handles the first two, then, after 1 second, the third and fourth.
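For reference, this is roughly what the client-side test looks like - a minimal sketch only, assuming the usual usings and an event-based proxy generated by Add Service Reference (the DebugServiceClient name and the userState overload are from the generated code, and are placeholders here):
// Minimal sketch of the test loop, assuming a generated event-based proxy.
var client = new DebugServiceClient();            // placeholder proxy name
var responses = new List<Response>();

client.DebugMethodCompleted += (sender, e) =>
{
    var r = (Response)e.UserState;                // the Response passed as userState below
    r.result = e.Result;
    r._wait.Set();
};

var sw = Stopwatch.StartNew();
for (int i = 0; i < 10; i++)
{
    var r = new Response();
    responses.Add(r);
    client.DebugMethodAsync(TimeSpan.FromSeconds(1), r);   // userState ties the call to its Response
}

foreach (var r in responses)
    r.get();                                      // blocks until the matching callback fires

Console.WriteLine(sw.Elapsed);                    // ~5 s observed instead of the expected ~1 s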

Did you check the WCF limits and throttling? The limit is probably set to 2. Client operating systems also have a hard-coded limit of at most 10 concurrent connections that you cannot configure away.

There are a few things that need to be done.
First, serviceThrottling should be set in the server's service behavior.
It controls the number of concurrent calls, sessions, and instances the service will accept.
<behavior ...>
...
<serviceThrottling
maxConcurrentCalls="A"
maxConcurrentSessions="B"
maxConcurrentInstances="C"
/>
</behavior>
Next, in the client app, System.Net.ServicePointManager.DefaultConnectionLimit should be set. The default value is 2.
// somewhere in the code
System.Net.ServicePointManager.DefaultConnectionLimit = X;
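If the service is self-hosted, the same throttle can also be applied in code instead of config. A rough sketch only - the DebugService type name and the numbers are illustrative, not recommendations:
using System.ServiceModel;
using System.ServiceModel.Description;

var host = new ServiceHost(typeof(DebugService));   // your service implementation type

var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
if (throttle == null)
{
    throttle = new ServiceThrottlingBehavior();
    host.Description.Behaviors.Add(throttle);
}
throttle.MaxConcurrentCalls = 16;        // illustrative values; tune for your load
throttle.MaxConcurrentSessions = 16;
throttle.MaxConcurrentInstances = 16;

host.Open();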

Related

Efficient thread use on high traffic ASP.NET Core Web API with processing timeout

My ASP.NET Core Web API (Linux) endpoint needs to serve a high volume of concurrent requests. If the request takes more than 200ms then it should abort and return a custom piece of JSON. The code is all awaitable. The request must always return HTTP 200 and the HTTP request timeout cannot be reduced from 30 secs to 200ms.
What is the most efficient way to accomplish what I want? Should I use a Task? Should I use Task.Wait or Task.WaitAsync? Or should the work methods run in the HTTP request thread, periodically check Stopwatch.Elapsed and throw a timeout exception?
This is my current code:
var task = Task.Factory.StartNew(async () =>
{
// Processing part 1
var result1 = await DoWorkPart1("Param1");
if (cancellationToken.IsCancellationRequested)
cancellationToken.ThrowIfCancellationRequested();
// Processing part 2
var result2 = await DoWorkPart2(result1);
return result2;
}).Unwrap(); // Return lambda task, not outer task
// Is it better to use WaitAsync?
task.Wait(TimeSpan.FromMilliseconds(150));
if (task.IsCompleted) // Result within timeout
{
if (task.Exception == null) // Success
{
return Ok(task.Result);
}
else
{
return Ok(new FailedObject() { Reason = ReasonEnum.UnexpectedError });
}
}
else // Timeout
{
return Ok(new FailedObject() { Reason = ReasonEnum.TookTooLong });
}
What is the most efficient way to accomplish what I want?
I recommend using CancellationTokens to cancel. With a very short timeout like 200ms, you might just want to create a CancellationTokenSource with that timeout and ignore the CancellationToken provided to you by ASP.NET, which handles situations like clients disconnecting early.
Should I use a Task? Should I use Task.Wait or Task.WaitAsync? Or should the work methods run in the HTTP request thread, periodically check Stopwatch.Elapsed and throw a timeout exception?
I would say none of these. Instead, pass the CancellationToken down as far as you possibly can, ideally right to the lowest-level APIs your asynchronous code is calling.
If some of those APIs ignore their cancellation tokens, or if it's possible they may complete synchronously (e.g., due to caching), then adding cancellationToken.ThrowIfCancellationRequested(); in-between steps is a good idea.
Side note: Don't use StartNew.
using var cts = new CancellationTokenSource(TimeSpan.FromMilliseconds(200));
try
{
// Processing part 1
var result1 = await DoWorkPart1("Param1", cts.Token);
cts.Token.ThrowIfCancellationRequested();
// Processing part 2
var result2 = await DoWorkPart2(result1, cts.Token);
return Ok(result2);
}
catch (OperationCanceledException)
{
return Ok(new FailedObject() { Reason = ReasonEnum.TookTooLong });
}
catch
{
return Ok(new FailedObject() { Reason = ReasonEnum.UnexpectedError });
}

Async WCF Service with multiple async calls inside

I have a web service in WCF that consumes some external web services, so I want to make this service asynchronous in order to release the thread, wait for the completion of all the external services, and then return the result to the client.
With Framework 4.0
public class MyService : IMyService
{
public IAsyncResult BeginDoWork(int count, AsyncCallback callback, object serviceState)
{
var proxyOne = new Gateway.BackendOperation.BackendOperationOneSoapClient();
var proxyTwo = new Gateway.BackendOperationTwo.OperationTwoSoapClient();
var taskOne = Task<int>.Factory.FromAsync(proxyOne.BeginGetNumber, proxyOne.EndGetNumber, 10, serviceState);
var taskTwo = Task<int>.Factory.FromAsync(proxyTwo.BeginGetNumber, proxyTwo.EndGetNumber, 10, serviceState);
var tasks = new Queue<Task<int>>();
tasks.Enqueue(taskOne);
tasks.Enqueue(taskTwo);
return Task.Factory.ContinueWhenAll(tasks.ToArray(), innerTasks =>
{
var tcs = new TaskCompletionSource<int>(serviceState);
int sum = 0;
foreach (var innerTask in innerTasks)
{
if (innerTask.IsFaulted)
{
tcs.SetException(innerTask.Exception);
callback(tcs.Task);
return;
}
if (innerTask.IsCompleted)
{
sum = innerTask.Result;
}
}
tcs.SetResult(sum);
callback(tcs.Task);
});
}
public int EndDoWork(IAsyncResult result)
{
try
{
return ((Task<int>)result).Result;
}
catch (AggregateException ex)
{
throw ex.InnerException;
}
}
}
My questions here are:
1. This code uses three threads: one that is created in BeginDoWork, another that runs the anonymous method passed to ContinueWhenAll, and a last one when the callback is executed, in this case EndDoWork. Is that correct, or am I doing something wrong in the calls? Should I use any synchronization context? Note: the synchronization context is null on the main thread.
2. What happens if I "share" information between threads, for instance in the callback function? Will that cause a performance issue, or is the anonymous method like a closure where I can share data?
With Framework 4.5 and async/await
Now with Framework 4.5, the code looks much simpler than before:
public async Task<int> DoWorkAsync(int count)
{
var proxyOne = new Backend.ServiceOne.ServiceOneClient();
var proxyTwo = new Backend.ServiceTwo.ServiceTwoClient();
var doWorkOne = proxyOne.DoWorkAsync(count);
var doWorkTwo = proxyTwo.DoWorkAsync(count);
var result = await Task.WhenAll(doWorkOne, doWorkTwo);
return doWorkOne.Result + doWorkTwo.Result;
}
But in this case, when I debug the application, I always see that the code is executed on the same thread. So my questions here are:
3. When I'm awaiting the "awaitable" code, is that thread released back to the thread pool to execute more requests?
3.1. If so, I suppose that when I get a result from the awaited Task, the execution completes on the same thread as before the call. Is that possible? What happens if that thread is processing another request?
3.2. If not, how can I release the thread back to the thread pool with the async and await pattern?
Thank you!
1. This code uses three threads: one that is created in BeginDoWork, another that runs the anonymous method passed to ContinueWhenAll, and a last one when the callback is executed, in this case EndDoWork. Is that correct, or am I doing something wrong in the calls? Should I use any synchronization context?
It's better to think in terms of "tasks" rather than "threads". You do have three tasks here, each of which will run on the thread pool, one at a time.
2. What happens if I "share" information between threads, for instance in the callback function? Will that cause a performance issue, or is the anonymous method like a closure where I can share data?
You don't have to worry about synchronization because these tasks can't run concurrently. BeginDoWork registers the continuation just before returning, so it's already practically done by the time the continuation can run. EndDoWork will probably not be called until the continuation is complete; but even if it is, it will block until the continuation is complete.
(Technically, the continuation can start running before BeginDoWork completes, but BeginDoWork just returns at that point, so it doesn't matter).
3. When I'm awaiting the "awaitable" code, is that thread released back to the thread pool to execute more requests?
Yes.
3.1. If so, I suppose that when I get a result from the awaited Task, the execution completes on the same thread as before the call. Is that possible? What happens if that thread is processing another request?
No. Your host (in this case, ASP.NET) may continue the async methods on any thread it happens to have available.
This is perfectly safe because only one thread is executing at a time.
P.S. I recommend
var result = await Task.WhenAll(doWorkOne, doWorkTwo);
return result[0] + result[1];
instead of
var result = await Task.WhenAll(doWorkOne, doWorkTwo);
return doWorkOne.Result + doWorkTwo.Result;
because Task.Result should be avoided in async programming.

Have multiple calls wait on the same internal async task

(Note: this is an over-simplified scenario to demonstrate my coding issue.)
I have the following class interface:
public class CustomerService
{
Task<IEnumerable<Customer>> FindCustomersInArea(String areaName);
Task<Customer> GetCustomerByName(String name);
:
}
This is the client-side of a RESTful API which loads a list of Customer objects from the server then exposes methods that allows client code to consume and work against that list.
Both of these methods work against the internal list of Customers retrieved from the server as follows:
private Task<IEnumerable<Customer>> LoadCustomersAsync()
{
var tcs = new TaskCompletionSource<IEnumerable<Customer>>();
try
{
// GetAsync returns Task<HttpResponseMessage>
Client.GetAsync(uri).ContinueWith(task =>
{
if (task.IsCanceled)
{
tcs.SetCanceled();
}
else if (task.IsFaulted)
{
tcs.SetException(task.Exception);
}
else
{
// Convert HttpResponseMessage to desired return type
var response = task.Result;
var list = response.Content.ReadAs<IEnumerable<Customer>>();
tcs.SetResult(list);
}
});
}
catch (Exception ex)
{
tcs.SetException(ex);
}
return tcs.Task;
}
The Client class is a custom version of the HttpClient class from the WCF Web API (now ASP.NET Web API) because I am working in Silverlight and they don't have an SL version of their client assemblies.
After all that background, here's my problem:
All of the methods in the CustomerService class use the list returned by the asynchronous LoadCustomersAsync method; therefore, any calls to these methods should wait (asynchronously) until the LoadCustomersAsync method has returned and the appropriate logic has executed on the returned list.
I also only want one call made from the client (in LoadCustomers) at a time. So, I need all of the calls to the public methods to wait on the same internal task.
To review, here's what I need to figure out how to accomplish:
Any call to FindCustomersInArea and GetCustomerByName should return a Task that waits for the LoadCustomersAsync method to complete. If LoadCustomersAsync has already returned (and the cached list still valid), then the method may continue immediately.
After LoadCustomersAsync returns, each method has additional logic required to convert the list into the desired return value for the method.
There must only ever be one active call to LoadCustomersAsync (or to the GetAsync call within it).
If the cached list expires, then subsequent calls will trigger a reload (via LoadCustomersAsync).
Let me know if you need further clarification, but I'm hoping this is a common enough use case that someone can help me work out the logic to get the client working as desired.
Disclaimer: I'm going to assume you're using a singleton instance of your HttpClient subclass. If that's not the case we need only modify slightly what I'm about to tell you.
Yes, this is totally doable. The mechanism we're going to rely on for subsequent calls to LoadCustomersAsync is that if you attach a continuation to a Task, even if that Task completed eons ago, your continuation will be signaled "immediately" with the task's final state.
Instead of creating/returning a new TaskCompletionSource<T> (TCS) every time from the LoadCustomersAsync method, you would instead have a field on the class that holds the TCS. This allows your instance to remember the TCS that last represented a cache miss. Its state will be signaled exactly the same way as in your existing code. You'll add the knowledge of whether or not the data has expired as another field which, combined with whether the TCS is currently null or not, will be the trigger for whether or not you actually go out and load the data again.
Ok, enough talk, it'll probably make a lot more sense if you see it.
The Code
public class CustomerService
{
// Your cache timeout (using 15mins as example, can load from config or wherever)
private static readonly TimeSpan CustomersCacheTimeout = new TimeSpan(0, 15, 0);
// A lock object used to provide thread safety
private object loadCustomersLock = new object();
private TaskCompletionSource<IEnumerable<Customer>> loadCustomersTaskCompletionSource;
private DateTime loadCustomersLastCacheTime = DateTime.MinValue;
private Task<IEnumerable<Customer>> LoadCustomersAsync()
{
lock(this.loadCustomersLock)
{
bool needToLoadCustomers = this.loadCustomersTaskCompletionSource == null
||
(this.loadCustomersTaskCompletionSource.Task.IsFaulted || this.loadCustomersTaskCompletionSource.Task.IsCanceled)
||
DateTime.Now - this.loadCustomersLastCacheTime > CustomerService.CustomersCacheTimeout;
if(needToLoadCustomers)
{
this.loadCustomersTaskCompletionSource = new TaskCompletionSource<IEnumerable<Customer>>();
try
{
// GetAsync returns Task<HttpResponseMessage>
Client.GetAsync(uri).ContinueWith(antecedent =>
{
if(antecedent.IsCanceled)
{
this.loadCustomersTaskCompletionSource.SetCanceled();
}
else if(antecedent.IsFaulted)
{
this.loadCustomersTaskCompletionSource.SetException(antecedent.Exception);
}
else
{
// Convert HttpResponseMessage to desired return type
var response = antecedent.Result;
var list = response.Content.ReadAs<IEnumerable<Customer>>();
this.loadCustomersTaskCompletionSource.SetResult(list);
// Record the last cache time
this.loadCustomersLastCacheTime = DateTime.Now;
}
});
}
catch(Exception ex)
{
this.loadCustomersTaskCompletionSource.SetException(ex);
}
}
}
return this.loadCustomersTaskCompletionSource.Task;
}
}
Scenarios where the customers aren't loaded:
If it's the first call, the TCS will be null so the TCS will be created and customers fetched.
If the previous call faulted or was canceled, a new TCS will be created and the customers fetched.
If the cache timeout has expired, a new TCS will be created and the customers fetched.
Scenarios where the customers are loading/loaded:
If the customers are in the process of loading, the existing TCS's Task will be returned and any continuations added to the task using ContinueWith will be executed once the TCS has been signaled.
If the customers are already loaded, the existing TCS's Task will be returned and any continuations added to the task using ContinueWith will be executed as soon as the scheduler sees fit.
NOTE: I used a coarse grained locking approach here and you could theoretically improve performance with a reader/writer implementation, but it would probably be a micro-optimization in your case.
I think you should change the way you call Client.GetAsync(uri). Do it roughly like this:
Lazy<Task> getAsyncLazy = new Lazy<Task>(() => Client.GetAsync(uri));
And in your LoadCustomersAsync method you write:
getAsyncLazy.Value.ContinueWith(task => ...
This will ensure that GetAsync only gets called once and that everyone interested in its result will receive the same task.
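Put together, a minimal sketch of that idea applied to this class might look like the following; it reuses the Client, uri and ReadAs<T>() pieces from the question and deliberately ignores the cache-expiry and fault-handling requirements for brevity:
private readonly Lazy<Task<IEnumerable<Customer>>> customersLazy;

public CustomerService()
{
    // Wrap the single GetAsync call in a Lazy<Task<T>> so it runs at most once.
    customersLazy = new Lazy<Task<IEnumerable<Customer>>>(() =>
        Client.GetAsync(uri).ContinueWith(task =>
            task.Result.Content.ReadAs<IEnumerable<Customer>>()));
}

private Task<IEnumerable<Customer>> LoadCustomersAsync()
{
    // Every caller observes the same task; continuations attached after
    // completion still run with the task's final state.
    return customersLazy.Value;
}
Note that, unlike the TCS-based approach above, a plain Lazy<Task<T>> caches forever, so the cache-expiry requirement would still need the timestamp check from the first answer.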

WCF Async deadlock?

Has anyone run into a situation where a WaitAny call returns a valid handle index, but the Proxy.End call blocks? Or does anyone have recommendations on how best to debug this? I've tried tracing, performance counters (to check the max percentages), and logging everywhere.
The test scenario: 2 async requests go out (there's a bit more to the full implementation); the 1st Proxy.End call returns successfully, but the subsequent one blocks. I've checked the WCF trace and don't see anything particularly interesting. Note that the service is querying an endpoint that exists in the same process as well as one on a remote machine (= 2 async requests).
As far as I can see, the call goes through on the service implementation side for both queries, but it just blocks on the subsequent End call. It seems to work with just a single call, regardless of whether the request is sent to a remote machine or to itself; so it's something to do with the multiple queries or some other factor causing the lockup.
I've tried different ConcurrencyMode and InstanceContextMode settings, but they don't seem to have any bearing on the result.
Here's a cut down version of the internal code for parsing the handle list:
ValidationResults IValidationService.EndValidate()
{
var results = new ValidationResults();
if (_asyncResults.RemainingWaitHandles == null)
{
results.ReturnCode = AsyncResultEnum.NoMoreRequests;
return results;
}
var waitArray = _asyncResults.RemainingWaitHandles.ToArray();
if (waitArray.GetLength(0) > 0)
{
int handleIndex = WaitHandle.WaitAny(waitArray, _defaultTimeOut);
if (handleIndex == WaitHandle.WaitTimeout)
{
// Timeout on signal for all handles occurred
// Close proxies and return...
}
var asyncResult = _asyncResults.Results[handleIndex];
results.Results = asyncResult.Proxy.EndServerValidateGroups(asyncResult.AsyncResult);
asyncResult.Proxy.Close();
_asyncResults.Results.RemoveAt(handleIndex);
_asyncResults.RemainingWaitHandles.RemoveAt(handleIndex);
results.ReturnCode = AsyncResultEnum.Success;
return results;
}
results.ReturnCode = AsyncResultEnum.NoMoreRequests;
return results;
}
and the code that calls this:
validateResult = validationService.EndValidateSuppression();
while (validateResult.ReturnCode == AsyncResultEnum.Success)
{
// Update progress step
//duplexContextChannel.ValidateGroupCallback(progressInfo);
validateResult = validationService.EndValidateSuppression();
}
I've commented out the callbacks on the initiating node (FYI it's actually a 3-tier setup, but the problem is isolated to this 2nd tier calling the 3rd tier; the callbacks go from the 2nd tier to the 1st tier and have been removed in this test). Thoughts?
Sticking with the solution I left in my comment: simply avoid chaining a callback to async calls that have different destinations (i.e. different proxies).

Persisted properties - asynchronously

In classic ASP.NET I'd persist data extracted from a web service in a base class property as follows:
private string m_stringData;
public string _stringData
{
    get
    {
        if (m_stringData == null)
        {
            // fetch data from my web service
            m_stringData = ws.FetchData();
        }
        return m_stringData;
    }
}
This way I could simply reference _stringData and know that I'd always get the data I was after (sometimes I'd use Session state as a store instead of a private member variable).
In Silverlight with WCF I might choose to use Isolated Storage as my persistence mechanism, but the service call can't be done like this, because a WCF service has to be called asynchronously.
How can I both invoke the service call and retrieve the response in one method?
Thanks,
Mark
In your method, invoke the service call asynchronously and register a callback that sets a flag. After you have invoked the method, enter a busy/wait loop, checking the flag periodically until it is set, indicating that the data has been returned. The callback should set the backing field for your property, and you should be able to return it as soon as you detect that the flag has been set, indicating success. You'll also need to be concerned about failure. If it's possible to get multiple calls to your method from different threads, you'll also need some locking to make your code thread-safe.
EDIT
Actually, the busy/wait loop is probably not the way to go if the web service supports BeginGetData/EndGetData semantics. I had a look at some of my code where I do something similar and I use WaitOne to simply wait on the async result and then retrieve it. If your web service doesn't support this then throw a Thread.Sleep -- say for 50-100ms -- in your wait loop to give time for other processes to execute.
Example from my code:
IAsyncResult asyncResult = null;
try
{
asyncResult = _webService.BeginGetData( searchCriteria, null, null );
if (asyncResult.AsyncWaitHandle.WaitOne( _timeOut, false ))
{
result = _webService.EndGetData( asyncResult );
}
}
catch (WebException e)
{
// ...log the error, clean up...
}
Thanks for your help, tvanfosson. I followed your code and also found a somewhat similar solution that meets my needs exactly, using a lambda expression:
private string m_stringData;
public string _stringData
{
    get
    {
        // if we don't have the string data yet, fetch it from the WCF service
        if (m_stringData == null)
        {
            StringServiceClient client = new StringServiceClient();
            client.GetStringCompleted +=
                (sender, e) =>
                {
                    m_stringData = e.Result;
                };
            client.GetStringAsync();
        }
        return m_stringData;
    }
}
EDIT
Oops... actually this doesn't work either :-( (the getter returns null the first time through, because GetStringAsync only completes after the getter has already returned).
I ended up making the calls asynchronously and altering my programming logic to use the MVVM pattern and more binding.
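For completeness, a rough sketch of that MVVM direction, reusing the StringServiceClient proxy from the snippet above (error handling omitted): the view binds to StringData, which starts out null and updates via PropertyChanged when the completed callback delivers the result.
using System.ComponentModel;

public class StringViewModel : INotifyPropertyChanged
{
    private string m_stringData;

    public StringViewModel()
    {
        var client = new StringServiceClient();
        client.GetStringCompleted += (sender, e) =>
        {
            StringData = e.Result;   // setter raises PropertyChanged; the binding refreshes
        };
        client.GetStringAsync();
    }

    public string StringData
    {
        get { return m_stringData; }
        private set
        {
            m_stringData = value;
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("StringData"));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}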