Lock-free thread safety in console apps - asp.net-core

To ensure thread safety, I'm trying to find a generic cross-platform approach to either:
execute all delegates asynchronously on the main thread, or
execute a delegate on a background thread and pass the result to the main one.
Considering that console apps do not have a synchronization context, I create a new context when the app is loading and then use one of the following methods.
1. Set and restore a custom SC, as described in the article "Await, SynchronizationContext, and Console Apps" by Stephen Toub.
2. Marshal all delegates to the main thread using context.Post, as described in the article "ExecutionContext vs SynchronizationContext" by Stephen Toub.
3. Use a background thread with a producer-consumer collection, as described in "Basic synchronization" by Joe Albahari.
Question
Ideas #1 and #2 set the context correctly only if it's done synchronously. If they're called from inside Parallel.For(0, 100), then the synchronization context starts using all the threads available in the thread pool. Idea #3 always performs tasks on a dedicated thread as expected, but unfortunately not on the main thread. Combining idea #3 with IOCompletionPortTaskScheduler, I can achieve asynchrony and single-threading; unfortunately, this approach only works on Windows. Is there a way to combine these solutions to achieve the requirements at the top of the post, including cross-platform support?
Scheduler
public class SomeScheduler
{
    public Task<T> RunInTheMainThread<T>(Func<T> action, SynchronizationContext sc)
    {
        var res = new TaskCompletionSource<T>();
        SynchronizationContext.SetSynchronizationContext(sc);            // Idea #1
        sc.Post(o => res.SetResult(action()), null);                     // Idea #2
        ThreadPool.QueueUserWorkItem(state => res.SetResult(action()));  // Idea #3
        return res.Task;
    }
}
Main
var scheduler = new SomeScheduler();
var sc = SynchronizationContext.Current ?? new SynchronizationContext();
new Thread(async () =>
{
    var res = await scheduler.RunInTheMainThread(() => 5, sc);
}).Start();
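For reference, the usual cross-platform way to get "everything runs on the main thread" semantics is to pump a work queue on the main thread, in the spirit of Stephen Toub's AsyncPump from the article cited above. A minimal sketch, with illustrative class and method names (not from the question):

using System.Collections.Concurrent;
using System.Threading;

// A SynchronizationContext whose Post targets a queue that the
// main thread drains, so every posted delegate runs on that thread.
public sealed class MainThreadSynchronizationContext : SynchronizationContext
{
    private readonly BlockingCollection<(SendOrPostCallback Callback, object State)> _queue
        = new BlockingCollection<(SendOrPostCallback, object)>();

    public override void Post(SendOrPostCallback d, object state)
        => _queue.Add((d, state));

    // Call this from the main thread; it blocks and executes posted work.
    public void RunOnCurrentThread()
    {
        SetSynchronizationContext(this);
        foreach (var (callback, state) in _queue.GetConsumingEnumerable())
            callback(state);
    }

    public void Complete() => _queue.CompleteAdding();
}

The main thread calls RunOnCurrentThread() as its message loop; any thread can Post work to the context, and Complete() ends the loop once no more work will arrive.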

You can use lock/Monitor.Pulse/Monitor.Wait and a Queue.
I know the title says lock-free, but my guess is that you want the UI updates to occur outside the locks, or that worker threads should be able to continue working without having to wait for the main thread to update the UI (at least this is how I understand the requirement).
Here the locks are never held while items are being produced or while the UI is being updated. They are held only for the short time it takes to enqueue/dequeue an item.
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using static System.Threading.Thread;

namespace ConsoleApp1
{
    internal static class Program
    {
        private class WorkItem
        {
            public string SomeData { get; init; }
        }

        private static readonly Queue<WorkItem> s_workQueue = new Queue<WorkItem>();

        private static void Worker()
        {
            var random = new Random();
            // Simulate some work
            Sleep(random.Next(1000));
            // Produce the work item outside the lock
            var workItem = new WorkItem
            {
                SomeData = $"data produced from thread {CurrentThread.ManagedThreadId}"
            };
            // Acquire the lock only for the short time needed to add the work item to the queue
            lock (s_workQueue)
            {
                s_workQueue.Enqueue(workItem);
                // Notify the main thread that a new item was added to the queue, causing it to wake up
                Monitor.Pulse(s_workQueue);
            }
            // The work item is now queued; no need to wait for the main thread to finish updating the UI
            // Continue work here
        }

        private static WorkItem GetWorkItem()
        {
            // Acquire the lock only for the duration needed to get an item from the queue
            lock (s_workQueue)
            {
                WorkItem result;
                // Try to get an item from the queue
                while (!s_workQueue.TryDequeue(out result))
                {
                    // The lock is released during the Wait call
                    Monitor.Wait(s_workQueue);
                    // The lock is acquired again after the Wait call
                }
                return result;
            }
        }

        private static void Main(string[] args)
        {
            const int totalTasks = 10;
            for (var i = 0; i < totalTasks; i++)
            {
                _ = Task.Run(Worker);
            }
            var remainingTasks = totalTasks;
            // Main loop (similar to a message loop)
            while (remainingTasks > 0)
            {
                var item = GetWorkItem();
                // Update the UI
                Console.WriteLine("Got {0} and updated UI on thread {1}.", item.SomeData, CurrentThread.ManagedThreadId);
                remainingTasks--;
            }
            Console.WriteLine("Done");
        }
    }
}
Update
Since you don't want the main thread to Wait for an event, you can change the code as follows:
private static WorkItem? GetWorkItem()
{
    // Acquire the lock only for the duration needed to get an item from the queue
    lock (s_workQueue)
    {
        // Try to get an item from the queue; result is null when the queue is empty
        s_workQueue.TryDequeue(out var result);
        return result;
    }
}
private static void Main(string[] args)
{
    const int totalTasks = 10;
    for (var i = 0; i < totalTasks; i++)
    {
        _ = Task.Run(Worker);
    }
    var remainingTasks = totalTasks;
    // Main loop (similar to a message loop)
    while (remainingTasks > 0)
    {
        var item = GetWorkItem();
        if (item != null)
        {
            // Update the UI
            Console.WriteLine("Got {0} and updated UI on thread {1}.", item.SomeData, CurrentThread.ManagedThreadId);
            remainingTasks--;
        }
        else
        {
            // The queue is empty, so do some other work here, then try again after that work is done
            // Sleep to simulate some work being done by the main thread
            Thread.Sleep(100);
        }
    }
    Console.WriteLine("Done");
}
The problem with the above solution is that the main thread has to do only part of the work it is supposed to do, then call GetWorkItem to check whether the queue has something, before resuming whatever it was doing. This is doable if you can divide that work into small pieces that don't take too long.
I don't know if my answer here is what you want. What do you imagine the main thread would be doing when there are no work items in the queue?
If you think it should be doing nothing (i.e. waiting), then the Wait solution should be fine.
If you think it should be doing something, then maybe the work it should be doing can be queued as a work item as well.
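As an aside, the hand-rolled Wait/Pulse queue above is essentially what BlockingCollection<T> provides out of the box; a minimal sketch reusing the WorkItem type from this answer:

using System.Collections.Concurrent;

internal static class WorkQueue
{
    private static readonly BlockingCollection<WorkItem> s_items = new BlockingCollection<WorkItem>();

    // Worker side: no explicit lock or Pulse needed
    public static void Add(WorkItem item) => s_items.Add(item);

    // Blocking variant: waits until an item is available (like GetWorkItem with Monitor.Wait)
    public static WorkItem Take() => s_items.Take();

    // Non-blocking variant, matching the updated GetWorkItem
    public static WorkItem TryTake() => s_items.TryTake(out var item) ? item : null;
}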

Related

Memory leaks using NSubstitute interface in a loop?

If I run the following program I see the free memory rapidly decrease to zero in Windows Task manager. Is it forbidden to use NSubstitute in loops?
using System;
using NSubstitute;
using System.Threading;

namespace NSubstituteMemoryLeaks
{
    class Program
    {
        static void Main(string[] args)
        {
            IConfig config = Substitute.For<IConfig>();
            config.Value.Returns(0);
            Thread th = new Thread(() =>
            {
                while (true)
                {
                    int val = config.Value;
                }
            });
            th.IsBackground = true;
            th.Start();
            Console.WriteLine("Press ENTER to stop...");
            Console.ReadLine();
        }
    }

    public interface IConfig
    {
        int Value { get; set; }
    }
}
Mocks generate objects. The problem is that the program creates them in a short period of time and doesn't give the garbage collector enough time to collect them.
This is not specific to NSubstitute; you can see the same behavior in Moq too.
You could work around it by explicitly calling GC.Collect():
Task.Run(() =>
{
    while (true)
    {
        int val = config.Value;
        GC.Collect();
    }
});
There are pros and cons. You might want to read "When to call GC.Collect()" before implementing it.
NSubstitute records all calls made to a substitute, so if you call a substitute in an infinite loop it will eventually exhaust the available memory. (If you check config.ReceivedCalls() after 10,000 loop iterations, you should see 10,000 entries in that list.)
If you call config.ClearReceivedCalls() periodically in the loop, this might help.
If you have a bounded loop this should not be an issue; the memory will be reclaimed once the substitute is no longer in use and the GC cleans it up.
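A minimal sketch of that periodic clearing, using the IConfig substitute from the question (the 10,000-iteration threshold is arbitrary):

IConfig config = Substitute.For<IConfig>();
config.Value.Returns(0);

int iterations = 0;
while (true)
{
    int val = config.Value;
    // Drop the recorded call history periodically so it cannot grow without bound
    if (++iterations % 10_000 == 0)
    {
        config.ClearReceivedCalls();
    }
}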

Return thread to ThreadPool on lock

When I take a lock on a ThreadPool thread like this, the thread is blocked:
private static object _testServerLock = new object();
private static TestServer _testServer = null;

public TestServer GetServer()
{
    lock (_testServerLock)
    {
        if (_testServer == null)
        {
            _testServer = new TestServer(); // does some async stuff internally
        }
    }
    return _testServer;
}
If I have more concurrent calls to this than there are threads in the ThreadPool, all of them will end up waiting for the lock, while async code happening elsewhere can't continue because it is waiting for a free thread in the ThreadPool.
So I don't want to block the thread; I need to return it to the ThreadPool while I am waiting.
Is there some other way to lock which returns the waiting thread to the ThreadPool?
Whatever has to be done inside the lock should be moved into a Task, which is started before the tests and finishes when it has created its resource.
Whenever a test wants the resource created by that task, it can await the creator task before accessing the resource. That way all accesses to the resource happen in tasks and can't block all the threads of the pool.
Something like:
private static object _testServerLock = new object();
private static TestServer _testServer = null;
private static Task _testTask = null;
private async Task<TestServer> CreateTestServerAsync()
{
...
}
// Constructor of the fixture
public TestFixture()
{
// The lock here may be ok, because it's before all the async stuff
// and it doesn't wait for something inside
lock (_testServerLock)
{
if (_testTask == null)
{
_testTask = Task.Run(async () => {
// it's better to expose the async nature of the call
_testServer = await CreateTestServerAsync();
});
// or just, whatever works
//_testTask = Task.Run(() => {
// _testServer = new TestServer();
//});
}
}
}
public async Task<TestServer> GetServerAsync()
{
await _testTask;
return _testServer;
}
Update:
You can remove the lock by initializing the static member directly.
private static TestServer _testServer = null;
private static Task _testTask = Task.Run(async () =>
{
    _testServer = await CreateTestServerAsync();
});

private static async Task<TestServer> CreateTestServerAsync()
{
    ...
}

public TestFixture()
{
}

public async Task<TestServer> GetServerAsync()
{
    await _testTask;
    return _testServer;
}
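To answer the literal question (a lock that releases the waiting thread back to the pool): SemaphoreSlim supports asynchronous waiting via WaitAsync, so a sketch along these lines avoids blocking pool threads entirely (reusing the TestServer type and CreateTestServerAsync from above):

private static readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);
private static TestServer _testServer;

public async Task<TestServer> GetServerAsync()
{
    // WaitAsync queues the continuation instead of blocking the thread,
    // so the ThreadPool thread is free while we wait at the gate
    await _gate.WaitAsync();
    try
    {
        if (_testServer == null)
        {
            _testServer = await CreateTestServerAsync();
        }
        return _testServer;
    }
    finally
    {
        _gate.Release();
    }
}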
With xUnit ~1.7+, the main thing you can do is make your Test Method return Task and then use async/await, which limits your hard-blocking/occupation of threads.
xUnit 2.0+ has parallel execution and a mechanism for controlling access to state shared among tests. Note however that this fundamentally operates by running one test in the Test Class at a time and giving the Class Fixture to one test at a time (which is equivalent to what normally happens: only one Test Method per class runs at a time). (If you use a Collection Fixture, effectively all the Test Classes in the collection become a single Test Class.)
Finally, xUnit 2 offers switches for controlling whether or not:
Assemblies run in parallel with other [Assemblies]
Test Collections/Test Classes run in parallel with others
Both of the previous
You should be able to manage your issue by not hiding the asyncness as you've done, and instead either exposing it to the Test Method or doing build-up/teardown via IAsyncLifetime, as sketched below.
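A minimal sketch of the IAsyncLifetime approach (TestServerFixture and the test class are illustrative names; TestServer is the type from the question):

using System.Threading.Tasks;
using Xunit;

public class TestServerFixture : IAsyncLifetime
{
    public TestServer Server { get; private set; }

    // Runs asynchronously before the fixture is handed to any test,
    // so no thread is blocked while the server spins up
    public async Task InitializeAsync()
    {
        Server = await CreateTestServerAsync();
    }

    public Task DisposeAsync() => Task.CompletedTask;

    private static Task<TestServer> CreateTestServerAsync()
        => Task.FromResult(new TestServer()); // stand-in for the real async creation
}

public class MyTests : IClassFixture<TestServerFixture>
{
    private readonly TestServerFixture _fixture;

    public MyTests(TestServerFixture fixture) => _fixture = fixture;

    [Fact]
    public async Task CanUseServer()
    {
        var server = _fixture.Server;
        // ... exercise the server here
        await Task.CompletedTask;
    }
}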

When calling a WCF RIA Service method using Invoke, does the return type affect when the Completed callback is executed?

I inherited a Silverlight 5 application. On the server side, it has a DomainContext (service) with a method marked as
[Invoke]
public void DoIt(int value)
{
    // do stuff for 10 seconds here
}
On the client side, it has a ViewModel method containing this:
var q = Context.DoIt(0);
var x = 1; var y = 2;
q.Completed += (a, b) => DoMore(x, y);
My two questions are:
1) Has DoIt already been activated by the time I attach q.Completed?
2) Does the return type (void) enter into the timing at all?
Now, I know there's another way to call DoIt, namely:
var q = Context.DoIt(0, myCallback);
This leads me to think the two ways of making the call are mutually exclusive.
Although DoIt() is executed on a remote computer, it is best to attach the Completed event handler immediately. Otherwise, when the operation completes, you might miss the callback.
You are correct. The two ways of calling DoIt are mutually exclusive.
If you have complicated logic, you may want to consider using the Bcl Async library. See this blog post.
Using async, your code will look like this:
// Note: you will need the OperationExtensions helper
public async void CallDoItAndDosomething()
{
    this.BusyIndicator.IsBusy = true;
    await context.DoIt(0).AsTask();
    this.BusyIndicator.IsBusy = false;
}
public static class OperationExtensions
{
    public static Task<T> AsTask<T>(this T operation)
        where T : OperationBase
    {
        TaskCompletionSource<T> tcs =
            new TaskCompletionSource<T>(operation.UserState);
        operation.Completed += (sender, e) =>
        {
            if (operation.HasError && !operation.IsErrorHandled)
            {
                tcs.TrySetException(operation.Error);
                operation.MarkErrorAsHandled();
            }
            else if (operation.IsCanceled)
            {
                tcs.TrySetCanceled();
            }
            else
            {
                tcs.TrySetResult(operation);
            }
        };
        return tcs.Task;
    }
}

How to Schedule a task for future execution in Task Parallel Library

Is there a way to schedule a Task for execution in the future using the Task Parallel Library?
I realize I could do this with pre-.NET 4 methods such as System.Threading.Timer; however, if there is a TPL way to do this, I'd rather stay within the design of the framework. I am not able to find one, though.
Thank you.
This feature was introduced in the Async CTP and has now been rolled into .NET 4.5. Doing it as follows does not block the thread; it returns a Task that will execute in the future.
Task<MyType> new_task = Task.Delay(TimeSpan.FromMinutes(5))
    .ContinueWith<MyType>( /*...*/ );
(If you are using the old Async CTP releases, use the static class TaskEx instead of Task.)
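With async/await available, the same thing reads more naturally without ContinueWith; a minimal sketch (MyType is carried over from above, and DoWork is a placeholder for whatever the scheduled task should do):

public static async Task<MyType> RunInFiveMinutesAsync()
{
    // Task.Delay does not block a thread; the continuation is scheduled when the delay elapses
    await Task.Delay(TimeSpan.FromMinutes(5));
    return DoWork();
}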
You can write your own RunDelayed function, which takes a delay and a function to run after the delay completes.
public static Task<T> RunDelayed<T>(int millisecondsDelay, Func<T> func)
{
    if (func == null)
    {
        throw new ArgumentNullException("func");
    }
    if (millisecondsDelay < 0)
    {
        throw new ArgumentOutOfRangeException("millisecondsDelay");
    }
    var taskCompletionSource = new TaskCompletionSource<T>();
    var timer = new Timer(self =>
    {
        ((Timer)self).Dispose();
        try
        {
            var result = func();
            taskCompletionSource.SetResult(result);
        }
        catch (Exception exception)
        {
            taskCompletionSource.SetException(exception);
        }
    });
    // Fire once after the delay; Timeout.Infinite suppresses periodic signaling
    timer.Change(millisecondsDelay, Timeout.Infinite);
    return taskCompletionSource.Task;
}
Use it like this:
public void UseRunDelayed()
{
    var task = RunDelayed(500, () => "Hello");
    task.ContinueWith(t => Console.WriteLine(t.Result));
}
Set a one-shot timer that, when fired, starts the task. For example, the code below will wait five minutes before starting the task.
TimeSpan TimeToWait = TimeSpan.FromMinutes(5);
Timer t = new Timer((s) =>
{
    // start the task here
}, null, TimeToWait, TimeSpan.FromMilliseconds(-1));
The TimeSpan.FromMilliseconds(-1) makes the timer a one-shot rather than a periodic timer. Note that you need to keep a reference to the timer (as the code above does); otherwise it can be garbage collected and the callback will never fire.

Pattern for limiting number of simultaneous asynchronous calls

I need to retrieve multiple objects from an external system. The external system supports multiple simultaneous requests (i.e. threads), but it is possible to flood it, so I want to retrieve multiple objects asynchronously while throttling the number of simultaneous async requests. That is, I need to retrieve 100 items but don't want to be retrieving more than 25 of them at once. When each request of the 25 completes, I want to trigger another retrieval, and once they are all complete I want to return all of the results in the order they were requested (i.e. there is no point returning the results until the entire call has completed). Are there any recommended patterns for this sort of thing?
Would something like this be appropriate (pseudocode, obviously)?
private List<externalSystemObjects> returnedObjects = new List<externalSystemObjects>();

public List<externalSystemObjects> GetObjects(List<string> ids)
{
    int callCount = 0;
    int maxCallCount = 25;
    WaitHandle[] handles;
    foreach (id in itemIds to get)
    {
        if (callCount < maxCallCount)
        {
            WaitHandle handle = executeCall(id, callback);
            addWaitHandleToWaitArray(handle);
        }
        else
        {
            int returnedCallId = WaitHandle.WaitAny(handles);
            removeReturnedCallFromWaitHandles(handles);
        }
    }
    WaitHandle.WaitAll(handles);
    return returnedObjects;
}

public void callback(object result)
{
    returnedObjects.Add(result);
}
Consider the list of items to process as a queue from which 25 processing threads dequeue tasks, process a task, add the result, and repeat until the queue is empty:
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;

class Program
{
    class State
    {
        public EventWaitHandle Done;
        public int runningThreads;
        public List<string> itemsToProcess;
        public List<string> itemsResponses;
    }

    static void Main(string[] args)
    {
        State state = new State();
        state.itemsResponses = new List<string>(1000);
        state.itemsToProcess = new List<string>(1000);
        for (int i = 0; i < 1000; ++i)
        {
            state.itemsToProcess.Add(String.Format("Request {0}", i));
        }
        state.runningThreads = 25;
        state.Done = new AutoResetEvent(false);
        for (int i = 0; i < 25; ++i)
        {
            Thread t = new Thread(new ParameterizedThreadStart(Processing));
            t.Start(state);
        }
        state.Done.WaitOne();
        foreach (string s in state.itemsResponses)
        {
            Console.WriteLine("{0}", s);
        }
    }

    private static void Processing(object param)
    {
        Debug.Assert(param is State);
        State state = param as State;
        try
        {
            do
            {
                string item = null;
                lock (state.itemsToProcess)
                {
                    if (state.itemsToProcess.Count > 0)
                    {
                        item = state.itemsToProcess[0];
                        state.itemsToProcess.RemoveAt(0);
                    }
                }
                if (null == item)
                {
                    break;
                }
                // Simulate some processing
                Thread.Sleep(10);
                string response = String.Format("Response for {0} on thread: {1}", item, Thread.CurrentThread.ManagedThreadId);
                lock (state.itemsResponses)
                {
                    state.itemsResponses.Add(response);
                }
            } while (true);
        }
        catch (Exception)
        {
            // ...
        }
        finally
        {
            int threadsLeft = Interlocked.Decrement(ref state.runningThreads);
            if (0 == threadsLeft)
            {
                state.Done.Set();
            }
        }
    }
}
You can do the same using asynchronous callbacks; there is no need to use dedicated threads.
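On modern .NET the asynchronous version is usually written with SemaphoreSlim as the throttle and Task.WhenAll to collect results in request order; a minimal sketch (FetchItemAsync stands in for the external-system call):

using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public static async Task<string[]> GetObjectsAsync(IReadOnlyList<string> ids)
{
    using var throttle = new SemaphoreSlim(25); // at most 25 requests in flight

    var tasks = ids.Select(async id =>
    {
        await throttle.WaitAsync();
        try
        {
            return await FetchItemAsync(id);
        }
        finally
        {
            throttle.Release();
        }
    });

    // WhenAll returns the results in the order of the input sequence,
    // so they come back in the order they were requested
    return await Task.WhenAll(tasks);
}

private static Task<string> FetchItemAsync(string id) =>
    Task.FromResult("result for " + id); // stub; replace with the real external call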
Having a queue-like structure to hold the pending requests is a pretty common pattern. In web apps, where there may be several layers of processing, you see a "funnel" style approach, with the early parts of the processing chain having larger queues. There may also be some kind of prioritization applied to queues, with higher-priority requests being shuffled to the top of the queue.
One important thing to consider in your solution is that if the request arrival rate is higher than your processing rate (this might be due to a denial-of-service attack, or just because some part of the processing is unusually slow today), then your queues will grow without bound. You need to have some policy, such as refusing new requests immediately once the queue depth exceeds some value.
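A bounded BlockingCollection<T> is one way to implement such a policy; a minimal sketch (the capacity of 1000 is arbitrary):

using System.Collections.Concurrent;

// Reject new requests once 1000 are already pending
var pending = new BlockingCollection<string>(boundedCapacity: 1000);

bool TryEnqueue(string request)
{
    // TryAdd returns false immediately when the collection is full,
    // instead of blocking the producer
    return pending.TryAdd(request);
}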