I invoke BeginInvoke on 10 delegates in a loop. Instead of using 10 threads, the thread pool uses only two or three threads to execute the delegates. Can somebody please explain why? Each delegate takes only a few milliseconds to execute (less than 10 ms).
When I logged the thread pool parameters before calling BeginInvoke, they showed Min Threads = 2, Max Threads = 500, Available Threads = 498.
I ran into the problem when invoking the following managed C++ (C++/CLI) code:
void EventHelper::FireAndForget(Delegate^ d, ... array<Object^>^ args)
{
    try
    {
        if (d != nullptr)
        {
            array<Delegate^>^ delegates = d->GetInvocationList();
            String^ message1 = String::Format("No of items in the event {0}", delegates->Length);
            Log(LogMessageType::Information, "EventHelper.FireAndForget", message1);

            // Iterate through the list of delegate methods.
            for each (Delegate^ delegateMethod in delegates)
            {
                try
                {
                    int minworkerThreads, maxworkerThreads, availworkerThreads, completionPortThreads;
                    ThreadPool::GetMinThreads(minworkerThreads, completionPortThreads);
                    ThreadPool::GetMaxThreads(maxworkerThreads, completionPortThreads);
                    ThreadPool::GetAvailableThreads(availworkerThreads, completionPortThreads);
                    String^ message = String::Format("FireAndForget Method {0}#{1} MinThreads - {2}, MaxThreads - {3} AvailableThreads - {4}",
                        delegateMethod->Method->DeclaringType, delegateMethod->Method->Name,
                        minworkerThreads, maxworkerThreads, availworkerThreads);
                    Log(LogMessageType::Information, "EventHelper.FireAndForget", message);

                    DynamicInvokeAsyncProc^ evtDelegate = gcnew DynamicInvokeAsyncProc(this, &EventHelper::OnTriggerEvent);
                    evtDelegate->BeginInvoke(delegateMethod, args, _dynamicAsyncResult, nullptr); // FIX_DEC_09 Handle Leak
                }
                catch (Exception^ ex)
                {
                    String^ message = String::Format("{0} : DynamicInvokeAsync of '{1}.{2}' failed", _id,
                        delegateMethod->Method->DeclaringType, delegateMethod->Method->Name);
                    Log(LogMessageType::Information, "EventHelper.FireAndForget", message);
                }
            }
        }
    }
    catch (Exception^ e)
    {
        Log(LogMessageType::Error, "EventHelper.FireAndForget", e->ToString());
    }
}
This is the method the delegate invokes:
void EventHelper::OnTriggerEvent(Delegate^ delegateMethod, array<Object^>^ args)
{
    try
    {
        int minworkerThreads, maxworkerThreads, availworkerThreads, completionPortThreads;
        ThreadPool::GetMinThreads(minworkerThreads, completionPortThreads);
        ThreadPool::GetMaxThreads(maxworkerThreads, completionPortThreads);
        ThreadPool::GetAvailableThreads(availworkerThreads, completionPortThreads);
        String^ message = String::Format("OnTriggerEvent Method {0}#{1} MinThreads - {2}, MaxThreads - {3} AvailableThreads - {4}",
            delegateMethod->Method->DeclaringType, delegateMethod->Method->Name,
            minworkerThreads, maxworkerThreads, availworkerThreads);
        Log(LogMessageType::Information, "EventHelper::OnTriggerEvent", message);

        message = String::Format("Before Invoke Method {0}#{1}",
            delegateMethod->Method->DeclaringType, delegateMethod->Method->Name);
        Log(LogMessageType::Information, "EventHelper::OnTriggerEvent", message);

        // Dynamically invokes (late-bound) the method represented by the current delegate.
        delegateMethod->DynamicInvoke(args);

        message = String::Format("After Invoke Method {0}#{1}",
            delegateMethod->Method->DeclaringType, delegateMethod->Method->Name);
        Log(LogMessageType::Information, "EventHelper::OnTriggerEvent", message);
    }
    catch (Exception^ ex)
    {
        Log(LogMessageType::Error, "EventHelper.OnTriggerEvent", ex->ToString());
    }
}
You wouldn't want 10 threads to be created for this. The optimal situation is to have as many active threads as you have cores. You'll find that ThreadPool.MinThreads equals the number of logical CPUs on your PC.
Additional threads will be created, but the ThreadPool delays that on purpose. The algorithm was improved in .NET 4; see this page. A quick look at the picture at the bottom will help you understand the principle.
Extra threads are only helpful to compensate for blocked threads, but this is difficult to get exactly right.
The thread pool deliberately waits a little while before starting new threads - if the delegates execute quickly anyway (which it sounds like they do), it's more efficient to run them on a few threads than to spin up new ones.
From the docs for ThreadPool:
When all thread pool threads have been assigned to tasks, the thread pool does not immediately begin creating new idle threads. To avoid unnecessarily allocating stack space for threads, it creates new idle threads at intervals. The interval is currently half a second, although it could change in future versions of the .NET Framework.
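You can watch that injection delay for yourself. Here is a minimal console sketch (my own illustration, not the original code; the work items block on purpose so the pool is forced to grow):

using System;
using System.Diagnostics;
using System.Threading;

class Program
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();

        // Queue 10 work items that each block for a second. The pool starts
        // with roughly one thread per core and injects extra threads only at
        // intervals, so the later items start noticeably later.
        for (int i = 0; i < 10; i++)
        {
            int id = i; // copy the loop variable for the closure
            ThreadPool.QueueUserWorkItem(_ =>
            {
                Console.WriteLine("Item {0} started at {1} ms on thread {2}",
                    id, sw.ElapsedMilliseconds, Thread.CurrentThread.ManagedThreadId);
                Thread.Sleep(1000); // simulate a blocking delegate
            });
        }

        Thread.Sleep(15000); // keep the process alive while the pool drains
    }
}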
I'm following this tutorial to create a hosted service. The program runs as expected. However, I want to process the queued items concurrently.
In my app, there are 4 clients, each of these clients can process 4 items at a time. So at any given time, 16 items should be processed in parallel.
So based on these requirements, I've modified the code a bit:
In the MonitorLoop class:
private int count = 0;
private async ValueTask MonitorAsync()
{
    while (!_cancellationToken.IsCancellationRequested)
    {
        await _taskQueue.QueueAsync(BuildWorkItem);
        Interlocked.Increment(ref count);
        Console.WriteLine($"Count: {count}");
    }
}
and in the same class:
if (delayLoop == 3)
{
    _logger.LogInformation("Queued Background Task {Guid} is complete.", guid);
    Interlocked.Decrement(ref count);
}
This shows that if I set the "Capacity" to 4, the count never rises above 5.
Basically, if the queue is full, it will wait until there's room for one more.
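That waiting behavior matches a bounded channel. Here's a minimal sketch of what I believe the tutorial's queue wraps (the capacity of 4 and BoundedChannelFullMode.Wait are assumptions based on the tutorial; written as a top-level program):

using System;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

// A bounded channel with capacity 4: WriteAsync waits when the channel
// is full and resumes as soon as a reader dequeues an item.
var queue = Channel.CreateBounded<Func<CancellationToken, ValueTask>>(
    new BoundedChannelOptions(4) { FullMode = BoundedChannelFullMode.Wait });

// A fifth pending write would wait here until a consumer makes room.
await queue.Writer.WriteAsync(async token => await Task.Delay(1000, token));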
The problem is that the items are processed one at a time.
Here's the code for the BackgroundProcessing method on the QueuedHostedService class:
private async Task BackgroundProcessing(CancellationToken stoppingToken)
{
    while (!stoppingToken.IsCancellationRequested)
    {
        var workItem = await TaskQueue.DequeueAsync(stoppingToken);

        try
        {
            // Instead of getting a single item from the queue, somehow, here
            // we should be able to process them in parallel for 4 clients,
            // with a limit on the maximum items each client can process.
            await workItem(stoppingToken);
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "Error occurred executing {WorkItem}.", nameof(workItem));
        }
    }
}
I want to process them in parallel. I'm not sure if using Channel as the queue in the system is the best solution. Maybe I should have a ConcurrentQueue instead. But again, I'm not sure how to achieve a robust implementation that can have 4 clients with 4 threads each.
If you want four processors, then you can refactor the code to use four instances of your main loop, and use Task.WhenAll to (asynchronously) wait for all of them to complete:
private async Task BackgroundProcessing(CancellationToken stoppingToken)
{
    var task1 = ProcessAsync(stoppingToken);
    var task2 = ProcessAsync(stoppingToken);
    var task3 = ProcessAsync(stoppingToken);
    var task4 = ProcessAsync(stoppingToken);
    await Task.WhenAll(task1, task2, task3, task4);

    async Task ProcessAsync(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            var workItem = await TaskQueue.DequeueAsync(token);
            try
            {
                await workItem(token);
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, "Error occurred executing {WorkItem}.", nameof(workItem));
            }
        }
    }
}
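If you want to honor the "4 clients x 4 items" requirement literally, the same idea generalizes to 16 consumers. A sketch assuming the same class members (Enumerable.Range needs using System.Linq):

private async Task BackgroundProcessing(CancellationToken stoppingToken)
{
    // 4 clients x 4 items each = 16 concurrent consumers of the shared queue.
    var consumers = Enumerable.Range(0, 16)
                              .Select(_ => ProcessAsync(stoppingToken))
                              .ToArray();
    await Task.WhenAll(consumers);

    async Task ProcessAsync(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            var workItem = await TaskQueue.DequeueAsync(token);
            try
            {
                await workItem(token);
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, "Error occurred executing {WorkItem}.", nameof(workItem));
            }
        }
    }
}

Note this gives 16 unpartitioned consumers; enforcing a strict limit of 4 per client would need a queue (or a SemaphoreSlim) per client.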
I'm not sure how to achieve a robust implementation
If you want a robust implementation, then you can't use that tutorial, sorry. The primary problem with that kind of background work is that it will be lost on any app restart. And app restarts are normal: the server can lose power or crash, OS or runtime patches can be installed, IIS will recycle your app periodically, and whenever you deploy your code, the app will restart. And whenever any of these things happen, all in-memory queues like channels will lose all their work.
A production-quality implementation requires a durable queue at the very least. I also recommend a separate background processor. I have a blog series on the subject that may help you get started.
Is there a good way to have channels ignore offers once closed without throwing an exception?
Currently, it seems like only try/catch would work, as isClosedForSend isn't atomic.
Alternatively, is there a problem if I just never close a channel at all?
For my specific use case, I'm using channels as an alternative to Android livedata (as I don't need any of the benefits beyond sending values from any thread and listening from the main thread). In that case, I could listen to the channel through a producer that only sends values when I want to, and simply ignore all other inputs.
Ideally, I'd have a solution where the ReceiveChannel can still finish listening, but where SendChannel will never crash when offered a new value.
Channels throw this exception by design, as a means of correct communication.
If you absolutely must have something like this, you can use an extension function of this sort:
private suspend fun <E> Channel<E>.sendOrNothing(e: E) {
    try {
        this.send(e)
    } catch (closedException: ClosedSendChannelException) {
        println("It's fine")
    }
}
You can test it with the following piece of code:
val channel = Channel<Int>(capacity = 3)

launch {
    try {
        for (i in 1..10) {
            channel.sendOrNothing(i)
            delay(50)

            if (i == 5) {
                channel.close()
            }
        }
        println("Done")
    } catch (e: Exception) {
        e.printStackTrace()
    } finally {
        println("Finally")
    }
}

launch {
    for (c in channel) {
        println(c)
        delay(300)
    }
}
As you'll notice, the producer starts printing "It's fine" once the channel is closed, but the consumer can still read the first 5 values.
Regarding your second question: it depends.
Channels don't have such a big overhead, and neither do suspended coroutines. But a leak is a leak, you know.
I ended up posting an issue to the repo, and the solution was to use BroadcastChannel. You can create a new ReceiveChannel through openSubscription, where closing it will not close the SendChannel.
This more accurately reflects RxJava's PublishSubject.
Suppose I have a libusb program that just uses the hotplug API. You register a callback and then apparently have to call libusb_handle_events() in a loop which then calls your hotplug callback.
int LIBUSB_CALL hotplugCallback(libusb_context* ctx,
                                libusb_device* device,
                                libusb_hotplug_event event,
                                void* user_data)
{
    cout << "Device plugged in or unplugged";
    return 0; // returning non-zero would deregister the callback
}

libusb_hotplug_callback_handle hotplugCallbackHandle;

int main()
{
    libusb_init(nullptr);
    libusb_hotplug_register_callback(nullptr,
        static_cast<libusb_hotplug_event>(LIBUSB_HOTPLUG_EVENT_DEVICE_ARRIVED | LIBUSB_HOTPLUG_EVENT_DEVICE_LEFT),
        LIBUSB_HOTPLUG_NO_FLAGS,
        LIBUSB_HOTPLUG_MATCH_ANY,
        LIBUSB_HOTPLUG_MATCH_ANY,
        LIBUSB_HOTPLUG_MATCH_ANY,
        &hotplugCallback,
        nullptr,
        &hotplugCallbackHandle);

    for (;;)
    {
        if (libusb_handle_events_completed(nullptr, nullptr) != LIBUSB_SUCCESS)
            return 1;
    }

    return 0;
}
The question is: without timeout hacks, how can I exit this event loop cleanly? I can't find any function that forces libusb_handle_events() (or libusb_handle_events_completed()) to return; in theory they could simply never return.
Sorry if this is late.
The question could have been phrased better, but I'm assuming (from your comment updates) that your actual program resembles something a little closer to this:
int LIBUSB_CALL hotplugCallback(libusb_context *ctx,
                                libusb_device *device,
                                libusb_hotplug_event event,
                                void *user_data) {
    cout << "Device plugged in or unplugged";
    return 0;
}

void SomeClass::someFunction() {
    libusb_init(nullptr);
    libusb_hotplug_register_callback(nullptr,
        static_cast<libusb_hotplug_event>(LIBUSB_HOTPLUG_EVENT_DEVICE_ARRIVED | LIBUSB_HOTPLUG_EVENT_DEVICE_LEFT),
        LIBUSB_HOTPLUG_NO_FLAGS,
        LIBUSB_HOTPLUG_MATCH_ANY,
        LIBUSB_HOTPLUG_MATCH_ANY,
        LIBUSB_HOTPLUG_MATCH_ANY,
        &hotplugCallback,
        this,
        &hotplugCallbackHandle);

    this->thread = std::thread([this]() {
        while (this->handlingEvents) {
            int error = libusb_handle_events_completed(context, nullptr);
        }
    });
}
Let's say your object is being deallocated and, no matter what is happening on the USB bus, you don't care and you want to clean up your thread.
You negate this->handlingEvents and call thread.join(), and the thread hangs for 60 seconds before execution resumes. This happens because libusb_handle_events_completed by default calls libusb_handle_events_timeout_completed with a 60-second timeout interval (with plans to make it infinite).
The way you force libusb_handle_events_completed to return is to call libusb_hotplug_deregister_callback, which wakes up libusb_handle_events() and causes the function to return.
There is more info about this behavior in the docs.
So your destructor (or wherever you want to stop listening immediately) for the class could look something like this:
SomeClass::~SomeClass() {
    this->handlingEvents = false;
    libusb_hotplug_deregister_callback(context, hotplugCallbackHandle);
    if (this->thread.joinable()) this->thread.join();
    libusb_exit(this->context);
}
In the function
int libusb_handle_events_completed(libusb_context* ctx, int* completed)
you can set the integer pointed to by completed to 1 so that the function returns without blocking.
According to their docs:
If the parameter completed is not NULL then after obtaining the event handling lock this function will return immediately if the integer pointed to is not 0. This allows for race free waiting for the completion of a specific transfer.
There are no functions in libusb that force libusb_handle_events() to return.
It's recommended to call libusb_handle_events() from a dedicated thread, so your main thread is not blocked by the call. Even so, if you need to control the event-handling loop, you can put the call inside a while (condition) loop and change the condition's state from your main thread.
Libusb documentation details this here.
I have a handler similar to the following, which essentially responds to a command and sends a whole bunch of commands to a different queue.
public void Handle(ISomeCommand message)
{
    int i = 0;
    while (i < 10000)
    {
        var command = Bus.CreateInstance<IAnotherCommand>();
        command.Id = i;
        Bus.Send("target.queue#d1555", command);
        i++;
    }
}
The issue with this block is that none of the messages appear in the target queue or in the outgoing queue until the loop has fully completed. Can someone help me understand this behavior?
Also, if I use Tasks to send messages within the handler, as below, the messages appear immediately. So, two questions on this:
What's the explanation for Task-based sends going through immediately?
Are there any ramifications of using Tasks within message handlers?
public void Handle(ISomeCommand message)
{
    int i = 0;
    while (i < 10000)
    {
        // Note: the work items capture and increment the shared variable i,
        // which races with the loop condition on the handler thread.
        System.Threading.ThreadPool.QueueUserWorkItem((args) =>
        {
            var command = Bus.CreateInstance<IAnotherCommand>();
            command.Id = i;
            Bus.Send("target.queue#d1555", command);
            i++;
        });
    }
}
Your time is much appreciated!
First question: picking a message from a queue, running all the registered message handlers for it AND any other transactional action (like writing new messages or writes against a database) is performed in ONE transaction. Either it all completes or none of it does. So what you are seeing is the expected behaviour: picking the message from the queue, handling ISomeCommand and writing 10000 new IAnotherCommands is either done completely or not at all. To avoid this behaviour you can do one of the following:
Configure your NServiceBus endpoint to not be transactional
public class EndpointConfig : IConfigureThisEndpoint, AsA_Publisher, IWantCustomInitialization
{
    public void Init()
    {
        Configure.With()
            .DefaultBuilder()
            .XmlSerializer()
            .MsmqTransport()
            .IsTransactional(false)
            .UnicastBus();
    }
}
Wrap the sending of IAnotherCommand in a transaction scope that suppresses the ambient transaction.
public void Handle(ISomeCommand message)
{
    using (new TransactionScope(TransactionScopeOption.Suppress))
    {
        int i = 0;
        while (i < 10000)
        {
            var command = Bus.CreateInstance<IAnotherCommand>();
            command.Id = i;
            Bus.Send("target.queue#d1555", command);
            i++;
        }
    }
}
Issue the Bus.Send on another thread, by either starting a new thread yourself, using System.Threading.ThreadPool.QueueUserWorkItem or the Task classes. This works because an ambient transaction is not automatically carried over to a new thread.
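For illustration, a minimal sketch of that third option using the Task classes (my own sketch, not from the original answer; note the loop counter is copied per iteration, unlike the question's snippet, so each work item sends a distinct Id and there is no race on i):

public void Handle(ISomeCommand message)
{
    for (int i = 0; i < 10000; i++)
    {
        int id = i; // copy per iteration; the work item runs later
        Task.Run(() =>
        {
            // The ambient queue transaction does not flow to this thread,
            // so the send happens immediately.
            var command = Bus.CreateInstance<IAnotherCommand>();
            command.Id = id;
            Bus.Send("target.queue#d1555", command);
        });
    }
}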
Second question: the ramification of using Tasks, or any of the other methods I mentioned, is that you have no transactional guarantee for the whole thing.
How do you handle the case where you have generated 5000 IAnotherCommands and the power suddenly goes out?
If you use 2) or 3), the original ISomeCommand will not complete and will be retried automatically by NServiceBus when you start the endpoint again. End result: 5000 + 10000 IAnotherCommands.
If you use 1), you will lose the original ISomeCommand completely and end up with only 5000 IAnotherCommands.
Using the recommended transactional way, the initial 5000 IAnotherCommands would be discarded, and the original ISomeCommand comes back on the queue and is retried when the endpoint starts up again. Net result: 10000 IAnotherCommands.
If memory serves, NServiceBus wraps the calls to the message handlers in a TransactionScope if the transactional option is used, and TransactionScope needs some help to be cross-thread friendly:
TransactionScope and multi-threading
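For completeness, the usual cross-thread pattern from that link is based on DependentTransaction; here is a generic System.Transactions sketch (not NServiceBus-specific):

using System.Threading;
using System.Transactions;

static void DoWorkOnAnotherThread()
{
    // Must be called inside an ambient TransactionScope. The outer
    // transaction cannot commit until the dependent clone completes.
    DependentTransaction dependent =
        Transaction.Current.DependentClone(DependentCloneOption.BlockCommitUntilComplete);

    ThreadPool.QueueUserWorkItem(state =>
    {
        var dt = (DependentTransaction)state;
        try
        {
            using (var scope = new TransactionScope(dt))
            {
                // ... transactional work on the worker thread ...
                scope.Complete();
            }
        }
        finally
        {
            dt.Complete(); // releases the outer transaction to commit
        }
    }, dependent);
}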
If you are trying to reduce overhead, you can also bundle your messages. The signature for the send is Bus.Send(IMessage[] messages). If you can guarantee that you don't blow past the size limit for MSMQ, then you could Send() all the messages at once. If the size limit is an issue, you can chunk them up or use the Databus.
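A hypothetical chunking helper, assuming a Bus.Send overload that takes a destination plus a message array as described above (the chunk size is the caller's pick, and Skip/Take need using System.Linq):

private void SendInChunks(IMessage[] messages, int chunkSize)
{
    // Send in fixed-size batches so no single MSMQ message
    // exceeds the transport's size limit.
    for (int offset = 0; offset < messages.Length; offset += chunkSize)
    {
        IMessage[] chunk = messages.Skip(offset).Take(chunkSize).ToArray();
        Bus.Send("target.queue#d1555", chunk);
    }
}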
The question pretty much sums it up. I have a WCF service call, and I want to wait until it finishes before doing something else; it has to wait until it finishes. My code looks something like this. Thanks!
private void RequestGeoCoordinateFromAddress(string address)
{
    GeocodeRequest geocodeRequest = new GeocodeRequest();
    GeocodeServiceClient geocodeService = new GeocodeServiceClient("BasicHttpBinding_IGeocodeService");
    geocodeService.GeocodeCompleted += new EventHandler<GeocodeCompletedEventArgs>(geocodeService_GeocodeCompleted);

    // Make the geocode request
    geocodeService.GeocodeAsync(geocodeRequest);

    //if (geocodeResponse.Results.Length > 0)
    //    results = String.Format("Latitude: {0}\nLongitude: {1}",
    //        geocodeResponse.Results[0].Locations[0].Latitude,
    //        geocodeResponse.Results[0].Locations[0].Longitude);
    //else
    //    results = "No Results Found";

    // wait for the request to finish here, so I can do something else
    // DoSomethingElse();
}
private void geocodeService_GeocodeCompleted(object sender, GeocodeCompletedEventArgs e)
{
    bool isErrorNull = e.Error == null;
    Exception error = e.Error;
    try
    {
        double latitude = e.Result.Results[0].Locations[0].Latitude;
        double longitude = e.Result.Results[0].Locations[0].Longitude;
        SetMapLocation(new GeoCoordinate(latitude, longitude));
    }
    catch (Exception ex)
    {
        // TODO: Remove reason later
        MessageBox.Show("Unable to find address. Reason: " + ex.Message);
    }
}
There is a pattern, supported by WCF, for a call to have an asynchronous begin call, and a corresponding end call.
In this case, the asynchronous methods would be in the client's interface as so:
[ServiceContract]
interface GeocodeService
{
    // Synchronous operation
    [OperationContract(AsyncPattern = false, Action = "tempuri://Geocode", ReplyAction = "GeocodeReply")]
    GeocodeResults Geocode(GeocodeRequestType geocodeRequest);

    // Asynchronous operations
    [OperationContract(AsyncPattern = true, Action = "tempuri://Geocode", ReplyAction = "GeocodeReply")]
    IAsyncResult BeginGeocode(GeocodeRequestType geocodeRequest, AsyncCallback callback, object asyncState);
    GeocodeResults EndGeocode(IAsyncResult result);
}
If you generate the client interface using svcutil with the asynchronous calls option, you get all of this automatically. You can also hand-write the client interface if you aren't generating the client proxies automatically.
The End call would block until the call is complete.
IAsyncResult asyncResult = geocodeService.BeginGeocode(geocodeRequest, null, null);

//
// Do something else with your CPU cycles here, if you want to
//

var geocodeResponse = geocodeService.EndGeocode(asyncResult);
I don't know what you've done with your interface declarations to get the GeocodeAsync function, but if you can wrangle it back into this pattern, your job will be easier.
You could use a ManualResetEvent:
private ManualResetEvent _wait = new ManualResetEvent(false);

private void RequestGeoCoordinateFromAddress(string address)
{
    ...
    _wait = new ManualResetEvent(false);
    geocodeService.GeocodeAsync(geocodeRequest);

    // wait for maximum 2 minutes
    _wait.WaitOne(TimeSpan.FromMinutes(2));
    // at that point the web service returned
}

private void geocodeService_GeocodeCompleted(object sender, GeocodeCompletedEventArgs e)
{
    ...
    _wait.Set();
}
Obviously, doing this makes no sense, so the question here is: why do you need to do this? Why use an async call if you are going to block the main thread? Why not use a direct (synchronous) call instead?
Generally, when using async web service calls, you shouldn't block the main thread; instead, do all the work of handling the results in the async callback. Depending on the type of application (WinForms, WPF), don't forget that GUI controls can only be updated on the main thread, so if you intend to modify the GUI in the callback you should use the appropriate technique (InvokeRequired, ...).
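For example, in WinForms the completed handler can marshal itself back to the UI thread. A generic sketch (resultLabel is a hypothetical control, not from the question):

private void geocodeService_GeocodeCompleted(object sender, GeocodeCompletedEventArgs e)
{
    if (InvokeRequired)
    {
        // Hop back to the UI thread before touching any controls.
        BeginInvoke(new Action(() => geocodeService_GeocodeCompleted(sender, e)));
        return;
    }
    resultLabel.Text = "Geocode finished";
}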
Don't use this code with Silverlight:
private ManualResetEvent _wait = new ManualResetEvent(false);

private void RequestGeoCoordinateFromAddress(string address)
{
    ...
    _wait = new ManualResetEvent(false);
    geocodeService.GeocodeAsync(geocodeRequest);

    // wait for maximum 2 minutes
    _wait.WaitOne(TimeSpan.FromMinutes(2));
    // at that point the web service returned
}

private void geocodeService_GeocodeCompleted(object sender, GeocodeCompletedEventArgs e)
{
    ...
    _wait.Set();
}
When we call _wait.WaitOne(TimeSpan.FromMinutes(2)), we block the UI thread, which means the service call never takes place. Behind the scenes, the call to geocodeService.GeocodeAsync is placed in a message queue and is only actioned when the thread is not executing user code; if we block the thread, the service call never happens.
Synchronous Web Service Calls with Silverlight: Dispelling the async-only myth
The Visual Studio 11 Beta includes C# 5 with async-await.
See Async CTP - How can I use async/await to call a wcf service?
It makes it possible to write async clients in a 'synchronous style'.
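A sketch of that "synchronous style", assuming the client proxy is regenerated with task-based operations so GeocodeAsync returns a Task (the other names are taken from the question's code):

private async Task RequestGeoCoordinateFromAddressAsync(string address)
{
    GeocodeRequest geocodeRequest = new GeocodeRequest();
    GeocodeServiceClient geocodeService = new GeocodeServiceClient("BasicHttpBinding_IGeocodeService");

    // await frees the calling (UI) thread instead of blocking it.
    var geocodeResponse = await geocodeService.GeocodeAsync(geocodeRequest);

    if (geocodeResponse.Results.Length > 0)
    {
        var location = geocodeResponse.Results[0].Locations[0];
        SetMapLocation(new GeoCoordinate(location.Latitude, location.Longitude));
    }
    else
    {
        MessageBox.Show("No Results Found");
    }
}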
I saw one person use ManualResetEvent and WaitAll, but they had to wrap all the code inside the ThreadPool. It works, but it is a very bad idea.