io_service::reset documentation states that reset() must be called before subsequent calls to run(), run_one(), poll() or poll_one().
Questions:
Why is this necessary?
What behaviour might I expect if this step is neglected?
Why is this requirement not important enough to warrant an assert if it's neglected?
Some context: I finished debugging some unit tests that called poll() repeatedly without reset() and attempted to check that the expected number of handlers was executed each time. It appears that with enough calls to poll(), all handlers are eventually executed in the expected order, but it takes more calls than you would otherwise expect. Correctly calling reset() fixes the problem, but I'm curious whether this is the only side effect of not calling reset(), or whether there are potentially worse effects, such as dropped handlers, or effects that might only appear in a multi-threaded example.
When the io_service has been stopped:
all invocations of poll(), poll_one(), run(), and run_one() will return as soon as possible
subsequent calls to poll(), poll_one(), run(), and run_one() will return immediately without invoking any handlers or processing the event loop
Invoking io_service::reset() sets the io_service to no longer be in a stopped state, allowing subsequent calls to poll(), poll_one(), run(), and run_one() to invoke handlers and process the event loop.
Why is this necessary?
It is necessary if one wishes to invoke handlers or process the event loop once the io_service has been stopped explicitly via io_service.stop() or implicitly by running out of work.
What behaviour might I expect if this step is neglected?
If io_service.stopped() is true, then subsequent calls to poll(), poll_one(), run(), and run_one() will not perform any work.
Why is this requirement not important enough to warrant an assert if it's neglected?
The word "must" in the io_service::reset() documentation sets an overly critical tone without mentioning the consequences of not calling reset(). The behavior described in the io_service::stop() documentation is not severe enough to warrant an error:
Subsequent calls to run(), run_one(), poll() or poll_one() will return immediately until reset() is called.
For reset() itself, the only hard requirement is that it must not be called while there are unfinished calls to poll(), poll_one(), run(), or run_one().
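As a rough illustration of the answers above (the io_service name and the trivial handlers are just placeholders), the following sketch shows an io_service running out of work, reporting stopped(), ignoring a subsequent poll(), and then processing work again after reset():

#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io_service;

    io_service.post([] { std::cout << "handler 1\n"; });
    io_service.run();                        // runs handler 1, then runs out of work and stops
    std::cout << std::boolalpha << io_service.stopped() << '\n';  // true

    io_service.post([] { std::cout << "handler 2\n"; });
    std::cout << io_service.poll() << '\n';  // 0: still stopped, handler 2 is not invoked

    io_service.reset();                      // clear the stopped state
    std::cout << io_service.poll() << '\n';  // 1: handler 2 runs now
}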
Related
When looking through the Boost.Asio co_spawn documentation (https://www.boost.org/doc/libs/1_78_0/doc/html/boost_asio/reference/co_spawn/overload6.html), I see the statement "Spawn a new coroutined-based thread of execution". However, my understanding is that co_spawn does not create an actual thread, but uses threads that are part of the boost::asio::io_context pool. It's a "coroutine-based thread of execution" in the sense that this coroutine would be the root of all coroutines spawned from inside it.
Is my understanding correct here, or is an actual thread created whenever co_spawn is used like this:
::boost::asio::co_spawn(io_ctx, [&]() -> ::boost::asio::awaitable<void> {
// do something
}, ::boost::asio::detached);
thanks!
It does not create a thread. See The Proactor Design Pattern: Concurrency Without Threads and https://www.boost.org/doc/libs/1_78_0/doc/html/boost_asio/overview/core/threads.html
What does detached mean/do? The documentation says:
The detached_t class is used to indicate that an asynchronous operation is detached. That is, there is no completion handler waiting for the operation's result.
It comes down to the same thing as writing a no-op completion handler, but with (a) less work and (b) more room for the library to optimize.
Another angle to look at this from: if the execution context behind the executor (io_ctx) is never run/polled, nothing will ever happen. As always in Boost.Asio, you decide where you run the service (e.g. whether you use threads).
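As a minimal sketch of this point (the task() coroutine and the printed messages are illustrative, not from the question), co_spawn only schedules the coroutine against io_ctx; nothing runs, and no thread is created, until some thread calls io_ctx.run():

#include <boost/asio/awaitable.hpp>
#include <boost/asio/co_spawn.hpp>
#include <boost/asio/detached.hpp>
#include <boost/asio/io_context.hpp>
#include <iostream>
#include <thread>

boost::asio::awaitable<void> task()
{
    // Runs on whichever thread is executing io_ctx, not on a thread of its own.
    std::cout << "coroutine on thread " << std::this_thread::get_id() << '\n';
    co_return;
}

int main()
{
    boost::asio::io_context io_ctx;

    // Schedules the coroutine; no new thread is created here.
    boost::asio::co_spawn(io_ctx, task(), boost::asio::detached);

    std::cout << "main on thread      " << std::this_thread::get_id() << '\n';

    // The coroutine only makes progress because this thread runs the context.
    io_ctx.run();
}

Both lines print the same thread id, because the only thread running io_ctx here is the main thread.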
What would happen if I call CoUninitialize when CoInitialize returns RPC_E_CHANGED_MODE? Will it cause any issues?
It is safe to call CoUninitialize once you have stopped all COM activity on the thread. Leaving COM activity in place, in a broad sense, especially leaving referenced stubs and proxies, is very likely to cause undefined behavior and assorted exceptions.
Since CoInitialize and CoUninitialize can be safely called multiple times, your unpaired CoUninitialize call might have different consequences depending on context.
When you had 2+ CoInitialize calls on the thread before your CoUninitialize call, nothing will happen immediately; however, you are going to have issues later, closer to thread termination, when upper-level code makes its presumably paired CoUninitialize calls and finally terminates COM on the thread. Note that your CoUninitialize in this scenario does not let you change the apartment mode, because your call does not terminate COM on the thread (you can only change the apartment mode once COM has been uninitialized completely on the thread).
All in all, you should stick to the basic rule: never call CoUninitialize on its own. You call CoInitialize, and if it succeeds you must call CoUninitialize later on the same thread when you are finished with your COM work. Stepping off this path is likely to get you into trouble, which is often too painful to quickly identify and troubleshoot.
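As a hedged sketch of that rule (DoComWork is just an illustrative name): CoUninitialize is called only to balance a CoInitializeEx that did not fail, and RPC_E_CHANGED_MODE is a failure that adds no reference, so it must not be balanced:

#include <objbase.h>

void DoComWork()
{
    HRESULT hr = CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);

    if (hr == RPC_E_CHANGED_MODE)
    {
        // COM is already initialized on this thread with a different apartment
        // model. No reference was added, so do NOT call CoUninitialize here.
        return;
    }
    if (FAILED(hr))
        return;

    // ... use COM; release all interface pointers before leaving ...

    // S_OK and S_FALSE both added a reference, so both must be balanced.
    CoUninitialize();
}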
I'm programming an application that makes use of asynchronous web requests using NSURLConnection, so I have multiple threads running. To ensure that the main logic of my app happens on one thread, I am making heavy use of performSelectorOnMainThread:waitUntilDone:. Sometimes though, I am running this on the main thread, which piqued my curiosity.
What happens if performSelectorOnMainThread:waitUntilDone: is called while already on the main thread? Does it act the same as just performSelector:? What if waitUntilDone: is YES? What if it is NO?
EDIT: I have found that when waitUntilDone: is YES, the selector is executed (almost) immediately, but I cannot figure out when it is executed if waitUntilDone: is NO.
performSelectorOnMainThread:withObject:waitUntilDone:
is a method for delivering a message on the main thread of your application. The Boolean value of the waitUntilDone: parameter specifies whether the calling thread should block until the specified selector has finished executing on the main thread.
For example, if you write these two lines:
[self performSelectorOnMainThread:@selector(print) withObject:nil waitUntilDone:YES];
NSLog(@"Hello iPhone");
and this is the print method -
- (void) print
{
NSLog(#"Hello World");
}
then you will get this output:
Hello World
Hello iPhone
So it first pauses the execution of the calling code, performs print (which prints "Hello World"), and then continues and prints "Hello iPhone", because you specified YES for waitUntilDone:.
But if you specify NO for waitUntilDone:, then it will print like this:
Hello iPhone
Hello World
This clearly indicates that the request to execute the specified selector is put in a queue, and the OS executes it once the main thread's run loop gets to it.
Calling performSelectorOnMainThread:withObject:waitUntilDone: from the main thread or from a secondary thread doesn't make any difference to its execution; what matters is what you specify for waitUntilDone:.
for more info -
NSObject Class Reference
If the current thread is also the main thread, and you pass YES, the message is performed immediately, otherwise the perform is queued to run the next time through the run loop.
If YES, it can be performed before performSelectorOnMainThread:withObject:waitUntilDone: returns.
I have found that when waitUntilDone: is YES, the selector is executed (almost) immediately, but I cannot figure out when it is executed if waitUntilDone: is NO.
The bit about the run loop: Your main thread has a run loop. This, more or less, prevents the thread from exiting. A run loop manages a to-do list. When its work is complete, it suspends execution of the thread for some time, then wakes up later and sees if it has work to do.
The amount of work can vary greatly (e.g. it may do some really heavy drawing or file I/O between the time it wakes and the point your selector is performed). Therefore, it's not a good tool for really precise timing, but it is enough to know how it works and how the implementation adds the work to the run loop.
http://developer.apple.com/library/ios/#documentation/cocoa/Conceptual/Multithreading/RunLoopManagement/RunLoopManagement.html
If waitUntilDone: is YES, it acts like an immediate function call.
If waitUntilDone: is NO, it queues the call along with calls from all other threads.
This method queues the message on the run loop of the main thread using the common run loop modes—that is, the modes associated with the NSRunLoopCommonModes constant. As part of its normal run loop processing, the main thread dequeues the message (assuming it is running in one of the common run loop modes) and invokes the desired method.
As noted above, things like drawing and I/O are prioritized over anything in the queues. Once the main thread gets around to servicing the queue on the next pass through the event loop, there are a couple of other details that make it not quite as simple as counting on first-in, first-out:
1) dispatch_async() blocks ignore modes.
2) The performSelector variants with a specific mode argument -- event tracking, say -- may take precedence over ones with the default common modes argument in a loop running in that specific mode.
As a general rule, if you want predictable timing behaviour, you should use the low-level GCD dispatch functions that don't take account of higher-level considerations like run loop modes.
I was going through a tutorial on SystemC, and there was a mention that we can't put wait() in an SC_METHOD, but it didn't explain why.
That is because SC_METHOD does not have its own thread of execution. Every time an event on an SC_METHOD sensitivity list is triggered, the SC_METHOD's code is (ideally) entirely executed. In other words, calling wait() in an SC_METHOD would freeze the simulation itself.
In contrast, an SC_THREAD has its own thread of execution, and its activity is generally modeled inside a loop that may contain wait() statements, each of which pauses the execution of the thread. Whenever an event in the sensitivity list is triggered, execution resumes at the statement that follows the previously issued wait().
It's a feature of the language. An SC_METHOD is meant to execute to completion without being deferred or losing context, unlike an SC_THREAD.
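A small sketch may make the difference concrete (the module and process names are made up): the SC_METHOD below must return every time it is triggered, while the SC_THREAD keeps its local state across the wait() calls that suspend it:

#include <systemc.h>

SC_MODULE(example)
{
    sc_in<bool> clk;

    // Method process: re-invoked from the top on every trigger and must
    // run to completion -- calling wait() here is not allowed.
    void method_proc()
    {
        // react to the event, then return
    }

    // Thread process: has its own execution context, so it can suspend
    // with wait() and later resume right after it, keeping local state.
    void thread_proc()
    {
        int count = 0;
        while (true)
        {
            wait();          // suspend until the next clk posedge
            ++count;         // resumes here with 'count' preserved
        }
    }

    SC_CTOR(example)
    {
        SC_METHOD(method_proc);
        sensitive << clk.pos();

        SC_THREAD(thread_proc);
        sensitive << clk.pos();
    }
};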
I am in the middle of creating a cloud integration framework for iOS. We allow you to save, query, count, and remove, both synchronously and asynchronously, with selector/callback and block-based implementations. What is the correct practice? Running the completion blocks on the main thread or on a background thread?
For simple cases, I just parameterize it and do all the work I can on secondary threads:
By default, callbacks will be made on any thread (wherever it is most efficient and direct, typically once the operation has completed). This is the default because messaging via the main thread can be quite costly.
The client may optionally specify that the message must be made on the main thread. This way, it requires one line or argument. If safety is more important than efficiency, then you may want to invert the default value.
You could also attempt to batch and coalesce some messages, or simply use a timer on the main run loop to vend them.
Consider both joined and detached models for some of your work.
If you can reduce the task to a result (remove the capability for incremental updates, if not needed), then you can simply run the task, do the work, and provide the result (or error) when complete.
Apple's NSURLConnection class calls back to its delegate methods on the thread from which it was initiated, while doing its work on a background thread. That seems like a sensible procedure. It's likely that a user of your framework will not enjoy having to worry about thread safety when writing a simple callback block, as they would if you created a new thread to run it on.
The two sides of the coin: If the callback touches the GUI, it has to be run on the main thread. On the other hand, if it doesn't, and is going to do a lot of work, running it on the main thread will block the GUI, causing frustration for the end user.
It's probably best to put the callback on a known, documented thread, and let the app programmer make the determination of the effect on the GUI.