How to dispatch on main queue synchronously without a deadlock? - objective-c

I need to dispatch a block on the main queue, synchronously. I don't know whether I'm currently running on the main thread or not. The naive solution looks like this:
dispatch_sync(dispatch_get_main_queue(), block);
But if I'm currently inside a block running on the main queue, this call creates a deadlock. (The synchronous dispatch waits for the block to finish, but the block never even starts running, since the queue is still waiting for the current block to finish.)
The obvious next step is to check for the current queue:
if (dispatch_get_current_queue() == dispatch_get_main_queue()) {
    block();
} else {
    dispatch_sync(dispatch_get_main_queue(), block);
}
This works, but it’s ugly. Before I at least hide it behind some custom function, isn’t there a better solution for this problem? I stress that I can’t afford to dispatch the block asynchronously – the app is in a situation where the asynchronously dispatched block would get executed “too late”.

I need to use something like this fairly regularly within my Mac and iOS applications, so I use the following helper function (originally described in this answer):
void runOnMainQueueWithoutDeadlocking(void (^block)(void))
{
    if ([NSThread isMainThread])
    {
        block();
    }
    else
    {
        dispatch_sync(dispatch_get_main_queue(), block);
    }
}
which you call via
runOnMainQueueWithoutDeadlocking(^{
    // Do stuff
});
This is pretty much the process you describe above, and I've talked to several other developers who have independently crafted something like this for themselves.
I used [NSThread isMainThread] instead of checking dispatch_get_current_queue(), because the caveats section of that function's documentation once warned against using it for identity testing, and the call was deprecated in iOS 6.

For syncing on the main queue or on the main thread (which is not the same thing), I use:
import Foundation

private let mainQueueKey = UnsafeMutablePointer<Void>.alloc(1)
private let mainQueueValue = UnsafeMutablePointer<Void>.alloc(1)

public func dispatch_sync_on_main_queue(block: () -> Void)
{
    struct dispatchonce { static var token: dispatch_once_t = 0 }
    dispatch_once(&dispatchonce.token,
    {
        dispatch_queue_set_specific(dispatch_get_main_queue(), mainQueueKey, mainQueueValue, nil)
    })

    if dispatch_get_specific(mainQueueKey) == mainQueueValue
    {
        block()
    }
    else
    {
        dispatch_sync(dispatch_get_main_queue(), block)
    }
}

extension NSThread
{
    public class func runBlockOnMainThread(block: () -> Void)
    {
        if NSThread.isMainThread()
        {
            block()
        }
        else
        {
            dispatch_sync(dispatch_get_main_queue(), block)
        }
    }

    public class func runBlockOnMainQueue(block: () -> Void)
    {
        dispatch_sync_on_main_queue(block)
    }
}

I recently began experiencing a deadlock during UI updates. That led me to this Stack Overflow question, which led to me implementing a runOnMainQueueWithoutDeadlocking-type helper function based on the accepted answer.
The real issue, though, was that when updating the UI from a block I had mistakenly used dispatch_sync rather than dispatch_async to dispatch to the main queue for UI updates. That's easy to do with code completion, and perhaps hard to notice after the fact.
So, for others reading this question: if synchronous execution is not required, simply using dispatch_async will avoid the deadlock you may be intermittently hitting.

Related

How to cancel kotlin coroutine with potentially "un-cancellable" method call inside it?

I have this piece of code:
// this method is used to evaluate the input string, and it returns the evaluation result in string format
fun process(input: String): String {
    val timeoutMillis = 5000L
    val page = browser.newPage()
    try {
        val result = runBlocking {
            withTimeout(timeoutMillis) {
                val result = page.evaluate(input).toString()
                return@withTimeout result
            }
        }
        return result
    } catch (playwrightException: PlaywrightException) {
        return "Could not parse template! '${playwrightException.localizedMessage}'"
    } catch (timeoutException: TimeoutCancellationException) {
        return "Could not parse template! (timeout)"
    } finally {
        page.close()
    }
}
It should throw an exception after 5 seconds if the method takes too long to execute (for example, if the input contains an infinite loop), but it doesn't (it deadlocks, I assume), because coroutine cancellation is cooperative. But the method I am calling is from another library, and I have no control over its computation (so I can't stick a yield() or something like it in there).
So the question is: is it even possible to time out such a coroutine? If yes, then how?
Should I use a Java thread instead and just kill it after some time?
But the method I am calling is from another library, and I have no control over its computation (so I can't stick a yield() or something like it in there).
If that is the case, I see mainly 2 situations here:
1. The library is aware that this is a long-running operation and supports thread interrupts to cancel it. This is the case for Thread.sleep and some I/O operations.
2. The library function really does block the calling thread for the whole duration of the operation and wasn't designed to handle thread interrupts.
Situation 1: the library function is interruptible
If you are lucky enough to be in situation 1, then simply wrap the library's call into a runInterruptible block, and the coroutines library will translate cancellation into thread interruptions:
fun main() {
    runBlocking {
        val elapsed = measureTimeMillis {
            withTimeoutOrNull(100.milliseconds) {
                runInterruptible {
                    interruptibleBlockingCall()
                }
            }
        }
        println("Done in ${elapsed}ms")
    }
}

private fun interruptibleBlockingCall() {
    Thread.sleep(3000)
}
Situation 2: the library function is NOT interruptible
In the more likely situation 2, you're kind of out of luck.
Should I use a Java thread instead and just kill it after some time?
There is no such thing as "killing a thread" in Java. See Why is Thread.stop deprecated?, or How do you kill a Thread in Java?.
In short, in that case you do not have a choice but to block some thread.
I do not know a solution to this problem that doesn't leak resources. Using an ExecutorService would not help if the task doesn't support thread interrupts - the threads will not die even with shutdownNow() (which uses interrupts).
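For illustration, a rough sketch (not part of the original answer) of that limitation with a plain ExecutorService; uncancellableBlockingCall() is the same stand-in used further down. The timed get() lets you stop waiting, but the task keeps running on the pool thread, which is exactly the leak described here:
import java.util.concurrent.Callable
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit
import java.util.concurrent.TimeoutException

fun main() {
    val executor = Executors.newSingleThreadExecutor()
    val future = executor.submit(Callable {
        uncancellableBlockingCall() // the non-interruptible library call
        "done"
    })
    try {
        println(future.get(100, TimeUnit.MILLISECONDS))
    } catch (e: TimeoutException) {
        println("Gave up waiting") // ...but the task is still running
    }
    executor.shutdownNow() // sends interrupts, which a non-interruptible task
                           // ignores, so the pool thread stays alive until it finishes
}

// Pretend this ignores thread interrupts, like the library call in question.
private fun uncancellableBlockingCall() {
    Thread.sleep(3000)
}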
Of course, the blocked thread doesn't have to be your thread. You can technically launch a separate coroutine on another thread (using another dispatcher if yours is single-threaded) to wrap the library function call, and then join() the job inside a withTimeout to avoid waiting for it forever. That is however probably bad, because you're basically deferring the problem to whichever scope you use to launch the uncancellable task (this is actually why we can't use a simple withContext here).
If you use GlobalScope or another long-running scope, you effectively leak the hanging coroutine (without knowing for how long).
If you use a more local parent scope, you defer the problem to that scope. This is for instance the case if you use the scope of an enclosing runBlocking (like in your example), which makes this solution pointless:
fun main() {
    val elapsed = measureTimeMillis {
        doStuff()
    }
    println("Completely done in ${elapsed}ms")
}

private fun doStuff() {
    runBlocking {
        val nonCancellableJob = launch(Dispatchers.IO) {
            uncancellableBlockingCall()
        }
        val elapsed = measureTimeMillis {
            withTimeoutOrNull(100.milliseconds) {
                nonCancellableJob.join()
            }
        }
        println("Done waiting in ${elapsed}ms")
    } // /!\ runBlocking will still wait here for the uncancellable child coroutine
}

// Thread.sleep is in fact interruptible, but let's assume it's not for the sake of the example
private fun uncancellableBlockingCall() {
    Thread.sleep(3000)
}
Outputs something like:
Done waiting in 122ms
Completely done in 3055ms
So the bottom line is: either live with this long-running task potentially hanging on a thread, or ask the developers of that library to support interruption or make the task cancellable.

Coroutine launch() while inside another coroutine

I have the following code (pseudocode)
fun onMapReady() {
    // do some stuff on current thread (main thread)
    // get data from server
    GlobalScope.launch(Dispatchers.IO) {
        getDataFromServer { result ->
            // update UI on main thread
            launch(Dispatchers.Main) {
                updateUI(result) // BREAKPOINT HERE NEVER CALLED
            }
        }
    }
}
As stated in the comment, the code never enters the coroutine dispatched onto the main queue. The version below, however, works if I explicitly use GlobalScope.launch(Dispatchers.Main) instead of just launch(Dispatchers.Main):
fun onMapReady() {
    // do some stuff on current thread (main thread)
    // get data from server
    GlobalScope.launch(Dispatchers.IO) {
        getDataFromServer { result ->
            // update UI on main thread
            GlobalScope.launch(Dispatchers.Main) {
                updateUI(result) // BREAKPOINT HERE IS CALLED
            }
        }
    }
}
Why does the first approach not work?
I believe the problem here is that getDataFromServer() is asynchronous: it returns immediately, so you invoke launch(Dispatchers.Main) after you have already exited from the GlobalScope.launch(Dispatchers.IO) { ... } block. In other words, you try to start a coroutine using a coroutine scope that has already finished.
My suggestion is to not mix asynchronous, callback-based APIs with coroutines like this. Coroutines work best with suspend functions that are synchronous. Also, if you prefer to execute everything asynchronously and independently of other tasks (your onMapReady() started 3 separate asynchronous operations), then I think coroutines are not a good choice at all.
Speaking about your example: are you sure you can't execute getDataFromServer() from the main thread directly? It shouldn't block the main thread, as it is asynchronous. Similarly, in some libraries callbacks are automatically executed on the main thread, and in that case your example could be replaced with just:
fun onMapReady() {
    getDataFromServer { result ->
        updateUI(result)
    }
}
If the callback is executed on a background thread, then you can use GlobalScope.launch(Dispatchers.Main) as you did, but this is not really the usual way to use coroutines. Or you can use utilities like runOnUiThread() on Android, which probably makes more sense.
@broot already explained the gist of the problem. You're trying to launch a coroutine in the child scope of the outer GlobalScope.launch, but that scope is already done by the time the callback of getDataFromServer is called.
So in short, don't capture the outer scope in a callback that will be called in a place/time that you don't control.
One nicer way to deal with your problem would be to make getDataFromServer suspending instead of callback-based. If it's an API you don't control, you can create a suspending wrapper this way:
suspend fun getDataFromServerSuspend(): ResultType = suspendCoroutine { cont ->
    getDataFromServer { result ->
        cont.resume(result)
    }
}
You can then simplify your calling code:
fun onMapReady() {
    // instead of GlobalScope, please use viewModelScope or lifecycleScope,
    // or something more relevant (see explanation below)
    GlobalScope.launch(Dispatchers.IO) {
        val result = getDataFromServerSuspend()
        // you don't need a separate coroutine, just a context switch
        withContext(Dispatchers.Main) {
            updateUI(result)
        }
    }
}
As a side note, GlobalScope is probably not what you want, here. You should instead use a scope that maps to the lifecycle of your view or view model (viewModelScope or lifecycleScope) because you're not interested in the result of this coroutine if the view is destroyed (so it should just be cancelled). This will avoid coroutine leaks if for some reason something hangs or loops inside the coroutine.
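As a rough sketch of that advice (assuming an Android ViewModel with the androidx lifecycle-viewmodel-ktx and kotlinx-coroutines-android artifacts; getDataFromServerSuspend and updateUI are the functions used above):
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext

class MapViewModel : ViewModel() {
    fun onMapReady() {
        // viewModelScope is cancelled automatically when the ViewModel is cleared,
        // so a hanging request cannot leak a coroutine past the screen's lifetime.
        viewModelScope.launch(Dispatchers.IO) {
            val result = getDataFromServerSuspend()
            withContext(Dispatchers.Main) {
                updateUI(result)
            }
        }
    }
}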

Sending HTTP requests in a MacCatalyst app opened in command line [duplicate]

When writing a Command Line Tool (CLT) in Swift, I want to process a lot of data. I've determined that my code is CPU bound and performance could benefit from using multiple cores. Thus I want to parallelize parts of the code. Say I want to achieve the following pseudo-code:
Fetch items from database
Divide items in X chunks
Process chunks in parallel
Wait for chunks to finish
Do some other processing (single-thread)
Now I've been using GCD, and a naive approach would look like this:
let group = dispatch_group_create()
let queue = dispatch_queue_create("", DISPATCH_QUEUE_CONCURRENT)

for chunk in chunks {
    dispatch_group_async(group, queue) {
        worker(chunk)
    }
}

dispatch_group_wait(group, DISPATCH_TIME_FOREVER)
However, GCD requires a run loop, so the code will hang as the group is never executed. The run loop can be started with dispatch_main(), but it never exits. It is also possible to run the NSRunLoop for just a few seconds, but that doesn't feel like a solid solution. Regardless of GCD, how can this be achieved using Swift?
I mistook the blocked thread for a hanging program. The work executes just fine without a run loop. The code in the question runs fine, blocking the main thread until the whole group has finished.
So, say chunks contains 4 items of workload; the following code spins up 4 concurrent workers and then waits for all of them to finish:
let group = DispatchGroup()
let queue = DispatchQueue(label: "", attributes: .concurrent)

for chunk in chunks {
    queue.async(group: group, execute: DispatchWorkItem() {
        do_work(chunk)
    })
}

_ = group.wait(timeout: .distantFuture)
Just like with an Objective-C CLI, you can make your own run loop using NSRunLoop.
Here's one possible implementation, modeled from this gist:
class MainProcess {
    var shouldExit = false
    func start() {
        // do your stuff here
        // set shouldExit to true when you're done
    }
}

println("Hello, World!")

var runLoop: NSRunLoop
var process: MainProcess

autoreleasepool {
    runLoop = NSRunLoop.currentRunLoop()
    process = MainProcess()
    process.start()

    while (!process.shouldExit && (runLoop.runMode(NSDefaultRunLoopMode, beforeDate: NSDate(timeIntervalSinceNow: 2)))) {
        // do nothing
    }
}
As Martin points out, you can use NSDate.distantFuture() as NSDate instead of NSDate(timeIntervalSinceNow: 2). (The cast is necessary because the distantFuture() method signature indicates it returns AnyObject.)
If you need to access CLI arguments see this answer. You can also return exit codes using exit().
Swift 3 minimal implementation of Aaron Brager's solution, which simply combines autoreleasepool and RunLoop.current.run(...) until you break the loop:
var shouldExit = false
doSomethingAsync() { _ in
    defer {
        shouldExit = true
    }
}

autoreleasepool {
    var runLoop = RunLoop.current
    while (!shouldExit && (runLoop.run(mode: .defaultRunLoopMode, before: Date.distantFuture))) {}
}
I think CFRunLoop is much easier than NSRunLoop in this case
func main() {
    /**** YOUR CODE START ****/
    let group = dispatch_group_create()
    let queue = dispatch_queue_create("", DISPATCH_QUEUE_CONCURRENT)

    for chunk in chunks {
        dispatch_group_async(group, queue) {
            worker(chunk)
        }
    }

    dispatch_group_wait(group, DISPATCH_TIME_FOREVER)
    /**** END ****/
}

let runloop = CFRunLoopGetCurrent()
CFRunLoopPerformBlock(runloop, kCFRunLoopDefaultMode) { () -> Void in
    dispatch_async(dispatch_queue_create("main", nil)) {
        main()
        CFRunLoopStop(runloop)
    }
}
CFRunLoopRun()

Kotlin/Native: How to perform blocking calls asynchronously?

As of now, Kotlin/Native is single-threaded. Therefore, the following code will become blocked by sleep:
coroutineScope {
    launch { platform.posix._sleep(100000) }
    launch { println("Hello") }
}
However, it has a novel concurrency mechanism called Workers. Yet even with a worker, the main thread is still blocked by the long-running POSIX call:
coroutineScope {
    launch { Worker.start().execute(TransferMode.SAFE, { }, { platform.posix._sleep(100000) }).consume { } }
    launch { println("Hello") }
}
Neither of the snippets above will ever print Hello.
What is the correct way to perform a series of expensive blocking calls asynchronously?
There is a multithreaded version of Kotlin coroutines for Kotlin/Native, which currently lives on a separate branch: native-mt.
You could use Dispatchers.Default to offload tasks to a background thread.
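For illustration, a minimal sketch of that suggestion (not from the original answer), assuming the native-mt coroutines artifact or a recent kotlinx.coroutines release with the new Kotlin/Native memory model, and a POSIX target for the sleep call:
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.withContext

fun main() = runBlocking {
    launch {
        // Offload the blocking call to Dispatchers.Default, which is backed
        // by background workers, so the main thread is not blocked.
        withContext(Dispatchers.Default) {
            platform.posix.sleep(100u) // the long-running blocking call
        }
    }
    launch { println("Hello") } // prints right away
}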
So, the problem is that I used the consume method, which blocks the main thread. Instead, you need to execute the worker and then manually check whether it has finished:
val job = Worker.start().execute(TransferMode.SAFE, { }) { platform.posix._sleep(100000) }
...
if (job.state == FutureState.Computed)
    job.result

TornadoFx FXTask OutOfMemoryError

So I have an interesting piece of code, and I run into an OutOfMemoryError.
My problem is that inside my searchThread I am creating new threads which search again. This obviously creates an OutOfMemoryError, but I wanted to solve it using TornadoFX code only, without any luck.
searchThread = runAsync {
    while (!searchThread.isCancelled) {
        runAsync {
            // Searching for sth
        } ui {
            // Updating UI
        }
    }
}
How can I tell whether the runAsync inside my search thread is still running, so I can skip creating a new thread?
What you are doing here is creating new tasks in a tight loop, so obviously you'll run out of memory. The call to the nested runAsync does not wait; the loop just starts another one until the condition is false.
Remove the inner runAsync and just do whatever you want to do, then call runLater if you want to update something on the UI thread.
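A rough sketch of that suggestion, reusing the asker's searchThread property; performSearch() and showResult() are hypothetical placeholders for the actual search and UI code:
searchThread = runAsync {
    while (!isCancelled) {                  // isCancelled comes from the FXTask receiver
        val result = performSearch()        // do the actual (blocking) search work here
        runLater { showResult(result) }     // hop to the JavaFX thread for UI updates
    }
}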
I think I understand your problem. Your goal is to have only one search thread that doesn't get started again if it is already running. Like Edvin said, launching async tasks in a loop is really, really bad. Not to mention, the nested threads might not even have a kill condition. A simple solution like this would probably make more sense:
var searchTask: Task<YourReturnType>? = null

private fun search() {
    if (searchTask?.isRunning != true) {
        searchTask = runAsync {
            // Do your search thread things
        } ui { result ->
            // do things with your UI based on your result
        }
    }
}
Similarly, if you want to have an old running search thread be replaced by a new one instead, you could try something like:
var searchTask: Task<YourReturnType>? = null

private fun search() {
    if (searchTask?.isRunning == true) {
        searchTask?.cancel()
        // You should probably do something to check if the cancel succeeded.
    }
    searchTask = runAsync {
        // Do your search thread things
    } ui { result ->
        // do things with your UI based on your result
    }
}