I'm trying to create an object that can execute tasks sequentially in its own thread, as if it were a queue.
The following sample is just for demonstrating my setup and may be completely wrong.
class CoroutinesTest {
fun a() {
GlobalScope.launch {
println("a started")
delay(1000)
println("a completed")
}
}
fun b() {
GlobalScope.launch {
println("b started")
delay(2000)
println("b completed")
}
}
fun complex() {
a()
b()
}
}
fun main() {
runBlocking {
val coroutinesTest = CoroutinesTest()
coroutinesTest.complex()
delay(10000)
}
}
For now, this code prints the following:
a started
b started
a completed
b completed
which means a and b run in parallel. Methods a, b and complex can be called from different threads, and the complex method should support this as well. I need a mechanism that allows only one task to execute at a time, so that I get the following output:
a started
a completed
b started
b completed
I did some research and think that an actor with a Channel can do what is needed, but actor is currently marked as obsolete (issue #87). I don't like the idea of using an API that is subject to change, so I would like to do this in a more conventional way.
TL;DR There are a few options for running coroutines sequentially.
Use a Channel to make them run one at a time in the order called
Use a Mutex to make them run one at a time but without a guarantee of order
Use a Flow (as described in the answer below by BigSt) to make them run one at a time in the order called; however, make sure the flow buffer is large enough, or jobs can be lost when the number of jobs "in flight" exceeds the buffer size
If the desired sequence is always the same, put the actual work into suspend functions and call the sequence from within the same coroutine scope to make them run one at a time in the order prescribed by the code
Channel
One way to control execution order is to use a Channel, where lazily started coroutine jobs are sent to the channel to be run in sequence. Unlike the Mutex, the Channel guarantees that the jobs run in the order they are submitted.
class CoroutinesTest {
private val channel = Channel<Job>(capacity = Channel.UNLIMITED).apply {
GlobalScope.launch {
consumeEach { it.join() }
}
}
fun a() {
channel.trySend(
GlobalScope.launch(start = CoroutineStart.LAZY) {
println("a started")
delay(1000)
println("a completed")
}
)
}
fun b() {
channel.trySend(
GlobalScope.launch(start = CoroutineStart.LAZY) {
println("b started")
delay(2000)
println("b completed")
}
)
}
fun complex() {
// add two separate jobs to the channel,
// this will run a, then b
a()
b()
}
}
Calling complex always produces:
a started
a completed
b started
b completed
Mutex
You can keep jobs from running at the same time with a Mutex and its withLock call. However, the execution order is not guaranteed if you make several calls in short succession. For example:
class CoroutinesTest {
private val lock = Mutex()
fun a() {
GlobalScope.launch {
lock.withLock {
println("a started")
delay(1000)
println("a completed")
}
}
}
fun b() {
GlobalScope.launch {
lock.withLock {
println("b started")
delay(2000)
println("b completed")
}
}
}
fun complex() {
a()
b()
}
}
Calling complex can produce:
a started
a completed
b started
b completed
or:
b started
b completed
a started
a completed
Suspend Functions
If you must always run a then b, you can make both of them suspend functions and call them from within a single scope (only allowing the complex call, not individual a and b calls). In this case, the complex call does guarantee that a runs and completes before b starts.
class CoroutinesTest {
suspend fun aImpl() {
println("a started")
delay(1000)
println("a completed")
}
suspend fun bImpl() {
println("b started")
delay(2000)
println("b completed")
}
fun complex() {
GlobalScope.launch {
aImpl()
bImpl()
}
}
}
Calling complex always produces:
a started
a completed
b started
b completed
Old question but here's a simpler approach anyway.
Change a() to return the coroutine's Job:
fun a() = GlobalScope.launch {
println("a started")
delay(1000)
println("a completed")
}
Then you can invoke a() / b() like this:
a().invokeOnCompletion { b() }
This way b() won't be triggered before a() terminates.
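For example, complex() could chain them like this (assuming b() is also changed to return its Job):
fun complex() {
    a().invokeOnCompletion { b() }
}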
Alternatively you can use join:
fun complex() {
GlobalScope.launch {
a().join()
b()
}
}
Flows are sequential by nature; using a MutableSharedFlow, this can be achieved as follows:
class CoroutinesTest {
// make sure replay (so that jobs emitted before sharedFlow is collected are not lost)
// and extraBufferCapacity are large enough to handle all the jobs;
// if some jobs are lost, try increasing either of the values.
private val sharedFlow = MutableSharedFlow<Job>(replay = 2, extraBufferCapacity = 2)
init {
sharedFlow.onEach { job ->
job.join()
}.launchIn(GlobalScope)
}
fun a() {
// emit job to the Flow to execute sequentially
sharedFlow.tryEmit(
// using CoroutineStart.LAZY here to start a coroutine when join() is called
GlobalScope.launch(start = CoroutineStart.LAZY) {
println("a started")
delay(1000)
println("a completed")
}
)
}
fun b() {
// emit job to the Flow to execute sequentially
sharedFlow.tryEmit(
// using CoroutineStart.LAZY here to start a coroutine when join() is called
GlobalScope.launch(start = CoroutineStart.LAZY) {
println("b started")
delay(2000)
println("b completed")
}
)
}
fun complex() {
a()
b()
}
}
Note: using GlobalScope is not recommended; it violates the principle of structured concurrency.
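For illustration, here is a minimal sketch (my own, not part of the answer above; the class name and the injected-scope parameter are assumptions) of the same idea without GlobalScope, where the caller supplies the CoroutineScope and therefore owns the worker's lifetime:
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*
class SequentialRunner(private val scope: CoroutineScope) {
    // the same buffering caveats apply as in the answer above
    private val sharedFlow = MutableSharedFlow<Job>(replay = 16, extraBufferCapacity = 16)
    init {
        // collect in the injected scope, so the caller controls when the worker stops
        sharedFlow.onEach { it.join() }.launchIn(scope)
    }
    fun a() {
        sharedFlow.tryEmit(
            scope.launch(start = CoroutineStart.LAZY) {
                println("a started")
                delay(1000)
                println("a completed")
            }
        )
    }
}
The caller can construct it with, for example, CoroutineScope(SupervisorJob() + Dispatchers.Default) and cancel that scope when the queue is no longer needed.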
Related
I have the code below:
interface Listener {
fun onGetData(data: Int)
fun onClose()
}
class MyEmitter {
var listener: Listener? = null
fun sendData(data: Int) = listener?.onGetData(data)
fun close() = listener?.onClose()
}
fun handleInput(myEmitter: MyEmitter) = channelFlow {
myEmitter.listener = object:Listener {
override fun onGetData(data: Int) { trySend(data) }
override fun onClose() { close() }
}
}
fun main(): Unit = runBlocking {
val myEmitter = MyEmitter()
handleInput(myEmitter).collect {
println(it)
}
myEmitter.sendData(1)
myEmitter.sendData(2)
myEmitter.close()
}
Whenever I send data, e.g. myEmitter.sendData(1), it does get into trySend(data), but the result is closed.
Why is it closed? How can I keep it open?
I think it's not documented terribly clearly, but just like the flow builder, the channelFlow's Flow is considered complete once the suspend lambda returns. Since all you are doing is setting a listener and not waiting around, it returns almost immediately. When a channelFlow is completed, its channel is also closed.
If you want your channelFlow to stay open until the Flow is canceled, call awaitClose() at the end. This function suspends until the channel is closed, so it will hold your Flow open until it's canceled or the event in your listener closes the Channel.
fun handleInput(myEmitter: MyEmitter) = channelFlow {
myEmitter.listener = object:Listener {
override fun onGetData(data: Int) { trySend(data) }
override fun onClose() { close() }
}
awaitClose()
}
If you are familiar with callbackFlow, it is a specialized version of channelFlow that enforces the awaitClose() call: it is meant for wrapping a listener, so there is no case where you would not want to await. awaitClose is also where you can deregister any listener you registered inside the flow builder.
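As a minimal sketch (my own, reusing the MyEmitter and Listener types from the question), the callbackFlow version might look like this:
import kotlinx.coroutines.channels.awaitClose
import kotlinx.coroutines.flow.callbackFlow
fun handleInputCallback(myEmitter: MyEmitter) = callbackFlow {
    myEmitter.listener = object : Listener {
        override fun onGetData(data: Int) { trySend(data) }
        override fun onClose() { close() }
    }
    // mandatory in callbackFlow: suspend until the flow is cancelled or closed,
    // and deregister the listener when that happens
    awaitClose { myEmitter.listener = null }
}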
To get this working, I did 3 things
Add awaitClose to ensure the flow is not terminated
Move the collection into a launch, so that it does not block the rest of the main() function
Add a little delay before myEmitter.sendData(1), to ensure the launched collector is triggered before the external sendData calls
Full changed code as below
interface Listener {
fun onGetData(data: Int)
fun onClose()
}
class MyEmitter {
var listener: Listener? = null
fun sendData(data: Int) = listener?.onGetData(data)
fun close() = listener?.onClose()
}
fun handleInput(myEmitter: MyEmitter) = channelFlow {
myEmitter.listener = object:Listener {
override fun onGetData(data: Int) { trySend(data) }
override fun onClose() { close() }
}
awaitClose { myEmitter.listener = null } // Need awaitClose to keep the flow alive
}
fun main(): Unit = runBlocking {
val myEmitter = MyEmitter()
launch { // Collect inside launch so that awaitClose doesn't block the rest of main()
handleInput(myEmitter).collect {
println(it)
}
}
delay(100) // Add some delay to get this triggered after the launch run
myEmitter.sendData(1)
myEmitter.sendData(2)
myEmitter.close()
}
I think the 3rd step is a bit of a hack.
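One way to avoid the arbitrary delay (a sketch of my own, not part of the answer) is to suspend until the collector has actually registered its listener:
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*
fun main(): Unit = runBlocking {
    val myEmitter = MyEmitter()
    launch {
        handleInput(myEmitter).collect { println(it) }
    }
    // instead of guessing a delay, yield until the collector has installed the listener
    while (myEmitter.listener == null) yield()
    myEmitter.sendData(1)
    myEmitter.sendData(2)
    myEmitter.close()
}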
In Coroutines, when I want to guard a block of code against cancellation, I should add NonCancellable to the Context:
@Test
fun coroutineCancellation_NonCancellable() {
runBlocking {
val scopeJob = Job()
val scope = CoroutineScope(scopeJob + Dispatchers.Default + CoroutineName("outer scope"))
val launchJob = scope.launch(CoroutineName("cancelled coroutine")) {
launch (CoroutineName("nested coroutine")) {
withContext(NonCancellable) {
delay(1000)
}
}
}
scope.launch {
delay(100)
launchJob.cancel()
}
launchJob.join()
}
}
The above unit test will take ~1.1sec to execute, even though the long-running Coroutine is cancelled after just 100ms. That's the effect of NonCancellable and I understand this point.
However, the below code seems to be functionally equivalent:
@Test
fun coroutineCancellation_newJobInsteadOfNonCancellable() {
runBlocking {
val scopeJob = Job()
val scope = CoroutineScope(scopeJob + Dispatchers.Default + CoroutineName("outer scope"))
val launchJob = scope.launch(CoroutineName("cancelled coroutine")) {
launch (CoroutineName("nested coroutine")) {
withContext(Job()) {
delay(1000)
}
}
}
scope.launch {
delay(100)
launchJob.cancel()
}
launchJob.join()
}
}
I tried to find any functional differences between these two approaches in terms of cancellation, error handling and general functionality, but so far I have found none. Currently, it looks like NonCancellable is in the framework just for readability.
Now, readability is important, so I'd prefer to use NonCancellable in code. However, its documentation makes it sound like it is, in fact, somehow different from a regular Job, so I want to understand this aspect in detail.
So, my question is: is there any functional difference between these two approaches (i.e. how can I modify these unit tests so that their outcomes differ)?
Edit:
Following Louis's answer I tested the "making cleanup non-cancellable" scenario, and in this case Job() also behaves analogously to NonCancellable. In the example below, the unit test runs for more than 1 sec, even though the coroutine is cancelled after just 200ms:
@Test
fun coroutineCancellation_jobInsteadOfNonCancellableInCleanup() {
runBlocking {
val scope = CoroutineScope(Job() + Dispatchers.Default + CoroutineName("outer scope"))
val launchJob = scope.launch(CoroutineName("test coroutine")) {
try {
delay(100)
throw java.lang.RuntimeException()
} catch (e: Exception) {
withContext(Job()) {
cleanup()
}
}
}
scope.launch {
delay(200)
launchJob.cancel()
}
launchJob.join()
}
}
private suspend fun cleanup() {
delay(1000)
}
NonCancellable doesn't respond to cancellation, while Job() does.
NonCancellable implements Job in a custom way and does not behave like a regular Job(), which uses the normal, cancellable implementation.
cancel() on NonCancellable is a no-op, unlike on Job(), where it cancels any child coroutines and where a crash in a child coroutine propagates to that parent Job.
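To make that difference observable (a sketch of my own, not from the answer): keep a reference to the Job() you pass to withContext and cancel it directly. The plain Job() responds to the cancellation, while nothing you cancel has any effect on a NonCancellable block:
import kotlinx.coroutines.*
fun main() = runBlocking {
    val plainJob = Job()
    val worker1 = launch(Dispatchers.Default) {
        withContext(plainJob) {
            delay(1000)
            println("withContext(Job()) finished") // never printed
        }
    }
    delay(100)
    plainJob.cancel() // the block's delay throws CancellationException
    worker1.join()
    val worker2 = launch(Dispatchers.Default) {
        withContext(NonCancellable) {
            delay(1000)
            println("withContext(NonCancellable) finished") // always printed
        }
    }
    delay(100)
    worker2.cancel() // has no effect on the NonCancellable block
    worker2.join()   // still waits for the NonCancellable block to finish
}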
If I have a List<A> and a function suspend (A) -> B, how can I apply this function on the list in parallel?
coroutineScope {
list.map {
async {
process(it)
}
} // List<Deferred<B>>
.awaitAll() // List<B>
}
suspend fun process(a: A): B {
...
}
This assumes you are already in a suspend context. Otherwise, you need to launch a new coroutine on the appropriate scope instead of using the coroutineScope scoping function.
You can create an extension function on CoroutineScope that goes through each element of the list and launches a coroutine for each one. This way the elements of the list are processed in parallel. A code snippet:
fun CoroutineScope.processListInParallel(list: List<A>): List<Deferred<B>> = list.map {
async { // launch a coroutine
processA(it)
}
}
GlobalScope.launch {
val list = listOf(A("name1"), A("name2"), A("name3"))
val deferredList = processListInParallel(list)
val results: List<B> = deferredList.awaitAll() // wait for all items to be processed
}
suspend fun processA(a: A): B {
delay(1000) // emulate suspension
return B("Result ${a.name}")
}
data class A(val name: String) {}
data class B(val name: String) {}
Note: GlobalScope is used here only as an example; using it is highly discouraged, and application code should usually use an application-defined CoroutineScope.
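For instance, a minimal sketch (the appScope shown here is an assumption, not part of the answer) of calling it from an application-owned scope:
import kotlinx.coroutines.*
val appScope = CoroutineScope(SupervisorJob() + Dispatchers.Default)
fun runExample() {
    appScope.launch {
        val results = processListInParallel(listOf(A("name1"), A("name2"))).awaitAll()
        println(results) // [B(name=Result name1), B(name=Result name2)]
    }
}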
I'm trying to grasp coroutines. I expected this code not to print anything. However, it prints "Work done", so the cancellation didn't do anything. How is that?
suspend fun foo() = coroutineScope {
launch { doSomeWork() }
}
suspend fun doSomeWork() {
delay(10000)
println("Work done")
}
suspend fun main() {
val fooResult = foo()
fooResult.cancel()
}
I finally got it. The main coroutine suspends on the coroutineScope call until the launched child completes, so foo() only returns once the work is already done and there is nothing left to cancel.
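If the goal is for the cancel() call to actually have an effect, one option (a sketch of my own, not from the answer above) is to make foo an extension on CoroutineScope, so it launches into the caller's scope and returns the Job without waiting for it:
import kotlinx.coroutines.*
fun CoroutineScope.fooAsync(): Job = launch {
    delay(10000)
    println("Work done")
}
fun main() = runBlocking {
    val job = fooAsync()
    job.cancel() // cancels before the delay completes, so nothing is printed
}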
How can I launch a coroutine from a suspend function and have it use the current Scope? (so that the Scope doesn't end until the launched coroutine also ends)
I'd like to write something like the following –
import kotlinx.coroutines.*
fun main() = runBlocking { // this: CoroutineScope
go()
}
suspend fun go() {
launch {
println("go!")
}
}
But this has a syntax error: "Unresolved Reference: launch". It seems launch must be run in one of the following ways –
GlobalScope.launch {
println("Go!")
}
Or
runBlocking {
launch {
println("Go!")
}
}
Or
withContext(Dispatchers.Default) {
launch {
println("Go!")
}
}
Or
coroutineScope {
launch {
println("Go!")
}
}
None of these alternatives does what I need. Either the code "blocks" instead of "spawning", or it spawns but the parent scope won't wait for its completion before the parent scope itself ends.
I need it to "spawn" (launch) in the current parent coroutine scope, and that parent scope should wait for the spawned coroutine to finish before it ends itself.
I expected that a simple launch inside a suspend fun would be valid and use its parent scope.
I'm using Kotlin 1.3 and kotlinx-coroutines-core:1.0.1.
You should make the function go an extension function of CoroutineScope:
fun main() = runBlocking {
go()
go()
go()
println("End")
}
fun CoroutineScope.go() = launch {
println("go!")
}
Read this article to understand why it is not a good idea to start other coroutines in a suspend function without creating a new coroutineScope {}.
The convention is: in a suspend function, call other suspend functions and create a new coroutineScope if you need to start parallel coroutines. The result is that the function only returns when all newly started coroutines have finished (structured concurrency).
On the other hand, if you need to start new coroutines without knowing the scope, you create an extension function on CoroutineScope, which itself is not suspendable. Now the caller can decide which scope should be used.
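For completeness, a small sketch of the first convention: a suspend function that needs parallel coroutines opens its own coroutineScope and therefore returns only after all of them have finished:
suspend fun goAll() = coroutineScope {
    launch { println("first go!") }
    launch { println("second go!") }
    // the coroutineScope call does not return until both launched children have completed
}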
I believe I found a solution, which is with(CoroutineScope(coroutineContext)). The following example illustrates this –
import kotlinx.coroutines.*
fun main() = runBlocking {
go()
go()
go()
println("End")
}
suspend fun go() {
// GlobalScope.launch { // spawns, but doesn't use parent scope
// runBlocking { // blocks
// withContext(Dispatchers.Default) { // blocks
// coroutineScope { // blocks
with(CoroutineScope(coroutineContext)) { // spawns and uses parent scope!
launch {
delay(2000L)
println("Go!")
}
}
}
However, Rene posted a much better solution above.
Say you are dealing with some RxJava Observables and it isn't the time to refactor them; you can get hold of a suspend function's CoroutineScope this way:
suspend fun yourExtraordinarySuspendFunction() = coroutineScope {
val innerScope = this // i.e. coroutineScope
legacyRxJavaUggh.subscribe { somePayloadFromRxJava ->
innerScope.launch {
// TODO your extraordinary work
}
}
}