This is what I thought: when using coroutines, you keep piling up async operations, and once the synchronous work is done they are executed in FIFO order. But that's not always true.
In this example I get what I expected:
fun main() = runBlocking {
    launch {
        println("1")
    }
    launch {
        println("2")
    }
    println("0")
}
Also here (with nested launch):
fun main() = runBlocking {
    launch {
        println("1")
    }
    launch {
        launch {
            println("3")
        }
        println("2")
    }
    println("0")
}
Now, in this example with a scope builder that creates another "pile" (not the real term), the order changes, but you still get what you'd expect:
fun main() = runBlocking {
    launch {
        println("2")
    }
    // replacing launch
    coroutineScope {
        println("0")
    }
    println("1")
}
Finally, the reason for this question: example 2 with a scope builder:
fun main() = runBlocking {
    launch {
        println("3")
    }
    coroutineScope {
        launch {
            println("1")
        }
        println("0")
    }
    println("2")
}
I get this:
0
3
1
2
Why?
Was my assumption wrong, and is that not how coroutines work?
If so, how should I determine the correct order when coding?
Edit: I've tried running the same code on different machines and platforms but always got the same result; I also tried more complicated nesting to confirm that the results don't vary.
Digging through the documentation, I found that coroutines are just a code transformation (as I initially thought).
Remember that even if they like to call them 'light-weight' threads, they run in a single 'real' thread (note: without newSingleThreadContext).
Thus I chose to believe that execution order is pre-established at compile time and not decided at runtime.
After all, I still can't anticipate the order, and that's what I want.
Don't assume coroutines will run in a specific order; the runtime will decide what's best to run when and in what order. What may help is the kotlinx.coroutines documentation: it does a great job of explaining how coroutines work and also provides some handy abstractions that make managing them more sensible. I personally recommend checking out channels, jobs, and Deferred (async/await).
For example, if I wanted things done in a certain order by number, I'd use channels to ensure things arrived in the order I wanted.
runBlocking {
    val channel = Channel<Int>()
    launch {
        for (x in 0..5) channel.send(x * x)
        channel.close()
    }
    for (msg in channel) {
        // Pretend we're doing some work with channel results
        println("Message: $msg")
    }
}
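The answer also mentions jobs; if all you need is a fixed order, a minimal sketch of my own (not from the answer) using Job.join() would be:
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val first = launch { println("first") }
    first.join() // suspend until the first coroutine completes
    launch { println("second") } // only launched after "first" has been printed
}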
Hopefully that gives you more context on what coroutines are and what they're good for.
Related
I have a Kotlin Backend/server API using Ktor, and inside a certain endpoint's service logic I need to concurrently get details for a list of ids and then return it all to the client with the 200 response.
The way I wanted to do it is by using async{} and awaitAll()
However, I can't understand whether I should use runBlocking or GlobalScope.
What is really the difference here?
fun getDetails(): List<Detail> {
    val fetched: MutableList<Detail> = mutableListOf()
    GlobalScope.launch {   // --> Option 1
    runBlocking {          // ---> Option 2
    Dispatchers.IO         // --> Option 3 (or any other dispatcher ...)
        myIds.map { id ->
            async {
                val providerDetails = getDetails(id)
                fetched += providerDetails
            }
        }.awaitAll()
    }
    return fetched
}
launch starts a coroutine that runs in parallel with your current code, so fetched would still be empty by the time your getDetails() function returns. The coroutine will continue running and mutating the List that you have passed out of the function while the code that retrieved the list already has the reference back and will be using it, so there's a pretty good chance of triggering a ConcurrentModificationException. Basically, this is not a viable solution at all.
runBlocking runs a coroutine while blocking the thread that called it. The coroutine will be completely finished before the return fetched line, so this will work if you are OK with blocking the calling thread.
Specifying a Dispatcher isn't an alternative to launch or runBlocking. It is an argument that you can add to either to determine the thread pool used for the coroutine and its children. Since you are doing IO and parallel work, you should probably be using runBlocking(Dispatchers.IO).
Your code can be simplified to avoid the extra, unnecessary mutable list:
fun getDetails(): List<Detail> = runBlocking(Dispatchers.IO) {
    myIds.map { id ->
        async {
            getDetails(id)
        }
    }.awaitAll()
}
Note that this function will rethrow any exceptions thrown by getDetails().
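If you would rather not let that exception propagate, a hypothetical call site (my sketch, not from the answer) could catch it and fall back:
val details: List<Detail> = try {
    getDetails()
} catch (e: Exception) {
    // log or translate the error as appropriate for your endpoint
    emptyList()
}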
If your project uses coroutines more generally, you probably have higher level coroutines running, in which case this should probably be a suspend function (non-blocking) instead:
suspend fun getDetails(): List<Detail> = withContext(Dispatchers.IO) {
    myIds.map { id ->
        async {
            getDetails(id)
        }
    }.awaitAll()
}
The Wear OS tiles example is great. This is not so much an issue, but how would one start the background media service that plays the songs selected in the primary app? Whenever I try to start the service, I get the following error. There is no UI thread to reference, and the documentation only has two methods for onClick: LoadAction and LaunchAction.
override fun onTileRequest(request: TileRequest) = serviceScope.future {
    when (request.state!!.lastClickableId) {
        "play" -> playClicked()
    }
    // ...
}

suspend fun playClicked() {
    try {
        // Convert the asynchronous callback to a suspending coroutine
        suspendCancellableCoroutine<Unit> { cont ->
            mMediaBrowserCompat = MediaBrowserCompat(
                applicationContext, ComponentName(applicationContext, MusicService::class.java),
                mMediaBrowserCompatConnectionCallback, null
            )
            mMediaBrowserCompat!!.connect()
        }
    } catch (e: Exception) {
        e.printStackTrace()
    } finally {
        mMediaBrowserCompat!!.disconnect()
    }
}
ERROR
java.lang.RuntimeException: Can't create handler inside thread Thread[DefaultDispatcher-worker-1,5,main] that has not called Looper.prepare()
serviceScope is running on Dispatchers.IO, so you should use withContext(Dispatchers.Main) when making any calls to MediaBrowserCompat.
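A rough sketch of that suggestion, reusing the names from the question (the error above comes from MediaBrowserCompat creating a Handler, which requires a Looper thread):
suspend fun playClicked() {
    // Hop to the main dispatcher before touching MediaBrowserCompat,
    // since its constructor creates a Handler and needs a Looper thread.
    withContext(Dispatchers.Main) {
        mMediaBrowserCompat = MediaBrowserCompat(
            applicationContext,
            ComponentName(applicationContext, MusicService::class.java),
            mMediaBrowserCompatConnectionCallback,
            null
        )
        mMediaBrowserCompat!!.connect()
    }
}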
Responding to the answer above: serviceScope.future creates a CoroutineScope, and the future returned to the service will wait for all child jobs to complete.
If you want it to run detached from the onTileRequest call, you can do the following, which launches a new job in the application GlobalScope and lets onTileRequest return immediately:
"play" -> GlobalScope.launch {
}
The benefit of this is that you don't throw a third concurrency model into the mix: you already have ListenableFuture and coroutines, and now a Handler as well. ListenableFuture and coroutines are meant to keep you from having to resort to a third concurrency option.
Thanks Yuri, that worked, but it ended up blocking the UI thread. The solution that works is below:
fun playClicked() {
    mainHandler.post(playSong)
}

private val playSong: Runnable = object : Runnable {
    @RequiresApi(Build.VERSION_CODES.N)
    override fun run() {
        mMediaBrowserCompat = MediaBrowserCompat(
            applicationContext, ComponentName(applicationContext, MusicaWearService::class.java),
            mMediaBrowserCompatConnectionCallback, null
        )
        mMediaBrowserCompat!!.connect()
    }
}
Cool Yuri, the below worked and I think it's more efficient:
fun playClicked() = GlobalScope.launch(Dispatchers.Main) {
    mMediaBrowserCompat = MediaBrowserCompat(
        applicationContext, ComponentName(applicationContext, MusicaWearService::class.java),
        mMediaBrowserCompatConnectionCallback, null
    )
    mMediaBrowserCompat!!.connect()
}
I tried reading the docs, but it just doesn't make sense to me.
I need to make three calls to an external webservice and log the result of each after it returns. Each webservice call is independent of the responses of the others. Done synchronously, it looks like this:
fun makeWebserviceCalls() {
    callOne()
    callTwo()
    callThree()
}

fun callOne() {
    // make webservice call
    // log result
}

fun callTwo() {
    // make webservice call
    // log result
}

fun callThree() {
    // make webservice call
    // log result
}
Now I just need to do that in parallel. It shouldn't be that hard, but it's just not making sense to me.
I've tried:
fun makeWebserviceCalls() {
    callOne()
    callTwo()
    callThree()
}

fun callOne() {
    launch {
        // make webservice call
        // log result
    }
}
but that doesn't compile.
I've tried:
fun makeWebserviceCalls() {
    runBlocking {
        callOne()
        callTwo()
        callThree()
    }
}

suspend fun callOne() {
    launch {
        // make webservice call
        // log result
    }
}
but that doesn't compile.
I've tried:
fun makeWebserviceCalls() {
    runBlocking {
        callOne()
        callTwo()
        callThree()
    }
}

suspend fun callOne() {
    withContext(Dispatchers.IO) {
        // make webservice call
        // log result
    }
}
but this can't be right, because withContext is used when you need a result returned, which I don't.
What's the right way to do what I'm trying to do?
The primary goal of coroutines is to do efficient asynchronous operations, which is different from doing concurrent operations. Here's an example from the official docs:
val client = HttpClient()
// Running in the main thread, start a `get` call
client.get<String>("https://example.com/some/rest/call")
// The get call will suspend and let other work happen in the main thread, then resume when the get call completes
All of the above happens on the main thread; no separate threads are spawned. A future version of coroutines will provide out-of-the-box support for concurrent coroutines on multiple threads, but the main branch has no support for that yet.
A suspend function can be executed on a different thread:
suspend fun differentThread() = withContext(Dispatchers.Default) {
    println("Different thread")
}
Of course, if you call the suspend function from a regular function, you’ll need to do it within runBlocking.
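For instance, a minimal sketch of bridging from a regular main function into the suspend function above:
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    differentThread() // calls the suspend function defined above
}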
Other alternatives are proposed here: https://kotlinlang.org/docs/mobile/concurrency-and-coroutines.html#alternatives-to-kotlinx-coroutines. For simple use cases, CoroutineWorker is a good option.
For more details, see the official docs: https://kotlinlang.org/docs/mobile/concurrency-and-coroutines.html
If you need your makeWebserviceCalls() to wait until all requests finish:
runBlocking(Dispatchers.IO) {
    launch { callOne() }
    launch { callTwo() }
    launch { callThree() }
}
If you need to start them in the background and return immediately:
GlobalScope.launch(Dispatchers.IO) {
    launch { callOne() }
    launch { callTwo() }
    launch { callThree() }
}
But you need to understand that this is not the usual way of using coroutines. Normally, your makeWebserviceCalls() function and all the call* functions would be suspend functions, which makes them more coroutine-friendly.
You are starting parallel execution from outside of a coroutine context, and the parallel blocks of code are blocking, so in such a case I'm not sure it makes sense to use coroutines at all. You could just start three background threads; it would be effectively almost the same.
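If you do refactor toward suspend functions as suggested above, a minimal sketch (assuming the call* functions are themselves made suspend) could look like this:
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.launch

// coroutineScope waits for all launched children, so the caller resumes
// only after callOne, callTwo, and callThree have all finished.
suspend fun makeWebserviceCalls() = coroutineScope {
    launch { callOne() }
    launch { callTwo() }
    launch { callThree() }
}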
I'm working with livedata. I want to run some arbitrary code in IO and then once that has completed, run some arbitrary code in the Main thread.
In JavaScript, you can accomplish something like this by chaining promises together. I know Kotlin is different, but that's at least a framework I'm coming from that I understand.
I have a function that will sometimes be called from Main and sometimes from IO, but it requires no special IO features itself. From within class VM: ViewModel():
private var mState = MyState() // data class w/ property `a`
val myLiveData: MutableLiveData<MyState> = MutableLiveData(mState)

fun setVal(a: MyVal) {
    mState = mState.copy(a = a)
    myLiveData.value = mState
}

fun buttonClickHandler(a: MyVal) {
    setVal(a) // Can execute in Main
}

fun getValFromDb() {
    viewModelScope.launch(Dispatchers.IO) {
        val a: MyVal = fetchFromDb()
        setVal(a) // Error! Cannot call setValue from background thread!
    }
}
Seems to me the obvious way would be to execute val a = fetchFromDb() from IO and then pull setVal(a) out of that block and into Main.
Is there a way to accomplish this? I don't see a conceptual reason why this feature could not exist. Is there some idea like
doAsyncThatReturnsValue(Dispatchers.IO) { fetchFromDb() }
    .then(previousBlockReturnVal, Dispatchers.Main) { doInMain() }
that could be run in a ViewModel?
Please substitute "coroutine" for "thread" wherever appropriate above. :)
Launch is fine. You just have to switch around the dispatchers and use withContext:
fun getValFromDb() {
    // Run this coroutine on the main thread
    viewModelScope.launch(Dispatchers.Main) {
        // Obtain the result by running the given block on an IO thread.
        // This suspends the coroutine until the result is ready (without blocking the main thread).
        val a: MyVal = withContext(Dispatchers.IO) { fetchFromDb() }
        // Executed on the main thread
        setVal(a)
    }
}
I have been reading the Kotlin docs, and if I understood correctly, the two Kotlin functions work as follows:
withContext(context): switches the context of the current coroutine; when the given block completes, the coroutine switches back to the previous context.
async(context): starts a new coroutine in the given context, and if we call .await() on the returned Deferred task, it will suspend the calling coroutine and resume when the block executing inside the spawned coroutine returns.
Now for the following two versions of code:
Version 1:
launch {
    block1()
    val returned = async(context) {
        block2()
    }.await()
    block3()
}
Version 2:
launch {
    block1()
    val returned = withContext(context) {
        block2()
    }
    block3()
}
In both versions, block1() and block3() execute in the default context (CommonPool?), whereas block2() executes in the given context.
The overall execution is synchronous, in block1() -> block2() -> block3() order.
The only difference I see is that version 1 creates another coroutine, whereas version 2 executes only one coroutine while switching context.
My questions are:
Isn't it always better to use withContext rather than async-await, since it is functionally similar but doesn't create another coroutine? Large numbers of coroutines, although lightweight, could still be a problem in demanding applications.
Is there a case where async-await is preferable to withContext?
Update:
Kotlin 1.2.50 now has a code inspection where it can convert async(ctx) { }.await() to withContext(ctx) { }.
Large number of coroutines, though lightweight, could still be a problem in demanding applications
I'd like to dispel this myth of "too many coroutines" being a problem by quantifying their actual cost.
First, we should disentangle the coroutine itself from the coroutine context to which it is attached. This is how you create just a coroutine with minimum overhead:
GlobalScope.launch(Dispatchers.Unconfined) {
    suspendCoroutine<Unit> {
        continuations.add(it)
    }
}
The value of this expression is a Job holding a suspended coroutine. To retain the continuation, we added it to a list in the wider scope.
I benchmarked this code and concluded that it allocates 140 bytes and takes 100 nanoseconds to complete. So that's how lightweight a coroutine is.
For reproducibility, this is the code I used:
fun measureMemoryOfLaunch() {
    val continuations = ContinuationList()
    val jobs = (1..10_000).mapTo(JobList()) {
        GlobalScope.launch(Dispatchers.Unconfined) {
            suspendCoroutine<Unit> {
                continuations.add(it)
            }
        }
    }
    (1..500).forEach {
        Thread.sleep(1000)
        println(it)
    }
    println(jobs.onEach { it.cancel() }.filter { it.isActive })
}

class JobList : ArrayList<Job>()
class ContinuationList : ArrayList<Continuation<Unit>>()
This code starts a bunch of coroutines and then sleeps so you have time to analyze the heap with a monitoring tool like VisualVM. I created the specialized classes JobList and ContinuationList because this makes it easier to analyze the heap dump.
To get a more complete story, I used the code below to also measure the cost of withContext() and async-await:
import kotlinx.coroutines.*
import java.util.concurrent.Executors
import kotlin.coroutines.suspendCoroutine
import kotlin.system.measureTimeMillis

const val JOBS_PER_BATCH = 100_000

var blackHoleCount = 0
val threadPool = Executors.newSingleThreadExecutor()!!
val ThreadPool = threadPool.asCoroutineDispatcher()

fun main(args: Array<String>) {
    try {
        measure("just launch", justLaunch)
        measure("launch and withContext", launchAndWithContext)
        measure("launch and async", launchAndAsync)
        println("Black hole value: $blackHoleCount")
    } finally {
        threadPool.shutdown()
    }
}

fun measure(name: String, block: (Int) -> Job) {
    print("Measuring $name, warmup ")
    (1..1_000_000).forEach { block(it).cancel() }
    println("done.")
    System.gc()
    System.gc()
    val tookOnAverage = (1..20).map { _ ->
        System.gc()
        System.gc()
        var jobs: List<Job> = emptyList()
        measureTimeMillis {
            jobs = (1..JOBS_PER_BATCH).map(block)
        }.also { _ ->
            blackHoleCount += jobs.onEach { it.cancel() }.count()
        }
    }.average()
    println("$name took ${tookOnAverage * 1_000_000 / JOBS_PER_BATCH} nanoseconds")
}

fun measureMemory(name: String, block: (Int) -> Job) {
    println(name)
    val jobs = (1..JOBS_PER_BATCH).map(block)
    (1..500).forEach {
        Thread.sleep(1000)
        println(it)
    }
    println(jobs.onEach { it.cancel() }.filter { it.isActive })
}

val justLaunch: (i: Int) -> Job = {
    GlobalScope.launch(Dispatchers.Unconfined) {
        suspendCoroutine<Unit> {}
    }
}

val launchAndWithContext: (i: Int) -> Job = {
    GlobalScope.launch(Dispatchers.Unconfined) {
        withContext(ThreadPool) {
            suspendCoroutine<Unit> {}
        }
    }
}

val launchAndAsync: (i: Int) -> Job = {
    GlobalScope.launch(Dispatchers.Unconfined) {
        async(ThreadPool) {
            suspendCoroutine<Unit> {}
        }.await()
    }
}
This is the typical output I get from the above code:
Just launch: 140 nanoseconds
launch and withContext: 520 nanoseconds
launch and async-await: 1100 nanoseconds
Yes, async-await takes about twice as long as withContext, but it's still just a microsecond. You'd have to launch them in a tight loop, doing almost nothing besides, for that to become "a problem" in your app.
Using measureMemory() I found the following memory cost per call:
Just launch: 88 bytes
withContext(): 512 bytes
async-await: 652 bytes
The cost of async-await is exactly 140 bytes higher than withContext, the number we got as the memory weight of one coroutine. This is just a fraction of the complete cost of setting up the CommonPool context.
If performance/memory impact was the only criterion to decide between withContext and async-await, the conclusion would have to be that there's no relevant difference between them in 99% of real use cases.
The real reason is that withContext() is a simpler and more direct API, especially in terms of exception handling:
An exception that isn't handled within async { ... } causes its parent job to get cancelled. This happens regardless of how you handle exceptions from the matching await(). If you haven't prepared a coroutineScope for it, it may bring down your entire application.
An exception not handled within withContext { ... } simply gets thrown by the withContext call, you handle it just like any other.
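For example, the withContext case can be handled with an ordinary try/catch at the call site. A small illustrative sketch of mine (not from the answer above):
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

suspend fun loadOrFallback(): String = try {
    withContext(Dispatchers.IO) {
        error("network down") // an unhandled failure inside the block...
    }
} catch (e: IllegalStateException) {
    "cached value" // ...is simply rethrown by the withContext call and handled here
}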
withContext also happens to be optimized, leveraging the fact that you're suspending the parent coroutine and awaiting on the child, but that's just an added bonus.
async-await should be reserved for those cases where you actually want concurrency, so that you launch several coroutines in the background and only then await on them. In short:
async-await-async-await — don't do that, use withContext-withContext
async-async-await-await — that's the way to use it.
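A small sketch of that second shape, with hypothetical fetchFirst()/fetchSecond() suspend functions standing in for real work:
import kotlinx.coroutines.async
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.delay

// Hypothetical stand-ins for real suspending work
suspend fun fetchFirst(): String { delay(100); return "first" }
suspend fun fetchSecond(): String { delay(100); return "second" }

// async-async-await-await: start both children first, then await them,
// so the two fetches run concurrently.
suspend fun fetchBoth(): Pair<String, String> = coroutineScope {
    val first = async { fetchFirst() }
    val second = async { fetchSecond() }
    first.await() to second.await()
}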
Isn't it always better to use withContext rather than async-await, as it is functionally similar but doesn't create another coroutine? Large numbers of coroutines, though lightweight, could still be a problem in demanding applications.
Is there a case where async-await is preferable to withContext?
You should use async/await when you want to execute multiple tasks concurrently, for example:
runBlocking {
    val deferredResults = arrayListOf<Deferred<String>>()

    deferredResults += async {
        delay(1000) // 1 second
        "1"
    }
    deferredResults += async {
        delay(1000)
        "2"
    }
    deferredResults += async {
        delay(1000)
        "3"
    }

    // Wait for all results (at this point the tasks are already running)
    val results = deferredResults.map { it.await() }
    // Or: val results = deferredResults.awaitAll()
    println(results)
}
If you don't need to run multiple tasks concurrently, you can use withContext.
When in doubt, remember this as a rule of thumb:
If multiple tasks have to happen in parallel and the final result depends on completion of all of them, then use async.
For returning the result of a single task, use withContext.
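And a small sketch of the single-task case (the file-reading function and path parameter are just illustrative):
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext
import java.io.File

// One blocking read moved off the caller's dispatcher: no concurrency is needed,
// so withContext is enough and there is no need for async here.
suspend fun readConfig(path: String): String =
    withContext(Dispatchers.IO) {
        File(path).readText()
    }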