I love the concept of coroutines and I've been using them in my Android projects. Currently I'm working on a JVM module which I'll be including in a Ktor project, and I know Ktor has support for coroutines.
(find the attached code snippet)
I just wanted to know: is this the right approach?
How do I use async with recursion?
Any resources you can recommend that would help me gain more in-depth knowledge of coroutines would be appreciated.
Thanks in advance!
override suspend fun processInstruction(args..): List<Any> = coroutineScope {
    val dataWithFields = async {
        listOfFields.fold(mutableListOf<Any>()) { acc, field ->
            val data = someProcess(field)
            val nested = processInstruction(...nestedField) // nested call
            acc.addAll(data)
            acc.addAll(nested)
            acc
        }
    }
    return@coroutineScope postProcessData(dataWithFields.await())
}
If you want to process all nested calls in parallel, you should wrap each of them in async (the async should be inside the loop), and then, after the loop, await all the results. (In your code you call await right after a single async, so there is no parallel execution.)
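Applied to your processInstruction, it would look roughly like this (a sketch only; Field, someProcess, nestedFields and postProcessData are stand-ins for the parts elided in your snippet):
// Sketch with async moved inside the loop; the names below are assumptions, not your real API.
suspend fun processInstruction(fields: List<Field>): List<Any> = coroutineScope {
    val results = fields.map { field ->
        async {
            val data = someProcess(field)                       // per-field work, assumed to return List<Any>
            val nested = processInstruction(field.nestedFields) // recursive call, runs concurrently
            data + nested
        }
    }
    postProcessData(results.awaitAll().flatten())
}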
For example, if you have Element:
interface Element {
    val subElements: List<Element>
    suspend fun calculateData(): SomeData
}
interface SomeData
And you want to calculateData of all subElements in parallel, you can do it like this:
suspend fun Element.calculateAllData(): List<SomeData> = coroutineScope {
    val data = async { calculateData() }
    val subData = subElements.map { sub -> async { sub.calculateAllData() } }
    return@coroutineScope listOf(data.await()) + subData.awaitAll().flatten()
}
As you said in the comments section, you need the parent-data to calculate the sub-data, therefore the first thing calculateAllData() should do is calculate the parent-data:
suspend fun Element.calculateAllData(
    parentData: SomeData = defaultParentData()
): List<SomeData> = coroutineScope {
    val data = calculateData(parentData)
    val subData = subElements.map { sub -> async { sub.calculateAllData(data) } }
    return@coroutineScope listOf(data) + subData.awaitAll().flatten()
}
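Note that this version assumes Element.calculateData has been updated to take the parent data; roughly, the supporting declarations would look like this (defaultParentData is just a stand-in for however you seed the root):
// Assumed declarations for the parent-data variant (a sketch, not part of the original answer):
interface Element {
    val subElements: List<Element>
    suspend fun calculateData(parentData: SomeData): SomeData
}

fun defaultParentData(): SomeData = object : SomeData {}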
Now you may wonder how fast it works. Consider the following Element implementation:
class ElementImpl(override val subElements: List<Element>) : Element {
    override suspend fun calculateData(parentData: SomeData): SomeData {
        delay(1000)
        return object : SomeData {} // SomeData is an interface above, so return an anonymous instance
    }
}
fun elmOf(vararg elements: Element) = ElementImpl(listOf(*elements))
And the following test:
println(measureTime {
    elmOf(
        elmOf(),
        elmOf(
            elmOf(),
            elmOf(
                elmOf(),
                elmOf(),
                elmOf()
            )
        ),
        elmOf(
            elmOf(),
            elmOf()
        ),
        elmOf()
    ).calculateAllData()
})
If parent-data isn't needed to calculate sub-data, it prints 1.06s, since in this case all the data is calculated in parallel. Otherwise, it prints 4.15s, since the element tree's height is 4.
viewModelScope blocks UI in Jetpack Compose
I know viewModelScope.launch(Dispatchers.IO) {} can avoid this problem, but how do I use it here?
This is my UI-level code:
@Composable
fun CountryContent(viewModel: CountryViewModel) {
    SingleRun {
        viewModel.getCountryList()
    }
    val pagingItems = viewModel.countryGroupList.collectAsLazyPagingItems()
    // ...
}
Here is my ViewModel; Pager is my pagination:
@HiltViewModel
class CountryViewModel @Inject constructor() : BaseViewModel() {
    var countryGroupList = flowOf<PagingData<CountryGroup>>()
    private val config = PagingConfig(pageSize = 26, prefetchDistance = 1, initialLoadSize = 26)

    fun getCountryList() {
        countryGroupList = Pager(config) {
            CountrySource(api)
        }.flow.cachedIn(viewModelScope)
    }
}
This is the small helper:
@Composable
fun SingleRun(onClick: () -> Unit) {
    val execute = rememberSaveable { mutableStateOf(true) }
    if (execute.value) {
        onClick()
        execute.value = false
    }
}
I don't use Compose much yet, so I could be wrong, but this stood out to me.
I don't think your thread is being blocked. I think you subscribed to an empty flow before replacing it, so there is no data to show.
You shouldn't use a var property for your flow, because the empty original flow could be collected before the new one replaces it. Also, it defeats the purpose of using cachedIn because the flow could be replaced multiple times.
You should eliminate the getCountryList() function and just directly assign the flow. Since it is a cachedIn flow, it doesn't do work until it is first collected anyway. See the documentation:
It won't execute any unnecessary code unless it is being collected.
So your view model should look like:
@HiltViewModel
class CountryViewModel @Inject constructor() : BaseViewModel() {
    private val config = PagingConfig(pageSize = 26, prefetchDistance = 1, initialLoadSize = 26)

    val countryGroupList = Pager(config) {
        CountrySource(api)
    }.flow.cachedIn(viewModelScope)
}
...and you can remove the SingleRun block from your Composable.
You are not doing anything that would require you to specify dispatchers. The default of Dispatchers.Main is fine here because you are not calling any blocking functions directly anywhere in your code.
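For contrast, a dispatcher switch would only be warranted around code that actually blocks; a minimal sketch, assuming a hypothetical readLegacyCountriesFile helper and parseCountries parser:
// Hypothetical example: wrap only the genuinely blocking call in withContext(Dispatchers.IO).
suspend fun readLegacyCountriesFile(path: String): List<CountryGroup> =
    withContext(Dispatchers.IO) {
        val json = File(path).readText() // blocking file I/O
        parseCountries(json)             // hypothetical parser returning List<CountryGroup>
    }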
I'd like to test a function that uses the scope of a callbackFlow builder. Assume I have a flow builder like this:
fun items(): Flow<Items> = callbackFlow {
    getItems(this) {
        trySend(it)
    }
    awaitClose()
}
In the getItems function, I receive data from websockets. The ProducerScope is used either to launch a new coroutine with a delay and do something, or to close the scope if an error happens. So it might call scope.launch { } or scope.close().
For example, this could do something as follows:
fun getItems(scope: ProducerScope<Items>, callback: (Items) -> Unit) {
    if (something) {
        scope.launch { ... }
    }
    if (somethingElse) {
        ...
        scope.close(error)
    }
    ...
    callback(items)
}
The callbackFlow block uses a ProducerScope, which is both a CoroutineScope and a SendChannel. I tried to mock it using MockK:
val scope: ProducerScope<Items> = mockk()
Unfortunately, I end up with:
java.lang.ClassCastException: class kotlin.coroutines.CoroutineContext$Element$Subclass6 cannot be cast to class kotlin.coroutines.ContinuationInterceptor
How can I mock a ProducerScope?
How do I unit test getItems above when scope is both a CoroutineScope and a SendChannel?
Thanks in advance.
After many tries, I could not do this easily without running into strange behavior. So I refactored my function to use a Channel and a CoroutineScope separately. Thanks to the CoroutineScope.plus extension, I can create a new scope from the flow builder. This is now testable!
Therefore, the flow builder became:
fun items(): Flow<Items> = callbackFlow {
    val channel = this.channel
    val scope = this.plus(this.coroutineContext)
    getItems(channel, scope) {
        ...
    }
    ...
}
My function still uses both but gets them separately:
fun getItems(
    channel: SendChannel<Items>,
    scope: CoroutineScope,
    callback: (Items) -> Unit
) {
    if (something) {
        scope.launch { ... } // <-- use scope
    }
    if (somethingElse) {
        ...
        channel.close(error) // <-- use channel
    }
    ...
    callback(items)
}
Then I can test using a Channel with the same requirements as the one in callbackFlow, and the scope from runTest:
@Test
fun `get items and succeed`() = runTest {
    val channel = Channel<Any>(Channel.BUFFERED, BufferOverflow.SUSPEND)
    ...
    service.getItems(channel, this@runTest, callback)
    ...
}
I am trying to use Arrow in Kotlin. Arrow has three functions:
IO {}
IO.fx {}
IO.fx { !effect }
I want to know the difference between these. I know IO.fx and IO.fx { !effect } help us use side effects, but then what's the difference between the two, and why would I use one over the other?
While this is going to change shortly, on version 0.11.X:
IO { } is a constructor that takes a suspend function, so you can call any suspend function inside. It's a shortcut for IO.effect { }
suspend fun bla(): Unit = ...
fun myIO(): IO<Unit> = IO { bla() }
fun otherIO(): IO<Unit> = IO.effect { bla() }
IO.fx { } is the same as IO except it adds a few DSL functions that are shortcuts for other APIs of IO. The most important one is ! or bind, which executes another IO inside.
fun myIO(): IO<Unit> = IO.fx { bla() }
fun nestIO(): IO<IO<Unit>> = IO.fx { myIO() }
fun unpackIO(): IO<Unit> = IO.fx { !myIO() }
Another function it enables is the constructor effect from the first point. So what you're effectively doing is adding an additional layer of wrapping that may not be necessary.
fun inefficientNestIO(): IO<IO<Unit>> = IO.fx { effect { bla() } }
fun inefficientUnpackedIO(): IO<Unit> = IO.fx { !effect { bla() } }
We frequently see that inefficientUnpackedIO pattern from people who come to the support channels, and it's easily replaceable by just IO { bla() }.
Why have two ways of doing the same thing with effect and fx? It's something we're looking to improve in the next releases. We recommend using the least powerful abstraction wherever possible, so reserve fx for when you're using other IO-based APIs such as scheduling or parallelization. For example:
IO.fx {
    val id = getUserIdSuspend()
    val friends: List<User> =
        !parMapN(
            userFriends(id),
            IO { userProfile(id) },
            ::toUsers
        )
    !friends.parTraverse(IO.applicative()) { user ->
        IO { broadcastStatus(user) }
    }
}
I have a function:
suspend fun getChats() {
    val chatList = mutableListOf<Chat>()
    getMyChats { chats ->
        chats.forEach {
            it.getDetail().await()
        }
    }.await()
}
But the compiler shows Suspension functions can be called only within coroutine body for the await() inside the forEach loop. How can I avoid this problem, or how can I pass the parent scope into it?
Note: getMyChats() receives a callback.
According to you, getMyChats doesn't accept a suspendable block (lambda). So you can wrap it with suspendCancellableCoroutine:
// requires the kotlin.coroutines.resume extension import
suspend fun getMyChatsSuspend(): List<Chat> = suspendCancellableCoroutine { cont ->
    getMyChats { cont.resume(it) }
}
Now use your function like this:
suspend fun getChats() {
    ...
    val chats = getMyChatsSuspend()
    val chatDetails = chats.map { it.getDetail() }
    val chatDetailsAwait = awaitAll(*chatDetails.toTypedArray())
}
Obviously, just chain the calls instead of creating multiple variables if you prefer.
If you want everything to be done on a single line you can do:
val resolvedDetails = getMyChatsSuspend().map { it.getDetail() }.let { awaitAll(*it.toTypedArray()) }
You have to isolate the getMyChats function like @Animesh Sahu said, but that last call to await() looks very suspicious, so I'll rewrite it.
I'll also assume that await is not necessarily on a Deferred<T>.
suspend fun getChats() {
    val chatList = mutableListOf<Chat>()
    val result = CompletableDeferred<List<Chat>>()
    getMyChats { result.complete(it) }.await()
    val chats = result.await()
    chats.forEach {
        it.getDetail().await()
    }
}
If you provide the function signatures of the functions involved, I might be able to give you a nicer solution.
Although without looking at anything else, I can tell you that the getMyChats function needs a refactor.
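For illustration only, the refactor might end up looking something like this; the signatures of getMyChats and getDetail are assumptions, since the question doesn't show them:
// A hypothetical shape for the refactored API (a sketch, not a drop-in solution):
// expose chats through a suspend function instead of a callback, then fetch details concurrently.
suspend fun getMyChatsSuspending(): List<Chat> = suspendCancellableCoroutine { cont ->
    getMyChats { chats -> cont.resume(chats) } // assumes getMyChats invokes the callback exactly once
}

suspend fun getChats(): List<ChatDetail> = coroutineScope {
    getMyChatsSuspending()
        .map { chat -> async { chat.getDetail().await() } } // assumes getDetail() returns a Deferred<ChatDetail>
        .awaitAll()
}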
I have a collection of objects, which I need to perform some transformation on. Currently I am using:
var myObjects: List<MyObject> = getMyObjects()

myObjects.forEach { myObj ->
    someMethod(myObj)
}
It works fine, but I was hoping to speed it up by running someMethod() in parallel, instead of waiting for each object to finish, before starting on the next one.
Is there any way to do this in Kotlin? Maybe with doAsyncTask or something?
I know when this was asked over a year ago it was not possible, but now that Kotlin has coroutines (and helpers like doAsyncTask), I am curious whether coroutines can help.
Yes, this can be done using coroutines. The following function applies an operation in parallel on all elements of a collection:
fun <A> Collection<A>.forEachParallel(f: suspend (A) -> Unit): Unit = runBlocking {
    map { async(CommonPool) { f(it) } }.forEach { it.await() }
}
While the definition itself is a little cryptic, you can then easily apply it as you would expect:
myObjects.forEachParallel { myObj ->
    someMethod(myObj)
}
Parallel map can be implemented in a similar way, see https://stackoverflow.com/a/45794062/1104870.
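For reference, a parallel map can be sketched in the same spirit with the current kotlinx.coroutines API (using coroutineScope and awaitAll instead of the old runBlocking/CommonPool combination):
// A sketch of a suspending parallel map; the caller must already be inside a coroutine.
suspend fun <A, B> Collection<A>.mapParallel(f: suspend (A) -> B): List<B> = coroutineScope {
    map { async { f(it) } }.awaitAll()
}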
Java Stream is simple to use in Kotlin:
tasks.stream().parallel().forEach { computeNotSuspend(it) }
If you are using Android however, you cannot use Java 8 if you want an app compatible with an API lower than 24.
You can also use coroutines as you suggested. But they are not really part of the language as of now (August 2017) and you need to install an external library. There is a very good guide with examples.
runBlocking<Unit> {
    val deferreds = tasks.map { async(CommonPool) { compute(it) } }
    deferreds.forEach { it.await() }
}
Note that coroutines are implemented with non-blocking multi-threading, which means they can be faster than traditional multi-threading. I have code below benchmarking the parallel Stream versus the coroutine approach, and in that case the coroutine approach is 7 times faster on my machine. However, you have to do some work yourself to make sure your code is suspending (non-blocking), which can be quite tricky. In my example I'm just calling delay, which is a suspend function provided by the library. Non-blocking multi-threading is not always faster than traditional multi-threading; it can be faster if you have many threads doing nothing but waiting on IO, which is kind of what my benchmark is doing.
My benchmarking code:
import kotlinx.coroutines.experimental.CommonPool
import kotlinx.coroutines.experimental.async
import kotlinx.coroutines.experimental.delay
import kotlinx.coroutines.experimental.launch
import kotlinx.coroutines.experimental.runBlocking
import java.util.*
import kotlin.system.measureNanoTime
import kotlin.system.measureTimeMillis

class SomeTask() {
    val durationMS = random.nextInt(1000).toLong()

    companion object {
        val random = Random()
    }
}

suspend fun compute(task: SomeTask): Unit {
    delay(task.durationMS)
    //println("done ${task.durationMS}")
    return
}

fun computeNotSuspend(task: SomeTask): Unit {
    Thread.sleep(task.durationMS)
    //println("done ${task.durationMS}")
    return
}

fun main(args: Array<String>) {
    val n = 100
    val tasks = List(n) { SomeTask() }

    val timeCoroutine = measureNanoTime {
        runBlocking<Unit> {
            val deferreds = tasks.map { async(CommonPool) { compute(it) } }
            deferreds.forEach { it.await() }
        }
    }
    println("Coroutine ${timeCoroutine / 1_000_000} ms")

    val timePar = measureNanoTime {
        tasks.stream().parallel().forEach { computeNotSuspend(it) }
    }
    println("Stream parallel ${timePar / 1_000_000} ms")
}
Output on my 4 cores computer:
Coroutine: 1037 ms
Stream parallel: 7150 ms
If you uncomment the println calls in the two compute functions, you will see that the non-blocking coroutine code processes the tasks in the right order, but the Stream version does not.
You can use RxJava to solve this.
val items: List<MyObjects> = getList()

Observable.from(items).flatMap(object : Func1<MyObjects, Observable<String>> {
    override fun call(item: MyObjects): Observable<String> {
        return someMethod(item)
    }
}).subscribeOn(Schedulers.io()).observeOn(AndroidSchedulers.mainThread()).subscribe(object : Subscriber<String>() {
    override fun onCompleted() {
    }

    override fun onError(e: Throwable) {
    }

    override fun onNext(s: String) {
        // do on output of each string
    }
})
By subscribing on Schedulers.io(), someMethod is scheduled on a background thread.
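For comparison, the same idea in RxJava 2/3 lambda syntax would look roughly like this (a sketch; it assumes someMethod returns an Observable<String> and RxAndroid is on the classpath):
// Sketch of the RxJava 2/3 equivalent using lambdas instead of anonymous classes.
Observable.fromIterable(items)
    .flatMap { item -> someMethod(item) }
    .subscribeOn(Schedulers.io())              // do the work on a background thread
    .observeOn(AndroidSchedulers.mainThread()) // deliver results on the main thread
    .subscribe(
        { s -> /* handle each emitted string */ },
        { e -> /* handle the error */ },
        { /* handle completion */ }
    )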
To process items of a collection in parallel you can use Kotlin Coroutines. For example the following extension function processes items in parallel and waits for them to be processed:
suspend fun <T, R> Iterable<T>.processInParallel(
    dispatcher: CoroutineDispatcher = Dispatchers.IO,
    processBlock: suspend (v: T) -> R,
): List<R> = coroutineScope { // or supervisorScope
    map {
        async(dispatcher) { processBlock(it) }
    }.awaitAll()
}
This is a suspend extension function on the Iterable<T> type, which processes the items in parallel and returns the result of processing each item. By default it uses the Dispatchers.IO dispatcher to offload blocking tasks to a shared pool of threads. It must be called from a coroutine (including a coroutine with the Dispatchers.Main dispatcher) or from another suspend function.
Example of calling from a coroutine:
val myObjects: List<MyObject> = getMyObjects()

someCoroutineScope.launch {
    val results = myObjects.processInParallel {
        someMethod(it)
    }
    // use processing results
}
where someCoroutineScope is an instance of CoroutineScope.
Or if you want to just launch and forget you can use this function:
fun <T> CoroutineScope.processInParallelAndForget(
    iterable: Iterable<T>,
    dispatcher: CoroutineDispatcher = Dispatchers.IO,
    processBlock: suspend (v: T) -> Unit
) = iterable.forEach {
    launch(dispatcher) { processBlock(it) }
}
This is an extension function on CoroutineScope that doesn't return any result. It also uses the Dispatchers.IO dispatcher by default. It can be called on a CoroutineScope or from another coroutine.
Calling example:
someCoroutineScope.processInParallelAndForget(myObjects) {
    someMethod(it)
}

// OR from another coroutine:
someCoroutineScope.launch {
    processInParallelAndForget(myObjects) {
        someMethod(it)
    }
}
where someCoroutineScope is an instance of CoroutineScope.