This question is linked to one of my previous questions: Kotlin - Coroutines with loops.
So, this is my current implementation:
fun propagate() = runBlocking {
logger.info("Propagating objectives...")
val variablesWithSetObjectives: List<ObjectivePropagationMapping> =
variables.filter { it.variable.objective != Objective.NONE }
variablesWithSetObjectives.forEach { variableWithSetObjective ->
logger.debug("Propagating objective ${variableWithSetObjective.variable.objective} from variable ${variableWithSetObjective.variable.name}")
val job: Job = launch {
propagate(variableWithSetObjective, variableWithSetObjective.variable.objective, this, variableWithSetObjective)
}
job.join()
traversedVariableNames.clear()
}
logger.info("Done")
}
private tailrec fun propagate(currentVariable: ObjectivePropagationMapping, objectiveToPropagate: Objective, coroutineScope: CoroutineScope, startFromVariable: ObjectivePropagationMapping? = null) {
if (traversedVariableNames.contains(currentVariable.variable.name)) {
logger.debug("Detected loopback condition, stopping propagation to prevent loop")
return
}
traversedVariableNames.add(currentVariable.variable.name)
val objectiveToPropagateNext: Objective =
if (startFromVariable != currentVariable) {
logger.debug("Propagating objective $objectiveToPropagate to variable ${currentVariable.variable.name}")
computeNewObjectiveForVariable(currentVariable, objectiveToPropagate)
}
else startFromVariable.variable.objective
logger.debug("Choosing variable to propagate to next")
val variablesToPropagateToNext: List<ObjectivePropagationMapping> =
causalLinks
.filter { it.toVariable.name == currentVariable.variable.name }
.map { causalLink -> variables.first { it.variable.name == causalLink.fromVariable.name } }
if (variablesToPropagateToNext.isEmpty()) {
logger.debug("Detected end of path, stopping propagation...")
return
}
val variableToPropagateToNext: ObjectivePropagationMapping = variablesToPropagateToNext.random()
logger.debug("Chose variable ${variableToPropagateToNext.variable.name} to propagate to next")
if (variablesToPropagateToNext.size > 1) {
logger.debug("Detected split condition")
variablesToPropagateToNext.filter { it != variableToPropagateToNext }.forEach {
logger.debug("Launching child thread for split variable ${it.variable.name}")
coroutineScope.launch {
propagate(it, objectiveToPropagateNext, this)
}
}
}
propagate(variableToPropagateToNext, objectiveToPropagateNext, coroutineScope)
}
I'm currently running the algorithm on the following variable topology (Note that the algorithm follows arrows coming to a variable, but not arrows leaving from a variable):
Currently I am getting the following debug print result: https://pastebin.com/ya2tmc6s.
As you can see, even though I launch coroutines they don't begin executing until the main propagate recursive function has finished exploring a complete path.
I would want the launched coroutines to start executing immediately instead...
Unless otherwise specified, all the coroutines you start within runBlocking will run on the same thread.
If you want to enable multithreading, you can just change that to runBlocking(Dispatchers.Default). I'm just going to assume that all that code is thread-safe.
If you don't really want to enable multithreading, then you really shouldn't care what order the coroutines run in.
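A minimal, self-contained sketch of that change (not the question's exact code; the sleep and prints are just illustrative): with the default single-threaded runBlocking dispatcher the child coroutine only starts once the parent suspends or finishes, whereas with runBlocking(Dispatchers.Default) it can start on another worker thread right away.
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

fun main() = runBlocking(Dispatchers.Default) {
    launch {
        // On Dispatchers.Default this can run while the parent coroutine is still busy below.
        println("child on ${Thread.currentThread().name}")
    }
    // Stand-in for the long recursive exploration in the question.
    Thread.sleep(100)
    println("parent on ${Thread.currentThread().name}")
}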
Related
So I have a flow where I need it to emit values from the cache, but at the end it will make an API call to pull values in case there was nothing in the cache (or to refresh the value it has). I am trying this:
override val data: Flow<List<Data>> = dataDao.getAllCachedData()
.onCompletion {
coroutineScope {
launch {
requestAndCacheDataOrEmitError()
}
}
}
.map { entities ->
entities
.map { it.toData() }
.filter { it !is Data.Unknown }
}
.filterNotNull()
.catch { emitRepositoryError(it) }
So the idea is that we emit the cache, and then make an API call to fetch new data regardless of the original mapping. But I do not want it blocking. For example, if we use this flow, I do not ever want the calling function to be blocked by the onCompletion.
I think the problem is that the onCompletion never runs. I set some breakpoints/logs and it never runs at all, even outside of the coroutineScope.
I don't quite understand the work you are doing, but I think the key point is the scope in which you collect the flow: when you end that scope (cancel its job), the collection is cancelled and the flow goes into onCompletion. For example:
var job : Job? = null
fun scan() {
job = viewModelScope.launch {
bigfileManager.bigFile.collect {
if (it is ResultOrProgress.Result) {
_bigFiles.value = it.result ?: emptyList()
} else {
_updateProgress.value = (it as ResultOrProgress.Progress).progress ?: 0
}
}
}
}
fun endScreen(){
job?.cancel()
}
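A minimal sketch tying that back to onCompletion from the question (the scope, someFlow, and startCollecting names here are assumptions): when the Job collecting the flow is cancelled, collection completes with a CancellationException and onCompletion is invoked with that exception as its cause.
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

fun startCollecting(scope: CoroutineScope, someFlow: Flow<Int>): Job =
    scope.launch {
        someFlow
            .onCompletion { cause -> println("completed, cause=$cause") } // also invoked when collection is cancelled
            .collect { println(it) }
    }

// Later, e.g. when the screen goes away:
// startCollecting(scope, someFlow).cancel()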
I'm trying to get comfortable with Kotlin/coroutines. My current goal is to read a text file in one coroutine, and emit each line through a Channel to be printed in another coroutine. Here's what I have so far:
fun main() = runBlocking {
val ch = Channel<String>()
launch {
for (msg in ch) {
println(msg.length)
}
}
launch {
File("file.txt").forEachLine {
ch.send(it)
}
}
}
Hopefully this shows my intent, but it doesn't compile because you can't call a suspending function (send) from the lambda passed to forEachLine. In Golang everything is modeled synchronously, so I would just run it in a goroutine and send would block, but Kotlin seems to have a lower level concurrency model. What would be the canonical way to accomplish this?
If it's helpful, my final goal is to read JSON events emitted from a subprocess via stdout. I'll have a separate JSON object on each line, and will need to parse and handle each separately.
This is the best I've been able to come up with so far. It seems to work but I feel like there must be a more idiomatic way to accomplish this.
fun main() = runBlocking {
val ch = Channel<String>()
launch {
for (msg in ch) {
println(msg.length)
}
}
launch {
val istream = File("file.txt").inputStream()
val buf = ByteArray(4096)
while (true) {
val n = istream.read(buf)
if (n == -1) {
break
}
val msg = buf.sliceArray(0..n-1).toString(Charsets.UTF_8)
ch.send(msg)
}
ch.close()
}
}
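For comparison, a minimal line-oriented sketch (assuming the same ch channel and runBlocking setup as in the snippets above): unlike forEachLine, useLines is an inline function, so the suspending send can be called inside its lambda.
launch {
    File("file.txt").useLines { lines ->
        for (line in lines) ch.send(line) // allowed: useLines is inline, so we are still inside the coroutine
    }
    ch.close() // lets the consumer's for-loop finish
}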
I've been trying the same and based on some ideas I got from https://kotlinlang.org/docs/channels.html#fan-out the following seems to work nicely:
fun main() {
val fileToRead = File("somefile.csv")
runBlocking {
// Producer reading the file
val fileChannel = readFileIntoChannel(fileToRead)
// Consumer writing file lines to stdout
launch { fileChannel.consumeEach { line -> println(line) } }
}
}
fun CoroutineScope.readFileIntoChannel(f: File) = produce<String> {
for (line in f.bufferedReader().lines() ) { send(line) }
}
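One nice property of this approach: produce closes its channel automatically when the block completes, so the consumeEach loop in the consumer terminates on its own once the whole file has been sent.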
Update Coroutines 1.3.0-RC
Working version:
@FlowPreview
suspend fun streamTest(): Flow<String> = channelFlow {
listener.onSomeResult { result ->
if (!isClosedForSend) {
offer(result)
}
}
awaitClose {
listener.unsubscribe()
}
}
Also check out this Medium article by Roman Elizarov: Callbacks and Kotlin Flows
Original Question
I have a Flow emitting multiple Strings:
@FlowPreview
suspend fun streamTest(): Flow<String> = flowViaChannel { channel ->
listener.onSomeResult { result ->
if (!channel.isClosedForSend) {
channel.sendBlocking(result)
}
}
}
After some time I want to unsubscribe from the stream. Currently I do the following:
viewModelScope.launch {
beaconService.streamTest().collect {
Timber.i("stream value $it")
if(it == "someString")
// Here the coroutine gets canceled, but streamTest is still executed
this.cancel()
}
}
If the coroutine gets canceled, the stream is still executed. There is just no subscriber listening to new values. How can I unsubscribe and stop the stream function?
A solution is not to cancel the flow, but the scope it's launched in.
val job = scope.launch { flow.cancellable().collect { } }
job.cancel()
NOTE: You should call cancellable() before collect if you want your collector to stop when the Job is cancelled.
You could use the takeWhile operator on Flow.
flow.takeWhile { it != "someString" }.collect { emittedValue ->
//Do stuff until predicate is false
}
For those wanting to unsubscribe from the Flow within the coroutine scope itself, this approach worked for me:
viewModelScope.launch {
beaconService.streamTest().collect {
//Do something then
this.coroutineContext.job.cancel()
}
}
With the current version of coroutines/Flow (1.2.x) I don't know of a good solution. With onCompletion you will get informed when the flow stops, but you are then outside of the streamTest function and it will be hard to stop listening for new events.
beaconService.streamTest().onCompletion {
}.collect {
...
}
With the next version of coroutines (1.3.x) it will be really easy. The function flowViaChannel is deprecated in favor of channelFlow. This function allows you to wait for the flow to be closed and do something at that moment, e.g. remove a listener:
channelFlow<String> {
println("Subscribe to listener")
awaitClose {
println("Unsubscribe from listener")
}
}
When a flow runs in a coroutine scope, you can get a Job from that scope and use it to stop the subscription.
// Make member variable if you want.
var jobForCancel : Job? = null
// Begin collecting
jobForCancel = viewModelScope.launch {
beaconService.streamTest().collect {
Timber.i("stream value $it")
if(it == "someString")
// Here the coroutine gets canceled, but streamTest is still executed
// this.cancel() // Don't
}
}
// Call this whenever you want to cancel
jobForCancel?.cancel()
For completeness, there is a newer version of the accepted answer. Instead of explicitly using the launch coroutine builder, we can use the launchIn method directly on the flow:
val job = flow.cancellable().launchIn(scope)
job.cancel()
Based on @Ronald's answer, this works great for testing when you need to make your Flow emit again.
val flow = MutableStateFlow(initialValue)
flow.take(n).collectIndexed { index, _ ->
if (index == something) {
flow.value = update
}
}
//your assertions
We have to know how many emissions we expect in total (n), and then we can use the index to know when to update the Flow so we can receive more emissions.
If you want to cancel only the subscription from inside it, you can do it like this:
viewModelScope.launch {
testScope.collect {
return#collect cancel()
}
}
There are two ways to do this that are by design from the Kotlin team:
As @Ronald pointed out in another comment:
Option 1: takeWhile { //predicate }
Cancel collection when the predicate is false. Final value will not be collected.
flow.takeWhile { value ->
value != "finalString"
}.collect { value ->
//Do stuff, but "finalString" will never hit this
}
Option 2: transformWhile { //predicate }
When the predicate is false, collect that value, then cancel.
flow.transformWhile { value ->
emit(value)
value != "finalString"
}.collect { value ->
//Do stuff, but "finalString" will be the last value
}
https://github.com/Kotlin/kotlinx.coroutines/issues/2065
Usually I use the standard kotlinx-coroutines-jdk8 library to jump from the Java future API world into Kotlin's suspend heaven.
It worked great for me until I encountered the Neo4j cursor API, where I can't just call .await() on the completion stage, because that immediately starts fetching millions of records into memory.
The Kotlin way does not work for me; it looks like this:
suspend fun query() {
driver.session().use { session ->
val cursor: StatementResultCursor = session.readTransactionAsync {
it.runAsync("query ...", params)
}.await() // HERE WE DIE WITH OOM
var record = cursor.nextAsync().await()
while (record != null) {
val node = record.get("node")
mySuspendProcessingFunction(node)
record = cursor.nextAsync().await()
}
}
}
At the same time, the Java API works well; we fetch records one by one:
suspend fun query() {
session.readTransactionAsync { transaction ->
transaction.runAsync("query ...", params).thenCompose { cursor ->
cursor.forEachAsync { record ->
runBlocking { // BUT I NEED TO DO RUN BLOCKING HERE :(
val node = record.get("node")
mySuspendProcessingFunction(node)
}
}
}
}.thenCompose {
session.closeAsync()
}.await()
}
The second option works for me, but it is pretty ugly (definitely not the Kotlin way), and what is more important, I need to use runBlocking even though this whole block is executed within a suspend function.
What am I doing wrong? Is there a better way?
UPD
I tried to do this exercise using the new Flow feature; unfortunately the results are the same:
suspend fun query() {
session.readTransactionAsync { transaction ->
transaction.runAsync(query, params).thenApply { cursor ->
cursor.asFlow().onEach { record ->
val node = record.get("node")
mySuspendProcessingFunction(node)
}
}
}.thenCompose {
session.closeAsync()
}.await()
}
fun StatementResultCursor.asFlow() = flow {
do {
val record = nextAsync().await()
if (record != null) emit(record)
} while (record != null)
}
I am new to Kotlin/Coroutines, so hopefully I am just missing something/don't fully understand how to structure my code for the problem I am trying to solve.
Essentially, I am taking a list of strings, and for each item in the list I want to send it to another method to do work (make a network call and return data based on the response). (Edit:) I want all calls to launch concurrently, and block until all calls are done/the response is acted on, and then return a new list with the info of each response.
I probably don't yet fully understand when to use launch/async, but I've tried the following with both launch (with joinAll) and async (with await).
fun processData(lstInputs: List<String>): List<response> {
val lstOfReturnData = mutableListOf<response>()
runBlocking {
withContext(Dispatchers.IO) {
val jobs = List(lstInputs.size) {
launch {
lstOfReturnData.add(networkCallToGetData(lstInputs[it]))
}
}
jobs.joinAll()
}
}
return lstOfReturnData
}
What I expect to happen is that if my lstInputs has a size of 120, then when all jobs are joined, my lstOfReturnData should also have a size of 120.
What actually happens is inconsistent results. I'll run it once and get 118 in my final list; run it again, it's 120; run it again, it's 117, etc. In the networkCallToGetData() method, I am handling any exceptions so as to at least return something for every request, regardless of whether the network call fails.
Can anybody help explain why I am getting inconsistent results, and what I need to do to ensure I am blocking appropriately and all jobs are being joined before moving on?
mutableListOf() creates an ArrayList, which is not thread-safe.
Try using ConcurrentLinkedQueue instead.
Also, are you using the stable version of Kotlin/kotlinx.coroutines (not the old experimental one)? In the stable version, with the introduction of structured concurrency, there is no need to write jobs.joinAll anymore. launch inside runBlocking starts new coroutines in the runBlocking scope, and that scope automatically waits for all the launched jobs to finish. So the code above can be shortened to
val lstOfReturnData = ConcurrentLinkedQueue<response>()
runBlocking {
lstInputs.forEach {
launch(Dispatchers.IO) {
lstOfReturnData.add(networkCallToGetData(it))
}
}
}
return lstOfReturnData.toList()
runBlocking blocks the current thread interruptibly until its completion. I guess that's not what you want. If I'm wrong and you do want to block the current thread, then you can get rid of coroutines and just make the network calls in the current thread:
val lstOfReturnData = mutableListOf<response>()
lstInputs.forEach {
lstOfReturnData.add(networkCallToGetData(it))
}
But if that is not your intent, you can do the following:
class Presenter(private val uiContext: CoroutineContext = Dispatchers.Main)
: CoroutineScope {
// creating local scope for coroutines
private var job: Job = Job()
override val coroutineContext: CoroutineContext
get() = uiContext + job
// call this to cancel job when you don't need it anymore
fun detach() {
job.cancel()
}
fun processData(lstInputs: List<String>) {
launch {
val deferredList = lstInputs.map {
async(Dispatchers.IO) { networkCallToGetData(it) } // runs in parallel in background thread
}
val lstOfReturnData = deferredList.awaitAll() // waiting while all requests are finished without blocking the current thread
// use lstOfReturnData in Main Thread, e.g. update UI
}
}
}
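A hypothetical usage sketch of the Presenter above (the call site and inputs are assumptions):
val presenter = Presenter()
presenter.processData(listOf("input1", "input2", "input3")) // returns immediately; requests run in parallel on Dispatchers.IO
// later, e.g. when leaving the screen:
presenter.detach()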
runBlocking should mean you don't have to call join.
Launching a coroutine from inside the runBlocking scope should do this for you.
Have you tried just:
fun processData(lstInputs: List<String>): List<response> {
val lstOfReturnData = mutableListOf<response>()
runBlocking {
lstInputs.forEach {
launch(Dispatchers.IO) {
lstOfReturnData.add(networkCallToGetData(it))
}
}
}
return lstOfReturnData
}