Moving Window With Kotlin Flow

I am trying to create a moving window of data using Kotlin Flows.
It can be achieved in RxKotlin using a buffer, but Flow's buffer is not the same.
RxKotlin has a buffer operator that periodically gathers items emitted by an Observable into bundles and emits these bundles rather than emitting the items one at a time - buffer(count, skip).
Kotlin Flow also has a buffer, but that just runs the collector in a separate coroutine - buffer.
Is there an existing operator in Flows that can achieve this?

I think what you are looking for is not available in the Kotlinx Coroutines library, but there is an open issue.
There is also a possible implementation in this comment, which I will include here:
import java.util.ArrayDeque
import kotlin.math.max
import kotlin.math.min
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.flow

fun <T> Flow<T>.windowed(size: Int, step: Int): Flow<List<T>> = flow {
    require(size > 0 && step > 0) { "size and step must both be positive" }
    val queue = ArrayDeque<T>(size)
    // Skipping elements between windows (by passing step > size) is allowed on purpose.
    val toSkip = max(step - size, 0)
    val toRemove = min(step, size)
    var skipped = toSkip // start "caught up" so the first window begins at the first element
    collect { element ->
        if (queue.size < size && skipped == toSkip) {
            queue.add(element)
        } else if (queue.size < size && skipped < toSkip) {
            skipped++
        }
        if (queue.size == size) {
            emit(queue.toList())
            repeat(toRemove) { queue.remove() }
            skipped = 0
        }
    }
}
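To illustrate, here is a small (hypothetical) usage of this windowed operator, computing a moving average over windows of three elements:
import kotlinx.coroutines.flow.collect
import kotlinx.coroutines.flow.flowOf
import kotlinx.coroutines.flow.map
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    flowOf(1, 2, 3, 4, 5)
        .windowed(size = 3, step = 1)
        .map { window -> window.average() }
        .collect { println(it) } // prints 2.0, 3.0, 4.0
}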

How to pass Observable emissions to MutableSharedFlow?

I have an Observable; I've used asFlow() to convert it, but it doesn't emit.
I'm trying to migrate from Rx and Channels to Flow, so I have this function:
override fun processIntents(intents: Observable<Intent>) {
    intents.asFlow().shareTo(intentsFlow).launchIn(this)
}
shareTo() is an extension function which does onEach { receiver.emit(it) }, processIntents exists in a base ViewModel, and intentsFlow is a MutableSharedFlow.
fun <T> Flow<T>.shareTo(receiver: MutableSharedFlow<T>): Flow<T> {
    return onEach { receiver.emit(it) }
}
I want to pass emissions coming from the intents Observable to intentsFlow, but it doesn’t work at all and the unit test keeps failing.
@Test(timeout = 4000)
fun `WHEN processIntent() with Rx subject or Observable emissions THEN intentsFlow should receive them`() {
    return runBlocking {
        val actual = mutableListOf<TestNumbersIntent>()
        val intentSubject = PublishSubject.create<TestNumbersIntent>()
        val viewModel = FlowViewModel<TestNumbersIntent, TestNumbersViewState>(
            dispatcher = Dispatchers.Unconfined,
            initialViewState = TestNumbersViewState()
        )

        viewModel.processIntents(intentSubject)

        intentSubject.onNext(OneIntent)
        intentSubject.onNext(TwoIntent)
        intentSubject.onNext(ThreeIntent)

        viewModel.intentsFlow.take(3).toList(actual)

        assertEquals(3, actual.size)
        assertEquals(OneIntent, actual[0])
        assertEquals(TwoIntent, actual[1])
        assertEquals(ThreeIntent, actual[2])
    }
}
The test fails with:
org.junit.runners.model.TestTimedOutException: test timed out after 4000 milliseconds
This works
val ps = PublishSubject.create<Int>()
val mf = MutableSharedFlow<Int>()
val pf = ps.asFlow()
    .onEach {
        mf.emit(it)
    }

launch {
    pf.take(3).collect()
}
launch {
    mf.take(3).collect {
        println("$it") // Prints 1 2 3
    }
}
launch {
    yield() // Without this we suspend indefinitely
    ps.onNext(1)
    ps.onNext(2)
    ps.onNext(3)
}
We need the take(3)s to make sure our program terminates, because neither a MutableSharedFlow nor a Flow converted from a PublishSubject ever completes on its own.
We need the yield because we're working with a single thread, and we have to give the other coroutines a chance to start collecting before the subject emits.
Take 2
This is much better: it doesn't use take, and it cleans up after itself.
After emitting the last item, calling onComplete on the PublishSubject terminates MutableSharedFlow collection. This is a convenience, so that when this code runs it terminates completely. It is not a requirement. You can arrange your Job termination however you like.
Your code never terminating is not related to the emissions never being collected by the MutableSharedFlow. These are separate concerns. The first is due to the fact that neither a flow created from a PublishSubject, nor a MutableSharedFlow, terminates on its own. The PublishSubject flow will terminate when onComplete is called. The MutableSharedFlow will terminate when the coroutine (specifically, its Job) collecting it terminates.
The Flow constructed by PublishSubject.asFlow() drops any emission that happens before the Flow's collector has suspended, waiting for emissions. This introduces a race condition between being ready to collect and the code that calls PublishSubject.onNext().
This, I believe, is the reason why flow collection isn't picking up the onNext emissions in your code.
It's why a yield is required right after we launch the coroutine that collects from psf.
val ps = PublishSubject.create<Int>()
val msf = MutableSharedFlow<Int>()
val psf = ps.asFlow()
    .onEach {
        msf.emit(it)
    }

val j1 = launch {
    psf.collect()
}
yield() // Use this to allow psf.collect to catch up

val j2 = launch {
    msf.collect {
        println("$it") // Prints 1 2 3 4
    }
}
launch {
    ps.onNext(1)
    ps.onNext(2)
    ps.onNext(3)
    ps.onNext(4)
    ps.onComplete()
}

j1.invokeOnCompletion { j2.cancel() }
j2.join()

Compare two sets of files with coroutines in Kotlin

I have written a function that scans files (pictures) from two lists and checks whether a file appears in both lists.
The code below works as expected, but for large sets it takes some time, so I tried to parallelize it with coroutines. But on sets of 100 sample files the program was consistently slower than without coroutines.
The code:
private fun doJob() {
    val filesToCompare = File("C:\\Users\\Tobias\\Desktop\\Test").walk().filter { it.isFile }.toList()
    val allFiles = File("\\\\myserver\\Photos\\photo").walk().filter { it.isFile }.toList()
    println("Files to scan: ${filesToCompare.size}")
    filesToCompare.forEach { file ->
        var multipleDuplicate = 0
        var s = "This file is a duplicate"
        s += "\n${file.absolutePath}"
        allFiles.forEach { possibleDuplicate ->
            if (file != possibleDuplicate) { // only needed when both lists are the same
                // Only files that contain the name get byte comparison, so not every pair is compared
                if (possibleDuplicate.nameWithoutExtension.contains(file.nameWithoutExtension)) {
                    try {
                        if (Files.mismatch(file.toPath(), possibleDuplicate.toPath()) == -1L) {
                            s += "\n${possibleDuplicate.absolutePath}"
                            i++ // i is a counter property declared elsewhere in the class
                            multipleDuplicate++
                            println(s)
                        }
                    } catch (e: Exception) {
                        println(e.message)
                    }
                }
            }
        }
        if (multipleDuplicate > 1) {
            println("This file has $multipleDuplicate duplicate(s)")
        }
    }
    println("Files scanned: ${filesToCompare.size}")
    println("Total number of duplicates found: $i")
}
How have I tried to add the coroutines?
I wrapped the code inside the first forEach in launch { ... }; the idea was that a coroutine would start for each file and the inner loop would run concurrently. I expected the program to be faster, but it took about the same time or longer.
How can I make this code run faster in parallel?
Running each inner loop in a coroutine is a decent approach. The problem likely lies in the dispatcher you were using: if you used runBlocking and launch without a context argument, all of your coroutines ran on a single thread.
Since the work here is mostly blocking IO, you could instead use Dispatchers.IO to launch your coroutines, so they are dispatched across multiple threads. Its parallelism is automatically limited to 64 by default, but if your memory can't handle that many concurrent file reads, you can use Dispatchers.IO.limitedParallelism(n) to reduce the number of threads.
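A minimal sketch of that approach, assuming the lists and the duplicate check from the question's doJob(); the shared counter becomes an AtomicInteger because the inner loops now run on several threads:
import java.io.File
import java.nio.file.Files
import java.util.concurrent.atomic.AtomicInteger
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.launch

suspend fun findDuplicates(filesToCompare: List<File>, allFiles: List<File>) {
    val duplicates = AtomicInteger(0) // thread-safe counter replaces the plain var
    coroutineScope {
        filesToCompare.forEach { file ->
            // One coroutine per file, dispatched on the IO thread pool;
            // use Dispatchers.IO.limitedParallelism(n) instead to cap the thread count.
            launch(Dispatchers.IO) {
                allFiles.forEach { candidate ->
                    try {
                        if (file != candidate &&
                            candidate.nameWithoutExtension.contains(file.nameWithoutExtension) &&
                            Files.mismatch(file.toPath(), candidate.toPath()) == -1L
                        ) {
                            duplicates.incrementAndGet()
                            println("Duplicate: ${candidate.absolutePath}")
                        }
                    } catch (e: Exception) {
                        println(e.message)
                    }
                }
            }
        }
    }
    // coroutineScope returns only after every launched child has completed
    println("Total number of duplicates found: ${duplicates.get()}")
}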

Get value from withTimeout

I'm running a coroutine that is reading from a receiverChannel. I've got this coroutine wrapped within a timeout and I want to get the number of messages it managed to read before the timeout cancels it. Here's what I have:
runBlocking {
    val receivedMessages = withTimeoutOrNull(someTimeout) {
        var found = 0
        while (isActive && found < expectedAmount) {
            val message = incoming.receive()
            // some filtering
            found++
        }
        found
    } ?: 0 // <- to not have null...

    // Currently prints 0, but I want the messages it managed to read
    println("I've received $receivedMessages messages")
}
I know I could use an AtomicInteger, but I would like to stay away from Java specifics here.
Local variables in a coroutine don't need to be atomic because of the happens-before guarantee even though there may be some thread swapping going on. Your code doesn't have any parallelism, so you can use the following:
runBlocking {
    var receivedMessages = 0
    withTimeoutOrNull(someTimeout) {
        while (isActive && receivedMessages < expectedAmount) {
            val message = incoming.receive()
            // some filtering
            receivedMessages++
        }
    }
    println("I've received $receivedMessages messages")
}
If you do have multiple child coroutines running in parallel, you can protect the shared counter with a Mutex. More info here
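For the parallel case, a minimal sketch of the Mutex approach (the counter and coroutine structure here are illustrative, not from the question):
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.joinAll
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

fun main() = runBlocking {
    val mutex = Mutex()
    var receivedMessages = 0
    val workers = List(4) {
        launch(Dispatchers.Default) {
            repeat(1000) {
                // withLock suspends (rather than blocks) while waiting for the lock
                mutex.withLock { receivedMessages++ }
            }
        }
    }
    workers.joinAll()
    println("I've received $receivedMessages messages") // always 4000
}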

KafkaConsumer: `seekToEnd()` does not make consumer consume from latest offset

I have the following code
class Consumer(val consumer: KafkaConsumer<String, ConsumerRecord<String>>) {
    fun run() {
        consumer.seekToEnd(emptyList())
        val pollDuration = 30 // seconds

        while (true) {
            val records = consumer.poll(Duration.ofSeconds(pollDuration))
            // perform record analysis and commitSync()
        }
    }
}
The topic which the consumer is subscribed to continuously receives records. Occasionally, the consumer will crash due to the processing step. When the consumer is then restarted, I want it to consume from the latest offset on the topic (i.e. ignore records that were published to the topic while the consumer was down). I thought the seekToEnd() method would ensure that. However, it seems like the method has no effect at all: the consumer starts to consume from the offset at which it crashed.
What is the correct way to use seekToEnd()?
Edit: The consumer is created with the following configs
fun <T> buildConsumer(valueDeserializer: String): KafkaConsumer<String, T> {
    val props = setupConfig(valueDeserializer)
    Common.setupConsumerSecurityProtocol(props)
    return createConsumer(props)
}

fun setupConfig(valueDeserializer: String): Properties {
    // Configuration setup
    val props = Properties()
    props[ConsumerConfig.GROUP_ID_CONFIG] = config.applicationId
    props[ConsumerConfig.CLIENT_ID_CONFIG] = config.kafka.clientId
    props[ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG] = config.kafka.bootstrapServers
    props[AbstractKafkaSchemaSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG] = config.kafka.schemaRegistryUrl
    props[ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG] = config.kafka.stringDeserializer
    props[ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG] = valueDeserializer
    props[KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG] = "true"
    props[ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG] = config.kafka.maxPollIntervalMs
    props[ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG] = config.kafka.sessionTimeoutMs
    props[ConsumerConfig.ALLOW_AUTO_CREATE_TOPICS_CONFIG] = "false"
    props[ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG] = "false"
    props[ConsumerConfig.AUTO_OFFSET_RESET_CONFIG] = "latest"
    return props
}

fun <T> createConsumer(props: Properties): KafkaConsumer<String, T> {
    val consumer = KafkaConsumer<String, T>(props)
    consumer.subscribe(listOf(config.kafka.inputTopic))
    return consumer
}
I found a solution!
I needed to add a dummy poll as part of the consumer initialization process. Since several Kafka methods are evaluated lazily, a dummy poll is necessary to assign partitions to the consumer. Without it, the consumer tries to seek to the end of partitions that are null. As a result, seekToEnd() has no effect.
It is important that the dummy poll duration is long enough for the partitions to get assigned. For instance, with consumer.poll(Duration.ofSeconds(1)), the partitions did not get time to be assigned before the program moved on to the next method call (i.e. seekToEnd()).
Working code could look something like this
class Consumer(val consumer: KafkaConsumer<String, ConsumerRecord<String>>) {
    fun run() {
        // Initialization
        val pollDuration = 30 // seconds
        consumer.poll(Duration.ofSeconds(pollDuration)) // Dummy poll to get assigned partitions

        // Seek to end and commit new offset
        consumer.seekToEnd(emptyList())
        consumer.commitSync()

        while (true) {
            val records = consumer.poll(Duration.ofSeconds(pollDuration))
            // perform record analysis and commitSync()
        }
    }
}
The seekToEnd method requires the information on the actual partitions (in Kafka terms, TopicPartitions) from which you plan to make your consumer read.
I am not familiar with the Kotlin API, but checking the JavaDocs on the KafkaConsumer method seekToEnd you will see that it asks for a collection of TopicPartitions.
When you pass emptyList(), it falls back to the partitions currently assigned to the consumer, and at that point in your code none have been assigned yet, so the call has no impact at all, just like you observed.
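As an alternative to the dummy poll, the seek can be performed from a ConsumerRebalanceListener, which the consumer invokes during poll() once partitions have actually been assigned. A sketch of this idea (the helper function and its name are illustrative):
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition

fun <K, V> subscribeFromLatest(consumer: KafkaConsumer<K, V>, topic: String) {
    consumer.subscribe(listOf(topic), object : ConsumerRebalanceListener {
        override fun onPartitionsAssigned(partitions: Collection<TopicPartition>) {
            // Called from inside poll() once the group has assigned partitions,
            // so seekToEnd now has concrete partitions to act on
            consumer.seekToEnd(partitions)
        }

        override fun onPartitionsRevoked(partitions: Collection<TopicPartition>) = Unit
    })
}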

How to read all available elements from a Channel in Kotlin

I would like to read all available elements from a channel so that I can do batch processing on them if my receiver is slower than my sender (in the hope that processing a batch will be more performant and allow the receiver to catch up). Unlike this question, I only want to suspend if the channel is empty, not until my batch is full or a timeout expires.
Is there anything built into the standard kotlin library to accomplish this?
I did not find anything in the standard Kotlin library, but here is what I came up with. It suspends only for the first element and then polls all remaining elements. This only really works with a buffered Channel, so that elements ready for processing are queued up and available to poll:
/**
 * Receive all available elements up to [max]. Suspends for the first element if the channel is empty.
 */
internal suspend fun <E> ReceiveChannel<E>.receiveAvailable(max: Int): List<E> {
    if (max <= 0) {
        return emptyList()
    }
    val batch = mutableListOf<E>()
    if (this.isEmpty) {
        // suspend until the next message is ready
        batch.add(receive())
    }
    // Note: poll() was deprecated in kotlinx.coroutines 1.5; tryReceive().getOrNull() is its replacement
    fun pollUntilMax() = if (batch.size >= max) null else poll()

    // consume all other messages that are ready
    var next = pollUntilMax()
    while (next != null) {
        batch.add(next)
        next = pollUntilMax()
    }
    return batch
}
I tested Jake's code and it worked well for me (thanks!). Without the max limit, I got it down to:
suspend fun <E> ReceiveChannel<E>.receiveAvailable(): List<E> {
    val allMessages = mutableListOf<E>()
    allMessages.add(receive())
    var next = poll()
    while (next != null) {
        allMessages.add(next)
        next = poll()
    }
    return allMessages
}
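To illustrate how this might be used for batching, here is a hypothetical demo where the producer runs ahead of the consumer and everything queued so far is drained in one call:
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val channel = Channel<Int>(capacity = 64) // buffered, so pending elements queue up
    launch {
        repeat(9) { channel.send(it) }
    }
    delay(100) // let the producer run ahead of us
    val batch = channel.receiveAvailable()
    println("Processed as one batch: $batch") // [0, 1, 2, 3, 4, 5, 6, 7, 8]
    channel.cancel() // release the channel so runBlocking can finish
}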