How do I concat two Flux.interval? - kotlin

I want to concatenate two Flux. Say I'm making a subscription query to a reactive database: I put the initial result in a Flux and concatenate the update results using concatWith. I then need to use the data I get from the database to make a poll request over and over again. This is what I've tried:
val startSource = "id-1"
val updateSource = "id"

// start subscription
Flux.just(startSource)
    .concatWith(
        // subscription updates
        Flux.interval(Duration.ofSeconds(10))
            .flatMap { update -> Mono.just("$updateSource-$update") }
    )
    .flatMap { id ->
        // use data from subscription to poll status from server
        Flux.interval(Duration.ofSeconds(1))
            .flatMap { isPaymentReady(id) }
    }
    .doOnNext { println("Result $it") }
    .subscribe()

Thread.sleep(20000)
As you can see, I'm using flatMap, and everything works fine until a signal is emitted by the subscription updates. flatMap then starts the inner Flux again, so now I have two poll requests running concurrently. I don't want that; I just want to restart the poll request with the updated data. How can I do that?
update:
I solved it by using switchMap instead of flatMap. It works because switchMap cancels the previous inner publisher whenever the outer Flux emits a new value, so the poll restarts with the updated id instead of running alongside the old one.
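For reference, a minimal sketch of the working version (same isPaymentReady as in the question): each time the outer source emits a new id, switchMap cancels the in-flight inner poll and starts a new one.

val startSource = "id-1"
val updateSource = "id"

Flux.just(startSource)
    .concatWith(
        Flux.interval(Duration.ofSeconds(10))
            .map { update -> "$updateSource-$update" }
    )
    .switchMap { id ->
        // cancelled and re-subscribed whenever a new id arrives above
        Flux.interval(Duration.ofSeconds(1))
            .flatMap { isPaymentReady(id) }
    }
    .doOnNext { println("Result $it") }
    .subscribe()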

Related

How to make several synchronous calls of RxJava Single

I'm having difficulty making sequential calls of an RxJava Single observable. What I mean is that I have a function that makes an HTTP request using Retrofit and returns a Single:
fun loadFriends(): Single<List<Friend>> {
    Log.d("msg", "make http request")
    return webService.getFriends()
}
and if I subscribe from several places at the same time:
loadFriends().subscribeOn(Schedulers.io()).subscribe()
loadFriends().subscribeOn(Schedulers.io()).subscribe()
I want loadFriends() to make only one HTTP request, but in this case I get two.
I know how to solve this problem in a blocking way:
The solution is to make loadFriends() blocking.
private val lock = Object()
private var inMemoryCache: List<Friend>? = null

fun loadFriends(): Single<List<Friend>> {
    return Single.fromCallable {
        if (inMemoryCache == null) {
            synchronized(lock) {
                if (inMemoryCache == null) {
                    inMemoryCache = webService.getFriends().blockingGet()
                }
            }
        }
        inMemoryCache!! // non-null here: it was set under the lock above
    }
}
But I want to solve this problem in a reactive way
You can remedy this by creating one common source for all your consumers to subscribe to, and that source will have the cache() operator invoked against it. The effect of this operator is that the first subscriber's subscription will be delegated downstream (i.e. the network request will be invoked), and subsequent subscribers will see internally cached results produced as a result of that first subscription.
This might look something like this:
class Friends {
    private val friendsSource by lazy { webService.getFriends().cache() }

    fun someFunction() {
        // 1st subscription - friends will be fetched from the network
        friendsSource
            .subscribeOn(Schedulers.io())
            .subscribe()

        // 2nd subscription - friends will be served from the internal cache
        friendsSource
            .subscribeOn(Schedulers.io())
            .subscribe()
    }
}
Note that the cache is indefinite, so if periodically refreshing the list of friends is important you'll need to come up with a way to do so.
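One possible approach, as a minimal sketch (cached, friends(), and invalidate() are hypothetical names, not part of the answer above): hold the cached Single in an AtomicReference so it can be dropped on demand, forcing the next subscriber to trigger a fresh request.

import java.util.concurrent.atomic.AtomicReference

private val cached = AtomicReference<Single<List<Friend>>?>(null)

// Returns the shared Single, creating and caching it on first use.
fun friends(): Single<List<Friend>> =
    cached.updateAndGet { it ?: webService.getFriends().cache() }!!

// Drops the cache; the next friends() call makes a fresh network request.
fun invalidate() {
    cached.set(null)
}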

Multiple conditional inserts of a new entity gives duplicate entry error in R2DBC

Let's consider this function
@Transactional
fun conditionalInsertEntity(dbEntity: DBEntity): Mono<DBEntity> {
    return fetchObjectByPublicId(dbEntity.publicId)
        .switchIfEmpty {
            r2DatabaseClient.insert()
                .into(DBEntity::class.java)
                .using(Flux.just(dbEntity))
                .fetch()
                .one()
                .map { it["entity_id"] as Long }
                .flatMap { fetchObjectById(it) }
        }
}
When I run the above function with the following driver code, I get duplicate-entry errors if the list contains duplicates. Ideally it shouldn't, because the function above already handles the duplicate-insert case:
val result = Flux.fromIterable(listOf(dbEntity1, dbEntity1, dbEntity2))
    .flatMap { conditionalInsertEntity(it) }
    .collectList()
    .block()
I realized that this is an issue of using flatMap instead of concatMap.
concatMap subscribes to the inner publishers sequentially, one at a time, unlike flatMap, which subscribes to them eagerly and lets their results interleave. (more here)
Because I used flatMap, the inner publishers ran concurrently, so each of them checked the DB before any insert had committed and concluded the entity wasn't there yet.
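A minimal sketch of the fixed driver code: with concatMap, each conditionalInsertEntity completes before the next one subscribes, so the second dbEntity1 sees the row created by the first.

val result = Flux.fromIterable(listOf(dbEntity1, dbEntity1, dbEntity2))
    .concatMap { conditionalInsertEntity(it) }
    .collectList()
    .block()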

Why Flux.flatMap() doesn't wait for completion of inner publisher?

Could you please explain what exactly happens in the Flux/Mono returned by HttpClient.response()? I thought the value generated by the HTTP client would NOT be passed downstream until the Mono completed, but I see that tons of requests are generated, which ends with a reactor.netty.internal.shaded.reactor.pool.PoolAcquirePendingLimitException: Pending acquire queue has reached its maximum size of 8 exception. It works as expected (items processed one by one) if I replace the call to testRequest() with Mono.fromCallable { }.
What am I missing?
Test code:
import org.asynchttpclient.netty.util.ByteBufUtils
import reactor.core.publisher.Flux
import reactor.core.publisher.Mono
import reactor.netty.http.client.HttpClient
import reactor.netty.resources.ConnectionProvider

class Test {
    private val client = HttpClient.create(ConnectionProvider.create("meh", 4))

    fun main() {
        Flux.fromIterable(0..99)
            .flatMap { obj ->
                println("Creating request for: $obj")
                testRequest()
                    .doOnError { ex ->
                        println("Failed request for: $obj")
                        ex.printStackTrace()
                    }
                    .map { res -> obj to res }
            }
            .doOnNext { (obj, res) ->
                println("Created request for: $obj ${res.length} characters")
            }
            .collectList().block()!!
    }

    fun testRequest(): Mono<String> {
        return client.get()
            .uri("https://projectreactor.io/docs/netty/release/reference/index.html#_connection_pool")
            .responseContent()
            .reduce(StringBuilder()) { sb, buf ->
                sb.append(ByteBufUtils.byteBuf2String(Charsets.UTF_8, buf))
            }
            .map { it.toString() }
    }
}
When you create the ConnectionProvider like this, ConnectionProvider.create("meh", 4), you get a connection pool with max connections 4 and max pending requests 8 (twice the max connections, by default). See here for more about this.
When you use flatMap, this means "Transform the elements emitted by this Flux asynchronously into Publishers, then flatten these inner publishers into a single Flux through merging, which allow them to interleave". See here for more about this.
So what happens is that you are trying to run all 100 requests simultaneously: the pool hands out 4 connections, queues 8 more acquisitions, and rejects the rest.
So you have two options:
If you want to use flatMap, increase the number of pending requests allowed by the pool.
If you want to keep the number of pending requests, consider using concatMap instead of flatMap (sketched below), which means "Transform the elements emitted by this Flux asynchronously into Publishers, then flatten these inner publishers into a single Flux, sequentially and preserving order using concatenation". See here for more about this.
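A minimal sketch of the second option, using the same testRequest() as above: concatMap subscribes to one request at a time, so the pending-acquire queue never fills.

Flux.fromIterable(0..99)
    .concatMap { obj ->
        testRequest().map { res -> obj to res }
    }
    .doOnNext { (obj, res) ->
        println("Created request for: $obj ${res.length} characters")
    }
    .collectList().block()!!

Note that flatMap also has an overload that caps concurrency, e.g. flatMap({ obj -> testRequest().map { obj to it } }, 4), which keeps some parallelism while staying within the pool limits.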

Kotlin Coroutines - unlimited stream to fan out batches

I'm looking to implement a pipeline for processing an infinite stream of messages. I'm new to coroutines and trying to follow along with the docs, but I'm not confident I'm doing the right thing.
My infinite stream is of batches of records, and I'd like to fan out the processing of each record to a coroutine, then wait for the batch to finish (to log stats and such) before continuing to the next batch.
-> process [record] \
source -> [records] -> process [record] -> [log batch stats]
-> process [record] /
|------------------- while(true) -------------------|
My plan was to have two Channels: one for the infinite stream, and one for the intermediate records, which fills up and empties on each batch.
runBlocking {
    val infinite: Channel<List<Record>> = produce { send(source.getBatch()) }
    val records = Channel<Record>(Channel.Factory.UNLIMITED)
    while (true) {
        infinite.receive().forEach { records.send(it) }
        while (!records.isEmpty) {
            launch { process(records.receive()) }
        }
        // ??? Wait for jobs?
        logBatchStats()
    }
}
From googling, it seems that waiting for jobs is discouraged, plus I wasn't sure whether calling .map on a channel would actually receive messages in order to convert them to jobs:
records.map { record -> launch { process(record) } }
yields a Channel<Job>. It seems I can call .toList() on it to collapse it, but then I need to join the jobs? Again, Google suggested doing that via a parent job, but I'm not really sure how to do that with launch.
Anyway, very much a n00b at this.
Thanks for the help.
I don't see a reason to have two channels here. You can iterate directly over the list of records. And you should use async instead of launch; then you can use await, or better awaitAll, on the list of results:
val infinite: ReceiveChannel<List<Record>> = produce { ... }
while (true) {
    val resultsDeferred = infinite.receive().map {
        async {
            process(it)
        }
    }
    val results = resultsDeferred.awaitAll()
    logBatchStats()
}
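For completeness, a self-contained sketch of the same pattern with toy stand-ins (Int records, a doubling step, and println are placeholders for the real source.getBatch(), process(), and logBatchStats()):

import kotlinx.coroutines.*
import kotlinx.coroutines.channels.*

fun main() = runBlocking {
    // stand-in for the real infinite source of batches
    val infinite: ReceiveChannel<List<Int>> = produce {
        var next = 0
        while (true) send(listOf(next++, next++, next++))
    }
    repeat(2) { // bounded here so the demo terminates; the real loop is while(true)
        val results = infinite.receive()
            .map { record -> async { record * 2 } } // fan out: one coroutine per record
            .awaitAll()                             // wait for the whole batch
        println("Batch done: $results")
    }
    coroutineContext.cancelChildren() // stop the producer so runBlocking can return
}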

Spring Web Flux web client doesn't receive value one by one

#GetMapping("/test")
fun fluxTest(): Flux<Int> {
return Flux.create {em ->
Thread{
(0..10).forEach{
em.next(it)
Thread.sleep(1000)
}
em.complete()
}.run()
}
}
So the code above is a Spring WebFlux controller method meant to emit the numbers 0 to 10 at one-second intervals.
This is my client code.
val client = WebClient.builder()
    .baseUrl("http://localhost:8083/api/v1")
    .build()

val disposable = client.get()
    .uri("/test")
    .retrieve()
    .bodyToFlux(Int::class.java)
    .subscribe({
        println("Value arrived: $it")
    }, { err ->
        err.printStackTrace()
    })
The issue is that the client program prints 0-10 all at once rather than one value per second. It doesn't print values from the server as they arrive; it prints all the received values only once the stream completes.
Can anyone help me with this issue?
Thanks
Looks like you should enable Server-Sent Events. The easy way is to add a produces attribute to the endpoint, like this:
@GetMapping(path = ["/test"], produces = [MediaType.TEXT_EVENT_STREAM_VALUE])
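Applied to the controller above, a minimal sketch (with the hand-rolled Thread swapped for Flux.interval, which avoids blocking a thread):

@GetMapping(path = ["/test"], produces = [MediaType.TEXT_EVENT_STREAM_VALUE])
fun fluxTest(): Flux<Int> =
    Flux.interval(Duration.ofSeconds(1)) // one element per second
        .take(11)                        // emits 0..10, then completes
        .map { it.toInt() }

With TEXT_EVENT_STREAM_VALUE, Spring writes each element as its own Server-Sent Event, so the client's bodyToFlux receives the values one by one instead of a single buffered JSON array.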