Spring WebFlux WebClient doesn't receive values one by one - spring-webflux

#GetMapping("/test")
fun fluxTest(): Flux<Int> {
return Flux.create {em ->
Thread{
(0..10).forEach{
em.next(it)
Thread.sleep(1000)
}
em.complete()
}.run()
}
}
The code above is a Spring controller method that emits the numbers 0 through 10 at one-second intervals.
This is my client code.
val client = WebClient.builder()
    .baseUrl("http://localhost:8083/api/v1")
    .build()
val disposable = client.get()
    .uri("/test")
    .retrieve()
    .bodyToFlux(Int::class.java)
    .subscribe({
        println("Value arrived: $it")
    }, { err ->
        err.printStackTrace()
    })
The issue is that the client prints 0 through 10 all at once, rather than one by one at one-second intervals.
In other words, it doesn't print the values as they arrive from the server; it prints everything only once the stream has completed.
Can anyone help me with this issue?
Thanks

Looks like you should enable Server-Sent Events; the easy way is just to add a produces attribute to the endpoint, like this:
@GetMapping(path = ["/test"], produces = [MediaType.TEXT_EVENT_STREAM_VALUE])
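For context, a complete streaming controller might look like the sketch below. It keeps the /test endpoint from the question but replaces the hand-rolled Thread with Flux.interval, the more idiomatic way to emit on a schedule; everything apart from the path and media type is illustrative.

import org.springframework.http.MediaType
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController
import reactor.core.publisher.Flux
import java.time.Duration

@RestController
class TestController {

    // text/event-stream makes WebFlux flush each element as it is produced
    // instead of buffering the whole body as one JSON array.
    @GetMapping(path = ["/test"], produces = [MediaType.TEXT_EVENT_STREAM_VALUE])
    fun fluxTest(): Flux<Int> =
        Flux.interval(Duration.ofSeconds(1)) // one tick per second
            .take(11)                        // 0..10
            .map { it.toInt() }
}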

Related

How do I concat two Flux.interval?

I want to concat two Flux. Let's say I'm making a subscription query to some reactive database: I put the initial result in a Flux and concatenate the updates using concatWith. I then need to use the data I got from the database to make a poll request over and over again. This is what I've tried.
val startSource = "id-1"
val updateSource = "id"
// start subscription
Flux.just(startSource)
.concatWith (
// subscription updates
Flux.interval(Duration.ofSeconds(10))
.flatMap { update -> Mono.just("$updateSource-$update") }
)
.flatMap { id ->
// use data from subscription to poll status from server
Flux.interval(Duration.ofSeconds(1))
.flatMap { isPaymentReady(id) }
}
.doOnNext { println("Result $it") }
.subscribe()
Thread.sleep(20000)
As you can see, I'm using flatMap, and everything works fine until a signal is emitted by the subscription updates. flatMap then starts a new inner sequence, so I end up with two poll requests running on different threads. I don't want that; I just want to restart the poll request with the updated data. How can I do that?
Update:
I solved it by using switchMap instead of flatMap. It works because switchMap cancels the previous inner publisher whenever the outer Flux emits a new value, whereas flatMap keeps every inner publisher running and merges their results.
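As a minimal sketch of that fix (reusing startSource, updateSource, and the asker's isPaymentReady function from the question), only the operator changes:

Flux.just(startSource)
    .concatWith(
        Flux.interval(Duration.ofSeconds(10))
            .flatMap { update -> Mono.just("$updateSource-$update") }
    )
    .switchMap { id ->
        // switchMap cancels the previous polling Flux as soon as a new id
        // arrives, so only one poll loop is active at any time
        Flux.interval(Duration.ofSeconds(1))
            .flatMap { isPaymentReady(id) }
    }
    .doOnNext { println("Result $it") }
    .subscribe()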

KafkaConsumer: `seekToEnd()` does not make consumer consume from latest offset

I have the following code
// T is the record value type produced by the configured deserializer
class Consumer<T>(val consumer: KafkaConsumer<String, T>) {
    fun run() {
        consumer.seekToEnd(emptyList())
        val pollDuration = 30 // seconds
        while (true) {
            val records = consumer.poll(Duration.ofSeconds(pollDuration))
            // perform record analysis and commitSync()
        }
    }
}
The topic the consumer is subscribed to continuously receives records. Occasionally the consumer crashes during the processing step. When it is then restarted, I want it to consume from the latest offset on the topic (i.e. to ignore records that were published while the consumer was down). I thought the seekToEnd() method would ensure that. However, it seems to have no effect at all: the consumer resumes from the offset at which it crashed.
What is the correct way to use seekToEnd()?
Edit: The consumer is created with the following configs
fun <T> buildConsumer(valueDeserializer: String): KafkaConsumer<String, T> {
    val props = setupConfig(valueDeserializer)
    Common.setupConsumerSecurityProtocol(props)
    return createConsumer(props)
}

fun setupConfig(valueDeserializer: String): Properties {
    // Configuration setup
    val props = Properties()
    props[ConsumerConfig.GROUP_ID_CONFIG] = config.applicationId
    props[ConsumerConfig.CLIENT_ID_CONFIG] = config.kafka.clientId
    props[ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG] = config.kafka.bootstrapServers
    props[AbstractKafkaSchemaSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG] = config.kafka.schemaRegistryUrl
    props[ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG] = config.kafka.stringDeserializer
    props[ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG] = valueDeserializer
    props[KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG] = "true"
    props[ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG] = config.kafka.maxPollIntervalMs
    props[ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG] = config.kafka.sessionTimeoutMs
    props[ConsumerConfig.ALLOW_AUTO_CREATE_TOPICS_CONFIG] = "false"
    props[ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG] = "false"
    props[ConsumerConfig.AUTO_OFFSET_RESET_CONFIG] = "latest"
    return props
}

fun <T> createConsumer(props: Properties): KafkaConsumer<String, T> {
    val consumer = KafkaConsumer<String, T>(props)
    consumer.subscribe(listOf(config.kafka.inputTopic))
    return consumer
}
I found a solution!
I needed to add a dummy poll as part of the consumer initialization. Since several Kafka methods are evaluated lazily, a dummy poll is needed to get partitions assigned to the consumer. Without it, the consumer tries to seek to the end of partitions that have not been assigned yet, so seekToEnd() has no effect.
It is important that the dummy poll duration is long enough for the partitions to get assigned. For instance, with consumer.poll(Duration.ofSeconds(1)), the partitions did not have time to be assigned before the program moved on to the next call (i.e. seekToEnd()).
Working code could look something like this:
class Consumer<T>(val consumer: KafkaConsumer<String, T>) {
    fun run() {
        // Initialization
        val pollDuration = 30 // seconds
        consumer.poll(Duration.ofSeconds(pollDuration)) // dummy poll to get partitions assigned
        // Seek to end and commit the new offset
        consumer.seekToEnd(emptyList())
        consumer.commitSync()
        while (true) {
            val records = consumer.poll(Duration.ofSeconds(pollDuration))
            // perform record analysis and commitSync()
        }
    }
}
The seekToEnd method requires information about the actual partitions (in Kafka terms, TopicPartitions) whose end your consumer should seek to.
I am not familiar with the Kotlin API, but checking the JavaDocs of KafkaConsumer's seekToEnd method, you will see that it asks for a collection of TopicPartitions.
As you are currently using emptyList(), it will have no impact at all, just as you observed.
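A sketch of passing real partitions rather than emptyList() might look like the following; note that with subscribe(), partitions are only assigned after a poll(), which is why the accepted solution above needs the dummy poll in the first place:

import org.apache.kafka.common.TopicPartition
import java.time.Duration

consumer.poll(Duration.ofSeconds(5))                        // ensure partitions are assigned
val partitions: Set<TopicPartition> = consumer.assignment() // the partitions this consumer owns
consumer.seekToEnd(partitions)                              // seek each one to its end offset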

How to make several synchronous calls of an RxJava Single

I have difficulties making sequential calls to an RxJava Single observable. What I mean is that I have a function that makes an HTTP request using Retrofit and returns a Single:
fun loadFriends(): Single<List<Friend>> {
    Log.d("msg", "make http request")
    return webService.getFriends()
}
and if I subscribe from several places at the same time:
loadFriends().subscribeOn(Schedulers.io()).subscribe()
loadFriends().subscribeOn(Schedulers.io()).subscribe()
I want loadFriends() to make only one HTTP request, but in this case I get two.
I know how to solve this problem in a blocking way: the solution is to make loadFriends() blocking.
private val lock = Object()
private var inMemoryCache: List<Friend>? = null

fun loadFriends(): Single<List<Friend>> {
    return Single.fromCallable {
        if (inMemoryCache == null) {
            synchronized(lock) {
                if (inMemoryCache == null) {
                    inMemoryCache = webService.getFriends().blockingGet()
                }
            }
        }
        inMemoryCache!!
    }
}
But I want to solve this problem in a reactive way.
You can remedy this by creating one common source for all your consumers to subscribe to, and that source will have the cache() operator invoked against it. The effect of this operator is that the first subscriber's subscription will be delegated downstream (i.e. the network request will be invoked), and subsequent subscribers will see internally cached results produced as a result of that first subscription.
This might look something like this:
class Friends {
    private val friendsSource by lazy { webService.getFriends().cache() }

    fun someFunction() {
        // 1st subscription - friends will be fetched from network
        friendsSource
            .subscribeOn(Schedulers.io())
            .subscribe()

        // 2nd subscription - friends will be fetched from internal cache
        friendsSource
            .subscribeOn(Schedulers.io())
            .subscribe()
    }
}
Note that the cache is indefinite, so if periodically refreshing the list of friends is important you'll need to come up with a way to do so.
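One option for making the cache refreshable is to hold the cached Single in an AtomicReference and drop it on demand. A minimal sketch, assuming RxJava 2 and the same webService.getFriends() and Friend type from the question; FriendsRepository, WebService, and invalidate() are illustrative names:

import io.reactivex.Single
import java.util.concurrent.atomic.AtomicReference

class FriendsRepository(private val webService: WebService) {
    private val cached = AtomicReference<Single<List<Friend>>?>(null)

    // First call creates and caches the shared source; later calls reuse it.
    fun friends(): Single<List<Friend>> =
        cached.updateAndGet { it ?: webService.getFriends().cache() }!!

    // Drop the cached source; the next friends() call makes a fresh request.
    fun invalidate() {
        cached.set(null)
    }
}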

Why doesn't Flux.flatMap() wait for completion of the inner publisher?

Could you please explain what exactly happens in the Flux/Mono returned by HttpClient.response()? I thought the value generated by the HTTP client would NOT be passed downstream until the Mono completed, but I see that tons of requests are generated, which ends with a reactor.netty.internal.shaded.reactor.pool.PoolAcquirePendingLimitException: Pending acquire queue has reached its maximum size of 8 exception. It works as expected (items are processed one by one) if I replace the call to testRequest() with Mono.fromCallable { }.
What am I missing?
Test code:
import org.asynchttpclient.netty.util.ByteBufUtils
import reactor.core.publisher.Flux
import reactor.core.publisher.Mono
import reactor.netty.http.client.HttpClient
import reactor.netty.resources.ConnectionProvider

class Test {
    private val client = HttpClient.create(ConnectionProvider.create("meh", 4))

    fun main() {
        Flux.fromIterable(0..99)
            .flatMap { obj ->
                println("Creating request for: $obj")
                testRequest()
                    .doOnError { ex ->
                        println("Failed request for: $obj")
                        ex.printStackTrace()
                    }
                    .map { res -> obj to res }
            }
            .doOnNext { (obj, res) ->
                println("Created request for: $obj ${res.length} characters")
            }
            .collectList().block()!!
    }

    fun testRequest(): Mono<String> {
        return client.get()
            .uri("https://projectreactor.io/docs/netty/release/reference/index.html#_connection_pool")
            .responseContent()
            .reduce(StringBuilder()) { sb, buf ->
                sb.append(ByteBufUtils.byteBuf2String(Charsets.UTF_8, buf))
            }
            .map { it.toString() }
    }
}
When you create the ConnectionProvider like this, ConnectionProvider.create("meh", 4), you get a connection pool with at most 4 connections and at most 8 pending requests (the default pending-queue size is twice the max connections). See the Reactor Netty reference documentation for more about this.
When you use flatMap, this means: "Transform the elements emitted by this Flux asynchronously into Publishers, then flatten these inner publishers into a single Flux through merging, which allow them to interleave." See the Flux.flatMap Javadoc for more about this.
So what happens is that you are trying to run all the requests simultaneously.
So you have two options:
If you want to use flatMap, then increase the number of allowed pending requests, or cap flatMap's concurrency (see the sketch after this list).
If you want to keep the number of pending requests as is, you may consider using concatMap instead of flatMap, which means: "Transform the elements emitted by this Flux asynchronously into Publishers, then flatten these inner publishers into a single Flux, sequentially and preserving order using concatenation." See the Flux.concatMap Javadoc for more about this.
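For instance, flatMap takes an optional concurrency argument, so capping in-flight requests at the pool size avoids overflowing the pending queue. A minimal sketch of the pipeline from the question with that cap applied:

Flux.fromIterable(0..99)
    // at most 4 inner publishers are subscribed at once, matching the
    // ConnectionProvider's max connections
    .flatMap({ obj -> testRequest().map { res -> obj to res } }, 4)
    .doOnNext { (obj, res) ->
        println("Created request for: $obj ${res.length} characters")
    }
    .collectList().block()!!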

How to inform a Flux that I have an item ready to publish?

I am trying to make a class that would take incoming user events, process them and then pass the result to whoever subscribed to it:
class EventProcessor
{
    val flux: Flux<Result>

    fun onUserEvent1(e: Event)
    {
        val result = process(e)
        // Notify flux that I have a new result
    }

    fun onUserEvent2(e: Event)
    {
        val result = process(e)
        // Notify flux that I have a new result
    }

    fun process(e: Event): Result
    {
        ...
    }
}
Then the client code can subscribe to EventProcessor::flux and get notified each time a user event has been successfully processed.
However, I do not know how to do this. I tried to construct the flux with the Flux::generate function like this:
class EventProcessor
{
    private var sink: SynchronousSink<Result>? = null
    val flux: Flux<Result> = Flux.generate { sink = it }

    fun onUserEvent1(e: Event)
    {
        val result = process(e)
        sink?.next(result)
    }

    fun onUserEvent2(e: Event)
    {
        val result = process(e)
        sink?.next(result)
    }
    ....
}
But this does not work, since I am supposed to call next on the SynchronousSink<Result> immediately inside the generator callback. Storing the sink for later, as in my example, fails with:
reactor.core.Exceptions$ErrorCallbackNotImplemented: java.lang.IllegalStateException: The generator didn't call any of the SynchronousSink method
I was also thinking about the Flux::merge and Flux::concat methods, but these are static and create a new Flux. I just want to push things into the existing Flux, so that whoever holds it gets notified.
Based on my limited understanding of reactive types, this is supposed to be a common use case, yet I find it very difficult to actually implement. This makes me suspect that I am missing something crucial or that I am using the library in a way it was not intended to be used. If that is the case, any advice is warmly welcome.
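This push-style use case is what Reactor's Sinks API addresses. Below is a minimal sketch, assuming Reactor 3.4+ (Event and Result are the asker's types; on older versions, processors such as EmitterProcessor played this role):

import reactor.core.publisher.Flux
import reactor.core.publisher.Sinks

class EventProcessor
{
    // A multicast sink we can push into imperatively from event callbacks
    private val sink: Sinks.Many<Result> =
        Sinks.many().multicast().onBackpressureBuffer()

    // Subscribers receive every Result emitted after they subscribe
    val flux: Flux<Result> = sink.asFlux()

    fun onUserEvent1(e: Event)
    {
        sink.tryEmitNext(process(e))
    }

    fun onUserEvent2(e: Event)
    {
        sink.tryEmitNext(process(e))
    }

    fun process(e: Event): Result
    {
        TODO() // the asker's processing logic goes here
    }
}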