Subscribe to Flux WebSocket inbound connection with Project Reactor without blocking?

In the code below, IntelliJ warns that subscribe should not be called in a blocking scope. Unfortunately, subscribe seems to be the most intuitive way of associating a consumer with the inbound message stream. Is there a better way?
The code snippet is in Kotlin, based on the example Java code in the Project Reactor documentation.
I want to subscribe to the inbound messages with a consumer that is injected, or expose the inbound message Flux in a way that other consumers can access and subscribe to it, and I don't want this to be blocking.
import io.netty.buffer.Unpooled
import io.netty.util.CharsetUtil
import reactor.core.publisher.Flux
import reactor.netty.http.client.HttpClient

fun main() {
    HttpClient.create()
        .websocket()
        .uri("wss://echo.websocket.org")
        .handle { inbound, outbound ->
            inbound.receive()
                .asString()
                .take(1)
                .subscribe(
                    { println(it) },
                    { println("error $it") },
                    { println("completed") }
                )
            val msgBytes = "hello".toByteArray(CharsetUtil.ISO_8859_1)
            outbound.send(Flux.just(Unpooled.wrappedBuffer(msgBytes))).neverComplete()
        }
        .blockLast()
}

We found a non-blocking alternative to subscribe: then and zip. Example in Kotlin:
import io.netty.buffer.Unpooled
import io.netty.util.CharsetUtil.UTF_8
import reactor.core.publisher.Flux
import reactor.netty.http.client.HttpClient

fun main() {
    val outgoingMessagesFlux = Flux.just(Unpooled.wrappedBuffer("hello".toByteArray(UTF_8)))
    HttpClient.create()
        .websocket()
        .uri("wss://echo.websocket.org")
        .handle { inbound, outbound ->
            val thenInbound = inbound.receive()
                .asString()
                .doOnNext { println(it) }
                .then()
            val thenOutbound = outbound.send(outgoingMessagesFlux).neverComplete()
            Flux.zip(thenInbound, thenOutbound).then()
        }
        .blockLast()
}
This was based on the Spring WebFlux Netty WebSocket client source code implementation and the current spring-framework documentation.
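Regarding the original goal of exposing the inbound messages so that an injected consumer can subscribe, here is a minimal sketch that is not from the original answer; it assumes the Sinks API available since Reactor 3.4 and bridges the inbound stream into a Sinks.Many instead of calling subscribe inside the handler:

import io.netty.buffer.Unpooled
import io.netty.util.CharsetUtil.UTF_8
import reactor.core.publisher.Flux
import reactor.core.publisher.Sinks
import reactor.netty.http.client.HttpClient

fun main() {
    // Externally visible stream of inbound messages; any consumer
    // (e.g. an injected one) can subscribe to inboundSink.asFlux().
    val inboundSink = Sinks.many().multicast().onBackpressureBuffer<String>()
    inboundSink.asFlux().subscribe { println("consumer got: $it") }

    HttpClient.create()
        .websocket()
        .uri("wss://echo.websocket.org")
        .handle { inbound, outbound ->
            // Forward inbound frames into the sink instead of subscribing here.
            val receiving = inbound.receive()
                .asString()
                .doOnNext { inboundSink.tryEmitNext(it) }
                .then()
            val sending = outbound
                .send(Flux.just(Unpooled.wrappedBuffer("hello".toByteArray(UTF_8))))
                .neverComplete()
            Flux.zip(receiving, sending).then()
        }
        .blockLast()
}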

Related

How do I properly use Kotlin Flow in Ktor streaming responses?

I am trying to use Kotlin Flow to process some data asynchronously and in parallel, and stream the responses to the client as they occur, as opposed to waiting until all the jobs are complete.
After unsuccessfully trying to just send the flow itself to the response, like this: call.respond(HttpStatusCode.OK, flow.toList())
... I tinkered for hours trying to figure it out, and came up with the following. Is this correct? It seems there should be a more idiomatic way of sending a Flow<MyData> as a response, like one can with a Flux<MyData> in Spring Boot.
Also, it seems that using the below method does not cancel the Flow when the HTTP request is cancelled, so how would one cancel it in Ktor?
// Imports are an assumption (Ktor 2.x / Koin / jackson-module-kotlin package names).
import com.fasterxml.jackson.module.kotlin.jsonMapper
import io.ktor.http.ContentType
import io.ktor.http.HttpStatusCode
import io.ktor.server.response.respondOutputStream
import io.ktor.server.routing.Route
import io.ktor.server.routing.put
import kotlin.random.Random
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.flow
import org.koin.ktor.ext.inject

data class MyData(val number: Int)

class MyService {
    fun updateAllJobs(): Flow<MyData> =
        flow {
            buildList { repeat(10) { add(MyData(Random.nextInt())) } }
                // Docs recommend using `onEach` to "delay" elements.
                // However, if I delay here instead of in `map`, all elements are held
                // and emitted at once at the very end of the cumulative delay.
                // .onEach { delay(500) }
                .map {
                    // I want to emit elements in a "stream" as each is computed.
                    delay(500)
                    emit(it)
                }
        }
}

fun Route.jobRouter() {
    val service: MyService by inject() // injected with Koin
    put("/jobs") {
        val flow = service.updateAllJobs()
        // Just using the default Jackson mapper for this example.
        val mapper = jsonMapper { }
        // `respondOutputStream` seems to be the only way to send a Flow as a stream.
        call.respondOutputStream(ContentType.Application.Json, HttpStatusCode.OK) {
            flow.collect {
                println(it)
                // The data does not stream without the newline and `flush()` call.
                write((mapper.writeValueAsString(it) + "\n").toByteArray())
                flush()
            }
        }
    }
}
The best solution I was able to find (although I don't like it) is to use respondBytesWriter to write data to a response body channel. In the handler, a new job is launched to collect the flow, so that it can be cancelled if the channel is closed for writing (the HTTP request is cancelled):
fun Route.jobRouter(service: MyService) {
    put("/jobs") {
        val flow = service.updateAllJobs()
        val mapper = jsonMapper {}
        call.respondBytesWriter(contentType = ContentType.Application.Json) {
            val job = launch {
                flow.collect {
                    println(it)
                    try {
                        writeStringUtf8(mapper.writeValueAsString(it))
                        flush()
                    } catch (_: ChannelWriteException) {
                        cancel()
                    }
                }
            }
            job.join()
        }
    }
}
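For completeness, a hypothetical way to observe the streaming from the consumer side (assuming the Ktor 2.x client and the /jobs route above; none of this is in the original answer) is to read the response channel line by line as elements arrive:

import io.ktor.client.HttpClient
import io.ktor.client.request.preparePut
import io.ktor.client.statement.bodyAsChannel
import io.ktor.utils.io.readUTF8Line

suspend fun readJobStream() {
    val client = HttpClient()
    client.preparePut("http://localhost:8080/jobs").execute { response ->
        val channel = response.bodyAsChannel()
        // Each NDJSON line should arrive roughly every 500 ms,
        // not all at once when the flow completes.
        while (true) {
            val line = channel.readUTF8Line() ?: break
            println("received: $line")
        }
    }
    client.close()
}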

Spring WebFlux handler for Kotlin SharedFlow

The following example works in a Spring WebFlux handler with a flow builder:
suspend fun getDummyFlow(req: ServerRequest): ServerResponse {
    val flow = flow<String> { // flow builder
        for (i in 1..3) {
            delay(1000) // pretend we are doing something useful here
            emit("<p>Hello $i</p>") // emit next value
        }
    }
    return ServerResponse
        .ok()
        .contentType(MediaType.TEXT_HTML)
        .bodyAndAwait(flow)
}
Yet I need to build the flow with a MutableSharedFlow, which is not working in Spring WebFlux. Here is an example:
suspend fun getDummyFlow(req: ServerRequest): ServerResponse {
    return coroutineScope {
        val flow = MutableSharedFlow<String>()
        launch {
            for (i in 1..3) {
                delay(1000) // pretend we are doing something useful here
                flow.emit("<p>Hello $i</p>") // emit next value
            }
        }
        ServerResponse
            .ok()
            .contentType(MediaType.TEXT_HTML)
            .bodyAndAwait(
                flow
                    .asSharedFlow()
                    .take(3)
            )
    }
}
My implementation is based on the example in the SharedFlow documentation.
Yet any HTTP GET request to this endpoint stays pending, waiting for a response, whereas the former example with the flow builder receives the response progressively and works fine.
I have already traced my code in the debugger, and I see .bodyAndAwait(..) being called and then emit() in both cases.
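One plausible explanation (an assumption; no confirmed answer is quoted here): a MutableSharedFlow created without replay drops values emitted while it has no subscribers, and the response body is only subscribed after the handler returns, while coroutineScope first waits for the launched emitter to finish. The three values are therefore lost, take(3) never completes, and the request hangs. A minimal sketch of a fix under that assumption gives the flow a replay buffer:

suspend fun getDummyFlow(req: ServerRequest): ServerResponse = coroutineScope {
    // replay = 3 keeps the emitted values available for the late subscriber.
    // Note: the response is then delivered after the handler returns,
    // not progressively while the values are being produced.
    val flow = MutableSharedFlow<String>(replay = 3)
    launch {
        for (i in 1..3) {
            delay(1000)
            flow.emit("<p>Hello $i</p>")
        }
    }
    ServerResponse
        .ok()
        .contentType(MediaType.TEXT_HTML)
        .bodyAndAwait(flow.asSharedFlow().take(3))
}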

Is it possible to bridge between reactive code and Kotlin coroutines without blocking with runBlocking?

I am writing a KafkaConsumer in Kotlin using the reactive framework. The problem is that the whole project structure is based on Kotlin coroutines, and the Kafka consumer now follows a Flux publisher pipeline.
I got it to work with runBlocking, however I am aware that it is not a good idea to have blocking code in our project.
I tried using @KafkaListener (it fails when adding the suspend modifier).
import com.github.avrokotlin.avro4k.Avro
import kotlinx.coroutines.runBlocking
import org.apache.avro.generic.GenericRecord
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.springframework.boot.CommandLineRunner
import org.springframework.kafka.core.reactive.ReactiveKafkaConsumerTemplate
import org.springframework.stereotype.Component

@Component
class KafkaConsumer(
    val slackNotificationService: SlackNotificationService,
    val consumerTemplate: ReactiveKafkaConsumerTemplate<String, GenericRecord>
) : CommandLineRunner {
    suspend fun sendNotification(record: ConsumerRecord<String, GenericRecord>) {
        val tagNotification = Avro.default.fromRecord(TagNotification.serializer(), record.value())
        slackNotificationService.notifyUsers(tagNotification)
    }

    override fun run(vararg args: String?) {
        consumerTemplate
            .receiveAutoAck()
            .subscribe {
                runBlocking {
                    sendNotification(it)
                }
            }
    }
}
I can successfully receive the Kafka messages, and the rest of the project is working fine, but I couldn't figure out how to create this non-blocking bridge here.
Does anyone know a better way to handle this?
Thank you in advance :)
If you want to invoke sendNotification() asynchronously, then create a CoroutineScope and launch coroutines with it:
class KafkaConsumer(
    ...
    private val coroutineScope = CoroutineScope(Dispatchers.Default)
    ...
            .subscribe {
                coroutineScope.launch {
                    sendNotification(it)
                }
            }
If KafkaConsumer may be destroyed/shut down, then it is advised to invoke coroutineScope.cancel() when that happens.
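Putting the pieces together, a fuller sketch of this approach might look as follows. Two details are assumptions, not part of the original answer: a SupervisorJob so one failed notification does not cancel the others, and Spring's DisposableBean as the shutdown hook for coroutineScope.cancel().

import com.github.avrokotlin.avro4k.Avro
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.SupervisorJob
import kotlinx.coroutines.cancel
import kotlinx.coroutines.launch
import org.apache.avro.generic.GenericRecord
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.springframework.beans.factory.DisposableBean
import org.springframework.boot.CommandLineRunner
import org.springframework.kafka.core.reactive.ReactiveKafkaConsumerTemplate
import org.springframework.stereotype.Component

@Component
class KafkaConsumer(
    val slackNotificationService: SlackNotificationService,
    val consumerTemplate: ReactiveKafkaConsumerTemplate<String, GenericRecord>
) : CommandLineRunner, DisposableBean {

    private val coroutineScope = CoroutineScope(Dispatchers.Default + SupervisorJob())

    suspend fun sendNotification(record: ConsumerRecord<String, GenericRecord>) {
        // Same body as in the question.
        val tagNotification = Avro.default.fromRecord(TagNotification.serializer(), record.value())
        slackNotificationService.notifyUsers(tagNotification)
    }

    override fun run(vararg args: String?) {
        consumerTemplate
            .receiveAutoAck()
            .subscribe {
                // launch instead of runBlocking: the Reactor thread is not blocked.
                coroutineScope.launch { sendNotification(it) }
            }
    }

    override fun destroy() {
        // Cancel in-flight notification coroutines when the bean is destroyed.
        coroutineScope.cancel()
    }
}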

Is there any way to make a fake call from Ktor to itself, to make a request pass through the whole pipeline?

I have a Ktor web server that successfully responds to HTTP requests. Now there is a need to read data from a Kafka topic and process it.
Is there any way to send the data I've read to Ktor, as if the data came from outside, to make it pass through the whole pipeline, including ContentNegotiation and other features?
The Application class has a method execute(), which takes an ApplicationCall, but I've found zero examples of how to fill in my implementation of this class properly. Especially the route: do I need a real one? It would be nice if this route were private and unavailable from the outside.
You can use the withTestApplication function to test your application's modules without making an actual network connection. Here is an example:
import io.ktor.application.*
import io.ktor.http.*
import io.ktor.request.*
import io.ktor.response.*
import io.ktor.routing.*
import io.ktor.server.testing.*
import org.junit.jupiter.api.Test
import kotlin.test.assertEquals

class SimpleTest {
    @Test
    fun test() = withTestApplication {
        application.module()
        // more modules to test here
        handleRequest(HttpMethod.Post, "/post") {
            setBody("kafka data")
        }.response.let { response ->
            assertEquals("I get kafka data", response.content)
        }
    }
}

fun Application.module() {
    routing {
        post("/post") {
            call.respondText { "I get ${call.receiveText()}" }
        }
    }
}
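For what it's worth, in newer Ktor versions (2.x, an assumption about the reader's setup; the answer above uses the 1.x test API) the equivalent would use testApplication and its in-process client, which also dispatches through the full application pipeline without a network connection:

import io.ktor.client.request.post
import io.ktor.client.request.setBody
import io.ktor.client.statement.bodyAsText
import io.ktor.server.testing.testApplication
import org.junit.jupiter.api.Test
import kotlin.test.assertEquals

class SimpleTestKtor2 {
    @Test
    fun test() = testApplication {
        application { module() }
        // The built-in client goes through the whole server pipeline in-process.
        val response = client.post("/post") { setBody("kafka data") }
        assertEquals("I get kafka data", response.bodyAsText())
    }
}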
I think that @AlekseiTirman's answer is great and most probably you should go for it.
But I have to mention that it's easy to do this even in a "real life" run. Your local machine IP is 0.0.0.0, and you can get the port from the environment config, so you can just create a simple HttpClient and send a request:
CoroutineScope(Dispatchers.IO).launch {
    delay(1000)
    val client = HttpClient {
        defaultRequest {
            // actually this is already the default value, so no need to set it
            host = "0.0.0.0"
            port = environment.config.property("ktor.deployment.port").getString().toInt()
        }
    }
    val result = client.get<String>("good")
    println("local response $result")
}
routing {
    get("good") {
        call.respond("hello world")
    }
}

Get WebFlux event-loop scheduler

I use WebFlux with Netty and JDBC, so I wrap the blocking JDBC operation the following way:
static <T> Mono<T> fromOne(Callable<T> blockingOperation) {
    return Mono.fromCallable(blockingOperation)
            .subscribeOn(jdbcScheduler)
            .publishOn(Schedulers.parallel());
}
The blocking operation will be processed by the jdbcScheduler, and I want the rest of the pipeline to be processed by the WebFlux event-loop scheduler.
How do I get the WebFlux event-loop scheduler?
I would strongly advise revisiting the technology options. If you are going to use JDBC, which is still blocking, then you should not use WebFlux. WebFlux shines in a non-blocking stack, but coupled with JDBC it will act as a bottleneck, and performance will actually go down.
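For completeness, a minimal sketch (not part of this answer; it assumes Reactor 3.3+ and is in Kotlin like the rest of the thread) of the usual way to isolate blocking JDBC work if WebFlux is kept anyway, using the built-in boundedElastic scheduler instead of a hand-rolled one:

import reactor.core.publisher.Mono
import reactor.core.scheduler.Schedulers
import java.util.concurrent.Callable

// Run the blocking call on boundedElastic so Netty event-loop threads are
// never blocked; downstream operators can hop back via publishOn if needed.
fun <T> fromBlocking(blockingOperation: Callable<T>): Mono<T> =
    Mono.fromCallable(blockingOperation)
        .subscribeOn(Schedulers.boundedElastic())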
I agree with @Vikram Rawat that using JDBC here is very dangerous, mainly because JDBC is a blocking I/O API, and on an event-loop reactive model it is very easy to block the whole server.
However, even though it is an experimental effort, I suggest you stay tuned to the R2DBC project, which provides a non-blocking API for SQL. I used it for a spike and it is very elegant.
I can provide an example taken from a home project of mine on GitHub, based on Spring Boot 2.1 and Kotlin:
web layer:
@Configuration
class ReservationRoutesConfig {

    @Bean
    fun reservationRoutes(@Value("\${baseServer:http://localhost:8080}") baseServer: String,
                          reservationRepository: ReservationRepository) =
        router {
            POST("/reservation") {
                it.bodyToMono(ReservationRepresentation::class.java)
                    .flatMap { Mono.just(ReservationRepresentation.toDomain(reservationRepresentation = it)) }
                    .flatMap { reservationRepository.save(it).toMono() }
                    .flatMap { ServerResponse.created(URI("$baseServer/reservation/${it.reservationId}")).build() }
            }
            GET("/reservation/{reservationId}") {
                reservationRepository.findOne(it.pathVariable("reservationId")).toMono()
                    .flatMap { Mono.just(ReservationRepresentation.toRepresentation(it)) }
                    .flatMap { ok().body(BodyInserters.fromObject(it)) }
            }
            DELETE("/reservation/{reservationId}") {
                reservationRepository.delete(it.pathVariable("reservationId")).toMono()
                    .then(noContent().build())
            }
        }
}
repository layer:
class ReactiveReservationRepository(private val databaseClient: TransactionalDatabaseClient,
                                    private val customerRepository: CustomerRepository) : ReservationRepository {

    override fun findOne(reservationId: String): Publisher<Reservation> =
        databaseClient.inTransaction {
            customerRepository.find(reservationId).toMono()
                .flatMap { customer ->
                    it.execute().sql("SELECT * FROM reservation WHERE reservation_id=$1")
                        .bind("$1", reservationId)
                        .exchange()
                        .flatMap { sqlRowMap ->
                            sqlRowMap.extract { t, u ->
                                Reservation(t.get("reservation_id", String::class.java)!!,
                                        t.get("restaurant_name", String::class.java)!!,
                                        customer, t.get("date", LocalDateTime::class.java)!!)
                            }.one()
                        }
                }
        }

    override fun save(reservation: Reservation): Publisher<Reservation> =
        databaseClient.inTransaction {
            customerRepository.save(reservation.reservationId, reservation.customer).toMono()
                .then(it.execute().sql("INSERT INTO reservation (reservation_id, restaurant_name, date) VALUES ($1, $2, $3)")
                    .bind("$1", reservation.reservationId)
                    .bind("$2", reservation.restaurantName)
                    .bind("$3", reservation.date)
                    .fetch().rowsUpdated())
        }.then(Mono.just(reservation))

    override fun delete(reservationId: String): Publisher<Void> =
        databaseClient.inTransaction {
            customerRepository.delete(reservationId).toMono()
                .then(it.execute().sql("DELETE FROM reservation WHERE reservation_id = $1")
                    .bind("$1", reservationId)
                    .fetch().rowsUpdated())
        }.then(Mono.empty())
}
I hope that helps.