I have been using Spring's WebFlux framework with Kotlin for about a month now, and I have been loving it. As I got ready to take the plunge into writing production code with WebFlux and Kotlin, I found myself struggling to unit test my routers in a simple, lightweight way.
Spring Test is an excellent framework, but it is heavier-weight than I wanted, and I was looking for a test framework that was more expressive than traditional JUnit, something in the vein of JavaScript's Mocha. Kotlin's Spek fit the bill perfectly.
What follows is an example of how I was able to unit test a simple router using Spek.
WebFlux defines an excellent DSL for building routers using Kotlin's type-safe builders. While the syntax is succinct and readable, it is not readily apparent how to assert that the router function bean it returns is configured properly, as its properties are mostly inaccessible to client code.
Say we have the following router:
@Configuration
class PingRouter(private val pingHandler: PingHandler) {

    @Bean
    fun pingRoute() = router {
        accept(MediaType.APPLICATION_JSON).nest {
            GET("/ping", pingHandler::handlePing)
        }
    }
}
We want to assert that when a request comes in matching the /ping route with an application/json Accept header, the request is passed off to our handler function.
object PingRouterTest: Spek({
describe("PingRouter") {
lateinit var pingHandler: PingHandler
lateinit var pingRouter: PingRouter
beforeGroup {
pingHandler = mock()
pingRouter = PingRouter(pingHandler)
}
on("Ping route") {
/*
We need to set up a dummy ServerRequest whose path will match the path of our router,
and whose headers will match the headers expected by our router.
*/
val request: ServerRequest = mock()
val headers: ServerRequest.Headers = mock()
When calling request.pathContainer() itReturns PathContainer.parsePath("/ping")
When calling request.method() itReturns HttpMethod.GET
When calling request.headers() itReturns headers
When calling headers.accept() itReturns listOf(MediaType.APPLICATION_JSON)
/*
We call pingRouter.pingRoute(), which returns a RouterFunction. We then call route() on the
RouterFunction to actually send our dummy request to the router. If the request matches our
route, WebFlux returns a Mono wrapping a HandlerFunction that references our PingHandler's
handler function; if it does not match, WebFlux returns an empty Mono. Finally, we invoke
handle() on the HandlerFunction to actually call the handler function in our PingHandler class.
*/
pingRouter.pingRoute().route(request).subscribe { it.handle(request) }
/*
If our pingHandler.handlePing() was invoked by the HandlerFunction, we know we properly
configured our route for the request.
*/
it("Should call the handler with request") {
verify(pingHandler, times(1)).handlePing(request)
}
}
}
})
Related
I am trying to write a Kotlin function that executes an HTTP request and then gives the result back to JavaScript.
Because with the IR compiler I cannot call a suspend function from JavaScript, I am trying to use a callback instead.
However, the callback is never executed when called from a coroutine.
Here's a small sample of what I am doing:
private val _httpClient = HttpClient(JsClient()) {
install(ContentNegotiation) { json() }
defaultRequest { url(settings.baseUrl) }
}
fun requestJwtVcJsonCredential(
request: JSJwtVcJsonVerifiableCredentialRequest,
callback: (JSDeferredJsonCredentialResponse?, JSJwtVcJsonVerifiableCredentialResponse?, Any?) -> Unit
) {
CoroutineScope(_httpClient.coroutineContext).launch {
// call suspend function
val response = requestCredential(convert(request))
// this never runs, even though the coroutine does run successfully
println("Coroutine received: $response")
callback(response.first, response.second, response.third)
}
}
I've noticed this question had a similar problem in Android, but the suggested fix does not apply to JavaScript. Specifically, using a Channel does not help in my case because I don't have a coroutine to receive from, and trying to start a new coroutine to receive from the channel and then calling the callback from that coroutine also doesn't work (the root problem seems to be that I cannot call a callback function from any coroutine).
What's the best way to solve this problem? Assume the function I need to call is a suspend function (the HTTP Client function) and I cannot change that, but I could change everything around it so that it works from a non-suspend function (as that's a limitation of Kotlin JS).
The root problem was that the suspend function was actually failing, but since there seems to be no default exception handler, the exception was not logged anywhere. The function failed silently, which made it look like the callback was being called but never executed.
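For anyone hitting the same silent failure, here is a minimal sketch of how the exception can be surfaced by catching it inside the launched coroutine (installing a CoroutineExceptionHandler on the scope is another option, depending on how the scope's job is configured); the names are the ones from the snippet above, and the error is also forwarded to the callback's third parameter:
fun requestJwtVcJsonCredential(
    request: JSJwtVcJsonVerifiableCredentialRequest,
    callback: (JSDeferredJsonCredentialResponse?, JSJwtVcJsonVerifiableCredentialResponse?, Any?) -> Unit
) {
    CoroutineScope(_httpClient.coroutineContext).launch {
        try {
            val response = requestCredential(convert(request))
            callback(response.first, response.second, response.third)
        } catch (e: Throwable) {
            // Without this, the failure is swallowed and the callback never runs.
            println("requestCredential failed: $e")
            callback(null, null, e)
        }
    }
}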
However, I think it's worth mentioning that Kotlin/JS supports Promise<T>, so the better way to expose a suspend function to JS is to write an "adapter" function that returns a Promise instead.
There is a promise extension function on CoroutineScope which can be used for this.
So, for example, if you've got a Kotlin function like this:
suspend fun makeRequest(request: Request): Response
To expose it in JavaScript you can have an adapter function like this:
@JsExport
fun makeRequestJS(request: Request): Promise<Response> {
    // Ktor's HttpClient itself is a CoroutineScope
    return _httpClient.promise { makeRequest(request) }
}
This avoids the need to introduce a callback function.
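Applied to the function from the question, a sketch might look like the following; JSCredentialResult is a hypothetical exportable wrapper invented here for the values the callback used to deliver, so adjust it to whatever shape your JS side actually expects:
// Hypothetical result wrapper, for illustration only.
@JsExport
class JSCredentialResult(
    val deferred: JSDeferredJsonCredentialResponse?,
    val credential: JSJwtVcJsonVerifiableCredentialResponse?
)

@JsExport
fun requestJwtVcJsonCredentialJS(
    request: JSJwtVcJsonVerifiableCredentialRequest
): Promise<JSCredentialResult> =
    _httpClient.promise {
        val response = requestCredential(convert(request))
        // A failure inside requestCredential now rejects the Promise instead of being swallowed.
        JSCredentialResult(response.first, response.second)
    }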
I have the following project, still in development: https://github.com/TarekSaid/blotit, using Kotlin and Spring WebFlux.
I was writing my unit tests with Spock (Groovy), but after some issues with testing Kotlin coroutines and being unable to use syntactic sugar with data classes (even with @JvmOverloads), I've decided to switch to Kotest + MockK.
My only issue now is with the performance of the handler unit tests, as I have to use mockkStatic on ServerRequestExtensionsKt to mock request.awaitBodyOrNull. While the Spock specification runs in 0.072s, the equivalent Kotest test runs in 0.440s. Negligible on its own, but it could add up as I add more tests.
I was wondering if there was a better way to unit test the handler with Kotest (please note that I already use WebTestClient to run integration tests). I'll eventually add some verifications to check for service calls, etc, which will be mocked. That's why I'm testing the handler directly.
The handler itself is still very simple:
@Component
class RatingHandler {
suspend fun rate(request: ServerRequest): ServerResponse {
// DataSheet is a data class used as the request body
return request.awaitBodyOrNull(DataSheet::class)?.let {
ServerResponse.ok().buildAndAwait()
} ?: ServerResponse.badRequest().buildAndAwait()
}
}
Here's my test class:
class RatingHandlerTest : StringSpec({
val handler = RatingHandler()
val request: ServerRequest = mockk()
val sheet: DataSheet = mockk()
// tried to use beforeSpec to see if it'd make a difference
beforeSpec {
mockkStatic("org.springframework.web.reactive.function.server.ServerRequestExtensionsKt")
}
"rate should return status ok when the body is present" {
coEvery { request.awaitBodyOrNull(DataSheet::class) } returns sheet
handler.rate(request).statusCode() shouldBe HttpStatus.OK
}
"rate should return invalid request for missing sheet" {
coEvery { request.awaitBodyOrNull(DataSheet::class) } returns null
handler.rate(request).statusCode() shouldBe HttpStatus.BAD_REQUEST
}
})
Is that how I'm supposed to unit test the handler, or is there a better way?
I have read the set-based consistency validation blog post and I want to validate through a dispatch interceptor. I followed the example, but I use a reactive repository and it doesn't really work for me. I have tried both blocking and not blocking: with block() it throws an error, but without block() nothing is executed. Here is my code.
class SubnetCommandInterceptor : MessageDispatchInterceptor<CommandMessage<*>> {
@Autowired
private lateinit var privateNetworkRepository: PrivateNetworkRepository
override fun handle(messages: List<CommandMessage<*>?>): BiFunction<Int, CommandMessage<*>, CommandMessage<*>> {
return BiFunction<Int, CommandMessage<*>, CommandMessage<*>> { index: Int?, command: CommandMessage<*> ->
if (CreateSubnetCommand::class.simpleName == (command.payloadType.simpleName)){
val interceptCommand = command.payload as CreateSubnetCommand
privateNetworkRepository
.findById(interceptCommand.privateNetworkId)
// ..some validation logic here ex.
// .filter { network -> network.isSubnetOverlap() }
.switchIfEmpty(Mono.error(IllegalArgumentException("Requested subnet is overlap with the previous subnet.")))
// .block() also doesn't work here it throws error
// block()/blockFirst()/blockLast() are blocking, which is not supported in thread reactor-
}
command
}
}
}
Subscribing to a reactive repository inside a message dispatch interceptor is not really recommended and might lead to weird behavior, as the underlying ThreadLocal (used by Axon) is not adapted to being used in reactive programming.
Instead, check out Axon's Reactive Extension and its section on reactive interceptors.
For example, here is what you might do:
reactiveCommandGateway.registerDispatchInterceptor(
        cmdMono -> cmdMono.flatMap(cmd -> privateNetworkRepository
                .findById(((CreateSubnetCommand) cmd.getPayload()).getPrivateNetworkId())
                // ..some validation logic here, e.g.
                // .filter(network -> network.isSubnetOverlap())
                .switchIfEmpty(Mono.error(
                        new IllegalArgumentException("Requested subnet overlaps with the previous subnet.")))
                .then(cmdMono)));
I have a Ktor application which expects a file from a multipart request in code like this:
multipart.forEachPart { part ->
when (part) {
is PartData.FileItem -> {
image = part.streamProvider().readAllBytes()
}
else -> {} // irrelevant
}
}
IntelliJ IDEA marks readAllBytes() as an inappropriate blocking call, since Ktor operates on top of coroutines. How can I replace this blocking call with an appropriate one?
Given the reputation of Ktor as a non-blocking, suspending IO framework, I was surprised that apparently for FileItem there is nothing else but the blocking InputStream API to retrieve it. Given that, your only option seems to be delegating to the IO dispatcher:
image = withContext(Dispatchers.IO) { part.streamProvider().readBytes() }
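Put back into the multipart loop from the question, that might look like the following sketch (the part.dispose() call is not in the original snippet, but follows the usual Ktor multipart pattern):
multipart.forEachPart { part ->
    when (part) {
        is PartData.FileItem -> {
            // Hand the blocking InputStream read off to the IO dispatcher.
            image = withContext(Dispatchers.IO) { part.streamProvider().readBytes() }
        }
        else -> {} // irrelevant
    }
    part.dispose()
}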
On the backend I'm doing:
@PostMapping(path = "/products", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public void saveProducts(@Valid @RequestBody Flux<Product> products) {
    products.subscribe(product -> log.info("product: " + product.toString()));
}
And on the frontend I'm calling this using:
this.targetWebClient
.post()
.uri(productUri)
.accept(MediaType.APPLICATION_STREAM_JSON)
.contentType(MediaType.APPLICATION_STREAM_JSON)
.body(this.sourceWebClient
.get()
.uri(uriBuilder -> uriBuilder.path(this.sourceEndpoint + "/id")
.queryParam("date", date)
.build())
.accept(MediaType.APPLICATION_STREAM_JSON)
.retrieve()
.bodyToFlux(Product.class), Product.class)
.exchange()
.subscribe();
What happens now is that I have 472 products which need to be saved, but only one of them actually gets saved. The stream closes after the first one and I can't find out why.
If I do:
...
.retrieve()
.bodyToMono(Void.class);
instead, the request isn't even arriving at the backend.
I also tried a fixed number of elements:
.body(Flux.just(new Product("123"), new Product("321")...
And with that, too, only the first one arrived.
EDIT
I changed the code:
@PostMapping(path = "/products", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> saveProducts(@Valid @RequestBody Flux<Product> products) {
products.subscribe(product -> this.service.saveProduct(product));
return Mono.empty();
}
and:
this.targetWebClient
.post()
.uri(productUri)
.accept(MediaType.APPLICATION_STREAM_JSON)
.contentType(MediaType.APPLICATION_STREAM_JSON)
.body(this.sourceWebClient
.get()
.uri(uriBuilder -> uriBuilder.path(this.sourceEndpoint + "/id")
.queryParam("date", date)
.build())
.accept(MediaType.APPLICATION_STREAM_JSON)
.retrieve()
.bodyToFlux(Product.class), Product.class)
.exchange()
.block();
That led to the behaviour where one product was saved twice (because the backend endpoint was called twice), but again only a single item. We also got an error on the frontend side:
IOException: Connection reset by peer
Same for:
...
.retrieve()
.bodyToMono(Void.class)
.subscribe();
Doing the following:
this.targetWebClient
.post()
.uri(productUri)
.accept(MediaType.APPLICATION_STREAM_JSON)
.contentType(MediaType.APPLICATION_STREAM_JSON)
.body(this.sourceWebClient
.get()
.uri(uriBuilder -> uriBuilder.path(this.sourceEndpoint + "/id")
.queryParam("date", date)
.build())
.accept(MediaType.APPLICATION_STREAM_JSON)
.retrieve()
.bodyToFlux(Product.class), Product.class)
.retrieve();
This leads to the behaviour where the backend again isn't called at all.
The Reactor documentation does say that nothing happens until you subscribe, but that doesn't mean you should subscribe in your Spring WebFlux code.
Here are a few rules you should follow in Spring WebFlux:
If you need to do something in a reactive fashion, the return type of your method should be Mono or Flux
Within a method returning a reactive type, you should never call block, subscribe, toIterable, or any other method that doesn't return a reactive type itself
You should never do I/O-related side effects in doOnXYZ operators, as they're not meant for that and will cause issues at runtime
In your case, your backend should use a reactive repository to save your data and should look like:
@PostMapping(path = "/products", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> saveProducts(@Valid @RequestBody Flux<Product> products) {
return productRepository.saveAll(products).then();
}
In this case, the Mono<Void> return type means that your controller won't return anything as a response body but will still signal when it's done processing the request. This might also explain the behavior you're seeing: with your original code, the controller is done processing the request before all the products have been saved to the database.
Also, remember the rules noted above. Depending on where your targetWebClient is used, calling .subscribe() on it might not be the solution. If it's a test method that returns void, you might want to call block() on it and get the result to run assertions against it. If this is a component method, then you should probably return a Publisher type as the return value.
EDIT:
@PostMapping(path = "/products", consumes = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Mono<Void> saveProducts(@Valid @RequestBody Flux<Product> products) {
products.subscribe(product -> this.service.saveProduct(product));
return Mono.empty();
}
Doing this isn't right:
calling subscribe decouples the processing of the request/response from that saveProduct operation. It's like starting that processing in a different executor.
returning Mono.empty() signals to Spring WebFlux that you're done with the request processing right away. So Spring WebFlux will close and clean up the request/response resources; but your saveProduct process is still running and won't be able to read from the request, since Spring WebFlux has already closed and cleaned it up.
As suggested in the comments, you can wrap blocking operations with Reactor (even though it's not advised and you may encounter performance issues) and make sure that you're connecting all the operations in a single reactive pipeline.
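To make the single-pipeline idea concrete, here is a sketch in Kotlin of the calling side written as a component method that returns the Publisher instead of subscribing; forwardProducts is a made-up name, the clients and productUri are the ones from the question, and date is assumed to be a LocalDate:
// Sketch only: connect both WebClient calls into one pipeline and return it,
// letting the caller (or the framework) subscribe.
fun forwardProducts(date: LocalDate): Mono<Void> {
    val products: Flux<Product> = sourceWebClient
        .get()
        .uri { builder -> builder.path("$sourceEndpoint/id").queryParam("date", date).build() }
        .accept(MediaType.APPLICATION_STREAM_JSON)
        .retrieve()
        .bodyToFlux(Product::class.java)

    return targetWebClient
        .post()
        .uri(productUri)
        .contentType(MediaType.APPLICATION_STREAM_JSON)
        .body(products, Product::class.java)
        .retrieve()
        .bodyToMono(Void::class.java)
}
Whoever calls forwardProducts (a WebFlux handler, a scheduler, or a test that blocks on the result) is then the one responsible for subscribing.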