I am using Spring WebFlux on Netty and have multiple HTTP endpoints on multiple @RestController classes that return Mono.
When a client disconnects, the subscription appears to be canceled as expected; I want to log such events, where a client disconnects due to a timeout.
I know that I can add .doOnCancel/.doOnDispose to every Mono in each method, but I am looking for a solution that works globally across all endpoints.
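In Spring WebFlux, the usual place for such a global hook is a single WebFilter that wraps chain.filter(exchange) with doOnCancel, so no controller has to change. Spring types won't run standalone here, so the following is a minimal pure-Kotlin stand-in (all type and function names are mine, not Spring/Reactor APIs) that only illustrates the idea of decorating every handler at one choke point:

```kotlin
// Toy stand-in for the "one global filter" idea -- NOT Spring/Reactor types.
class CancellableCall<T>(private val work: () -> T) {
    private val onCancel = mutableListOf<() -> Unit>()
    var cancelled = false
        private set
    fun doOnCancel(hook: () -> Unit) = apply { onCancel += hook }
    fun cancel() { cancelled = true; onCancel.forEach { it() } }
}

val log = mutableListOf<String>()

// One global "filter" that decorates any endpoint's call with cancel logging,
// analogous to a WebFilter adding doOnCancel around chain.filter(exchange).
fun <T> withDisconnectLogging(path: String, call: CancellableCall<T>): CancellableCall<T> =
    call.doOnCancel { log += "client disconnected: $path" }

fun main() {
    val call = withDisconnectLogging("/orders", CancellableCall { "response" })
    call.cancel() // simulate the client dropping the connection
    println(log)  // [client disconnected: /orders]
}
```

The point of the sketch: the logging hook is attached in exactly one place, while every endpoint's Mono-like value passes through it unchanged.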
We are using Ktor 2.1.0 and have configured multiple JWK providers.
Our application has three endpoints; one is a health check with no authorization.
When one of the JWK providers starts responding very slowly, our application's endpoints stop responding (even the health check without authorization) and wait until the first request to the slow JWK provider has finished.
It looks like all of Ktor stops responding during bearer token validation.
Could this be a configuration issue, or is it a bug in Ktor?
We upgraded jwks-rsa to version 0.21.2, which gives us the ability to set timeouts, but the timeouts only mask the problem rather than solving it.
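For reference, the timeouts mentioned above can be set when constructing the provider. This is a hedged config sketch, not a fix for the blocking behavior itself: the URL is hypothetical, and the exact jwks-rsa constructor/builder surface should be checked against your version (0.21.2 here).

```kotlin
// Sketch only: bounds how long a slow JWK fetch can hang, it does not
// stop the fetch from blocking token validation.
import com.auth0.jwk.GuavaCachedJwkProvider
import com.auth0.jwk.UrlJwkProvider
import java.net.URL

val jwkProvider = GuavaCachedJwkProvider(
    UrlJwkProvider(
        URL("https://idp.example.com/.well-known/jwks.json"), // hypothetical URL
        1_000, // connect timeout, ms
        1_000  // read timeout, ms
    )
)
```

Caching the provider at least means that only the first request per key pays the slow fetch, which matches the behavior described in the question.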
Is it possible to write a Ktor test without any client call? We have a Ktor Kafka consumer service that makes HTTP calls. The shape of our test is:
startAndConfigureWiremock()
testApplication {
    application {
        changeConfiguration()
    }
    sendKafkaMessage()
    verifyExternalCall()
}
But tests without any client call do not work: verifyExternalCall() is executed before the service is up and blocks the startup, without any blocking test construct in between.
When we try to add parallelism, e.g. with GlobalScope.launch, the application runs, but it just starts and immediately stops.
It looks like testApplication needs a client call to work at all, and even Ktor's own tests rely on one: https://github.com/ktorio/ktor/blob/8b784f45a6339728ce7181498a5854b29bf9d2a5/ktor-server/ktor-server-core/jvmAndNix/test/io/ktor/server/application/HooksTest.kt#L81
This might not be the answer you are looking for but...
This definitely looks like the wrong place and method for this test.
Ktor is for building web APIs; it deals with things like routing and serialisation. Why are you testing whether an internal service is started and is polling Kafka for messages? That has nothing to do with Ktor.
If you really want an integration test of this service, then instead:
do not start Ktor
start Kafka
start your Kafka polling service
send your message
do your verification
If you really want to check that your Kafka service is started by your Ktor application, just verify that some "startKafkaConsumer" function was called during startup. This should most likely be done on an injected mock of your consuming service.
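That last suggestion can be sketched with a hand-rolled fake instead of a mocking library. All names here (KafkaConsumerService, startApp) are mine, standing in for whatever your application's startup code and consumer interface look like:

```kotlin
// Hypothetical interface for the service the application starts on boot.
interface KafkaConsumerService { fun start() }

// Hand-rolled fake standing in for a mock-library mock.
class FakeKafkaConsumerService : KafkaConsumerService {
    var started = false
        private set
    override fun start() { started = true }
}

// Stand-in for the application's startup logic (in Ktor this would live
// in a module, with the consumer injected).
fun startApp(consumer: KafkaConsumerService) {
    consumer.start()
}

fun main() {
    val fake = FakeKafkaConsumerService()
    startApp(fake)
    check(fake.started) { "expected startup to start the Kafka consumer" }
    println("consumer started: ${fake.started}") // consumer started: true
}
```

The test then asserts on the fake's recorded state, with no Ktor server, no HTTP client, and no Kafka broker involved.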
I have experienced unexpected behavior from the Service Fabric Reverse Proxy.
When I abort a long-running request, the proxied request to the Service Fabric service is not aborted, and the whole request runs to completion.
If we make requests directly to the service, requests are canceled as expected.
[Screenshot: an uninterrupted run of the long request, directly to the service]
[Screenshot: an interrupted run of the long request, directly to the service]
Is there a setting that we need to enable so Service Fabric Reverse Proxy handles the requests as we expect it to?
Consider using Traefik instead, which is a more mature product.
It comes with an active community and request termination support.
It also doesn't have the undesired side effect of exposing all SF services by default.
We are building a cloud-native enterprise business application on the .NET Core MVC platform. The default API gateway between the frontend application and the backend microservices is Ocelot, used in async mode.
We have been advised to use the RabbitMQ message broker instead of Ocelot. The reasoning given for this shift is asynchronous request-response exchange between the frontend and the microservices. For context, our application will have a few hundred cshtml pages spanning several frontend modules, and we expect over a thousand users using the application concurrently.
Our concern is whether this is the right suggestion. Our development team feels that we should continue using the Ocelot API gateway for general request-response exchange between the frontend and microservices, and use RabbitMQ only for events that trigger background processing and respond after a delay when the job completes.
If you feel we can replace Ocelot, then our further concern is reliable session-based request and response: we should not have to programmatically correlate responses to session requests. Note that with RabbitMQ we are testing the .NET Core MassTransit library. The Ocelot API gateway is designed to handle session-based request-response communication.
With RabbitMQ, should we create a reply queue for each request, or should the client maintain a single reply queue for all requests? Should the reply queue be exclusive or durable?
Can a single reply queue per client serve all requests, or would it be better to create multiple receive endpoints, based on application modules/cshtml pages, to serve all our concurrent users efficiently?
Thank you all; we eagerly await your replies.
I recommend implementing RabbitMQ. You might need to replace Ocelot with RabbitMQ.
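On the reply-queue question above: the standard RabbitMQ RPC pattern uses a single reply queue per client (typically exclusive and non-durable) plus a correlation id on each message to match replies to requests, and MassTransit's request client performs that correlation for you. Here is a broker-free sketch of just the matching logic in plain Kotlin (all names are mine, not RabbitMQ or MassTransit APIs):

```kotlin
import java.util.UUID
import java.util.concurrent.CompletableFuture
import java.util.concurrent.ConcurrentHashMap

// One pending-reply map per client: the single reply queue's consumer
// resolves futures by correlation id, so callers never match replies by hand.
class ReplyCorrelator {
    private val pending = ConcurrentHashMap<String, CompletableFuture<String>>()

    // Called when publishing a request: register interest under a fresh id.
    fun register(): Pair<String, CompletableFuture<String>> {
        val id = UUID.randomUUID().toString()
        val future = CompletableFuture<String>()
        pending[id] = future
        return id to future
    }

    // Called by the single reply-queue consumer for every incoming reply.
    fun onReply(correlationId: String, body: String) {
        pending.remove(correlationId)?.complete(body)
    }
}

fun main() {
    val correlator = ReplyCorrelator()
    val (idA, futureA) = correlator.register()
    val (idB, futureB) = correlator.register()
    // Replies may arrive in any order; correlation ids route them correctly.
    correlator.onReply(idB, "reply for B")
    correlator.onReply(idA, "reply for A")
    println(futureA.get()) // reply for A
    println(futureB.get()) // reply for B
}
```

This is why one reply queue per client scales to many concurrent requests: correlation is a map lookup, not a queue per request.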
I'm using WCF through the Spring.NET WCF integration (link text).
This works relatively well, but WCF and Spring seem to get in each other's way when instantiating client channels: only a single client channel is created per service, so clients get a timeout once the configured timeout expires, because the same client channel has stayed open since Spring instantiated it.
To make matters worse, once a channel goes into the faulted state, it affects all users of that service, since Spring doesn't create a new channel for each user.
Has anyone managed to make WCF and Spring.NET work together without these issues?
I've created a small library to help you with Spring.NET in these circumstances. You can find the svn repo here. More info can be found on my blog.