Spring Cloud Gateway Filter with external configuration service call - spring-webflux

I am working on a Spring Cloud Gateway app that has a filter controlling access to certain paths or features, based on configuration held by a different service. So if a path is associated with feature x, access is only allowed if the configuration service reports that feature x is enabled.
The configuration is returned as a Mono and then flatMapped to check the enabled features. Retrieving the configuration appears to work correctly. If the feature is enabled, the request is allowed to proceed through the chain. If the feature is disabled, the response status is set to forbidden and the response is marked as complete. However, this does not appear to stop the filter chain: the request continues to be processed and eventually returns a 200 response.
If the feature configuration is immediately available rather than being returned from an external source, the logic works correctly, but that involves a blocking call and does not seem desirable. I cannot see what is wrong with the first approach; it seems similar to examples available elsewhere.
I believe my question is similar to this one:
https://stackoverflow.com/questions/73496938/spring-cloud-api-gateway-custom-filters-with-external-api-for-authorization/75095356#75095356
Filter 1
This is the way I would like to do this:
override fun filter(exchange: ServerWebExchange, chain: GatewayFilterChain): Mono<Void> {
    logger.info("Feature Security Filter")
    // getFeatures returns Mono<Map<String, Boolean>>
    return featureConfigService.getFeatures().flatMap { features ->
        val path = exchange.request.path.toString()
        val method = exchange.request.method.toString()
        if (featureMappings.keys.any { it.matcher(path).matches() }) {
            val pathIsRestricted = featureMappings
                .filter { it.key.matcher(path).matches() }
                .filter { features[it.value.requiresFeature] != true || !it.value.methodsAllowed.contains(method) }
                .isNotEmpty()
            if (pathIsRestricted) {
                logger.warn("Access to path [$method|$path] restricted.")
                exchange.response.statusCode = HttpStatus.FORBIDDEN
                exchange.response.setComplete()
                // processing should stop here but continues through other filters
            }
        }
        chain.filter(exchange)
    }
}
Filter 2
This way works but involves a blocking call in featureService.
override fun filter(exchange: ServerWebExchange, chain: GatewayFilterChain): Mono<Void> {
    logger.info("Feature Security Filter")
    // this call returns a Map<String, Boolean> instead of a Mono
    val features = featureService.getFeatureConfig()
    val path = exchange.request.path.toString()
    val method = exchange.request.method.toString()
    if (featureMappings.keys.any { it.matcher(path).matches() }) {
        val pathIsRestricted = featureMappings
            .filter { it.key.matcher(path).matches() }
            .filter { features[it.value.requiresFeature] != true || !it.value.methodsAllowed.contains(method) }
            .isNotEmpty()
        if (pathIsRestricted) {
            logger.warn("Access to path [$method|$path] restricted.")
            val response: ServerHttpResponse = exchange.response
            response.statusCode = HttpStatus.FORBIDDEN
            return response.setComplete()
            // this works as this request will complete here
        }
    }
    return chain.filter(exchange)
}
When the tests run I can see that a path is correctly logged as restricted, and the response status is set to HttpStatus.FORBIDDEN as expected, but the request continues to be processed by filters later in the chain, and eventually returns a 200 response.
I've tried returning variations on Mono.error and onErrorComplete, but I get the same behaviour. I am new to Spring Cloud Gateway and cannot see what I am doing wrong.

After doing a few tests, I figured out that gateway filters are executed after the route filters even if you set a high order. If you need to filter requests before routing, you can use a WebFilter instead. Here is a working Java example based on your requirements.
package com.test.test.filters;

import lombok.extern.slf4j.Slf4j;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.Ordered;
import org.springframework.http.HttpStatus;
import org.springframework.web.server.ServerWebExchange;
import org.springframework.web.server.WebFilter;
import org.springframework.web.server.WebFilterChain;
import reactor.core.publisher.Mono;

import java.util.Map;

@Configuration
@Slf4j
public class TestGlobalFilter implements WebFilter, Ordered {

    private Mono<Map<String, Boolean>> test() {
        return Mono.just(Map.of("test", Boolean.TRUE));
    }

    @Override
    public int getOrder() {
        return Ordered.HIGHEST_PRECEDENCE;
    }

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
        log.info("Feature Security Filter");
        // getFeatures returns Mono<Map<String, Boolean>>
        return test().flatMap(features -> {
            final var isRestricted = features.get("test");
            if (Boolean.TRUE.equals(isRestricted)) {
                log.info("Feature Security stop");
                exchange.getResponse().setStatusCode(HttpStatus.FORBIDDEN);
                return exchange.getResponse().setComplete();
            }
            return chain.filter(exchange);
        });
    }
}
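For reference, here is a rough Kotlin sketch of the same WebFilter idea wired to the feature-mapping logic from the question. FeatureConfigService and FeatureMapping mirror the (unshown) types from the original post and are assumptions, as is the bean wiring; treat it as a starting point rather than a drop-in implementation.
import org.springframework.core.Ordered
import org.springframework.http.HttpStatus
import org.springframework.stereotype.Component
import org.springframework.web.server.ServerWebExchange
import org.springframework.web.server.WebFilter
import org.springframework.web.server.WebFilterChain
import reactor.core.publisher.Mono
import java.util.regex.Pattern

// Assumed shapes, mirroring the question's code
interface FeatureConfigService {
    fun getFeatures(): Mono<Map<String, Boolean>>
}

data class FeatureMapping(val requiresFeature: String, val methodsAllowed: Set<String>)

@Component
class FeatureSecurityWebFilter(
    private val featureConfigService: FeatureConfigService,
    private val featureMappings: Map<Pattern, FeatureMapping>
) : WebFilter, Ordered {

    override fun getOrder(): Int = Ordered.HIGHEST_PRECEDENCE

    override fun filter(exchange: ServerWebExchange, chain: WebFilterChain): Mono<Void> {
        val path = exchange.request.path.toString()
        val method = exchange.request.method.toString()
        return featureConfigService.getFeatures().flatMap { features ->
            val restricted = featureMappings
                .filterKeys { it.matcher(path).matches() }
                .any { features[it.value.requiresFeature] != true || method !in it.value.methodsAllowed }
            if (restricted) {
                exchange.response.statusCode = HttpStatus.FORBIDDEN
                // returning setComplete() here ends the exchange instead of continuing the chain
                exchange.response.setComplete()
            } else {
                chain.filter(exchange)
            }
        }
    }
}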

Related

How to reuse cached value in ProjectReactor

I would like my server to call the login endpoint of another server and cache the auth token for later use. My issue is that when the server tries to reuse the existing token it hangs indefinitely or infinite loops.
@Component
class ApiWebClient {

    private var authToken = Mono.just(AuthToken("", Instant.ofEpochSecond(0)))

    fun login(): Mono<AuthToken> {
        authToken = doesTokenNeedRefreshing().flatMap { needsRefreshing ->
            if (needsRefreshing) {
                WebClient.create().post()
                    .uri("https://example.com/login")
                    .body(
                        Mono.just("Credentials"),
                        String::class.java
                    ).exchangeToMono { response ->
                        response.bodyToMono<LoginResponse>()
                    }.map { response ->
                        LOGGER.info("Successfully logged in")
                        AuthToken(response.token, Instant.now())
                    }
            } else {
                LOGGER.info("Reuse token")
                authToken
            }
        }.cache()
        return authToken
    }

    private fun doesTokenNeedRefreshing(): Mono<Boolean> {
        return authToken.map {
            Instant.now().minusMillis(ONE_MINUTE_IN_MILLIS).isAfter(it.lastModified)
        }
    }

    class AuthToken(
        var token: String,
        var lastModified: Instant
    )

    companion object {
        private const val ONE_MINUTE_IN_MILLIS = 60 * 1000L

        @Suppress("JAVA_CLASS_ON_COMPANION")
        @JvmStatic
        private val LOGGER = LoggerFactory.getLogger(javaClass.enclosingClass)
    }
}
If login gets called twice within the ONE_MINUTE_IN_MILLIS window then it just hangs. I suspect this is because doesTokenNeedRefreshing() calls .map {} on authToken, and later down the chain authToken is reassigned to itself; on top of that, the same value is cached again. I've played around with recreating AuthToken each time instead of returning the same instance, but no luck. The server either hangs or loops infinitely.
How can I return the same instance of the cached value so that I don't have to make a web request each time?
The best way would be to use ServerOAuth2AuthorizedClientExchangeFilterFunction from Spring Security, which you can customize to satisfy your needs. In that case the token will be refreshed automatically behind the scenes.
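A minimal sketch of that wiring in Kotlin, assuming spring-security-oauth2-client is on the classpath and a client_credentials registration is configured; the registration id "my-client" and the class name are illustrative:
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.security.oauth2.client.registration.ReactiveClientRegistrationRepository
import org.springframework.security.oauth2.client.web.reactive.function.client.ServerOAuth2AuthorizedClientExchangeFilterFunction
import org.springframework.security.oauth2.client.web.server.ServerOAuth2AuthorizedClientRepository
import org.springframework.web.reactive.function.client.WebClient

@Configuration
class OAuthWebClientConfig {

    @Bean
    fun oauthWebClient(
        clientRegistrations: ReactiveClientRegistrationRepository,
        authorizedClients: ServerOAuth2AuthorizedClientRepository
    ): WebClient {
        // The filter obtains (and refreshes) the token for the configured registration on each request.
        val oauth = ServerOAuth2AuthorizedClientExchangeFilterFunction(clientRegistrations, authorizedClients)
        oauth.setDefaultClientRegistrationId("my-client") // illustrative registration id
        return WebClient.builder()
            .filter(oauth)
            .build()
    }
}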
If you are looking for a "manual" approach, you just need to combine the two requests.
public Mono<String> getResourceWithToken() {
    return getToken()
        .flatMap(token -> getResource(token));
}

private Mono<String> getResource(String authToken) {
    return client.get()
        .uri("/resource")
        .headers(h -> h.setBearerAuth(authToken))
        .retrieve()
        .bodyToMono(String.class);
}

private Mono<String> getToken() {
    return client.post()
        .uri("/oauth/token")
        .header("Authorization", "Basic " + {CREDS})
        .body(BodyInserters.fromFormData("grant_type", "client_credentials"))
        .retrieve()
        .bodyToMono(JsonNode.class)
        .map(tokenResponse ->
            tokenResponse.get("access_token").textValue()
        );
}
In this case you will always execute two requests and fetch a token for every resource request. Usually the token has an expiration time, so you can save some requests by caching the getToken() Mono.
private Mono<String> tokenRequest = getToken().cache(Duration.ofMinutes(30));

public Mono<String> getResourceWithToken() {
    return tokenRequest
        .flatMap(token -> getResource(token));
}
What I was looking for was a switchIfEmpty statement to return the existing cached value if it doesn't need to be refreshed.
@Component
class ApiWebClient {

    private var authToken = Mono.just(AuthToken("", Instant.ofEpochSecond(0)))

    fun login(): Mono<AuthToken> {
        authToken = doesTokenNeedRefreshing().flatMap { needsRefreshing ->
            if (needsRefreshing) {
                WebClient.create().post()
                    .uri("https://example.com/login")
                    .body(
                        Mono.just("Credentials"),
                        String::class.java
                    ).exchangeToMono { response ->
                        response.bodyToMono<LoginResponse>()
                    }.map { response ->
                        LOGGER.info("Successfully logged in")
                        AuthToken(response.token, Instant.now())
                    }
            } else {
                LOGGER.info("Reuse token")
                Mono.empty()
            }
        }.cache()
            .switchIfEmpty(authToken)
        return authToken
    }
}
In this example, if the token doesn't need to be refreshed, the stream returns an empty Mono, and the switchIfEmpty statement then returns the original auth token. This avoids "recursively" returning the same stream over and over again.
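As a standalone illustration of that pattern (plain Reactor, not the ApiWebClient itself): the empty Mono stands in for "no refresh needed", and switchIfEmpty falls back to the previously cached value instead of re-entering the same stream.
import reactor.core.publisher.Mono

fun main() {
    val cachedToken = Mono.just("cached-token")
    val result = Mono.empty<String>()      // the refresh branch produced nothing
        .switchIfEmpty(cachedToken)        // so fall back to the cached value
    result.subscribe { println(it) }       // prints "cached-token"
}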

Is there any way to drop request inside plugin

Now I'm developing a server application with Ktor 2 (2.0.0-eap-256).
What I want to do is, based on a header or other information, reject the request or set an adequate HTTP status on the response, and not let the request reach the service logic.
Below is what I tried.
val testPlugin = createApplicationPlugin("testPlugin") {
    onCall { call ->
        if (call.request.headers["auth"] == null) {
            call.respond(HttpStatusCode.BadRequest)
            return@onCall
        }
    }
}
fun Application.testRouting() {
    routing {
        get("/") { call.respond("hello") }
    }
}
fun Application.applyPlugin() {
    install(testPlugin)
}
But the request still goes into the service logic defined by routing (with a response that has HttpStatusCode.BadRequest). Any ideas?
I also want to ask whether my understanding of onCall/onCallReceive/onCallRespond is right:
onCall is invoked first, when a request comes in.
Then onCallReceive is invoked to handle request data such as files, the body, etc.
After all the service logic, onCallRespond is invoked.
Edit
About the last question, it is solved: onCallReceive is called when I invoke call.receive() to get the request content.
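As a rough illustration of that ordering with the stable Ktor 2.x createApplicationPlugin API (the plugin name and log lines are only for demonstration): onCall fires when the request arrives, onCallReceive when the handler calls call.receive(), and onCallRespond when the handler produces a response.
import io.ktor.server.application.*
import io.ktor.server.request.*

val lifecycleLoggingPlugin = createApplicationPlugin("LifecycleLogging") {
    onCall { call ->
        println("onCall: ${call.request.uri}")
    }
    onCallReceive { call ->
        println("onCallReceive: ${call.request.uri}")
    }
    onCallRespond { call ->
        println("onCallRespond: ${call.request.uri}")
    }
}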
Edit
So, I edited the plugin like this.
val testPlugin = createApplicationPlugin(
    name = "testPlugin",
    createConfiguration = { TestPluginConfig() }
) {
    pluginConfig.apply {
        pipeline!!.intercept(ApplicationCallPipeline.Plugins) {
            if (call.request.headers["auth"] == null) {
                call.respond(HttpStatusCode.BadRequest)
                finish()
            }
        }
    }
}
data class TestPluginConfig(
    var pipeline: Application? = null // io.ktor.server.Application
)
fun Application.testRouting() {
    routing {
        get("/") { call.respond("hello") }
    }
}
fun Application.applyPlugin() {
    val application = this // io.ktor.server.Application
    install(testPlugin) { pipeline = application }
}
It works just as I wanted. Many thanks to Aleksei Tirman.
According to Rustam Siniukov on the Kotlin Slack, it's enough to use call.respond in the plugin.
My tests confirmed this.
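For reference, a minimal sketch of that approach with the stable Ktor 2.x plugin API (the plugin and function names are illustrative): once call.respond has produced the response in onCall, the routed handler does not respond again.
import io.ktor.http.*
import io.ktor.server.application.*
import io.ktor.server.response.*

// Rejects any request without an "auth" header before it reaches the routing handlers.
val authHeaderPlugin = createApplicationPlugin("AuthHeaderPlugin") {
    onCall { call ->
        if (call.request.headers["auth"] == null) {
            call.respond(HttpStatusCode.BadRequest)
        }
    }
}

fun Application.applyAuthHeaderPlugin() {
    install(authHeaderPlugin)
}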

Access ApplicationCall in object without propagation

Is there a thread-safe method in Ktor where it is possible to statically access the current ApplicationCall? I am trying to get the following simple example to work:
object Main {
    fun start() {
        val server = embeddedServer(Jetty, 8081) {
            intercept(ApplicationCallPipeline.Call) {
                // START: this will be more dynamic in the future, we don't want to pass ApplicationCall
                Addon.processRequest()
                // END: this will be more dynamic in the future, we don't want to pass ApplicationCall
                call.respondText(output, ContentType.Text.Html, HttpStatusCode.OK)
                return@intercept finish()
            }
        }
        server.start(wait = true)
    }
}

fun main(args: Array<String>) {
    Main.start()
}

object Addon {
    fun processRequest() {
        val call = RequestUtils.getCurrentApplicationCall()
        // processing of call.request.queryParameters
        // ...
    }
}

object RequestUtils {
    fun getCurrentApplicationCall(): ApplicationCall {
        // Here is where I am getting lost..
        return null
    }
}
I would like the ApplicationCall for the current context to be available statically from RequestUtils so that I can access information about the request anywhere. This of course needs to scale to handle multiple requests at the same time.
I have done some experiments with dependency injection and ThreadLocal, but with no success.
Well, the application call is passed to a coroutine, so it's really dangerous to try and get it "statically", because all requests are treated in a concurrent context.
The official Kotlin documentation talks about thread-locals in the context of coroutine execution. It uses the concept of CoroutineContext to restore thread-local values in a specific/custom coroutine context.
However, if you are able to design a fully asynchronous API, you will be able to bypass thread-locals by directly creating a custom CoroutineContext, embedding the request call.
EDIT: I've updated my example code to test 2 flavors:
async endpoint: Solution fully based on Coroutine contexts and suspend functions
blocking endpoint: uses a thread-local to store the application call, as described in the Kotlin docs.
import io.ktor.server.engine.embeddedServer
import io.ktor.server.jetty.Jetty
import io.ktor.application.*
import io.ktor.http.ContentType
import io.ktor.http.HttpStatusCode
import io.ktor.response.respondText
import io.ktor.routing.get
import io.ktor.routing.routing
import kotlinx.coroutines.asContextElement
import kotlinx.coroutines.launch
import kotlin.coroutines.AbstractCoroutineContextElement
import kotlin.coroutines.CoroutineContext
import kotlin.coroutines.coroutineContext
/**
* Thread local in which you'll inject application call.
*/
private val localCall : ThreadLocal<ApplicationCall> = ThreadLocal();
object Main {
fun start() {
val server = embeddedServer(Jetty, 8081) {
routing {
// Solution requiring fully coroutine-based / suspendable execution.
get("/async") {
// Ktor will launch this block of code in a coroutine, so you can create a subroutine with
// an overloaded context providing needed information.
launch(coroutineContext + ApplicationCallContext(call)) {
PrintQuery.processAsync()
}
}
// Solution based on Thread-Local, not requiring suspending functions
get("/blocking") {
launch (coroutineContext + localCall.asContextElement(value = call)) {
PrintQuery.processBlocking()
}
}
}
intercept(ApplicationCallPipeline.ApplicationPhase.Call) {
call.respondText("Hé ho", ContentType.Text.Plain, HttpStatusCode.OK)
}
}
server.start(wait = true)
}
}
fun main() {
Main.start();
}
interface AsyncAddon {
/**
* Asynchronicity propagates in order to properly access coroutine execution information
*/
suspend fun processAsync();
}
interface BlockingAddon {
fun processBlocking();
}
object PrintQuery : AsyncAddon, BlockingAddon {
override suspend fun processAsync() = processRequest("async", fetchCurrentCallFromCoroutineContext())
override fun processBlocking() = processRequest("blocking", fetchCurrentCallFromThreadLocal())
private fun processRequest(prefix : String, call : ApplicationCall?) {
println("$prefix -> Query parameter: ${call?.parameters?.get("q") ?: "NONE"}")
}
}
/**
* Custom coroutine context allow to provide information about request execution.
*/
private class ApplicationCallContext(val call : ApplicationCall) : AbstractCoroutineContextElement(Key) {
companion object Key : CoroutineContext.Key<ApplicationCallContext>
}
/**
* This is your RequestUtils rewritten as a top-level function. It is declared as suspending;
* if it were not, you would not be able to access coroutineContext.
*/
suspend fun fetchCurrentCallFromCoroutineContext(): ApplicationCall? {
// Here is where I am getting lost..
return coroutineContext.get(ApplicationCallContext.Key)?.call
}
fun fetchCurrentCallFromThreadLocal() : ApplicationCall? {
return localCall.get()
}
You can test it in your navigator:
http://localhost:8081/blocking?q=test1
http://localhost:8081/blocking?q=test2
http://localhost:8081/async?q=test3
server log output:
blocking -> Query parameter: test1
blocking -> Query parameter: test2
async -> Query parameter: test3
The key mechanism you want to use for this is the CoroutineContext. This is the place that you can set key value pairs to be used in any child coroutine or suspending function call.
I will try to lay out an example.
First, let us define a CoroutineContextElement that will let us add an ApplicationCall to the CoroutineContext.
class ApplicationCallElement(var call: ApplicationCall?) : AbstractCoroutineContextElement(ApplicationCallElement) {
companion object Key : CoroutineContext.Key<ApplicationCallElement>
}
Now we can define some helpers that will add the ApplicationCall on one of our routes. (This could be done as some sort of Ktor plugin that listens to the pipeline, but I don't want to add to much noise here).
suspend fun PipelineContext<Unit, ApplicationCall>.withCall(
bodyOfCall: suspend PipelineContext<Unit, ApplicationCall>.() -> Unit
) {
val pipeline = this
val appCallContext = buildAppCallContext(this.call)
withContext(appCallContext) {
pipeline.bodyOfCall()
}
}
internal suspend fun buildAppCallContext(call: ApplicationCall): CoroutineContext {
var context = coroutineContext
val callElement = ApplicationCallElement(call)
context = context.plus(callElement)
return context
}
And then we can use it all together like in this test case below where we are able to get the call from a nested suspending function:
suspend fun getSomethingFromCall(): String {
val call = coroutineContext[ApplicationCallElement.Key]?.call ?: throw Exception("Element not set")
return call.parameters["key"] ?: throw Exception("Parameter not set")
}
fun Application.myApp() {
routing {
route("/foo") {
get {
withCall {
call.respondText(getSomethingFromCall())
}
}
}
}
}
class ApplicationCallTest {
@Test
fun `we can get the application call in a nested function`() {
withTestApplication({ myApp() }) {
with(handleRequest(HttpMethod.Get, "/foo?key=bar")) {
assertEquals(HttpStatusCode.OK, response.status())
assertEquals("bar", response.content)
}
}
}
}

Ribbon load balancer with webclient differs from rest template one (not properly balanced)

I've tried to use WebClient with LoadBalancerExchangeFilterFunction:
WebClient config:
@Bean
public WebClient myWebClient(final LoadBalancerExchangeFilterFunction lbFunction) {
return WebClient.builder()
.filter(lbFunction)
.defaultHeader(ACCEPT, APPLICATION_JSON_VALUE)
.defaultHeader(CONTENT_ENCODING, APPLICATION_JSON_VALUE)
.build();
}
Then I noticed that calls to the underlying service are not properly load balanced: there is a constant difference in RPS between the instances.
Then I tried moving back to RestTemplate, and it's working fine.
Config for RestTemplate:
private static final int CONNECT_TIMEOUT_MILLIS = 18 * DateTimeConstants.MILLIS_PER_SECOND;
private static final int READ_TIMEOUT_MILLIS = 18 * DateTimeConstants.MILLIS_PER_SECOND;
@LoadBalanced
@Bean
public RestTemplate restTemplateSearch(final RestTemplateBuilder restTemplateBuilder) {
return restTemplateBuilder
.errorHandler(errorHandlerSearch())
.requestFactory(this::bufferedClientHttpRequestFactory)
.build();
}
private ClientHttpRequestFactory bufferedClientHttpRequestFactory() {
final SimpleClientHttpRequestFactory requestFactory = new SimpleClientHttpRequestFactory();
requestFactory.setConnectTimeout(CONNECT_TIMEOUT_MILLIS);
requestFactory.setReadTimeout(READ_TIMEOUT_MILLIS);
return new BufferingClientHttpRequestFactory(requestFactory);
}
private ResponseErrorHandler errorHandlerSearch() {
return new DefaultResponseErrorHandler() {
@Override
public boolean hasError(ClientHttpResponse response) throws IOException {
return response.getStatusCode().is5xxServerError();
}
};
}
[Graph: load balancing using the WebClient config up to 11:25, then switching back to RestTemplate]
Is there a reason for such a difference, and how can I use WebClient so that each instance receives the same RPS? A clue might be that older instances are getting more requests than newer ones.
I've tried a bit of debugging, and the same logic (defaults like ZoneAwareLoadBalancer) is being called.
I did a simple POC, and everything works exactly the same with WebClient and RestTemplate for the default configuration.
Rest server code:
@SpringBootApplication
internal class RestServerApplication
fun main(args: Array<String>) {
runApplication<RestServerApplication>(*args)
}
class BeansInitializer : ApplicationContextInitializer<GenericApplicationContext> {
override fun initialize(context: GenericApplicationContext) {
serverBeans().initialize(context)
}
}
fun serverBeans() = beans {
bean("serverRoutes") {
PingRoutes(ref()).router()
}
bean<PingHandler>()
}
internal class PingRoutes(private val pingHandler: PingHandler) {
fun router() = router {
GET("/api/ping", pingHandler::ping)
}
}
class PingHandler(private val env: Environment) {
fun ping(serverRequest: ServerRequest): Mono<ServerResponse> {
return Mono
.fromCallable {
// sleep added to simulate some work
Thread.sleep(2000)
}
.subscribeOn(elastic())
.flatMap {
ServerResponse.ok()
.syncBody("pong-${env["HOSTNAME"]}-${env["server.port"]}")
}
}
}
In application.yaml add:
context.initializer.classes: com.lbpoc.server.BeansInitializer
Dependencies in gradle:
implementation('org.springframework.boot:spring-boot-starter-webflux')
Rest client code:
@SpringBootApplication
internal class RestClientApplication {
@Bean
@LoadBalanced
fun webClientBuilder(): WebClient.Builder {
return WebClient.builder()
}
@Bean
@LoadBalanced
fun restTemplate() = RestTemplateBuilder().build()
}
fun main(args: Array<String>) {
runApplication<RestClientApplication>(*args)
}
class BeansInitializer : ApplicationContextInitializer<GenericApplicationContext> {
override fun initialize(context: GenericApplicationContext) {
clientBeans().initialize(context)
}
}
fun clientBeans() = beans {
bean("clientRoutes") {
PingRoutes(ref()).router()
}
bean<PingHandlerWithWebClient>()
bean<PingHandlerWithRestTemplate>()
}
internal class PingRoutes(private val pingHandlerWithWebClient: PingHandlerWithWebClient) {
fun router() = org.springframework.web.reactive.function.server.router {
GET("/api/ping", pingHandlerWithWebClient::ping)
}
}
class PingHandlerWithWebClient(private val webClientBuilder: WebClient.Builder) {
fun ping(serverRequest: ServerRequest) = webClientBuilder.build()
.get()
.uri("http://rest-server-poc/api/ping")
.retrieve()
.bodyToMono(String::class.java)
.onErrorReturn(TimeoutException::class.java, "Read/write timeout")
.flatMap {
ServerResponse.ok().syncBody(it)
}
}
class PingHandlerWithRestTemplate(private val restTemplate: RestTemplate) {
fun ping(serverRequest: ServerRequest) = Mono.fromCallable {
restTemplate.getForEntity("http://rest-server-poc/api/ping", String::class.java)
}.flatMap {
ServerResponse.ok().syncBody(it.body!!)
}
}
In application.yaml add:
context.initializer.classes: com.lbpoc.client.BeansInitializer
spring:
  application:
    name: rest-client-poc-for-load-balancing
logging:
  level.org.springframework.cloud: DEBUG
  level.com.netflix.loadbalancer: DEBUG
rest-server-poc:
  listOfServers: localhost:8081,localhost:8082
Dependencies in gradle:
implementation('org.springframework.boot:spring-boot-starter-webflux')
implementation('org.springframework.cloud:spring-cloud-starter-netflix-ribbon')
You can try it with two or more server instances, and it works exactly the same with WebClient and RestTemplate.
Ribbon uses the ZoneAwareLoadBalancer by default, and if you have only one zone, all server instances will be registered in the "unknown" zone.
You might have a problem with connections being kept alive by WebClient. WebClient reuses the same connection across multiple requests; RestTemplate does not do that. If you have some kind of proxy between your client and server, then you might have a problem with WebClient reusing connections. To verify it, you can modify the WebClient bean like this and run the tests:
@Bean
@LoadBalanced
fun webClientBuilder(): WebClient.Builder {
return WebClient.builder()
.clientConnector(ReactorClientHttpConnector { options ->
options
.compression(true)
.afterNettyContextInit { ctx ->
ctx.markPersistent(false)
}
})
}
Of course it's not a good solution for production, but doing that lets you check whether the problem is in the configuration inside your client application, or whether it is outside, somewhere between your client and server. For example, if you are using Kubernetes and register your services in service discovery using the server node IP address, then every call to such a service will go through the kube-proxy load balancer and will (by default using round robin) be routed to some pod of that service.
You have to adjust the Ribbon configuration to modify the load-balancing behavior (please read below).
By default (as you have found out yourself) the ZoneAwareLoadBalancer is used. In the source code documentation for ZoneAwareLoadBalancer we read (highlighted by me are some mechanics which could result in the RPS pattern you see):
The key metric used to measure the zone condition is Average Active Requests, which is aggregated per rest client per zone. It is the
total outstanding requests in a zone divided by number of available targeted instances (excluding circuit breaker tripped instances).
This metric is very effective when timeout occurs slowly on a bad zone.
The LoadBalancer will calculate and examine zone stats of all available zones. If the Average Active Requests for any zone has reached a configured threshold, this zone will be dropped from the active server list. In case more than one zone has reached the threshold, the zone with the most active requests per server will be dropped.
Once the worst zone is dropped, a zone will be chosen among the rest with a probability proportional to its number of instances.
If your traffic is being served by one zone (perhaps the same box?) then you might get into some additional confusing situations.
Please also note that without the LoadBalancerExchangeFilterFunction the average RPS is the same as when you use it (on the graph all lines converge to the median line after the change), so globally both load-balancing strategies consume the same amount of available bandwidth, just in a different manner.
To modify your Ribbon client settings, try following:
public class RibbonConfig {
@Autowired
IClientConfig ribbonClientConfig;
@Bean
public IPing ribbonPing (IClientConfig config) {
return new PingUrl();//default is a NoOpPing
}
@Bean
public IRule ribbonRule(IClientConfig config) {
return new AvailabilityFilteringRule(); // here override the default ZoneAvoidanceRule
}
}
Then don't forget to globally define your Ribbon client config:
@SpringBootApplication
@RibbonClient(name = "app", configuration = RibbonConfig.class)
public class App {
//...
}
Hope this helps!

How to log requests in ktor http client?

I got something like this:
private val client = HttpClient {
install(JsonFeature) {
serializer = GsonSerializer()
}
install(ExpectSuccess)
}
and make request like
private fun HttpRequestBuilder.apiUrl(path: String, userId: String? = null) {
header(HttpHeaders.CacheControl, "no-cache")
url {
takeFrom(endPoint)
encodedPath = path
}
}
but I need to check the request and response bodies. Is there any way to do it, in the console or in a file?
You can achieve this with the Logging feature.
First add the dependency:
implementation "io.ktor:ktor-client-logging-native:$ktor_version"
Then install the feature:
private val client = HttpClient {
install(Logging) {
logger = Logger.DEFAULT
level = LogLevel.ALL
}
}
Bonus:
If you need to have multiple HttpClient instances throughout your application and you want to reuse some of the configuration, then you can create an extension function and add the common logic in there. For example:
fun HttpClientConfig<*>.default() {
install(Logging) {
logger = Logger.DEFAULT
level = LogLevel.ALL
}
// Add all the common configuration here.
}
And then initialize your HttpClient like this:
private val client = HttpClient {
default()
}
I ran into this as well. I switched to using the Ktor OkHttp client, as I'm familiar with the logging mechanism there.
Update your pom.xml or build.gradle to include that client (copy/paste from the Ktor site) and also add the OkHttp Logging Interceptor (again, copy/paste from that site). The current version is 3.12.0.
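For a Gradle build, the coordinates are roughly as follows (the Ktor version placeholder follows the snippet earlier in this thread and the interceptor version is the one mentioned above; check both against the current releases):
implementation "io.ktor:ktor-client-okhttp:$ktor_version"
implementation "com.squareup.okhttp3:logging-interceptor:3.12.0"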
Now configure the client with
val client = HttpClient(OkHttp) {
engine {
val loggingInterceptor = HttpLoggingInterceptor()
loggingInterceptor.level = Level.BODY
addInterceptor(loggingInterceptor)
}
}
Regardless of which client you use or framework you are on, you can implement your own logger like so:
private val client = HttpClient {
// Other configurations...
install(Logging) {
logger = CustomHttpLogger()
level = LogLevel.BODY
}
}
Where CustomHttpLogger is any class that implements the ktor Logger interface, like so:
import io.ktor.client.features.logging.Logger
class CustomHttpLogger(): Logger {
override fun log(message: String) {
Log.d("loggerTag", message) // Or whatever logging system you want here
}
}
You can read more about the Logger interface in the documentation here or in the source code here
It looks like we should handle the response in HttpReceivePipeline. We could clone the original response and use it for logging purposes:
scope.receivePipeline.intercept(HttpReceivePipeline.Before) { response ->
val (loggingContent, responseContent) = response.content.split(scope)
launch {
val callForLog = DelegatedCall(loggingContent, context, scope, shouldClose = false)
....
}
...
}
The example implementation could be found here: https://github.com/ktorio/ktor/blob/00369bf3e41e91d366279fce57b8f4c97f927fd4/ktor-client/ktor-client-core/src/io/ktor/client/features/observer/ResponseObserver.kt
and it should be available in the next minor release as a client feature.
By the way, we could implement the same scheme for the request.
A custom structured log can be created with the HttpSend plugin
Ktor 2.x:
client.plugin(HttpSend).intercept { request ->
val call = execute(request)
val response = call.response
val durationMillis = response.responseTime.timestamp - response.requestTime.timestamp
Log.i("NETWORK", "[${response.status.value}] ${request.url.build()} ($durationMillis ms)")
call
}
Ktor 1.x:
client.config {
install(HttpSend) {
intercept { call, _ ->
val request = call.request
val response = call.response
val durationMillis = response.responseTime.timestamp - response.requestTime.timestamp
Log.i("NETWORK", "[${response.status.value}] ${request.url} ($durationMillis ms)")
call
}
}
}
Check out Kotlin Logging (https://github.com/MicroUtils/kotlin-logging): it is used by a lot of open source frameworks and takes care of all the pretty printing.
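If you are on Gradle, the dependency looks roughly like this (the version is only an example, and an SLF4J backend such as Logback or slf4j-simple must also be on the classpath for the output to actually appear):
implementation "io.github.microutils:kotlin-logging:1.12.5"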
You can use it simply like this:
private val logger = KotlinLogging.logger { }
logger.info { "MYLOGGER INFO" }
logger.warn { "MYLOGGER WARNING" }
logger.error { "MYLOGGER ERROR" }
This will print the messages on the console.