How can I make a ktor netty server use a CachedThreadPool to process requests? - kotlin

Using Ktor (1.4, can upgrade if needed) for a server-side application, with vanilla initialization:
fun main(args: Array<String>): Unit = io.ktor.server.netty.EngineMain.main(args)
...
fun Application.module() {
... stuff
}
I would like the underlying Netty engine to use a CachedThreadPool to process requests. Quite a few of my requests take substantial time to process (e.g. running long queries against a database), which I assume will block the thread processing the request and potentially make the server unresponsive.
How do I do that? Any other options? Do I need to make other changes (e.g. to the coroutine dispatchers) to make sure this has the desired effect?

I think it should be something like this
val env: ApplicationEngineEnvironment = ....
val server = embeddedServer(Netty, env) {
    configureBootstrap = {
        group(NioEventLoopGroup(..., Executors.newCachedThreadPool()))
    }
}
server.start(wait = true)
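For reference, a rough sketch of one way this might look, assuming Ktor 1.4's applicationEngineEnvironment builder, the configureBootstrap hook on the Netty engine configuration, and Netty's NioEventLoopGroup constructor that accepts an Executor (verify the exact APIs against your Ktor/Netty versions). Note that the event loop group still has a fixed number of threads; the cached thread pool only supplies them, so for long blocking work it is usually simpler to wrap the blocking call in withContext(Dispatchers.IO) inside the handler, which is also shown:
import io.ktor.application.call
import io.ktor.response.respondText
import io.ktor.routing.get
import io.ktor.routing.routing
import io.ktor.server.engine.applicationEngineEnvironment
import io.ktor.server.engine.connector
import io.ktor.server.engine.embeddedServer
import io.ktor.server.netty.Netty
import io.netty.channel.nio.NioEventLoopGroup
import java.util.concurrent.Executors
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

fun main() {
    val env = applicationEngineEnvironment {
        connector { port = 8080 }
        module {
            routing {
                get("/slow") {
                    // Run the blocking query on Dispatchers.IO so request
                    // threads stay free while it executes.
                    val result = withContext(Dispatchers.IO) {
                        runLongDatabaseQuery() // hypothetical blocking call
                    }
                    call.respondText(result)
                }
            }
        }
    }
    val server = embeddedServer(Netty, env) {
        configureBootstrap = {
            // Back the Netty event loop group with a cached thread pool;
            // the first argument still bounds the number of event loop threads.
            group(NioEventLoopGroup(16, Executors.newCachedThreadPool()))
        }
    }
    server.start(wait = true)
}

// Hypothetical stand-in for a long-running blocking query.
fun runLongDatabaseQuery(): String = "result"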


Parallel Flux blocking call

My application setup is described as part of the issue "Correct way of using spring webclient in spring amqp", where I am trying to use Spring WebClient to make API calls from Spring AMQP RabbitMQ consumer threads.
The issue seems to be that the parallel Flux blocking call just stalls, or takes a very long time, after the first few requests are fired.
To simulate this, I created the minimal setup below.
Dependencies used
Spring boot 2.2.6.RELEASE
spring-boot-starter-web
spring-boot-starter-webflux
reactor-netty 0.9.14.RELEASE
As mentioned in the other linked issue, below is the configuration for the WebClient:
@Bean
public WebClient webClient() {
    ConnectionProvider connectionProvider = ConnectionProvider
            .builder("fixed")
            .lifo()
            .pendingAcquireTimeout(Duration.ofMillis(200000))
            .maxConnections(100)
            .pendingAcquireMaxCount(3000)
            .maxIdleTime(Duration.ofMillis(290000))
            .build();
    HttpClient client = HttpClient.create(connectionProvider);
    client = client.tcpConfiguration(<<connection timeout, read timeout, write timeout is set here....>>);
    WebClient.Builder builder =
            WebClient.builder().baseUrl(<<base URL>>).clientConnector(new ReactorClientHttpConnector(client));
    return builder.build();
}
Below is the @Service class with the parallel Flux WebClient calls:
@Service
public class FluxtestService {

    // WebClient bean from the configuration above
    @Autowired
    private WebClient webClient;

    public Flux<Response> getFlux(List<Request> reqList) {
        return Flux
                .fromIterable(reqList)
                .parallel()
                .runOn(Schedulers.elastic())
                .flatMap(s -> {
                    return webClient
                            .method(HttpMethod.POST)
                            .uri(<<downstream url>>)
                            .body(BodyInserters.fromValue(s))
                            .exchange()
                            .flatMap(response -> {
                                if (response.statusCode().isError()) {
                                    return Mono.just(new Response());
                                }
                                return response.bodyToMono(Response.class);
                            });
                })
                .sequential();
    }
}
To simulate the Spring AMQP RabbitMQ consumer/listener, I created the @RestController below:
@RestController
public class FluxTestController {

    @Autowired
    private FluxtestService service;

    @PostMapping("/fluxtest")
    public List<Response> getFlux(@RequestBody List<Request> reqList) {
        return service.getFlux(reqList).collectList().block();
    }
}
I tried firing requests from JMeter with around 15 threads. The first few sets of requests are processed very quickly. While requests are being served, I can see the following log entries in the log file:
Channel cleaned, now 32 active connections and 68 inactive connections
Once I submit more sets of requests, the number of active connections keeps increasing until it reaches the configured maximum of 100, and I don't see it decreasing at all. Up to this point, response times are OK.
Any subsequent requests, however, start taking a very long time. I also don't see the active connections dropping much, even when no requests are being fired.
After some time, I also see the exception below:
reactor.netty.internal.shaded.reactor.pool.PoolAcquireTimeoutException: Pool#acquire(Duration) has been pending for more than the configured timeout of 200000 ms
This suggests that the downstream connections are not being released. Please advise on this issue and possible fixes.
It seems the issue was that the underlying connection was not being released when the WebClient downstream call responded with an error status. When using exchange() with WebClient, the response must always be consumed or released explicitly; otherwise it can lead to a connection leak. Below is the change that fixed the issue.
Replace
.flatMap(response -> {
    if (response.statusCode().isError()) {
        return Mono.just(new Response());
    }
    return response.bodyToMono(Response.class);
})
with
.flatMap(response -> {
    if (response.statusCode().isError()) {
        // Release the unread body so the pooled connection is returned.
        return response.releaseBody().thenReturn(new Response());
    }
    return response.bodyToMono(Response.class);
})

How is blocking in a suspend function different than calling an async function?

I am using OkHttp to make a synchronous HTTP request. To avoid blocking the main thread, I wrapped the blocking network call in a suspend function with withContext(Dispatchers.IO):
suspend fun run(): String {
    val client = OkHttpClient()
    val request = Request.Builder()
        .url("https://publicobject.com/helloworld.txt")
        .build()
    return withContext(Dispatchers.IO) {
        val response = client.newCall(request).execute()
        return@withContext "Request completed successfully"
    }
}
Android Studio gives me a warning that execute() is an "Inappropriate blocking method call". My understanding is that execute() will block during the http request taking up a thread in Dispatchers.IO for the duration of the request, which is not ideal. To avoid this issue, I can use the asynchronous version of the request wrapped in suspendCoroutine
suspend fun runAsync(): String = suspendCoroutine { continuation ->
    val client = OkHttpClient()
    val request = Request.Builder()
        .url("http://publicobject.com/helloworld.txt")
        .build()
    client.newCall(request).enqueue(object : Callback {
        override fun onFailure(call: Call, e: IOException) {
            continuation.resumeWithException(e)
        }

        override fun onResponse(call: Call, response: Response) {
            response.use {
                if (!response.isSuccessful) {
                    // Resume with an exception; throwing here would be swallowed
                    // by OkHttp's dispatcher and the coroutine would never resume.
                    continuation.resumeWithException(IOException("Unexpected code $response"))
                } else {
                    continuation.resume("Request completed successfully")
                }
            }
        }
    })
}
This avoids the warning, but I do not understand how it is functionally different than the synchronous version above. I am assuming that the async version of the http call uses a thread to wait on the request. Is this assumption correct? If not, how does the async function wait for the callback to return?
It's not correct; the point of the async approach is that no thread is blocked while the call is in flight. Your second approach achieves exactly that: while the coroutine is suspended, it doesn't occupy any thread at all. Depending on the implementation details of OkHttp, some internal thread may or may not be blocked if it relies on blocking IO. Ideally it would be implemented in terms of Java NIO (more typically, through the Netty library) and rely on the low-level non-blocking primitive called the Selector.
The correct approach is the one you took, suspending the coroutine around the asynchronous callback, and kotlin-coroutines-okhttp already provides exactly that implementation in its Call.await() extension function.
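For illustration, here is a sketch of roughly what such an await() extension looks like, written with suspendCancellableCoroutine so that cancelling the coroutine also cancels the in-flight HTTP call (the actual kotlin-coroutines-okhttp implementation may differ in details):
import java.io.IOException
import kotlin.coroutines.resume
import kotlin.coroutines.resumeWithException
import kotlinx.coroutines.suspendCancellableCoroutine
import okhttp3.Call
import okhttp3.Callback
import okhttp3.Response

suspend fun Call.await(): Response = suspendCancellableCoroutine { continuation ->
    enqueue(object : Callback {
        override fun onFailure(call: Call, e: IOException) {
            // Resume the suspended coroutine with the network failure.
            continuation.resumeWithException(e)
        }

        override fun onResponse(call: Call, response: Response) {
            // Resume with the response; no thread was blocked while waiting.
            continuation.resume(response)
        }
    })
    // If the coroutine is cancelled, cancel the in-flight call as well.
    continuation.invokeOnCancellation { cancel() }
}
With it, the body of runAsync() reduces to client.newCall(request).await() followed by whatever response handling you need.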

How to Run Ktor Embedded Server from Code

I've written a simple Ktor server that processes a JSON payload from an incoming POST request. Now, I want to spawn this server from another application, and after processing the request, shut it down.
So the first problem I need to solve is: how do I spawn the Ktor server from some other 'driver' Kotlin code? All the tutorials I've found online are 1-2 years old, and are apparently using an older version of Ktor, where the main class looks like this:
fun main(args: Array<String>) {
    embeddedServer(Netty, 8080) {
        routing {
            get("/") {
                call.respondText("Hello from Kotlin Backend", ContentType.Text.Html)
            }
        }
    }.start(wait = true)
}
It's easy to see that one can just run the embeddedServer(Netty, 8080) { ... }.start(wait = true) from wherever they want, to spawn the server. But I downloaded the Ktor plugin for IntelliJ IDEA yesterday, and it seems things have changed lately. This is what the new main class looks like:
fun main(args: Array<String>): Unit = io.ktor.server.netty.EngineMain.main(args)
@Suppress("unused") // Referenced in application.conf
@kotlin.jvm.JvmOverloads
fun Application.module(testing: Boolean = false) {
    install(ContentNegotiation) {
        gson {
        }
    }
    routing {
        get("/") {
            call.respondText("HELLO WORLD!", contentType = ContentType.Text.Plain)
        }
        get("/json/gson") {
            call.respond(mapOf("hello" to "world"))
        }
    }
}
Now, the Application.module(...) function takes care of setting up the routing and stuff, while the actual running of the server is done internally by io.ktor.server.netty.EngineMain.main(args). Also, properties like the port number are referenced from the application.conf file, and I'm not quite sure how it figures out where to find that application.conf file.
I have been able to run this Ktor server using gradlew. I also understand that it is possible to export it as an executable jar and run it (as explained here). But I can't find out how to run it from code. Any suggestions?
Edit: And it would be nice if I could set the port number from the driver code.
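One way to do this from driver code, sketched under the assumption that the Ktor 1.x embeddedServer overload taking a port and a module lambda is available (which bypasses application.conf, so the port can be chosen programmatically) and that ApplicationEngine.stop accepts a grace period and a timeout in milliseconds:
import io.ktor.server.engine.embeddedServer
import io.ktor.server.netty.Netty

fun main() {
    // Start the server on a port chosen by the driver, reusing the
    // Application.module() function shown above (import it from its package).
    val server = embeddedServer(Netty, port = 9090) {
        module(testing = false)
    }
    server.start(wait = false)

    // ... wait for the incoming POST request to be processed ...

    // Shut the server down: grace period and hard timeout, in milliseconds.
    server.stop(1000, 5000)
}
Because start(wait = false) returns immediately, the driver thread stays free to decide when the server should be stopped.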

OkHttp3 blocks the program from exiting

I'm working on a library that is a single file containing some higher-order functions.
My file looks like this:
import okhttp3.*
private val client by lazy { OkHttpClient() }
fun fn() {
    client.newCall(request(url)).enqueue(callback)
    // do stuff ...
}
...
When I call fn(), the program keeps running in the background and does not exit, even when there are no more instructions to execute. I suspect this happens because of .enqueue(callback), which is asynchronous.
If you upgrade to the latest OkHttp, 4.7.2, it won't block your program from exiting, as its threads are now daemon threads.
Clean shutdown is documented in the API docs for OkHttpClient.
https://square.github.io/okhttp/4.x/okhttp/okhttp3/-ok-http-client/
client.dispatcher().executorService().shutdown();
client.connectionPool().evictAll();
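Since the question's code is Kotlin and OkHttp 4.x exposes these as properties rather than methods, a small shutdown helper for the library file above might look like this (a sketch following the clean-shutdown steps from the docs):
import okhttp3.OkHttpClient

// Call this when the host program is done with the client so its background
// threads wind down and the process can exit promptly.
fun shutdown(client: OkHttpClient) {
    // Stop accepting new calls and shut down the dispatcher's executor.
    client.dispatcher.executorService.shutdown()
    // Evict idle pooled connections so their keep-alive threads exit.
    client.connectionPool.evictAll()
    // Close the cache if one was configured.
    client.cache?.close()
}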

Retrofit & okhttp won't work after I turn on internet on phone

Scenario:
Open the app without internet; the app will try to make a request and fail
Turn on the internet connection and press the retry button to trigger the request again
Retrofit & OkHttp will always give me HTTP FAILED: java.net.SocketTimeoutException: timeout
Restarting the app with internet enabled from the start makes everything work, unless I close it again and fail a request; from that point on it gives me the same error.
I never had this issue with Java, only with Kotlin.
private val interceptor: Interceptor =
    object : Interceptor {
        override fun intercept(chain: Interceptor.Chain): Response {
            var builder = chain.request().newBuilder()
            Prefs.token?.let { token ->
                builder = builder.addHeader("Authorization", "Bearer $token")
            }
            return chain.proceed(builder.build())
        }
    }

private val httpLoggingInterceptor: HttpLoggingInterceptor by lazy {
    val interceptor = HttpLoggingInterceptor()
    interceptor.level =
        if (BuildConfig.DEBUG) HttpLoggingInterceptor.Level.BODY else HttpLoggingInterceptor.Level.NONE
    interceptor
}

private val httpClient: OkHttpClient by lazy {
    OkHttpClient.Builder()
        .addInterceptor(httpLoggingInterceptor)
        .addInterceptor(interceptor)
        .build()
}

val retrofit: Retrofit by lazy {
    Retrofit.Builder()
        .baseUrl("https://api.secret.com/v1/")
        .addConverterFactory(GsonConverterFactory.create(gson))
        .client(httpClient)
        .build()
}
And the service classes look like this
@GET("something")
fun something(): Call<SomeResponse>
I've tried playing around with timeout values; no matter the timeout, I get the same error.
Creating a new HTTP client for every request fixes the issue, but I don't think that's a good idea.
Your issue looks like an OkHttp bug. If you follow the link, you will find a long discussion with many possible solutions.
The following solution works for my project:
Update OkHttp to at least 4.3.0.
Set a ping interval, for example 1 second:
okHttpClientBuilder.pingInterval(1, TimeUnit.SECONDS)
How it works
The root of the issue is that the Android OS doesn't provide any way to know that a connection isn't active any more, so to the library the connection looks alive even though it's already dead. As a result, we get a timeout exception on every request. Once we set a ping interval, OkHttp starts sending ping frames, so if the server doesn't respond, the library knows the connection is dead and it's time to create a new one.
Not recommended solutions, but they should also work:
Turn off the connection pool:
okHttpClientBuilder.connectionPool(new ConnectionPool(0, 1, TimeUnit.NANOSECONDS))
Use HTTP 1.1:
okHttpClientBuilder.protocols(listOf(Protocol.HTTP_1_1))
In both not-recommended solutions you simply stop reusing already opened connections, which makes each request take a little longer.
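Applied to the client builder from the question above (reusing the two interceptors defined there), the recommended fix might look roughly like this; the 1-second interval is just the example value mentioned earlier:
import java.util.concurrent.TimeUnit
import okhttp3.OkHttpClient

private val httpClient: OkHttpClient by lazy {
    OkHttpClient.Builder()
        .addInterceptor(httpLoggingInterceptor)
        .addInterceptor(interceptor)
        // Send ping frames so dead connections are detected and replaced
        // instead of timing out on the next request.
        .pingInterval(1, TimeUnit.SECONDS)
        .build()
}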