Timer: Does not start my method periodically every N seconds - Kotlin

In my Kotlin project I want to start two independent timers.
The first must run every 30 seconds.
The second must run every 20 seconds (starting after 5 seconds).
I tried this:
import kotlin.concurrent.schedule
import kotlin.concurrent.timerTask
Timer().scheduleAtFixedRate(timerTask { login() }, 1000, 30 * 1000)
Timer().schedule(5_000) { Timer().scheduleAtFixedRate(timerTask { updateState() }, 1000, 20 * 1000) }
The first timer successfully runs (login) periodically every 30 seconds. Nice.
But the second timer runs only ONCE. It does not run (updateState) periodically every 20 seconds.
The method login does a synchronous HTTP request. The method updateState also does a synchronous HTTP request.
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.toRequestBody
import okhttp3.Response
fun login() {
val loginRequestURL =
"${Config.instance.loginApiUrl}/importAPILogist.php"
val requestToc= Request.Builder()
.url(loginRequestURL)
.get()
.build()
val httpClient = OkHttpClient()
val loginResponse: Response = httpClient.newCall(requestToc).execute() // sync request
}
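As an aside, OkHttp's execute() returns a Response that holds a pooled connection and should be closed when you are done with it. A minimal sketch using use (the function name and URL parameter here are illustrative, not part of the question's API):

```kotlin
import okhttp3.OkHttpClient
import okhttp3.Request

// Sketch: use {} closes the Response (and releases its connection
// back to the pool) even if reading the body throws.
fun fetch(url: String): String? {
    val client = OkHttpClient()
    val request = Request.Builder().url(url).get().build()
    client.newCall(request).execute().use { response ->
        return response.body?.string()
    }
}
```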
and another method:
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.toRequestBody
import okhttp3.Response
fun updateOrdersState() {
logger.info("updateOrdersState:")
val someRequestURL = "${Config.instance.someApiUrl}/exportAPI"
val requestOrderStateTocan = Request.Builder()
.url(someRequestURL)
.get()
.build()
val httpClient = OkHttpClient()
val tocanOrderStateResponse: Response = httpClient.newCall(requestOrderStateTocan).execute() // sync request
}
The method updateState can throw an exception (java.net.SocketTimeoutException).
Maybe I need to catch the exception when the HTTP request is not successful? E.g.:
Exception in thread "Timer-2" java.net.SocketTimeoutException: timeout
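Yes, that is very likely the problem: an uncaught exception thrown from a TimerTask terminates the Timer's thread, and all further executions scheduled on that Timer are silently cancelled, which matches the second timer firing only once. A minimal stdlib-only sketch, with the failing HTTP call simulated by a thrown SocketTimeoutException:

```kotlin
import java.net.SocketTimeoutException
import java.util.Timer
import kotlin.concurrent.timerTask

var runs = 0

fun main() {
    val timer = Timer()
    timer.scheduleAtFixedRate(timerTask {
        try {
            runs++
            // simulate the HTTP call failing on the first run
            if (runs == 1) throw SocketTimeoutException("timeout")
        } catch (e: Exception) {
            // catching (and logging) inside the task keeps the Timer thread alive
            println("request failed: ${e.message}")
        }
    }, 0, 50)
    Thread.sleep(300)
    timer.cancel()
    println("runs = $runs")
}
```

Without the try/catch, the first SocketTimeoutException would kill the timer thread and no further runs would happen.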

Ktor modify request on retry not working as expected

I have a custom retry policy for 5XX errors from the server. The idea is to retry until I get a non-5XX response, with an exponential delay between retries; I would also like to update the request body on every retry.
Here is my code:
import io.ktor.client.*
import io.ktor.client.engine.java.*
import io.ktor.client.plugins.*
import io.ktor.client.request.*
import io.ktor.http.*
import io.ktor.server.application.*
import io.ktor.server.engine.*
import io.ktor.server.netty.*
import io.ktor.server.request.*
import io.ktor.server.routing.*
import kotlinx.coroutines.*
import kotlin.time.Duration.Companion.seconds
suspend fun main() {
val serverJob = CoroutineScope(Dispatchers.Default).launch { startServer() }
val client = HttpClient(Java) {
install(HttpTimeout) {
connectTimeoutMillis = 5.seconds.inWholeMilliseconds
}
install(HttpRequestRetry)
}
client.post {
url("http://127.0.0.1:8080/")
setBody("Hello")
retry {
retryOnServerErrors(maxRetries = Int.MAX_VALUE)
exponentialDelay(maxDelayMs = 128.seconds.inWholeMilliseconds)
modifyRequest { it.setBody("With Different body ...") } // It's not working! if I comment this out then my retry logic works as expected
}
}
client.close()
serverJob.cancelAndJoin()
}
suspend fun startServer() {
embeddedServer(Netty, port = 8080) {
routing {
post("/") {
val text = call.receiveText()
println("Retrying exponentially... $text")
call.response.status(HttpStatusCode(500, "internal server error"))
}
}
}.start(wait = true)
}
As you can see, if I comment out the modifyRequest { it.setBody("With Different body ...") } line from the retry logic, everything works fine. If I include that line, it only tries once and gets stuck. What am I doing wrong, and how can I change the request body on every retry?
The problem is that rendering (transformation to an OutgoingContent) of a request body happens during the execution of the HttpRequestPipeline, which takes place only once, when making the initial request. The HTTP request retrying happens later, in the HttpSendPipeline.
Since you pass a String as a request body, it needs to be transformed before the actual sending. To solve this problem, you can manually wrap your String in a TextContent instance (from io.ktor.http.content) and pass it to the setBody method:
retry {
retryOnServerErrors(maxRetries = Int.MAX_VALUE)
exponentialDelay(maxDelayMs = 128.seconds.inWholeMilliseconds)
modifyRequest {
it.setBody(TextContent("With Different body ...", ContentType.Text.Plain))
}
}
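As a further sketch, the pre-rendered body can even change per attempt. This assumes Ktor 2.x, where the modifyRequest block runs in a context that exposes a retryCount; the helper name and body text here are illustrative:

```kotlin
import io.ktor.client.plugins.*
import io.ktor.http.*
import io.ktor.http.content.*

// Configures retry so each attempt re-sends a freshly rendered body.
// TextContent is already an OutgoingContent, so it bypasses the one-shot
// body transformation in HttpRequestPipeline and survives the retry loop.
fun HttpRequestRetry.Configuration.retryWithFreshBody() {
    retryOnServerErrors(maxRetries = 3)
    exponentialDelay()
    modifyRequest { request ->
        request.setBody(TextContent("attempt #$retryCount", ContentType.Text.Plain))
    }
}
```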

Ktor how to get http code from request without body

I make a request to the server, but there is no body in the response.
Accordingly, the return type of the response is Unit.
suspend fun foo(
url: String,
id: Long
) {
val requestUrl = "$url/Subscriptions?id=${id}"
val response = httpApiClient.delete<Unit>(requestUrl) {
headers {
append(HttpHeaders.Authorization, createRequestToken(token))
}
}
return response
}
How, in this case, can I get the status code of the executed request?
HttpResponseValidator {
validateResponse { response ->
TODO()
}
}
Using a similar construction and throwing an error, for example, is not an option, since one HTTP client is used for several requests, and making a new HTTP client for one request seems strange. Is there any other way out?
You can specify the HttpResponse type as a type argument instead of Unit to get an object that gives you access to the status property (HTTP status code), the headers, the body of the response, etc. Here is an example:
import io.ktor.client.HttpClient
import io.ktor.client.engine.apache.*
import io.ktor.client.request.*
import io.ktor.client.statement.*
suspend fun main() {
val client = HttpClient(Apache)
val response = client.get<HttpResponse>("https://httpbin.org/get")
// the response body isn't received yet
println(response.status)
}
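The example above uses the Ktor 1.x API. For reference, in Ktor 2.x the type-argument form was removed: every request simply returns an HttpResponse, and the body is only read when you ask for it. A sketch (the engine choice here is illustrative):

```kotlin
import io.ktor.client.*
import io.ktor.client.engine.cio.*
import io.ktor.client.request.*
import io.ktor.client.statement.*

suspend fun main() {
    val client = HttpClient(CIO)
    // status and headers are available without consuming the body
    val response: HttpResponse = client.get("https://httpbin.org/get")
    println(response.status)
    client.close()
}
```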

How to trigger a websockets message from outside the routing() block in Ktor?

If I have a Kotlin application that wants to trigger outgoing WebSocket messages in Ktor, I usually do this from within the relevant routing block. If I have a process outside of the routing block that wants to send a WebSocket message, how can I trigger that?
You need to store the session provided by the web socket connection and then you can send messages in that session:
var session: WebSocketSession? = null
try {
client.ws {
session = this
}
} finally {
// clear the session both when the socket is closed normally
// and when an error occurs, because it is no longer valid
session = null
}
// other coroutine
session?.send(/*...*/)
You can use coroutines' channels to send a Websocket session and receive it in a different place:
import io.ktor.application.*
import io.ktor.http.cio.websocket.*
import io.ktor.routing.*
import io.ktor.server.engine.*
import io.ktor.server.netty.*
import io.ktor.websocket.*
import kotlinx.coroutines.channels.Channel
suspend fun main() {
val sessions = Channel<WebSocketSession>()
val server = embeddedServer(Netty, 5555, host = "0.0.0.0") {
install(WebSockets) {}
routing {
webSocket("/socket") {
sessions.send(this)
}
}
}
server.start()
for (session in sessions) {
session.outgoing.send(Frame.Text("From outside of the routing"))
}
}
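If multiple clients can be connected at once, another common approach is to keep the live sessions in a concurrent set and broadcast from anywhere. A sketch; the object name is illustrative, and the send extension lives in io.ktor.http.cio.websocket in Ktor 1.x (io.ktor.websocket in 2.x):

```kotlin
import io.ktor.http.cio.websocket.*
import java.util.concurrent.ConcurrentHashMap

object SessionRegistry {
    // thread-safe set of currently open sessions
    private val sessions = ConcurrentHashMap.newKeySet<WebSocketSession>()

    fun add(session: WebSocketSession) { sessions.add(session) }
    fun remove(session: WebSocketSession) { sessions.remove(session) }

    // callable from any coroutine outside the routing block
    suspend fun broadcast(text: String) {
        for (session in sessions) session.send(Frame.Text(text))
    }
}
```

The webSocket handler would call add(this) on connect and remove(this) in a finally block, mirroring the session-clearing pattern shown above.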

Problems testing spring webflux Webclient with high load

I am trying to learn Spring WebFlux, coming from C# and .NET Core. We have a problem very similar to this post, where a third-party service provider has some response-time problems.
But testing with Spring WebClient doubles the response time, and I do not know if I am missing something.
I tried to create a similar example with:
A computer running 3 servers
Demo server that just simulates some random delay time (port 8080)
Test Server in C# using async to call my "Wait" server (port 5000)
Test Server with spring and webclient to call my "Wait" server (port 8081)
Another computer running JMeter with 1000 clients and 10 rounds each
Some code
Wait server
Just a simple route
@Configuration
class TestRouter(private val middlemanDemo: MiddlemanDemo) {
@Bean
fun route() = router {
GET("/testWait", middlemanDemo::middleTestAndGetWait)
}
}
The handler has a Random generator with a seed, so each test can generate the same sequence of delays
@Service
class TestWaiter {
companion object RandomManager {
private lateinit var random: Random
init {
resetTimer()
}
@Synchronized
fun next(): Long {
val random = random.nextLong(0, 10)
return random * 2
}
fun resetTimer() {
random = Random(12345)
}
}
private val logger = LoggerFactory.getLogger(javaClass)
fun testAndGetWait(request: ServerRequest): Mono<ServerResponse> {
val wait = next()
logger.debug("Wait is: {}", wait)
return ServerResponse
.ok()
.json()
.bodyValue(wait)
.delayElement(Duration.ofSeconds(wait))
}
fun reset(request: ServerRequest): Mono<ServerResponse> {
logger.info("Random reset")
resetTimer()
return ServerResponse
.ok()
.build()
}
}
Load testing the server with JMeter I can see a steady response time of around 9-10 seconds and a max throughput of 100/sec:
C# async Demo server
Trying a middle man with C#, this server just calls the main demo server:
The controller
[HttpGet]
public async Task<string> Get()
{
return await _waiterClient.GetWait();
}
And the service with the httpClient
private readonly HttpClient _client;
public WaiterClient(HttpClient client)
{
_client = client;
client.BaseAddress = new Uri("http://192.168.0.121:8080");
}
public async Task<string> GetWait()
{
var response = await _client.GetAsync("/testWait");
var waitTime = await response.Content.ReadAsStringAsync();
return waitTime;
}
}
Testing this service gives the same response time, with slightly less throughput due to the overhead, which is understandable.
The spring-webclient implementation
This client is also really simple, just one route
@Configuration
class TestRouter(private val middlemanDemo: MiddlemanDemo) {
@Bean
fun route() = router {
GET("/testWait", middlemanDemo::middleTestAndGetWait)
}
}
The handler just calls the service using the webclient
@Service
class MiddlemanDemo {
private val client = WebClient.create("http://127.0.0.1:8080")
fun middleTestAndGetWait(request: ServerRequest): Mono<ServerResponse> {
return client
.get()
.uri("/testWait")
.retrieve()
.bodyToMono(Int::class.java)
.flatMap(::processResponse)
}
fun processResponse(delay: Int): Mono<ServerResponse> {
return ServerResponse
.ok()
.bodyValue(delay)
}
}
However, running the tests, the throughput only gets to 50/sec,
and the response time doubles, as if there were another wait, until the load goes down again.
I think it may be caused by pool acquire time.
I assume your server gets over 1k TPS and each request takes about 9 seconds, but the default HTTP client connection pool is 500. Please refer to Project Reactor - Connection Pool.
Please check whether the logs contain PoolAcquireTimeoutException, or whether your server spends time waiting for pool acquisition.
I am marking KL.Lee's answer because it pointed me in the right direction, but I will add the complete solution for anyone to find:
The key was to create a connection pool according to my needs. The default is 500, as KL.Lee mentioned.
@Service
class MiddlemanDemo(webClientBuilder: WebClient.Builder) {
private val client: WebClient
init {
val provider = ConnectionProvider.builder("fixed")
.maxConnections(2000) // This is the important part
.build()
val httpClient = HttpClient
.create(provider)
client = webClientBuilder
.clientConnector(ReactorClientHttpConnector(httpClient))
.baseUrl("http://localhost:8080")
.build()
}
fun middleTestAndGetWait(request: ServerRequest): Mono<ServerResponse> {
return client
.get()
.uri("/testWait")
.retrieve()
.bodyToMono(Int::class.java)
.flatMap(::processResponse)
}
fun processResponse(delay: Int): Mono<ServerResponse> {
return ServerResponse
.ok()
.bodyValue(delay)
}
}
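Beyond maxConnections, reactor-netty's ConnectionProvider.Builder also lets you size and time-limit the acquire queue, which is where the extra wait shows up under load. A configuration sketch; the provider name and values here are illustrative:

```kotlin
import java.time.Duration
import reactor.netty.resources.ConnectionProvider

// Sketch: tune how many requests may queue for a connection, and for how long,
// before acquisition fails with an error instead of silently waiting.
val provider: ConnectionProvider = ConnectionProvider.builder("tuned")
    .maxConnections(2000)
    .pendingAcquireMaxCount(4000)                  // requests allowed to wait for a connection
    .pendingAcquireTimeout(Duration.ofSeconds(60)) // fail acquisition after this delay
    .build()
```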

How to implement a rate limit across multiple coroutines efficiently?

So let's say I have a bunch of coroutines running that interact with some web service, and since I don't want to spam it, I want to limit the requests to at most 1 request every x seconds. For that I could use code like this:
fun CoroutineScope.rateLimiter(tokens: SendChannel<Unit>, rate: Int) = launch {
var lastToken = System.currentTimeMillis()
while (isActive) {
val currentTime = System.currentTimeMillis()
if (currentTime - lastToken < rate) {
delay(rate - (currentTime - lastToken)) // wait out the remaining interval, not the elapsed time
}
lastToken = System.currentTimeMillis()
tokens.send(Unit)
}
}
fun CoroutineScope.request(tokens: ReceiveChannel<Unit>) = launch {
for (token in tokens) {
//Do Web request
}
}
1.) Is this way of doing it efficient?
2.) This isn't extensible to, say, limiting something to x bytes/second, where I would need to request x tokens out of a token bucket. What would be the best way to implement something like that with coroutines?
If you want to skip depending on jobs and channels, which might create more permits than are being consumed (and then a stampeding herd once some process starts taking permits), maybe this is the solution for you.
(Some JVM-specific code in here, but replaceable for multiplatform)
import kotlin.math.max
import kotlinx.coroutines.delay
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock
class SimpleRateLimiter(eventsPerSecond: Double) {
private val mutex = Mutex()
@Volatile
private var next: Long = Long.MIN_VALUE
private val delayNanos: Long = (1_000_000_000L / eventsPerSecond).toLong()
/**
* Suspend the current coroutine until it's calculated time of exit
* from the rate limiter
*/
suspend fun acquire() {
val now: Long = System.nanoTime()
val until = mutex.withLock {
max(next, now).also {
next = it + delayNanos
}
}
if (until != now) {
delay((until - now) / 1_000_000)
}
}
}
It of course comes with other tradeoffs:
Behavior when nanoTime nears Long.MAX_VALUE is most definitely broken.
No implementation of maxDelay/timeout.
No way of grabbing multiple permits at once.
No tryAcquire implementation.
If you want an IntervalLimiter that allows X requests every Y seconds and then throws exceptions, there is the RateLimiter in Resilience4j.
Or if you want something a lot more fully featured, I'm working on a PR to create both a RateLimiter and an IntervalLimiter in the coroutines core project.
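For completeness, a usage sketch for the SimpleRateLimiter class above: ten coroutines share one limiter, and acquire() staggers them roughly 200 ms apart at 5 events/second (the rate and count are illustrative):

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    val limiter = SimpleRateLimiter(eventsPerSecond = 5.0)
    val jobs = List(10) { i ->
        launch {
            limiter.acquire() // suspends until this coroutine's calculated slot
            println("request $i")
        }
    }
    jobs.joinAll() // total run time is roughly 10 / 5.0 = ~2 seconds
}
```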