Micronaut-Core: How to create dynamic endpoints - Kotlin

Simple question: is it possible to create endpoints without @Endpoint?
I want to create endpoints dynamically from a description file, with their behaviour depending on its content.
Thanks!
Update about my idea: I would like to create something like a plugin system, to make my application more extensible for maintenance and future features.
It is worth mentioning that I am using Micronaut with Kotlin. Right now I have fixed, predefined endpoints which match my command scripts.
My description files will live under /src/main/resources.
Here is an example of what a description file might look like:
ENDPOINT: GET /myapi/customendpoint/version
COMMAND: """
#!/usr/bin/env bash
# This will be executed via SSH and streamed to stdout for further handling
echo "1.0.0"
"""
# This is a JSON template which will be used to produce the JSON returned by the endpoint
OUTPUT: """
{
"version": "Server version: $RESULT"
}
"""
Here is how I would like to make it work in the application:
import io.micronaut.docs.context.events.SampleEvent
import io.micronaut.context.event.StartupEvent
import io.micronaut.context.event.ShutdownEvent
import io.micronaut.runtime.event.annotation.EventListener

@Singleton
class SampleEventListener {
    /*var invocationCounter = 0

    @EventListener
    internal fun onSampleEvent(event: SampleEvent) {
        invocationCounter++
    }*/

    @EventListener
    internal fun onStartupEvent(event: StartupEvent) {
        // 1. Read all my description files
        // 2. Parse them (I already wrote a parser for this)
        // 3. Now the tricky part: how to add that information to the Micronaut runtime
        val description = MyDescription() // after parsing
        // Would be awesome if it were that simple! :)
        Micronaut.addEndpoint(
            description.getEndpoint(), description.getHttpOption(),
            MyCustomRequestHandler(description.getCommand()) // maybe there is a base class for inheritance?
        )
    }

    @EventListener
    internal fun onShutdownEvent(event: ShutdownEvent) {
        // shutdown logic here
    }
}

You can create a custom RouteBuilder that will register your custom endpoints at runtime:
@Singleton
class CustomRouteBuilder(executionHandleLocator: ExecutionHandleLocator)
    : DefaultRouteBuilder(executionHandleLocator) {

    @PostConstruct
    fun initRoutes() {
        val description = MyDescription()
        val method = description.getMethod()
        val routeUri = description.getEndpoint()
        val routeHandle = object : MethodExecutionHandle<Any, Any> {
            // implement 'MethodExecutionHandle' in a suitable manner to invoke 'description.getCommand()'
        }
        buildRoute(HttpMethod.parse(method), routeUri, routeHandle)
    }
}
Note that while this is still feasible, it would be better to consider another extension path, as this solution defeats the whole Micronaut philosophy of being an AOT compilation framework.

It was actually pretty easy. The solution for me was to implement an HttpServerFilter.
@Filter("/api/sws/custom/**")
class SwsRouteFilter(
    private val swsService: SwsService
) : HttpServerFilter {
    override fun doFilter(request: HttpRequest<*>?, chain: ServerFilterChain?): Publisher<MutableHttpResponse<*>> {
        return Flux.from(Mono.fromCallable {
            runBlocking {
                swsService.execute(request)
            }
        }.subscribeOn(Schedulers.boundedElastic()).flux())
    }
}
And the service can process with the HttpRequest object:
suspend fun execute(request: HttpRequest<*>?): MutableHttpResponse<Feedback> {
    val path = request!!.path.split("/api/sws/custom")[1]
    val httpMethod = request.method
    val parameters: Map<String, List<String>> = request.parameters.asMap()
    // TODO: Handle request body
    // and do your stuff ...
}
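To connect this back to the original goal, the matching and response-building step inside such a service could look roughly like the sketch below. The Description type and the runCommand/renderOutput helpers are hypothetical placeholders for the poster's own parser, SSH execution, and OUTPUT-template handling, not actual code from the project:
import io.micronaut.http.HttpMethod
import io.micronaut.http.HttpRequest
import io.micronaut.http.HttpResponse
import io.micronaut.http.MutableHttpResponse

// Hypothetical shape of a parsed description file
data class Description(val endpoint: String, val method: HttpMethod, val command: String, val outputTemplate: String)

class SwsDispatcher(private val descriptions: List<Description>) {

    suspend fun dispatch(request: HttpRequest<*>): MutableHttpResponse<*> {
        val path = request.path.removePrefix("/api/sws/custom")
        val match = descriptions.firstOrNull { it.endpoint == path && it.method == request.method }
            ?: return HttpResponse.notFound<Any>()

        val result = runCommand(match.command)                              // placeholder: run the script, e.g. over SSH
        return HttpResponse.ok(renderOutput(match.outputTemplate, result))  // placeholder: substitute $RESULT in the template
    }

    private fun runCommand(command: String): String = TODO("execute the COMMAND block")
    private fun renderOutput(template: String, result: String): String = TODO("fill the OUTPUT template")
}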

Related

Alternative solution to injecting dispatchers to make the code testable

I ran into a problem while writing tests for a ViewModel. The problem occurred when I was trying to verify LiveData that is updated by a channelFlow running on Dispatchers.IO.
I created a simple project to show the issue.
There is a data provider class that provides 10 numbers:
class NumbersProvider {
    val numbersFlow: Flow<Int> = channelFlow {
        var i = 0
        while (i < 10) {
            delay(100)
            send(i)
            i++
        }
    }.flowOn(Dispatchers.IO)
}
a simple viewModel that is collecting data:
class NumbersViewModel : ViewModel() {
    private val _numbers: MutableLiveData<IntArray> = MutableLiveData(IntArray(0))
    val numbers: LiveData<IntArray> = _numbers

    val dataProvider = NumbersProvider()

    fun startCollecting() {
        viewModelScope.launch(Dispatchers.Main) {
            dataProvider.numbersFlow
                .onStart { println("start") }
                .onCompletion { println("end") }
                .catch { exception -> println(exception.message.orEmpty()) }
                .collect { data -> onDataRead(data) }
        }
    }

    fun onDataRead(data: Int) {
        _numbers.value = _numbers.value?.plus(data)
    }
}
and the test:
class NumbersViewModelTest {
    @get:Rule
    var instantTaskExecutorRule = InstantTaskExecutorRule()

    @get:Rule
    var mainCoroutineRule = MainCoroutineRule()

    private lateinit var viewModel: NumbersViewModel

    @Before
    fun setUp() {
        viewModel = NumbersViewModel()
    }

    @Test
    fun `provider_provides_10_values`() {
        viewModel.startCollecting()
        mainCoroutineRule.advanceTimeBy(2000)
        val numbers = viewModel.numbers.value
        assertThat(numbers?.size).isEqualTo(10)
    }
}
As it is, the numbers variable in the test is empty and the test fails. I know it is a problem with coroutine dispatchers.
There is a common solution of changing the main dispatcher for test usage, but... is there any good solution for dealing with the IO one?
I found a solution with injecting dispatchers everywhere - similarly to how I would inject NumbersProvider using Hilt in a real app - and that enables injecting our test dispatcher when we need it. It works, but now I have to inject dispatchers everywhere in the code, and I don't really like that if it only serves to solve the testing problem.
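For reference, the constructor-injection variant I mean looks roughly like this - a minimal sketch, with a default argument so production code doesn't have to pass anything (the test-dispatcher name is just illustrative):
import kotlinx.coroutines.CoroutineDispatcher
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.channelFlow
import kotlinx.coroutines.flow.flowOn

class NumbersProvider(private val ioDispatcher: CoroutineDispatcher = Dispatchers.IO) {
    val numbersFlow: Flow<Int> = channelFlow {
        repeat(10) { i ->
            delay(100)
            send(i)
        }
    }.flowOn(ioDispatcher)
}

// In a test: NumbersProvider(testDispatcher)  // e.g. the dispatcher owned by MainCoroutineRule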
I tried another solution and created a singleton which makes all the standard dispatchers available in the production code and which I can configure for tests (by setting every dispatcher to the test one). I like how the resulting source code looks more - there is no additional code in ViewModels and data providers - but there is this singleton, and everyone keeps shouting 'Don't use singletons'.
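The singleton variant I describe above is roughly this (again just a sketch; the names are mine):
import kotlinx.coroutines.CoroutineDispatcher
import kotlinx.coroutines.Dispatchers

object DispatcherProvider {
    var main: CoroutineDispatcher = Dispatchers.Main
    var io: CoroutineDispatcher = Dispatchers.IO
    var default: CoroutineDispatcher = Dispatchers.Default

    // In tests, point everything at the test dispatcher
    fun overrideAll(dispatcher: CoroutineDispatcher) {
        main = dispatcher
        io = dispatcher
        default = dispatcher
    }
}
Production code would then refer to DispatcherProvider.io instead of Dispatchers.IO.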
Is there any better option to correctly test code with coroutines?

Access ApplicationCall in object without propagation

Is there a thread-safe method in Ktor where it is possible to statically access the current ApplicationCall? I am trying to get the following simple example to work;
object Main {
fun start() {
val server = embeddedServer(Jetty, 8081) {
intercept(ApplicationCallPipeline.Call) {
// START: this will be more dynamic in the future, we don't want to pass ApplicationCall
Addon.processRequest()
// END: this will be more dynamic in the future, we don't want to pass ApplicationCall
call.respondText(output, ContentType.Text.Html, HttpStatusCode.OK)
return@intercept finish()
}
}
server.start(wait = true)
}
}
fun main(args: Array<String>) {
Main.start();
}
object Addon {
fun processRequest() {
val call = RequestUtils.getCurrentApplicationCall()
// processing of call.request.queryParameters
// ...
}
}
object RequestUtils {
fun getCurrentApplicationCall(): ApplicationCall {
// Here is where I am getting lost..
return null
}
}
I would like to be able to get the ApplicationCall for the current context to be available statically from the RequestUtils so that I can access information about the request anywhere. This of course needs to scale to be able to handle multiple requests at the same time.
I have done some experiments with dependency injection and ThreadLocal, but with no success.
Well, the application call is passed to a coroutine, so it's really dangerous to try and get it "statically", because all requests are treated in a concurrent context.
The official Kotlin documentation talks about thread-locals in the context of coroutine execution. It uses the concept of CoroutineContext to restore thread-local values in a specific/custom coroutine context.
However, if you are able to design a fully asynchronous API, you will be able to bypass thread-locals by directly creating a custom CoroutineContext, embedding the request call.
EDIT: I've updated my example code to test 2 flavors:
async endpoint: a solution fully based on coroutine contexts and suspend functions
blocking endpoint: uses a thread-local to store the application call, as described in the Kotlin docs
import io.ktor.server.engine.embeddedServer
import io.ktor.server.jetty.Jetty
import io.ktor.application.*
import io.ktor.http.ContentType
import io.ktor.http.HttpStatusCode
import io.ktor.response.respondText
import io.ktor.routing.get
import io.ktor.routing.routing
import kotlinx.coroutines.asContextElement
import kotlinx.coroutines.launch
import kotlin.coroutines.AbstractCoroutineContextElement
import kotlin.coroutines.CoroutineContext
import kotlin.coroutines.coroutineContext
/**
* Thread local in which you'll inject application call.
*/
private val localCall : ThreadLocal<ApplicationCall> = ThreadLocal();
object Main {
fun start() {
val server = embeddedServer(Jetty, 8081) {
routing {
// Solution requiring full coroutine/ supendable execution.
get("/async") {
// Ktor will launch this block of code in a coroutine, so you can launch a child coroutine with
// an overloaded context providing the needed information.
launch(coroutineContext + ApplicationCallContext(call)) {
PrintQuery.processAsync()
}
}
// Solution based on Thread-Local, not requiring suspending functions
get("/blocking") {
launch (coroutineContext + localCall.asContextElement(value = call)) {
PrintQuery.processBlocking()
}
}
}
intercept(ApplicationCallPipeline.ApplicationPhase.Call) {
call.respondText("Hé ho", ContentType.Text.Plain, HttpStatusCode.OK)
}
}
server.start(wait = true)
}
}
fun main() {
Main.start();
}
interface AsyncAddon {
/**
* Asynchronicity propagates in order to properly access coroutine execution information
*/
suspend fun processAsync();
}
interface BlockingAddon {
fun processBlocking();
}
object PrintQuery : AsyncAddon, BlockingAddon {
override suspend fun processAsync() = processRequest("async", fetchCurrentCallFromCoroutineContext())
override fun processBlocking() = processRequest("blocking", fetchCurrentCallFromThreadLocal())
private fun processRequest(prefix : String, call : ApplicationCall?) {
println("$prefix -> Query parameter: ${call?.parameters?.get("q") ?: "NONE"}")
}
}
/**
* Custom coroutine context element that allows providing information about the request execution.
*/
private class ApplicationCallContext(val call : ApplicationCall) : AbstractCoroutineContextElement(Key) {
companion object Key : CoroutineContext.Key<ApplicationCallContext>
}
/**
* This is your RequestUtils rewritten as a top-level suspend function. It has to be suspending;
* otherwise, you won't be able to access coroutineContext.
*/
suspend fun fetchCurrentCallFromCoroutineContext(): ApplicationCall? {
// Here is where I am getting lost..
return coroutineContext.get(ApplicationCallContext.Key)?.call
}
fun fetchCurrentCallFromThreadLocal() : ApplicationCall? {
return localCall.get()
}
You can test it in your browser:
http://localhost:8081/blocking?q=test1
http://localhost:8081/blocking?q=test2
http://localhost:8081/async?q=test3
server log output:
blocking -> Query parameter: test1
blocking -> Query parameter: test2
async -> Query parameter: test3
The key mechanism you want to use for this is the CoroutineContext. This is the place where you can set key-value pairs to be used in any child coroutine or suspending function call.
I will try to lay out an example.
First, let us define a CoroutineContextElement that will let us add an ApplicationCall to the CoroutineContext.
class ApplicationCallElement(var call: ApplicationCall?) : AbstractCoroutineContextElement(ApplicationCallElement) {
companion object Key : CoroutineContext.Key<ApplicationCallElement>
}
Now we can define some helpers that will add the ApplicationCall on one of our routes. (This could be done as some sort of Ktor plugin that listens to the pipeline, but I don't want to add too much noise here.)
suspend fun PipelineContext<Unit, ApplicationCall>.withCall(
bodyOfCall: suspend PipelineContext<Unit, ApplicationCall>.() -> Unit
) {
val pipeline = this
val appCallContext = buildAppCallContext(this.call)
withContext(appCallContext) {
pipeline.bodyOfCall()
}
}
internal suspend fun buildAppCallContext(call: ApplicationCall): CoroutineContext {
var context = coroutineContext
val callElement = ApplicationCallElement(call)
context = context.plus(callElement)
return context
}
And then we can use it all together like in this test case below where we are able to get the call from a nested suspending function:
suspend fun getSomethingFromCall(): String {
val call = coroutineContext[ApplicationCallElement.Key]?.call ?: throw Exception("Element not set")
return call.parameters["key"] ?: throw Exception("Parameter not set")
}
fun Application.myApp() {
routing {
route("/foo") {
get {
withCall {
call.respondText(getSomethingFromCall())
}
}
}
}
}
class ApplicationCallTest {
@Test
fun `we can get the application call in a nested function`() {
withTestApplication({ myApp() }) {
with(handleRequest(HttpMethod.Get, "/foo?key=bar")) {
assertEquals(HttpStatusCode.OK, response.status())
assertEquals("bar", response.content)
}
}
}
}

Vert.x sync config retrieval behaves unexpectedly

In my multi-verticle application, I would like to load the config once and then inject the resulting JsonObject into each verticle using koin. The problem is that the ConfigRetriever doesn't really behave the way I would expect it to. Consider the following example:
class MainVerticle : AbstractVerticle() {
override fun start() {
val retriever = ConfigRetriever.create(vertx)
val config = ConfigRetriever.getConfigAsFuture(retriever).result()
println(config)
}
}
Intuitively I would expect this to load the config file under /resources/conf/config.json and print all the key/value pairs. Instead of doing that, it prints null. However, if I change the third line to:
val retriever = ConfigRetriever.create(Vertx.vertx())
then the JsonObject gets populated with the properties of my config.json file.
The docs of Future#result state the following
The result of the operation. This will be null if the operation failed.
So the operation succeeds but no config is loaded?
I don't really understand why I have to create a new vertx instance for the config to be loaded properly. What am I missing here?
Found a solution: there is a method which returns a cached version of the config: https://vertx.io/docs/vertx-config/java/#_retrieving_the_last_retrieved_configuration
So all I had to do was load the config once in a @Provides method:
@Provides
@Singleton
public ConfigRetriever config(Vertx vertx) {
final ConfigRetriever retriever = ConfigRetriever.create(vertx);
// Retrieving the config here is not just to print it,
// but also to create a cached config which can be later used in Configuration
try {
final var cfg = retriever.getConfig().toCompletionStage().toCompletableFuture().get();
cfg.stream()
.filter(entry -> entry.getKey().startsWith("backend") && !entry.getKey().contains("pass"))
.forEach(entry -> log.info("{} = {}", entry.getKey(), entry.getValue()));
return retriever;
} catch (Exception e) {
throw new RuntimeException(e);
}
}
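Once the retriever bean exists, other components that receive it can read the cached snapshot synchronously via getCachedConfig(); a small Kotlin sketch (the verticle and the config key are made up for illustration):
import io.vertx.config.ConfigRetriever
import io.vertx.core.AbstractVerticle

class BackendVerticle(private val retriever: ConfigRetriever) : AbstractVerticle() {
    override fun start() {
        // getCachedConfig() returns the configuration that was loaded in the @Provides method above
        val config = retriever.cachedConfig
        println(config.getString("backend.url")) // hypothetical key
    }
}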

How do I use custom configuration in Ktor?

I'm digging the built-in configuration support, and want to use it (instead of just rolling my own alongside Ktor's), but I'm having a hard time figuring out how to do it in a clean way. I've got this, and it's working, but it's really ugly and I feel like there has to be a better way:
val myBatisConfig = MyBatisConfig(
environment.config.property("mybatis.url").getString(),
environment.config.property("mybatis.driver").getString(),
environment.config.property("mybatis.poolSize").getString().toInt())
installKoin(listOf(mybatisModule(myBatisConfig), appModule), logger = SLF4JLogger())
Thanks for any help!
Adding to the existing accepted answer, an implementation using ConfigFactory.load() could look like this (without extra libraries):
object Config {
    @KtorExperimentalAPI
    val config = HoconApplicationConfig(ConfigFactory.load())

    @KtorExperimentalAPI
    fun getProperty(key: String): String? = config.propertyOrNull(key)?.getString()

    @KtorExperimentalAPI
    fun requireProperty(key: String): String = getProperty(key)
        ?: throw IllegalStateException("Missing property $key")
}
So, the config setup would become:
val myBatisConfig = MyBatisConfig(
requireProperty("mybatis.url"),
requireProperty("mybatis.driver"),
requireProperty("mybatis.poolSize").toInt())
Okay, I think I have a good, clean way of doing this now. The trick is to not bother going through the framework itself. You can get your entire configuration, from those cool HOCON files, extremely easily:
val config = ConfigFactory.load()
And then you can walk the tree yourself and build your objects, or use a project called config4k which will build your model classes for you. So, my setup above has added more configuration, but gotten much simpler and more maintainable:
installKoin(listOf(
mybatisModule(config.extract("mybatis")),
zendeskModule(config.extract("zendesk")),
appModule),
logger = SLF4JLogger())
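For reference, the extract calls above map straight onto plain data classes; a rough sketch using the same property names as the MyBatisConfig from the question:
import com.typesafe.config.ConfigFactory
import io.github.config4k.extract

data class MyBatisConfig(val url: String, val driver: String, val poolSize: Int)

val config = ConfigFactory.load()
val myBatisConfig: MyBatisConfig = config.extract("mybatis")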
Hope someone finds this useful!
You could also try this solution:
class MyService(val url: String)
fun KoinApplication.loadMyKoins(environment: ApplicationEnvironment): KoinApplication {
val myFirstModule = module {
single { MyService(environment.config.property("mybatis.url").getString()) }
}
val mySecondModule = module {}
return modules(listOf(myFirstModule, mySecondModule))
}
fun Application.main() {
install(DefaultHeaders)
install(Koin) {
loadMyKoins(environment)
SLF4JLogger()
}
routing {
val service by inject<MyService>()
get("/") {
call.respond("Hello world! This is my service url: ${service.url}")
}
}
}
fun main(args: Array<String>) {
embeddedServer(Netty, commandLineEnvironment(args)).start()
}

How to log requests in the Ktor HTTP client?

I got something like this:
private val client = HttpClient {
install(JsonFeature) {
serializer = GsonSerializer()
}
install(ExpectSuccess)
}
and make requests like this:
private fun HttpRequestBuilder.apiUrl(path: String, userId: String? = null) {
header(HttpHeaders.CacheControl, "no-cache")
url {
takeFrom(endPoint)
encodedPath = path
}
}
but I need to check the request and response bodies. Is there any way to do that, in the console or in a file?
You can achieve this with the Logging feature.
First add the dependency:
implementation "io.ktor:ktor-client-logging-native:$ktor_version"
Then install the feature:
private val client = HttpClient {
install(Logging) {
logger = Logger.DEFAULT
level = LogLevel.ALL
}
}
Bonus:
If you need to have multiple HttpClient instances throughout your application and you want to reuse some of the configuration, then you can create an extension function and add the common logic in there. For example:
fun HttpClientConfig<*>.default() {
install(Logging) {
logger = Logger.DEFAULT
level = LogLevel.ALL
}
// Add all the common configuration here.
}
And then initialize your HttpClient like this:
private val client = HttpClient {
default()
}
I ran into this as well. I switched to using the Ktor OkHttp client as I'm familiar with the logging mechanism there.
Update your pom.xml or build.gradle to include that client (copy/paste from the Ktor site) and also add the OkHttp Logging Interceptor (again, copy/paste from that site). The current version is 3.12.0.
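If it helps, the Gradle side of that might look roughly like this (Kotlin DSL; the versions are illustrative, check the Ktor and OkHttp sites for the current ones):
// build.gradle.kts (versions are illustrative)
val ktorVersion = "1.6.8"
dependencies {
    implementation("io.ktor:ktor-client-okhttp:$ktorVersion")
    implementation("com.squareup.okhttp3:logging-interceptor:3.12.0")
}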
Now configure the client with
val client = HttpClient(OkHttp) {
engine {
val loggingInterceptor = HttpLoggingInterceptor()
loggingInterceptor.level = Level.BODY
addInterceptor(loggingInterceptor)
}
}
Regardless of which client you use or framework you are on, you can implement your own logger like so:
private val client = HttpClient {
// Other configurations...
install(Logging) {
logger = CustomHttpLogger()
level = LogLevel.BODY
}
}
Where CustomHttpLogger is any class that implements the ktor Logger interface, like so:
import io.ktor.client.features.logging.Logger
class CustomHttpLogger(): Logger {
override fun log(message: String) {
Log.d("loggerTag", message) // Or whatever logging system you want here
}
}
You can read more about the Logger interface in the documentation here or in the source code here
It looks like we should handle the response in HttpReceivePipeline. We could clone the original response and use it for logging purposes:
scope.receivePipeline.intercept(HttpReceivePipeline.Before) { response ->
val (loggingContent, responseContent) = response.content.split(scope)
launch {
val callForLog = DelegatedCall(loggingContent, context, scope, shouldClose = false)
....
}
...
}
An example implementation can be found here: https://github.com/ktorio/ktor/blob/00369bf3e41e91d366279fce57b8f4c97f927fd4/ktor-client/ktor-client-core/src/io/ktor/client/features/observer/ResponseObserver.kt
It will be available in the next minor release as a client feature.
By the way, we could implement the same scheme for the request.
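A request-side version of the same idea might look roughly like this (same pre-release client API as the snippet above, so treat it purely as a sketch):
scope.requestPipeline.intercept(HttpRequestPipeline.Before) {
    // 'context' here is the HttpRequestBuilder for the outgoing call
    println("Request: ${context.method.value} ${context.url.buildString()}")
}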
A custom structured log can be created with the HttpSend plugin.
Ktor 2.x:
client.plugin(HttpSend).intercept { request ->
val call = execute(request)
val response = call.response
val durationMillis = response.responseTime.timestamp - response.requestTime.timestamp
Log.i("NETWORK", "[${response.status.value}] ${request.url.build()} ($durationMillis ms)")
call
}
Ktor 1.x:
client.config {
install(HttpSend) {
intercept { call, _ ->
val request = call.request
val response = call.response
val durationMillis = response.responseTime.timestamp - response.requestTime.timestamp
Log.i("NETWORK", "[${response.status.value}] ${request.url} ($durationMillis ms)")
call
}
}
}
Check out Kotlin Logging (https://github.com/MicroUtils/kotlin-logging); it is used by a lot of open source frameworks and takes care of all the pretty printing.
You can use it simply like this:
import mu.KotlinLogging
private val logger = KotlinLogging.logger { }
logger.info { "MYLOGGER INFO" }
logger.warn { "MYLOGGER WARNING" }
logger.error { "MYLOGGER ERROR" }
This will print the messages on the console.