My application (a typical REST server that calls other REST services internally) has two main classes that perform the bootstrapping procedure.
There is the Application.kt class, which is supposed to configure the Vert.x instance itself and to register certain modules (the Jackson Kotlin integration, for example):
class Application(
private val profileSetting: String? = System.getenv("ACTIVE_PROFILES"),
private val logger: Logger = LoggerFactory.getLogger(Application::class.java)!!
) {
fun bootstrap() {
val profiles = activeProfiles()
val meterRegistry = configureMeters()
val vertx = bootstrapVertx(meterRegistry)
vertx.deployVerticle(ApplicationBootstrapVerticle(profiles)) { startup ->
if (startup.succeeded()) {
logger.info("Application startup finished")
} else {
logger.error("Application startup failed", startup.cause())
vertx.close()
}
}
}
}
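The helpers referenced above (activeProfiles, configureMeters, bootstrapVertx) are not shown; for the Jackson Kotlin integration mentioned earlier, a minimal sketch of what bootstrapVertx might look like is below (assuming Vert.x 3 with vertx-micrometer-metrics and jackson-module-kotlin on the classpath; names and options are illustrative only):
// Hypothetical sketch, not the actual implementation.
private fun bootstrapVertx(meterRegistry: MeterRegistry): Vertx {
    // Jackson Kotlin integration: register the module on Vert.x's shared ObjectMapper (Vert.x 3.x)
    Json.mapper.registerModule(KotlinModule())
    return Vertx.vertx(
        VertxOptions().setMetricsOptions(
            MicrometerMetricsOptions()
                .setMicrometerRegistry(meterRegistry)
                .setEnabled(true)
        )
    )
}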
In addition there is an ApplicationBootstrapVerticle.kt class that is supposed to deploy the different verticles in a defined order, some of them in sequence, some of them in parallel:
class ApplicationBootstrapVerticle(
private val profiles: List<String>,
private val logger: Logger = LoggerFactory.getLogger(ApplicationBootstrapVerticle::class.java)
) : AbstractVerticle() {
override fun start(startFuture: Future<Void>) {
initializeApplicationConfig().compose {
logger.info("Application configuration initialized")
initializeScheduledJobs()
}.compose {
logger.info("Scheduled jobs initialized")
initializeRestEndpoints()
}.compose {
logger.info("Http server started")
startFuture
}.setHandler { ar ->
if (ar.succeeded()) {
startFuture.complete()
} else {
startFuture.fail(ar.cause())
}
}
}
private fun initializeApplicationConfig(): Future<String> {
return Future.future<String>().also {
vertx.deployVerticle(
ApplicationConfigVerticle(profiles),
it.completer()
)
}
}
private fun initializeScheduledJobs(): CompositeFuture {
val stationsJob = Future.future<String>()
val capabilitiesJob = Future.future<String>()
return CompositeFuture.all(stationsJob, capabilitiesJob).also {
vertx.deployVerticle(
StationQualitiesVerticle(),
stationsJob.completer()
)
vertx.deployVerticle(
VideoCapabilitiesVerticle(),
capabilitiesJob.completer()
)
}
}
private fun initializeRestEndpoints(): Future<String> {
return Future.future<String>().also {
vertx.deployVerticle(
RestEndpointVerticle(dispatcherFactory = RouteDispatcherFactory(vertx)),
it.completer()
)
}
}
}
I am not sure if this is the intended way to bootstrap an application, if there is one. More importantly, I am not sure if I understand the Future.compose mechanics correctly.
The application starts up successfully and I see all desired log messages except the
Application startup finished
message. Also, the following code is never called in case of success:
}.setHandler { ar ->
if (ar.succeeded()) {
startFuture.complete()
} else {
startFuture.fail(ar.cause())
}
}
In case of a failure though, for example when my application configuration files (YAML) cannot be parsed because there is an unknown field in the destination entity, the log message
Application startup failed
appears in the logs and also the code above is invoked.
I am curious what is wrong with my composed futures chain. I thought the handler would be called after the previous futures succeeded or one of them failed, but apparently it is only invoked in the failure case.
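For illustration, a minimal sketch of the compose mechanics in question (Vert.x 3, throwaway futures): the composed future only completes once the future returned from the lambda completes, so returning a future that nobody ever completes (like startFuture in the last compose step above) leaves the success handler hanging.
// Minimal sketch (Vert.x 3): the future returned by compose completes only once
// the future returned from the lambda itself completes.
val first = Future.future<String>()
val chained = first.compose { value ->
    val next = Future.future<String>()
    // `chained` will not complete until `next` is completed somewhere
    next
}
chained.setHandler { ar -> println("runs only after `next` completes: ${ar.succeeded()}") }
first.complete("ok") // completes `first`; `chained` is still pending because `next` never completes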
Update
I suppose an invocation of startFuture.complete() was missing. After adapting the start method, it finally worked:
override fun start(startFuture: Future<Void>) {
initializeApplicationConfig().compose {
logger.info("Application configuration initialized")
initializeScheduledJobs()
}.compose {
logger.info("Scheduled jobs initialized")
initializeRestEndpoints()
}.compose {
logger.info("Http server started")
startFuture.complete()
startFuture
}.setHandler(
startFuture.completer()
)
}
I am not sure, though, whether this is the intended way to handle this future chain.
The solution that worked for me looks like this:
override fun start(startFuture: Future<Void>) {
initializeApplicationConfig().compose {
logger.info("Application configuration initialized")
initializeScheduledJobs()
}.compose {
logger.info("Scheduled jobs initialized")
initializeRestEndpoints()
}.setHandler { ar ->
if(ar.succeeded()) {
logger.info("Http server started")
startFuture.complete()
} else {
startFuture.fail(ar.cause())
}
}
}
Related
In my Spring project (WebFlux/Kotlin Coroutines/Java 17), I defined a bean like this:
@Bean
fun sftpInboundFlow(): IntegrationFlow {
return IntegrationFlows
.from(
Sftp.inboundAdapter(sftpSessionFactory())
.preserveTimestamp(true)
.deleteRemoteFiles(true) // delete files after transfer is done successfully
.remoteDirectory(sftpProperties.remoteDirectory)
.regexFilter(".*\\.csv$")
// local settings
.localFilenameExpression("#this.toUpperCase() + '.csv'")
.autoCreateLocalDirectory(true)
.localDirectory(File("./sftp-inbound"))
) { e: SourcePollingChannelAdapterSpec ->
e.id("sftpInboundAdapter")
.autoStartup(true)
.poller(Pollers.fixedDelay(5000))
}
/* .handle { m: Message<*> ->
run {
val file = m.payload as File
log.debug("payload: ${file}")
applicationEventPublisher.publishEvent(ReceivedEvent(file))
}
}*/
.transform<File, DownloadedEvent> { DownloadedEvent(it) }
.handle(downloadedEventMessageHandler())
.get()
}
@Bean
fun downloadedEventMessageHandler(): ApplicationEventPublishingMessageHandler {
val handler = ApplicationEventPublishingMessageHandler()
handler.setPublishPayload(true)
return handler
}
And I wrote a test asserting the application event:
@OptIn(ExperimentalCoroutinesApi::class)
@SpringBootTest(
classes = [SftpIntegrationFlowsTestWithEmbeddedSftpServer.TestConfig::class]
)
@TestPropertySource(
properties = [
"sftp.hostname=localhost",
"sftp.port=2222",
"sftp.user=user",
"sftp.privateKey=classpath:META-INF/keys/sftp_rsa",
"sftp.privateKeyPassphrase=password",
"sftp.remoteDirectory=${SftpTestUtils.sftpTestDataDir}",
"logging.level.org.springframework.integration.sftp=TRACE",
"logging.level.org.springframework.integration.file=TRACE",
"logging.level.com.jcraft.jsch=TRACE"
]
)
@RecordApplicationEvents
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class SftpIntegrationFlowsTestWithEmbeddedSftpServer {
companion object {
private val log = LoggerFactory.getLogger(SftpIntegrationFlowsTestWithEmbeddedSftpServer::class.java)
}
@Configuration
@Import(
value = [
SftpIntegrationFlows::class,
IntegrationConfig::class
]
)
@ImportAutoConfiguration(
value = [
IntegrationAutoConfiguration::class
]
)
@EnableConfigurationProperties(value = [SftpProperties::class])
class TestConfig {
@Bean
fun embeddedSftpServer(sftpProperties: SftpProperties): EmbeddedSftpServer {
val sftpServer = EmbeddedSftpServer()
sftpServer.setPort(sftpProperties.port ?: 22)
//sftpServer.setHomeFolder()
return sftpServer
}
@Bean
fun remoteFileTemplate(sessionFactory: SessionFactory<LsEntry>) = RemoteFileTemplate(sessionFactory)
}
@Autowired
lateinit var uploadGateway: UploadGateway
@Autowired
lateinit var embeddedSftpServer: EmbeddedSftpServer
@Autowired
lateinit var template: RemoteFileTemplate<LsEntry>
@Autowired
lateinit var applicationEvents: ApplicationEvents
@BeforeAll
fun setup() {
embeddedSftpServer.start()
}
@AfterAll
fun teardown() {
embeddedSftpServer.stop()
}
@Test
//@Disabled("application events can not be tracked in this integration tests")
fun `download the processed ach batch files to local directory`() = runTest {
val testFilename = "foo.csv"
SftpTestUtils.createTestFiles(template, testFilename)
eventually(10.seconds) {
// applicationEvents.stream().forEach{ log.debug("published event:$it")}
applicationEvents.stream(DownloadedEvent::class.java).count() shouldBe 1
SftpTestUtils.fileExists(template, testFilename) shouldBe false
SftpTestUtils.cleanUp(template)
}
}
}
The test cannot catch the application events via ApplicationEvents.
I tried replacing the ApplicationEventPublishingMessageHandler with a constructor-autowired ApplicationEventPublisher; that does not work as expected either.
The complete test source code is here: SftpIntegrationFlowsTestWithEmbeddedSftpServer
Update: ApplicationEvents does not work across async threads. Whether I apply @Async on the listener method or invoke applicationEvents from an async thread, the recorded application events do not behave as expected.
I'm not familiar with that @RecordApplicationEvents, so I would register an @EventListener(File payload) in the supporting @Configuration with some async barrier to wait for an event from that scheduled task.
You can turn on DEBUG logging for org.springframework.integration and Message History to see in the logs how your message travels, if there is one at all according to your SFTP state.
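A rough sketch of that suggestion, registered alongside the existing TestConfig (hypothetical names; a BlockingQueue serves as the async barrier because the event is published from the poller thread):
// Hypothetical listener bean: collects DownloadedEvent instances into a queue
// that the test can block on, independent of the publishing thread.
@Configuration
class DownloadedEventBarrierConfig {

    @Bean
    fun downloadedEvents(): BlockingQueue<DownloadedEvent> = LinkedBlockingQueue()

    @Bean
    fun downloadedEventListener(events: BlockingQueue<DownloadedEvent>) = DownloadedEventListener(events)

    class DownloadedEventListener(private val events: BlockingQueue<DownloadedEvent>) {
        @EventListener
        fun onDownloaded(event: DownloadedEvent) {
            events.offer(event)
        }
    }
}

// In the test, wait on the barrier instead of ApplicationEvents, e.g.:
// downloadedEvents.poll(10, TimeUnit.SECONDS) shouldNotBe null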
Is there any way to log the request and response bodies of Ktor server communication?
The built-in CallLogging feature only logs the metadata of a call. I tried writing my own logging feature, following this example: https://github.com/Koriit/ktor-logging/blob/master/src/main/kotlin/korrit/kotlin/ktor/features/logging/Logging.kt
class Logging(private val logger: Logger) {
class Configuration {
var logger: Logger = LoggerFactory.getLogger(Logging::class.java)
}
private suspend fun logRequest(call: ApplicationCall) {
logger.info(StringBuilder().apply {
appendLine("Received request:")
val requestURI = call.request.path()
appendLine(call.request.origin.run { "${method.value} $scheme://$host:$port$requestURI $version" })
call.request.headers.forEach { header, values ->
appendLine("$header: ${values.firstOrNull()}")
}
try {
appendLine()
appendLine(String(call.receive<ByteArray>()))
} catch (e: RequestAlreadyConsumedException) {
logger.error("Logging payloads requires DoubleReceive feature to be installed with receiveEntireContent=true", e)
}
}.toString())
}
private suspend fun logResponse(call: ApplicationCall, subject: Any) {
logger.info(StringBuilder().apply {
appendLine("Sent response:")
appendLine("${call.request.httpVersion} ${call.response.status()}")
call.response.headers.allValues().forEach { header, values ->
appendLine("$header: ${values.firstOrNull()}")
}
when (subject) {
is TextContent -> appendLine(subject.text)
is OutputStreamContent -> appendLine() // ToDo: How to get response body??
else -> appendLine("unknown body type")
}
}.toString())
}
/**
* Feature installation.
*/
fun install(pipeline: Application) {
pipeline.intercept(ApplicationCallPipeline.Monitoring) {
logRequest(call)
proceedWith(subject)
}
pipeline.sendPipeline.addPhase(responseLoggingPhase)
pipeline.sendPipeline.intercept(responseLoggingPhase) {
logResponse(call, subject)
}
}
companion object Feature : ApplicationFeature<Application, Configuration, Logging> {
override val key = AttributeKey<Logging>("Logging Feature")
val responseLoggingPhase = PipelinePhase("ResponseLogging")
override fun install(pipeline: Application, configure: Configuration.() -> Unit): Logging {
val configuration = Configuration().apply(configure)
return Logging(configuration.logger).apply { install(pipeline) }
}
}
}
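For context, wiring this feature up might look roughly like this (a sketch assuming Ktor 1.x; DoubleReceive is needed so the handler can still receive the body after logging):
// Sketch: installing the custom Logging feature together with DoubleReceive (Ktor 1.x).
fun Application.installRequestResponseLogging() {
    install(DoubleReceive) {
        receiveEntireContent = true // allows call.receive<ByteArray>() in logRequest plus a second receive in the handler
    }
    install(Logging) {
        logger = LoggerFactory.getLogger("http-log")
    }
}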
It works fine for logging the request body using the DoubleReceive plugin. And if the response is plain text, I can log it, because the subject in the sendPipeline interception will be of type TextContent or, as in the example, ByteArrayContent.
But in my case I am responding with a data class instance via Jackson ContentNegotiation. In this case the subject is of type OutputStreamContent and I see no option to get the serialized body from it.
Any idea how to log the serialized response JSON in my logging feature? Or maybe there is another option using the Ktor server? I mean, I could serialize my object manually and respond with plain text, but that's an ugly way to do it.
I'm not sure if this is the best way to do it, but here it is:
public fun ApplicationResponse.toLogString(subject: Any): String = when(subject) {
is TextContent -> subject.text
is OutputStreamContent -> {
val channel = ByteChannel(true)
runBlocking {
(subject as OutputStreamContent).writeTo(channel)
val buffer = StringBuilder()
while (!channel.isClosedForRead) {
channel.readUTF8LineTo(buffer)
}
buffer.toString()
}
}
else -> String()
}
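For completeness, this is how it might be plugged into the response logging phase of the feature above (a sketch; runBlocking inside an interceptor is a pragmatic shortcut rather than a recommendation):
// Sketch: using toLogString inside the send pipeline interception shown earlier.
pipeline.sendPipeline.intercept(responseLoggingPhase) {
    logger.info("Sent response ${call.response.status()}:\n${call.response.toLogString(subject)}")
}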
I am getting the compilation error below while adding a servlet mapping. I am not sure what is wrong with the code below when adding graphqlServlet to the handler.
Compilation error: None of the following functions can be called with the arguments supplied.
(Servlet!) defined in org.eclipse.jetty.servlet.ServletHolder
(Class<out Servlet!>!) defined in org.eclipse.jetty.servlet.ServletHolder
(Source!) defined in org.eclipse.jetty.servlet.ServletHolder
GraphQLServlet.kt
class GraphQLServlet(schemaBuilder: SchemaBuilder) : SimpleGraphQLHttpServlet() {
private val schema = schemaBuilder.buildSchema()
public override fun doPost(request: HttpServletRequest?, response: HttpServletResponse?) {
super.doPost(request, response)
}
public override fun getConfiguration(): GraphQLConfiguration {
return GraphQLConfiguration.with(schema)
.with(GraphQLQueryInvoker.newBuilder().build())
.build()
}
}
Jetty.kt
class API {
fun start() {
val handler = createHandler()
Server(8080).apply {
setHandler(handler)
start()
}
}
private fun createHandler(): WebAppContext {
val schemaBuilder = MyApiSchemaBuilder()
val graphqlServlet: Servlet = GraphQLServlet(schemaBuilder)
val handler = ServletHandler()
return WebAppContext().apply {
setResourceBase("/")
handler.addServletWithMapping(ServletHolder(graphqlServlet), "/graphql")
}
}
}
The error is reported on this line:
handler.addServletWithMapping(ServletHolder(graphqlServlet), "/graphql")
I was able to figure it out: I added jetty-servlet to my dependencies, which solved the problem.
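For reference, that dependency in Gradle Kotlin DSL (the version below is only an example):
dependencies {
    // provides org.eclipse.jetty.servlet.ServletHolder and ServletHandler
    implementation("org.eclipse.jetty:jetty-servlet:9.4.44.v20210927")
}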
Currently, the Ktor client logging implementation looks like the code below; it works as intended, but it is not what I want.
public class Logging(
public val logger: Logger,
public var level: LogLevel,
public var filters: List<(HttpRequestBuilder) -> Boolean> = emptyList()
)
....
private suspend fun logRequest(request: HttpRequestBuilder): OutgoingContent? {
if (level.info) {
logger.log("REQUEST: ${Url(request.url)}")
logger.log("METHOD: ${request.method}")
}
val content = request.body as OutgoingContent
if (level.headers) {
logger.log("COMMON HEADERS")
logHeaders(request.headers.entries())
logger.log("CONTENT HEADERS")
logHeaders(content.headers.entries())
}
return if (level.body) {
logRequestBody(content)
} else null
}
The above makes the logs a nightmare to read because every value is logged on its own line. Since I'm a beginner in Kotlin and Ktor, I'd love to know how to change this behaviour. Because Kotlin classes are final unless explicitly opened, I don't know how to approach modifying the behaviour of the logRequest function. What I ideally want to achieve is something like the following, for example:
....
private suspend fun logRequest(request: HttpRequestBuilder): OutgoingContent? {
...
if (level.body) {
val content = request.body as OutgoingContent
return logger.log(value("url", Url(request.url)),
value("method", request.method),
value("body", content))
}
Any help would be appreciated.
There is no way to override a private method in a non-open class, but if you just want your logging to work differently, you're better off with a custom interceptor at the same stage of the pipeline:
val client = HttpClient(CIO) {
install("RequestLogging") {
sendPipeline.intercept(HttpSendPipeline.Monitoring) {
logger.info(
"Request: {} {} {} {}",
context.method,
Url(context.url),
context.headers.entries(),
context.body
)
}
}
}
runBlocking {
client.get<String>("https://google.com")
}
This will produce the logging you want. Of course, to properly log POST you will need to do some extra work.
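A rough sketch of that extra work, assuming the same logger as above and that the body has already been transformed into OutgoingContent at this stage (only the simple content types are handled here):
val client = HttpClient(CIO) {
    install("RequestLogging") {
        sendPipeline.intercept(HttpSendPipeline.Monitoring) {
            // ContentNegotiation typically produces TextContent for JSON bodies.
            val body = when (val content = context.body) {
                is TextContent -> content.text
                is OutgoingContent.ByteArrayContent -> String(content.bytes())
                else -> content.toString() // fallback: raw object or streaming content
            }
            logger.info("Request: {} {} body={}", context.method, Url(context.url), body)
        }
    }
}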
Maybe this will be useful for someone:
HttpClient() {
install("RequestLogging") {
responsePipeline.intercept(HttpResponsePipeline.After) {
val request = context.request
val response = context.response
kermit.d(tag = "Network") {
"${request.method} ${request.url} ${response.status}"
}
GlobalScope.launch(Dispatchers.Unconfined) {
val responseBody =
response.content.tryReadText(response.contentType()?.charset() ?: Charsets.UTF_8)
?: "[response body omitted]"
kermit.d(tag = "Network") {
"${request.method} ${request.url} ${response.status}\nBODY START" +
"\n$responseBody" +
"\nBODY END"
}
}
}
}
}
You also need to add a method from the Ktor Logger.kt file to the class containing your HttpClient:
internal suspend inline fun ByteReadChannel.tryReadText(charset: Charset): String? = try {
readRemaining().readText(charset = charset)
} catch (cause: Throwable) {
null
}
I made a Kotlin/Ktor application. What I want to achieve is modularity: any pipeline inside the application can be removed from the source code without breaking any functionality. So I decided to move the websocket implementation into a separate class,
but I faced an issue where the coroutine inside the lambda expression terminates immediately.
link-github issue
Can someone enlighten me about the coroutine setup here, and how I can keep this modular without running into this kind of issue?
Working Ktor websocket:
fun Application.socketModule() = runBlocking {
// other declarations
......
routing {
val sessionService = SocketSessionService()
webSocket("/home") {
val chatSession = call.sessions.get<ChatSession>()
println("request session: $chatSession")
if (chatSession == null) {
close(CloseReason(CloseReason.Codes.VIOLATED_POLICY, "empty Session"))
return#webSocket
}
send(Frame.Text("connected to server"))
sessionService.addLiveSocket(chatSession.id, this)
sessionService.checkLiveSocket()
}
thread(start = true, name = "socket-monitor") {
launch {
sessionService.checkLiveSocket()
}
}
}
}
Kotlin/Ktor auto-closing websocket
The code below closes the socket automatically:
Socket Module
class WebSocketServer {
fun createWebSocket(root: String, routing: Routing) {
println("creating web socket server")
routing.installSocketRoute(root)
}
private fun Routing.installSocketRoute(root: String) {
val base = "/message/so"
val socketsWeb = SocketSessionService()
webSocket("$root$base/{type}") {
call.parameters["type"] ?: throw Exception("missing type")
val session = call.sessions.get<ChatSession>()
if (session == null) {
println( "WEB-SOCKET:: client session is null" )
close(CloseReason(CloseReason.Codes.VIOLATED_POLICY, "No Session"))
return#webSocket
}
socketsWeb.addLiveSocket(session.id, this)
thread(start= true, name = "thread-live-socket") {
launch {
socketsWeb.checkLiveSocket()
}
}
}
}
}
Application Module
fun Application.socketModule() = runBlocking {
// other declarations
.....
install(Sessions) {
cookie<ChatSession>("SESSION")
}
intercept(ApplicationCallPipeline.Features) {
if (call.sessions.get<ChatSession>() == null) {
val sessionID = generateNonce()
println("generated Session: $sessionID")
call.sessions.set(ChatSession(sessionID))
}
}
routing {
webSocketServer.createWebSocket("/home", this)
}
}
I don't quite understand why the coroutine inside the webSocket lambda completes.
Can someone show me another, or the right, approach for this?
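For comparison, a sketch of one way to keep the module separate without spawning threads (assuming Ktor 1.x, where Application is a CoroutineScope; the 5-second interval and the frame loop are illustrative, and the service names are taken from the question):
// Sketch: the webSocket handler stays suspended by consuming incoming frames
// (returning from the lambda is what closes the socket), and the monitor runs as a
// coroutine in the Application scope instead of a dedicated thread.
fun Application.webSocketModule(sessionService: SocketSessionService = SocketSessionService()) {
    launch {
        while (isActive) {
            sessionService.checkLiveSocket()
            delay(5_000)
        }
    }
    routing {
        webSocket("/home") {
            val chatSession = call.sessions.get<ChatSession>()
            if (chatSession == null) {
                close(CloseReason(CloseReason.Codes.VIOLATED_POLICY, "empty Session"))
                return@webSocket
            }
            sessionService.addLiveSocket(chatSession.id, this)
            send(Frame.Text("connected to server"))
            for (frame in incoming) {
                // handle client frames here; the session stays open while this loop runs
            }
        }
    }
}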