How to test ApplicationEvent in Spring Integration Flow - Kotlin

In my Spring project (WebFlux/Kotlin Coroutines/Java 17), I defined a bean like this:
@Bean
fun sftpInboundFlow(): IntegrationFlow {
    return IntegrationFlows
        .from(
            Sftp.inboundAdapter(sftpSessionFactory())
                .preserveTimestamp(true)
                .deleteRemoteFiles(true) // delete files after transfer is done successfully
                .remoteDirectory(sftpProperties.remoteDirectory)
                .regexFilter(".*\\.csv$")
                // local settings
                .localFilenameExpression("#this.toUpperCase() + '.csv'")
                .autoCreateLocalDirectory(true)
                .localDirectory(File("./sftp-inbound"))
        ) { e: SourcePollingChannelAdapterSpec ->
            e.id("sftpInboundAdapter")
                .autoStartup(true)
                .poller(Pollers.fixedDelay(5000))
        }
        /* .handle { m: Message<*> ->
            run {
                val file = m.payload as File
                log.debug("payload: ${file}")
                applicationEventPublisher.publishEvent(ReceivedEvent(file))
            }
        }*/
        .transform<File, DownloadedEvent> { DownloadedEvent(it) }
        .handle(downloadedEventMessageHandler())
        .get()
}

@Bean
fun downloadedEventMessageHandler(): ApplicationEventPublishingMessageHandler {
    val handler = ApplicationEventPublishingMessageHandler()
    handler.setPublishPayload(true)
    return handler
}
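For context, DownloadedEvent is assumed here to be a simple wrapper around the transferred file, along the lines of:

data class DownloadedEvent(val file: File)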
And I wrote a test to assert the application event:
@OptIn(ExperimentalCoroutinesApi::class)
@SpringBootTest(
    classes = [SftpIntegrationFlowsTestWithEmbeddedSftpServer.TestConfig::class]
)
@TestPropertySource(
    properties = [
        "sftp.hostname=localhost",
        "sftp.port=2222",
        "sftp.user=user",
        "sftp.privateKey=classpath:META-INF/keys/sftp_rsa",
        "sftp.privateKeyPassphrase=password",
        "sftp.remoteDirectory=${SftpTestUtils.sftpTestDataDir}",
        "logging.level.org.springframework.integration.sftp=TRACE",
        "logging.level.org.springframework.integration.file=TRACE",
        "logging.level.com.jcraft.jsch=TRACE"
    ]
)
@RecordApplicationEvents
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class SftpIntegrationFlowsTestWithEmbeddedSftpServer {

    companion object {
        private val log = LoggerFactory.getLogger(SftpIntegrationFlowsTestWithEmbeddedSftpServer::class.java)
    }

    @Configuration
    @Import(
        value = [
            SftpIntegrationFlows::class,
            IntegrationConfig::class
        ]
    )
    @ImportAutoConfiguration(
        value = [
            IntegrationAutoConfiguration::class
        ]
    )
    @EnableConfigurationProperties(value = [SftpProperties::class])
    class TestConfig {

        @Bean
        fun embeddedSftpServer(sftpProperties: SftpProperties): EmbeddedSftpServer {
            val sftpServer = EmbeddedSftpServer()
            sftpServer.setPort(sftpProperties.port ?: 22)
            //sftpServer.setHomeFolder()
            return sftpServer
        }

        @Bean
        fun remoteFileTemplate(sessionFactory: SessionFactory<LsEntry>) = RemoteFileTemplate(sessionFactory)
    }

    @Autowired
    lateinit var uploadGateway: UploadGateway

    @Autowired
    lateinit var embeddedSftpServer: EmbeddedSftpServer

    @Autowired
    lateinit var template: RemoteFileTemplate<LsEntry>

    @Autowired
    lateinit var applicationEvents: ApplicationEvents

    @BeforeAll
    fun setup() {
        embeddedSftpServer.start()
    }

    @AfterAll
    fun teardown() {
        embeddedSftpServer.stop()
    }

    @Test
    //@Disabled("application events can not be tracked in this integration tests")
    fun `download the processed ach batch files to local directory`() = runTest {
        val testFilename = "foo.csv"
        SftpTestUtils.createTestFiles(template, testFilename)
        eventually(10.seconds) {
            // applicationEvents.stream().forEach { log.debug("published event: $it") }
            applicationEvents.stream(DownloadedEvent::class.java).count() shouldBe 1
            SftpTestUtils.fileExists(template, testFilename) shouldBe false
            SftpTestUtils.cleanUp(template)
        }
    }
}
The test cannot capture the application events via ApplicationEvents.
I tried replacing the ApplicationEventPublishingMessageHandler with a constructor-injected ApplicationEventPublisher; that also does not work as expected.
Check the complete test source code: SftpIntegrationFlowsTestWithEmbeddedSftpServer
Update: ApplicationEvents does not work across async threads. Whether applying @Async on the listener method or publishing from an async thread, the recorded application events did not work as expected, presumably because the records are kept per test thread, so events published from other threads (such as the integration flow's poller thread) are never seen.

I'm not familiar with that @RecordApplicationEvents, so I would register an @EventListener (for the File payload) in the supporting @Configuration, with some async barrier to wait for an event from that scheduled task; see the sketch below.
You can turn on DEBUG logging for org.springframework.integration and enable Message History to see in the logs how your message travels, if there is a message at all given your SFTP state.
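A minimal sketch of that idea, assuming the DownloadedEvent type from the question; the configuration class and latch bean names are hypothetical:

import java.util.concurrent.CountDownLatch
import java.util.concurrent.TimeUnit
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.context.event.EventListener

@Configuration
class DownloadedEventTestConfig {

    // The latch is the async barrier the test waits on.
    @Bean
    fun downloadedLatch(): CountDownLatch = CountDownLatch(1)

    // With setPublishPayload(true) the DownloadedEvent payload itself is
    // published, so the listener can subscribe by payload type. Unlike
    // @RecordApplicationEvents, a plain listener fires no matter which
    // thread (e.g. the poller thread) publishes the event.
    @EventListener
    fun onDownloaded(event: DownloadedEvent) {
        downloadedLatch().countDown()
    }
}

The test would then autowire the latch and assert downloadedLatch.await(10, TimeUnit.SECONDS) shouldBe true instead of counting ApplicationEvents.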

Related

Kafka Parallel Consumer React produce events with batch

I am working with Kafka Parallel Consumer to consume and process messages. Now I would also like to produce new events to a Kafka topic. This works with the ParallelStreamProcessor, but I am failing to make it work with the ReactorProcessor.
Here is the code that is working for me:
pConsumer = createPConsumer()
pConsumer.subscribe(UniLists.of(kafkaConsumerConfig.kafkaTopic))
pConsumer.pollAndProduceMany({ something ->
    val records = something.stream().toList()
    records.map { any ->
        println("Consuming ${any.partition()}:${any.offset()}")
        ProducerRecord<String, JsonObject>(
            "output", any.key(),
            JsonObject(mapOf("someTest" to any.offset()))
        )
    }
}, { consumeProduceResult ->
    println(
        "Message ${consumeProduceResult.getOut()} saved to broker at offset " +
            "${consumeProduceResult.getMeta().offset()}"
    )
})

private fun createPConsumer(): ParallelStreamProcessor<String, JsonObject> {
    val producer = KafkaProducerBuilder.getProducer(kafkaConsumerConfig)
    val options = ParallelConsumerOptions.builder<String, JsonObject>()
        .ordering(ParallelConsumerOptions.ProcessingOrder.KEY)
        .maxConcurrency(parallelConsumerConfig.maxConcurrency)
        .batchSize(parallelConsumerConfig.batchSize)
        .consumer(buildConsumer(kafkaConsumerConfig))
        .producer(producer)
        .build()
    return ParallelStreamProcessor.createEosStreamProcessor(options)
}
I expected this to send events, but it does not:
pConsumer.react { context ->
    val events = context.stream().toList()
    // do something with events
    val results = events.map { any ->
        ProducerRecord<String, JsonObject>(
            "output", any.key(),
            JsonObject(mapOf("someTest" to any.offset()))
        )
    }
    Mono.just(results)
}
I will appreciate any advice.
So, currently (version 0.5.2.4) it is not supported; see the issue.
We implemented it in the following way, if it helps anyone:
// Example usage
parallelConsumer.react(context -> {
    var consumerRecord = context.getSingleRecord().getConsumerRecord();
    log.info("Concurrently constructing and returning RequestInfo from record: {}", consumerRecord);
    Map<String, String> params = UniMaps.of("recordKey", consumerRecord.key(), "payload", consumerRecord.value());
    Mono originalResult = Mono.just(Arrays.asList(new ProducerRecord("topic", "key", "some value")));
    return originalResult.map(batchProducer::produce);
});
class BatchProducer<K, V> {
    Producer<K, V> producer;

    public BatchProducer(Producer<K, V> producer) {
        this.producer = producer;
    }

    public Mono<List<RecordMetadata>> produce(List<ProducerRecord<K, V>> messages) {
        List<CompletableFuture<RecordMetadata>> futures = messages.stream().map(message -> {
            CompletableFuture<RecordMetadata> completableFuture = new CompletableFuture<RecordMetadata>();
            Callback kafkaCallback = createCallback(completableFuture);
            producer.send(message, kafkaCallback);
            return completableFuture;
        }).toList();
        CompletableFuture<List<RecordMetadata>> oneResult = sequence(futures);
        return Mono.fromFuture(oneResult);
    }

    // From here: https://stackoverflow.com/questions/30025428/convert-from-listcompletablefuture-to-completablefuturelist
    static <T> CompletableFuture<List<T>> sequence(List<CompletableFuture<T>> com) {
        return CompletableFuture.allOf(com.toArray(new CompletableFuture<?>[0]))
            .thenApply(v -> com.stream()
                .map(CompletableFuture::join)
                .collect(Collectors.toList())
            );
    }

    private Callback createCallback(CompletableFuture<RecordMetadata> completableFuture) {
        return new Callback() {
            @Override
            public void onCompletion(RecordMetadata metadata, Exception exception) {
                if (exception != null) {
                    completableFuture.completeExceptionally(exception);
                } else {
                    completableFuture.complete(metadata);
                }
            }
        };
    }

    public void close() {
        producer.close();
    }
}

Configuring graphqlServlet with Jetty Server

I am getting the compilation error below while adding the servlet mapping. I am not sure what is wrong with the following code when adding graphqlServlet to the handler.
Compilation error: None of the following functions can be called with the arguments supplied.
(Servlet!) defined in org.eclipse.jetty.servlet.ServletHolder
(Class<out Servlet!>!) defined in org.eclipse.jetty.servlet.ServletHolder
(Source!) defined in org.eclipse.jetty.servlet.ServletHolder
GraphQLServlet.kt
class GraphQLServlet(schemaBuilder: SchemaBuilder) : SimpleGraphQLHttpServlet() {
    private val schema = schemaBuilder.buildSchema()

    public override fun doPost(request: HttpServletRequest?, response: HttpServletResponse?) {
        super.doPost(request, response)
    }

    public override fun getConfiguration(): GraphQLConfiguration {
        return GraphQLConfiguration.with(schema)
            .with(GraphQLQueryInvoker.newBuilder().build())
            .build()
    }
}
Jetty.kt
class API {
    fun start() {
        val handler = createHandler()
        Server(8080).apply {
            setHandler(handler)
            start()
        }
    }

    private fun createHandler(): WebAppContext {
        val schemaBuilder = MyApiSchemaBuilder()
        val graphqlServlet: Servlet = GraphQLServlet(schemaBuilder)
        val handler = ServletHandler()
        return WebAppContext().apply {
            setResourceBase("/")
            handler.addServletWithMapping(ServletHolder(graphqlServlet), "/graphql")
        }
    }
}
handler.addServletWithMapping(ServletHolder(graphqlServlet), "/graphql")
I was able to figure it out: adding jetty-servlet to my dependencies solved the problem.
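For anyone hitting the same error, the dependency would look something like this in a Gradle Kotlin DSL build (the version shown is illustrative; match it to your Jetty version):

dependencies {
    // jetty-servlet provides ServletHolder and ServletHandler
    implementation("org.eclipse.jetty:jetty-servlet:9.4.51.v20230217")
}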

How can I override logRequest/logResponse to log custom message in Ktor client logging?

Currently, the Ktor client logging implementation is as below; it works as intended, but it is not what I want.
public class Logging(
    public val logger: Logger,
    public var level: LogLevel,
    public var filters: List<(HttpRequestBuilder) -> Boolean> = emptyList()
)
....
private suspend fun logRequest(request: HttpRequestBuilder): OutgoingContent? {
    if (level.info) {
        logger.log("REQUEST: ${Url(request.url)}")
        logger.log("METHOD: ${request.method}")
    }
    val content = request.body as OutgoingContent
    if (level.headers) {
        logger.log("COMMON HEADERS")
        logHeaders(request.headers.entries())
        logger.log("CONTENT HEADERS")
        logHeaders(content.headers.entries())
    }
    return if (level.body) {
        logRequestBody(content)
    } else null
}
The above creates a nightmare when reading the logs because every item is logged on its own line. Since I'm a beginner in Kotlin and Ktor, I'd love to know how to change this behaviour. Because Kotlin classes are final unless explicitly opened, I don't know how to approach modifying the logRequest function's behaviour. What I ideally want to achieve is something like the example below.
....
private suspend fun logRequest(request: HttpRequestBuilder): OutgoingContent? {
    ...
    if (level.body) {
        val content = request.body as OutgoingContent
        return logger.log(
            value("url", Url(request.url)),
            value("method", request.method),
            value("body", content)
        )
    }
Any help would be appreciated.
There is no way to override a private method in a non-open class, but if you just want your logging to work differently, you're better off with a custom interceptor at the same stage of the pipeline:
val client = HttpClient(CIO) {
    install("RequestLogging") {
        sendPipeline.intercept(HttpSendPipeline.Monitoring) {
            logger.info(
                "Request: {} {} {} {}",
                context.method,
                Url(context.url),
                context.headers.entries(),
                context.body
            )
        }
    }
}

runBlocking {
    client.get<String>("https://google.com")
}
This will produce the logging you want. Of course, to properly log POST requests you will need to do some extra work, as sketched below.
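For example, if the request body was serialized as TextContent, the interceptor above could log it directly. This is only a sketch: streaming OutgoingContent subtypes would need to be duplicated before they can be read.

sendPipeline.intercept(HttpSendPipeline.Monitoring) {
    // TextContent covers plain text/JSON payloads; other OutgoingContent
    // subtypes (read channels, streams) are skipped here.
    val bodyText = (context.body as? io.ktor.http.content.TextContent)?.text
        ?: "[body omitted]"
    logger.info("Request: {} {} body: {}", context.method, Url(context.url), bodyText)
}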
Maybe this will be useful for someone:
HttpClient() {
    install("RequestLogging") {
        responsePipeline.intercept(HttpResponsePipeline.After) {
            val request = context.request
            val response = context.response
            kermit.d(tag = "Network") {
                "${request.method} ${request.url} ${response.status}"
            }
            GlobalScope.launch(Dispatchers.Unconfined) {
                val responseBody =
                    response.content.tryReadText(response.contentType()?.charset() ?: Charsets.UTF_8)
                        ?: "[response body omitted]"
                kermit.d(tag = "Network") {
                    "${request.method} ${request.url} ${response.status}\nBODY START" +
                        "\n$responseBody" +
                        "\nBODY END"
                }
            }
        }
    }
}
You also need to add this method from Ktor's Logger.kt to the class containing your HttpClient:
internal suspend inline fun ByteReadChannel.tryReadText(charset: Charset): String? = try {
    readRemaining().readText(charset = charset)
} catch (cause: Throwable) {
    null
}

Mock method with multiple lambda parameters

Say I have this situation:
interface AppRepository {
    fun getMovies(success: (List<String>) -> Unit, failure: (Int) -> Unit)
}
and I want to mock the implementation of this interface. In this case there are two lambdas as input parameters to the getMovies method, and for the test case I only want to produce success (success.invoke(theMoviesList) should be called).
Below is something similar to what I would like to do:
class MovieViewModel constructor(val repository: AppRepository) {
    var movieNames = listOf<String>() // Not private, or live data, for simplicity

    fun fetchMovies() {
        repository.getMovies(
            success = { movies ->
                this.movieNames = movies
            },
            failure = { statusCode ->
            })
    }
}
class MoviePageTests {
    private var movieViewModel: MovieViewModel? = null

    @Mock
    lateinit var mockRepository: AppRepository

    @Before
    @Throws(Exception::class)
    fun before() {
        MockitoAnnotations.initMocks(this)
        movieViewModel = MovieViewModel(repository = mockRepository)
    }

    @Test
    fun checkFetchMoviesUpdatesMoviesData() {
        val testMovies = listOf("Dracula", "Superman")
        // Set up Mockito so that the repository produces success with testMovies above
        ?????
        //
        movieViewModel?.fetchMovies()
        assertEquals(testMovies, movieViewModel?.movieNames)
    }
}
I know how to do this by way of a RepositoryImpl, but not in Mockito, despite looking at many examples online.
Any ideas?
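One possible approach, sketched with the mockito-kotlin library (an assumption; its any() matchers are null-safe for Kotlin's non-nullable lambda parameters), is to stub getMovies with an answer that extracts and invokes the success lambda:

import org.mockito.kotlin.any
import org.mockito.kotlin.whenever

// In place of the ????? placeholder above:
whenever(mockRepository.getMovies(any(), any())).thenAnswer { invocation ->
    // Argument 0 is the success lambda; invoke it with the test data.
    val success = invocation.getArgument<(List<String>) -> Unit>(0)
    success(testMovies)
}

With this stub in place, fetchMovies() runs the success path synchronously and the assertion on movieNames passes.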

How to be notified when all futures in a Future.compose chain have succeeded?

My application (a typical REST server that calls other REST services internally) has two main classes that perform the bootstrapping procedure.
There is the Application.kt class, which configures the Vertx instance itself and registers certain modules (the Jackson Kotlin integration, for example):
class Application(
    private val profileSetting: String? = System.getenv("ACTIVE_PROFILES"),
    private val logger: Logger = LoggerFactory.getLogger(Application::class.java)!!
) {
    fun bootstrap() {
        val profiles = activeProfiles()
        val meterRegistry = configureMeters()
        val vertx = bootstrapVertx(meterRegistry)
        vertx.deployVerticle(ApplicationBootstrapVerticle(profiles)) { startup ->
            if (startup.succeeded()) {
                logger.info("Application startup finished")
            } else {
                logger.error("Application startup failed", startup.cause())
                vertx.close()
            }
        }
    }
}
In addition, there is an ApplicationBootstrapVerticle.kt class that deploys the different verticles in a defined order, some of them in sequence, some in parallel:
class ApplicationBootstrapVerticle(
    private val profiles: List<String>,
    private val logger: Logger = LoggerFactory.getLogger(ApplicationBootstrapVerticle::class.java)
) : AbstractVerticle() {

    override fun start(startFuture: Future<Void>) {
        initializeApplicationConfig().compose {
            logger.info("Application configuration initialized")
            initializeScheduledJobs()
        }.compose {
            logger.info("Scheduled jobs initialized")
            initializeRestEndpoints()
        }.compose {
            logger.info("Http server started")
            startFuture
        }.setHandler { ar ->
            if (ar.succeeded()) {
                startFuture.complete()
            } else {
                startFuture.fail(ar.cause())
            }
        }
    }

    private fun initializeApplicationConfig(): Future<String> {
        return Future.future<String>().also {
            vertx.deployVerticle(
                ApplicationConfigVerticle(profiles),
                it.completer()
            )
        }
    }

    private fun initializeScheduledJobs(): CompositeFuture {
        val stationsJob = Future.future<String>()
        val capabilitiesJob = Future.future<String>()
        return CompositeFuture.all(stationsJob, capabilitiesJob).also {
            vertx.deployVerticle(
                StationQualitiesVerticle(),
                stationsJob.completer()
            )
            vertx.deployVerticle(
                VideoCapabilitiesVerticle(),
                capabilitiesJob.completer()
            )
        }
    }

    private fun initializeRestEndpoints(): Future<String> {
        return Future.future<String>().also {
            vertx.deployVerticle(
                RestEndpointVerticle(dispatcherFactory = RouteDispatcherFactory(vertx)),
                it.completer()
            )
        }
    }
}
I am not sure if this is the intended way to bootstrap an application, if there is one. More importantly, I am not sure whether I understand the Future.compose mechanics correctly.
The application starts up successfully and I see all desired log messages except the
Application startup finished
message. Also, the following code is never called in case of success:
}.setHandler { ar ->
    if (ar.succeeded()) {
        startFuture.complete()
    } else {
        startFuture.fail(ar.cause())
    }
}
In case of a failure though, for example when my application configuration files (YAML) cannot be parsed because there is an unknown field in the destination entity, the log message
Application startup failed
appears in the logs, and the code above is invoked.
I am curious what is wrong with my composed futures chain. I thought the handler would be called after the previous futures succeeded or after one of them failed, but it seems it is only called in case of failure.
Update
I suppose an invocation of startFuture.complete() was missing. After adapting the start method, it finally worked:
override fun start(startFuture: Future<Void>) {
    initializeApplicationConfig().compose {
        logger.info("Application configuration initialized")
        initializeScheduledJobs()
    }.compose {
        logger.info("Scheduled jobs initialized")
        initializeRestEndpoints()
    }.compose {
        logger.info("Http server started")
        startFuture.complete()
        startFuture
    }.setHandler(
        startFuture.completer()
    )
}
I am not sure, though, if this is the intended way to handle this future chain.
The solution that worked for me looks like this (the compose step that returned startFuture is gone, so the handler completes startFuture directly once the composed futures succeed):
override fun start(startFuture: Future<Void>) {
    initializeApplicationConfig().compose {
        logger.info("Application configuration initialized")
        initializeScheduledJobs()
    }.compose {
        logger.info("Scheduled jobs initialized")
        initializeRestEndpoints()
    }.setHandler { ar ->
        if (ar.succeeded()) {
            logger.info("Http server started")
            startFuture.complete()
        } else {
            startFuture.fail(ar.cause())
        }
    }
}
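For what it's worth, a minimal sketch with stand-in futures (Vert.x 3.x-era API) shows why the original chain never completed on success: the last compose step returned startFuture itself, so the chain could only complete once startFuture completed, while startFuture was only completed inside the handler attached to that same chain.

val startFuture = Future.future<Void>()

Future.succeededFuture("config")
    .compose { Future.succeededFuture("jobs") }
    .compose { startFuture }       // the chain now waits for startFuture ...
    .setHandler { ar ->
        if (ar.succeeded()) {
            startFuture.complete() // ... which only completes here: a circular
        }                          // wait, so the success branch never fires
    }
// A failure, by contrast, short-circuits compose straight to the handler,
// which matches the behaviour observed above.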