I was trying to implement retry logic in Kotlin and Reactor based on the Reactor Extra package's features. The idea is to pass a list of durations and, on each context.iteration, use the (iteration-1)th element of the list as the backoff delay. It partly works, but I always get an IndexOutOfBoundsException on the last iteration, which is one more iteration than I expected, even though I've set the maximum number of retries to the size of the list. The retries do run with the given durations and the "correct" number of times (surely because the IndexOutOfBoundsException prevents more); only this exception (and its root cause) bothers me.
This is my custom BackOff interface:
interface MyCustomBackoff : Backoff {
    companion object {
        fun getBackoffDelay(backoffList: List<Duration>): (IterationContext<*>) -> BackoffDelay {
            return { context -> BackoffDelay(backoffList[(context.iteration() - 1).toInt()]) }
        }
    }
}
And my Kotlin extension is:
fun <T> Mono<T>.retryCustomBackoffs(backoffList: List<Duration>, doOnRetry: ((RetryContext<T>) -> Unit)? = null): Mono<T> {
    val retry = Retry.any<T>().retryMax(backoffList.size.toLong()).backoff(MyCustomBackoff.getBackoffDelay(backoffList))
    return if (doOnRetry == null) {
        this.retryWhen(retry)
    } else {
        this.retryWhen(retry.doOnRetry(doOnRetry))
    }
}
What am I missing here?
If you look at reactor.retry.AbstractRetry#calculateBackoff, you will find that there is a special BackoffDelay named RETRY_EXHAUSTED. It is returned only when retryContext.iteration() > maxIterations (not >=), and that check happens after backoff.apply(retryContext) has already been called:
if (retryContext.iteration() > maxIterations || Instant.now(clock).plus(jitteredBackoff).isAfter(timeoutInstant))
    return RETRY_EXHAUSTED;
So, if you have 2 custom backoff delays in the list, calculateBackoff will ask your backoff function for 3 delays; the third call indexes past the end of the list, which is the IndexOutOfBoundsException you see.
You could change your MyCustomBackoff like so (excuse me for Java, I'm not familiar with Kotlin):
public interface MyCustomBackoff extends Backoff {
    static Backoff getBackoffDelay(List<Duration> backoffList) {
        return context -> context.iteration() <= backoffList.size() ?
                new BackoffDelay(backoffList.get(Long.valueOf(context.iteration() - 1).intValue())) :
                new BackoffDelay(Duration.ZERO);
    }
}
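For reference, the same guard translated back into Kotlin could look roughly like this (an untested sketch based on your original interface):
interface MyCustomBackoff : Backoff {
    companion object {
        fun getBackoffDelay(backoffList: List<Duration>): (IterationContext<*>) -> BackoffDelay {
            return { context ->
                // Guard against the extra call made on the exhausted iteration
                if (context.iteration() <= backoffList.size) {
                    BackoffDelay(backoffList[(context.iteration() - 1).toInt()])
                } else {
                    BackoffDelay(Duration.ZERO)
                }
            }
        }
    }
}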
As you know, an Array and a List can only store elements of the same type.
I run Code A and get Result A.
It seems that the Flow can emit both Int values and String values. Why is that?
Code A
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

suspend fun performRequest(request: Int): Int {
    delay(1000) // imitate long-running asynchronous work
    return request
}

fun main() = runBlocking<Unit> {
    (1..3).asFlow() // a flow of requests
        .transform { request ->
            emit("Making request $request")
            if (request > 1) {
                emit(performRequest(request))
            }
        }
        .collect { response -> println(response) }
}
Result A
Making request 1
Making request 2
2
Making request 3
3
This is not a question about Flow but about Java/Kotlin generics and type safety.
The type this flow returns is Comparable<*>:
val flow: Flow<Comparable<*>> = (1..3).asFlow() // a flow of requests
    .transform { request ->
        emit("Making request $request")
        if (request > 1) {
            emit(performRequest(request))
        }
    }
If you explicitly specify which type you want the Flow to return, you can restrict the types it may emit.
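For example, explicitly declaring the flow as Flow<Any> documents that mixed emissions are intended, while declaring it as Flow<String> would turn the Int emission into a compile error. A small sketch reusing performRequest from Code A:
val anyFlow: Flow<Any> = (1..3).asFlow()
    .transform { request ->
        emit("Making request $request")   // String is acceptable for Flow<Any>
        if (request > 1) {
            emit(performRequest(request)) // Int is acceptable for Flow<Any>
        }
    }

// Declaring the same flow as Flow<String> would make emit(performRequest(request))
// fail to compile, because Int is not a String.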
For generics you can refer here or check any document about generics in Java/Kotlin; for type safety you can refer to this question.
Also, when you are in doubt about what the inferred type is, use Alt+Enter in Android Studio to see the available options and select "Specify type explicitly".
Disregarding the nature of this request, you can have the functionality you want by making your flow emit instances of some algebraic data type that is basically a "sum" (from the type-theoretic POV) of your constituent types:
sealed interface Record
data class IntData(val get: Int) : Record
data class Metadata(val get: String) : Record

// somewhere later (flow is of type Flow<Record>)
fun main() = runBlocking<Unit> {
    (1..3).asFlow() // a flow of requests
        .transform { request ->
            emit(Metadata("Making request $request"))
            if (request > 1) {
                emit(IntData(performRequest(request)))
            }
            // probably want to handle the `else` case too
        }
        .collect { response -> println(response) }
}
This would be a good solution since it's extensible (i.e. you can add other cases later on if you need to).
In your specific case though, since you just want to debug the flow, you might not want to actually emit the "metadata" at all and instead test your code directly.
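One nice property of the sealed Record hierarchy is that collectors can branch exhaustively over it. A minimal sketch of such a collector (my own illustration, reusing the types above and performRequest from Code A):
fun main() = runBlocking<Unit> {
    (1..3).asFlow()
        .transform { request ->
            emit(Metadata("Making request $request"))
            if (request > 1) {
                emit(IntData(performRequest(request)))
            }
        }
        .collect { record ->
            // The compiler knows these are the only Record subtypes
            when (record) {
                is Metadata -> println("log: ${record.get}")
                is IntData -> println("result: ${record.get}")
            }
        }
}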
I am currently working on a personal project - in which I need my Spring application to take queries from an EMQX (MQTT Server) and query its data for corresponding results, and then push the results to a topic with the query UUID.
This is working - after many hours understanding how the Spring Integration framework works. But I think the way in which the handler is using "block" is incorrect - and not in keeping with the manner in which the Integration Flow should operate. Whilst this works I do want to make sure it is being done properly - out of respect for the work - and to avoid future issues.
The code snippet below should be enough to understand what it is that I'm trying to achieve - and where the potential issue lies.
@Bean
fun mqttInFlow(): Publisher<Message<String>> {
    return IntegrationFlows.from(inbound())
        .handle<String> { payload, headers ->
            val emotionalOutput: EmotionalOutput = gson.fromJson(payload, EmotionalOutput::class.java)
            emotionalPrintService.populateEmotionalOutput(emotionalOutput).map {
                MessageBuilder.withPayload(gson.toJson(it))
                    .copyHeaders(headers)
                    .setHeader(MqttHeaders.TOPIC, "query/" + it.query_uuid).build()
            }.block()
        }
        .channel(outgoingChannel())
        .toReactivePublisher()
}
EDIT - Thanks for the advice - here is what I understood to be the suggested change using the Kotlin DSL. This is now producing an error complaining that no output-channel or replyChannel is available; nothing outside of this function has been changed.
@Bean
fun newMqttInFlow() =
    integrationFlow(inbound()) {
        wireTap {
            handle<String> { payload, headers ->
                gson.fromJson<EmotionalOutput>(payload, EmotionalOutput::class.java).let { emotionalOutput ->
                    emotionalPrintService.populateEmotionalOutput(emotionalOutput).map { populatedEmotionalOutput ->
                        MessageBuilder.withPayload(gson.toJson(populatedEmotionalOutput))
                            .copyHeaders(headers)
                            .setHeader(MqttHeaders.TOPIC, populatedEmotionalOutput.query_uuid)
                    }
                }
            }
        }
        channel("outgoingChannel")
    }
The exception is:
exception is org.springframework.messaging.core.DestinationResolutionException: no output-channel or replyChannel header available
Although I have many years' experience with Java, this approach is new to me, so thank you very much for your assistance. It's appreciated. If the whole class would be useful, I can post that.
EDIT
Here is the configuration class, which might give better insight into what is causing this secondary error:
2021-03-28 21:59:48.008 ERROR 84492 --- [T Call: divnrin] o.s.integration.handler.LoggingHandler : org.springframework.messaging.MessageHandlingException: error occurred in message handler [bean 'mqttOutbound'; defined in: 'class path resource [io/divnr/appserver/configuration/MQTTConfiguration.class]'; from source: 'org.springframework.core.type.classreading.SimpleMethodMetadata#4a9419d7']; nested exception is java.lang.IllegalArgumentException: This default converter can only handle 'byte[]' or 'String' payloads; consider adding a transformer to your flow definition, or provide a BytesMessageMapper, or subclass this converter for reactor.core.publisher.MonoMapFuseable payloads, failedMessage=GenericMessage [payload=MonoMapFuseable, headers={mqtt_receivedRetained=false, mqtt_id=0, mqtt_duplicate=false, id=c5a75283-c0fe-ebac-4168-dabddd989da9, mqtt_receivedTopic=source/d9e50e8f-67e0-4505-7ca2-4d05b1242207, mqtt_receivedQos=0, timestamp=1616961588004}]
at org.springframework.integration.support.utils.IntegrationUtils.wrapInHandlingExceptionIfNecessary(IntegrationUtils.java:192)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:65)
at
The full class is provided below.
@Configuration
@EnableIntegration
@IntegrationComponentScan
class MQTTConfiguration(val emotionalPrintService: EmotionalPrintService,
                        val gson: Gson,
                        val applicationConfiguration: ApplicationConfiguration) {

    @Bean
    fun mqttServiceFactory(): MqttPahoClientFactory {
        return DefaultMqttPahoClientFactory().apply {
            connectionOptions = MqttConnectOptions().apply {
                serverURIs = arrayOf<String>(applicationConfiguration.mqttServerAddress)
            }
        }
    }

    @Bean
    fun newMqttInFlow() =
        integrationFlow(inbound()) {
            handle<String> { payload, headers ->
                gson.fromJson<EmotionalOutput>(payload, EmotionalOutput::class.java).let { emotionalOutput ->
                    emotionalPrintService.populateEmotionalOutput(emotionalOutput).map { populatedEmotionalOutput ->
                        MessageBuilder.withPayload(gson.toJson(populatedEmotionalOutput))
                            .copyHeaders(headers)
                            .setHeader(MqttHeaders.TOPIC, populatedEmotionalOutput.query_uuid).build()
                    }
                }
            }
            channel(outgoingChannel())
        }

    @Bean
    @ServiceActivator(requiresReply = "false", inputChannel = "outgoingChannel")
    fun mqttOutbound(): MessageHandler {
        val messageHandler = MqttPahoMessageHandler("divnrout", mqttServiceFactory())
        messageHandler.setAsync(true)
        return messageHandler
    }

    @Bean
    fun outgoingChannel(): FluxMessageChannel {
        return FluxMessageChannel()
    }

    @Bean
    fun inbound(): MessageProducerSupport {
        return MqttPahoMessageDrivenChannelAdapter("divnrin", mqttServiceFactory(),
            "source/" + applicationConfiguration.sourceUuid).apply {
            setConverter(DefaultPahoMessageConverter())
            setQos(1)
        }
    }
}
You indeed don't need that block() at the end of your handle(). You can just return the Mono from that emotionalPrintService.populateEmotionalOutput() and the framework will take care of the proper subscription and back-pressure handling for you.
What you still need to do is make that outgoingChannel() a FluxMessageChannel.
See more info in docs: https://docs.spring.io/spring-integration/docs/current/reference/html/reactive-streams.html#reactive-streams
Plus, consider moving your IntegrationFlow solution to the proper Kotlin DSL: https://docs.spring.io/spring-integration/docs/current/reference/html/kotlin-dsl.html#kotlin-dsl
Also: when there is a FluxMessageChannel at the end of the flow, there is no reason to bother with toReactivePublisher(); the FluxMessageChannel is a Publisher<Message<?>> by itself.
UPDATE
The problem is here:
handle<String>({ payload, headers ->
    gson.fromJson<EmotionalOutput>(payload, EmotionalOutput::class.java).let { emotionalOutput ->
        emotionalPrintService.populateEmotionalOutput(emotionalOutput).map { populatedEmotionalOutput ->
            MessageBuilder.withPayload(gson.toJson(populatedEmotionalOutput))
                .copyHeaders(headers)
                .setHeader(MqttHeaders.TOPIC, populatedEmotionalOutput.query_uuid).build()
        }
    }
}) { async(true) }
See that async(true) option. Unfortunately, in the current version we don't process a reactive reply in a reactive manner by default: you have to say that you'd like this endpoint to be async. With that, your Publisher reply and the FluxMessageChannel as an output are going to do the proper trick.
I encountered a case where I have a nested Flux. I don't care about the individual results of the inner Flux, as it returns Unit (in Kotlin; Void in Java), but I want to know whether the Flux aborted due to an error or not. I thought I could use the then function, as the doc states: "Error signal is replayed in the resulting Mono<V>".
My problem can be reduced to this minimal (Kotlin) unit test:
@Test
fun fluxTest() {
    val flux = Flux.just("willFail", "willSucceed")
        .flatMap { outer ->
            // In my real world example the inner flux is created via Flux.fromIterable
            // from a property of the `outer` object
            Flux.just(1)
                .flatMap { inner ->
                    // this simulates a Mono.fromSupplier that can throw exceptions
                    if (outer == "willFail") Mono.error<Unit>(RuntimeException("bam"))
                    else Mono.just(Unit)
                }
                // We don't care about the Flux as it returns Unit/Void
                // All we want to know is whether there was an error or not
                .then(Mono.just(outer))
        }
        .onErrorContinue { error, item -> println("$item => $error") }
        .collectList()

    StepVerifier.create(flux)
        .expectNextMatches { it.size == 1 }
        .verifyComplete()
}
So we have 2 elements. In the inner Flux, one of the elements will fail during processing and the other won't. I expect the error to propagate through the pipeline, where it is caught and discarded in the onErrorContinue.
Therefore I'd expect 1 element in the resulting list, but I get the original 2. I have no clue why.
Now comes the fun part: in this particular test case, I can replace Flux.just(1) with Mono.just(1) (in my real world case this doesn't work, of course, because the flux has more than 1 element) and suddenly my test passes:
@Test
fun fluxTest() {
    val flux = Flux.just("willFail", "willSucceed")
        .flatMap { outer ->
            // In my real world example the inner flux is created via Flux.fromIterable
            // from a property of the `outer` object
            Mono.just(1)
                .flatMap { inner ->
                    // this simulates a Mono.fromSupplier that can throw exceptions
                    if (outer == "willFail") Mono.error<Unit>(RuntimeException("bam"))
                    else Mono.just(Unit)
                }
                // We don't care about the Flux as it returns Unit/Void
                // All we want to know is whether there was an error or not
                .then(Mono.just(outer))
        }
        .onErrorContinue { error, item -> println("$item => $error") }
        .collectList()

    StepVerifier.create(flux)
        .expectNextMatches { it.size == 1 }
        .verifyComplete()
}
So obviously there is a difference between Mono.then(Mono<T>) and Flux.then(Mono<T>), but there shouldn't be, since the Javadoc is the same, right?
Side note: instead of Flux.then(Mono.just(outer)) I also tried Mono.defer, but that doesn't change anything.
I understand that in Kotlin there is no such thing as "non-local variables" or "global variables". I am looking for a way to modify a variable from another scope in Kotlin, using the function below:
class Listres {
    var listsize = 0

    fun gatherlistresult() {
        var listallinfo = FirebaseStorage.getInstance()
            .getReference()
            .child("MainTimeline/")
            .listAll()

        listallinfo.addOnSuccessListener { listResult ->
            listsize += listResult.items.size
        }
    }
}
The value of listsize is always 0 (logging the result from inside the .addOnSuccessListener scope shows 8), so clearly the listsize variable isn't being modified by the time I read it. I have seen many different posts about this topic on other sites, but none fit my use case.
I simply want to modify listsize inside the .addOnSuccessListener callback.
This will always show 0 because the addOnSuccessListener() listener is invoked after the method execution has completed. addOnSuccessListener() is a callback for an asynchronous operation, and you only get the value when that operation succeeds.
You can get the value by changing the code as below:
class Demo {
    var listsize = 0 // declared here so the listener can update it

    fun registerListResult() {
        var listallinfo = FirebaseStorage.getInstance()
            .getReference()
            .child("MainTimeline/")
            .listAll()

        listallinfo.addOnSuccessListener { listResult ->
            listsize += listResult.items.size
            processResult(listsize)
        }

        listallinfo.addOnFailureListener {
            // Uh-oh, an error occurred!
        }
    }

    fun processResult(listsize: Int) {
        print(listsize) // you will get the 8 here as you said
    }
}
What you're looking for is a way to bridge some asynchronous processing into a synchronous context. If possible it's usually better (in my opinion) to stick to one model (sync or async) throughout your code base.
That being said, sometimes these circumstances are out of our control. One approach I've used in similar situations involves introducing a BlockingQueue as a data pipe to transfer data from the async context to the sync context. In your case, that might look something like this:
class Demo {
    var listSize = 0

    fun registerListResult() {
        val listAll = FirebaseStorage.getInstance()
            .getReference()
            .child("MainTimeline/")
            .listAll()

        val dataQueue = ArrayBlockingQueue<Int>(1)
        listAll.addOnSuccessListener { dataQueue.put(it.items.size) }
        listSize = dataQueue.take()
    }
}
The key points are:
there is a blocking variant of the Queue interface that will be used to pipe data from the async context (listener) into the sync context (calling code)
data is put() on the queue within the OnSuccessListener
the calling code invokes the queue's take() method, which will cause that thread to block until a value is available
If that doesn't work for you, hopefully it will at least inspire some new thoughts!
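If blocking the calling thread indefinitely is a concern, a variation on this idea (my own sketch, not part of the original suggestion) is to use poll with a timeout instead of take, falling back to a default when no result arrives in time:
import java.util.concurrent.ArrayBlockingQueue
import java.util.concurrent.TimeUnit

fun awaitListSize(dataQueue: ArrayBlockingQueue<Int>): Int {
    // poll() returns null if nothing was put on the queue within the timeout
    return dataQueue.poll(5, TimeUnit.SECONDS) ?: 0
}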
Why is it not allowed to continue from inside the let function?
This code:
fun foo(elements: List<String?>) {
    for (element in elements) {
        element?.let {
            continue // error: 'break' or 'continue' jumps across a function or a class boundary
        }
    }
}
And even this code:
fun foo(elements: List<String?>) {
    loop@ for (element in elements) {
        element?.let {
            continue@loop // error: 'break' or 'continue' jumps across a function or a class boundary
        }
    }
}
Does not compile with error:
'break' or 'continue' jumps across a function or a class boundary
I know that in this particular case I can use filterNotNull or a manual check with a smart cast, but my question is: why is it not allowed to use continue here?
Please vote for this feature here: https://youtrack.jetbrains.com/issue/KT-1436
These would be called "non-local" breaks and continues. According to the documentation:
break and continue are not yet available in inlined lambdas, but we are planning to support them too.
Using a bare (i.e. non-local) return inside a lambda is only supported if it is an inlined lambda (because otherwise it doesn't have awareness of the context it is called from). So break and continue should be able to be supported as well; I don't know why the functionality has been delayed.
Note that there are workarounds for both of them by placing a run block either around or inside the loop, taking advantage of the fact that non-local returns are supported for lambdas passed to inline functions.
fun foo(elements: List<String?>) {
    run {
        for (element in elements) {
            element?.let {
                println("Non-null value found in list.")
                return@run // breaks the loop
            }
        }
    }
    println("Finished checking list")
}

fun bar(elements: List<String?>) {
    for (element in elements) {
        run {
            element?.let {
                return@run // continues the loop
            }
            println("Element is a null value.")
        }
    }
}
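As the question itself notes, for this specific null-skipping scenario the simplest alternatives are filterNotNull or a manual check with a smart cast; a brief sketch of both:
fun baz(elements: List<String?>) {
    // Skip nulls without needing `continue` inside `let`
    for (element in elements.filterNotNull()) {
        println("Non-null value: $element")
    }

    // Or with a manual check and a smart cast
    for (element in elements) {
        if (element == null) continue
        println("Non-null value: $element") // element is smart-cast to String here
    }
}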