Retry is executed only once in Spring Cloud Stream Reactive - spring-webflux

When I use retry in Spring Cloud Stream Reactive, a situation arises that I don't understand, so I'm asking a question.
String-type data is sent once per second and processed in a Spring Cloud Stream Function, where I intentionally throw a RuntimeException under certain conditions.
@Bean
fun test(): Function<Flux<String>, Flux<String>?> = Function { input ->
    input.map { sellerId ->
        if (sellerId == "I-ZICGO")
            throw RuntimeException("intentional")
        else
            log.info("do normal: {}", sellerId)
        sellerId
    }.retryWhen(Retry.from { companion ->
        companion.map { rs ->
            if (rs.totalRetries() < 3) { // retrying 3 times
                log.info("retry!!!: {}", rs.totalRetries())
                rs.totalRetries()
            } else
                throw Exceptions.propagate(rs.failure())
        }
    })
}
However, the result of running the above logic is:
2021-02-25 16:14:29.319 INFO 36211 --- [container-0-C-1] o.s.c.s.m.DirectWithAttributesChannel : Channel 'consumer.processingSellerItem-in-0' has 0 subscriber(s).
2021-02-25 16:14:29.322 INFO 36211 --- [container-0-C-1] k.c.m.c.service.impl.ItemServiceImpl : retry!!!: 0
2021-02-25 16:14:29.322 INFO 36211 --- [container-0-C-1] o.s.c.s.m.DirectWithAttributesChannel : Channel 'consumer.processingSellerItem-in-0' has 1 subscriber(s).
Retry is processed only once.
Should I change from reactive to imperative to fix this?

In short, yes. The retry settings are meaningless for reactive functions. You can see a more detailed explanation on a similar SO question here.
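For reference, here is a minimal imperative sketch of the same function (the name matches the processingSellerItem-in-0 binding visible in the log above, but treat the details as illustrative). Since the binder invokes an imperative function once per message, a thrown exception is handled by the binder's retry template, configurable via spring.cloud.stream.bindings.processingSellerItem-in-0.consumer.maxAttempts (default 3):
// Imperative variant: called once per message, so binder-level retry applies
// when it throws (unlike the reactive case, where the function is invoked
// only once for the whole stream).
@Bean
fun processingSellerItem(): Function<String, String> = Function { sellerId ->
    if (sellerId == "I-ZICGO")
        throw RuntimeException("intentional")
    log.info("do normal: {}", sellerId)
    sellerId
}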

Related

Redis Reactive Streams Subscriber Thread Hangs

I am trying to use the Spring Boot Redis Reactive Stream to subscribe to a stream as a listener. When data is inserted into the stream, the listener passes it to the client over a gRPC stream. I keep a pointer, which is just a Redis value, to track the last record that was delivered to the client. The thread gets blocked randomly when I set the pointer in Redis, and I get a timeout.
Mainly, I use this pointer so that if the client reconnects after some time, I can deliver the data from the last record sent up to the current one. The thread gets blocked at template.opsForValue().set(pointerKey, msg.getId().toString()).block(Duration.ofSeconds(5)).
Please let me know if anything is wrong in the code below. If I post 10 records to the stream, I get the error after receiving 5 records.
Code
public void subscribe() {
    String channelId = this.streamRequest.getTopic();
    String identifier = this.streamRequest.getIdentifier();
    boolean isNew = this.streamRequest.getNew();
    String pointerKey = channelId + "_" + identifier + "_pointer";
    StreamOffset<String> stringStreamOffset = StreamOffset.fromStart(channelId);
    if (isNew) {
        // The client wants to read data from the start,
        // so remove the pointer
        template.opsForValue().delete(pointerKey).block();
    } else {
        String id = template.opsForValue().get(pointerKey).block();
        stringStreamOffset = id != null ? StreamOffset.create(channelId, ReadOffset.from(id)) : StreamOffset.fromStart(channelId);
    }
    logger.info("[SC] subscribed {}", this.streamRequest);
    Flux<ObjectRecord<String, String>> receiver = this.streamReceive.receive(stringStreamOffset);
    disposable = receiver.subscribe(msg -> {
        logger.info("Processing message {}", msg.getValue());
        String value = msg.getValue();
        StreamResponse streamResponse = StreamResponse.newBuilder().setData(value).build();
        try {
            logger.info("[SC] posting data to the grpc client topic {}", this.streamRequest);
            this.responseObserver.onNext(streamResponse);
            logger.info("[SC] Successfully posted data to the grpc client {}", this.streamRequest);
            logger.info("[SC] Updating pointer {}", pointerKey);
            template.opsForValue().set(pointerKey, msg.getId().toString())
                    .block(Duration.ofSeconds(5));
            logger.info("[SC] pointer update completed {}", pointerKey);
        } catch (Exception ex) {
            logger.error("Error:{}", ex.getMessage());
            this.responseObserver.onError(ex.getCause());
            close();
        }
    });
}
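A note on the code above, judging from the trace below: the block(Duration.ofSeconds(5)) call executes inside the subscriber callback on the lettuce event-loop thread (lettuce-nioEventLoop-4-1), which is also the thread that processes Redis replies, so the blocking wait can starve itself and time out. A minimal non-blocking sketch (in Kotlin, reusing the names from the code above) would chain the pointer update into the pipeline instead:
// Sketch: keep the pointer update inside the reactive pipeline so the event
// loop is never parked. concatMap preserves ordering: each pointer write
// completes before the next stream record is processed.
disposable = receiver
    .concatMap { msg ->
        val streamResponse = StreamResponse.newBuilder().setData(msg.value).build()
        responseObserver.onNext(streamResponse)
        // Reactive SET instead of block(): completes asynchronously.
        template.opsForValue().set(pointerKey, msg.id.toString())
    }
    .subscribe(
        { logger.info("[SC] pointer update completed {}", pointerKey) },
        { err ->
            logger.error("Error:{}", err.message)
            responseObserver.onError(err)
            close()
        }
    )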
Error:
Name: lettuce-nioEventLoop-4-1
State: TIMED_WAITING on java.util.concurrent.CountDownLatch$Sync@3c9ebf7a
Total blocked: 2 Total waited: 60
Stack trace:
java.base@17.0.2/jdk.internal.misc.Unsafe.park(Native Method)
java.base@17.0.2/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:252)
java.base@17.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:717)
java.base@17.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1074)
java.base@17.0.2/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:276)
app//reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:121)
app//reactor.core.publisher.Mono.block(Mono.java:1731)
app//ai.jiffy.message.publisher.ws.StreamConnection.setPointer(StreamConnection.java:68)
app//ai.jiffy.message.publisher.ws.StreamConnection.lambda$new$0(StreamConnection.java:54)
app//ai.jiffy.message.publisher.ws.StreamConnection$$Lambda$1318/0x0000000801553610.accept(Unknown Source)
app//reactor.core.publisher.LambdaSubscriber.onNext(LambdaSubscriber.java:160)
app//reactor.core.publisher.FluxCreate$BufferAsyncSink.drain(FluxCreate.java:793)
app//reactor.core.publisher.FluxCreate$BufferAsyncSink.next(FluxCreate.java:718)
app//reactor.core.publisher.FluxCreate$SerializedFluxSink.next(FluxCreate.java:154)
app//org.springframework.data.redis.stream.DefaultStreamReceiver$StreamSubscription.onStreamMessage(DefaultStreamReceiver.java:398)
app//org.springframework.data.redis.stream.DefaultStreamReceiver$StreamSubscription.access$300(DefaultStreamReceiver.java:210)
app//org.springframework.data.redis.stream.DefaultStreamReceiver$StreamSubscription$1.onNext(DefaultStreamReceiver.java:360)
app//org.springframework.data.redis.stream.DefaultStreamReceiver$StreamSubscription$1.onNext(DefaultStreamReceiver.java:351)
app//reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
app//reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
app//reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
app//reactor.core.publisher.FluxUsingWhen$UsingWhenSubscriber.onNext(FluxUsingWhen.java:345)
app//reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:250)
app//reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
app//reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:250)
app//reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
app//io.lettuce.core.RedisPublisher$ImmediateSubscriber.onNext(RedisPublisher.java:886)
app//io.lettuce.core.RedisPublisher$RedisSubscription.onNext(RedisPublisher.java:291)
app//io.lettuce.core.output.StreamingOutput$Subscriber.onNext(StreamingOutput.java:64)
app//io.lettuce.core.output.StreamReadOutput.complete(StreamReadOutput.java:110)
app//io.lettuce.core.protocol.RedisStateMachine.doDecode(RedisStateMachine.java:343)
app//io.lettuce.core.protocol.RedisStateMachine.decode(RedisStateMachine.java:295)
app//io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:841)
app//io.lettuce.core.protocol.CommandHandler.decode0(CommandHandler.java:792)
app//io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:766)
app//io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:658)
app//io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:598)
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
app//io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
app//io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
app//io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
app//io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
app//io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
app//io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
app//io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
app//io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
app//io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
app//io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
app//io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.base@17.0.2/java.lang.Thread.run(Thread.java:833)
Thanks in advance.

How to send a message with priority to RabbitMQ with StreamBridge

I'm using RabbitMQ. I've defined a queue with priority, and I can send messages to this queue with some priority value using the RMQ GUI, and consumers also get the messages in sorted order, but when I try to send a message from my Java code using StreamBridge, I don't know how to specify the priority on the message.
Here's what I have tried:
I added x-max-priority: 10 to the queue when creating it.
Consumer example:
@Bean
public Consumer<Message<String>> testListener() {
    return (m) -> {
        System.out.println("inside consumer with message : " + m);
        System.out.println("headers : " + m.getHeaders());
        System.out.println("payload : " + m.getPayload());
    };
}
Producer example:
@GET
@Path("test/")
public void test(@Context HttpServletRequest request) {
    System.out.println("inside test");
    try {
        String payload = "hello world";
        logger.info("going to send a message : {}", payload);
        int priority = 5;
        Message<String> message = MessageBuilder.withPayload(payload)
                .setHeader("priority", priority)
                .build();
        boolean res = STREAM_BRIDGE.send("testWriter-out-0", message);
        System.out.println(message);
        System.out.println(res);
    } catch (Exception e) {
        logger.error(e);
    }
}
The output of the Producer:
-> inside test
-> GenericMessage [payload=hello world, headers={priority=5, id=some_id, timestamp=epoch}]
-> true
The output of the Consumer:
-> inside consumer with message : GenericMessage [payload=hello world, headers={amqp_receivedDeliveryMode=PERSISTENT, amqp_receivedExchange=test_exchange, amqp_deliveryTag=1, deliveryAttempt=1, amqp_consumerQueue=test_exchange.ats, amqp_redelivered=false, amqp_receivedRoutingKey=test_exchange, amqp_timestamp=date_time, amqp_messageId=some_id, id=some_id, amqp_consumerTag=some_tag, sourceData=(Body:'hello world' MessageProperties [headers={}, timestamp=date_time, messageId=some_id, contentType=application/json, contentLength=0, receivedDeliveryMode=PERSISTENT, priority=0, redelivered=false, receivedExchange=test_exchange, receivedRoutingKey=test_exchange, deliveryTag=1, consumerTag=some_tag, consumerQueue=test_exchange.ats]), contentType=application/json, timestamp=epoch}]
-> headers : {amqp_receivedDeliveryMode=PERSISTENT, amqp_receivedExchange=test_exchange, amqp_deliveryTag=1, deliveryAttempt=1, amqp_consumerQueue=test_exchange.ats, amqp_redelivered=false, amqp_receivedRoutingKey=test_exchange, amqp_timestamp=date_time, amqp_messageId=some_id, id=some_id, amqp_consumerTag=tag, sourceData=(Body:'hello world' MessageProperties [headers={}, timestamp=date_time, messageId=some_id, contentType=application/json, contentLength=0, receivedDeliveryMode=PERSISTENT, priority=0, redelivered=false, receivedExchange=test_exchange, receivedRoutingKey=test_exchange, deliveryTag=1, consumerTag=tag, consumerQueue=test_exchange.ats]), contentType=application/json, timestamp=epoch}
-> payload : hello world
So the message goes to RMQ and the consumer also gets the message, but when I perform a Get Message operation on the queue in the RMQ GUI, I get this result:
Message 1
The server reported 0 messages remaining.
Exchange test_exchange
Routing Key test_exchange
Redelivered ○
Properties
timestamp: timestamp
message_id: some_id
priority: 0
delivery_mode: 2
headers:
content_type: application/json
Payload hello world
11 bytes
Encoding: string
As we can see in the result above, the priority is set to 0 by RMQ (hence the consumer gets the messages in FIFO order, not priority order), and only one header is present, content_type: application/json. So the priority appears to be part of the properties rather than the headers. How, then, do I set message properties using StreamBridge?
To conclude, I am trying to figure out how to set the priority of a message dynamically while sending it using StreamBridge. Any help would be appreciated, thanks in advance!
Please consider using the latest Spring Cloud Stream: https://spring.io/projects/spring-cloud-stream#learn.
Apparently your spring-cloud-starter-stream-rabbit 3.0.3.RELEASE is old enough to suffer from this issue: https://github.com/spring-cloud/spring-cloud-stream/issues/1931.
I have just tested with the latest one, and I got the proper priority property on the message posted into the RabbitMQ queue by the mentioned StreamBridge.
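As a side note, the queue's priority ceiling does not have to be set by hand in the GUI: the RabbitMQ binder exposes a consumer binding property for it, along the lines of spring.cloud.stream.rabbit.bindings.testListener-in-0.consumer.maxPriority=10 (binding name assumed from the consumer shown above), which makes the binder declare the queue with x-max-priority: 10.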

Retrofit OkHttp - "unexpected end of stream"

I am getting "Unexpected end of stream" while using Retrofit (2.9.0) with OkHttp3 (4.9.1).
Retrofit configuration:
interface ApiServiceInterface {

    companion object Factory {
        fun create(): ApiServiceInterface {
            val interceptor = HttpLoggingInterceptor()
            interceptor.level = HttpLoggingInterceptor.Level.BODY
            val client = OkHttpClient.Builder()
                .connectTimeout(30, TimeUnit.SECONDS)
                .writeTimeout(30, TimeUnit.SECONDS)
                .readTimeout(30, TimeUnit.SECONDS)
                .addInterceptor(Interceptor { chain ->
                    chain.request().newBuilder()
                        .addHeader("Connection", "close")
                        .addHeader("Accept-Encoding", "identity")
                        .build()
                        .let(chain::proceed)
                })
                .retryOnConnectionFailure(true)
                .connectionPool(ConnectionPool(0, 5, TimeUnit.MINUTES))
                .protocols(listOf(Protocol.HTTP_1_1))
                .build()
            val gson = GsonBuilder().setLenient().create()
            val retrofit = Retrofit.Builder()
                .addCallAdapterFactory(CoroutineCallAdapterFactory())
                .addConverterFactory(GsonConverterFactory.create(gson))
                .baseUrl("http://***.***.***.***:****")
                .client(client)
                .build()
            return retrofit.create(ApiServiceInterface::class.java)
        }
    }

    @Headers("Content-type: application/json", "Connection: close", "Accept-Encoding: identity")
    @POST("/")
    fun requestAsync(@Body data: JsonObject): Deferred<Response>
}
So far I have found out the following:
1. This issue only occurs for me while using Android Studio emulators running on Windows (7, 10, 11); this was reproduced on 2 different laptops on different networks.
2. When running Android Studio emulators under OS X, the issue does not reproduce at all.
3. ARC/Postman clients never have any issues completing the same requests to my backend.
4. On Windows-hosted Android Studio emulators this issue reproduces in about 10-50% of requests; the other requests work without problems.
5. Identical requests can result in this error or complete successfully.
6. Responses that take about 11 sec to complete can result in success, while responses that take about 100 msec to complete can result in this error.
7. Commenting out .client(client) from the Retrofit configuration eliminates the issue, but then I lose the ability to use interceptors and other OkHttp functionality.
8. Adding the headers (Connection: close, Accept-Encoding: identity) does not solve the issue.
9. Turning retryOnConnectionFailure on or off has no impact on the issue either.
10. Changing the HttpLoggingInterceptor level or removing it completely does not solve the issue.
Server-side configuration:
const http = require('http');
const server = http.createServer((req, res) => {
    const callback = function(code, request, data) {
        let result = responser(code, request, data);
        res.writeHead(200, {
            'Content-Type' : 'x-application/json',
            'Connection': 'close',
            'Content-Length': Buffer.byteLength(result)
        });
        res.end(result);
    };
    ...
});
server.listen(process.env.PORT, process.env.HOSTNAME, () => {
    console.log(`Server is running`);
});
So, based on 1, 2, and 3, this is unlikely to be a server-side issue.
Based on 4, 5, and 6, it is not related to malformed requests or execution time.
Judging from 7, the root of this issue lies in OkHttp rather than Retrofit itself.
I have read almost half of Stack Overflow in search of a resolution, for example:
unexpected end of stream retrofit
Retrofit OkHttp unexpected end of stream on Connection error
and also discussion at OkHttp on Github:
https://github.com/square/okhttp/issues/3682
https://github.com/square/okhttp/issues/3715
But nothing helped so far.
Any idea what might be causing the problem?
Update
I've got more info on the situation.
First, I changed the headers on the backend so as not to pass Content-Length, passing Transfer-Encoding: identity instead. I don't know why, but Postman gives an error if both of these headers are present, saying it is not right.
res.writeHead(200, {
    'Content-Type' : 'x-application/json',
    'Connection': 'close',
    'Transfer-Encoding': 'identity'
});
After that I started to receive another error on Windows-hosted Android Studio emulators (with the same fail/success ratio as "Unexpected end of stream"):
2021-12-09 14:58:19.696 401-401/? D/P2P-> FRG DEBUG:: java.io.EOFException: End of input at line 1 column 1807 path $.meta
at com.google.gson.stream.JsonReader.nextNonWhitespace(JsonReader.java:1397)
at com.google.gson.stream.JsonReader.doPeek(JsonReader.java:483)
at com.google.gson.stream.JsonReader.hasNext(JsonReader.java:415)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:216)
at retrofit2.converter.gson.GsonResponseBodyConverter.convert(GsonResponseBodyConverter.java:40)
at retrofit2.converter.gson.GsonResponseBodyConverter.convert(GsonResponseBodyConverter.java:27)
at retrofit2.OkHttpCall.parseResponse(OkHttpCall.java:243)
at retrofit2.OkHttpCall$1.onResponse(OkHttpCall.java:153)
at okhttp3.internal.connection.RealCall$AsyncCall.run(RealCall.kt:519)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:764)
After spending a lot of time debugging this issue, I found that this exception is generated by JsonReader.java in the method nextNonWhitespace, where it tries to get the colons, double quotes, and curly or square braces needed to compose a JSON object from the buffer decoded as a char array.
This buffer is filled in the fillBuffer method of the same module and has a length limit of 1024 elements. In my case the backend response is longer than this value (1807 chars), so JsonReader.java parses my response as a JSON object in 2 iterations.
On each iteration it fills the buffer here:
int total;
while ((total = in.read(buffer, limit, buffer.length - limit)) != -1) {
    limit += total;
    // if this is the first read, consume an optional byte order mark (BOM) if it exists
    if (lineNumber == 0 && lineStart == 0 && limit > 0 && buffer[0] == '\ufeff') {
        pos++;
        lineStart++;
        minimum++;
    }
    if (limit >= minimum) {
        return true;
    }
}
The read method is called on the ResponseBody.kt class from okhttp3:
@Throws(IOException::class)
override fun read(cbuf: CharArray, off: Int, len: Int): Int {
    if (closed) throw IOException("Stream closed")
    val finalDelegate = delegate ?: InputStreamReader(
        source.inputStream(),
        source.readBomAsCharset(charset)).also {
        delegate = it
    }
    return finalDelegate.read(cbuf, off, len)
}
The main problem is this:
On the first iteration all goes well: ResponseBody.kt "reads" the first 1024 chars and gives them to JsonReader.java, which composes part of the response object from them.
On the second iteration, ResponseBody.kt "reads" the last part of the response and fills the start of the char buffer with it, so the buffer now contains the tail of the response as its first elements, followed by whatever elements were left over from the first iteration.
In most cases (about 80%) it loses the last char of the response, in about 10% it loses the last 2 chars, and in about 10% it reads all chars. Here are the shots (debugger screenshots, not reproduced here):
The buffer must contain 783 chars to complete the JSON, but as shown at line 1290 it receives only 782.
Looking at the buffer itself, the char at index 782 (783rd in order) should be the second curly brace that closes the JSON root, but instead there are leftovers from the first iteration. This results in the exception mentioned above.
Now look at a situation where the request finishes successfully:
With the same request, it occasionally returns the valid number of chars: 783.
And in the buffer itself, the second brace is present where it must be.
In this case the request is successful.
The same response ending from Postman:
Postman's success rate in parsing the response is 100%, and the same is true for OS X-hosted Android Studio emulators and the real devices I've used.
Update 2
It seems the full buffer is obtained in RealBufferedSource.kt:
internal inline fun RealBufferedSource.commonSelect(options: Options): Int {
    check(!closed) { "closed" }
    while (true) {
        val index = buffer.selectPrefix(options, selectTruncated = true)
        when (index) {
            -1 -> {
                return -1
            }
            -2 -> {
                // We need to grow the buffer. Do that, then try it all again.
                if (source.read(buffer, Segment.SIZE.toLong()) == -1L) return -1
            }
            else -> {
                // We matched a full byte string: consume it and return it.
                val selectedSize = options.byteStrings[index].size
                buffer.skip(selectedSize.toLong())
                return index
            }
        }
    }
}
and here the last char is already missing (screenshot not reproduced).
Update 3
Found this unsolved question describing exactly the same behavior:
Retrofit Json data truncated
Also a comment from the Android Studio emulator issue tracker:
https://issuetracker.google.com/issues/119027639#comment9
OK, it took some time, but I've found what was going wrong and how to work around it.
When Android Studio emulators running on Windows (checked on 7 & 10) receive a JSON-typed reply from the server via Retrofit, they can, with varying probability, lose the last 1 or 2 symbols of the body when it is decoded to a string. These symbols contain the closing curly brackets, so such a body cannot be parsed into an object by the Gson converter, which results in the exception being thrown.
The workaround I found is to add an interceptor to Retrofit that checks whether the last symbols of the decoded body match those of a valid JSON response, and appends them if they are missing.
interface ApiServiceInterface {

    companion object Factory {
        fun create(): ApiServiceInterface {
            val interceptor = HttpLoggingInterceptor()
            interceptor.level = HttpLoggingInterceptor.Level.BODY
            val stringInterceptor = Interceptor { chain: Interceptor.Chain ->
                val request = chain.request()
                val response = chain.proceed(request)
                val source = response.body()?.source()
                source?.request(Long.MAX_VALUE)
                val buffer = source?.buffer()
                var responseString = buffer?.clone()?.readString(Charset.forName("UTF-8"))
                if (responseString != null && responseString.length > 2) {
                    val lastTwo = responseString.takeLast(2)
                    if (lastTwo != "}}") {
                        val lastOne = responseString.takeLast(1)
                        responseString = if (lastOne != "}") {
                            "$responseString}}"
                        } else {
                            "$responseString}"
                        }
                    }
                }
                val contentType = response.body()?.contentType()
                val body = ResponseBody.create(contentType, responseString ?: "")
                return@Interceptor response.newBuilder().body(body).build()
            }
            val client = OkHttpClient.Builder()
                .connectTimeout(30, TimeUnit.SECONDS)
                .writeTimeout(30, TimeUnit.SECONDS)
                .readTimeout(30, TimeUnit.SECONDS)
                .addInterceptor(interceptor)
                .addInterceptor(stringInterceptor)
                .retryOnConnectionFailure(true)
                .connectionPool(ConnectionPool(0, 5, TimeUnit.MINUTES))
                .protocols(listOf(Protocol.HTTP_1_1))
                .build()
            val gson = GsonBuilder().create()
            val retrofit = Retrofit.Builder()
                .addCallAdapterFactory(CoroutineCallAdapterFactory())
                .addConverterFactory(GsonConverterFactory.create(gson))
                .addConverterFactory(ScalarsConverterFactory.create())
                .baseUrl("http://3.124.6.203:5000")
                .client(client)
                .build()
            return retrofit.create(ApiServiceInterface::class.java)
        }
    }

    @Headers("Content-type: application/json", "Connection: close", "Accept-Encoding: identity")
    @POST("/")
    fun requestAsync(@Body data: JsonObject): Deferred<Response>
}
After these changes the issue didn't occur.

Corda: Party rejected session request as Requestor has not been registered

I have a Corda application that uses M14, in which two parties run a TwoPartyProtocol to exchange data and reach a data-validity consensus. I've followed the Corda flow cookbook to build the flow.
Also, after reading the docs for several different Corda milestones, I've understood that M14 no longer needs flow sessions, as mentioned in the release notes, which also eliminates the need to register services.
My TwoPartyFlow with inner FlowLogics:
class TwoPartyFlow {

    @InitiatingFlow
    @StartableByRPC
    open class Requestor(val price: Long,
                         val otherParty: Party) : FlowLogic<SignedTransaction>() {
        @Suspendable
        override fun call(): SignedTransaction {
            val notary = serviceHub.networkMapCache.notaryNodes.single().notaryIdentity
            send(otherParty, price)
            /*Some code to generate SignedTransaction*/
        }
    }

    @InitiatedBy(Requestor::class)
    open class Responder(val requestingParty: Party) : FlowLogic<SignedTransaction>() {
        @Suspendable
        override fun call(): SignedTransaction {
            val request = receive<Long>(requestingParty).unwrap { price -> price }
            println(request)
            /*Some code to generate SignedTransaction*/
        }
    }
}
But running the above using startTrackedFlow from the API causes the following error:
Party CN=Other,O=Other,L=NY,C=US rejected session request: com.testapp.flow.TwoPartyFlow$Requestor has not been registered
I had a hard time finding the reason in the Corda docs or logs, since two-party flow implementations have changed across several Corda milestones. Can someone help me understand the problem here?
My API Call:
@GET
@Path("start-flow")
fun requestOffering(@QueryParam(value = "price") price: String): Response {
    val price: Long = 10L
    /*Code to get otherParty details*/
    val otherPartyHostAndPort = HostAndPort.fromString("localhost:10031")
    val client = CordaRPCClient(otherPartyHostAndPort)
    val services: CordaRPCOps = client.start("user1", "test").proxy
    val otherParty: Party = services.nodeIdentity().legalIdentity
    val (status, message) = try {
        val flowHandle = services.startTrackedFlow(::Requestor, price, otherParty)
        val result = flowHandle.use { it.returnValue.getOrThrow() }
        // Return the response.
        Response.Status.CREATED to "Transaction id ${result.id} committed to ledger.\n"
    } catch (e: Exception) {
        Response.Status.BAD_REQUEST to e.message
    }
    return Response.status(status).entity(message).build()
}
My Gradle deployNodes task:
task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['build']) {
    directory "./build/nodes"
    networkMap "CN=Controller,O=R3,OU=corda,L=London,C=UK"
    node {
        name "CN=Controller,O=R3,OU=corda,L=London,C=UK"
        advertisedServices = ["corda.notary.validating"]
        p2pPort 10021
        rpcPort 10022
        cordapps = []
    }
    node {
        name "CN=Subject,O=Subject,L=NY,C=US"
        advertisedServices = []
        p2pPort 10027
        rpcPort 10028
        webPort 10029
        cordapps = []
        rpcUsers = [[user: "user1", "password": "test", "permissions": []]]
    }
    node {
        name "CN=Other,O=Other,L=NY,C=US"
        advertisedServices = []
        p2pPort 10030
        rpcPort 10031
        webPort 10032
        cordapps = []
        rpcUsers = [[user: "user1", "password": "test", "permissions": []]]
    }
}
There appear to be a couple of problems with the code you posted:
The annotation should be @StartableByRPC, not @StartableNByRPC
The price passed to startTrackedFlow should be a long, not an int
However, even after fixing these issues, I couldn't replicate your error. Can you apply these fixes, do a clean re-deploy of your nodes (gradlew clean deployNodes), and see whether the error changes?
You shouldn't be connecting to the other node via RPC. RPC is how a node's owner speaks to their node. In the real world, you wouldn't have the other node's RPC credentials, and couldn't log into the node in this way.
Instead, you should use your own node's RPC client to retrieve the counterparty's identity:
val otherParty = services.partyFromX500Name("CN=Other,O=Other,L=NY,C=US")!!
See an M14 example here: https://github.com/corda/cordapp-example/blob/release-M14/kotlin-source/src/main/kotlin/com/example/api/ExampleApi.kt.

RxJava timeout without emitting error?

Is there an option to have a variant of timeout that does not emit a Throwable?
I would like to have a complete event emitted instead.
You don't need to map errors with onErrorResumeNext. You can just provide a backup observable using:
timeout(long,TimeUnit,Observable)
It would be something like:
.timeout(500, TimeUnit.MILLISECONDS, Observable.empty())
You can resume from an error with another Observable, for example:
Observable<String> data = ...
data.timeout(1, TimeUnit.SECONDS)
    .onErrorResumeNext(Observable.empty())
    .subscribe(...);
A simpler solution that does not use Observable.timeout (thus it does not generate an error with the risk of catching unwanted exceptions) might be to simply take until a timer completes:
Observable<String> data = ...
data.takeUntil(Observable.timer(1, TimeUnit.SECONDS))
    .subscribe(...);
You can always use onErrorResumeNext, which will get the error, and you can emit whatever item you want:
/**
 * Here we can see how onErrorResumeNext works and emits an item in case an
 * error occurs in the pipeline and an exception is propagated.
 */
@Test
public void observableOnErrorResumeNext() {
    Subscription subscription = Observable.just(null)
            .map(Object::toString)
            .doOnError(failure -> System.out.println("Error:" + failure.getCause()))
            .retryWhen(errors -> errors.doOnNext(o -> count++)
                            .flatMap(t -> count > 3 ? Observable.error(t) : Observable.just(null)),
                    Schedulers.newThread())
            .onErrorResumeNext(t -> {
                System.out.println("Error after all retries:" + t.getCause());
                return Observable.just("I save the world for extinction!");
            })
            .subscribe(s -> System.out.println(s));
    new TestSubscriber((Observer) subscription).awaitTerminalEvent(500, TimeUnit.MILLISECONDS);
}