How to recover from akka.stream.io.Framing$FramingException - scala-2.11

On: akka-stream-experimental_2.11 1.0.
We are using Framing.delimiter in a Tcp server. When a message arrives whose length is greater than maximumFrameLength, a FramingException is thrown, and we can capture it in OnError of the ActorSubscriber.
Server Code:
def bind(address: String, port: Int, target: ActorRef, maxInFlight: Int, maxFrameLength: Int)
        (implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
  val sink = Sink.foreach {
    conn: Tcp.IncomingConnection =>
      val targetSubscriber = ActorSubscriber[Message](system.actorOf(Props(new TargetSubscriber(target, maxInFlight))))
      val targetSink = Flow[ByteString]
        .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = maxFrameLength, allowTruncation = true))
        .map(raw ⇒ Message(raw))
        .to(Sink(targetSubscriber))
      conn.flow.to(targetSink).runWith(Source(Promise().future))
  }
  val connections = Tcp().bind(address, port)
  connections.to(sink).run()
}
Subscriber code:
class TargetSubscriber(target: ActorRef, maxInFlight: Int) extends ActorSubscriber with ActorLogging {
  private var inFlight = 0

  override protected def requestStrategy = new MaxInFlightRequestStrategy(maxInFlight) {
    override def inFlightInternally = inFlight
  }

  override def receive = {
    case OnNext(msg: Message) ⇒
      target ! msg
      inFlight += 1
    case OnError(t) ⇒
      inFlight -= 1
      log.error(t, "Subscriber encountered error")
    case TargetAck(_) ⇒
      inFlight -= 1
  }
}
Problem:
Messages that are under the max frame length do not flow after this exception for that incoming connection. Killing the client and re-running it works fine.
ActorSubscriber does not honor supervision
What is the correct way to skip the bad message and continue with the next good message?

Have you tried to put the supervision on the targetSink flow instead of on the whole materializer? I don't see it anywhere here, and I believe it should be set on that flow directly.
Still, this is more a guess than science ;)
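For reference, a minimal sketch of that suggestion, scoping the supervision to the framing flow rather than to the materializer. The names ActorAttributes and Supervision.resumingDecider are from later akka-stream releases and may be spelled differently in the experimental 1.0 API, and whether the framing stage actually honours the decider is exactly what would need to be verified:

import akka.stream.{ActorAttributes, Supervision}
import akka.stream.scaladsl.{Flow, Framing, Sink}
import akka.util.ByteString

// Sketch only: ask the flow to resume (drop the offending element) instead of
// failing the whole connection when a stage inside it throws, e.g. a FramingException.
val targetSink = Flow[ByteString]
  .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = maxFrameLength, allowTruncation = true))
  .withAttributes(ActorAttributes.supervisionStrategy(Supervision.resumingDecider))
  .map(raw ⇒ Message(raw))       // Message, maxFrameLength and targetSubscriber
  .to(Sink(targetSubscriber))    // are the ones from the question's code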

I had the same exception reading from a file, and for me it was solved by putting a newline (return) after the last line.
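For completeness, a small sketch of the stream-level equivalent of that fix: with allowTruncation = true, Framing.delimiter emits the final, unterminated frame when the input ends without a trailing delimiter, instead of failing with a FramingException (API names as in later akka-stream releases):

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Framing, Sink, Source}
import akka.util.ByteString

implicit val system = ActorSystem("framing-demo")
implicit val materializer = ActorMaterializer()

// "line2" has no trailing "\n", like a file without a final newline.
Source.single(ByteString("line1\nline2"))
  .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 1024, allowTruncation = true))
  .map(_.utf8String)
  .runWith(Sink.foreach(println)) // prints "line1" then "line2" instead of failing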

Related

Retrofit OkHttp - "unexpected end of stream"

I am getting "Unexpected end of stream" while using Retrofit (2.9.0) with OkHttp3 (4.9.1)
Retrofit configuration:
interface ApiServiceInterface {

    companion object Factory {
        fun create(): ApiServiceInterface {
            val interceptor = HttpLoggingInterceptor()
            interceptor.level = HttpLoggingInterceptor.Level.BODY

            val client = OkHttpClient.Builder()
                .connectTimeout(30, TimeUnit.SECONDS)
                .writeTimeout(30, TimeUnit.SECONDS)
                .readTimeout(30, TimeUnit.SECONDS)
                .addInterceptor(Interceptor { chain ->
                    chain.request().newBuilder()
                        .addHeader("Connection", "close")
                        .addHeader("Accept-Encoding", "identity")
                        .build()
                        .let(chain::proceed)
                })
                .retryOnConnectionFailure(true)
                .connectionPool(ConnectionPool(0, 5, TimeUnit.MINUTES))
                .protocols(listOf(Protocol.HTTP_1_1))
                .build()

            val gson = GsonBuilder().setLenient().create()

            val retrofit = Retrofit.Builder()
                .addCallAdapterFactory(CoroutineCallAdapterFactory())
                .addConverterFactory(GsonConverterFactory.create(gson))
                .baseUrl("http://***.***.***.***:****")
                .client(client)
                .build()

            return retrofit.create(ApiServiceInterface::class.java)
        }
    }

    @Headers("Content-type: application/json", "Connection: close", "Accept-Encoding: identity")
    @POST("/")
    fun requestAsync(@Body data: JsonObject): Deferred<Response>
}
So far I have found out the following:
1. This issue only occurs for me while using Android Studio emulators running on Windows (7, 10, 11) - this was reproduced on 2 different laptops on different networks.
2. If running Android Studio emulators under OS X, the issue does not reproduce at all.
3. ARC/Postman clients never have any issues completing the same requests to my backend.
4. When running on Windows-hosted Android Studio emulators, this issue reproduces in about 10-50% of requests; other requests work without problems.
5. Identical requests can result in this error or complete successfully.
6. Responses which take about 11 sec to complete can result in success, while responses which take about 100 msec to complete can result in this error.
7. Commenting out .client(client) from the Retrofit configuration eliminates this issue, but I lose the opportunity to use interceptors and other OkHttp functionality.
8. Adding headers (Connection: close, Accept-Encoding: identity) does not solve the issue.
9. Turning retryOnConnectionFailure on or off has no impact on the issue either.
10. Changing the HttpLoggingInterceptor level or removing it completely does not solve the issue.
Server-side configuration:
const http = require('http');

const server = http.createServer((req, res) => {
  const callback = function(code, request, data) {
    let result = responser(code, request, data);
    res.writeHead(200, {
      'Content-Type': 'x-application/json',
      'Connection': 'close',
      'Content-Length': Buffer.byteLength(result)
    });
    res.end(result);
  };
  ...
});

server.listen(process.env.PORT, process.env.HOSTNAME, () => {
  console.log(`Server is running`);
});
So, based on 1, 2, 3 - this is unlikely to be a server-side issue.
Based on 4, 5, 6 - it is not a malformed-request or execution-time related issue.
Guessing from 7 - the roots of this issue lie in OkHttp rather than in Retrofit itself.
I have read almost half of Stack Overflow in search of a resolution, like:
unexpected end of stream retrofit
Retrofit OkHttp unexpected end of stream on Connection error
and also these discussions on the OkHttp GitHub:
https://github.com/square/okhttp/issues/3682
https://github.com/square/okhttp/issues/3715
But nothing helped so far.
Any idea what might be causing the problem?
Update
I've got more info on the situation.
First, I changed the headers on the backend so as not to pass Content-Length, and to pass Transfer-Encoding: identity instead. I don't know why, but Postman gives an error if both of these headers are present, saying it is not right.
res.writeHead(200, {
  'Content-Type': 'x-application/json',
  'Connection': 'close',
  'Transfer-Encoding': 'identity'
});
After that I started to receive another error on Windows-hosted Android Studio emulators (with the same fail/success ratio as "Unexpected end of stream"):
2021-12-09 14:58:19.696 401-401/? D/P2P-> FRG DEBUG:: java.io.EOFException: End of input at line 1 column 1807 path $.meta
at com.google.gson.stream.JsonReader.nextNonWhitespace(JsonReader.java:1397)
at com.google.gson.stream.JsonReader.doPeek(JsonReader.java:483)
at com.google.gson.stream.JsonReader.hasNext(JsonReader.java:415)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:216)
at retrofit2.converter.gson.GsonResponseBodyConverter.convert(GsonResponseBodyConverter.java:40)
at retrofit2.converter.gson.GsonResponseBodyConverter.convert(GsonResponseBodyConverter.java:27)
at retrofit2.OkHttpCall.parseResponse(OkHttpCall.java:243)
at retrofit2.OkHttpCall$1.onResponse(OkHttpCall.java:153)
at okhttp3.internal.connection.RealCall$AsyncCall.run(RealCall.kt:519)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:764)
After spending a lot of time debugging this issue, I found that this exception is generated by JsonReader.java in the method nextNonWhitespace, where it tries to get the colons, double quotes and curly or square braces needed to compose a json object from the buffer, which is decoded as a char array.
This buffer itself is filled in the fillBuffer method of the same module and has a length limit of 1024 elements. In my case the backend response is longer than this value (1807 chars), so while JsonReader.java parses my response as a json object it does so in 2 iterations.
On each iteration it fills the buffer here:
int total;
while ((total = in.read(buffer, limit, buffer.length - limit)) != -1) {
  limit += total;

  // if this is the first read, consume an optional byte order mark (BOM) if it exists
  if (lineNumber == 0 && lineStart == 0 && limit > 0 && buffer[0] == '\ufeff') {
    pos++;
    lineStart++;
    minimum++;
  }

  if (limit >= minimum) {
    return true;
  }
}
The read method is called on the ResponseBody.kt class from okhttp3:
@Throws(IOException::class)
override fun read(cbuf: CharArray, off: Int, len: Int): Int {
  if (closed) throw IOException("Stream closed")

  val finalDelegate = delegate ?: InputStreamReader(
      source.inputStream(),
      source.readBomAsCharset(charset)).also {
    delegate = it
  }
  return finalDelegate.read(cbuf, off, len)
}
The main problem is:
On the first iteration all goes well: ResponseBody.kt "reads" the first 1024 chars and gives them to JsonReader.java, where a part of the response object is composed.
When the second iteration comes, ResponseBody.kt "reads" the last part of the response and fills the start of the char buffer with it, so the char buffer now contains the tail of the response as its first elements, followed by whatever elements were left over from the first iteration.
The main problem is that in most cases (about 80%) it loses the last char of the response, in about 10% it loses the last 2 chars, and in about 10% it reads all chars. Here are the screenshots:
It must contain 783 chars to complete the json, but as shown at line 1290 it receives only 782.
Looking at the buffer itself:
The char at index 782 (783rd in order) must be the second curly brace that closes the json root, but instead there are leftovers from where the first iteration started. This results in the exception mentioned above.
Now if we look at a situation where the request finished successfully:
With the same request it occasionally returns the valid number of chars: 783.
And the buffer itself is:
Now the second brace is present where it must be.
In this case the request is successful.
The same response ending from Postman:
Postman's success rate in parsing the response is 100%; the same is true for OS X-hosted Android Studio emulators and the real devices I've used.
Update 2
It seems the full buffer is obtained in RealBufferedSource.kt:
internal inline fun RealBufferedSource.commonSelect(options: Options): Int {
  check(!closed) { "closed" }

  while (true) {
    val index = buffer.selectPrefix(options, selectTruncated = true)
    when (index) {
      -1 -> {
        return -1
      }
      -2 -> {
        // We need to grow the buffer. Do that, then try it all again.
        if (source.read(buffer, Segment.SIZE.toLong()) == -1L) return -1
      }
      else -> {
        // We matched a full byte string: consume it and return it.
        val selectedSize = options.byteStrings[index].size
        buffer.skip(selectedSize.toLong())
        return index
      }
    }
  }
}
and here it is already missing the last char:
Update 3
I found this unsolved question which describes exactly the same behavior:
Retrofit Json data truncated
Also a comment from the Android Studio emulator issue tracker:
https://issuetracker.google.com/issues/119027639#comment9
OK, it took some time, but I've found what was going wrong and how to work around it.
When Android Studio emulators running on Windows (checked for 7 & 10) receive a json-typed reply from the server via Retrofit, they can, with varying probability, lose the last 1 or 2 symbols of the body when it is decoded to a string. These symbols contain the closing curly brackets, so such a body cannot be parsed into an object by the gson converter, which results in the exception being thrown.
The idea of the workaround I found is to add an interceptor to Retrofit which checks whether the last symbols of the body, decoded to a string, match those of a valid json response, and appends them if they are missing.
interface ApiServiceInterface {

    companion object Factory {
        fun create(): ApiServiceInterface {
            val interceptor = HttpLoggingInterceptor()
            interceptor.level = HttpLoggingInterceptor.Level.BODY

            val stringInterceptor = Interceptor { chain: Interceptor.Chain ->
                val request = chain.request()
                val response = chain.proceed(request)

                val source = response.body()?.source()
                source?.request(Long.MAX_VALUE)
                val buffer = source?.buffer()
                var responseString = buffer?.clone()?.readString(Charset.forName("UTF-8"))

                if (responseString != null && responseString.length > 2) {
                    val lastTwo = responseString.takeLast(2)
                    if (lastTwo != "}}") {
                        val lastOne = responseString.takeLast(1)
                        responseString = if (lastOne != "}") {
                            "$responseString}}"
                        } else {
                            "$responseString}"
                        }
                    }
                }

                val contentType = response.body()?.contentType()
                val body = ResponseBody.create(contentType, responseString ?: "")
                return@Interceptor response.newBuilder().body(body).build()
            }

            val client = OkHttpClient.Builder()
                .connectTimeout(30, TimeUnit.SECONDS)
                .writeTimeout(30, TimeUnit.SECONDS)
                .readTimeout(30, TimeUnit.SECONDS)
                .addInterceptor(interceptor)
                .addInterceptor(stringInterceptor)
                .retryOnConnectionFailure(true)
                .connectionPool(ConnectionPool(0, 5, TimeUnit.MINUTES))
                .protocols(listOf(Protocol.HTTP_1_1))
                .build()

            val gson = GsonBuilder().create()

            val retrofit = Retrofit.Builder()
                .addCallAdapterFactory(CoroutineCallAdapterFactory())
                .addConverterFactory(GsonConverterFactory.create(gson))
                .addConverterFactory(ScalarsConverterFactory.create())
                .baseUrl("http://3.124.6.203:5000")
                .client(client)
                .build()

            return retrofit.create(ApiServiceInterface::class.java)
        }
    }

    @Headers("Content-type: application/json", "Connection: close", "Accept-Encoding: identity")
    @POST("/")
    fun requestAsync(@Body data: JsonObject): Deferred<Response>
}
After these changes the issue didn't occur.

[RabbitMQ][AMQP] Failing to get and read single message with amqp_basic_get and amqp_read_message

I want to set up a consumer with amqp to read from a specific queue. Some googling pointed out that this can be done with amqp_basic_get, and looking into the documentation, the actual message is retrieved with amqp_read_message. I also found this example, which I tried to follow for implementing the basic_get. Nevertheless, I am failing to get and read a message from a specific queue.
My scenario is like this: I have two programs that communicate by publishing and consuming from the rabbitmq server. In each, a connection is declared, with two channels, one meant for consuming, and one for publishing. The flow of information is like this: program A gets the current time and publishes to rabbitmq. Upon receiving this message, program B gets its own time, packages its time and the received time in a message that it publishes to rabbitmq. Program A should consume this message. However, I cannot succeed in reading from the namedQueue.
Program A (in C++, using amqp.c) is implemented as follows:
// ... after creating the connection

// Create channels
amqp_channel_open_ok_t *res = amqp_channel_open(conn, channelIDPub);
assert(res != NULL);
amqp_channel_open_ok_t *res2 = amqp_channel_open(conn, channelIDSub);
assert(res2 != NULL);

// Declare exchange
exchange = "exchangeName";
exchangetype = "direct";
amqp_exchange_declare(conn, channelIDPub, amqp_cstring_bytes(exchange.c_str()),
                      amqp_cstring_bytes(exchangetype.c_str()), 0, 0, 0, 0,
                      amqp_empty_table);
...
throw_on_amqp_error(amqp_get_rpc_reply(conn), printText.c_str());

// Bind the exchange to the queue
const char* qname = "namedQueue";
amqp_bytes_t queue = amqp_bytes_malloc_dup(amqp_cstring_bytes(qname));
amqp_queue_declare_ok_t *r = amqp_queue_declare(
    conn, channelIDSub, queue, 0, 0, 0, 0, amqp_empty_table);
throw_on_amqp_error(amqp_get_rpc_reply(conn), "Declaring queue");
if (queue.bytes == NULL) {
  fprintf(stderr, "Out of memory while copying queue name");
  return;
}

amqp_queue_bind(conn, channelIDSub, queue, amqp_cstring_bytes(exchange.c_str()),
                amqp_cstring_bytes(queueBindingKey.c_str()), amqp_empty_table);
throw_on_amqp_error(amqp_get_rpc_reply(conn), "Binding queue");

amqp_basic_consume(conn, channelIDSub, queue, amqp_empty_bytes, 0, 0, 1,
                   amqp_empty_table);
throw_on_amqp_error(amqp_get_rpc_reply(conn), "Consuming");

// ...
// In order to get a message from rabbitmq
amqp_rpc_reply_t res, res2;
amqp_message_t message;
amqp_boolean_t no_ack = false;

amqp_maybe_release_buffers(conn);
printf("were here, with queue name %s, on channel %d\n", queueName, channelIDSub);

amqp_time_t deadline;
struct timeval timeout = { 1, 0 }; // same timeout used in consume(json)
int time_rc = amqp_time_from_now(&deadline, &timeout);
assert(time_rc == AMQP_STATUS_OK);

do {
  res = amqp_basic_get(conn, channelIDSub, amqp_cstring_bytes("namedQueue"), no_ack);
} while (res.reply_type == AMQP_RESPONSE_NORMAL &&
         res.reply.id == AMQP_BASIC_GET_EMPTY_METHOD &&
         amqp_time_has_past(deadline) == AMQP_STATUS_OK);

if (AMQP_RESPONSE_NORMAL != res.reply_type || AMQP_BASIC_GET_OK_METHOD != res.reply.id)
{
  printf("amqp_basic_get error codes amqp_response_normal %d, amqp_basic_get_ok_method %d\n", res.reply_type, res.reply.id);
  return false;
}

res2 = amqp_read_message(conn, channelID, &message, 0);
printf("error %s\n", amqp_error_string2(res2.library_error));
printf("5:reply type %d\n", res2.reply_type);
if (AMQP_RESPONSE_NORMAL != res2.reply_type) {
  printf("6:reply type %d\n", res2.reply_type);
  return false;
}

payload = std::string(reinterpret_cast<char const *>(message.body.bytes), message.body.len);
printf("then were here\n %s", payload.c_str());
amqp_destroy_message(&message);
Program B (in Python) is as follows:
#!/usr/bin/env python3
import pika
import json
from datetime import datetime, timezone
import time
import threading

cosimTime = 0.0
newData = False
lock = threading.Lock()
thread_stop = False

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
connectionPublish = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channelConsume = connection.channel()
channelPublish = connectionPublish.channel()

print("Declaring exchange")
channelConsume.exchange_declare(exchange='exchangeName', exchange_type='direct')
channelPublish.exchange_declare(exchange='exchangeName', exchange_type='direct')

print("Creating queue")
result = channelConsume.queue_declare(queue='', exclusive=True)
queue_name = result.method.queue
result2 = channelPublish.queue_declare(queue='namedQueue', exclusive=False, auto_delete=False)

channelConsume.queue_bind(exchange='exchangeName', queue=queue_name,
                          routing_key='fromB')
channelPublish.queue_bind(exchange='exchangeName', queue="namedQueue",
                          routing_key='toB')

print(' [*] Waiting for logs. To exit press CTRL+C')


def callbackConsume(ch, method, properties, body):
    global newData, cosimTime
    print("\nReceived [x] %r" % body)
    #cosimTime = datetime.datetime.strptime(body, "%Y-%m-%dT%H:%M:%S.%f%z")
    with lock:
        newData = True
        cosimTime = body.decode()
        cosimTime = json.loads(cosimTime)
        #print(cosimTime)


def publishRtime():
    global newData
    while not thread_stop:
        if newData:
            #if True:
            with lock:
                newData = False
                msg = {}
                msg['rtime'] = datetime.now(timezone.utc).astimezone().isoformat(timespec='milliseconds')
                msg['cosimtime'] = cosimTime["simAtTime"]
                print("\nSending [y] %s" % str(msg))
                channelPublish.basic_publish(exchange='exchangeName',
                                             routing_key='toB',
                                             body=json.dumps(msg))
            #time.sleep(1)


channelConsume.basic_consume(
    queue=queue_name, on_message_callback=callbackConsume, auto_ack=True)

try:
    thread = threading.Thread(target=publishRtime)
    thread.start()
    channelConsume.start_consuming()
except KeyboardInterrupt:
    print("Exiting...")
    channelConsume.stop_consuming()
    thread_stop = True
    connection.close()
What program A outputs is:
amqp_basic_get error codes amqp_response_normal 1, amqp_basic_get_ok_method 3932232
which is the code for AMQP_BASIC_GET_EMPTY_METHOD.
Program B gets the data, and publishes continuously.
If I slightly modify B to just publish a specific string all the time, then it seems that amqp_basic_get returns successfully; however, it then fails at amqp_read_message with the code AMQP_RESPONSE_LIBRARY_EXCEPTION.
Any idea how to get this to work, or what I am missing in the setup?
The issue was in the queue_declare, where the auto_delete parameter was not matching on both sides. Re-declaring an existing queue with different properties is refused by the broker (a PRECONDITION_FAILED channel error), so the declaration parameters have to match in both programs.

How to receive information in Kotlin from a server in Python (socketserver)

I have tried almost everything to receive text from Python. I don't know whether the problem comes from the client or from the server.
Server:
try:
    llamadacod = self.request.recv(1024)
    llamada = self.decode(llamadacod)
    print(f"{color.A}{llamada}")
    time.sleep(0.1)
    if llamada == "conectado":
        msg = "Hello"
        msgcod = self.encode(msg)
        print(f"{color.G}{msg}")
        self.request.send(msgcod)
Client:
val thread = Thread(Runnable {
    try {
        val client = Socket("localHost", 25565)
        client.setReceiveBufferSize(1024)
        client.outputStream.write("conectado".toByteArray())
        val text = InputStreamReader(client.getInputStream())
        recibir = text.toString()
        client.outputStream.write("Client_desconect".toByteArray())
        client.close()
I already solved it. The solution was very simple: you just have to ensure that both the server and the client use the same way of communicating (the same framing for the messages).
Client:
// readUTF expects a 2-byte big-endian length prefix followed by the UTF-8 bytes
val input = DataInputStream(client.getInputStream())
id = input.readUTF()
Server:
# send the same 2-byte big-endian length prefix, then the encoded message
self.request.send(len(msg).to_bytes(2, byteorder='big'))
self.request.send(msg)

Akka http - SSE - Not receiving streaming Json response

I am playing with Server-Sent Events to get updates from an akka-http v2.4.11 based micro-service. I am using akka-sse. For some reason, I am not receiving any updates on my Javascript front-end. However, as soon as I terminate or kill the server process, I get some of the messages in the front-end. My code looks like this:
val start = ByteString.empty
val sep = ByteString("\n")
val end = ByteString.empty

import Fill._

implicit val jsonStreamingSupport: JsonEntityStreamingSupport =
  EntityStreamingSupport.json()
    .withFramingRenderer(Flow[ByteString].intersperse(start, sep, end))

import de.heikoseeberger.akkasse.EventStreamMarshalling._

def routes: Route = pathPrefix("subscribe") {
  path("fills") {
    get {
      complete {
        Source.actorPublisher[Fill](FillProvider())
          .map(fill ⇒ sse(fill))
          .keepAlive(1.second, () ⇒ ServerSentEvent.heartbeat)
      }
    }
  }
}

def sse[T: ClassTag](obj: T)(implicit w: JsonWriter[T]): ServerSentEvent = {
  ServerSentEvent(data = w.write(obj).compactPrint,
                  eventType = classTag[T].runtimeClass.getSimpleName)
}
Any pointers on what I could be doing wrong? To me, it seems that I am following every instruction as mentioned here.

How do I create a TCP receiver that only consumes messages using akka streams?

We are on: akka-stream-experimental_2.11 1.0.
Inspired by the example
We wrote a TCP receiver as follows:
def bind(address: String, port: Int, target: ActorRef)
        (implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
  val sink = Sink.foreach[Tcp.IncomingConnection] { conn =>
    val serverFlow = Flow[ByteString]
      .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 256, allowTruncation = true))
      .map(message => {
        target ? new Message(message); ByteString.empty
      })
    conn handleWith serverFlow
  }
  val connections = Tcp().bind(address, port)
  connections.to(sink).run()
}
However, our intention was to have the receiver not respond at all and only sink the message. (The TCP message publisher does not care about a response.)
Is it even possible to not respond at all, since akka.stream.scaladsl.Tcp.IncomingConnection takes a flow of type Flow[ByteString, ByteString, Unit]?
If yes, some guidance will be much appreciated. Thanks in advance.
One attempt, as follows, passes my unit tests, but I am not sure if it is the best idea:
def bind(address: String, port: Int, target: ActorRef)
        (implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
  val sink = Sink.foreach[Tcp.IncomingConnection] { conn =>
    val targetSubscriber = ActorSubscriber[Message](system.actorOf(Props(new TargetSubscriber(target))))
    val targetSink = Flow[ByteString]
      .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 256, allowTruncation = true))
      .map(Message(_))
      .to(Sink(targetSubscriber))
    conn.flow.to(targetSink).runWith(Source(Promise().future))
  }
  val connections = Tcp().bind(address, port)
  connections.to(sink).run()
}
You are on the right track. To keep the possibility of closing the connection at some point, you may want to keep the promise and complete it later on. Once it is completed with an element, that element is published by the source. However, as you don't want any element to be published on the connection, you can use drop(1) to make sure the source will never emit any element.
Here's an updated version of your example (untested):
val promise = Promise[ByteString]()

// This source will complete when the promise is fulfilled,
// or it will complete with an error if the promise is completed with an error.
val completionSource = Source(promise.future).drop(1)

completionSource // only used to complete later
  .via(conn.flow) // I reordered the flow for better readability (arguably)
  .runWith(targetSink)

// To close the connection later, complete the promise:
def closeConnection() = promise.success(ByteString.empty) // dummy element, will be dropped

// Alternatively, to fail the connection later, complete it with an error:
def failConnection() = promise.failure(new RuntimeException)