Akka HTTP - SSE - Not receiving streaming JSON response

I am playing with Server Sent Events to get updates from an akka-http v2.4.11 based micro-service. I am using akka-sse. For some reason, I am not receiving any updates on my JavaScript front-end. However, as soon as I terminate or kill the server process, I get some of the messages in the front-end. My code looks like this:
val start = ByteString.empty
val sep = ByteString("\n")
val end = ByteString.empty

import Fill._

implicit val jsonStreamingSupport: JsonEntityStreamingSupport =
  EntityStreamingSupport.json()
    .withFramingRenderer(Flow[ByteString].intersperse(start, sep, end))

import de.heikoseeberger.akkasse.EventStreamMarshalling._

def routes: Route = pathPrefix("subscribe") {
  path("fills") {
    get {
      complete {
        Source.actorPublisher[Fill](FillProvider())
          .map(fill ⇒ sse(fill))
          .keepAlive(1.second, () ⇒ ServerSentEvent.heartbeat)
      }
    }
  }
}

def sse[T: ClassTag](obj: T)(implicit w: JsonWriter[T]): ServerSentEvent =
  ServerSentEvent(
    data = w.write(obj).compactPrint,
    eventType = classTag[T].runtimeClass.getSimpleName
  )
Any pointers on what I might be doing wrong? As far as I can tell, I am following the instructions described here.
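As a sanity check, a minimal isolation sketch (untested; it assumes the same imports and implicits as the code above, and the route name and path are made up for the test): serve a trivial Source.tick stream instead of the ActorPublisher-backed one and watch the EventSource. If these tick events arrive in the front-end, the FillProvider publisher is the likely culprit; if they do not, the problem sits between the route and the browser (for example proxy or content-encoding buffering).
// Hypothetical debugging route: emits one ServerSentEvent per second,
// independent of FillProvider.
def tickRoute: Route = path("subscribe" / "ticks") {
  get {
    complete {
      Source.tick(1.second, 1.second, ServerSentEvent("tick"))
        .keepAlive(5.seconds, () ⇒ ServerSentEvent.heartbeat)
    }
  }
}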

Retrofit OkHttp - "unexpected end of stream"

I am getting "Unexpected end of stream" while using Retrofit (2.9.0) with OkHttp3 (4.9.1).
Retrofit configuration:
interface ApiServiceInterface {

    companion object Factory {
        fun create(): ApiServiceInterface {
            val interceptor = HttpLoggingInterceptor()
            interceptor.level = HttpLoggingInterceptor.Level.BODY

            val client = OkHttpClient.Builder()
                .connectTimeout(30, TimeUnit.SECONDS)
                .writeTimeout(30, TimeUnit.SECONDS)
                .readTimeout(30, TimeUnit.SECONDS)
                .addInterceptor(Interceptor { chain ->
                    chain.request().newBuilder()
                        .addHeader("Connection", "close")
                        .addHeader("Accept-Encoding", "identity")
                        .build()
                        .let(chain::proceed)
                })
                .retryOnConnectionFailure(true)
                .connectionPool(ConnectionPool(0, 5, TimeUnit.MINUTES))
                .protocols(listOf(Protocol.HTTP_1_1))
                .build()

            val gson = GsonBuilder().setLenient().create()

            val retrofit = Retrofit.Builder()
                .addCallAdapterFactory(CoroutineCallAdapterFactory())
                .addConverterFactory(GsonConverterFactory.create(gson))
                .baseUrl("http://***.***.***.***:****")
                .client(client)
                .build()

            return retrofit.create(ApiServiceInterface::class.java)
        }
    }

    @Headers("Content-type: application/json", "Connection: close", "Accept-Encoding: identity")
    @POST("/")
    fun requestAsync(@Body data: JsonObject): Deferred<Response>
}
So far I have found out the following:
1. The issue only occurs while using Android Studio emulators running under a Windows OS (7, 10, 11); this was reproduced on 2 different laptops on different networks.
2. When running Android Studio emulators under OS X, the issue does not reproduce in 100% of cases.
3. ARC/Postman clients never have any issues completing the same requests against my backend.
4. When running from Windows-hosted Android Studio emulators, the issue reproduces in about 10-50% of requests; the other requests work without problems.
5. Identical requests can result in this error or complete successfully.
6. Responses that take about 11 sec to complete can succeed, while responses that take about 100 msec to complete can fail with this error.
7. Commenting out .client(client) in the Retrofit configuration eliminates the issue, but then I lose the ability to use interceptors and other OkHttp functionality.
8. Adding the headers (Connection: close, Accept-Encoding: identity) does not solve the issue.
9. Turning retryOnConnectionFailure on or off has no impact on the issue either.
10. Changing the HttpLoggingInterceptor level or removing it completely does not solve the issue.
Server-side configuration:
const http = require('http');

const server = http.createServer((req, res) => {
    const callback = function (code, request, data) {
        let result = responser(code, request, data);
        res.writeHead(200, {
            'Content-Type': 'x-application/json',
            'Connection': 'close',
            'Content-Length': Buffer.byteLength(result)
        });
        res.end(result);
    };
    ...
});

server.listen(process.env.PORT, process.env.HOSTNAME, () => {
    console.log(`Server is running`);
});
So, based on 1-3, this is unlikely to be a server-side issue.
Based on 4-6, it is not a malformed-request or execution-time issue.
Guessing from 7, the roots of this issue lie in OkHttp rather than in Retrofit itself.
I have read almost half of Stack Overflow in search of a resolution, for example:
unexpected end of stream retrofit
Retrofit OkHttp unexpected end of stream on Connection error
and also discussions at OkHttp on GitHub:
https://github.com/square/okhttp/issues/3682
https://github.com/square/okhttp/issues/3715
But nothing helped so far.
Any idea what might be causing the problem?
Update
I've got more info on the situation.
First, I changed the headers on the backend so that it no longer passes Content-Length and passes Transfer-Encoding: identity instead. I don't know why, but Postman gives an error if both of these headers are present, saying it is not right.
res.writeHead(200, {
    'Content-Type': 'x-application/json',
    'Connection': 'close',
    'Transfer-Encoding': 'identity'
});
After that I started to receive another error on Windows-hosted Android Studio emulators (with the same fail/success ratio as "Unexpected end of stream"):
2021-12-09 14:58:19.696 401-401/? D/P2P-> FRG DEBUG:: java.io.EOFException: End of input at line 1 column 1807 path $.meta
    at com.google.gson.stream.JsonReader.nextNonWhitespace(JsonReader.java:1397)
    at com.google.gson.stream.JsonReader.doPeek(JsonReader.java:483)
    at com.google.gson.stream.JsonReader.hasNext(JsonReader.java:415)
    at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:216)
    at retrofit2.converter.gson.GsonResponseBodyConverter.convert(GsonResponseBodyConverter.java:40)
    at retrofit2.converter.gson.GsonResponseBodyConverter.convert(GsonResponseBodyConverter.java:27)
    at retrofit2.OkHttpCall.parseResponse(OkHttpCall.java:243)
    at retrofit2.OkHttpCall$1.onResponse(OkHttpCall.java:153)
    at okhttp3.internal.connection.RealCall$AsyncCall.run(RealCall.kt:519)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
    at java.lang.Thread.run(Thread.java:764)
Spending a lot of time debugging this issue, I found that this exception is generated in JsonReader.java, in the method nextNonWhitespace, where it tries to pick out colons, double quotes, and curly or square braces to compose a JSON object from a buffer decoded as a char array.
The buffer itself is filled in the fillBuffer method of the same module and has a length limit of 1024 elements. In my case the backend response is longer than this value (1807 chars), so while JsonReader.java parses my response as a JSON object it does so in 2 iterations.
On each iteration it fills the buffer here:
int total;
while ((total = in.read(buffer, limit, buffer.length - limit)) != -1) {
    limit += total;

    // if this is the first read, consume an optional byte order mark (BOM) if it exists
    if (lineNumber == 0 && lineStart == 0 && limit > 0 && buffer[0] == '\ufeff') {
        pos++;
        lineStart++;
        minimum++;
    }

    if (limit >= minimum) {
        return true;
    }
}
The read method is called on the ResponseBody.kt class from okhttp3:
@Throws(IOException::class)
override fun read(cbuf: CharArray, off: Int, len: Int): Int {
    if (closed) throw IOException("Stream closed")
    val finalDelegate = delegate ?: InputStreamReader(
        source.inputStream(),
        source.readBomAsCharset(charset)
    ).also {
        delegate = it
    }
    return finalDelegate.read(cbuf, off, len)
}
The main problem is this:
At the first iteration all goes well: ResponseBody.kt "reads" the first 1024 chars and gives them to JsonReader.java, which composes part of the response object from them.
When the second iteration comes, ResponseBody.kt "reads" the last part of the response and fills the start of the char buffer with it, so the char buffer now contains the tail of the response as its first elements, followed by whatever elements were left over from the first iteration.
The problem is that in most cases (about 80%) the read loses the last char of the response, in about 10% of cases it loses the last 2 chars, and in about 10% it reads all chars. Here are screenshots:
The buffer must contain 783 chars to complete the JSON, but as shown at line 1290 it receives only 782.
Looking at the buffer itself, the char at index 782 (the 783rd in order) should be the second curly brace that closes the JSON root, but instead there are leftovers from the first iteration. This results in the exception mentioned above.
Now let us look at a situation where the request finishes successfully:
With the same request it occasionally returns the valid number of chars: 783.
And the buffer itself:
Now the second brace is present where it must be.
In this case the request will be successful.
The same response ending from Postman:
Postman's success rate in parsing the response is 100%; the same is true for OS X-hosted Android Studio emulators and the real devices I've used.
Update 2
It seems the full buffer is obtained in RealBufferedSource.kt:
internal inline fun RealBufferedSource.commonSelect(options: Options): Int {
    check(!closed) { "closed" }

    while (true) {
        val index = buffer.selectPrefix(options, selectTruncated = true)
        when (index) {
            -1 -> {
                return -1
            }
            -2 -> {
                // We need to grow the buffer. Do that, then try it all again.
                if (source.read(buffer, Segment.SIZE.toLong()) == -1L) return -1
            }
            else -> {
                // We matched a full byte string: consume it and return it.
                val selectedSize = options.byteStrings[index].size
                buffer.skip(selectedSize.toLong())
                return index
            }
        }
    }
}
and here it is already missing the last char:
Update 3
I found this unsolved question which describes exactly the same behavior:
Retrofit Json data truncated
Also a comment from the Android Studio emulator issue tracker:
https://issuetracker.google.com/issues/119027639#comment9
OK, it took some time, but I've found what was going wrong and how to work around it.
When Android Studio emulators running under a Windows OS (checked for 7 and 10) receive a JSON-typed reply from a server via Retrofit, they can, with varying probability, lose the last 1 or 2 symbols of the body when it is decoded to a string. These symbols contain the closing curly brackets, so such a body cannot be parsed into an object by the Gson converter, which results in the exception mentioned above.
The workaround I found is to add an interceptor to the OkHttp client that checks whether the last symbols of the string-decoded body match those of a valid JSON response, and appends them if they are missing.
interface ApiServiceInterface {

    companion object Factory {
        fun create(): ApiServiceInterface {
            val interceptor = HttpLoggingInterceptor()
            interceptor.level = HttpLoggingInterceptor.Level.BODY

            // Restores up to two missing closing braces at the end of the decoded body.
            val stringInterceptor = Interceptor { chain: Interceptor.Chain ->
                val request = chain.request()
                val response = chain.proceed(request)
                val source = response.body()?.source()
                source?.request(Long.MAX_VALUE)
                val buffer = source?.buffer()
                var responseString = buffer?.clone()?.readString(Charset.forName("UTF-8"))
                if (responseString != null && responseString.length > 2) {
                    val lastTwo = responseString.takeLast(2)
                    if (lastTwo != "}}") {
                        val lastOne = responseString.takeLast(1)
                        responseString = if (lastOne != "}") {
                            "$responseString}}"
                        } else {
                            "$responseString}"
                        }
                    }
                }
                val contentType = response.body()?.contentType()
                val body = ResponseBody.create(contentType, responseString ?: "")
                return@Interceptor response.newBuilder().body(body).build()
            }

            val client = OkHttpClient.Builder()
                .connectTimeout(30, TimeUnit.SECONDS)
                .writeTimeout(30, TimeUnit.SECONDS)
                .readTimeout(30, TimeUnit.SECONDS)
                .addInterceptor(interceptor)
                .addInterceptor(stringInterceptor)
                .retryOnConnectionFailure(true)
                .connectionPool(ConnectionPool(0, 5, TimeUnit.MINUTES))
                .protocols(listOf(Protocol.HTTP_1_1))
                .build()

            val gson = GsonBuilder().create()

            val retrofit = Retrofit.Builder()
                .addCallAdapterFactory(CoroutineCallAdapterFactory())
                .addConverterFactory(GsonConverterFactory.create(gson))
                .addConverterFactory(ScalarsConverterFactory.create())
                .baseUrl("http://3.124.6.203:5000")
                .client(client)
                .build()

            return retrofit.create(ApiServiceInterface::class.java)
        }
    }

    @Headers("Content-type: application/json", "Connection: close", "Accept-Encoding: identity")
    @POST("/")
    fun requestAsync(@Body data: JsonObject): Deferred<Response>
}
After these changes the issue didn't occur.

Google Cloud Pubsub Data lost

I'm experiencing a problem with GCP Pub/Sub where a small percentage of data was lost when publishing thousands of messages within a couple of seconds.
I'm logging both the message_id from Pub/Sub and a session_id unique to each message, on both the publishing end and the receiving end, and the result I'm seeing is that some messages on the receiving end have the same session_id but different message_ids. Also, some messages were missing.
For example, in one test I sent 5,000 messages to Pub/Sub, and exactly 5,000 messages were received, yet 8 of the original messages were lost (their places taken by duplicates). The log for a lost message looks like this:
MISSING sessionId:sessionId: 731 (missing in log from pull request, but present in log from Flask API)
messageId FOUND: messageId:108562396466545
API: 200 **** sessionId: 731, messageId:108562396466545 ******(Log from Flask API)
Pubsub: sessionId: 730, messageId:108562396466545(Log from pull request)
And the duplicates looks like:
======= Duplicates FOUND on sessionId: 730=======
sessionId: 730, messageId:108562396466545
sessionId: 730, messageId:108561339282318
(both are logs from pull request)
All missing data and duplicates look like this.
From the above example, it is clear that some messages have taken on the message_id of another message and were delivered twice under two different message_ids.
I wonder if anyone can help me figure out what is going on? Thanks in advance.
Code
I have an API sending messages to Pub/Sub, which looks like this:
from flask import Flask, request, jsonify, render_template
from flask_cors import CORS, cross_origin
import simplejson as json
from google.cloud import pubsub
from functools import wraps
import re
import json

app = Flask(__name__)
ps = pubsub.Client()

...

@app.route('/publish', methods=['POST'])
@cross_origin()
@json_validator
def publish_test_topic():
    pubsub_topic = 'test_topic'
    data = request.data
    topic = ps.topic(pubsub_topic)
    event = json.loads(data)
    messageId = topic.publish(data)
    return '200 **** sessionId: ' + str(event["sessionId"]) + ", messageId:" + messageId + " ******"
And this is the code I used to read from pubsub:
from google.cloud import pubsub
import re
import json

ps = pubsub.Client()
topic = ps.topic('test-xiu')
sub = topic.subscription('TEST-xiu')

max_messages = 1
stop = False
messages = []

class Message(object):
    """docstring for Message."""
    def __init__(self, sessionId, messageId):
        super(Message, self).__init__()
        self.sessionId = sessionId
        self.messageId = messageId

def pull_all():
    while stop == False:
        m = sub.pull(max_messages=max_messages, return_immediately=False)
        for data in m:
            ack_id = data[0]
            message = data[1]
            messageId = message.message_id
            data = message.data
            event = json.loads(data)
            sessionId = str(event["sessionId"])
            messages.append(Message(sessionId=sessionId, messageId=messageId))
            print '200 **** sessionId: ' + sessionId + ", messageId:" + messageId + " ******"
            sub.acknowledge(ack_ids=[ack_id])

pull_all()
For generating session_id, sending request & logging response from API:
// generate trackable sessionId
var sessionId = 0;

var increment_session_id = function () {
    sessionId++;
    return sessionId;
};

var generate_data = function () {
    var data = {};
    // data.sessionId = faker.random.uuid();
    data.sessionId = increment_session_id();
    data.user = get_rand(userList);
    data.device = get_rand(deviceList);
    data.visitTime = new Date;
    data.location = get_rand(locationList);
    data.content = get_rand(contentList);
    return data;
};

var sendData = function (url, payload) {
    var request = $.ajax({
        url: url,
        contentType: 'application/json',
        method: 'POST',
        data: JSON.stringify(payload),
        error: function (xhr, status, errorThrown) {
            console.log(xhr, status, errorThrown);
            $('.result').prepend("<pre id='json'>" + JSON.stringify(xhr, null, 2) + "</pre>");
            $('.result').prepend("<div>errorThrown: " + errorThrown + "</div>");
            $('.result').prepend("<div>======FAIL=======</div><div>status: " + status + "</div>");
        }
    }).done(function (xhr) {
        console.log(xhr);
        $('.result').prepend("<div>======SUCCESS=======</div><pre id='json'>" + JSON.stringify(payload, null, 2) + "</pre>");
    });
};

$(submit_button).click(function () {
    var request_num = get_request_num();
    var request_url = get_url();
    for (var i = 0; i < request_num; i++) {
        var data = generate_data();
        var loadData = changeVerb(data, 'load');
        sendData(request_url, loadData);
    }
});
UPDATE
I made a change to the API, and the issue seems to go away. The change I made was: instead of using one pubsub.Client() for all requests, I initialized a client for every single incoming request. The new API looks like:
from flask import Flask, request, jsonify, render_template
from flask_cors import CORS, cross_origin
import simplejson as json
from google.cloud import pubsub
from functools import wraps
import re
import json

app = Flask(__name__)

...

@app.route('/publish', methods=['POST'])
@cross_origin()
@json_validator
def publish_test_topic():
    ps = pubsub.Client()
    pubsub_topic = 'test_topic'
    data = request.data
    topic = ps.topic(pubsub_topic)
    event = json.loads(data)
    messageId = topic.publish(data)
    return '200 **** sessionId: ' + str(event["sessionId"]) + ", messageId:" + messageId + " ******"
I talked with someone from Google, and it seems to be an issue with the Python client:
"The consensus on our side is that there is a thread-safety problem in the current python client. The client library is being rewritten almost from scratch as we speak, so I don't want to pursue any fixes in the current version. We expect the new version to become available by end of June.
Running the current code with thread_safe: false in app.yaml, or better yet just instantiating the client in every call, is the workaround -- the solution you found."
For the detailed solution, please see the Update in the question.
Google Cloud Pub/Sub message IDs are unique. It should not be possible for "some messages [to have] taken the message_id of another message." The fact that message ID 108562396466545 was seemingly received means that Pub/Sub did deliver the message to the subscriber and that it was not lost.
I recommend you check how your session_ids are generated, to ensure that they are indeed unique and that there is exactly one per message. Searching for the sessionId in your JSON via a regular-expression search seems a little strange. You would be better off parsing the JSON into an actual object and accessing the fields that way.
In general, duplicate messages in Cloud Pub/Sub are always possible; the system guarantees at-least-once delivery. Those messages can be delivered with the same message ID if the duplication happens on the subscribe side (e.g., the ack is not processed in time) or with a different message ID (e.g., if the publish of the message is retried after an error like a deadline exceeded).
You shouldn't need to create a new client for every publish operation. I'm betting that the reason that "fixed the problem" is that it mitigated a race that exists on the publisher client side. I'm also not convinced that the log line you've shown on the publisher side:
API: 200 **** sessionId: 731, messageId:108562396466545 ******
corresponds to a successful publish of sessionId 731 by publish_test_topic(). Under what conditions is that log line printed? The code presented so far does not show this.

Using spark-kernel comm API

I started using spark-kernel recently.
Following the tutorial and the sample code, I was able to set up a client and use it to execute code snippets on spark-kernel and retrieve results, as shown in this example code.
Now I need to use the Comm API provided with spark-kernel. I tried this tutorial, but I am not able to make it work; in fact, I have no understanding of how to make it work.
I tried the following code, but when I run it I get the error "Received invalid target for Comm Open: my_target" on the kernel.
package examples

import scala.runtime.ScalaRunTime._
import scala.collection.mutable.ListBuffer
import com.ibm.spark.kernel.protocol.v5.MIMEType
import com.ibm.spark.kernel.protocol.v5.client.boot.ClientBootstrap
import com.ibm.spark.kernel.protocol.v5.client.boot.layers.{StandardHandlerInitialization, StandardSystemInitialization}
import com.ibm.spark.kernel.protocol.v5.content._
import com.typesafe.config.{Config, ConfigFactory}
import Array._

object commclient extends App {
  val profileJSON: String = """
  {
    "stdin_port":   48691,
    "control_port": 44808,
    "hb_port":      49691,
    "shell_port":   40544,
    "iopub_port":   43462,
    "ip": "127.0.0.1",
    "transport": "tcp",
    "signature_scheme": "hmac-sha256",
    "key": ""
  }
  """.stripMargin

  val config: Config = ConfigFactory.parseString(profileJSON)

  val client = (new ClientBootstrap(config)
    with StandardSystemInitialization
    with StandardHandlerInitialization).createClient()

  def printResult(result: ExecuteResult) = {
    println(s"${result.data.get(MIMEType.PlainText).get}")
  }

  def printStreamContent(content: StreamContent) = {
    println(s"${content.text}")
  }

  def printError(reply: ExecuteReplyError) = {
    println(s"Error was: ${reply.ename.get}")
  }

  client.comm.register("my_target").addMsgHandler {
    (commWriter, commId, data) =>
      println(data)
      commWriter.close()
  }

  // Initiate the Comm connection
  client.comm.open("my_target")
}
Can someone tell me how I should run this piece of code?
// Register the callback to respond to being opened from the client
kernel.comm.register("my target").addOpenHandler {
  (commWriter, commId, targetName, data) =>
    commWriter.writeMsg(Map("response" -> "Hello World!"))
}
I would really appreciate it if someone could point me to a complete working example of the Comm API.
Any help will be appreciated. Thanks.
You can use your client to run this server-side (kernel-side) registration once, in one program. Then your other programs can communicate with the kernel over this channel.
Here is how I ran the registration in the first program mentioned above:
client.execute(
  """
  // Register the callback to respond to being opened from the client
  kernel.comm.register("my target").
    addOpenHandler {
      (commWriter, commId, targetName, data) =>
        commWriter.writeMsg(org.apache.toree.kernel.protocol.v5.MsgData("response" -> "Toree Hello World!"))
    }.
    addMsgHandler {
      (commWriter, _, data) =>
        if (!data.toString.contains("closing")) {
          commWriter.writeMsg(data)
        } else {
          commWriter.writeMsg(org.apache.toree.kernel.protocol.v5.MsgData("closing" -> "done"))
        }
    }
  """.stripMargin
)
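For completeness, a minimal sketch of the client side (untested, using only the client.comm calls already shown in the question): once the registration above has been executed on the kernel, a client program registers a handler for the same target and opens the Comm connection; the kernel's open handler should then reply with the "Toree Hello World!" message.
// Sketch: client-side counterpart to the kernel-side registration above.
client.comm.register("my target").addMsgHandler {
  (commWriter, commId, data) =>
    println(data)      // expected to print the kernel's "Toree Hello World!" response
    commWriter.close() // close the Comm once the reply has been received
}

// Initiate the Comm connection; this triggers the kernel's open handler
client.comm.open("my target")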

How do I create a TCP receiver that only consumes messages using akka streams?

We are on akka-stream-experimental_2.11 1.0.
Inspired by the example, we wrote a TCP receiver as follows:
def bind(address: String, port: Int, target: ActorRef)
        (implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
  val sink = Sink.foreach[Tcp.IncomingConnection] { conn =>
    val serverFlow = Flow[ByteString]
      .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 256, allowTruncation = true))
      .map(message => {
        target ? new Message(message); ByteString.empty
      })
    conn handleWith serverFlow
  }
  val connections = Tcp().bind(address, port)
  connections.to(sink).run()
}
However, our intention was to have the receiver not respond at all and only sink the messages. (The TCP message publisher does not care about a response.)
Is it even possible not to respond at all, given that akka.stream.scaladsl.Tcp.IncomingConnection takes a flow of type Flow[ByteString, ByteString, Unit]?
If yes, some guidance would be much appreciated. Thanks in advance.
The following attempt passes my unit tests, but I am not sure it is the best idea:
def bind(address: String, port: Int, target: ActorRef)
        (implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
  val sink = Sink.foreach[Tcp.IncomingConnection] { conn =>
    val targetSubscriber = ActorSubscriber[Message](system.actorOf(Props(new TargetSubscriber(target))))
    val targetSink = Flow[ByteString]
      .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 256, allowTruncation = true))
      .map(Message(_))
      .to(Sink(targetSubscriber))
    conn.flow.to(targetSink).runWith(Source(Promise().future))
  }
  val connections = Tcp().bind(address, port)
  connections.to(sink).run()
}
You are on the right track. To keep the possibility of closing the connection at some point, you may want to keep the promise and complete it later on. Once completed with an element, that element is published by the source. However, as you don't want any element to be published on the connection, you can use drop(1) to make sure the source will never emit any element.
Here's an updated version of your example (untested):
val promise = Promise[ByteString]()

// this source will complete when the promise is fulfilled,
// or it will complete with an error if the promise is completed with an error
val completionSource = Source(promise.future).drop(1)

completionSource  // only used to complete later
  .via(conn.flow) // I reordered the flow for better readability (arguably)
  .runWith(targetSink)

// to close the connection later, complete the promise:
def closeConnection() = promise.success(ByteString.empty) // dummy element, will be dropped

// alternatively, to fail the connection later, complete it with an error:
def failConnection() = promise.failure(new RuntimeException)

How to recover from akka.stream.io.Framing$FramingException

On akka-stream-experimental_2.11 1.0.
We are using Framing.delimiter in a TCP server. When a message arrives whose length is greater than maximumFrameLength, a FramingException is thrown, and we can capture it from the OnError handler of the ActorSubscriber.
Server Code:
def bind(address: String, port: Int, target: ActorRef, maxInFlight: Int, maxFrameLength: Int)
        (implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
  val sink = Sink.foreach { conn: Tcp.IncomingConnection =>
    val targetSubscriber = ActorSubscriber[Message](system.actorOf(Props(new TargetSubscriber(target, maxInFlight))))
    val targetSink = Flow[ByteString]
      .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = maxFrameLength, allowTruncation = true))
      .map(raw ⇒ Message(raw))
      .to(Sink(targetSubscriber))
    conn.flow.to(targetSink).runWith(Source(Promise().future))
  }
  val connections = Tcp().bind(address, port)
  connections.to(sink).run()
}
Subscriber code:
class TargetSubscriber(target: ActorRef, maxInFlight: Int) extends ActorSubscriber with ActorLogging {
  private var inFlight = 0

  override protected def requestStrategy = new MaxInFlightRequestStrategy(maxInFlight) {
    override def inFlightInternally = inFlight
  }

  override def receive = {
    case OnNext(msg: Message) ⇒
      target ! msg
      inFlight += 1
    case OnError(t) ⇒
      inFlight -= 1
      log.error(t, "Subscriber encountered error")
    case TargetAck(_) ⇒
      inFlight -= 1
  }
}
Problem:
Messages that are under the max frame length do not flow after this exception for that incoming connection. Killing the client and re-running it works fine.
ActorSubscriber does not honor supervision.
What is the correct way to skip the bad message and continue with the next good message?
Have you tried putting supervision on the targetSink flow instead of on the whole materializer? I don't see it anywhere here, and I believe it should be set on that flow directly.
Still, this is more a guess than science ;)
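To make that suggestion concrete, here is a minimal sketch of attaching a supervision strategy directly to the framing flow, reusing the names from the question (targetSubscriber, maxFrameLength, Message). It is written against the newer akka-stream API (ActorAttributes.supervisionStrategy; on akka-stream-experimental 1.0 the attributes class may be named differently), and whether the framing stage can actually resume past a bad frame should be verified; if it cannot, the connection flow has to be recreated on failure.
import akka.stream.{ActorAttributes, Supervision}
import akka.stream.scaladsl.{Flow, Framing, Sink}
import akka.util.ByteString

// Resume (drop the offending element) on a framing error, stop on anything else.
val decider: Supervision.Decider = {
  case _: Framing.FramingException ⇒ Supervision.Resume
  case _                           ⇒ Supervision.Stop
}

val targetSink = Flow[ByteString]
  .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = maxFrameLength, allowTruncation = true))
  .withAttributes(ActorAttributes.supervisionStrategy(decider))
  .map(raw ⇒ Message(raw))
  .to(Sink(targetSubscriber))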
I had the same exception reading from a file, and for me it was solved by putting a return (newline) after the last line.