How do I create a TCP receiver that only consumes messages using Akka Streams? - scala-2.11

We are on: akka-stream-experimental_2.11 1.0.
Inspired by the example, we wrote a TCP receiver as follows:
def bind(address: String, port: Int, target: ActorRef)
        (implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
  val sink = Sink.foreach[Tcp.IncomingConnection] { conn =>
    val serverFlow = Flow[ByteString]
      .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 256, allowTruncation = true))
      .map { message =>
        target ? new Message(message)
        ByteString.empty
      }
    conn handleWith serverFlow
  }
  val connections = Tcp().bind(address, port)
  connections.to(sink).run()
}
However, our intention was for the receiver not to respond at all and only sink the messages (the TCP message publisher does not care about responses).
Is it even possible to not respond at all, given that akka.stream.scaladsl.Tcp.IncomingConnection takes a flow of type Flow[ByteString, ByteString, Unit]?
If yes, some guidance would be much appreciated. Thanks in advance.
The following attempt passes our unit tests, but we are not sure it is the best idea:
def bind(address: String, port: Int, target: ActorRef)
        (implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
  val sink = Sink.foreach[Tcp.IncomingConnection] { conn =>
    val targetSubscriber = ActorSubscriber[Message](system.actorOf(Props(new TargetSubscriber(target))))
    val targetSink = Flow[ByteString]
      .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 256, allowTruncation = true))
      .map(Message(_))
      .to(Sink(targetSubscriber))
    conn.flow.to(targetSink).runWith(Source(Promise().future))
  }
  val connections = Tcp().bind(address, port)
  connections.to(sink).run()
}

You are on the right track. To keep the possibility of closing the connection at some point, you may want to keep the promise and complete it later on. Once the promise is completed with an element, that element is published by the source. However, since you don't want any element to be published on the connection, you can use drop(1) to make sure the source never emits anything.
Here's an updated version of your example (untested):
val promise = Promise[ByteString]()
// this source will complete when the promise is fulfilled,
// or it will fail if the promise is completed with an error
val completionSource = Source(promise.future).drop(1)

completionSource      // only used to complete later
  .via(conn.flow)     // I reordered the flow for better readability (arguably)
  .runWith(targetSink)

// to close the connection later, complete the promise:
def closeConnection() = promise.success(ByteString.empty) // dummy element, will be dropped

// alternatively, to fail the connection later, complete it with an error:
def failConnection() = promise.failure(new RuntimeException)
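For completeness, here is a minimal sketch of how this could be wired back into the original bind method (untested, against the same akka-stream-experimental 1.0 API; Message and TargetSubscriber are the classes from the question):

def bind(address: String, port: Int, target: ActorRef)
        (implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
  val sink = Sink.foreach[Tcp.IncomingConnection] { conn =>
    // one promise per connection, kept around so the connection can be closed later
    val promise = Promise[ByteString]()
    val targetSink = Flow[ByteString]
      .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 256, allowTruncation = true))
      .map(Message(_))
      .to(Sink(ActorSubscriber[Message](system.actorOf(Props(new TargetSubscriber(target))))))
    // drop(1) guarantees nothing is ever written back to the client
    Source(promise.future).drop(1)
      .via(conn.flow)
      .runWith(targetSink)
  }
  Tcp().bind(address, port).to(sink).run()
}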

Related

How to receive information in Kotlin from a server in Python (socketserver)

I have tried almost everything to receive text from Python. I don't know whether the problem comes from the client or from the server.
Server:
try:
    llamadacod = self.request.recv(1024)
    llamada = self.decode(llamadacod)
    print(f"{color.A}{llamada}")
    time.sleep(0.1)
    if llamada == "conectado":
        msg = "Hello"
        msgcod = self.encode(msg)
        print(f"{color.G}{msg}")
        self.request.send(msgcod)
Client:
val thread = Thread(Runnable {
    try {
        val client = Socket("localHost", 25565)
        client.setReceiveBufferSize(1024)
        client.outputStream.write("conectado".toByteArray())
        val text = InputStreamReader(client.getInputStream())
        recibir = text.toString() // bug: toString() does not read from the stream
        client.outputStream.write("Client_desconect".toByteArray())
        client.close()
    } catch (e: Exception) {
        e.printStackTrace()
    }
})
I already solved it. The solution was very simple: I just had to make sure that both the server and the client communicate in the same way (matching framing on both ends).
Client:
val input = DataInputStream(client.getInputStream())
id = input.readUTF()
Server:
self.request.send(len(msg).to_bytes(2, byteorder='big'))
self.request.send(msg)
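This works because DataInputStream.readUTF expects exactly what the server now sends: a 2-byte big-endian length prefix followed by that many bytes of string data. (Strictly, readUTF decodes Java's modified UTF-8, which matches plain UTF-8 for ASCII payloads like the ones here.) A minimal JVM-side sketch of the same read, written in Scala, though the calls are identical from Kotlin; host and port are taken from the question:

import java.io.DataInputStream
import java.net.Socket

val client = new Socket("localhost", 25565)
val in = new DataInputStream(client.getInputStream)
// reads a 2-byte big-endian length, then that many bytes, which matches
// len(msg).to_bytes(2, byteorder='big') followed by msg on the Python side
val id = in.readUTF()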

What is the correct use of consumer groups in Spring Cloud Stream Data Flow and RabbitMQ?

A follow-up to this:
one SCDF source, 2 processors but only 1 processes each item
The 2 processors (del-1 and del-2) in the picture are receiving the same data within milliseconds of each other. I'm trying to rig this so that del-2 never receives the same item as del-1, and vice versa. Obviously I've got something configured incorrectly, but I'm not sure where.
My processor has the following application.properties:
spring.application.name=${vcap.application.name:sample-processor}
info.app.name=@project.artifactId@
info.app.description=@project.description@
info.app.version=@project.version@
management.endpoints.web.exposure.include=health,info,bindings
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.security.servlet.SecurityAutoConfiguration
spring.cloud.stream.bindings.input.group=input
Is "spring.cloud.stream.bindings.input.group" specified correctly?
Here's the processor code:
@Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
public Object transform(String inputStr) throws InterruptedException {
    ApplicationLog log = new ApplicationLog(this, "timerMessageSource");
    String message = " I AM [" + inputStr + "] AND I HAVE BEEN PROCESSED!!!!!!!";
    log.info("SampleProcessor.transform() incoming inputStr=" + inputStr);
    return message;
}
Is the @Transformer annotation the proper way to link this bit of code with "spring.cloud.stream.bindings.input.group" from application.properties? Are there any other annotations necessary?
Here's my source:
private String format = "EEEEE dd MMMMM yyyy HH:mm:ss.SSSZ";

@Bean
@InboundChannelAdapter(value = Source.OUTPUT, poller = @Poller(fixedDelay = "1000", maxMessagesPerPoll = "1"))
public MessageSource<String> timerMessageSource() {
    ApplicationLog log = new ApplicationLog(this, "timerMessageSource");
    String message = new SimpleDateFormat(format).format(new Date());
    log.info("SampleSource.timeMessageSource() message=[" + message + "]");
    return () -> new GenericMessage<>(new SimpleDateFormat(format).format(new Date()));
}
I'm confused about the "value = Source.OUTPUT". Does this mean my processor needs to be named differently?
Is the inclusion of @Poller causing me a problem somehow?
This is how I define the 2 processor streams (del-1 and del-2) in SCDF shell:
stream create del-1 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.applicationMetrics.destination=metrics > :merge"
stream create del-2 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.applicationMetrics.destination=metrics > :merge"
Do I need to do anything differently there?
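If inline app properties in the stream definition work the same way as the applicationMetrics one above, the group could presumably also be set per stream at creation time, e.g. (untested; shared-processors is the same illustrative group name as before):

stream create del-1 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.input.group=shared-processors > :merge"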
All of this is running in Docker/K8s.
RabbitMQ is given by bitnami/rabbitmq:3.7.2-r1 and is configured with the following props:
RABBITMQ_USERNAME: user
RABBITMQ_PASSWORD: <redacted>
RABBITMQ_ERL_COOKIE: <redacted>
RABBITMQ_NODE_PORT_NUMBER: 5672
RABBITMQ_NODE_TYPE: stats
RABBITMQ_NODE_NAME: rabbit#localhost
RABBITMQ_CLUSTER_NODE_NAME:
RABBITMQ_DEFAULT_VHOST: /
RABBITMQ_MANAGER_PORT_NUMBER: 15672
RABBITMQ_DISK_FREE_LIMIT: "6GiB"
Are any other environment variables necessary?

Akka HTTP - SSE - Not receiving streaming JSON response

I am playing with Server-Sent Events to get updates from an akka-http v2.4.11 based micro-service. I am using akka-sse. For some reason, I am not receiving any updates on my JavaScript front-end. However, as soon as I terminate or kill the server process, I get some of the messages in the front-end. My code looks like this:
val start = ByteString.empty
val sep = ByteString("\n")
val end = ByteString.empty

import Fill._

implicit val jsonStreamingSupport: JsonEntityStreamingSupport =
  EntityStreamingSupport.json()
    .withFramingRenderer(Flow[ByteString].intersperse(start, sep, end))

import de.heikoseeberger.akkasse.EventStreamMarshalling._

def routes: Route = pathPrefix("subscribe") {
  path("fills") {
    get {
      complete {
        Source.actorPublisher[Fill](FillProvider())
          .map(fill ⇒ sse(fill))
          .keepAlive(1.second, () ⇒ ServerSentEvent.heartbeat)
      }
    }
  }
}

def sse[T: ClassTag](obj: T)(implicit w: JsonWriter[T]): ServerSentEvent = {
  ServerSentEvent(data = w.write(obj).compactPrint,
                  eventType = classTag[T].runtimeClass.getSimpleName)
}
Any pointers on what I could be doing wrong? It seems to me that I am following all the instructions mentioned here.

How to recover from akka.stream.io.Framing$FramingException

On: akka-stream-experimental_2.11 1.0.
We are using Framing.delimiter in a TCP server. When a message arrives with a length greater than maximumFrameLength, a FramingException is thrown, and we can capture it in OnError of the ActorSubscriber.
Server Code:
def bind(address: String, port: Int, target: ActorRef, maxInFlight: Int, maxFrameLength: Int)
        (implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
  val sink = Sink.foreach { conn: Tcp.IncomingConnection =>
    val targetSubscriber = ActorSubscriber[Message](system.actorOf(Props(new TargetSubscriber(target, maxInFlight))))
    val targetSink = Flow[ByteString]
      .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = maxFrameLength, allowTruncation = true))
      .map(raw ⇒ Message(raw))
      .to(Sink(targetSubscriber))
    conn.flow.to(targetSink).runWith(Source(Promise().future))
  }
  val connections = Tcp().bind(address, port)
  connections.to(sink).run()
}
Subscriber code:
class TargetSubscriber(target: ActorRef, maxInFlight: Int) extends ActorSubscriber with ActorLogging {
  private var inFlight = 0

  override protected def requestStrategy = new MaxInFlightRequestStrategy(maxInFlight) {
    override def inFlightInternally = inFlight
  }

  override def receive = {
    case OnNext(msg: Message) ⇒
      target ! msg
      inFlight += 1
    case OnError(t) ⇒
      inFlight -= 1
      log.error(t, "Subscriber encountered error")
    case TargetAck(_) ⇒
      inFlight -= 1
  }
}
Problem:
Messages that are under the max frame length do not flow after this exception for that incoming connection. Killing the client and re-running it works fine.
ActorSubscriber does not honor supervision.
What is the correct way to skip the bad message and continue with the next good message?
Have you tried putting supervision on the targetSink flow instead of on the whole materializer? I don't see it anywhere here, and I believe it should be set on that flow directly.
Still, this is more a guess than science ;)
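In case it helps, a rough sketch of that suggestion, attaching a supervision strategy to the framing flow itself (untested, assuming the ActorAttributes API of akka-stream 1.0; whether Framing.delimiter can actually resume past a bad frame is exactly what would need verifying):

import akka.stream.{ActorAttributes, Supervision}

val decider: Supervision.Decider = {
  case _: Framing.FramingException ⇒ Supervision.Resume // try to skip the offending frame
  case _                           ⇒ Supervision.Stop
}

val targetSink = Flow[ByteString]
  .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = maxFrameLength, allowTruncation = true))
  .withAttributes(ActorAttributes.supervisionStrategy(decider))
  .map(raw ⇒ Message(raw))
  .to(Sink(targetSubscriber))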
I had the same exception reading from a file, and for me it was solved by putting a newline after the last line.

Why is RabbitMQ reader flushing my queue?

I have some simple code to put a few things on a queue:
val factory = new ConnectionFactory()
factory.setHost("localhost")
val connection = factory.newConnection()
val channel = connection.createChannel()
channel.basicPublish("", "myq", null, "AAA".getBytes())
channel.basicPublish("", "myq", null, "BBB".getBytes())
channel.basicPublish("", "myq", null, "CCC".getBytes())
channel.close()
connection.close()
This seems to work. After running this I can do 'rabbitmqctl list_queues' and see myq with 3 items in it.
Now (in a different process) I run reader code to grab just 1 element from the queue:
val factory = new ConnectionFactory()
factory.setHost("localhost")
val connection = factory.newConnection()
val channel = connection.createChannel()
channel.queueDeclare("myq", false, false, false, null)
val consumer = new QueueingConsumer(channel)
channel.basicConsume("myq", true, consumer)
// Grab just one message from queue
val delivery = consumer.nextDelivery()
val message = new String(delivery.getBody())
println(" [x] Received '" + message + "'")
channel.close()
connection.close()
This successfully retrieves the first item on the queue (AAA). But... now when I run 'rabbitmqctl list_queues' I see 0 items in my queue, and of course re-running my reader hangs/waits because the queue is now empty. Why did the other items in the queue disappear?
You don't seem to be using basicQos. With basicQos set to one you can achieve what you want; otherwise RabbitMQ considers the prefetch setting to be unlimited and sends all the messages (or as many as it can) to the process that first did a basicConsume().
More info here: http://www.rabbitmq.com/tutorials/tutorial-two-java.html below "Fair Dispatch".
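A minimal sketch of the reader with a prefetch of one and manual acknowledgements (same Java client as above; note that basicQos only takes effect when autoAck is false):

val factory = new ConnectionFactory()
factory.setHost("localhost")
val connection = factory.newConnection()
val channel = connection.createChannel()
channel.queueDeclare("myq", false, false, false, null)
channel.basicQos(1) // deliver at most one unacknowledged message at a time
val consumer = new QueueingConsumer(channel)
channel.basicConsume("myq", false, consumer) // autoAck = false
val delivery = consumer.nextDelivery()
println(" [x] Received '" + new String(delivery.getBody()) + "'")
// ack so RabbitMQ keeps the remaining messages queued for other readers
channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false)
channel.close()
connection.close()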