Spring Cloud Stream 3.0 batch-mode consumer fails with "Can't convert value of class" serialization error

Configuration:
spring:
  kafka:
    consumer:
      max-poll-records: 5
  cloud:
    stream:
      instance-count: 5
      instance-index: 0
      kafka:
        binder:
          brokers: 127.0.0.1:9092
          auto-create-topics: true
          auto-add-partitions: true
          min-partition-count: 5
      bindings:
        log-data-in:
          destination: log-data1
          group: log-data-group
          contentType: text/plain
          consumer:
            partitioned: true
            batch-mode: true
        log-data-out:
          destination: log-data1
          contentType: text/plain
          producer:
            use-native-encoding: true
            partitionCount: 5
            # configuration:
            #   key.serializer: org.apache.kafka.common.serialization.StringSerializer
            #   value.serializer: org.apache.kafka.common.serialization.ByteArraySerializer
Producer code that sends to Kafka:
LogData logData = new LogData();
logData.setId(1);
logData.setVer("22");
MessageChannel messageChannel = logDataStreams.outboundLogDataStreams();
boolean sent = messageChannel.send(MessageBuilder.withPayload(logData).setHeader("partitionKey", key).build());
The consumer listens on the Kafka topic. I don't think any setting is missing, so why is there a type-conversion error when consuming from Kafka? The raw bytes apparently cannot be converted into my entity class.
The code below consumes the data; this is where the error is reported:
@StreamListener(LogDataStreams.INPUT_LOG_DATA)
public void handleLogData(List<LogData> messages) {
    System.out.println(messages);
    messages.parallelStream().forEach(item -> {
        System.out.println(item);
    });
}
The interface that defines the channel identifiers:
public interface LogDataStreams {

    String INPUT_LOG_DATA = "log-data-in";
    String OUTPUT_LOG_DATA = "log-data-out";
    String INPUT_LOG_SC = "log-sc-in";
    String OUTPUT_LOG_SC = "log-sc-out";
    String INPUT_LOG_BEHAVIOR = "log-behavior-in";
    String OUTPUT_LOG_BEHAVIOR = "log-behavior-out";

    @Input(INPUT_LOG_DATA)
    SubscribableChannel inboundLogDataStreams();

    @Output(OUTPUT_LOG_DATA)
    MessageChannel outboundLogDataStreams();
}
Error in last run:
Caused by: org.apache.kafka.common.errors.SerializationException: Can't convert value of class com.xx.core.data.model.LogData to class org.apache.kafka.common.serialization.ByteArraySerializer specified in value.serializer
Caused by: java.lang.ClassCastException: com.xx.core.data.model.LogData cannot be cast to [B
Please help me figure out how to solve this problem.
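Update: judging from the stack trace, the exception seems to be thrown on the producer side rather than in the consumer: with use-native-encoding: true the binder hands the LogData POJO straight to the Kafka producer, whose default ByteArraySerializer only accepts byte[]. I have not verified this yet, but one possible fix sketch would be to keep native encoding and point the binding's value.serializer at a serializer that can handle the POJO, for example spring-kafka's JsonSerializer (the property path below is how I understand the Kafka binder's per-binding producer configuration):

spring:
  cloud:
    stream:
      kafka:
        bindings:
          log-data-out:
            producer:
              configuration:
                value.serializer: org.springframework.kafka.support.serializer.JsonSerializer

Alternatively, dropping use-native-encoding: true and using contentType: application/json should let the framework convert LogData to bytes before the Kafka serializer sees it; the consumer side would then also need a matching contentType (or a JsonDeserializer if native decoding is used) so that the batch of byte[] can be converted back into List<LogData>.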

Related

Redis Reactive Streams Subscriber Thread Hangs

I am trying to use the Spring Boot Redis Reactive Stream support to subscribe to a stream as a listener. When data is inserted into the stream, the listener passes it to the client over a gRPC stream. I keep a pointer, which is just a Redis value, to track the last record that was delivered to the client. The thread randomly gets blocked, and I get a timeout, when I set this pointer in Redis.
Mainly I use this pointer so that if the client reconnects after some time, I can deliver the data from the last record sent up to the current data. The thread is getting blocked at template.opsForValue().set(pointerKey, msg.getId().toString()).block(Duration.ofSeconds(5)).
Please let me know if anything is wrong in the code below. If I post 10 records to the stream, I get the error after receiving 5 records.
Code
public void subscribe(){
    String channelId = this.streamRequest.getTopic();
    String identifier = this.streamRequest.getIdentifier();
    boolean isNew = this.streamRequest.getNew();
    String pointerKey = channelId + "_" + identifier + "_pointer";
    StreamOffset<String> stringStreamOffset = StreamOffset.fromStart(channelId);
    if(isNew){
        // If client want to read data from the start
        // Removed the pointer
        template.opsForValue().delete(pointerKey).block();
    } else {
        String id = template.opsForValue().get(pointerKey).block();
        stringStreamOffset = id != null ? StreamOffset.create(channelId, ReadOffset.from(id)) : StreamOffset.fromStart(channelId);
    }
    logger.info("[SC] subscribed {}", this.streamRequest);
    Flux<ObjectRecord<String, String>> receiver = this.streamReceive.receive(stringStreamOffset);
    disposable = receiver.subscribe(msg -> {
        logger.info("Processing message {}", msg.getValue());
        String value = msg.getValue();
        StreamResponse streamResponse = StreamResponse.newBuilder().setData(value).build();
        try{
            logger.info("[SC] posting data to the grpc client topic {}", this.streamRequest);
            this.responseObserver.onNext(streamResponse);
            logger.info("[SC] Successfully posted data to the grpc client {}", this.streamRequest);
            logger.info("[SC] Updating pointer {}", pointerKey);
            template.opsForValue().set(pointerKey, msg.getId().toString())
                    .block(Duration.ofSeconds(5));
            logger.info("[SC] pointer update completed {}", pointerKey);
        }catch (Exception ex){
            logger.error("Error:{}", ex.getMessage());
            this.responseObserver.onError(ex.getCause());
            close();
        }
    });
}
Error:
Name: lettuce-nioEventLoop-4-1
State: TIMED_WAITING on java.util.concurrent.CountDownLatch$Sync@3c9ebf7a
Total blocked: 2 Total waited: 60
Stack trace:
java.base@17.0.2/jdk.internal.misc.Unsafe.park(Native Method)
java.base@17.0.2/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:252)
java.base@17.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:717)
java.base@17.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1074)
java.base@17.0.2/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:276)
app//reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:121)
app//reactor.core.publisher.Mono.block(Mono.java:1731)
app//ai.jiffy.message.publisher.ws.StreamConnection.setPointer(StreamConnection.java:68)
app//ai.jiffy.message.publisher.ws.StreamConnection.lambda$new$0(StreamConnection.java:54)
app//ai.jiffy.message.publisher.ws.StreamConnection$$Lambda$1318/0x0000000801553610.accept(Unknown Source)
app//reactor.core.publisher.LambdaSubscriber.onNext(LambdaSubscriber.java:160)
app//reactor.core.publisher.FluxCreate$BufferAsyncSink.drain(FluxCreate.java:793)
app//reactor.core.publisher.FluxCreate$BufferAsyncSink.next(FluxCreate.java:718)
app//reactor.core.publisher.FluxCreate$SerializedFluxSink.next(FluxCreate.java:154)
app//org.springframework.data.redis.stream.DefaultStreamReceiver$StreamSubscription.onStreamMessage(DefaultStreamReceiver.java:398)
app//org.springframework.data.redis.stream.DefaultStreamReceiver$StreamSubscription.access$300(DefaultStreamReceiver.java:210)
app//org.springframework.data.redis.stream.DefaultStreamReceiver$StreamSubscription$1.onNext(DefaultStreamReceiver.java:360)
app//org.springframework.data.redis.stream.DefaultStreamReceiver$StreamSubscription$1.onNext(DefaultStreamReceiver.java:351)
app//reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
app//reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
app//reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
app//reactor.core.publisher.FluxUsingWhen$UsingWhenSubscriber.onNext(FluxUsingWhen.java:345)
app//reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:250)
app//reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
app//reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:250)
app//reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
app//io.lettuce.core.RedisPublisher$ImmediateSubscriber.onNext(RedisPublisher.java:886)
app//io.lettuce.core.RedisPublisher$RedisSubscription.onNext(RedisPublisher.java:291)
app//io.lettuce.core.output.StreamingOutput$Subscriber.onNext(StreamingOutput.java:64)
app//io.lettuce.core.output.StreamReadOutput.complete(StreamReadOutput.java:110)
app//io.lettuce.core.protocol.RedisStateMachine.doDecode(RedisStateMachine.java:343)
app//io.lettuce.core.protocol.RedisStateMachine.decode(RedisStateMachine.java:295)
app//io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:841)
app//io.lettuce.core.protocol.CommandHandler.decode0(CommandHandler.java:792)
app//io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:766)
app//io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:658)
app//io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:598)
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
app//io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
app//io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
app//io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
app//io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
app//io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
app//io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
app//io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
app//io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
app//io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
app//io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
app//io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.base@17.0.2/java.lang.Thread.run(Thread.java:833)
Thanks in advance.
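Update: the dump shows Mono.block() being invoked on the lettuce-nioEventLoop thread, i.e. the same thread that has to deliver the Redis response, which would explain the timeout. Would something along these lines, where the pointer update is chained into the pipeline instead of blocked on, be the right direction? (This is just a sketch; it assumes template is a reactive Redis template with String values and reuses the other fields from my code above.)

// Sketch: deliver each record and update the pointer without calling block()
disposable = receiver
        .concatMap(msg -> {
            StreamResponse streamResponse =
                    StreamResponse.newBuilder().setData(msg.getValue()).build();
            responseObserver.onNext(streamResponse);
            // reactive SET, executed as part of the pipeline instead of block()
            return template.opsForValue().set(pointerKey, msg.getId().toString());
        })
        .doOnError(ex -> {
            logger.error("Error:{}", ex.getMessage());
            responseObserver.onError(ex);
            close();
        })
        .subscribe();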

NPE while deserializing Avro messages in Kafka Streams

I wrote a small Java class to test the consumption of an Avro-encoded Kafka topic.
Properties appProps = new Properties();
appProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "http://***kfk14bro1.lc:9092");
appProps.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://***kfk14str1.lc:8081");
appProps.put(StreamsConfig.APPLICATION_ID_CONFIG, "consumer");
appProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
appProps.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG, LogAndContinueExceptionHandler.class);

StreamsBuilder streamsBuilder = new StreamsBuilder();
streamsBuilder.stream("coordinates", Consumed.with(Serdes.String(), new GenericAvroSerde()))
        .peek((key, value) -> System.out.println("key=" + key + ", value=" + value));

new KafkaStreams(streamsBuilder.build(), appProps).start();
When I run this class, the serde configs are logged fine, as can be seen in the log below:
[consumer-56b0e0ca-d336-45cc-b388-46a68dbfab8b-StreamThread-1] INFO io.confluent.kafka.serializers.KafkaAvroSerializerConfig - KafkaAvroSerializerConfig values:
schema.registry.url = [http://***kfk14str1.lc:8081]
basic.auth.user.info = [hidden]
auto.register.schemas = true
max.schemas.per.subject = 1000
basic.auth.credentials.source = URL
schema.registry.basic.auth.user.info = [hidden]
value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
[normal-consumer-56b0e0ca-d336-45cc-b388-46a68dbfab8b-StreamThread-1] INFO io.confluent.kafka.serializers.KafkaAvroDeserializerConfig - KafkaAvroDeserializerConfig values:
schema.registry.url = [http://***kfk14str1.lc:8081]
basic.auth.user.info = [hidden]
auto.register.schemas = true
max.schemas.per.subject = 1000
basic.auth.credentials.source = URL
schema.registry.basic.auth.user.info = [hidden]
specific.avro.reader = false
value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
but no messages are consumed, and the following is logged for every message:
[normal-consumer-56b0e0ca-d336-45cc-b388-46a68dbfab8b-StreamThread-1] WARN org.apache.kafka.streams.errors.LogAndContinueExceptionHandler - Exception caught during Deserialization, taskId: 0_0, topic: coordinates, partition: 0, offset: 782205986
org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id 83
Caused by: java.lang.NullPointerException
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:116)
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:88)
at io.confluent.kafka.serializers.KafkaAvroDeserializer.deserialize(KafkaAvroDeserializer.java:55)
at io.confluent.kafka.streams.serdes.avro.GenericAvroDeserializer.deserialize(GenericAvroDeserializer.java:63)
at io.confluent.kafka.streams.serdes.avro.GenericAvroDeserializer.deserialize(GenericAvroDeserializer.java:39)
at org.apache.kafka.common.serialization.Deserializer.deserialize(Deserializer.java:58)
at org.apache.kafka.streams.processor.internals.SourceNode.deserializeValue(SourceNode.java:60)
But I am able to read just fine with the Avro console consumer, so I know there is nothing wrong with the data written to the topic. The command below prints the records fine:
~/kafka/confluent-5.1.2/bin/kafka-avro-console-consumer --bootstrap-server http://***kfk14bro1.lc:9092 --topic coordinates --property schema.registry.url=http://***kfk14str1.lc:8081 --property auto.offset.reset=latest
When you instantiate an Avro Serde yourself, it is not automatically configured with the schema-registry URL.
So either you configure it yourself, or you define default serdes by adding:
appProps.setProperty(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
appProps.setProperty(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, GenericAvroSerde.class.getName());
And by removing
Consumed.with(Serdes.String(), new GenericAvroSerde())
To configure the Serde yourself, use the following code (adapt it to your situation):
GenericAvroSerde genericAvroSerde = new GenericAvroSerde();
boolean isKeySerde = false;
genericAvroSerde.configure(
        Collections.singletonMap(
                AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG,
                "http://confluent-schema-registry-server:8081/"),
        isKeySerde);
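The manually configured serde can then be plugged back into the topology from the question, for example:

// reuse the configured serde in place of `new GenericAvroSerde()`
streamsBuilder.stream("coordinates", Consumed.with(Serdes.String(), genericAvroSerde))
        .peek((key, value) -> System.out.println("key=" + key + ", value=" + value));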

What is the correct use of consumer groups in Spring Cloud Stream/Data Flow and RabbitMQ?

A follow-up to this:
one SCDF source, 2 processors but only 1 processes each item
The 2 processors (del-1 and del-2) in the picture are receiving the same data within milliseconds of each other. I'm trying to rig this so del-2 never receives the same thing as del-1 and vice versa. So obviously I've got something configured incorrectly but I'm not sure where.
My processor has the following application.properties
spring.application.name=${vcap.application.name:sample-processor}
info.app.name=@project.artifactId@
info.app.description=@project.description@
info.app.version=@project.version@
management.endpoints.web.exposure.include=health,info,bindings
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.security.servlet.SecurityAutoConfiguration
spring.cloud.stream.bindings.input.group=input
Is "spring.cloud.stream.bindings.input.group" specified correctly?
Here's the processor code:
@Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
public Object transform(String inputStr) throws InterruptedException{
    ApplicationLog log = new ApplicationLog(this, "timerMessageSource");
    String message = " I AM [" + inputStr + "] AND I HAVE BEEN PROCESSED!!!!!!!";
    log.info("SampleProcessor.transform() incoming inputStr="+inputStr);
    return message;
}
Is the @Transformer annotation the proper way to link this bit of code with "spring.cloud.stream.bindings.input.group" from application.properties? Are there any other annotations necessary?
Here's my source:
private String format = "EEEEE dd MMMMM yyyy HH:mm:ss.SSSZ";

@Bean
@InboundChannelAdapter(value = Source.OUTPUT, poller = @Poller(fixedDelay = "1000", maxMessagesPerPoll = "1"))
public MessageSource<String> timerMessageSource() {
    ApplicationLog log = new ApplicationLog(this, "timerMessageSource");
    String message = new SimpleDateFormat(format).format(new Date());
    log.info("SampleSource.timeMessageSource() message=["+message+"]");
    return () -> new GenericMessage<>(new SimpleDateFormat(format).format(new Date()));
}
I'm confused about the "value = Source.OUTPUT". Does this mean my processor needs to be named differently?
Is the inclusion of @Poller causing me a problem somehow?
This is how I define the 2 processor streams (del-1 and del-2) in SCDF shell:
stream create del-1 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.applicationMetrics.destination=metrics > :merge"
stream create del-2 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.applicationMetrics.destination=metrics > :merge"
Do I need to do anything differently there?
All of this is running in Docker/K8s.
RabbitMQ is given by bitnami/rabbitmq:3.7.2-r1 and is configured with the following props:
RABBITMQ_USERNAME: user
RABBITMQ_PASSWORD: <redacted>
RABBITMQ_ERL_COOKIE: <redacted>
RABBITMQ_NODE_PORT_NUMBER: 5672
RABBITMQ_NODE_TYPE: stats
RABBITMQ_NODE_NAME: rabbit@localhost
RABBITMQ_CLUSTER_NODE_NAME:
RABBITMQ_DEFAULT_VHOST: /
RABBITMQ_MANAGER_PORT_NUMBER: 15672
RABBITMQ_DISK_FREE_LIMIT: "6GiB"
Are any other environment variables necessary?
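Update: one thing I am wondering about, in case it is relevant: if del-1 and del-2 are supposed to compete for messages from :split, my understanding (which may be wrong) is that both streams would need to share the same consumer group on the processor's input binding, instead of each stream getting its own default group, along these lines:
stream create del-1 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.input.group=shared-del-group > :merge"
stream create del-2 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.input.group=shared-del-group > :merge"
The group name shared-del-group is just a placeholder I made up for illustration.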

"max allowed size 128000 bytes, actual size of encoded class scala" error in akka remoting

I want to use Akka Remoting to exchange messages over the network between actors, but for large String messages I get the following error:
akka.remote.OversizedPayloadException: Discarding oversized payload sent to Actor :: max allowed size 128000 bytes, actual size of encoded class scala.
How can I fix this limitation?
I added the following configuration and now everything is OK:
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    maximum-payload-bytes = 30000000 bytes
    netty.tcp {
      hostname = "127.0.0.1"
      port = 2552
      message-frame-size = 30000000b
      send-buffer-size = 30000000b
      receive-buffer-size = 30000000b
      maximum-frame-size = 30000000b
    }
  }
}

How do I create a TCP receiver that only consumes messages using Akka Streams?

We are on: akka-stream-experimental_2.11 1.0.
Inspired by the example, we wrote a TCP receiver as follows:
def bind(address: String, port: Int, target: ActorRef)
        (implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
  val sink = Sink.foreach[Tcp.IncomingConnection] { conn =>
    val serverFlow = Flow[ByteString]
      .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 256, allowTruncation = true))
      .map(message => {
        target ? new Message(message); ByteString.empty
      })
    conn handleWith serverFlow
  }
  val connections = Tcp().bind(address, port)
  connections.to(sink).run()
}
However, our intention was to have the receiver not respond at all and only sink the messages (the TCP message publisher does not care about a response).
Is it even possible to not respond at all, given that akka.stream.scaladsl.Tcp.IncomingConnection takes a flow of type Flow[ByteString, ByteString, Unit]?
If yes, some guidance would be much appreciated. Thanks in advance.
One attempt, shown below, passes my unit tests, but I'm not sure if it's the best idea:
def bind(address: String, port: Int, target: ActorRef)
        (implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
  val sink = Sink.foreach[Tcp.IncomingConnection] { conn =>
    val targetSubscriber = ActorSubscriber[Message](system.actorOf(Props(new TargetSubscriber(target))))
    val targetSink = Flow[ByteString]
      .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 256, allowTruncation = true))
      .map(Message(_))
      .to(Sink(targetSubscriber))
    conn.flow.to(targetSink).runWith(Source(Promise().future))
  }
  val connections = Tcp().bind(address, port)
  connections.to(sink).run()
}
You are on the right track. To keep the possibility of closing the connection at some point, you may want to keep the promise and complete it later on. Once it is completed with an element, that element is published by the source. However, since you don't want any element to be published on the connection, you can use drop(1) to make sure the source will never emit any element.
Here's an updated version of your example (untested):
val promise = Promise[ByteString]()
// this source will complete when the promise is fulfilled
// or it will complete with an error if the promise is completed with an error
val completionSource = Source(promise.future).drop(1)
completionSource // only used to complete later
.via(conn.flow) // I reordered the flow for better readability (arguably)
.runWith(targetSink)
// to close the connection later complete the promise:
def closeConnection() = promise.success(ByteString.empty) // dummy element, will be dropped
// alternatively to fail the connection later, complete with an error
def failConnection() = promise.failure(new RuntimeException)