"max allowed size 128000 bytes, actual size of encoded class scala" error in akka remoting - serialization

I want to use Akka Remoting to exchange messages over the network between actors, but for large String messages I get the following error:
akka.remote.OversizedPayloadException: Discarding oversized payload sent to Actor :: max allowed size 128000 bytes, actual size of encoded class scala.
How can I fix this limitation?

I added the following configuration and now everything works:
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    maximum-payload-bytes = 30000000 bytes
    netty.tcp {
      hostname = "127.0.0.1"
      port = 2552
      message-frame-size = 30000000b
      send-buffer-size = 30000000b
      receive-buffer-size = 30000000b
      maximum-frame-size = 30000000b
    }
  }
}
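Note that this configuration targets the classic Netty-based transport. On newer Akka versions that use the Artery transport (the default since Akka 2.6), the equivalent knob is the Artery frame size; a minimal sketch, assuming Artery is enabled:
akka {
  remote {
    artery {
      advanced {
        # raises the maximum size of a serialized message sent over remoting
        maximum-frame-size = 30000000b
      }
    }
  }
}
Either way, remoting is not designed for very large payloads; chunking the String or moving bulk data over a side channel is generally the safer design.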

Related

How to send a message with priority to RabbitMQ with StreamBridge

I'm using RabbitMQ. I've defined a queue with priority, I can send messages to this queue with a priority value using the RMQ GUI, and consumers also get the messages in sorted order. But when I try to send the message from my Java code using StreamBridge, I don't know how to specify the priority with the message.
Here's what I have tried:
I added x-max-priority: 10 to the queue while creating it.
Consumer example:
@Bean
public Consumer<Message<String>> testListener() {
    return (m) -> {
        System.out.println("inside consumer with message : " + m);
        System.out.println("headers : " + m.getHeaders());
        System.out.println("payload : " + m.getPayload());
    };
}
Producer example:
@GET
@Path("test/")
public void test(@Context HttpServletRequest request) {
    System.out.println("inside test");
    try {
        String payload = "hello world";
        logger.info("going to send a message : {}", payload);
        int priority = 5;
        Message<String> message = MessageBuilder.withPayload(payload)
                .setHeader("priority", priority)
                .build();
        boolean res = STREAM_BRIDGE.send("testWriter-out-0", message);
        System.out.println(message);
        System.out.println(res);
    } catch (Exception e) {
        logger.error(e);
    }
}
The output of the producer:
-> inside test
-> GenericMessage [payload=hello world, headers={priority=5, id=some_id, timestamp=epoch}]
-> true
The output of the consumer:
-> inside consumer with message : GenericMessage [payload=hello world, headers={amqp_receivedDeliveryMode=PERSISTENT, amqp_receivedExchange=test_exchange, amqp_deliveryTag=1, deliveryAttempt=1, amqp_consumerQueue=test_exchange.ats, amqp_redelivered=false, amqp_receivedRoutingKey=test_exchange, amqp_timestamp=date_time, amqp_messageId=some_id, id=some_id, amqp_consumerTag=some_tag, sourceData=(Body:'hello world' MessageProperties [headers={}, timestamp=date_time, messageId=some_id, contentType=application/json, contentLength=0, receivedDeliveryMode=PERSISTENT, priority=0, redelivered=false, receivedExchange=test_exchange, receivedRoutingKey=test_exchange, deliveryTag=1, consumerTag=some_tag, consumerQueue=test_exchange.ats]), contentType=application/json, timestamp=epoch}]
-> headers : {amqp_receivedDeliveryMode=PERSISTENT, amqp_receivedExchange=test_exchange, amqp_deliveryTag=1, deliveryAttempt=1, amqp_consumerQueue=test_exchange.ats, amqp_redelivered=false, amqp_receivedRoutingKey=test_exchange, amqp_timestamp=date_time, amqp_messageId=some_id, id=some_id, amqp_consumerTag=tag, sourceData=(Body:'hello world' MessageProperties [headers={}, timestamp=date_time, messageId=some_id, contentType=application/json, contentLength=0, receivedDeliveryMode=PERSISTENT, priority=0, redelivered=false, receivedExchange=test_exchange, receivedRoutingKey=test_exchange, deliveryTag=1, consumerTag=tag, consumerQueue=test_exchange.ats]), contentType=application/json, timestamp=epoch}
-> payload : hello world
So the message reaches RMQ and the consumer also gets it, but in the RMQ GUI, when I perform a Get Message operation on the queue, I see this result:
Message 1
The server reported 0 messages remaining.
Exchange: test_exchange
Routing Key: test_exchange
Redelivered: ○
Properties:
timestamp: timestamp
message_id: some_id
priority: 0
delivery_mode: 2
headers:
content_type: application/json
Payload: hello world (11 bytes, encoding: string)
As we can see in the above result, the priority is set to 0 by RMQ (and hence the consumer gets the messages in FIFO order, not priority order), and inside headers only one entry is present, "content_type: application/json". So I think the priority is not part of the headers but part of the properties; how do I set message properties using StreamBridge?
To conclude, I'm trying to figure out how to set the priority of a message dynamically while sending it using StreamBridge. Any help would be appreciated; thanks in advance!
Please consider using the latest Spring Cloud Stream: https://spring.io/projects/spring-cloud-stream#learn.
Apparently your spring-cloud-starter-stream-rabbit = 3.0.3.RELEASE is old enough to suffer from this issue: https://github.com/spring-cloud/spring-cloud-stream/issues/1931.
I have just tested with the latest version and got the proper priority property on the message posted into the RabbitMQ queue by the mentioned StreamBridge.
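For reference, a minimal sketch of the upgrade, assuming Maven; the release train shown is only an example, so pick the latest from the project page linked above:
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <!-- example release train, not necessarily the one tested above -->
            <version>2021.0.3</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
        <!-- version is managed by the BOM -->
    </dependency>
</dependencies>
With a fixed version on the classpath, the "priority" header set via MessageBuilder should be mapped onto the AMQP priority property without further code changes.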

Spring Cloud Stream 3.0 with batch-mode true: consumer fails with "Can't convert value of class"

Config:
spring:
  kafka:
    consumer:
      max-poll-records: 5
  cloud:
    stream:
      instance-count: 5
      instance-index: 0
      kafka:
        binder:
          brokers: 127.0.0.1:9092
          auto-create-topics: true
          auto-add-partitions: true
          min-partition-count: 5
      bindings:
        log-data-in:
          destination: log-data1
          group: log-data-group
          contentType: text/plain
          consumer:
            partitioned: true
            batch-mode: true
        log-data-out:
          destination: log-data1
          contentType: text/plain
          producer:
            use-native-encoding: true
            partitionCount: 5
#            configuration:
#              key.serializer: org.apache.kafka.common.serialization.StringSerializer
#              value.serializer: org.apache.kafka.common.serialization.ByteArraySerializer
Code that sends to Kafka:
LogData logData = new LogData();
logData.setId(1);
logData.setVer("22");
MessageChannel messageChannel = logDataStreams.outboundLogDataStreams();
boolean sent = messageChannel.send(MessageBuilder.withPayload(logData)
        .setHeader("partitionKey", key).build());
The consumer listens on the Kafka topic and consumes the data. I think the error is in how the data type is handled: nothing seems to be missing from the configuration, so why does the type conversion fail when consuming from Kafka? The raw bytes apparently cannot be converted to my entity class.
The following consumer code is where the error is reported:
@StreamListener(LogDataStreams.INPUT_LOG_DATA)
public void handleLogData(List<LogData> messages) {
    System.out.println(messages);
    messages.parallelStream().forEach(item -> {
        System.out.println(item);
    });
}
The channel identifiers are defined in this interface:
public interface LogDataStreams {
    String INPUT_LOG_DATA = "log-data-in";
    String OUTPUT_LOG_DATA = "log-data-out";
    String INPUT_LOG_SC = "log-sc-in";
    String OUTPUT_LOG_SC = "log-sc-out";
    String INPUT_LOG_BEHAVIOR = "log-behavior-in";
    String OUTPUT_LOG_BEHAVIOR = "log-behavior-out";

    @Input(INPUT_LOG_DATA)
    SubscribableChannel inboundLogDataStreams();

    @Output(OUTPUT_LOG_DATA)
    MessageChannel outboundLogDataStreams();
}
The error from the last run:
Caused by: org.apache.kafka.common.errors.SerializationException: Can't convert value of class com.xx.core.data.model.LogData to class org.apache.kafka.common.serialization.ByteArraySerializer specified in value.serializer
Caused by: java.lang.ClassCastException: com.xx.core.data.model.LogData cannot be cast to [B
Please help me figure out how to solve this problem.
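One hedged observation: with use-native-encoding: true, the binder delegates serialization to the Kafka producer itself, and the binder's default ByteArraySerializer cannot serialize a LogData POJO, which matches the ClassCastException above (LogData cannot be cast to byte[]). A minimal sketch of one possible fix, assuming LogData should travel as JSON and spring-kafka is on the classpath:
spring:
  cloud:
    stream:
      kafka:
        bindings:
          log-data-out:
            producer:
              configuration:
                key.serializer: org.apache.kafka.common.serialization.StringSerializer
                value.serializer: org.springframework.kafka.support.serializer.JsonSerializer
Alternatively, removing use-native-encoding: true lets Spring Cloud Stream's own converters turn the payload into JSON. In batch mode the consumer then receives a List of raw payloads, so it needs a matching deserializer or converter for LogData as well.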

What is the correct use of consumer groups in Spring Cloud Data Flow with RabbitMQ?

A follow-up to this question:
one SCDF source, 2 processors but only 1 processes each item
The two processors (del-1 and del-2) in the picture are receiving the same data within milliseconds of each other. I'm trying to arrange things so that del-2 never receives the same item as del-1 and vice versa. So obviously I've got something configured incorrectly, but I'm not sure where.
My processor has the following application.properties:
spring.application.name=${vcap.application.name:sample-processor}
info.app.name=@project.artifactId@
info.app.description=@project.description@
info.app.version=@project.version@
management.endpoints.web.exposure.include=health,info,bindings
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.security.servlet.SecurityAutoConfiguration
spring.cloud.stream.bindings.input.group=input
Is "spring.cloud.stream.bindings.input.group" specified correctly?
Here's the processor code:
@Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
public Object transform(String inputStr) throws InterruptedException {
    ApplicationLog log = new ApplicationLog(this, "timerMessageSource");
    String message = " I AM [" + inputStr + "] AND I HAVE BEEN PROCESSED!!!!!!!";
    log.info("SampleProcessor.transform() incoming inputStr=" + inputStr);
    return message;
}
Is the @Transformer annotation the proper way to link this bit of code with "spring.cloud.stream.bindings.input.group" from application.properties? Are there any other annotations necessary?
Here's my source:
private String format = "EEEEE dd MMMMM yyyy HH:mm:ss.SSSZ";

@Bean
@InboundChannelAdapter(value = Source.OUTPUT, poller = @Poller(fixedDelay = "1000", maxMessagesPerPoll = "1"))
public MessageSource<String> timerMessageSource() {
    ApplicationLog log = new ApplicationLog(this, "timerMessageSource");
    String message = new SimpleDateFormat(format).format(new Date());
    log.info("SampleSource.timeMessageSource() message=[" + message + "]");
    return () -> new GenericMessage<>(new SimpleDateFormat(format).format(new Date()));
}
I'm confused about the "value = Source.OUTPUT". Does this mean my processor needs to be named differently?
Is the inclusion of @Poller causing me a problem somehow?
This is how I define the two processor streams (del-1 and del-2) in the SCDF shell:
stream create del-1 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.applicationMetrics.destination=metrics > :merge"
stream create del-2 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.applicationMetrics.destination=metrics > :merge"
Do I need to do anything differently there?
All of this is running in Docker/K8s.
RabbitMQ is provided by bitnami/rabbitmq:3.7.2-r1 and is configured with the following properties:
RABBITMQ_USERNAME: user
RABBITMQ_PASSWORD: <redacted>
RABBITMQ_ERL_COOKIE: <redacted>
RABBITMQ_NODE_PORT_NUMBER: 5672
RABBITMQ_NODE_TYPE: stats
RABBITMQ_NODE_NAME: rabbit@localhost
RABBITMQ_CLUSTER_NODE_NAME:
RABBITMQ_DEFAULT_VHOST: /
RABBITMQ_MANAGER_PORT_NUMBER: 15672
RABBITMQ_DISK_FREE_LIMIT: "6GiB"
Are any other environment variables necessary?
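A hedged suggestion, since no accepted fix is shown above: if the goal is competing consumers on the shared :split destination, the usual Spring Cloud Stream approach is to put both processors in the same consumer group; distinct streams otherwise get distinct, stream-derived groups, and each group receives its own copy of every message. A sketch (split-group is an arbitrary placeholder name):
stream create del-1 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.applicationMetrics.destination=metrics --spring.cloud.stream.bindings.input.group=split-group > :merge"
stream create del-2 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.applicationMetrics.destination=metrics --spring.cloud.stream.bindings.input.group=split-group > :merge"
With both apps bound to the same group, RabbitMQ delivers each message from :split to exactly one of them.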

How to change the file size limit of the Clamd service for nClam

The default limits: file size limit set to 26214400 bytes.
If I scan a file larger than 25 MB, this error occurs:
The maximum stream size of 26214400 bytes has been exceeded.
I tried changing:
public ClamClient(string server, int port)
{
    MaxChunkSize = 131072;     // 128 KB
    MaxStreamSize = 209715200; // 200 MB; was 26214400 (25 MB)
    Server = server;
    Port = port;
}
But scanning a file then fails with:
Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
How do I change the file size limit of the Clamd service on Windows?
Thanks, all.
You need to change the clamd configuration ("clamd.conf"):
StreamMaxLength 50M
You also have to create the ClamClient instance with a higher MaxStreamSize:
var client = new ClamClient("localhost", 3310)
{
    MaxStreamSize = 52428800 // 50 MB, matching StreamMaxLength above
};
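For completeness, a sketch of the relevant clamd.conf limits if larger files (e.g. the 200 MB tried in the question) should scan; these option names are standard clamd options, but the file location on Windows depends on the install:
# clamd.conf
StreamMaxLength 200M
MaxScanSize 200M
MaxFileSize 200M
After editing clamd.conf, restart the clamd Windows service so the new limits take effect, and keep MaxStreamSize on the ClamClient at or below StreamMaxLength.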

Why is my Destination IP Address seen as my Source IP Address when attempting to connect from a Handheld Device?

I am trying to call a REST method on a server from a handheld device with this code:
public static void WriteIt2(string fileName, string data)
{
    // "fileName" is what the file to save will be named; "data" is the contents of that file
    if (File.Exists(fileName))
    {
        MessageBox.Show(String.Format("{0} exists - deleting", fileName));
        File.Delete(fileName);
    }
    string justFileName = Path.GetFileNameWithoutExtension(fileName);
    String uri = String.Format("http://192.168.125.50:21608/api/inventory/sendXML/duckbilled/platypus/{0}", justFileName);
    SendXMLFile2(uri, data);
}
public static void SendXMLFile2(string uri, string data)
{
    HttpWebRequest req = (HttpWebRequest)WebRequest.Create(uri);
    req.Method = "POST"; // was "Post"; HTTP method names are case-sensitive, so use the canonical form
    req.ContentType = "text/plain; charset=utf-8";
    byte[] encodedBytes = Encoding.UTF8.GetBytes(data);
    req.ContentLength = encodedBytes.Length;
    Stream requestStream = req.GetRequestStream();
    requestStream.Write(encodedBytes, 0, encodedBytes.Length);
    requestStream.Close();
    WebResponse result = req.GetResponse();
    MessageBox.Show(result.ToString());
}
The breakpoint in my server code is not getting reached, and I'm trying to find out why. I'm using RawCap and Wireshark to see exactly what's going on. After running RawCap and opening the .pcap file it creates in Wireshark, and then searching for any appearance of the port I'm trying to access (via Edit > Find Packet > Packet Bytes with "21608" as the search string), I found this in the data of the only packet that contains that string:
SBCMYReportProviderStatusMessage#bgM
%<NetworkShield`http://192.168.125.50:21608/api/inventory/sendXML/duckbilled/platypus/INV_0000003.0916201413022
6z )
...so the request sent by the code running on the handheld device appears to be picked up by Wireshark, but Wireshark shows 192.168.125.50 as the Source and 192.168.125.87 as the Destination (Protocol == TCP, where I would rather expect HTTP).
192.168.125.50 is my PC's IP address (it should be the Destination, not the Source, right?).
192.168.125.87, the Destination, is "BUCK, UNIQUE" according to "nbtstat -a 192.168.125.87"; I don't know what "BUCK" is (obviously, a computer on the local network).
The IP address of the handheld is 192.168.55.101.
Why does Wireshark not show 192.168.55.101 as the Source and 192.168.125.50 as the Destination? Is it possible to determine the reason for the failure (the REST method not getting hit) from this Wireshark data?
UPDATE
By right-clicking the packet record in Wireshark and selecting "Follow TCP Stream", I get the following:
SBCM.................w.......R.e.p.o.r.t.P.r.o.v.i.d.e.r.S.t.a.t.u.s.M.e.s.s.a.g.e.#...bg.M.....%.<....F.i.l.e.S.y.s.t.e.m.S.h.i.e.l.d.....l...C.:.\.W.i.n.d.o.w.s.\.a.s.s.e.m.b.l.y.\.N.a.t.i.v.e.I.m.a.g.e.s._.v.2...0...5.0.7.2.7._.3.2.. . . . [ much more of the same type of thing elided ]
.................SBCM.................Y.......R.e.p.o.r.t.P.r.o.v.i.d.e.r.S.t.a.t.u.s.M.e.s.s.a.g.e.#...bg.M.....%.<
...N.e.t.w.o.r.k.S.h.i.e.l.d.....`...h.t.t.p.:././.1.9.2...1.6.8...1.2.5...5.0.:.2.1.6.0.8./.a.p.i./.i.n.v.e.n.t.o.r.y./.s.e.n.d.X.M.L./.d.u.c.k.b.i.l.l.e.d./.p.l.a.t.y.p.u.s./.I.N.V._.0.0.0.0.0.0.3...0.9.1.6.2.0.1.4.1.3.0.2.2.6.....z ......)...SBCM.........................R.e.p.o.r.t.P.r.o.v.i.d.e.r.S.t.a.t.u.s.M.e.s.s.a.g.e.#...bg.M.....%.<....W.e.b.R.e.p........................JSBCM.........................R.e.p.o.r.t.M.a.i.n.S.t.a.t.u.s.M.e.s.s.a.g.e.#...bg.M.....%.<............................$.....O6.........R8e....E.......C.......C....B_.................SBCM.........................R.e.p.o.r.t.P.r.o.v.i.d.e.r.S.t.a.t.u.s.M.e.s.s.a.g.e.#...bg.M.....%.<....W.e.b.R.e.p........................JSBCM.........................R.e.p.o.r.t.M.a.i.n.S.t.a.t.u.s.M.e.s.s.a.g.e.#...bg.M.....%.<............................$.....O6.........R8e....E.......C.......C....B_.................
I cannot make heads or tails of this; I don't know what I should expect to see after my URI, and I don't see any "ack" indicating either success or failure.
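One way to narrow this down (a sketch using standard Wireshark display-filter syntax; the addresses are the ones quoted above): filter the capture to just the handheld and the service port, and see whether any packets from the device reach the PC at all:
ip.addr == 192.168.55.101 && tcp.port == 21608
If nothing matches, the request never arrived on the captured interface (RawCap only captures the PC's own interfaces), and the packets above, mentioning "NetworkShield" and "FileSystemShield", may belong to some local security software's reporting traffic rather than to the device's HTTP request.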