Can't consume queue from topic exchange RabbitMQ - asp.net-core

I have a weird issue with RabbitMQ: my publisher service sends a message to the queue and I can see it there, but in my consumer I can't get it, even though I connect with the same routing key and the exact queue name.
var factory = new ConnectionFactory()
{
    HostName = "myhst",
    UserName = "payoutservice",
    Password = pass
};
using (var connection = factory.CreateConnection())
using (_channel = connection.CreateModel())
{
    _channel.ExchangeDeclare("payin-exchange", ExchangeType.Topic);
    _channel.QueueDeclare("OpenPaymentReceiveResponseQueue", durable: true, exclusive: false, autoDelete: false, arguments: null);
    _channel.QueueBind(queue: "OpenPaymentReceiveResponseQueue",
                       exchange: "payin-exchange",
                       routingKey: "payin");
    Console.WriteLine(" [*] Waiting for messages. To exit press CTRL+C");
    var consumer = new EventingBasicConsumer(_channel);
    consumer.Received += (model, ea) =>
    {
        var body = ea.Body.ToArray();
        var message = Encoding.UTF8.GetString(body);
        var routingKey = ea.RoutingKey;
    };
The last part never goes inside consumer.Received; it doesn't even hit my breakpoint in there.

Have you included an explicit call for the consumer to consume from the queue? E.g.,
channel.BasicConsume(queue: queueName,
                     autoAck: true,
                     consumer: consumer);
Note also that your using blocks will dispose the connection and channel as soon as they go out of scope, so something must keep the consumer alive (block the thread, or hold the connection open for the application's lifetime).
There are helpful examples that you can look over here: https://www.rabbitmq.com/tutorials/tutorial-three-dotnet.html
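If it helps to see the whole flow end to end, here is the same declare/bind/consume sequence sketched in Node.js with amqplib (a different client, shown purely for illustration; the exchange, queue, and routing key are taken from the question). The consume call is the step that actually starts delivery:
const amqp = require('amqplib');
async function main() {
    const conn = await amqp.connect('amqp://myhst');
    const ch = await conn.createChannel();
    // Same topology as in the question (the .NET client declares a
    // non-durable exchange by default).
    await ch.assertExchange('payin-exchange', 'topic', { durable: false });
    await ch.assertQueue('OpenPaymentReceiveResponseQueue', { durable: true, exclusive: false, autoDelete: false });
    await ch.bindQueue('OpenPaymentReceiveResponseQueue', 'payin-exchange', 'payin');
    // Declaring and binding alone do not start consumption;
    // without this call nothing is ever delivered to the handler.
    await ch.consume('OpenPaymentReceiveResponseQueue', (msg) => {
        console.log(' [x] %s: %s', msg.fields.routingKey, msg.content.toString());
    }, { noAck: true });
}
main();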

Related

Handle partially failed batch with spring cloud streams

When using spring-cloud-stream for a streaming application (functional style) with batches, is there a way to retry/DLQ a failed message but still process (stream) the non-failing records?
For example: the function receives a batch of 10 records and attempts to convert them to another type, returning the new records for producing. Say record 8 fails on mapping; is it possible to complete the producing of records 0-7 and then retry/DLQ record 8?
Throwing BatchListenerFailedException with the index does not cause the prior messages to be sent.
Spring Kafka version: 2.8.0
code:
@Override
public List<Message<Context>> apply(Message<List<Context>> listMessage) {
    List<Message<Context>> output = new ArrayList<>();
    IntStream.range(0, listMessage.getPayload().size()).forEach(index -> {
        try {
            Record<Context> record = Record.fromBatch(listMessage, index);
            output.add(MessageBuilder.withPayload(record.getValue()).build());
            if (index == listMessage.getPayload().size() - 1) {
                throw new TransientError("offset " + record.getOffset() + " failed", new RuntimeException());
            }
        } catch (Exception e) {
            throw new BatchListenerFailedException("Trigger retry", e, index);
        }
    });
    return output;
}
customizer:
private CommonErrorHandler getCommonErrorHandler(String group) {
    DefaultErrorHandler errorHandler = new DefaultErrorHandler(getRecoverer(group), getBackOff());
    errorHandler.setLogLevel(KafkaException.Level.DEBUG);
    errorHandler.setAckAfterHandle(true);
    errorHandler.setClassifications(Map.of(
            PermanentError.class, false,
            TransientError.class, true,
            SerializationException.class, properties.isRetryDesErrors()),
            false);
    errorHandler.setRetryListeners(getRetryListener());
    return errorHandler;
}

private ConsumerRecordRecoverer getRecoverer(String group) {
    KafkaOperations<?, ?> operations = new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(producerProperties()));
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(
            operations, getDestinationResolver(group));
    recoverer.setHeadersFunction(this::buildAdditionalHeaders);
    return recoverer;
}
yaml:
spring:
  cloud:
    function:
      definition: batchFunc
    stream:
      default-binder: kafka-string-avro
      binders:
        kafka-string-avro:
          type: kafka
          environment.spring.cloud.stream.kafka.binder.consumerProperties:
            key.deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            value.deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
            spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
            spring.deserializer.value.delegate.class: io.confluent.kafka.serializers.KafkaAvroDeserializer
            schema.registry.url: ${SCHEMA_REGISTRY_URL:http://localhost:8081}
            value.subject.name.strategy: io.confluent.kafka.serializers.subject.TopicNameStrategy
            specific.avro.reader: true
          environment.spring.cloud.stream.kafka.binder.producerProperties:
            key.serializer: org.apache.kafka.common.serialization.StringSerializer
            value.serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
            schema.registry.url: ${SCHEMA_REGISTRY_URL:http://localhost:8081}
            value.subject.name.strategy: io.confluent.kafka.serializers.subject.TopicNameStrategy
      bindings:
        batchFunc-in-0:
          binder: kafka-string-avro
          destination: records.publisher.output
          group: function2-in-group
          contentType: application/*+avro
          consumer:
            batchMode: true
        function2-out-0:
          binder: kafka-string-avro
          destination: reporter.output
          producer:
            useNativeEncoding: true
      kafka:
        binder:
          brokers: ${KAFKA_BROKER_ADDRESS:localhost:9092}
          autoCreateTopics: ${KAFKA_AUTOCREATE_TOPICS:false}
        default:
          consumer:
            startOffset: ${START_OFFSET:latest}
            enableDlq: false
      default:
        consumer:
          maxAttempts: 1
          defaultRetryable: false

Question about RawRabbit: client sends various messages autonomously

I don't understand: I have a client that sends various messages autonomously; it doesn't wait for an ack, it just has to send them. But it seems to send only the first one, and all the others only when I close the application.
Where am I wrong? What should I set?
var config = new RawRabbitConfiguration()
{
    Username = username,
    Password = password,
    VirtualHost = "/",
    Hostnames = new List<string>() { hostname },
    AutoCloseConnection = false,
    //Ssl = new SslOption() { Enabled = true },
    Port = port,
    Exchange = new GeneralExchangeConfiguration
    {
        AutoDelete = false,
        Durable = true,
        Type = RawRabbit.Configuration.Exchange.ExchangeType.Direct
    },
    Queue = new GeneralQueueConfiguration
    {
        Exclusive = false,
        AutoDelete = false,
        Durable = true
    }
};
var options = new RawRabbitOptions() { ClientConfiguration = config };
client = RawRabbitFactory.CreateSingleton(options);
client.SubscribeAsync<MessageModel>(async msg =>
{
    return await Task.Run(() => MessageReceived(msg));
},
ctx => ctx.UseSubscribeConfiguration(
    cfg => cfg.FromDeclaredQueue(
        queue => queue.WithName(queueName))))
.GetAwaiter();
UPDATE: this is the sending function I use...
public void SendMessage(MessageModel message, string machineName = null, string exchangeName = null)
{
    if (!string.IsNullOrEmpty(machineName))
        message.MachineName = machineName;
    else if (string.IsNullOrEmpty(message.MachineName))
        message.MachineName = this.MachineName;

    if (!string.IsNullOrEmpty(LastMessageReceived?.ID))
        message.RequestID = LastMessageReceived.ID;
    else
        message.RequestID = string.Empty;

    if (!string.IsNullOrEmpty(LastMessageReceived?.MachineName))
        message.MachineNameDest = LastMessageReceived.MachineName;
    else if (string.IsNullOrEmpty(message.MachineNameDest))
        message.MachineNameDest = string.Empty;

    try
    {
        if (string.IsNullOrEmpty(exchangeName))
            client.PublishAsync<MessageModel>(message);
        else
            client.PublishAsync<MessageModel>(message,
                ctx => ctx.UsePublishConfiguration(
                    cfg => cfg.OnExchange(exchangeName)));
    }
    catch (Exception ex)
    {
        OnError?.Invoke(this, ex);
    }
    LastMessageReceived = null;
}
EDIT:
In what case is the error "Stage Initialized has no additional middlewares registered" generated?
I cannot understand why this error is raised on SubscribeAsync, after which no messages are sent. :(
Please help.

Rabbit saving extra bytes for AMQP 1.0 messages

I have an environment where some AMQP 1.0 and some AMQP 0.9.1 clients need to write to/read from a RabbitMQ queue. I enabled the RabbitMQ AMQP 1.0 plugin and it is working, but I get extra bytes in the body of each AMQP 1.0 message.
I'm sending messages over AMQP 1.0 using rhea (TypeScript):
const connection: Connection = new Connection({
    host: 'localhost',
    port: 5672,
    id: 'my_id',
    reconnect: true
});
const senderName = "sender01";
const senderOptions: SenderOptions = {
    name: senderName,
    target: {
        address: "target.queue"
    },
    onError: (context: EventContext) => {},
    onSessionError: (context: EventContext) => {}
};
await connection.open();
const sender: Sender = await connection.createSender(senderOptions);
sender.send({
    body: JSON.stringify({"one": "two", "three": "four"}),
    content_encoding: 'UTF-8',
    content_type: 'application/json'
});
console.log("sent");
await sender.close();
await connection.close();
console.log("connection closed");
This example works, but this is what is stored in the queue:
The base64-encoded message is AFN3oRx7Im9uZSI6InR3byIsInRocmVlIjoiZm91ciJ9, which after decoding becomes (non-printable bytes included):
\x00Sw\xa1\x1c{"one":"two","three":"four"}
There is an additional Sw (plus surrounding bytes) which I didn't send.
I tried setting up a Java client with the official RabbitMQ library (which speaks AMQP 0.9.1) to see whether those extra bytes were delivered to clients:
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
channel.basicConsume(
    "target.queue",
    true,
    (consumerTag, delivery) -> {
        String message = new String(delivery.getBody(), "UTF-8");
        System.out.println(" [x] Received '" + message + "'");
    },
    ignored -> {}
);
This is the output:
[x] Received ' Sw�{"one":"two","three":"four"}'
The weird thing is that if I consume the exact same message with an AMQP 1.0 client, those extra bytes don't appear in the received message body; they appear only when publishing with AMQP 1.0 and consuming with AMQP 0.9.1.
Why is that? Is there any way to avoid extra bytes when using both AMQP versions?
UPDATE
I also tried with SwiftMQ:
int nMsgs = 100;
int qos = QoS.AT_MOST_ONCE;
AMQPContext ctx = new AMQPContext(AMQPContext.CLIENT);
String host = "localhost";
int port = 5672;
String queue = "target.queue";
try {
    Connection connection = new Connection(ctx, host, port, false);
    connection.setContainerId(UUID.randomUUID().toString());
    connection.setIdleTimeout(-1);
    connection.setMaxFrameSize(1024 * 4);
    connection.setExceptionListener(Exception::printStackTrace);
    connection.connect();
    {
        Session session = connection.createSession(10, 10);
        Producer p = session.createProducer(queue, qos);
        for (int i = 0; i < nMsgs; i++) {
            AMQPMessage msg = new AMQPMessage();
            System.out.println("Sending " + i);
            msg.setAmqpValue(new AmqpValue(new AMQPString("{\"one\":\"two\",\"three\":\"four\"}")));
            p.send(msg);
        }
        p.close();
        session.close();
    }
    connection.close();
} catch (Exception e) {
    e.printStackTrace();
}
The issue is still there, but the first bytes changed; now I get:
[x] Received '□�□□□□□□□w�{"one":"two","three":"four"}'
Please see this response, which clarifies how to encode the payload and avoid the extra bytes:
"If an AMQP 1.0 client sends a message to a 0-9-1 client and encodes its payload as binary in a data section (i.e. not in an amqp-sequence section, not in an amqp-value section), the 0-9-1 client will get the complete payload without any extra bytes."
Those extra bytes are just the AMQP 1.0 section encoding: 0x00 0x53 0x77 is the descriptor for an amqp-value section, and 0xA1 0x1C means "utf-8 string, 28 bytes"; the 0-9-1 bridge delivers the encoded section verbatim.
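Applied to the rhea snippet above, the fix might look like this minimal sketch; it assumes the rhea-promise wrapper (matching the classes used in the question) plus rhea's message.data_section helper, and reuses the question's host and queue name.
import { Connection, Sender } from "rhea-promise";
import * as rhea from "rhea";
async function sendAsDataSection(): Promise<void> {
    const connection: Connection = new Connection({ host: 'localhost', port: 5672, reconnect: true });
    await connection.open();
    const sender: Sender = await connection.createSender({ target: { address: "target.queue" } });
    // Encode the payload as an explicit AMQP 1.0 data section (raw binary)
    // rather than letting the string body be wrapped in an amqp-value section;
    // an AMQP 0-9-1 consumer should then receive exactly these bytes.
    sender.send({
        body: rhea.message.data_section(Buffer.from(JSON.stringify({ one: "two", three: "four" }))),
        content_type: 'application/json'
    });
    await sender.close();
    await connection.close();
}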
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.

Limitation in the callback function in nodejs redis?

I am not sure if the issue I am having is a limitation of Redis itself or of the nodejs 'redis' module implementation.
var redis = require('redis');
var client = redis.createClient(6379, '192.168.200.5');
client.on('error', function (error) {
    console.log("** error in connection **");
    process.exit(1);
});
client.on('connect', function () {
    console.log("** connected **");
    client.on('message', function (channel, message) {
        if (channel == 'taskqueue') {
            console.log(channel + ' --> ' + message);
            var params = message.split(' ');
            var inputf = params[0];
            var outputf = params[1];
            var dim = inputf.split('_').slice(-1)[0];
            client.rpush('records', message, function (e, reply) {
            });
        }
    });
    client.subscribe('taskqueue');
});
From the code snippet above, I tried to do an RPUSH inside the 'message' subscription handler. It does not work, and I get the client's 'error' event, so it prints "error in connection". What is the correct way to do this?
After further searching, I came across https://github.com/phpredis/phpredis/issues/365, which seems to explain the scenario: a connection that has subscribed is in subscriber mode and can no longer issue regular commands.
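The usual workaround is a second connection. A minimal sketch, assuming the same host, port, channel, and list names as above; the subscribed client stays dedicated to pub/sub while a second client issues RPUSH:
var redis = require('redis');
// This connection goes into subscriber mode; once subscribed it can only
// issue subscribe-family commands, so regular commands must go elsewhere.
var subscriber = redis.createClient(6379, '192.168.200.5');
// Second connection for regular commands such as RPUSH.
var commands = redis.createClient(6379, '192.168.200.5');
subscriber.on('message', function (channel, message) {
    if (channel === 'taskqueue') {
        commands.rpush('records', message, function (err, reply) {
            if (err) console.log(err);
        });
    }
});
subscriber.subscribe('taskqueue');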

Creating Peer Connection for Web RTC Data Channel

I have been trying to establish a peer connection between browsers to use a data channel, but I am unsuccessful. Every time I correct one statement, another error appears.
First I set up a socket server using socket.io and Node.js. When any client connects, the server sends 'beacon' packets. On receiving a 'beacon' packet, the first client requests to join a 'room'; then I allow the second client to join the same 'room'.
As soon as the second client connects, the server sends a confirmation packet to Client 1.
Then Client 1 sends the RTCPeerConnection 'offer' to Client 2, after setting its local description.
if (isChrome) {
    localPC = new window.webkitRTCPeerConnection(server, constraints);
    rslt.innerHTML = "Webkit Variables Set";
} else {
    localPC = new mozRTCPeerConnection(server, constraints);
    rslt.innerHTML = "Mozilla Variables Set";
}
localPC.onicecandidate = function(event) {
    if (event.candidate)
        localPC.addIceCandidate(event.candidate);
};
localPC.onnegotiationneeded = function() {
    localPC.createOffer(setOffer, sendFail);
};
sendChannel = localPC.createDataChannel("sendDataChannel", {reliable: false});
localPC.ondatachannel = function(event) {
    receiveChannel = event.channel;
    receiveChannel.onmessage = function(event) {
        rslt.innerHTML = event.data;
    };
};
localPC.createOffer(setOffer, sendFail);

function setOffer(offer) {
    lDescp = new RTCSessionDescription(offer);
    localPC.setLocalDescription(lDescp);
    socket.emit('offer', JSON.stringify(offer));
    rslt.innerHTML += "Offer Sent...<br/>";//+offer.sdp;
} //End Of setOffer()
Client 2, on receiving the 'offer', sets it as the remote description and creates an answer, sets the answer as its local description, and sends it as 'reply'.
if (message.type == 'offer') {
    rDescp = new RTCSessionDescription(message.sdp);
    localPC.setRemoteDescription(rDescp);
    localPC.createAnswer(
        function(answer) {
            lDescp = new RTCSessionDescription(answer);
            localPC.setLocalDescription(lDescp);
            socket.emit('reply', JSON.stringify(answer));
        }, sendFail
    );
} else {
    localPC.addIceCandidate = new RTCIceCandidate(message.candidate);
} //End Of If Else
Client 1, on receiving the 'reply', sets it as the remote description, and the connection should get established?
localPC.setRemoteDescription(new RTCSessionDescription(message.sdp));
But it's not working! Please help.
Seems like you got the flow correct, although I don't see the entire code.
One thing that strikes me as weird is this:
localPC.onicecandidate = function(event) {
    if (event.candidate)
        localPC.addIceCandidate(event.candidate);
};
You need to send the ICE candidate received in the onicecandidate event to the other peer, not add it yourself.
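As a rough sketch (the 'candidate' socket.io event name is hypothetical; use whatever your signalling protocol defines), each side forwards its own candidates over the signalling channel and adds the ones it receives:
localPC.onicecandidate = function(event) {
    if (event.candidate) {
        // Forward the locally gathered candidate to the remote peer.
        socket.emit('candidate', JSON.stringify(event.candidate));
    }
};
socket.on('candidate', function(data) {
    // Add the candidate received from the remote peer to this connection.
    localPC.addIceCandidate(new RTCIceCandidate(JSON.parse(data)));
});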