I use MassTransit with RabbitMQ quorum queues for application integration. I send messages using:
var endpoint = await _bus.GetSendEndpoint(new Uri("exchange:TestCommand"));
await endpoint.Send(command1, stoppingToken);
If "receiver application" doesn't ever start, queue won't be created and all sent messages will be lost.
If I use the queue: prefix for the send endpoint instead:
var endpoint = await _bus.GetSendEndpoint(new Uri("queue:TestCommand"));
await endpoint.Send(command, stoppingToken);
a classic queue is created (not a quorum queue), and the queue type cannot be changed later.
I don't want to depend on when the receiver application starts, and I don't want to lose sent messages. How can the sender application create a RabbitMQ quorum queue using MassTransit?
The general guidance is simple: don't couple your producers to your consumers.
You can start the receiver first so that it configures the queue properly (including the quorum settings).
Or you can set the Mandatory flag (a RabbitMQ-specific option) so that Send throws an exception if the message is not routed to a queue, and go back to sending to the exchange.
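A minimal sketch of both options, assuming MassTransit v8-style APIs (SetQuorumQueue on the receive endpoint, RabbitMqSendContext.Mandatory); TestCommand, TestCommandConsumer, and the host are placeholder names:

using System;
using System.Threading;
using System.Threading.Tasks;
using MassTransit;

public record TestCommand(string Text);                      // assumed message contract

public class TestCommandConsumer : IConsumer<TestCommand>    // assumed consumer in the receiver
{
    public Task Consume(ConsumeContext<TestCommand> context) => Task.CompletedTask;
}

public static class QuorumQueueSketch
{
    // Option 1: the receiver declares its endpoint as a quorum queue, so the queue
    // is created with x-queue-type=quorum the first time the receiver starts.
    public static IBusControl ConfigureReceiver()
    {
        return Bus.Factory.CreateUsingRabbitMq(cfg =>
        {
            cfg.Host("localhost");

            cfg.ReceiveEndpoint("TestCommand", e =>
            {
                e.SetQuorumQueue();                          // quorum instead of classic
                e.Consumer<TestCommandConsumer>();
            });
        });
    }

    // Option 2: keep sending to the exchange, but set the RabbitMQ-specific Mandatory
    // flag so the send faults if the message cannot be routed to any queue.
    public static async Task SendMandatory(IBus bus, TestCommand command, CancellationToken stoppingToken)
    {
        var endpoint = await bus.GetSendEndpoint(new Uri("exchange:TestCommand"));

        await endpoint.Send(command, Pipe.Execute<SendContext<TestCommand>>(ctx =>
        {
            if (ctx.TryGetPayload(out RabbitMqSendContext rabbitContext))
                rabbitContext.Mandatory = true;
        }), stoppingToken);
    }
}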
Is there a way to create a RabbitMQ queue using Spring Cloud Stream without having a consumer for the queue?
Our scenario is that we want to use a delayed-messaging strategy, so messages arriving at the first queue would be held until they expire and then moved to a DLQ.
The application would be consuming the messages from the DLQ.
We wanted to check whether there is a way to use Spring Cloud Stream to configure the queues when we do not have a consumer for the first queue and it is only there to hold messages until expiry.
Yes; simply add a Queue bean (and binding if needed).
Spring Boot auto-configures a RabbitAdmin, which will detect such beans and declare them when the connection is first established.
https://docs.spring.io/spring-amqp/docs/current/reference/html/#broker-configuration
@Bean
public Queue queue() {
    return QueueBuilder.nonDurable("foo")
            .autoDelete()
            .exclusive()
            .withArgument("foo", "bar")
            .build();
}
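For the scenario described here (a first queue with a TTL and no consumer, dead-lettering expired messages into the DLQ the application consumes), a sketch of the configuration could look like this; all queue, exchange, and routing-key names plus the TTL are assumptions:

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DelayQueueConfig {

    // First queue: no consumer; messages wait here until the TTL expires
    @Bean
    public Queue delayQueue() {
        return QueueBuilder.durable("orders.delay")
                .withArgument("x-message-ttl", 60000)                  // hold for 60 seconds
                .withArgument("x-dead-letter-exchange", "orders.dlx")
                .withArgument("x-dead-letter-routing-key", "orders.work")
                .build();
    }

    @Bean
    public DirectExchange deadLetterExchange() {
        return new DirectExchange("orders.dlx");
    }

    // The DLQ the application actually consumes from
    @Bean
    public Queue workQueue() {
        return QueueBuilder.durable("orders.work").build();
    }

    @Bean
    public Binding workBinding() {
        return BindingBuilder.bind(workQueue()).to(deadLetterExchange()).with("orders.work");
    }
}

RabbitAdmin declares all of these beans on the first connection, so the delay queue exists even though nothing ever consumes from it.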
Background:
A producer sends a message to RabbitMQ, and RabbitMQ sends a confirm message back to the producer. RabbitMQ also stores the message in an exchange/queue. My question: when does RabbitMQ store the message, before or after it sends the confirm? I guess the broker stores the message first and then sends the confirm message. Is that right?
Read https://www.rabbitmq.com/confirms.html#publisher-confirms. If you use publisher confirms, the broker will send an ack when the message has been stored.
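For example, with the plain Java client (host and queue name are assumptions), a publisher-confirm flow looks roughly like this; waitForConfirmsOrDie returns only after the broker has acked the message:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class PublisherConfirmExample {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            channel.confirmSelect();                         // enable publisher confirms on this channel
            channel.queueDeclare("demo.queue", true, false, false, null);
            channel.basicPublish("", "demo.queue",
                    MessageProperties.PERSISTENT_TEXT_PLAIN, "hello".getBytes());
            channel.waitForConfirmsOrDie(5_000);             // blocks until the broker acks (or times out)
        }
    }
}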
We are sending AMQP messages to RabbitMQ and are setting the message-ttl property.
If messages expire, they are moved to the defined DLQ.
Is it possible to have expired messages moved to a separate DLQ so that they do not interfere with other messages moved to the DLQ for more serious reasons?
Yes, this is possible.
You need to set a Dead Letter Exchange on your queue, and configure the message routing key to change when the messages get expired. Use the x-dead-letter-routing-key arg for this.
Then bind a new queue to your DLX with the dead letter routing key you just defined.
Expired messages will then be sent by RabbitMQ to the DLX, which will route them to the queue you have explicitly defined only for expired messages.
More about this here: https://www.rabbitmq.com/dlx.html.
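A sketch of that topology with the plain Java client (all names and the TTL are assumptions): the work queue rewrites the routing key to "expired" when it dead-letters, and only the expiry DLQ is bound to the DLX with that key:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.util.HashMap;
import java.util.Map;

public class ExpiredDlqTopology {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            channel.exchangeDeclare("dlx", "direct", true);

            // DLQ that only receives messages routed with the "expired" key
            channel.queueDeclare("dlq.expired", true, false, false, null);
            channel.queueBind("dlq.expired", "dlx", "expired");

            // Main queue: TTL plus DLX, with the routing key rewritten on dead-lettering
            Map<String, Object> arguments = new HashMap<>();
            arguments.put("x-message-ttl", 30000);
            arguments.put("x-dead-letter-exchange", "dlx");
            arguments.put("x-dead-letter-routing-key", "expired");
            channel.queueDeclare("work.queue", true, false, false, arguments);
        }
    }
}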
I want to consume messages in my Storm spout from a RabbitMQ queue.
Currently, we are using Spring AMQP to send and receive messages from RabbitMQ asynchronously.
Spring AMQP provides mechanisms (either creating a listener container or using the @RabbitListener annotation) to read messages from a queue.
The problem is that I can have a listener read the message from the queue, but how do I get this message to my Storm spout, which is running on a Storm cluster?
The topology will start a cluster, but in my spout's nextTuple() method I need to read messages from this queue. Can Spring AMQP be used here?
I have a listener configured to read messages from the queue:
@RabbitListener(queues = "queueName")
public void processMessage(QueueMessage message) {
}
How can the message received by the listener above be passed to my spout running on the cluster?
Alternatively, can this kind of receive logic live inside the spout's nextTuple() method? Is that possible?
I am using Java here.
You can read messages on-demand (rather than being message-driven) by using one of the RabbitTemplate receive or receiveAndConvert methods.
By default, they will return null if there is no message in the queue.
EDIT:
If you set the receiveTimeout (available in version 1.5 or above), the receive methods will block for that time (it uses an async consumer internally and does not poll).
But it's still not as efficient as the listener because a new consumer is created for each call; to use a listener you would need some internal blocking mechanism in nextTuple() (e.g. a BlockingQueue) to wait for messages.
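A rough sketch of the receiveTimeout approach inside a spout (the org.apache.storm package names assume Storm 2.x; the queue name and connection settings are placeholders):

import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

import java.util.Map;

public class RabbitQueueSpout extends BaseRichSpout {

    private transient RabbitTemplate template;
    private transient SpoutOutputCollector collector;

    @Override
    public void open(Map<String, Object> conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
        // Build the template on the worker; it is not serializable across the cluster.
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        template = new RabbitTemplate(connectionFactory);
        template.setReceiveTimeout(1000);                   // block up to 1s per receive (Spring AMQP 1.5+)
    }

    @Override
    public void nextTuple() {
        Object message = template.receiveAndConvert("queueName");
        if (message != null) {
            collector.emit(new Values(message));
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("message"));
    }
}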
How can I log messages to an Azure Service Bus queue and read messages from it in my application?
Here you have a guide on how to use queues in .NET.
Focus on "How to send messages to a queue" and "How to receive messages from a queue".
The main idea is to create a QueueClient object using a connection string, create a BrokeredMessage, and then send it to the queue.
For receiving the message, I believe you will have a service (let's say a worker role, for example) peeking messages from that queue.
There you will need to create a client and add a callback to handle the received messages. When you receive a message you can log it using NLog.
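A short sketch with the classic Microsoft.ServiceBus.Messaging SDK that this answer refers to (queue name and connection string are placeholders; NLog does the logging):

using Microsoft.ServiceBus.Messaging;
using NLog;

public static class QueueLogging
{
    private static readonly Logger Log = LogManager.GetCurrentClassLogger();

    public static void Send(string connectionString)
    {
        var client = QueueClient.CreateFromConnectionString(connectionString, "logqueue");
        client.Send(new BrokeredMessage("something happened"));      // write a log entry to the queue
    }

    public static void Receive(string connectionString)
    {
        var client = QueueClient.CreateFromConnectionString(connectionString, "logqueue");
        // Callback-based receive, e.g. inside a worker role's Run loop
        client.OnMessage(message =>
        {
            Log.Info(message.GetBody<string>());                     // log the received message with NLog
        });
    }
}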