I'm using spring-cloud-stream : 2.1.1.RELEASE with Rabbit binder.
The queue names generated for my binding always have a -0 suffix, for example:
test-data-direct.group01-0
test-data-direct is the exchange name
and group01 is the group name.
How can I avoid the -0 suffix?
spring.cloud.stream.rabbit.bindings.output.producer.partitioned: false
didn't help.
Queue creation is triggered/influenced by the consumer properties, while your ...partitioned: false is set on the producer. You probably have something like:
input:
  consumer:
    instanceCount: 3
    instanceIndex: 0
    partitioned: true
. . . which would explain your queue names.
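If you don't actually need partitioning, disabling it on the consumer side should give you a queue named test-data-direct.group01 without the index suffix. A minimal sketch of such a configuration, assuming the consumer binding is called input and the group is group01:

spring:
  cloud:
    stream:
      bindings:
        input:
          destination: test-data-direct
          group: group01
          consumer:
            partitioned: false   # no partitioning -> no "-0" index appended to the queue name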
Problem
Our clients can create their own queues on the RabbitMQ cluster, and we need to control the important parameters on those queues (TTL, expiration, etc.).
The issue is that we cannot be sure which value is actually applied: the one from the x-arguments or the one from the policy.
Question
In this RabbitMQ documentation it is nicely explained how different policies are resolved, but it does not mention the priority of x-arguments.
So if a queue is created with x-message-ttl: 180000 and the applied policy defines message-ttl: 100000, like this:
... what will be the applied value?
Answer is likely Yes
It looks like policies do override the queue's x-arguments.
Why?
Well, it did for max-length in this small test (with version 3.10.11):
Queue was created with x-max-length: 5
Policy with max-length: 3 was applied
The number of ready messages dropped from 5 to 3
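Roughly, the test can be reproduced with the Java AMQP client like this (queue name and connection details are just illustrative; the policy itself is applied out of band, e.g. with rabbitmqctl):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.util.HashMap;
import java.util.Map;

public class MaxLengthTest {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory(); // assumes a local broker
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // declare the queue with an x-argument limiting it to 5 messages
            Map<String, Object> arguments = new HashMap<>();
            arguments.put("x-max-length", 5);
            channel.queueDeclare("test.maxlength", true, false, false, arguments);
            // publish 5 messages so the queue holds 5 ready messages
            for (int i = 0; i < 5; i++) {
                channel.basicPublish("", "test.maxlength", null, ("msg-" + i).getBytes());
            }
        }
        // then apply the policy separately, for example:
        //   rabbitmqctl set_policy maxlen "^test\.maxlength$" '{"max-length":3}' --apply-to queues
        // and check whether the ready-message count drops from 5 to 3
    }
}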
I am trying to move a message to the dead-letter queue (DLQ) in AWS when there is an exception while handling the message.
Right now I delete the original message and send it to the DLQ explicitly. However, while doing this I lose the message meta-information such as the original message ID, total receive count, first sent timestamp, etc.
Below is the code snippet:
@Inject
@Named("demo-queue")
private SimpleQueueService sqsService;

@Inject
@Named("dlq")
private SimpleQueueService dlqService;
.
.
.
List<Message> messages = sqsService.receiveMessages(10, 30, 20);
messages.forEach(
    m -> dlqService.sendMessage(m.getBody(),
        attr -> {
            new SendMessageRequest()
                .withMessageAttributes(m.getMessageAttributes())
                .withMessageBody(m.getBody());
        })
);
messages.forEach(message -> sqsService.deleteMessage(message.getReceiptHandle()));
After reaching the max receive count, when AWS moves the message from the original queue to the DLQ it preserves all of the mentioned attributes. Is there any way we can achieve the same using the aws-sdk?
I am using the Agorapulse library with Micronaut to send/receive messages from SQS.
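For reference, with the plain AWS SDK for Java (v1) the custom message attributes can at least be copied onto the forwarded message; as far as I know, the system attributes (ApproximateReceiveCount, SentTimestamp, the original MessageId) are assigned by SQS on send and cannot be supplied by the caller. A sketch, with a placeholder queue URL:

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.SendMessageRequest;

public class DlqForwarder {

    private final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
    private final String dlqUrl = "https://sqs.eu-west-1.amazonaws.com/123456789012/demo-dlq"; // placeholder

    void forwardToDlq(Message original) {
        // copies the body and the *custom* message attributes;
        // system attributes (receive count, first sent timestamp, message id) are set by SQS itself
        sqs.sendMessage(new SendMessageRequest()
                .withQueueUrl(dlqUrl)
                .withMessageBody(original.getBody())
                .withMessageAttributes(original.getMessageAttributes()));
    }
}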
I'm building a Spring Cloud Stream based application where the exchange type is topic and messages are sent to two queue consumer groups from the topic exchange. The scenario is something like this:
Service A in my application wants to send messages of type appointments to service B and service C via an exchange named appointments-request, based on different use-case scenarios such as book, cancel, update, etc.
So messages with the key appointments.book.B or appointments.cancel.B should go to the consumer queue group appointments.B,
and messages with the key appointments.book.C or appointments.cancel.C should go to the consumer queue group appointments.C.
How can I achieve this?
Configuration of Producer Service:
spring.cloud.stream.bindings.output.destination=appointments-request
spring.cloud.stream.bindings.input.destination=appointments-reply
spring.cloud.stream.rabbit.bindings.output.producer.exchangeType=topic
spring.cloud.stream.rabbit.bindings.output.producer.routingKeyExpression=appointments.#.#
Configuration of Consumer Service B:
spring.cloud.stream.rabbit.bindings.input.consumer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.input.consumer.group=appointments.docmgmt
spring.cloud.stream.rabbit.bindings.input.consumer.bindingRoutingKey=appointments.docmgmt
spring.cloud.stream.rabbit.bindings.input.consumer.routingKeyExpression=appointments.#.docmgmt
Producer Service A has the below method to set the routing key:
public boolean send(AppointmentEvent appointmentEvent) {
    logger.info("Sending event {}", appointmentEvent);
    return this.source.output()
            .send(MessageBuilder.withPayload(appointmentEvent)
                    .setHeader(SimpMessageHeaderAccessor.DESTINATION_HEADER, "appointments.book.docmgmt")
                    .build());
}
My communication between services is not working.
appointments.#.#
You can't use wildcards on the producer side.
You need something like
spring.cloud.stream.rabbit.bindings.output.producer.routingKeyExpression=headers['routingKey']
And then the producer sets the routingKey header to the desired value for each message.
You shouldn't really use the Simp* headers; those are for STOMP; use your own header.
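A minimal sketch of the producer side with a custom header (the header name routingKey matches the expression above; the method shape mirrors the send() method from the question):

public boolean send(AppointmentEvent appointmentEvent, String routingKey) {
    // e.g. routingKey = "appointments.book.docmgmt"
    return this.source.output().send(
            MessageBuilder.withPayload(appointmentEvent)
                    .setHeader("routingKey", routingKey) // picked up by headers['routingKey']
                    .build());
}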
Suppose we have 3 nodes in a cluster.
node1,node2,node3
In node1 we have an exchange e1 bound to a queue q1 with binding key key1.
It has consumer1 attached to it.
In node2 we have an exchange e2 bound to a queue q2 with binding key key2.
It has consumer2 attached to it.
Can consumer2 read messages from q1 in the cluster? If not, how can this be implemented?
You can read the RabbitMQ routing tutorial. Although it uses Python, the concept is the same. In the "Putting it all together" part, consumer 2 can receive info, error and warning messages from queue 2, while consumer 1 only gets error messages from queue 1.
In your case, c2 can't read messages from queue 1 right now. To implement it, the exchange setup doesn't need to change; just bind queue 2 to exchange 1 with key 1.
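For example, with the Java client the extra binding could look like this (queue, exchange and key names taken from the question; the host is a placeholder, any cluster node will do):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class BindQ2ToE1 {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("node1"); // placeholder host
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // additional binding: q2 now also receives messages published to e1 with key1
            channel.queueBind("q2", "e1", "key1");
        }
    }
}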
I have the following scenario:
There are 3 RabbitMQ queues to which producers push their messages based on the priority of the message (myqueue_high, myqueue_medium, myqueue_low).
I want to have a single consumer which can pull from these queues in order of priority, i.e. it keeps pulling from the high queue as long as messages are there; otherwise it pulls from medium, and if medium is also empty it pulls from low.
How do I achieve this? Do I need to write a custom component?
It would be easier to put all the messages into one queue but with different priorities. That way, the priority sorting would be done in the broker and the Camel consumer would get the messages already sorted by priority. However, RabbitMQ implements the FIFO principle and does not support priority handling (yet).
Solution 1
Camel allows you to reorganise messages based on some comparator using a Resequencer: https://camel.apache.org/resequencer.html:
from("rabbitmq://hostname[:port]/myqueue_high")
.setHeader("priority", constant(9))
.to("direct:messageProcessing");
from("rabbitmq://hostname[:port]/myqueue_medium")
.setHeader("priority", constant(5))
.to("direct:messageProcessing");
from("rabbitmq://hostname[:port]/myqueue_low")
.setHeader("priority", constant(1))
.to("direct:messageProcessing");
// sort by priority, allowing duplicates (messages can have the same priority),
// and use reverse ordering so 9 is output first (most important) and 0 last
// (of course we could have defined the priority the other way around, but this
// way we stay aligned with the JMS specification...)
// use batch mode and fire every 3rd second
from("direct:messageProcessing")
.resequence(header("priority")).batch().timeout(3000).allowDuplicates().reverse()
.to("mock:result");
That way, all incoming messages are routed to the same sub-route (direct:messageProcessing), where they are reordered according to the priority header set by the incoming routes.
Solution 2
Use SEDA with a prioritization queue:
final PriorityBlockingQueueFactory<Exchange> priorityQueueFactory = new PriorityBlockingQueueFactory<Exchange>();
priorityQueueFactory.setComparator(new Comparator<Exchange>() {
@Override
public int compare(final Exchange exchange1, final Exchange exchange2) {
final Integer prio1 = (Integer) exchange1.getIn().getHeader("priority");
final Integer prio2 = (Integer) exchange2.getIn().getHeader("priority");
return -prio1.compareTo(prio2); // 9 has higher priority than 0
}
});
final SimpleRegistry registry = new SimpleRegistry();
registry.put("priorityQueueFactory", priorityQueueFactory);
final ModelCamelContext context = new DefaultCamelContext(registry);
// configure and start your context here...
The route definition:
from("rabbitmq://hostname[:port]/myqueue_high")
.setHeader("priority", constant(9))
.to("seda:priority?queueFactory=#priorityQueueFactory"); // reference queue in registry
from("rabbitmq://hostname[:port]/myqueue_medium")
.setHeader("priority", constant(5))
.to("seda:priority?queueFactory=#priorityQueueFactory");
from("rabbitmq://hostname[:port]/myqueue_low")
.setHeader("priority", constant(1))
.to("seda:priority?queueFactory=#priorityQueueFactory");
from("seda:priority")
.to("direct:messageProcessing");
Solution 3
Use JMS, for example via Camel's ActiveMQ component, instead of SEDA if you need persistence in case of failures. Just forward the incoming messages from RabbitMQ to a JMS destination, setting the JMSPriority header.
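A rough sketch of that idea (endpoint names are illustrative; note that a Camel JMS endpoint typically needs preserveMessageQos=true for an explicitly set JMSPriority header to be honoured):

from("rabbitmq://hostname[:port]/myqueue_high")
    .setHeader("JMSPriority", constant(9))
    .to("activemq:queue:prioritized?preserveMessageQos=true");

from("rabbitmq://hostname[:port]/myqueue_low")
    .setHeader("JMSPriority", constant(1))
    .to("activemq:queue:prioritized?preserveMessageQos=true");

// the single consumer then reads from the JMS queue, where the broker
// has already ordered the messages by priority
from("activemq:queue:prioritized")
    .to("direct:messageProcessing");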
Solution 4
Skip RabbitMQ entirely and just use a JMS broker such as ActiveMQ that supports prioritization.