Two queues are bound to a topic exchange with the following routing keys:
Queue A, bound with routing key pattern match *.foo
Queue B, bound with routing key pattern match *.bar
I'd like to add a third queue to this exchange that receives messages that are neither foo messages nor bar messages. If I bind this queue with a # routing key, I naturally get all the messages I need, but that includes the foo's and bar's, which I don't want.
Is there any way to route messages matching a pattern like NOT *.foo AND NOT *.bar?
If you want to catch all messages that don't match any binding, that can be done with an Alternate Exchange.
Add an alternate exchange to the existing one and collect all unrouted messages from it:
standard workflow --> [main exchange (topic)]
                          |--> via binding *.foo --> [foo queue]
                          |--> via binding *.bar --> [bar queue]
                          v
            [alternate exchange (let it be topic too)]
                           --> via binding # --> [catch-all queue]
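For illustration, a minimal sketch of how this could be declared with the RabbitMQ Java client; the exchange and queue names here are placeholders, not taken from the question:
// Assumes an open com.rabbitmq.client.Channel and java.util.{Map, HashMap} imports.
// Declare the alternate exchange and a catch-all queue bound with "#".
channel.exchangeDeclare("alt.exchange", "topic", true);
channel.queueDeclare("catchall.queue", true, false, false, null);
channel.queueBind("catchall.queue", "alt.exchange", "#");

// Declare the main exchange with the "alternate-exchange" argument so that any
// message matching no binding (neither *.foo nor *.bar) is rerouted to the
// alternate exchange instead of being dropped.
Map<String, Object> args = new HashMap<>();
args.put("alternate-exchange", "alt.exchange");
channel.exchangeDeclare("main.exchange", "topic", true, false, args);

channel.queueBind("queueA", "main.exchange", "*.foo");
channel.queueBind("queueB", "main.exchange", "*.bar");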
For more specific cases, where you have N bindings but want to catch all messages that don't match M of them (M < N), it is more problematic, but technically it can be done via a Dead Letter Exchange: re-publish the messages to a custom exchange that has only those M bindings, and then apply the Alternate Exchange approach there. But even that sounds clunky, not to mention the performance degradation (which only matters if you have a really high message flow).
I'm working with SQS in my application. I have the following configuration.
justSaying
    .WithSqsTopicSubscriber()
    .IntoQueue(_busNamingConvention.QueueName())
    .ConfigureSubscriptionWith(x =>
    {
        x.VisibilityTimeoutSeconds = 60;
        x.RetryCountBeforeSendingToErrorQueue = 3;
    })
    .WithMessageHandler<MyMessage>(_handlerResolver)
    .WithSqsMessagePublisher<MyMessage>(config => config.QueueName = _busNamingConvention.QueueName());
So there will be 3 re-attempts before the message gets to the Dead Letter Queue. I want to consume this dead letter queue and process the messages separately; in essence, I want to create a handler to deal with the messages in the DLQ.
I'm not sure whether this is possible or SQS is not intended to be used this way. Please post whether this is possible and, if yes, whether it is okay to do this or it is an anti-pattern.
I'm building a Spring Cloud Stream based application where the exchange type is topic and messages are sent to 2 consumer queue groups from the topic exchange. The scenario is something like this:
Service A in my application wants to send messages of type appointments to service B and service C via an exchange named appointments-request, based on different use case scenarios such as book, cancel, update etc.
So messages with the key appointments.book.B or appointments.cancel.B should go to the consumer queue group appointments.B,
and messages with the key appointments.book.C or appointments.cancel.C should go to the consumer queue group appointments.C.
How do I achieve this?
Configuration of Producer Service:
spring.cloud.stream.bindings.output.destination=appointments-request
spring.cloud.stream.bindings.input.destination=appointments-reply
spring.cloud.stream.rabbit.bindings.output.producer.exchangeType=topic
spring.cloud.stream.rabbit.bindings.output.producer.routingKeyExpression=appointments.#.#
Configuration of Consumer Service B:
spring.cloud.stream.rabbit.bindings.input.consumer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.input.consumer.group=appointments.docmgmt
spring.cloud.stream.rabbit.bindings.input.consumer.bindingRoutingKey=appointments.docmgmt
spring.cloud.stream.rabbit.bindings.input.consumer.routingKeyExpression=appointments.#.docmgmt
Producer Service A has the below method to set the routing key:
public boolean send(AppointmentEvent appointmentEvent) {
    logger.info("Sending event {} ", appointmentEvent);
    return this.source.output().send(
            MessageBuilder.withPayload(appointmentEvent)
                    .setHeader(SimpMessageHeaderAccessor.DESTINATION_HEADER, "appointments.book.docmgmt")
                    .build());
}
My communication between services is not working.
appointments.#.#
You can't use wildcards on the producer side.
You need something like
spring.cloud.stream.rabbit.bindings.output.producer.routingKeyExpression=headers['routingKey']
And then the producer sets the routingKey header to the desired value for each message.
You shouldn't really use the Simp headers; that is for STOMP; use your own header.
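For example, a rough sketch of what the producer's send method could look like under that suggestion, reusing the structure of the question's code (the header name routingKey matches the expression above; the target argument is purely illustrative):
public boolean send(AppointmentEvent appointmentEvent, String target) {
    logger.info("Sending event {} ", appointmentEvent);
    return this.source.output().send(
            MessageBuilder.withPayload(appointmentEvent)
                    // evaluated by routingKeyExpression=headers['routingKey'],
                    // e.g. "appointments.book.docmgmt" or "appointments.cancel.docmgmt"
                    .setHeader("routingKey", "appointments.book." + target)
                    .build());
}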
I am trying to implement the below scenario in my application:
Exchange e1 -> Queue q1
DLX exchange e2 -> Queue q2
I have also set the DLX and DLK (dead-letter exchange and routing key) on queue q1, so messages move to queue q2 on rejection/failure/timeout.
But how do I resend/retry messages from queue q2 to the original queue q1?
You can do that manually in your application after some analysis and filtering logic. Or you can set a TTL on queue q2 so that unconsumed messages expire, and also specify an x-dead-letter-exchange on that queue pointing at exchange e1, for the desired recycling.
See more info in this article:
Create the dead letter exchange, which is just a normal exchange with a special name.
Create a retry_message queue and have all messages published to the dead letter exchange route here.
When you set up the retry_message queue, be sure to set the following arguments on the queue (a sketch follows this list):
x-message-ttl: 30000 – This sets a TTL on any message published to the queue. When the TTL expires, the message will be republished to the exchange specified in the x-dead-letter-exchange argument.
x-dead-letter-exchange: original_exchange_name – This is where the message will get republished once the message TTL expires. We normally want this to be the name of the exchange where the message was originally published.
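As a rough illustration of those two arguments with the RabbitMQ Java client (the names retry_message and original_exchange_name come from the article; the dead letter exchange name and type are assumptions):
// Assumes an open com.rabbitmq.client.Channel; in the question's terms,
// retry_message plays the role of q2 and original_exchange_name is e1.
Map<String, Object> args = new HashMap<>();
args.put("x-message-ttl", 30000);                             // hold messages for 30 seconds
args.put("x-dead-letter-exchange", "original_exchange_name"); // then republish them here

channel.exchangeDeclare("dead.letter.exchange", "fanout", true); // the "special name" exchange
channel.queueDeclare("retry_message", true, false, false, args);
channel.queueBind("retry_message", "dead.letter.exchange", "");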
I have the following scenario:
There are 3 RabbitMQ queues to which producers push their messages based on the priority of the message (myqueue_high, myqueue_medium, myqueue_low).
I want to have a single consumer which can pull from these queues in order of priority, i.e. it keeps pulling from the high queue as long as messages are there; otherwise it pulls from medium, and if medium is also empty it pulls from low.
How do I achieve this? Do I need to write a custom component?
It would be easier to put all the messages into one queue but with different priorities. That way, the priority sorting would be done in the broker and the Camel consumer would get the messages already sorted by priority. However, RabbitMQ implements the FIFO principle and does not support priority handling (yet).
Solution 1
Camel allows you to reorganise messages based on some comparator using a Resequencer: https://camel.apache.org/resequencer.html:
from("rabbitmq://hostname[:port]/myqueue_high")
.setHeader("priority", constant(9))
.to("direct:messageProcessing");
from("rabbitmq://hostname[:port]/myqueue_medium")
.setHeader("priority", constant(5))
.to("direct:messageProcessing");
from("rabbitmq://hostname[:port]/myqueue_low")
.setHeader("priority", constant(1))
.to("direct:messageProcessing");
// sort by priority by allowing duplicates (message can have same priority)
// and use reverse ordering so 9 is first output (most important), and 0 is last
// (of course we could have set the priority the other way around, but this way
// we keep align with the JMS specification...)
// use batch mode and fire every 3th second
from("direct:messageProcessing")
.resequence(header("priority")).batch().timeout(3000).allowDuplicates().reverse()
.to("mock:result");
That way, all incoming messages are routed to the same sub-route (direct:messageProcessing) where the messages are reordered according to the priority header set by the incoming routes.
Solution 2
Use SEDA with a priority queue:
final PriorityBlockingQueueFactory<Exchange> priorityQueueFactory = new PriorityBlockingQueueFactory<Exchange>();
priorityQueueFactory.setComparator(new Comparator<Exchange>() {
    @Override
    public int compare(final Exchange exchange1, final Exchange exchange2) {
        final Integer prio1 = (Integer) exchange1.getIn().getHeader("priority");
        final Integer prio2 = (Integer) exchange2.getIn().getHeader("priority");
        return -prio1.compareTo(prio2); // 9 has higher priority than 0
    }
});
final SimpleRegistry registry = new SimpleRegistry();
registry.put("priorityQueueFactory", priorityQueueFactory);
final ModelCamelContext context = new DefaultCamelContext(registry);
// configure and start your context here...
The route definition:
from("rabbitmq://hostname[:port]/myqueue_high")
.setHeader("priority", constant(9))
.to("seda:priority?queueFactory=#priorityQueueFactory"); // reference queue in registry
from("rabbitmq://hostname[:port]/myqueue_medium")
.setHeader("priority", constant(5))
.to("seda:priority?queueFactory=#priorityQueueFactory");
from("rabbitmq://hostname[:port]/myqueue_low")
.setHeader("priority", constant(1))
.to("seda:priority?queueFactory=#priorityQueueFactory");
from("seda:priority")
.to("direct:messageProcessing");
Solution 3
Use JMS, such as Camel's ActiveMQ component, instead of SEDA if you need persistence in case of failures. Just forward the incoming messages from RabbitMQ to a JMS destination, setting the JMSPriority header.
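Under that approach, a minimal sketch of what the forwarding routes could look like (the destination name is a placeholder, and the broker still needs message prioritization enabled on that destination for the consumer to see priority order):
// preserveMessageQos=true tells Camel JMS to use the JMSPriority header we set
// instead of the endpoint's default priority.
from("rabbitmq://hostname[:port]/myqueue_high")
    .setHeader("JMSPriority", constant(9))
    .to("activemq:queue:prioritized?preserveMessageQos=true");

// (likewise for myqueue_medium with priority 5)

from("rabbitmq://hostname[:port]/myqueue_low")
    .setHeader("JMSPriority", constant(1))
    .to("activemq:queue:prioritized?preserveMessageQos=true");

// single consumer gets the messages back in priority order
from("activemq:queue:prioritized")
    .to("direct:messageProcessing");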
Solution 4
Skip RabbitMQ entirely and just use a JMS broker such as ActiveMQ that supports prioritization.
We're going to use RabbitMQ in our project, but we're facing a problem: we want to debug on our dev machines, so the response message has to be sent to the machine which originally sent the request message out. How are we going to achieve that? Is there an existing solution in the spring-rabbitmq framework?
We have considered several solutions, such as declaring a set of queues for each machine, with the queue names prefixed by the machine name. Is that feasible?
Define a set of queues (debug queue A-Z) and bind them to a headers exchange with the arguments x-match=any, from=[A-Z], to=[A-Z] respectively. Then bind the headers exchange to your main working exchange (one or more) to receive all the messages you are interested in, so when your consumer publishes a response it will be duplicated to your debug exchange and then routed to the appropriate queue.
[sender X]              [worker]                      [consumer on queue X]
    |                   ^      |                                ^
[request]  [request from=X]  [response from=X, to=X]            |
    |                   |      |            [duplicated request from=X]
    |                   |      |            [duplicated response from=X, to=X]
    v                   |      v                                |
  [working topic exchange]  ------------->  [debug headers exchange]
       /   |   \                                   /   |   \
{bindings by routing key mask}     {bindings by any headers from=[A-Z], to=[A-Z]}
       /   |   \                                   /   |   \
[working queue 1] ... [working queue N]   [debug queue A] ... [debug queue Z]
To correlate request and response messages you can use the applicationId and correlationId message attributes.
Note that both request and response messages will be duplicated to the debug queues. You may also use separate queues for request and response messages by binding queues to match only specific headers, something like x-match=all, from=[A-Z] or x-match=all, to=[A-Z], and publish response and request messages with only those headers (only from or only to), but that is up to you.
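For what it's worth, a minimal sketch of one such debug binding with the RabbitMQ Java client (exchange and queue names are placeholders; only the from/to/x-match arguments come from the description above):
// Assumes an open com.rabbitmq.client.Channel.
Map<String, Object> bindArgs = new HashMap<>();
bindArgs.put("x-match", "any"); // match if ANY of the listed headers matches
bindArgs.put("from", "X");
bindArgs.put("to", "X");

channel.exchangeDeclare("debug.headers", "headers", true);
channel.queueDeclare("debug.queue.X", true, false, false, null);
channel.queueBind("debug.queue.X", "debug.headers", "", bindArgs);

// Exchange-to-exchange binding: everything published to the working topic
// exchange is also copied to the debug headers exchange.
channel.exchangeBind("debug.headers", "working.topic", "#");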
The pros:
easy to implement
requires minimal code changes
easy to turn on/off
may be safely run in production environment
Cons:
uses more resources on the RabbitMQ side
Alternatively, you can utilize the RPC pattern if your debugging process requires receiving responses in real time. But this will block the publisher until the response is processed, which may differ from real-world app usage and break business logic.
Pros:
step-by-step debugging process
Cons:
hard to implement
may require a lot of code changes
breaks business logic
hard to enable/disable
not safe for a production environment
P.S.: sorry for the ASCII graph.