I am trying to use the Nest microservices functionality this way:
a producer (Nest instance #1) sends a message to a RabbitMQ queue my_queue
a consumer (Nest instance #2) waits for messages on the same queue my_queue (using the @MessagePattern decorator)
=> everything works fine, the message is sent by #1 and received by #2
But when I stop #2, #1 gets the error "There is no matching message handler defined in the remote service.".
So when nobody "waits" for the messages, #1 just fails. If #1 listens to its own messages, it does not fail.
Is this the expected behaviour?
The Nest documentation states that "To enable the request-response message type, Nest creates two logical channels - one is responsible for transferring the data while the other waits for incoming responses.": should #1 somehow "listen" for the incoming response (or, in this case, the lack of one) and handle it?
I am doing a POC to work with RabbitMQ and have a question about how to listen to queues conditionally.
We are consuming messages from a queue, and each consumed message kicks off an upload process whose duration depends on the file size. When the files are large, the external service we invoke sometimes runs out of memory if new messages are consumed while uploads for previous messages are still in progress.
That said, we would like to consume the next message from the queue only once the current message has been processed completely. I am new to JMS and am wondering how to do this.
My current thought is that the code flow would manually pull the next message from the queue once it finishes processing the previous one, since the flow knows when processing is complete. But if that listener is only invoked manually from the code flow, how would it pull the very first message?
The JMS spec says that message consumers work sequentially:
The session used to create the message consumer serializes the execution of all message listeners registered with the session
If you create a MessageListener and use that with your consumer, the JMS spec states that the listener's onMessage will be called serially, i.e. the next message is delivered only after the listener has finished processing the current one. So in effect each message waits until the previous one has completed.
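For illustration, here is a minimal sketch of that pattern, assuming an ActiveMQ JMS provider and a made-up uploadFile() helper; the session delivers messages to the listener one at a time, so the next message is not handed over until onMessage returns:

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class SequentialUploadConsumer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("upload.queue"));

        // The session serializes listener execution: onMessage for message N+1
        // is not invoked until onMessage for message N has returned.
        consumer.setMessageListener(message -> {
            try {
                String payload = ((TextMessage) message).getText();
                uploadFile(payload); // long-running upload; blocks delivery of the next message
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });

        connection.start(); // begin delivery only after the listener is registered
    }

    // hypothetical placeholder for the long-running upload step
    private static void uploadFile(String payload) { /* ... */ }
}
```

So no manual pulling is needed: registering the listener is enough, and the first message is delivered as soon as the connection is started.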
I am involved in a project which consists of the apps below:
Producer application: receives messages from clients via an ASP.NET Web API and enqueues them into a message queue.
Consumer application: dequeues messages from the message queue above and sends them to the Handler application below.
Handler application: receives messages from the Consumer application and sends them to an external application; if that fails, it sends them to a dead-letter queue.
The problem is that:
Consumer dequeues messages off the queue and sends them to Handler. Consumer then blocks (on background threads, using async) waiting for Handler to finish. That is, Consumer performs an RPC call to the Handler app.
If Handler either successfully sends the messages to the external app, or, failing that, successfully enqueues them to a dead-letter queue, Consumer commits the dequeue (removes the message from the queue).
If both of those fail (external app and dead-letter queue), Consumer rolls back the dequeue (puts the message back on the queue), as in the sketch after this list.
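For illustration only, here is a rough sketch of what that commit/rollback step could look like with a RabbitMQ-style broker and its Java client (the queue name and the handler() helper are made up): acking plays the role of committing the dequeue, nacking with requeue=true plays the role of rolling it back.

```java
import com.rabbitmq.client.*;

public class ConsumerCommitRollback {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory(); // localhost defaults, placeholder
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            channel.queueDeclare("work-queue", true, false, false, null);

            DeliverCallback onDeliver = (consumerTag, delivery) -> {
                long tag = delivery.getEnvelope().getDeliveryTag();
                boolean handled = handler(delivery.getBody()); // RPC-style call into the Handler logic
                if (handled) {
                    channel.basicAck(tag, false);        // "commit": remove the message from the queue
                } else {
                    channel.basicNack(tag, false, true); // "rollback": requeue the message
                }
            };

            channel.basicConsume("work-queue", false, onDeliver, consumerTag -> { });
            Thread.sleep(60_000); // keep the consumer alive for the demo
        }
    }

    // hypothetical stand-in for "send to the external app, else dead-letter"
    private static boolean handler(byte[] body) { return true; }
}
```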
My question is:
What are the pros and cons of a separate Handler app, compared to the Consumer performing Handler's logic in addition to its current logic?
Is it better to remove the Handler application and integrate Handler's logic into the Consumer application, so that Consumer talks to the external application directly and handles the dead-letter queue? That would be one fewer application to maintain.
Let's be perfectly clear: in the abstract sense, you have two entities - a producer and a consumer. The producer sends the original message, and the consumer processes it. There is no need to muddy the waters by adding details about a "handler", as it is a logical part of the consuming process.
It seems then that your real question (and also mine) is "what value does the consumer (your definition) add?" Keep in mind that none of them are "talking" directly to one another - they are communicating via a message queue. In that regard, if it is easier to have the ultimate processing piece dequeue the message directly, rather than going through some intermediate pipe, then do that.
After the consumer gets a message, the consumer/worker does some validation and then calls a web service. In this phase, if any error occurs or validation fails, we want the message to be put back on the queue it was originally consumed from.
I have read the RabbitMQ documentation, but I am confused about the differences between the reject, nack and cancel methods.
Short answer:
To requeue a specific message you can use either basic.reject or basic.nack with the multiple flag set to false.
Calling basic.consume may also result in messages being redelivered if you are using message acknowledgements, the consumer holds un-acknowledged messages at that moment, and the consumer exits without ack-ing them.
basic.recover will redeliver all un-acked messages on a specific channel.
Long answer:
basic.reject and basic.nack both serve the same purpose - dropping or requeuing a message that can't be handled by a specific consumer (at the given moment, under certain conditions, or at all). The main difference between them is that basic.nack supports bulk message processing, whilst basic.reject doesn't.
This difference is described in the Negative Acknowledgements article on the official RabbitMQ web site:
The AMQP specification defines the basic.reject method that allows clients to reject individual, delivered messages, instructing the broker to either discard them or requeue them. Unfortunately, basic.reject provides no support for negatively acknowledging messages in bulk.
To solve this, RabbitMQ supports the basic.nack method that provides all the functionality of basic.reject whilst also allowing for bulk processing of messages.
To reject messages in bulk, clients set the multiple flag of the basic.nack method to true. The broker will then reject all unacknowledged, delivered messages up to and including the message specified in the delivery_tag field of the basic.nack method. In this respect, basic.nack complements the bulk acknowledgement semantics of basic.ack.
Note that the basic.nack method is a RabbitMQ-specific extension, while the basic.reject method is part of the AMQP 0.9.1 specification.
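As a rough illustration with the RabbitMQ Java client (connection details and queue name are placeholders), the calls differ only in whether a single delivery or a batch of deliveries is negatively acknowledged:

```java
import com.rabbitmq.client.*;

public class NackVsRejectSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory(); // localhost defaults, placeholder
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            channel.queueDeclare("demo-queue", false, false, true, null);

            DeliverCallback onDeliver = (consumerTag, delivery) -> {
                long deliveryTag = delivery.getEnvelope().getDeliveryTag();

                // basic.reject: drop or requeue exactly one message (here: requeue)
                channel.basicReject(deliveryTag, true);

                // basic.nack with multiple=false is equivalent for a single message:
                //   channel.basicNack(deliveryTag, false, true);
                // basic.nack with multiple=true requeues every unacknowledged delivery
                // up to and including deliveryTag (bulk negative acknowledgement):
                //   channel.basicNack(deliveryTag, true, true);
            };

            channel.basicConsume("demo-queue", false, onDeliver, consumerTag -> { });
            Thread.sleep(10_000); // note: requeueing in a loop like this would spin in a real system
        }
    }
}
```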
As for the basic.cancel method, it is used to notify the server that the client is stopping message consumption. Note that the client may receive an arbitrary number of messages between sending basic.cancel and receiving the cancel-ok reply. If the client is using message acknowledgements and has any un-acknowledged messages, they will be moved back to the queue they were originally consumed from.
basic.recover has some limitations in RabbitMQ; see the errata notes on:
- basic.recover with requeue=false
- basic.recover synchronicity
In addition to the errata, according to the RabbitMQ specification pages, basic.recover has only partial support (recovery with requeue=false is not supported).
Note about basic.consume:
When basic.consume is started without auto-ack (no-ack=false) and there are pending un-acked messages, then when the consumer gets cancelled (dies, fatal error, exception, whatever), those pending messages will be redelivered. Technically, those pending messages will not be processed (not even dead-lettered) until the consumer releases them (ack/nack/reject/recover). Only after that will they be processed (e.g. dead-lettered).
For example, let's say we originally publish 5 messages in a row:
Queue(main) (tail) { [4] [3] [2] [1] [0] } (head)
We then consume 3 of them, do not ack them, and then cancel the consumer. We end up with this situation:
Queue(main) (tail) { [4] [3] [2*] [1*] [0*] } (head)
where a star (*) denotes that the redelivered flag is set to true.
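A small sketch of that scenario with the RabbitMQ Java client (queue name is illustrative): consume without auto-ack, never ack, then cancel the consumer; the pending deliveries go back to the queue and come back flagged as redelivered.

```java
import com.rabbitmq.client.*;

public class CancelRedeliverySketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory(); // localhost defaults, placeholder
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            channel.queueDeclare("main", false, false, false, null);

            // Consume with autoAck=false and deliberately never ack anything.
            String consumerTag = channel.basicConsume("main", false,
                    (tag, delivery) -> System.out.println("got " + new String(delivery.getBody())
                            + ", redelivered=" + delivery.getEnvelope().isRedeliver()),
                    tag -> { });

            Thread.sleep(2_000);              // hold the un-acked deliveries for a moment
            channel.basicCancel(consumerTag); // on cancel the broker requeues the pending
                                              // messages; they come back with redelivered=true
        }
    }
}
```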
Now assume we have a dead-letter exchange set up and a queue for dead-lettered messages:
Exchange(e-main) Exchange(e-dead)
Queue(main){x-dead-letter-exchange: "e-dead"} Queue(dead)
And assume we publish 5 messages with the expiration property set to 5000 (5 sec):
Queue(main) (tail) { [4] [3] [2] [1] [0] } (head)
Queue(dead) (tail) { }(head)
and then we consume 3 messages from the main queue and hold them for 10 seconds:
Queue(main) (tail) { [2!] [1!] [0!] } (head)
Queue(dead) (tail) { [4*] [3*] } (head)
where an exclamation mark (!) stands for an un-acked message. Such messages can't be delivered to any other consumer and they normally can't be viewed in the management panel. But let's cancel the consumer, remembering that it still holds 3 un-acked messages:
Queue(main) (tail) { } (head)
Queue(dead) (tail) { [2*] [1*] [0*] [4*] [3*] } (head)
So now the 3 messages that were at the head are put back on the original queue, but as they have a per-message TTL set, they are dead-lettered to the tail of the dead-letter queue (via the dead-letter exchange, of course).
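A sketch of that topology with the RabbitMQ Java client, reusing the illustrative names from above (e-dead, main, dead) and a per-message expiration of 5000 ms:

```java
import com.rabbitmq.client.*;
import java.util.Map;

public class DeadLetterTtlSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory(); // localhost defaults, placeholder
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // dead-letter exchange and the queue bound to it
            channel.exchangeDeclare("e-dead", BuiltinExchangeType.FANOUT);
            channel.queueDeclare("dead", true, false, false, null);
            channel.queueBind("dead", "e-dead", "");

            // main queue routes expired/rejected messages to e-dead
            channel.queueDeclare("main", true, false, false,
                    Map.of("x-dead-letter-exchange", "e-dead"));

            // publish 5 messages with a per-message TTL of 5 seconds
            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .expiration("5000")
                    .build();
            for (int i = 0; i < 5; i++) {
                channel.basicPublish("", "main", props, ("msg-" + i).getBytes());
            }
        }
    }
}
```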
P.S.:
Consuming messages (i.e. listening for new ones) is somewhat different from direct queue access (getting one or more messages without taking care of the others). See the basic.get method description for more.
RPC call and cast are two different types of message-passing protocol in OpenStack. In the case of rpc.call, the invoker (or caller) waits for the reply or ack message from the worker (callee).
I am trying to intercept all RPC messages (both request and reply messages) passing through the RabbitMQ system in OpenStack. In OpenStack, all request messages pass through a single exchange named "nova". By attaching a new queue to the "nova" exchange, I can capture request messages.
Now, I want to capture the reply messages that are sent back to the caller. Reply messages can be captured by a "direct consumer" as specified by AMQP and Nova, excerpted as follows:
a Direct Consumer comes to life if (and only if) an rpc.call operation is executed; this object is instantiated and used to receive a response message from the queuing system; every consumer connects to a unique direct-based exchange via a unique exclusive queue; its life-cycle is limited to the message delivery; the exchange and queue identifiers are determined by a *UUID generator*, and are marshaled in the message sent by the Topic Publisher (only rpc.call operations).
In order to capture reply messages, I have tried to connect to a direct exchange with the corresponding msg_id or request_id. I am not sure what the correct exchange id would be for capturing the reply of a specific rpc.call.
Any idea what exchange id I could use to capture the reply from an rpc.call message? What is the UUID generator specified in the excerpt I attached?
I don't know the details of the OpenStack implementation, but when doing RPC over messaging systems, messages usually carry a correlation_id identifier that should be used to track requests.
See: http://www.rabbitmq.com/tutorials/tutorial-six-python.html
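The linked tutorial is in Python, but the idea is client-agnostic. As a sketch of the caller side with the RabbitMQ Java client (queue names are made up), the request carries a correlation_id and a reply_to queue, and the caller only accepts replies whose correlation_id matches:

```java
import com.rabbitmq.client.*;
import java.util.UUID;

public class RpcCallerSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory(); // localhost defaults, placeholder
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // exclusive, auto-named queue where replies for this caller arrive
            String replyQueue = channel.queueDeclare().getQueue();
            String correlationId = UUID.randomUUID().toString();

            // the request carries both the correlation id and where to send the reply
            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .correlationId(correlationId)
                    .replyTo(replyQueue)
                    .build();
            channel.basicPublish("", "rpc-requests", props, "do-something".getBytes());

            // only treat a reply as ours if its correlation_id matches the one we sent
            channel.basicConsume(replyQueue, true, (tag, delivery) -> {
                if (correlationId.equals(delivery.getProperties().getCorrelationId())) {
                    System.out.println("reply: " + new String(delivery.getBody()));
                }
            }, tag -> { });

            Thread.sleep(5_000); // wait briefly for the reply in this demo
        }
    }
}
```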
In ActiveMQ, I was sending a message to a consumer, and the consumer then forwards the message to a different process. I wanted to know if there is any way the acknowledgement can be sent to the broker from the other process.
I tried sending the Message object over a socket connection to the other process and then calling the acknowledge() method on it, but it does not work.
I tried sending the message to some other class object (in the same Java process) and then calling the acknowledge() method; that worked.
I guess it depends on how you are sending the message to the other process... I'd just call acknowledge() in the first consumer after the call that delivers it to the other process... that should guarantee that it's been delivered (assuming your delivery to the second process is sound)...
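A minimal sketch of that approach, assuming ActiveMQ's JMS client and a hypothetical forwardToOtherProcess() hand-off: the session uses CLIENT_ACKNOWLEDGE, and acknowledge() is called in the first consumer only after the hand-off succeeds.

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ForwardThenAck {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        // CLIENT_ACKNOWLEDGE: the message stays unacknowledged until we call acknowledge()
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("inbound.queue"));
        connection.start();

        Message message = consumer.receive();
        try {
            forwardToOtherProcess(message); // e.g. over a socket; if this throws, we never ack
            message.acknowledge();          // ack in the consumer's JVM, after a successful hand-off
        } catch (Exception e) {
            // not acknowledged: the broker will redeliver the message
            e.printStackTrace();
        }
    }

    // hypothetical hand-off to the second process
    private static void forwardToOtherProcess(Message message) { /* ... */ }
}
```

If the hand-off fails, the message is simply never acknowledged and the broker redelivers it, which keeps the acknowledgement decision inside the consumer process that received it.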