RabbitMQ: Consumer becomes producer -- connection closed error

I have the following scheme: a heavy task runs in consumers c1, c2, c3, ... (all consuming the first queue -- the task queue).
When a task in c1, c2, or c3 completes, the consumer's callback creates another connection and channel to publish a message to a second queue, cb_q.
I'm getting a "Connection Closed" error after my consumer publishes the task. However, I do not close my consumer's connection; I only close the producer's connection, and the two objects are different.
Questions:
Should I create another connection and channel in the task consumer's callback to publish the task to cb_q?
What are the best practices when a consumer becomes a producer?

I have a similar setup, in which a "worker" consumes from one queue, processes the data, and pushes the result onto another queue.
All of my queues currently live on the same machine, so I just reuse the connection/channel I already have to that machine and publish to the default exchange, simply specifying the name of the destination queue as the routing key.
So for consuming, my c1/c2/c3 instances call:
channel.basic_consume(queue=e1, on_message_callback=your_callback_function)
And for pushing to the next queue, they call, on the same channel:
channel.basic_publish(exchange='',
                      routing_key=cb_q,
                      properties=pika.BasicProperties(...),
                      body=message)
I haven't played around with queues on different machines, which would require establishing a new connection, but hopefully this helps. In my scenario, having a "worker" act as both a consumer and a producer is relatively simple.
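To make that concrete, here is a minimal sketch of such a worker in Python with pika (the localhost connection, the queue names e1 and cb_q from above, and the do_heavy_work placeholder are assumptions for illustration):

import pika

# One connection and one channel, reused for both consuming and publishing.
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.queue_declare(queue='e1')
channel.queue_declare(queue='cb_q')

def do_heavy_work(body):
    # Placeholder for the real task.
    return body

def your_callback_function(ch, method, properties, body):
    result = do_heavy_work(body)
    # Publish on the SAME channel the consumer is using -- no second
    # connection is needed, so nothing gets closed out from under us.
    ch.basic_publish(exchange='', routing_key='cb_q', body=result)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue='e1', on_message_callback=your_callback_function)
channel.start_consuming()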

RabbitMQ pause a queue

I am using a RabbitMQ Server (v3.8.9) with Java clients.
The use case is:
Our backend creates messages for different clients, and we send them out to their respective endpoints.
1 Producer -> Outbound Queue -> 1 Consumer
The producer creates messages for n clients
Which the consumer should send out to the clients' endpoints
Messages must be kept in the correct order regarding each client
This works fine as long as all clients are up and running. The problem: if one client becomes unavailable, we need a bulletproof retry mechanism for it.
Say:
Wait one minute and try again
All following messages must NOT be delivered before the first failed one, and must be kept in the correct order
If a retry succeeds, then ALL other messages should be sent to the client immediately
As you can see, it is not a solution to just "suspend" the consumer, because it should still deliver messages to the other (alive) clients. Due to application limitations and a dynamic number of clients, we cannot spawn one consumer per client queue.
My best approach right now is to dynamically create one queue per client, all routed to a single outbound queue. If one message to a particular client cannot be delivered by the consumer, I would like to "pause" that client's queue for x minutes. An API call like queue_pause('client_q1', '5 minutes') would help. But even then I would have to deal with the other messages already routed to that client, and keep them in the correct order...
Any better ideas?
I think the key here is that a single consumer script can consume from multiple queues. So if I'm understanding correctly, you could model this as:
Each client has its own queue. These could be created by the consumer script when it starts up, or by a back-end process when a new client is created.
The consumer script subscribes to each queue separately.
When a message is received, the consumer tries to send it immediately to the client; if it succeeds, it is manually acknowledged with basic.ack, and the consumer is ready to send the next message to that client.
When a message cannot be delivered to the client, it is requeued (basic.nack or basic.reject with requeue=1), retaining its position in the client's queue.
The consumer then needs to pause consuming from that particular queue. Depending on how it's written, that could be as simple as a sleep in that particular thread, but if that's not practical, you can effectively "pause" the subscription to the queue (see the sketch after this list):
Cancel the subscription to that queue, leaving the other subscriptions intact
Store the queue name and the retry time in an appropriate variable
If the consumer script is implemented with an event/polling loop, check the list of "paused" subscriptions each time around that loop; if the retry time has been reached, re-subscribe.
Alternatively, if the library / framework supports it, register a delayed event that will fire at the appropriate time and re-subscribe the queue. The exact mechanics of this depend on the technologies you're using.
All the other subscriptions will continue, so messages to other clients will be delivered. The queue with no subscribers will retain the messages for the offline client in order until the consumer script starts consuming them again.
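Here is a rough sketch of that pattern in Python with pika (the queue names, the 60-second retry delay, and the deliver_to_client function are illustrative assumptions, not part of the question):

import time
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.basic_qos(prefetch_count=1)  # at most one unacknowledged message per consumer

paused = {}         # queue name -> time at which to re-subscribe
consumer_tags = {}  # queue name -> active consumer tag

def deliver_to_client(queue, body):
    # Placeholder for the real call to the client's endpoint.
    return True

def make_callback(queue):
    def on_message(ch, method, properties, body):
        if deliver_to_client(queue, body):
            ch.basic_ack(delivery_tag=method.delivery_tag)
        else:
            # Requeue so the message keeps its place at the head of the
            # queue, and mark this queue as paused for 60 seconds.
            ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)
            paused[queue] = time.time() + 60
    return on_message

def subscribe(queue):
    consumer_tags[queue] = channel.basic_consume(
        queue=queue, on_message_callback=make_callback(queue))

for q in ('client_q1', 'client_q2'):
    channel.queue_declare(queue=q)
    subscribe(q)

# Event/polling loop, as described above.
while True:
    connection.process_data_events(time_limit=1)
    # Cancel subscriptions that were just marked as paused...
    for q in [q for q in paused if q in consumer_tags]:
        channel.basic_cancel(consumer_tags.pop(q))
    # ...and re-subscribe any whose retry time has been reached.
    for q in [q for q, t in paused.items() if t <= time.time()]:
        del paused[q]
        subscribe(q)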

ActiveMQ CMS: Can messages be lost between creating a consumer and setting a listener?

Setting up a CMS consumer with a listener involves two separate calls: first, acquiring a consumer:
cms::MessageConsumer* cms::Session::createConsumer( const cms::Destination* );
and then, setting a listener on the consumer:
void cms::MessageConsumer::setMessageListener( cms::MessageListener* );
Could messages be lost if the implementation subscribes to the destination (and receives messages from the broker/router) before the listener is activated? Or are such messages queued internally and delivered to the listener upon activation?
Why isn't there an API call to create the consumer with a listener as a construction argument? (Is it because the JMS spec doesn't have it?)
(Addendum: this is probably a flaw in the API itself. A more logical order would be to instantiate a consumer from a session, and have a cms::Consumer::subscribe( cms::Destination*, cms::MessageListener* ) method in the API.)
I don't think the API is necessarily flawed. Obviously it could have been designed a different way, but I believe the solution to your alleged problem lies in the start method on the Connection object (inherited via Startable). The documentation for Connection states:
A CMS client typically creates a connection, one or more sessions, and a number of message producers and consumers. When a connection is created, it is in stopped mode. That means that no messages are being delivered.
It is typical to leave the connection in stopped mode until setup is complete (that is, until all message consumers have been created). At that point, the client calls the connection's start method, and messages begin arriving at the connection's consumers. This setup convention minimizes any client confusion that may result from asynchronous message delivery while the client is still in the process of setting itself up.
A connection can be started immediately, and the setup can be done afterwards. Clients that do this must be prepared to handle asynchronous message delivery while they are still in the process of setting up.
This is the same pattern that JMS follows.
In any case, I don't think there's any risk of message loss regardless of when you invoke start(). If the consumer is using an auto-acknowledge mode, then messages should only be automatically acknowledged once they are delivered, either synchronously via one of the receive methods or asynchronously through the listener's onMessage. To do otherwise would, in my estimation, be a bug. I've worked with JMS for the last 10 years on various implementations, and I've never seen any kind of condition where messages were lost because of this.
If you want to add consumers after you've already invoked start() you could certainly call stop() first, but I don't see any problem with simply adding them on the fly.

Spring Cloud Stream DLQ, Producer and Consumer Residing under Multiple Application

I have a producer in, say, application A with the below configuration.
#Producer Properties:
spring.cloud.stream.bindings.packageVersionUpdatesPublishChannel.destination=fabric-exchange
spring.cloud.stream.bindings.packageVersionUpdatesPublishChannel.producer.requiredGroups=version-updates
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesPublishChannel.producer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesPublishChannel.producer.routingKeyExpression='package-version'
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesPublishChannel.producer.bindingRoutingKey=package-version
And I have a consumer for the same queue in another application, say B.
#Consumer Properties:
spring.cloud.stream.bindings.packageVersionUpdatesConsumerChannel.destination=fabric-exchange
spring.cloud.stream.bindings.packageVersionUpdatesConsumerChannel.group=package-version-updates
spring.cloud.stream.bindings.packageVersionUpdatesConsumerChannel.consumer.max-attempts=1
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesConsumerChannel.consumer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesConsumerChannel.consumer.durableSubscription=true
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesConsumerChannel.consumer.bindingRoutingKey=package-version
#DLQ
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesConsumerChannel.consumer.autoBindDlq=true
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesConsumerChannel.consumer.dlqDeadLetterExchange=
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesConsumerChannel.consumer.dlq-ttl=30000
#Error Exchange Creation and Bind the Same to Error Queue
spring.cloud.stream.bindings.packageVersionUpdatesErrorPublishChannel.destination=fabric-error-exchange
spring.cloud.stream.bindings.packageVersionUpdatesErrorPublishChannel.producer.requiredGroups=package-version-updates-error
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesErrorPublishChannel.producer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesErrorPublishChannel.producer.routingKeyExpression='packageversionupdateserror'
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesErrorPublishChannel.producer.bindingRoutingKey=packageversionupdateserror
Now say, for example, that application A boots first; then the queue version-updates will be created without any dead-letter configuration attached to it.
When application B then starts, I get the exception below and the channel gets shut down. I think this is because app B is trying to re-declare the queue with a different configuration:
inequivalent arg 'x-dead-letter-exchange' for queue 'fabric-exchange.version-updates' in vhost '/': received the value 'DLX' of type 'longstr' but current is none
Can anyone please let me know how I can solve this? My requirement is to create the queue in app A, have app A simply produce messages onto it, have app B consume them, and support retries after X amount of time through a DLQ.
required-groups is simply a convenience that provisions the consumer queue when the producer starts, to avoid losing messages if the producer starts before the consumer.
You must use identical exchange/queue/binding configuration on both sides.
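For example, to satisfy the DLQ requirement while letting app A provision the queue, app A's producer side would need the same dead-lettering settings that app B declares. A sketch (untested; the Rabbit binder exposes these DLQ properties on the producer side, where they apply only to the required groups):

spring.cloud.stream.rabbit.bindings.packageVersionUpdatesPublishChannel.producer.autoBindDlq=true
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesPublishChannel.producer.dlqDeadLetterExchange=
spring.cloud.stream.rabbit.bindings.packageVersionUpdatesPublishChannel.producer.dlqTtl=30000

Note that the producer's requiredGroups value must also match the consumer's group; otherwise the two sides provision different queues.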

How to re-declare a queue if it gets deleted in RPC RabbitMQ

I am using the Java client from https://www.rabbitmq.com/tutorials/tutorial-six-java.html. My setup is RPC: my server creates a queue, and the client declares the same queue and sends messages to it. After receiving a message, the server performs some operation and sends the result back to the client.
Now suppose the server has created the queue and is consuming from it, and the queue gets deleted for some reason. The server does not throw any exception, and when the client re-creates the same queue and publishes messages, the server does not get them either, as it is no longer subscribed.
How does the server know that the queue got deleted?
Thanks so much
It sounds like the following situation is happening:
Queue A is created.
Consumer 1 subscribes to Queue A
Queue A is deleted while Consumer 1 is still active
Queue A is re-created (call it A')
Now, you're wondering why Consumer 1 is not getting any messages? You would have to re-subscribe your consumer. I don't usually delete queues, because there is no need to do so under any reasonable scenario (instead, use the queue.expires property to handle auto-deletion of queues).
According to the AMQP 0-9-1 Specification,
When a queue is deleted any pending messages are sent to a dead-letter
queue if this is defined in the server configuration, and all
consumers on the queue are cancelled.
So, based on the description of the behavior, this is a bug with the consumer. It should throw an exception or otherwise exit the consuming loop in this case. In any case, you'll have to re-subscribe to A' before you'll get any more messages.
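The usual defensive pattern is to wrap the subscription in a loop that re-declares the queue and re-subscribes whenever consumption stops. A rough sketch in Python with pika (the tutorial's code is Java; whether a broker-initiated cancel surfaces as a normal return or an exception depends on the client, so this loop handles both, and the simplified handler is an assumption):

import time
import pika
from pika.exceptions import AMQPChannelError, AMQPConnectionError

QUEUE = 'rpc_queue'  # queue name taken from the RPC tutorial

def on_request(ch, method, properties, body):
    # Simplified: a real RPC server would publish a reply to
    # properties.reply_to here.
    ch.basic_ack(delivery_tag=method.delivery_tag)

while True:
    try:
        connection = pika.BlockingConnection(
            pika.ConnectionParameters('localhost'))
        channel = connection.channel()
        channel.queue_declare(queue=QUEUE)  # re-creates it if it was deleted
        channel.basic_consume(queue=QUEUE, on_message_callback=on_request)
        channel.start_consuming()  # may return if the broker cancels our consumer
        connection.close()         # clean up before re-subscribing
    except (AMQPChannelError, AMQPConnectionError):
        pass  # ...or it may raise if the channel/connection fails
    time.sleep(5)  # back off briefly, then re-declare and re-subscribe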

API design around RabbitMQ for publisher/subscriber

TL;DR - What's the best way to expose RabbitMQ to consumers via a REST API?
I'm creating an API to publish and consume messages from RabbitMQ. In my current design, the publisher makes a POST request, and my API routes that request to the exchange. This way, the publisher doesn't have to know the server address, exchange name, etc. while publishing.
Now the consumer part is where I'm not sure how to proceed.
At the beginning there will be no queues. When a new consumer wants to subscribe to a TOPIC, I will create a queue and bind it to the exchange. I need help answering a few questions:
Once I create a queue for the consumer, what's the next step to let the consumer get messages from that queue?
I plan to have the consumer ask for a batch of messages (say, 50) from the queue. Once I receive an ack from the consumer, I will send the next 50 messages from the queue; if I don't receive an ack, I will requeue the 50 messages. Isn't this expensive in terms of opening and closing connections between the consumer and my API?
If there is a better approach then please suggest
In general, your idea of putting RMQ behind a REST API is a good one. You don't want to expose RMQ to the world directly.
For the specific questions:
Once I create a queue for the consumer, what's the next step to let the consumer get messages from that queue?
Have you read the tutorials? I would start there, for the language you are working with: http://www.rabbitmq.com/getstarted.html
Isn't this expensive in terms of opening and closing connections between the consumer and my API?
Don't open and close connections for each batch of messages.
Your application instance (the "consumer" app) should have a single connection. That connection stays open as long as you need it - across as many calls to RabbitMQ as you want.
I typically open my RMQ connection as soon as the app starts, and I leave it open until the app shuts down.
Within the consumer app, using that one single connection, you will create multiple channels through the connection. A channel is where the actual work is done.
Depending on your language, you might have a single channel per thread, a single channel per queue being consumed, etc.
You can create and destroy channels very quickly, unlike connections.
More specifically, for your idea of batch processing: this is handled by setting a consumer prefetch limit and requiring each message to be acknowledged after it is processed.
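For instance, a minimal sketch of that prefetch approach in Python with pika (the queue name, the batch size of 50 from the question, and the process placeholder are assumptions for illustration):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='topic_queue')

# The broker will deliver at most 50 unacknowledged messages at a time,
# giving the "batch of 50" behavior without reconnecting per batch.
channel.basic_qos(prefetch_count=50)

def process(body):
    # Placeholder for the real message handling.
    pass

def on_message(ch, method, properties, body):
    process(body)
    ch.basic_ack(delivery_tag=method.delivery_tag)  # frees a prefetch slot

channel.basic_consume(queue='topic_queue', on_message_callback=on_message)
channel.start_consuming()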