ResponseQueue: Can you send a queue as part of the message/body in RabbitMQ?

In MSMQ there is a nice feature called a response queue: as part of the message one can send a (private/invisible) queue as well, in which the response is awaited - very similar to callbacks in the async world. Technically this feature is just a wrapper around private queues and queue monikers.
Is there anything similar in RabbitMQ?

Actually I figured it out:
a private (exclusive, server-named) queue is created this way:
result = channel.queue_declare(queue='', exclusive=True)
privateQ = result.method.queue
and the response queue is passed via the reply_to message property of the publish call (rather than being embedded in the message body):
channel.basic_publish(exchange='',
                      routing_key='rpc_queue',
                      properties=pika.BasicProperties(
                          reply_to=privateQ,
                      ),
                      body=request)
The real difference, actually hinted at by the way the API is shaped, is that you should not create a reply queue for every message, as this is inefficient. The suggested approach is to have one private queue that accepts all responses and to match them using a correlation id.
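A minimal sketch of that single-reply-queue pattern with pika's BlockingConnection; the rpc_queue name and the request body are assumptions, and an RPC server is presumed to be consuming from rpc_queue:

import uuid
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# One exclusive, server-named reply queue, reused for every request.
reply_queue = channel.queue_declare(queue='', exclusive=True).method.queue
corr_id = str(uuid.uuid4())

channel.basic_publish(exchange='',
                      routing_key='rpc_queue',
                      properties=pika.BasicProperties(reply_to=reply_queue,
                                                      correlation_id=corr_id),
                      body=b'my request')

# Drain the shared reply queue until the response with our correlation id arrives.
for method, props, body in channel.consume(reply_queue, auto_ack=True):
    if props.correlation_id == corr_id:
        print('got response:', body)
        break
channel.cancel()
connection.close()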

Related

Publish same message to different queues in RabbitMQ using Masstransit

Can anybody help me find out how I can send a message to different queues (depending on business logic) in RabbitMQ, using MassTransit?
I have read the documentation but didn't find how to specify the destination queue name.
Thank you.
You might want to read the documentation again. If you want to send to a specific queue, you actually send to the exchange that MassTransit creates for that queue.
Use the ISendEndpointProvider (or ConsumeContext):
Call await GetSendEndpoint(new Uri("exchange:name")) or await GetSendEndpoint(new Uri("queue:name")) to get the ISendEndpoint.
Call Send(...) to send the message to the exchange or queue.

What is the correct way to perform a single blocking, synchronous receive with Pika?

I would like to use Pika / RabbitMQ in a pattern similar to a standard socket: that is, set up the connection, then make blocking synchronous calls to receive a single message each time I'm ready to do more work.
Option A: basic_get
The basic_get method on a channel of the BlockingConnection offers the ability to receive a message, but it returns immediately if there is no message available. This is like a socket recv call with blocking disabled. I could use this approach in a polling loop with a sleep between attempts, but that's not efficient.
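Such a polling loop would look roughly like this (the queue name is illustrative); it shows the wasted work while the queue sits idle:

import time
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# basic_get returns (None, None, None) immediately when the queue is empty.
while True:
    method, properties, body = channel.basic_get('work_queue', auto_ack=True)
    if method is not None:
        break
    time.sleep(0.1)    # poll interval; burns cycles while idle
print(body)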
Option B: basic_consume
The basic_consume method on the channel could do the job, but it has the strange requirement that I call start_consuming() somewhere else, in a thread by itself. Since callers of my receive method already expect to block while waiting for a message, this seems like a waste of a thread.
Is it possible with Pika to do the equivalent of socket.recv(blocking=True)?
Run Pika on its own thread and use basic_consume with a prefetch value of 1 (if you really want a single message at a time). Insert messages into some sort of synchronized data structure on which your callers can block.
Be sure to acknowledge your messages correctly from other threads (example)
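A rough sketch of that arrangement, assuming a queue named work_queue (the name is illustrative); acknowledgements are handed back to the Pika thread via connection.add_callback_threadsafe:

import functools
import queue
import threading
import pika

inbox = queue.Queue()        # synchronized structure that callers block on
ready = threading.Event()
connection = None
channel = None

def on_message(ch, method, properties, body):
    inbox.put((method.delivery_tag, body))

def pika_thread():
    global connection, channel
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.basic_qos(prefetch_count=1)          # at most one unacknowledged message in flight
    channel.basic_consume('work_queue', on_message_callback=on_message)
    ready.set()
    channel.start_consuming()

threading.Thread(target=pika_thread, daemon=True).start()

def receive():
    ready.wait()
    tag, body = inbox.get()                      # blocks like socket.recv()
    # Acks must run on the Pika thread, hence add_callback_threadsafe.
    connection.add_callback_threadsafe(functools.partial(channel.basic_ack, tag))
    return body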
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
Use the channel's basic_get method like in this example:
import pika

credentials = pika.PlainCredentials('username', 'password')
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost', credentials=credentials))
channel = connection.channel()
inmessage = channel.basic_get("your_queue_name", auto_ack=True)
inmessage is a tuple of 3 elements; the element at index 2 is your message's body.
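For clarity, the tuple can be unpacked like this; note that all three elements are None when the queue is empty, since basic_get does not block:

method, properties, body = inmessage
if method is None:
    print("queue was empty")        # basic_get returned without a message
else:
    print("received:", body)        # body is a bytes object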

Spring AMQP RabbitMQ RPC - Queue with with some messages that do not expect a response

I am trying to create a priority RPC queue that can accept some messages that expect a response and some messages that do not. The problem I am facing is that when I send messages with convertAndSend I get an error saying "org.springframework.amqp.AmqpException: Cannot determine ReplyTo message property value: Request message does not contain reply-to property, and no default response Exchange was set." I know the issue is that the RPC queue is expecting a response and the message just stays on the queue, but for these messages I do not want/need a response. Any idea how I can work around this issue?
Thanks,
Brian
A solution recommended in this link worked for me: Single Queue, multiple @RabbitListener but different services. Basically I have a class annotated with @RabbitListener, and different methods annotated with @RabbitHandler.

MassTransit generates _skipped queues which I want to ignore

Can anyone guess what the problem could be? I'm clueless about how to solve this. MassTransit generates _skipped queues and I don't have a clue why it is generating those. They are generated when doing a publish request/response.
The request client is created using the following method in MassTransit.RequestClientExtensions:
public static IRequestClient<TRequest, TResponse> CreatePublishRequestClient<TRequest, TResponse>(this IBus bus, TimeSpan timeout, TimeSpan? ttl = null, Action<SendContext<TRequest>> callback = null)
    where TRequest : class
    where TResponse : class
{
    return (IRequestClient<TRequest, TResponse>) new PublishRequestClient<TRequest, TResponse>(bus, timeout, ttl, callback);
}
And the request is done as follows:
TResponse response = TaskUtil.Await(() => requestClient.Request(request));
As you can see, this is a request/response scenario where the request is published to all consumers. But because at the moment we have only one consumer, it is only sent to that consumer. Dead letters appear easily if a publish request/response is done to multiple consumers: once one consumer responds, the other consumers don't know where to respond and a dead letter is generated. But because we have one consumer here, we can eliminate this possibility.
So what else could be causing these skipped queues? Huge thanks for any help on how I can troubleshoot this...
I should add that, in the Consume method, under some conditions we throw a RequestTimeoutException and catch it in the requesting application. This is tested and does not generate skipped queues.
A skipped queue is a dead-letter queue. It means that your endpoint queue has a binding to some message exchange, but there is no longer a consumer for that message. Maybe you changed the topology and moved the consumer. You can go to the RMQ management UI and check the bindings for your endpoint exchange. If you look at the messages that ended up in the skipped queue, you will find out what message types to look for.
Exchanges are named after message types so it will be easy to find the obsolete binding.
Then, in the management UI, you can manually remove the binding that is obsolete and there will be no more messages coming to the skipped queue.

Rabbitmq + web stomp plugin with rpc - reply-to

I'm trying to perform an RPC with RabbitMQ's STOMP adapter. As the client I'm using the STOMP over WebSocket library (https://github.com/jmesnil/stomp-websocket/).
From the documentation (http://www.rabbitmq.com/stomp.html#d.tqd) I see that I have to set the reply-to header. I've done that by specifying something like "reply-to: /temp-queue/foo" and I saw in my server-side client (node-amqp) that the replyTo header is set correctly (example: replyTo: '/reply-queue/amq.gen-w2jykNGp4DNDBADm3C4Cdx'). Still in my server-side client, I can reply to the message just by publishing a message to "/reply-queue/amq.gen-w2jykNGp4DNDBADm3C4Cdx".
However, how do I get this reply in my client code where the RPC call was initiated? The documentation states "SEND and SUBSCRIBE frames must not contain /temp-queue destinations (...) subscriptions to reply queues are created automatically."
So, how do I subscribe to the reply-to queue? How can I get the results of RPC calls?
Thanks in advance.
The answer is:
When you receive the RPC call in the server worker, you get the replyTo header. That header looks like:
replyTo: '/reply-queue/[queue_name]'
for example: replyTo: '/reply-queue/amqp.fe43gggr5g54g54ggfd_'
The trick is:
you have to parse it and reply only to the queue_name (for example: amqp.fe43gggr5g54g54ggfd_)
you have to reply to the default exchange and not to any other exchange
Example of an answer in Node.js:
function onRpcReceived(message, headers, deliveryInfo, m) {
    var reply_to = m.replyTo.toString().substr(13, m.replyTo.toString().length);
    connection.publish(reply_to, {response: "OK", reply: "The time is 13h35m"}, {
        contentType: 'application/json',
        contentEncoding: 'utf-8',
        correlationId: m.correlationId
    });
}
Now I just wonder why the web-stomp plugin adds the /reply-queue/ string to the replyTo header attribute instead of only the queue name! If someone knows the reason, I would like to know.
The answer to the original question:
However, how do I get this reply in my client code where the RPC call was initiated? The documentation states "SEND and SUBSCRIBE frames must not contain /temp-queue destinations (...) subscriptions to reply queues are created automatically."
So, how do I subscribe to the reply-to queue? How can I get the results of RPC calls?
Rabbit automatically subscribes the current STOMP session to the temp queue. The client doesn't know the temp queue name and cannot subscribe to it. However, when Rabbit sends a STOMP MESSAGE frame it sets the subscription header to the "reply-to" value (e.g. "/temp-queue/foo"). Although the STOMP over WebSocket client wasn't written with this in mind, a subscription could be registered as follows:
stompClient.subscriptions['/temp-queue/foo'] = function(message) {
    // ...
};
I'd be happy to hear if there is another solution.
NB: there is no longer a '/reply-queue/' prefix in replyTo since RabbitMQ 3.0.0.
I spent about 4 hours finding out what the problem was. Use .replace('/reply-queue/', '') instead of .substr(13)!