I want to be able to send different kinds of messages to RabbitMQ so that as a consumer I can distinguish them. I don't want to add anything to the bodies of the messages. How can I do that?
# producer
channel1.basic_publish(exchange="", routing_key="something", body="fdsfds") # something here maybe?
Have you considered using headers? They are useful for adding meta-information to a message without touching its body!
See this thread for an example: https://groups.google.com/forum/m/#!topic/pika-python/LjCldaIhEzA
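A minimal sketch of the idea, assuming pika; the header name `message-type` is just a convention chosen for this example, and the broker-dependent publish call is left as a comment since it needs a running RabbitMQ:

```python
# The kind of message travels in a header, not in the body.
headers = {"message-type": "order.created"}  # header name is our own convention

# Producer side (requires pika and a running broker):
#   channel1.basic_publish(
#       exchange="",
#       routing_key="something",
#       body="fdsfds",
#       properties=pika.BasicProperties(headers=headers),
#   )

# Consumer side: the headers arrive in the message properties,
# so the body never has to change.
def classify(properties_headers):
    """Return the message kind the producer recorded, or 'unknown'."""
    return (properties_headers or {}).get("message-type", "unknown")
```

On the consumer, pika hands you the properties in the delivery callback, so `classify(properties.headers)` is all that is needed to tell the kinds apart.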
Is it possible to get a list of consumers for a particular routing key when using wildcards?
I have two consumers creating these two routing keys:
customer.created.#
customer.created.from.template.#
I want to find out which routing keys match for a customer.created.from.template event.
RabbitMQ has a management API. One of the endpoints it exposes is /consumers, which lists all consumers on a particular RabbitMQ cluster.
While I'm sure there would be a way to use this information to get what you need here, I'm not sure what the particular use case is. If you could supply additional detail, it might be possible to advise further.
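If it helps, the wildcard semantics a topic exchange applies are easy to reproduce, so you could run the binding keys returned by the management API against an event's routing key yourself. This is only a sketch of AMQP topic matching (`*` matches exactly one word, `#` matches zero or more words):

```python
def topic_matches(pattern, key):
    """Return True if an AMQP topic binding pattern matches a routing key."""
    return _match(pattern.split("."), key.split("."))

def _match(pat, words):
    if not pat:
        return not words
    if pat[0] == "#":  # '#' matches zero or more words
        return any(_match(pat[1:], words[i:]) for i in range(len(words) + 1))
    if not words:
        return False
    if pat[0] == "*" or pat[0] == words[0]:  # '*' matches exactly one word
        return _match(pat[1:], words[1:])
    return False
```

Note that both bindings from the question match a `customer.created.from.template` routing key, because `#` also matches zero words.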
One possible way of doing this is to use the Firehose Tracer.
The firehose publishes messages to the topic exchange amq.rabbitmq.trace. Messages consumed and inspected via the Firehose mechanism are referred to as "traced messages".
A traced message's routing key will be either "publish.{exchangename}" (for messages entering the node) or "deliver.{queuename}" (for messages delivered to consumers).
The Trace queue can then be consumed to extract the desired information.
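Tracing is switched on per vhost with `rabbitmqctl trace_on`; once a queue is bound to `amq.rabbitmq.trace`, a consumer can recover the direction and the exchange/queue name from the trace routing key alone. A small sketch of that parsing step:

```python
def parse_trace_key(routing_key):
    """Split a Firehose routing key into (kind, name).

    kind is 'publish' (message entered the node via an exchange) or
    'deliver' (message was delivered from a queue); name is the
    exchange or queue name, which may itself contain dots.
    """
    kind, _, name = routing_key.partition(".")
    if kind not in ("publish", "deliver"):
        raise ValueError("not a firehose routing key: %r" % routing_key)
    return kind, name
```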
I am using ActiveMQ to store messages to be used later. It is working as expected, but there is a specific scenario I need to fit which I cannot figure out.
The short question is this.
Is there a way to run a query on the queue to find out all messages with certain header values?
The problem in detail is this :
A set of data arrives split across multiple messages, and the requirement is to use that data only after all messages for the set have arrived.
So if a dataset has, let's say, 50 messages, I need to wait for all 50 and then read them in.
I am adding headers to each message to denote they belong to a certain set.
Like "TotalSets"=50, "SetId"=39.
I would like to write a thread that keeps checking whether all sets for a particular batch have arrived.
NMS is the .NET equivalent of the JMS messaging API, so the means of filtering messages is the same as in JMS: your subscription applies a JMS message selector when it is created, telling the broker which messages it is interested in. The session methods that create MessageConsumer instances have variants accepting a selector in the JMS-defined syntax, and that selector is your means of filtering messages.
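With a selector such as `TotalSets = 50` the consumer only receives messages from that batch; tracking completeness is then just set arithmetic over the `SetId` header values. A hypothetical sketch, with header names taken from the question (the polling thread itself is omitted):

```python
class BatchTracker:
    """Collects SetId values and reports when a batch is complete."""

    def __init__(self, total_sets):
        self.total_sets = total_sets
        self.seen = set()

    def record(self, set_id):
        """Call for each consumed message, passing its SetId header."""
        self.seen.add(set_id)

    def complete(self):
        # Every SetId from 1..TotalSets must have been observed.
        return self.seen >= set(range(1, self.total_sets + 1))
```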
I want to use RabbitMQ with StormCrawler. I already saw that there is a repository for using RabbitMQ with Storm:
https://github.com/ppat/storm-rabbitmq
How would you use this for the StormCrawler? I would like to use the Producer as well as the consumer.
For the consumer there seems to be some documentation. What about the Producer? Can you just put the config entries in the storm crawler config or would I need to change the source code of the RabbitMQProducer?
You'd want the bolt which sends URLs to RabbitMQ to extend AbstractStatusUpdaterBolt, as the super class does a lot of useful things under the bonnet; this means you would not use the Producer out of the box but will need to write some custom code.
Unless you are certain that there will be no duplicate URLs, you'll need to deduplicate the URLs before sending them to the queues anyway, which could be done e.g. with Redis within your custom status updater.
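The dedup gate inside such a custom status updater can be as simple as a membership check before emitting. A sketch with an in-memory set (in production you would back this with Redis, e.g. `SADD` returning 0 for an already-seen URL):

```python
class URLDeduplicator:
    """Skeleton of the dedup step: only the first sighting of a URL passes."""

    def __init__(self):
        self._seen = set()  # swap for a Redis set in production

    def should_send(self, url):
        """Return True exactly once per distinct URL."""
        if url in self._seen:
            return False
        self._seen.add(url)
        return True
```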
I use NServiceBus 5 with RabbitMQ, and I want to send different messages to different queues under the same unit of work. Is that possible?
using (_NsbunitOfWork)
{
_NsbunitOfWork.Begin();
_busSms.Send(smsmessage);
_busOffer.Send(offermessage);
_busTrnx.Send(Trnxmessage);
_NsbunitOfWork.Commit();
}
I'm not sure what you're trying to accomplish. I'm assuming you want to send messages to different queues so that different applications can process these messages? In the documentation these are usually called endpoints.
If you read the routing documentation, you'll notice that the sender code should not be aware of where the message should be sent. This is what routing takes care of. So you could do multiple calls to context.Send() and NServiceBus would figure out where to send the message to.
Does that make sense? You could also try https://discuss.particular.net/, which is better suited for NServiceBus-related discussions with multiple replies, or contact support@particular.net.
I have to implement this scenario:
An external application publish message to rabbitmq.
This message has a client_id property. We can place this id in the routing key, a message header, or some other property.
I have to implement sharding in the exchange routing logic: the message should be delivered to a specific queue based on the client_id range.
Is it possible to implement this with the standard exchanges?
If not, which exchange should I take as the base?
How do I dynamically change the client_id ranges?
Take a look at the rabbitmq_sharding plugin. It's included in the RabbitMQ distribution from v3.6.0 onwards.
Just have your producer put enough info into the routing key that causes the message to go into the right queue on the other side of the Exchange.
So for example, create two queues called 1 and 2 and bind them with routing keys matching the names. Then have your producer decide which routing key to use when producing the event message. Customers with names starting with letters a-m go to 1, n-z go to 2, you get the idea. It pushes the sharding to the producer but that might be OK for your application.
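The producer-side decision from that example is trivial to encode; queue names `1` and `2` are just the example's own convention:

```python
def routing_key_for(customer_name):
    """Pick the shard queue from the first letter of the customer name:
    a-m go to queue '1', n-z go to queue '2'."""
    first = customer_name[0].lower()
    return "1" if "a" <= first <= "m" else "2"
```

The producer then publishes with `routing_key=routing_key_for(name)` and the direct bindings do the rest.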
AMQP doesn't have an explicit implementation of sharding, but its architecture helps you build one.
Spreading messages across several queues is a standard RabbitMQ routing concern (and part of the AMQP specification): with routing you can attach heterogeneous consumers to handle specific messages routed via the same exchange. The producer therefore publishes with a specific key so that the message is consumed by a specific queue/consumer.
You can decide to make the sharding static: say you have 10 queues with one consumer per queue, and implement a simple partitioning function such that the key is CLIENT_ID % 10.
Other, non-static solutions could also be proposed, and you can build on top of this architecture.
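The static scheme described above can be sketched in a few lines; the queue-naming convention here is hypothetical:

```python
NUM_SHARDS = 10  # one consumer per queue in the static scheme

def shard_queue(client_id):
    """Map a client_id onto one of NUM_SHARDS queues via modulo."""
    return "clients.shard.%d" % (client_id % NUM_SHARDS)
```

Note that plain modulo reshuffles most keys if NUM_SHARDS ever changes; if the ranges must change dynamically, that is where a proper consistent-hashing scheme (or the sharding plugin) earns its keep.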