I am using ActiveMQ as a message broker with something like 140 topics.
I am facing a problem where the broker keeps old messages instead of discarding them in order to send new messages (so clients get old data instead of current data).
How do I configure the broker not to keep old messages? The important data is always the latest, so if a consumer misses some data, it should simply get the most up-to-date data the next time.
I have configured a TTL of 250 on the producer, but it doesn't seem to work...
One other thing: how can I disable the creation of advisory topics?
Any help will be appreciated...
Advisory messages are required for dynamic network broker topologies as NetworkConnectors subscribe to advisory messages. In the absence of advisories, a network must be statically configured.
Beware that using advisorySupport="false" will NOT work with dynamic network brokers as per this reference page: http://activemq.apache.org/advisory-message.html
Are you using a durable consumer to receive these messages from the topics concerned? If so, the broker will be holding on to all messages sent while you were disconnected. Switch to a regular consumer in order to only see "current" messages on the topic.
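For reference, a minimal JMS sketch of the difference; the broker URL, client ID, topic name and subscription name below are placeholders, not taken from the question:

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ConsumerKinds {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.setClientID("dashboard-1");   // required only for the durable case
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("SENSOR.DATA");

        // Durable subscriber: the broker stores every message published while this
        // subscriber is offline, so old data is replayed on reconnect.
        MessageConsumer durable = session.createDurableSubscriber(topic, "dashboard-sub");

        // Regular (non-durable) consumer: only sees messages published while it is
        // connected, which is what "current data only" requires.
        MessageConsumer regular = session.createConsumer(topic);
    }
}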
To prevent the creation of advisory topics and their associated messages, add the advisorySupport="false" attribute to the <broker /> element of the ActiveMQ config file.
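In conf/activemq.xml that would look something like this (the broker name and the rest of the configuration are placeholders):

<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="localhost"
        advisorySupport="false">
    <!-- transport connectors, persistence adapter, etc. unchanged -->
</broker>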
I need a queue/message broker that would allow me to receive messages one at a time for each user.
Previously I've been using FIFO SQS, where each user has its own message group ID, which allows me to have 1 message in flight for each user.
But I've ended up with a problem when I have more than 20,000 messages in the queue, because a FIFO queue only looks through the first 20,000 messages to determine the available message groups, which makes all messages beyond that limit unavailable for processing.
Do you have any advice for a message broker/queue that would allow me to achieve the same behavior without the 20,000-message limit?
I was thinking about using Redis Lists (where each user has its own list and I do an RPOP on each existing list).
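A rough sketch of that Redis idea, assuming the Jedis client and a hypothetical one-list-per-user key scheme (queue:user:<id>):

import redis.clients.jedis.Jedis;
import java.util.Set;

public class PerUserPoller {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Hypothetical key scheme: one list per user, e.g. "queue:user:42".
            // KEYS is fine for a sketch; a real version would track user ids in a set.
            Set<String> userLists = jedis.keys("queue:user:*");
            for (String key : userLists) {
                String message = jedis.rpop(key);   // at most one message per user per pass
                if (message != null) {
                    System.out.println(key + " -> " + message);
                }
            }
        }
    }
}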
A couple of options to overcome the consumer window problem you describe, using ActiveMQ:
Use Virtual Topics and have consumers register a selector. The broker will filter messages for each consumer into its own queue. This does not require any broker changes and is completely dynamic, since the client sets the selector (see the sketch below).
Use Composite Destinations to filter messages into separate queues. This is static broker routing and requires a server-side config change for any new routing rules. The clients do not need to be changed.
Both approaches are documented here:
https://activemq.apache.org/virtual-destinations
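A minimal sketch of the Virtual Topics option, assuming the default VirtualTopic./Consumer. naming convention and a hypothetical userId message property used in the selector:

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class VirtualTopicConsumer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Producers publish to the topic VirtualTopic.Users; the broker forwards a copy
        // into one queue per consumer prefix, here Consumer.billing.VirtualTopic.Users.
        Queue queue = session.createQueue("Consumer.billing.VirtualTopic.Users");

        // The selector keeps only the messages this consumer is interested in.
        MessageConsumer consumer = session.createConsumer(queue, "userId = '42'");
        consumer.setMessageListener(message -> System.out.println("got " + message));

        connection.start();
    }
}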
We are currently using RabbitMQ Dynamic Shovels to forward messages to Azure Event Hub. Recently we set up a new queue to be forwarded to Event Hub. Some messages in this queue are over 1 MB, which is the size limit for messages on Event Hub. Because of this limit, the messages bounce back and are re-sent a few times each second. This creates a lot of network traffic, which can be an issue.
Is there any way to send messages that bounce back to a DLX (dead letter exchange) or to a different queue? We have looked for some Dynamic Shovel options but could not find any that would be of any use.
Thank you Jesse Squire. Posting your suggestion as an answer to help other community members.
Generally, for cases when your payload is (or may be) larger than the allowable size, we recommend considering the claim check pattern where you store your payload in some other durable store (such as Blob storage) and then publish the event with a body that points to that resource.
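A rough illustration of the claim check idea; the BlobStore and EventPublisher interfaces here are hypothetical stand-ins, not part of any particular SDK:

import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class ClaimCheckPublisher {

    // Hypothetical stand-ins for the real Blob storage and Event Hub clients.
    interface BlobStore { String upload(String blobName, byte[] content); }   // returns a URI
    interface EventPublisher { void publish(byte[] body); }

    private static final int MAX_EVENT_BYTES = 1_000_000;   // roughly the 1 MB limit mentioned above

    static void send(BlobStore blobs, EventPublisher events, byte[] payload) {
        if (payload.length <= MAX_EVENT_BYTES) {
            events.publish(payload);                          // small enough: publish directly
        } else {
            // Claim check: park the large payload in durable storage, publish only a pointer.
            String uri = blobs.upload("payloads/" + UUID.randomUUID(), payload);
            String pointer = "{\"payloadUri\":\"" + uri + "\"}";
            events.publish(pointer.getBytes(StandardCharsets.UTF_8));
        }
    }
}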
You can refer to Dead-lettering dead-lettered messages in RabbitMQ.
You can also open an issue on GitHub: rabbitmq-server
I'm trying to setup RabbitMQ in a model where there is only one producer and one consumer, and where messages sent by the producer are delivered to the consumer only if the consumer is connected, but dropped if the consumer is not present.
Basically I want the queue to drop all the messages it receives when no consumer is connected to it.
An additional constraint is that the queue must be declared on the RabbitMQ server side, and must not be explicitly created by the consumer or the producer.
Is that possible?
I've looked at a few things, but I can't seem to make it work:
durable vs non-durable does not help, because durability only matters across broker restarts. I need the same effect, but tied to the consumer's connection.
setting auto_delete to true on the queue means that my client can never connect to this queue again.
x-message-ttl and max-length make it possible to lose message even when there is a consumer connected.
I've looked at topic exchanges, but as far as I can tell, these only affect the routing of messages between the exchange and the queue based on the message content, and can't take into account whether or not a queue has connected consumers.
The effect that I'm looking for would be something like auto_delete on disconnect, and auto_create on connect. Is there a mechanism in RabbitMQ that lets me do that?
After a bit more research, I discovered that one of the assumptions in my question regarding x-message-ttl was wrong. I overlooked a single sentence from the RabbitMQ documentation:
Setting the TTL to 0 causes messages to be expired upon reaching a queue unless they can be delivered to a consumer immediately
https://www.rabbitmq.com/ttl.html
It turns out that the simplest solution is to set x-message-ttl to 0 on my queue.
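For illustration, declaring such a queue from the Java client looks like this; in the setup described in the question, the same x-message-ttl argument would instead go into the server-side queue definition or a policy:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.HashMap;
import java.util.Map;

public class ZeroTtlQueue {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // x-message-ttl = 0: a message is dropped on arrival unless it can be
            // delivered to a connected consumer immediately.
            Map<String, Object> arguments = new HashMap<>();
            arguments.put("x-message-ttl", 0);

            channel.queueDeclare("drop.when.idle", true, false, false, arguments);
        }
    }
}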
You cannot do it directly, but there is a mechanism that is not difficult to implement.
You have to enable the Event Exchange plugin. This is an exchange to which your server app can connect in order to receive RabbitMQ's internal events. You would be interested in the consumer.created and consumer.deleted events.
When these events are received, you can trigger an action (create or delete the queue you need). More information here: https://www.rabbitmq.com/event-exchange.html
Hope this helps.
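A small sketch of that approach with the Java client, assuming the rabbitmq_event_exchange plugin is enabled; the reaction to each event is left as a comment:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class ConsumerEventListener {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // The plugin publishes internal events to the topic exchange "amq.rabbitmq.event";
        // consumer events use the routing keys "consumer.created" and "consumer.deleted".
        String queue = channel.queueDeclare().getQueue();   // exclusive, auto-delete queue
        channel.queueBind(queue, "amq.rabbitmq.event", "consumer.*");

        DeliverCallback onEvent = (consumerTag, delivery) -> {
            String event = delivery.getEnvelope().getRoutingKey();
            // React here: e.g. create the application queue on consumer.created,
            // delete it on consumer.deleted.
            System.out.println("event: " + event);
        };
        channel.basicConsume(queue, true, onEvent, consumerTag -> { });
    }
}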
If your consumer is allowed to dynamically bind/unbind a queue on the broker during start/stop, it should be possible that way (e.g. the queue is set up in advance, and the consumer binds it to the exchange it wants to receive messages from during startup, then unbinds it on shutdown).
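A sketch of that pattern with the Java client; the exchange, queue and routing key names are placeholders, and both the queue and exchange are assumed to be pre-declared on the broker:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class BindWhileRunning {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // While the binding exists, messages are routed into the queue; once it is
        // removed, publishes to the exchange become unroutable and are dropped.
        channel.queueBind("app.queue", "app.exchange", "app.key");       // on startup
        try {
            // ... consume from "app.queue" here ...
        } finally {
            channel.queueUnbind("app.queue", "app.exchange", "app.key"); // on shutdown
            connection.close();
        }
    }
}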
I'm using NServiceBus 4.x with RabbitMQ 3.2.x as my transport.
I made the assumption that by using RabbitMQ as my transport I would be given the competing consumer model as an option. I understand that NServiceBus employs the "fanout" exchange type for all exchanges and does not support round robin at this time. However, is there a way to configure NServiceBus to take advantage of the levels of indirection via exchanges and channels that RabbitMQ offers?
I have several consumers I would like to compete for messages from a given queue. What I am observing is subscribers blocking access to further message retrieval from the queue until the message is consumed. So having more than one consumer at this point does me no good other than redundancy.
After reading some documentation on RabbitMQ, I'm assuming that it's normal to block until the ack is sent from the subscriber. But I had assumed that subscriber #2 would have free access to the queue to fetch another message.
There is mention of increasing the prefetch count on the RabbitMQ channel.
Example:
channel.BasicQos(0, prefetchcount, false);   // prefetchSize = 0 (no limit), at most prefetchcount unacked messages, applied per consumer
I don't see anywhere that I can change this setting via configuration in NServiceBus. Furthermore, as I read about what prefetch does, I'm really not sure this is what I'm looking for.
Is it possible to use RabbitMQ without the distributor-type pattern used with MSMQ? Or should I move to MassTransit or Rebus?
Put prefetchcount=2 in your connection string. Any value above 1 tells the broker to allow that many unacked messages to be out at once. You need to fiddle with this setting to find the optimum for your scenario.
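The prefetchcount setting corresponds to RabbitMQ's basic.qos prefetch limit. For context, a sketch with the raw Java client of how a bounded prefetch lets consumers on the same queue share work (the queue name is a placeholder):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class CompetingConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // Limit this consumer to 2 unacknowledged messages at a time, so the broker
        // spreads further deliveries across the other consumers on the same queue.
        channel.basicQos(2);

        channel.basicConsume("my.endpoint.queue", false,                    // manual ack
            (consumerTag, delivery) -> {
                // ... handle the message ...
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            },
            consumerTag -> { });
    }
}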
I am using ActiveMQ 5.4 with KahaDB as message store.
While publishing messages (with persistence enabled) to a topic that has a durable subscriber, the persistence store keeps growing even though the messages are dispatched to the subscriber. This is causing an issue, as the message store is getting full and not accepting any more messages.
So my question is: why is the persistence store not discarding the messages in KahaDB, even though the messages are getting dispatched?
Regards,
Srinivas
What you are seeing is an interaction between the ActiveMQ message store behaviour and that for durable subscriptions on topics.
When you have durable subscriptions, a topic is treated like a queue for each subscriber's clientId (set on the Connection). The logic is that the client doesn't want to miss any messages while it is disconnected, so if it disconnects, the durable subscription hangs around and keeps the messages alive.
The AMQ message store uses data logs for its message journal. These are written sequentially, and individual messages are never removed from them (that would require random access). A second file keeps track of which messages have been consumed. Once all the messages in a data file have been consumed, that file is deleted.
So what you're seeing is that some of the messages in the data files are not being consumed by these durable subscriptions and just hang around. Inconsistently used clientIds for durable subscribers would cause this issue. It's likely that there is something wrong with the way the feature is being used; if you use JMX to inspect the subscriptions on the broker, that should help you track down the root cause.
As a general rule, whenever you think that you might want to use a durable subscription, use virtual topics instead - they are much easier to reason about, inspect and load balance. On the other hand if you just want to get the last couple of messages when you reconnect a topic subscriber rather than all the messages you may have missed, use retroactive consumers.
An easy way to get around this issue is to always use a time to live when you send a message - pretty much every use case has a time limit by which a message ought to be consumed anyway. ActiveMQ will expire messages beyond this point and free up the messages in the data files for deletion.
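A minimal JMS sketch of sending with a time to live; the broker URL, topic name and the 30-second value are arbitrary:

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ExpiringProducer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("SOME.TOPIC");

        MessageProducer producer = session.createProducer(topic);
        producer.setTimeToLive(30_000);   // milliseconds: messages expire after 30 seconds

        producer.send(session.createTextMessage("payload"));

        // Expired messages drop out of the durable subscriptions, which lets the broker
        // eventually delete the KahaDB data files that contain them.
        connection.close();
    }
}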