Strange connections on RabbitMQ dashboard

Where did the connections marked in red come from? My publisher and my consumers did not create them.
Thanks

Related

How to handle stuck RabbitMQ Dynamic Shovel messages

We are currently using RabbitMQ Dynamic Shovels to forward messages to Azure Event Hub. Recently we set up a new queue to be forwarded to Event Hub. Some messages in this queue are over 1 MB, which is the per-message limit on Event Hub. Because of this limit, the messages bounce back and are resent several times each second. This creates a lot of network traffic, which can be an issue.
Is there any way to send messages that bounce back to a DLX (dead-letter exchange) or to a different queue? We have looked through the Dynamic Shovel options but could not find any that would be of use.
Thank you, Jesse Squire. Posting your suggestion as an answer to help other community members.
Generally, for cases when your payload is (or may be) larger than the allowable size, we recommend considering the claim check pattern where you store your payload in some other durable store (such as Blob storage) and then publish the event with a body that points to that resource.
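For illustration, here is a minimal sketch of the claim check pattern using the RabbitMQ Java client. The BlobStore interface, key scheme, and queue name are assumptions standing in for a real durable store such as Azure Blob Storage:

```java
import com.rabbitmq.client.Channel;
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class ClaimCheckPublisher {

    // Hypothetical stand-in for a durable store such as Azure Blob Storage.
    interface BlobStore {
        String put(String key, byte[] payload); // returns a URL the consumer can fetch
    }

    // Store the oversized payload out of band and publish only a small
    // pointer message, keeping it well under Event Hub's 1 MB limit.
    static void publishLargePayload(Channel channel, BlobStore store, byte[] payload)
            throws Exception {
        String url = store.put(UUID.randomUUID().toString(), payload);
        channel.basicPublish("", "queue-shovelled-to-eventhub", null,
                url.getBytes(StandardCharsets.UTF_8));
    }
}
```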
You can refer to Dead-lettering dead-lettered messages in RabbitMQ.
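For the dead-lettering route, a DLX is attached to a queue via its x-dead-letter-exchange argument at declaration time. Note this only helps if the failed forwards surface as AMQP rejections. A minimal sketch with the RabbitMQ Java client; all exchange and queue names here are made up:

```java
import com.rabbitmq.client.Channel;
import java.util.HashMap;
import java.util.Map;

public class DlxSetup {
    static void declare(Channel channel) throws Exception {
        // Exchange and queue that will receive the dead-lettered messages.
        channel.exchangeDeclare("dlx.example", "fanout", true);
        channel.queueDeclare("dead-letters", true, false, false, null);
        channel.queueBind("dead-letters", "dlx.example", "");

        // Messages rejected with requeue=false (or expired) on this queue
        // are re-published to the DLX instead of being dropped.
        Map<String, Object> args = new HashMap<>();
        args.put("x-dead-letter-exchange", "dlx.example");
        channel.queueDeclare("source-queue", true, false, false, args);
    }
}
```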
You can also open an issue on GitHub: rabbitmq-server

RabbitMQ guarantee delivery to mirrored queue

Assume I have a mirrored queue deployed over multiple nodes (e.g. 1 master + 1 mirror). I can define the number of mirrors I want, but is it possible to only accept a producer's message once the message is stored on at least 2 queues (master + mirror)? Otherwise it is still possible to lose a message if the master node fails before the message is mirrored.
So the mirroring activity should be part of the transaction.
You should use Publisher Confirms. When this is enabled and your publisher has received confirmation, you can be certain that your message has been replicated to all queue mirrors.
Searching Google for site:rabbitmq.com high availability returns this document, which mentions Publisher Confirms.
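As a minimal sketch with the RabbitMQ Java client (host, queue name, and timeout are assumptions): enable confirms on the channel, publish, and block until the broker confirms. For a mirrored queue, the broker only confirms once all mirrors have accepted the message.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class ConfirmedPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: broker on localhost

        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {

            channel.confirmSelect(); // put the channel into confirm mode
            channel.queueDeclare("ha.example", true, false, false, null);
            channel.basicPublish("", "ha.example",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    "hello".getBytes("UTF-8"));

            // Blocks until the broker confirms the publish (or throws on
            // nack/timeout); only then is the message safely replicated.
            channel.waitForConfirmsOrDie(5_000);
        }
    }
}
```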
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.

What happens when a shovel is deleted

We have a queue which takes incoming data and pushes it into another queue on another machine through a shovel.
For some reason, we found that the source queue had backed up with around 2M messages. We couldn't figure out the cause, as the destination queue and its consumer seemed to be working fine.
We also realized that the shovel was set up with the default prefetch count of 1000.
We are not able to modify the shovel to set a higher prefetch count; the only option is to delete the shovel and set up a new one with a higher prefetch count.
What will happen if we delete the shovel?
Will it delete the messages in the queues?
Thanks
Based on your latest comment, if I understand correctly, you have:
RabbitMQ1 - exchange RMQ1EXCA -> queueA1
RabbitMQ2 - exchange RMQ2EXCB -> queueB1
A shovel has been configured from exchange [RMQ1EXCA] to exchange [RMQ2EXCB]
And you found out that queueA1 is filled up with millions of messages.
If this is indeed an accurate depiction of your setup:
it's quite normal, as queueA1 is not part of the shovel process
if you check the queues bound to RMQ1EXCA, you should see one queue with a name starting with amq.gen-......
deleting the shovel will not impact queueA1, as it's not related to the process (but it will delete the amq.gen-...... queue, which is)
If this description doesn't match your setup, please provide additional information to clarify your situation so that I can adapt my answer accordingly.
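To illustrate the delete-and-recreate step: with dynamic shovels this is two rabbitmqctl calls. The URIs, names, and prefetch value below are placeholders, and the src-prefetch-count key assumes a reasonably recent RabbitMQ release (older versions used prefetch-count):

```
# Delete the existing dynamic shovel (queueA1 itself is untouched).
rabbitmqctl clear_parameter shovel my-shovel

# Recreate it with a higher prefetch count.
rabbitmqctl set_parameter shovel my-shovel \
  '{"src-uri": "amqp://rabbitmq1", "src-exchange": "RMQ1EXCA",
    "dest-uri": "amqp://rabbitmq2", "dest-exchange": "RMQ2EXCB",
    "src-prefetch-count": 5000}'
```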

Non-persistent JMS messages get lost for non-durable subscriber

I started with ActiveMQ just one day ago, so my knowledge of it is limited.
My goal is to check ActiveMQ's stability and throughput for JMS messages in different scenarios.
So, the following is one scenario.
1. I am publishing 1 million non-persistent messages synchronously to a topic and subscribing to it synchronously in a non-durable manner. One publisher and one subscriber.
2. The broker, publisher, and subscriber are up during the test.
Unfortunately, in nearly every run (only once in 14 tries did I get all the messages), I am not receiving all 1 million messages at the subscriber end; roughly 5,500 messages are lost.
I ran the same test against TIBCO EMS and IBM MQ and did not see this issue.
So, for ActiveMQ, if I need all messages to be received, is it necessary to always use persistent messages and a durable subscriber?
Don't think from the angle of guaranteed messaging or fail-over scenarios.
Any suggestion is welcome.
Thanks,
Smith
Not sure about your exact scenario, but when the producer is faster than the consumer, ActiveMQ will limit the memory used for buffering messages by dropping old messages above a certain limit.
This is configurable.
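To make the "persistent message and durable subscriber" option concrete, here is a minimal sketch using the JMS API with the ActiveMQ client; the broker URL, client ID, topic, and subscription name are made up. With this combination the broker stores messages for the subscription instead of dropping them under memory pressure:

```java
import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DurablePersistentExample {
    public static void main(String[] args) throws Exception {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.setClientID("bench-client"); // required for durable subscriptions
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("bench.topic");

        // Durable subscription: the broker retains messages for it even if
        // the subscriber disconnects or falls behind.
        MessageConsumer consumer = session.createDurableSubscriber(topic, "bench-sub");

        MessageProducer producer = session.createProducer(topic);
        producer.setDeliveryMode(DeliveryMode.PERSISTENT); // messages go to the store

        producer.send(session.createTextMessage("hello"));

        TextMessage received = (TextMessage) consumer.receive(5000);
        System.out.println(received == null ? "no message" : received.getText());
        connection.close();
    }
}
```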

ActiveMQ, Network of brokers, offline durable subscriber dedupe

Scenario: two ActiveMQ nodes, A and B. No master/slave, but peers, with network connectors between them.
A durable topic subscriber is registered with both (as it uses failover and at one point connects to A and at another connects to B).
Issue: while the subscriber is online against A, a copy of each message is also placed in the offline subscription on B.
Question: Is this by design? Can this be configured so that a message is deduped and only sent to the subscriber in one of the subscriptions?
Apparently by design: http://activemq.apache.org/how-do-distributed-queues-work.html
See "Distributed Topics in Store/Forward" where it says:
For topics the above algorithm is followed except, every interested client receives a copy of the message - plus ActiveMQ will check for loops (to avoid a message flowing infinitely around a ring of brokers).