The following is my ActiveMQ setup:
I have two AMQ brokers which are configured with failover.
I have 40 producers but only one consumer.
Now the problem:
From time to time, one of the producers loses its connection to the master broker. The failover kicks in and the producer gets a new connection to the slave, which then receives its messages. So far so good. But the consumer does not have this problem; it still consumes messages from the master. It does not know that the slave also holds some messages.
How can I solve the problem of losing the messages that were sent to the slave?
Thanks in advance
I would recommend you configure a network of brokers. That way, your brokers will be connected as well, and it no longer matters which broker your producers and consumers connect to - the messages will get propagated across the network.
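For example, each broker can be given a network connector pointing at the other; here is a minimal sketch using the embedded BrokerService API in Java (the broker name and the brokerB-host address are placeholders; the same thing is normally declared with a networkConnector element in activemq.xml):

```java
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.network.NetworkConnector;

public class NetworkedBrokerA {
    public static void main(String[] args) throws Exception {
        BrokerService brokerA = new BrokerService();
        brokerA.setBrokerName("brokerA");               // placeholder broker name
        brokerA.addConnector("tcp://0.0.0.0:61616");    // transport for producers/consumers

        // Link this broker to broker B so messages are forwarded to wherever the consumers are.
        NetworkConnector nc = brokerA.addNetworkConnector("static:(tcp://brokerB-host:61616)");
        nc.setDuplex(true);                             // forward messages in both directions

        brokerA.start();
        brokerA.waitUntilStopped();
    }
}
```

With such a link in place, messages a producer sends to one broker can be forwarded to the broker where the consumer happens to be connected.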
I have a RabbitMQ cluster with two nodes configured to be synchronized. Each queue is mirrored and persistent.
Each time I need to reboot a node of my cluster, some old messages are replayed.
I don’t understand why because one of the two nodes is still alive and they are "normally" synchronized.
Do you have any ideas to help me investigate this problem?
Could you check whether you have some messages that are not acknowledged?
If you do (which would mean a consumer never acknowledges them), it could explain the behavior:
Message consumed but never acknowledged
Node reboots
The connections of the consumers connected to that node get closed
Any of the unacked messages that were consumed on the affected channel are put back in the queue
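To rule this out, a consumer that uses manual acknowledgements and only acks after processing could look roughly like this; a sketch with the RabbitMQ Java client, where the host name, queue name, and process() method are placeholders, not taken from your setup:

```java
import com.rabbitmq.client.*;
import java.io.IOException;

public class AckAfterProcessing {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("rabbit-node-1");                 // placeholder host
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        boolean autoAck = false;                          // manual acknowledgements
        channel.basicConsume("work.queue", autoAck, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body)
                    throws IOException {
                try {
                    process(body);                        // your processing logic
                    channel.basicAck(envelope.getDeliveryTag(), false);
                } catch (Exception e) {
                    // If a message is never acked (or the connection drops), it is requeued,
                    // which is exactly the replay you see after a node reboot.
                    channel.basicNack(envelope.getDeliveryTag(), false, true);
                }
            }
        });
    }

    static void process(byte[] body) { /* ... */ }
}
```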
I have a 3-node RabbitMQ cluster behind an HAProxy load balancer. When I shut down a node, RabbitMQ successfully switches the queue to the other nodes. However, I notice that Logstash stops pulling messages from the queue unless I restart it. Is this a problem with the way RabbitMQ operates, i.e. does it deactivate all active consumers? I am not sure if Logstash has any retry capability. Has anyone run into this issue?
Quoting the RabbitMQ documentation, first the page on clustering:
What is Replicated? All data/state required for the operation of a
RabbitMQ broker is replicated across all nodes. An exception to this
are message queues, which by default reside on one node, though they
are visible and reachable from all nodes.
and then the page on high availability:
Clients that are consuming from a mirrored queue may wish to know that
the queue from which they have been consuming has failed over. When a
mirrored queue fails over, knowledge of which messages have been sent
to which consumer is lost, and therefore all unacknowledged messages
are redelivered with the redelivered flag set. Consumers may wish to
know this is going to happen.
If so, they can consume with the argument x-cancel-on-ha-failover set
to true. Their consuming will then be cancelled on failover and a
consumer cancellation notification sent. It is then the consumer's
responsibility to reissue basic.consume to start consuming again.
So, what does all this mean:
You have to mirror queues
The consumers should use manual ACK
The consumers should reconnect on their own
So the answer to your question is no, it's not a problem with RabbitMQ; that's simply how it works. It's up to clients to reconnect.
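As a rough illustration of those three points with the RabbitMQ Java client (the host name, queue name, and the simple re-subscribe in handleCancel are assumptions for the sketch, not something Logstash itself does):

```java
import com.rabbitmq.client.*;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class FailoverAwareConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("haproxy-host");   // placeholder: the load balancer in front of the cluster

        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        startConsuming(channel);
    }

    static void startConsuming(Channel channel) throws IOException {
        // Ask the broker to cancel this consumer when the mirrored queue fails over.
        Map<String, Object> arguments = new HashMap<>();
        arguments.put("x-cancel-on-ha-failover", true);

        channel.basicConsume("logs.queue", false /* manual ack */, arguments,
                new DefaultConsumer(channel) {
                    @Override
                    public void handleDelivery(String consumerTag, Envelope envelope,
                                               AMQP.BasicProperties properties, byte[] body)
                            throws IOException {
                        // process body ...
                        channel.basicAck(envelope.getDeliveryTag(), false);
                    }

                    @Override
                    public void handleCancel(String consumerTag) throws IOException {
                        // Failover happened: it is the consumer's job to start consuming again.
                        startConsuming(channel);
                    }
                });
    }
}
```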
I want to understand ZooKeeper's role in replicated LevelDB for the ActiveMQ broker.
About the ZooKeeper election: how does ZooKeeper know which of all the clients connected to it are ActiveMQ brokers competing to become master? Is there a particular key or configuration passed by the brokers connecting to ZooKeeper which says that these (let's say 3) ActiveMQ brokers belong to the same environment and are competing to become master?
At what interval does the slave broker copy data from the master broker? Are there any corner cases where data might be lost?
Does ActiveMQ guarantee message ordering with replicated LevelDB? I am talking about the case where re-election of the master happens while a producer is sending messages in sequence to the broker.
Thanks,
Anuj
By the zkPath in the ZooKeeper configuration and by the broker name.
Each message is synced to a quorum (nodes/2 + 1) of brokers before the transaction completes, so there is no sync interval; it is synced in real time. With 3 brokers, for example, a quorum is 3/2 + 1 = 2 nodes. The cluster will not function unless you have a quorum of brokers online, so there should be no data loss.
The messages are synced to a majority of the nodes synchronously. At re-election, a node with the latest updates will be elected, so ordered messages should be no problem. However, it's generally problematic to rely critically on message order in a message queue. As a rule of thumb, message order will only hold on "happy days": dead letters, multiple consumers and so forth may well mess up message order.
I use ActiveMQ as a job dispatcher, which means one master sends job messages to ActiveMQ, and multiple slaves grab job messages from ActiveMQ and process them. When slaves finish a job, they send a message with the job_id back to ActiveMQ.
However, slaves are unreliable. If a slave doesn't respond within a certain period of time, we can assume the slave is down and should try to redeliver the job message it was sent.
Are there any good ways to implement this redelivery?
Typically a consumer handles redelivery so that it can maintain message order while a message appears as inflight on the broker. This means that redelivery is limited to a single consumer unless that consumer terminates. In this way the broker is unaware of redelivery.
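On the consumer side this is driven by the RedeliveryPolicy on the connection factory; a minimal sketch, with the broker URL and delay values chosen arbitrarily:

```java
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

public class ConsumerRedeliveryExample {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("failover:(tcp://localhost:61616)"); // placeholder URL

        RedeliveryPolicy policy = factory.getRedeliveryPolicy();
        policy.setInitialRedeliveryDelay(1000);   // wait 1s before the first redelivery
        policy.setRedeliveryDelay(5000);          // then 5s between attempts
        policy.setMaximumRedeliveries(6);         // after that the message goes to the DLQ

        Connection connection = factory.createConnection();
        connection.start();
        // Create a session with CLIENT_ACKNOWLEDGE or a transacted session,
        // so an unacknowledged or rolled-back job message is redelivered.
    }
}
```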
In ActiveMQ 5.7+ you have the option of using broker-side redelivery: the broker can redeliver a message after a delay using a resend. This is implemented by a broker plugin that handles dead-letter processing by redelivering via the scheduler. It is useful when total message order is not important but throughput and load distribution among consumers are. With broker redelivery, messages that fail delivery to a given consumer can be immediately re-dispatched.
See the ActiveMQ documentation for an example of setting this up in the configuration file.
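The same plugin can also be wired up programmatically on an embedded broker; a rough sketch, where the delays, retry counts, and connector address are arbitrary assumptions:

```java
import org.apache.activemq.RedeliveryPolicy;
import org.apache.activemq.broker.BrokerPlugin;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.RedeliveryPolicyMap;
import org.apache.activemq.broker.util.RedeliveryPlugin;

public class BrokerRedeliveryExample {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setSchedulerSupport(true);  // the plugin resends via the broker scheduler

        RedeliveryPolicy defaults = new RedeliveryPolicy();
        defaults.setInitialRedeliveryDelay(30_000);  // resend 30s after delivery fails
        defaults.setMaximumRedeliveries(4);

        RedeliveryPolicyMap policyMap = new RedeliveryPolicyMap();
        policyMap.setDefaultEntry(defaults);

        RedeliveryPlugin redelivery = new RedeliveryPlugin();
        redelivery.setRedeliveryPolicyMap(policyMap);
        redelivery.setFallbackToDeadLetter(true);           // destinations without a policy still go to the DLQ
        redelivery.setSendToDlqIfMaxRetriesExceeded(true);  // give up after the retry budget

        broker.setPlugins(new BrokerPlugin[]{redelivery});
        broker.addConnector("tcp://0.0.0.0:61616");
        broker.start();
        broker.waitUntilStopped();
    }
}
```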
I have a little problem here with my sample JMS layout.
I have two brokers (A, B) on two machines, which are linked via a network connector. The idea is that the producer can send to either broker and the consumer can listen to either broker, and the topic to send to/receive from is available globally.
The topic has two durable subscriber clients (one on each machine) that will both process all the messages in the topic. I want it to be a durable subscription so that the processes won't lose any workload if a process has to be restarted. Both subscriber clients are configured with a failover broker URL, so that they first try to connect to their localhost broker and, if it is not available, to the other. Failover of the clients seems to work, but I found a problem in the following situation:
Each broker 'A' and 'B' has a subscriber client connected
The producer is sending to 'A'
Broker 'B' gets restarted
The client of 'B' registers the connection loss and switches to 'A'
'B' comes up again, and because it had registered itself as a durable subscriber to 'A', it gets the message feed
'B' now has no active durable subscriber ('A' now has three, including 'B') and messages pile up until it reaches its connection limits
Is my configuration wrong? Is it possible what I've intended?
Cheers,
Kai
Are you running master-slave configuration?
Why do you want both brokers to have connected clients at the same time?
If you use a failover connection string (identifying both brokers in it), your consumers/producers will use ActiveMQ's failover implementation and will connect/reconnect to the active node when needed. I don't think having two active instances with active clients is a good idea - unless you are trying to duplicate your processes (in which case there will be no synchronization).
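For reference, a client-side failover connection string combined with a durable topic subscription might look like this; a sketch where the host names, client ID, and topic name are placeholders, and priorityBackup=true makes the client prefer the first (local) broker whenever it is available:

```java
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DurableSubscriberExample {
    public static void main(String[] args) throws Exception {
        // Try the local broker first, fall back to the remote one if it is down.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://localhost:61616,tcp://other-host:61616)?randomize=false&priorityBackup=true");

        Connection connection = factory.createConnection();
        connection.setClientID("subscriber-A");   // a stable client ID is required for durable subscriptions
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);

        Topic topic = session.createTopic("global.work.topic");
        MessageConsumer consumer = session.createDurableSubscriber(topic, "subscriber-A-sub");
        connection.start();
        // consumer.receive() / consumer.setMessageListener(...) as usual
    }
}
```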
To make both nodes (master and slave) always have the same durable data, you need to persist your messages to the same place, accessible to both nodes. It can be a JDBC adapter connected to a single database instance (possibly behind a cluster), or it can be a NAS with a shared network folder for KahaDB.
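As an illustration of the JDBC option, both brokers would point their persistence adapter at the same database; a rough sketch with a placeholder PostgreSQL URL and credentials (in practice this usually goes in activemq.xml):

```java
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.jdbc.JDBCPersistenceAdapter;
import org.apache.commons.dbcp2.BasicDataSource;

public class SharedJdbcBroker {
    public static void main(String[] args) throws Exception {
        // Master and slave use the same database, so whichever broker
        // holds the lock serves the same durable messages.
        BasicDataSource dataSource = new BasicDataSource();
        dataSource.setUrl("jdbc:postgresql://db-host:5432/activemq"); // placeholder URL
        dataSource.setUsername("amq");                                // placeholder credentials
        dataSource.setPassword("secret");

        JDBCPersistenceAdapter jdbc = new JDBCPersistenceAdapter();
        jdbc.setDataSource(dataSource);

        BrokerService broker = new BrokerService();
        broker.setPersistenceAdapter(jdbc);
        broker.addConnector("tcp://0.0.0.0:61616");
        broker.start();
        broker.waitUntilStopped();
    }
}
```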