I use ActiveMQ Client 5.10.0 with the failover protocol to connect to an ActiveMQ node. What I see now is that the failover reconnect process runs indefinitely when the ActiveMQ node named in the failover URI goes down, and another thread, which is intended to stop the reconnect process, just waits.
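For context, here is a simplified sketch of the kind of setup I mean; the broker URL and the stop logic are placeholders rather than my exact code:

import javax.jms.Connection;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverStopExample {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; by default the failover transport keeps retrying indefinitely.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("failover:(tcp://localhost:61616)");
        final Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // ... consumers/producers created from the session ...

        // A separate thread that is meant to stop everything; in my case a call like
        // this appears to block while the failover transport keeps trying to reconnect.
        Thread stopper = new Thread(new Runnable() {
            public void run() {
                try {
                    connection.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        stopper.start();
    }
}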
The thread dump for the stop thread mentioned above is as follows:
I cannot post the whole thread dump here; please get it from the link.
Could anyone help me? I have been stuck on this problem for a long time!
Thanks very much!
By the way, I cannot use any newer version of the ActiveMQ client, because the target environment runs JDK 1.6.
My custom Spring Cloud Stream sink, built with the log sink stream app dependency, loses RabbitMQ connectivity during a RabbitMQ outage, tries to reconnect 5 times, and then stops its consumer. I have to restart the app manually to make it connect successfully once RabbitMQ is back up. Looking at the default properties of the RabbitMQ binding here, I see a retry interval property but no property for infinite retry (which I assumed would be the default behaviour). Can someone please let me know what I might be missing to make it try to connect indefinitely?
Error seen during the outage that triggers the consumer retry:
2017-08-08T10:52:07.586-04:00 [APP/PROC/WEB/0] [OUT] Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - home node 'rabbit#229ec9f90e07c75d56a0aa84dc28f602' of durable queue 'datastream.dataingestor.datastream' in vhost '8880756f-8a21-4dc8-9b97-95e5a3248f58' is down or inaccessible, class-id=50, method-id=10)
It appears you have a RabbitMQ cluster and the queue in question is hosted on a down node.
If the queue was HA, you wouldn't have this problem.
The listener container does not (currently) handle that condition. It will retry for ever if it loses connection to RabbitMQ itself.
Please open a JIRA Issue and we'll take a look. The container should treat that error the same as a connection problem.
I've configured 3 ActiveMQ instances along with ZooKeeper (using the replicated LevelDB store).
To verify whether it works correctly with ZooKeeper, I have one producer and 8 consumers. The producer has produced around 1000 messages, and then I shut the producer down.
The consumer threads pick messages from the queue and keep processing. I am using the consumer.receive() method in the code; it blocks for some time (an indefinite time) and then processing resumes. I can see that all the threads are waiting for messages to be picked from the queue. I am not sure why this happens even though there are enough messages on the queue to be consumed.
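For reference, the consumer side looks roughly like the sketch below; the broker addresses, queue name, and timeout are placeholders, not my exact code:

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ConsumerLoop {
    public static void main(String[] args) throws Exception {
        // Placeholder failover URL listing the three brokers; ZooKeeper elects the master.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("TEST.QUEUE"));

        while (true) {
            // receive() with no argument blocks until a message arrives;
            // receive(timeout) returns null after the timeout instead of blocking forever.
            Message message = consumer.receive(5000);
            if (message == null) {
                continue; // no message was dispatched to this consumer within 5 seconds
            }
            // process(message) ...
        }
    }
}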
Can anyone please help?
Regards,
JE
Note:
ActiveMQ version: 5.10
ZooKeeper version: 3.4.6
I have deployed two applications in my Mule standalone runtime; one of them requires ActiveMQ to be up and running because I have applied the reconnect-forever policy to its connection.
However, if I start mule.bat without first starting the ActiveMQ broker, it does not even deploy the other application, which does not depend on ActiveMQ.
What can be done to solve this issue, so that only the ActiveMQ-dependent application waits for the connection while the other application starts working?
Thank You.
Have you set blocking="false" in the reconnect?
I have 2 servers, A and B, each running a GlassFish 3.1.2.2 application server. Both use a JMS queue for communication, which has worked fine so far. If the network connection breaks for any reason, I can see in the logs of server B (the one configured to connect to the remote queue of A) that it tries to reconnect, and it always succeeds as soon as A is up again.
The problem is that if I try to restart the GlassFish instance on B while server A is unreachable, the startup process fails after some retries and remains stuck in a kind of undefined/unusable state: the Java process is started and some ports are open, but the applications are not started, not even the administration console.
IMHO the GlassFish startup process should not wait for the queues to connect; this should be done in some kind of background process.
Has anyone of you experienced something similar? Is there anything I can configure/tune to fix this behaviour?
Never mind, it seems to have fixed itself :(
After restarting the computer, removing the deployed EAR and deploying it again, it just worked. I haven't experienced this behaviour since then.
I'm a fairly new user of ActiveMQ and I'm looking for a way to get detailed debug information on the client side of a queue connection. My problem is this: I have a server that is sending a message through a queue to a client. Using the admin web page associated with the broker, I can verify the following: the queue was created, there is a consumer associated with the queue, the message has been enqueued, the message has been dispatched, the dispatched queue size is 1, and the message has not been dequeued. This setup was working yesterday but mysteriously stopped working today, even though I restarted the activemq service. The log file at /var/log/activemq.log does not contain any useful information.
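For reference, the receiving side of the client looks roughly like the sketch below; the broker URL and queue name are placeholders, not my actual code:

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DebugClient {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start(); // delivery does not begin until start() is called
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer =
                session.createConsumer(session.createQueue("SERVER.TO.CLIENT"));
        consumer.setMessageListener(new MessageListener() {
            public void onMessage(Message message) {
                // Log every delivery; with AUTO_ACKNOWLEDGE the message is acknowledged
                // (and counted as dequeued on the broker) after onMessage returns normally.
                System.out.println("Received: " + message);
            }
        });
        Thread.sleep(60000); // keep the client alive long enough to observe deliveries
    }
}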
At this point I'm stumped; I'm assuming that there is some sort of problem with the configuration, but it hasn't changed since yesterday. Does anybody have a suggestion about what my next step should be?
First of all, turn on debug (or even trace) logging in the broker, in conf/log4j.properties:
log4j.logger.org.apache.activemq=DEBUG
Restart the broker and re-run your scenario. The logging will hopefully provide you with some information.
JConsole is also a useful tool for monitoring the running broker.
Does your client use any message filters?
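If it does, remember that a JMS message selector limits which messages a consumer is given: only messages whose header or property values match the selector expression are delivered to it. As a purely illustrative fragment (the property name and queue are made up), such a consumer would be created like this in a setup similar to the sketch above:

// Only messages carrying a string property eventType with the value 'ORDER'
// will reach this consumer.
MessageConsumer consumer = session.createConsumer(
        session.createQueue("SERVER.TO.CLIENT"),
        "eventType = 'ORDER'");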
You can also enable remote debugging and then connect with an IDE.
To start remote debugging, execute
$ ACTIVEMQ_DEBUG=true bin/activemq
and then start a remote debugger and connect it to port 5005.