How to debug an ActiveMQ client?

I'm a fairly new user of ActiveMQ and I'm looking for a way to get detailed debug information on the client side of a queue connection. My problem is this: I have a server that is sending a message through a queue to a client. Using the broker's admin web page, I can verify the following: the queue was created, there is a consumer associated with the queue, the message has been enqueued, the message has been dispatched, the dispatched queue size is 1, and the message has not been dequeued. This setup was working yesterday but mysteriously stopped working today, even after I restarted the activemq service. The log file at /var/log/activemq.log does not contain any useful information.
At this point I'm stumped; I'm assuming that there is some sort of problem with the configuration, but it hasn't changed since yesterday. Does anybody have a suggestion about what my next step should be?

First of all, turn on debug (or even trace) logging in the broker, in conf/log4j.properties:
log4j.logger.org.apache.activemq=DEBUG
Restart the broker and re-run your scenario. The logging will hopefully provide you with some information.
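The broker-side config only shows what the broker is doing; if the client application also logs through log4j, the same category can be raised in the client's own log4j.properties to see what the consumer is doing. A minimal sketch, assuming log4j 1.x on the client classpath (the appender settings are just an example):
# sketch of a client-side log4j.properties: console appender plus ActiveMQ categories
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %-5p %c{1} - %m%n
# raise the ActiveMQ client classes to DEBUG (or TRACE for even more detail)
log4j.logger.org.apache.activemq=DEBUG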
JConsole is also a useful tool for monitoring the running broker.
Does your client use any message filters?
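Independently of filters, the client itself can log every command it sends and receives by adding trace=true to the broker URI. A minimal sketch of a consumer with transport tracing enabled; the broker URL and queue name are assumptions, and it relies on the client's log4j config having org.apache.activemq.transport at DEBUG/TRACE:
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TracingConsumer {
    public static void main(String[] args) throws Exception {
        // trace=true wraps the TCP transport in a logger that records each command
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616?trace=true");
        Connection connection = factory.createConnection();
        connection.start();  // easy to forget: messages are only delivered on a started connection
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("TEST.QUEUE");   // hypothetical queue name
        MessageConsumer consumer = session.createConsumer(queue);
        System.out.println("Received: " + consumer.receive(5000));
        connection.close();
    }
}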

You can also enable remote debugging and then connect with an IDE.
To start the broker with remote debugging enabled, execute
$ ACTIVEMQ_DEBUG=true bin/activemq
and then attach a remote debugger to port 5005.
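The same can be done for the client JVM with the standard JDWP agent flag, if you also want to step through the consumer code in an IDE. A sketch; the jar name is hypothetical and 5006 just avoids clashing with the broker's 5005:
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5006 -jar my-consumer.jar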

Related

RabbitMQ durable queue losing messages over STOMP

I have a webpage connecting to a RabbitMQ broker using JavaScript/WebSockets exposed by a Spring app deployed in Tomcat. Messages are produced at a rate of one per second by an external application and are rendered on the webpage. The JavaScript subscription is durable.
The issue I'm experiencing is that when the network connection is broken on the JavaScript client for a period of time (say 60 seconds), the first ~24 seconds of messages are missing. I've looked through the logs of the app deployed in Tomcat, and the missing messages seem to correspond to the period up until the following log statement:
org.springframework.messaging.simp.stomp.StompBrokerRelayMessageHandler - DEBUG - TCP connection to broker closed in session 14
I think this is the point at which the endpoint realises the JavaScript client is disconnected and decides to close the connection to the broker, resulting in future messages queueing up.
My question is how can I ensure that the messages between the time the network is severed and the time the endpoint realises the client is disconnected are not lost? Should the endpoint put the messages back on the queue somehow? Maybe there's a way to make it transactional?
Thanks in advance.
The RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
Your Tomcat application should not acknowledge messages from RabbitMQ until it confirms that your Javascript client has received them. This way, any messages that aren't ack-ed by the JS client won't be ack-ed by Tomcat, and RabbitMQ will re-deliver them.
I don't know how your JS app and Tomcat interact, but you may have to implement your own ack process there.
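A minimal sketch of that manual-ack pattern with the plain RabbitMQ Java client (not the Spring STOMP relay the question actually uses); the queue name "updates" and the forwardToBrowser hand-off are hypothetical:
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ManualAckConsumer {
    // hypothetical hand-off to the websocket session; throws if the browser is unreachable
    static void forwardToBrowser(byte[] body) throws Exception {
        // ... push the payload down the websocket ...
    }

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();   // assumes a local broker
        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();

        boolean autoAck = false;   // ack manually, only after the client has the message
        channel.basicConsume("updates", autoAck,
            (consumerTag, delivery) -> {
                long tag = delivery.getEnvelope().getDeliveryTag();
                try {
                    forwardToBrowser(delivery.getBody());
                    channel.basicAck(tag, false);        // safe: the browser confirmed receipt
                } catch (Exception e) {
                    channel.basicNack(tag, false, true); // requeue so RabbitMQ re-delivers it later
                }
            },
            consumerTag -> { /* consumer cancelled by the broker */ });
    }
}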

NServiceBus - What happens to a message if the server is offline

I went through the NServiceBus documentation, including the durable messaging section. What I understand is that when the server is offline, messages continue to go into the server's input queue and get picked up when the server comes back online.
But what if the server is completely down and the input queue is not accessible?
I'm using Bus.Send from the client.
It depends on what transport you're using.
In the case of a brokered message queue, like Azure Service Bus, as long as that service is available, the fact that the machine which will eventually retrieve the messages is offline is irrelevant, as that machine is simply asking the external queuing service for messages. The same goes for a transport like SQL Server.
In the case of a transport like MSMQ, which is a store-and-forward style queue, the messages will remain in a local outgoing queue until the remote machine becomes available.
Can you double check that you are looking in the correct spot? If you aren't getting an error out of NServiceBus when you Send, then MSMQ is installed. If it can't be reached or the service is stopped, you should get errors.
The outgoing queues are in a different place, as illustrated here:
http://blogs.msdn.com/cfs-filesystemfile.ashx/__key/communityserver-components-postattachments/00-09-06-31-16/outgoingempty.JPG
As RMD indicated, this is an advantage of the store-and-forward MSMQ transport: the local outgoing queue should just stack these up until the remote server is available.
Thx.
Joe

RabbitMQ dropping messages after the first one

I'm using celery 3.0.18 with RabbitMQ 3.0.2. I have a task sent to another application using celery.send_task. I can see the send_task call in my logs, I can see the packets leaving the worker instance, and I can see the packets reaching the RabbitMQ instance when I run tcpflow -ce -i any port 5672; however, only the first message gets to the queue. They all have the same routing key. I tried recreating the exchange and bindings, and even a new RabbitMQ instance, and nothing seems to work.
This used to work fine for months, until we had to rebuild RabbitMQ from scratch after a crash in our AWS infrastructure. Strangely, I have the exact same setup working in another application, using the same broker and the same exchange, binding and queue, and it works perfectly there. It also works when I send the messages to the same exchange using the same call from a management script, running from the shell on the same instance, but it doesn't work when they are sent from the celery task in the worker process.
Any ideas on what the problem might be?
Eventually, I figured out what was wrong, but it's not clear whether this is the expected behavior, a celery bug, or a RabbitMQ bug.
What happens is that besides our application tasks, I have a custom logging handler used to send logs to a central location over RabbitMQ, also using celery.send_task. This logging handler sends messages to an exchange named application.logger, with routing keys like application.logger.info, application.logger.warning, etc., and has bindings to route some logging levels to specific queues. This exchange, its bindings and its queues were created directly in RabbitMQ and not defined in Celery routes.
When the worker tried to send a message to this exchange and it didn't exist, Celery logged a 404 NOT_FOUND error. After that, tasks sent to other exchanges using the same connection weren't delivered. They were sent by the worker instance, we could see the packets arriving, and the RabbitMQ management screen for that connection even showed data arriving from the client in kB/s, but no messages were delivered.
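The underlying AMQP behaviour can be reproduced outside Celery. A minimal sketch with the RabbitMQ Java client: a publish to a missing exchange triggers a channel-level 404, the broker closes that channel, and later publishes on the same channel fail instead of being delivered. The application.logger exchange name comes from the question; the "tasks" exchange and routing key are hypothetical:
import com.rabbitmq.client.AlreadyClosedException;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class MissingExchangeDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();   // assumes a local broker
        try (Connection conn = factory.newConnection()) {
            Channel channel = conn.createChannel();
            // Publishing to an exchange that does not exist raises a 404 NOT_FOUND
            // channel error; the broker then closes the channel asynchronously.
            channel.basicPublish("application.logger", "application.logger.info",
                    null, "log line".getBytes("UTF-8"));
            Thread.sleep(500);  // give the channel.close a moment to arrive
            try {
                // hypothetical existing exchange; this publish goes to a dead channel
                channel.basicPublish("tasks", "task.routing.key",
                        null, "task payload".getBytes("UTF-8"));
            } catch (AlreadyClosedException e) {
                System.out.println("channel was closed by the earlier 404: " + e.getMessage());
            }
        }
    }
}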

asadmin start-domain fails when remote JMS queue is unreachable

I have 2 servers, A and B, running a GlassFish 3.1.2.2 application server. Both use a JMS queue for communication, which works fine so far. If the network connection breaks for any reason, I can see in the logs of server B (the one configured to connect to the remote queue of A) that it tries to reconnect and is actually always successful in doing so as soon as A is up again.
But the problem is that if I try to restart the GlassFish instance on B while server A is unreachable, the startup process fails after some retries and remains stuck in a kind of undefined/unusable state, i.e. the java process is started and some ports are open, but the applications are not started - not even the administration console.
IMHO the GlassFish startup process should not wait for the queues to connect; this should be done in some kind of background process.
Has anyone experienced something similar? Is there anything I can configure/tune to fix this behaviour?
Never mind, it seems to have fixed itself :(
After restarting the computer, removing the deployed EAR and deploying it again, it just worked. I haven't experienced this behaviour since then.

Can't read from remote transactional private queue using WCF in workgroup mode (can do using System.Messaging!)

I have spent days reading MSDN, forums and articles about this, and cannot find a solution to my problem.
As a PoC, I need to consume a queue from more than one machine since I need fault tolerance on the consumer side. Performance is not an issue since fewer than 100 messages a day should be exchanged.
I have coded two trivial console applications, one as client, the other as server, using Framework 4.0 (also tested on 3.5). Messages use transactions.
Everything runs fine on a single machine (Windows 7), even when running multiple consumer application instances.
Now I have 2012 and 2008 R2 virtual test servers running in the same domain (but I don't want to use AD integration anyway). I am using an IP address or "." in the endpoint address attribute to prevent DNS/AD resolution side effects.
Everything works fine IF the queue is hosted by the consumer and the producer is submitting messages to the remote private queue. This is also true if I exchange the consumer/producer roles of the 2012 and 2008 servers.
But I have NEVER been able to make this work, using WCF, when the consumer is reading from a remote queue and the producer is submitting messages locally. Submission never fails; my problem is on the consumer side.
My wish is to make this run using netMsmqBinding, but I also tried using msmqIntegrationBinding. For each test, I adapted code and configuration, then confirmed this was running ok when the consumer was consuming from the local queue.
The last test I have done is using WCF (msmqIntegrationBinding) only on the producer (local queue) and System.Messaging.MessageQueue on the consumer (remote queue): it works fine! My goal is to do the same using WCF and netMsmqBinding on both sides.
From my point of view, I have proved this problem is a WCF issue, not an MSMQ one. This has nothing to do with security, authentication, firewall, transport, protocol, MSMQ version etc.
Error info from MS Service Trace Viewer:
Using msmqIntegrationBinding, when receiving the message (opening the queue was OK): An error occurred while receiving a message from the queue: The transaction specified cannot be imported. (-1072824242, 0xc00e004e). Ensure that MSMQ is installed and running. Make sure the queue is available to receive from.
Using netMsmqBinding, on opening the queue: An error occurred when converting the '172.22.1.9\private$\Test' queue path name to the format name: The queue path name specified is invalid. (-1072824300, 0xc00e0014). All operations on the queued channel failed. Ensure that the queue address is valid. MSMQ must be installed with Active Directory integration enabled and access to it is available.
If someone can help me find out why my configuration cannot be handled by WCF, which is a much more elegant and configurable way than System.Messaging, I would greatly appreciate it!
Thank you.
You may need to post your consumer code and config to give more of an idea, but it could be the construction of the queue name - e.g.
FormatName:DIRECT=TCP:192.168.0.2\SomeQueue
There are several different ways to connect to a queue and it changes when you are remote or local as well.
I have found this article in the past to help:
http://blogs.msdn.com/b/johnbreakwell/archive/2009/02/26/difference-between-path-name-and-format-name-when-accessing-msmq-queues.aspx
Also, MessageQueue Constructor on MSDN...
http://msdn.microsoft.com/en-us/library/ch1d814t.aspx
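Since the goal is netMsmqBinding on both sides, note that netMsmqBinding endpoint addresses use the net.msmq URI scheme rather than an MSMQ path or format name. Taking the host and queue from the error message above (so treat them as an assumption about the real setup), the consumer's endpoint address would look like:
net.msmq://172.22.1.9/private/Test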