I am using JBoss AS 7.1.1.Final and HornetQ 2.2.13.Final.
I have a couple of queues configured and one of them is "full" of messages, a couple of thousand. I can't delete the messages.
I've tried deleting them using the JBoss CLI with the command
/subsystem=messaging/hornetq-server=default/jms-queue=Queue:remove-messages
it responds with success, but the messages are still there...
I've tried deleting them with a JMX operation from JConsole. It returns zero and the message count stays the same.
I've also tried deleting the queue in the JBoss console and restarting the AS. After I configure the queue again, the messages are still there because they are persisted.
The only thing that worked was disabling persistence for the HornetQ server inside standalone.xml.
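That is, roughly this inside the <hornetq-server> element of the messaging subsystem:

<persistence-enabled>false</persistence-enabled>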
Does anybody know how to do this using JConsole or the JBoss CLI?
All you have to do is call the remove-messages operation from the jboss-cli:
/subsystem=messaging/hornetq-server=default/jms-queue=testQueue:remove-messages
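Since you also asked about JConsole: the CLI operation maps to a JMX operation on the queue's control MBean, which you can also invoke programmatically. A rough sketch only, assuming you already have an MBeanServerConnection to the server and that the ObjectName matches what JConsole shows for the queue (with HornetQ 2.2.x it typically sits under the org.hornetq domain, but copy the exact name from JConsole's MBeans tab); removeMessages takes a filter string, and a null/empty filter matches every message:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class RemoveAllMessages {
    // mbs: an MBeanServerConnection already attached to the server's JMX endpoint
    // (the same one JConsole is connected to).
    static void removeAll(MBeanServerConnection mbs) throws Exception {
        // Assumption: this ObjectName is the HornetQ 2.2.x JMS queue control; adjust it
        // to whatever JConsole actually shows for your queue.
        ObjectName queue = new ObjectName("org.hornetq:module=JMS,type=Queue,name=\"testQueue\"");
        // JMSQueueControl.removeMessages(filter): a null/empty filter matches every message
        // and the return value is the number of messages removed.
        Object removed = mbs.invoke(queue, "removeMessages",
                new Object[] { null }, new String[] { String.class.getName() });
        System.out.println("Removed " + removed + " messages");
    }
}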
I just tried this on the exact versions you mention, adding a large amount of messages (including with paging), and everything worked fine.
I configured my system to page, and used this to create a few thousand messages:
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.jms.HornetQJMSClient;
import org.hornetq.api.jms.JMSFactoryType;
import org.hornetq.jms.client.HornetQConnectionFactory;

// NETTY_CONNECTOR_FACTORY is the fully qualified class name of the Netty connector factory,
// e.g. org.hornetq.core.remoting.impl.netty.NettyConnectorFactory
HornetQConnectionFactory cf = HornetQJMSClient.createConnectionFactoryWithoutHA(
        JMSFactoryType.CF, new TransportConfiguration(NETTY_CONNECTOR_FACTORY));
Connection conn = cf.createConnection("guest", "hello");
Session sess = conn.createSession(true, Session.SESSION_TRANSACTED);
javax.jms.Queue queue = sess.createQueue("testQueue");
MessageProducer prod = sess.createProducer(queue);

for (int i = 0; i < 50000; i++)
{
    TextMessage msg = sess.createTextMessage("hello " + i);
    prod.send(msg);
    if (i % 500 == 0)
    {
        System.out.println("Sent " + i);
        System.out.println("commit");
        sess.commit();
    }
}
sess.commit();
conn.close();
I then tried the remove method and it worked:
/subsystem=messaging/hornetq-server=default/jms-queue=testQueue:remove-messages
If this is not working there are two possibilities:
We changed how the locks are held on the queue during delivery in later releases. Perhaps you are hitting a bug that has since been fixed, in which case you would have to move to a newer version.
You have messages in delivery on consumers. We can't delete messages while they sit in a consumer's buffer in the delivering state. You would have to disconnect the consumers to delete all messages.
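To check for the second case from the CLI, the queue resource exposes runtime attributes such as consumer-count and delivering-count (attribute names as exposed in AS 7.1; verify with :read-resource-description on your version):

/subsystem=messaging/hornetq-server=default/jms-queue=testQueue:read-attribute(name=consumer-count)
/subsystem=messaging/hornetq-server=default/jms-queue=testQueue:read-attribute(name=delivering-count)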
I'm adding this answer here because I did a lot of research trying to replicate your issue and everything worked like a charm. I would need more information about what's going on.
I think the best place would be the user's forum, where we can discuss it further. Stack Overflow is geared toward simple questions and answers; it's not a place for investigating bugs or anything like that.
https://community.jboss.org/en/hornetq?view=discussions
Related
I'm getting strange behavior in my DEV environment. When I send a message to my queue it's dispatched correctly, but the next message (whether with the same content or different) always fails and is sent directly to my dead-letter queue. This pattern then repeats: one OK, one sent to the dead-letter queue.
In my local setup everything works OK, but not in my DEV environment, which makes it a little difficult to debug/troubleshoot. I'm not sure what could be wrong or different. I'm new to RabbitMQ, so maybe I need to include more information (if so, please let me know).
Does anyone have an idea of what could be causing it? Or has anyone experienced something like this before?
RabbitMQ version: 3.8.2
My rabbitmq.config file is:
[{rabbitmq_management, [{tcp_config, [{port, 15672}]}]},
 {rabbit, [{total_memory_available_override_value, 3999997952},
           {tcp_listeners, [5672]},
           {loopback_users, []}]}].
My two queues are configured this way:
**my-queue.dev**
Type: Classic
Features: D, DLX
**my-queue.dev.deadletter**
Type: Classic
Features: D
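For reference, the equivalent of those features (D = durable, DLX = a dead-letter-exchange argument) declared programmatically would look roughly like this; the Java client is used only as an illustration and the names are placeholders:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.util.HashMap;
import java.util.Map;

public class DeclareQueues {
    public static void main(String[] args) throws Exception {
        Connection conn = new ConnectionFactory().newConnection();
        Channel channel = conn.createChannel();

        // my-queue.dev.deadletter: classic, durable (D)
        channel.queueDeclare("my-queue.dev.deadletter", true, false, false, null);

        // my-queue.dev: classic, durable (D) with a dead-letter exchange (DLX)
        Map<String, Object> dlxArgs = new HashMap<>();
        dlxArgs.put("x-dead-letter-exchange", "my-dlx"); // placeholder exchange name
        channel.queueDeclare("my-queue.dev", true, false, false, dlxArgs);

        conn.close();
    }
}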
Kind regards!
Hi, I am new to ActiveMQ.
We are using ActiveMQ 5.8.0 as the message broker for our system. My requirement is to send an alert mail if the number of messages in a particular queue exceeds some specified (configurable) number. I found that we can use a QueueBrowser to get the list of messages.
Below is the code snippet:
// TestQBrowser is a javax.jms.QueueBrowser created from the session,
// e.g. QueueBrowser TestQBrowser = session.createBrowser(queue);
Enumeration enum1 = TestQBrowser.getEnumeration();
int count = 0;
while (enum1.hasMoreElements()) {
    count++;
    enum1.nextElement();
}
if (count > 5) {
    sendMail("Queue has more pending messages than threshold 5"); // logic to send alert mail
}
This was working as expected previously, but recently I got a strange number (1113762 messages) for the queue, whereas when I checked the same queue in the ActiveMQ admin console there were only 100 messages.
Can you please help me understand why I am getting this high message count? Is there a problem with my approach, or is it an issue with QueueBrowser?
P.S.: This is my first question on Stack Overflow. It might be a basic one, but I have been spending a lot of time on this issue.
There is a bug in ActiveMQ 5.8 that causes this. You need to move to version 5.9.0 if you want to use the QueueBrowser reliably for this. However, you will probably still run into issues if the queue is too deep: there is no guarantee that the browser will return all the messages, since it must work within the configured memory limits, which can cause it to stop paging in messages from the store.
Check out How can I monitor ActiveMQ; there are many possibilities.
Advisory messages are probably the best fit for your requirement.
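If the goal is simply "alert when the depth exceeds N", another option from that monitoring page is to read the queue's QueueSize attribute over JMX instead of browsing. A rough sketch; the JMX URL, broker name, queue name and threshold are assumptions for illustration, and the ObjectName layout shown is the 5.8 one (it changed in 5.9):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class QueueDepthCheck {
    public static void main(String[] args) throws Exception {
        // Assumptions: the broker has JMX enabled on localhost:1099 and its name is "localhost".
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName queue = new ObjectName(
                    "org.apache.activemq:BrokerName=localhost,Type=Queue,Destination=TEST.QUEUE");
            long queueSize = ((Number) mbs.getAttribute(queue, "QueueSize")).longValue();
            if (queueSize > 5) {
                // hook the alert mail logic in here
                System.out.println("Queue depth " + queueSize + " exceeds threshold");
            }
        } finally {
            connector.close();
        }
    }
}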
I have a producer sending durable messages to a RabbitMQ exchange. If the RabbitMQ memory or disk exceeds the watermark threshold, RabbitMQ will block my producer. The documentation says that it stops reading from the socket, and also pauses heartbeats.
What I would like is a way to know in my producer code that I have been blocked. Currently, even with a heartbeat enabled, everything just pauses forever. I'd like to receive some sort of exception so that I know I've been blocked and I can warn the user and/or take some other action, but I can't find any way to do this. I am using both the Java and C# clients and would need this functionality in both. Any advice? Thanks.
Sorry to tell you but with RabbitMQ (at least with 2.8.6) this isn't possible :-(
I had a similar problem, which centred around trying to establish a channel when the connection was blocked. The result was the same as what you're experiencing.
I did some investigation into the core of the RabbitMQ C# .NET library and discovered that the root cause of the problem is that it goes into an infinite blocking state.
You can see more details on the RabbitMQ mailing list here:
http://rabbitmq.1065348.n5.nabble.com/Net-Client-locks-trying-to-create-a-channel-on-a-blocked-connection-td21588.html
One suggestion (which we didn't implement) was to do the work inside a thread and have some other component manage the timeout, killing the thread if it is exceeded. We just accepted the risk :-(
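A rough sketch of that suggestion, shown with the Java client since you need this in Java as well; the timeout value and the decision to simply give up on timeout are assumptions for illustration:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import com.rabbitmq.client.Channel;

public class TimedPublisher {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    /** Publishes on a worker thread and gives up after the timeout instead of hanging forever. */
    public boolean tryPublish(Channel channel, String queue, byte[] body, long timeoutMs)
            throws Exception {
        Future<?> publish = executor.submit(() -> {
            channel.basicPublish("", queue, null, body); // blocks silently if the connection is blocked
            return null;
        });
        try {
            publish.get(timeoutMs, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException e) {
            publish.cancel(true); // interrupts the task; it cannot force a stuck socket write to stop
            return false;         // caller can now warn the user or take other action
        }
    }
}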
The RabbitMQ Java client uses a blocking RPC call that waits for a reply indefinitely.
If you look at the Java client API, what it does is:
AMQChannel.BlockingRpcContinuation k = new AMQChannel.SimpleBlockingRpcContinuation();
k.getReply(-1);
Passing -1 as the argument makes it block until a reply is received.
The good thing is that you could pass in your own timeout to make it return.
The bad thing is that you will have to update the client jars.
If you are OK with doing that, you can pass in a timeout wherever a blocking call like the one above is made.
The code would look something like:
try {
    return k.getReply(200); // wait at most 200 ms for the broker's reply
} catch (TimeoutException e) {
    throw new MyCustomRuntimeorTimeoutException("RabbitTimeout ex", e);
}
And in your code you could handle this exception and perform your logic in this event.
Some related classes that might require this fix would be:
com.rabbitmq.client.impl.AMQChannel
com.rabbitmq.client.impl.ChannelN
com.rabbitmq.client.impl.AMQConnection
FYI: I have tried this and it works.
RabbitMQ ticks all the boxes for the project I am planning, save one. I would have different workers listening on a queue and it is important that they process the newest messages (i.e., latest sequence number) first (LIFO).
My application is such that newer messages pretty much make older ones obsolete. If you have workers to spare you could still process the older messages, but it is important that the newer ones are done first.
After trawling the various forums and such, the only solution I can see is that, for a client to process a message, it should first:
consume all messages
re-order them according to the sequence number
re-submit to the queue
consume the first message
Ugly, and problematic if the client dies halfway through. But maybe somebody here has a better solution.
My research is based (in part) on:
http://groups.google.com/group/rabbitmq-discuss/browse_thread/thread/e79e77d86bc7a3b8?fwc=1
http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2010-July/007934.html
http://groups.google.com/group/rabbitmq-discuss/browse_thread/thread/e40d1069dcebe2cc
http://old.nabble.com/Priority-Queue-implementation-and-performance-td29946348.html
Note: the expected traffic of messages will roughly be in the range of 1 msg/hour for some queues and 100/minute for others. So nothing stellar.
Since there is no reply I guess I did my homework rather well ;)
Anyway, after discussing the requirements with the other stakeholders it was decided I can drop the LIFO requirement for now. We can worry about that when it comes to it.
A solution that we will probably end up adopting is for the worker to open a second queue that the master can use to tell the worker which jobs to ignore, plus provide additional control/monitoring information (which it looks like we will need anyway); a rough sketch is below.
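Roughly the shape we have in mind, as a sketch only (queue names, the use of the message id as the job key, and the ack handling are assumptions):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

public class WorkerSketch {
    public static void main(String[] args) throws Exception {
        Connection conn = new ConnectionFactory().newConnection();
        Channel channel = conn.createChannel();
        channel.queueDeclare("jobs", true, false, false, null);
        // server-named, exclusive control queue the master publishes "ignore" notices to
        String controlQueue = channel.queueDeclare().getQueue();

        // job ids the master has told us to skip
        Set<String> obsolete = ConcurrentHashMap.newKeySet();

        channel.basicConsume(controlQueue, true, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String tag, Envelope env,
                                       AMQP.BasicProperties props, byte[] body) {
                obsolete.add(new String(body, StandardCharsets.UTF_8));
            }
        });

        channel.basicConsume("jobs", false, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String tag, Envelope env,
                                       AMQP.BasicProperties props, byte[] body) throws IOException {
                String jobId = props.getMessageId();
                if (jobId != null && obsolete.contains(jobId)) {
                    channel.basicAck(env.getDeliveryTag(), false); // skip obsolete job
                    return;
                }
                // ... process the job here ...
                channel.basicAck(env.getDeliveryTag(), false);
            }
        });
    }
}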
RabbitMQ implementing the AMQP 1.0 spec may also help here.
So I will mark this question as answered for now. Somebody else is still free to add or improve.
One possibility might be to use basic.get in a loop and wait for the message-count field of the basic.get-ok response to become zero (throwing away all other messages):
while (<get ok> = <call basic.get>) {
    if (<get ok>.message-count == 0) {
        // Now <get ok> is the most recent message on this queue
        break;
    } else if (<is get-empty>) {
        // Someone else got it
    }
}
Of course, you'd have to set up the message routing patterns on the broker such that one consumer throwing away messages doesn't interfere with another. Try to avoid requeueing messages, as they will be requeued at the top of the stack, making them look like the most recent.
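For concreteness, here is a sketch of that loop with the RabbitMQ Java client; the channel and queue name are assumed to already exist, and autoAck=true is what "throws away" the older messages:

import java.io.IOException;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.GetResponse;

public class NewestMessage {
    // Returns the most recent message on the queue, discarding everything older,
    // or null if the queue was empty (someone else got it).
    static GetResponse takeNewest(Channel channel, String queueName) throws IOException {
        GetResponse response;
        // autoAck=true: every fetched message is removed from the queue immediately
        while ((response = channel.basicGet(queueName, true)) != null) {
            // message-count in the get-ok frame = messages remaining after this one
            if (response.getMessageCount() == 0) {
                return response; // nothing left behind it, so this is the newest
            }
            // otherwise drop it and keep fetching
        }
        return null;
    }
}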
I have a C# publisher and subscriber that talk to each other using ActiveMQ and NMS. Everything works fine, except I have no way to know when ActiveMQ goes down. This is particularly bad for the consumer. They stop getting data, but aside from the fact that data stops showing up, no errors or events are raised.
Is there a way, using NMS (particularly the Apache.NMS.IConnection or Apache.NMS.ISession objects), to detect that the broker has gone down?
I downloaded the implementation I'm using from Spring, but I'm not using any Spring-specific implementations; everything I'm using is in the Apache.NMS and Apache.NMS.ActiveMQ namespaces.
Well, it's been a long time since this question was asked, but now you have several events available:
m_connection.ConnectionInterruptedListener += new ConnectionInterruptedListener(OnConnectionInterruptedListener);
m_connection.ConnectionResumedListener += new ConnectionResumedListener(OnConnectionResumedListener);
m_connection.ExceptionListener += new ExceptionListener(OnExceptionListener);
where m_connection is a IConnection object.
With these 3 events you will be able to tell when your broker is down (among other useful information, such as when the connection resumes or when an exception is encountered).
Note: if you are in failover mode, these exceptions will be swallowed by the failover transport layer and handled automatically by it, so you will not receive any of these events.