I have an ActiveMQ broker (5.6.0) and a Spring JMS producer. I am using JMS queues, not topics. It works great, but when the producer has many messages to send, I sometimes get:
'org.springframework.jms.UncategorizedJmsException: Uncategorized
exception occured during JMS processing; nested exception is
javax.jms.JMSException:
org.apache.activemq.transport.RequestTimedOutIOException'
although all the messages are actually being sent to the broker.
Sending a lot of messages using a JmsTemplate configured with a plain connection factory outside a Java EE container is never a good idea. It opens new network connections, builds new sessions, etc., for every message.
Read the JmsTemplate Gotchas page from ActiveMQ for some background and help with solving the issue.
Just configuring a PooledConnectionFactory or a CachingConnectionFactory might solve your issue.
That said, I don't know if it will solve your RequestTimedOutIOException, but it's a good place to start.
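For illustration, a minimal sketch of wrapping a plain ActiveMQ connection factory in Spring's CachingConnectionFactory; the broker URL and cache size here are just placeholders:

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

public class ProducerConfig {

    public JmsTemplate jmsTemplate() {
        // Plain ActiveMQ connection factory; the broker URL is a placeholder.
        ActiveMQConnectionFactory amqFactory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        // Reuse connections and sessions instead of opening new ones for every send.
        CachingConnectionFactory cachingFactory = new CachingConnectionFactory(amqFactory);
        cachingFactory.setSessionCacheSize(10);

        return new JmsTemplate(cachingFactory);
    }
}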
I had the same error. In my case I only had to increase the sendTimeout parameter of the connection factory from 2000 to 5000 or higher.
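If you configure the factory in Java rather than XML, the same setting can be applied directly on the ActiveMQ connection factory; the URL and timeout below are just illustrative values:

import org.apache.activemq.ActiveMQConnectionFactory;

public class TimeoutConfig {

    public static ActiveMQConnectionFactory connectionFactory() {
        // Broker URL is a placeholder.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        // Give slow sends more time before a RequestTimedOutIOException is raised.
        factory.setSendTimeout(5000); // milliseconds
        return factory;
    }
}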
Related
I am using Spring AMQP to publish messages to RabbitMQ. Consider a scenario:
1. Java client sends a message to MQ using amqpTemplate.convertAndSend()
2. But RabbitMQ is down OR there is some network issue
In this case, will the message be lost? Or is there any way it can be persisted and retried?
I checked the publisher-confirms model as well, but as I understand it, we ultimately have to handle nacked messages in our own code.
The RabbitTemplate supports adding a RetryTemplate, which can be configured with whatever retry semantics you want. It will handle situations where the broker is down.
See Adding Retry Capabilities.
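A rough sketch of wiring a RetryTemplate into the RabbitTemplate; the host, attempt count, and backoff values are placeholders, not recommendations:

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.retry.backoff.ExponentialBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

public class RabbitConfig {

    public RabbitTemplate rabbitTemplate() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");

        // Retry a failed publish up to 3 times.
        RetryTemplate retryTemplate = new RetryTemplate();
        retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3));

        // Back off between attempts: 500 ms, then 1 s, then 2 s, capped at 10 s.
        ExponentialBackOffPolicy backOff = new ExponentialBackOffPolicy();
        backOff.setInitialInterval(500);
        backOff.setMultiplier(2.0);
        backOff.setMaxInterval(10000);
        retryTemplate.setBackOffPolicy(backOff);

        RabbitTemplate template = new RabbitTemplate(connectionFactory);
        template.setRetryTemplate(retryTemplate);
        return template;
    }
}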
You can use a transaction or publisher confirms to ensure RabbitMQ has secured the message.
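If you go the publisher-confirms route instead, a minimal sketch might look like this; what you do on a nack (log, republish, alert) is still up to your own code, as the question notes:

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class ConfirmConfig {

    public RabbitTemplate confirmingTemplate() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        connectionFactory.setPublisherConfirms(true); // ask the broker for confirms

        RabbitTemplate template = new RabbitTemplate(connectionFactory);
        // Invoked asynchronously when the broker acks or nacks the publish.
        template.setConfirmCallback((correlationData, ack, cause) -> {
            if (!ack) {
                // The broker did not secure the message; log, republish, or alert here.
                System.err.println("Publish nacked: " + cause);
            }
        });
        return template;
    }
}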
As I understand it, some errors in AMQP cause the channel to be force-closed (such as publishing to a non-existent exchange).
Why is this? And what should an application do?
The close could cause consumers, bindings, queues, and exchanges to be deleted. The Java client supports recovering these on network failure through AutorecoveringConnection, but the same does not happen in this case.
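For illustration, a small sketch with the RabbitMQ Java client showing the broker force-closing the channel when you publish to an exchange that does not exist; the host and names are made up:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ChannelCloseDemo {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder host

        try (Connection connection = factory.newConnection()) {
            Channel channel = connection.createChannel();

            // Fired when the broker force-closes the channel (e.g. 404 NOT_FOUND).
            channel.addShutdownListener(cause ->
                    System.err.println("Channel closed: " + cause.getReason()));

            // "no-such-exchange" does not exist, so the broker closes the channel.
            channel.basicPublish("no-such-exchange", "some.key", null, "hello".getBytes());

            Thread.sleep(1000); // give the broker a moment to send the close frame
        }
    }
}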
I have a requirement to load messages from two queues, and I am using ActiveMQ. I have to implement a retry mechanism in case of any error, network failure, or application server failure, and put the message back onto the same queue. I also want to send any poison messages to a DLQ.
Please let me know whether I can achieve this through Spring JMS, and please point me to some good examples for accomplishing the task. I checked the Spring JMS documentation, but it does not have much detail on this.
This is a broker function with ActiveMQ - just configure the broker with the appropriate policies.
If using a DefaultMessageListenerContainer, you must use transacted sessions; then, if the listener throws an exception, the message will be rolled back onto the queue and the broker's retry/DLQ policies kick in.
See the Spring documentation about enabling transactions.
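As a rough sketch of both pieces together (broker URL, queue name, and limits are placeholders): the client-side RedeliveryPolicy controls how often and how quickly the message is retried, the transacted DefaultMessageListenerContainer rolls it back on an exception, and once redeliveries are exhausted the broker moves the message to its DLQ (ActiveMQ.DLQ by default).

import javax.jms.Message;
import javax.jms.MessageListener;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ListenerConfig {

    public DefaultMessageListenerContainer listenerContainer() {
        // Broker URL is a placeholder.
        ActiveMQConnectionFactory connectionFactory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        // Retry up to 5 times, waiting 1 second before the first redelivery.
        RedeliveryPolicy redeliveryPolicy = connectionFactory.getRedeliveryPolicy();
        redeliveryPolicy.setMaximumRedeliveries(5);
        redeliveryPolicy.setInitialRedeliveryDelay(1000);

        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("orders.queue"); // placeholder queue name
        container.setSessionTransacted(true); // roll back onto the queue on exception
        container.setMessageListener((MessageListener) message -> process(message));
        return container;
    }

    private void process(Message message) {
        // Throwing a RuntimeException from here rolls the message back; once the
        // redelivery policy is exhausted, the broker moves it to the DLQ.
    }
}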
Trying to work past an issue when using a Resque job to process inbound AMQP messages.
I am using an initializer to set up the message consumer at application startup and then feed the received messages to a Resque job for processing. That is working quite well.
However, I also want to produce a response message from the worker, i.e. publish it back out to a queue, and I am running into the issue that the forked process makes the app-wide AMQP connection unaddressable from inside the Resque worker. I would be very interested to see how other folks have tackled this, as I can't believe this pattern is unusual.
Due to message volumes, firing up a new thread and AMQP connection for every response is not a workable solution.
Ideas?
My bust on this; I had my eye off the ball and forgot that Resque forks when it kicks off a worker. Going to go the route suggested by others and daemonize the process instead...
I am running into a deadlock situation when receiving messages on a pool of MDBs in GlassFish. If I receive multiple messages that concurrently try to update the same set of rows, the MDB throws a LockAcquisitionException. Unfortunately the GlassFish JMS provider redelivers the message immediately causing the same exception to occur again. I was hoping to configure the JMS provider to redeliver after some delay, but this does not seem to be supported. Any ideas on how I could solve this issue?
Have you looked at
Configuring a 'retry delay' in MQ Series
What about catching the error, sleeping, and then re-throwing it?
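A crude sketch of that approach inside the MDB (the delay and the updateRows helper are made up); be aware that sleeping in onMessage ties up the MDB instance and its transaction for the duration:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue")
})
public class OrderUpdateMdb implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            updateRows(message); // hypothetical work that may hit LockAcquisitionException
        } catch (RuntimeException e) {
            try {
                Thread.sleep(2000); // crude delay before the container redelivers
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
            // Re-throwing rolls back the transaction, so the message is redelivered.
            throw e;
        }
    }

    private void updateRows(Message message) {
        // concurrent row updates go here
    }
}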
Here's a link to some Oracle documentation on the configuration options:
http://download.oracle.com/docs/cd/E19798-01/821-1794/aeooq/index.html
endpointExceptionRedeliveryAttempts
This will allow you to catch errors. You could then implement an MBean on the Fault/RME endpoint and add in artificial delays.
But there doesn't appear to be a way to put a retry delay in GlassFish at this time.
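For reference, endpointExceptionRedeliveryAttempts is, as far as I know, a property of the GlassFish MQ resource adapter and should be settable as an activation config property on the MDB; this is a sketch under that assumption, with a placeholder value:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
        // Property name taken from the Oracle docs linked above; the value is a placeholder.
        @ActivationConfigProperty(propertyName = "endpointExceptionRedeliveryAttempts",
                                  propertyValue = "5")
})
public class RetryingMdb implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // business logic; throwing an exception triggers the redelivery attempts above
    }
}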