RabbitMQ Camel Consumer - Consume a single message

I have a scenario where I want to "pull" messages off a RabbitMQ queue/topic and process them one at a time.
Specifically if there are already messages sitting on the queue when the consumer starts up.
I have tried the following with no success (meaning, each of these options reads the queue until it is either empty or until another thread closes the context).
1. Stopping the route as soon as the first message is processed
final Boolean[] routeComplete = { Boolean.FALSE }; // flag checked by the main thread below
final CamelContext context = new DefaultCamelContext();
try {
    context.addRoutes(new RouteBuilder() {
        @Override
        public void configure() throws Exception {
            RouteDefinition route = from("rabbitmq:harley?queue=IN&declare=false&autoDelete=false&hostname=localhost&portNumber=5672")
                    .routeId("RabbitRoute"); // id referenced by stopRoute() below
            route.process(new Processor() {
                Thread stopThread;

                @Override
                public void process(final Exchange exchange) throws Exception {
                    String name = exchange.getIn().getHeader(Exchange.FILE_NAME_ONLY, String.class);
                    String body = exchange.getIn().getBody(String.class);
                    // Do some stuff
                    routeComplete[0] = true;
                    if (stopThread == null) {
                        stopThread = new Thread() {
                            @Override
                            public void run() {
                                try {
                                    ((DefaultCamelContext) exchange.getContext()).stopRoute("RabbitRoute");
                                } catch (Exception e) {}
                            }
                        };
                    }
                    stopThread.start();
                }
            });
        }
    });
    context.start();
    while (!routeComplete[0].booleanValue())
        Thread.sleep(100);
    context.stop();
} catch (Exception e) {
    e.printStackTrace();
}
2. Similar to 1, but using a latch rather than a while loop and sleep.
3. Using a PollingConsumer
final CamelContext context = new DefaultCamelContext();
context.start();
// srcRoute is the rabbitmq endpoint URI used above
Endpoint re = context.getEndpoint(srcRoute);
re.start();
try {
    PollingConsumer consumer = re.createPollingConsumer();
    consumer.start();
    Exchange exchange = consumer.receive();
    String bb = exchange.getIn().getBody(String.class);
    consumer.stop();
} catch (Exception e) {
    String mm = e.getMessage();
}
4. Using a ConsumerTemplate - code similar to the above (a rough sketch follows).
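Roughly, the ConsumerTemplate version looks like this (a sketch only; srcRoute is the same endpoint URI as above):
// Rough sketch of the ConsumerTemplate variant (srcRoute is the same
// rabbitmq endpoint URI used in the PollingConsumer example).
final CamelContext context = new DefaultCamelContext();
context.start();
ConsumerTemplate consumerTemplate = context.createConsumerTemplate();
try {
    // blocks until one exchange arrives from the endpoint
    Exchange exchange = consumerTemplate.receive(srcRoute);
    String body = exchange.getIn().getBody(String.class);
} catch (Exception e) {
    String mm = e.getMessage();
}
consumerTemplate.stop();
context.stop();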
I have also tried enabling prefetch and setting the max number of exchanges to 1.
None of these appear to work; if there are 3 messages on the queue, all are read before I am able to stop the route.
If I were to use the standard RabbitMQ Java API I would use a basicGet() call which lets me read a single message, but for other reasons I would prefer to use a Camel consumer.
Has anyone successfully been able to process a single message on a queue that holds multiple messages using a Camel RabbitMQ Consumer?
Thanks.

This is not the primary intention of the component as its for continued received. But I have created a ticket to look into supporting a basicGet (single receive). There is a new spring based rabbitmq component coming in 3.8 onwards so its going to be implemeneted there (first): https://issues.apache.org/jira/browse/CAMEL-16048
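Until that lands, a single message can be pulled with the plain RabbitMQ Java client's basicGet(), as mentioned in the question. A minimal sketch, assuming the IN queue and a local broker with default credentials:
import com.rabbitmq.client.*;
import java.nio.charset.StandardCharsets;

// Sketch only: pull exactly one message off the IN queue and ack it,
// bypassing the Camel component.
public static String consumeSingleMessage() throws Exception {
    ConnectionFactory factory = new ConnectionFactory();
    factory.setHost("localhost");
    factory.setPort(5672);
    try (Connection connection = factory.newConnection();
         Channel channel = connection.createChannel()) {
        GetResponse response = channel.basicGet("IN", false); // autoAck=false
        if (response == null) {
            return null; // queue was empty
        }
        String body = new String(response.getBody(), StandardCharsets.UTF_8);
        // process the message, then acknowledge it so only this one is removed
        channel.basicAck(response.getEnvelope().getDeliveryTag(), false);
        return body;
    }
}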

Related

Camel Route Losing Message on restart in camel rabbitmq

I am using camel-rabbitmq.
Here is my route definition:
camelContext.addRoutes(new RouteBuilder() {
    @Override
    public void configure() throws Exception {
        from("rabbitmq:TEST?queue=TEST&concurrentConsumers=5")
            .routeId("jms")
            .autoStartup(false)
            .throttle(10)
            .asyncDelayed()
            .log("Consuming message ${body} to ${header.deliveryAddress}")
            .process(new Processor() {
                @Override
                public void process(Exchange exchange) throws Exception {
                    System.out.println(atomicLong.decrementAndGet());
                }
            });
    }
});
When I push 500 messages to this queue and then stop and start the route, all messages on the channel are lost; I wonder where they are going.
If I configure the same route with &autoAck=false it works properly, but performance suffers. Why does Camel not offer the same behavior with and without autoAck?
I managed to fix my problem by making the following change in RabbitMQConsumer of camel-rabbitmq:
public void handleCancelOk(String consumerTag) {
    // no work to do
    log.info("Received cancelOk signal on the rabbitMQ channel");
    downLatch.countDown(); // added: release doStop() once the broker confirms the cancel
}

@Override
protected void doStop() throws Exception {
    if (channel == null) {
        return;
    }
    this.requeueChannel = openChannel(consumer.getConnection());
    if (tag != null && isChannelOpen()) {
        channel.basicCancel(tag);
    }
    stopping = true;
    downLatch.await();
    try {
        lock.acquire();
        if (isChannelOpen()) {
            channel.close();
        }
    } catch (TimeoutException e) {
        log.error("Timeout occurred");
        throw e;
    } catch (InterruptedException e1) {
        log.error("Thread Interrupted!");
    } finally {
        lock.release();
    }
}
By doing this, the Camel route waits for in-flight messages to be consumed and message loss is avoided.
You need to check the RabbitMQ consumer prefetch count (see the consumer prefetch documentation).
I think that by default the consumer picks up all the messages in the queue into its memory buffers.
If you set the prefetch count to 1, the consumer will fetch and acknowledge messages one by one.
All the other unacknowledged messages will remain in the queue in the ready state, waiting to be picked up after the consumer completes its task on the previously picked message.
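As a rough sketch (prefetchEnabled/prefetchCount and autoAck are camel-rabbitmq endpoint options, but verify the exact names for your Camel version), the route from the question could be limited to one unacknowledged message at a time like this:
// Sketch only: autoAck=false plus a prefetch of 1 keeps at most one
// unacknowledged message in flight, so messages left unprocessed on a
// stop/restart stay on the broker instead of being lost.
from("rabbitmq:TEST?queue=TEST"
        + "&autoAck=false"          // ack only after the route finishes the exchange
        + "&prefetchEnabled=true"
        + "&prefetchCount=1")       // fetch one message at a time
    .routeId("jms")
    .log("Consuming message ${body}")
    .process(exchange -> System.out.println(exchange.getIn().getBody(String.class)));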

How to move messages from one queue to another in RabbitMQ

In RabbitMQ, I have a failure queue in which I have all the failed messages from different queues. Now I want to provide 'Retry' functionality, so that an administrator can move the failed messages back to their respective queues. The idea is something like this:
The diagram above shows the structure of my failure queue. After clicking the Retry link, the message should move into its original queue, i.e. queue1, queue2, etc.
If you are looking for Java code to do this, then you simply have to consume the messages you want to move and publish them to the required queue. Just look at the RabbitMQ Tutorials page if you are unfamiliar with basic consuming and publishing operations.
It's not a straightforward consume-and-publish. RabbitMQ is not designed that way; it takes into consideration that both the exchange and the queue could be temporary and can be deleted. This is embedded in the channel, which closes the connection after a single publish.
Assumptions:
- You have a durable queue and exchange for destination ( to send to)
- You have a durable queue for target ( to take from )
Here is the code to do so:
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.QueueingConsumer;
import org.apache.commons.lang.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageProperties;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.support.converter.SerializerMessageConverter;

import java.io.IOException;
import java.util.concurrent.TimeoutException;

public Object shovelMessage(
        String exchange,
        String targetQueue,
        String destinationQueue,
        String host,
        Integer port,
        String user,
        String pass,
        int count) throws IOException, TimeoutException, InterruptedException {
    if (StringUtils.isEmpty(exchange) || StringUtils.isEmpty(targetQueue) || StringUtils.isEmpty(destinationQueue)) {
        return null;
    }
    CachingConnectionFactory factory = new CachingConnectionFactory();
    factory.setHost(StringUtils.isEmpty(host) ? internalHost.split(":")[0] : host);
    factory.setPort(port > 0 ? port : Integer.parseInt(internalPort.split(":")[1]));
    factory.setUsername(StringUtils.isEmpty(user) ? this.user : user);
    factory.setPassword(StringUtils.isEmpty(pass) ? this.pass : pass);
    Channel tgtChannel = null;
    try {
        org.springframework.amqp.rabbit.connection.Connection connection = factory.createConnection();
        tgtChannel = connection.createChannel(false);
        tgtChannel.queueDeclarePassive(targetQueue);
        QueueingConsumer consumer = new QueueingConsumer(tgtChannel);
        tgtChannel.basicQos(1);
        tgtChannel.basicConsume(targetQueue, false, consumer);
        for (int i = 0; i < count; i++) {
            QueueingConsumer.Delivery msg = consumer.nextDelivery(500);
            if (msg == null) {
                // if no message found, break from the loop.
                break;
            }
            // Send it to the destination queue.
            // This repetition is required, as the channel loses the connection with the
            // queue after a single publish and starts throwing "queue or exchange not
            // found" on the connection.
            Channel destChannel = connection.createChannel(false);
            try {
                destChannel.queueDeclarePassive(destinationQueue);
                SerializerMessageConverter serializerMessageConverter = new SerializerMessageConverter();
                Message message = new Message(msg.getBody(), new MessageProperties());
                Object o = serializerMessageConverter.fromMessage(message);
                // For some reason msg.getBody() writes a byte array which is read as a byte array
                // on the consumer end, hence this double conversion.
                destChannel.basicPublish(exchange, destinationQueue, null,
                        serializerMessageConverter.toMessage(o, new MessageProperties()).getBody());
                tgtChannel.basicAck(msg.getEnvelope().getDeliveryTag(), false);
            } catch (Exception ex) {
                // Send a nack if unable to publish so that a retry is attempted
                tgtChannel.basicNack(msg.getEnvelope().getDeliveryTag(), true, true);
                log.error("Exception while producing message ", ex);
            } finally {
                try {
                    destChannel.close();
                } catch (Exception e) {
                    log.error("Exception while closing destination channel ", e);
                }
            }
        }
    } catch (Exception ex) {
        log.error("Exception while creating consumer ", ex);
    } finally {
        try {
            tgtChannel.close();
        } catch (Exception e) {
            log.error("Exception while closing destination channel ", e);
        }
    }
    return null;
}
To requeue a message you can use the receiveAndReply method. The following code will move all messages from the dlq-queue to the queue-queue:
do {
    val movedToQueue = rabbitTemplate.receiveAndReply<String, String>(dlq, { it }, "", queue)
} while (movedToQueue)
In the code example above, dlq is the source queue, { it } is the identity function (you could transform the message here), "" is the default exchange and queue is the destination queue.
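The snippet above is Kotlin; a rough Java equivalent (a sketch assuming a configured RabbitTemplate, with "dlq" and "queue" standing in for your actual queue names) would be:
// Sketch only: drain the DLQ into the destination queue using Spring AMQP's
// receiveAndReply. The identity callback returns the payload unchanged;
// "" is the default exchange and "queue" is used as the routing key.
boolean moved;
do {
    moved = rabbitTemplate.receiveAndReply("dlq",
            (String payload) -> payload,   // transform the message here if needed
            "",                            // default exchange
            "queue");                      // destination queue name
} while (moved);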
I also have implemented something like that, so I can move messages from a dlq back to processing. Link: https://github.com/kestraa/rabbit-move-messages
Here is a more generic tool for some administrative/supporting tasks that the management UI is not capable of.
Link: https://github.com/bkrieger1991/rabbitcli
It also allows you to fetch/move/dump messages from queues even with a filter on message-content or message-headers :)

What is the use case of BrokerService in ActiveMQ and how to use it correctly

I am new to ActiveMQ. I'm trying to study how it works by checking the example code provided by Apache at this link:
http://activemq.apache.org/how-should-i-implement-request-response-with-jms.html
public class Server implements MessageListener {
    private static int ackMode;
    private static String messageQueueName;
    private static String messageBrokerUrl;

    private Session session;
    private boolean transacted = false;
    private MessageProducer replyProducer;
    private MessageProtocol messageProtocol;

    static {
        messageBrokerUrl = "tcp://localhost:61616";
        messageQueueName = "client.messages";
        ackMode = Session.AUTO_ACKNOWLEDGE;
    }

    public Server() {
        try {
            //This message broker is embedded
            BrokerService broker = new BrokerService();
            broker.setPersistent(false);
            broker.setUseJmx(false);
            broker.addConnector(messageBrokerUrl);
            broker.start();
        } catch (Exception e) {
            System.out.println("Exception: " + e.getMessage());
            //Handle the exception appropriately
        }
        //Delegating the handling of messages to another class, instantiate it before setting up JMS so it
        //is ready to handle messages
        this.messageProtocol = new MessageProtocol();
        this.setupMessageQueueConsumer();
    }

    private void setupMessageQueueConsumer() {
        ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(messageBrokerUrl);
        Connection connection;
        try {
            connection = connectionFactory.createConnection();
            connection.start();
            this.session = connection.createSession(this.transacted, ackMode);
            Destination adminQueue = this.session.createQueue(messageQueueName);

            //Setup a message producer to respond to messages from clients, we will get the destination
            //to send to from the JMSReplyTo header field from a Message
            this.replyProducer = this.session.createProducer(null);
            this.replyProducer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

            //Set up a consumer to consume messages off of the admin queue
            MessageConsumer consumer = this.session.createConsumer(adminQueue);
            consumer.setMessageListener(this);
        } catch (JMSException e) {
            System.out.println("Exception: " + e.getMessage());
        }
    }

    public void onMessage(Message message) {
        try {
            TextMessage response = this.session.createTextMessage();
            if (message instanceof TextMessage) {
                TextMessage txtMsg = (TextMessage) message;
                String messageText = txtMsg.getText();
                response.setText(this.messageProtocol.handleProtocolMessage(messageText));
            }

            //Set the correlation ID from the received message to be the correlation id of the response message
            //this lets the client identify which message this is a response to if it has more than
            //one outstanding message to the server
            response.setJMSCorrelationID(message.getJMSCorrelationID());

            //Send the response to the Destination specified by the JMSReplyTo field of the received message,
            //this is presumably a temporary queue created by the client
            this.replyProducer.send(message.getJMSReplyTo(), response);
        } catch (JMSException e) {
            System.out.println("Exception: " + e.getMessage());
        }
    }

    public static void main(String[] args) {
        new Server();
    }
}
My confusion is about messageBrokerUrl = "tcp://localhost:61616"; the ActiveMQ service is running on port 61616 by default. Why does this example choose the same port? If I try to run the code it throws an exception:
Exception: Failed to bind to server socket: tcp://localhost:61616 due to: java.net.BindException: Address already in use: JVM_Bind
Perhaps if I change the port number, I can execute the code.
Please let me know why it is like this in the example and how to work with BrokerService.
The BrokerService in this example is trying to create an in memory ActiveMQ broker for use in the example. Given the error you are seeing I'd guess you already have an ActiveMQ broker running on the machine that is bound to port 61616 as that's the default port and thus the two are conflicting. You could either stop the external broker and run the example or modify the example to not run the embedded broker and just rely on your external broker instance.
Embedded brokers are great for unit testing or for creating examples that don't require the user to have a broker installed and running.
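As an illustration (a sketch, not the only way to wire it), either give the embedded broker its own port or drop it and connect straight to the external broker:
// Option 1: keep the embedded broker but bind it to a free port so it
// does not collide with an externally installed broker on 61616.
BrokerService broker = new BrokerService();
broker.setPersistent(false);
broker.setUseJmx(false);
broker.addConnector("tcp://localhost:61617"); // any unused port
broker.start();

// Option 2: skip the embedded broker entirely and point the connection
// factory at the already-running external broker, or use the in-JVM
// vm:// transport when everything lives in one process.
ActiveMQConnectionFactory factory =
        new ActiveMQConnectionFactory("tcp://localhost:61616");
// ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("vm://localhost?create=false");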

spring amqp custom TTL and retry count

We are trying to implement a retry mechanism for client exceptions. We want to be able to set a different routing key, TTL and retry count based on the content of each message. We want to keep the handler simple, i.e. have handleMessage throw an exception. How do we handle this exception and send the message to the DLX with the appropriate parameters? On retry, if the failure happens again, the message would either be discarded (acknowledged) or put back on the DLX with the retry count incremented. Where would we implement this logic and how would it be wired?
========================
With Gary's direction, I was able to implement it. Here are excerpts:
@Bean
public SimpleMessageListenerContainer listenerContainer() {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    jsonMessageHandler.setQueueName(queueName);
    container.setQueueNames(queueName);
    container.setMessageListener(jsonMessageListenerAdapter());
    container.setAdviceChain(new Advice[]{retryOperationsInterceptor()});
    return container;
}

@Bean
public MessageListenerAdapter messageListenerAdapter() {
    return new MessageListenerAdapter(messageHandler, messageConverter);
}

@Bean
public MessageListenerAdapter jsonMessageListenerAdapter() {
    return new MessageListenerAdapter(jsonMessageHandler);
}

@Bean
RetryOperationsInterceptor retryOperationsInterceptor() {
    return RetryInterceptorBuilder.stateless().recoverer(republishMessageRecoverer).maxAttempts(1).build();
}

@Bean
RepublishMessageRecoverer republishMessageRecoverer() {
    return new MyRepublishMessageRecoverer(rabbitTemplate());
}
==========
public class MyRepublishMessageRecoverer extends RepublishMessageRecoverer {

    // - constructor

    @Override
    public void recover(Message message, Throwable cause) {
        // Deal with headers
        long currentCount = 0;
        List xDeathList = (List) message.getMessageProperties().getHeaders().get("x-death");
        if (xDeathList != null && xDeathList.size() > 0) {
            currentCount = (Long) ((Map) (xDeathList.get(0))).get("count");
        }
        if (currentCount < context.getRules().getNumberOfRetries()) {
            // message sent to DLX
            this.retryTemplate.send(handlerProperties.getSystem(), message);
        } else {
            // message ignored
        }
        throw new AmqpRejectAndDontRequeueException(cause);
    }
}
You can't modify a rejected message, it is routed to the DLX/DLQ unchanged (except x-death headers are added by the broker).
You have to republish to the DLX/DLQ yourself if you want to change message properties.
You can use Spring Retry with a customized RepublishMessageRecoverer to do this.

RabbitMQ 3.5 and Message Priority Queue process is waiting until the task completion

I have some priority-based tasks in my application. I have two queues, "queue1" and "queue2"; I want to give the highest priority to queue1 and the lowest priority to queue2. I set queue1's priority number to 255 and queue2's to 200. I am having an issue executing the tasks: once a priority task is taken, the process waits synchronously until the task completes. But as per our project's needs, this process shouldn't wait, it should just kick the task off. How do I achieve this?
I referred to this blog.
Message Received part:
final CountDownLatch latch = new CountDownLatch(3);
ch.basicConsume(QUEUE, true, new DefaultConsumer(ch) {
    @Override
    public void handleDelivery(String consumerTag, Envelope envelope, BasicProperties properties, byte[] body) throws IOException {
        System.out.println("Received " + new String(body));
        latch.countDown();
    }
});
latch.await();
I have already done this process without a priority queue in RabbitMQ: https://pamlesleylu.wordpress.com/2013/02/02/hello-world-for-spring-amqp-and-rabbitmq/
Producer
public void execute() {
    System.out.println("execute...");
    messageQueue.convertAndSend("hello " + counter.incrementAndGet());
}
Consumer
public void onMessage(Message msg) {
    System.out.println(new String(msg.getBody()));
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
You need to hand the task off to another thread (e.g. using a TaskExecutor).
Bear in mind, though, that the message will be acked so if the server crashes you will lose that message.
It would probably be better to increase the concurrency in the listener container so you can handle multiple messages at once.
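A minimal sketch of that hand-off (the fixed-size executor here is an assumption, not part of the original answer or the container configuration):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: return from onMessage immediately and let a thread pool run the task.
// Note the message is still acked as soon as onMessage returns (see the caveat above).
private final ExecutorService taskExecutor = Executors.newFixedThreadPool(4);

public void onMessage(Message msg) {
    final String body = new String(msg.getBody());
    taskExecutor.submit(() -> {
        // long-running work goes here instead of blocking the listener thread
        System.out.println(body);
    });
}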