How to set timeout for consumer(rabbitmq) - rabbitmq

I use SimpleMessageListenerContainer to consume message from rabbitMQ queue.
ENV:
rabbitmq:3.6.6
spring-rabbit:1.6.2
CODE:
@Bean
public SimpleMessageListenerContainer listenerContainer() {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory());
    container.setQueues(new Queue("spring.demo"));
    container.setMaxConcurrentConsumers(1);
    container.setConcurrentConsumers(1);
    container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    container.setMessageListener(new ChannelAwareMessageListener() {
        @Override
        public void onMessage(Message message, Channel channel) throws Exception {
            long deliveryTag = message.getMessageProperties().getDeliveryTag();
            try {
                byte[] body = message.getBody();
                String result = new String(body);
                logger.debug("result:" + result);
                // doSomething(for a long time)
                channel.basicAck(deliveryTag, false);
            } catch (Exception e) {
                e.printStackTrace();
                channel.basicNack(deliveryTag, false, false);
            }
        }
    });
    return container;
}
There is only 1 consumer handling messages. If one message takes a long time to handle (say 5 minutes), no other messages can be consumed in the meantime.
I want to set a timeout for handling each message, for example 30 seconds. If handling a message exceeds this value (30 seconds), the consumer should throw an exception and stop handling it, so that it can move on to other messages.
How can I set this timeout?
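One possible approach (just a sketch; as far as I know neither RabbitMQ nor SimpleMessageListenerContainer offers a per-message processing timeout) is to run the work on a separate executor inside the listener and bound it with Future.get():

// Sketch only: bound the handler work with a Future; doSomething() stands for
// the long-running work from the listener above.
private final ExecutorService workers = Executors.newSingleThreadExecutor();

public void onMessage(Message message, Channel channel) throws Exception {
    long deliveryTag = message.getMessageProperties().getDeliveryTag();
    Future<?> task = workers.submit(() -> doSomething(new String(message.getBody())));
    try {
        task.get(30, TimeUnit.SECONDS);               // wait at most 30 seconds
        channel.basicAck(deliveryTag, false);
    } catch (TimeoutException e) {
        task.cancel(true);                            // interrupt the worker
        channel.basicNack(deliveryTag, false, false); // drop or dead-letter the message
    } catch (Exception e) {                           // failure in the handler itself
        channel.basicNack(deliveryTag, false, false);
    }
}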

Related

Camel Route Losing Message on restart in camel rabbitmq

I am using camel-rabbitmq.
Here is my route definition:
camelContext.addRoutes(new RouteBuilder() {
    @Override
    public void configure() throws Exception {
        from("rabbitmq:TEST?queue=TEST&concurrentConsumers=5")
            .routeId("jms")
            .autoStartup(false)
            .throttle(10)
            .asyncDelayed()
            .log("Consuming message ${body} to ${header.deliveryAddress}")
            .process(new Processor() {
                @Override
                public void process(Exchange exchange) throws Exception {
                    System.out.println(atomicLong.decrementAndGet());
                }
            });
    }
});
When I push 500 messages to this queue and then stop and start the route, all messages on the channel are lost; I wonder where they are going.
If I configure the same route with &autoAck=false it works properly, but performance suffers. Why does Camel not offer the same behaviour with and without autoAck?
I managed to work around my problem with the following change in RabbitMQConsumer of camel-rabbitmq:
public void handleCancelOk(String consumerTag) {
    // no work to do
    log.info("Received cancelOk signal on the rabbitMQ channel");
    downLatch.countDown(); // <-- added line
}

@Override
protected void doStop() throws Exception {
    if (channel == null) {
        return;
    }
    this.requeueChannel = openChannel(consumer.getConnection());
    if (tag != null && isChannelOpen()) {
        channel.basicCancel(tag);
    }
    stopping = true;
    downLatch.await();
    try {
        lock.acquire();
        if (isChannelOpen()) {
            channel.close();
        }
    } catch (TimeoutException e) {
        log.error("Timeout occurred");
        throw e;
    } catch (InterruptedException e1) {
        log.error("Thread Interrupted!");
    } finally {
        lock.release();
    }
}
With this change the Camel route waits for in-flight messages to be consumed, which avoids the message loss.
You need to check the RabbitMQ consumer prefetch count (see "consumer prefetch" in the RabbitMQ documentation).
I think that by default the consumer pulls all the messages in the queue into its memory buffers.
If you set the prefetch count to 1, the consumer will take and acknowledge messages one by one.
All the other unacknowledged messages remain in the queue in the ready state, waiting to be picked up after the consumer completes its task on the previous message.
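For example, a sketch of the consumer endpoint with prefetch enabled (prefetchEnabled and prefetchCount are camel-rabbitmq endpoint options; check the options for your Camel version):

// Sketch: at most 1 unacknowledged message per consumer, with manual acks,
// so unprocessed messages stay on the queue across a route restart.
from("rabbitmq:TEST?queue=TEST"
        + "&autoAck=false"
        + "&prefetchEnabled=true"
        + "&prefetchCount=1"
        + "&concurrentConsumers=5")
    .routeId("jms")
    .log("Consuming message ${body}");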

ActiveMQ 5.15.3 shows 0 producerCount in the web console

The producer count in the ActiveMQ web console shows 0 all the time, even when there are producers connected to the broker. I'm not sure why.
My producer code looks like this.
public boolean postMessage(List<? extends JMSMessageBean> messageList, String data, int messageCount)
        throws JMSException {
    String queueName = null;
    MessageProducer producer = null;
    Connection connection = null;
    Session session = null;
    try {
        connection = pooledConnectionFactory.createConnection();
        connection.setExceptionListener(this);
        connection.start();
        session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        int index = 0;
        for (JMSMessageBean message : messageList) {
            if (producer == null || !message.getQueueName().equals(queueName)) {
                queueName = message.getQueueName();
                producer = getQueueProducer(queueName, session);
            }
            Message _omessage = session.createObjectMessage(message);
            _omessage.setStringProperty("MESSAGE_INDEX", messageCount + ":" + index);
            _omessage.setIntProperty("RETRY_COUNT", 0);
            _omessage.setJMSType(message.getJmsType());
            if (data != null) {
                _omessage.setStringProperty("RAW_DATA", data);
            }
            producer.send(_omessage);
            index++;
        }
    } catch (JMSException e) {
        logger.error("Exception while creating connection to jms broker", e);
    } finally {
        try {
            if (null != session) {
                session.close();
            }
            if (null != connection) {
                connection.close();
            }
            if (null != producer) {
                producer.close();
            }
        } catch (JMSException e) {
            logger.error(e.getMessage(), e);
        }
    }
    return true;
}
I am using a PooledConnectionFactory to create the sessions, connections, and message producers. Every time someone has to post a message, a new connection is requested from the PooledConnectionFactory, and then ...
The ActiveMQ client often uses what it calls "dynamic producers", a producer per message for non-transacted sessions. If you walk the JMS object lifecycle, you'll find there is little need to keep a producer object around in a non-transacted session, which is different from the consumer object.
Look under the dynamicProducers list in JMX and you'll catch them being created. You can also monitor the advisory topics to see them get created and destroyed.
Side note: your object close order in the finally block is incorrect. You should close objects in reverse order: producer, session, connection.
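A sketch of the corrected cleanup, using the same null checks as the question's finally block:

// In the finally block, close in reverse order of creation:
if (null != producer) {
    producer.close();
}
if (null != session) {
    session.close();
}
if (null != connection) {
    connection.close();
}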

spring amqp custom TTL and retry count

We are trying to implement a retry mechanism for client exceptions. We want to be able to set a different routing key, TTL and retry count based on the content of each message. We want to keep the handler simple, i.e. have handleMessage throw the exception. How do we handle this exception and send the message to the DLX with the appropriate parameters? On retry, if the failure happens again, the message would either be discarded (acknowledged) or be put back on the DLX with the retry count incremented. Where would we implement this logic and how would it be wired?
========================
With Gary's direction, I was able to implement it. Here are excerpts ..
@Bean
public SimpleMessageListenerContainer listenerContainer() {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    jsonMessageHandler.setQueueName(queueName);
    container.setQueueNames(queueName);
    container.setMessageListener(jsonMessageListenerAdapter());
    container.setAdviceChain(new Advice[] { retryOperationsInterceptor() });
    return container;
}

@Bean
public MessageListenerAdapter messageListenerAdapter() {
    return new MessageListenerAdapter(messageHandler, messageConverter);
}

@Bean
public MessageListenerAdapter jsonMessageListenerAdapter() {
    return new MessageListenerAdapter(jsonMessageHandler);
}

@Bean
RetryOperationsInterceptor retryOperationsInterceptor() {
    return RetryInterceptorBuilder.stateless().recoverer(republishMessageRecoverer).maxAttempts(1).build();
}

@Bean
RepublishMessageRecoverer republishMessageRecoverer() {
    return new MyRepublishMessageRecoverer(rabbitTemplate());
}
==========
public class MyRepublishMessageRecoverer extends RepublishMessageRecoverer {

    // - constructor

    @Override
    public void recover(Message message, Throwable cause) {
        // Deal with headers
        long currentCount = 0;
        List xDeathList = (List) message.getMessageProperties().getHeaders().get("x-death");
        if (xDeathList != null && xDeathList.size() > 0) {
            currentCount = (Long) ((Map) (xDeathList.get(0))).get("count");
        }
        if (currentCount < context.getRules().getNumberOfRetries()) {
            // message sent to DLX
            this.retryTemplate.send(handlerProperties.getSystem(), message);
        } else {
            // message ignored
        }
        throw new AmqpRejectAndDontRequeueException(cause);
    }
}
You can't modify a rejected message; it is routed to the DLX/DLQ unchanged (except that x-death headers are added by the broker).
You have to republish to the DLX/DLQ yourself if you want to change message properties.
You can use Spring Retry with a customized RepublishMessageRecoverer to do this.
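For example, a sketch of a customized recoverer (RepublishMessageRecoverer exposes an additionalHeaders hook; the x-retry-count header name below is just an illustration, not a broker-defined header):

import java.util.Collections;
import java.util.Map;

import org.springframework.amqp.core.AmqpTemplate;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.retry.RepublishMessageRecoverer;

// Sketch: add a custom retry-count header before the recoverer republishes to the DLX.
public class HeaderAddingRecoverer extends RepublishMessageRecoverer {

    public HeaderAddingRecoverer(AmqpTemplate template, String errorExchange, String errorRoutingKey) {
        super(template, errorExchange, errorRoutingKey);
    }

    @Override
    protected Map<? extends String, ? extends Object> additionalHeaders(Message message, Throwable cause) {
        Integer retries = (Integer) message.getMessageProperties().getHeaders().get("x-retry-count");
        return Collections.singletonMap("x-retry-count", retries == null ? 1 : retries + 1);
    }
}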

TCP Server configuration in Mule - writing into client socket

I am trying to create a Mule flow with a TCP inbound endpoint that acts as a TCP server listening on a port. When a successful client connection is identified, before receiving any request from the client, I need to write a message into the socket (which lets the client know that I am listening), and only after that does the client send me further requests. This is how I do it with a sample Java program:
import java.net.*;
import java.io.*;

public class TCPServer {
    public static void main(String[] args) throws IOException {
        ServerSocket serverSocket = null;
        try {
            serverSocket = new ServerSocket(4445);
        } catch (IOException e) {
            System.err.println("Could not listen on port: 4445.");
            System.exit(1);
        }
        Socket clientSocket = null;
        System.out.println("Waiting for connection.....");
        try {
            clientSocket = serverSocket.accept();
        } catch (IOException e) {
            System.err.println("Accept failed.");
            System.exit(1);
        }
        System.out.println("Connection successful");
        System.out.println("Sending output message - .....");
        // Sending a message to the client to indicate that the server is active
        PrintStream pingStream = new PrintStream(clientSocket.getOutputStream());
        pingStream.print("Server listening");
        pingStream.flush();
        // Now start listening for messages
        System.out.println("Waiting for incoming message - .....");
        PrintWriter out = new PrintWriter(clientSocket.getOutputStream(), true);
        BufferedReader in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
        String inputLine;
        while ((inputLine = in.readLine()) != null) {
            System.out.println("Server: " + inputLine);
            out.println(inputLine);
            if (inputLine.equals("Bye.")) {
                break;
            }
        }
        out.close();
        in.close();
        clientSocket.close();
        serverSocket.close();
    }
}
I have tried to use Mule's TCP inbound endpoint as a server, but I cannot see how to identify a successful connection from the client in order to trigger the outbound message. The flow gets triggered only when a message is sent from the client. Is there a way I can extend the functionality of the Mule TCP connector and have a listener that could meet the above requirement?
Based on the answer provided, this is how I implemented it:
public class TCPMuleOut extends TcpMessageReceiver {
    boolean InitConnection = false;
    Socket clientSocket = null;

    public TCPMuleOut(Connector connector, FlowConstruct flowConstruct,
            InboundEndpoint endpoint) throws CreateException {
        super(connector, flowConstruct, endpoint);
    }

    protected Work createWork(Socket socket) throws IOException {
        return new MyTcpWorker(socket, this);
    }

    protected class MyTcpWorker extends TcpMessageReceiver.TcpWorker {
        public MyTcpWorker(Socket socket, AbstractMessageReceiver receiver)
                throws IOException {
            super(socket, receiver);
            // TODO Auto-generated constructor stub
        }

        @Override
        protected Object getNextMessage(Object resource) throws Exception {
            if (InitConnection == false) {
                clientSocket = this.socket;
                logger.debug("Sending logon message");
                PrintStream pingStream = new PrintStream(
                        clientSocket.getOutputStream());
                pingStream.print("Log on message");
                pingStream.flush();
                InitConnection = true;
            }
            long keepAliveTimeout = ((TcpConnector) connector)
                    .getKeepAliveTimeout();
            Object readMsg = null;
            try {
                // Create a monitor if expiry was set
                if (keepAliveTimeout > 0) {
                    ((TcpConnector) connector).getKeepAliveMonitor()
                            .addExpirable(keepAliveTimeout,
                                    TimeUnit.MILLISECONDS, this);
                }
                readMsg = protocol.read(dataIn);
                // There was some action so we can clear the monitor
                ((TcpConnector) connector).getKeepAliveMonitor()
                        .removeExpirable(this);
                if (dataIn.isStreaming()) {
                }
                return readMsg;
            } catch (SocketTimeoutException e) {
                ((TcpConnector) connector).getKeepAliveMonitor()
                        .removeExpirable(this);
                System.out.println("Socket timeout");
            } finally {
                if (readMsg == null) {
                    // Protocols can return a null object, which means we're done
                    // reading messages for now and can mark the stream for closing later.
                    // Also, exceptions can be thrown, in which case we're done reading.
                    dataIn.close();
                    InitConnection = false;
                    logger.debug("Client closed");
                }
            }
            return null;
        }
    }
}
And the TCP connector is as below:
<tcp:connector name="TCP" doc:name="TCP connector"
clientSoTimeout="100000" receiveBacklog="0" receiveBufferSize="0"
sendBufferSize="0" serverSoTimeout="100000" socketSoLinger="0"
validateConnections="true" keepAlive="true">
<receiver-threading-profile
maxThreadsActive="5" maxThreadsIdle="5" />
<reconnect-forever />
<service-overrides messageReceiver="TCPMuleOut" />
<tcp:direct-protocol payloadOnly="true" />
</tcp:connector>
What you're trying to do is a little difficult to accomplish, but not impossible. The messages are received by the org.mule.transport.tcp.TcpMessageReceiver class, and this class always consumes the data in the input stream to create the message that it injects into the flow.
However, you could extend that receiver and instruct the TCP module to use yours by adding a service-overrides tag in your flow's TCP connector (documented here) and replacing the messageReceiver element.
In your extended receiver you should change the TcpWorker.getNextMessage method so that it sends the ack message before reading from the input stream.
HTH, Marcos.

Consumer not consuming messages when created dynamically

I am learning to implement an ActiveMQ interface in my project. This is how I am creating producers and consumers.
public void connectionSetup(String portName) { // portname is object of PortTO class. We are creating producer and consumer pair for every existing PortTO object.
    Connection connection = null;
    try {
        if (timeToLive != 0) {
        }
        // Create the connection.
        ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(user, password, url);
        connection = connectionFactory.createConnection();
        connection.start();
        connection.setExceptionListener(this);
        // Create the session
        Session session = connection.createSession(transacted, Session.AUTO_ACKNOWLEDGE);
        if (topic) {
            destination = session.createTopic(subject);
        } else {
            destination = session.createQueue(portName);
        }
        // Create the producer.
        MessageProducer producer = session.createProducer(destination);
        if (persistent) {
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);
        } else {
            producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
        }
        MessageConsumer consumer = session.createConsumer(destination);
        if (timeToLive != 0) {
            producer.setTimeToLive(timeToLive);
        }
        mapOfSession.put(portName, session);
        mapOfMessageProducer.put(portName, producer);
        mapOfMessageConsumer.put(portName, consumer);
        log.info("Producer is " + producer);
        log.info("Consumer is " + consumer);
    } catch (Exception e) {
        log.error(e.getMessage());
    }
}
So we are creating a producer and consumer pair and storing them in a map for every PortTO object. Now the producer sends messages:
TextMessage message = session.createTextMessage();
message.setIntProperty(key, 2);
producer.send(message);
But the consumer is not consuming them...
public void onMessage(Message message) {
    PortService portService = new PortService();
    List<PortTO> portTOList = portService.getMoxaPorts();
    for (PortTO portTO : portTOList) { // catching messages from producers of every PortTO object
        MessageConsumer consumer = DataCollectionMessageProducer.getMapOfMessageConsumer().get(portTO.getPort()); // getting consumer from map of PortTO
        consumer.setMessageListener(this);
        message = consumer.receive(1000);
        if (message instanceof TextMessage) {
            // some processing
        } else {
            if (verbose) {
            }
        }
    }
}
What can be the reason? Is my approach wrong?
You are setting the MessageListener in the onMessage method. This is a catch-22, since the onMessage method gets invoked only if the MessageListener has already been set on that consumer.
Another thing: I am not sure why you would call receive in a message listener. Once a consumer has been set as a listener, onMessage will be invoked for each message on the queue, and the logic for each received message should live there, in an event-driven fashion. At least, that is the idea with JMS in the first place.
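A sketch of the event-driven wiring described above, reusing names from the question (register the listener once when the consumer is created, and do all processing inside onMessage, with no receive() calls):

// In connectionSetup(): register the listener at creation time.
MessageConsumer consumer = session.createConsumer(destination);
consumer.setMessageListener(this); // 'this' implements javax.jms.MessageListener
mapOfMessageConsumer.put(portName, consumer);

// The listener only processes the message it is handed.
@Override
public void onMessage(Message message) {
    try {
        if (message instanceof TextMessage) {
            String text = ((TextMessage) message).getText();
            // some processing
        }
    } catch (JMSException e) {
        log.error(e.getMessage(), e);
    }
}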