Dead letter exchange not working in my environment - rabbitmq

I'm trying to use the dead letter exchange with annotations in my Java code, but maybe my assumption about how it should work is wrong. In my method processMPCMessage I deserialize the message from a queue into a POJO. If I get an IllegalArgumentException I want the message to be put onto a dead letter queue. I configured the dead letter exchange and routing key; see my code example below.
If I throw new AmqpRejectAndDontRequeueException(msg, exception), I expect the message I consumed earlier to be put onto the dead letter queue.
However, I get the following debug message:
2019-02-07 13:35:42,009 [SimpleAsyncTaskExecutor-1] DEBUG {} - org.springframework.amqp.rabbit.listener.BlockingQueueConsumer - Rejecting messages (requeue=false)
Any advice is welcome.
Regards,
Dirk
@RabbitListener(bindings = @QueueBinding(
        value = @Queue(
                value = "${mpc.inbound.receive.queue}",
                durable = "true",
                arguments = {
                        @Argument(name = "x-dead-letter-exchange", value = "${mpc.inbound.dead.letter}"),
                        @Argument(name = "x-dead-letter-routing-key", value = "${mpc.inbound.receive.error.routing.key}"),
                        @Argument(name = "defaultRequeueRejected", value = "false")
                }),
        exchange = @Exchange(value = "${mpc.inbound.exchange}",
                type = ExchangeTypes.TOPIC, durable = "true"),
        key = "${mpc.inbound.routing.key}"
))
public void processMPCMessage(final Message message) {
    // Here the message is deserialized into a Java object; this is where I want to throw an exception.
    try {
        // deserialize the message body into a POJO
    } catch (IllegalArgumentException ex) {
        throw new AmqpRejectAndDontRequeueException("an error message", ex);
    }
}

Does the queue already exist?
Queues are idempotent; you can't change their properties (arguments) after they are created. Delete it first so it will be recreated.
If that's not it, turn on DEBUG logging to see what's going on.
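If the queue does turn out to pre-date the dead-letter configuration, a minimal sketch along these lines (assuming a Spring AMQP RabbitAdmin bean is available; the field names are illustrative) deletes the stale queue so the @QueueBinding above re-declares it with the x-dead-letter-* arguments:

@Autowired
private RabbitAdmin rabbitAdmin;

@Value("${mpc.inbound.receive.queue}")
private String receiveQueue;

public void recreateQueue() {
    // One-off cleanup: remove the old queue that was declared without the
    // dead-letter arguments; it will be declared again when the listener
    // container starts (or explicitly via rabbitAdmin.declareQueue(...)).
    rabbitAdmin.deleteQueue(receiveQueue);
}

Note also that defaultRequeueRejected is a listener container property rather than a queue argument, so it would normally be set on the container factory (factory.setDefaultRequeueRejected(false)); throwing AmqpRejectAndDontRequeueException already forces requeue=false, which is what the DEBUG line shows.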

Related

Getting failure callback for Producer in rabbitmq when back pressure kicks in

I wanted to find out the failed messages for my RabbitMQ producers using some callback API. I have configured RabbitMQ with [{rabbit, [{vm_memory_high_watermark, 0.001}]}] and tried pushing a lot of messages, but all the messages are getting accepted; a TimeoutException comes later on and the messages are not sent to the queue. Please tell me how to capture this.
Code for sending message:
// #create-sink - producer
final Sink<ByteString, CompletionStage<Done>> amqpSink =
        AmqpSink.createSimple(
                AmqpSinkSettings.create(connectionProvider)
                        .withRoutingKey(AkkaConstants.queueName)
                        .withDeclaration(queueDeclaration));
// #run-sink
// final List<String> input = Arrays.asList("one", "two", "three", "four", "five");
// Source.from(input).map(ByteString::fromString).runWith(amqpSink, materializer);

String filePath = "D:\\subrata\\code\\akkaAmqpTest-master\\akkaAmqpTest-master\\logs2\\dummy.txt";
Path path = Paths.get(filePath);

// List containing 78198 individual messages
List<String> contents = Files.readAllLines(path);
System.out.println("********** file reading done ....");

int times = 5;
// Send 78198*times messages to the queue [from the console I can see 400000 messages being sent]
for (int i = 0; i < times; i++) {
    Source.from(contents).map(ByteString::fromString).runWith(amqpSink, materializer);
}
System.out.println("************* sending to queue is done");
Unfortunately, that is currently not supported out of the box. Ideally the producer would be modeled as a Flow which would send all incoming messages to the AMQP broker and emit the same message with a result indicating whether it has been successfully sent to the broker or not. There is a ticket to track this possible improvement on the Alpakka issue tracker.
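Until such a Flow exists, one workaround is to drop down to the plain RabbitMQ Java client for the publishing side. A minimal sketch (connection details assumed, queue name illustrative; this is not the Alpakka API) that surfaces broker back pressure via a BlockedListener and failed publishes via publisher confirms:

import com.rabbitmq.client.BlockedListener;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ConfirmingPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory(); // host/credentials assumed
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // Fired when the broker blocks the connection, e.g. because
            // vm_memory_high_watermark has been reached.
            connection.addBlockedListener(new BlockedListener() {
                @Override
                public void handleBlocked(String reason) {
                    System.out.println("Connection blocked by broker: " + reason);
                }

                @Override
                public void handleUnblocked() {
                    System.out.println("Connection unblocked");
                }
            });

            channel.queueDeclare("someQueue", true, false, false, null);
            channel.confirmSelect(); // enable publisher confirms
            channel.basicPublish("", "someQueue", null, "payload".getBytes());
            channel.waitForConfirmsOrDie(5000); // throws if the broker nacks or times out
        }
    }
}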

RabbitMQ delayed message not working

I am trying to use the delayed exchange plugin, but somehow it's not working for me and messages are received without delay.
I tried the following things:
a) Enabled rabbitmq_delayed_message_exchange successfully and restarted the RabbitMQ server on Ubuntu 16.04.
b) Declared the exchange:
Map<String,Object> props = new HashMap<String,Object>();
props.put("x-delayed-type", "direct");
this.automationExchange = new DirectExchange(exchangeName,true,false, props);
c) Pushed the message as:
DefaultClassMapper typeMapper = QueueUtils.classMapper;
typeMapper.setDefaultType(type);
Jackson2JsonMessageConverter converter = QueueUtils.converter;
converter.setClassMapper(typeMapper);
RabbitTemplate template = AMQPRabbitMQTemplate.getAMQPTemplate();
template.setMessageConverter(converter);
template.convertAndSend(routingKey, message, new MessagePostProcessor() {
    @Override
    public Message postProcessMessage(Message m) throws AmqpException {
        m.getMessageProperties().setDelay(delayMiliSeconds);
        m.getMessageProperties().setDeliveryMode(MessageDeliveryMode.PERSISTENT);
        return m;
    }
});
Now when I print the message:
public void onMessage(Message message, Channel channel) throws Exception {
    System.out.println(message.getMessageProperties().getDelay());
    channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
}
It prints null for getDelay(), which ideally should be the negative of the set value as per https://www.rabbitmq.com/blog/2015/04/16/scheduling-messages-with-rabbitmq.
Please let me know if I am doing something wrong.
I am using version 1.6.8.RELEASE of spring-amqp and spring-rabbit.
In order to avoid unexpected propagation of headers from an inbound message to an outbound message, certain headers for inbound messages are provided by MessageProperties.getReceived... methods.
In this case, the header is in MessageProperties.getReceivedDelay().
You also need setDelayed(true) on automationExchange before declaring it with the admin.
I presume you have set the exchange as the default in the RabbitTemplate too.
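Putting those together, a minimal sketch (assuming Spring AMQP 1.6+; the wiring and names here are illustrative, only the setDelayed(true), setExchange(...) and getReceivedDelay() calls are the point):

import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.core.RabbitAdmin;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import com.rabbitmq.client.Channel;

public class DelayedExchangeSetup {

    public void declare(RabbitAdmin rabbitAdmin, RabbitTemplate template, String exchangeName) {
        DirectExchange automationExchange = new DirectExchange(exchangeName, true, false);
        // Declares the exchange as type x-delayed-message with x-delayed-type=direct,
        // instead of passing the x-delayed-type argument by hand.
        automationExchange.setDelayed(true);
        rabbitAdmin.declareExchange(automationExchange);

        // Make sure the template publishes to the delayed exchange by default.
        template.setExchange(exchangeName);
    }

    public void onMessage(Message message, Channel channel) throws Exception {
        // The delay comes back on the consumer side as a "received" header.
        System.out.println(message.getMessageProperties().getReceivedDelay());
        channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
    }
}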

RabbitMQ - Non Blocking Consumer with Manual Acknowledgement

I'm just starting to learn RabbitMQ so forgive me if my question is very basic.
My problem is actually the same with the one posted here:
RabbitMQ - Does one consumer block the other consumers of the same queue?
However, upon investigation, I found out that manual acknowledgement prevents other consumers from getting a message from the queue - a blocking state. I would like to know how I can prevent it. Below is my code snippet.
...
var message = receiver.ReadMessage();
Console.WriteLine("Received: {0}", message);
// simulate processing
System.Threading.Thread.Sleep(8000);
receiver.Acknowledge();

public string ReadMessage()
{
    bool autoAck = false;
    Consumer = new QueueingBasicConsumer(Model);
    Model.BasicConsume(QueueName, autoAck, Consumer);
    _ea = (BasicDeliverEventArgs)Consumer.Queue.Dequeue();
    return Encoding.ASCII.GetString(_ea.Body);
}

public void Acknowledge()
{
    Model.BasicAck(_ea.DeliveryTag, false);
}
I modified how I get messages from the queue and it seems the blocking issue was fixed. Below is my code.
public string ReadOneAtTime()
{
    Consumer = new QueueingBasicConsumer(Model);
    var result = Model.BasicGet(QueueName, false);
    if (result == null) return null;
    DeliveryTag = result.DeliveryTag;
    return Encoding.ASCII.GetString(result.Body);
}

public void Reject()
{
    Model.BasicReject(DeliveryTag, true);
}

public void Acknowledge()
{
    Model.BasicAck(DeliveryTag, false);
}
Going back to my original question, I added the QOS and noticed that other consumers can now get messages. However, some are left unacknowledged and my program seems to hang. Code changes are below:
public string ReadMessage()
{
    Model.BasicQos(0, 1, false); // control prefetch
    bool autoAck = false;
    Consumer = new QueueingBasicConsumer(Model);
    Model.BasicConsume(QueueName, autoAck, Consumer);
    _ea = Consumer.Queue.Dequeue();
    return Encoding.ASCII.GetString(_ea.Body);
}

public void AckConsume()
{
    Model.BasicAck(_ea.DeliveryTag, false);
}
In Program.cs
private static void Consume(Receiver receiver)
{
    int counter = 0;
    while (true)
    {
        var message = receiver.ReadMessage();
        if (message == null)
        {
            Console.WriteLine("NO message received.");
            break;
        }
        else
        {
            counter++;
            Console.WriteLine("Received: {0}", message);
            receiver.AckConsume();
        }
    }
    Console.WriteLine("Total messages received: {0}", counter);
}
I appreciate any comments and suggestions. Thanks!
Well, RabbitMQ provides infrastructure where one consumer can't lock or block other consumers working with the same queue.
The behavior you faced can be the result of a couple of issues:
The fact that you are not using auto-ack mode on the channel leads to a situation where one consumer takes the message and has not yet sent an acknowledgement (basic ack), meaning that the computation is still in progress and there is a chance that the consumer will fail to process this message, in which case it should be kept in the Rabbit queue to prevent message loss (the total number of messages will not change in the management console). During this period (from getting the message into client code until sending the explicit acknowledgement) the message is marked as being used by a specific client and is not available to other consumers. However, this doesn't prevent other consumers from taking other messages from the queue, if there are more messages to take.
IMPORTANT: to prevent message loss with manual acknowledgement, make sure to close the channel or send a nack in case of a processing fault, to avoid the situation where your application takes the message from the queue, fails to process it, removes it from the queue, and loses the message.
Another reason why other consumers can't work with the same queue is QOS - a parameter of the channel where you declare how many messages should be pushed to the client cache to improve dequeue latency (working with a local cache). Your code example lacks this part of the code, so I am just guessing. In a case like this the QOS can be so big that all messages on the server are marked as belonging to one client and no other client can take any of them, exactly like with the manual ack I've already described.
Hope this helps.
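For reference, a minimal sketch of the prefetch-plus-manual-ack pattern described above, written against the plain RabbitMQ Java client (the question's code is C#, so the queue name and setup here are purely illustrative):

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

import java.io.IOException;

public class ManualAckConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory(); // host/credentials assumed
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.basicQos(1); // at most one unacknowledged message per consumer

        channel.basicConsume("someQueue", false, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body) throws IOException {
                try {
                    System.out.println("Received: " + new String(body));
                    channel.basicAck(envelope.getDeliveryTag(), false);
                } catch (RuntimeException e) {
                    // Nack on failure so the message is requeued instead of lost.
                    channel.basicNack(envelope.getDeliveryTag(), false, true);
                }
            }
        });
    }
}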

Custom component to read messages from queue

I'm using Mule 3.3.1
I am trying to write a component that reads all available messages from a queue, which I intend to poll using a Quartz scheduler.
Here is my code.
@Override
public Object onCall(MuleEventContext muleEventContext) throws Exception {
    MuleMessage[] messages = null;
    MuleMessage result = muleEventContext.getMessage();
    do {
        if (result == null) {
            break;
        }
        if (result instanceof MuleMessageCollection) {
            MuleMessageCollection resultsCollection = (MuleMessageCollection) result;
            System.out.println("Number of messages: " + resultsCollection.size());
            messages = resultsCollection.getMessagesAsArray();
        } else {
            messages = new MuleMessage[1];
            messages[0] = result;
        }
        result = muleEventContext.getMessage();
    } while (result != null);
    return messages;
}
Unfortunately, it loops indefinitely on the first message. Thoughts?
The onCall() method provided in the post loops infinitely because
muleEventContext.getMessage()
always returns a MuleMessage, so the loop never terminates.
The MuleEventContext object is not an iterator or stream where the pointer advances to the next element after reading the current one.
In order to read all the available messages from the queue, you can have a JMS inbound endpoint polling the queue and reading all the messages. But remember that each message on the queue will be one iteration (one message) from your JMS inbound.
If you want to gather all the queue messages as a collection of objects and then proceed, that is a different story; it cannot be done with your component code.
If you intend to gather all your messages as a collection and then start processing, try using something like Mule's collection-aggregator at your inbound.
Hope this helps.

Is there an easy way to subscribe to the default error queue in EasyNetQ?

In my test application I can see messages that were processed with an exception being automatically inserted into the default EasyNetQ_Default_Error_Queue, which is great. I can then successfully dump or requeue these messages using the Hosepipe, which also works fine, but requires dropping down to the command line and calling against both Hosepipe and the RabbitMQ API to purge the queue of retried messages.
So I'm thinking the easiest approach for my application is to simply subscribe to the error queue, so I can re-process these messages using the same infrastructure. But in EasyNetQ, the error queue seems to be special. We need to subscribe using a proper type and routing ID, so I'm not sure what these values should be for the error queue:
bus.Subscribe<WhatShouldThisBe>("and-this", ReprocessErrorMessage);
Can I use the simple API to subscribe to the error queue, or do I need to dig into the advanced API?
If the type of my original message was TestMessage, then I'd like to be able to do something like this:
bus.Subscribe<ErrorMessage<TestMessage>>("???", ReprocessErrorMessage);
where ErrorMessage is a class provided by EasyNetQ to wrap all errors. Is this possible?
You can't use the simple API to subscribe to the error queue because it doesn't follow EasyNetQ queue type naming conventions - maybe that's something that should be fixed ;)
But the Advanced API works fine. You won't get the original message back, but it's easy to get the JSON representation, which you could deserialize yourself (using Newtonsoft.JSON). Here's an example of what your subscription code should look like:
[Test]
[Explicit("Requires a RabbitMQ server on localhost")]
public void Should_be_able_to_subscribe_to_error_messages()
{
    var errorQueueName = new Conventions().ErrorQueueNamingConvention();
    var queue = Queue.DeclareDurable(errorQueueName);
    var autoResetEvent = new AutoResetEvent(false);

    bus.Advanced.Subscribe<SystemMessages.Error>(queue, (message, info) =>
    {
        var error = message.Body;

        Console.Out.WriteLine("error.DateTime = {0}", error.DateTime);
        Console.Out.WriteLine("error.Exception = {0}", error.Exception);
        Console.Out.WriteLine("error.Message = {0}", error.Message);
        Console.Out.WriteLine("error.RoutingKey = {0}", error.RoutingKey);

        autoResetEvent.Set();
        return Task.Factory.StartNew(() => { });
    });

    autoResetEvent.WaitOne(1000);
}
I had to fix a small bug in the error message writing code in EasyNetQ before this worked, so please get a version >= 0.9.2.73 before trying it out. You can see the code example here
Code that works (I took a guess):
The screwiness with the 'foo' is because if I just pass the function HandleErrorMessage2 into the Consume call, the compiler can't figure out that it returns void and not a Task, so it can't figure out which overload to use (VS 2012). Assigning it to a variable makes it happy.
You will want to capture the return value of the call so you can unsubscribe by disposing the returned object.
Also note that someone used a common system type name (Queue) instead of making it an EasyNetQueue or something, so you have to add a using alias for the compiler, or fully qualify it.
using Queue = EasyNetQ.Topology.Queue;
private const string QueueName = "EasyNetQ_Default_Error_Queue";
public static void Should_be_able_to_subscribe_to_error_messages(IBus bus)
{
    Action<IMessage<Error>, MessageReceivedInfo> foo = HandleErrorMessage2;
    IQueue queue = new Queue(QueueName, false);
    bus.Advanced.Consume<Error>(queue, foo);
}

private static void HandleErrorMessage2(IMessage<Error> msg, MessageReceivedInfo info)
{
}