Spring XD with RabbitMQ errorQueue as errorChannel

I have a stream defined in Spring XD, with RabbitMQ as the transport. The stream looks something like this:
stream source | transformer1 | transformer2 | transformer3 | sink
I have deployed custom transformers. I want every exception/error that happens in my transformers/custom modules written to an error queue, and I then want to pull the messages from the error queue into a MongoDB sink.
I can achieve this by creating a tap from the Rabbit error queue to Mongo:
`tap --> rabbit_ERROR_QUEUE --> mongoSink`
Is there any way I can configure my Spring XD custom modules' XML to write all exceptions and errors to an error queue by default?

If you set autoBindDLQ to true (in servers.yml for all streams, or in the deployment properties at the stream level), XD will create a dead letter queue for you.
You also need to configure retry.
By default, the bus will try to deliver the message 3 times then reject it and the broker will forward it to the dead letter queue.
Another bus/deployment property, republishToDLQ, provides a mechanism for the bus to republish the message to the DLQ (instead of rejecting it). This includes additional information in the error message as headers (the exception, stack trace, etc.).
See the application configuration and deployment sections of the reference manual for complete information about these properties.
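For example, at deployment time (the stream name `myStream` is hypothetical; the property names are the ones the reference manual describes for the rabbit bus):

```shell
stream deploy myStream --properties "module.*.consumer.autoBindDLQ=true,module.*.consumer.republishToDLQ=true"
```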
However, you would not consume from the DLQ using a tap, but....
stream create errorStream --definition "rabbit ..."
i.e. use the rabbit source to pull from the DLQ.
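A sketch of such a stream; the DLQ name below assumes the default `xdbus.` prefix and the `<stream>.<index>.dlq` naming convention, so verify the actual queue name in the Rabbit admin UI before using it:

```shell
stream create errorStream --definition "rabbit --queues=xdbus.myStream.0.dlq | mongodb --collectionName=errors" --deploy
```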

Related

Auto-processing with DLQ configuration in Spring XD

1) I want to configure a DLQ for my stream:
stream create --name httptest7 --definition "http | http-client --url='''http://localhost:8080/mock-sentmessage/customers/send-email''' --httpMethod=GET | log"
stream deploy httptest7 --properties module.*.consumer.autoBindDLQ=true
2) I have set autoBindDLQ=true.
One doubt: if Spring XD fails to process my messages and posts them to the DLQ, will they automatically be moved back to my original queue for retry, or should I write a processor to move the DLQ messages back to my original queue?
3) When I bring down my web service http://localhost:8080/mock-sentmessage/customers/send-email, I can see messages filling up in my DLQ.
4) When I bring my service back up, I expected the messages in the DLQ to be retried, but they are not retried. Is there any configuration I need to set for this?
As per the documentation:
There is no automated mechanism provided to move dead lettered messages back to the bus queue.
I am not sure what your question is, or even if you have one; you seem to have answered your own question by quoting the documentation:
There is no automated mechanism provided to move dead lettered messages back to the bus queue.
So, no; there is no "setting" you can change.
There are a couple of things you can do - write your own code to move the messages back to the main queue from the DLQ; it would just take a few lines of Java using Spring AMQP, or in any language of your choice.
You can also use the technique described here whereby you set a message TTL on the DLQ, and configure it to route back to the main queue when the TTL expires.
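A sketch of that technique as a RabbitMQ policy (all names here are hypothetical and the pattern should match only your DLQ): messages that sit on the DLQ past the TTL are dead-lettered to the default exchange with the main queue's name as the routing key, which puts them back on the main queue:

```shell
# Hypothetical queue names; 30s TTL before routing back to the main queue.
rabbitmqctl set_policy requeue-dlq "^xdbus\.httptest7\.0\.dlq$" \
  '{"message-ttl":30000,"dead-letter-exchange":"","dead-letter-routing-key":"xdbus.httptest7.0"}' \
  --apply-to queues
```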
Just so you know, you can also use the Shovel plugin in RabbitMQ to move messages from the DLQ back to the bus queue.
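For example, as a dynamic shovel (queue names are hypothetical; the shovel plugin must be enabled first):

```shell
# Enable the plugin, then declare a shovel that drains the DLQ back
# to the main bus queue (hypothetical queue names).
rabbitmq-plugins enable rabbitmq_shovel
rabbitmqctl set_parameter shovel move-dlq \
  '{"src-uri":"amqp://","src-queue":"xdbus.httptest7.0.dlq","dest-uri":"amqp://","dest-queue":"xdbus.httptest7.0"}'
```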

Implementing the reliability pattern in CloudHub with VM queues

I have more or less implemented the reliability pattern in my Mule application using persistent VM queues on CloudHub, as documented here. While everything works fine, it has left me with a number of questions about actually ensuring reliable delivery of my messages. To illustrate the points below, assume I have an http-request component within my "application logic flow" (see the diagram at the link above) that is throwing an exception because the endpoint is down, and that I want to ensure the in-flight message eventually gets delivered to the endpoint:
As detailed at the link above, I have observed that when the exception is thrown within my "application logic flow" and I have made the flow transactional, the message is put back on the VM queue. However, all that happens is that the message is repeatedly taken off the queue, processed by the flow, and the exception is thrown again, ad infinitum. There appears to be no way of configuring any sort of retry delay or maximum number of retries on VM queues, as is possible, for example, with ActiveMQ. The best workaround I have come up with is to surround the http-request message processor with an until-successful scope, but I'd rather have these sorts of things apply to my whole flow (without having to wrap the whole flow in until-successful). Is this sort of thing possible using only VM queues and CloudHub?
I have configured my until-successful to place the message on another VM queue which I want to use as a dead-letter-queue. Again, this works fine, and I can login to CloudHub and see the messages populated on my DLQ - but then it appears to offer no way of moving messages from this queue back into the flow when the endpoint comes back up. All it seems you can do in CloudHub is clear your queue. Again, is this possible using VM queues and CloudHub only (i.e. no other queueing tool)?
VM queues are very basic, whether you use them in CloudHub or not.
VM queues have no capacity for delaying redelivery (like exponential back-offs). Use JMS queues if you need such features.
You need to create a flow for processing the DLQ, for example one that regularly consumes the queue via the requester module and re-injects the messages into the main queue. Again, with JMS, you would have better control.
Alternatively to JMS, you could consider hosted queues like CloudAMQP, Iron.io or AWS SQS. You would lose transaction support on the inbound endpoint but would gain better control on the (re)delivery behaviour.
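A rough sketch of the DLQ-draining flow suggested above (element availability depends on your Mule version and on having the Mule Requester module installed; namespaces, config-refs, and queue paths are hypothetical and omitted for brevity):

```xml
<!-- Sketch only: poll the DLQ periodically and re-inject each message
     into the main queue. All names here are hypothetical. -->
<flow name="drainDlqFlow">
    <poll frequency="60000">
        <mulerequester:request resource="vm://dlq"/>
    </poll>
    <vm:outbound-endpoint path="main" exchange-pattern="one-way"/>
</flow>
```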

Re-queue AMQP message at tail of queue

I have a project set up using Spring and RabbitMQ. Currently, it is possible for my application to receive an AMQP message that cannot be processed until another asynchronous process has completed (a legacy process, totally detached, over which I have no control). The result is that I may have to wait some amount of time before processing a message, which manifests as an exception in a transformer.
When the message is NACK'd back to RabbitMQ, it is put back at the head of the queue and re-delivered immediately. If I get unprocessable messages equal to the number of concurrent listeners, my workflow locks up: it spins its wheels waiting for messages to become processable, even though there are valid, processable messages waiting behind them in the queue.
Is there a way to reject an AMQP message and have it go back to the tail of the queue instead? From my research, RabbitMQ worked this way at one time, but now I appear to get the head of the queue exclusively.
My config is rather straightforward, but for continuity, here it is...
Connection factory is: org.springframework.amqp.rabbit.connection.CachingConnectionFactory
RabbitMQ 3.1.1
Spring Integration: 2.2.0
<si:channel id="channel"/>

<si-amqp:inbound-channel-adapter
    queue-names="commit" channel="channel" connection-factory="amqpConnectionFactory"
    acknowledge-mode="AUTO" concurrent-consumers="${listeners}"
    channel-transacted="true"
    transaction-manager="transactionManager"/>

<si:chain input-channel="channel" output-channel="nullChannel">
    <si:transformer ref="transformer"/>
    <si:service-activator ref="activator"/>
</si:chain>
You are correct that RabbitMQ was changed some time ago. There is nothing in the API to change the behavior.
You can, of course, put an error-channel on the inbound adapter, followed by a transformer (expression="payload.failedMessage"), followed by an outbound adapter configured with an appropriate exchange/routing-key to requeue the message at the back of the queue.
You might want to add some additional logic in the error flow to check the exception type (payload.cause) and decide which action you want.
If the error flow itself throws an exception, the original message will be requeued at the head, as before; if it exits normally, the message will be acked.
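A sketch of that error flow in the same configuration style (add error-channel="errorChannel" to the inbound adapter first; the empty exchange-name targets the default exchange, and the routing key is the queue name, so the republished message lands at the tail of the queue; the amqp-template bean name is an assumption):

```xml
<!-- Sketch: extract the failed message from the ErrorMessage payload and
     republish it to the tail of the "commit" queue. -->
<si:transformer input-channel="errorChannel" output-channel="requeueChannel"
    expression="payload.failedMessage"/>

<si-amqp:outbound-channel-adapter channel="requeueChannel"
    exchange-name="" routing-key="commit"
    amqp-template="amqpTemplate"/>
```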

How do I clean messages from the queue if the producer is down

I'm using ActiveMQ and I would like to know how to solve this specific case.
When the consumer is down, the producer sends a message to the queue. The message remains in the queue until the consumer is running to consume it.
Now imagine I shut down the producer; the message will STILL remain in the queue. When I then run the consumer, it will try to consume that message but won't be able to reply to the producer, since it is down.
I would like to solve this problem by cleaning out the messages if the producer is gone.
The ActiveMQ broker cleans the queue after stopping. I would like to do the same for the messages of a given producer.
Thanks.
Based on what I understand now from your question and additional comments I propose to add a Message Property to your messages to identify the Producer, and write a small utility that uses a Message Selector to read all messages matching the Producer from the queue. You can run that utility straight after the Producer is stopped (or crashes), and that should quite accurately do what you want to achieve.
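A sketch of such a utility using the ActiveMQ command-line agent instead of custom code (the producerId property name, its value, and the queue name are hypothetical; --msgsel takes a JMS message selector):

```shell
# Purge only the messages whose producerId property matches the stopped producer.
activemq-admin purge --msgsel "producerId='producer-1'" MY.QUEUE
```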
EDIT: although primarily focused on EE, the Sun/Oracle Java EE Tutorial contains a very good chapter on general JMS programming that starts off with standalone producers and consumers. The accompanying source code bundle can be downloaded here; the ready-to-compile samples in that bundle should get you started very quickly.
You can solve it a couple of ways. One is to set a TTL on the message so that it goes away on its own. The other is to connect via JMX and purge the queue, or remove specific messages using a selector statement or the message's specific MessageId value.
See this article for some hints.

NServiceBus: transferring messages from pub queue to sub queue

I am getting a little confused with NServiceBus. A lot of the examples I see always use Publish() and Subscribe(). What I am trying to do is have a publisher that polls its queue and distributes the messages to the subscriber's queue. The messages are generated by another application, and the body of each message contains text which will be parsed later.
Do I still need to call Publish() and Subscribe() to transfer the messages from the publisher's queue to the subscriber's queue? The way I understood it, I only need to configure the queue names in both config files and call LoadAllMessages() on the subscriber side to handle the above scenario. I don't even have to handle the message on the subscriber side.
Thanks.
Your Publisher will still need to call Publish. What this does is: the Publisher looks into Subscription Storage to find out who is interested in that message type, and then sends a message to each Subscriber. On the Subscriber side, you need to implement message handlers to do something with those messages. This is done by implementing the IHandleMessages<T> interface in the Subscriber assembly; NSB will discover this and autowire everything up. Be aware that, by default, the Subscriber will subscribe to all message types. If you want to subscribe only to certain messages, use the .DoNotAutoSubscribe setting in the manual configuration.