AWS: Move message to Dead letter queue while preserving meta information - kotlin

I am trying to move a message to the dead letter queue in AWS if there is an exception while handling it.
Currently I delete the original message and send it to the DLQ explicitly. However, in doing so I lose the message meta information, such as the original message ID, total receive count, first sent timestamp, etc.
Below is the code snippet for this.
@Inject
@Named("demo-queue")
private SimpleQueueService sqsService;

@Inject
@Named("dlq")
private SimpleQueueService dlqService;
.
.
.
List<Message> messages = sqsService.receiveMessages(10, 30, 20);

messages.forEach(m ->
    dlqService.sendMessage(
        m.getBody(),
        // copy the custom message attributes onto the request built by the library
        request -> request.withMessageAttributes(m.getMessageAttributes())
    )
);

messages.forEach(message -> sqsService.deleteMessage(message.getReceiptHandle()));
When AWS moves a message from the original queue to the DLQ after the max receive count is reached, it preserves all of the attributes mentioned above. Is there any way to achieve the same using the aws-sdk?
I am using the Agorapulse library with Micronaut to send/receive messages from SQS.
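For what it's worth, here is a minimal sketch of one workaround, written against the plain AWS SDK for Java v1 rather than the Agorapulse wrapper: SQS does not let a producer set system attributes (only the queue's own redrive policy preserves them), so you can copy the original message ID, ApproximateReceiveCount, and SentTimestamp into custom message attributes before forwarding to the DLQ. The queue URL and the original.* attribute names below are made up for illustration, and the receive call must have requested those system attribute names for getAttributes() to contain them.

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.MessageAttributeValue;
import com.amazonaws.services.sqs.model.SendMessageRequest;

import java.util.HashMap;
import java.util.Map;

public class DlqForwarder {

    private final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
    // illustrative URL; resolve your real DLQ URL instead
    private final String dlqUrl = "https://sqs.eu-west-1.amazonaws.com/123456789012/demo-queue-dlq";

    public void forwardToDlq(Message original) {
        // Start from the original custom attributes so nothing the producer set is lost.
        Map<String, MessageAttributeValue> attributes = new HashMap<>(original.getMessageAttributes());

        // System attributes (receive count, first sent timestamp) and the message ID
        // cannot be set on an outgoing message, so carry them over as custom attributes.
        attributes.put("original.messageId", stringAttribute(original.getMessageId()));
        attributes.put("original.approximateReceiveCount",
                stringAttribute(original.getAttributes().get("ApproximateReceiveCount")));
        attributes.put("original.sentTimestamp",
                stringAttribute(original.getAttributes().get("SentTimestamp")));

        sqs.sendMessage(new SendMessageRequest()
                .withQueueUrl(dlqUrl)
                .withMessageBody(original.getBody())
                .withMessageAttributes(attributes));
    }

    private static MessageAttributeValue stringAttribute(String value) {
        return new MessageAttributeValue().withDataType("String").withStringValue(value);
    }
}

The DLQ consumer can then read the original.* attributes back; if you need the real system attributes preserved end to end, letting the redrive policy (maxReceiveCount) move the message remains the only built-in way.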

Related

Spring Webflux - initial message without subscriber

I am trying to build an SSE Spring application using WebFlux. According to the documentation, a message is not sent to the sink if there is no subscriber. In my use case, I would like the subscriber to receive the last message upon subscribing. I have found that the sink can be configured in the following way:
Sinks.many().replay().latest();
When I already have both a publisher and a subscriber and the next subscriber subscribes, it receives the last sent message, which is great. However, if there are no subscribers yet, the publisher emits the message, and the first subscriber that then comes in receives nothing. This is exactly what the documentation above says, but I am wondering how to solve this issue to meet my needs. As a workaround I did something like this:
if (shareSinks.currentSubscriberCount() == 0) {
    shareSinks.asFlux().subscribe();
}
shareSinks.tryEmitNext(shareDTO);
But subscribing the publisher to its own subscription doesn't sound like a clean way to do this...
This is a matter of hot and cold publishers. Currently, your publisher (Sinks.many().replay().latest()) is a cold publisher. Events that are emitted while there is no subscriber will simply vanish.
What you need is a so-called hot publisher. Hot publishers cache the events, and a new subscriber will receive all previously cached events.
This will do the trick:
final Sinks.Many<String> shareSinks = Sinks.many()
        .replay()
        .all(); // or .limit(10); to keep only the last 10 emissions

final Flux<String> hotPublisher = shareSinks.asFlux()
        .cache(); // .cache() turns the cold flux into a hot flux

shareSinks.tryEmitNext("1");
shareSinks.tryEmitNext("2");
shareSinks.tryEmitNext("3");
shareSinks.tryEmitNext("4");

hotPublisher.subscribe(message -> System.out.println("received: " + message));
The console print out would be:
received: 1
received: 2
received: 3
received: 4
The Reactor documentation also has a chapter on hot vs. cold publishers.

Kafka Streams write an event back to the input topic

In my Kafka Streams app, I need to retry processing a message whenever a particular type of exception is thrown in the processing logic.
Rather than wrapping my logic in a RetryTemplate (I am using Spring Boot), I am considering simply writing the message back into the input topic; my assumption is that the message will be appended to the end of the log in the appropriate partition and will eventually be re-processed.
I am aware that this would mess up the ordering, and I am okay with that.
My question is: would Kafka Streams have an issue when it encounters a message that was supposedly already processed in the past (I am assuming Kafka Streams has a way of marking the messages it has processed, especially when exactly-once is enabled)?
Here is an example of the code I am considering for this solution.
val branches = streamsBuilder.stream(inputTopicName)
    .mapValues { it -> myServiceObject.executeSomeLogic(it) }
    .branch(
        { _, value -> value is SuccessfulResult }, // success
        { _, value -> value is ErrorResult }       // exception was thrown
    )

branches[0].to(outputTopicName)
branches[1].to(inputTopicName) // write them back to the input topic as a way of retrying

How does RabbitMQ handle it when the queue's message bytes length is larger than x-max-length-bytes?

I have declared a queue like below:
Map<String, Object> args = new HashMap<String, Object>();
args.put("x-max-length-bytes", 2L * 1024 * 1024 * 1024); // max total body size is 2 GB
channel.queueDeclare("queueName", true, false, false, args);
When the total bytes of messages in the queue grow larger than 2 GB, RabbitMQ automatically removes messages from the head of the queue.
But what I expected is that it rejects the newly produced message and returns an exception to the producer.
How can I achieve that?
A possible workaround is to check the queue size before sending your message, using the HTTP API.
For example, suppose you have a queue called myqueuetest with max size = 20.
Before sending the message you can call the HTTP API like this:
http://localhost:15672/api/queues/
The result is a JSON document like this:
"message_bytes":10,
"message_bytes_ready":10,
"message_bytes_unacknowledged":0,
"message_bytes_ram":10,
"message_bytes_persistent":0,
..
"name":"myqueuetest",
"vhost":"test",
"durable":true,
"auto_delete":false,
"arguments":{
"x-max-length-bytes":20
},
Then you could read the message_bytes field before sending your message and decide whether to send it or not.
Hope it helps
EDIT
This workaround could hurt your application's performance.
This workaround is not safe if you have multiple threads or publishers.
This workaround is not exactly a best practice.
Just try it and see if it is OK for your application.
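For completeness, here is a rough sketch of that pre-publish check using Java 11's built-in HttpClient against the management API; the vhost, credentials, and naive JSON parsing are placeholders, and the race condition mentioned above still applies.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class QueueSizeCheck {
    public static void main(String[] args) throws Exception {
        // Management API endpoint for a single queue: /api/queues/<vhost>/<name>
        // "test" and "myqueuetest" match the example above; adjust to your setup.
        String url = "http://localhost:15672/api/queues/test/myqueuetest";
        String auth = Base64.getEncoder().encodeToString("guest:guest".getBytes());

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();
        String json = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body();

        // Naive extraction of "message_bytes" -- use a real JSON parser in production.
        Matcher m = Pattern.compile("\"message_bytes\":\\s*(\\d+)").matcher(json);
        long messageBytes = m.find() ? Long.parseLong(m.group(1)) : 0L;

        long maxLengthBytes = 20L; // the x-max-length-bytes from the example
        if (messageBytes < maxLengthBytes) {
            System.out.println("Room left, safe (but still racy) to publish");
        } else {
            System.out.println("Queue is full, skip or delay publishing");
        }
    }
}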
As explained on the official docs:
Messages will be dropped or dead-lettered from the front of the queue to make room for new messages once the limit is reached.
https://www.rabbitmq.com/maxlength.html
If you think RabbitMQ should drop messages from the end of the queue instead, feel free to open an issue at https://github.com/rabbitmq/rabbitmq-server/issues so it can be discussed.
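Note that the same docs page also describes a per-queue overflow behaviour: on RabbitMQ 3.7 and later you can set x-overflow to reject-publish so the broker refuses new messages once the limit is reached (with publisher confirms enabled, the publish is nacked). A sketch of such a declaration, assuming a broker version that supports it:

Map<String, Object> args = new HashMap<String, Object>();
args.put("x-max-length-bytes", 2L * 1024 * 1024 * 1024); // 2 GB cap on total body size
args.put("x-overflow", "reject-publish");                // reject new publishes instead of dropping the head
channel.queueDeclare("queueName", true, false, false, args);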

In RabbitMQ, how can I consume multiple messages, read all messages in a queue, or read all messages from an exchange with a specific key?

I want to consume multiple messages from a specific queue, or from a specific exchange with a given key.
So the scenario is as follows:
Publisher publishes message 1 to queue 1
Publisher publishes message 2 to queue 1
Publisher publishes message 3 to queue 1
Publisher publishes message 4 to queue 2
Publisher publishes message 5 to queue 2
..
Consumer consumes messages from queue 1
and gets [message 1, message 2, message 3] all at once, handling them in one callback:
listen_to(queue_name, num_of_msg_to_fetch or all, function(messages) {
    // do some stuff with the returned list
});
The messages do not arrive at the same time; they are like events, and I want to collect them in a queue, package them, and send them to a third party.
I also read this post:
http://rabbitmq.1065348.n5.nabble.com/Consuming-multiple-messages-at-a-time-td27195.html
Thanks
Don't consume directly from the queue, as queues follow a round-robin algorithm (an AMQP mandate).
Use a shovel to transfer the queue contents to a fanout exchange and consume messages right from that exchange. That way you get all messages across all connected consumers. :)
If you want to consume multiple messages from a specific queue, you can try something like the code below.
channel.queueDeclare(QUEUE_NAME, false, false, false, null);

Consumer consumer = new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String consumerTag,
                               Envelope envelope,
                               AMQP.BasicProperties properties,
                               byte[] body) throws IOException {
        String message = new String(body, "UTF-8");
        logger.info("Received Message --> " + message);
    }
};

// register the consumer so handleDelivery is invoked for each delivered message
channel.basicConsume(QUEUE_NAME, true, consumer);
You might need to conceptually separate the domain message from the RMQ message. As a producer you'd then bundle multiple domain messages into a single RMQ message and publish it to RMQ. Remember that this kind of design introduces timeouts and latencies due to the existence of a bundling window (you might take some inspiration from Kafka, which does bundling to optimize I/O at the cost of latency).
On the consuming side, you'd then have a consumer with a typical .handleDelivery implementation that transforms the received body for processing: byte[] -> Set[DomainMessage] -> your listener.
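A minimal sketch of that bundling idea, using a newline-delimited body as a stand-in for a real serialization format; the class and method names here are made up for illustration.

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Consumer;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class BundlingExample {

    // Producer side: bundle several domain messages into one RMQ message.
    static void publishBundle(Channel channel, String queue, List<String> domainMessages) throws Exception {
        String body = String.join("\n", domainMessages);
        channel.basicPublish("", queue, null, body.getBytes(StandardCharsets.UTF_8));
    }

    // Consumer side: byte[] -> Set<DomainMessage> -> your listener.
    static Consumer bundleConsumer(Channel channel) {
        return new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body) {
                Set<String> domainMessages = new LinkedHashSet<>(
                        Arrays.asList(new String(body, StandardCharsets.UTF_8).split("\n")));
                domainMessages.forEach(m -> System.out.println("domain message: " + m));
            }
        };
    }
}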

Query for Number of Messages in Mule ESB VM Queue

In a Mule flow, I would like to add an Exception Handler that forwards messages to a "retry queue" when there is an exception. However, I don't want this retry logic to run automatically. Instead, I'd rather receive a notification so I can review the errors and then decide whether to retry all messages in the queue or not.
I don't want to receive a notification for every exception. I'd rather have a scheduled job that runs every 15 minutes and checks to see if there are messages in this retry queue and then only send the notification if there are.
Is there any way to determine how many messages are currently in a persistent VM queue?
Assuming you use the default VM queue persistence mechanism and that the VM connector is named vmConnector, you can do this:
final String queueName = "retryQueue";
int messageCount = 0;

// Look up the VM connector from the Mule registry and count the keys
// in its object store that belong to the retry queue.
final VMConnector vmConnector = (VMConnector) muleContext.getRegistry()
                                                         .lookupConnector("vmConnector");

for (final Serializable key : vmConnector.getQueueProfile().getObjectStore().allKeys())
{
    final QueueKey queueKey = (QueueKey) key;
    if (queueName.equals(queueKey.queueName))
    {
        messageCount++;
    }
}

System.out.printf("Queue %s has %d pending messages%n", queueName, messageCount);