How to implement a dead letter queue in WebFlux Reactor Kafka?

How can I implement a dead letter queue for the code below? The same input record should be published to some DLQ topic.
reactiveKafkaConsumerTemplate
        .receiveAutoAck()
        .map(ConsumerRecord::value)
        .flatMap(this::consumeWithRetry)
        .onErrorContinue((error, value) -> log.error("something bad happened while consuming : {}", error.getMessage()))
        .retryWhen(Retry.backoff(30, Duration.of(10, ChronoUnit.SECONDS)))
        .subscribe();
public Mono<Void> consumeWithRetry(MessageRecord message) {
    return consume(message)
            .retry(2);
}
The same method, updated to defer the call so that retry() re-subscribes to a fresh Mono:
public Mono<Void> consumeWithRetry(MessageRecord message) {
    return Mono.defer(() -> consume(message))
            .retry(2);
}
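One possible approach (a sketch based on assumptions, not a confirmed recipe): handle the failure inside the flatMap, while the record value is still in scope, and publish it to a DLQ topic with a ReactiveKafkaProducerTemplate. The bean name dlqProducerTemplate and the topic name "my-topic.DLQ" below are placeholders.

reactiveKafkaConsumerTemplate
        .receiveAutoAck()
        .map(ConsumerRecord::value)
        .flatMap(record -> consumeWithRetry(record)
                // retries exhausted: publish the failing record to the DLQ and keep the stream alive
                .onErrorResume(error -> {
                    log.error("publishing record to DLQ after error: {}", error.getMessage());
                    return dlqProducerTemplate.send("my-topic.DLQ", record).then();
                }))
        .retryWhen(Retry.backoff(30, Duration.of(10, ChronoUnit.SECONDS)))
        .subscribe();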

Related

How to check ActiveMQ queue size from a message listener

I am using a message listener to perform some actions on ActiveMQ queues, and I want to check the size of the queue while processing.
I am using the below logic, but it only works outside the listener.
Any suggestions?
public class TestClass {
MessageConsumer consumerTransformation;
MessageListener listenerObjectTransformation;
public static void main(String []args) throws JMSException {
ActiveMQModel activeMQModelObject = new ActiveMQModel();
//String subject = "TRANSFORMATION_QUEUE";
String subject = "IMPORT_QUEUE";
//consumerTransformation = activeMQModelObject.getActiveMQConsumer(subject);
// Here we set the listener to listen to all the messages in the queue
//listenerObjectTransformation = new TransformationMessageListener();
//consumerTransformation.setMessageListener(listenerObjectTransformation);
boolean isQueueEmpty = activeMQModelObject.isMessageQueueEmpty(subject);
System.out.println("Size " + isQueueEmpty);
}
/*private class TransformationMessageListener implements MessageListener {
@Override
public void onMessage(Message messagearg) {
System.out.println("test....");
}
}*/
}
What is the way to check the ActiveMQ queue size from a message listener?
The JMS API does not define methods for checking queue size or other metrics from a client; the API is meant to decouple the clients from any server administration and from each other. A sender has no awareness of the receivers that might or might not be there, and the receiver is unaware of who might be producing or whether there is anything to consume at that given moment. By using the asynchronous listener you are subscribing for content that is either currently available or yet to be produced.
You can in some cases make use of the JMX metrics that are available from the server in your code, but this is not good practice.
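If you do go the JMX route, a minimal sketch of reading the QueueSize attribute (assumptions: JMX is enabled on the broker, the connector is at the default localhost:1099 URL, and the broker name is "localhost"; adjust the ObjectName to your setup):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class QueueSizeViaJmx {
    public static void main(String[] args) throws Exception {
        // default ActiveMQ JMX URL; adjust host/port to your broker
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // MBean name of the queue; brokerName must match your broker configuration
            ObjectName queueMBean = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost,"
                    + "destinationType=Queue,destinationName=IMPORT_QUEUE");
            Long queueSize = (Long) mbs.getAttribute(queueMBean, "QueueSize");
            System.out.println("QueueSize: " + queueSize);
        }
    }
}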

Correct way of using Spring WebClient in Spring AMQP

I have the below tech stack for a Spring AMQP application consuming messages from RabbitMQ:
Spring Boot 2.2.6.RELEASE
Reactor Netty 0.9.12.RELEASE
Reactor Core 3.3.10.RELEASE
The application is deployed on 4-core RHEL.
Below are some of the configurations being used for RabbitMQ:
@Bean
public CachingConnectionFactory connectionFactory() {
CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory();
cachingConnectionFactory.setHost(<<HOST NAME>>);
cachingConnectionFactory.setUsername(<<USERNAME>>);
cachingConnectionFactory.setPassword(<<PASSWORD>>);
cachingConnectionFactory.setChannelCacheSize(50);
return cachingConnectionFactory;
}
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
factory.setConnectionFactory(connectionFactory());
factory.setMaxConcurrentConsumers(50);
factory.setMessageConverter(new Jackson2JsonMessageConverter());
factory.setDefaultRequeueRejected(false); // DLQ is in place
return factory;
}
The consumers make downstream API calls using Spring WebClient in synchronous mode. Below is the configuration for the WebClient:
@Bean
public WebClient webClient() {
ConnectionProvider connectionProvider = ConnectionProvider
.builder("fixed")
.lifo()
.pendingAcquireTimeout(Duration.ofMillis(200000))
.maxConnections(16)
.pendingAcquireMaxCount(3000)
.maxIdleTime(Duration.ofMillis(290000))
.build();
HttpClient client = HttpClient.create(connectionProvider);
client = client.tcpConfiguration(<<connection timeout, read timeout, write timeout is set here....>>); // tcpConfiguration returns a new HttpClient, so reassign the result
WebClient.Builder builder =
WebClient.builder().baseUrl(<<base URL>>).clientConnector(new ReactorClientHttpConnector(client));
return builder.build();
}
This WebClient is autowired into a @Service class as
@Autowired
private WebClient webClient;
and used as below in two places. The first place is a single call -
public DownstreamStatusEnum downstream(String messageid, String payload, String contentType) {
return call(messageid,payload,contentType);
}
private DownstreamStatusEnum call(String messageid, String payload, String contentType) {
DownstreamResponse response = sendRequest(messageid, payload, contentType).block();
return response;
}
private Mono<DownstreamResponse> sendRequest(String messageid, String payload, String contentType) {
return webClient
.method(POST)
.uri(<<URI>>)
.contentType(MediaType.valueOf(contentType))
.body(BodyInserters.fromValue(payload))
.exchange()
.flatMap(response -> response.bodyToMono(DownstreamResponse.class));
}
The other place requires parallel downstream calls and has been implemented as below:
private Flux<DownstreamResponse> getValues (List<DownstreamRequest> reqList, String messageid) {
return Flux
.fromIterable(reqList)
.parallel()
.runOn(Schedulers.elastic())
.flatMap(s -> {
return webClient
.method(POST)
.uri(<<downstream url>>)
.body(BodyInserters.fromValue(s))
.exchange()
.flatMap(response -> {
if(response.statusCode().isError()) {
return Mono.just(new DownstreamResponse());
}
return response.bodyToMono(DownstreamResponse.class);
});
}).sequential();
}
public List<DownstreamResponse> updateValue (List<DownstreamRequest> reqList,String messageid) {
return getValues(reqList, messageid).collectList().block();
}
The application has been working fine for the past year or so. Of late, we are seeing an issue whereby one or more consumers seem to get stuck with the default prefetch (250) number of messages in unacked status. The only way to fix the issue is to restart the app.
We have not made any code changes recently, and there have been no infra changes either.
When this happens, we take thread dumps. The pattern observed is similar: most of the consumer threads are in TIMED_WAITING state, while one or two consumers show a WAITING state with the below stacks -
"org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#0-13" waiting for condition ...
java.lang.Thread.State: WAITING (parking)
- parking to wait for ......
at .......
at .......
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(......
at reactor.core.publisher.Mono.block(....
at .........WebClientServiceImpl.call(...
Also see below -
"org.springframework.amqp.rabbit.RabbitListenerEndpointContainer#0-13" waiting for condition ...
java.lang.Thread.State: WAITING (parking)
- parking to wait for ......
at .......
at .......
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(......
at reactor.core.publisher.Mono.block(....
at .........WebClientServiceImpl.updateValue(...
Not exactly sure if this thread dump shows that the consumer threads are actually stuck at this "block" call.
Please advise what the issue could be here and what steps need to be taken to fix it. Earlier we thought it might be some issue with RabbitMQ/Spring AMQP, but based on the thread dumps it looks like an issue with the WebClient "block" call.
On adding BlockHound, it printed the below stack trace in the log file -
Error has been observed at following site(s)
Checkpoint Request to POST https://....... [DefaultWebClient]
Stack Trace:
at java.lang.Object.wait
......
at java.net.InetAddress.checkLookupTable
at java.net.InetAddress.getAddressFromNameService
......
at io.netty.util.internal.SocketUtils$8.run
......
at io.netty.resolver.DefaultNameResolver.doResolve
Sorry, I just realized that the flatMap in the parallel Flux call was actually like below:
.flatMap(response -> {
if(response.statusCode().isError()) {
return Mono.just(new DownstreamResponse());
}
return response.bodyToMono(DownstreamResponse.class);
});
So in error scenarios, I think the underlying connection was not being properly released. When I updated it as below, it seemed to fix the issue -
.flatMap(response -> {
    if (response.statusCode().isError()) {
        // drain/release the error response so the connection is returned to the pool
        return response.releaseBody().thenReturn(new DownstreamResponse());
    }
    return response.bodyToMono(DownstreamResponse.class);
});
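For reference, if the application later moves to Spring Framework 5.3+, exchange() is deprecated in favor of exchangeToMono(), which is documented to release the response body automatically when the handler does not consume it. A sketch of the first call rewritten that way (placeholders kept from the original snippet, so this assumes the same surrounding class and imports):

private Mono<DownstreamResponse> sendRequest(String messageid, String payload, String contentType) {
    return webClient
            .method(POST)
            .uri(<<URI>>)
            .contentType(MediaType.valueOf(contentType))
            .body(BodyInserters.fromValue(payload))
            // exchangeToMono drains the response body if this handler does not consume it
            .exchangeToMono(response -> {
                if (response.statusCode().isError()) {
                    return Mono.just(new DownstreamResponse());
                }
                return response.bodyToMono(DownstreamResponse.class);
            });
}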

Reading messages from a RabbitMQ queue at an interval is not working

What I am trying to achieve is to read messages from a RabbitMQ queue every 15 minutes. From the documentation, I could see that I can use the "receiveTimeout" method to set the interval.
Polling Consumer
The AmqpTemplate itself can be used for polled Message reception. By default, if no message is
available, null is returned immediately. There is no blocking. Starting with version 1.5, you can set
a receiveTimeout, in milliseconds, and the receive methods block for up to that long, waiting for a
message.
But when I tried implementing it with Spring Integration, the receiveTimeout did not work as I expected.
My test code is given below.
@Bean
Queue createMessageQueue() {
return new Queue(RetryQueue, false);
}
@Bean
public SimpleMessageListenerContainer QueueMessageListenerContainer(ConnectionFactory connectionFactory) {
final SimpleMessageListenerContainer messageListenerContainer = new SimpleMessageListenerContainer(
connectionFactory);
messageListenerContainer.setQueueNames(RetryQueue);
messageListenerContainer.setReceiveTimeout(900000);
return messageListenerContainer;
}
@Bean
public AmqpInboundChannelAdapter inboundQueueChannelAdapter(
@Qualifier("QueueMessageListenerContainer") AbstractMessageListenerContainer messageListenerContainer) {
final AmqpInboundChannelAdapter amqpInboundChannelAdapter = new AmqpInboundChannelAdapter(
messageListenerContainer);
amqpInboundChannelAdapter.setOutputChannelName("channelRequestFromQueue");
return amqpInboundChannelAdapter;
}
@ServiceActivator(inputChannel = "channelRequestFromQueue")
public void activatorRequestFromQueue(Message<String> message) {
System.out.println("Message: " + message.getPayload() + ", recieved at: " + LocalDateTime.now());
}
I am getting the payload logged in the console in near real time.
Can anyone help? How long will the consumer be active once it starts?
UPDATE
The IntegrationFlow I used to retrieve messages from the queue at an interval:
@Bean
public IntegrationFlow inboundIntegrationFlowPaymentRetry() {
return IntegrationFlows
.from(Amqp.inboundPolledAdapter(connectionFactory, RetryQueue),
e -> e.poller(Pollers.fixedDelay(20_000).maxMessagesPerPoll(-1)).autoStartup(true))
.handle(message -> {
channelRequestFromQueue()
.send(MessageBuilder.withPayload(message.getPayload()).copyHeaders(message.getHeaders())
.setHeader(IntegrationConstants.QUEUED_MESSAGE, message).build());
}).get();
}
The Polling Consumer documentation is from the Spring AMQP documentation about the RabbitTemplate, and has nothing to do with the listener container or Spring Integration.
https://docs.spring.io/spring-amqp/docs/current/reference/html/#polling-consumer
Spring Integration's adapter is message-driven, and you will get messages whenever they are available.
To get messages on demand, you need to call the RabbitTemplate on whatever interval you want.
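A minimal sketch of that idea (assumptions: a RabbitTemplate bean is available, RetryQueue is the queue-name constant from the question, and @EnableScheduling is present on some configuration class):

import java.time.LocalDateTime;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class RetryQueuePoller {

    private final RabbitTemplate rabbitTemplate;

    public RetryQueuePoller(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // runs every 15 minutes; receive() returns null as soon as the queue is empty
    @Scheduled(fixedDelay = 900_000)
    public void drainRetryQueue() {
        Message message;
        while ((message = rabbitTemplate.receive(RetryQueue)) != null) {
            System.out.println("Polled: " + new String(message.getBody()) + " at " + LocalDateTime.now());
        }
    }
}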

Republishing a message into a RabbitMQ queue creates an infinite loop

I have a RabbitMQ queue to hold unprocessed messages. In the happy path, I read a message from the queue, process it, and remove it from the queue. But if certain criteria are met while processing, I have to republish the message to the queue. I am using a pollable channel adapter to fetch the message. Since I want to fetch all the available messages in that queue while polling, I have set maxMessagesPerPoll to -1. This causes the code to go into an infinite loop: after republishing the message to the queue, the inbound polled adapter picks it up immediately. How can I prevent this situation?
Is there any way to delay the message delivery, or can we restrict processing to once per message in a single poll of the inbound polled adapter? What would be the best approach?
The inbound polled adapter is:
@Bean
public IntegrationFlow inboundIntegrationFlowPaymentRetry() {
return IntegrationFlows
.from(Amqp.inboundPolledAdapter(connectionFactory, RetryQueue),
e -> e.poller(Pollers.fixedDelay(20_000).maxMessagesPerPoll(-1)).autoStartup(true))
.handle(message -> {
channelRequestFromQueue()
.send(MessageBuilder.withPayload(message.getPayload()).copyHeaders(message.getHeaders())
.setHeader(IntegrationConstants.QUEUED_MESSAGE, message).build());
}).get();
}
The first posting of the message to the queue is done by:
@Bean
Binding bindingRetryQueue() {
return BindingBuilder.bind(queueRetry()).to(exchangeRetry())
.with(ProcessQueuedMessageService.RETRY_ROUTING_KEY);
}
@Bean
TopicExchange exchangeRetry() {
return new TopicExchange(ProcessQueuedMessageService.RETRY_EXCHANGE);
}
@Bean
Queue queueRetry() {
return new Queue(RetryQueue, false);
}
@Bean
@ServiceActivator(inputChannel = "channelAmqpOutbound")
public AmqpOutboundEndpoint outboundAmqp(AmqpTemplate amqpTemplate) {
final AmqpOutboundEndpoint outbound = new AmqpOutboundEndpoint(amqpTemplate);
outbound.setRoutingKey(RetryQueue);
return outbound;
}
The message is republished by:
StaticMessageHeaderAccessor.getAcknowledgmentCallback(requeueMessage).acknowledge(Status.REQUEUE);
Is there any way to delay the message delivery
See the Delayed Exchange feature in RabbitMQ and its API in Spring AMQP: https://docs.spring.io/spring-amqp/docs/current/reference/html/#delayed-message-exchange
restrict the message processing once per message
For this scenario you can take a look at the Idempotent Receiver pattern and its implementation in Spring Integration: https://docs.spring.io/spring-integration/docs/current/reference/html/messaging-endpoints.html#idempotent-receiver.
The redelivered message is going to have an AmqpHeaders.REDELIVERED header.
See more in docs: https://www.rabbitmq.com/reliability.html#consumer-side
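One concrete way to combine those hints with the flow from the question (a sketch under the assumption that the polled adapter maps the broker's redelivered flag to the AmqpHeaders.REDELIVERED header) is to skip deliveries that carry the flag:

@Bean
public IntegrationFlow inboundIntegrationFlowPaymentRetry() {
    return IntegrationFlows
            .from(Amqp.inboundPolledAdapter(connectionFactory, RetryQueue),
                    e -> e.poller(Pollers.fixedDelay(20_000).maxMessagesPerPoll(-1)).autoStartup(true))
            .handle(message -> {
                // the broker marks requeued deliveries as redelivered; skip them to break the loop
                Boolean redelivered = message.getHeaders().get(AmqpHeaders.REDELIVERED, Boolean.class);
                if (Boolean.TRUE.equals(redelivered)) {
                    return;
                }
                channelRequestFromQueue()
                        .send(MessageBuilder.withPayload(message.getPayload())
                                .copyHeaders(message.getHeaders())
                                .setHeader(IntegrationConstants.QUEUED_MESSAGE, message).build());
            }).get();
}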

RabbitMQ - Non Blocking Consumer with Manual Acknowledgement

I'm just starting to learn RabbitMQ, so forgive me if my question is very basic.
My problem is actually the same as the one posted here:
RabbitMQ - Does one consumer block the other consumers of the same queue?
However, upon investigation, I found out that manual acknowledgement prevents other consumers from getting a message from the queue (a blocking state). I would like to know how I can prevent it. Below is my code snippet.
...
var message = receiver.ReadMessage();
Console.WriteLine("Received: {0}", message);
// simulate processing
System.Threading.Thread.Sleep(8000);
receiver.Acknowledge();
public string ReadMessage()
{
bool autoAck = false;
Consumer = new QueueingBasicConsumer(Model);
Model.BasicConsume(QueueName, autoAck, Consumer);
_ea = (BasicDeliverEventArgs)Consumer.Queue.Dequeue();
return Encoding.ASCII.GetString(_ea.Body);
}
public void Acknowledge()
{
Model.BasicAck(_ea.DeliveryTag, false);
}
I modified how I get messages from the queue, and it seems the blocking issue was fixed. Below is my code:
public string ReadOneAtTime()
{
Consumer = new QueueingBasicConsumer(Model);
var result = Model.BasicGet(QueueName, false);
if (result == null) return null;
DeliveryTag = result.DeliveryTag;
return Encoding.ASCII.GetString(result.Body);
}
public void Reject()
{
Model.BasicReject(DeliveryTag, true);
}
public void Acknowledge()
{
Model.BasicAck(DeliveryTag, false);
}
Going back to my original question, I added the QOS and noticed that other consumers can now get messages. However, some are left unacknowledged and my program seems to hang. Code changes are below:
public string ReadMessage()
{
Model.BasicQos(0, 1, false); // control prefetch
bool autoAck = false;
Consumer = new QueueingBasicConsumer(Model);
Model.BasicConsume(QueueName, autoAck, Consumer);
_ea = Consumer.Queue.Dequeue();
return Encoding.ASCII.GetString(_ea.Body);
}
public void AckConsume()
{
Model.BasicAck(_ea.DeliveryTag, false);
}
In Program.cs
private static void Consume(Receiver receiver)
{
int counter = 0;
while (true)
{
var message = receiver.ReadMessage();
if (message == null)
{
Console.WriteLine("NO message received.");
break;
}
else
{
counter++;
Console.WriteLine("Received: {0}", message);
receiver.AckConsume();
}
}
Console.WriteLine("Total message received {0}", counter);
}
I appreciate any comments and suggestions. Thanks!
Well, RabbitMQ provides infrastructure where one consumer can't lock/block other consumers working with the same queue.
The behavior you faced can be the result of a couple of issues:
The fact that you are not using auto-ack mode on the channel leads to a situation where one consumer has taken the message and has not yet sent the acknowledgement (basic ack), meaning that the computation is still in progress and there is a chance the consumer will fail to process this message, in which case it should be kept in the Rabbit queue to prevent message loss (the total number of messages will not change in the management console). During this period (from getting the message into client code until sending the explicit acknowledgement) the message is marked as being used by that specific client and is not available to other consumers. However, this does not prevent other consumers from taking other messages from the queue, if there are more messages to take.
IMPORTANT: to prevent message loss with manual acknowledgement, make sure to close the channel or send a nack in case of a processing fault, so you avoid the situation where your application takes the message from the queue, fails to process it, removes it from the queue, and loses the message.
Another reason why other consumers can't work with the same queue is QOS (prefetch): a parameter of the channel where you declare how many messages should be pushed to the client cache to improve dequeue latency (working with a local cache). Your original code example lacks this part, so I am just guessing. In a case like this, the QOS can be so big that all messages on the server are marked as belonging to one client and no other client can take any of them, exactly like with the manual ack I've already described.
Hope this helps.
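For reference, a minimal sketch of those two points using the RabbitMQ Java client rather than the poster's C# client: a prefetch (QOS) of 1 so at most one unacked message is outstanding per consumer, and an explicit ack/nack so a failed message is requeued instead of being lost. The queue name and host are placeholders.

import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

public class ManualAckConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");            // adjust to your broker
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.basicQos(1);                     // prefetch 1: at most one unacked message per consumer
        boolean autoAck = false;
        channel.basicConsume("my-queue", autoAck, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body) throws java.io.IOException {
                try {
                    process(new String(body, StandardCharsets.UTF_8));
                    channel.basicAck(envelope.getDeliveryTag(), false);        // processed: remove from queue
                } catch (Exception e) {
                    channel.basicNack(envelope.getDeliveryTag(), false, true); // failed: requeue instead of losing it
                }
            }
        });
    }

    private static void process(String message) {
        System.out.println("Received: " + message);
    }
}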