RabbitMQ Ack Timeout

I'm using the RPC pattern to process my objects with RabbitMQ.
As you might guess, I have an object, I want its processing to finish, and only after that send the ack back to the RPC client.
By default the ack has a timeout of about 3 minutes, and my processing takes longer than that.
How can I change this timeout for the ack of each object, or what should I do to handle long-running processes like this?

Modern versions of RabbitMQ have a delivery acknowledgement timeout:
In modern RabbitMQ versions, a timeout is enforced on consumer delivery acknowledgement. This helps detect buggy (stuck) consumers that never acknowledge deliveries. Such consumers can affect a node's on-disk data compaction and potentially drive nodes out of disk space.
If a consumer does not ack its delivery for more than the timeout value (30 minutes by default), its channel will be closed with a PRECONDITION_FAILED channel exception. The error will be logged by the node that the consumer was connected to.
The error message will be:
Channel error on connection <####> :
operation none caused a channel exception precondition_failed: consumer ack timed out on channel 1
The timeout is 30 minutes (1,800,000 ms) by default (see note 1) and is configured by the consumer_timeout parameter in rabbitmq.conf.
Note 1: the timeout was 15 minutes (900,000 ms) before RabbitMQ 3.8.17.
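For example, a minimal rabbitmq.conf entry raising the timeout to one hour could look like this (3,600,000 ms is just an illustrative value; rabbitmq.conf is only read at node startup, so the node must be restarted for the change to take effect):
consumer_timeout = 3600000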

If you run RabbitMQ in Docker, you can mount rabbitmq.conf as a volume, create that file on the host, and set consumer_timeout in it.
For example, with Docker Compose:
version: "2.4"
services:
rabbitmq:
image: rabbitmq:3.9.13-management-alpine
network_mode: host
container_name: 'you name'
ports:
- 5672:5672
- 15672:15672 ----- if you use gui for rabbit
volumes:
- /etc/rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
You also need to create the rabbitmq.conf file on your server under /etc/rabbitmq/.
Documentation with the available parameters: https://github.com/rabbitmq/rabbitmq-server/blob/v3.8.x/deps/rabbit/docs/rabbitmq.conf.example
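Once the file is in place, restart the container (for example with docker compose restart rabbitmq, assuming the service name above) so the broker picks up the new consumer_timeout value.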

Related

Rabbitmq: Consumers for SAC queues are not connected after node in a cluster is restarted

Setup:
3-node cluster of RabbitMQ nodes (via Docker), behind HAProxy.
Version:
RabbitMQ: 3.8.11
Erlang: 23.2.3
Spring-amqp: 1.7.3
Spring Boot (1.5.4) app with 3 queues:
one defined as exclusive, durable, auto-delete false
one defined as SAC (single active consumer), durable, auto-delete false (see the declaration sketch after this list)
one classic, durable, auto-delete false
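For reference, a SAC queue is typically declared with the x-single-active-consumer queue argument. A rough Spring AMQP sketch of such a declaration (not the original code; the queue name is taken from the error log below, and QueueBuilder is assumed to be available in your Spring AMQP version):

import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;
import org.springframework.context.annotation.Bean;

@Bean
public Queue sacQueue() {
    // Durable, non-auto-delete queue with the single-active-consumer flag enabled
    return QueueBuilder.durable("single-active-consumer-message-queue")
            .withArgument("x-single-active-consumer", true)
            .build();
}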
Policy:
Scenario:
When the application first starts, the queues are registered correctly.
I bring down any node at random; if it is the master node, mirroring kicks in and one of the mirror nodes becomes the master. All fine so far.
When I bring that node back up is when I start to get exceptions in the application logs:
Logs:
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - failed to perform operation on queue 'single-active-consumer-message-queue' in vhost '/dev' due to timeout, class-id=50, method-id=10)
at com.rabbitmq.utility.ValueOrException.getValue(ValueOrException.java:66) ~[amqp-client-4.0.2.jar!/:4.0.2]
at com.rabbitmq.utility.BlockingValueOrException.uninterruptibleGetValue(BlockingValueOrException.java:32) ~[amqp-client-4.0.2.jar!/:4.0.2]
at com.rabbitmq.client.impl.AMQChannel$BlockingRpcContinuation.getReply(AMQChannel.java:366) ~[amqp-client-4.0.2.jar!/:4.0.2]
at com.rabbitmq.client.impl.AMQChannel.privateRpc(AMQChannel.java:229) ~[amqp-client-4.0.2.jar!/:4.0.2]
at com.rabbitmq.client.impl.AMQChannel.exnWrappingRpc(AMQChannel.java:117) ~[amqp-client-4.0.2.jar!/:4.0.2]
... 25 common frames omitted
It attempts to reconnect 3 times and then eventually prints the log below:
2021-03-18 17:08:55.487 ERROR 1 --- [cTaskExecutor-4] o.s.a.r.l.SimpleMessageListenerContainer : Stopping container from aborted consumer
2021-03-18 17:08:55.487 INFO 1 --- [cTaskExecutor-4] o.s.a.r.l.SimpleMessageListenerContainer : Waiting for workers to finish.
2021-03-18 17:08:55.487 INFO 1 --- [cTaskExecutor-4] o.s.a.r.l.SimpleMessageListenerContainer : Successfully waited for workers to finish.
On the RabbitMQ console I see this:
The CachingConnectionFactory is defined with the basic connection details of the HAProxy:
@Bean
protected SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory, RetryOperationsInterceptor retryAdvice) {
    SimpleRabbitListenerContainerFactory containerFactory = new SimpleRabbitListenerContainerFactory();
    containerFactory.setConnectionFactory(connectionFactory);
    containerFactory.setDefaultRequeueRejected(false);
    containerFactory.setAdviceChain(retryAdvice);
    containerFactory.setMessageConverter(new Jackson2JsonMessageConverter());
    return containerFactory;
}
This behavior was caused by a bug in RabbitMQ. It appears to be fixed in version 3.8.17.
https://github.com/rabbitmq/rabbitmq-server/issues/3072
Boot 1.5.x and Spring AMQP 1.7.x are end of life and no longer supported.
That said, the following applies to 1.7.x too.
This situation will occur if queue recovery takes longer than 15 seconds (by default).
This is controlled by 2 container properties.
/**
 * Set the number of retries after passive queue declaration fails.
 * @param declarationRetries The number of retries, default 3.
 * @since 1.3.9
 * @see #setFailedDeclarationRetryInterval(long)
 */
public void setDeclarationRetries(int declarationRetries) {
    this.declarationRetries = declarationRetries;
}

/**
 * Set the interval between passive queue declaration attempts in milliseconds.
 * @param failedDeclarationRetryInterval the interval, default 5000.
 * @since 1.3.9
 */
public void setFailedDeclarationRetryInterval(long failedDeclarationRetryInterval) {
    this.failedDeclarationRetryInterval = failedDeclarationRetryInterval;
}
You can increase one or both of these to prevent the container from stopping under this condition.
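For example, if you define the listener container yourself rather than through the factory, a minimal sketch inside a @Configuration class might look like the following; the bean name, queue name, and values are illustrative, and give recovery roughly 10 x 10 seconds before the container gives up:

import org.springframework.amqp.core.MessageListener;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.context.annotation.Bean;

@Bean
public SimpleMessageListenerContainer sacListenerContainer(ConnectionFactory connectionFactory) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setQueueNames("single-active-consumer-message-queue");
    // Retry the passive queue declaration more times and wait longer between attempts,
    // so that a slow queue recovery after a node restart does not abort the consumer.
    container.setDeclarationRetries(10);                  // default 3
    container.setFailedDeclarationRetryInterval(10000L);  // default 5000 ms
    container.setMessageListener((MessageListener) message -> {
        // handle the message
    });
    return container;
}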
Regarding the expected behavior of Single Active Consumer queues under this condition, I suggest you ask on the rabbitmq-users Google group.

Why SpringBoot RabbitMQ client auto shutdown connection to rabbitmq server

Getting a RabbitMQ connection error as follows:
2019-07-11 13:14:51.147.AMQP Connection 127.0.0.1:5672> ERROR - TID[] UID[] MID[] CID[] - Channel shutdown: connection error; protocol method: #method(reply-code=541, reply-text=INTERNAL_ERROR, class-id=0, method-id=0)
2019-07-11 13:14:51.831.bulkNotificationContainer-100> WARN - TID[] UID[] MID[] CID[] - Consumer raised exception, processing can restart if the connection factory supports it
com.rabbitmq.client.ShutdownSignalException: connection error; protocol method: #method(reply-code=541, reply-text=INTERNAL_ERROR, class-id=0, method-id=0)
at com.rabbitmq.client.impl.AMQConnection.startShutdown(AMQConnection.java:742) ~[amqp-client-3.6.5.jar!/:na]
at com.rabbitmq.client.impl.AMQConnection.shutdown(AMQConnection.java:732) ~[amqp-client-3.6.5.jar!/:na]
at com.rabbitmq.client.impl.AMQConnection.handleConnectionClose(AMQConnection.java:671) ~[amqp-client-3.6.5.jar!/:na]
at com.rabbitmq.client.impl.AMQConnection.processControlCommand(AMQConnection.java:625) ~[amqp-client-3.6.5.jar!/:na]
at com.rabbitmq.client.impl.AMQConnection$1.processAsync(AMQConnection.java:102) ~[amqp-client-3.6.5.jar!/:na]
at com.rabbitmq.client.impl.AMQChannel.handleCompleteInboundCommand(AMQChannel.java:143) ~[amqp-client-3.6.5.jar!/:na]
at com.rabbitmq.client.impl.AMQChannel.handleFrame(AMQChannel.java:90) ~[amqp-client-3.6.5.jar!/:na]
at com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:549) ~[amqp-client-3.6.5.jar!/:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_73]
My Spring Boot service uses RabbitMQ to send push notifications asynchronously: the push payload is published to and consumed from RabbitMQ, then sent to FCM. This has been working for the last year without any problem, but this morning the RabbitMQ server was restarted as follows:
rabbitmq stop
kill the beam process (the Erlang VM)
rabbitmq start
After that we restarted the Spring Boot service successfully, and the health API shows RabbitMQ as up,
but we get the error above when trying to send a push.
Application properties configuration:
spring.rabbitmq.host=127.0.0.1
spring.rabbitmq.port=5672
spring.rabbitmq.username=rabbitadmin
spring.rabbitmq.password=admin
Custom configuration:
rabbitmq.listeners.retry-policy = UNIFORM_RANDOM_DELAY
rabbitmq.listeners.max-interval=15000
rabbitmq.listener.push-router.concurrent-consumers=2
rabbitmq.listener.push-router.max-concurrent-consumers=10
rabbitmq.binding.push-notification.queue.name=pushqueue
rabbitmq.binding.push-notification.exchange.name=pushexchange
rabbitmq.binding.push-notification.binding.routing-key=pushroute-binding
I want to publish/receive data on a RabbitMQ channel.
This issue was resolved after reinstalling the RabbitMQ packages.
I still don't know why it happened.

IO thread error : 1595 (Relay log write failure: could not queue event from master)

Slave status :
Last_IO_Errno: 1595
Last_IO_Error: Relay log write failure: could not queue event from master
Last_SQL_Errno: 0
From the error log:
[ERROR] Slave I/O for channel 'db12': Unexpected master's heartbeat data: heartbeat is not compatible with local info; the event's data: log_file_name toku10-bin.000063<D1> log_pos 97223067, Error_code: 1623
[ERROR] Slave I/O for channel 'db12': Relay log write failure: could not queue event from master, Error_code: 1595
I tried restarting the slave IO thread many times; still the same.
We need to keep starting the io_thread manually whenever it stops; I suspect it's a bug in Percona.
I simply wrote a shell script, scheduled every 10 minutes, that checks whether the io_thread is running and, if not, runs start slave io_thread for channel 'db12';. It's working as of now.

MassTransit RabbitMQ Request/Response cannot create auto-delete exchange

We are trying to implement a request/response scenario where the messages are deleted when the server (consumer) is down. We start with no exchanges/queues in the RabbitMQ installation.
There is a server which creates its own exchange/queue, and we want this to be auto-delete=true.
If the server comes up before the client, the exchange is created with the correct configuration, but when the client comes up we get this error:
RabbitMQ.Client.Exceptions.OperationInterruptedException: The AMQP operation was interrupted: AMQP close-reason, initiated by Peer, code=406, text="PRECONDITION_FAILED - inequivalent arg 'auto_delete' for exchange 'simple_request' in vhost '****': received 'false' but current is 'true'", classId=40, methodId=10, cause=
If the client is up first and tries to send a message, an exchange is created with the queue name that we have defined, but it is not auto-delete=true, which results in the following error when the server is eventually started:
RabbitMQ receive transport failed: The AMQP operation was interrupted: AMQP close-reason, initiated by Peer, code=406, text="PRECONDITION_FAILED - inequivalent arg 'auto_delete' for exchange 'simple_request' in vhost '****': received 'true' but current is 'false'", classId=40, methodId=10, cause=RabbitMQ receive transport failed: The supervisor is stopping, no additional scopes can be created
How do we implement auto-delete queues in a request response scenario?
You can update the URI in your client for the service queue to include query string parameters so that the queue is created properly.
rabbitmq://host/vhost/queue?autodelete=true&durable=false
Note that I included durable=false, but that only applies if you're using a non-durable queue; I added it for completeness.

How to set keepalive option for individual socket in VxWorks

Is there any way to set keepalive for an individual socket descriptor in VxWorks? I read in some documents that the "SOL_TCP" level of the setsockopt function does this on Linux. Is such a facility available in VxWorks too? If so, please provide related details, such as which header files we need to include and how to use the option.
From the VxWorks "Library Reference" manual (which can be downloaded):
OPTIONS FOR STREAM SOCKETS
The following sections discuss the socket options available for stream (TCP) sockets.
SO_KEEPALIVE -- Detecting a Dead Connection
Specify the SO_KEEPALIVE option to make the transport protocol (TCP) initiate a timer to detect a dead connection:
setsockopt (sock, SOL_SOCKET, SO_KEEPALIVE, &optval, sizeof (optval));
This prevents an application from hanging on an invalid connection. The value at optval for this option is an integer (type int), either 1 (on) or 0 (off).
The integrity of a connection is verified by transmitting zero-length TCP segments triggered by a timer, to force a response from a peer node. If the peer does not respond after repeated transmissions of the KEEPALIVE segments, the connection is dropped, all protocol data structures are reclaimed, and processes sleeping on the connection are awakened with an ETIMEDOUT error.
The ETIMEDOUT timeout can happen in two ways. If the connection is not yet established, the KEEPALIVE timer expires after idling for TCPTV_KEEP_INIT. If the connection is established, the KEEPALIVE timer starts up when there is no traffic for TCPTV_KEEP_IDLE. If no response is received from the peer after sending the KEEPALIVE segment TCPTV_KEEPCNT times with interval TCPTV_KEEPINTVL, TCP assumes that the connection is invalid. The parameters TCPTV_KEEP_INIT, TCPTV_KEEP_IDLE, TCPTV_KEEPCNT, and TCPTV_KEEPINTVL are defined in the file target/h/net/tcp_timer.h.
The IP_TCP_KEEPINTVL option, and also the TCP_KEEPIDLE and TCP_KEEPCNT options, are supported by setsockopt from VxWorks 6.8 onward. In earlier releases of VxWorks you can only change these values globally, which affects all sockets created.
The question below shows how this is done:
How to set TCP keep alive interval for a specific socket fd (not system-wide) in VxWorks?