RabbitMQ consumer not consuming after a certain time

My RabbitMQ consumer consumes very fast initially, but when I send messages after 8-9 hours it does not consume at all. I tried increasing the heartbeat from 30 to 600, keeping prefetch at 1, and acknowledging after processing, but I am still facing the same issue. I'm using a blocking connection. Is there any other reason for this behaviour?
self.rabbit_mq_creds = self.rabbit_mq_credentials()
self.credentials = pika.PlainCredentials(username=xx,
                                         password=xx)
self.parameters = pika.ConnectionParameters(host=xx,
                                            port=xx,
                                            virtual_host=xx,
                                            credentials=self.credentials,
                                            heartbeat=600,
                                            retry_delay=180,
                                            connection_attempts=3)
self.connection = pika.BlockingConnection(self.parameters)
self.channel = self.connection.channel()
self.input_queue = "input_queue"
self.channel.queue_declare(queue=self.input_queue, durable=True,
                           arguments={"x-queue-type": "quorum"})
self.channel.basic_qos(prefetch_count=1)
self.channel.basic_consume(queue=self.input_queue, on_message_callback=self.on_request)

def on_request(self, ch, method, props, body):
    temp = {}
    try:
        data = json.loads(body.decode('utf-8'))
        # ... data processing ...
    except Exception as k:
        logger.error(f"Error: {k} \n\nTrace: {traceback.format_exc()}")
    ch.basic_publish(exchange='', routing_key=props.reply_to,
                     properties=pika.BasicProperties(correlation_id=props.correlation_id),
                     body=str(temp))
    ch.basic_ack(delivery_tag=method.delivery_tag)

def trigger_consumption(self):
    self.channel.start_consuming()
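No answer was posted for this one, but a frequent cause with pika's BlockingConnection is that heartbeats are only serviced while the connection's I/O loop is running: if the data-processing step inside on_request takes longer than the heartbeat window, the broker drops the connection and the consumer silently stops receiving. A minimal sketch of the workaround from pika's own threaded-consumer examples, assuming processing can be slow (process_body below is a hypothetical stand-in for the real work):
import functools
import threading

def process_body(body):
    pass  # hypothetical stand-in for the slow data processing

def do_work(connection, ch, method, body):
    # Runs on a worker thread, so the connection's I/O loop stays free
    # to answer broker heartbeats even if this takes minutes.
    process_body(body)
    # Acks must be issued from the connection's own thread.
    ack = functools.partial(ch.basic_ack, delivery_tag=method.delivery_tag)
    connection.add_callback_threadsafe(ack)

def on_request(connection, ch, method, props, body):
    worker = threading.Thread(target=do_work, args=(connection, ch, method, body))
    worker.start()
In the class above you would pass self.connection in via functools.partial when registering the callback. With prefetch_count=1, at most one such worker is in flight at a time, so this stays close to the original design.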

Related

How to consume all messages from a RabbitMQ queue in batches?

My RabbitMQ consumer has to process messages in fixed-size batches with a prefetch count of 15. This is limited to 15 in order to match the AWS SES email send rate - the consumer process has to issue SES API requests in parallel. What would be the best way of dealing with a partial final batch, i.e. one that's left with fewer than 15 messages?
Any new messages are added to the consumer's batch array. When the batch size is reached, the batch is processed and an acknowledgement is sent back for all messages in the batch.
In my scenario, the 'leftover messages' in the final batch must be processed before the 10-second connection timeout comes into effect. Is there a way to implement a timeout on the callback function so that, if the number of messages received within a certain period is less than the prefetch count, the remaining messages in the consumer's temporary batch array are processed and acknowledged? It's worth mentioning that no new messages are published to the queue while the consumer process is executing. Thanks in advance.
$connection = new AMQPStreamConnection(HOST, PORT, USER, PASS);
$channel = $connection->channel();
$channel->queue_declare('test_queue', true, false, false, false);
$channel->basic_qos(null, 15, null);

$callback = function($message){
    Ack_Globals::$msg_batch[] = $message;
    Ack_Globals::$msg_count++;
    $time_diff = Ack_Globals::$cur_time - Ack_Globals::$lbatch_recieved;
    if(sizeof(Ack_Globals::$msg_batch) >= 15){
        $time_end = microtime(true);
        Ack_Globals::$lbatch_recieved = $time_end;
        Ack_Globals::$batch_count = Ack_Globals::$batch_count + 1;
        //Calculate the average time to create this batch
        Ack_Globals::$bgen_time_avg = ($time_end - Ack_Globals::$time_start)/Ack_Globals::$batch_count;
        //Process this batch
        /* Process */
        //Acknowledge this batch
        $message->delivery_info['channel']->basic_ack($message->delivery_info['delivery_tag'], true);
        echo "\nMessage Count: ".Ack_Globals::$msg_count;
        echo "\nSize of Array: ".sizeof(Ack_Globals::$msg_batch);
        echo "\nLast batch received: ".Ack_Globals::$lbatch_recieved;
        echo "\nBatch ".Ack_Globals::$batch_count." processed.";
        echo "\nAverage batch generation time: ".Ack_Globals::$bgen_time_avg;
        //Clear the batch array
        Ack_Globals::$msg_batch = array();
    }
};

if ((Ack_Globals::$batch_count === 0) && (Ack_Globals::$msg_count === 0)){
    //Initialise the timer
    Ack_Globals::$time_start = microtime(true);
}

$channel->basic_consume('test_queue', '', false, false, false, false, $callback);

while ($channel->is_consuming()){
    Ack_Globals::$cur_time = microtime(true);
    $channel->wait(null, false, 10);
}

$channel->close();
$connection->close();
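No answer was posted here, but the usual trick is to let the consume-wait timeout itself be the flush signal for the partial batch. A sketch of the same idea in Python with pika (the client used elsewhere on this page), since its consume generator exposes the timeout directly; send_batch below is a hypothetical stand-in for the parallel SES calls:
import pika

BATCH_SIZE = 15

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.basic_qos(prefetch_count=BATCH_SIZE)

batch = []  # (delivery_tag, body) pairs

def flush(batch):
    if not batch:
        return
    send_batch([body for _, body in batch])  # hypothetical SES fan-out
    # Ack everything up to the newest delivery tag in one frame.
    channel.basic_ack(delivery_tag=batch[-1][0], multiple=True)
    batch.clear()

# inactivity_timeout makes the generator yield (None, None, None)
# after 5 idle seconds instead of blocking forever.
for method, properties, body in channel.consume('test_queue', inactivity_timeout=5):
    if method is None:  # queue went quiet: flush the partial batch
        flush(batch)
        break
    batch.append((method.delivery_tag, body))
    if len(batch) >= BATCH_SIZE:
        flush(batch)

channel.cancel()
connection.close()
Acking with multiple=True confirms every delivery up to the newest tag in a single frame, which is what the PHP code above does with its basic_ack($tag, true).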

ActiveMQ/STOMP Clear Scheduled Messages Pointed To Destination

I would like to remove messages that are scheduled to be delivered to a specific queue, but I'm finding the process to be unnecessarily burdensome.
Here I am sending a blank message to a queue with a delay:
self._connection.send(body="test", destination="/queue/my-queue", headers={
    "AMQ_SCHEDULED_DELAY": 100_000_000,
    "foo": "bar"
})
And here I would like to clear the scheduled messages for that queue:
self._connection.send(destination="ActiveMQ.Scheduler.Management", headers={
    "AMQ_SCHEDULER_ACTION": "REMOVEALL",
}, body="")
Of course the "destination" here needs to be ActiveMQ.Scheduler.Management instead of my actual queue. But I can't find any way to delete scheduled messages that are destined for queue/my-queue. I tried using the selector header, but that doesn't seem to work for AMQ_SCHEDULER_ACTION type messages.
The only suggestion I've seen is to write a consumer to browse all of the scheduled messages, inspect each one for its destination, and delete each schedule by its ID. This seems insane to me, as I don't have just a handful of messages but many millions that I'd like to delete.
Is there a way I could send a command to ActiveMQ to clear scheduled messages with a custom header value?
Maybe I can define a custom scheduled-messages location for each queue?
Edit:
I've written a wrapper around the stomp.py connection to handle purging schedules destined for a queue. The MQStompFacade takes an existing stomp.Connection and the name of the queue you are working with, and provides enqueue, enqueue_many, receive, purge, and move.
When receiving from a queue, if include_delayed is True, it subscribes to both the queue and a topic that consumes the schedules. Assuming the messages were enqueued with this class and have the name of the original destination queue as a custom header, scheduled messages that aren't destined for the receiving queue are filtered out.
Not yet tested in production. There are probably a lot of optimizations to be made here.
Usage:
stomp = MQStompFacade(connection, "my-queue")
stomp.enqueue_many([
    EnqueueRequest(message="hello"),
    EnqueueRequest(message="goodbye", delay_millis=100_000)
])
stomp.purge()  # <- removes queued and scheduled messages destined for "/queue/my-queue"
class MQStompFacade(ConnectionListener):
    def __init__(self, connection: Connection, queue: str):
        self._connection = connection
        self._queue = queue
        self._messages: List[Message] = []
        self._connection_id = rand_string(6)
        self._connection.set_listener(self._connection_id, self)

    def __del__(self):
        self._connection.remove_listener(self._connection_id)

    def enqueue_many(self, requests: List[EnqueueRequest]):
        txid = self._connection.begin()
        for request in requests:
            headers = request.headers or {}
            # Used in scheduled message selectors
            headers["queue"] = self._queue
            if request.delay_millis:
                headers['AMQ_SCHEDULED_DELAY'] = request.delay_millis
            if request.priority is not None:
                headers['priority'] = request.priority
            self._connection.send(body=request.message,
                                  destination=f"/queue/{self._queue}",
                                  txid=txid,
                                  headers=headers)
        self._connection.commit(txid)

    def enqueue(self, request: EnqueueRequest):
        self.enqueue_many([request])

    def purge(self, selector: Optional[str] = None):
        num_purged = 0
        for _ in self.receive(idle_timeout=5, selector=selector):
            num_purged += 1
        return num_purged

    def move(self, destination_queue: AbstractQueueFacade,
             selector: Optional[str] = None):
        buffer_size = 500
        move_buffer = []
        for message in self.receive(idle_timeout=5, selector=selector):
            move_buffer.append(EnqueueRequest(
                message=message.body
            ))
            if len(move_buffer) >= buffer_size:
                destination_queue.enqueue_many(move_buffer)
                move_buffer = []
        if move_buffer:
            destination_queue.enqueue_many(move_buffer)

    def receive(self,
                max: Optional[int] = None,
                timeout: Optional[int] = None,
                idle_timeout: Optional[int] = None,
                selector: Optional[str] = None,
                peek: Optional[bool] = False,
                include_delayed: Optional[bool] = False):
        """
        Receive messages until one of the following conditions is met.

        Args:
            max: Receive messages until [max] messages have been received
            timeout: Receive messages until this timeout is reached
            idle_timeout (seconds): Receive messages until the queue is idle for this amount of time
            selector: JMS selector that can be applied to message headers. See https://activemq.apache.org/selector
            peek: Set to True to disable automatic ack on matched criteria. Peeked messages will remain in the queue
            include_delayed: Set to True to return messages scheduled for delivery in the future
        """
        self._connection.subscribe(f"/queue/{self._queue}",
                                   id=self._connection_id,
                                   ack="client",
                                   selector=selector)

        if include_delayed:
            browse_topic = f"/topic/scheduled_{self._queue}_{rand_string(6)}"
            schedule_selector = f"queue = '{self._queue}'"
            if selector:
                schedule_selector = f"{schedule_selector} AND ({selector})"
            self._connection.subscribe(browse_topic,
                                       id=self._connection_id,
                                       ack="auto",
                                       selector=schedule_selector)
            self._connection.send(
                destination="ActiveMQ.Scheduler.Management",
                headers={
                    "AMQ_SCHEDULER_ACTION": "BROWSE",
                    "JMSReplyTo": browse_topic
                },
                id=self._connection_id,
                body=""
            )

        listen_start = time.time()
        last_receive = time.time()
        messages_received = 0
        scanning = True
        empty_receive = False
        while scanning:
            try:
                message = self._messages.pop()
                last_receive = time.time()
                if not peek:
                    self._ack(message)
                messages_received += 1
                yield message
            except IndexError:
                empty_receive = True
                time.sleep(0.1)

            if max and messages_received >= max:
                scanning = False
            elif timeout and time.time() > listen_start + timeout:
                scanning = False
            elif empty_receive and idle_timeout and time.time() > last_receive + idle_timeout:
                scanning = False
            else:
                scanning = True

        self._connection.unsubscribe(id=self._connection_id)

    def on_message(self, frame):
        destination = frame.headers.get("original-destination", frame.headers.get("destination"))
        schedule_id = frame.headers.get("scheduledJobId")
        message = Message(
            attributes=MessageAttributes(
                id=frame.headers["message-id"],
                schedule_id=schedule_id,
                timestamp=frame.headers["timestamp"],
                queue=destination.replace("/queue/", "")
            ),
            body=frame.body
        )
        self._messages.append(message)

    def _ack(self, message: Message):
        """
        Deletes the message from the queue.
        If the message has a schedule_id, also removes the associated scheduled job.
        """
        if message.attributes.schedule_id:
            self._connection.send(
                destination="ActiveMQ.Scheduler.Management",
                headers={
                    "AMQ_SCHEDULER_ACTION": "REMOVE",
                    "scheduledJobId": message.attributes.schedule_id
                },
                id=self._connection_id,
                body=""
            )
        self._connection.ack(message.attributes.id, subscription=self._connection_id)
In order to remove specific messages you need to know the ID, which you can get via a browse of the scheduled messages. The only other option available is to use the start and stop time options in the remove operations to remove all messages inside a given time range.
MessageProducer producer = session.createProducer(management);
Message request = session.createMessage();
request.setStringProperty(ScheduledMessage.AMQ_SCHEDULER_ACTION, ScheduledMessage.AMQ_SCHEDULER_ACTION_REMOVEALL);
request.setStringProperty(ScheduledMessage.AMQ_SCHEDULER_ACTION_START_TIME, Long.toString(start));
request.setStringProperty(ScheduledMessage.AMQ_SCHEDULER_ACTION_END_TIME, Long.toString(end));
producer.send(request);
If that doesn't suit your needs, I'm sure the project would welcome contributions.

Get message count in ActiveMQ queue or receiver in C#

I'm at the consumer end, reading messages from a queue. How do I get the count of the total messages in a queue (IDestination) and pass a number of messages (let's say 100) at a time to the receiver?
Please help me achieve this in C#/.NET.
One means of achieving this would be to enable the Statistics Broker Plugin and then send control commands to the broker from your C# client to obtain statistics on the queues you are interested in. The following is a Java-based example, but you could produce something quite similar using Apache.NMS.ActiveMQ if that is what you are using for your C# client.
Queue replyTo = session.createTemporaryQueue();
MessageConsumer consumer = session.createConsumer(replyTo);
Queue testQueue = session.createQueue("TEST.FOO");
MessageProducer producer = session.createProducer(null);
String queueName = "ActiveMQ.Statistics.Destination." + testQueue.getQueueName();
Queue query = session.createQueue(queueName);
Message msg = session.createMessage();
producer.send(testQueue, msg);
msg.setJMSReplyTo(replyTo);
producer.send(query, msg);
MapMessage reply = (MapMessage) consumer.receive();
assertNotNull(reply);
assertTrue(reply.getMapNames().hasMoreElements());
for (Enumeration e = reply.getMapNames(); e.hasMoreElements();) {
    String name = e.nextElement().toString();
    System.err.println(name + "=" + reply.getObject(name));
}
That would allow you to see data such as:
memoryUsage=0
dequeueCount=0
inflightCount=0
messagesCached=0
averageEnqueueTime=0.0
destinationName=queue://TEST.FOO
size=1
memoryPercentUsage=0
producerCount=0
consumerCount=0
minEnqueueTime=0.0
maxEnqueueTime=0.0
dispatchCount=0
expiredCount=0
enqueueCount=1
memoryLimit=67108864
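The Java snippet carries over to other protocols too, since the statistics plugin answers any client that sets a reply-to header. A sketch of the same query from Python with stomp.py (the library used elsewhere on this page), assuming the plugin is enabled in activemq.xml and the STOMP transport listens on 61613; the transformation header is ActiveMQ's STOMP extension that renders the MapMessage reply as JSON, and the reply queue name is an arbitrary choice:
import json
import time

import stomp  # stomp.py

class StatsListener(stomp.ConnectionListener):
    def on_message(self, frame):
        # With transformation=jms-map-json the MapMessage body arrives as JSON.
        print(json.loads(frame.body))

conn = stomp.Connection([('localhost', 61613)])
conn.set_listener('', StatsListener())
conn.connect('admin', 'admin', wait=True)

reply_queue = '/queue/stats.reply'  # assumed reply destination
conn.subscribe(destination=reply_queue, id='stats', ack='auto',
               headers={'transformation': 'jms-map-json'})

# Ask the statistics plugin about TEST.FOO; the broker answers on the
# reply-to destination with size, enqueueCount, consumerCount, etc.
conn.send(destination='/queue/ActiveMQ.Statistics.Destination.TEST.FOO',
          body='', headers={'reply-to': reply_queue})

time.sleep(2)
conn.disconnect()
The size entry of the reply is the queue depth you are after; handing the receiver 100 messages at a time is then a prefetch/batching concern on the consumer side.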

Celery worker not publishing message to RabbitMQ?

I have a setup where celery_result_backend has been configured to 'amqp'. I can see my tasks getting executed by the worker in the logs, but it is creating the queue with the task id and its status is expired. I am not getting the result (result = AsyncResult(taskid); result.get() hangs). I tried all the supported backends:
1) MySQL: it is not putting data into the Celery-created tables.
2) Redis: it is not putting data into the db.
I have two CentOS systems.
1) I am calling the delay method to send the task to the proper RabbitMQ queue, and the worker is listening to that queue, from where it picks the task up and processes it (I can see the task in the queue and getting executed by the worker on machine 2, but the result is not being put into the backend). Here I am doing result.get(), and it hangs.
2) The worker is running on it to execute the task. It executes the task, but I think it is not able to put the result.
Settings:
RABBITMQ_BROKER_HOST = '10.213.166.133'
RABBITMQ_BROKER_PORT = dqms_settings.RABBITMQ_BROKER_PORT
RABBITMQ_BROKER_VHOST = dqms_settings.RABBITMQ_BROKER_VHOST
RABBITMQ_BROKER_USERNAME = dqms_settings.RABBITMQ_BROKER_USERNAME
RABBITMQ_BROKER_PASSWORD = dqms_settings.RABBITMQ_BROKER_PASSWORD

BROKER_URL = 'amqp://%s:%s@%s:%s/%s' % (RABBITMQ_BROKER_USERNAME,
                                        RABBITMQ_BROKER_PASSWORD,
                                        RABBITMQ_BROKER_HOST,
                                        RABBITMQ_BROKER_PORT,
                                        RABBITMQ_BROKER_VHOST)

#CELERY_TASK_RESULT_EXPIRES = 18000
#CELERY_IGNORE_RESULT = True
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
#CELERY_RESULT_BACKEND = 'db+mysql://svcacct-dqms:s3cretP#ssw0rd@10.213.166.202:3306/dqms'
#CELERY_RESULT_BACKEND = 'amqp'
#CELERY_AMQP_TASK_RESULT_EXPIRES = 1000
#CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = TIME_ZONE
CELERYD_PREFETCH_MULTIPLIER = dqms_settings.CELERYD_PREFETCH_MULTIPLIER
CELERY_DEFAULT_QUEUE = dqms_settings.CELERY_DEFAULT_QUEUE
CELERY_DEFAULT_EXCHANGE_TYPE = dqms_settings.CELERY_DEFAULT_EXCHANGE_TYPE
CELERY_DEFAULT_ROUTING_KEY = dqms_settings.CELERY_DEFAULT_ROUTING_KEY
CELERY_QUEUES = dqms_settings.CELERY_QUEUES
CELERY_ROUTES = dqms_settings.CELERY_ROUTES
CELERYD_HIJACK_ROOT_LOGGER = dqms_settings.CELERYD_HIJACK_ROOT_LOGGER
CELERY_ACKS_LATE = dqms_settings.CELERY_ACKS_LATE
CELERY_RESULT_BACKEND = 'redis://:s3cretP#ssw0rd@10.213.166.204:6379/5'  # 'djcelery.backends.database.DatabaseBackend'
#CELERY_REDIS_MAX_CONNECTIONS = 6
#CELERY_ALWAYS_EAGER = False
Can someone help me understand why it is not putting the result in the queue?
This is an issue which is happening quite commonly now.
Setting CELERY_ALWAYS_EAGER to True will do the work.
However, this is not the best solution in a production scenario.
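Worth spelling out why: CELERY_ALWAYS_EAGER makes delay() run the task in-process and synchronously, so it masks the broker and backend rather than fixing them. For the original symptom, the usual checklist is that results are neither ignored nor already expired, and that the backend URL actually parses (note the '#' vs '@' mix-ups in the settings above). A minimal sketch, with placeholder hosts and credentials:
from celery import Celery

# Hypothetical minimal app: broker and backend hosts are placeholders.
app = Celery('tasks',
             broker='amqp://user:password@10.0.0.1:5672/myvhost',
             backend='redis://:password@10.0.0.2:6379/5')

# Results must not be ignored, or AsyncResult.get() will hang forever.
app.conf.CELERY_IGNORE_RESULT = False
app.conf.CELERY_TASK_RESULT_EXPIRES = 3600  # keep results for an hour

@app.task
def add(x, y):
    return x + y

if __name__ == '__main__':
    # A timeout turns a silent hang into a diagnosable TimeoutError.
    result = add.delay(2, 3)
    print(result.get(timeout=10))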

pika dropping basic_publish messages sporadically

I'm trying to publish messages with pika, using Celery tasks.
import json

import pika
from celery import shared_task
from django.conf import settings

@shared_task
def publish_message():
    params = pika.URLParameters(settings.BROKER_URL + '?socket_timeout=10&connection_attempts=2')
    conn = pika.BlockingConnection(parameters=params)
    channel = conn.channel()
    channel.exchange_declare(
        exchange='foo',
        exchange_type='topic'
    )
    channel.tx_select()
    channel.basic_publish(
        exchange='foo',
        routing_key='bar',
        body=json.dumps({'foo': 'bar'}),
        properties=pika.BasicProperties(content_type='application/json')
    )
    channel.tx_commit()
    conn.close()
This task is called from the views.
For some reason, seemingly at random, messages are not getting queued; in my case, every second message is dropped. What am I missing here?
I would recommend that you enable confirm_delivery in pika. This will ensure that messages get delivered properly; if for some reason a message could not be delivered, pika will either fail with an exception or return False.
channel.confirm_delivery()
successful = channel.basic_publish(...)
If the publish fails, you can try to send the message again, or log the error message from the exception so that you can act accordingly.
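One caveat, as an aside: the boolean return value is pika 0.x behaviour; in pika 1.x, basic_publish returns None and, with confirms enabled, failures surface as UnroutableError or NackError exceptions instead. A sketch of the exception-based variant, assuming the 'foo' topic exchange from the question already exists:
import json

import pika
from pika.exceptions import NackError, UnroutableError

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.confirm_delivery()  # broker now confirms (or nacks) every publish

try:
    # mandatory=True makes an unroutable message surface as UnroutableError
    channel.basic_publish(
        exchange='foo',
        routing_key='bar',
        body=json.dumps({'foo': 'bar'}),
        properties=pika.BasicProperties(content_type='application/json'),
        mandatory=True)
except (UnroutableError, NackError) as err:
    # Retry or log here instead of silently losing the message.
    print(f"publish failed: {err}")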
Try this:
channel = conn.channel()
try:
    channel.queue_declare(queue='foo')
except Exception:
    pass
channel.basic_publish(
    exchange='',
    routing_key='foo',
    body=json.dumps({'foo': 'bar'})
)