Default Celery named queues in RabbitMQ

I am using Celery and RabbitMQ for a Django project in which I have created two queues, queue_email and queue_push, served by one worker.
But RabbitMQ has the following queues as well, created by default:
celery
celery.pidbox
celeryev
reply.celery.pidbox
How and why are these default queues created?
Can they be removed if they are not necessary?

I found some information on GitHub, but it is incomplete.
1. The celeryev queues contain the messages that celerymon and Flower use for monitoring purposes.
2. Pidbox is the broadcast messaging system used by Celery to support remote control of workers.
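If you do not need monitoring events or remote control, both sources of these extra queues can be switched off in the Celery configuration (the plain celery queue is simply the default task queue). A minimal sketch, assuming Celery 4.x setting names, so verify them against your version:

from celery import Celery

app = Celery('proj', broker='amqp://')

# Don't start the remote-control consumer, so no pidbox / reply.pidbox
# queues are declared for this worker.
app.conf.worker_enable_remote_control = False

# The celeryev queues only appear when events are enabled, e.g. by running
# the worker with -E or by monitoring tools such as Flower.
app.conf.worker_send_task_events = False
app.conf.task_send_sent_event = False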
Reference:
These issues may help:
Preventing Celery from creating Exchanges celery, celeryev, celeryev.pidbox, reply.celery.pidbox #3895
Hundreds of queues being created #1801

Related

rabbitmq-server start losing data over durable queues

On Windows, when I use the rabbitmq-server start/stop commands, data in the RabbitMQ durable queues is deleted. It seems the queues are re-created when I start the RabbitMQ server.
If I use rabbitmqctl stop_app/start_app, I do not lose any data. Why?
What will happen if my server goes down, and how can I be sure that I won't lose data if it does?
Configuration issue: I was starting RabbitMQ from the RabbitMQ sbin directory. I re-installed RabbitMQ and added it to Windows services. Now the data loss problem is solved on my computer. When I start/stop the Windows service, RabbitMQ does not lose any data.
Making queues durable is not enough. You will probably also need to declare the exchange as durable and send 'persistent' messages.
In Java you would use:
channel.basicPublish("", "sample_queue",
                     MessageProperties.PERSISTENT_TEXT_PLAIN, // note that this parameter is not null!
                     message.getBytes());
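A rough Python equivalent using the pika client (queue name and message body are placeholders; the default exchange is used here, so there is no separate exchange to declare):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# The queue must survive a broker restart...
channel.queue_declare(queue='sample_queue', durable=True)

# ...and each message must be marked persistent (delivery_mode=2),
# otherwise it is kept only in memory.
channel.basic_publish(
    exchange='',
    routing_key='sample_queue',
    body=b'important message',
    properties=pika.BasicProperties(delivery_mode=2),
)
connection.close()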

Recognize RabbitMQ master node in high-availability cluster

I would like to run RabbitMQ highly available queues in a cluster of two RabbitMQ instances on two separate servers. It's not clear to me from the documentation how I can detect which node RabbitMQ considers the master, in order to determine which node I should publish messages to and consume from.
Is that something that RabbitMQ resolves internally (so I can publish to and consume from the master even when connected to a slave node), or should the application know the master node for each queue and connect only to it?
RabbitMQ will take care of that. The idea of HA queues is that you publish and consume from either node, and RabbitMQ will try to keep a consistent state.
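In practice you can hand the client both nodes and let it fall back if one is unreachable. A minimal sketch with the Python pika client, assuming pika 1.x (which accepts a sequence of connection parameters) and placeholder hostnames:

import pika

# Both cluster nodes; pika tries them until one accepts the connection.
params = [
    pika.ConnectionParameters(host='rabbit-node-1'),
    pika.ConnectionParameters(host='rabbit-node-2'),
]
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue='my_ha_queue', durable=True)
channel.basic_publish(exchange='', routing_key='my_ha_queue', body=b'hello')
connection.close()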

Celery and HA RabbitMQ

I am developing an application in which Celery, with RabbitMQ as the backend, is a core component. Does Celery support the use case where there are several RabbitMQ nodes and, when one node goes down, Celery switches to another node? What is the best option for handling cases when RabbitMQ is down, in order to achieve high availability?
There are no HA features in Celery itself. Instead, you can use HAProxy + RabbitMQ for load balancing with fault detection; a sketch of pointing Celery at the proxy follows the links below. For more information, see:
http://www.joshdevins.net/2010/04/16/rabbitmq-ha-testing-with-haproxy/
http://www.sebastien-han.fr/blog/2012/05/21/openstack-high-availability-rabbitmq/
http://www.amazon.com/RabbitMQ-Action-Distributed-Messaging-Everyone/dp/1935182978 - chapter 5: Clustering and dealing with failure
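With that setup, the Celery application only needs to point at the proxy. The broker URL below is a hypothetical HAProxy frontend, not a real host:

from celery import Celery

# HAProxy listens on this address and forwards to whichever RabbitMQ node
# is currently healthy; hostname and credentials are placeholders.
app = Celery('tasks', broker='amqp://guest:guest@rabbit-haproxy.example.com:5672//')

@app.task
def add(x, y):
    return x + y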

Can a celery worker/server accept tasks from a non celery producer?

I want to use a comet server written using Java NIO for sending out live updates. When it receives information, I want it to scan the data and send tasks to worker threads via RabbitMQ. Ideally I would like a Celery server to sit on the other end of RabbitMQ, managing a pool of worker threads that will handle these tasks.
However, from my understanding, Celery works by sitting on both ends of RabbitMQ, essentially taking over the roles of producer and consumer by being embedded in both the producer's and the consumer's code. Is there a way to set up Celery as I described above? Thanks.
Yes, of course!
You can add custom message consumers to a Celery app.
Please refer to Extensions and Bootsteps in the Celery documentation.
Here is part of the example code from the link above:
from celery import Celery
from celery import bootsteps
from kombu import Consumer, Exchange, Queue

my_queue = Queue('custom', Exchange('custom'), 'routing_key')

app = Celery(broker='amqp://')

class MyConsumerStep(bootsteps.ConsumerStep):

    def get_consumers(self, channel):
        return [Consumer(channel,
                         queues=[my_queue],
                         callbacks=[self.handle_message],
                         accept=['json'])]

    def handle_message(self, body, message):
        print('Received message: {0!r}'.format(body))
        message.ack()

app.steps['consumer'].add(MyConsumerStep)
Test it:
python -m celery -A main worker
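To push test messages into that custom queue from outside Celery, any AMQP client will do. A minimal kombu sketch, reusing the queue declared above (broker URL and payload are placeholders):

from kombu import Connection, Exchange, Queue

my_queue = Queue('custom', Exchange('custom'), 'routing_key')

with Connection('amqp://') as conn:
    producer = conn.Producer(serializer='json')
    producer.publish(
        {'hello': 'world'},      # arrives in handle_message() as `body`
        exchange=my_queue.exchange,
        routing_key='routing_key',
        declare=[my_queue],      # make sure the queue exists before publishing
    )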
See also: Using Celery with existing RabbitMQ messages
It is not necessary to use Celery to publish messages. You can publish messages to RabbitMQ or to another broker from your own app and use Celery only to consume the tasks.
Celery uses a simple message protocol. You can implement the client side of it in your application.
If you don't want to implement the client side of the protocol, you can implement a simple HTTP server that accepts requests and makes the appropriate calls, like this.
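A minimal sketch of that last idea, assuming a Flask front end and an existing Celery task module named tasks with a task called process_data (both hypothetical):

from flask import Flask, request, jsonify

from tasks import process_data  # hypothetical Celery task

api = Flask(__name__)

@api.route('/enqueue', methods=['POST'])
def enqueue():
    # The non-Python producer only speaks HTTP; Celery handles the queueing.
    result = process_data.delay(request.get_json())
    return jsonify({'task_id': result.id}), 202

if __name__ == '__main__':
    api.run(port=8000)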

Using Apache Camel for Load Balancing

Can I access a SEDA or VM queue from another machine or JVM?
I actually want to implement load balancing with the help of Camel but do not want to introduce another messaging framework for this. I just want to distribute load from a producer to different consumers using some built-in queue.
Is it possible? If not, what are my options?
Another approach (pull approach):
I am not sure how optimal this new approach is, or what its advantages and disadvantages are, so please help me analyze it.
Messages will be put into a master queue and all the worker systems will listen to it. Let's say 100,000 messages are put into the master queue and 5 worker systems are listening to it. The worker systems will process the messages one by one from the master queue. There are two big benefits to this approach:
I don't need to worry about registering my worker systems with the producer. A sixth system can just boot up and start listening to the master queue.
I don't need to worry about sending a message to a consumer system that is free. When a worker system is done processing a message, it picks up another one from the master queue.
Let me know your thoughts on it.
SEDA and VM:// endpoints work only within the same JVM.
Load balancing in Java messaging is usually achieved using JMS and the Competing Consumers pattern: you send messages to a queue and multiple consumers compete to process them.
If the broker with its queue becomes a bottleneck, consider using the fan-out pattern and a network of brokers.
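Outside of Camel and JMS, the same competing-consumers (pull) idea can be sketched directly against RabbitMQ. This Python/pika example uses a hypothetical master_queue and is only meant to illustrate the pattern; run one copy per worker system:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='master_queue', durable=True)

# Fair dispatch: each worker holds one unacknowledged message at a time,
# so a free worker pulls the next message as soon as it finishes.
channel.basic_qos(prefetch_count=1)

def handle(ch, method, properties, body):
    print('processing', body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue='master_queue', on_message_callback=handle)
channel.start_consuming()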
SEDA and VM endpoints are valid for the hosting CamelContext and the JVM, respectively. To facilitate JVM-to-JVM messaging you will need to use an over-the-wire protocol component such as (but not limited to) Mina, HTTP, or JMS.
The easiest way is to use JMS. If you have n routes listening on the same JMS queue, they will automatically load balance; if one goes away, the load will be balanced over the remaining ones. I recommend starting with ActiveMQ, as it is very easy to set up and well integrated with Camel. To make the broker highly available, you can either set up two standalone brokers or set up one embedded broker per Camel instance.