Has anyone tried Action Cable with Active Job using an adapter other than async?
When I use Active Job (with Sidekiq) to broadcast messages to clients, no data is sent to any of the clients. This also makes sense, because Sidekiq runs as a separate process and doesn't hold the connections to the Action Cable clients.
When I switch Active Job to the async adapter it works, which also makes sense because the jobs then run inside Puma.
Any idea how Sidekiq (or any other adapter) can be used to read jobs from Redis and send messages to all connected clients?
Thanks
Well, it may be a bit late, but I solved this by using the Redis adapter for Action Cable. With the Redis adapter, broadcasts from any process (including Sidekiq workers) are published through Redis pub/sub, so the Puma process that actually holds the WebSocket connections can deliver them to the clients.
In cable.yml, for the relevant environment:
adapter: redis
I have set up a Celery task that uses RabbitMQ as the broker and Redis as the backend. After running it, I noticed that my Redis server was still using a lot of memory. On inspection I found that there were still keys for each task that had been created.
Is there a way to get Celery to clean up these keys only after the result has been retrieved? I know some message brokers use acks; is there an equivalent for the Redis backend in Celery?
Yes, use result_expires. Please note that celery beat should run as well, as written in the documentation:
A built-in periodic task will delete the results after this time (celery.backend_cleanup), assuming that celery beat is enabled. The task runs daily at 4am.
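For reference, a minimal sketch of that setting (the app name and broker/backend URLs here are placeholders, not taken from the question):

    from celery import Celery

    app = Celery('tasks',
                 broker='amqp://guest@localhost//',
                 backend='redis://localhost:6379/0')

    # Result keys become eligible for cleanup after one hour (value is in seconds).
    app.conf.result_expires = 3600

Remember to run a beat process alongside the worker (celery -A tasks beat) so the daily celery.backend_cleanup task actually fires.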
Unfortunately Celery doesn't have acks for its backend, so the best solution for my project was to call forget() on my results once I was done with them.
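A minimal sketch of the forget() approach, assuming a hypothetical tasks module with an add task:

    from tasks import add  # hypothetical task module

    result = add.delay(2, 3)
    value = result.get(timeout=10)  # block until the worker has stored the result
    result.forget()                 # delete the result key from the Redis backend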
I wish to run an experiment in which the publisher loses its connection to the broker, enqueues messages in its own local queue, and then, when it regains connectivity, sends all of its queued messages to the broker. How can I do this, since if I close the connection I can no longer send (it raises an exception)? A trick I can think of is to use a network of two brokers and simulate this by breaking the connection between the two brokers. Is there an API call I can use to do the above?
This is very much like Facebook Messenger or WhatsApp acting as a publisher: our to-send messages are queued while we are offline and sent once we are connected again.
There are plenty of ways to break the connection for testing; here is a non-comprehensive list:
Make a script that can set/unset a firewall rule on your environment, blocking the connection port (see the sketch after this list)
If you are working with VMs, you can suspend/resume the one running ActiveMQ; you can even automate it with tools like Vagrant (vagrant suspend, then vagrant up)
Tweak the connection manually through the ActiveMQ JMX interface
Develop an ActiveMQ plugin able to drop connections on demand (or maybe one already exists?)
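For the first item, a minimal sketch of such a script in Python, assuming Linux with iptables and the default ActiveMQ OpenWire port 61616 (run it with root privileges):

    # toggle_broker.py -- block/unblock the broker port to simulate a lost connection
    import subprocess
    import sys

    RULE = ["INPUT", "-p", "tcp", "--dport", "61616", "-j", "DROP"]

    def block():
        subprocess.run(["iptables", "-A"] + RULE, check=True)

    def unblock():
        subprocess.run(["iptables", "-D"] + RULE, check=True)

    if __name__ == "__main__":
        block() if sys.argv[1:] == ["block"] else unblock()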
Now, in order to get the behaviour you want, there are two options:
1) Make sure your connection uses the failover transport so it can be re-established, and store your messages on disk before sending them with your producer (a rough sketch of this store-and-forward idea follows below).
2) Produce to a local broker embedded in your app, and connect that one to the remote broker.
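To illustrate the store-and-forward idea behind option 1, here is a rough Python sketch using stomp.py; the host, port and queue name are assumptions, the buffer is in memory rather than on disk, and the ActiveMQ Java client gets similar behaviour from a failover:(tcp://...) transport URI instead:

    import stomp  # STOMP client; ActiveMQ accepts STOMP on port 61613 by default

    class BufferingPublisher:
        """Holds messages locally while the broker is unreachable, flushes them on reconnect."""

        def __init__(self, host="localhost", port=61613, queue="/queue/outbound"):
            self.conn = stomp.Connection([(host, port)])
            self.queue = queue
            self.pending = []  # messages held while disconnected

        def publish(self, body):
            try:
                if not self.conn.is_connected():
                    self.conn.connect(wait=True)
                self._flush()
                self.conn.send(destination=self.queue, body=body)
            except Exception:
                self.pending.append(body)  # keep it for the next successful connect

        def _flush(self):
            while self.pending:
                self.conn.send(destination=self.queue, body=self.pending.pop(0))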
I'm running Celery on my laptop, with RabbitMQ as the broker and Redis as the backend. I just used all the default settings and ran celery -A tasks worker --loglevel=info, and it all worked. The workers get jobs done and I can fetch the execution results by calling result.get(). My question is why this works even though I didn't start the RabbitMQ and Redis servers at all, and didn't set up accounts on those servers either. In many tutorials, the first step is to run the broker and backend servers before starting Celery.
I'm new to these tools and don't quite understand how they work behind the scenes. Any input would be greatly appreciated. Thanks in advance.
Never mind. I just realized that Redis and RabbitMQ run automatically after installation or at startup. They must be running for Celery to work.
I have a Java API on my server, and I want it to create tasks and add them to Celery via RabbitMQ. I followed this tutorial, http://www.rabbitmq.com/tutorials/tutorial-two-python.html, using Java for the client (send.java) and Python for the receiver (receive.py). In receive.py, where the callback method is invoked, I call a method decorated with @celery.task so that the task is added to Celery.
I'm wondering how all of this is deployed on a server, though. Specifically, why is there a receive.py file? Is receive.py a process that must run continually on the server? Is there a way to configure RabbitMQ so that it automatically routes Java client tasks to Celery?
Thanks!
RabbitMQ is just a message queue: producers put messages in and consumers take them out on demand. You can only restrict access to specific queues via RabbitMQ's auth options.
As for deployment: yes, receive.py needs to run continuously; that is exactly what a Celery worker does for you. See the Workers Guide for information on running a worker.
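In other words, the thing that runs continuously is a Celery worker consuming tasks from the queue, so a hand-rolled receive.py loop isn't needed. A minimal sketch (the module, broker URL and task name are illustrative, not from the question):

    # tasks.py -- started with: celery -A tasks worker --loglevel=info
    from celery import Celery

    app = Celery('tasks', broker='amqp://guest@localhost//')

    @app.task
    def process(payload):
        # do whatever receive.py's callback used to do with the message
        return payload

Note that the Java client then needs to publish messages in the format the Celery worker expects, onto the queue this worker consumes.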
Trying to work past an issue when using a Resque job to process inbound AMQP messages.
I am using an initializer to set up the message consumer at application startup and then feeding the received messages to a Resque job for processing. That is working quite well.
However, I also want to publish a response message out of the worker, i.e. send it back out to a queue, and I am running into the issue that forking makes the app-wide AMQP connection unaddressable from inside the Resque worker. I would be very interested to hear how other folks have tackled this, as I can't believe this pattern is unusual.
Due to message volumes, firing up a new thread and AMQP connection for every response is not a workable solution.
Ideas?
My bust on this; I had my eye off the ball and forgot that Resque forks when it kicks off a worker. Going to go the route suggested by others and daemonize the process instead...