How to delete temporary RabbitMQ queues once the corresponding result has been retrieved?

My question builds off this one: Temporary queue made in Celery
My application needs to retrieve results, as it uploads them to an S3 file. However, the number of temporary queues being created is causing my broker to crash (the machine doesn't have enough memory). I want to delete the temporary queue once the corresponding result has been retrieved. In my Celery client script, I am iterating through a list of results (where each result is from function.delay()):
for result in result_list:
    while True:
        if result.ready():
            # do something with result
            # I WANT TO DELETE TEMPORARY QUEUE HERE
            break
Is there any way I can achieve the above -- deleting the temporary queue once the result has been retrieved?
I would have used the CELERY_TASK_RESULT_EXPIRES option in my celeryconfig, but I don't know when I can safely clean up the temporary queue, as the result may not have been retrieved yet. Is there any way I can delete specific queues in this script (note that I have the queue ID from the result)?
ADDITIONAL NOTE:
I am running all rabbitmq servers in a cluster with HA enabled.

The way I did this was to use the rabbitmqadmin tool from RabbitMQ. I downloaded it via
wget localhost:15672/cli/rabbitmqadmin
after installing the management plugin
rabbitmq-plugins enable rabbitmq_management
Make sure your user has the administrator tag in RabbitMQ, or you will not be able to perform commands. I then deleted the queue in my script using Python's subprocess module and rabbitmqadmin delete queue name=''. Keep in mind that the queue name is the same as the corresponding result id, except without the hyphens.
Also make sure you add the parameters -v myvhost -u myusername -p mypassword to rabbitmqadmin commands; the default vhost is /.
I believe this will delete queues across all nodes in a cluster, though I am not completely sure of this.
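For reference, here is a minimal sketch of what that client loop can look like, assuming rabbitmqadmin is on the PATH; delete_queue is just an illustrative helper built on subprocess, not part of Celery:

import subprocess
import time

def delete_queue(queue_name):
    # Illustrative helper: shells out to rabbitmqadmin as described above.
    # Add the vhost/user/password parameters mentioned above if your setup needs them.
    subprocess.check_call(['rabbitmqadmin', 'delete', 'queue', 'name=%s' % queue_name])

for result in result_list:
    while not result.ready():
        time.sleep(1)
    # ... do something with the result, e.g. upload it to S3 ...
    queue_name = result.id.replace('-', '')  # queue name is the result id without the hyphens
    delete_queue(queue_name)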

Related

How to attach multiple workers to a queue in RabbitMQ

I am using an exchange-based pattern in RabbitMQ.
Producer --> Exchange --> Queues --> Consumer1
How do I run multiple consumers (C1, C2, C3, and so on) for load balancing and scalability of the consumers?
Is it OK to run ./worker.js two or three times, depending on usage?
Yes, it should be OK to run your worker multiple times, as that runs multiple instances of your worker listening to your queue, which achieves what you want. Please refer to this tutorial from RabbitMQ for more info; specifically, see the section Round-robin dispatching.
To quote a few details:
One of the advantages of using a Task Queue is the ability to easily parallelise work. If we are building up a backlog of work, we can just add more workers and that way, scale easily.
You need three consoles open. Two will run the worker.js script. These consoles will be our two consumers - C1 and C2.
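For illustration only, here is a minimal sketch of such a worker in Python with pika (the question uses worker.js under Node, so treat this purely as an assumed equivalent); run several copies of it against the same queue and RabbitMQ will round-robin deliveries between them:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='task_queue', durable=True)

# Give each worker only one unacknowledged message at a time,
# so a busy worker does not accumulate a backlog.
channel.basic_qos(prefetch_count=1)

def callback(ch, method, properties, body):
    # ... process the message ...
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue='task_queue', on_message_callback=callback)
channel.start_consuming()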
Just to add to @AJS's answer: you may want to make use of a process monitor/manager like Supervisord to manage your long-running program C1 and, most importantly, to run multiple copies of it (C1, C2, C3, and so on). Just install Supervisor in your environment (local, VPS, Docker, etc.), then add a configuration file like the example below to make it run, monitor, and restart multiple worker.js processes as needed.
So, create a Supervisor configuration file for your program, e.g. my_awesome_worker.conf, and place it in the /etc/supervisor/conf.d directory.
[program:wise_worker]
process_name=%(program_name)s_%(process_num)02d
command=node /my_app_location/worker.js
autostart=true
autorestart=true
numprocs=4
stderr_logfile=/var/log/myapp.err.log
stdout_logfile=/var/log/myapp.out.log
user=myuser
To apply the changes, run
$ sudo supervisorctl reread
$ sudo supervisorctl update
Note that the process_name and numprocs settings are responsible for running four worker.js processes (keep numprocs less than or equal to your number of CPUs). numprocs, in combination with the process_name expression %(program_name)s_%(process_num)02d, will create four processes, namely wise_worker_00, wise_worker_01, wise_worker_02 and wise_worker_03.
Verify that they are all running using
$ sudo systemctl status supervisor
or
$ sudo service supervisor status

redis-cli FLUSHALL and FLUSHDB return OK but do nothing after Hubot restores Redis

On Ubuntu 16.04, interacting with a local Redis instance via redis-cli, and working with a Node Hubot script which uses Redis as its primary data store.
When I type keys * I get a single key, hubot:storage,
so I FLUSHALL and get an OK response. But if Hubot is running, or as soon as it starts up, it restores the value of that key immediately, so I can never delete it.
I've used the INFO command to try to see if it is persisting on some other Redis instance, and I've cleared all backup files from /var/redis. Basically, I can't figure out where this data is being stored that it keeps getting restored from.
Any advice regarding how I could clear this out or where Hubot may be caching this?
It seems to be related to this code: https://github.com/hubotio/hubot-redis-brain/blob/master/src/redis-brain.js specifically the chunk at line 49 is what gets called before each restore.
Steps:
1. Stop Hubot
2. Flush Redis (important: do this while Hubot is NOT running)
3. Start Hubot
The reasoning is that Hubot keeps an in-memory representation of the brain and writes it out to Redis at intervals. A nicer solution, which would help during script development, would be a command that can empty the brain and save that, but I can't see an obvious API for it in either robot.brain or hubot-redis-brain.
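If you want to script the flush step (again, only while Hubot is stopped), here is a minimal sketch with redis-py, assuming the default local instance and the hubot:storage key from the question:

import redis

r = redis.Redis(host='localhost', port=6379, db=0)
r.delete('hubot:storage')   # or r.flushall() to clear everything
print(r.keys('*'))          # hubot:storage should no longer be listed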

Redis activity log

We have a Redis database running on our server, but for some reason I cannot see any keys in it. I'm just wondering if Redis stores an activity log where I can trace if and when the keys were deleted?
I have the usual log file for redis, at /var/log/redis.log but that doesn't have the information I am looking for.
I think there is no straightforward way to log everything, but here is a hack.
$ redis-cli monitor >> ~/my_redis_commands.log 2>&1
Here >> tells the OS to append the command's output to the file instead of the terminal, and 2>&1 redirects STDERR to STDOUT.
n>&m Merge output from stream n with stream m.
Note that file descriptor 0 is normally standard input (STDIN), 1 is standard output (STDOUT), and 2 is standard error output (STDERR).
Then view the contents of the file in another SSH session for debugging.
$ tail -f ~/my_redis_commands.log
or you can use grep to find "DEL" instead. You can see the list of commands supported by Redis and try grep queries like SET, GET, etc.
$ grep '"DEL"' ~/my_redis_commands.log
Cons of this idea are:
You need to run a separate process to do this
It's memory and CPU consuming
A single MONITOR client can reduce the throughput by more than 50%. Running more MONITOR clients will reduce throughput even more.
For security reasons, certain special administration commands like CONFIG are not logged in the MONITOR output
See https://redis.io/commands/monitor for more info.
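If you would rather capture this from Python than with a shell redirect, redis-py exposes MONITOR as well; a rough sketch, assuming redis-py 3.x and a local default instance:

import redis

r = redis.Redis(host='localhost', port=6379, db=0)
with r.monitor() as m:                      # issues the MONITOR command
    with open('my_redis_commands.log', 'a') as log:
        for event in m.listen():            # each event is a dict describing one command
            if event['command'].split()[0].upper() == 'DEL':
                log.write(event['command'] + '\n')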
The INFO command can be used to glean some forensic info when used with the all or commandstats switch - you'll be able to see counts of all commands, including the offending ones.
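A rough redis-py equivalent of that check, assuming a local default instance (redis-py parses each cmdstat_* entry into a dict):

import redis

r = redis.Redis(host='localhost', port=6379, db=0)
stats = r.info('commandstats')
# Entries look like {'cmdstat_del': {'calls': ..., 'usec': ..., 'usec_per_call': ...}}
print(stats.get('cmdstat_del'))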
Keep in mind that this could be the result of an unauthorized intrusion and that your server may have been compromised.

Rename Command Example With Jedis Client

I am using the Spring Jedis client to use Redis in my application. I want to rename the commands so that no one else can run them, just in case they are able to connect to my server.
Can anyone give an example of how to rename a command from Jedis, and then how to issue subsequent commands using the modified name?
You can't rename a Redis command from Jedis yet without changing the config file (see issue #640).
Even if you add the rename-command config file directive and restart your Redis, Jedis doesn't seem to allow sending arbitrary commands easily or to provide a trivial (i.e. no code changes) way to rename them.
What you could do, however, if you're really insistent on renaming a command and then calling it from Jedis, is to EVAL it. This will probably go into my pantheon of ugly hacks (:)), but after adding rename-command get foo to my /etc/redis/redis.conf and doing service redis-server restart, look what I can do:
$ redis-cli
redis 127.0.0.1:6379> set bar baz
OK
redis 127.0.0.1:6379> get bar
(error) ERR unknown command 'get'
redis 127.0.0.1:6379> foo bar
"baz"
redis 127.0.0.1:6379> eval "return(redis.call('get', KEYS[1]))" 1 bar
(error) ERR Error running script (call to f_db0e060e4f58231d51f21685b20ff847de8ab9e1): Unknown Redis command called from Lua script
redis 127.0.0.1:6379> eval "return(redis.call('foo', KEYS[1]))" 1 bar
"baz"
redis 127.0.0.1:6379>
Of course, if you take this route your code can get pretty messy in no time at all so be careful where you tread... Good luck!
If a malicious user connects directly to Redis, they can access all commands.
There is no feature in the Redis library to rename commands. Even if you expose access through a custom API that renames commands, you cannot change the inner opcodes of Redis itself.
Edit:
You're right, it is indeed possible to rename commands by changing the config file!
After you set the new command names, you have to recompile Jedis.
First rename the enum in src/main/java/redis/clients/jedis/Protocol.java, line 203.
Now find the corresponding enum usage in src/main/java/redis/clients/jedis/BinaryClient.java and change it as well.
That may be sufficient: everywhere else you keep the old command Java interfaces (e.g. zadd etc.), and internally Jedis will talk to Redis using the renamed command.
Is that your intention?

RabbitMQ management plugin with local cluster

Is there any reason that the rabbitmq-management plugin wouldn't work when I'm using 'rabbitmq-multi' to spin up a cluster of nodes on my desktop? Or, more precisely, that the management plugin would cause that spinup to fail?
I get Error: {node_start_failed,normal} when rabbitmq-multi starts rabbit_1@localhost
The first node, rabbit@localhost, seems to start okay though.
If I take out the management plugins, all the nodes start up (and then cluster) fine. I think I'm using a recent enough Erlang version (5.8/OTP R14A according to the README in my erl5.8.2 folder). I'm using all the plugins that are listed as required on the plugins page, including mochiweb, webmachine, amqp_client, rabbitmq-mochiweb, rabbitmq-management-agent, and rabbitmq-management. Those plugins, and only those plugins.
The problem is that rabbitmq-multi only assigns sequential ports for AMQP, not HTTP (or STOMP or AMQPS or anything else the broker may open). Therefore each node tries to listen on the same port for the management plugin and only the first succeeds. rabbitmq-multi will be going away in the next release; this is one reason why.
I think you'll want to start the nodes without using rabbitmq-multi, just with multiple invocations of rabbitmq-server, using environment variables to configure each node differently. I use a script like:
start-node.sh:
#!/bin/sh
RABBITMQ_NODE_PORT=$1 RABBITMQ_NODENAME=$2 \
RABBITMQ_MNESIA_DIR=/tmp/rabbitmq-$2-mnesia \
RABBITMQ_PLUGINS_EXPAND_DIR=/tmp/rabbitmq-$2-plugins-scratch \
RABBITMQ_LOG_BASE=/tmp \
RABBITMQ_SERVER_START_ARGS="-rabbit_mochiweb port 5$1" \
/path/to/rabbitmq-server -detached
and then invoke it as
start-node.sh 5672 rabbit
start-node.sh 5673 hare