Run redis-cli commands as a cron job - redis

How can I run the following redis-cli command using a cron job?
redis-cli info clients
I wrote a simple bash script with this command and set up a cron job to run it every minute, but the redis-cli command does not seem to work when run from crontab.
Any suggestions on how I can achieve this?

As posted by Mark Setchell in a comment:
You are probably missing the full path to the command, maybe /usr/local/bin/redis-cli or some such. Presumably you want to send the output somewhere too... use > someFile.txt at the end of the command.
Referring to redis-cli by its full path worked for me.
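For example, a crontab entry along these lines does the job (just a sketch; the redis-cli path and the output file are assumptions, so check the path with "which redis-cli"):
* * * * * /usr/local/bin/redis-cli info clients >> /var/log/redis-clients.log 2>&1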

Related

Set up Amazon S3 backup on QNAP using s3cmd

I own a QNAP TS-219P and I want to set up Amazon S3 backup manually using s3cmd.
I did quite a bit of research on this, and here are the references I got:
http://web.archive.org/web/20091120211330/http://codemonkeybrown.com/qnaps3.html
http://wiki.qnap.com/wiki/Running_Your_Own_Application_at_Startup
http://wiki.qnap.com/wiki/Add_items_to_crontab
http://blog.wingateuk.com/2013/03/cloud-backup-on-qnap-nas.html?showComment=1413660445187#c8935766892046800936
I'm trying to get the s3cmd to work on my TS-219P.
I got everything to work on the command line, even running the script file (s3-backup.sh) itself:
#!/bin/bash <-- I also tried #!/bin/sh
/share/maintenance/s3cmd-1.5.0-rc1/s3cmd --rr sync -rv /share/all-shared-folders/emilie/ s3://kingjim-backup/kingjim-nas/emilie/ >> /share/maintenance/log/s3cmd/backup_`date "+%Y%m%d-%H-%M"`.log <-- I also tried running s3cmd via python by adding /usr/bin/python on the front.
If I run it from the SSH command prompt, it seems to work perfectly.
The problem, though, is the cron job. I can confirm the cron job triggered and ran, because my log file (the one above) was generated, but the log is always empty, even though I'm sure some new files were created/modified.
This is my cron job task:
14 3 * * * /share/maintenance/s3-backup.sh 2>&1 | logger
I've done a number of different variations on the above, but couldn't find out what was missing.
I feel like some dependency is missing when crontab runs the script, compared to when I run it from the command prompt, but I don't know how to debug crontab.
It turned out the problem was that the s3cmd configuration file could not be found when s3cmd ran from cron (presumably because cron runs with a different environment, e.g. a different HOME, so the default config path does not resolve).
So the fix was simply to copy this .s3config file to a safe shared folder and then call s3cmd with the "--config" parameter followed by that file.
Like this:
/share/maintenance/s3-backup/s3cmd/s3cmd --config /share/maintenance/s3-backup/s3cmd.config --rr sync -rv /share/MD0_DATA/ s3://xxx-backup/xxx-nas/ >> /share/maintenance/s3-backup/logs/backup_`date "+%Y%m%d-%H-%M"`.log 2>&1
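With the --config parameter in place, the cron entry from the question can stay as it was; repeated here for completeness:
14 3 * * * /share/maintenance/s3-backup.sh 2>&1 | logger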

Unable to run a PostgreSQL script from bash

I am learning shell scripting. I have created a shell script whose function is to log into the DB and run a .sql file. Following are the contents of the script:
#!/bin/bash
set -x
echo "Login to postgres user for autoqa_rpt_production"
$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT
echo "Running SQL Dump - auto_qa_db_sync"
\i auto_qa_db_sync.sql
After running the above script, I get the following error
./autoqa_script.sh: 39: ./autoqa_script.sh: /i: not found
Following one article, I tried reversing the slash, but it didn't work.
I don't understand why this is happening, because when I run the SQL file manually, it works properly. Can anyone help?
#!/bin/bash
set -x
echo "Login to postgres user for autoqa_rpt_production and run script"
$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT -f auto_qa_db_sync.sql
The lines you put in a shell script are (more or less, let's say so for now) equivalent to what you would type at the Bash prompt (the one ending with '$', or '#' if you're root). When you execute a script (a list of commands), each command runs after the previous one terminates.
What you wanted to do is run the client and issue a "\i auto_qa_db_sync.sql" command inside it.
What you did was run the client and, after the client terminated, issue that command in Bash.
You should read about Bash pipelines - they are the way to run programs and feed text into them. Following your original idea for solving the problem, you'd write something like:
echo '\i auto_qa_db_sync.sql' | $DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT
Hope that helps you understand.
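For reference, a here-document does the same thing without echo (just a sketch that reuses the variables and file name from the question):
$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT <<'SQL'
\i auto_qa_db_sync.sql
SQL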

SGE Command Not Found, Undefined Variable

I'm attempting to set up a new compute cluster and am currently experiencing errors when using the qsub command in SGE. Here's a simple experiment that shows the problem:
test.sh
#!/usr/bin/zsh
test="hello"
echo "${test}"
test.sh.eXX
test=hello: Command not found.
test: Undefined variable.
test.sh.oXX
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
If I run the script on the head node (sh test.sh), the output is correct. I submit the job to SGE by typing "qsub test.sh".
If I submit the exact same script in the same way on an established compute cluster, such as our HPC system, it works perfectly as expected. What setting could be causing this problem?
Thanks for any help on this matter.
Most likely the queues on your cluster are set to posix_compliant mode with a default shell of /bin/csh. The posix_compliant setting means your #! line is ignored. You can either change the queues to unix_behavior or specify the required shell using qsub's -S option.
#$ -S /bin/sh
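For example, with the shell from the question's shebang (assuming /usr/bin/zsh exists on the execution hosts), the directive can go at the top of the script:
#!/usr/bin/zsh
#$ -S /usr/bin/zsh
test="hello"
echo "${test}"
or the shell can be passed on the command line instead:
qsub -S /usr/bin/zsh test.sh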

Is there a simple way to use scp that will behave like rsync -u or cp -u

I'd like to be able to upload to my remote server, updating only new files. I am using nanoblogger, and it appears to upload the entire thing every time using plain scp -r, but I can't find any -u option for scp mentioned in the man pages.
I suppose I could script the upload with an ls or find that grabs only files updated in the last $n minutes, or something, but that seems heavy-handed.
Use rsync over SSH.
If you can scp, you can very probably rsync over ssh:
rsync -a /some/dir/ user@server:/dest/dir/
scp doesn't have any conditional option, and probably won't get one anytime soon. rsync seems like a very reasonable way to go if it is installed on the target system; if not, some find + uniq magic could do the job, but it would be serious work. Compiling rsync would probably be faster :-).
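To get behaviour closer to cp -u specifically, rsync's --update flag skips files that are newer on the receiving side (the paths and host here are placeholders):
rsync -au -e ssh /some/dir/ user@server:/dest/dir/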

Define a cron job for MongoDB collection removal

I want to run the db.clickreferrerurls.remove() command for MongoDB from crontab by defining a cron job on my local machine.
How could I do this?
Use the --eval argument to mongo, e.g.:
/usr/bin/mongo --eval "db.clickreferrerurls.remove()"
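A crontab entry that runs this nightly could look like the following (a sketch; the schedule, the log path, and the assumption that the collection lives in the default database are all placeholders to adjust):
0 2 * * * /usr/bin/mongo --eval "db.clickreferrerurls.remove()" >> /var/log/clickreferrerurls-cleanup.log 2>&1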