Running pssh as a cron job

I have the script below.
#!/bin/bash
OUTPUT_FOLDER=/home/user/output
LOGFILE=/root/log/test.log
HOST_FILE=/home/user/host_file.txt
mkdir -p "$OUTPUT_FOLDER"
rm -f "$OUTPUT_FOLDER"/*
pssh -h "$HOST_FILE" -o "$OUTPUT_FOLDER" "cat $LOGFILE | tail -n 100 | grep foo"
When I run this script on its own, it works fine and the $OUTPUT_FOLDER contains the output from the servers in the $HOST_FILE. However, when I run the script as a cron job, the $OUTPUT_FOLDER is created but is always empty. It's as if the pssh command was never executed.
Why is this? How do I resolve this?
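No answer is recorded here, but the usual suspect for this symptom (an assumption on my part) is cron's minimal environment: cron typically runs jobs with a sparse PATH such as /usr/bin:/bin, so mkdir and rm succeed but pssh is never found, leaving the folder empty. SSH keys can be a second stumbling block if they normally come from an agent that cron does not have. A minimal sketch of two common fixes; the /usr/local/bin location and the schedule are placeholders:
# Option 1: set PATH in the crontab itself and log output for debugging
PATH=/usr/local/bin:/usr/bin:/bin
*/5 * * * * /home/user/script.sh >> /tmp/pssh_cron.log 2>&1
# Option 2: call pssh by its absolute path inside the script
/usr/local/bin/pssh -h "$HOST_FILE" -o "$OUTPUT_FOLDER" "cat $LOGFILE | tail -n 100 | grep foo"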

Related

How to read lines from .txt file into this bash script?

I have this bash script which connects to a PostgreSQL database and performs a query. I would like to be able to read lines from a .txt file into the query as parameters. What is the best way to do that? Your assistance is greatly appreciated! My example code is below, but it is not working.
#!/bin/sh
query="SELECT ci.NAME_VALUE NAME_VALUE FROM certificate_identity ci WHERE ci.NAME_TYPE = 'dNSName' AND reverse(lower(ci.NAME_VALUE)) LIKE reverse(lower('%.$1'));"
(echo "$1"; echo "$query" | \
psql -t -h crt.sh -p 5432 -U guest certwatch | \
sed -e 's:^ *::g' -e 's:^*\.::g' -e '/^$/d' | \
sed -e 's:*.::g';) | sort -u
Considering that the file has only one SQL query per line:
while read -r line; do echo "${line}" | "your code to run psql here"; done < file_with_query.sql
That means: read file_with_query.sql line by line, and do something with each line.
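A concrete version of that loop, filling the placeholder with the psql invocation from the question (the host, port, user, and database come straight from the snippet above, so adjust them to your setup):
#!/bin/sh
# Run each line of file_with_query.sql as its own query.
while read -r line; do
    echo "${line}" | psql -t -h crt.sh -p 5432 -U guest certwatch
done < file_with_query.sql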

Run the RapSearch program with Torque PBS and qsub

My problem is that I have a cluster server with Torque PBS and want to use it to run a sequence comparison with the program rapsearch.
The normal RapSearch command is:
./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32
Now I want to run it with 2 nodes on the cluster server.
I've tried echo "./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32" | qsub -l nodes=2, but nothing happened.
Do you have any suggestions? Where am I going wrong? Please help.
Standard output (and error output) files are placed in your home directory by default; take a look there. You are looking for a file named STDIN.e[numbers]; it will contain the error message.
However, I see that you're using ./rapsearch without being explicit about what directory you're in. PBS jobs start in your home directory rather than the directory you submitted from, so your problem is probably a matter of changing into the submission directory first. With your terminal in the directory of the rapsearch executable, try echo "cd \$PBS_O_WORKDIR && ./rapsearch [arguments]" | qsub [arguments] to submit your job to the cluster.
Other tips:
You could add rapsearch to your PATH if you use it often; then you can use it like a regular command from anywhere. It's a matter of adding the line export PATH=/full/path/to/rapsearch/bin:$PATH to your .bashrc file.
Create a submission script for use with qsub; a minimal sketch follows.
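Here is one such sketch, assuming the resource request from the question (nodes=2) and the rapsearch arguments shown above; the job name and walltime are placeholder values:
#!/bin/bash
#PBS -N rapsearch_job
#PBS -l nodes=2
#PBS -l walltime=01:00:00

# PBS jobs start in $HOME; change back to the submission directory.
cd "$PBS_O_WORKDIR"

./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32
Save it as, say, submit_rapsearch.sh and submit it with qsub submit_rapsearch.sh.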

Error when cassandra-cli command is executed over ssh

I have two servers, A and B. I have a shell script on serverA which logs into serverB (through ssh) and runs the following command:
sh cassandra-cli -h <serverB> -v -f database_import.txt;
When I do this manually, I follow these steps:
serverA:~$ ssh serverB
serverB:~$ sh cassandra-cli -h <serverB> -v -f database_import.txt;
It works properly when I follow these steps manually, but when I automate the process in a shell script with the following line:
serverA:~$ ssh serverB "sh cassandra-cli -h <serverB> -v -f database_import.txt;"
I get this error:
cassandra-cli: 46: cassandra-cli: -ea: not found
So, as you already pointed out, $JAVA is empty through ssh.
This is because .bashrc is not sourced when you run a command non-interactively over ssh. You can source it like this:
. ~/.bashrc
And your command is going to look like this:
ssh serverB ". ~/.bashrc; sh cassandra-cli -h <serverB> -v -f database_import.txt;"
You can also try placing this into your .bash_profile instead of invoking it manually each time.
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
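Another option (an assumption, not something suggested in the thread) is to force a login shell on the remote end so the profile files are read before the command runs:
ssh serverB 'bash -lc "sh cassandra-cli -h <serverB> -v -f database_import.txt"'
The -l flag makes bash behave as a login shell and read ~/.bash_profile, which in turn sources ~/.bashrc if you add the snippet above.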

background xargs/wget not adhering to -P and -n limits

I'm having a problem with xargs and Wget when they are run as shell scripts from an AppleScript app. I want Wget to run 4 parallel processes in the background. The problem: when I try to run the process in the background with
cat urls.txt | xargs -P 4 -n 1 /usr/local/bin/wget -q -E -b 1> NUL 2> NUL
a Wget process is apparently started for each URL passed in from the .txt file. This is too burdensome on the user's memory. When I run it in the foreground, however, with something like:
cat urls.txt | xargs -P 4 -n 1 /usr/local/bin/wget -q -E
I seem to get the four parallel Wget processes I need. Does anybody know how to get this script to run in the background with only 4 processes? I'm a bit of a novice, and I'm afraid I can't figure out why backgrounding the process causes this change.
The culprit is Wget's -b flag: it makes each wget detach and return immediately, so every xargs child exits at once and the -P 4 limit never takes effect. Drop -b and run xargs itself in the background instead:
cat urls.txt | xargs -P4 -n1 wget -q &
Or if you want to return control to the AppleScript, disown the xargs process:
do shell script "cat urls.txt | xargs -P4 -n1 /usr/local/bin/wget -q & disown $!"
As far as I can tell, I have solved the problem with
cat urls.txt | (xargs -P4 -n1 wget -q -E >/dev/null 2>&1) &
There may well be a better solution, though...

RVM on Debian and Rails 3

I'm following https://github.com/diaspora/diaspora/wiki/Installing-on-Debian and trying to get RVM installed on Debian. Executing
bash < <(curl -s https://rvm.io/install/rvm)
gives me nothing; it produces no output at all, even though I have curl installed.
You can just download https://rvm.io/install/rvm in a browser and then execute bash ./rvm. The command you entered actually does the same thing.
EDIT: the new way:
curl -L https://get.rvm.io -o rvm-installer
chmod +x rvm-installer
./rvm-installer
rm -f rvm-installer
Which is equivalent to:
curl -L https://get.rvm.io | bash
The former command used -s without -S, which hid any errors from curl.
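Given that last point, a quick way to diagnose the original failure (a suggestion, not from the thread) is to add -S, which makes curl report errors even in silent mode:
bash < <(curl -sS https://rvm.io/install/rvm)
One common cause of silently empty output is a redirect; the -L flag used in the newer commands above tells curl to follow it.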