I need help creating a shell script to tell me if a process is up and running so I can monitor that process in PRTG

I'm using a monitoring system called PRTG to monitor our environment. The PRTG KB suggested monitoring processes with a sensor type called SSH Script, which runs a script on the target machine. The script needs to be stored in /var/prtg/scripts.
I found a script that someone used for PRTG:
#!/bin/sh
pgrep wrapper $1 2>&1 1>/dev/null
if [ $? -ne 0 ]; then
    echo "1:$?:$1 Down"
else
    echo "0:$?:OK"
fi
However, PRTG is returning the following error code within the Web GUI:
Response not well-formed: "pgrep: only one pattern can be provided Try `pgrep --help' for more information. 1:0:Wrapper Down "
However, when I run the script on the Linux server it prints out:
0:1:OK
So my question is: what would be the best script to use to tell PRTG that a process is "Down" or "Up"?
###################
Edit for further clarification:
I changed the script and it works well on the command line, but it appears the issue is with how PRTG reads the output: apparently it's not in the correct format. So here's my script:
#!/bin/bash
SERVICE="wrapper"
if pgrep -x "$SERVICE" >/dev/null
then
    echo "$SERVICE is running"
else
    echo "$SERVICE stopped"
fi
This is what PRTG is erroring out with:
Response not well-formed: "wrapper is running "
So PRTG is saying that the sensor I'm using wants the script output in this format:
The returned data for standard SSH Script sensors must be in the following
format:
returncode:value:message
Value has to be a 64-bit integer or float. It is used as the resulting value
for this sensor (for example, bytes, milliseconds) and stored in the
database. The message can be any string (maximum length: 2000 characters).
The SSH script returncode has to be one of the following values:
VALUE  DESCRIPTION
0      OK
1      WARNING
2      System Error
3      Protocol Error
4      Content Error
So I guess the question now is: how do I get that script to output what PRTG wants to see?

With the changes you suggested, the script should look like the one below, and is called like this, for example: servicecheck.sh sshd
#!/bin/sh
# Discard all pgrep output; any stray text would break PRTG's returncode:value:message format
pgrep -x "$1" >/dev/null 2>&1
status=$?
if [ $status -ne 0 ]; then
    echo "1:$status:$1 Down"
else
    echo "0:$status:OK"
fi
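For example, assuming sshd is running on the box and nothing called foo is, the sensor should receive output like:

$ ./servicecheck.sh sshd
0:0:OK
$ ./servicecheck.sh foo
1:1:foo Down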

Related

ssh to remote server with arguments to run scripts

I have lots of data that needs to be processed, and access to 3 separate remote servers. The logic is to split the data crunching among the 3 servers instead of running it all on a single one. Note that all three remote servers can point to a single directory, which is where I keep the master scripts that process all of the data. The problem I am having is carrying my arguments over when I call different bash scripts.
For example, I have the master script which looks something like:
processing stuff
more stuff
# call the first script
$scriptdir/step1.csh $date $time $name
Within step1.csh, if I have something very simple where I am able to connect to one of the remote servers and output the hostname to a text file, such as:
#!/bin/bash
ssh name@hostname bash -c '
echo `hostname` > host.txt
'
I get the desired outcome: host.txt contains the hostname of the remote server I connected to. However, if step1.csh looks like:
#!/bin/bash
mydate=$1
mytime=$2
myname=$3
ssh name@hostname bash '
echo `hostname` > host.txt
echo ${mydate} > host.txt
'
I get the error saying that 'mydate: undefined variable'
Furthermore, if I do something along the lines of:
#!/bin/bash
mydate=$1
mytime=$2
myname=$3
ssh name#hostname "python /path/to/somewhere/to/run/${mydate}/and/${mytime}
It still runs on the local server, not the remote one. What am I missing here?
So the first part:
#!/bin/bash
mydate=$1
mytime=$2
myname=$3
ssh name@hostname bash '
echo `hostname` > host.txt
echo ${mydate} > host.txt
'
The solution is:
#!/bin/bash
mydate=$1
mytime=$2
myname=$3
# Unquoted EOF lets ${mydate} expand locally; \$(hostname) is escaped so it runs
# remotely, and the second echo appends rather than overwriting the first line
ssh -T name@hostname << EOF
echo \$(hostname) > host.txt
echo ${mydate} >> host.txt
EOF
However, I am still having an issue when I try to run a python script on the remote server; it always runs on the local server.
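A subtlety worth spelling out here (a hedged aside, reusing the placeholder name@hostname from above): with an unquoted here-document delimiter, the local shell expands variables, $(...), and backticks before anything is sent, which is why ${mydate} comes through, but also why an unescaped `hostname` would report the local machine. Quoting the delimiter sends the body verbatim to the remote shell:

#!/bin/bash
mydate=$1

# Unquoted EOF: ${mydate} expands locally before the lines are sent.
ssh -T name@hostname << EOF
echo ${mydate} > host.txt
EOF

# Quoted 'EOF': nothing expands locally, so $(hostname) runs remotely.
ssh -T name@hostname << 'EOF'
echo $(hostname) >> host.txt
EOF

If you need local values inside a quoted here-document, pass them on the remote command line instead of embedding them in the body.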

resize one video in 2 sizes in single command

When a user uploads a video, I create 2 sizes of it. Earlier, I was doing this in two steps, like the following.
First Size:
ffmpeg -i in.mp4 -filter:v "scale=iw*min(1170/iw\,300/ih):ih*min(1170/iw\,300/ih), pad=1170:300:(1170-iw*min(1170/iw\,300/ih))/2:(300-ih*min(1170/iw\,300/ih))/2" out.mp4
Second Size:
ffmpeg -i in.mp4 -filter:v "scale=iw*min(365/iw\,172/ih):ih*min(365/iw\,172/ih), pad=365:172:(365-iw*min(365/iw\,172/ih))/2:(172-ih*min(365/iw\,172/ih))/2" out1.mp4
But now, to reduce processing time, I want to combine these 2 steps into one. I have read https://trac.ffmpeg.org/wiki/Creating%20multiple%20outputs and made the following command:
ffmpeg -i in.mp4 -filter:v "scale=iw*min(1170/iw\,300/ih):ih*min(1170/iw\,300/ih), pad=1170:300:(1170-iw*min(1170/iw\,300/ih))/2:(300-ih*min(1170/iw\,300/ih))/2" bigVideo.mp4 \ -filter:v "scale=iw*min(365/iw\,172/ih):ih*min(365/iw\,172/ih), pad=365:172:(365-iw*min(365/iw\,172/ih))/2:(172-ih*min(365/iw\,172/ih))/2" smallVideo.mp4
But it is giving the following error:
[NULL @ 0xaee5440] Unable to find a suitable output format for ' -filter:v'
-filter:v: Invalid argument
Can anyone suggest how I can solve it?
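Judging by the leading space in ' -filter:v' in that error, a likely culprit is the stray backslash in the middle of the command: in ` \ -filter:v`, the backslash escapes the following space, so ffmpeg receives a literal argument starting with a space and tries to treat it as an output file name. Removing the stray backslash (or putting a real line break after it) should let one command produce both outputs, since each -filter:v applies to the output file that follows it. A sketch with the original filter chains:

ffmpeg -i in.mp4 \
  -filter:v "scale=iw*min(1170/iw\,300/ih):ih*min(1170/iw\,300/ih), pad=1170:300:(1170-iw*min(1170/iw\,300/ih))/2:(300-ih*min(1170/iw\,300/ih))/2" bigVideo.mp4 \
  -filter:v "scale=iw*min(365/iw\,172/ih):ih*min(365/iw\,172/ih), pad=365:172:(365-iw*min(365/iw\,172/ih))/2:(172-ih*min(365/iw\,172/ih))/2" smallVideo.mp4

This way the input is decoded once and encoded twice, which is where the time saving comes from.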
I tried to run both commands using the following script:
#!/bin/bash
for cmd in "$@"; do {
    echo "Process \"$cmd\" started";
    $cmd & pid=$!
    PID_LIST+=" $pid";
} done
trap "kill $PID_LIST" SIGINT
echo "Parallel processes have started";
wait $PID_LIST
echo
echo "All processes have completed";
You can save it as filename.sh and make it executable. After that, you need to pass two or more commands as arguments; for example, I ran:
./filename.sh "ffmpeg -i input.mp4 -s 720x480 output1.mp4" "ffmpeg -i input.mp4 -s 1170x480 output2.mp4"
Your command was a bit complicated for me, so I tried the parallel script with simpler commands.

glassfish start script fails through crontab

I have created a script that checks whether my glassfish server is running (installed on a FreeBSD system); if it isn't, the script attempts to kill the java process to ensure it's not hung, and then issues the asadmin start-domain command.
If this script runs from the command line, it is successful 100% of the time. When it is run from the crontab, every line runs except the asadmin start-domain line: it does not seem to execute, or at least does not complete, i.e. the server is not running after the script finishes.
For anyone not familiar with glassfish or the asadmin utility used to start the server, it is my understanding that a forked process is used. Could this be causing a problem via cron?
Again, in all my tests today, the script runs to completion when run from the command line. Once it's executed through cron, it does not complete. What would be different about running this from the crontab?
Thanks in advance for any help... I'm pulling my hair out trying to make this work!
#!/bin/bash
JAVA_HOME=/usr/local/diablo-jdk1.6.0/; export JAVA_HOME
timevar=`date +%d-%m-%Y_%H.%M.%S`
process_name='java'
get_contents=`cat urls.txt`
for i in $get_contents
do
    echo checking $i
    statuscode=$(curl --connect-timeout 10 --write-out '%{http_code}' --silent --output /dev/null "$i")
    case $statuscode in
        200)
            echo "$timevar $i $statuscode okay" >> /usr/home/user1/logfile.txt
            ;;
        *)
            echo "$timevar $i $statuscode bad" >> /usr/home/user1/logfile.txt
            echo "Status $statuscode found" | mail -s "Check of $i failed" some.address@gmail.com
            process_id=`ps acx | grep -i $process_name | awk '{print $1}'`
            if [ -z "$process_id" ]
            then
                echo "java wasn't found in the process list"
            else
                echo "Killing java, currently process $process_id"
                kill -9 $process_id
            fi
            /usr/home/user1/glassfish3/bin/asadmin start-domain domain1
            ;;
    esac
done
Also, just for completeness, here is the entry in the crontab:
*/2 * * * * /usr/home/user1/server.check.sh >> /usr/home/user1/cron.log
OK... I found the answer to this on another site, but I thought I'd add it here for future reference.
The problem was the PATH! Even though JAVA_HOME was set, java itself wasn't in the PATH for the cron daemon.
For a quick test to see what path is available to your cron jobs, add this line:
*/2 * * * * env > /usr/home/user1/env.output
From what I can gather, the PATH initially available to cron is pretty minimal. Since java was in /usr/local/bin, I added that to the PATH right in the crontab and kaboom! It worked!
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
*/2 * * * * /usr/home/user1/server.check.sh >> /usr/home/user1/cron.log
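An equivalent fix (a sketch, assuming the same paths as above) is to export the PATH inside the script itself, which keeps the crontab entries untouched:

#!/bin/bash
# cron starts jobs with a minimal environment, so set PATH and JAVA_HOME here
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export JAVA_HOME=/usr/local/diablo-jdk1.6.0/
# ... rest of server.check.sh unchanged ...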

Redirect stderr to stdout in C shell

When I run the following command in csh, I get nothing, but it works in bash.
Is there any equivalent in csh which can redirect the standard error to standard out?
somecommand 2>&1
The csh shell has never been known for its extensive ability to manipulate file handles in the redirection process.
You can redirect both standard output and error to a file with:
xxx >& filename
but that's not quite what you were after, redirecting standard error to the current standard output.
However, if your underlying operating system exposes the standard output of a process in the file system (as Linux does with /dev/stdout), you can use that method as follows:
xxx >& /dev/stdout
This will force both standard output and standard error to go to the same place as the current standard output, effectively what you have with the bash redirection, 2>&1.
Just keep in mind this isn't a csh feature. If you run on an operating system that doesn't expose standard output as a file, you can't use this method.
However, there is another method. You can combine the two streams into one if you send it to a pipeline with |&, then all you need to do is find a pipeline component that writes its standard input to its standard output. In case you're unaware of such a thing, that's exactly what cat does if you don't give it any arguments. Hence, you can achieve your ends in this specific case with:
xxx |& cat
Of course, there's also nothing stopping you from running bash (assuming it's on the system somewhere) within a csh script to give you the added capabilities. Then you can use the rich redirections of that shell for the more complex cases where csh may struggle.
Let's explore this in more detail. First, create an executable echo_err that will write a string to stderr:
#include <stdio.h>
int main (int argc, char *argv[]) {
fprintf (stderr, "stderr (%s)\n", (argc > 1) ? argv[1] : "?");
return 0;
}
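Compile it first; the output name echo_err is what the control script below expects:

cc -o echo_err echo_err.c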
Then a control script test.csh which will show it in action:
#!/usr/bin/csh
ps -ef ; echo ; echo $$ ; echo
echo 'stdout (csh)'
./echo_err csh
bash -c "( echo 'stdout (bash)' ; ./echo_err bash ) 2>&1"
The echo of the PID and ps are simply so you can ensure it's csh running this script. When you run this script with:
./test.csh >test.out 2>test.err
(the initial redirection is set up by bash before csh starts running the script), and examine the out/err files, you see:
test.out:
UID PID PPID TTY STIME COMMAND
pax 5708 5364 cons0 11:31:14 /usr/bin/ps
pax 5364 7364 cons0 11:31:13 /usr/bin/tcsh
pax 7364 1 cons0 10:44:30 /usr/bin/bash
5364
stdout (csh)
stdout (bash)
stderr (bash)
test.err:
stderr (csh)
You can see there that the test.csh process is running in the C shell, and that calling bash from within there gives you the full bash power of redirection.
The 2>&1 in the bash command quite easily lets you redirect standard error to the current standard output (as desired) without prior knowledge of where standard output is currently going.
I object to the above answer and provide my own. csh DOES have this capability, and here is how it's done:
xxx |& some_exec # will pipe merged output to your some_exec
or
xxx |& cat > filename
or if you just want it to merge streams (to stdout) and not redirect to a file or some_exec:
xxx |& tee /dev/null
As paxdiablo said you can use >& to redirect both stdout and stderr. However if you want them separated you can use the following:
(command > stdoutfile) >& stderrfile
...as indicated, the above will redirect stdout to stdoutfile and stderr to stderrfile.
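A quick demonstration, reusing the echo_err program from earlier (the file names here are arbitrary):

% (./echo_err demo > stdout.txt) >& stderr.txt
% cat stdout.txt
% cat stderr.txt
stderr (demo)

Since echo_err writes only to standard error, stdout.txt ends up empty and stderr.txt catches the message.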
xxx >& filename
Or do this to see everything on the screen and have it go to your file:
xxx |& tee ./logfile
What about just
xxx >& /dev/stdout
???
I think this is the correct answer for csh.
xxx >/dev/stderr
Note most csh are really tcsh in modern environments:
rmockler> ls -latr /usr/bin/csh
lrwxrwxrwx 1 root root 9 2011-05-03 13:40 /usr/bin/csh -> /bin/tcsh
Using a backtick-embedded statement to demonstrate this:
echo "`echo 'standard out1'` `echo 'error out1' >/dev/stderr` `echo 'standard out2'`" | tee -a /tmp/test.txt ; cat /tmp/test.txt
If this works for you, please vote it up. The other suggestions don't work in my csh environment.

Using expect to login to amazon server with .PEM file

I need to do the following:
Log into my amazon server
Change to a specific directory and run a script
The script executes an svn up, and I need to be able to pass my username and password to it.
I've read I might be able to do this with expect? Can I do the login via a shell script and then invoke expect to run the custom script?
Basically, I'm just looking for a good way to do this and would appreciate a pointer in the right direction.
You can use ssh to pass shell commands to be run on a remote instance.
For example, here's how I check logs on multiple servers:
#!/bin/bash
nas_servers=(
    "ec2-xx-xx-xxx-xxx.ap-xxxx.compute.amazonaws.com"
    "ec2-xx-xx-xxx-xxx.ap-xxxx.compute.amazonaws.com"
    "ec2-xx-xx-xxx-xxx.ap-xxxx.compute.amazonaws.com"
    "ec2-xx-xx-xxx-xxx.ap-xxxx.compute.amazonaws.com"
)
for s in "${nas_servers[@]}"
do
    echo "Checking $s:"
    ret=$(ssh -i ~/pem/Key.pem "user@$s" bash << 'EOF'
files=/var/log/syslog*
for f in $files
do
    if [[ ${f##*.} = 'gz' ]]; then
        cmd=zcat
    else
        cmd=cat
    fi
    $cmd $f | egrep -wi 'error|warn|crit|fail'
done
EOF
)
    if [[ -z $ret ]]; then
        echo "No errors found."
    else
        echo "$ret"
    fi
done
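To come back to the expect part of the question: below is a minimal sketch of an expect script that opens the ssh session with your .pem key and answers svn's password prompt. Everything in it (the key path, user@host, the working-copy path, the svn username, and the prompt text) is a placeholder to adapt:

#!/usr/bin/expect -f
set timeout 60
# Placeholders: key path, user@host, working-copy path, and svn username
spawn ssh -i ~/pem/Key.pem user@ec2-xx-xx-xxx-xxx.ap-xxxx.compute.amazonaws.com \
    "cd /path/to/working/copy && svn up --username myuser"
expect {
    -nocase "password" { send "mypassword\r"; exp_continue }
    eof
}

That said, if your svn client supports it, running svn up --username myuser --password mypassword --non-interactive avoids expect entirely; just be aware the password then appears in the process list and your shell history.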