Can Monit send an alert if a folder size exceeds a certain number? - monit

I'm monitoring a file size using Monit the following way:
check file mydb with path /data/mydatabase.db
if size > 1 GB then alert
Can I do something similar to monitor a directory size?

I don't think Monit has a built-in check for directory size, but my workaround is to use check program as follows:
check program the_folder_size
with path "/path/to/bashscript /path/to/folder_to_monitor 500"
if status != 0 then alert
where the bash script takes two arguments: the path of the folder to monitor and the size (in kB) that the folder is not allowed to exceed. The echo line makes Monit show the folder size in the Monit web GUI.
#!/bin/bash
# Usage: <script> <folder> <max_size_in_kB>
FOLDER_PATH=$1
REFERENCE_SIZE=$2
# Total size of the folder in kB
SIZE=$(/usr/bin/du -s "$FOLDER_PATH" | /usr/bin/awk '{print $1}')
# Whatever is printed here is displayed by Monit in the web GUI
echo "$FOLDER_PATH - ${SIZE}kB"
# A non-zero exit status makes Monit raise the alert
if [[ $SIZE -gt $REFERENCE_SIZE ]]; then exit 1; fi
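To verify the helper before wiring it into Monit, you can run it by hand; the paths and the 500 kB limit below are just the placeholder values from the example above:
/path/to/bashscript /path/to/folder_to_monitor 500
echo $?   # 0 = folder is under the limit, 1 = limit exceeded, so Monit would alert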

You can also write the bash command directly in the Monit configuration to check the directory size. The unit here is kB.
CHECK PROGRAM data_size
WITH PATH /bin/bash -c '[ $(du -s /data/path/ | awk "{print \$1}") -lt 500 ]'
IF status != 0
THEN alert
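Before putting the one-liner into the Monit configuration, it may help to test it in a plain shell first, since the exit status is exactly what Monit evaluates (the path and the 500 kB limit are the same placeholder values as above):
/bin/bash -c '[ $(du -s /data/path/ | awk "{print \$1}") -lt 500 ]'
echo $?   # 0 while the directory is below 500 kB, 1 otherwise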

Related

NcFTP -S with -bb

I am trying to upload all changed files to my FTP server. However, I cannot use -S .tmp and -v when I use the -bb flag - and I can't use those options with ncftpbatch at all. Here is my code:
#!/bin/bash -eo pipefail
IN=$(git diff-tree --no-commit-id --name-only -r HEAD)
OUT=$(echo $IN | tr ";" "\n")
for file in "${OUT[@]}"; do
ncftpput -bb -S .tmp -v -u "zeussite@kolechia.heliohost.org" -p "*****" ftp.kolechia.heliohost.org "/" $file
done
ncftpbatch
As you can see, I need the -S .tmp to avoid breaking the site during uploads. -v provides output to prevent my CI service from timing out.
How can I upload only the changed files - without temporarily breaking the site? I'm thinking of just logging in separately for each file, but that is bad practice.
Why not launch a function in the background that just prints dummy values like "uploading, please wait", sleeps for a few seconds, and repeats? Outside the loop you can kill that background job (see the sketch after this answer).
If you don't want any visible output, print something effectively invisible instead:
printf "\0"
or
printf "a\b"

Move file, change permissions and rename it keeping the same extension

Using zsh 5.2 on Fedora 24 workstation.
I want to be able to programmatically:
move an image file (can have jpg/ jpeg/ png/ JPG/ PNG extensions)
from /tmp/folder1 to ~/Pictures
This file will have the same few initial characters --- prefix111.jpg OR prefix222.png, etc.
rename the file such that samefilename.JPG becomes 20161013.jpg
20161013 is today's date in yyyymmdd format
Note that the extension becomes small letters
And JPEG or jpeg becomes jpg
change the permissions of the moved file to 644
All at one go.
If there are multiple prefix* files, the command should just fail silently.
I would initially like to do this at the command prompt, with the option of adding a cron job later. I mean, will the same zsh command/script work in cron?
I am sure this is doable. However, with my limited shell knowledge, I could only achieve:
mv /tmp/folder1/prefix-*.JPG ~/Pictures/$(date +'%Y%m%d').jpg
Problems with my approach are many. It does not handle capitalization, does not take care of different extensions and does not address the permission issue.
How about this:
#!/bin/sh
# Collect the actual matches (patterns that match nothing are filtered out by ls)
MATCHES=$(ls /tmp/folder1/prefix*.jpg /tmp/folder1/prefix*.jpeg /tmp/folder1/prefix*.png /tmp/folder1/prefix*.JPG /tmp/folder1/prefix*.PNG 2>/dev/null)
# Fail silently unless there is exactly one matching file
if [ $(echo "$MATCHES" | grep -c .) -ne 1 ]; then
    exit 1
fi
# Pick the lowercase target extension
if echo "$MATCHES" | grep -qi '\.png$'; then
    SUFF=png
else
    SUFF=jpg
fi
DEST=$HOME/Pictures/$(date +'%Y%m%d').$SUFF
mv "$MATCHES" "$DEST"
chmod 644 "$DEST"
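As for the cron part of the question: the same script should work from cron, since it does not depend on interactive shell features; a hedged example crontab entry (the script location is a placeholder) could look like:
# hypothetical crontab line: run the move script once an hour
0 * * * * /bin/sh /home/user/bin/move_prefix_image.sh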

Run RapSearch-Program with Torque PBS and qsub

My problem is that I have a cluster-server with Torque PBS and want to use it to run a sequence-comparison with the program rapsearch.
The normal RapSearch command is:
./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32
Now I want to run it with 2 nodes on the cluster-server.
I've tried with: echo "./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32" | qsub -l nodes=2 but nothing happened.
Do you have any suggestions? Where am I going wrong?
Standard output (and error output) files are placed in your home directory by default; take a look. You are looking for a file named STDIN.e[numbers]; it will contain the error message.
However, I see that you're using ./rapsearch but are not really being explicit about what directory you're in. Your problem is therefore probably a matter of changing directory into the directory that you submitted from. When your terminal is in the directory of the rapsearch executable, try echo "cd \$PBS_O_WORKDIR && ./rapsearch [arguments]" | qsub [arguments] to submit your job to the cluster.
Other tips:
You could add rapsearch to your path if you use it often. Then you can use it like a regular command anywhere. It's a matter of adding the line export PATH=/full/path/to/rapsearch/bin:$PATH to your .bashrc file.
Create a submission script for use with qsub. Here is a good example.
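A minimal Torque/PBS submission script along those lines might look like the sketch below; the job name, walltime, and two-node request mirror the question, and it assumes you submit from the rapsearch directory:
#!/bin/bash
#PBS -N rapsearch_job
#PBS -l nodes=2
#PBS -l walltime=24:00:00

# Change into the directory the job was submitted from
cd "$PBS_O_WORKDIR"

./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32
You would then submit it with: qsub rapsearch.pbs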

Scripted truecrypt mount, without using /dev/ or UUID

I have 5 truecrypt encrypted drives. Running ubuntu 13.04. I'm trying to run the following command in a script to mount my drives.
truecrypt -t /dev/disk/by-uuid/25f8c629-d0c8-4c39-b4c2-aacba38b5882 /media/P --password="$password" -k "" --protect-hidden=no
Because of the way TrueCrypt works I can't use this: the UUID is only accessible once the drives are mounted.
Is it possible to do the same thing but with hard-drive serial numbers or model numbers? Something a bit more permanent?
I can't use the /dev/ names as they change randomly nearly every time I reboot the PC. This is due to 2 of my drives being connected via a PCI card.
Use Disk ID instead:
#!/bin/bash
# Run this script as root to avoid entering the root password twice
secret=0xa52f2c38
# Generate tempfile
tempfile=fdisk.tmp
sudo fdisk -l > $tempfile
# --------------------------------------------------------------------------
# Locate secret drive and mount it
# --------------------------------------------------------------------------
num=$(( $(grep -n "^Disk identifier: $secret" $tempfile | cut -f1 -d:) - 5 ))
if [ $num -gt 0 ] # num will be greater than 0 if drive exists
then
# Get line containing /dev
# ----------------------------------------------------------------------
dev=$(sed -n "${num}p" $tempfile | cut -f2 -d' ' | sed 's/://')
truecrypt $dev /media/secret
# Check (Create .truecrypt on the mounted volume beforehand)
# ----------------------------------------------------------------------
if [ ! -f /media/secret/.truecrypt ]
then
zenity --error --text="There was a problem mounting secret"
fi
fi
rm $tempfile
The source of the script is: http://delightlylinux.wordpress.com/2012/05/21/mounting-truecrypt-volumes-by-disk-id/
I recommend reading it through if you have difficulty understanding what the script is doing; the explanation is thorough.
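To find the value to put in the secret variable, you can list the identifiers of all attached disks beforehand; this just greps the same fdisk output the script parses:
sudo fdisk -l | grep "Disk identifier"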

Is there a curl/wget option that prevents saving files in case of http errors?

I want to download a lot of urls in a script but I do not want to save the ones that lead to HTTP errors.
As far as I can tell from the man pages, neither curl nor wget provides such functionality.
Does anyone know of another downloader that does?
I think the -f option to curl does what you want:
-f, --fail
(HTTP) Fail silently (no output at all) on server errors. This is mostly done to better
enable scripts etc to better deal with failed attempts. In normal cases when an HTTP
server fails to deliver a document, it returns an HTML document stating so (which often
also describes why and more). This flag will prevent curl from outputting that and
return error 22. [...]
However, if the response was actually a 301 or 302 redirect, that still gets saved, even if its destination would result in an error:
$ curl -fO http://google.com/aoeu
$ cat aoeu
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
here.
</BODY></HTML>
To follow the redirect to its dead end, also give the -L option:
-L, --location
(HTTP/HTTPS) If the server reports that the requested page has moved to a different
location (indicated with a Location: header and a 3XX response code), this option will
make curl redo the request on the new place. [...]
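Combining the two flags, a typical invocation for the example above would be:
curl -fLO http://google.com/aoeu
which follows the redirect and, because the final response is an HTTP error, fails with exit status 22 instead of saving the error page.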
A one-liner I just set up for this very purpose:
(works only with a single file, but might be useful for others)
A=$$; ( wget -q "http://foo.com/pipo.txt" -O $A.d && mv $A.d pipo.txt ) || (rm $A.d; echo "Removing temp file")
This will attempt to download the file from the remote host. If there is an error, the file is not kept; in all other cases it is kept and renamed.
Ancient thread, but I landed here looking for a solution and ended up writing some shell code to do it.
if [ `curl -s -w "%{http_code}" --compressed -o /tmp/something \
http://example.com/my/url/` = "200" ]; then
echo "yay"; cp /tmp/something /path/to/destination/filename
fi
This will download the output to a tmp file and create/overwrite the destination file only if the status was 200. My use case is slightly different: in my case the output takes more than 10 seconds to generate, and I did not want the destination file to remain blank for that duration.
NOTE: I am aware that this is an older question, but I believe I have found a better solution for those using wget than any of the above answers provide.
wget -q $URL 2>/dev/null
Will save the target file to the local directory if and only if the HTTP status code is within the 200 range (Ok).
Additionally, if you wanted to do something like print out an error whenever the request was met with an error, you could check the wget exit code for non-zero values like so:
wget -q $URL 2>/dev/null
if [ $? != 0 ]; then
echo "There was an error!"
fi
I hope this is helpful to someone out there facing the same issues I was.
Update:
I just put this into a more script-able form for my own project, and thought I'd share:
function dl {
    pushd . > /dev/null
    cd "$(dirname "$1")"
    wget -q "$BASE_URL/$1" 2> /dev/null
    if [ $? != 0 ]; then
        echo ">> ERROR could not download file \"$1\"" 1>&2
        exit 1
    fi
    popd > /dev/null
}
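Assuming BASE_URL is defined elsewhere in the script, it would be used like this (the URL and relative path are placeholders):
BASE_URL="http://example.com/files"   # hypothetical base URL
dl subdir/data.txt                    # downloads into ./subdir/, aborts the script on error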
I have a workaround to propose: it does download the file, but it also removes it if its size is 0 (which happens if a 404 occurs).
wget -O <filename> <url/to/file>
if [[ $(du <filename> | cut -f 1) == 0 ]]; then
rm <filename>;
fi;
It works for zsh but you can adapt it for other shells.
But it only saves the file in the first place if you provide the -O option.
As an alternative, you can download to a temporary file and rotate it into place:
wget http://example.net/myfile.json -O myfile.json.tmp -t 3 -q && mv myfile.json.tmp myfile.json
The previous command always downloads to the file "myfile.json.tmp", but only when the wget exit status is equal to 0 is it renamed to "myfile.json".
This prevents the final file from being overwritten when a network failure occurs.
The advantage of this method is that if something goes wrong you can inspect the temporary file and see what error was returned.
The "-t" parameter makes wget retry the download several times in case of error.
The "-q" option enables quiet mode, which is important with cron because cron will report any output of wget.
The "-O" option sets the output file path and name.
Remember that for cron schedules it is very important to always provide the full path for all files, and in this case for the "wget" program itself as well.
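Putting that advice together, a hedged example crontab entry (all paths are placeholders) could look like:
# hypothetical cron job: fetch the file every 15 minutes, using full paths throughout
*/15 * * * * /usr/bin/wget http://example.net/myfile.json -O /var/data/myfile.json.tmp -t 3 -q && /bin/mv /var/data/myfile.json.tmp /var/data/myfile.json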
You can download the file without saving it by using the "-O -" option, as in:
wget -O - http://jagor.srce.hr/
You can get more information at http://www.gnu.org/software/wget/manual/wget.html#Advanced-Usage