I am studying the timeout command that ships with CentOS and writing my own version in bash.
I run the sleep and find commands from the "timeout_bash.sh" file.
However, when the find command runs, the process list shows "timeout_bash.sh" as if it had been executed by the find command.
Can you tell me why?
Image URL: https://i.stack.imgur.com/t1U9l.jpg
#!/bin/bash
# Script file name: timeout_bash.sh
timeout_bash() {
    scriptName="${0##*/}"

    declare -i DEFAULT_TIMEOUT=5
    declare -i DEFAULT_INTERVAL=1
    declare -i DEFAULT_DELAY=1

    declare -i timeout=DEFAULT_TIMEOUT
    declare -i interval=DEFAULT_INTERVAL
    declare -i delay=DEFAULT_DELAY

    while getopts ":t:i:d:" option; do
        case "$option" in
            t) timeout=$OPTARG ;;
            i) interval=$OPTARG ;;
            d) delay=$OPTARG ;;
        esac
    done
    shift $((OPTIND - 1))

    if (($# == 0 || interval <= 0)); then
        echo "No time given, so the timeout command will not run"
        exit 1
    fi

    # kill -0 pid: the exit code indicates whether a signal may be sent to the $pid process.
    (
        ((t = timeout))
        while ((t > 0)); do
            sleep $interval
            kill -0 $$ || exit 0
            ((t -= interval))
        done

        # Be nice, post SIGTERM first.
        # The 'exit 0' below will be executed if any preceding command fails.
        kill -s SIGTERM $$ && kill -0 $$ || exit 0
        sleep $delay
        kill -s SIGKILL $$
    ) 2> /dev/null &

    exec "$@"
}

sleep 5
timeout_bash -t 60 find / -nouser -print 2>/dev/null >> a.txt
timeout_bash -t 60 find / -nogroup -print 2>/dev/null >> b.txt
timeout_bash -t 60 find / -xdev -perm -2 -ls >> c.txt
timeout_bash -t 60 find / -xdev -name '.*' -print >> d.txt &
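One plausible explanation for what ps shows: the final `exec` line replaces the script's shell process with find in place, keeping the same PID, while the backgrounded `( ... ) &` watchdog subshell keeps running under the old process image; process-tree tools then show the two linked. A minimal sketch (assuming bash, not the script above) of exec's PID-preserving behavior:

```shell
#!/bin/bash
# Sketch: exec replaces the current shell's process image but keeps its PID.
# The inner bash prints its PID, then execs a new bash that prints its PID
# again -- both lines show the same number.
out=$(bash -c 'echo $$; exec bash -c "echo \$\$"')
pid1=$(printf '%s\n' "$out" | sed -n 1p)
pid2=$(printf '%s\n' "$out" | sed -n 2p)
[ "$pid1" = "$pid2" ] && echo "exec kept PID $pid1"
```

So after the exec, the PID that used to belong to the script is running find, and the still-alive watchdog subshell remains associated with it.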
I have the following commands which need to be included in Ansible.
How can I incorporate these commands in an Ansible module?
while sudo fuser /var/lib/dpkg/lock >/dev/null 2>&1 ; do
    sleep 1
done
while sudo fuser /var/lib/apt/lists/lock >/dev/null 2>&1 ; do
    sleep 1
done
if [ -f /var/log/unattended-upgrades/unattended-upgrades.log ]; then
    while sudo fuser /var/log/unattended-upgrades/unattended-upgrades.log >/dev/null 2>&1 ; do
        sleep 1
    done
fi
You can definitely run a chunk of code in the shell module by using YAML's literal block scalar indicator: |.
Just mind that the code must be indented further than the shell task.
- shell: |
    while sudo fuser /var/lib/dpkg/lock >/dev/null 2>&1 ; do
      sleep 1
    done
    while sudo fuser /var/lib/apt/lists/lock >/dev/null 2>&1 ; do
      sleep 1
    done
    if [ -f /var/log/unattended-upgrades/unattended-upgrades.log ]; then
      while sudo fuser /var/log/unattended-upgrades/unattended-upgrades.log >/dev/null 2>&1 ; do
        sleep 1
      done
    fi
You could also refactor a little bit using a loop:
- shell: |
    if [ -f {{ item }} ]; then
      while sudo fuser {{ item }} >/dev/null 2>&1 ; do
        sleep 1
      done
    fi
  loop:
    - /var/lib/dpkg/lock
    - /var/lib/apt/lists/lock
    - /var/log/unattended-upgrades/unattended-upgrades.log
I'm trying to write a script which will do the following:
Connect to a postgres database, make a dump, gzip the dump and store it in a directory.
All of this must be rotated: 24 backups per day, 7 per week.
The file name must contain the date, hour and minutes.
Old backups must be deleted (I don't want more backups than described).
This script will run every hour through cron.
I wrote this script (changed some things I found on the web) but there are some bugs:
1) It says: "awk: line 1: syntax error at or near end of line"
2) When I run the script it overwrites the daily backup and doesn't make a new one
3) The hourly backup only creates a folder named "hourly" and doesn't make a backup
4) All backups will be backed up to a Synology NAS via rsync
Can anyone help me please?
Script backup.sh:
# !/bin/bash
# for use with cron, eg:
# 0 3 * * * postgres /var/db/db_backup.sh example_db

if [[ -z "$1" ]]; then
    echo "Usage: $0 <example_db> [pg_dump example_db]"
    exit 1
fi

DB="$1"; shift
DUMP_EXAMPLE_DB=$@

DIR="/var/db/backups/$DB"
KEEP_HOURLY=24
KEEP_DAILY=7
KEEP_WEEKLY=5
KEEP_MONTHLY=12

function rotate {
    rotation=$1
    fdate=`date +%Y-%m-%d-$H -d $date`
    file=$DIR/daily/*$fdate*.gz
    mkdir -p $DIR/$rotation/ || abort
    if [ -f $file ]; then
        cp $file $DIR/$rotation/ || abort
    else
        echo
    fi
}

function prune {
    dir=$DIR/$1
    keep=$2
    ls $dir | sort -rn | awk " NR > $keep" | while read f; do rm $dir/$f; done
}

function abort {
    echo "aborting..."
    exit 1
}

mkdir -p $DIR/hourly || abort
mkdir -p $DIR/daily || abort
mkdir -p $DIR/weekly || abort
mkdir -p $DIR/monthly || abort
mkdir -p $DIR/yearly || abort

date=`date +%Y-%m-%d` || abort
hour=`date -d $date +%H` || abort
minute=`date -d $date +M` || abort
day=`date -d $date +%d` || abort
weekday=`date -d $date +%w` || abort
month=`date -d $date +%m` || abort

# Do the daily backup
/usr/bin/pg_dump $DB $DUMP_EXAMPLE_DB | gzip > $DIR/daily/${DB}_$date.gz
test ${PIPESTATUS[0]} -eq 0 || abort

# Perform rotations
if [[ "$weekday" == "0" ]]; then
    rotate weekly
fi
if [[ "$hour/$minute" == "0/0" ]]; then
    rotate hourly
fi
if [[ "$day" == "01" ]]; then
    rotate monthly
fi
if [[ "$month/$day" == "01/01" ]]; then
    rotate yearly
fi

prune hourly $KEEP_HOURLY
prune daily $KEEP_DAILY
prune weekly $KEEP_WEEKLY
prune monthly $KEEP_MONTHLYOD
Many Thanks
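As a side note on bugs 2) and 3): `date +%Y-%m-%d-$H` and `date +M` in the script look as though they intend the `%H` and `%M` format specifiers (the `$H` variable is never set, and `M` without a `%` is a literal character). A minimal sketch, assuming GNU date; the file name shown is hypothetical:

```shell
#!/bin/bash
# Illustrative fragment, not the poster's script: extract hour and minute
# with the %-prefixed specifiers so each hourly run gets a distinct name.
stamp=$(date +%Y-%m-%d)
hour=$(date +%H)    # two-digit hour, 00-23
minute=$(date +%M)  # two-digit minute, 00-59
echo "backup name would be: db_${stamp}-${hour}${minute}.gz"
```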
I want to write a script that restarts httpd instances only if they are in running status. For one instance it is working fine, but for more than one instance it is failing.
Below is the script I am using:
ctl_var=`find /opt/apache/instances/ -name apachectl | grep -v "\/httpd\/"`
ctl_proc=`ps -ef | grep -i httpd | grep -i " 1 " | wc -l`
if [ $ctl_proc <= 0 ]; then
    echo "httpd is not running"
else
    $ctl_var -k stop; echo "httpd stopped successfully"
    sleep 5
    $ctl_var -k start
    sleep 5
    echo "httpd started"; ps -ef | grep httpd | grep -i " 1 "
fi
Please suggest...
You mentioned there are multiple instances; I see the script is missing a for loop. As written it only restarts the last one picked up in $ctl_var.
The modified script should look something like the one below; tweak it if necessary:
ctl_var=`find /opt/apache/instances/ -name apachectl | grep -v "\/httpd\/"`
ctl_proc=`ps -ef | grep -i httpd | grep -i " 1 " | wc -l`
for i in `echo $ctl_var`
do
    if [ $ctl_proc -le 0 ]; then
        echo "httpd is not running"
    else
        $i -k stop; echo "httpd stopped successfully"
        sleep 5
        $i -k start
        sleep 5
        echo "httpd started"; ps -ef | grep httpd | grep -i " 1 "
    fi
done
Hope this helps.
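One more side note, a sketch rather than a fix for the script above: `ps -ef | grep ... | wc -l` can count the grep process itself, which skews the running check; `pgrep -f` excludes itself from its own results. The `sleep 30` pattern below is purely illustrative:

```shell
#!/bin/bash
# Illustrative only: count processes whose command line matches a pattern.
# pgrep never counts itself, unlike a plain grep in a pipeline.
sleep 30 &
bgpid=$!
count=$(pgrep -c -f 'sleep 30')
echo "matching processes: $count"
kill "$bgpid"
```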
I have a script that works perfectly when started from a terminal, but it doesn't work from cron:
#!/bin/bash
echo $(date) Starting...
rsync -avR --files-from=<(ssh -i /root/.ssh/id_rsa root@hostA 'find /Data/for_mk/* -type f -cmin -160') root@hostA:/ /
crontab:
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
*/2 * * * * sh /opt/script.sh >> /var/log/rsync 2>&1
log:
rsync: failed to open files-from file <(ssh -i /root/.ssh/id_rsa root@hostA find /Data/for_mk/* -type f -cmin -160): No such file or directory
rsync error: syntax or usage error (code 1) at main.c(1435) [client=3.0.9]
What am I doing wrong?
Thanks in advance.
Try removing sh from:
*/2 * * * * sh /opt/script.sh >> /var/log/rsync 2>&1
Or replace it with bash
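For context: `<(...)` process substitution is a feature of bash (and ksh/zsh) that POSIX `sh` lacks, which is why the script behaves differently when cron invokes it with `sh` than it does in an interactive bash session. A minimal demonstration under bash:

```shell
#!/bin/bash
# Under bash, <(...) expands to a readable /dev/fd/N path, so commands that
# expect a file name (like rsync --files-from) can consume command output.
cat <(echo ok)
```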
I need to see which files have been added or removed between two streams. The most obvious way would be `git ls-files` in each stream, except this is not Git and I do not see an analogous command. So for today:
for f in $(find * -type f); do
    accurev stat "$f"
done | \
    fgrep -v '(external)' | \
    awk '{print $1}' > .list
If there is a better way, it should be clear and easy to find here:
http://www.accurev.com/download/docs/5.7.0_books/AccuRev_5_7_User_CLI.pdf
but it is not. Help? Thank you.
If you want to see the difference between two streams, run the following command:
accurev diff -a -v "Stream1" -V "Stream2"
As the command line question has been answered, here's how to do the same via the AccuRev GUI.
Select one dynamic stream, workspace or snapshot.
Right click and select "Show Diff By Files"
Select a different dynamic stream, workspace or snapshot.
You'll be presented with a list of files different between the two choices, and yes you can mix-and-match between dynamic streams, workspaces and snapshots.
You can then select any file and select "Show Difference" to see differences between the two files.
Since neither of the two answers addressed the question, I eventually worked out a script to do what is really needed. An "accurev lsfiles" command is sorely needed.
#! /bin/bash

declare -r progpid=$$
declare -r progdir=$(cd $(dirname $0) >/dev/null && pwd)
declare -r prog=$(basename $0)
declare -r program="$progdir/$prog"
declare -r usage_text=' [ <directory> ... ]
If no directory is specified, "." is assumed'

die() {
    echo "$prog error: $*"
    exec 1>/dev/null 2>&1
    kill -9 $progpid
    exit 1
} 1>&2

usage() {
    test $# -gt 0 && {
        exec 1>&2
        echo "$prog usage error: $*"
    }
    printf "USAGE: $prog %s\n" "$usage_text"
    exit $#
}

init() {
    shift_ct=$#
    tmpd=$(mktemp -d ${TMPDIR:-/tmp}/ls-XXXXXX)
    test -d "$tmpd" || die "mktemp -d does not work"
    exec 4> ${tmpd}/files
    trap "rm -rf '$tmpd'" EXIT
    prune_pat=

    while :
    do
        test $# -eq 0 && break
        test -f "$1" && break
        [[ "$1" =~ -.* ]] || break
        case "X$1" in
        X-p )
            prune_pat+="${2}|"
            shift 2 || usage "missing arg for '-p' option"
            ;;
        X-p* )
            prune_pat+="${1#-p}"
            shift
            ;;
        X-x* )
            set -x
            tput reset 1>&2
            PS4='>${FUNCNAME:-lsf}> '
            shift
            ;;
        X-* )
            usage "unknown option: $1"
            ;;
        * )
            break
            ;;
        esac
    done

    (( shift_ct -= $# ))
    test ${#prune_pat} -gt 0 && \
        prune_pat='(^|/)('${prune_pat%|}')$'
}

chkdir() {
    declare list=$(exec 2> /dev/null
        for f in "$@"
        do ls -d ${f}/* ${f}/.*
        done | \
            grep -v -E '.*/\.\.*$' )

    for f in $(accurev stat ${list} | \
        grep -v -F '(external)' | \
        awk '{print $1}' | \
        sed 's#^/*\./##')
    do
        test ${#prune_pat} -gt 0 && [[ $f =~ ${prune_pat} ]] && continue
        if test -d "$f"
        then chkdir "$f"
        elif test -f "$f" -o -L "$f"
        then echo "$f" 1>&4
        fi
    done
}

init ${1+"$@"}
(( shift_ct > 0 )) && shift ${shift_ct}
test $# -eq 0 && set -- .
chkdir "$@"
exec 4>&-
sort -u ${tmpd}/files
It is a bit over-the-top, but I have a boilerplate I always use for my scripts.