Get exit code of a background process that completed a while ago

This may not be a completely new topic, but I ran into a slightly odd situation.
I'm processing about 1000 files in a loop by kicking off a script in the background for each one. I want to take action on each file based on the exit code its process returns. By the time I loop around to wait for each process, some of them have already finished. I modified the script to wait only if pgrep finds the process, and otherwise just assumed the process completed successfully. The problem is that I need the exit code of each process in order to act on the corresponding file. Any ideas?
pid_list=()
for FILE in $SOME_FOLDER
do
    (process with FILE as parameter) &
    pid_list+=($!)
done
for pid in "${pid_list[@]}"
do
    if pgrep $pid; then # process could have just completed as we got here
        if wait $pid; then
            echo "process $pid successfully completed!" >> $logfile
        else
            echo "process $pid failed!" >> $logfile
            fnc_error_exit
        fi
    else
        echo "assumed that process $pid successfully completed but I DON'T KNOW THE EXIT CODE!" >> $logfile
        continue
    fi
done
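Note that bash itself keeps the exit status of every background child until you wait on it, so wait "$pid" should return the right code even for a process that finished long ago, which would make the pgrep check unnecessary. A minimal sketch, with (exit N) subshells standing in for the real per-file jobs:

```shell
#!/bin/bash
# bash remembers each child's exit status until it is waited on,
# so `wait "$pid"` works even for processes that finished long ago
pids=()
codes=()
for n in 1 2 3; do
    ( exit "$n" ) &      # stand-in for the real per-file job
    pids+=($!)
done
sleep 1                  # make sure all three children have already exited
for pid in "${pids[@]}"; do
    wait "$pid"          # still returns the stored exit status
    codes+=($?)
done
echo "collected exit codes: ${codes[*]}"
```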

I can't solve your problem exactly, since I don't know your precise situation, but could you try another approach using parent and child scripts?
For example, the topmost script looks like this:
for FILE in "$HOME"/*.txt
do
    parent.sh "$FILE" &
done
Then, parent.sh looks like this:
child.sh "$1"
RC=$?
case $RC in
    0 ) echo "Exit code 0 for $1" >> parent.log
        ;;
    1 ) echo "Exit code 1 for $1" >> parent.log
        ;;
    * ) echo "Other exit code $RC for $1" >> parent.log
        ;;
esac
The child script is like this:
grep -q hello "$1"
Then parent.sh handles every exit code from child.sh.
Every file is handled by its own parent.sh, so no status is ever missed.
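The idea can be sketched in a single file, with shell functions standing in for parent.sh and child.sh (the temporary directory and sample files here are just stand-ins for the real data):

```shell
#!/bin/bash
# single-file sketch of the parent/child pattern: a wrapper function
# records each child's exit code, so no status is lost even when the
# child finishes before anyone checks on it
child() { grep -q hello "$1"; }   # stand-in for child.sh
parent() {                        # stand-in for parent.sh
    child "$1"
    echo "Exit code $? for $1" >> "$log"
}
dir=$(mktemp -d)
log=$dir/parent.log
printf 'hello\n'   > "$dir/a.txt"
printf 'goodbye\n' > "$dir/b.txt"
for f in "$dir"/*.txt; do
    parent "$f" &
done
wait                              # every parent logs before we read the log
sort "$log"
```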

Related

Is there any if statement in awk to compare a word grepped from a file?

I am new to programming and shell scripting, and I am trying to write an if condition in a shell script.
I use some computational codes for density functional theory (e.g. Quantum ESPRESSO), and I want to automate the workflow via a shell script.
My code produces case.data, which at the end contains stop in the second column.
For example, the command below should print stop:
cat case.data | tail -n 1 | awk '{print $2}'
If I get stop from the above command, the if statement should produce nothing and the rest of the file should be executed. If I do not get stop, the executable commands in my file should not be executed; instead, a text file containing exit should be executed so that it terminates my job.
What I tried is:
#!bin/bash
# Here I have my other commands that will give us case.data and below is my if statement.
STOP=$(cat $case.dayfile | tail -n 1 | awk '{print $2}')
if [$STOP =="stop"]
then
echo "nil"
else
echo "exit" > exit
chmod u+x exit
./exit
fi
# here after I have other executable that will be executed depending on above if statement
Your whole script should be changed to just this, assuming you really do want to print "nil" in the success case for some reason:
awk '{val=$2} END{if (val=="stop") print "nil"; else exit 1}' "${case}.data" || exit
and this otherwise:
awk '{val=$2} END{exit (val=="stop" ? 0 : 1)}' "${case}.data" || exit
I used a ternary in that last script instead of just exit (val!="stop") or similar, for clarity, given that true in a condition (1) and success in an exit status (0) have opposite values.
Like this?:
if awk 'END{if($2!="stop")exit 1}' case.data
then
echo execute commands here
fi
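As a quick, self-contained check of this kind of test (note the assumption that the last record's fields remain available in the END block, which GNU awk and most common awk implementations support):

```shell
#!/bin/bash
# simulate a finished run: the last line's second column is "stop"
f=$(mktemp)
printf 'iteration 12\nenergy -5.3\nstatus stop\n' > "$f"
if awk 'END{exit ($2=="stop" ? 0 : 1)}' "$f"; then
    status="converged"    # safe to run the rest of the job
else
    status="aborting"
fi
echo "$status"
```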

shell script to exit if disk space usage is more than 75%

I want the script to exit if disk space usage is beyond a threshold (e.g. 75%). I am trying the following, but with no luck:
df -kh | awk '{if( 0+$5 >= 75 ) exit;}'
The above command is not working. Can anyone help me with this?
This is because your df output does not always come on a single line: long device names can wrap onto a second line, which misaligns the fields. To force one-line records, add the -P option and try the following:
df -hP | awk '{if( 0+$5 >= 75 ){print "exiting now..";exit 1}}'
OR
df -hP | awk '$5+0>=75{print "exiting now..";exit 1}'
OR, with the name of the mount that is the culprit for breaching the threshold:
df -hP | awk '$5+0>=75{print "Mount " $1 " has crossed threshold so exiting now..";exit 1}'
In case you don't have the -P option on your box, then try the following.
df -k | awk '/^ +/ && $4+0>=75{print "Mount " prev" has crossed threshold so exiting now..";exit 1} !/^ +/{prev=$0}'
I am using a print statement to make sure the exit is working; the -P option was tested on Bash systems.
Since the OP said he needs to exit from the complete script itself, he should add the following code outside the for loop of his script (I haven't tested it, but it should work):
if [[ $? -eq 1 ]]
then
echo "Exiting the complete script now..."
exit
else
echo "Looks good so going further in script now.."
fi
If you are using this in a script to exit the script (as opposed to exiting a long awk script) then you need to call exit from the outer script:
if df -kh | awk '{if ($5+0 > 75) exit 1 }'; then echo OK; else echo NOT; fi
Don't forget that df returns one line per mount point; you can do:
if df -kh /home ....
to check a particular mount point.
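Putting it together, here is a self-contained sketch; it feeds awk a canned df -P style output so the result is deterministic, but in a real script you would pipe df -P directly:

```shell
#!/bin/bash
# canned `df -P` output standing in for the real command
df_out='Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/sda1 1000 800 200 80% /
/dev/sda2 1000 100 900 10% /home'
# awk exits 1 if any mount is at or above the 75% threshold
if printf '%s\n' "$df_out" | awk 'NR>1 && $5+0 >= 75 {hit=1} END{exit hit?1:0}'; then
    status="below threshold"
else
    status="over threshold"    # a real script would `exit 1` here
fi
echo "$status"
```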

impossible to capture SQL error with shell script

I have 2 shell scripts. One holds the database connection, like this (script1.sh):
#!/usr/bin/ksh
query="$1"
sqlplus -s /nolog <<EOF
whenever sqlerror exit 3
connect $user/$pass@sid
${query}
EOF
echo $?
if [ 0 -ne "$?" ]; then
exit 1
fi
and the other is a bigger shell script in which I execute SQL commands like these:
#!/usr/bin/ksh
set -x
$PATH/script1.sh "
--set serveroutput on
--set feedback off
insert into table (column) values ('$1');
commit;
"
if [[ $? != 0 ]]
then
echo "Error"
exit 3
else
echo "Ok"
fi
............
..............
The problem is that this second script does not detect errors in the SQL commands and always continues through the rest of the code. I added traces and confirmed that the return code is always 0.
Could you help me detect errors when the SQL fails? Thanks!
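One likely culprit (an assumption, since the full sqlplus invocation is not shown): in script1.sh, echo $? runs before the if test, so the test examines echo's own exit status, which is always 0. Capture the code in a variable first, sketched here in plain sh with false standing in for the sqlplus call:

```shell
#!/bin/sh
# `false` stands in for the sqlplus invocation returning a nonzero code
false
rc=$?                       # capture immediately, before any other command
echo "sql exit code: $rc"   # logging no longer clobbers the status
if [ "$rc" -ne 0 ]; then
    result="error detected"
else
    result="ok"
fi
echo "$result"
```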

Writing a script in Unix to see if a file exists and to show its content

I'm writing a program in Unix that asks the user to enter the name of a file whose contents they would like to view, but I'm stuck and don't know why I keep getting errors.
The errors I keep getting are: unexpected EOF while looking for matching `"'
and: Syntax error: unexpected end of file
# this program allows the user to see the contents of a file
echo
clear
echo
echo "Enter in the the file you would like to see: "
read $1
if [ ! -e "$1" ]
then
echo cat /export/home/cna397/logname/$1
else
echo "This file does not exist
fi
You're missing an ending double quote here:
echo "This file does not exist
$1 is for command-line arguments. You'll need something like this if you want the user to enter a filename while the script is running:
read filename
cat "/export/home/cna397/logname/$filename"
Here is an executable (error-free) version of what you wrote; continue from this better basis. If you still have questions, update your post with code and comments, or just ask a new one.
echo "Enter the file you would like to see:"
read file_name
if test -e "/export/home/cna397/logname/$file_name"
then
    cat "/export/home/cna397/logname/$file_name"
else
    echo "This file does not exist"
fi
P.S. If you are comparing strings in an if test, use the = operator; here we only need test -e to check that the file exists.
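A compact, runnable version of the whole script, using a temporary directory in place of /export/home/cna397/logname so it works anywhere:

```shell
#!/bin/bash
dir=$(mktemp -d)                 # stand-in for /export/home/cna397/logname
printf 'hello world\n' > "$dir/demo.txt"
filename="demo.txt"              # in the real script: read -r filename
if [ -e "$dir/$filename" ]; then
    content=$(cat "$dir/$filename")
    echo "$content"
else
    echo "This file does not exist"
fi
```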

Exit when the result is ready and do not wait for the rest of the job?

I want to exit immediately once the result has been produced, without waiting for the rest of the jobs. Below are three attempts using different approaches: awk, head, and read. I want to exit right after the '1' is shown in the following examples, without waiting for the sleep, but none of them work. Can anyone help me?
(echo 1; sleep 10; seq 10) | head -n 1
(echo 1; sleep 10; seq 10) | awk -e 'NR==1{print $1;exit}'
(echo 1; sleep 10; seq 10) | ./test.sh
where the test.sh is the following:
while read -r -d $'\n' x
do
echo "$x"
exit
done
Refactor Using Bash Process Substitution
I want to exit after the '1' is shown in the following example without waiting for sleep.
By default, Bash shell pipelines wait for each pipeline segment to complete before processing the next segment of the pipeline. This is usually the expected behavior, because otherwise your commands wouldn't be able to act on the completed output of each pipeline element. For example, how could sort do its job in a pipeline if it didn't have all the data available at once?
In this specific case, you can do what you want, but you have to refactor your code so that awk is reading from process substitution rather than a pipe. For example:
$ time awk -e 'NR==1 {print $1; exit}' < <(echo 1; sleep 10; seq 10)
1
real 0m0.004s
user 0m0.001s
sys 0m0.002s
From the timings, you can see that the process exits when awk does. This may not be how you want to do it, but it certainly does what you want to accomplish with a minimum of fuss. Your mileage with non-Bash shells may vary.
Asynchronous Pipelines
Asynchronous pipelines are not really a generic solution, but using one works sufficiently to accomplish your goals for the given use case. The following returns immediately:
$ { echo 1 & sleep 10 & seq 10 & } | awk -e 'NR==1 {print $1; exit}'
1
because the commands in the command list are run asynchronously. When you run commands asynchronously in Bash:
The shell does not wait for the command to finish, and the return status is 0 (true).
However, note that this only appears to do what you want. Your other commands (e.g. sleep and seq) are actually still running in the background. You can validate this with:
$ { echo 1 & sleep 10 & seq 10 & } | awk -e 'NR==1 {print $1; exit}'; pgrep sleep
1
14921
As you can see, this allows awk to process the output of echo without waiting for the entire list of commands to complete, but it doesn't really short-circuit the execution of the command list. Process substitution is still likely to be the right solution, but it's always good to know you have alternatives.