How to limit the background processes launched by C shell?

I process 100 files in a directory with a command called process, and I want to parallelize this as much as possible. So I issue the following commands in a C shell, and it works great:
foreach F (dir/file*.data)
    process $F > $F.processed &
    echo $F
end
All 100 processes launch at once in the background, maximizing the usage of all my cores.
Now I want to use only half of my cores (2 out of 4) at once. Is there an elegant way to do this?

If you have GNU Parallel http://www.gnu.org/software/parallel/ installed, you can do this:
parallel -j 50% 'process {} > {}.processed; echo {}' ::: dir/file*.data
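(-j 50% sets the number of job slots to 50% of the CPU cores, so on the 4-core machine from the question at most 2 processes run at once.)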
You can install GNU Parallel simply by:
wget http://git.savannah.gnu.org/cgit/parallel.git/plain/src/parallel
chmod 755 parallel
cp parallel sem
Watch the intro videos for GNU Parallel to learn more:
https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
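The sem copy made above is GNU Parallel's semaphore mode, which can also throttle the original loop without rewriting it. A minimal sketch for the csh loop from the question (assuming sem is on the PATH; -j2 allows at most two concurrent jobs):

foreach F (dir/file*.data)
    sem -j2 "process $F > $F.processed"
    echo $F
end
sem --wait

Each sem call blocks until a slot is free, and sem --wait blocks until every queued job has finished.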

Related

Using tail to monitor an active logging file

I'm running multiple 'shred' commands on multiple hard drives in a workstation. The 'shred' commands are all run in the background in order to run them concurrently. The output of each 'shred' is redirected to a text file, and I also have the output directed to the terminal as well. I'm using tail to monitor the log file for errors, and to halt the script if any are encountered. If there are no errors, the script should simply continue on to conclusion.

When I test it by forcing a drive failure (disconnecting a drive), it detects the I/O errors and the script halts as expected. The problem I'm having is that when there are NO errors, I cannot get 'tail' to terminate once the 'shred' commands have completed, and the script just hangs at that point. Since I put the 'tail' command in the 'while' loop below, I would have thought that 'tail' would continue to run as long as the 'shred' processes were running, but would then halt after the 'shred' processes stopped, thus ending the 'while' loop. That hasn't been the case: the script still hangs even after the 'shred' processes have ended.

If I go to another terminal window while the script is "hanging" and kill the 'tail' process, the script continues as normal. Any ideas how to get the 'tail' process to end when the 'shred' processes are gone?
My code:
shred -n 3 -vz /dev/sda 2>&1 | tee -a logfile &
shred -n 3 -vz /dev/sdb 2>&1 | tee -a logfile &
shred -n 3 -vz /dev/sdc 2>&1 | tee -a logfile &
pids=$(pgrep shred)
while kill -0 $pids 2> /dev/null; do
    tail -qn0 -f logfile | \
        read LINE
    echo "$LINE" | grep -q "error"
    if [ $? = 0 ]; then
        killall shred > /dev/null 2>&1
        echo "Error encountered. Halting."
        exit
    fi
done
wait $pids
There is other code after the 'wait' that does other stuff, but this is where the script is hanging.
Not directly related to the question, but you could also look at Daggy - Data Aggregation Utility.
With it, all subprocesses are terminated together with the main daggy process.
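A more self-contained alternative is to let tail terminate itself (a sketch, assuming GNU coreutils, whose tail has a --pid option that makes -f exit when the watched process dies):

# Wrap the shreds in one subshell so there is a single pid to watch.
( shred -n 3 -vz /dev/sda 2>&1 | tee -a logfile &
  shred -n 3 -vz /dev/sdb 2>&1 | tee -a logfile &
  shred -n 3 -vz /dev/sdc 2>&1 | tee -a logfile &
  wait ) &
watcher=$!
# tail exits on its own once the watcher subshell (i.e. all shreds) is gone,
# so the loop ends in both the error and the no-error case.
tail -qn0 --pid="$watcher" -f logfile | while read LINE; do
    case $LINE in
        *error*)
            killall shred > /dev/null 2>&1    # stop the shreds; the watcher
            echo "Error encountered. Halting."  # then dies and tail follows
            ;;
    esac
done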

How to gzip tee output while running in GNU Parallel?

Suppose I am running tee inside a command executed by parallel.
I would like to gzip the output from tee:
... | tee --gzip the_file | and_continue
Bash process substitution is useful for cases like this. Something like:
... | tee >(gzip -c > the_file.gz) | and_continue
Note that gzip has to read the stream from stdin here; passing the_file as an argument would compress that file's contents instead of tee's output.
If you're choosing different files in a parallel run and need to format the name differently each time, take a look at "GNU Parallel argument placeholder in bash process substitution" for how that has to change (the process substitution has to be deferred so that it runs once per parallel job).
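For the parallel case it can look like this (a sketch; GNU Parallel runs each job in a shell, which must be one that understands process substitution, such as bash, and produce/and_continue are hypothetical stage names standing in for your real pipeline):

parallel 'produce {} | tee >(gzip -c > {}.gz) | and_continue' ::: dir/file*.data

The single quotes keep the invoking shell from expanding the process substitution; the {} placeholder is substituted per job, so each job writes its own {}.gz file.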

Can't terminate a process launched at bootup with the at daemon

I have a fooinit.rt process launched at boot (from /etc/init.d/boot.local).
Here is boot.local file
...
/bin/fooinit.rt &
...
I create an at job in order to kill fooinit.rt; it is triggered from C code.
I also wrote a stop script in which a kill -9 `pidof fooinit.rt` is supposed to be run.
Here is stop script
#!/bin/sh
proc_file="/tmp/gdg_list$$"
ps -ef | grep $USER > $proc_file
echo "Stop script is invoked!!"
suff=".rt"
pid=`fgrep "$suff" $proc_file | awk '{print $2}'`
echo "pid is '$pid'"
rm $proc_file
When the at job timer expires, the 'kill -9 pid' (of fooinit.rt) command cannot terminate the fooinit.rt process!
I checked the pid number that gets printed, and the sentence "Stop script is invoked!!" does appear, so that part is OK.
Here is the "at" job command in the C code (I verified that the stop script is called 1 minute later):
...
case 708: /* There is a trigger signal here */
{
    result = APP_RES_PRG_OK;
    system("echo '/sbin/stop' | at now + 1 min");
}
...
On the other hand, it works properly when fooinit.rt is launched manually from a shell as an ordinary command (not from /etc/init.d/boot.local); then kill -9 works and terminates the fooinit.rt process.
Do you have any idea why kill -9 cannot terminate the fooinit.rt process when it is launched from /etc/init.d/boot.local?
Your solution is built around a race condition: there is no guarantee it will kill the right process, since an unknowable amount of time can pass between the ps call and the attempt to make use of the pid. It is also vulnerable to a /tmp exploit: someone could create a few thousand symlinks under /tmp called gdg_list[1-32767] that point to /etc/shadow, and your script would overwrite /etc/shadow if it runs as root.
Another potential problem is the setting of $USER -- have you made sure it's correct? Your at job will be called as the user your C program runs as, which may not be the same user your fooinit.rt runs as.
Also, your script doesn't include a kill command at all.
A much cleaner way of doing this would be to run your fooinit.rt under some process supervisor like runit and use runit to shut it down when it's no longer needed. That avoids the pid bingo as well as the /tmp attack vector.
But even using pkill -u username -f fooinit.rt would be less racy than the script you provided.
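As a sketch of that last suggestion (assuming fooinit.rt runs as $USER and pkill is available), the whole stop script shrinks to:

#!/bin/sh
# No temp file and no ps parsing: match the process by its command line.
echo "Stop script is invoked!!"
if pkill -u "$USER" -f fooinit.rt; then
    echo "fooinit.rt signalled"
else
    echo "no fooinit.rt process found" >&2
fi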

parallel: how to pass options to commands

For parallelizing gzip compression:
parallel gzip ::: myfile_*
does the job, but how do I pass gzip options such as -r or -9?
I tried parallel gzip -r -9 ::: myfile_* and parallel gzip ::: 9 r myfile_*,
but neither works.
When I tried parallel "gzip -9 -r" ::: myfile_*,
I got this error message:
gzip: compressed data not written to a terminal. Use -f to force compression
Also the -r switch for recursively adding directories is not working.
....
Similarly for other commands: how do I pass options while using parallel?
You have the correct syntax:
parallel gzip -r -9 ::: myfile_*
So something else is wrong. What is the output of
parallel --version
You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/
You can install GNU Parallel in just 10 seconds with:
wget -O - pi.dk/3 | sh
Watch the intro video on
http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
(I don't think this question belongs here. Maybe superuser.com?)
parallel gzip -r -9 ::: * worked fine for me, going into directories and all. I am using parallel version 20130622.
Note that with this approach each directory becomes a single task. You may instead want to pipe the output of find into parallel so that each file is handled as a separate job.
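A sketch of that find variant (assuming GNU find and a GNU Parallel recent enough for -0/--null; the null separators keep unusual file names intact):

find . -type f -name 'myfile_*' -print0 | parallel -0 gzip -9

Here every file found is its own job, regardless of which directory it sits in.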
Have you tried the --gnu flag for parallel?
parallel -j+0 --gnu "command"....
On some systems (like Ubuntu) it is disabled by default.

running the same script over many machines

I have set up a few EC2 instances, each of which has a script in its home directory. I would like to run the script simultaneously across all the EC2 instances, i.e. without going through a loop.
I have seen csshX for OS X for interactive terminal usage... but was wondering what the command-line code is to execute commands like
ssh user@ip.address . test.sh
to run the test.sh script across all instances, since...
csshX user@ip.address.1 user@ip.address.2 user@ip.address.3 . test.sh
does not work...
I would like to do this from the command line, as I want to automate the process by adding it to a shell script.
And for bonus points: if there is a way to send a message back to the machine issuing the command once the script has completed, that would be fantastic.
Will it be good enough to have a master shell script that runs all these things in the background? E.g.:
#!/bin/sh
pidlist="ignorethis"
for ip in ip1 ip2
do
    ssh user@$ip . test.sh &
    pidlist="$pidlist $!" # get the process number of the last forked process
done
# Now all processes are running on the remote machines, and we want to know
# when they are done.
# (EDIT) It's probably better to use the 'wait' shell built-in; that's
# precisely what it seems to be for.
while true
do
    sleep 1
    alldead=true
    for pid in $pidlist
    do
        if kill -0 $pid > /dev/null 2>&1
        then
            alldead=false
            echo some processes alive
            break
        fi
    done
    if $alldead
    then
        break
    fi
done
echo all done.
It will not be exactly simultaneous, but it should kick off the remote scripts in parallel.
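A condensed sketch of that script using the wait built-in mentioned in the (EDIT) comment above (ip1/ip2 are placeholder hosts; key-based ssh authentication is assumed):

#!/bin/sh
pids=""
for ip in ip1 ip2
do
    ssh "user@$ip" . test.sh &
    pids="$pids $!"
done
wait $pids   # returns once every remote script has exited
echo all done.

Recent GNU Parallel can also run a command once on each server in a single line, e.g. parallel --nonall -S user@ip1,user@ip2 . test.sh.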