I want to make a shell where the child process runs Linux commands (with the help of execvp) such as "ls". The problem is that I also want it to support piped commands such as "ls /tmp | wc -l". The program I have for now works for commands like "ls" or "ls -l /tmp":
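For the pipe case, a minimal sketch (not the asker's program) of how a two-command pipeline like "ls /tmp | wc -l" can be wired up with pipe(), fork(), dup2() and execvp(); the two argument vectors are hard-coded here purely for illustration:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    char *left[]  = { "ls", "/tmp", NULL };   /* writer side of the pipe */
    char *right[] = { "wc", "-l", NULL };     /* reader side of the pipe */
    int fd[2];

    if (pipe(fd) == -1) { perror("pipe"); exit(1); }

    if (fork() == 0) {                  /* first child runs "ls /tmp" */
        dup2(fd[1], STDOUT_FILENO);     /* its stdout goes into the pipe */
        close(fd[0]); close(fd[1]);
        execvp(left[0], left);
        perror("execvp"); exit(1);
    }
    if (fork() == 0) {                  /* second child runs "wc -l" */
        dup2(fd[0], STDIN_FILENO);      /* its stdin comes from the pipe */
        close(fd[0]); close(fd[1]);
        execvp(right[0], right);
        perror("execvp"); exit(1);
    }

    close(fd[0]); close(fd[1]);         /* parent must close both ends, or wc never sees EOF */
    wait(NULL); wait(NULL);
    return 0;
}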
I'm running multiple 'shred' commands on multiple hard drives in a workstation. The 'shred' commands are all run in the background in order to run them concurrently. The output of each 'shred' is redirected to a text file, and the output is also shown on the terminal. I'm using tail to monitor the log file for errors and halt the script if any are encountered. If there are no errors, the script should simply continue on to conclusion. When I test it by forcing a drive failure (disconnecting a drive), it detects the I/O errors and the script halts as expected. The problem I'm having is that when there are NO errors, I cannot get 'tail' to terminate once the 'shred' commands have completed, and the script just hangs at that point. Since I put the 'tail' command in the 'while' loop below, I would have thought that 'tail' would continue to run as long as the 'shred' processes were running, but would then halt after the 'shred' processes stopped, thus ending the 'while' loop. But that hasn't been the case. The script still hangs even after the 'shred' processes have ended. If I go to another terminal window while the script is "hanging" and kill the 'tail' process, the script continues as normal. Any ideas how to get the 'tail' process to end when the 'shred' processes are gone?
My code:
shred -n 3 -vz /dev/sda 2>&1 | tee -a logfile &
shred -n 3 -vz /dev/sdb 2>&1 | tee -a logfile &
shred -n 3 -vz /dev/sdc 2>&1 | tee -a logfile &
pids=$(pgrep shred)
while kill -0 $pids 2> /dev/null; do
    tail -qn0 -f logfile | \
    read LINE
    echo "$LINE" | grep -q "error"
    if [ $? = 0 ]; then
        killall shred > /dev/null 2>&1
        echo "Error encountered. Halting."
        exit
    fi
done
wait $pids
There is other code after the 'wait' that does other things, but this is where the script hangs.
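Since tail -f never exits on its own, the loop body above never finishes and the while condition is never re-checked. A minimal sketch of one possible workaround, assuming GNU/procps pgrep and pkill are available (GNU tail's --pid option is an alternative when only a single writer needs tracking):
shred -n 3 -vz /dev/sda 2>&1 | tee -a logfile &
shred -n 3 -vz /dev/sdb 2>&1 | tee -a logfile &
shred -n 3 -vz /dev/sdc 2>&1 | tee -a logfile &

# watcher: once no shred process is left, stop the tail so the pipeline below can end
( while pgrep -x shred > /dev/null; do sleep 5; done; pkill -f "tail -qn0 -f logfile" ) &

# grep -q exits on the first match, so an error stops the monitoring immediately
if tail -qn0 -f logfile | grep -q "error"; then
    killall shred > /dev/null 2>&1
    echo "Error encountered. Halting."
    exit 1
fi
wait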
Not directly related to the question, but you can use Daggy - Data Aggregation Utility.
In this case, all subprocesses will be terminated together with the main daggy process.
I want to execute multiple lines of shell commands over remote ssh.
According to https://unix.stackexchange.com/questions/1459/remote-for-loop-over-ssh, I just need to use single quotes to execute a multi-line for loop. Here is what I tried:
ssh user@server 'cd ~/Data; cwd=pwd; for i in `find 201806 -name "day_*"`; do echo $i; cd $i; a.sh; cd $cwd; done'
Since nothing happens, I am speculating that there is a syntax error that I must not be understanding. 201806 is the name of a folder in the Data directory, and I have tested that the command works without the ssh user#server. Any suggestions?
Try this
ssh -v user@server 'cd ~/Data; cwd=`pwd`; for i in `find 201806 -name "day_*"`; do echo $i; cd $i; ./a.sh; cd $cwd; done'
Also, make sure your a.sh file has execute permission. The -v option will print debugging messages about the connection's progress.
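For longer command sequences, piping a here-document into a remote shell is often easier to quote correctly than a single-quoted one-liner. A sketch of the same loop written that way (same paths and script name as in the question; the quoted 'EOF' keeps the local shell from expanding anything):
ssh user@server 'bash -s' <<'EOF'
cd ~/Data
cwd=$(pwd)
for i in $(find 201806 -name "day_*"); do
    echo "$i"
    cd "$i"
    ./a.sh
    cd "$cwd"
done
EOF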
I have the script below.
OUTPUT_FOLDER=/home/user/output
LOGFILE=/root/log/test.log
HOST_FILE=/home/user/host_file.txt
mkdir -p $OUTPUT_FOLDER
rm -f $OUTPUT_FOLDER/*
pssh -h $HOST_FILE -o $OUTPUT_FOLDER "cat $LOGFILE | tail -n 100 | grep foo"
When I run this script on its own, it works fine and the $OUTPUT_FOLDER contains the output from the servers in the $HOST_FILE. However, when I run the script as a cron job, the $OUTPUT_FOLDER is created but is always empty. It's as if the pssh command was never executed.
Why is this, and how do I resolve it?
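A common culprit when a script works interactively but produces nothing under cron is cron's minimal environment: PATH is very short (so pssh may not be found) and there is no ssh-agent or interactive key prompt available. A sketch of the usual checks; the /usr/local/bin location and the crontab line are assumptions, not taken from the question:
# 1. log the script's own output so any pssh error becomes visible, e.g. in crontab:
#    */10 * * * * /home/user/run_pssh.sh >> /tmp/run_pssh.cron.log 2>&1

# 2. set an explicit PATH inside the script (cron's default often lacks /usr/local/bin)
export PATH=/usr/local/bin:/usr/bin:/bin

# 3. or call pssh by its full path
/usr/local/bin/pssh -h "$HOST_FILE" -o "$OUTPUT_FOLDER" "cat $LOGFILE | tail -n 100 | grep foo"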
Is there a way to execute a command before accessing a remote terminal?
When I enter this command:
$> ssh user@server.com 'ls'
The ls command is executed on the remote computer but ssh quits and I cannot continue in my remote session.
Is there a way of keeping the connection open? The reason I am asking is that I want to set up the ssh session without having to modify the remote .bashrc file.
I would force the allocation of a pseudo tty and then run bash after the ls command:
syzdek@host1$ ssh -t host2.example.com 'ls -l /dev/null; bash'
-rwxrwxrwx 1 root other 27 Apr 1 2005 /dev/null
bash-4.1$
You can try using process substitution on the init file of bash. In the example below, I define a function myfunc:
myfunc () {
echo "Running myfunc"
}
which I transform into a properly escaped one-liner echoed inside the <(...) construct for process substitution, used as the --init-file argument of bash:
$ ssh -t localhost 'bash --init-file <( echo "myfunc() { echo \"Running myfunc\" ; }" ) '
Password:
bash-3.2$ myfunc
Running myfunc
bash-3.2$ exit
Note that once connected, my .bashrc is not sourced but myfunc is defined and properly usable in an interactive session.
It might prove a little difficult for more complex bash functions, but it works.
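If the remote .bashrc should still be loaded alongside the injected function, the generated init file can simply source it first; a variant of the same (hypothetical) myfunc example:
$ ssh -t localhost 'bash --init-file <( echo "source ~/.bashrc; myfunc() { echo \"Running myfunc\" ; }" ) '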
About running script.sh via ssh
#!/bin/bash
/usr/local/cpanel/scripts/cpbackup
clamscan -i -r --remove /home/
exit
Does that mean it runs /usr/local/cpanel/scripts/cpbackup and, after that finishes, runs clamscan -i -r --remove /home/,
or does it run the two commands at the same time?
Commands in a script are run one at a time, in order, unless one of the commands "daemonizes" itself or is explicitly put in the background (e.g. with &).
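A small sketch of the difference, using the same two commands: the first pair runs sequentially exactly as in the script above, while the second pair is backgrounded with & so both run at the same time and wait blocks until they have finished:
#!/bin/bash
# sequential: clamscan starts only after cpbackup has exited
/usr/local/cpanel/scripts/cpbackup
clamscan -i -r --remove /home/

# concurrent: both start immediately; 'wait' returns once both have exited
/usr/local/cpanel/scripts/cpbackup &
clamscan -i -r --remove /home/ &
wait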