When I run ps -ax I see a number of processes named "sh". The parent of all these sh processes is the same, some other process. I am not able to kill the sh processes with any of the kill commands (say kill -9 pid); in order to kill them I have to kill the parent process.
Did you try killall sh or killall -9 sh?
Hope this helps
These sh processes are due to unclosed file descriptors in a process. Because of this, multiple sh child processes get created for the main parent process. We closed all of these descriptors in the main process, and that resolved our issue.
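To see which descriptors a process is still holding open, you can inspect /proc on Linux (a sketch; $$ is the shell's own pid and merely stands in for the real parent pid):

```shell
# List the open file descriptors of a process via /proc (Linux only).
# Replace $$ (this shell's own pid) with the pid of the parent process
# suspected of leaking descriptors.
ls -l /proc/$$/fd
```

Each entry is a symlink to the file, pipe, or socket behind that descriptor, which makes leaked ones easy to spot.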
Once I've started tensorboard server with the command
tensorboard --logdir=path/to/logdir
is there a command that explicitly closes it, or can I just kill it without any harm?
Thanks
In my case, CTRL+C doesn't work. The following works for me:
CTRL+Z halts the on-going TensorBoard process.
Check the id of this halted process by typing in the terminal
jobs -l
kill this process, otherwise you can't restart TensorBoard with the default port 6006 (of course, you can change the port with --port=xxxx)
kill -9 #PROCESS_ID
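The same sequence can be scripted; a sketch using kill -STOP (which stops the process much like CTRL+Z does) and a dummy sleep standing in for TensorBoard:

```shell
# Stop a process the way CTRL+Z would, then kill -9 it so the port it
# held is freed. 'sleep 300' stands in for the tensorboard process.
sleep 300 &
pid=$!
kill -STOP "$pid"   # stop it, as CTRL+Z does to a foreground job
kill -9 "$pid"      # a stopped process is still killed by SIGKILL
```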
You can kill it without any harm! TensorBoard simply reads your log files and generates visualizations based on them in memory, so you don't need to worry about file corruption, etc.
This command will find the tensorboard process and terminate it:
kill $(ps -e | grep 'tensorboard' | awk '{print $1}')
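For reference, ps -e prints the columns PID TTY TIME CMD, so awk's $1 is the pid. A minimal demo of that extraction on a canned line instead of live ps output:

```shell
# awk's $1 is the first whitespace-separated field, which in ps -e
# output is the pid column.
line=' 4585 pts/4    00:00:01 tensorboard'
echo "$line" | awk '{print $1}'   # prints 4585
```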
I solved this problem this way (actually, in my ssh session CTRL+C sometimes doesn't work properly, so I use this):
Get the running tensorboard process details
ps -ef|grep tensorboard
Sample Output: uzzal_x+ 4585 4413 0 02:46 pts/4 00:00:01 bin/python /bin/tensorboard --logdir=runs/
Kill the process using pid (process id)
kill -9 <pid>
The first number, 4585, is the pid of the tensorboard process.
There is a shortcut that is more drastic than CTRL+C:
Try CTRL+\ (which sends SIGQUIT instead of SIGINT)
You can write this:
ps -ef | grep port_number
Find the pid of the tensorboard process in the output, then use:
kill -9 <pid>
On windows, use: taskkill /F /PID <pid> where <pid> is the process ID.
There is a process named flask that I can't kill either with
kill -9 PID #(PID found from ps aux | grep flask)
or with
killall -9 flask
It always respawns with a higher PID.
This command may be useful for you. But be careful: it will kill all python3 processes.
pkill python3
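When a process respawns with a higher PID after every kill, something (a supervisor, a debug auto-reloader, a shell loop) is restarting it, and the real fix is to find and stop that parent. A sketch for printing a process's parent pid ($$, the shell's own pid, stands in for the flask pid):

```shell
# Print only the parent pid (PPID) of a given process. Replace $$ with
# the pid of the respawning process to see what keeps restarting it.
ps -o ppid= -p $$
```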
I have a fooinit.rt process launched at boot (/etc/init.d/boot.local).
Here is the boot.local file:
...
/bin/fooinit.rt &
...
I create an at job in order to kill fooinit.rt; it is triggered from C code,
and I wrote a stop script in which kill -9 `pidof fooinit.rt` is written.
Here is the stop script:
#!/bin/sh
proc_file="/tmp/gdg_list$$"
ps -ef | grep $USER > $proc_file
echo "Stop script is invoked!!"
suff=".rt"
pid=`fgrep "$suff" $proc_file | awk '{print $2}'`
echo "pid is '$pid'"
rm $proc_file
When the at job timer expires, the kill -9 command cannot terminate the fooinit.rt process!!
I checked the pid number that is printed, and the sentence "Stop script is invoked!!" appears, so the script does run.
Here is the "at" job command in the C code (I verified that the stop script is called 1 min later):
...
case 708: /* There is a trigger signal here */
{
    result = APP_RES_PRG_OK;
    system("echo '/sbin/stop' | at now + 1 min");
}
...
On the other hand, it works properly when fooinit.rt is launched manually from the shell as an ordinary command (not from /etc/init.d/boot.local). In that case kill -9 works and terminates the fooinit.rt process.
Do you have any idea why kill -9 cannot terminate the fooinit.rt process when it is launched from /etc/init.d/boot.local?
Your solution is built around a race condition. There is no guarantee it will kill the right process (an unknowable amount of time can pass between the ps call and the attempt to make use of the pid), plus it's also vulnerable to a tmp exploit: someone could create a few thousand symlinks under /tmp called "gdg_list[1-32767]" that point to /etc/shadow and your script would overwrite /etc/shadow if it runs as root.
Another potential problem is the setting of $USER -- have you made sure it's correct? Your at job will be called as the user your C program runs as, which may not be the same user your fooinit.rt runs as.
Also, your script doesn't include a kill command at all.
A much cleaner way of doing this would be to run your fooinit.rt under some process supervisor like runit and use runit to shut it down when it's no longer needed. That avoids the pid bingo as well as the /tmp attack vector.
But even using pkill -u username -f fooinit.rt would be less racy than the script you provided.
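A minimal demonstration of the pkill -f mechanism, with a dummy sleep standing in for fooinit.rt (-x makes the match exact, so the pattern cannot accidentally match the command line of the shell running the snippet):

```shell
# Start a stand-in process, then kill it by matching its full command
# line -- the mechanism pkill -u username -f fooinit.rt relies on.
sleep 300 &
pid=$!
pkill -xf 'sleep 300'             # -x: the whole command line must match
wait "$pid" 2>/dev/null || true   # reap it so kill -0 reports it gone
kill -0 "$pid" 2>/dev/null || echo terminated
```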
I started a process in the background using:
erl -s system start -detached
I need to kill the process. Is there a way to kill all processes that are running in the background?
I tried:
init:reboot()
If you want to kill all running Erlang processes on your system, run this in the bash shell (as superuser if possible):
for i in `ps -ef | grep '[e]rl' | awk '{print $2}'`; do echo $i; kill -9 $i; done
One way to achieve this is to start another Erlang console, attach from it to the first one, and do whatever is necessary to terminate it properly.
You need to know the name of the target node. In your example the node is started without a name; you can give it one by adding the -name or -sname flag, like this: erl -sname node_1 -s system start -detached
Start another node with different name: erl -sname node_2
Press ^G (control G) on the terminal with node_2
Press r and type the name of the first node: node_1@localhost (or whatever name it has)
Press c
Eshell V5.10.1 (abort with ^G)
(node_2@localhost)1>
User switch command
--> r 'node_1@localhost'
--> c
Eshell V5.10.1 (abort with ^G)
(node_1@localhost)1>
You should see a new prompt with the name of the first node. Now all your commands will be executed on the first node. To terminate the first node you can type erlang:halt().
I have setup a few EC2 instances, which all have a script in the home directory. I would like to run the script simultaneously across each EC2 instance, i.e. without going through a loop.
I have seen csshX for OS X for interactive terminal usage... but was wondering what the command-line code is to execute commands like
ssh user@ip.address . test.sh
to run the test.sh script across all instances since...
csshX user@ip.address.1 user@ip.address.2 user@ip.address.3 . test.sh
does not work...
I would like to do this over the commandline as I would like to automate this process by adding it into a shell script.
And for bonus points... if there is a way to send a message back to the machine issuing the command once the script has completed, that would be fantastic.
Will it be good enough to have a master shell script that runs all these things in the background? E.g.,
#!/bin/sh
pidlist="ignorethis"
for ip in ip1 ip2
do
    ssh user@$ip . test.sh &
    pidlist="$pidlist $!" # get the process number of the last forked process
done
# Now all processes are running on the remote machines, and we want to know
# when they are done.
# (EDIT) It's probably better to use the 'wait' shell built-in; that's
# precisely what it seems to be for.
while true
do
    sleep 1
    alldead=true
    for pid in $pidlist
    do
        if kill -0 $pid > /dev/null 2>&1
        then
            alldead=false
            echo some processes alive
            break
        fi
    done
    if $alldead
    then
        break
    fi
done
echo all done.
It will not be exactly simultaneous, but it should kick off the remote scripts in parallel.
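As the (EDIT) comment in the script already hints, the wait builtin does that bookkeeping for you; a sketch of the same master script using it (hostnames and the script invocation are the question's placeholders):

```shell
#!/bin/sh
# Fork each remote script, then let the shell's wait builtin block until
# every background job has exited -- no pid polling loop needed.
for ip in ip1 ip2
do
    ssh user@$ip . test.sh &
done
wait    # returns once all backgrounded ssh commands have finished
echo all done.
```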