I am trying to run Valgrind on a daemon for several days over an ssh connection.
In order to run it in the background I am using the following command:
nohup valgrind --leak-check=full --show-reachable=yes --error-limit=no --log-file=/var/log/valgrind_log.txt hello_world </dev/null >/dev/null &
However, when I disconnect from ssh, Valgrind seems to stop.
Does anyone know whether it is possible to run Valgrind in the background for several days so that it keeps running after the ssh connection is closed? If so, what is the correct way to do it?
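One approach that usually survives the disconnect is to detach the job from the ssh session entirely, either by starting it in its own session with setsid or by removing it from the shell's job table with disown. This is only a sketch built on the command above, not a verified fix for this particular daemon:
# Start valgrind in a new session so SIGHUP on disconnect never reaches it:
setsid valgrind --leak-check=full --show-reachable=yes --error-limit=no \
    --log-file=/var/log/valgrind_log.txt hello_world </dev/null >/dev/null 2>&1 &
# Or, in bash: background it with nohup as before, then drop it from the job table:
nohup valgrind --leak-check=full --show-reachable=yes --error-limit=no \
    --log-file=/var/log/valgrind_log.txt hello_world </dev/null >/dev/null 2>&1 &
disown
Running it inside screen or tmux on the remote host would also work, and additionally lets you re-attach later to check on progress.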
I'm using TestCafe to test an app I'm working on. Today, when I went to run a TC test, I got the following message:
Error: listen EADDRINUSE: address already in use :::57664
I can usually handle these pretty easily: I issue the command:
lsof -i -P | grep -i "listen"
or
lsof -i tcp:57664
and then kill the offending task that is identified.
However, in this case, that port number isn't listed, so I don't know which task to kill. Also,
ps aux | grep -i "TestCafe"
doesn't show anything helpful.
Any suggestions on how to identify the hung task and kill it?
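A few hedged things to try (sketches only; the port number is the one from the error above). The ':::57664' in the error suggests an IPv6 listener, and TestCafe runs under node, so the process may not match the string 'TestCafe' at all:
# Ask lsof specifically for listeners on that port, as root, covering IPv6:
sudo lsof -nP -iTCP:57664 -sTCP:LISTEN
sudo lsof -nP -i6TCP:57664
# Look for the node process that TestCafe spawns rather than for 'TestCafe':
ps aux | grep -i node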
I have a script on an ESXi host. I need to be able to ssh in, execute the script, disown the process, and exit the ssh session while keeping the process running. I have tried executing it in the following ways:
nohup /etc/run_command
nohup /etc/run_command &
I have also run across this example (from Sudoall.com):
exec </dev/null >/dev/null 2>/dev/null
but I must not be using it right, because I get the same results.
Is there a way to disown the process and exit the ssh session without killing the running process on ESXi?
---- Update ----
After a lot of Google research, I stumbled across a website explaining that ESXi does not have screen, tmux, or disown, and nohup was not working... so I gave setsid a shot, and it worked.
setsid /etc/run_command &
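For completeness, a fuller variant of the same idea (the log path is only illustrative); redirecting the standard streams keeps the detached process from holding on to the ssh session's terminal:
setsid /etc/run_command </dev/null >/tmp/run_command.log 2>&1 &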
Well, I was smart enough to put an exit 0 in one of my dotfiles on a remote machine. Whenever I log in, the shell now exits instantly. How can I ssh into the machine without sourcing all the dotfiles?
I found a workaround to solve this problem by directly running a command:
ssh -t remotehost vim /dotfile/i/had/to/revert
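Another sketch that gets you an interactive remote shell while skipping the startup files, assuming the remote login shell is bash and the offending exit 0 lives in a profile file rather than in .bashrc:
# -t forces a pty, and the inner bash is told to read neither profile nor rc files:
ssh -t remotehost bash --noprofile --norc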
I have set up a few EC2 instances, which all have a script in the home directory. I would like to run the script simultaneously across each EC2 instance, i.e. without going through a loop.
I have seen csshX on OS X for interactive terminal usage... but I was wondering what the command-line invocation is to execute commands like
ssh user@ip.address . test.sh
to run the test.sh script across all instances since...
csshX user@ip.address.1 user@ip.address.2 user@ip.address.3 . test.sh
does not work...
I would like to do this on the command line, as I want to automate the process by adding it to a shell script.
And for bonus points: if there is a way to send a message back to the machine issuing the command once the script has finished running, that would be fantastic.
Will it be good enough to have a master shell script that runs all these things in the background? For example:
#!/bin/sh
pidlist=""                      # PIDs of the backgrounded ssh commands
for ip in ip1 ip2
do
    ssh user@$ip . test.sh &
    pidlist="$pidlist $!"       # $! is the PID of the last forked process
done
# Now all processes are running on the remote machines, and we want to know
# when they are done.
# (EDIT) It's probably better to use the 'wait' shell built-in; that's
# precisely what it seems to be for (see the sketch after this answer).
while true
do
    sleep 1
    alldead=true
    for pid in $pidlist
    do
        if kill -0 $pid > /dev/null 2>&1
        then
            alldead=false
            echo some processes alive
            break
        fi
    done
    if $alldead
    then
        break
    fi
done
echo all done.
It will not be exactly simultaneous, but it should kick off the remote scripts in parallel.
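As the edit in the script notes, the polling loop can be replaced by the wait built-in, which blocks until every background child has exited. A sketch with the same placeholder user and IPs:
#!/bin/sh
# Kick off the remote scripts in parallel; each ssh exits only when its
# remote script finishes.
for ip in ip1 ip2
do
    ssh user@$ip . test.sh &
done
# 'wait' with no arguments blocks until all background jobs are done.
wait
echo all done.
Since each ssh only returns once its remote script has finished, the final echo also serves as the "completed" notification on the calling machine that the question asks about.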
I want to run a command on several machines using ssh. I know this can be done with "ssh user@hostname command". However, the command I want to run prints some strings to the console. Is there any way to send all those strings back to the console that I'm on?
You could run the commands in a screen:
screen -S test
ssh user@hostname command1
ssh user@hostname2 command2
You can then detach (Ctrl-A d) from the screen, let it run for however long it will run, then re-attach (screen -r test) and see all of the output. This assumes that you won't have a ton of output from the commands, however.
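If the commands do produce a lot of output, screen's logging option can capture it to a file instead of relying on the scrollback (a sketch; by default screen writes the log to screenlog.0 in the working directory):
# Start the session with logging enabled, run the commands, then detach with Ctrl-A d:
screen -L -S test
ssh user@hostname command1
ssh user@hostname2 command2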
ssh user@hostname command
does just that: if 'command' outputs something, it will show up on the terminal you ran ssh from.
Try, e.g., ssh user@hostname ls -l
But as others have said, GNU screen is invaluable for this type of work.
You probably want to use GNU Screen for this. You can start a process in a "virtual" terminal, "detach" the terminal, and log out for however long you want... Then you can ssh back in and re-attach the terminal to see the console output.
Also have a look at nohup, for example:
ssh user@domain.com 'nohup script_that_outputs_strings.py > the_strings.txt'
Then if you want to go back and monitor the progress, you could check back and tail the file or scp the output back to your local machine.
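To monitor or retrieve the output later, something along these lines would work (same hypothetical host and filename as above):
# Follow the output live:
ssh user@domain.com tail -f the_strings.txt
# Or copy the finished log back to your local machine:
scp user@domain.com:the_strings.txt .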
Why don't you send yourself an email back?
Or use a log file, and scp it to your current computer?
Otherwise, I don't know!
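A rough sketch of the email idea, assuming the remote host has a working mail command (e.g. mailx); the command name, log path, subject, and address are all placeholders:
# Run the command remotely, capture its output, and mail it back when it finishes:
ssh user@hostname 'some_command > /tmp/output.log 2>&1; mail -s "some_command finished" you@example.com < /tmp/output.log'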