Keep server running on EC2 instance after SSH is terminated

Currently, I have two servers running on an EC2 instance (MongoDB and bottlepy). Everything works while I am SSHed into the instance and have started the two servers, but when I close the SSH session (the instance is still running), I lose both servers. Is there a way to keep them running after logging out? I am using Bitvise Tunnelier on Windows 7.
The instance I am using is Ubuntu Server 12.04.3 LTS.

For those landing here from a Google search, I would like to add tmux as another option. tmux is widely used for this purpose, and it comes preinstalled on new Ubuntu EC2 instances.
Managing a single session
Here is a great answer by Hamish Downer given to a similar question over at askubuntu.com:
I would use a terminal multiplexer - screen being the best known, and tmux being a more recent implementation of the idea. I use tmux, and would recommend you do too.
Basically tmux will run a terminal (or set of terminals) on a computer. If you run it on a remote server, you can disconnect from it without the terminal dying. Then when you log in again later, you can reconnect and see all the output you missed.
To start it the first time, just type
tmux
Then, when you want to disconnect, press Ctrl+B, D (i.e. press Ctrl+B, then release both keys, and then press d)
When you login again, you can run
tmux attach
and you will reconnect to tmux and see all the output that happened. Note that if you accidentally lose the ssh connection (say your network goes down), tmux will still be running, though it may think it is still attached to a connection. You can tell tmux to detach from the last connection and attach to your new connection by running
tmux attach -d
In fact, you can use the -d option all the time. On servers, I have this in my .bashrc
alias tt='tmux attach -d'
So when I login I can just type tt and reattach. You can go one step further if you want and integrate the command into an alias for ssh. I run a mail client inside tmux on a server, and I have a local alias:
alias maileo='ssh -t mail.example.org tmux attach -d'
This does ssh to the server and runs the command given at the end: tmux attach -d. The -t option ensures that a terminal is started; if a command is supplied, it is not run in a terminal by default. So now I can run maileo on a local command line and connect to the server and the tmux session. When I disconnect from tmux, the ssh connection is also killed.
This shows how to use tmux for your specific use case, but tmux can do much more than this. This tmux tutorial will teach you a bit more, and there is plenty more out there.
Managing multiple sessions
This can be useful if you need to run several processes in the background simultaneously. To do this effectively, each session will be given a name.
Start (and connect to) a new named session:
tmux new-session -s session_name
Detach from any session as described above: Ctrl+B, D.
List all active sessions:
tmux list-sessions
Connect to a named session:
tmux attach-session -t session_name
To kill/stop a session, you have two options. One is to enter the exit command while connected to the session you want to kill. The other is to use the command:
tmux kill-session -t session_name
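Applied to the original question, a minimal sketch might start each server in its own detached session and check on them later (the mongod flags and app path here are assumptions; adjust them for your setup):
tmux new-session -d -s mongo 'mongod --dbpath /data/db'
tmux new-session -d -s bottle 'python yourbottlepyapp.py'
tmux list-sessions                # verify both sessions are up
tmux attach-session -t bottle     # inspect the app, then detach with Ctrl+B, D
The -d flag creates each session without attaching to it, so both servers keep running after you log out.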

If you don't want to run the process as a service (or via an Apache module), you can use screen, like I do for IRC.
screen keeps running on your server even if you close the connection, and thus every process you started inside it keeps running too.
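The basic workflow is similar to the tmux one above (the session name here is just an example):
sudo apt-get install screen       # if it is not already installed
screen -S myserver                # start a named session
# ...start your long-running process inside the session...
# detach with Ctrl+A, D; the process keeps running
screen -r myserver                # reattach later to see its output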

It would be nice if you provided more info about your environment, but assuming it's Ubuntu Linux, you can start the services in the background or as daemons.
sudo service mongodb start
nohup python yourbottlepyapp.py &
(Use nohup if you are in an SSH session and want the process to keep running when the session ends; it makes the process ignore the hangup signal.)
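For example, redirecting the output explicitly keeps nohup from dropping it into ./nohup.out (the log file name is just a placeholder):
nohup python yourbottlepyapp.py > bottleapp.log 2>&1 &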
You can also run your bottle.py app using Apache mod_wsgi (running under the Apache service). More info here: http://bottlepy.org/docs/dev/deployment.html
Hope this helps.
Addition: (demonstrating that your process still runs after you exit the ssh session)
Take this example, time.py:
import time
time.sleep(3600)
Then run:
$ python3 time.py &
[1] 3027
$ ps -Af | grep -v grep | grep time.py
ubuntu 3027 2986 0 18:50 pts/3 00:00:00 python3 time.py
$ exit
Then ssh back to the server
$ ps -Af | grep -v grep | grep time.py
ubuntu 3027 1 0 18:50 ? 00:00:00 python3 time.py
The process is still running (notice it no longer has a controlling tty, and its parent is now init, PID 1).

You will want the started services to disconnect from the controlling terminal. I would suggest you use nohup to do that, e.g.
ssh my.server "/bin/sh -c 'nohup /path/to/service'"
You may need to put an & in there (inside the inner quotes) to run it in the background, as in the sketch below.
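For instance, backgrounding the command and redirecting all three standard streams lets ssh return immediately instead of waiting on the open descriptors (a sketch; the path is a placeholder):
ssh my.server "nohup /path/to/service > /dev/null 2>&1 < /dev/null &"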
As others have commented, if you run proper init scripts to start/stop services (or ubuntu's service command), you should not see this issue.

On Linux-based instances, putting the job in the background followed by disown seems to do the job.
$ ./script &
$ disown

Related

SSH to multiple hosts at once

I have a script which loops through a list of hosts, connecting to each of them with SSH using an RSA key, and then saving the output to a file on my local machine - this all works correctly. However, the commands to run on each server take a while (~30 minutes) and there are 10 servers. I would like to run the commands in parallel to save time, but can't seem to get it working. Here is the code as it is now (working):
for host in $HOSTS; do
  echo "Connecting to $host.."
  ssh -n -t -t $USER@$host "/data/reports/formatted_report.sh"
done
How can I speed this up?
You should add & to the end of the ssh call so that it runs in the background.
for host in $HOSTS; do
  echo "Connecting to $host.."
  ssh -n -t -t $USER@$host "/data/reports/formatted_report.sh" &
done
I tried using & to send the SSH commands to the background, but I abandoned this because, after the SSH commands complete, the script performs some more commands on the output files, which need to have been created.
Using & made the script skip directly to those commands, which failed because the output files were not there yet. But then I learned about the wait command, which waits for background commands to complete before continuing. Now this is my code, which works:
for host in $HOSTS; do
  echo "Connecting to $host.."
  ssh -n -t -t $USER@$host "/data/reports/formatted_report.sh" &
done
wait
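Since the original script also saved each host's output to a local file, a sketch combining redirection with wait might look like this (the file naming is an assumption, and the -t flags are dropped because output goes to a file rather than a terminal):
for host in $HOSTS; do
  echo "Connecting to $host.."
  ssh -n $USER@$host "/data/reports/formatted_report.sh" > "report_$host.txt" &
done
wait
# the report_* files are now complete and safe to post-process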
Try massh (http://m.a.tt/er/massh/). It is a nice tool for running ssh across multiple hosts.
The Hypertable project has recently added a multi-host ssh tool. This tool is built with libssh and establishes connections and issues commands asynchronously and in parallel for maximum parallelism. See Multi-Host SSH Tool for complete documentation. To run a command on a set of hosts, you would run it as follows:
$ ht ssh host00,host01,host02 /data/reports/formatted_report.sh
You can also specify a host name or IP pattern, for example:
$ ht ssh 192.168.17.[1-99] /data/reports/formatted_report.sh
$ ht ssh host[00-99] /data/reports/formatted_report.sh
It also supports a --random-start-delay <millis> option that will delay the start of the command on each host by a random time interval between 0 and <millis> milliseconds. This option can be used to avoid thundering herd problems when the command being run accesses a central resource.

How to do remote ssh non-interactively

I am trying to connect to a remote host from my local host with the command below. However, there is a setting on the remote host such that, as soon as you log in, it prompts for a badge ID, a password, and a reason for logging in, because it is coded that way in the profile file on the remote host. How can I skip those steps and log in directly, non-interactively, without disturbing the code in the profile?
jsmith@local-host$ ssh -t -t generic_userID@remote-host
Enter your badgeID, < exit > to abort:
Enter your password for <badgeID> :
Enter a one line justification for your interactive login to generic_userID
Small amendment: to get past the remote prompts, an expect approach is required; but if a local script connects to a bunch of remote servers whose configuration may be broken, just use these SSH options:
ssh -f -q -o BatchMode=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null USER@TARGETSYSTEM
This skips the password prompt when no SSH key is set up, exits silently, and lets the script continue with the other hosts.
The -f option puts ssh into the background, which is required when calling the ssh command from a sh (batch) file to remove the local console redirect to remote input (it implies -n).
Look into setting up a wrapper script around expect. This should do exactly what you're looking for.
Here are a few examples you can work from.
I have upvoted Marvin Pinto's answer because there is every reason to script this, in case there are other features in the profile that you need, such as the message of the day (motd).
However, there is a quick and dirty alternative if you don't want to make a script and you don't want other features from the profile. Depending on your preferred shell on the remote host, you can insist that the shell bypasses the profile files. For example, if bash is available on the remote host, you can invoke it with:
ssh -t -t generic_userID@remote-host bash --noprofile
I tested the above on the macOS 10.13 version of OpenSSH. Normally the command at the end of the ssh invocation is run non-interactively, but the -t flag allows bash to start an interactive shell.
Details are in the Start-up files section of the Bash Reference Manual.

Causing SSH to Time Out (client side)

I have a little Raspberry Pi that I'm playing with. I've got it running headless, and I need to make it forward one of its ports to a remote server when certain conditions are satisfied.
However, I don't want the connection to sit indefinitely until the server closes it. Is there a way to close an SSH connection (from the client; I have no root on the server) after a certain amount of time? Ideally I'd do it directly via the ssh command, but I'm writing in Python 3, so if there's a way to do this in Python, then I'm all ears.
In your /etc/ssh/sshd_config:
ClientAliveInterval <time interval in seconds>
ClientAliveCountMax 0
So using 300 in the first directive will kick the connection after 5 minutes of idle time. You'll need to restart sshd for the change to take effect.
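On Ubuntu, for example, the service is named ssh:
sudo service ssh restart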
Try ssh -o ServerAliveInterval=10 server.org
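ServerAliveCountMax (default 3) controls how many unanswered keepalive probes the client tolerates, so the pair below drops a dead connection after roughly 30 seconds, entirely from the client side:
ssh -o ServerAliveInterval=10 -o ServerAliveCountMax=3 server.org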
Unless you run ssh with the "-N" option, it normally launches some kind of command or shell on the remote system (the Pi in this case). Ssh disconnects when this remote command exits.
If you're running ssh just to create some port forwards, you may be running with "-N", or you may be letting the ssh session sit at a command prompt. Instead, you could launch a command on the Pi which exits after the desired period of time. You could use the sleep command, for example:
ssh -Lwhatever -Rwhatever user@pi "sleep 3600"
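If you would rather enforce a hard time limit from the client side, GNU coreutils timeout(1) can kill the local ssh process after a fixed period, e.g. when using -N with no remote command (a sketch reusing the placeholders above):
timeout 3600 ssh -N -Lwhatever -Rwhatever user@pi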

ssh-add is not persistent even though ssh-agent is started

I did ssh-add. During the session it works fine, but when I exit and reconnect to the server with ssh, it does not work anymore; ssh-add -l says: no identities.
I do start the ssh-agent in my .bash_profile using eval $(ssh-agent).
Any ideas what I can do to keep the identities?
// EDIT
So here is my scenario. I am connecting to my web space using ssh. I want to pull data from GitHub using git pull. I added the connection to GitHub, and git pull works fine, but it asks for the passphrase. Doing an ssh-add and entering the passphrase stops that, but only for the current session.
I have to start the ssh-agent using something like eval $(ssh-agent) because it does not autostart on the server.
The main problem I am having is that I need a script to do the git pull, which is invoked by a request from GitHub, so I cannot give it the passphrase.
If you're spawning the agent when your session starts, then it'll die when you disconnect. You could connect to a screen or byobu/tmux server, which keeps the agent alive (you can skip an instance of bash if you connect like ssh user@hostname -t 'byobu').
Otherwise, have the agent come up when the machine boots so your session comings and goings don't affect it.
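One common pattern for that is a snippet in ~/.bash_profile that reuses a long-lived agent instead of spawning a fresh one per login; a sketch, with the env file path being an assumption:
AGENT_ENV="$HOME/.ssh/agent.env"
[ -f "$AGENT_ENV" ] && . "$AGENT_ENV" > /dev/null
# ssh-add -l exits with status 2 when it cannot reach any agent
ssh-add -l > /dev/null 2>&1
if [ $? -eq 2 ]; then
  (umask 077; ssh-agent > "$AGENT_ENV")
  . "$AGENT_ENV" > /dev/null
fi
Note that you still have to run ssh-add once after each reboot; this only stops every new login from starting (and later orphaning) its own empty agent.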
Edit: you can also forward your agent from your local machine. This works very well if you happen to have the same keys available on both machines. Try something like this in your ~/.ssh/config:
Host whatever
    Hostname whatever.com
    User username
    ForwardAgent yes
You invoke this with ssh whatever, in case you are not familiar with that config file.

Cygrunsrv & autossh: A way to embed remote commands in the command line?

I'm using cygrunsrv and autossh on Windows XP to create a service building a tunnel to a remote server, but I also want to create another tunnel from the remote server to another server.
I can achieve it with this command line:
autossh -M 5432 serverA -t 'autossh -M 4321 serverB -N'
but when I want to set it up in Cygwin through cygrunsrv to make it work as a service:
cygrunsrv -I TUNNEL -p /usr/bin/autossh -a "-M 5432 serverA -t 'autossh -M 4321 serverB -N'" -e AUTOSSH_NTSERVICE=yes -e AUTOSSH_POLL=20 -e AUTOSSH_GATETIME=30
It's not fully working. The service creates the tunnel to serverA correctly, but it does not send the autossh command "autossh -M 4321 serverB -N" to serverA.
I tried escaping the quotes, but none of my attempts made any difference, and I'm not seeing any command sent in the autossh logs.
I think the problem is related to the pseudo-terminal, which is not created through cygrunsrv.
I'd like to know if there's a way to fix my cygrunsrv command line to make it work, or whether I should consider a different approach.
Lionel, try removing the AUTOSSH_NTSERVICE=yes from the cygrunsrv invocation. As /usr/share/doc/autossh/README.Cygwin explains:
Setting AUTOSSH_NTSERVICE=yes in the calling environment ... change[s] autossh's behavior in three useful ways:
(1) Add an -N flag to each invocation of ssh, thus disabling shell access. The idea is that if you're running autossh as a system service, you're using it to forward ports; it wouldn't make sense to run a shell session as a system service. (If you think this reasoning is wrong, please send a bug report to the author or Cygwin maintainer, and tell us what you're trying to do.)
Despite what the above says, it seems that you may have a good reason for not wanting -N (which suppresses command execution) in your service's ssh invocation. Removing AUTOSSH_NTSERVICE=yes should take care of it. It will have a couple of other minor disadvantages, but you can probably live with them. Read the rest of README.Cygwin for the details.
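That would reduce the service definition above to something like this (an untested sketch, with the same placeholders as in the question):
cygrunsrv -I TUNNEL -p /usr/bin/autossh -a "-M 5432 serverA -t 'autossh -M 4321 serverB -N'" -e AUTOSSH_POLL=20 -e AUTOSSH_GATETIME=30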