PuTTY SSH reboot command

I am trying to reboot a Unix-based machine over SSH from Windows.
I am using the PuTTY command line, which looks like this:
putty.exe -ssh user@my.ip.add.ress -pw password -m reboot.txt -t
And the reboot.txt contains :
reboot
When I connect over SSH myself and run the "reboot" command, the machine reboots; it does not reboot with the PuTTY command line above.
Do you have an idea of a PuTTY command to reboot my machine?
Thanks a lot!

There are two reasons I can think of for this not to work.
The PuTTY instance is closing before the reboot happens (the default grace period is 60 seconds on many systems), and when your connection closes it kills the process. You can resolve this either by running the command under nohup or by forcing an immediate reboot with /sbin/shutdown -r now.
Your user is not the one running the script, or the user running the script does not have permission to shut down the machine. You can log a whoami from the PuTTY script to see which user is calling shutdown. A sketch combining both fixes follows below.
Here is a guide on reboot: https://www.cyberciti.biz/faq/howto-reboot-linux/
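A minimal sketch of a reboot.txt covering both fixes; it assumes your account may run shutdown via sudo without a password prompt (a sudoers NOPASSWD rule you would have to set up):
whoami
sudo /sbin/shutdown -r now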

Related

How to do other "stuff" in the same terminal where you establish an SSH tunnel

I often use an ssh tunnel. I open up one terminal to create the tunnel (e.g. ssh -L 1111:servera:2222 user@serverb). Then I open a new terminal to do my work. Is there a way to establish the tunnel in a terminal and somehow put it in the background so I don't need to open up a new terminal? I tried putting "&" at the end, but that didn't do the trick. The tunnel went into the background before I could enter the password. Then I did fg, entered the password and I was stuck in the ssh session.
I know one possible solution would be to use screen or tmux or something like that. Is there a simple solution I'm missing?
There is the -f and -N options exactly for that:
-f Requests ssh to go to background just before command execution. This is useful if
ssh is going to ask for passwords or passphrases, but the user wants it in the
background. This implies -n. The recommended way to start X11 programs at a remote
site is with something like ssh -f host xterm.
If the ExitOnForwardFailure configuration option is set to ``yes'', then a client
started with -f will wait for all remote port forwards to be successfully established
before placing itself in the background.
-N Do not execute a remote command. This is useful for just forwarding ports
(protocol version 2 only).
So the full command would be ssh -fNL 1111:servera:2222 user@serverb.
A way to prevent ssh asking for the password would also be to use SSH public keys for authentication with an agent that either saves the password or prompts it using an external graphical program such as pinentry.
It might also be useful for you to look into autossh, which will reconnect your SSH automatically if the connection drops.
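A sketch of the autossh variant, assuming autossh is installed; -M 0 turns off its extra monitoring port and relies on ssh's own keepalives instead:
autossh -M 0 -o ServerAliveInterval=30 -o ServerAliveCountMax=3 -fNL 1111:servera:2222 user@serverb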

start ssh-agent in remote ssh session

I connect to a server that runs Xubuntu and start ssh-agent there. Then I execute ssh-add on the remote server and run rsync commands that would otherwise require entering the password multiple times.
With my solution I only have to enter it one time. But how can I start the ssh-agent permanently? I want to reuse it over multiple SSH sessions.
My solution so far:
ssh myhost 'eval $(ssh-agent); ssh-add;'
You can use agent forwarding in ssh with the -A switch. Basically, it creates the agent on your host, and when you connect to myhost you will have your agent available in all your sessions and will not be prompted for the password again.
The agent should be started automatically with your local session, so all you need to do is add your keys locally and then connect to remote hosts with the -A switch.
It is not possible to keep ssh-agent running permanently on the remote host, since it runs under your session. If you don't close the first session there are ways to reconnect to it, but that is certainly not what you want to do. See the sketch below.
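A minimal sketch of the forwarding workflow; the key path is an example:
eval "$(ssh-agent)"
ssh-add ~/.ssh/id_rsa
ssh -A myhost
To make the forwarding permanent for that host, you can instead set ForwardAgent yes in the matching Host block of your local ~/.ssh/config.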

keep server running on EC2 instance after ssh is terminated

Currently, I have two servers running on an EC2 instance (MongoDB and bottlepy). Everything works when I SSHed to the instance and started those two servers. However, when I closed the SSH session (the instance is still running), I lost those two servers. Is there a way to keep the server running after logging out? I am using Bitvise Tunnelier on Windows 7.
The instance I am using is Ubuntu Server 12.04.3 LTS.
For those landing here from a google search, I would like to add tmux as another option. tmux is widely used for this purpose, and is preinstalled on new Ubuntu EC2 instances.
Managing a single session
Here is a great answer by Hamish Downer given to a similar question over at askubuntu.com:
I would use a terminal multiplexer - screen being the best known, and tmux being a more recent implementation of the idea. I use tmux, and would recommend you do too.
Basically tmux will run a terminal (or set of terminals) on a computer. If you run it on a remote server, you can disconnect from it without the terminal dying. Then when you login in again later you can reconnect, and see all the output you missed.
To start it the first time, just type
tmux
Then, when you want to disconnect, press Ctrl+B, D (i.e. press Ctrl+B, then release both keys, and then press d).
When you login again, you can run
tmux attach
and you will reconnect to tmux and see all the output that happened. Note that if you accidentally lose the ssh connection (say your network goes down), tmux will still be running, though it may think it is still attached to a connection. You can tell tmux to detach from the last connection and attach to your new connection by running
tmux attach -d
In fact, you can use the -d option all the time. On servers, I have this in my .bashrc:
alias tt='tmux attach -d'
So when I login I can just type tt and reattach. You can go one step further if you want and integrate the command into an alias for ssh. I run a mail client inside tmux on a server, and I have a local alias:
alias maileo='ssh -t mail.example.org tmux attach -d'
This does ssh to the server and runs the command given at the end, tmux attach -d. The -t option ensures that a terminal is started; if a command is supplied, it is not run in a terminal by default. So now I can run maileo on a local command line to connect to the server and the tmux session. When I disconnect from tmux, the ssh connection is also killed.
This shows how to use tmux for your specific use case, but tmux can do much more than this. This tmux tutorial will teach you a bit more, and there is plenty more out there.
Managing multiple sessions
This can be useful if you need to run several processes in the background simultaneously. To do this effectively, each session will be given a name.
Start (and connect to) a new named session:
tmux new-session -s session_name
Detach from any session as described above: Ctrl+B, D.
List all active sessions:
tmux list-sessions
Connect to a named session:
tmux attach-session -t session_name
To kill/stop a session, you have two options. One option is to enter the exit command while connected to the session you want to kill. The other is to use the command:
tmux kill-session -t session_name
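A sketch applying named sessions to the original question; the mongod flags and the script name (borrowed from the answer below) are assumptions:
tmux new-session -d -s mongo 'mongod --dbpath /data/db'
tmux new-session -d -s web 'python yourbottlepyapp.py'
tmux list-sessions
Both keep running after you log out; reattach later with tmux attach-session -t web.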
If you don't want to run the process as a service (or via an Apache module) you can (like I do for IRC) use screen.
screen keeps running on your server even if you close the connection - and thus every process you started within will keep running too.
It would be nice if you provided more info about your environment, but assuming it's Ubuntu Linux you can start the services in the background or as daemons.
sudo service mongodb start
nohup python yourbottlepyapp.py &
(Use nohup if you are in an ssh session and want the process to keep running when the session closes; nohup makes it ignore the hangup signal.)
You can also run your bottle.py app using Apache mod_wsgi (running under the apache service). More info here: http://bottlepy.org/docs/dev/deployment.html
Hope this helps.
Addition: (your process still runs after you exit the ssh session)
Take this example time.py
import time
time.sleep(3600)
Then run:
$ python3 time.py &
[1] 3027
$ ps -Af | grep -v grep | grep time.py
ubuntu 3027 2986 0 18:50 pts/3 00:00:00 python3 time.py
$ exit
Then ssh back to the server
$ ps -Af | grep -v grep | grep time.py
ubuntu 3027 1 0 18:50 ? 00:00:00 python3 time.py
The process is still running (notice it now has no tty and has been reparented to PID 1).
You will want the started services to detach from the controlling terminal. I would suggest you use nohup to do that, e.g.
ssh my.server "nohup /path/to/service > /dev/null 2>&1 &"
The & inside the quotes runs it in the background, and the redirects let ssh return immediately.
As others have commented, if you use proper init scripts to start/stop services (or Ubuntu's service command), you should not see this issue.
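For Ubuntu 12.04 specifically, a minimal sketch of such an init script as an upstart job; the file name and paths are assumptions:
# /etc/init/bottleapp.conf
description "bottle.py app"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /usr/bin/python /path/to/yourbottlepyapp.py
After that, sudo service bottleapp start works like any other service and the process is never tied to your SSH session.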
On Linux-based instances, putting the job in the background followed by disown seems to do the job.
$ ./script &
$ disown

How can I force `vagrant ssh` to do pseudo-tty allocation?

The first thing I do after vagrant ssh is usually attaching to a tmux session.
I want to automate this, so I try: vagrant ssh -c "tmux attach", but it fails and says "not a terminal".
After some googling I find this article and learn that I should force a pseudo-tty allocation before executing a screen-based program, which can be done with the -t option of ssh.
But I don't know how to use this option with vagrant ssh.
According to this documentation, you should try adding -- to the command.
As I have not used Vagrant, I am unsure of the formatting, but assume it would be similar to:
vagrant ssh -- -t
Unless you need to include the username and host, in which case add them as well.
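One plausible full form (untested), passing both the -t flag and the command through to the underlying ssh call:
vagrant ssh -- -t 'tmux attach'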

How to use ssh to run a local command after connection and quit after this local command is executed?

I wish to use SSH to establish a temporary port forward, run a local command and then quit, closing the ssh connection.
The command has to be run locally, not on the remote site.
For example consider a server in a DMZ and you need to allow an application from your machine to connect to port 8080, but you have only SSH access.
How can this be done?
Assuming you're using OpenSSH from the command line....
SSH can open a connection that will sustain the tunnel and remain active for as long as possible:
ssh -fNT -Llocalport:remotehost:remoteport targetserver
You can alternately have SSH launch something on the server that runs for some period of time. The tunnel will be open for that time. The SSH connection should remain after the remote command exits for as long as the tunnel is still in use. If you'll only use the tunnel once, then specify a short "sleep" to let the tunnel expire after use.
ssh -f -Llocalport:remotehost:remoteport targetserver sleep 10
If you want to be able to kill the tunnel from a script running on the local side, then I recommend you background it in your shell, then record the pid to kill later. Assuming you're using an operating system that includes Bourne shell....
#!/bin/sh
ssh -f -Llocalport:remotehost:remoteport targetserver sleep 300 &
sshpid=$!
# Do your stuff within 300 seconds
kill $sshpid
If backgrounding your ssh using the shell is not to your liking, you can also use advanced ssh features to control a backgrounded process. As described here, the SSH features ControlMaster and ControlPath are how you make this work. For example, add the following to your ~/.ssh/config:
Host targetserver
ControlMaster auto
ControlPath ~/.ssh/cm_socket/%r@%h:%p
Now, your first connection to targetserver will set up a control, so that you can do things like this:
$ ssh -fNT -Llocalport:remoteserver:remoteport targetserver
$ ssh -O check targetserver
Master running (pid=23450)
$ <do your stuff>
$ ssh -O exit targetserver
Exit request sent.
$ ssh -O check targetserver
Control socket connect(/home/sorin/.ssh/cm_socket/sorin@192.0.2.3:22): No such file or directory
Obviously, these commands can be wrapped into your shell script as well.
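For example, a sketch of such a wrapper; run-local-command stands in for whatever you need to run locally:
#!/bin/sh
ssh -fNT -Llocalport:remotehost:remoteport targetserver
./run-local-command
ssh -O exit targetserver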
You could use a script similar to this (untested):
#!/bin/bash
# run ssh as a coprocess so its stdin stays available to this script
coproc ssh -L 8080:localhost:8080 user@server
# the tunnel is up; run the local command
./run-local-command
# ask the remote shell to exit, which closes the tunnel
echo exit >&${COPROC[1]}
wait