I have to connect to many server machines over SSH.
But if I don't use the terminal for a while, the connections get dropped, and then I have to close my terminal and log in again with ssh.
Are there any plugins that could help me in this case?
I think there is built-in functionality in ssh that solves this.
From man ssh_config:
ServerAliveInterval
Sets a timeout interval in seconds after which if no data has been received from the server, ssh(1) will send a message through the encrypted channel to request a response from the server. The default is 0, indicating that these messages will not be sent to the server. This option applies to protocol version 2 only.
By default, keep-alives are disabled, but you can enable them for a single connection by passing the ServerAliveInterval parameter with the -o option:
ssh -oServerAliveInterval=<time in seconds> <rest of your ssh command arguments>
If you want this configuration for all of your SSH connections, it's easier to put the following in your ~/.ssh/config:
Host *
ServerAliveInterval <time in seconds>
Furthermore, there is a second parameter affecting the keep-alive behaviour: ServerAliveCountMax (see man ssh_config).
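For example, a minimal ~/.ssh/config sketch combining both parameters (the values are only illustrative; adjust them to your environment):
Host *
ServerAliveInterval 60
ServerAliveCountMax 3
With these settings, ssh sends a keep-alive probe every 60 seconds and gives up after three unanswered probes, i.e. roughly three minutes after the connection has actually died.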
I've found a nice article about the ServerAlive parameters: How to Keep Alive SSH Sessions
I was wondering if there's a way to send files using SFTP to a remote machine through a jump server.
As you can see in the image below, first an SSH connection is needed, and after that an SFTP connection.
My main problem comes after the SSH connection: my workspace has changed and I cannot retrieve the necessary files to execute the SFTP successfully.
I've tried the following code:
ssh jump-server-user@ip-jump-server 'echo "put /source/files /remote/files" | sftp -v remote-machine-user@ip-remote-machine'
But it does not work.
I've tried executing a simple command like pwd over the SFTP connection and it works, so I think the problem here is how the workspace changes.
There is probably an easier solution, but I cannot use SSH on the jump server-to-remote machine connection, and I cannot store the local files on the jump server to send them to the remote machine later.
If you have a recent OpenSSH (at least 8.0) locally, you can use the -J (jump) switch:
sftp -J jump-server-user@ip-jump-server remote-machine-user@ip-remote-machine
With an older version (but at least 7.3), you can use the ProxyJump directive:
sftp -o ProxyJump=jump-server-user@ip-jump-server remote-machine-user@ip-remote-machine
There are other options like ProxyCommand or port forwarding, which you can use on even older versions of OpenSSH. These are covered in Does OpenSSH support multihop login?
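If you make this hop often, you could also persist the jump in your ~/.ssh/config; a sketch using the host and user names above (the Host alias is just an example):
Host ip-remote-machine
HostName ip-remote-machine
User remote-machine-user
ProxyJump jump-server-user@ip-jump-server
After that, a plain sftp ip-remote-machine should go through the jump server automatically.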
I often use an ssh tunnel. I open up one terminal to create the tunnel (e.g. ssh -L 1111:servera:2222 user@serverb). Then I open a new terminal to do my work. Is there a way to establish the tunnel in a terminal and somehow put it in the background so I don't need to open up a new terminal? I tried putting "&" at the end, but that didn't do the trick. The tunnel went into the background before I could enter the password. Then I did fg, entered the password and I was stuck in the ssh session.
I know one possible solution would be to use screen or tmux or something like that. Is there a simple solution I'm missing?
There are the -f and -N options exactly for that:
-f Requests ssh to go to background just before command execution. This is useful if
ssh is going to ask for passwords or passphrases, but the user wants it in the
background. This implies -n. The recommended way to start X11 programs at a remote
site is with something like ssh -f host xterm.
If the ExitOnForwardFailure configuration option is set to ``yes'', then a client
started with -f will wait for all remote port forwards to be successfully established
before placing itself in the background.
-N Do not execute a remote command. This is useful for just forwarding ports
(protocol version 2 only).
So the full command would be ssh -fNL 1111:servera:2222 user@serverb.
A way to prevent ssh from asking for the password would also be to use SSH public-key authentication, with an agent that either caches the passphrase or prompts for it using an external graphical program such as pinentry.
It might also be useful for you to look into autossh, which will reconnect your SSH automatically if the connection drops.
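A minimal autossh invocation for the same tunnel might look like this (a sketch; -M 0 disables autossh's own monitoring port and relies on ssh's keep-alives instead):
autossh -M 0 -o ServerAliveInterval=30 -o ServerAliveCountMax=3 -fNL 1111:servera:2222 user@serverb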
I have enabled two-factor authentication for ssh using Duo Security (using this playbook: https://github.com/CoffeeAndCode/ansible-duo).
How can I use Ansible to manage the server now? The SSH calls fail at the fact-gathering step because of this. I want the person running the playbook to enter the two-factor code before the playbook is run.
Disabling two-factor authentication for the deployment user is a possible solution, but it creates a security issue which I would like to avoid.
It's a hack, but you can tunnel a non-2fac Ansible SSH connection through a 2fac-enabled SSH connection.
Overview
We will set up two users: ansible will be the user Ansible uses. It should be authenticated in a way that's supported by Ansible (i.e., not 2fac). This user will be restricted so it cannot connect from anywhere but 127.0.0.1, so it is not accessible from outside the machine.
The second user, ansible_tunnel will be open to the outside world, but will be authenticated by two factors, and will only allow tunneling of SSH connections to the local machine.
You must be able to configure 2-factor authentication only for some users (not all).
Some info on SSH tunnels.
On the target machine:
Create two users: ansible and ansible_tunnel
Put your public key in ~/.ssh/authorized_keys of both users
Set the shell of ansible_tunnel to /bin/false, or lock the user - it will be used for tunneling exclusively, not running commands
Add the following to /etc/ssh/sshd_config:
AllowTcpForwarding no
AllowUsers ansible@127.0.0.1 ansible_tunnel
Match User ansible_tunnel
AllowTcpForwarding yes
PermitOpen 127.0.0.1:22
ForceCommand echo 'This account can only be used for tunneling SSH sessions'
Setup 2-factor authentication only for ansible_tunnel
Restart sshd
On the machine running Ansible:
Before running Ansible, run the following (on the Ansible machine, not the target):
ssh -N -L 8022:127.0.0.1:22 ansible_tunnel@<host>
You will be authenticated using two factors.
Once the tunnel is up (check with netstat), run Ansible with ansible_ssh_user=ansible, ansible_ssh_port=8022 and ansible_ssh_host=localhost.
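A hypothetical inventory entry for such a host could look like this (myserver is a placeholder; the variable names match the older Ansible syntax used above):
myserver ansible_ssh_host=localhost ansible_ssh_port=8022 ansible_ssh_user=ansible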
Recap
Only ansible_tunnel can connect from the outside, and it will be authenticated using two factors
Once the tunnel is set up, connecting to port 8022 on the local machine is the same as connecting to sshd on the remote machine
We're allowing ansible to connect over SSH only when it is done through the localhost, so only connections that are tunneled are allowed
Scale
This will not scale well for multiple servers, because a separate tunnel has to be opened for each machine, which requires manual action. However, if you've chosen two-factor authentication for your servers, you're already willing to do some manual work to connect to each server, and this solution only adds a little overhead with some script-wrapping.
[EDITED TO ADD]
Bonus
For convenience, we may want to log into the maintenance account directly to do some manual work, without going through the process of setting up a tunnel. We can configure SSH to require 2fac authentication in this case, while maintaining the ability to connect without 2fac through the tunnel:
# All users must authenticate using two factors
AuthenticationMethods publickey,keyboard-interactive
# Allow both maintenance user and tunnel user with no restrictions
AllowUsers ansible ansible_tunnel
# The maintenance user is allowed to authenticate using a single factor only
# when connecting from a local address - it should be impossible to connect to
# this user using a single factor from the outside (the only way to do that is
# having an existing access to the machine, or use the two-factor tunnel)
Match User ansible Address 127.0.0.1
AuthenticationMethods publickey
I can use ansible with ssh and 2FA using the ControlMaster feature of ssh and ansible.
My local ssh client is configured to create a ControlPath socket for connection multiplexing. Ansible is configured to use the same socket.
Local ssh client
This configuration enables multiplexing for all connections. I personally store this configuration in ~/.ssh/config:
Host *
ControlMaster auto
ControlPath ~/.ssh/master-%r@%h:%p.socket
ControlPersist 1m
When a connection is established, a socket appears in the $HOME/.ssh directory. This socket persists for one minute after disconnection.
Configure ansible
Ansible is configured to re-use the local socket.
Add this in your ansible configuration file (for instance, ~/.ansible.cfg):
[ssh_connection]
control_path=~/.ssh/master-%%r@%%h:%%p.socket
Note the double % for variable substitution.
Usage
Connect to your server using a regular ssh command (ssh user@server), and perform 2FA;
Launch your ansible command as usual.
Step 2 must be performed within the ControlPersist window; alternatively, keep an ssh connection open in one terminal while you launch the ansible command in another.
You can also force the connection to close when you no longer need it, using: ssh -O exit user@server.
Note that if you open a third terminal and run ssh user@server, you will not be asked for credentials: the connection established in step 1 will be re-used.
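To see whether a master connection is currently active for a given host, you can also ask ssh directly:
ssh -O check user@server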
Drawbacks
In case of bad network conditions
Sometimes, when you lose connectivity, the socket persists and every further connection hangs. You must manually close that connection, using ssh -O exit user@server. This is the only known drawback of this method.
References:
Ansible parameter ANSIBLE_SSH_CONTROL_PATH
About multiplexing ssh (a fairly old blog post that introduced me to ssh multiplexing): https://blog.scottlowe.org/2015/12/11/using-ssh-multiplexing/
Solution using a Bastion Host
Even using an ssh bastion host, it took me quite a while to get this working. In case it helps anyone else, here's what I came up with. It uses the ControlMaster ssh config options, and since ansible uses regular ssh, it can be configured to use the same ssh features and re-use the connection to the bastion host regardless of how many connections it opens to remote hosts. I've seen these Control options recommended in general (presumably for performance reasons if you have a lot of hosts), but not in the context of 2FA to a bastion host.
With this approach you don't need any special sshd config changes: keep AuthenticationMethods publickey,keyboard-interactive as the only authentication-method setting on the bastion server, and publickey only for all the other servers that you reach by proxying through the bastion. Since the bastion host is the only one that accepts external connections from the internet, it's the only one that requires 2FA; the internal hosts rely on agent forwarding for public-key authentication but don't use 2FA.
On the client, I created a new ssh config file for my ansible environment, called ssh.config, in the top-level directory that I run ansible from (so a sibling of ansible.cfg). It contains:
Host bastion-persistent-connection
HostName <bastion host>
ForwardAgent yes
IdentityFile ~/.ssh/my-key
ControlMaster auto
ControlPath ~/.ssh/ansible-%r@%h:%p
ControlPersist 10m
Host 10.0.*.*
ProxyCommand ssh -W %h:%p bastion-persistent-connection -F ./ssh.config
IdentityFile ~/.ssh/my-key
Then in ansible.cfg I have:
[ssh_connection]
ssh_args = -F ./ssh.config
A few things to note:
My private subnet in this case is 10.0.0.0/16 which maps to the host wildcard option above. The bastion proxies all ssh connections to servers on this subnet.
This is a bit brittle in that I can only run my ssh or ansible commands from this directory, because the ProxyCommand passes a relative path to this config file. Unfortunately, I don't think there's an ssh token that expands to the config file currently in use, which would let me pass the same file to the ProxyCommand automatically. Depending on your environment it might be better to use an absolute path here.
The one gotcha is that it makes running ansible more complex. From what I can tell, ansible has no support whatsoever for 2FA. So if you have no existing ssh connection to the bastion, ansible will print Verification code: once for every private server it's connecting to, but it's not actually listening for the input, so no matter what you do the connections will fail.
So I first run: ssh -F ssh.config bastion-persistent-connection
This creates the socket file in ~/.ssh/ansible-*, and the local ssh master process will close and remove that socket after the configured ControlPersist time (which I have set to 10m).
Once the socket is open I can run ansible commands like normal, e.g. ansible all -m ping and they succeed.
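If you want to skip the manual first step, a small wrapper could check for an existing master and open one only when needed; a sketch, assuming the file layout above and that ansible is on your PATH:
#!/bin/sh
# reuse an existing bastion master if there is one, otherwise open it (this prompts for 2FA)
ssh -F ./ssh.config -O check bastion-persistent-connection 2>/dev/null ||
    ssh -F ./ssh.config -fN bastion-persistent-connection
# then run whatever ansible command was passed to the wrapper
exec ansible "$@"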
I'm creating a small script to update some remote servers (2+)
I am making multiple connects to each server; is there a way I can reuse the SSH connections so I don't have to open too many at once?
If you open the first connection with -M:
ssh -M $REMOTEHOST
subsequent connections to $REMOTEHOST will "piggyback" on the connection established by the master ssh. Most noticeably, further authentication is not required. See man ssh_config under "ControlMaster" for more details. Use -S to specify the path to the shared socket; I'm not sure what the default is, because I configure connection sharing using the configuration file instead.
In my .ssh/config file, I have the following lines:
host *
ControlMaster auto
ControlPath ~/.ssh/ssh_mux_%h_%p_%r
This way, I don't have to remember to use -M or -S; ssh figures out if a sharable connection already exists for the host/port/username combination and uses that if possible.
This option is available in OpenSSH since 2004.
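For a script, you can also manage the master explicitly instead of through the config file; a sketch, where user@remotehost and update.sh are placeholders:
# open one master connection in the background (authenticates once)
ssh -fN -M -S ~/.ssh/ctl-%r@%h:%p user@remotehost
# subsequent commands reuse the master without re-authenticating
ssh -S ~/.ssh/ctl-%r@%h:%p user@remotehost uptime
scp -o ControlPath=~/.ssh/ctl-%r@%h:%p update.sh user@remotehost:/tmp/
# close the master when the script is done
ssh -S ~/.ssh/ctl-%r@%h:%p -O exit user@remotehost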
I prefer the method described at Puppet Labs https://puppetlabs.com/blog/speed-up-ssh-by-reusing-connections
Add these lines to ~/.ssh/config and run mkdir ~/.ssh/sockets
Host *
ControlMaster auto
ControlPath ~/.ssh/sockets/%r@%h-%p
ControlPersist 600
Read the full blog post for more useful information about what these do and the idiosyncrasies of ssh when used like this. I highly recommend reading the blog or you may find things don't work as you expect.
Alternatively, you can do it this way:
$ssh_conn="ssh -t -o ControlPath=~/.ssh/master-$$ -o ControlMaster=auto -o ControlPersist=60"
$ssh_conn user#server
ControlPath=~/.ssh/master-$$ sets up a control path for the ssh connection, limiting connection reuse to the current shell (via the $$ PID)
ControlMaster=auto allows the connection session to be shared using the ControlPath
ControlPersist=60 keeps the master connection open for 60 seconds after the last session has closed
For modern-distro setups that have a /run/user/$UID/ for just-this-boot runtime stuff,
controlmaster auto
controlpath /run/user/%i/ssh-%C
controlpersist 900
at the top of the config (where no match or host restrictions are in effect) will make all ssh sessions that share host, port, and remote username use a single connection. I keep addkeystoagent yes and identityfile ~/.ssh/id_ed25519 up there too, so ssh doesn't offer all my keys to every host.
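Putting it together, the top of such a config might look like this (exactly the directives described above):
controlmaster auto
controlpath /run/user/%i/ssh-%C
controlpersist 900
addkeystoagent yes
identityfile ~/.ssh/id_ed25519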
When connecting to remote hosts via ssh, I frequently want to bring a file on that system over to the local system for viewing or processing. Is there a way to copy the file over (a) without opening a new terminal or pausing the ssh session, (b) without authenticating again to either the local or remote host, and (c) that works even when one or both of the hosts are behind a NAT router?
The goal is to take advantage of as much of the current state as possible: that there is a connection between the two machines, that I'm authenticated on both, and that I'm in the working directory of the file, so I don't have to open another terminal and copy and paste the remote host and path, which is what I do now. The best solution also wouldn't require any setup before the session began, but if the setup is one-time or can be automated, that's perfectly acceptable.
zssh (a ZMODEM wrapper over openssh) does exactly what you want.
Install zssh and use it instead of openssh (which I assume you normally use).
You'll have to have the lrzsz package installed on both systems.
Then, to transfer a file zyxel.png from remote to local host:
antti@local:~$ zssh remote
Press ^# (C-Space) to enter file transfer mode, then ? for help
...
antti@remote:~$ sz zyxel.png
**B00000000000000
^#
zssh > rz
Receiving: zyxel.png
Bytes received: 104036/ 104036 BPS:16059729
Transfer complete
antti@remote:~$
Uploading goes similarly, except that you just switch rz(1) and sz(1).
Putty users can try Le Putty, which has similar functionality.
On a Linux box I use ssh-agent and sshfs. You need to set up sshd to accept connections with key pairs. Then you use ssh-add to add your key to the ssh-agent so you don't have to type your password every time. Be sure to use -t seconds, so the key doesn't stay loaded forever.
ssh-add -t 3600 /home/user/.ssh/ssh_dsa
After that,
sshfs hostname:/ /PathToMountTo/
will mount the server file system on your machine so you have access to it.
Personally, I wrote a small bash script that adds my key and mounts the servers I use the most, so when I start to work I just have to launch the script and type my passphrase.
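Such a script might look roughly like this (a sketch; the key path is the one from above, while the mount points and server names are placeholders for your own):
#!/bin/bash
# load the key for one hour, then mount the usual servers
ssh-add -t 3600 /home/user/.ssh/ssh_dsa
mkdir -p ~/mnt/server1 ~/mnt/server2
sshfs server1:/ ~/mnt/server1
sshfs server2:/ ~/mnt/server2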
Using some little known and rarely used features of the openssh
implementation you can accomplish precisely what you want!
takes advantage of the current state
can use the working directory where you are
does not require any tunneling setup before the session begins
does not require opening a separate terminal or connection
can be used as a one-time deal in an interactive session or can be used as part of an automated session
You should only type what is at each of the local>, remote>, and
ssh> prompts in the examples below.
local> ssh username@remote
remote> ~C
ssh> -L6666:localhost:6666
remote> nc -l 6666 < /etc/passwd
remote> ~^Z
[suspend ssh]
[1]+ Stopped ssh username@remote
local> (sleep 1; nc localhost 6666 > /tmp/file) & fg
[2] 17357
ssh username@remote
remote> exit
[2]- Done ( sleep 1; nc localhost 6666 > /tmp/file )
local> cat /tmp/file
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
...
Or, more often you want to go the other direction, for example if you
want to do something like transfer your ~/.ssh/id_rsa.pub file from
your local machine to the ~/.ssh/authorized_keys file of the remote
machine.
local> ssh username@remote
remote> ~C
ssh> -R5555:localhost:5555
remote> ~^Z
[suspend ssh]
[1]+ Stopped ssh username@remote
local> nc -l 5555 < ~/.ssh/id_rsa.pub &
[2] 26607
local> fg
ssh username@remote
remote> nc localhost 5555 >> ~/.ssh/authorized_keys
remote> cat ~/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2ZQQQQBIwAAAQEAsgaVp8mnWVvpGKhfgwHTuOObyfYSe8iFvksH6BGWfMgy8poM2+5sTL6FHI7k0MXmfd7p4rzOL2R4q9yjG+Hl2PShjkjAVb32Ss5ZZ3BxHpk30+0HackAHVqPEJERvZvqC3W2s4aKU7ae4WaG1OqZHI1dGiJPJ1IgFF5bWbQl8CP9kZNAHg0NJZUCnJ73udZRYEWm5MEdTIz0+Q5tClzxvXtV4lZBo36Jo4vijKVEJ06MZu+e2WnCOqsfdayY7laiT0t/UsulLNJ1wT+Euejl+3Vft7N1/nWptJn3c4y83c4oHIrsLDTIiVvPjAj5JTkyH1EA2pIOxsKOjmg2Maz7Pw== username@local
A little bit of explanation is in order.
The first step is to open a LocalForward; if you don't already have
one established then you can use the ~C escape character to open an
ssh command line which will give you the following commands:
remote> ~C
ssh> help
Commands:
-L[bind_address:]port:host:hostport Request local forward
-R[bind_address:]port:host:hostport Request remote forward
-D[bind_address:]port Request dynamic forward
-KR[bind_address:]port Cancel remote forward
In this example I establish a LocalForward on port 6666 of localhost
for both the client and the server; the port number can be any
arbitrary open port.
The nc command is from the netcat package; it is described as the
"TCP/IP swiss army knife"; it is a simple, yet very flexible and
useful program. Make it a standard part of your unix toolbelt.
At this point nc is listening on port 6666 and waiting for another
program to connect to that port so it can send the contents of
/etc/passwd.
Next we make use of another escape character ~^Z which is tilde
followed by control-Z. This temporarily suspends the ssh process and
drops us back into our shell.
Once back on the local system you can use nc to connect to the
forwarded port 6666. Note the lack of a -l in this case because that
option tells nc to listen on a port as if it were a server which is
not what we want; instead we want to just use nc as a client to
connect to the already listening nc on the remote side.
The rest of the magic around the nc command is required because if
you recall above I said that the ssh process was temporarily
suspended, so the & will put the whole (sleep + nc) expression
into the background and the sleep gives you enough time for ssh to
return to the foreground with fg.
In the second example the idea is basically the same except we set up
a tunnel going the other direction using -R instead of -L so that
we establish a RemoteForward. And then on the local side is where
you want to use the -l argument to nc.
The escape character by default is ~ but you can change that with:
-e escape_char
Sets the escape character for sessions with a pty (default: ‘~’). The escape character is only recognized at the beginning of a line. The escape character followed by a dot
(‘.’) closes the connection; followed by control-Z suspends the connection; and followed by itself sends the escape character once. Setting the character to “none” disables any
escapes and makes the session fully transparent.
A full explanation of the commands available with the escape characters is available in the ssh manpage
ESCAPE CHARACTERS
When a pseudo-terminal has been requested, ssh supports a number of functions through the use of an escape character.
A single tilde character can be sent as ~~ or by following the tilde by a character other than those described below. The escape character must always follow a newline to be interpreted
as special. The escape character can be changed in configuration files using the EscapeChar configuration directive or on the command line by the -e option.
The supported escapes (assuming the default ‘~’) are:
~. Disconnect.
~^Z Background ssh.
~# List forwarded connections.
~& Background ssh at logout when waiting for forwarded connection / X11 sessions to terminate.
~? Display a list of escape characters.
~B Send a BREAK to the remote system (only useful for SSH protocol version 2 and if the peer supports it).
~C Open command line. Currently this allows the addition of port forwardings using the -L, -R and -D options (see above). It also allows the cancellation of existing
remote port-forwardings using -KR[bind_address:]port. !command allows the user to execute a local command if the PermitLocalCommand option is enabled in ssh_config(5).
Basic help is available, using the -h option.
~R Request rekeying of the connection (only useful for SSH protocol version 2 and if the peer supports it).
Using ControlMaster (the -M switch) is the best solution, way simpler and easier than the rest of the answers here. It allows you to share a single connection among multiple sessions. Sounds like it does what the poster wants. You still have to type the scp or sftp command line though. Try it. I use it for all of my sshing.
In order to do this I have my home router set up to forward port 22 back to my home machine (which is firewalled to only accept ssh connections from my work machine) and I also have an account set up with DynDNS to provide Dynamic DNS that will resolve to my home IP automatically.
Then when I ssh into my work computer, the first thing I do is run a script that starts an ssh-agent (if your server doesn't do that automatically). The script I run is:
#!/bin/bash
ssh-agent sh -c 'ssh-add < /dev/null && bash'
It asks for my ssh key passphrase so that I don't have to type it in every time. You don't need that step if you use an ssh key without a passphrase.
For the rest of the session, sending files back to your home machine is as simple as
scp file_to_send.txt your.domain.name:~/
Here is a hack called ssh-xfer which addresses the exact problem, but requires patching OpenSSH, which is a nonstarter as far as I'm concerned.
Here is my preferred solution to this problem. Set up a reverse ssh tunnel upon creating the ssh session. This is made easy by two bash functions: grabfrom() needs to be defined on the local host, while grab() should be defined on the remote host. You can add any other ssh options you use (e.g. -X or -Y) as you see fit.
function grabfrom() { ssh -R 2202:127.0.0.1:22 "$@"; };
function grab() { scp -P 2202 "$@" localuser@127.0.0.1:~; };
Usage:
localhost% grabfrom remoteuser@remotehost
password: <remote password goes here>
remotehost% grab somefile1 somefile2 *.txt
password: <local password goes here>
Positives:
It works without special software on either host beyond OpenSSH
It works when local host is behind a NAT router
It can be implemented as a pair of one-line bash functions
Negatives:
It uses a fixed port number so:
won't work with multiple connections to remote host
might conflict with a process using that port on the remote host
It requires that the local host accept ssh connections
It requires a special command when initiating the session
It doesn't implicitly handle authentication to the localhost
It doesn't allow one to specify the destination directory on localhost
If you grab from multiple localhosts to the same remote host, ssh won't like the keys changing
Future work:
This is still pretty kludgy. Obviously, it would be possible to handle the authentication issue by setting up ssh keys appropriately, and it's even easier to allow specifying a destination directory by adding a parameter to grab(), as sketched below.
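A hedged sketch of such a variant, where the destination directory on the local host becomes the first argument (localuser and the port are the same assumptions as above):
function grab() { local dest="$1"; shift; scp -P 2202 "$@" localuser@127.0.0.1:"$dest"; };
Usage would then be grab /tmp somefile1 somefile2, with the caveat that the directory must already exist on the local host.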
More difficult is addressing the other negatives. It would be nice to pick a dynamic port, but as far as I can tell there is no elegant way to pass that port to the shell on the remote host; as best as I can tell, OpenSSH doesn't allow you to set arbitrary environment variables on the remote host, and bash can't take environment variables from a command line argument. Even if you could pick a dynamic port, there is no way to ensure it isn't already in use on the remote host without connecting first.
You can use the SCP protocol for transferring a file. You can refer to this link:
http://tekheez.biz/scp-protocol-in-unix/
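For example, to pull a file from the remote host into the current local directory (the host and path are placeholders):
scp user@remotehost:/path/to/file .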
You can also expose your files over HTTP and download them from another server; you can achieve this using the ZSSH Python library.
ZSSH - ZIP over SSH (Simple Python script to exchange files between servers).
Install it using pip:
python3 -m pip install zssh
Run this command from your remote server.
python3 -m zssh -as --path /desktop/path_to_expose
It will give you a URL to execute from another server.
On the local system, or on another server where you need to download and extract those files, run:
python3 -m zssh -ad --path /desktop/path_to_download --zip http://example.com/temp_file.zip
For more about this library: https://pypi.org/project/zssh/
You should be able to set up public & private keys so that no auth is needed.
Which way you do it depends on security requirements, etc (be aware that there are linux/unix ssh worms which will look at keys to find other hosts they can attack).
I do this all the time from behind both linksys and dlink routers. I think you may need to change a couple of settings but it's not a big deal.
Use the -M switch.
"Places the ssh client into 'master' mode for connection shar-ing. Multiple -M options places ssh into ``master'' mode with confirmation required before slave connections are accepted. Refer to the description of ControlMaster in ssh_config(5) for details."
I don't quite see how that answers the OP's question - can you expand on this a bit, David?