Simplifying SCP between three nodes, partly via reverse tunnel (ssh)

I am copying a file from a smartphone to a Private Cloud Server, and from there to a Debian Server (on premises at home).
The Debian Server has a permanent reverse tunnel to the Private Cloud Server, established by autossh.
The file copy works fine when using the following two command steps:
Copy photo from Smartphone to Private Cloud Server:
scp -i /home/.ssh/id_rsa_1 -P 5022 /storage/DCIM/Camera/20201128_212840.jpg user_1@private_cloud.com:/tmp/
Log in at the Private Cloud Server and copy from there to the Debian Server on premises:
ssh -i /home/.ssh/id_rsa_1 -p 5022 user_1@private_cloud.com 'scp -i /home/user_2/.ssh/id_rsa_2 -P 6022 /tmp/20201128_212840.jpg user_2@localhost:/home/user_2/tmp/'
This two-step method (two command lines) is too time-consuming, so I am now looking for a way to do the copy in "one strike" (a single command line); the time needed can probably be reduced dramatically if one ssh login at the Private Cloud Server is eliminated.
Any help is appreciated.
Best regards
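A possible single-command approach (an untested sketch: it assumes the phone's scp is OpenSSH 5.4 or newer, for the -W stdio-forwarding flag, and that a copy of id_rsa_2 is available on the phone, since both hops would then authenticate from the phone):

scp -o 'ProxyCommand=ssh -i /home/.ssh/id_rsa_1 -p 5022 -W %h:%p user_1@private_cloud.com' -i /home/.ssh/id_rsa_2 -P 6022 /storage/DCIM/Camera/20201128_212840.jpg user_2@localhost:/home/user_2/tmp/

Here %h:%p expands to localhost:6022, which is resolved on the Private Cloud Server and so hits the reverse tunnel endpoint leading to the Debian Server.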

Related

Is there a way to make an SFTP connection to a remote machine through a jump server to transfer files?

I was wondering if there's a way to send files using SFTP to a remote machine through a jump server.
As the diagram in the original post showed, first an SSH connection is needed and after that an SFTP connection.
My main problem comes after the SSH connection: my workspace has changed, so I cannot retrieve the files necessary to execute the SFTP successfully.
I've tried the following code:
ssh jump-server-user@ip-jump-server 'echo "put /source/files /remote/files" | sftp -v remote-machine-user@ip-remote-machine'
But it does not work.
I've tried executing a simple command like pwd over the SFTP connection and it works, so I think the problem is how the workspace changes.
There is probably an easier solution, but I cannot use SSH on the jump-server-to-remote-machine connection, and I cannot store the local files on the jump server to send them on later.
If you have a recent OpenSSH (at least 8.0) locally, you can use the -J (jump) switch:
sftp -J jump-server-user@ip-jump-server remote-machine-user@ip-remote-machine
With an older version (but at least 7.3), you can use the ProxyJump directive:
sftp -o ProxyJump=jump-server-user@ip-jump-server remote-machine-user@ip-remote-machine
There are other options like ProxyCommand or port forwarding, which you can use on even older versions of OpenSSH. These are covered in Does OpenSSH support multihop login?
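For instance, a ProxyCommand equivalent might look like this (a sketch, assuming OpenSSH 5.4+ for the -W stdio-forwarding flag):

sftp -o ProxyCommand="ssh -W %h:%p jump-server-user@ip-jump-server" remote-machine-user@ip-remote-machine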

Best way to copy files from Docker volume on remote server to local host?

I've got:
My laptop
A remote server I can SSH into, which has a Docker volume containing some files I'd like to copy to my laptop.
What is the best way to copy these files over? Bonus points for using things like rsync etc., which are fast, can resume, and show progress, without writing any temporary files.
Note: my user on the remote server does not have permission to just scp the data straight out of the volume mount in /var/lib/docker, although I can run any containers on there.
Having this problem, I created dvsync, which uses ngrok to establish a tunnel that rsync then uses to copy data even if the machine is in a private VPC. To use it, you first start the dvsync-server locally, pointing it at the source directory:
$ docker run --rm -e NGROK_AUTHTOKEN="$NGROK_AUTHTOKEN" \
--mount source=MY_DIRECTORY,target=/data,readonly \
quay.io/suda/dvsync-server
Note: you need the NGROK_AUTHTOKEN, which can be obtained from the ngrok dashboard. Then start the dvsync-client on the target machine:
docker run -e DVSYNC_TOKEN="$DVSYNC_TOKEN" \
--mount source=MY_TARGET_VOLUME,target=/data \
quay.io/suda/dvsync-client
The DVSYNC_TOKEN can be found in the dvsync-server output; it's a base64-encoded private key plus tunnel info. Once the data has been copied, the client will exit.
I'm not sure about the best way of doing so, but if I were you I would run a container sharing the same volume (in read-only -- as it seems you just want to download the files within the volume) and download these from it.
This container could be running rsync as you wish.
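A minimal sketch of that idea (MY_VOLUME, the alpine image, and the server name are placeholders): a throwaway container mounts the volume read-only and streams it as a tar archive over the existing SSH connection, with no temporary files on either side:

mkdir -p ./local-copy
ssh user@remote-server 'docker run --rm -v MY_VOLUME:/data:ro alpine tar -C /data -cf - .' | tar -xf - -C ./local-copy

Unlike rsync this cannot resume, but piping through pv would at least show progress.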

Multiple reverse ssh to provision Vagrant with Ansible

I have to provision a bunch of development vagrantboxes installed on different physical computers (OS X, Windows, Ubuntu) with Ansible. Since all ansible playbooks/roles/templates are unified (prod, dev) and stored in git, using ansible-pull is not an option: it would mean shipping the prod configs and the vault that stores the real passwords. So the idea is to make every vagrantbox create a reverse ssh tunnel to some server, where ansible-playbook will be applied to a range of ports.
The question is: how to pick a free port from vagrantbox so I don't have to hardcode numbers to each VM created by developers?
Another question: is there any other, less complicated way to provision vagrant VMs running on different OSes?
I found out that ssh automatically picks a free port if 0 is set as a port number. So running
ssh -N -f -R 0:localhost:22 user@middle-server
on my vagrantbox establishes a connection to the middle server, which redirects localhost:<picked port> there to vagrant:22, and I can then apply ansible-playbook to a range of ports on the middle server without copying playbooks to the vagrant machines.
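A sketch of how the picked port could be captured on the vagrantbox (the OpenSSH client prints an "Allocated port N for remote forward" line to stderr; user and middle-server are placeholders):

ssh -f -N -o ExitOnForwardFailure=yes -R 0:localhost:22 user@middle-server 2> /tmp/tunnel.log
sleep 1
PORT=$(grep -oE 'Allocated port [0-9]+' /tmp/tunnel.log | awk '{print $3}')
echo "middle-server is forwarding port $PORT to this box's sshd"

The sleep is a crude guard against reading the log before ssh has written the line; a robust script would poll instead.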

ssh tunnel to a computer and create another tunnel a third server

I need to do some work on a server to which I don't have direct access. I do have access to my company network (via VPN). If I were on that network, I could access the server directly. But, for some reason, when I'm on the VPN I can't access the server directly.
So, I need to ssh into an intermediary ubuntu box, and then create an ssh tunnel from that box to the server.
Then, I can do my work on my laptop and send it through a local tunnel that points to a foreign tunnel (on my ubuntu box) that goes to the server.
But I don't know how to do a tunnel that creates another tunnel to a third server.
Any ideas?
Thanks,
Scott
What are you trying to achieve? If you just want to get to a shell on the server then ssh into the Ubuntu box and then ssh from there to the server.
If you want to access some other network resource on the server then you want to forward a port from the server (where you can't get to it) to the Ubuntu box (where you can). Take a look at the -L option in ssh.
Edit:
Copying files to the server:
tar c path/* | ssh ubuntuName 'ssh serverName "tar x"'
Copying stuff back:
ssh ubuntuName 'ssh serverName "tar c path/*"' | tar x
Obviously you need to change ubuntuName, serverName and path/* to what you want. To use rsync you need its -e option and the same trick of wrapping one ssh command inside another. After reading your comment, I'd say the most general answer to your question is that the trick is making ssh execute a command on the target machine. You do this by specifying the command as an argument after the machine name. If you use ssh itself as the command for ssh to execute, you get the two-hop behaviour you are looking for. Then it is just a matter of playing with quotes until everything is escaped correctly.
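For example, the rsync equivalent of the wrapped-ssh trick might look like this (a sketch; it assumes the Ubuntu box can authenticate to the server, e.g. via agent forwarding with -A):

rsync -av -e "ssh -A ubuntuName ssh" serverName:path/ ./local-copy/

rsync runs the string given to -e and appends the host and the remote rsync command, so the transfer is tunneled through ubuntuName to serverName.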
It's just a double port forward. Forward the ports from the PC to the ubuntu box, then on the ubuntu box forward those destination ports to the final endpoint. It's been a while since I've done command line ssh (been trapped in windows hell :)), so I can't give the command line you need. Another possibility is to use the SOCKS proxy ability built into SSH.
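A sketch of that double forward, with placeholder names and the same ports the next answer uses:

On the laptop: ssh -L 8080:localhost:8080 user@ubuntu-box
In that session, on the ubuntu box: ssh -N -L 8080:localhost:8082 user@server

With both in place, localhost:8080 on the laptop reaches port 8082 on the server, hopping through the ubuntu box.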
To connect from your local machine over a second machine to a specific port on the third machine you can use the ssh -N -L option:
ssh -N second_machine -L 8080:third_machine:8082
This maps port 8082 on the third machine to port 8080 on the local machine (e.g. http://localhost:8080/).

Which is the best way to bring a file from a remote host to local host over an SSH session?

When connecting to remote hosts via ssh, I frequently want to bring a file on that system to the local system for viewing or processing. Is there a way to copy the file over without (a) opening a new terminal/pausing the ssh session, (b) authenticating again to either the local or remote hosts, which (c) works even when one or both of the hosts is behind a NAT router?
The goal is to take advantage of as much of the current state as possible: that there is a connection between the two machines, that I'm authenticated on both, and that I'm in the working directory of the file -- so I don't have to open another terminal and copy and paste the remote host and path in, which is what I do now. The best solution also wouldn't require any setup before the session began, but if the setup were one-time or could be automated, that's perfectly acceptable.
zssh (a ZMODEM wrapper over openssh) does exactly what you want.
Install zssh and use it instead of openssh (which I assume you normally use).
You'll have to have the lrzsz package installed on both systems.
Then, to transfer a file zyxel.png from remote to local host:
antti@local:~$ zssh remote
Press ^@ (C-Space) to enter file transfer mode, then ? for help
...
antti@remote:~$ sz zyxel.png
**B00000000000000
^@
zssh > rz
Receiving: zyxel.png
Bytes received: 104036/ 104036 BPS:16059729
Transfer complete
antti@remote:~$
Uploading goes similarly, except that you just switch rz(1) and sz(1).
Putty users can try Le Putty, which has similar functionality.
On a Linux box I use ssh-agent and sshfs. You need to set up sshd to accept connections with key pairs. Then use ssh-add to add your key to the ssh-agent so you don't have to type your password every time. Be sure to use -t seconds, so the key doesn't stay loaded forever.
ssh-add -t 3600 /home/user/.ssh/ssh_dsa
After that,
sshfs hostname:/ /PathToMountTo/
will mount the server file system on your machine so you have access to it.
Personally, I wrote a small bash script that adds my key and mounts the servers I use the most, so when I start to work I just have to launch the script and type my passphrase.
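A sketch of such a script (the server names and mount points are placeholders, not the author's):

#!/bin/bash
# load the key for one hour, then mount the usual servers
ssh-add -t 3600 /home/user/.ssh/ssh_dsa
mkdir -p ~/mnt/server1 ~/mnt/server2
sshfs server1:/ ~/mnt/server1
sshfs server2:/ ~/mnt/server2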
Using some little-known and rarely used features of the OpenSSH implementation, you can accomplish precisely what you want! This approach:
takes advantage of the current state
can use the working directory where you are
does not require any tunneling setup before the session begins
does not require opening a separate terminal or connection
can be used as a one-time deal in an interactive session or can be used as part of an automated session
You should only type what is at each of the local>, remote>, and
ssh> prompts in the examples below.
local> ssh username@remote
remote> ~C
ssh> -L6666:localhost:6666
remote> nc -l 6666 < /etc/passwd
remote> ~^Z
[suspend ssh]
[1]+ Stopped ssh username@remote
local> (sleep 1; nc localhost 6666 > /tmp/file) & fg
[2] 17357
ssh username@remote
remote> exit
[2]- Done ( sleep 1; nc localhost 6666 > /tmp/file )
local> cat /tmp/file
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
...
Or, more often you want to go the other direction, for example if you
want to do something like transfer your ~/.ssh/id_rsa.pub file from
your local machine to the ~/.ssh/authorized_keys file of the remote
machine.
local> ssh username@remote
remote> ~C
ssh> -R5555:localhost:5555
remote> ~^Z
[suspend ssh]
[1]+ Stopped ssh username@remote
local> nc -l 5555 < ~/.ssh/id_rsa.pub &
[2] 26607
local> fg
ssh username@remote
remote> nc localhost 5555 >> ~/.ssh/authorized_keys
remote> cat ~/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2ZQQQQBIwAAAQEAsgaVp8mnWVvpGKhfgwHTuOObyfYSe8iFvksH6BGWfMgy8poM2+5sTL6FHI7k0MXmfd7p4rzOL2R4q9yjG+Hl2PShjkjAVb32Ss5ZZ3BxHpk30+0HackAHVqPEJERvZvqC3W2s4aKU7ae4WaG1OqZHI1dGiJPJ1IgFF5bWbQl8CP9kZNAHg0NJZUCnJ73udZRYEWm5MEdTIz0+Q5tClzxvXtV4lZBo36Jo4vijKVEJ06MZu+e2WnCOqsfdayY7laiT0t/UsulLNJ1wT+Euejl+3Vft7N1/nWptJn3c4y83c4oHIrsLDTIiVvPjAj5JTkyH1EA2pIOxsKOjmg2Maz7Pw== username@local
A little bit of explanation is in order.
The first step is to open a LocalForward; if you don't already have
one established then you can use the ~C escape character to open an
ssh command line which will give you the following commands:
remote> ~C
ssh> help
Commands:
-L[bind_address:]port:host:hostport Request local forward
-R[bind_address:]port:host:hostport Request remote forward
-D[bind_address:]port Request dynamic forward
-KR[bind_address:]port Cancel remote forward
In this example I establish a LocalForward on port 6666 of localhost
for both the client and the server; the port number can be any
arbitrary open port.
The nc command is from the netcat package; it is described as the
"TCP/IP swiss army knife"; it is a simple, yet very flexible and
useful program. Make it a standard part of your unix toolbelt.
At this point nc is listening on port 6666 and waiting for another
program to connect to that port so it can send the contents of
/etc/passwd.
Next we make use of another escape character ~^Z which is tilde
followed by control-Z. This temporarily suspends the ssh process and
drops us back into our shell.
Once back on the local system you can use nc to connect to the
forwarded port 6666. Note the lack of a -l in this case because that
option tells nc to listen on a port as if it were a server which is
not what we want; instead we want to just use nc as a client to
connect to the already listening nc on the remote side.
The rest of the magic around the nc command is required because, if you recall, the ssh process was temporarily suspended; the & puts the whole (sleep + nc) expression into the background, and the sleep gives ssh enough time to return to the foreground with fg.
In the second example the idea is basically the same except we set up
a tunnel going the other direction using -R instead of -L so that
we establish a RemoteForward. And then on the local side is where
you want to use the -l argument to nc.
The escape character by default is ~ but you can change that with:
-e escape_char
Sets the escape character for sessions with a pty (default: ‘~’). The escape character is only recognized at the beginning of a line. The escape character followed by a dot (‘.’) closes the connection; followed by control-Z suspends the connection; and followed by itself sends the escape character once. Setting the character to “none” disables any escapes and makes the session fully transparent.
A full explanation of the commands available with the escape characters is available in the ssh manpage
ESCAPE CHARACTERS
When a pseudo-terminal has been requested, ssh supports a number of functions through the use of an escape character.
A single tilde character can be sent as ~~ or by following the tilde by a character other than those described below. The escape character must always follow a newline to be interpreted
as special. The escape character can be changed in configuration files using the EscapeChar configuration directive or on the command line by the -e option.
The supported escapes (assuming the default ‘~’) are:
~. Disconnect.
~^Z Background ssh.
~# List forwarded connections.
~& Background ssh at logout when waiting for forwarded connection / X11 sessions to terminate.
~? Display a list of escape characters.
~B Send a BREAK to the remote system (only useful for SSH protocol version 2 and if the peer supports it).
~C Open command line. Currently this allows the addition of port forwardings using the -L, -R and -D options (see above). It also allows the cancellation of existing remote port-forwardings using -KR[bind_address:]port. !command allows the user to execute a local command if the PermitLocalCommand option is enabled in ssh_config(5). Basic help is available, using the -h option.
~R Request rekeying of the connection (only useful for SSH protocol version 2 and if the peer supports it).
Using ControlMaster (the -M switch) is the best solution, way simpler and easier than the rest of the answers here. It allows you to share a single connection among multiple sessions. Sounds like it does what the poster wants. You still have to type the scp or sftp command line though. Try it. I use it for all of my sshing.
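A minimal ~/.ssh/config sketch for connection sharing (the ControlPath pattern is one common choice; ControlPersist needs OpenSSH 5.6+):

Host *
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m

With that in place, an scp or sftp started while the interactive session is open reuses its connection and skips authentication:

scp user@remote:/path/to/file .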
In order to do this I have my home router set up to forward port 22 back to my home machine (which is firewalled to only accept ssh connections from my work machine) and I also have an account set up with DynDNS to provide Dynamic DNS that will resolve to my home IP automatically.
Then when I ssh into my work computer, the first thing I do is run a script that starts an ssh-agent (if your server doesn't do that automatically). The script I run is:
#!/bin/bash
ssh-agent sh -c 'ssh-add < /dev/null && bash'
It asks for my ssh key passphrase so that I don't have to type it in every time. You don't need that step if you use an ssh key without a passphrase.
For the rest of the session, sending files back to your home machine is as simple as
scp file_to_send.txt your.domain.name:~/
Here is a hack called ssh-xfer which addresses the exact problem, but requires patching OpenSSH, which is a nonstarter as far as I'm concerned.
Here is my preferred solution to this problem. Set up a reverse ssh tunnel upon creating the ssh session. This is made easy by two bash functions: grabfrom() needs to be defined on the local host, while grab() should be defined on the remote host. You can add any other ssh variables you use (e.g. -X or -Y) as you see fit.
function grabfrom() { ssh -R 2202:127.0.0.1:22 ${@}; };
function grab() { scp -P 2202 $@ localuser@127.0.0.1:~; };
Usage:
localhost% grabfrom remoteuser@remotehost
password: <remote password goes here>
remotehost% grab somefile1 somefile2 *.txt
password: <local password goes here>
Positives:
It works without special software on either host beyond OpenSSH
It works when local host is behind a NAT router
It can be implemented as a pair of one-line bash functions
Negatives:
It uses a fixed port number so:
won't work with multiple connections to remote host
might conflict with a process using that port on the remote host
It requires localhost accept ssh connections
It requires a special command when initiating the session
It doesn't implicitly handle authentication to the localhost
It doesn't allow one to specify the destination directory on localhost
If you grab from multiple localhosts to the same remote host, ssh won't like the keys changing
Future work:
This is still pretty kludgy. Obviously, it would be possible to handle the authentication issue by setting up ssh keys appropriately, and it's even easier to allow the specification of a remote directory by adding a parameter to grab() (see the sketch below).
More difficult is addressing the other negatives. It would be nice to pick a dynamic port but as far as I can tell there is no elegant way to pass that port to the shell on the remote host; As best as I can tell, OpenSSH doesn't allow you to set arbitrary environment variables on the remote host and bash can't take environment variables from a command line argument. Even if you could pick a dynamic port, there is no way to ensure it isn't used on the remote host without connecting first.
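For what it's worth, the destination-directory extension suggested above might look like this (a sketch; the first argument becomes the local target directory):

function grab() { local dest="$1"; shift; scp -P 2202 "$@" localuser@127.0.0.1:"$dest"; };

Usage: grab tmp/photos somefile1 somefile2 *.txt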
You can use the SCP protocol for transferring a file. You can refer to this link:
http://tekheez.biz/scp-protocol-in-unix/
The best way: expose your files over HTTP and download them from the other server. You can achieve this using the ZSSH Python library.
ZSSH - ZIP over SSH (Simple Python script to exchange files between servers).
Install it using pip.
python3 -m pip install zssh
Run this command from your remote server.
python3 -m zssh -as --path /desktop/path_to_expose
It will give you a URL to use from the other server.
On the local system (or another server) where you need those files, download and extract them:
python3 -m zssh -ad --path /desktop/path_to_download --zip http://example.com/temp_file.zip
For more about this library: https://pypi.org/project/zssh/
You should be able to set up public & private keys so that no auth is needed.
Which way you do it depends on security requirements, etc (be aware that there are linux/unix ssh worms which will look at keys to find other hosts they can attack).
I do this all the time from behind both Linksys and D-Link routers. I think you may need to change a couple of settings but it's not a big deal.
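A minimal key setup sketch (the host name is a placeholder; ssh-copy-id ships with OpenSSH):

ssh-keygen -t rsa -b 4096
ssh-copy-id user@your.dyndns.name

After that, ssh and scp to your.dyndns.name authenticate with the key instead of prompting for a password (or with the key's passphrase, if you set one).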
Use the -M switch.
"Places the ssh client into 'master' mode for connection shar-ing. Multiple -M options places ssh into ``master'' mode with confirmation required before slave connections are accepted. Refer to the description of ControlMaster in ssh_config(5) for details."
I don't quite see how that answers the OP's question - can you expand on this a bit, David?