I have been using OpenSSH for a little bit and just learned the basics of port forwarding in OpenSSH. I own some equipment that has Dropbear installed on it, but it seems the options are different. The equipment has an internal web page operating on port 443, and I would like to forward that to another PC securely.
Port forwarding requires the ssh client and ssh server to interoperate, and of course for the feature to be present and allowed in both. At build time Dropbear has 4 distinct settings for this:
#define DROPBEAR_CLI_LOCALTCPFWD 1
#define DROPBEAR_CLI_REMOTETCPFWD 1
#define DROPBEAR_SVR_LOCALTCPFWD 1
#define DROPBEAR_SVR_REMOTETCPFWD 1
These are all set by default in the official (current dropbear-2022.82) source. AFAICT every public release since 2003 has had some form of TCP forwarding support (but not necessarily enabled when it was built).
Usefully, these options control both the feature itself and whether the feature is documented in the -h help output: if the relevant options are missing from the help output, then they were omitted from the build.
With the Dropbear server you should be able to run dropbear -h (or sshd -h if it has been renamed); the presence of the -j and/or -k options indicates that DROPBEAR_SVR_LOCALTCPFWD and DROPBEAR_SVR_REMOTETCPFWD respectively were set at build time.
With the Dropbear client you should run dbclient -h (or ssh -h); the presence of the -L and/or -R options indicates that DROPBEAR_CLI_LOCALTCPFWD and DROPBEAR_CLI_REMOTETCPFWD respectively were set at build time.
(If the binaries were renamed you can confirm their identity with the -V option.)
Finally, the Dropbear server must be started without the -j and -k options, which disable client requests for local and remote forwarding respectively.
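As a quick check on the target system, something like this should work (a sketch; adjust the binary names if they were renamed):
# if these lines appear in the help output, the corresponding
# forwarding support was compiled in
dropbear -h 2>&1 | grep -E -- '-[jk]'
dbclient -h 2>&1 | grep -E -- '-[LR]'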
If all of the above is as expected (specifically the capabilities and run-time options of the target system's Dropbear ssh server), you should then be able to do something like one of:
ssh -L10443:127.0.0.1:443 dropbearhost
ssh -L10443:x.x.x.x:443 dropbearhost
where localhost:10443 (e.g. https://localhost:10443/) on the initiating system will forward to 127.0.0.1:443 on dropbearhost (or to x.x.x.x:443 with an alternate IP if the web server is bound to a specific address). If SNI is enforced, then adding the virtual-host name to the hosts file on the initiating system should fix that (and might also be required if the web content uses redirects).
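For example, assuming the web interface expects the (hypothetical) name device.internal, an entry like this in the initiating system's hosts file lets you browse to https://device.internal:10443/ instead:
# /etc/hosts on the initiating system (the name is a placeholder)
127.0.0.1   device.internal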
If you happen to be building Dropbear, you can change those build options by hand after running configure:
grep TCPFWD default_options.h >> localoptions.h
and amend, setting them to 0 or 1 as required. (Note though in older versions these defines were named, set and used slightly differently.)
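For example, a localoptions.h that keeps client-side forwarding but builds the server without it might look like this (a sketch; match the macro names to your Dropbear version):
/* localoptions.h (example) */
#define DROPBEAR_CLI_LOCALTCPFWD 1
#define DROPBEAR_CLI_REMOTETCPFWD 1
#define DROPBEAR_SVR_LOCALTCPFWD 0
#define DROPBEAR_SVR_REMOTETCPFWD 0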
I'm trying to use the private key from the OpenPGP card on my Debian laptop from an RPi. I followed the various hints found on Google, in particular:
extra-socket in ~/.gnupg/gpg-agent.conf
removed it again after finding that this extra socket is already created in /run/user/<uid>/gnupg
forward this socket using ~/.ssh/config
Host homegear
HostName homegear
RemoteForward ~/.gnupg/S.gpg-agent /run/user/1000/gnupg/S.gpg-agent.extra
changed the order of the two sockets in the RemoteForward line, since I'm always confused about which one should come first
add the following into /etc/ssh/sshd_config of the RPi
StreamLocalBindUnlink yes
reload the gpg-agent on the laptop (see the command sketch after this list)
open new ssh connection to RPi
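(For the reload step, a minimal sketch, assuming GnuPG 2.1+:)
# either ask the running agent to reload its configuration...
gpg-connect-agent reloadagent /bye
# ...or kill it so it restarts on demand:
gpgconf --kill gpg-agent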
But I always get
Warning: remote port forwarding failed for listen path ~/.gnupg/S.gpg-agent
when connecting to the RPi.
openssh on both laptop and RPi is 7.4 (Debian Stretch), gpg is 2.1.18.
Forwarding the ssh agent connection to be used for the ssh private key (for connecting to GitLab from the RPi) works perfectly; forwarding the gpg private key (for signing commits) doesn't. I'm a bit helpless at the moment. Is there anything obviously wrong? Or is there still a problem with forwarding unix domain sockets, so that I need to use the socat workaround?
Thank you!
I've run into exactly the same issue, except between two Macs running 10.14.2 and GPG 2.2.11, and the only way I was able to get it to work was to specify the absolute path to the sockets on both ends. :( Having a relative path for either the remote or local socket failed in various ways, which makes it a bit of a pain if you're connecting as different usernames on various machines.
I was able to work around that by specifying a number of different Match exec blocks in my ~/.ssh/config:
# Source machine is a personal Mac, connecting to another personal Mac on my local network
Match exec "hostname | grep -F .core" Host *.core
RemoteForward /Users/virtualwolf/.gnupg/S.gpg-agent /Users/virtualwolf/.gnupg/S.gpg-agent.extra
# Source machine is a personal Mac, connecting to my Linux box
Match exec "hostname | grep -F .core" Host <name of the host block for my Linux box>
RemoteForward /home/virtualwolf/.gnupg/S.gpg-agent /Users/virtualwolf/.gnupg/S.gpg-agent.extra
# Source machine is my work Mac, connecting to my Linux box
Match exec "hostname | grep -F <work machine name>" Host <name of the host block for my Linux box>
RemoteForward /home/virtualwolf/.gnupg/S.gpg-agent /Users/<work username>/.gnupg/S.gpg-agent.extra
(SSH bits are taken from this answer).
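To find the absolute socket paths to use on each end, gpgconf can print them (GnuPG 2.1+):
gpgconf --list-dirs agent-socket        # e.g. /run/user/1000/gnupg/S.gpg-agent
gpgconf --list-dirs agent-extra-socket  # e.g. /run/user/1000/gnupg/S.gpg-agent.extra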
First off: I have read the answers to similar questions on SO, but none of them worked.
IMPORTANT NOTE: The answer below is still valid, but maybe jump to the end for an alternative.
The situation:
App with GUI is running in a docker container (CentOS 7.1) under Arch Linux. (machine A)
Machine A has a monitor connected to it.
I want to access this GUI via X11 forwarding on my Arch Linux client machine. (machine B)
What works:
GUI works locally on machine A (with /tmp/.X11-unix being mounted in the Docker container).
X11 forwarding of any app running outside of docker (X11 forwarding is set up and running properly for non-docker usage).
I can even switch the user while remotely logged in, copy the .Xauthority file to the other user and X11 forwarding works as well.
Some setup info:
Docker networking is 'bridged'.
Container can reach host (firewall is open).
DISPLAY variable is set in container (to host-ip-addr:10.0 because of TCP port 6010 where sshd is listening).
Packets to X forward port (6010) are reaching the host from the container (tcpdump checked).
What does not work:
X11 forwarding of the Docker app
Errors:
X11 connection rejected because of wrong authentication.
xterm: Xt error: Can't open display: host-ip-addr:10.0
Things i tried:
starting client ssh with ssh -Y option on machine B
putting "X11ForwardTrusted yes" in ssh_config on machine B
xhost + (so allow any clients to connect) on machine B
putting Host * in ssh_config on machine B
putting X11UseLocalhost no in sshd_config on machine A (to allow non-localhost clients)
Adding the X auth token in the container with xauth add from the login user on machine A
Just copying over the .Xauthority file from a working user into the container
Making sure the .Xauthority file has the correct permissions and owner
How can I just disable all the X security stuff and get this working?
Or even better: How can i get it working with security?
Is there at least a way to enable extensive debugging to see where exactly the problem is?
Alternative: The first answer below shows how to effectively resolve this issue. However, I would recommend looking into a different approach altogether, namely VNC. I personally switched to a TigerVNC setup that replaces the X11 forwarding and have not looked back. The performance is just leagues above what X11 forwarding delivered for me. There might be some instances where you cannot use VNC for whatever reason, but I would try it first.
The general setup is now as follows:
- VNC server runs on machine A on the host (not inside a docker container).
- Now you just have to figure out how to get a GUI from inside a docker container (which is a much more trivial undertaking).
- If the docker container was NOT started from the VNC environment, the DISPLAY variable may need adjusting.
Thanks so much @Lazarus535
I found that for me adding the following to my docker command worked:
--volume="$HOME/.Xauthority:/root/.Xauthority:rw"
I found this trick here
EDIT:
As Lazarus correctly pointed out, you also have to set the --net=host option to make this work.
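Putting both together, a minimal sketch (the image name is a placeholder):
docker run --rm -it \
    --net=host \
    --env DISPLAY=$DISPLAY \
    --volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
    some-gui-image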
Ok, here is the thing:
1) Log in to remote machine
2) Check which display was set with echo $DISPLAY
3) Run xauth list
4) Copy the line corresponding to your DISPLAY
5) Enter your docker container
6) xauth add <the line you copied>*
7) Set DISPLAY with export DISPLAY=<ip-to-host>:<no-of-display>
*so far so good right?
This was nothing new...however here is the twist:
The line printed by xauth list for the login user looks something like this (in my case):
<hostname-of-machine>/unix:<no-of-display> MIT-MAGIC-COOKIE-1 <some number here>
Because I use the bridged docker setup, the X forwarding port is not listening locally, because sshd is not running in the container. Change the line above to:
<ip-of-host>:<no-of-display> MIT-MAGIC-COOKIE-1 <some number here>
In essence: Remove the /unix part.
<ip-of-host> is the IP address where the sshd is running.
Set the DISPLAY variable as above.
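A condensed sketch of the whole sequence (cookie value and addresses are placeholders):
# on the host, logged in over ssh -X:
echo $DISPLAY              # e.g. localhost:10.0
xauth list                 # e.g. myhost/unix:10 MIT-MAGIC-COOKIE-1 abcd1234...
# inside the container:
xauth add 192.168.1.10:10 MIT-MAGIC-COOKIE-1 abcd1234...
export DISPLAY=192.168.1.10:10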
So the error was that the DISPLAY name in the environment variable was not the "same" as the entry in the xauth list / .Xauthority file, and the client could therefore not authenticate properly.
I switched back to an untrusted X11 forwarding setting.
The X11UseLocalhost no setting in the sshd_config file is important, however, because the incoming connection will come from a "different" machine (the docker container).
This works in any scenario.
Install xhost if you don't have it. Then, in bash,
export DISPLAY=:0.0
xhost +local:docker
After this run your docker run command (or whatever docker command you are running) with -e DISPLAY=$DISPLAY
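For example (the image name is a placeholder; this assumes the X socket is shared into the container, as elsewhere in this thread):
export DISPLAY=:0.0
xhost +local:docker
docker run -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix some-gui-image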
Usually this works via https://stackoverflow.com/a/61060528/429476
But if you are running docker as a different user than the one you used for ssh -X into the server, then copying the Xauthority only helped together with volume-mapping the file.
Example: I sshed into the server as the alex user. Then I ran docker after su - root and got this error
X11 connection rejected because of wrong authentication.
Copying the .Xauthority file and mapping it as in https://stackoverflow.com/a/51209546/429476 made it work:
cp /home/alex/.Xauthority .
docker run -it --network=host --env DISPLAY=$DISPLAY --privileged \
--volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
-v /tmp/.X11-unix:/tmp/.X11-unix --rm <dockerimage>
More details on wiring here https://unix.stackexchange.com/a/604284/121634
Some clarifying remarks. Host is A, local machine is B
I've edited this post to note things that I think should work in theory but haven't been tested, vs. things I know to work.
Running docker non-interactively
If your docker container is running non-interactively and running sshd, you can use jump hosts or ProxyCommand and specify the X11 client to run. You should NOT volume-share your Xauthority file with the container, and sharing -e DISPLAY likely has no effect on future ssh sessions.
Since you essentially have two sshd servers, either of the following should work out of the box
if you have openssh-client greater than version 7.3, you can use the following command
ssh -X -J user-on-host@hostmachine,user-on-docker@dockercontainer xeyes
If your openssh client is older, the syntax is instead
(google says the -X is not needed in the proxy command, but I am suspicious)
ssh -X -o ProxyCommand="ssh -W %h:%p user-on-host@hostmachine" user-on-docker@dockermachine xeyes
Or ssh -X into host, then ssh -X into docker.
In either of the above cases, you should NOT share .Xauthority with the container
Running docker interactively from within the ssh session
The easiest way to get this done is to set --net=host and X11UseLocalhost yes.
If your docker is running sshd, you can open a second ssh -X session on your local machine and use the jumphost method as above.
If you start it in the ssh session, you can either pass -e DISPLAY=$DISPLAY or export it once you're in. You might have to export it if you attach to an existing container where this flag wasn't used.
Use these docker args (for --net=host and X11UseLocalhost yes):
ssh -X to host
-e DISPLAY=$DISPLAY
-v $HOME/.Xauthority:/home/same-as-dash-u-user/.Xauthority
-u user
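Assembled into a full invocation, a sketch (usernames and image name are placeholders):
ssh -X user@host
docker run -it --net=host \
    -e DISPLAY=$DISPLAY \
    -v $HOME/.Xauthority:/home/user/.Xauthority \
    -u user some-gui-image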
What follows is an explanation of how everything works, plus other approaches to try.
About Xauthority
ssh -X/-Y sets up a session key in the host's Xauthority file, then sets up a listen port on which it places an X11 proxy that uses the session key and converts it to be compatible with the key on your local machine. By design, the .Xauthority keys will be different between your local machine and the host machine. If you use jump hosts/ProxyCommand, the keys between the host and the container will again be different from each other. If you instead use ssh tunnels or a direct X11 connection, you will have to share the host's Xauthority with the container. In that case you can only have one active session per user, since each new session invalidates the previous ones by modifying the host's .Xauthority so that it only works with that session's ssh X11 proxy.
X11UseLocalhost no theory
Even though X11UseLocalhost no causes the X server to listen on the wildcard address, with --net host I could not redirect the container display to localhost:X.Y, where X and Y are from the host's $DISPLAY.
X11UseLocalhost yes is the easy way
If you choose X11UseLocalhost yes, the DISPLAY variable on the host becomes localhost:X.Y, which causes the ssh X11 proxy to listen only on localhost.
If X11UseLocalhost is no, the DISPLAY variable on the host becomes <the host's hostname>:X.Y, which causes the X server proxy to listen on 0.0.0.0:6000+X and causes X clients to reach out over the network to the hostname specified.
this is theoretical, I don't yet have access to docker on a remote host to test this
But this is the easy way. We bypass that by redirecting the DISPLAY variable to always be localhost, and do docker port mapping to move the data from localhost:X+1.Y on the container to localhost:X.Y on the host, where ssh is waiting to forward X traffic back to the local machine. The +1 makes us agnostic to running either --net=host or --net=bridge.
Setting up container ports requires an EXPOSE directive in the Dockerfile and publishing the ports with the -p option.
Setting everything up manually without ssh -X
This works only with --net host. This approach works without xauth because we are piping directly to the unix domain socket on the local machine.
ssh to host without -X
ssh -R6010:localhost:6010 user@host
start docker with -e DISPLAY=localhost:10.1, or export it inside
in another terminal on local machine
socat -d -d TCP-LISTEN:6010,fork UNIX-CONNECT:/tmp/.X11-unix/X0
In the original terminal, run your X clients
if the container is --net=bridge and you can't use docker ports, enable sshd on the container and use the jump hosts method
I'm testing out Ansible and am already stuck on a fairly simple thing. I configured my /etc/ansible/hosts to contain the IP of my server:
[web]
1.2.3.4
Now, connecting to it with ansible all -vvvv -m ping fails since my ~/.ssh/config for the specified server uses a custom port, a key file not in the default location, etc. How do I tell Ansible to reuse my SSH configuration for the connection?
It's a little esoteric, so it's understandable you missed it. This is from the Ansible Inventory page:
If you have hosts that run on non-standard SSH ports you can put the port number after the hostname with a colon. Ports listed in your SSH config file won’t be used with the paramiko connection but will be used with the openssh connection.
To make things explicit, it is suggested that you set them if things are not running on the default port
So, in your case:
[web]
1.2.3.4:9000
(Using 9000 as your alt port, of course)
Ansible uses paramiko on systems with a dated version of ssh, such as CentOS/RHEL.
What tedder42 said, plus there are other slightly more advanced ways of defining your ssh config on a per-host basis.
[web]
1.2.3.4 ansible_ssh_port=9000
This only makes sense if you're also using the other ansible_ssh special variables like ansible_ssh_user and ansible_ssh_host.
ansible_ssh_host is helpful if you want to be able to refer to a server by a name of your choosing instead of its IP.
[web]
apache ansible_ssh_host=1.2.3.4 ansible_ssh_port=9000
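The question also mentions a key file outside the default location; the same pattern covers that with the ansible_ssh_user and ansible_ssh_private_key_file variables (the values here are examples):
[web]
apache ansible_ssh_host=1.2.3.4 ansible_ssh_port=9000 ansible_ssh_user=deploy ansible_ssh_private_key_file=~/.ssh/custom_key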
If you end up with multiple hosts using the same alternative ssh port, you can make use of Ansible's group variable function.
[web]
1.2.3.4
5.6.7.8
9.10.11.12
[web:vars]
ansible_ssh_port=9000
Now Ansible will use port 9000 on all three of the hosts in the web group.
Understanding how to organize your inventory data goes a long way to your success with Ansible.
I'm creating a small script to update some remote servers (2+)
I am making multiple connections to each server; is there a way I can reuse the SSH connections so I don't have to open too many at once?
If you open the first connection with -M:
ssh -M $REMOTEHOST
subsequent connections to $REMOTEHOST will "piggyback" on the connection established by the master ssh. Most noticeably, further authentication is not required. See man ssh_config under "ControlMaster" for more details. Use -S to specify the path to the shared socket; I'm not sure what the default is, because I configure connection sharing using the configuration file instead.
In my .ssh/config file, I have the following lines:
host *
ControlMaster auto
ControlPath ~/.ssh/ssh_mux_%h_%p_%r
This way, I don't have to remember to use -M or -S; ssh figures out if a sharable connection already exists for the host/port/username combination and uses that if possible.
This option is available in OpenSSH since 2004.
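With a config like the above in place, you can confirm that sharing is working with ssh's control commands:
ssh -O check host    # reports whether a master connection is running for this host
ssh -O exit host     # asks the running master to shut down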
I prefer the method described at Puppet Labs https://puppetlabs.com/blog/speed-up-ssh-by-reusing-connections
Add these lines to ~/.ssh/config and run mkdir ~/.ssh/sockets
Host *
ControlMaster auto
ControlPath ~/.ssh/sockets/%r@%h-%p
ControlPersist 600
Read the full blog post for more useful information about what these do and the idiosyncrasies of ssh when used like this. I highly recommend reading the blog or you may find things don't work as you expect.
Alternatively, you can do it this way:
ssh_conn="ssh -t -o ControlPath=~/.ssh/master-$$ -o ControlMaster=auto -o ControlPersist=60"
$ssh_conn user@server
ControlPath=~/.ssh/master-$$ sets up a control path for the ssh connection, limiting connection reuse to the current shell (via the $$ PID)
ControlMaster=auto allows the connection session to be shared using the ControlPath
ControlPersist=60 sets the amount of time (in seconds) the idle master connection stays open after the last session closes
For modern-distro setups that have a /run/user/$UID/ for just-this-boot runtime stuff,
controlmaster auto
controlpath /run/user/%i/ssh-%C
controlpersist 900
at the top of the config (where no match or host restrictions are in effect) will set all ssh sessions that share host, port and remote username to use a single connection. I keep addkeystoagent yes and identityfile ~/.ssh/id_ed25519 up there too so ssh doesn't offer all my keys for every host.
When connecting to remote hosts via ssh, I frequently want to bring a file on that system to the local system for viewing or processing. Is there a way to copy the file over that (a) doesn't require opening a new terminal or pausing the ssh session, (b) doesn't require authenticating again to either the local or remote hosts, and (c) works even when one or both of the hosts is behind a NAT router?
The goal is to take advantage of as much of the current state as possible: that there is a connection between the two machines, that I'm authenticated on both, that I'm in the working directory of the file, so I don't have to open another terminal and copy and paste the remote host and path in, which is what I do now. The best solution also wouldn't require any setup before the session began, but if the setup is one-time or can be automated, then that's perfectly acceptable.
zssh (a ZMODEM wrapper over openssh) does exactly what you want.
Install zssh and use it instead of openssh (which I assume you normally use)
You'll have to have the lrzsz package installed on both systems.
Then, to transfer a file zyxel.png from remote to local host:
antti@local:~$ zssh remote
Press ^# (C-Space) to enter file transfer mode, then ? for help
...
antti@remote:~$ sz zyxel.png
**B00000000000000
^#
zssh > rz
Receiving: zyxel.png
Bytes received: 104036/ 104036 BPS:16059729
Transfer complete
antti@remote:~$
Uploading goes similarly, except that you just switch rz(1) and sz(1).
Putty users can try Le Putty, which has similar functionality.
On a linux box I use the ssh-agent and sshfs. You need to set up sshd to accept connections with key pairs. Then you use ssh-add to add your key to the ssh-agent so you don't have to type your password every time. Be sure to use -t seconds, so the key doesn't stay loaded forever.
ssh-add -t 3600 /home/user/.ssh/ssh_dsa
After that,
sshfs hostname:/ /PathToMountTo/
will mount the server file system on your machine so you have access to it.
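When you're done, unmount it again (fusermount -u is the standard FUSE unmount on Linux):
fusermount -u /PathToMountTo/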
Personally, I wrote a small bash script that adds my key and mounts the servers I use the most, so when I start to work I just have to launch the script and type my passphrase.
Using some little-known and rarely used features of the openssh implementation, you can accomplish precisely what you want!
takes advantage of the current state
can use the working directory where you are
does not require any tunneling setup before the session begins
does not require opening a separate terminal or connection
can be used as a one-time deal in an interactive session or can be used as part of an automated session
You should only type what is at each of the local>, remote>, and ssh> prompts in the examples below.
local> ssh username@remote
remote> ~C
ssh> -L6666:localhost:6666
remote> nc -l 6666 < /etc/passwd
remote> ~^Z
[suspend ssh]
[1]+ Stopped ssh username@remote
local> (sleep 1; nc localhost 6666 > /tmp/file) & fg
[2] 17357
ssh username@remote
remote> exit
[2]- Done ( sleep 1; nc localhost 6666 > /tmp/file )
local> cat /tmp/file
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
...
Or, more often, you want to go the other direction, for example if you want to do something like transfer your ~/.ssh/id_rsa.pub file from your local machine to the ~/.ssh/authorized_keys file of the remote machine.
local> ssh username@remote
remote> ~C
ssh> -R5555:localhost:5555
remote> ~^Z
[suspend ssh]
[1]+ Stopped ssh username@remote
local> nc -l 5555 < ~/.ssh/id_rsa.pub &
[2] 26607
local> fg
ssh username@remote
remote> nc localhost 5555 >> ~/.ssh/authorized_keys
remote> cat ~/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2ZQQQQBIwAAAQEAsgaVp8mnWVvpGKhfgwHTuOObyfYSe8iFvksH6BGWfMgy8poM2+5sTL6FHI7k0MXmfd7p4rzOL2R4q9yjG+Hl2PShjkjAVb32Ss5ZZ3BxHpk30+0HackAHVqPEJERvZvqC3W2s4aKU7ae4WaG1OqZHI1dGiJPJ1IgFF5bWbQl8CP9kZNAHg0NJZUCnJ73udZRYEWm5MEdTIz0+Q5tClzxvXtV4lZBo36Jo4vijKVEJ06MZu+e2WnCOqsfdayY7laiT0t/UsulLNJ1wT+Euejl+3Vft7N1/nWptJn3c4y83c4oHIrsLDTIiVvPjAj5JTkyH1EA2pIOxsKOjmg2Maz7Pw== username@local
A little bit of explanation is in order.
The first step is to open a LocalForward; if you don't already have one established then you can use the ~C escape character to open an ssh command line which will give you the following commands:
remote> ~C
ssh> help
Commands:
-L[bind_address:]port:host:hostport Request local forward
-R[bind_address:]port:host:hostport Request remote forward
-D[bind_address:]port Request dynamic forward
-KR[bind_address:]port Cancel remote forward
In this example I establish a LocalForward on port 6666 of localhost for both the client and the server; the port number can be any arbitrary open port.
The nc command is from the netcat package; it is described as the "TCP/IP swiss army knife"; it is a simple, yet very flexible and useful program. Make it a standard part of your unix toolbelt.
At this point nc is listening on port 6666 and waiting for another program to connect to that port so it can send the contents of /etc/passwd.
Next we make use of another escape character ~^Z which is tilde followed by control-Z. This temporarily suspends the ssh process and drops us back into our shell.
Once back on the local system you can use nc to connect to the forwarded port 6666. Note the lack of a -l in this case, because that option tells nc to listen on a port as if it were a server, which is not what we want; instead we want to just use nc as a client to connect to the already listening nc on the remote side.
The rest of the magic around the nc command is required because, as noted above, the ssh process was temporarily suspended, so the & puts the whole (sleep + nc) expression into the background and the sleep gives you enough time for ssh to return to the foreground with fg.
In the second example the idea is basically the same, except we set up a tunnel going the other direction using -R instead of -L so that we establish a RemoteForward. And then on the local side is where you want to use the -l argument to nc.
The escape character by default is ~ but you can change that with:
-e escape_char
Sets the escape character for sessions with a pty (default: '~'). The escape character is only recognized at the beginning of a line. The escape character followed by a dot ('.') closes the connection; followed by control-Z suspends the connection; and followed by itself sends the escape character once. Setting the character to "none" disables any escapes and makes the session fully transparent.
A full explanation of the commands available with the escape characters is available in the ssh manpage
ESCAPE CHARACTERS
When a pseudo-terminal has been requested, ssh supports a number of functions through the use of an escape character.
A single tilde character can be sent as ~~ or by following the tilde by a character other than those described below. The escape character must always follow a newline to be interpreted as special. The escape character can be changed in configuration files using the EscapeChar configuration directive or on the command line by the -e option.
The supported escapes (assuming the default ‘~’) are:
~. Disconnect.
~^Z Background ssh.
~# List forwarded connections.
~& Background ssh at logout when waiting for forwarded connection / X11 sessions to terminate.
~? Display a list of escape characters.
~B Send a BREAK to the remote system (only useful for SSH protocol version 2 and if the peer supports it).
~C Open command line. Currently this allows the addition of port forwardings using the -L, -R and -D options (see above). It also allows the cancellation of existing remote port forwardings using -KR[bind_address:]port. !command allows the user to execute a local command if the PermitLocalCommand option is enabled in ssh_config(5). Basic help is available, using the -h option.
~R Request rekeying of the connection (only useful for SSH protocol version 2 and if the peer supports it).
Using ControlMaster (the -M switch) is the best solution, way simpler and easier than the rest of the answers here. It allows you to share a single connection among multiple sessions. Sounds like it does what the poster wants. You still have to type the scp or sftp command line though. Try it. I use it for all of my sshing.
In order to do this I have my home router set up to forward port 22 back to my home machine (which is firewalled to only accept ssh connections from my work machine) and I also have an account set up with DynDNS to provide Dynamic DNS that will resolve to my home IP automatically.
Then when I ssh into my work computer, the first thing I do is run a script that starts an ssh-agent (if your server doesn't do that automatically). The script I run is:
#!/bin/bash
ssh-agent sh -c 'ssh-add < /dev/null && bash'
It asks for my ssh key passphrase so that I don't have to type it in every time. You don't need that step if you use an ssh key without a passphrase.
For the rest of the session, sending files back to your home machine is as simple as
scp file_to_send.txt your.domain.name:~/
Here is a hack called ssh-xfer which addresses the exact problem, but requires patching OpenSSH, which is a nonstarter as far as I'm concerned.
Here is my preferred solution to this problem. Set up a reverse ssh tunnel upon creating the ssh session. This is made easy by two bash functions: grabfrom() needs to be defined on the local host, while grab() should be defined on the remote host. You can add any other ssh options you use (e.g. -X or -Y) as you see fit.
function grabfrom() { ssh -R 2202:127.0.0.1:22 ${@}; };
function grab() { scp -P 2202 $@ localuser@127.0.0.1:~; };
Usage:
localhost% grabfrom remoteuser@remotehost
password: <remote password goes here>
remotehost% grab somefile1 somefile2 *.txt
password: <local password goes here>
Positives:
It works without special software on either host beyond OpenSSH
It works when local host is behind a NAT router
It can be implemented as a pair of one-line bash functions
Negatives:
It uses a fixed port number so:
won't work with multiple connections to remote host
might conflict with a process using that port on the remote host
It requires localhost accept ssh connections
It requires a special command when initiating the session
It doesn't implicitly handle authentication to the localhost
It doesn't allow one to specify the destination directory on localhost
If you grab from multiple localhosts to the same remote host, ssh won't like the keys changing
Future work:
This is still pretty kludgy. Obviously, it would be possible to handle the authentication issue by setting up ssh keys appropriately and it's even easier to allow the specification of a remote directory by adding a parameter to grab()
More difficult is addressing the other negatives. It would be nice to pick a dynamic port, but as far as I can tell there is no elegant way to pass that port to the shell on the remote host: as best I can tell, OpenSSH doesn't allow you to set arbitrary environment variables on the remote host, and bash can't take environment variables from a command-line argument. Even if you could pick a dynamic port, there is no way to ensure it isn't used on the remote host without connecting first.
You can use the SCP protocol for transferring a file. You can refer to this link:
http://tekheez.biz/scp-protocol-in-unix/
Another way: you can expose your files over HTTP and download them from another server. You can achieve this using the ZSSH Python library.
ZSSH - ZIP over SSH (Simple Python script to exchange files between servers).
Install it using PIP.
python3 -m pip install zssh
Run this command from your remote server.
python3 -m zssh -as --path /desktop/path_to_expose
It will give you a URL to use from another server.
On the local system, or on another server where you need to download and extract those files:
python3 -m zssh -ad --path /desktop/path_to_download --zip http://example.com/temp_file.zip
For more about this library: https://pypi.org/project/zssh/
You should be able to set up public & private keys so that no auth is needed.
Which way you do it depends on security requirements, etc (be aware that there are linux/unix ssh worms which will look at keys to find other hosts they can attack).
I do this all the time from behind both linksys and dlink routers. I think you may need to change a couple of settings but it's not a big deal.
Use the -M switch.
"Places the ssh client into 'master' mode for connection shar-ing. Multiple -M options places ssh into ``master'' mode with confirmation required before slave connections are accepted. Refer to the description of ControlMaster in ssh_config(5) for details."
I don't quite see how that answers the OP's question - can you expand on this a bit, David?