I'm working on a remote server that doesn't have internet access. I have 3 machines: A, hasinet and noinet. I want to give noinet access to the internet, which I successfully achieved thanks to this link. (For others interested: don't forget to add the right socks config in your .curlrc if you don't have sudo.)
I'm looking for a way to automate and pythonize this procedure with Paramiko:
machineA$ ssh noinet
noinet$ ssh -D 1080 hasinet
Here's what I came up with:
import paramiko

# Connect from machine A to noinet
noinet = paramiko.SSHClient()
noinet.set_missing_host_key_policy(paramiko.AutoAddPolicy())
noinet.connect('noinet', port=22, username='XXX', password='YYY')

# Open a direct-tcpip channel through noinet to hasinet's SSH port
noinet_transport = noinet.get_transport()
noinet_channel = noinet_transport.open_channel("direct-tcpip", ("hasinet", 22), ("noinet", 22))

# Connect to hasinet through that channel
hasinet = paramiko.SSHClient()
hasinet.set_missing_host_key_policy(paramiko.AutoAddPolicy())
hasinet.connect('hasinet', username='XXX', password='YYY', sock=noinet_channel)

# Opening a tunnel from noinet to hasinet on port 1080
print("Opening tunnel")
forward_tunnel(1080, "localhost", 1080, hasinet.get_transport())
The code doesn't raise any errors, and the original code worked fine when I used only 2 machines. I don't have an SSH server on machine A, so I need to reach the internet by passing through B; that's why there are 3 machines. If machine B had Python, I guess it would also be easy (but it would expose my password).
The forward_tunnel comes from paramiko's demo examples: https://github.com/paramiko/paramiko/blob/master/demos/forward.py
The tunnel opens fine and the script doesn't raise any error, but the proxy at localhost:1080 gives a Connection refused, which suggests the tunnel isn't actually working.
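One detail worth noting here: `ssh -D 1080` doesn't open a plain port-to-port forward, it runs a SOCKS proxy on port 1080, while `forward_tunnel` from the paramiko demo only relays a fixed port. So whatever listens on localhost:1080 must actually speak the SOCKS protocol. As an illustrative sketch (not part of paramiko), this is the SOCKS5 CONNECT request a client like curl would send, which a Paramiko-based replacement for `ssh -D` would have to parse before opening a matching "direct-tcpip" channel:

```python
import struct

def parse_socks5_connect(data: bytes):
    """Parse a SOCKS5 CONNECT request (RFC 1928) and return (host, port).

    A local server replicating `ssh -D` would read this from the client,
    then open a "direct-tcpip" paramiko channel to (host, port).
    """
    ver, cmd, _rsv, atyp = data[0], data[1], data[2], data[3]
    if ver != 5 or cmd != 1:  # only SOCKS5 CONNECT is handled in this sketch
        raise ValueError("not a SOCKS5 CONNECT request")
    if atyp == 3:  # domain name: 1-byte length prefix, then the name
        length = data[4]
        host = data[5:5 + length].decode()
        port = struct.unpack(">H", data[5 + length:7 + length])[0]
    elif atyp == 1:  # IPv4: 4 raw bytes
        host = ".".join(str(b) for b in data[4:8])
        port = struct.unpack(">H", data[8:10])[0]
    else:
        raise ValueError("address type not handled in this sketch")
    return host, port
```
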
I have two things that I'm able to do separately but would like to combine into a single step so I can automate it with Ansible.
Host A is my own laptop that cannot directly access C
Host B is a server with internet access that can access C. It is running squid
Host C has no internet access
I can manually SSH to B and set up a reverse tunnel when I SSH to C. This allows C to have internet access if I set up http_proxy and https_proxy in the environment.
I can also use Ansible to connect to host C from host A via the proxy host B. However, so far whenever I do this, host C has no internet access because the reverse tunnel isn't set up.
So I'm able to get close to my goal of running Ansible jobs on C while enabling internet access with a reverse tunnel from B, but I can't combine these two steps. So far I have
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=ERROR -o ProxyCommand="ssh -p 22 -W %h:%p -q admin@HOST_B"'
ansible_ssh_extra_args="-R 3129:localhost:3128"
This works to connect to C, but I cannot access the internet. I'm guessing the ansible_ssh_extra_args are applied on my host machine, when really I want them to be run from the proxy server B when connecting to C.
I've tried putting the -R 3129:localhost:3128 in a few different places but without success. If I manually connect to B and run the reverse proxy command before running the Ansible task (which tests pinging Google) it works as expected.
How can I tell Ansible to use a reverse tunnel from the proxy server when connecting to C?
According to your description:
This allows C to have internet access if I set up http_proxy and https_proxy in the environment.
I understand that (only) for tasks which require internet access (and those might be very few), you probably just need to set the remote environment via
- name: Update all packages
  yum:
    name: '*'
    state: latest
  environment:
    http_proxy: http://localhost:3128
    https_proxy: http://localhost:3128
Please keep in mind that Ansible is based on task execution: for each task a new connection is made and a small package is transferred for remote execution.
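The environment: keyword simply exports those variables for the process that runs the task, and most HTTP clients on the remote host pick them up. The mechanism can be seen with nothing but the Python standard library (proxy address taken from the answer above):

```python
import os
import urllib.request

# Simulate what Ansible's `environment:` keyword does for a task:
# export the proxy variables for the process doing the work.
os.environ["http_proxy"] = "http://localhost:3128"
os.environ["https_proxy"] = "http://localhost:3128"

# urllib (like yum, curl, pip, ...) resolves its proxies from these variables.
print(urllib.request.getproxies())
```
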
I'm trying to SSH into my Cisco ISR router. DHCP is working, I can ping the default gateway (the ISR), and I can SSH to other devices on the LAN, so I know the LAN connection isn't the problem. I set up a local user that works with console logins, so that's not the problem either.
I set up my ssh connection on the router with the following commands:
(config)#line vty 0 21
(config-line)#login local
(config-line)#exec-timeout 3
(config-line)#rotary 1
(config-line)#transport input ssh
(config)#crypto key generate rsa
(config)#ip ssh version 2
(config)#ip ssh port 2222 rotary 1
(config)#ip ssh authentication-retries 3
Then when I nmap the router it has the following ports open:
PORT STATE SERVICE
22/tcp open ssh
However, every time I try to log in to the router I get a Network is unreachable error. This happens on both port 2222 and port 22, and with both the plain IP and the username@ip forms for ssh; absolutely nothing works. I managed to get in with telnet using the default settings earlier, but I'm not sure how to get in with SSH.
Thank you all for the help, I know it was very open ended so just let me know anything that could be helpful and I'll provide it.
One logical test step would be to switch back to port 22.
Network unreachable usually indicates that your machine has no route to the host, i.e. the packets can't even be sent out.
It could be due to multiple reasons, but since you've mentioned that ping and telnet went fine, I'd suggest you revert the port config and restart the unit once. See how it goes.
Other possible reasons could be an ACL block and/or a firewall block on your machine, but I think that's unlikely.
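It also helps to distinguish the two errors precisely: "Network is unreachable" (ENETUNREACH) is raised locally when there is no route at all, while "Connection refused" (ECONNREFUSED) means the host answered but nothing listens on that port. A small probe sketch (the router address in the usage comment is a placeholder):

```python
import errno
import socket

def probe(host: str, port: int, timeout: float = 3.0) -> str:
    """Try a TCP connect and name the failure, if any."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        sock.connect((host, port))
        return "OPEN"
    except OSError as exc:
        # ENETUNREACH -> no route; ECONNREFUSED -> host reachable, port closed
        return errno.errorcode.get(exc.errno, str(exc))
    finally:
        sock.close()

# Usage against the router (placeholder address):
# print(probe("192.168.1.1", 22), probe("192.168.1.1", 2222))
```
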
I have used these instructions for Running GUI Apps with Docker to create images that allow me to launch GUI-based applications.
It all works flawlessly when running Docker on the same machine, but it stops working when running it on a remote host.
Locally, I can run
docker run --rm -ti -e DISPLAY -e <X tmp> <image_name> xclock
And I can get xclock running on my host machine.
When connecting remotely to a host with XForwarding, I am able to run X applications that show up on my local X Server, as anyone would expect.
However, if I try to run the above docker command on the remote host, it fails to connect to the DISPLAY (usually localhost:10.0).
I think the problem is that the X forwarding is set up on the localhost interface of the remote host.
So the Docker container has no way to connect to DISPLAY=localhost:10.0, because that localhost means the remote host, which is unreachable from inside the container.
Can anyone suggest an elegant way to solve this?
Regards
Alessandro
EDIT1:
One possible way, I guess, is to use socat to forward the remote /tmp/.X11-unix socket to the local machine. This way I would not need to use port forwarding.
It also looks like OpenSSH 6.7 will natively support Unix socket forwarding.
When running X applications through SSH (ssh -X), you are not using the /tmp/.X11-unix socket to communicate with the X server. You are rather using a tunnel through SSH reached via "localhost:10.0".
In order to get this to work, you need to make sure the SSH server supports X connections to the external address by setting
X11UseLocalhost no
in /etc/ssh/sshd_config.
Then $DISPLAY inside the container should be set to the IP address of the Docker host computer on the docker interface - typically 172.17.0.1. So $DISPLAY will then be 172.17.0.1:10
You need to add the X authentication token inside the docker container with "xauth add" (see here)
If there is any firewall on the Docker host computer, you will have to open up the TCP ports related to this tunnel. Typically you will have to run something like
ufw allow from 172.17.0.0/16 to any port $TCPPORT proto tcp
if you use ufw.
Then it should work. I hope it helps. See also my other answer here https://stackoverflow.com/a/48235281/5744809 for more details.
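To make the port numbers concrete: an X display host:N maps to TCP port 6000 + N, so the $TCPPORT in the ufw rule above would be 6010 for DISPLAY=172.17.0.1:10. A small illustrative helper:

```python
def display_to_tcp(display: str):
    """Map an X11 DISPLAY string like 'host:N' or 'host:N.S' to (host, tcp_port).

    X over TCP listens on port 6000 + display number.
    """
    host, _, rest = display.rpartition(":")
    number = int(rest.split(".")[0])  # drop the screen part, if any
    return host, 6000 + number

print(display_to_tcp("172.17.0.1:10"))   # the DISPLAY from this answer
print(display_to_tcp("localhost:10.0"))  # the DISPLAY from the question
```
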
I am trying to set up Open MPI between a few machines on our network.
Open MPI works fine locally, but I just can't get it to work on a remote node.
I can ssh into the remote machine (without password) just fine, but if I try something like
mpiexec -n 4 --host remote.host hello_c
then the ssh connection just times out.
I checked several tutorials but the only configuration instructions they give is "make sure you can ssh into the remote machine without a password". I did and I still can't launch nodes on remote machines. What's the problem?
I have the same issue. Try to connect over SSH with RSA certificates.
Edit 03/24: this does not work... sorry
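Since the tutorials only say "make sure you can ssh without a password", one additional thing worth checking is that ssh also succeeds with no interactive prompt of any kind, which is the condition mpiexec actually needs when it launches its helper process on the remote node. A sketch of reproducing that condition by hand (the hostname is the one from the question; note this won't help if a firewall blocks the TCP connections Open MPI itself opens between the nodes):

```python
import subprocess

def batch_ssh_argv(host: str, command: str = "true"):
    # BatchMode=yes makes ssh fail rather than prompt for a password,
    # mimicking how mpiexec starts processes on remote nodes.
    return ["ssh", "-o", "BatchMode=yes", "-o", "ConnectTimeout=5", host, command]

# Usage (uncomment with a reachable host):
# result = subprocess.run(batch_ssh_argv("remote.host"), capture_output=True)
# print("non-interactive ssh OK" if result.returncode == 0 else result.stderr.decode())
```
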
I need to do some work on a server to which I don't have direct access. I do have access to my company network (via VPN). If I were on that network, I could access the server directly. But, for some reason, when I'm on the VPN, I can't.
So, I need to ssh into an intermediary ubuntu box, and then create an ssh tunnel from that box to the server.
Then, I can do my work on my laptop and send it through a local tunnel that points to a foreign tunnel (on my ubuntu box) that goes to the server.
But I don't know how to do a tunnel that creates another tunnel to a third server.
Any ideas?
Thanks,
Scott
What are you trying to achieve? If you just want to get to a shell on the server then ssh into the Ubuntu box and then ssh from there to the server.
If you want to access some other network resource on the server then you want to forward a port from the server (where you can't get to it) to the Ubuntu box (where you can). Take a look at the -L option in ssh.
Edit:
Copying files to the server:
tar c path/* | ssh ubuntuName 'ssh serverName "tar x"'
Copying stuff back:
ssh ubuntuName 'ssh serverName "tar c path/*"' | tar x
Obviously you need to change ubuntuName, serverName and path/* to what you want. To use rsync you need the -e option and the same trick of wrapping one ssh command inside another. After reading your comment, I'd say that the most general answer to your question is that the trick is making ssh execute a command on the target machine. You do this by specifying the command as an argument after the machine name. If you use ssh as the target command for ssh to execute, then you get the two-hop behaviour that you are looking for. Then it is just a matter of playing with quotes until everything is escaped correctly.
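The playing-with-quotes part can also be delegated to code. Here is a sketch using Python's shlex.quote to build the nested command safely (the host names and remote command are the placeholders from the examples above):

```python
import shlex

def nested_ssh(hop: str, target: str, remote_cmd: str) -> str:
    """Build `ssh hop 'ssh target "remote_cmd"'` with correct escaping.

    Each wrapping ssh strips one layer of quoting, so every layer
    must be re-quoted for the shell that will eventually see it.
    """
    inner = "ssh {} {}".format(target, shlex.quote(remote_cmd))
    return "ssh {} {}".format(hop, shlex.quote(inner))

print(nested_ssh("ubuntuName", "serverName", "tar x"))
```
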
It's just a double port forward. Forward the ports from the PC to the Ubuntu box, then on the Ubuntu box forward those destination ports to the final endpoint. It's been a while since I've done command-line ssh (I've been trapped in Windows hell :)), so I can't give you the exact command line you need. Another possibility is to use the SOCKS proxy ability built into SSH.
To connect from your local machine over a second machine to a specific port on the third machine you can use the ssh -N -L option:
ssh -N second_machine -L 8080:third_machine:8082
This maps port 8082 on the third machine to port 8080 on the local machine (e.g. http://localhost:8080/).
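Under the hood, each -L forward is just a small TCP relay: accept a connection locally, connect to the target, and copy bytes in both directions. A self-contained sketch of that relay logic (ports are arbitrary; this only illustrates the mechanism and provides none of ssh's encryption):

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    # Copy bytes one way until EOF, then half-close the other side.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def forward_once(listen_port: int, target_host: str, target_port: int,
                 ready: threading.Event = None) -> None:
    # Accept one connection and relay it to the target: the essence of
    # what each hop of `ssh -L listen_port:target_host:target_port` does.
    with socket.create_server(("127.0.0.1", listen_port)) as server:
        if ready is not None:
            ready.set()
        client, _ = server.accept()
        remote = socket.create_connection((target_host, target_port))
        upstream = threading.Thread(target=pipe, args=(remote, client))
        upstream.start()
        pipe(client, remote)
        upstream.join()
        client.close()
        remote.close()
```

Chaining two of these relays (one per machine) gives the double port forward described above.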