I'd like to create an SSH tunnel from my computer, through a remote server, to a Docker container running Jupyter Notebook (computer > server > Docker container), so that I can run a Jupyter Notebook in the browser on my computer.
The Docker container is hosted on a machine running OS X (El Capitan). Docker is using the default machine IP: 192.168.99.100.
$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
default   *        virtualbox   Running   tcp://192.168.99.100:2376           v1.11.1
I am able to physically sit at the server running the Docker container and use my browser (192.168.99.100:8888) to create Jupyter Notebooks from that Docker container. This verifies that my Docker port bindings work and that I'm running the Jupyter Notebook correctly.
However, I don't know how to establish an SSH tunnel from a client machine to that remote machine's Docker container and launch a Jupyter Notebook in my browser on the client machine.
The output from:
$ docker ps
produces the following:
CONTAINER ID   IMAGE                      COMMAND       CREATED          STATUS          PORTS                              NAMES
48a8ac126c72   kubu4/bioinformatics:v11   "/bin/bash"   55 minutes ago   Up 55 minutes   8787/tcp, 0.0.0.0:8888->8888/tcp   stupefied_pasteur
My attempts at creating an SSH tunnel to the remote machine's Docker container result in the following error message in Terminal when I try to open the Jupyter Notebook in my browser on the client machine (localhost:8888):
channel 3: open failed: connect failed: Connection refused
I'm currently using the following in my .ssh/config file to create the tunnel:
Host tunnel3
HostName remote.ip.address
User user
ControlMaster auto
ServerAliveInterval 30
ServerAliveCountMax 3
LocalForward localhost:8888 localhost:8888
I can use this tunneling configuration to successfully launch Jupyter Notebooks in my client browser if I run the Jupyter Notebook on the remote machine outside of the Docker container that's on the remote machine.
Just for added info, this is the output when I launch the Jupyter Notebook in the remote machine's Docker container:
$ jupyter notebook
[I 18:23:32.951 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
[I 18:23:33.072 NotebookApp] Serving notebooks from local directory: /usr/local/bioinformatics
[I 18:23:33.073 NotebookApp] 0 active kernels
[I 18:23:33.073 NotebookApp] The Jupyter Notebook is running at: http://0.0.0.0:8888/
[I 18:23:33.074 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
I figured it out! The "A-ha!" moment was remembering that the remote machine running Docker was OS X (El Capitan). All my Docker builds/tests had been performed on a Linux (Ubuntu 14.04) machine. The difference, it turns out, is critical to solving this problem.
Docker installs on Ubuntu run the daemon natively, so you can use "localhost" to address the Docker container. Docker Toolbox installs on OS X run the daemon inside a VirtualBox VM (docker-machine), so the container has to be addressed via the IP assigned to that VM.
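A quick way to confirm which address the VM was given, assuming the default machine name shown by docker-machine ls above:

$ docker-machine ip default
192.168.99.100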
Realizing this, I changed my SSH tunneling configuration in the .ssh/config file on my client computer.
Old tunneling config:
Host tunnel3
HostName remote.ip.address
User user
ControlMaster auto
ServerAliveInterval 30
ServerAliveCountMax 3
LocalForward localhost:8888 localhost:8888
New tunneling config:
Host tunnel3
HostName remote.ip.address
User user
ControlMaster auto
ServerAliveInterval 30
ServerAliveCountMax 3
LocalForward localhost:8888 192.168.99.100:8888
With this change, I can successfully create/use Jupyter Notebooks in my client browser that are actually hosted in the Docker container on the remote machine, using localhost:8888 in the URL bar.
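For anyone who prefers a one-off command over a config-file entry, a minimal equivalent of the new config (same placeholder user and host as above) should be:

ssh -N -L 8888:192.168.99.100:8888 user@remote.ip.address

-N just keeps the session open for forwarding without running a remote command; the browser still points at localhost:8888.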
I had the same problem, trying to SSH-tunnel into a Google Cloud instance and then into a Docker container.
Local machine: Ubuntu (14.04)
Cloud Instance: Debian (9-stretch)
Find the IP address assigned to the Docker container:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
This gave me 172.18.0.2 for the first container running, 172.18.0.3 for the second, ..0.4, ..0.5, etc. (Note: the steps below didn't work when I was running multiple containers on the same instance. Since I only need to run one container, I haven't tried to fix that.)
SSH into the compute instance.
Make sure ports are exposed between your Docker container and the Compute Engine instance (I used 8888:8888; see the sketch after these steps), then:
gcloud compute ssh {stuff to your instance} -- -L 8888:172.18.0.2:8888
Run Jupyter:
jupyter-notebook --no-browser --ip=0.0.0.0 --allow-root
Now I can open my local browser to localhost:8888/?token... and use jupyter running in a container on my gcloud instance.
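As a sketch of the 8888:8888 port mapping mentioned above (the image name is a placeholder), the container would be started with the port published:

docker run -ti -p 8888:8888 your/jupyter-image

Strictly speaking, forwarding to the container IP (172.18.0.2) directly may work even without the publish, but this mirrors what the step describes.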
I have the following setup:
A Windows 10 Pro Laptop ("Win10Laptop") that has a Windows 10 Pro VM ("Win10VM") running on Hyper-V. I have created an nginx container by running the following command on the host machine:
docker run -d -p 80:80 --name webserver nginx
While the container is running I can access http://localhost from Win10Laptop and this works fine. My question is what do I need to configure to access nginx from Win10VM? Win10VM has only one network adaptor which is configured to use the "External" Vswitch connected to my Wifi interface.
Let me know if you need any more details. I've tried all sorts and can't figure it out!
Thanks,
Michael
You need to connect to the IP the host laptop has acquired on the External switch. Run ipconfig on the host to see what IP it has, then open http://<host-ip> from inside the VM.
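For example, on the host (the address below is hypothetical; use whatever ipconfig reports for the Wi-Fi adapter):

C:\> ipconfig | findstr "IPv4"
   IPv4 Address. . . . . . . . . . . : 192.168.1.23

Then, inside Win10VM, browse to http://192.168.1.23 (port 80 is implied, since the container was published with -p 80:80).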
Using my own laptop to run TensorFlow on my lab's remote server.
I used tensorboard --logdir=./log to try to view the curves of the running results.
I got:
Starting TensorBoard on port 6006
(You can navigate to http://0.0.0.0:6006)
Then I tried to connect to it in the browser, but it failed...
Does anyone know how to configure things so that I can view the TensorBoard of a remote server on my own laptop?
If you start the TensorBoard server on your lab instance using the command you mentioned, it will be running on the lab server and hosting the TensorBoard webpage at labserverIP:6006.
I use a cluster running SLURM (which manages everyone's job submissions) and am able to start the TensorBoard server on a cluster node, SSH into the specific node running it, and forward the site from labserverIP:6006 to my laptop at localhost:6006. My script on GitHub here shows the commands I use to do this for SLURM. Essentially it is these three steps:
1) Start the remote server and run tensorboard --logdir=./log --host $SERVER_IP --port $SERVER_PORT
2) SSH from your laptop using ssh uname@login.node.edu -L $LOCAL_PORT:$SERVER_IP:$SERVER_PORT
You can replace uname#login.node.edu with the server public IP.
3) Go to http://localhost:$LOCAL_PORT in your laptop's browser to access the TensorBoard page.
The other option is to copy all of the log files to your local machine or a shared drive and then start tensorboard on your laptop with the local or shared directory as your logdir.
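If you go the copy-the-logs route, something like this should do it (hostname and paths are placeholders):

rsync -avz uname@login.node.edu:/path/to/log/ ./log/
tensorboard --logdir=./log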
This is how I forward a port on a remote server to my local home computer (-N: run no remote command, -f: go to the background, -L: set up the local forward):
ssh -NfL 6006:localhost:6006 username@remote_server_address
If you are able to SSH into your lab instance from your laptop using a public IP, regardless of the message shown, you could use http://<publicIP>:6006 to view TensorBoard.
Else if there is no public IP associated with the lab machine server, you could try to forward port 6006 while SSH-ing into your lab machine.
Please refer to the OpenSSH port forwarding manual for the details.
This is how we solved it (Linux SLURM server)
SSH to your server and find its IP via the terminal by running: IP=`hostname -I`
Open the tensorboard server on the host server:
python -m tensorboard.main --logdir=/your/dir --host $IP
Use your browser to surf to http://$IP:6006
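One caveat: hostname -I can return several space-separated addresses, which would break the --host argument. If that happens, picking the first one should work:

IP=$(hostname -I | awk '{print $1}')
python -m tensorboard.main --logdir=/your/dir --host $IP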
You can use the following option:
tensorboard --logdir logs --bind_all
Then copy and paste the link into your local browser.
I have used these instructions for Running GUI Apps with Docker to create images that allow me to launch GUI-based applications.
It all works flawlessly when running Docker on the same machine, but it stops working when running it on a remote host.
Locally, I can run
docker run --rm -ti -e DISPLAY -e <X tmp> <image_name> xclock
And I can get xclock running on my host machine.
When connecting remotely to a host with XForwarding, I am able to run X applications that show up on my local X Server, as anyone would expect.
However, if I try to run the above docker command on the remote host, it fails to connect to the DISPLAY (usually localhost:10.0).
I think the problem is that the XForwarding is setup on the localhost interface of the remote host.
So the Docker container has no way to connect to DISPLAY=localhost:10.0, because inside the container "localhost" means the container itself; the remote host's loopback interface is unreachable from there.
Can anyone suggest an elegant way to solve this?
Regards
Alessandro
EDIT1:
One possible way I guess is to use socat to forward the remote /tmp/.X11-unix to the local machine. This way I would not need to use port forwarding.
It also looks like openssh 6.7 will natively support unix socket forwarding.
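As a rough sketch of that idea (untested): since display localhost:10.0 is just TCP port 6010 on the remote host's loopback, socat on the remote host can relay it onto the Docker bridge address (assumed here to be the typical 172.17.0.1) so containers can reach it:

socat TCP-LISTEN:6010,fork,bind=172.17.0.1 TCP:localhost:6010

Containers would then use DISPLAY=172.17.0.1:10, similar to the approach below but without touching sshd_config.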
When running X applications through SSH (ssh -X), you are not using the /tmp/.X11-unix socket to communicate with the X server. You are rather using a tunnel through SSH reached via "localhost:10.0".
In order to get this to work, you need to make sure the SSH server supports X connections to the external address by setting
X11UseLocalhost no
in /etc/ssh/sshd_config.
Then $DISPLAY inside the container should be set to the IP address of the Docker host computer on the docker interface - typically 172.17.0.1. So $DISPLAY will then be 172.17.0.1:10
You need to add the X authentication token inside the Docker container with "xauth add".
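A minimal sketch of those two steps, assuming display :10, the typical 172.17.0.1 bridge address, and a hypothetical cookie value:

# On the Docker host: read the magic cookie for the forwarded display
xauth list $DISPLAY
# e.g.: myhost/unix:10  MIT-MAGIC-COOKIE-1  4d22408a71a55528b0c47ee3894024d9

# Inside the container: point DISPLAY at the host and register the same cookie
export DISPLAY=172.17.0.1:10
xauth add 172.17.0.1:10 MIT-MAGIC-COOKIE-1 4d22408a71a55528b0c47ee3894024d9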
If there is any firewall on the Docker host computer, you will have to open up the TCP ports related to this tunnel. Typically you will have to run something like
ufw allow from 172.17.0.0/16 to any port $TCPPORT proto tcp
if you use ufw.
Then it should work. I hope it helps. See also my other answer here https://stackoverflow.com/a/48235281/5744809 for more details.
I am testing Google Compute Engine, and I created a VM with Ubuntu OS. When I connect to it, by clicking the SSH Connect button, it opens a console window.
Is that the only connection you get?
How do I open a real screen with a GUI on it? I don't want the console.
Much better solution from Google themselves:
https://medium.com/google-cloud/linux-gui-on-the-google-cloud-platform-800719ab27c5
You need to forward the X11 session from the VM to your local machine. This has been covered on the Unix & Linux Stack Exchange site before:
https://unix.stackexchange.com/questions/12755/how-to-forward-x-over-ssh-from-ubuntu-machine
Since you are connecting to a server that is expected to run compute tasks, there may well be no X11 server installed on it. You may need to install X11 and related packages. You can do that by following the instructions here:
https://help.ubuntu.com/community/ServerGUI
Since I have needed to do this recently, I am going to briefly write up the required changes here:
Configure the Server
$ sudo vim /etc/ssh/sshd_config
Ensure that X11Forwarding yes is present. Restart the SSH daemon if you change the settings:
$ sudo /etc/init.d/sshd restart
(On systemd-based distributions, the equivalent is sudo systemctl restart sshd, or ssh on Debian/Ubuntu.)
Configure the Client
$ vim ~/.ssh/config
Ensure that ForwardX11 yes is present for the host. For example:
Host example.com
ForwardX11 yes
Forwarding X11
$ ssh -X -C example.com
...
$ gedit example.txt
Trusted X11 Forwarding
http://dailypackage.fedorabook.com/index.php?/archives/48-Wednesday-Why-Trusted-and-Untrusted-X11-Forwarding-with-SSH.html
You may wish to enable trusted forwarding if applications have trouble with untrusted forwarding.
You can enable this permanently by using ForwardX11Trusted yes in the ~/.ssh/config file.
You can enable this for a single connection by using the -Y argument in place of the -X argument.
These instructions are for setting up Ubuntu 16.04 LTS with LXDE (I use SSH port forwarding instead of opening port 5901 in the VM instance firewall)
1. Build a new Ubuntu VM instance using the GCP Console
2. connect to your instance using google cloud shell
gcloud compute --project "project_name" ssh --zone "project_zone" "instance_name"
3. install the necessary packages
sudo apt update && sudo apt upgrade
sudo apt-get install xorg lxde vnc4server
4. Set up vncserver (you will be asked to provide a password for the vncserver)
vncserver
echo "lxpanel & /usr/bin/lxsession -s LXDE &" >> ~/.vnc/xstartup
6. Reboot your instance (this returns you to the Google cloud shell prompt)
sudo reboot
7. Use the Google Cloud Shell download-file facility to download the auto-generated private key stored at $HOME/.ssh/google_compute_engine and save it on your local machine*****
cloudshell download-files $HOME/.ssh/google_compute_engine
8. From your local machine SSH to your VM instance (forwarding port 5901) using your private key (downloaded at step 7)
ssh -L 5901:localhost:5901 -i "google_compute_engine" username@instance_external_ip -v -4
9. Run the vncserver in your VM instance
vncserver -geometry 1280x800
10. In your local machine's Remote Desktop Client (e.g. Remmina) set Server to localhost:5901 and Protocol to VNC
Note 1: to check if the vncserver is working ok use:
netstat -na | grep '[:.]5901'
tail -f /home/user_id/.vnc/instance-1:1.log
Note 2: to restart the vncserver use:
sudo vncserver -kill :1 && vncserver
***** When first connected via the Google Cloud Shell, the public and private keys are auto-generated and stored in the Cloud Shell instance at $HOME/.ssh/
ls $HOME/.ssh/
google_compute_engine google_compute_engine.pub google_compute_known_hosts
The public key should be added to /home/user_id/.ssh/authorized_keys in the VM instance (this is done automatically when you first SSH to the VM instance from the Google Cloud Shell, i.e. in step 2).
You can confirm this in the instance metadata.
Chrome Remote Desktop allows you to remotely access applications with a graphical user interface from a local computer or mobile device. For this approach, you don't need to open firewall ports, and you use your Google Account for authentication and authorization.
Check out this Google tutorial to use it with Compute Engine: https://cloud.google.com/solutions/chrome-desktop-remote-on-compute-engine
I'm trying to open an IPython Notebook (which is running on a server) on a MacBook from a remote location through an SSH tunnel, but I get "no data received".
This is the command for the SSH tunnel
ssh -L 5558:localhost:5558 -N -t -x user#remote-host
and this is the command I used to launch the notebook from the server:
ipython notebook --pylab=inline --port=5558 --ip='*' --no-browser --notebook-dir notebooks
Then I tried to open remote-host:5558 in a new tab, but got "no data received".
Thanks in advance!
The directive -L AAAA:somehost:BBBB will cause SSH to listen on port AAAA on localhost (the machine the ssh command is run on) and forward any connection to that port, over the SSH session, to the host somehost port BBBB. So, you need to open http://localhost:5558/ in the browser on the machine you run the ssh command on.
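To make the two port numbers concrete (hypothetical values):

ssh -N -L 8888:localhost:5558 user@remote-host

listens on port 8888 of the machine where the ssh command runs and forwards each connection to port 5558 on remote-host, so the notebook would be opened as http://localhost:8888/ on the laptop. In your command both ports happen to be 5558, which is fine, but the URL must still use localhost, not remote-host.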
Read this: How do I add a kernel on a remote machine in IPython (Jupyter) Notebook?
A remote Jupyter kernel administration utility (rk) is here: https://github.com/korniichuk/rk