Using my own laptop to view TensorBoard for TensorFlow running on my lab's remote server
I ran tensorboard --logdir=./log to try to view the curves of the running results, and I got:
Starting TensorBoard on port 6006
(You can navigate to http://0.0.0.0:6006)
Then I tried to connect to it in the browser, but it failed.
Does anyone know how to configure things so that I can view the remote server's TensorBoard on my own laptop?
If you start the tensorboard server on your lab instance using the command you mentioned, it will be running on the lab server, hosting the tensorboard webpage at labserverIP:6006.
I use a cluster running SLURM (which manages everyone's job submissions) and am able to start the tensorboard server on a cluster node, SSH into the specific node running the tensorboard server, and essentially forward the site from labserverIP:6006 to my laptop at localhost:6006. My script on github here shows the commands I use to do this for SLURM. Essentially it is these three steps (a concrete example follows the list):
1) On the remote server, run tensorboard --logdir=./log --host $SERVER_IP --port $SERVER_PORT
2) SSH from your laptop using ssh uname@login.node.edu -L $LOCAL_PORT:$SERVER_IP:$SERVER_PORT
You can replace uname@login.node.edu with the server's public IP.
3) Go to http://localhost:$LOCAL_PORT in your laptop's browser to access the tensorboard page.
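For example, assuming the node's IP is 10.0.0.5 and you use port 6006 on both ends (hypothetical values):
# on the cluster node
tensorboard --logdir=./log --host 10.0.0.5 --port 6006
# on your laptop, leave this running
ssh uname@login.node.edu -L 6006:10.0.0.5:6006
# then browse to http://localhost:6006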
The other option is to copy all of the log files to your local machine or a shared drive and then start tensorboard on your laptop with the local or shared directory as your logdir.
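A minimal sketch of that option, assuming the logs live in ~/project/log on the server (hypothetical paths):
# pull the event files down to the laptop
rsync -avz uname@labserver:~/project/log/ ./log/
# then run tensorboard locally
tensorboard --logdir=./log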
This is how I forward a port on a remote server to my local home computer:
ssh -NfL 6006:localhost:6006 username@remote_server_address
If you are able to SSH into your lab instance from your laptop using a public IP, regardless of the message shown, you could use http://<publicIP>:6006 to view TensorBoard.
Otherwise, if there is no public IP associated with the lab machine, you could try forwarding port 6006 while SSH-ing into it.
Please refer to the OpenSSH port forwarding manual for details.
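For instance, a plain local forward (username and hostname are placeholders):
ssh -L 6006:localhost:6006 user@lab-machine
# then open http://localhost:6006 in the laptop's browser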
This is how we solved it (Linux SLURM server):
SSH to your server and find its IP in the terminal by running: IP=`hostname -I`
Start the tensorboard server on the host server:
python -m tensorboard.main --logdir=/your/dir --host $IP
Use your browser to surf to http://$IP:6006
You can use the following option:
tensorboard --logdir logs --bind_all
Then, copy and paste the link into your local browser.
I'm trying to connect to a remote kernel in Spyder, however the machine on which it is running is not directly accessible. Rather, to connect to it I must go through a bastion host / jumpbox as follows:
ssh -i ~/.ssh/id_rsa -J me@jumpbox me@remote
which logs me directly into remote, automatically sending the connection through jumpbox.
I have python -m spyder_kernels.console running on remote, where I want to do my computing, but no way to connect to it directly since it's only accessible from jumpbox. I've tried setting up my ssh config with a ProxyJump entry, which works for logging into the machine through ssh on the command line, but it appears that Spyder ignores the config file when setting up the remote kernel connection.
Is there a way to connect to this remote kernel? It appears there's a way to do this with IPython and I know I can do it with Jupyter Notebook, but I'm wondering if I can do this in Spyder.
(Related: Connect spyder to a remote kernel via ssh tunnel)
I don't know if you're still looking for an answer to this, but for future people arriving here, and for my own reference:
Yes, you can. You have to create an ssh-tunnel and connect Spyder to the kernel via localhost. For you that would look something like this:
ssh -L 3336:remote:22 me@jumpbox
22 is the port the ssh server at remote is listening on. This is usually 22, unless the administrator changed it. 3336 is the port at localhost to connect to; you can choose any number you like above 1024 (lower port numbers are privileged).
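Since ProxyJump already works for you on the command line, an equivalent one-step way to open the same tunnel should be (same placeholder names as above):
ssh -J me@jumpbox -L 3336:localhost:22 me@remote
Here localhost:22 is resolved on remote, so your local port 3336 still ends up at remote's SSH port.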
Then proceed as explained in the Spyder docs, i.e., launch the Spyder kernel (in the environment you want) on remote:
python -m spyder_kernels.console
copy the connection file (kernel-pid.json) to your local computer:
scp -oProxyJump=me@jumpbox remote:/path/to/json-file/kernel-pid.json ~/Desktop
Change /path/to/json-file to the path of the connection file (which you can find by running jupyter --runtime-dir on remote, in the same environment the spyder-kernel is running in) and kernel-pid.json to the real file name. ~/Desktop copies it to your Desktop folder; you can change that to wherever you want.
Connect Spyder to the kernel via "Connect to existing kernel", point it to the connection file you just copied, check the This is a remote kernel (via SSH) box, and enter localhost as the Hostname and 3336 as the port (or whichever port you chose).
That should do it.
Note that, as is the case for me, your jumpbox server may drop the ssh connection over which you launched the Spyder kernel, which will cause your kernel to die. So you might want to use
python -m spyder_kernels.console &
to have it run in the background, or launch it in a screen session. However, note that you cannot shut down a remote kernel with exit; it will keep running (see here), so you have to kill it in a different way.
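A sketch of the screen variant, plus one way to kill the kernel afterwards (the process pattern is an assumption; verify with ps first):
# launch the kernel inside a screen session that survives disconnects
screen -S spyder-kernel
python -m spyder_kernels.console
# detach with Ctrl-a d; later, kill the kernel process on remote
pkill -f spyder_kernels.console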
I have a remote computer (A) running a TensorFlow training job. On it, I run TensorBoard locally on port 30080.
I redirect port 30080 to my server B, so on computer A I run this command:
ssh -R 30080:localhost:30080 user@mydomain.net
When I try to reach the page mydomain.net:30080 from my other computer C, there is nothing. Port 30080 is open, because I can use it for other applications.
The only way I found to get tensorboard result on C is :
ssh -L 30080:localhost:30080 user@mydomain.net
And then on my computer C I can go on localhost:30080 to see tensorboard result.
How can I modify the pipeline to see the result on a public page on my server B?
Try adding the option --bind_all when launching TensorBoard.
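That is, on computer A, something like (the log directory is a placeholder):
tensorboard --logdir ./log --port 30080 --bind_all
--bind_all makes TensorBoard listen on all network interfaces instead of only on localhost.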
I have the following setup:
A Windows 10 Pro Laptop ("Win10Laptop") that has a Windows 10 Pro VM ("Win10VM") running on Hyper-V. I have created an nginx container by running the following command on the host machine:
docker run -d -p 80:80 --name webserver nginx
While the container is running I can access http://localhost from Win10Laptop, and this works fine. My question is: what do I need to configure to access nginx from Win10VM? Win10VM has only one network adapter, which is configured to use the "External" vSwitch connected to my Wifi interface.
Let me know if you need any more details. I've tried all sorts and can't figure it out!
Thanks,
Michael
You need to connect to the IP the host has acquired on the External switch. Run ipconfig on the host to see what IP it has, then open http://<host-ip> from inside the VM.
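For example (the address shown is hypothetical):
# on Win10Laptop, note the IPv4 address of the Wifi adapter
ipconfig
# then, inside Win10VM, browse to http://192.168.1.42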
I'd like to create a ssh tunnel from my computer to a remote server to a docker container running Jupyter Notebook (computer>server>Docker container) that allows me to run a Jupyter Notebook in my browser on my computer.
The Docker container is hosted on a machine running OS X (El Capitan). Docker is using the default machine IP: 192.168.99.100.
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default * virtualbox Running tcp://192.168.99.100:2376 v1.11.1
I am able to physically sit at the server running the Docker container and use my browser (192.168.99.100:8888) to create Jupyter Notebooks from that Docker container. This verifies that my Docker port bindings work and that I'm running the Jupyter Notebook correctly.
However, I don't know how to establish a ssh tunnel from a client machine to that remote machine's Docker container and launch a Jupyter Notebook in my browser on the client machine.
The output from:
$ docker ps
produces the following:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
48a8ac126c72 kubu4/bioinformatics:v11 "/bin/bash" 55 minutes ago Up 55 minutes 8787/tcp, 0.0.0.0:8888->8888/tcp stupefied_pasteur
My attempts at creating an ssh tunnel to the remote machine's Docker container result in the following error message in Terminal when I try to launch the Jupyter Notebook in my browser on the client machine (localhost:8888):
channel 3: open failed: connect failed: Connection refused
I'm currently using the following in my .ssh/config file to create the tunnel:
Host tunnel3
HostName remote.ip.address
User user
ControlMaster auto
ServerAliveInterval 30
ServerAliveCountMax 3
LocalForward localhost:8888 localhost:8888
I can use this tunneling configuration to successfully launch Jupyter Notebooks in my client browser if I run the Jupyter Notebook on the remote machine outside of the Docker container that's on the remote machine.
Just for added info, this is the output when I launch the Jupyter Notebook in the remote machine's Docker container:
$ jupyter notebook
[I 18:23:32.951 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
[I 18:23:33.072 NotebookApp] Serving notebooks from local directory: /usr/local/bioinformatics
[I 18:23:33.073 NotebookApp] 0 active kernels
[I 18:23:33.073 NotebookApp] The Jupyter Notebook is running at: http://0.0.0.0:8888/
[I 18:23:33.074 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
I figured it out! The "A-ha!" moment was remembering that the remote machine running Docker was OS X (El Capitan). All my Docker builds/tests had been performed on a Linux (Ubuntu 14.04) machine. The difference, it turns out, is critical to solving this problem.
Docker installs on Ubuntu allow you to use "localhost" to address the Docker container. Docker installs on OSX generate an IP address to use to address the Docker container.
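If you are unsure which address your OS X Docker install generated, docker-machine can report it directly:
$ docker-machine ip default
192.168.99.100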
Realizing this, I changed my ssh tunneling configuration in the .ssh/config file on my client computer.
Old tunneling config:
Host tunnel3
HostName remote.ip.address
User user
ControlMaster auto
ServerAliveInterval 30
ServerAliveCountMax 3
LocalForward localhost:8888 localhost:8888
New tunneling config:
Host tunnel3
HostName remote.ip.address
User user
ControlMaster auto
ServerAliveInterval 30
ServerAliveCountMax 3
LocalForward localhost:8888 192.168.99.100:8888
With this change, I can successfully create/use Jupyter Notebooks in my client browser that are actually hosted in the Docker container on the remote machine, using localhost:8888 in the URL bar.
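With a Host entry like this, opening the tunnel is just:
ssh -N tunnel3
The -N flag keeps the session open for port forwarding only, without starting a remote shell.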
Had the same problem, trying to ssh-tunnel into a google cloud instance, then into a docker container.
Local machine: Ubuntu (14.04)
Cloud Instance: Debian (9-stretch)
Find the IP address Debian assigns to docker (credit):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
This gave me 172.18.0.2 for the first instance running, 172.18.0.3 for the second, ..0.4, ..0.5, etc. (Note: The below didn't work if I was running multiple containers on the same instance. Since I only need to run one container, I'm not going to figure out how to fix it)
ssh into the compute instance
Make sure ports are exposed between your Docker container and Compute instance (I used 8888:8888), then (credit):
gcloud compute ssh {stuff to your instance} -- -L 8888:172.18.0.2:8888
Run jupyter
jupyter-notebook --no-browser --ip=0.0.0.0 --allow-root
Now I can open my local browser to localhost:8888/?token... and use jupyter running in a container on my gcloud instance.
I have used these instructions for Running GUI Apps with Docker to create images that allow me to launch GUI-based applications.
It all works flawlessly when running Docker on the same machine, but it stops working when running it on a remote host.
Locally, I can run
docker run --rm -ti -e DISPLAY -e <X tmp> <image_name> xclock
And I can get xclock running on my host machine.
When connecting remotely to a host with XForwarding, I am able to run X applications that show up on my local X Server, as anyone would expect.
However if in the remote host I try to run the above docker command, it fails to connect to the DISPLAY (usually localhost:10.0)
I think the problem is that the X forwarding is set up on the localhost interface of the remote host.
So the docker host has no way to connect to DISPLAY=localhost:10.0 because that localhost means the remote host, unreachable from docker itself.
Can anyone suggest an elegant way to solve this?
Regards
Alessandro
EDIT1:
One possible way I guess is to use socat to forward the remote /tmp/.X11-unix to the local machine. This way I would not need to use port forwarding.
It also looks like openssh 6.7 will natively support unix socket forwarding.
When running X applications through SSH (ssh -X), you are not using the /tmp/.X11-unix socket to communicate with the X server. You are rather using a tunnel through SSH reached via "localhost:10.0".
In order to get this to work, you need to make sure the SSH server supports X connections to the external address by setting
X11UseLocalhost no
in /etc/ssh/sshd_config.
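After editing sshd_config, restart the SSH daemon so the change takes effect, e.g. (the service name may differ between distributions):
sudo systemctl restart sshd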
Then $DISPLAY inside the container should be set to the IP address of the Docker host computer on the docker0 interface - typically 172.17.0.1. So $DISPLAY will then be 172.17.0.1:10.
You need to add the X authentication token inside the docker container with "xauth add" (see here).
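A sketch of that step, with a made-up cookie value: list the cookie for the forwarded display on the Docker host, then add it inside the container for the display address used above:
# on the Docker host, inside the SSH session
xauth list $DISPLAY
# inside the container (the cookie value here is hypothetical)
xauth add 172.17.0.1:10 MIT-MAGIC-COOKIE-1 4d22408a71a55b41ccd1657862377adb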
If there is any firewall on the Docker host computer, you will have to open up the TCP ports related to this tunnel. Typically you will have to run something like
ufw allow from 172.17.0.0/16 to any port $TCPPORT proto tcp
if you use ufw.
Then it should work. I hope it helps. See also my other answer here https://stackoverflow.com/a/48235281/5744809 for more details.