SSH Jupyter notebook using non-base Conda environment? - ssh

My problem is the following: I want to run a Jupyter notebook on my remote desktop and access it from my laptop elsewhere. I have accomplished this, but I can't use my GPU for TensorFlow because the GPU-supported version is only installed in my custom, non-base environment. Even though all of my installed Jupyter kernels are available, things don't work unless I run 'jupyter notebook' from within the correct activated conda environment (it reports "no GPU" even though I select the kernel from the environment where tensorflow-gpu is installed).
Is there a simple way of running jupyter notebook from within that environment via a batch script? I also need the notebook to run on a secondary drive.
I could of course just start up the server while at home and then access it using the token, but that's a little clumsy.

I've found a solution. On Windows, in %AppData%\Microsoft\Windows\Start Menu\Programs\Anaconda3 (%AppData% already points at the Roaming folder), there are shortcuts for various Anaconda-related programs, including a Jupyter Notebook shortcut for each environment.
The shortcut for Jupyter notebook for my given env is
`E:\Software\Anaconda3\python.exe E:\Software\Anaconda3\cwp.py E:\Software\Anaconda3\envs\tf E:\Software\Anaconda3\envs\tf\python.exe E:\Software\Anaconda3\envs\tf\Scripts\jupyter-notebook-script.py "%USERPROFILE%"`
I modified this to end in `"E:" --no-browser` instead of the %USERPROFILE% part and made that into a script. Now when I SSH into the computer and run this script, the notebook runs within the correct environment and I have access to my GPU, all on the correct drive, E.
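For reference, a minimal sketch of what that batch script can look like, saved under a hypothetical name such as start-jupyter.bat (it reuses the Anaconda install path and env name from the shortcut above; adjust both for your setup):
@echo off
rem Launch Jupyter Notebook from the "tf" env, rooted on drive E:, without opening a browser
E:\Software\Anaconda3\python.exe E:\Software\Anaconda3\cwp.py E:\Software\Anaconda3\envs\tf E:\Software\Anaconda3\envs\tf\python.exe E:\Software\Anaconda3\envs\tf\Scripts\jupyter-notebook-script.py "E:" --no-browser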

Related

'Upload' function not working for Jupyter Notebook in ssh mode on an Ubuntu 18.04 machine

I am new to using Jupyter, but am well versed in R. My new role requires me to use the R kernel inside a Jupyter notebook via SSH to share common data and save space. However, I am unable to upload any files from my local machine for some reason, although the permissions check out. There is no error; the entire computer just hangs the moment I click 'Upload'! Has anybody ever faced this issue?
I am using Jupyter 3.1.18 via SSH on Ubuntu 18.04.
I don't have Jupyter installed on my local machine.

Which one is better for installing TensorFlow?

I followed the instructions on the official website to download TensorFlow. I chose to create a virtual environment, as the instructions show for macOS. My question is: do I need to activate the virtual environment each time before I use TensorFlow?
For example, I want to use TensorFlow in a Jupyter notebook, and that means I need to install Jupyter and other required packages like Seaborn/pandas in the virtual environment as well. However, I already downloaded Anaconda and, basically, it has all the packages I need.
Besides, will it make a difference if I download it with conda?
Well, if you installed the packages (like you said, TensorFlow and Seaborn) in the base Conda environment, which is the default environment that Anaconda provides on installation, then to use them you need to run whatever program/IDE (like JupyterLab) from that environment. So you would open Anaconda Prompt, type in jupyter lab, and it would start a new server where you can work with the Python libraries installed through Conda.
Otherwise, in IDEs like VSCode, you can simply set the Python interpreter to the one from Conda.
However, if you install the libraries and packages you need using pip on your actual Python installation rather than Conda, then there is no need for any activation. Everything will run right out of the box, and you don't need to select the interpreter in IDEs like VSCode.
Bottom line, if you know what libraries you need and don't mind running pip install package-name every time you need a package, stick with pip.
If you don't like that sort of 'low-level' stuff, then use Anaconda or Miniconda.
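As a minimal sketch of the conda route described above (the environment name tf-env is arbitrary, and exact package versions are left to conda to resolve):
# create and activate a dedicated environment (hypothetical name)
conda create -n tf-env python=3.9
conda activate tf-env
# install TensorFlow plus the notebook tooling and extras mentioned in the question
conda install tensorflow jupyterlab seaborn pandas
# launch JupyterLab from inside the activated environment
jupyter lab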

Keep jupyter lab notebook running when SSH is terminated on local machine?

I would like to be able to turn off my local machine while my code keeps running in JupyterLab and come back to it later; however, as soon as the SSH session is terminated, the JupyterLab kernel is stopped. My code also stops executing when I close the JupyterLab browser tab.
From the Google Cloud Platform Marketplace I'm using a 'Deep Learning VM'. I SSH to it through the suggested gcloud command (Cloud SDK): gcloud compute ssh --project projectname --zone zonename vmname -- -L 8080:localhost:8080. This opens a PuTTY connection to the VM, which already has JupyterLab running, and I can access it on localhost.
What can I do to be able to run my code with my local machine off in this case?
I usually use "nohup" when using jupter notebook through ssh!
nohup jupyter notebook --ip=0.0.0.0 --port=xxxx --no-browser &
You can learn more about it here.
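If it helps, once the server is running in the background you can check on it later from a new SSH session, for example like this (nohup writes the server output to nohup.out by default):
# list running servers with their ports and tokens
jupyter notebook list
# follow the server log written by nohup
tail -f nohup.out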
Hope it helps!
You can use Notebook remote execution.
Basically, your Notebook code will run on a remote machine and the results will be stored there, or in GCS, for later viewing.
You have the following options:
nbconvert-based options:
nbconvert: Provides a convenient way to execute the input cells of an .ipynb notebook file and save the results, both input and output cells, as a .ipynb file.
papermill: a Python package for parameterizing and executing Jupyter Notebooks (it uses nbconvert --execute under the hood); a short usage sketch follows this list of options.
notebook executor: This tool can be used to schedule the execution of Jupyter notebooks from anywhere (local, GCE, GCP Notebooks) to run on a Cloud AI Deep Learning VM. You can read more about the usage of this tool here. (Uses the gcloud SDK and papermill under the hood.)
Notebook training tool: A Python package that allows users to run a Jupyter notebook on Google Cloud AI Platform Training Jobs.
AI Platform Notebook Scheduler: This is in Alpha (Beta soon) with AI Platform Notebooks and is the recommended option. It allows you to schedule a Notebook for recurring runs; this follows the exact same sequence of steps but requires a crontab-formatted schedule option.
There are other options which allow you to execute Notebooks remotely:
tensorflow_cloud (Keras for GCP): Provides APIs that allow you to easily go from debugging and training your Keras and TensorFlow code in a local environment to distributed training in the cloud.
GCP runner: Allows running any Jupyter notebook function on Google Cloud Platform. Unlike all the other solutions listed above, it allows you to run training for the whole project, not a single Python file or Jupyter notebook.
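As a rough sketch of the nbconvert/papermill route mentioned above (the notebook names and parameters here are made up for illustration):
# execute a notebook with nbconvert, saving both input and output cells
jupyter nbconvert --to notebook --execute train.ipynb --output train-run.ipynb
# or parameterize and execute it with papermill
pip install papermill
papermill train.ipynb train-run.ipynb -p epochs 10 -p learning_rate 0.001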

How to use a remote machine's GPU in jupyter notebook

I am trying to run tensorflow on a remote machine's GPU through Jupyter notebook. However, if I print the available devices using tf, I only get CPUs. I have never used a GPU before and am relatively new at using conda / jupyter notebook remotely as well, so I am not sure how to set up using the GPU in jupyter notebook.
I am using an environment set up by someone else who already executed the same code on the same GPU, but they did it via python script, not in a jupyter notebook.
This is the only code in the other person's file that had to do with the GPU:
import tensorflow as tf
from tensorflow.keras.backend import set_session  # assumed import; the original script may import set_session from keras.backend instead
# let TensorFlow allocate GPU memory as needed instead of grabbing it all up front
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
set_session(tf.Session(config=config))
I think the problem was that I had tensorflow in my environment instead of tensorflow-gpu. But now I get the message "cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version" and I don't know how to update the driver through the terminal.
How is your environment set up? Specifically, what is your remote environment, and what is your local environment? Sounds like your CUDA drivers are out of date, but it could be more than just that. If you are just getting started, I would recommend finding an environment that requires little to no configuration work on your part, so you can get started more easily/quickly.
For example, you can run GPUs on the cloud and connect to them via a local terminal. You can also have your "local" frontend be Colab by connecting it to a local runtime. (This video explains that particular setup, but there are lots of other options.)
You may also want to try running nvidia-smi on the remote machine to see if the GPUs are visible.
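For instance, from an SSH session on the remote machine you could run something along these lines (the TF 1.x check is assumed here, since the snippet above uses tf.ConfigProto):
# check that the driver sees the GPU at all
nvidia-smi
# check that TensorFlow inside the active environment sees it
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"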
Here is another solution that describes how to set up a GPU JupyterLab instance with Docker.
To update your drivers via terminal, run:
# list the detected devices and the drivers available for them
ubuntu-drivers devices
# install the recommended drivers
sudo ubuntu-drivers autoinstall
# reboot so the new driver is loaded
sudo reboot
Are your CUDA paths set appropriately? Like this?
export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

How to run Jupyter notebook and Tensorboard at the same time inside virtualenv?

I already found a workaround for this inside Docker. But, in my case, I am running TensorFlow inside a virtualenv so that I can use my Jupyter notebook to write code and run it.
But I also need to run TensorBoard. How can I run two web applications inside a virtualenv? I have never run two things at the same time, and I don't know how to run one of them in the background.
I found a simple solution. Just open another shell, activate the virtualenv there, and run Jupyter notebook while TensorBoard is already running in the other shell.
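For example, a minimal sketch of doing it from a single shell instead, with TensorBoard sent to the background (the virtualenv path, log directory, and ports are placeholders):
# activate the virtualenv
source venv/bin/activate
# start TensorBoard in the background on its own port
tensorboard --logdir ./logs --port 6006 &
# then start Jupyter Notebook in the foreground on another port
jupyter notebook --no-browser --port 8888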