Keep jupyter lab notebook running when SSH is terminated on local machine? - ssh

I would like to be able to turn off my local machine while my code keeps running in JupyterLab, and come back to it later. However, as soon as the SSH session is terminated, the JupyterLab kernel is stopped. My code also stops executing when I close the JupyterLab browser tab.
From the Google Cloud Platform Marketplace I'm using a 'Deep Learning VM'. I SSH to it through the suggested gcloud command (Cloud SDK): gcloud compute ssh --project projectname --zone zonename vmname -- -L 8080:localhost:8080. This opens a PuTTY connection to the VM, which already has JupyterLab running, and I can then access it on localhost.
What can I do to keep my code running while my local machine is off?

I usually use "nohup" when running Jupyter Notebook through SSH!
:~$ nohup jupyter notebook --ip=0.0.0.0 --port=xxxx --no-browser &
You can read more about it here.
Hope it helps!
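To illustrate, here is a minimal sketch of the nohup pattern; 'sleep 5' stands in for the actual jupyter command so the sketch runs anywhere, and the port/log names are placeholders:

```shell
# Launch the server immune to hangups, detached from the terminal.
# Replace 'sleep 5' with the real command, e.g.
#   jupyter notebook --ip=0.0.0.0 --port=8888 --no-browser
nohup sleep 5 > jupyter.log 2>&1 &
pid=$!
echo "server started with PID $pid"
# After reconnecting over SSH, check it survived the disconnect:
kill -0 "$pid" && echo "still running"
# Follow the server log with: tail -f jupyter.log
```

The `&` puts the process in the background and `nohup` stops the hangup signal from killing it when the SSH session ends; the redirect keeps the server's output in a log file you can inspect later.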

You can use Notebook remote execution.
Basically, your notebook code will run on a remote machine and the results will be stored there, or in GCS for later viewing.
You have the following options:
nbconvert-based options:
nbconvert: provides a convenient way to execute the input cells of an .ipynb notebook file and save the results, both input and output cells, as a .ipynb file.
papermill: a Python package for parameterizing and executing Jupyter notebooks. (Uses nbconvert --execute under the hood.)
notebook executor: a tool that can be used to schedule the execution of Jupyter notebooks from anywhere (local, GCE, GCP Notebooks) on a Cloud AI Deep Learning VM. You can read more about the usage of this tool here. (Uses the gcloud SDK and papermill under the hood.)
Notebook training tool
A Python package that allows users to run a Jupyter notebook on Google Cloud AI Platform Training Jobs.
AI Platform Notebook Scheduler
This is in Alpha (Beta soon) with AI Platform Notebooks and is the recommended option. Scheduling a notebook for recurring runs follows the exact same sequence of steps, but requires a crontab-formatted schedule option.
There are other options which allow you to execute Notebooks remotely:
tensorflow_cloud (Keras for GCP): provides APIs that allow you to easily go from debugging and training your Keras and TensorFlow code in a local environment to distributed training in the cloud.
GCP runner: allows running any Jupyter notebook function on Google Cloud Platform.
Unlike all the other solutions listed above, it can run training for the whole project, not just a single Python file or Jupyter notebook.
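For the nbconvert-based options above, a rough command-line sketch looks like this; 'input.ipynb', 'output.ipynb', and the 'learning_rate' parameter are placeholders, and the commands are guarded so the sketch only runs where the tools and files exist:

```shell
# nbconvert: execute all cells headlessly and save inputs + outputs
# to a new notebook file.
if command -v jupyter >/dev/null 2>&1 && [ -f input.ipynb ]; then
  jupyter nbconvert --to notebook --execute input.ipynb --output output.ipynb
fi
# papermill: same idea, but inject parameters into the notebook's
# "parameters"-tagged cell before executing.
if command -v papermill >/dev/null 2>&1 && [ -f input.ipynb ]; then
  papermill input.ipynb output.ipynb -p learning_rate 0.01
fi
```

Either command can be run under nohup or from a scheduler, which is what makes these options suitable for running with your local machine off.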

Related

How to connect to local runtime in Google Colab for this specific notebook WhisperWithVAD

I would like help connecting a local runtime to this specific notebook on Google Colab at this link:
https://colab.research.google.com/github/ANonEntity/WhisperWithVAD/blob/main/WhisperWithVAD.ipynb
Basically, it is a modified version of OpenAI's Whisper for speech-to-text, which I use on my videos to learn languages; this model makes use of GPU acceleration.
I have been using the free version, but it has been restricted because use of this model through Google's hosted runtimes is not unlimited. Since I am not really a dev, I have difficulties with the setup. It would also help if you could give me clear instructions.
The corresponding error in Colab is: "Unable to connect to runtime".
You can follow these instructions to install Jupyter on your local machine and then connect the WhisperWithVAD notebook to your local runtime: https://research.google.com/colaboratory/local-runtimes.html
I was able to get this up and running, but had to comment out some code in the Run Whisper block where the notebook was getting hung up:
#from google.colab import files (near the top of the block)
and
#files.download(out_path) (at the bottom of the block)

SSH Jupyter notebook using non-base Conda environment?

My problem is the following: I want to run a Jupyter notebook on my remote desktop and access it from my laptop elsewhere. I have accomplished this, but I can't use my GPU for TensorFlow because the GPU-supported version is only installed in my custom, non-base environment. Even though all of my installed Jupyter kernels are available, things don't work right unless I run 'jupyter notebook' from within the correct activated conda environment (it says "no GPU" even though I select the kernel where tensorflow-gpu is installed).
Is there a simple way of running jupyter notebook from within that environment via a batch script? I also need it to run the notebook on a secondary drive.
I could of course just start up the server while at home and then access it using the token, but that's a little clumsy.
I've found a solution. On windows, in %AppData%\Roaming\Microsoft\Windows\Start Menu\Programs\Anaconda3, there are shortcuts for various Anaconda-related programs, including Jupyter notebook for each environment.
The shortcut for Jupyter notebook for my given env is
E:\Software\Anaconda3\python.exe E:\Software\Anaconda3\cwp.py E:\Software\Anaconda3\envs\tf E:\Software\Anaconda3\envs\tf\python.exe E:\Software\Anaconda3\envs\tf\Scripts\jupyter-notebook-script.py "%USERPROFILE%"
I modified this to end in '"E:" --no-browser' instead of the userprofile bit and made that into a script. Now when I SSH into the computer and run this script, the notebook is within the correct environment and I have access to my GPU, all on the correct drive, E.
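As a sanity check for this kind of environment mix-up, you can print which interpreter the active session is actually using; the path should point into the env (e.g. ...\envs\tf\python.exe), not the base install. (On Windows the command is python rather than python3; the same one-liner also works inside a notebook cell.)

```shell
# Print the interpreter path of whatever Python is active in this shell.
python3 -c "import sys; print(sys.executable)"
```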

'Upload' function not working for Jupyter Notebook in SSH mode on an Ubuntu 18.04 machine

I am new to using Jupyter but am well versed in R. My new role requires me to use the R kernel inside a Jupyter notebook via SSH, to share common data and save space. However, I am unable to upload any files from my local machine for some reason, although the permissions check out. There is no error; the entire computer just hangs the moment I click 'Upload'! Has anybody ever faced this issue?
I am using Jupyter 3.1.18 via ssh on Ubuntu 18.04.
I don't have jupyter installed on my local machine.

Tensorflow 2.0 beta GPU running in jupyter notebook, but not in google colab

I am working with TensorFlow 2.0 beta, and while I managed to get my GPU working in Anaconda through a few YouTube tutorials, I am unable to get my GPU running in Google Colab. I know Google has the option to enable a GPU on one of their servers, but my GTX 1070 is much faster, and I need to run off Colab and not just Jupyter exclusively.
So I read the documentation like a good boy, and the only thing I think I could have done wrong is my path settings; I have screenshots below.
I followed several different YouTube tutorials faithfully until the final one here gave me a way to install it for Jupyter. Which is great, but I also need it to run on Google Colab as well.
I've been trying this since Friday and it's now Tuesday, and I'm losing my mind over this. Help me Stack Overflow, you're my only hope.
https://imgur.com/a/8WibGWT
If you can get it running on your own Jupyter server then you can point colab to that local server.
Full instructions here: https://research.google.com/colaboratory/local-runtimes.html but edited highlights are:
install jupyter_http_over_ws:
pip install jupyter_http_over_ws
jupyter serverextension enable --py jupyter_http_over_ws
start your local server allowing colab domain:
jupyter notebook \
--NotebookApp.allow_origin='https://colab.research.google.com' \
--port=8888 \
--NotebookApp.port_retries=0
Click 'Connect to local runtime' in Colab.

How to use a remote machine's GPU in jupyter notebook

I am trying to run tensorflow on a remote machine's GPU through Jupyter notebook. However, if I print the available devices using tf, I only get CPUs. I have never used a GPU before and am relatively new at using conda / jupyter notebook remotely as well, so I am not sure how to set up using the GPU in jupyter notebook.
I am using an environment set up by someone else who already executed the same code on the same GPU, but they did it via python script, not in a jupyter notebook.
This is the only code in the other person's file that had to do with the GPU:
import tensorflow as tf  # imports needed for this snippet to run
from keras.backend.tensorflow_backend import set_session

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory as needed, not all at once
set_session(tf.Session(config=config))
I think the problem was that I had tensorflow in my environment instead of tensorflow-gpu. But now I get the message "cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version" and I don't know how to update the driver through the terminal.
How is your environment set up? Specifically, what is your remote environment, and what is your local environment? Sounds like your CUDA drivers are out of date, but it could be more than just that. If you are just getting started, I would recommend finding an environment that requires little to no configuration work on your part, so you can get started more easily/quickly.
For example, you can run GPUs on the cloud and connect to them via a local terminal. You can also have your "local" frontend be Colab by connecting it to a local runtime. (This video explains that particular setup, but there are lots of other options.)
You may also want to try running nvidia-smi on the remote machine to see if the GPUs are visible.
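A guarded version of that check, which also tells you when the driver is missing entirely (run this on the remote machine):

```shell
# If the NVIDIA driver is installed, nvidia-smi lists the visible GPUs;
# if the command itself is missing, the driver isn't set up at all.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi
else
  echo "nvidia-smi not found: NVIDIA driver missing or not on PATH"
fi
```

If nvidia-smi sees the GPU but TensorFlow does not, the mismatch is usually in the environment (tensorflow vs tensorflow-gpu) or the CUDA library paths rather than the driver.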
Here is another solution that describes how to set up a GPU JupyterLab instance with Docker.
To update your drivers via terminal, run:
ubuntu-drivers devices
sudo ubuntu-drivers autoinstall
sudo reboot
Are your CUDA paths set appropriately? Like this?
export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
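After exporting, you can verify the entries actually landed in the environment; this sketch assumes the default /usr/local/cuda symlink, so adjust the directory for your CUDA version:

```shell
# Add the CUDA toolkit to the lookup paths (idempotent pattern).
export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
# Confirm the bin directory is now on PATH:
case ":$PATH:" in
  *:/usr/local/cuda/bin:*) echo "CUDA bin dir on PATH" ;;
  *) echo "CUDA bin dir missing from PATH" ;;
esac
```

Note these exports only affect the current shell; to make them permanent, put them in ~/.bashrc, and make sure the Jupyter server is started from a shell that has them set.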