How to use local Coral USB TPU with Google Colab (instead of Cloud TPU)

I have a USB TPU and would like to use it as LOCAL RUNTIME in Google Colab.
I was not able to find any resources on this topic.
You can use a local runtime (local Jupyter), as explained here:
https://research.google.com/colaboratory/local-runtimes.html
Do I need to install all the Edge TPU libraries in my local Jupyter environment and then connect to it as a local runtime in order to start using my USB TPU from Colab?

I'm not familiar with Google Colab, but it looks like it allows you to run your model on your own hardware. You'll then need to locate your model in order to run inference with it. The ways you can run it are all listed here:
https://coral.withgoogle.com/docs/edgetpu/inference/
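For reference, a minimal classification sketch using the current PyCoral library might look roughly like this (the model, image, and top_k values are placeholders, and the docs linked above may describe the older edgetpu Python API instead):

from PIL import Image
from pycoral.adapters import classify, common
from pycoral.utils.edgetpu import make_interpreter

# Load a model that has been compiled for the Edge TPU (placeholder filename).
interpreter = make_interpreter("model_edgetpu.tflite")
interpreter.allocate_tensors()

# Resize the input image to the size the model expects and copy it to the input tensor.
image = Image.open("image.jpg").resize(common.input_size(interpreter), Image.LANCZOS)
common.set_input(interpreter, image)

interpreter.invoke()
for c in classify.get_classes(interpreter, top_k=3):
    print(c.id, c.score)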

Related

How to connect to local runtime in Google Colab for this specific notebook WhisperWithVAD

I wish someone could help me connect a local runtime to this specific notebook on Google Colab, at this link:
https://colab.research.google.com/github/ANonEntity/WhisperWithVAD/blob/main/WhisperWithVAD.ipynb
Basically, it is a modified version of OpenAI's Whisper for speech-to-text, which I use on my videos to learn languages; the notebook makes use of GPU acceleration.
I have been using the free version, but my usage has been restricted because running this model on Google's hosted runtime is not unlimited. Since I am not really a dev, I have difficulties with the setup. It would also help if you could give me clear instructions.
The corresponding error in Colab: Unable to connect to runtime.
You can follow these instructions to install Jupyter on your local machine and then connect the WhisperWithVAD notebook to your local runtime: https://research.google.com/colaboratory/local-runtimes.html
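If it helps, those instructions boil down to roughly the following commands on your local machine (a sketch; check the linked page in case the steps or flags have changed):

pip install jupyter jupyter_http_over_ws
jupyter serverextension enable --py jupyter_http_over_ws

# Start Jupyter so that Colab is allowed to connect, then paste the printed
# backend URL (including its token) into Colab's "Connect to a local runtime" dialog.
jupyter notebook \
  --NotebookApp.allow_origin='https://colab.research.google.com' \
  --port=8888 --NotebookApp.port_retries=0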
I was able to get this up and running, but had to comment out some code in the Run Whisper block where the notebook was getting hung up:
#from google.colab import files (near the top of the block)
and
#files.download(out_path) (at the bottom of the block)
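An alternative to commenting those lines out (just a sketch, assuming out_path is the variable the block already defines) is to guard them so the same notebook runs on both hosted and local runtimes:

try:
    from google.colab import files  # only available on a hosted Colab runtime
except ImportError:
    files = None

# ... the rest of the Run Whisper block stays as it is ...

if files is not None:
    files.download(out_path)
else:
    print("Local runtime: output saved to", out_path)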

Is it possible to run .ipynb notebooks locally using GPU acceleration? How?

Every time I need to train a 'large' deep learning model I do it from Google Colab, as it allows you to use GPU acceleration.
My PC has a dedicated GPU, and I was wondering if it is possible to use it to run my notebooks locally at a reasonable speed. Is it possible to train models using my PC's GPU? If so, how?
I am open to working with DataSpell, VSCode or any other IDE.
Nicholas Renotte has a great 'Getting Started' video that goes through the entire process of setting up GPU-accelerated notebooks on your PC. The part you're interested in starts around the 12-minute mark.
Yes, it is possible to run .ipynb notebooks locally using GPU acceleration. To do so, you will need to install the necessary libraries and frameworks such as TensorFlow, PyTorch, or Keras. Depending on the IDE you choose, you will need to install the relevant plugins and packages for GPU acceleration.
In terms of IDEs, DataSpell, VSCode, PyCharm, and Jupyter Notebook are all suitable for running notebooks locally with GPU acceleration.
Once the necessary libraries and frameworks are installed, you will then need to install the appropriate drivers for your GPU and configure the environment for GPU acceleration.
Finally, you may need to adapt the notebook itself: most frameworks detect an available GPU automatically once the drivers are in place, but some (PyTorch, for example) still require you to move the model and data onto the GPU device explicitly, and multi-GPU training requires specifying which devices to use. Once all the necessary steps have been taken, you will be able to run the notebook locally with GPU acceleration.
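As a quick sanity check (a sketch assuming TensorFlow and PyTorch were installed with CUDA support), you can verify from inside a local notebook that the GPU is actually visible:

import tensorflow as tf
import torch

print("TensorFlow sees:", tf.config.list_physical_devices("GPU"))
print("PyTorch CUDA available:", torch.cuda.is_available())

# In PyTorch you still move the model and batches to the GPU explicitly:
device = "cuda" if torch.cuda.is_available() else "cpu"
# model = model.to(device)   # hypothetical model defined elsewhere
# batch = batch.to(device)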

How to force Google Colab to utilise the GPU (using an external package to make sure the GPU is used)?

So I am using Google Colab because I have some functions I need to execute that take far too long on my CPU. I have set the runtime to the GPU accelerator; however, when I run the cell, I still get this message: 'Warning: You are connected to a GPU runtime, but not utilizing the GPU'.
I understand that this means the code I am running is just using the CPU, and on my CPU the function takes hours to execute. This is why I want to utilise Colab's GPU; however, even when I change the runtime, it still uses the CPU. How do I specifically force Colab to utilise the GPU for executing a certain cell/function?
Edit: I have just found out that apparently Colab uses the GPU only when the package being used is specifically written for GPU usage. Is there some sort of external package I can use that forces a function to run on the GPU?
Edit: (The package I am using for the long calculation is NetworkX, if that makes any difference.)
Check out cuGraph, which lets you do the same graph calculations on the GPU that NetworkX does on the CPU. There is a Medium post on compatibility between cuGraph and NetworkX graphs.
You only need to do a couple of things to get cuGraph working on Google Colab. As the Google Colab demo from this Medium post suggests:
Use pynvml to confirm Colab allocated you a Tesla T4 GPU.
Install most recent Miniconda release compatible with Google Colab's Python install (3.6.7)
Install RAPIDS libraries
Copy the RAPIDS .so files into the current working directory, a workaround for conda/Colab interactions
Update environment variables so Python can find and use the RAPIDS artifacts
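For the first step, a quick check can look like this (a sketch using pynvml, which may need to be pip-installed first):

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
print(pynvml.nvmlDeviceGetName(handle))  # should report a Tesla T4 on a compatible runtime
pynvml.nvmlShutdown()

The remaining steps are handled by the setup script and environment tweaks below: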
!wget -nc https://github.com/rapidsai/notebooks-extended/raw/master/utils/rapids-colab.sh
!bash rapids-colab.sh
import sys, os
sys.path.append('/usr/local/lib/python3.6/site-packages/')
os.environ['NUMBAPRO_NVVM'] = '/usr/local/cuda/nvvm/lib64/libnvvm.so'
os.environ['NUMBAPRO_LIBDEVICE'] = '/usr/local/cuda/nvvm/libdevice/'
And then you can do the same calculations on the GPU:
pagerank = cugraph.pagerank(G)
instead of
pagerank = nx.pagerank(G)
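If your graph already lives in NetworkX, a minimal sketch of moving it to cuGraph for PageRank (assuming the RAPIDS install above succeeded) looks like this:

import networkx as nx
import cudf
import cugraph

# Small example graph; replace with your own NetworkX graph.
nx_graph = nx.karate_club_graph()
src, dst = zip(*nx_graph.edges())
edges = cudf.DataFrame({"src": list(src), "dst": list(dst)})

G = cugraph.Graph()
G.from_cudf_edgelist(edges, source="src", destination="dst")

pagerank_df = cugraph.pagerank(G)  # runs on the GPU, returns a cuDF DataFrame
print(pagerank_df.sort_values("pagerank", ascending=False).head())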

Keep a JupyterLab notebook running when SSH is terminated on the local machine?

I would like to be able to turn off my local machine while my code keeps running in JupyterLab, and come back to it later; however, as soon as the SSH connection is terminated, the JupyterLab kernel stops. My code also stops executing when I close the JupyterLab browser tab.
From the Google Cloud Platform Marketplace I'm using a 'Deep Learning VM'. From there, I SSH into it with the suggested gcloud command (Cloud SDK): gcloud compute ssh --project projectname --zone zonename vmname -- -L 8080:localhost:8080. This opens a PuTTY connection to the VM, which automatically has JupyterLab running, and I can then access it on localhost.
What can I do to keep my code running in this setup while my local machine is off?
I usually use "nohup" when running Jupyter Notebook through SSH:
:~$ nohup jupyter notebook --ip=0.0.0.0 --port=xxxx --no-browser &
You can learn more about it here.
Hope it helps!
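Because nohup detaches the server from the SSH session, its output goes to nohup.out by default, so after reconnecting you can check on it, for example:

tail -f nohup.out        # follow the notebook server log
ps aux | grep jupyter    # confirm the server is still running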
You can use Notebook remote execution.
Basically, your notebook code will run on a remote machine and the results will be stored there or in GCS for later viewing.
You have the following options:
nbconvert-based options:
nbconvert: Provides a convenient way to execute the input cells of an .ipynb notebook file and save the results, both input and output cells, as a .ipynb file.
papermill: A Python package for parameterizing and executing Jupyter Notebooks. (Uses nbconvert --execute under the hood; a short usage sketch appears after the list of options below.)
notebook executor: A tool that can be used to schedule the execution of Jupyter notebooks from anywhere (local, GCE, GCP Notebooks) to the Cloud AI Deep Learning VM. You can read more about the usage of this tool here. (Uses the gcloud SDK and papermill under the hood.)
Notebook training tool
A Python package that allows users to run a Jupyter notebook at Google Cloud AI Platform Training Jobs.
AI Platform Notebook Scheduler
This is in Alpha (Beta soon) with AI Platform Notebooks and is the recommended option. It allows you to schedule a Notebook for recurring runs; it follows the exact same sequence of steps but requires a crontab-formatted schedule option.
There are other options which allow you to execute Notebooks remotely:
tensorflow_cloud (Keras for GCP): Provides APIs that allow you to easily go from debugging and training your Keras and TensorFlow code in a local environment to distributed training in the cloud.
GCP runner: Allows running any Jupyter notebook function on Google Cloud Platform.
Unlike all the other solutions listed above, it allows you to run training for the whole project, not just a single Python file or Jupyter notebook.
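As a concrete illustration of the papermill option mentioned above (a sketch; the notebook names, parameter, and GCS bucket are placeholders), executing a notebook remotely and keeping the result can be as simple as:

import papermill as pm

pm.execute_notebook(
    "train.ipynb",                               # input notebook
    "gs://my-bucket/runs/train-output.ipynb",    # executed copy stored in GCS (needs gcsfs)
    parameters={"epochs": 10},                   # injected into the notebook's parameters cell
)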

GUI is not possible on Google Colab

I understand that GUIs (such as those powered by tkinter) do not work on Google Colab. Are there any alternatives at this point?
Error message
TclError: no display name and no $DISPLAY environment variable (in Google Colab)
To use these notebooks you need to install the binary MoebInv libraries and their dependencies.
In short, you simply need to execute the next cell in Colab or on your Ubuntu 18.04 desktop.
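A commonly suggested workaround (a sketch, not part of the answer above) is to give the code a virtual display with Xvfb/pyvirtualdisplay, so libraries that need $DISPLAY can run even though the windows are never actually shown:

# Assumes: !apt-get install -y xvfb  and  !pip install pyvirtualdisplay  were run first.
import os
from pyvirtualdisplay import Display

display = Display(visible=0, size=(1024, 768))
display.start()
print("DISPLAY is now", os.environ["DISPLAY"])

import tkinter as tk
root = tk.Tk()    # no longer raises TclError
root.update()
root.destroy()
display.stop()

For genuinely interactive controls inside Colab, ipywidgets (sliders, buttons, etc.) is the usual alternative to a desktop GUI toolkit.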