How to force Google Colab to use the GPU (using an external package to make sure the GPU is used)?

So I am using Google Colab because I have some functions that take far too long to execute on my CPU. I have set the runtime to the GPU accelerator; however, when I run the cell, I still get this message: 'Warning: You are connected to a GPU runtime, but not utilizing the GPU'.
I understand that this means the code I am running is just using my CPU, and on my CPU the function takes hours to execute. This is why I want to use Colab's GPU; however, even when I change the runtime, it still uses my CPU. How do I specifically force Colab to use the GPU when executing a certain cell/function?
Edit: I have just found out that Colab apparently uses the GPU only when the package being used is specifically written for GPU usage. Is there some sort of external package I can use that forces a function to find and use a GPU before executing?
Edit: (The package I am using for the long calculation is NetworkX, if that makes any difference.)

Check out cuGraph, which lets you do the same graph calculations on the GPU that NetworkX does on the CPU. There is a Medium post on compatibility between cuGraph and NetworkX graphs.
You only need to do a couple of things to get cuGraph working on Google Colab. As the Google Colab demo from this Medium post suggests:
Use pynvml to confirm Colab allocated you a Tesla T4 GPU.
Install most recent Miniconda release compatible with Google Colab's Python install (3.6.7)
Install RAPIDS libraries
Copy the RAPIDS .so files into the current working directory, as a workaround for conda/Colab interactions
Update env variables so Python can find and use RAPIDS artifacts
!wget -nc https://github.com/rapidsai/notebooks-extended/raw/master/utils/rapids-colab.sh
!bash rapids-colab.sh
import sys, os
# Make the conda-installed RAPIDS packages importable from Colab's Python
sys.path.append('/usr/local/lib/python3.6/site-packages/')
# Point Numba at the CUDA NVVM libraries so it can compile GPU kernels
os.environ['NUMBAPRO_NVVM'] = '/usr/local/cuda/nvvm/lib64/libnvvm.so'
os.environ['NUMBAPRO_LIBDEVICE'] = '/usr/local/cuda/nvvm/libdevice/'
And then you can do the same calculations on the GPU:
pagerank = cugraph.pagerank(G)
instead of
pagerank = nx.pagerank(G)
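If your graph already lives in NetworkX, the edge list has to be moved into GPU memory first. A minimal sketch, assuming cuGraph and cuDF are installed as above (the karate-club graph just stands in for your own graph):
import cudf
import cugraph
import networkx as nx

nx_graph = nx.karate_club_graph()            # stand-in for your own NetworkX graph
edges = nx.to_pandas_edgelist(nx_graph)      # DataFrame with 'source'/'target' columns
gdf = cudf.from_pandas(edges)                # copy the edge list into GPU memory

G = cugraph.Graph()
G.from_cudf_edgelist(gdf, source='source', destination='target')

pagerank = cugraph.pagerank(G)               # cuDF DataFrame with 'vertex' and 'pagerank' columns
print(pagerank.sort_values('pagerank', ascending=False).head())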

Related

Is it possible to run .ipynb notebooks locally using GPU acceleration? How?

Every time I need to train a 'large' deep learning model I do it from Google Colab, as it allows you to use GPU acceleration.
My PC has a dedicated GPU, and I was wondering if it is possible to use it to run my notebooks locally at a reasonable speed. Is it possible to train models using my PC's GPU? If so, how?
I am open to working with DataSpell, VS Code, or any other IDE.
Nicholas Renotte has a great 'Getting Started' video that goes through the entire process of setting up GPU-accelerated notebooks on your PC. The part you're interested in starts around the 12-minute mark.
Yes, it is possible to run .ipynb notebooks locally using GPU acceleration. To do so, you will need to install the necessary libraries and frameworks such as TensorFlow, PyTorch, or Keras. Depending on the IDE you choose, you will need to install the relevant plugins and packages for GPU acceleration.
In terms of IDEs, DataSpell, VSCode, PyCharm, and Jupyter Notebook are all suitable for running notebooks locally with GPU acceleration.
Once the necessary libraries and frameworks are installed, you will also need the appropriate drivers for your GPU (for NVIDIA cards, the CUDA toolkit and cuDNN) and to configure the environment for GPU acceleration.
Finally, depending on the framework, you may need a line or two in the notebook itself to select the GPU device or to specify how many GPUs to use. Once all of that is in place, you will be able to run the notebook locally with GPU acceleration.
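A quick way to confirm the setup actually worked is to ask the framework itself from inside the notebook. A minimal sketch, assuming TensorFlow and/or PyTorch are the frameworks you installed:
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))   # non-empty list means TensorFlow can see the GPU

import torch
print(torch.cuda.is_available())                # True means PyTorch has a working CUDA build and driver
If both checks come back empty/False, the problem is usually the driver or a CPU-only build of the framework rather than the IDE.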

How can I run ImageAI on my GPU and not my CPU

So I am quite new to this; I was trying to find answers on Google, but without much luck. I am trying to run the ImageAI library.
I am able to run it normally on the CPU (at least I think it runs on the CPU) by just calling python test.py. Am I correct there?
But since the model prediction takes a long time, I would like to run it on my GPU. What I tried to do was create a conda environment and activate it, but after I do, I get this error:
ModuleNotFoundError: No module named 'imageai.Classification'
This happens even though I have imageai installed in my environment, as this command shows:
pip freeze | findstr imageai
imageai==2.1.5
What am I doing wrong here?
I found the solution: it doesn't require the conda environment. ImageAI automatically runs on the GPU if one is available; all you need to do is ensure you have the GPU version of TensorFlow installed.
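A minimal sketch for checking that the TensorFlow build ImageAI sits on top of is actually the GPU one (the check itself is plain TensorFlow, nothing ImageAI-specific):
import tensorflow as tf
print(tf.test.is_built_with_cuda())             # False means a CPU-only TensorFlow build is installed
print(tf.config.list_physical_devices('GPU'))   # empty list means no usable GPU/driver was found
If the GPU build is installed and the device shows up, ImageAI's predictions will run on it without any further configuration.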

More than one GPU on vast.ai

Does anyone with experience using vast.ai for cloud GPU computing know whether, when renting more than one GPU, you need to do some setup to take advantage of the extra GPUs?
I ask because I can't notice any difference in speed when renting 6 or 8 GPUs instead of just one. I'm new to using vast.ai for cloud GPU computing.
I am using this default Docker image:
Official docker images for deep learning framework TensorFlow (http://www.tensorflow.org)
Successfully loaded tensorflow/tensorflow:nightly-gpu-py3
And just installing keras afterwards:
pip install keras
I have also checked the available GPUs using the following, and all the GPUs are detected correctly:
from keras import backend as K
K.tensorflow_backend._get_available_gpus()
cheers
Solution:
Finally, I found the solution myself: I just used another Docker image with an older version of TensorFlow (2.0.0), and the error disappeared.
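Note that, independent of the Docker image, a plain Keras fit() call does not spread work across GPUs by itself; you normally have to opt in to a distribution strategy. A minimal sketch using tf.distribute.MirroredStrategy (tf.keras / TF 2.x API, so not identical to the standalone-Keras setup above), which replicates the model across all visible GPUs and splits each batch between them:
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()          # uses every GPU the container can see
print('Replicas in sync:', strategy.num_replicas_in_sync)

with strategy.scope():
    # Variables created inside the scope are mirrored onto each GPU.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# model.fit(x_train, y_train, batch_size=256) now trains on all replicas;
# a larger global batch size is usually needed to see a speedup.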

Running Tensorboard without CUDA support

Is it possible to run Tensorboard on a machine without CUDA support?
I'm working at a computation center (via ssh) which has two major clusters:
CPU-Cluster which is a general workhorse without CUDA support (no dedicated GPU)
GPU-Cluster with dedicated GPUs e.g. for running neural networks with tensorflow-gpu.
Access to the GPU cluster is limited to training jobs and the like, so I can't afford to run TensorBoard on a machine with CUDA support. Instead, I'd like to run TensorBoard on the CPU cluster.
With the TensorBoard bundled with TF, I get import errors due to missing CUDA support.
It seems reasonable that the official TensorBoard should have a CPU-only mode. Is this true?
I've also found an unofficial standalone TensorBoard version (github.com/dmlc/tensorboard); does this work without CUDA support?
Solved my problem: just install tensorflow instead of tensorflow-gpu.
This didn't work for me for a while because my virtual environment (conda) hadn't properly removed tensorflow-gpu.
TensorBoard is not limited by whether a machine has a GPU or not.
As far as I know, what TensorBoard does is parse event (protobuf) files and display them on the web. There is no heavy computation involved, so it doesn't need a GPU.
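In practice you don't even need TensorFlow for this: TensorBoard ships as its own pip package with no CUDA dependency, so a sketch of the CPU-cluster setup could be as simple as:
pip install tensorboard
# point it at the event files the GPU jobs write to shared storage
tensorboard --logdir /path/to/logs --port 6006
(The log directory and port are placeholders; use whatever your training jobs and SSH tunnelling setup require.)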

How to develop for TensorFlow with GPU support without a GPU

I have previously asked whether it is possible to run TensorFlow with GPU support on a CPU. I was told that it is possible, and was shown the basic code to switch which device I want to use, but not how to get the initial code working on a computer that doesn't have a GPU at all. For example, I would like to train on a computer that has an NVIDIA GPU but program on a laptop that only has a CPU. How would I go about doing this? I have tried just writing the code as normal, but it crashes before I can even switch which device I want to use. I am using Python on Linux.
This thread might be helpful: Tensorflow: ImportError: libcusolver.so.8.0: cannot open shared object file: No such file or directory
I've tried importing tensorflow with tensorflow-gpu loaded on my university's HPC login node, which does not have GPUs, and it works well. I don't have an NVIDIA GPU in my laptop, so I never went through the installation process, but I think the cause is that it cannot find the relevant CUDA/cuDNN libraries.
But why don't you just use the CPU version? As @Finbarr Timbers mentioned, you can still run the model later on a computer with a GPU.
What errors are you getting? It is very possible to train on a GPU but develop on a CPU; many people do it, including myself. In fact, TensorFlow will automatically place your code on a GPU if one is available.
If you add the following code to your model, you can see which devices are being used:
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
The logged device placements should change when you run your model on a computer with a GPU.
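If you also want to control placement explicitly while developing on the CPU-only laptop, here is a minimal sketch in the same TF 1.x style as the snippet above (swap '/cpu:0' for '/gpu:0' on the machine that has the GPU):
import tensorflow as tf

# Pin these ops to the CPU; this runs anywhere because every machine has a CPU device.
with tf.device('/cpu:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
    c = tf.matmul(a, b)

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))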