How can I run ImageAI on my GPU and not my CPU - tensorflow

So I am quite new to this, and trying to find answers on Google has not worked so far. I am trying to run the ImageAI library.
I am able to run it normally on the CPU, or at least I think it runs on the CPU, by just calling python test.py. Am I correct there?
But since model prediction takes a long time, I would like to run it on my GPU. What I tried was creating a conda environment and activating it, but after I do, I get this error:
ModuleNotFoundError: No module named 'imageai.Classification'
Although I have imageai installed in my environment:
pip freeze | findstr imageai
imageai==2.1.5
As you can see from the command's output, the package is installed. What am I doing wrong here?

I found the solution: it doesn't require a conda environment at all. ImageAI automatically runs on the GPU if one is available. All you need to do is ensure you have the GPU version of TensorFlow installed.
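To verify this, you can check whether TensorFlow sees the GPU before running ImageAI (a minimal sketch; nothing ImageAI-specific is assumed):

import tensorflow as tf
# An empty list here means TensorFlow was installed without GPU support
print(tf.config.list_physical_devices('GPU'))

If the list is empty, install the GPU build of TensorFlow into the same environment that ImageAI uses.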

Related

How to force Google Colab to utilise the GPU (using an external package to make sure the GPU is used)?

So I am using Google Colab because I have some functions I need to execute that take far too long on my CPU. I have set the runtime to the GPU accelerator; however, when I run the cell, I still get this message: 'Warning: You are connected to a GPU runtime, but not utilizing the GPU'.
I understand that this means the code I am running is just using my CPU. However, using my CPU, the function takes hours to execute. This is why I want to utilise Colab's GPU, but even when I change the runtime, it still uses my CPU. How do I specifically force Colab to utilise the GPU for executing a certain cell/function?
Edit: I have just found out that Colab apparently uses the GPU only when the package being used is specifically made for GPU usage. Is there some sort of external package I can use that forces a function to find a GPU to use before executing the function?
Edit: the package I am using for the long calculation is NetworkX, if that makes any difference.
Check out cuGraph, which lets you do the same graph calculations on the GPU as NetworkX does on the CPU. There is a Medium post on compatibility between cuGraph and NetworkX graphs.
You only need to do a couple of things to get cuGraph working on Google Colab. As the Google Colab demo from that Medium post suggests:
Use pynvml to confirm Colab allocated you a Tesla T4 GPU (see the pynvml sketch after the setup commands below)
Install the most recent Miniconda release compatible with Google Colab's Python install (3.6.7)
Install the RAPIDS libraries
Copy the RAPIDS .so files into the current working directory, as a workaround for conda/colab interactions
Update environment variables so Python can find and use the RAPIDS artifacts
!wget -nc https://github.com/rapidsai/notebooks-extended/raw/master/utils/rapids-colab.sh
!bash rapids-colab.sh
import sys, os
# Make the conda-installed packages importable from Colab's Python
sys.path.append('/usr/local/lib/python3.6/site-packages/')
# Point Numba at the CUDA NVVM library and libdevice so RAPIDS can compile GPU kernels
os.environ['NUMBAPRO_NVVM'] = '/usr/local/cuda/nvvm/lib64/libnvvm.so'
os.environ['NUMBAPRO_LIBDEVICE'] = '/usr/local/cuda/nvvm/libdevice/'
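To confirm the allocated GPU (the first step above), a short pynvml check like this should work (a sketch; it assumes pynvml is installed, e.g. via pip install pynvml):

from pynvml import nvmlInit, nvmlDeviceGetHandleByIndex, nvmlDeviceGetName
nvmlInit()
# Device 0 is the GPU Colab allocated; this should report a Tesla T4
print(nvmlDeviceGetName(nvmlDeviceGetHandleByIndex(0)))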
And then you can do the same calculations on the GPU:
pagerank = cugraph.pagerank(G)
instead of
pagerank = nx.pagerank(G)
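For completeness, here is a minimal end-to-end sketch (the edge list is made up for illustration; cuGraph builds its graph from a cuDF DataFrame rather than from NetworkX add_edge calls):

import cudf
import cugraph

# Hypothetical edge list; replace with your own data
edges = cudf.DataFrame({'src': [0, 1, 2], 'dst': [1, 2, 0]})
G = cugraph.Graph()
G.from_cudf_edgelist(edges, source='src', destination='dst')
# Returns a cuDF DataFrame with 'vertex' and 'pagerank' columns, computed on the GPU
pagerank = cugraph.pagerank(G)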

Stopping and starting a deep learning google cloud VM instance causes tensorflow to stop recognizing GPU

I am using the pre-built deep learning VM instances offered by Google Cloud, with an Nvidia Tesla K80 GPU attached. I chose to have TensorFlow 2.5 and CUDA 11.0 automatically installed. When I start the instance, everything works great; I can run:
import tensorflow as tf
tf.config.list_physical_devices()
And the call returns the CPU, the accelerated CPU, and the GPU. Similarly, if I run tf.test.is_gpu_available(), it returns True.
However, if I log out, stop the instance, and then restart it, the same exact code sees only the CPU, and tf.test.is_gpu_available() returns False. I get an error that looks like the driver initialization is failing:
E tensorflow/stream_executor/cuda/cuda_driver.cc:355] failed call to cuInit: CUDA_ERROR_UNKNOWN: unknown error
Running nvidia-smi shows that the machine still sees the GPU, but TensorFlow can't see it.
Does anyone know what could be causing this? I don't want to have to reinstall everything every time I restart the instance.
Some people (sadly not me) are able to resolve this by setting the following at the very beginning of their script/main, before TensorFlow is imported:
import os
# Must run before the first import of tensorflow, or CUDA is already initialized
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
I had to reinstall the CUDA drivers, and from then on it worked even after restarting the instance. You can configure your system settings on NVIDIA's website, and it will give you the commands you need to run to install CUDA. It also asks whether you want to uninstall the previous CUDA version (yes!). This is luckily also very fast.
I fixed the same issue with the commands below, taken from https://issuetracker.google.com/issues/191612865?pli=1
gsutil cp gs://dl-platform-public-nvidia/b191551132/restart_patch.sh /tmp/restart_patch.sh
chmod +x /tmp/restart_patch.sh
sudo /tmp/restart_patch.sh
sudo service jupyter restart
Option-1:
Upgrade the Notebooks instance's environment; refer to the link to upgrade.
Notebooks instances that can be upgraded are dual-disk, with one boot disk and one data disk. The upgrade process upgrades the boot disk to a new image while preserving your data on the data disk.
Option-2:
Connect to the notebook VM via SSH and run the commands in the link.
After the commands have run, the CUDA version will be updated to 11.3 and the Nvidia driver to 465.19.01.
Restart the notebook VM.
Note: the issue has been solved in the GPU images, and new notebooks will be created with image version M74. The new image version is not yet mentioned in the google-public-issue-tracker, but you can find image version M74 in the console.

Problem with importing tensorflow and testing NN

I'm currently working on a program that plays a game similar to Atari games. I'm using Keras (Python 3). I finished writing the code and I want to test it, and I have a few questions about the process:
First of all, I have trouble importing TensorFlow for some reason. I've installed it using pip and made sure to create a new environment before the installation (which finished successfully), but when I try to run my program it says:
ModuleNotFoundError: No module named 'tensorflow'
I also tried to install the package from within PyCharm, but then I get this error:
Could not find a version that satisfies the requirement tensorflow (from versions: )
No matching distribution found for tensorflow
I've checked the program requirements (such as the pip, python, virtualenv and setuptools versions) and everything seems up to date. Perhaps someone could point out what else might be the problem?
Also, is there any other way I can test the performance of my program?
Thank you very much for your time and attention.
Anaconda is a complete time-saver. I suggest creating an environment with Anaconda and installing TensorFlow with conda install tensorflow. If you would like to use the GPU version, conda automatically installs CUDA and cuDNN for you too.
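For example (a sketch; the environment name tf-gpu is arbitrary, and package availability can vary by conda channel):

conda create -n tf-gpu python=3.8
conda activate tf-gpu
conda install tensorflow-gpu

The tensorflow-gpu conda package pulls in matching cudatoolkit and cudnn packages, which is exactly the manual CUDA setup that pip installs leave to you.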

Can't run Tensorflow on GPU within jupyter-notebook

I can't run TensorFlow code on the GPU when I run it from a Jupyter notebook. The same code runs with no problem if I run it as a python script.
I followed the main installation link:
https://www.tensorflow.org/install/install_windows
Also tried:
http://bailiwick.io/2017/11/05/tensorflow-gpu-windows-and-jupyter/
There are no problems outside the notebook, when I run the code as a python script file.
Most likely the problem is similar to this:
Tensorflow not running on GPU in jupyter notebook
More specifically, my test:
I can see both devices, CPU and GPU, via a python script
I can see only the CPU via the notebook
Thanks a lot for any help in advance!
Very late, but here is a short answer:
Here is a tutorial on how to set up a GPU-based JupyterLab instance with Docker (which makes the installation faster).
I hope this helps!
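For a quick test without the full tutorial, the official TensorFlow GPU Jupyter image is one option (a sketch; it assumes Docker and the NVIDIA container toolkit are already installed):

docker run --gpus all -p 8888:8888 tensorflow/tensorflow:latest-gpu-jupyter

Because the notebook server and the GPU build of TensorFlow live in the same container, this sidesteps the kernel/environment mismatch that often causes notebooks to miss the GPU.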
I removed all existing environments and created a new one, which resolved the issue.
(Also, I had to apply the following workaround to get around an issue caused by the removed environments:
https://github.com/jupyter/notebook/issues/2301
)
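A quick way to check for this kind of environment mismatch from inside the notebook (a minimal sketch):

import sys
print(sys.executable)  # the interpreter the notebook kernel actually uses
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))

If sys.executable points at a different environment than the one where the GPU build of TensorFlow is installed, that explains why a plain python script and the notebook disagree.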

Is it time-saving to load a saved tensorflow model

The question is, I cannot make tensorflow-gpu work on my Ubuntu system, because I cannot install the NVIDIA driver on Ubuntu. So I run tensorflow-gpu on Windows 10, but it does not support tensorflow-serving.
I know Docker can help me do it, and I did install it, but only with tensorflow-cpu. It would be very slow if I just ran the tensorflow-cpu version.
Given that, I came up with the idea of installing two TensorFlows: the GPU version on the system, and the CPU version in Docker. The GPU version would train and save a model, and the CPU version would load the saved model.
What I want to know is: does this approach work, and does it save time? Or, put simply, does it take less time than just running the tensorflow-cpu version in Docker?
TensorFlow GPU with NVIDIA GPUs on Ubuntu is supported, and there are drivers available. Check this tutorial.
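As for the train-on-GPU, serve-on-CPU idea itself: saved models are device-agnostic, so that approach does work, and it saves the training time. A minimal sketch (the model here is a made-up placeholder):

import tensorflow as tf

# On the GPU machine: build, train, and save
model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')
# ... model.fit(...) ...
model.save('saved_model_dir')

# In the CPU-only Docker container: load the SavedModel and serve it
model = tf.keras.models.load_model('saved_model_dir')

Training is where the GPU matters most; loading and running inference on the CPU is slower than GPU inference, but is often fast enough for serving.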