I'm fairly sure this is a very stupid question, but I can't get it out of my head. I'm sure you know that you can use CUDA or ROCm to accelerate training in TensorFlow/Keras, but I was wondering: is there any way that a Raspberry Pi 4, with its GPU, could help with training?
I don't know what you mean by "help", but the GPU in the Raspberry Pi 4 is an integrated VideoCore VI. It does not support CUDA, and you cannot attach an external GPU (there are no connectors for one). You could only train on the CPU, but the Raspberry Pi is a resource-limited device, so forget about it. You can, however, do inference on a Raspberry Pi.
You should train on a regular computer and test the model there. Once it works, save your model's weights and structure and deploy it to a Raspberry Pi.
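A minimal sketch of that workflow, assuming TensorFlow 2.x and a TensorFlow Lite conversion (a common route for Pi deployment; the model here is a toy placeholder):

import tensorflow as tf

# Train on a desktop/workstation as usual (toy model for illustration).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(x_train, y_train, epochs=10)  # train with your real data

# Convert the trained model to TensorFlow Lite for inference on the Pi.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

On the Pi itself you would then load model.tflite with the tflite-runtime interpreter instead of the full TensorFlow package.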
Every time I need to train a 'large' deep learning model I do it from Google Colab, as it lets you use GPU acceleration.
My PC has a dedicated GPU, and I was wondering whether it is possible to use it to run my notebooks locally at a reasonable speed. Is it possible to train models using my PC's GPU? If so, how?
I am open to working with DataSpell, VSCode or any other IDE.
Nicholas Renotte has a great 'Getting Started' video that goes through the entire process of setting up GPU-accelerated notebooks on your PC. The part you're interested in starts around the 12-minute mark.
Yes, it is possible to run .ipynb notebooks locally with GPU acceleration. To do so, you will need to install the necessary libraries and frameworks, such as TensorFlow, PyTorch, or Keras, and, depending on the IDE you choose, the relevant plugins and packages for GPU support.
In terms of IDEs, DataSpell, VSCode, PyCharm, and Jupyter Notebook are all suitable for running notebooks locally with GPU acceleration.
Once the libraries and frameworks are installed, you will need to install the appropriate drivers for your GPU (for NVIDIA cards, the CUDA toolkit and cuDNN) and configure the environment accordingly.
Finally, verify from inside the notebook that the framework actually sees the GPU. With TensorFlow or PyTorch you normally do not need to modify the notebook itself: once the drivers and libraries are set up, an available GPU is picked up automatically.
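As a quick sanity check, something like this (assuming TensorFlow 2.x) tells you whether the framework can see your GPU at all:

import tensorflow as tf

# An empty list here means the driver/CUDA/cuDNN setup is not being
# picked up, and everything will silently run on the CPU.
print(tf.config.list_physical_devices('GPU'))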
I've coded a neural network from scratch in Python and I am using Google Colaboratory to train it. However, enabling GPU or TPU acceleration does not make the training any faster.
When you search for examples online, all of them use TensorFlow or other libraries, and their training times are shorter with a GPU than without one.
Am I doing it correctly or am I missing something and the GPU is not being used?
Just enabling the GPU or TPU runtime won't help here. If you are not using a framework or library like TensorFlow, you need to explicitly write your code to run on the accelerator; plain Python/NumPy code always executes on the CPU, no matter which runtime you select.
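For example, if your from-scratch network is built on NumPy, one relatively low-effort way to get the heavy array math onto the Colab GPU is CuPy, which mirrors much of the NumPy API. A sketch (CuPy is available on Colab GPU runtimes; treat this as an illustration, not a drop-in fix for your whole network):

import numpy as np
import cupy as cp  # GPU-backed array library with a NumPy-like API

x_cpu = np.random.rand(2048, 2048).astype(np.float32)
w_cpu = np.random.rand(2048, 2048).astype(np.float32)

# Move the arrays to the GPU and run the same matmul there.
x_gpu = cp.asarray(x_cpu)
w_gpu = cp.asarray(w_cpu)
y_gpu = x_gpu @ w_gpu       # executes on the GPU
y_cpu = cp.asnumpy(y_gpu)   # copy the result back to host memory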
I'm using TensorFlow with GPU support. My computer has an NVIDIA GeForce 750 Ti and I'm going to replace it with a 1080 Ti. Do I have to reinstall TensorFlow (or drivers, etc.)? If so, what exactly do I have to reinstall?
One more question: can I speed up the training process by installing one more GPU in the computer?
As far as I know, the only things you need to reinstall are the GPU driver and the CUDA libraries (CUDA and/or cuDNN). If you install the exact same versions with the exact same bindings, TensorFlow should not notice you changed the GPU and should keep working.
And yes, you can speed up training with multiple GPUs, but explaining how to install and manage that setup is a bit too broad for a Stack Overflow answer.
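That said, the rough shape of multi-GPU training in current TensorFlow is tf.distribute.MirroredStrategy, which replicates the model across all visible GPUs and splits each batch between them. A sketch (the model is a placeholder):

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

# Variables created inside the scope are mirrored across the GPUs.
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer="sgd", loss="mse")

# model.fit(x_train, y_train, epochs=5)  # then train as usual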
I have previously asked whether it is possible to run TensorFlow with GPU support on a CPU. I was told that it is, and was shown the basic code to switch which device to use, but not how to get the initial code working on a computer that doesn't have a GPU at all. For example, I would like to train on a computer that has an NVIDIA GPU but program on a laptop that only has a CPU. How would I go about doing this? I have tried just writing the code as normal, but it crashes before I can even switch devices. I am using Python on Linux.
This thread might be helpful: Tensorflow: ImportError: libcusolver.so.8.0: cannot open shared object file: No such file or directory
I've tried importing TensorFlow with tensorflow-gpu installed on my university HPC's login node, which has no GPUs, and it works fine. I don't have an NVIDIA GPU in my laptop, so I never went through the installation process there, but I think the cause is that it cannot find the relevant CUDA and cuDNN libraries.
But why don't you just use the CPU version? As @Finbarr Timbers mentioned, you can still run the model on a computer with a GPU.
What errors are you getting? It is very possible to train on a GPU but develop on a CPU; many people do it, including myself. In fact, TensorFlow will automatically place your code on a GPU if one is available.
If you add the following code to your model, you can see which devices are being used:
import tensorflow as tf

# Creates a session with log_device_placement set to True, which logs
# the device each operation is assigned to.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
This should change when you run your model on a computer with a GPU.
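If you are on TensorFlow 2.x, where sessions are gone, the equivalent (as far as I know) is:

import tensorflow as tf

# Logs the device (CPU or GPU) that each operation is placed on,
# the same idea as log_device_placement in the old Session config.
tf.debugging.set_log_device_placement(True)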
I am using Windows 7. After I tested my GPU in TensorFlow, which was awkwardly slow on a model already tested on the CPU, I switched to the CPU with:
tf.device("/cpu:0")
I was assuming that I could switch back to the GPU with:
tf.device("/gpu:0")
However, I got the following error message from Windows when I tried to rerun with this configuration:
The device "NVIDIA Quadro M2000M" is not a removable device and cannot be removed.
I looked for my GPU with "nvidia-smi", but the system said the GPU was not there.
I restarted my laptop, checked for the GPU with "nvidia-smi", and it was recognized again.
I imported TensorFlow again and started my model again; however, the same error message popped up and my GPU vanished.
Is there something wrong with the configuration in one of the TensorFlow configuration files? Or the Keras files? What can I change to get this working again? And do you know why the GPU is so much slower than the 8 CPU cores?
Solution: reinstalling tensorflow-gpu worked for me.
However, there is still the question of why that happened, and of how I can switch between GPU and CPU. I don't want to use a second virtual environment.
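On the switching question: note that tf.device is a context manager, so calling it on a line by itself (as in the question) has no effect. You have to wrap the operations you want placed. A sketch using the device strings from the question:

import tensorflow as tf

# Only ops created inside the "with" block are pinned to the device;
# a bare tf.device("/cpu:0") call does nothing on its own.
with tf.device("/cpu:0"):
    a = tf.constant([[1.0, 2.0]])
    b = tf.constant([[3.0], [4.0]])
    c_cpu = tf.matmul(a, b)

with tf.device("/gpu:0"):  # raises an error if no GPU is visible
    c_gpu = tf.matmul(a, b)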