I have TensorFlow installed with CUDA 7.5 and cuDNN 5.0. My graphics card is an NVIDIA GeForce 820M with compute capability 2.1. However, I get this error:
Ignoring visible gpu device (device: 0, name: GeForce 820M, pci bus id: 0000:08:00.0) with Cuda compute capability 2.1. The minimum required Cuda capability is 3.0.
Device mapping: no known devices.
Is there any way to run TensorFlow on a GPU with compute capability 2.1?
From what I found online, it is cuDNN that requires this capability, so would installing an earlier version of cuDNN enable me to use the GPU?
tensorflow-gpu requires a GPU of compute capability 3.0 or higher for GPU acceleration, and this has been true since the very first release of TensorFlow.
cuDNN has also required GPUs of compute capability 3.0 or higher since the very first release of cuDNN. So installing an earlier cuDNN will not help: neither component has ever supported a 2.1 device.
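If you want to check what TensorFlow actually sees from Python, you can list the visible devices and their compute capability. A minimal sketch, assuming a TensorFlow 2.x install (get_device_details was added around 2.5):

    import tensorflow as tf

    # An empty list here means TensorFlow will silently fall back to the CPU.
    gpus = tf.config.list_physical_devices("GPU")
    print("Visible GPUs:", gpus)

    for gpu in gpus:
        details = tf.config.experimental.get_device_details(gpu)
        # compute_capability is reported as a (major, minor) tuple, e.g. (3, 0)
        print(gpu.name, "compute capability:", details.get("compute_capability"))

On a 2.1-capability card like the 820M, the list will be empty even though the driver sees the device.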
With TensorFlow (using Keras), you might be able to get it to run with PlaidML. I have been able to run Keras models with GPU acceleration on AMD and NVIDIA GPUs (some of them old) through PlaidML. It's not as fast as CUDA, but it's much faster than your CPU.
For reference, I have run it on an old MacBook Pro (2012) with an NVIDIA 650 GPU (1.5 GB) as well as an AMD Radeon HD 750 (3 GB).
The caveat is that your code needs to go through Keras rather than lower-level TensorFlow. There are lots of articles on it, and it now has support from Intel.
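If you want to try that route, the usual pattern is to install PlaidML, run its setup tool once, and then point Keras at the PlaidML backend before importing it. A minimal sketch, assuming plaidml-keras has been installed (pip install plaidml-keras) and configured with plaidml-setup:

    import os
    # Must be set before Keras is imported; PlaidML works with standalone
    # Keras, not tf.keras.
    os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

    import keras
    from keras import layers

    # Any ordinary Keras model now runs on the device chosen in plaidml-setup.
    model = keras.Sequential([
        layers.Dense(64, activation="relu", input_shape=(784,)),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy")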
I tried to use an NVIDIA GeForce GTX 1080 Ti graphics card with TFJS and followed the hardware and software requirements as stated in the documentation, but I have not seen a drastic difference in performance yet. It seems like it's ignoring the GPU.
I am unsure whether I'm following the correct guidelines, as the documentation above seems to be for TensorFlow in Python.
Do I need some more settings to use the GPU version of TensorFlow.js for Node?
The difference in performance is massive for anything but trivial tasks (where the CPU is faster simply because of the time it takes to move ops and data to and from the GPU).
Like you said, it's probably not using the GPU to start with, which is most commonly due to CUDA issues (it's non-trivial to get the CUDA installation and versions right); also double-check that you're loading the @tensorflow/tfjs-node-gpu package rather than @tensorflow/tfjs-node.
Increase TensorFlow's logging verbosity with export TF_CPP_MIN_LOG_LEVEL=0 before running your node app; you should see something like this in the output when it's using the GPU:
tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 9575 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3060, pci bus id: 0000:01:00.0, compute capability: 8.6
If you don't see that, go back to the docs and make sure CUDA is installed correctly.
Has anyone been able to make TensorFlow detect the GPU using Python 3.7?
How did you do it? I've downloaded cuDNN 8.1 and CUDA 11.2, then installed TensorFlow with pip install tensorflow-gpu==2.5. I've added another environment variable for cuDNN's bin directory, but I am still getting Num GPUs Available: 0. Does TensorFlow (CUDA 11.2) even work with the AMD Radeon Vega 8?
No, it does not: cuDNN and CUDA are NVIDIA products, and NVIDIA designs them for its own GPUs, so TensorFlow's CUDA build only looks for those. For TensorFlow to detect your GPU this way, you will have to use one of NVIDIA's GPUs.
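To sanity-check a setup like this, you can ask TensorFlow which CUDA/cuDNN versions its wheel was built against and which devices it actually detects. A minimal sketch, assuming TensorFlow 2.x (get_build_info was added around 2.3):

    import tensorflow as tf

    # Versions this particular wheel was compiled against
    build = tf.sysconfig.get_build_info()
    print("CUDA:", build.get("cuda_version"), "cuDNN:", build.get("cudnn_version"))

    # Devices TensorFlow can actually use; an AMD iGPU will never appear here
    print("Num GPUs Available:", len(tf.config.list_physical_devices("GPU")))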
I am running a model written with TensorFlow 1.x on 4x RTX 3090, and it takes much longer to start training than on 1x RTX 3090. However, once training starts, it finishes earlier on 4x than on 1x. I am using CUDA 11.1 and TensorFlow 1.14 on both machines.
Secondly, when I use 1x RTX 2080 Ti with CUDA 10.2 and TensorFlow 1.14, training starts much sooner than on 1x RTX 3090 with CUDA 11.1 and TensorFlow 1.14. Roughly, it takes 5 minutes on 1x RTX 2080 Ti, 30-35 minutes on 1x RTX 3090, and 1.5 hours on 4x RTX 3090 to start training on one of the datasets.
I'd be grateful if anyone can help me resolve this issue.
I am using Ubuntu 16.04, a Core i9-10980XE CPU, and 32 GB of RAM in both the 2080 Ti and 3090 machines.
EDIT: I found out that TF takes a long start-up time on Ampere-architecture GPUs, according to this, but I'm still unclear whether that is what's happening here; and if it is, does any solution exist?
TF 1.x does not have binaries for CUDA 11.1, so at startup it has to compile its GPU kernels. Because the RTX 3090's architecture is not covered by the prebuilt binaries, the kernels are JIT-compiled from PTX, which takes a long time.
A general workaround is to increase the JIT compilation cache size with export CUDA_CACHE_MAXSIZE=2147483648 (here 2147483648 bytes is the cache size; you can set any value, taking your memory limit and its use by other processes into account). Refer to https://www.tensorflow.org/install/gpu for clarification. With the cache in place, start-up time on subsequent runs will be short. But even then, the binaries produced at that first start will not be fully compatible with CUDA 11.1.
The best option is to migrate the code from TF 1.x to 2.x (2.4+) so it runs on the RTX 30xx series, or to try compiling TF 1.x from source with CUDA 11.1 (not sure about this).
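For the migration route, the quickest first step is often to run the 1.x code unchanged under TF 2.x's compatibility layer (a 2.4+ wheel ships CUDA 11 binaries) before attempting a full rewrite. A minimal sketch of that pattern; the placeholder ops below stand in for your real graph code, and anything from tf.contrib will not survive this path:

    # Run TF 1.x-style graph code on a TF 2.4+ install
    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()

    # ...your existing 1.x graph construction, for example:
    x = tf.placeholder(tf.float32, shape=(None, 10))
    y = tf.layers.dense(x, 1)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())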
As Thunder explained, TensorFlow 1.x is not supported on NVIDIA Ampere GPUs, and it looks like it never will be: the Ampere streaming multiprocessors (SM_86) are only supported from CUDA 11.1 onwards (see https://forums.developer.nvidia.com/t/can-rtx-3080-support-cuda-10-1/155849/2), and TensorFlow 1.x has not been fully supported on new CUDA versions for a while now, probably for reasons similar to those described in that link. Unfortunately, TensorFlow 1.x is no longer supported or maintained; see https://github.com/tensorflow/tensorflow/issues/43629#issuecomment-700709796
However, if you have to use the StyleGAN 2 model, you might have some luck with NVIDIA's TensorFlow fork, which has support for version 1.15 on Ampere GPUs; see https://developer.nvidia.com/blog/accelerating-tensorflow-on-a100-gpus/
Here's the proposed solution on Linux:
https://www.pugetsystems.com/labs/hpc/How-To-Install-TensorFlow-1-15-for-NVIDIA-RTX30-GPUs-without-docker-or-CUDA-install-2005/
On Windows, I managed to get my RTX 3080 Ti working with TF 1.15 using WSL2 with DirectML:
https://learn.microsoft.com/en-us/windows/ai/directml/gpu-tensorflow-wsl
The result is about 1.5x faster compared to my RTX 2080 Ti.
I have a Samsung notebook with Windows 10 and 8 GB of RAM, an Intel HD Graphics 5500 GPU, and a GeForce 920M. I have been trying to use the NVIDIA GPU to run code in Jupyter Notebook using TensorFlow. My TensorFlow code does not run on version 2.0, so I had to install an earlier version. I installed CUDA 9.0, tensorflow_gpu-1.12.0, and cuDNN 7, and it didn't work. Then I installed tensorflow_gpu-1.5.0 with Anaconda, and it worked, but it used the Intel GPU instead of my NVIDIA one. At that point I changed the settings for my GeForce in the NVIDIA Control Panel, but the Intel GPU is still being used instead of the NVIDIA. Why is this happening?
Try installing NVIDIA's CUDA toolkit (with a cuDNN version that matches your TensorFlow build). Afterwards, when you run TensorFlow, it should run on your GPU.
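To see which device TensorFlow is actually placing ops on, you can enable device-placement logging. A minimal sketch using the TF 1.x API that matches the versions above:

    import tensorflow as tf

    # Log every op's device assignment to stderr (TF 1.x API)
    config = tf.ConfigProto(log_device_placement=True)
    with tf.Session(config=config) as sess:
        a = tf.constant([1.0, 2.0])
        b = tf.constant([3.0, 4.0])
        # The placement log will show /device:GPU:0 if the GeForce is used
        print(sess.run(a + b))

Note that TensorFlow's CUDA build can only ever use the NVIDIA card; if TensorFlow runs but no /device:GPU:0 appears in the log, it is running on the CPU, not on the Intel GPU.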
I want to use the GPU-enabled version of TensorFlow. I read that it works best with CUDA 7.5, but the server I am using has CUDA 5.5 installed.
Can I configure TensorFlow with CUDA 5.5? If yes, how? I have installed TensorFlow in a virtual environment.
From the installation doc:
In order to build or run TensorFlow with GPU support, both NVIDIA's Cuda Toolkit (>= 7.0) and cuDNN (>= v2) need to be installed.
If you don't install from source, you will need to use version 7.5.
Also make sure that you have a compatible GPU:
TensorFlow GPU support requires having a GPU card with NVidia Compute Capability >= 3.0. Supported cards include but are not limited to:
NVidia Titan
NVidia Titan X
NVidia K20
NVidia K40
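Once you have a supported CUDA version and card in place, you can verify that TensorFlow sees the GPU. A minimal sketch using the TF 1.x-era API that matches these docs:

    # List every device TensorFlow can use
    from tensorflow.python.client import device_lib

    for device in device_lib.list_local_devices():
        # device_type is 'CPU' or 'GPU'; for GPUs, physical_device_desc
        # includes the card name and its compute capability
        print(device.device_type, device.name, device.physical_device_desc)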