Compute capability of GTX 870M - tensorflow

I was trying to run tensorflow-gpu on an ASUS laptop with a GTX 870M card on Ubuntu 16.04, but got the following error message:
2018-10-07 16:54:50.537324: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1482] Ignoring visible gpu device (device: 0, name: GeForce GTX 870M, pci bus id: 0000:01:00.0, compute capability: 3.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5.
However, the GTX 870M's compute capability is listed in some sources as 5.0 (Maxwell). My questions are: (1) what is the GTX 870M's compute capability, and (2) can I run tensorflow-gpu (latest or nightly) with the GTX 870M? Thanks, CC.

I have a Razer Blade 14 (2014) with the same GPU on Windows and ran into the same problem. Unfortunately, the CUDA compute capability of this GPU really is 3.0 (the 870M is a Kepler part, despite listings that call it Maxwell).
On Linux you have an advantage: for compute capability 3.0 you can build TensorFlow from source using the official instructions, which recommend Docker for the build: https://www.tensorflow.org/install/source
A post I found where the same problem was solved may help as well.
Please let me know if you manage to make it work; I will try to install it on the latest stable Ubuntu MATE 18.04.2 LTS.
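If the source build succeeds, a quick way to confirm which compute capability TensorFlow actually detects is something like the sketch below (it uses the internal device_lib module, which is present in both TF 1.x and 2.x; treat it as a diagnostic, not an official API):

# List the devices TensorFlow can see; each GPU entry's description
# includes the device name, PCI bus id and compute capability.
from tensorflow.python.client import device_lib

for dev in device_lib.list_local_devices():
    if dev.device_type == "GPU":
        print(dev.physical_device_desc)
        # e.g. "device: 0, name: GeForce GTX 870M, pci bus id: 0000:01:00.0, compute capability: 3.0"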

Related

Requirements for TFJS on GPU: trying to compare the performance of tfjs-node and tfjs-node-gpu

I tried to use an NVIDIA GeForce GTX 1080 Ti card with TFJS and followed the hardware and software requirements stated in the documentation, but I have not seen a drastic difference in performance yet. It seems like the GPU is being ignored.
I am unsure whether I'm following the correct guidelines, as the documentation above seems to be for TensorFlow Python.
Do I need additional settings to use the GPU version of TensorFlow.js for Node?
The difference in performance is massive for anything but trivial tasks (where the CPU is faster simply because it takes time to move ops and data to and from the GPU).
Like you said, it's probably not using the GPU to start with, which is most commonly due to CUDA issues (it's non-trivial to get the CUDA installation and versions right).
Increase TensorFlow's logging verbosity with export TF_CPP_MIN_LOG_LEVEL=0 before running the Node app; you should see something like this in the output when it is using the GPU:
tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 9575 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3060, pci bus id: 0000:01:00.0, compute capability: 8.6
If you don't see that, go back to the docs and make sure CUDA is installed correctly.

Model returns only NaN values on RTX A5000 but not on 1080 Ti

I have replaced a GTX 1080 Ti graphics card with an RTX A5000 in a desktop machine and reinstalled Ubuntu, upgrading from 16.04 to 20.04, in order to meet the requirements.
But now I can't retrain or predict with our current model: when loading the model, Keras hangs for a very long time, and all predicted results are NaN values.
We use Keras 2.2.4 with TensorFlow 2.1.0 and CUDA 10.1.243, which I installed using Conda, and I have tried different drivers.
If I put the old GTX 1080 Ti card back into the machine, the code works fine.
Any idea what could be wrong? Can it be that the A5000 does not support the same models as the old 1080 Ti?
OK, I can confirm that this setup works on the RTX A5000 (the A5000 is an Ampere card with compute capability 8.6, which the old CUDA 10.1 / TensorFlow 2.1 stack was never built for):
CUDA: 11.6.0
Tensorflow: 2.7.0
Driver Version: 510.47.03
Thanks to @talonmies for his comment.
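As a sanity check after reinstalling, something like the following prints the CUDA/cuDNN versions the installed TensorFlow wheel was built against and the compute capability of the detected GPU (this assumes TensorFlow 2.3 or newer, where tf.sysconfig.get_build_info() and get_device_details() are available):

import tensorflow as tf

# CUDA/cuDNN versions this TensorFlow build was compiled against.
build = tf.sysconfig.get_build_info()
print("built against CUDA:", build.get("cuda_version"), "cuDNN:", build.get("cudnn_version"))

# Compute capability of each visible GPU (an A5000 should report (8, 6)).
for gpu in tf.config.list_physical_devices("GPU"):
    details = tf.config.experimental.get_device_details(gpu)
    print(details.get("device_name"), "compute capability:", details.get("compute_capability"))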

Keras R GPU configuration is using Intel dedicated GPU, not NVIDIA Card

I've gone through the arduous process of setting up GPU support for Keras, and it appears to have worked. Running the following code appears to confirm this:
> tensorflow::tf_gpu_configured()
2021-09-06 11:48:01.448811: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /device:GPU:0 with 3495 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3060 Laptop GPU, pci bus id: 0000:01:00.0, compute capability: 8.6
TensorFlow built with CUDA: TRUE
2021-09-06 11:48:01.450971: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /device:GPU:0 with 3495 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3060 Laptop GPU, pci bus id: 0000:01:00.0, compute capability: 8.6
GPU device name: /device:GPU:0
[1] TRUE
This seems to say it is using my NVIDIA card, but on my computer GPU 0 is the integrated Intel graphics, and GPU 1 is the NVIDIA.
When I run the MNIST example model, it is clearly running on my integrated Intel graphics.
If I crank up the model size past the Intel GPU's 2 GB to try to force it onto my NVIDIA card (6 GB), I get an overflow error.
Does anyone know a way to get Keras to use the NVIDIA card, or what my next troubleshooting steps might be?

Trying to use NVIDIA Geforce 920M to run Tensorflow codes

I have a Samsung notebook running Windows 10 with 8 GB of RAM, Intel HD Graphics 5500, and a GeForce 920M. I have been trying to use the NVIDIA GPU to run code in Jupyter Notebook with TensorFlow. My TensorFlow code does not run on TensorFlow 2.0, so I had to install previous versions. I installed CUDA 9.0, tensorflow_gpu-1.12.0, and cuDNN 7, and it didn't work. Then I tried tensorflow_gpu-1.5.0 with Anaconda, and it worked, but it used the Intel GPU instead of my NVIDIA card. At that point I changed the settings for my GeForce in the NVIDIA Control Panel, but the Intel GPU is still being used instead of the NVIDIA. Why is this happening?
Try installing NVIDIA's CUDA toolkit. Afterwards, when you run TensorFlow, it should run on your GPU.
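Note that tensorflow-gpu only ever uses CUDA devices, so if something appears to run on the Intel GPU it is really falling back to the CPU. A minimal TF 1.x sketch (matching the tensorflow_gpu-1.x versions mentioned above) to verify whether the GeForce 920M is actually being used:

import tensorflow as tf

# Log where each op is placed; the console output shows /device:GPU:0
# for ops that actually run on the NVIDIA card.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    a = tf.constant([1.0, 2.0, 3.0])
    b = tf.constant([4.0, 5.0, 6.0])
    print(sess.run(a + b))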

Cannot use GPU with Tensorflow

I have TensorFlow installed with CUDA 7.5 and cuDNN 5.0. My graphics card is an NVIDIA GeForce 820M with compute capability 2.1. However, I get this error:
Ignoring visible gpu device (device: 0, name: GeForce 820M, pci bus id: 0000:08:00.0) with Cuda compute capability 2.1. The minimum required Cuda capability is 3.0.
Device mapping: no known devices.
Is there any way to use the GPU with compute capability 2.1?
I found online that it is cuDNN that requires this capability, so would installing an earlier version of cuDNN enable me to use the GPU?
tensorflow-gpu requires GPUs of compute capability 3.0 or higher for GPU acceleration and this has been true since the very first release of tensorflow.
cuDNN has also required GPUs of compute capability 3.0 or higher since the very first release of cuDNN.
With TensorFlow (via Keras), you might be able to get it to run with PlaidML. I have been able to run Keras models on the GPU with PlaidML on both AMD and NVIDIA GPUs (some quite old). It's not as fast as CUDA, but much faster than your CPU.
For reference, I have run it on an old MacBook Pro (2012) with an NVIDIA 650 GPU (1.5 GB) as well as an AMD Radeon HD 750 with 3 GB.
The caveat is that it needs to be Keras rather than lower-level TF. There are lots of articles on it, and it now has support from Intel.
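A minimal sketch of pointing Keras at the PlaidML backend (this assumes plaidml-keras has been installed with pip and plaidml-setup has been run once to select the GPU; the tiny Dense model is just a placeholder):

import os

# Select the PlaidML backend before importing keras.
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

import keras
from keras import layers

# Any ordinary Keras model then runs through PlaidML (OpenCL) instead of CUDA.
model = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(784,)),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
print(keras.backend.backend())  # should report the plaidml backend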