How can I use tensorflow 1.14 on RTX 3080?

I'm using an RTX 3080 graphics card. Ampere cards only support CUDA 11.0 or higher, and the matching TensorFlow version is 2.x. But I need to run code on TensorFlow 1.14. How can I do that on an RTX 3080?

Related

Model returns only NaN values on GTX A5000 but not on 1080 Ti

I have replaced a GTX 1080 Ti graphics card with a GTX A5000 in a desktop machine and reinstalled Ubuntu, upgrading from 16.04 to 20.04, in order to meet the requirements.
But now I can't retrain or predict with our current model: when loading the model, Keras hangs for a very long time, and all predicted results are NaN values.
We use Keras 2.2.4 with TensorFlow 2.1.0 and CUDA 10.1.243, which I installed using Conda, and I have tried different drivers.
If I put the old GTX 1080 Ti card back into the machine, the code works fine.
Any idea what could be wrong? Could it be that the A5000 does not support the same models as the old 1080 Ti card?
OK, I can confirm that this setup works on the GTX A5000:
CUDA: 11.6.0
TensorFlow: 2.7.0
Driver version: 510.47.03
Thanks to @talonmies for his comment.
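A quick way to confirm a setup like this actually sees the card is TensorFlow's device-listing API. This is a minimal sketch assuming a TF 2.x install; the import is guarded so the snippet degrades gracefully where TensorFlow is absent:

```python
import importlib.util

def tensorflow_available() -> bool:
    """Return True if a TensorFlow package is importable here."""
    return importlib.util.find_spec("tensorflow") is not None

def visible_gpu_count() -> int:
    """Return the number of GPUs TensorFlow can enumerate (TF 2.x API)."""
    import tensorflow as tf
    return len(tf.config.list_physical_devices("GPU"))

if tensorflow_available():
    print("Visible GPUs:", visible_gpu_count())
else:
    print("TensorFlow is not installed in this environment")
```

On the working CUDA 11.6 / TF 2.7 setup above this should report at least one visible GPU; a count of zero points back at a driver or version mismatch.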

TensorFlow (CUDA 11.2) not detecting GPU on an AMD Radeon Vega 8 (Envy laptop) using Python 3.7

Has anyone been able to make TensorFlow detect the GPU using Python 3.7? How did you do it?
I've downloaded cuDNN 8.1 and CUDA 11.2, then installed TensorFlow with pip install tensorflow-gpu==2.5. I've also added an environment variable for cuDNN's bin directory, but I still get Num GPUs Available: 0. Does TensorFlow (CUDA 11.2) even work with the AMD Radeon Vega 8?
No, it does not: cuDNN and CUDA are both NVIDIA products, and they only look for NVIDIA's own GPUs. For TensorFlow to detect a GPU, you will have to use an NVIDIA GPU.
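One way to tell "my build can never see this GPU" apart from "my GPU is simply missing drivers" is TensorFlow's build-info API. A sketch assuming TF 2.x, with the import guarded in case TensorFlow is not installed:

```python
import importlib.util

def cuda_build_status() -> str:
    """Report whether the installed TensorFlow was compiled against CUDA."""
    if importlib.util.find_spec("tensorflow") is None:
        return "tensorflow not installed"
    import tensorflow as tf
    # CUDA builds enumerate NVIDIA devices only; an AMD Vega 8 will
    # never appear here regardless of driver or environment tweaks.
    if tf.test.is_built_with_cuda():
        gpus = len(tf.config.list_physical_devices("GPU"))
        return f"CUDA build, {gpus} GPU(s) visible"
    return "non-CUDA build"

print(cuda_build_status())
```

On the setup from the question this would print a CUDA build with 0 GPUs visible, which is exactly the expected result on AMD hardware.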

Compute capability of the GTX 870M

I was trying to run tensorflow-gpu on an ASUS laptop with a GTX 870M card on Ubuntu 16.04, but got this error message:
2018-10-07 16:54:50.537324: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1482] Ignoring visible gpu device (device: 0, name: GeForce GTX 870M, pci bus id: 0000:01:00.0, compute capability: 3.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5.
However, the GTX 870M's compute capability is listed in some places as 5.0 (Maxwell). My questions are: (1) what is the GTX 870M's compute capability, and (2) can I run tensorflow-gpu (latest or nightly) with a GTX 870M? Thanks, CC.
I have a Razer Blade 14 (2014) with the same GPU on Windows and ran into the same problem. Unfortunately, the CUDA compute capability is listed as 3.0 for this GPU.
On Linux you have an advantage: for compute capability 3.0 you can build TensorFlow from source using the official instructions, which recommend Docker for the build: https://www.tensorflow.org/install/source
A post I found where the problem is solved may help as well.
Please let me know if you can make it work; I will try to install it on the latest stable Ubuntu MATE 18.04.2 LTS.
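The error message above boils down to a numeric comparison between the card's compute capability and TensorFlow's minimum. A tiny stand-alone illustration of that check (the helper name is my own, not a TensorFlow API):

```python
def meets_minimum(capability: str, minimum: str) -> bool:
    """Compare 'major.minor' compute-capability strings numerically."""
    def parse(s: str):
        return tuple(int(part) for part in s.split("."))
    return parse(capability) >= parse(minimum)

# The GTX 870M reports 3.0, below the 3.5 floor of recent prebuilt wheels:
print(meets_minimum("3.0", "3.5"))  # False
# A genuine Maxwell (5.0) part would pass:
print(meets_minimum("5.0", "3.5"))  # True
```

Building from source is the workaround precisely because the minimum is a compile-time setting of the prebuilt wheels, not a hardware limit of CUDA itself.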

Nvidia Titan X (Pascal) Tensorflow Windows 10

My operating system is Windows 10, and I am using Keras with the TensorFlow backend on CPU. I want to buy the NVIDIA Titan X (Pascal) GPU, as it is recommended for TensorFlow on NVIDIA's website:
http://www.nvidia.com/object/gpu-accelerated-applications-tensorflow-configurations.html
They recommend Ubuntu 14.04 as the OS.
Does anybody know if I can use TensorFlow with an NVIDIA Titan X (Pascal) GPU on my Windows 10 machine?
Thanks a lot.

Cannot use GPU with Tensorflow

I have TensorFlow installed with CUDA 7.5 and cuDNN 5.0. My graphics card is an NVIDIA GeForce 820M with compute capability 2.1. However, I get this error:
Ignoring visible gpu device (device: 0, name: GeForce 820M, pci bus id: 0000:08:00.0) with Cuda compute capability 2.1. The minimum required Cuda capability is 3.0.
Device mapping: no known devices.
Is there any way to run on the GPU with compute capability 2.1?
I found online that it is cuDNN that requires this capability, so would installing an earlier version of cuDNN let me use the GPU?
tensorflow-gpu requires GPUs of compute capability 3.0 or higher for GPU acceleration and this has been true since the very first release of tensorflow.
cuDNN has also required GPUs of compute capability 3.0 or higher since the very first release of cuDNN.
With TensorFlow (using Keras), you might be able to get it to run with PlaidML. I have been able to run Keras models with GPU acceleration on AMD and NVIDIA GPUs (some of them old) with PlaidML. It's not as fast as CUDA, but it is much faster than your CPU.
For reference, I have run it on an old MacBook Pro (2012) with an NVIDIA 650 GPU (1.5 GB) as well as an AMD HD Radeon 750 (3 GB).
The caveat is that it works through Keras rather than lower-level TensorFlow. There are lots of articles on it, and it now has support from Intel.
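The backend switch PlaidML relies on is small: its install_backend() call must run before Keras is imported. A sketch assuming pip install plaidml-keras and an initial plaidml-setup run have been done; the import is guarded so it degrades when PlaidML is absent:

```python
import importlib.util

def plaidml_available() -> bool:
    """Return True if the plaidml package is importable."""
    return importlib.util.find_spec("plaidml") is not None

if plaidml_available():
    import plaidml.keras
    plaidml.keras.install_backend()  # must precede `import keras`
    import keras
    print("Keras backend:", keras.backend.backend())
else:
    print("plaidml-keras not installed; see `pip install plaidml-keras`")
```

This is why the Keras-only caveat exists: PlaidML substitutes itself as the Keras backend, so code calling raw TensorFlow ops directly bypasses it entirely.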