Nvidia Titan X (Pascal) Tensorflow Windows 10

My operating system is Windows 10 and I am using Keras with the TensorFlow backend on CPU. I want to buy the "Nvidia Titan X (Pascal)" GPU, as it is recommended for TensorFlow on the Nvidia website:
http://www.nvidia.com/object/gpu-accelerated-applications-tensorflow-configurations.html
They recommend Ubuntu 14.04 as the OS.
Does anybody know if I can use TensorFlow on the Nvidia Titan X (Pascal) GPU on my Windows 10 machine?
Thanks a lot.

Related

How to access NVIDIA Quadro K4000 from my remote desktop Windows 10?

I have been trying to access my NVIDIA Quadro K4000 GPU from my Windows 10 remote desktop. I need to use it for TensorFlow object detection, version 2.9 or greater. For TensorFlow 2.9 or higher I have installed CUDA 11.2, cuDNN, and Visual Studio 2019 according to the build configuration. It runs perfectly on my local PC and shows the local GPU on my laptop after running this code:
import tensorflow as tf
import os

gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

tf.config.list_physical_devices('GPU')

which returns:
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
But this line of code does not show any GPU device when I connect to my remote desktop with the NVIDIA Quadro K4000 GPU.
This line of code returns null value:
tf.config.list_physical_devices('GPU')
I have tried everything from editing the PATH variable to editing group policy with 'gpedit.msc' from the Run command, but in vain. I cannot use my GPU remotely and have been stuck for a long time.
Please help me.
I solved the issue. I was unaware of the compute capability of my GPU and of the CUDA and cuDNN versions installed on the system. It took me two days, but now I have access to my TensorFlow-GPU version 2.0.0.
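For anyone hitting the same wall, here is a minimal sketch (assuming TensorFlow 2.4 or newer, where get_build_info and get_device_details are available) for comparing what the installed TensorFlow build expects against what the local GPU actually offers:

import tensorflow as tf

# CUDA / cuDNN versions this TensorFlow wheel was compiled against
build = tf.sysconfig.get_build_info()
print("CUDA version TF expects: ", build.get("cuda_version"))
print("cuDNN version TF expects:", build.get("cudnn_version"))

# compute capability of each visible GPU (an empty list means no CUDA GPU is seen)
for gpu in tf.config.list_physical_devices('GPU'):
    details = tf.config.experimental.get_device_details(gpu)
    print(gpu.name, "compute capability:", details.get("compute_capability"))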

Tensorflow (CUDA 11.2) not detecting GPU on an AMD Radeon Vega 8 (Envy Laptop) using Python 3.7

Has anyone been able to make tensorflow detect the GPU using python 3.7?
How did you do it? I've downloaded cuDNN 8.1 and CUDA 11.2, then installed TensorFlow with pip install tensorflow-gpu==2.5. I've added another environment variable for cuDNN's bin directory, yet I am still getting Num GPUs Available: 0. Does TensorFlow (CUDA 11.2) even work with the AMD Radeon Vega 8?
No, it does not, because cuDNN and CUDA are both NVIDIA products, and they only target NVIDIA's own GPUs. For TensorFlow to detect a GPU through CUDA, you will have to use one of NVIDIA's GPUs.
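As a quick sanity check, a sketch like the one below only confirms whether the installed TensorFlow wheel was built with CUDA and whether any CUDA-capable (i.e. NVIDIA) GPU is visible; on an AMD-only system the device list will simply come back empty:

import tensorflow as tf

print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:", tf.config.list_physical_devices('GPU'))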

Trying to use NVIDIA Geforce 920M to run Tensorflow codes

I have a Samsung notebook with Windows 10, 8GB of RAM, an Intel Graphics 5500 GPU and a GeForce 920M. I have been trying to use my NVIDIA card to run code in Jupyter Notebook with TensorFlow. My TensorFlow code does not run on TensorFlow 2.0, so I had to install earlier versions. I installed CUDA 9.0, tensorflow_gpu-1.12.0, and cuDNN 7, and it didn't work; then I tried installing tensorflow_gpu-1.5.0 with Anaconda, and it worked, but using the Intel GPU instead of my NVIDIA. At that point I modified the settings for my GeForce in the NVIDIA Control Panel, but the Intel GPU is still being used instead of my NVIDIA. Why is this happening?
Try installing Nvidia's CUDA. Afterwards, when you run Tensorflow, it should run on your GPU.
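One way to see which device TensorFlow is actually placing ops on, for the TF 1.x versions mentioned above (1.5 / 1.12), is device-placement logging. This is only a sketch, and it will fail loudly if no CUDA GPU is visible to TensorFlow:

import tensorflow as tf

# pin a tiny computation to the first CUDA GPU; raises an error if none is visible
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0])
    b = tf.constant([4.0, 5.0, 6.0])
    c = a + b

# log_device_placement prints which device each op was assigned to
config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(c))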

using GPU in Hyper-V virtual machine

I'm trying to use my Nvidia GPU in a Hyper-V guest machine. I read that this can be done with RemoteFX from the Hyper-V settings. The problem is that when I try to add my GPU to RemoteFX, it says "this gpu does not meet the minimum requirements for RemoteFX". The only option that can be chosen is my Intel GPU. So can someone tell me what the minimum requirements are, and why my Intel GPU can be used but my Nvidia GPU cannot?
I'm using Windows 10 Enterprise, with an Nvidia GTX 850M GPU and an Intel i5 4200H with Intel HD Graphics 4600. My DirectX version is 12 and the driver model is WDDM 2.0.

Cannot use GPU with Tensorflow

I have TensorFlow installed with CUDA 7.5 and cuDNN 5.0. My graphics card is an NVIDIA GeForce 820M with compute capability 2.1. However, I get this error:
Ignoring visible gpu device (device: 0, name: GeForce 820M, pci bus id: 0000:08:00.0) with Cuda compute capability 2.1. The minimum required Cuda capability is 3.0.
Device mapping: no known devices.
Is there any way to run on a GPU with compute capability 2.1?
I found while searching online that it is cuDNN that requires this capability, so would installing an earlier version of cuDNN enable me to use the GPU?
tensorflow-gpu requires GPUs of compute capability 3.0 or higher for GPU acceleration and this has been true since the very first release of tensorflow.
cuDNN has also required GPUs of compute capability 3.0 or higher since the very first release of cuDNN.
With Keras (rather than low-level TensorFlow), you might be able to get it to run with PlaidML. I have been able to run Keras with GPU acceleration on AMD and NVidia GPUs (some of them old) with PlaidML. It's not as fast as CUDA, but much faster than your CPU.
For reference, I have run it on an old Macbook Pro (2012) with an NVidia 650 GPU (1.5 GB) as well as an AMD HD Radeon 750 3GB.
The caveat is that it needs to be Keras rather than lower-level TF. There are lots of articles on it, and it now has support from Intel.
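A minimal sketch of the PlaidML route described above, assuming plaidml-keras is installed and plaidml-setup has been run once to pick the GPU; PlaidML plugs in as a backend for standalone Keras (not tf.keras), so the environment variable must be set before keras is imported:

import os
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

import keras
from keras.models import Sequential
from keras.layers import Dense

# a tiny model just to confirm the backend builds and compiles on the GPU
model = Sequential([Dense(10, activation='relu', input_shape=(4,)), Dense(1)])
model.compile(optimizer='adam', loss='mse')
print(keras.backend.backend())  # should report the plaidml backend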