Keras R GPU configuration is using the integrated Intel GPU, not the NVIDIA card - tensorflow

I've gone through the arduous process of setting up GPU support for Keras, and it appears to have worked. Running the following code seems to confirm this:
> tensorflow::tf_gpu_configured()
2021-09-06 11:48:01.448811: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /device:GPU:0 with 3495 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3060 Laptop GPU, pci bus id: 0000:01:00.0, compute capability: 8.6
TensorFlow built with CUDA: TRUE
2021-09-06 11:48:01.450971: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /device:GPU:0 with 3495 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3060 Laptop GPU, pci bus id: 0000:01:00.0, compute capability: 8.6
GPU device name: /device:GPU:0
[1] TRUE
This seems to say it's using my NVIDIA card, but on my computer GPU 0 is the integrated Intel graphics and GPU 1 is the NVIDIA.
When I run the MNIST example model, it is clearly running on the integrated Intel graphics.
If I crank the model size up to where it exceeds the Intel GPU's 2 GB, to try to force it onto my NVIDIA card (6 GB), I get an overflow error.
Does anyone know a way to get Keras to see the NVIDIA card, or what my next troubleshooting steps might be?
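A minimal sketch, assuming the R tensorflow package wrapping TF 2.x, of listing the devices TensorFlow itself enumerates and enabling memory growth and device-placement logging. Note that TensorFlow only numbers CUDA-capable GPUs, so its /device:GPU:0 does not have to match Windows' adapter numbering:

library(tensorflow)

# TensorFlow enumerates only CUDA-capable GPUs, so the NVIDIA card should be
# the sole entry here regardless of how Windows numbers the adapters.
gpus <- tf$config$list_physical_devices("GPU")
print(gpus)

# Allocate GPU memory on demand instead of grabbing a fixed block up front;
# this must be set before the first model is built.
for (gpu in gpus) {
  tf$config$experimental$set_memory_growth(gpu, TRUE)
}

# Log the device each op is placed on while the model runs.
tf$debugging$set_log_device_placement(TRUE)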

Related

Requirements for TFJS on GPU: Trying to Compare the performance of TFJS-node and TFJS-node-GPU

I tried to use an NVIDIA GeForce GTX 1080 Ti graphics card with TFJS and followed the hardware and software requirements stated in the documentation, but I could not see a drastic difference in performance yet. It seems like it's ignoring the GPU.
I am unsure if I'm following the correct guidelines, as the above documentation seems to be for TensorFlow Python.
Do I need additional settings to use the GPU version of TensorFlow.js for Node?
The difference in performance is massive for anything but trivial tasks (where the CPU is faster simply because it takes time to get ops and data to and from the GPU).
Like you said, it's probably not using the GPU to start with, which is most commonly due to CUDA issues (it's non-trivial to get the CUDA installation and versions right).
Increase TensorFlow logging verbosity with export TF_CPP_MIN_LOG_LEVEL=0 before running the node app; you should see something like this in the output when it's using the GPU:
tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 9575 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3060, pci bus id: 0000:01:00.0, compute capability: 8.6
If you don't see that, go back to the docs and make sure CUDA is installed correctly.

GPU utilization is N/A when using nvidia-smi for GeForce GTX 1650 graphics card

I want to see the GPU usage of my graphics card, but it's showing N/A. I use Windows 10 x64 with an Nvidia GeForce GTX 1650.
I get the GPU availability status when executing my custom code in a Jupyter notebook, but nvidia-smi shows N/A for all processes. In Task Manager my Python process is running on GPU 1, and the power consumption shows that GPU 1 is being used.
Why is the GPU utilization N/A, and how can I fix or work around the issue?
Here is the output of the nvidia-smi command:
Here is the Task Manager process view:

Compute capability of GTX 870M

I was trying to run tensorflow-gpu on an ASUS laptop with a GTX 870M card on Ubuntu 16.04, but got this error message:
2018-10-07 16:54:50.537324: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1482] Ignoring visible gpu device (device: 0, name: GeForce GTX 870M, pci bus id: 0000:01:00.0, compute capability: 3.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5.
However, the GTX 870M's compute capability is listed as 5.0 (Maxwell). My questions are: (1) what is the GTX 870M's compute capability, and (2) can I run tensorflow-gpu (latest or nightly) with a GTX 870M? Thanks, CC.
I have a Razer Blade 14 2014 with the same GPU on Windows and ran into the same problems. Unfortunately, the CUDA compute capability is listed as 3.0 for this GPU.
On Linux you have an advantage: for compute capability 3.0 you can build TensorFlow from source using the official instructions (they recommend Docker for this): https://www.tensorflow.org/install/source
A post I found where the problem is solved may help as well.
Please let me know if you manage to make it work; I will try to install it using the latest stable Ubuntu MATE 18.04.2 LTS.

Using a GPU in a Hyper-V virtual machine

I'm trying to use my Nvidia GPU in a Hyper-V guest machine. I read that this can be done with RemoteFX from the Hyper-V settings. The problem is that when I try to add my GPU to RemoteFX, it says "this gpu does not meet the minimum requirements for RemoteFX". The only option that can be chosen is my Intel GPU. So can someone tell me what the minimum requirements are, and why my Intel GPU can be used but my Nvidia GPU cannot?
I'm using Windows 10 Enterprise, with an Nvidia GTX 850M GPU and an Intel i5-4200H with Intel HD Graphics 4600. My DirectX version is 12 and the driver model is WDDM 2.0.

Cannot use GPU with Tensorflow

I have TensorFlow installed with CUDA 7.5 and cuDNN 5.0. My graphics card is an NVIDIA GeForce 820M with compute capability 2.1. However, I get this error:
Ignoring visible gpu device (device: 0, name: GeForce 820M, pci bus id: 0000:08:00.0) with Cuda compute capability 2.1. The minimum required Cuda capability is 3.0.
Device mapping: no known devices.
Is there any way to run on the GPU with compute capability 2.1?
I searched online and found that it is cuDNN that requires this capability, so would installing an earlier version of cuDNN enable me to use the GPU?
tensorflow-gpu requires GPUs of compute capability 3.0 or higher for GPU acceleration and this has been true since the very first release of tensorflow.
cuDNN has also required GPUs of compute capability 3.0 or higher since the very first release of cuDNN.
With TensorFlow (using Keras), you might be able to get it to run with PlaidML. I have been able to run TensorFlow with GPU acceleration on AMD and NVidia GPUs (some of them old) using PlaidML. It's not as fast as CUDA, but much faster than your CPU.
For reference, I have run it on an old MacBook Pro (2012) with an NVidia 650 GPU (1.5 GB), as well as an AMD Radeon HD 750 (3 GB).
The caveat is that it needs to be Keras rather than lower-level TF. There are lots of articles on it, and it now has support from Intel.
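A minimal sketch of what pointing the R keras package at PlaidML might look like, assuming plaidml-keras is installed in the underlying Python environment and plaidml-setup has been run to select a device; use_backend() is part of the R keras package, but whether a given card works is down to PlaidML itself:

library(keras)

# Select the PlaidML backend for Keras (assumes plaidml-keras is installed
# and plaidml-setup has already been run to pick a device).
use_backend("plaidml")

# A small model just to confirm the backend runs on the chosen device.
model <- keras_model_sequential() %>%
  layer_dense(units = 128, activation = "relu", input_shape = c(784)) %>%
  layer_dense(units = 10, activation = "softmax")

model %>% compile(
  optimizer = "adam",
  loss = "categorical_crossentropy",
  metrics = "accuracy"
)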