NVIDIA GPUs and PhysX engine

How is the NVIDIA PhysX engine implemented on NVIDIA GPUs? Is it a co-processor, or are the physics algorithms implemented as fragment programs executed in the GPU pipeline?

PhysX is implemented on top of NVIDIA's CUDA (their GPGPU platform), so the physics computations run as ordinary GPU compute work. There isn't a separate co-processor or other dedicated piece of silicon.

Related

Requirements for TFJS on GPU: trying to compare the performance of tfjs-node and tfjs-node-gpu

I tried to use an NVIDIA GeForce GTX 1080 Ti graphics card with TFJS, following the hardware and software requirements stated in the documentation, but I have not seen any drastic difference in performance; it seems the GPU is being ignored.
I am unsure whether I am following the correct guidelines, as the documentation above seems to be for TensorFlow's Python packages.
Do I need additional configuration to use the GPU version of TensorFlow.js for Node?
The difference in performance is massive for anything but trivial tasks (where the CPU is faster simply because of the time it takes to move ops and data to and from the GPU).
Like you said, it's probably not using the GPU to start with, which is most commonly due to CUDA issues (it's non-trivial to get the CUDA installation and versions right).
Increase TensorFlow's logging verbosity with export TF_CPP_MIN_LOG_LEVEL=0 before running the Node app; you should see something like this in the output when it is using the GPU:
tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 9575 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3060, pci bus id: 0000:01:00.0, compute capability: 8.6
If you don't see that, go back to the docs and make sure CUDA is installed correctly.

External GPU with Vulkan

According to this Vulkan tutorial, I can use vkEnumeratePhysicalDevices to get a list of available GPUs. However, I don't see my external NVIDIA GPU in there, only my Intel iGPU.
This eGPU is connected via Thunderbolt and is running CUDA code just fine. Is there anything I might have missed? Is it supposed to work out of the box?
My machine is running Arch Linux with up-to-date proprietary NVIDIA drivers.
The eGPU is an NVIDIA GTX 1050 (Lenovo Graphics Dock). Is it possible that it simply does not support Vulkan somehow?
Vulkan support should work just as well with external GPUs (eGPUs). Seeing the eGPU enumerated as a Vulkan device may require the eGPU to be recognized by Xorg (or Wayland in the future).
See the recently created https://wiki.archlinux.org/title/External_GPU#Xorg for the changes that are probably required in your Xorg config.

I'd like to control how TensorFlow Lite uses the GPU; what should I study for that?

First, let me explain what I have to do.
My development environment is Tizen OS. You may be unfamiliar with it; it is a Linux-kernel-based OS targeting mobile, TVs, and other devices. My target device consists of an Exynos 5422 SoC and an ARM Mali-T628 GPU.
My main task is to implement a GPU library so that TensorFlow Lite's operations can use it.
I have built and installed TensorFlow Lite as an RPM package.
I have googled about TensorFlow and GPUs many times and only found information about CUDA, which is useless in my case; I did not see anything covering Tizen and a Mali GPU.
I assume Linux exposes GPU instructions or a library, much like it does for the CPU, but I cannot find them.
Can you suggest search keywords or documents?
You can go to NVIDIA's CUDA Toolkit page, where you can find the documentation and training options.
There is also the CUDA Programming Guide, which I myself find very useful and helpful for CUDA.
I believe one or two of those may help you.
CUDA is for NVIDIA GPUs. Mali is ARM's GPU, not NVIDIA's, so you CANNOT use CUDA on your hardware. Besides, if you wanted CUDA, you would be better off dropping TensorFlow Lite and using full TensorFlow.
If you want to use CUDA, get hardware with a supported NVIDIA GPU (e.g., an x64 machine with an NVIDIA GPU). Note that you can use tensorflow-gpu and CUDA/cuDNN on Tizen with x64 plus an NVIDIA GPU; you just need to be careful about the NVIDIA GPU kernel driver version and the userspace driver version. Because NVIDIA's userspace GPU driver and CUDA/cuDNN are statically built, their Linux drivers are compatible with Tizen. (I've tested tensorflow-gpu and CUDA/cuDNN on Tizen with NVIDIA driver version 111... probably in winter 2017.)
If you want to use Tizen/TensorFlow Lite on the given hardware, forget CUDA.

How do I install TensorFlow with GPU support on a Mac?

My MacBook Pro doesn't have an NVIDIA GPU, so it's not possible to run CUDA. I'm wondering which of the earlier versions of TensorFlow have GPU support for Mac OS, and how can I install one through Anaconda?
As stated on the official site:
Note: As of version 1.2, TensorFlow no longer provides GPU support on
Mac OS X.
...so installing any earlier version should be fine. But since your hardware does not have an NVIDIA graphics card with CUDA support, it doesn't matter anyway.
As for installing TensorFlow on macOS using Anaconda, you can just follow the steps nicely described in the official docs.
TensorFlow relies on CUDA for GPU use, so you need an NVIDIA GPU. There is experimental work on adding OpenCL support to TensorFlow, but it is not supported on macOS.
On an anecdotal note, I've heard bad things from people trying to use AMD cards for deep learning. Basically, AMD doesn't care much about deep learning; they change their interfaces without notice, so things break or run slower than on the CPU.
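If you do install one of the older (pre-1.2) builds and want to confirm what TensorFlow actually sees, listing the local devices is a simple sanity check. This is a generic TensorFlow 1.x sketch, not anything specific to the macOS builds; on a Mac without an NVIDIA GPU it will only report a CPU device.

# List the devices a TensorFlow 1.x installation can use.
from tensorflow.python.client import device_lib

def list_devices():
    for dev in device_lib.list_local_devices():
        # device_type is "CPU" or "GPU"; physical_device_desc includes
        # the GPU name and compute capability when a GPU is present.
        print(dev.device_type, dev.name, dev.physical_device_desc)

if __name__ == "__main__":
    list_devices()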

Cannot use GPU with TensorFlow

I have TensorFlow installed with CUDA 7.5 and cuDNN 5.0. My graphics card is an NVIDIA GeForce 820M with compute capability 2.1. However, I get this error:
Ignoring visible gpu device (device: 0, name: GeForce 820M, pci bus id: 0000:08:00.0) with Cuda compute capability 2.1. The minimum required Cuda capability is 3.0.
Device mapping: no known devices.
Is there any way to run on the GPU with a compute capability 2.1 card?
I found online that it is cuDNN that requires this capability, so would installing an earlier version of cuDNN enable me to use the GPU?
tensorflow-gpu requires a GPU of compute capability 3.0 or higher for GPU acceleration, and this has been true since the very first release of TensorFlow.
cuDNN has also required GPUs of compute capability 3.0 or higher since the very first release of cuDNN.
With TensorFlow models written against Keras, you might be able to get them to run with PlaidML. I have been able to run Keras models with GPU acceleration on AMD and NVIDIA GPUs (some of them old) with PlaidML. It's not as fast as CUDA, but it is much faster than your CPU.
For reference, I have run it on an old MacBook Pro (2012) with an NVIDIA 650 GPU (1.5 GB) as well as an AMD HD Radeon 750 3 GB.
The caveat is that your model needs to use Keras rather than lower-level TensorFlow. There are lots of articles on it, and it now has support from Intel.
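As a rough sketch of what the PlaidML route looks like: install the plaidml-keras package, run plaidml-setup to pick the GPU, and point standalone Keras at the PlaidML backend before importing it. The model below is only an illustrative placeholder.

# Minimal sketch: make standalone Keras use the PlaidML backend instead of
# TensorFlow. Assumes `pip install plaidml-keras` and that `plaidml-setup`
# has already been run to select the GPU.
import os
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

import keras  # must be imported after the environment variable is set
from keras.models import Sequential
from keras.layers import Dense

# Placeholder model just to show that ordinary Keras code runs unchanged;
# the layers execute on whatever device plaidml-setup selected.
model = Sequential([
    Dense(64, activation="relu", input_shape=(100,)),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()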