OpenACC PGI compiler GPU support

Which GPUs does the PGI compiler support (OpenACC and CUDA unified memory)? Kepler, Maxwell, Pascal? What about the GTX 670, 770, or 970? The M series (for notebooks)? Is it true that only Tesla and the Pascal series are supported?

While the exact devices PGI supports change with the compiler version, PGI 18.10 supports NVIDIA GPU devices with compute capabilities 3.0 to 7.0. (See: https://www.pgroup.com/resources/docs/18.10/x86/pgi-release-notes/index.htm#compute-cap)
Officially, PGI supports the Tesla product line, but by "support" we mean the devices we've done extensive testing on. In practice, other NVIDIA devices, such as GeForce and Quadro products with the same compute capabilities, should work as well.
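The supported range above can be expressed as a simple check. This is only an illustrative sketch, not anything PGI ships; the compute-capability values are taken from NVIDIA's published specs for these cards:

```python
# Sketch: does a GPU's compute capability fall in the range
# PGI 18.10 supports (3.0 to 7.0 inclusive, per the release notes)?

SUPPORTED_RANGE = (3.0, 7.0)  # PGI 18.10, inclusive bounds

# Illustrative sample of devices and their compute capabilities
COMPUTE_CAPABILITY = {
    "GTX 670": 3.0,      # Kepler
    "GTX 770": 3.0,      # Kepler
    "GTX 970": 5.2,      # Maxwell
    "Tesla V100": 7.0,   # Volta
}

def pgi_1810_supports(device: str) -> bool:
    """Return True if the device's compute capability is in range."""
    cc = COMPUTE_CAPABILITY[device]
    lo, hi = SUPPORTED_RANGE
    return lo <= cc <= hi

print(pgi_1810_supports("GTX 970"))  # True
```

So all three GeForce cards asked about fall inside the 3.0–7.0 window, which is why they generally work even though only Tesla is officially tested.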

Related

Cytoscape: How to select a specific GPU card? NVIDIA over Intel

My PC has two cards: one Intel and one NVIDIA. Although I select the NVIDIA card in the Cytoscape Desktop OpenCL preferences, everything runs on the Intel one.
How can I force it to use the NVIDIA one?
I already made sure I have installed all the appropriate drivers.
Many thanks!

External GPU with Vulkan

According to this Vulkan tutorial, I can use vkEnumeratePhysicalDevices to get a list of available GPUs. However, I don't see my external NVIDIA GPU in there, only my Intel iGPU.
This eGPU is connected via Thunderbolt and is running CUDA code just fine. Is there anything I might have missed? Is it supposed to work out of the box?
My machine is running Arch Linux with up-to-date proprietary NVIDIA drivers.
The eGPU is a NVIDIA GTX 1050 (Lenovo Graphics Dock). Is it possible that it just does not support Vulkan somehow?
Vulkan should work just as well with external GPUs (eGPUs). However, seeing the eGPU enumerated as a Vulkan device may require the eGPU to be recognized by Xorg (or Wayland in the future).
See the recently created https://wiki.archlinux.org/title/External_GPU#Xorg for the changes probably required in your Xorg config.
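For reference, the approach described on that wiki page amounts to declaring the eGPU in an Xorg device section. The snippet below is only a sketch: the file path and BusID are examples and must match your machine, and the AllowExternalGpus option needs a reasonably recent NVIDIA driver (435 or newer):

```conf
# /etc/X11/xorg.conf.d/10-egpu.conf  (example path)
Section "Device"
    Identifier "NVIDIA eGPU"
    Driver     "nvidia"
    # BusID is machine-specific; find yours with: lspci | grep -i nvidia
    BusID      "PCI:10:0:0"
    # Tells the NVIDIA driver not to reject hot-pluggable (Thunderbolt) GPUs
    Option     "AllowExternalGpus" "True"
EndSection
```

After restarting Xorg with the eGPU attached, vkEnumeratePhysicalDevices should report it alongside the iGPU.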

Compute capability of GTX 870M

I was trying to run tensorflow-gpu on an ASUS laptop with a GTX 870M card on Ubuntu 16.04, but got this error message:
2018-10-07 16:54:50.537324: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1482] Ignoring visible gpu device (device: 0, name: GeForce GTX 870M, pci bus id: 0000:01:00.0, compute capability: 3.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5.
However, the GTX 870M's compute capability is listed as 5.0 (Maxwell). My questions are: (1) what is the GTX 870M's compute capability, and (2) can I run tensorflow-gpu (latest or nightly) with the GTX 870M? Thanks, CC.
I have a Razer Blade 14 (2014) with the same GPU on Windows and ran into the same problem. Unfortunately, the CUDA compute capability is listed as 3.0 for this GPU.
On Linux you have an advantage: for compute capability 3.0 you can build TensorFlow from source using the official instructions, which recommend Docker for this: https://www.tensorflow.org/install/source
A post I found where the problem is solved may help as well.
Please let me know if you manage to make it work; I will try installing it on the latest stable Ubuntu MATE 18.04.2 LTS.

Cannot use GPU with Tensorflow

I have TensorFlow installed with CUDA 7.5 and cuDNN 5.0. My graphics card is an NVIDIA GeForce 820M with compute capability 2.1. However, I get this error:
Ignoring visible gpu device (device: 0, name: GeForce 820M, pci bus id: 0000:08:00.0) with Cuda compute capability 2.1. The minimum required Cuda capability is 3.0.
Device mapping: no known devices.
Is there any way to use the GPU with a compute capability of 2.1?
I found online that it is cuDNN that requires this capability, so will installing an earlier version of cuDNN enable me to use the GPU?
tensorflow-gpu requires GPUs of compute capability 3.0 or higher for GPU acceleration, and this has been true since the very first release of TensorFlow.
cuDNN has likewise required GPUs of compute capability 3.0 or higher since its very first release.
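The gate TensorFlow applies at startup can be sketched in plain Python. This is only an illustration of the logic behind the error messages quoted above; real TensorFlow reads the (major, minor) capability from the CUDA driver rather than taking it as an argument:

```python
# Sketch of tensorflow-gpu's compute-capability gate.
# (2, 1) means compute capability 2.1, (3, 0) means 3.0, etc.

MIN_CUDA_CAPABILITY = (3, 0)  # tensorflow-gpu's minimum

def gpu_visible(name: str, capability: tuple) -> bool:
    """Return True if TensorFlow would use this GPU, else log and skip it."""
    if capability < MIN_CUDA_CAPABILITY:
        print(f"Ignoring visible gpu device {name} with Cuda compute "
              f"capability {capability[0]}.{capability[1]}.")
        return False
    return True

gpu_visible("GeForce 820M", (2, 1))     # returns False: below the 3.0 minimum
gpu_visible("GeForce GTX 970", (5, 2))  # returns True
```

Since the check compares against a hard minimum baked into the build, downgrading cuDNN does not help; the 820M's 2.1 capability is simply below the floor.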
With TensorFlow (using Keras), you might be able to get it to run with PlaidML. I have been able to run Keras models with GPU acceleration on AMD and NVIDIA GPUs (some quite old) with PlaidML. It's not as fast as CUDA, but much faster than your CPU.
For reference, I have run it on an old Macbook Pro (2012) with an NVidia 650 GPU (1.5 GB) as well as an AMD HD Radeon 750 3GB.
The caveat is that it needs to be Keras rather than lower-level TF. There are lots of articles on it, and it now has support from Intel.
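For concreteness, with the plaidml-keras package the backend is selected through an environment variable that must be set before Keras is imported. A minimal sketch, assuming `pip install plaidml-keras` and the `plaidml-setup` device-selection step have already been run:

```python
import os

# Select the PlaidML backend; this must happen before `import keras`.
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

# import keras  # Keras would now dispatch its ops through PlaidML
print(os.environ["KERAS_BACKEND"])
```

From that point on, ordinary Keras model code runs unchanged on whichever OpenCL-capable GPU was chosen during plaidml-setup.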

NVIDIA GPUs and PhysX engine

How is the NVIDIA PhysX engine implemented in NVIDIA GPUs: is it a co-processor, or are the physics algorithms implemented as fragment programs executed in the GPU pipeline?
PhysX is implemented using NVIDIA's CUDA (their GPGPU platform). There isn't a separate co-processor or other dedicated piece of silicon.