How to run JOGL code on the GPU?

How do I run JOGL code on the GPU? How do I check whether JOGL code is running on the CPU or the GPU? How do I select a particular GPU when multiple GPUs are present?

JOGL lets you choose a profile and a set of capabilities; these are used to pick a driver, and some of them aren't hardware accelerated. You can use the boolean parameter "favorHardwareRasterizer" (accepted by several GLProfile factory methods, e.g. GLProfile.getMaxProgrammable(boolean)) as a hint to tell JOGL that you prefer a hardware-accelerated (GPU) profile.
GLContext.isHardwareRasterizer tells you whether you benefit from hardware acceleration. GL.glGetString(GL.GL_RENDERER) and GL.glGetString(GL.GL_VENDOR) give you information about the renderer and vendor of the OpenGL driver.
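To illustrate, here is a minimal sketch that puts those calls together: it requests a profile with favorHardwareRasterizer set to true and prints the rasterizer and renderer information once a context is current. It assumes JOGL 2.x (package com.jogamp.opengl; older releases used javax.media.opengl) and uses an AWT GLCanvas only to obtain a context.

```java
import com.jogamp.opengl.GL;
import com.jogamp.opengl.GLAutoDrawable;
import com.jogamp.opengl.GLCapabilities;
import com.jogamp.opengl.GLEventListener;
import com.jogamp.opengl.GLProfile;
import com.jogamp.opengl.awt.GLCanvas;

public class RendererInfo implements GLEventListener {

    public static void main(String[] args) {
        // Ask for the most capable programmable profile, hinting that a
        // hardware (GPU) rasterizer is preferred.
        GLProfile profile = GLProfile.getMaxProgrammable(true /* favorHardwareRasterizer */);
        GLCapabilities caps = new GLCapabilities(profile);
        caps.setHardwareAccelerated(true);

        GLCanvas canvas = new GLCanvas(caps);
        canvas.addGLEventListener(new RendererInfo());

        javax.swing.JFrame frame = new javax.swing.JFrame("Renderer info");
        frame.setDefaultCloseOperation(javax.swing.JFrame.EXIT_ON_CLOSE);
        frame.getContentPane().add(canvas);
        frame.setSize(320, 240);
        frame.setVisible(true);
    }

    @Override
    public void init(GLAutoDrawable drawable) {
        GL gl = drawable.getGL();
        // True when the driver picked for this context is hardware accelerated.
        System.out.println("Hardware rasterizer: " + drawable.getContext().isHardwareRasterizer());
        // The renderer/vendor strings identify the OpenGL driver actually in use.
        System.out.println("GL_RENDERER: " + gl.glGetString(GL.GL_RENDERER));
        System.out.println("GL_VENDOR:   " + gl.glGetString(GL.GL_VENDOR));
    }

    @Override public void display(GLAutoDrawable drawable) { }
    @Override public void reshape(GLAutoDrawable drawable, int x, int y, int width, int height) { }
    @Override public void dispose(GLAutoDrawable drawable) { }
}
```

On a machine with working GPU drivers, GL_RENDERER typically names the GPU; strings such as "llvmpipe" or "GDI Generic" indicate a software rasterizer.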
You can't pick a particular GPU; there is no support for NVIDIA GPU affinity yet. "Multiple GPUs" is vague anyway: it can mean Optimus, SLI, or CrossFire.
Please ask JOGL-specific questions on our official forum instead; only a few contributors and maintainers come here.

Related

Does tensorflow-quantum support GPU, and if so how do I make it use mine?

I am getting started with tensorflow-quantum for some QML circuit simulations. I have everything configured correctly for TensorFlow with GPU, and when I run print(tf.config.list_physical_devices('GPU')), it reports the presence of my GPU.
I've done some Googling, though, and I've come across a few things suggesting that tensorflow-quantum doesn't actually support GPU acceleration for simulations (e.g. MichaelBroughton's first reply here, and this issue, which is still open). However, it's unclear to me how up to date this state of affairs is. I can't find anything about adding GPU support in the version notes.
Does tensorflow-quantum currently support GPU? If so, how do I (a) make it use my GPU for simulations and (b) verify that it is doing so?

How can I determine the generation/codename of an AMD GPU in Linux?

I want to detect the AMD GPU generation in Python code. My use case: to run a specific application (DaVinci Resolve), the amdgpu-pro drivers are required for GPUs older than Vega, but they are not required when the GPU is Vega or a newer generation. See the list of AMD GPUs on Wikipedia. I am writing a script (davinci-resolve-checker) that tells the user whether they need that driver.
The question is: how do I differentiate between GPU generations/chip codenames? I am using pylspci to get information about the GPUs that are present. Is there a list of generations that I can check against?
There is a PCI ID list published here: https://pci-ids.ucw.cz/v2.2/pci.ids
In that list, under Advanced Micro Devices, Inc. [AMD/ATI] (vendor ID 1002), you can see their devices. For example, for the AMD Radeon PRO W6600 GPU there is the line 73e3 Navi 23 WKS-XL [Radeon PRO W6600].
There you can check whether the device name contains the codename substring; in this case it is Navi.
For this question, the codenames that currently describe Vega and above are Vega and Navi. So if the device name contains neither substring, I consider the card "older than Vega".
For programming this, you do not actually need the published list, as you can just take the device name from pylspci's VerboseParser (device.device.name). Just in case, the list is also located at /usr/share/hwdata/pci.ids on the system.
This is probably not a very reliable approach, but I have not found a better one yet. Any suggestions are welcome.
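The checker script itself is Python (pylspci), but the substring test is easy to sketch in any language. Below is a rough Java illustration of the same idea: it looks a device ID up in /usr/share/hwdata/pci.ids under vendor 1002 and checks the resulting name for the Vega/Navi codenames. The file path, vendor ID, and example device ID (73e3, the W6600 from above) come from this answer; obtaining the device ID itself (e.g. from lspci -n or /sys/class/drm/card0/device/device) is assumed.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class AmdGenerationCheck {

    // Codenames treated as "Vega or newer" in this answer.
    private static final String[] MODERN_CODENAMES = { "Vega", "Navi" };

    public static void main(String[] args) throws IOException {
        // PCI device ID of the GPU, e.g. "73e3" for the Radeon PRO W6600.
        String deviceId = args.length > 0 ? args[0] : "73e3";

        String name = lookupDeviceName("1002", deviceId); // 1002 = AMD/ATI vendor ID
        if (name == null) {
            System.out.println("Device " + deviceId + " not found in pci.ids");
            return;
        }

        boolean vegaOrNewer = false;
        for (String codename : MODERN_CODENAMES) {
            if (name.contains(codename)) {
                vegaOrNewer = true;
            }
        }
        System.out.println(name + " -> " + (vegaOrNewer
                ? "Vega or newer (amdgpu-pro not required)"
                : "older than Vega (amdgpu-pro required)"));
    }

    /** Looks a device name up in pci.ids: vendor lines are unindented, device lines use one tab. */
    private static String lookupDeviceName(String vendorId, String deviceId) throws IOException {
        List<String> lines = Files.readAllLines(Paths.get("/usr/share/hwdata/pci.ids"));
        boolean inVendor = false;
        for (String line : lines) {
            if (line.isEmpty() || line.startsWith("#")) {
                continue;
            }
            if (!line.startsWith("\t")) {
                inVendor = line.startsWith(vendorId);       // vendor line
            } else if (inVendor && !line.startsWith("\t\t") // device line (skip subsystem lines)
                    && line.trim().startsWith(deviceId)) {
                return line.trim().substring(deviceId.length()).trim();
            }
        }
        return null;
    }
}
```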

Is it safer to use OpenCL rather than SYCL when the objective is to have the most hardware-compatible program?

My objective is to be able to parallelize code so that it can run on a GPU, and the holy grail would be software that can run in parallel on any GPU or even CPU (Intel, NVIDIA, AMD, and so on).
From what I understood, the best solution would be to use OpenCL. But shortly after that, I also read about SYCL, which is supposed to simplify writing code that runs on the GPU.
But is it just that? Isn't it better to use something lower level, to be sure it can run on as much hardware as possible?
I know that the supported implementations are listed on the Khronos Group website, but I am reading everything and its opposite on the Internet (e.g. "if an NVIDIA card supports CUDA, then it supports OpenCL", or "NVIDIA cards will never work with OpenCL", even though OpenCL is supposed to work with everything)...
This is a new topic for me and there is a lot of information on the Internet... It would be great if someone could give me a simple answer to this question.
Probably yes.
OpenCL is supported on all AMD/Nvidia/Intel GPUs and on all Intel CPUs since around 2009. For best compatibility with almost any device, use OpenCL 1.2. The nice thing is that the OpenCL Runtime is included in the graphics drivers, so you don't have to install anything extra to work with it or to get it working on another machine.
SYCL, on the other hand, is newer and not yet as well established. For example, it is not (yet) officially supported on Nvidia GPUs: https://forums.developer.nvidia.com/t/is-sycl-available-on-cuda/37717/7
But there are already SYCL implementations that are compatible with Nvidia/AMD GPUs, essentially built on top of CUDA or OpenCL 1.2; see here: https://stackoverflow.com/a/63468372/9178992
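Sticking with Java (the language of the main question on this page), here is a minimal sketch of what device discovery looks like once the vendor's driver, and with it the OpenCL runtime, is installed. It assumes JogAmp's JOCL binding (com.jogamp.opencl) is on the classpath; the underlying C calls are clGetPlatformIDs and clGetDeviceIDs.

```java
import com.jogamp.opencl.CLDevice;
import com.jogamp.opencl.CLPlatform;

public class ListOpenCLDevices {
    public static void main(String[] args) {
        // Each installed driver (AMD, Nvidia, Intel, ...) exposes its own platform.
        for (CLPlatform platform : CLPlatform.listCLPlatforms()) {
            System.out.println(platform.getName() + " - " + platform.getVendor());
            // Every GPU/CPU/accelerator the driver can handle shows up as a device.
            for (CLDevice device : platform.listCLDevices()) {
                System.out.println("  " + device.getType() + ": " + device.getName());
            }
        }
    }
}
```

On Linux, the clinfo command-line tool prints the same information without writing any code.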

VK_KHR_ray_tracing_pipeline supported by which AMD GPUs on Linux?

Is there a comprehensive list where I can check the supported Vulkan extensions for all AMD GPUs? I have been looking all over the internet but can't find any information on this.
I currently have an RX 570, but I thought the Vulkan API would feature a fallback mode for cards lacking hardware acceleration.
I think I installed the amdgpu driver correctly, but when I try to run the raytracing_simple example, it says that the RX 570 is lacking the requested extensions.
AMD introduced ray tracing support with the RX 6x00 series. A fallback mode for older hardware would have to be implemented by the vendor, which AMD has not done. So you need an RX 6x00 GPU for hardware-accelerated ray tracing on Linux.
You can check VK_KHR_ray_tracing_pipeline support on Linux here: https://vulkan.gpuinfo.org/listdevicescoverage.php?extension=VK_KHR_ray_tracing_pipeline&platform=linux
That's a Vulkan hardware database I'm maintaining, which also has listings for extension support on different platforms. The data provided there comes from user-uploaded reports. While not an official Vulkan database, thanks to regular contributions it's mostly complete and gives a good overview of Vulkan support for different hardware.
Note: As mentioned above, reports are submitted by users, so the list may not be 100% complete.
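If you also want to query your own machine rather than the database, the check boils down to a call to vkEnumerateDeviceExtensionProperties. Here is a minimal sketch using LWJGL 3's Vulkan binding (an assumption; any Vulkan binding exposes the same calls, and vulkaninfo prints the same list from the command line):

```java
import org.lwjgl.PointerBuffer;
import org.lwjgl.system.MemoryStack;
import org.lwjgl.vulkan.*;

import java.nio.IntBuffer;

import static org.lwjgl.vulkan.KHRRayTracingPipeline.VK_KHR_RAY_TRACING_PIPELINE_EXTENSION_NAME;
import static org.lwjgl.vulkan.VK10.*;

public class RayTracingSupportCheck {
    public static void main(String[] args) {
        try (MemoryStack stack = MemoryStack.stackPush()) {
            // A bare Vulkan instance is enough to enumerate physical devices.
            VkInstanceCreateInfo createInfo = VkInstanceCreateInfo.calloc(stack)
                    .sType(VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO);
            PointerBuffer pInstance = stack.mallocPointer(1);
            if (vkCreateInstance(createInfo, null, pInstance) != VK_SUCCESS) {
                throw new IllegalStateException("vkCreateInstance failed");
            }
            VkInstance instance = new VkInstance(pInstance.get(0), createInfo);

            IntBuffer pCount = stack.mallocInt(1);
            vkEnumeratePhysicalDevices(instance, pCount, null);
            PointerBuffer pDevices = stack.mallocPointer(pCount.get(0));
            vkEnumeratePhysicalDevices(instance, pCount, pDevices);

            for (int i = 0; i < pCount.get(0); i++) {
                try (MemoryStack frame = MemoryStack.stackPush()) {
                    VkPhysicalDevice device = new VkPhysicalDevice(pDevices.get(i), instance);

                    VkPhysicalDeviceProperties props = VkPhysicalDeviceProperties.calloc(frame);
                    vkGetPhysicalDeviceProperties(device, props);

                    // List the device extensions and look for the ray tracing pipeline extension.
                    IntBuffer pExtCount = frame.mallocInt(1);
                    vkEnumerateDeviceExtensionProperties(device, (String) null, pExtCount, null);
                    VkExtensionProperties.Buffer extensions =
                            VkExtensionProperties.calloc(pExtCount.get(0), frame);
                    vkEnumerateDeviceExtensionProperties(device, (String) null, pExtCount, extensions);

                    boolean supported = false;
                    for (int j = 0; j < extensions.capacity(); j++) {
                        if (VK_KHR_RAY_TRACING_PIPELINE_EXTENSION_NAME
                                .equals(extensions.get(j).extensionNameString())) {
                            supported = true;
                        }
                    }
                    System.out.println(props.deviceNameString() + ": VK_KHR_ray_tracing_pipeline "
                            + (supported ? "supported" : "NOT supported"));
                }
            }
            vkDestroyInstance(instance, null);
        }
    }
}
```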

Is Kaveri an HSA-compliant processor?

I have looked at lots of HSA introductions and found that an HSA-compliant GPU should be preemptible and support context switching.
But the Wikipedia article "AMD Accelerated Processing Unit" says that GPU compute context switching and GPU graphics preemption will only be supported starting with the Carrizo APU (2015).
So I wonder: is Kaveri an HSA-compliant processor?
Thanks!
Kaveri is a first-generation HSA-compliant APU.
As a first-generation part, it is still missing some features of the HSA specification. One of those features is mid-wave preemption, i.e. the ability to preempt graphics/compute work in the middle, context-switch to a different wave (work), and then resume the original wave.
Without this feature, Kaveri needs to finish the current wave, and only then can it move on to a different wave.
Having said that, there is already infrastructure for running HSA applications on Kaveri on Linux (Ubuntu 13/14). See https://github.com/HSAFoundation/Linux-HSA-Drivers-And-Images-AMD for the kernel bits and https://github.com/HSAFoundation/Okra-Interface-to-HSA-Device for the userspace bits.
This infrastructure also supports the Aparapi and Sumatra projects on Kaveri, i.e. running Java code on the GPU; a small example is sketched below.
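As a flavour of what running Java code on the GPU looks like with Aparapi, here is a minimal vector-addition kernel. This is a sketch against the community Aparapi artifact (package com.aparapi; the AMD-era releases used com.amd.aparapi): the body of run() is translated to OpenCL and dispatched to the GPU when one is available, with a Java thread pool fallback otherwise.

```java
import com.aparapi.Kernel;
import com.aparapi.Range;

public class VectorAdd {
    public static void main(String[] args) {
        final int size = 1_000_000;
        final float[] a = new float[size];
        final float[] b = new float[size];
        final float[] sum = new float[size];
        for (int i = 0; i < size; i++) {
            a[i] = i;
            b[i] = 2 * i;
        }

        Kernel kernel = new Kernel() {
            @Override
            public void run() {
                int gid = getGlobalId();     // one work-item per array element
                sum[gid] = a[gid] + b[gid];
            }
        };

        // Translated to OpenCL and run on the GPU if possible, otherwise on the CPU (JTP mode).
        kernel.execute(Range.create(size));
        System.out.println("Execution mode: " + kernel.getExecutionMode());
        kernel.dispose();
    }
}
```

Sumatra took a different route (offloading Java 8 stream lambdas through the JVM itself), but the Kaveri HSA driver stack above is what both projects relied on.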
Hope this helps.