I want to detect the AMD GPU generation in Python code. My use case is that running a specific application (DaVinci Resolve) requires the amdgpu-pro drivers for GPU cards older than Vega, while the amdgpu-pro drivers are not required when the AMD GPU is Vega or a newer generation. See the list of AMD GPUs on Wikipedia. I am writing a script (davinci-resolve-checker) that tells the user whether they need to use that driver.
The question is, how do I differentiate between GPU generations/chip codenames? I am using pylspci to get information about the GPUs that are present. Is there a list of generations that I can check against?
There is a PCI ID list published here: https://pci-ids.ucw.cz/v2.2/pci.ids
In that list, under Advanced Micro Devices, Inc. [AMD/ATI] (vendor ID 1002), you can see their devices. For example, for the AMD Radeon PRO W6600 GPU there is the line 73e3 Navi 23 WKS-XL [Radeon PRO W6600].
There you can check whether the device name contains the codename substring; in this case it is Navi.
For this question, the codenames that currently describe Vega and above are Vega and Navi. So if the device name contains neither substring, I consider the card "older than Vega".
For programming this, you do not actually need the list itself, as you can just take the device name from pylspci's VerboseParser via device.device.name. But just in case, the list is located at /usr/share/hwdata/pci.ids on the system.
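A minimal sketch of that check, assuming pylspci's VerboseParser exposes vendor/class/device names as attributes in the way described above (the attribute names and the codename list are the parts you may need to adjust):

from pylspci.parsers import VerboseParser

# Codenames covering Vega and newer generations (see the pci.ids list above).
MODERN_CODENAMES = ("Vega", "Navi")

def needs_amdgpu_pro():
    """Return True if a pre-Vega AMD display device is present."""
    for dev in VerboseParser().run():
        cls = dev.cls.name or ""
        vendor = dev.vendor.name or ""
        # Only look at AMD/ATI display devices, not audio/USB functions.
        if "VGA" not in cls and "Display" not in cls and "3D" not in cls:
            continue
        if "AMD" not in vendor and "ATI" not in vendor:
            continue
        name = dev.device.name or ""
        if not any(codename in name for codename in MODERN_CODENAMES):
            return True  # pre-Vega card found, so amdgpu-pro is needed
    return False

print("amdgpu-pro required:", needs_amdgpu_pro())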
It is probably not a very reliable approach, but I have not found a better one yet. Any suggestions are welcome.
My objective is to be able to parallelize code so that it can run on a GPU, and the holy grail would be software that can run in parallel on any GPU or even CPU (Intel, NVIDIA, AMD, and so on).
From what I understand, the best solution would be to use OpenCL. But shortly after that, I also read about SYCL, which is supposed to simplify writing code that runs on the GPU.
But is it really just that? Isn't it better to use a lower-level language to be sure it can be used on as much hardware as possible?
I know that all the compatibility information is listed on The Khronos Group website, but I am reading everything and its opposite on the Internet (for example, that if an NVIDIA card supports CUDA then it supports OpenCL, or that NVIDIA cards will never work with OpenCL, even though OpenCL is supposed to work with everything)...
This is a new topic for me and there is a lot of information on the Internet... It would be great if someone could give me a simple answer to this question.
Probably yes.
OpenCL is supported on all AMD/Nvidia/Intel GPUs and on all Intel CPUs since around 2009. For best compatibility with almost any device, use OpenCL 1.2. The nice thing is that the OpenCL Runtime is included in the graphics drivers, so you don't have to install anything extra to work with it or to get it working on another machine.
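If you want to verify what the drivers on a given machine actually expose, you can enumerate the OpenCL platforms and devices. clinfo does this from the command line; the sketch below does the same from Python with pyopencl (just one convenient option, not a requirement):

import pyopencl as cl

# List every OpenCL platform (runtime) and the devices it exposes.
for platform in cl.get_platforms():
    print(platform.name, "-", platform.version)
    for device in platform.get_devices():
        print("   ", device.name, "-", device.version)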
SYCL, on the other hand, is newer and not yet as well established. For example, it is not officially supported (yet) on Nvidia GPUs: https://forums.developer.nvidia.com/t/is-sycl-available-on-cuda/37717/7
But there are already SYCL implementations that are compatible with Nvidia/AMD GPUs, essentially built on top of CUDA or OpenCL 1.2; see here: https://stackoverflow.com/a/63468372/9178992
I am currently working on a project using SYCL to apply an unsharp mask to an image. My machine has an NVIDIA and an Intel GPU inside it. I am starting with the following code:
default_selector deviceSelector;
queue myQueue(deviceSelector);
The issue is that the line "default_selector deviceSelector;" automatically grabs the NVIDIA GPU in my machine, which breaks all the code that follows, as SYCL does not work with NVIDIA.
Therefore my question is: how can I force "default_selector deviceSelector;" to get my Intel GPU and not the NVIDIA GPU? Perhaps I can say something like:
if (device.has_extension(cl::sycl::string_class("Intel")))
if (device.get_info<info::device::device_type>() == info::device_type::gpu)
then select this GPU;//pseudo code
Thus making the code skip over the NVIDIA GPU and guaranteeing that my Intel GPU is selected.
You are checking whether the extensions contain an entry called "Intel", which they will not. Extensions are features the device supports, such as SPIR-V. You can see the supported extensions by calling clinfo at the command line. To choose the Intel GPU, you need to check the manufacturer (or name) of the device to select the correct one.
So, in the sample code for custom device selection, https://github.com/codeplaysoftware/computecpp-sdk/blob/master/samples/custom-device-selector.cpp#L46, you would just need something like:
// Inside the custom selector's operator(): return a high score
// for the device you want the queue to pick.
if (device.get_info<info::device::name>() == "Name of device") {
  return 100;
}
You could print out the value of device.get_info<info::device::name>() to get the exact string to check against.
I need a system for OpenCL programming with the following restrictions:
The discrete GPU must not run as a display card --> I can do that from the BIOS.
The internal GPU of the AMD APU must be used as the display GPU --> I can do that from the BIOS.
OpenCL must not recognize the internal APU's GPU and must always default to the discrete GPU.
Why do I need this?
It is because I am working on GPU code that requires the GPU's BIOS to be flashed and a custom BIOS to be installed, which makes the GPU unusable for display.
AMD boards can't boot without a VGA card, so I am getting an APU that has an internal GPU.
The code base I am working on can't deal with conflicting GPUs, so I need to hide the APU's GPU so that OpenCL does not see it.
How can I approach it?
According to the AMD OpenCL Programming Guide, AMD's drivers support the GPU_DEVICE_ORDINAL environment variable to configure which devices are used (Section 2.3.3):
In some cases, the user might want to mask the visibility of the GPUs seen by the OpenCL application. One example is to dedicate one GPU for regular graphics operations and the other three (in a four-GPU system) for Compute. To do that, set the GPU_DEVICE_ORDINAL environment parameter, which is a comma-separated list variable:
Under Windows: set GPU_DEVICE_ORDINAL=1,2,3
Under Linux: export GPU_DEVICE_ORDINAL=1,2,3
You'll first need to determine the ordinal for the devices you want to include. For this, I would recommend using clinfo with the -l switch, which will give you a basic tree of the available OpenCL platforms and devices. If the devices are listed with the APU first and then the dedicated GPU, you would want to only enable device 1 (the GPU), and would set the environment variable to GPU_DEVICE_ORDINAL=1.
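If you launch the OpenCL code from a script, you can also set the variable programmatically, as long as it is set before the OpenCL runtime is initialized. A minimal sketch, assuming pyopencl and that the discrete GPU is ordinal 1 on your system:

import os

# Must be set before the AMD OpenCL runtime is loaded/initialized.
os.environ["GPU_DEVICE_ORDINAL"] = "1"  # expose only the discrete GPU

import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(device.name)  # the APU's GPU should no longer be listed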
Where can I purchase a good VGA card (GPU) for Microsoft Cognitive Toolkit (CNTK)/TensorFlow programming? Can you suggest a commonly used GPU model at an affordable price?
For TensorFlow programming you can find a list of CUDA-compatible graphics cards here and choose whichever fits your needs best, but be aware that if you want to use the tensorflow-gpu pre-built binaries you need a card with CUDA compute capability 3.5 or higher (at least on Windows).
If you want to run TensorFlow on Windows 10 with a graphics card of CUDA compute capability 3.0, you could look at this; you would build from source with some edits to the CMakeLists here. That would allow you to use a card with 3.0 compatibility.
Although, as a fair warning on building your own TensorFlow binary: "don't build a TensorFlow binary yourself unless you are very comfortable building complex packages from source and dealing with the inevitable aftermath should things not go exactly as documented" (from here).
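Whichever card you pick, once the tensorflow-gpu binaries are installed you can quickly confirm that TensorFlow actually sees the GPU; this is just a verification snippet, not specific to any particular card:

import tensorflow as tf

# Prints something like "/device:GPU:0" if a usable CUDA GPU is visible,
# or an empty string if TensorFlow will fall back to the CPU.
print(tf.test.gpu_device_name())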
You need a CUDA-compatible graphics card with Compute Capability (CC) 3.0 or higher to use CNTK's GPU capabilities.
You can find CUDA-compatible graphics cards here and here (for older cards).
You can also use Microsoft Azure’s specialized Deep Learning Virtual Machine if you’re considering the cloud as an option.
How do I run JOGL code on the GPU? How do I check whether JOGL code is running on the CPU or the GPU? How do I select a particular GPU when multiple GPUs are present?
JOGL allows you to choose a profile and some capabilities; they are used to pick a driver, and some of them aren't hardware accelerated. You can use the boolean parameter "favorHardwareRasterizer" of this method as a hint to indicate to JOGL that you prefer a hardware-accelerated (GPU) profile.
GLContext.isHardwareRasterizer tells you whether you benefit from hardware acceleration. GL.glGetString(GL.GL_RENDERER) and GL.glGetString(GL.GL_VENDOR) can help you get some information about the renderer and the vendor of the OpenGL driver.
You can't pick a particular GPU; there is no support for NVIDIA GPU affinity yet. "Multiple GPUs" is vague: it can mean Optimus, SLI, or CrossFire.
Rather, ask your JOGL-specific questions on our official forum; only a few contributors and maintainers come here.