Driver says it supports interface version 6 but still exports core entrypoints - vulkan

I have experience with OpenGL and I'm starting to learn Vulkan, following this tutorial; I am now at the testing section.
However, instead of the intended output:
I'm getting the following output:
[vlk] Searching for ICD drivers named /usr/lib32/amdvlkpro32.so
[vlk] Searching for ICD drivers named /usr/lib/amdvlkpro64.so
[vlk] loader_scanned_icd_add: Driver /usr/lib/amdvlkpro64.so says it supports interface version 6 but still exports core entrypoints (Policy #LDP_DRIVER_6)
[vlk] Searching for ICD drivers named /usr/lib32/libvulkan_radeon.so
[vlk] Searching for ICD drivers named /usr/lib/libvulkan_radeon.so
[vlk] Build ICD instance extension list
[vlk] Build ICD instance extension list
[vlk] Build ICD instance extension list
The difference between the validation layer: and [vlk] prefixes is just a difference in how I display messages in my callback; nothing else has been changed. For the record, I have indeed removed the call to DestroyDebugUtilsMessengerEXT(), as told in the tutorial.
Because I'm so new to Vulkan, I don't know which code to put here, but I can add whatever's necessary.
I'm using a Radeon RX 480. I'm running on Arch Linux, and here is the driver-related output of lspci -v:
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] (rev c7) (prog-if 00 [VGA controller])
...
Kernel driver in use: amdgpu
Kernel modules: amdgpu
Looking at the output, it seems that Vulkan is searching for both the proprietary drivers (amdvlkpro) and the open-source Mesa drivers (libvulkan_radeon), or at least I assume that's what these libraries are.
It seems to me this is a problem with the proprietary AMD drivers, so, if this is indeed the case, how would I prevent it? Is there a way to force Vulkan to use the open-source driver without uninstalling the proprietary one?
Update
To answer my previous question: yes, with amd-vulkan-prefixes. I have now tested the program with all the AMD drivers, and it doesn't work as intended with any of them.
The following is the output of the program when running under each driver. This is the entire output, from start to end.
With RADV (vulkan-radeon)
[vlk] Searching for ICD drivers named /usr/lib32/libvulkan_radeon.so
[vlk] Searching for ICD drivers named /usr/lib/libvulkan_radeon.so
[vlk] Build ICD instance extension list
[vlk] Build ICD instance extension list
Even though there is no LDP_DRIVER_6 message, the desired output is still not achieved.
With AMDVLK Open (amdvlk)
[vlk] Searching for ICD drivers named /usr/lib32/amdvlk32.so
[vlk] Searching for ICD drivers named /usr/lib/amdvlk64.so
[vlk] loader_scanned_icd_add: Driver /usr/lib/amdvlk64.so says it supports interface version 6 but still exports core entrypoints (Policy #LDP_DRIVER_6)
[vlk] Build ICD instance extension list
[vlk] Build ICD instance extension list
With AMDVLK Closed (vulkan-amdgpu-pro)
[vlk] Searching for ICD drivers named /usr/lib32/amdvlkpro32.so
[vlk] Searching for ICD drivers named /usr/lib/amdvlkpro64.so
[vlk] loader_scanned_icd_add: Driver /usr/lib/amdvlkpro64.so says it supports interface version 6 but still exports core entrypoints (Policy #LDP_DRIVER_6)
[vlk] Build ICD instance extension list
[vlk] Build ICD instance extension list

LDP_DRIVER_6 has been removed from the list of errors. In the first version of the loader interface, drivers exported the core Vulkan functions directly; the spec was later changed so that drivers instead export functions with a vk_icd prefix. Policy #LDP_DRIVER_6 was then added due to concerns about how some platforms handle imports from dynamic libraries. In practice this isn't really an issue, and major vendors continued to support all versions of the loader, so policy #LDP_DRIVER_6 was removed. You can find more details in the Loader-Driver Interface spec (archive).
If you really want to use another driver, that is possible. Set the environment variable VK_ICD_FILENAMES to a colon-separated list of full paths to the drivers' JSON manifest files, typically found in the same location as the actual driver libraries. Only the drivers in that list will then be loaded. The newer way to do this is VK_DRIVER_FILES, which behaves exactly the same as VK_ICD_FILENAMES (the latest loader supports both, but prefers VK_DRIVER_FILES); it is probably not supported by your loader, though, since the version that implements it also deprecates LDP_DRIVER_6. Additionally, there is now VK_ADD_DRIVER_FILES, which contains a list of drivers to be loaded before the default list; it is ignored if VK_ICD_FILENAMES or VK_DRIVER_FILES is set. The current way of doing this is documented here (archive), though given that you're getting an LDP_DRIVER_6 error, you likely need the older method described here (archive).
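As a sketch, forcing the loader to use only RADV might look like this (the manifest paths are the typical locations for Arch's vulkan-radeon package and may differ on your system):

```shell
# Restrict the loader to RADV by listing only its ICD manifests
# (example paths from Arch's vulkan-radeon package; adjust as needed).
export VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/radeon_icd.x86_64.json:/usr/share/vulkan/icd.d/radeon_icd.i686.json
# Launch the application from this shell so it inherits the variable:
#   ./my-vulkan-app
echo "$VK_ICD_FILENAMES"
```

On a loader new enough to support it, VK_DRIVER_FILES takes the same value.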

Related

Assembly-Binding Error on shared Component

I have the following structure after setting up one of our programs on the customer's machine:
c:\Program Files (x86)\Common Files\company\DLL\LicenseServer\
LicenseServer.dll (V1.0)
Tools.dll (V1.0)
c:\Program Files (x86)\company\FancyProg1\
MyProg1.exe (.net)
MyProg1.exe uses LicenseServer.dll from the Common Files folder, which itself references Tools.dll. The reference in LicenseServer was built with "SpecificVersion=False".
LicenseServer is also registered via Regasm /codebase for use as a COM server.
Now the user wants to set up another program which also uses LicenseServer.
After that, my folders look like this:
c:\Program Files (x86)\Common Files\company\DLL\LicenseServer\
LicenseServer.dll (V1.0)
Tools.dll (V1.1)
c:\Program Files (x86)\company\FancyProg2\
MyProg2.exe (native, c++)
Its setup still contains LicenseServer.dll in V1.0, but Tools.dll has evolved to V1.1 (some additional methods, but no change to any public method definition).
The LicenseServer of MyProg2 was built with a reference to Tools.dll V1.1 (still with "SpecificVersion=False") but has no other changes, so it is still V1.0.
When installing MyProg2 on the target machine, the setup program does not copy LicenseServer.dll, because it has the same version as the one already installed. It does update Tools.dll from V1.0 to V1.1, because it brings a newer one.
Well, after that, MyProg2 cannot use LicenseServer: it won't load Tools.dll, because it still searches for V1.0.
Notice that MyProg2 is a native C++ program which uses LicenseServer as a COM server; it's registered via Regasm /codebase.
However, MyProg1 (the .NET program, which loads LicenseServer as a regular .NET DLL) works and uses the newer Tools.dll.
So the question is: how do I make LicenseServer work as a COM server, even when there is a newer Tools.dll next to it?
The trick is not to version LicenseServer and all its references independently, but to give them all the same new version number every time MyProg2 gets a new release, even when there was no change in the sources. This prevents the situation where only one component is updated because it is newer.
During setup, either all components are then replaced or none are.

QtVirtualkeyboard Languages issue on ARM processor

I am using Qt 5.7.1 on a Debian Jessie Linux virtual machine and deploy my application to an i.MX6 board, also running Qt 5.7.1 and Debian Jessie.
I compiled the QtVirtualKeyboard project to add all languages that Qt supports.
First I compiled it for PC Linux, and then for the i.MX6.
I copied the newly built plugin into the i.MX6's Qt plugin path, along with the other required files.
On the PC side, the "basic" example project shows all languages with no issue.
Running the same example project on the i.MX6, I get almost all languages, except these fail:
qml: Qt.createQmlObject(): failed to create object:
qrc:/QtQuick/VirtualKeyboard/content/layouts/ja_JP/japaneseInputMethod:1:57: JapaneseInputMethod is not a type
qml: Qt.createQmlObject(): failed to create object:
qrc:/QtQuick/VirtualKeyboard/content/layouts/ko_KR/hangulInputMethod:1:57: HangulInputMethod is not a type
qml: Qt.createQmlObject(): failed to create object:
qrc:/QtQuick/VirtualKeyboard/content/layouts/zh_CN/pinyinInputMethod:1:57: PinyinInputMethod is not a type
qml: Qt.createQmlObject(): failed to create object:
qrc:/QtQuick/VirtualKeyboard/content/layouts/zh_TW/tcInputMethod:1:57: TCInputMethod is not a type
Did I miss copying a source file for those languages, or something else?
If so, which files, and where should they be copied to?
So I did make it work. The problem is that QtVirtualKeyboard uses third-party libraries and could not find them.
To solve the problem, you must also compile all of QtVirtualKeyboard's third-party libraries and copy them to your ARM device (here, the i.MX6).
Example for Simplified Chinese:
cd /home/yourname/Qt5.9.1/5.9.1/Src/qtvirtualkeyboard/src/virtualkeyboard/3rdparty/pinyin
qmake
make
Copy the .dat dictionary for pinyin (dict_pinyin.dat) to a directory like
/usr/local/qt5.9.1/qtvirtualkeyboard/pinyin
Now QtVirtualKeyboard should find the Simplified Chinese dictionary. The same must be done for Japanese, Traditional Chinese, and Hunspell, if you use them.
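As a sketch, all of the third-party input-method libraries can be built in one loop (the SRC path and directory names assume the Qt 5.9.1 source layout; tcime covers Traditional Chinese and openwnn Japanese, so verify the names against your Qt version):

```shell
# Build each third-party input-method library in the Qt source tree.
# SRC and the directory names assume the Qt 5.9.1 layout; adjust them.
SRC="$HOME/Qt5.9.1/5.9.1/Src/qtvirtualkeyboard/src/virtualkeyboard/3rdparty"
for lib in pinyin tcime openwnn hunspell; do
    if [ -d "$SRC/$lib" ]; then
        (cd "$SRC/$lib" && qmake && make)
    else
        echo "not found, skipping: $lib"
    fi
done
```

Afterwards, copy each library's data files (like dict_pinyin.dat above) to the matching directory under the Qt install on the target.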

How can I find which part of my code is associated with an entry in the symbol table?

I am working on a project which needs to run on a Linux machine that, it turns out, does not have the GLIBCXX_3.4.20 version of a library, but the code needs it. Is there any way to find which part of my (C++) code asks for this version?
I read the ELF file using objdump and readelf, and I found which symbol needs it:
_ZSt24__throw_out_of_rang#GLIBCXX_3.4.20 (4)
but I don't know which part of my code it relates to.
Your question is essentially a duplicate of this question.
Except in your case, it's not libc.so.6, but libstdc++.so that's giving you trouble.
Your problem is that you are compiling with new GCC, but are running on a machine with an old libstdc++.so.
You have a few options:
you can update the target machine to have a new enough libstdc++.so
you can build using an older version of GCC
you can use the -static-libstdc++ flag to link the required version of libstdc++ directly into your application. This makes a larger binary, but it will not use libstdc++.so at all.
Note that if you link against other shared libraries that do link against libstdc++.so, your binary may not run correctly on the target machine, so this option should be used with caution.
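As a sketch, you can compare what the binary demands with what the target's libstdc++ provides (./myprog and the libstdc++ path are placeholders; adjust them for your binary and machine):

```shell
# List the GLIBCXX symbol versions the binary requires
# (./myprog is a placeholder for your executable).
REQUIRED=$(objdump -T ./myprog 2>/dev/null | grep -o 'GLIBCXX_[0-9.]*' | sort -u)
# List the versions the target machine's libstdc++ provides
# (the library path varies by distribution).
PROVIDED=$(strings /usr/lib/libstdc++.so.6 2>/dev/null | grep '^GLIBCXX_' | sort -u)
echo "binary requires: $REQUIRED"
echo "libstdc++ provides: $PROVIDED"
```

If GLIBCXX_3.4.20 appears in the first list but not the second, any of the three options above will resolve it.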

Add image sensor driver to linux kernel

I am working on a project that uses a Leopardboard DM368 interfaced with a LI-M024DUAL camera board for stereo vision. The camera board uses Aptina's MT9M024 as its image sensor.
After spending a lot of time searching the web for appropriate drivers, I asked the OEM for support. They provided me with the driver source files. The problem is that I am not able to include them in the kernel.
I have also looked up the method for building modules and am fairly comfortable with it. But the current driver is a bunch of *.c files that include header files which don't exist (I am not able to find these headers in the kernel's include/linux directory).
So my question is: if I have the source code for an image sensor driver and want to build it, what is the general procedure?
Any help in this regard would be welcome.
-Kartik
There are two ways to build your module:
1. Statically linking it into the kernel image (built-in)
2. Building it as a dynamically loadable module
Statically linking into the kernel image (built-in)
For this, you must find an appropriate place in the kernel tree (somewhere under drivers/) and copy your .c files there. Edit the Kconfig and Makefile, using other kernel drivers as a reference, enable the support using menuconfig, and compile.
Building a dynamically loadable module
You can build without copying the sources into the kernel tree. Just create a Makefile with rules to compile your module; in it, you link against your kernel by providing the kernel source path.
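As a sketch, a minimal out-of-tree module Makefile might look like the one written below (mt9m024 is a placeholder object name; KDIR must point at your configured kernel source tree):

```shell
# Write an example out-of-tree module Makefile (mt9m024 is a placeholder
# driver name; KDIR should point at your configured kernel source tree).
cat > Makefile.example <<'EOF'
obj-m += mt9m024.o

KDIR ?= /lib/modules/$(shell uname -r)/build

all:
	$(MAKE) -C $(KDIR) M=$(CURDIR) modules

clean:
	$(MAKE) -C $(KDIR) M=$(CURDIR) clean
EOF
cat Makefile.example
```

For a DM368 target you would cross-compile, e.g. make ARCH=arm CROSS_COMPILE=your-toolchain-prefix- KDIR=/path/to/dm368-kernel (prefix and path are examples), then load the resulting .ko with insmod.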
For more, Google should help.

Eclipse PDE UI feature export with two fragments for same os, different arch

Hoping to have some Eclipse PDE gurus weigh in here on a problem I'm having trouble solving.
I am attempting to export (via the PDE UI) a feature that includes two fragments, where both fragments target the same OS (e.g., Linux) but have different architecture values (e.g., x86 and x86_64). Each fragment has its own copy of several .so library files, built on either Linux x86 or Linux x86_64. For example:
FragmentA (os=Linux,arch=x86)
lib1.so
lib2.so
lib3.so
FragmentB (os=Linux,arch=x86_64)
lib1.so
lib2.so
lib3.so
Exporting the hosting feature using the corresponding delta pack with either linux (gtk/x86) OR linux (gtk/x86_64) selected, the export works as expected. However, when I select BOTH platforms, the export fails with the following message:
Processing inclusion from feature com.sample.feature:
Bundle com.sample.linux.x86_64_1.0.0.qualifier failed to resolve.:
Unsatisfied native code filter:
lib1.so; lib2.so; lib3.so; processor=x86_64; osname=linux.
Why can't I export both fragments together? I also have a Win32 x86 fragment that I can export together with the Linux x86 fragment; instead of .so files, it has .dll files with the same file titles (e.g., lib1.dll, lib2.dll, lib3.dll).
Could having identically named .so library files in the two Linux-based fragments cause this issue?
Any help would be much appreciated, as this is a critical blocker in our build process (both manually via the UI and headlessly).