First I wonder about some minor details to see if I understand some concepts properly:
Is vulkan-1.dll (or libvulkan.so.1 on Linux) what is referred to as the loader?
When I use HMODULE vulkan_module = LoadLibrary( "vulkan-1.dll" );, is this using the loader from the graphics driver (provided that the previous detail is true)?
Now to the actual question. It seems that the loader is responsible for pulling the drivers together so that they appear as one "unit" of sorts, as well as for collecting the available extensions and validation layers. What then distinguishes the LunarG loader (for example) from those provided by graphics drivers? Why would one want to use one over the other?
Vulkan drivers do not contain anything that would reasonably be called a "loader". They are "providers".
The purpose of a "loader" is to load what the "providers" provide. The most basic thing a loader does is find the implementations' DLLs and interact with them. This differs based on the platform. With Windows, they probably use registry settings to hunt down the implementation DLLs. On Android, their built-in support probably centralizes things. And so forth.
The only commonly used loader is LunarG's SDK loader (which does use the filename vulkan-1). Some have written their own, but LunarG's is the only one with widespread usage.
"the loader" or "official loader" or "Khronos loader" or "LunarG loader" or "VulkanRT" are AFAIK the same. It's from the project KhronosGroup/Vulkan-LoaderAndValidationLayers.
What differs (between those provided by the Khronos, LunarG SDK, and drivers) is usually only a version. (Typically LunarG SDK lags behind Khronos and driver lags behind both.)
More then you ever wanted to know of its inner workings is in the loader documentation.
Run-time dynamic linking as you propose should be possible (you would do the LoadLibrary() then GetProcAddress() the vkGetInstanceProcAddr() command and then rest from it).
(On Windows) I think most people use the convenient dll import library vulkan-1.lib from LnG SDK with whatever vulkan-1.dll is in the System32.
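For completeness, here is a minimal sketch of that run-time approach on Windows. It is only a sketch: it assumes the Vulkan headers are available for the PFN_* typedefs, and error handling is trimmed.

```c
#include <windows.h>
#include <stdio.h>

#define VK_NO_PROTOTYPES            /* we fetch every command ourselves */
#include <vulkan/vulkan.h>

int main(void)
{
    /* Load whatever loader DLL is installed (normally the Khronos/LunarG one). */
    HMODULE vulkan_module = LoadLibraryA("vulkan-1.dll");
    if (!vulkan_module) return 1;

    /* The only symbol we take directly from the DLL. */
    PFN_vkGetInstanceProcAddr vkGetInstanceProcAddr =
        (PFN_vkGetInstanceProcAddr)GetProcAddress(vulkan_module, "vkGetInstanceProcAddr");

    /* Global-level commands are queried with a NULL instance. */
    PFN_vkCreateInstance vkCreateInstance =
        (PFN_vkCreateInstance)vkGetInstanceProcAddr(NULL, "vkCreateInstance");

    VkInstanceCreateInfo ci = { VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    VkInstance instance;
    if (vkCreateInstance(&ci, NULL, &instance) != VK_SUCCESS) return 1;

    /* Everything else comes from the instance (or later from the device). */
    PFN_vkDestroyInstance vkDestroyInstance =
        (PFN_vkDestroyInstance)vkGetInstanceProcAddr(instance, "vkDestroyInstance");

    vkDestroyInstance(instance, NULL);
    FreeLibrary(vulkan_module);
    printf("Vulkan loaded entirely at run time\n");
    return 0;
}
```

With the import-library approach you would instead link vulkan-1.lib and call the functions directly; the trade-off is that your executable then fails to start at all if vulkan-1.dll is missing.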
Related
I've been trying to write Vulkan bindings for a language and I'm a bit confused about how extensions work. On Linux I'm using libdl to load function pointers from libvulkan.so.1, and I've noticed that some extension functions (like those from VK_KHR_swapchain and VK_KHR_wayland_surface) can be linked through libdl, but others (like the ones in VK_EXT_debug_utils or VK_EXT_extended_dynamic_state2) can only be found through vkGetInstanceProcAddr or vkGetDeviceProcAddr.
My questions are these:
Why are some Vulkan extensions available through dynamic linking but not others?
Can I rely on these dynamically-linkable extensions always being there? (For example, can I be sure that if the VK_KHR_swapchain extension is available, vkCreateSwapchainKHR will definitely be found by libdl?)
TFM:
Vulkan Direct Exports
The loader library on Windows, Linux, Android, and macOS will export all core Vulkan entry-points and all appropriate Window System Interface (WSI) entry-points. This is done to make it simpler to get started with Vulkan development. When an application links directly to the loader library in this way, the Vulkan calls are simple trampoline functions that jump to the appropriate dispatch table entry for the object they are given.
Specifics: https://github.com/KhronosGroup/Vulkan-Loader/blob/main/docs/LoaderApplicationInterface.md#wsi-extensions
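In other words, only the core and WSI entry points are direct exports; the portable pattern is to resolve vkGetInstanceProcAddr with libdl and query everything else, extensions included, through it. A minimal, hedged sketch of that pattern on Linux (error handling omitted; link with -ldl on older glibc):

```c
#include <dlfcn.h>

#define VK_NO_PROTOTYPES            /* we resolve every command ourselves */
#include <vulkan/vulkan.h>

int main(void)
{
    void *lib = dlopen("libvulkan.so.1", RTLD_NOW | RTLD_LOCAL);
    if (!lib) return 1;

    /* The one entry point we rely on being a direct export of the loader. */
    PFN_vkGetInstanceProcAddr gipa =
        (PFN_vkGetInstanceProcAddr)dlsym(lib, "vkGetInstanceProcAddr");

    /* Global commands are queried with a NULL instance. */
    PFN_vkCreateInstance vkCreateInstance =
        (PFN_vkCreateInstance)gipa(NULL, "vkCreateInstance");

    VkInstanceCreateInfo ci = { VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    VkInstance instance;
    if (vkCreateInstance(&ci, NULL, &instance) != VK_SUCCESS) return 1;

    /* Extension commands such as those of VK_EXT_debug_utils are generally NOT
     * direct exports, so this is the only reliable way to obtain them. The
     * result is NULL here because the extension was not enabled on the instance. */
    PFN_vkCreateDebugUtilsMessengerEXT vkCreateDebugUtilsMessengerEXT =
        (PFN_vkCreateDebugUtilsMessengerEXT)
            gipa(instance, "vkCreateDebugUtilsMessengerEXT");
    (void)vkCreateDebugUtilsMessengerEXT;

    PFN_vkDestroyInstance vkDestroyInstance =
        (PFN_vkDestroyInstance)gipa(instance, "vkDestroyInstance");
    vkDestroyInstance(instance, NULL);
    dlclose(lib);
    return 0;
}
```

Device-level commands (vkCreateSwapchainKHR and friends) can additionally be fetched with vkGetDeviceProcAddr once a VkDevice exists, which skips the loader's per-call dispatch trampoline.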
We'd like to offer a compiled library that implements a protocol layer, to be imported into C/C++ source code projects for microcontrollers, and that eventually exposes a set of compiled functions to the source code project; let's say a sort of "DLL". Is there any known technique to realize something similar?
While it is possible to provide functions via a library, generally in the microcontroller/embedded realm it quickly becomes impractical.
Each microcontroller core will have a unique instruction set. Further, micros from the same family may have a variety of extensions which are either supported or not... So you're left with providing a library file for each individual microcontroller (from each vendor) that you'd like to support.
But...
In my experience, calling conventions between compilers are not the same. So a library compiled by one toolchain will not be able to be linked to object files created by another toolchain.
That leads you to then provide a library for each individual micro from each vendor for each toolchain someone might use. Ick. Oh, and don't rely on OS calls either, as you don't know what you'll be linked with...
A more conventional approach is to use the same approach RTOS vendors tend to use: provide the source, and protect your IP with licensing terms. The reality is that if your end users want to, they can step through the assembly and figure out exactly what is happening, so you're not hiding your implementation that carefully anyway.
I'm new to objective-c & osx architecture. I started playing with building a framework and then using it. I followed this great tutorial.
During the tutorial, I had to set the framework target's Dynamic Library Install Name to @rpath/MyFramework.framework/Versions/A/MyFramework. My understanding is that @rpath will expand to the loader's (consumer's) run-path search paths.
It seems as if the responsibility of loading the framework is split between the framework author and the consumer author. Could someone please explain why the author of the framework needs to be concerned with the consumer's run-path search path? For example, if the framework author set the Dynamic Library Install Name to point to some random directory (instead of @rpath), how would the client be able to consume the framework?
Thanks in advance.
It depends a lot on how the framework is being used. And it's important to remember that the framework construct has existed for a long time on the platform.
For a system framework, such as the ones that Apple creates, you're going to be quite happy that they keep the frameworks in a known location. In those cases, the paths that they use are fixed for the OS, and it guarantees that you don't accidentally load the wrong one. Further, as indicated in the framework documentation, these frameworks are loaded only once on the machine, regardless of how many times they are used (see Apple: What Are Frameworks). The benefit here is performance, and it applies to both the code and the resources in many cases.
Due to the recent move to randomize framework locations, and Apple's comment in the release notes that "Mountain Lion randomly relocates the kernel, kexts, and system frameworks at system boot," it certainly appears they're still sharing these resources, and thus still gaining from this benefit.
For embedded frameworks, the situation is a lot more tedious, and Apple has moved through a variety of methods over the years to make it easier to find frameworks wherever they may be. Due, again, to the shared nature, it would make sense for Applications which share common library requirements to share them on the machine, both for purposes of efficiency, and to make sure they're at the same version if they're sharing data. So, for example, if you have two separate apps that use the same framework to work with shared data, you might put the shared framework in /Library/Frameworks and have both apps explicitly look for that, making sure that some other (possibly older) version of the framework, that has been loaded by another App, is not used instead.
In the end, there's a lot of flexibility for the Framework producer and consumer the way that it currently works. It means that the developer can decide to share a framework, include a private copy of the framework, or even do both, depending upon whether the framework exists on the machine or not. However, the price for that flexibility is the complexity that we have today.
Another example of a reason you might not want to use @rpath specifically is for tightly-linked embedded frameworks (yes, people embed frameworks within other frameworks). In these cases, you don't know where the first framework is loaded, but you want to put the embedded framework inside of it, so that they stay together. In this case @loader_path is relative to the code that is loading it, so that your plug-in's framework can find its resources correctly.
As for your specific example about somebody setting the Dynamic Library Install Name to a "random" location: in that case, you'd have to know that location. There might be many reasons for somebody doing this, such as wanting to discourage reuse by other programs, or because there are large resources within the framework that should only be installed in a known shared location.
Windows still uses DLLs and Mac programs seem to not use DLLs at all. Are there benefits or disadvantages to using either technique?
If a program installation includes all the DLLs it requires so that it will work 100% well, will it be the same as statically linking all the libraries?
MacOS X, like other flavours of Unix, uses shared libraries, which are just another form of DLL.
And yes, both are advantageous, since the DLL or shared library code can be shared between multiple processes: the OS loads the DLL or shared library once and maps it into the virtual address space of each process that uses it.
On Windows, you have to use dynamically-loaded libraries because the GDI and USER libraries are available as DLLs only. You can't link either of those in or talk to them using a protocol that doesn't involve dynamic loading.
On other OSes, you want to use dynamic loading anyway for complex apps, otherwise your binary would bloat for no good reason, and it increases the probability that your app would be incompatible with the system in the long run (however, in the short run static linking can somewhat shield you from tiny breaking changes in libraries). And you can't link in proprietary libraries on OSes which rely on them.
Windows still uses DLLs and Mac programs seem to not use DLLs at all. Are there benefits or disadvantages to using either technique?
Any kind of modularization is good since it makes updating the software easier, i.e. you do not have to ship a whole new program binary when a bug is fixed. If the bug is in some DLL, only that DLL needs to be updated.
The only downside, IMO, is that you introduce extra complexity into the development of the program, e.g. whether a DLL is a C or C++ DLL, different calling conventions, etc.
If a program installation includes all the DLLs it requires, will it be the same as statically linking all the libraries?
More or less, yes. It depends on whether you are calling functions in a DLL that you assume static linkage with (via an import library), or whether the DLL is a "free-standing" dynamic library that you can only access via LoadLibrary() and GetProcAddress(), etc. A sketch of the difference follows below.
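To make that distinction concrete, here is a small sketch using two well-known Windows APIs (GetTickCount from kernel32, MessageBoxA from user32); it contrasts load-time linking through an import library with run-time resolution:

```c
#include <windows.h>
#include <stdio.h>

/* (b) Run-time dynamic linking: the DLL is located and the function resolved
 *     only when this code runs; no import library is needed at build time. */
typedef int (WINAPI *MessageBoxA_fn)(HWND, LPCSTR, LPCSTR, UINT);

int show_box_at_run_time(const char *text)
{
    HMODULE lib = LoadLibraryA("user32.dll");
    if (!lib) return 0;                                   /* DLL not found */

    MessageBoxA_fn box = (MessageBoxA_fn)GetProcAddress(lib, "MessageBoxA");
    int result = box ? box(NULL, text, "Demo", MB_OK) : 0;

    FreeLibrary(lib);
    return result;
}

int main(void)
{
    /* (a) Load-time dynamic linking: GetTickCount() is declared in windows.h
     *     and resolved through kernel32's import library when the EXE starts;
     *     the call site looks just like a statically linked call. */
    printf("ticks: %lu\n", (unsigned long)GetTickCount());

    show_box_at_run_time("Resolved via LoadLibrary/GetProcAddress");
    return 0;
}
```

Variant (a) is the "assume static linkage" case: the program won't even start if the DLL is missing. Variant (b) lets the program decide at run time whether the DLL is present and what to do if it isn't.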
One big advantage of shared libraries (DLLs on Windows or .so on Unix) is that you can rebuild the library and its consumers separately while with static libraries you have to rebuild the library and then relink all the consumers which is very slow on Unix systems and not very fast on Windows.
MacOS software uses "DLLs" as well; they are just named differently (shared libraries).
DLLs make sense if you have code you want to reuse in different components of your software. Mostly this makes sense in big software projects.
Static linking makes sense for small single-component applications, when there is no need for code reuse. It simplifies distribution since your component has no external dependencies.
Besides memory/disk space usage, another important advantage of using shared libraries is that updates to the library will be automatically picked up by all programs on the system which use the library.
When there was a security vulnerability in the InfoZIP ZIP libraries, an update to the DLL/.so automatically made all software that used it safe. Software that was linked statically had to be recompiled.
Windows still uses DLLs and Mac programs seem to not use DLLs at all. Are there benefits or disadvantages to using either technique?
Both use shared libraries, they just use a different name.
If a program installation includes all the DLLs it requires so that it will work 100% well, will it be the same as statically linking all the libraries?
Somewhat. When you statically link libraries into a program, you get a single, very big file; with DLLs, you will have many files.
The statically linked file won't need the "resolve shared libraries" step (which happens while the program loads). A long time ago, loading a program meant that the whole binary was first loaded into RAM and only then did the "resolve shared libraries" step happen. Today, only the parts of the program which are actually executed are loaded on demand. So with a static program, you don't need to resolve the DLLs; with DLLs, you don't need to load them all at once. Performance-wise, they should be on par.
Which leaves the "DLL Hell". Many programs on Windows bring all DLLs they need and they write them into the Windows directory. The net effect is that the last installed programs works and everything else might be broken. But there is a simple workaround: Install the DLLs into the same directory as the EXE. Windows will search the current directory first and then the various Windows paths. This way, you'll waste a bit of disk space but your program will work and, more importantly, you won't break anything else.
One might argue that you shouldn't install DLLs which already exist (with the same version) in the Windows directory but then, you're again vulnerable to some bad app which overwrites the version you need with something that breaks your neck. The drawback is that you must distribute security fixes for your app yourself; you can't rely on Windows Update or similar things to secure your code. This is a tight spot; crackers are making lots of money from security issues and people will not like you when someone steals their banking data because you didn't issue security fixes soon enough.
If you plan to support your application very tightly for many years, say 20, installing all DLLs in the program directory is for you. If not, then write code which checks that suitable versions of all DLLs are installed and tells the user about it, so they know why your app suddenly starts to crash.
Yes, see this text:
Dynamic linking has the following advantages:
- Saves memory and reduces swapping. Many processes can use a single DLL simultaneously, sharing a single copy of the DLL in memory. In contrast, Windows must load a copy of the library code into memory for each application that is built with a static link library.
- Saves disk space. Many applications can share a single copy of the DLL on disk. In contrast, each application built with a static link library has the library code linked into its executable image as a separate copy.
- Upgrades to the DLL are easier. When the functions in a DLL change, the applications that use them do not need to be recompiled or relinked as long as the function arguments and return values do not change. In contrast, statically linked object code requires that the application be relinked when the functions change.
- Provides after-market support. For example, a display driver DLL can be modified to support a display that was not available when the application was shipped.
- Supports multilanguage programs. Programs written in different programming languages can call the same DLL function as long as the programs follow the function's calling convention. The programs and the DLL function must be compatible in the following ways: the order in which the function expects its arguments to be pushed onto the stack, whether the function or the application is responsible for cleaning up the stack, and whether any arguments are passed in registers.
- Provides a mechanism to extend the MFC library classes. You can derive classes from the existing MFC classes and place them in an MFC extension DLL for use by MFC applications.
- Eases the creation of international versions. By placing resources in a DLL, it is much easier to create international versions of an application. You can place the strings for each language version of your application in a separate resource DLL and have the different language versions load the appropriate resources.
A potential disadvantage to using DLLs is that the application is not self-contained; it depends on the existence of a separate DLL module.
From my point of view, a shared component has some advantages that are sometimes perceived as disadvantages.
A shared component defines interfaces in your process. So you are forced to decide which components/interfaces are visible outside and which are hidden. This automatically defines which interfaces have to be stable and which do not, and can therefore be refactored without affecting any code outside the component.
Memory management, in the case of C++ and Windows, must be well thought out. Normally, memory allocated inside a DLL should also be freed inside that same DLL. If you don't follow this, your component may fail when different runtimes or compiler versions are used.
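A minimal sketch of that ownership rule, using a made-up "widget" component (the names are illustrative; in a real project the header would also switch between dllexport and dllimport):

```c
/* Public header of a hypothetical "widget" DLL. */
typedef struct Widget Widget;        /* opaque type: internals stay hidden */

__declspec(dllexport) Widget *widget_create(void);
__declspec(dllexport) void    widget_destroy(Widget *w);

/* Implementation, compiled into the DLL. */
#include <stdlib.h>

struct Widget { int value; };

Widget *widget_create(void)
{
    /* Memory comes from the DLL's own C runtime heap ... */
    return calloc(1, sizeof(Widget));
}

void widget_destroy(Widget *w)
{
    /* ... and goes back to that same heap. The consumer never calls free()
     * on this pointer, so it does not matter whether the EXE was built with
     * a different compiler or CRT version than the DLL. */
    free(w);
}
```

The opaque type also illustrates the first point: only the functions in the header are the stable interface, and the struct layout can be refactored freely inside the component.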
So I think that using shared components helps the software to be better organized.
If I statically link an executable on Ubuntu, is there any chance that the executable won't work on another distribution such as Mint or Fedora? I know processor types matter, but other than that, is there anything else I have to be wary of? Sorry if this is a dumb question. Thanks for any help.
There are a few corner cases, but for the most part, you should be in good shape with static linking. The one that comes to mind is libnss. This particular library is essentially impossible to link statically, because of the way it does its job (permissions, authentication, security tasks). As long as the glibc-versions are similar, you should be ok on this issue, though.
If your program needs to work with subtle features of the kernel, like volume managers, you've got a pretty slim chance of getting your program to work, statically linked, across distros, because the kernel interfaces may change slightly.
Most typical applications, the kind for which it even makes sense to discuss portability, like network services, GUI applications, and language tools (compilers/interpreters), won't have a problem with any of this.
If you statically link a program on one computer and then move it to another computer in which the system basically runs the same way, then it should work just fine. That's the point of static linking; that there are no other files the program depends on - it's entirely self-contained, so as long as it can run at all, it will run the same way it does on its "host" system.
This contrasts with dynamic linking, in which the program incorporates elements of other files (libraries) at runtime. If you move a dynamically linked program to another system where the libraries it depends on are different (or nonexistent), it won't work.
In most cases, your executable will work just fine. As long as your executable doesn't depend on anything unusual being present for it to function, there will be no problem. (And, if it does depend on something unusual being present, then you'll have the same issue even if you dynamically link.)
Statically linking is usually safer than dynamically linking for compatibility between different UNIX environments, as long as the same CPU is in use.
To have a statically linked binary fail, again assuming the same processor architecture, you would have to do something such as link on a system using the a.out binary format and try to execute it on a system running ELF, in which case the dynamically linked version would fail just as badly.
So why do people not routinely link statically? Two reasons:
It makes the executable larger, sometimes MUCH larger, and
If bugs in the libraries are fixed, you'll have to relink your program to get access to the bug fixes. If a critical security bug is fixed in the libraries, you have to relink and redistribute your exe.
On the contrary. Whatever your chances are of getting a binary to work across distributions or even OSes, those chances are maximized by static linking. Static linking makes an executable self-contained in terms of libraries. It can still go wrong if it tries to read a file that's not there on another system.
For even better chances of portability, try linking against dietlibc or some other libc. An article at Linux Journal mentions some candidates. A smaller, simpler libc is less likely to depend on things in the filesystem that differ from distro to distro.
I would, for the reasons noted above, avoid statically linking something unless you absolutely must.
That being said, it should work on any other similar kernel of the same architecture. (For example, if you statically link on a machine running Linux 2.4.x, the loader VDSO is going to be different on Linux 2.6; the VDSO, or virtual dynamic shared object, is a shared object that the kernel exposes to every process and that contains loader code.)
Other pitfalls include things in /etc not being where you'd think, logs being in different places, system utilities being absent or different (ubuntu uses update-rc.d, RHEL uses chkconfig), etc.
There are times when you just have no choice. I was writing a program that talked to LVM2's string-based cmdlib interface rather than using execv(). Lo and behold, 30% of the distros I needed to support did NOT include that library and offered no way of getting it. So I had to link against the static object when producing binary packages.
If you are using glibc, you can be confident that stuff like getpwnam() and friends will still work; just make sure to watch any hard-coded paths (better yet, make them configurable at run time).
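A tiny, hedged sanity check for that getpwnam() point. Note that with glibc, getpwnam() goes through NSS, which dlopen()s libnss_* modules at run time even in an otherwise statically linked binary, so the target system still needs compatible NSS libraries installed:

```c
/* Minimal run-time check that user lookups work on the target distro. */
#include <pwd.h>
#include <stdio.h>

int main(void)
{
    struct passwd *pw = getpwnam("root");   /* "root" exists on virtually every distro */
    if (!pw) {
        fprintf(stderr, "user lookup failed (missing or incompatible NSS modules?)\n");
        return 1;
    }
    printf("uid of %s is %d\n", pw->pw_name, (int)pw->pw_uid);
    return 0;
}
```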
As long as you can guarantee it'll only be executed on a similar version of the OS on similar hardware, your program will work fine if it is statically linked. So, if you build for a 2.6 Linux kernel and statically link, you will be fine running on (almost) all 2.6 Linux distributions.
Be warned: you can't statically link some parts of glibc, so if you're using them you'll have to dynamically link anyway. From memory, the name service (NSS) parts required dynamic linking when I was investigating it.
You can't statically link a program for (say) Linux then expect it to run on BSD or Windows. BSD and Unix don't present or handle their system calls in the same way Linux does. I tell a slight lie because the BSDs have a Linux emulation layer that can be enabled, but out of the box it won't work.
No, it will not work. Static linking for distribution independence is a concept from the old Unix ages and is not recommended. In fact, you often can't do it anyway, as many libraries are not available as static libraries.
Follow the Linux Standard Base way; this is your only chance to get as much cross-distribution portability as possible.
The LSB also works fine if you program for FreeBSD and Solaris.
There are two compatibility questions at issue here: library versions and library inventory.
You don't say what libraries you are using.
If you have no '-l' options, then the only 'library' is glibc itself, which serves as the interface to the kernel. Glibc versions are upward compatible. If you link on a glibc 2.x system you can run on a glibc 2.y, for y > x. The developers make a firm commitment to this.
If you have -l options, static linking is always safe. If you are dynamically linked, you have to ensure that (1) the library is present on the target system, and (2) has a compatible version. Your Mileage Might Vary as to whether the target distro has what you need.
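If you do ship a dynamically linked binary and want to sanity-check the glibc upward-compatibility claim above at run time, here is a small hedged sketch (the call is glibc-specific and will not exist on musl and friends):

```c
/* Compare the glibc the binary was built against with the one it runs on. */
#include <gnu/libc-version.h>
#include <stdio.h>

int main(void)
{
    printf("compiled against glibc %d.%d, running on glibc %s\n",
           __GLIBC__, __GLIBC_MINOR__, gnu_get_libc_version());
    return 0;
}
```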