Difference between dynamic loading and dynamic linking?

Routine is not loaded until it is called. All routines are kept on disk in a re-locatable load format. The main program is loaded into memory & is executed. This is called Dynamic Linking.
Why is this called Dynamic Linking? Shouldn't it be Dynamic Loading, since in dynamic loading a routine is not loaded until it is called, whereas in dynamic linking it is the linking that is postponed until execution time?

This answer assumes that you know basic Linux commands.
In Linux, there are two types of libraries: static or shared.
In order to call functions in a static library you need to statically link the library into your executable, resulting in a static binary.
To call functions in a shared library, you have two options.
The first option is dynamic linking, which is commonly used: when building your executable you must specify the shared library your program uses, otherwise it won't even link. When your program starts, it is the system's job to open these libraries; you can list them with the ldd command.
The other option is dynamic loading - when your program runs, it's the program's job to open that library. Such programs are usually linked with libdl, which provides the ability to open a shared library.
Excerpt from Wikipedia:
Dynamic loading is a mechanism by which a computer program can, at run
time, load a library (or other binary) into memory, retrieve the
addresses of functions and variables contained in the library, execute
those functions or access those variables, and unload the library from
memory. It is one of the 3 mechanisms by which a computer program can
use some other software; the other two are static linking and dynamic
linking. Unlike static linking and dynamic linking, dynamic loading
allows a computer program to start up in the absence of these
libraries, to discover available libraries, and to potentially gain
additional functionality.
If you are still confused, first read this awesome article: Anatomy of Linux dynamic libraries, and build the dynamic loading example to get a feel for it, then come back to this answer.
Here is my output of ldd ./dl:
linux-vdso.so.1 => (0x00007fffe6b94000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f400f1e0000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f400ee10000)
/lib64/ld-linux-x86-64.so.2 (0x00007f400f400000)
As you can see, dl is a dynamic executable that depends on libdl, which is dynamically linked by ld.so, the Linux dynamic linker, when you run dl. The same is true for the other three libraries in the list.
libm doesn't show up in this list because it is used as a dynamically loaded library: it isn't loaded until the program asks for it at run time (with dlopen()).
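To get a feel for what dynamic loading looks like in code, here is a minimal sketch along the lines of the article's dl example (the libm.so.6 name and the cos() lookup are assumptions based on that example; build with something like gcc dl.c -ldl -o dl):

/* Minimal dynamic-loading sketch: the library is opened by the program
 * itself at run time, not by ld.so when the process starts.            */
#include <stdio.h>
#include <stdlib.h>
#include <dlfcn.h>

int main(void)
{
    /* Ask the loader to bring the library into memory now. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "%s\n", dlerror());
        return EXIT_FAILURE;
    }

    /* Look up a symbol by name and call it through a function pointer. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!cosine) {
        fprintf(stderr, "%s\n", dlerror());
        dlclose(handle);
        return EXIT_FAILURE;
    }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);   /* unload the library when done */
    return EXIT_SUCCESS;
}

Because the dependency only exists inside the code, ldd has no way to know about it, which is why libm is absent from the list above.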

Dynamic loading means loading a library (or any other binary, for that matter) into memory during load time or run time.
Dynamic loading can be thought of as similar to plugins: an exe can already be executing before the dynamic loading happens (the dynamic loading can be performed, for example, using the LoadLibrary call in C or C++ on Windows).
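A rough Windows sketch of that plugin-style loading (the DLL name plugin.dll and its exported plugin_init symbol are made up for illustration; any DLL exporting a known function would do):

/* The exe is already running; only now is the DLL brought into memory. */
#include <stdio.h>
#include <windows.h>

typedef int (*plugin_init_fn)(void);

int main(void)
{
    HMODULE lib = LoadLibraryA("plugin.dll");
    if (!lib) {
        fprintf(stderr, "LoadLibrary failed: %lu\n", GetLastError());
        return 1;
    }

    /* Find the symbol by name and call into the freshly loaded code. */
    plugin_init_fn init = (plugin_init_fn)GetProcAddress(lib, "plugin_init");
    if (init)
        init();

    FreeLibrary(lib);   /* unload it again when finished */
    return 0;
}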
Dynamic linking refers to the linking that is done during load or run-time and not when the exe is created.
In the case of dynamic linking, the linker does minimal work while creating the exe. For the dynamic linker to work, it actually has to load the libraries too, hence it is also called a linking loader.
Hence the sentences you refer to may make sense, but they are still quite ambiguous, since we cannot infer the context in which they appear. Can you tell us where you found these lines and what context the author is talking about?

Dynamic loading refers to mapping (or less often copying) an executable or library into a process's memory after it has started. Dynamic linking refers to resolving symbols - associating their names with addresses or offsets - after compile time.
Here is the link to the full answer by Jeff Darcy on Quora:
http://www.quora.com/Systems-Programming/What-is-the-exact-difference-between-Dynamic-loading-and-dynamic-linking/answer/Jeff-Darcy

I am also reading the "dinosaur book" and was confused with the loading and linking concept. Here is my understanding:
Both dynamic loading and linking happen at runtime, and load whatever they need into memory.
The key difference is that dynamic loading checks whether the routine was loaded by the loader, while dynamic linking checks whether the routine is in memory.
Therefore, with dynamic linking there is only one copy of the library code in memory, which may not be true for dynamic loading. That's why dynamic linking needs OS support to check the memory of other processes. This feature is very important for language subroutine libraries, which are shared by many programs.

The dynamic linker is a run-time program that loads and binds all of the dynamic dependencies of a program before starting to execute that program. The dynamic linker finds what dynamic libraries a program requires, what libraries those libraries require (and so on), then it loads all those libraries and makes sure that all references to functions correctly point to the right place. For example, even the most basic "hello world" program will usually require the C library to display the output, and so the dynamic linker will load the C library before loading the hello world program and will make sure that any calls to printf() go to the right code.
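To make that concrete, even this trivial program depends on the dynamic linker (a minimal sketch; the paths in the comment are typical for an x86_64 Linux box and may differ on your system):

/* When this process starts, the dynamic linker (e.g.
 * /lib64/ld-linux-x86-64.so.2) loads libc.so.6 and binds the printf
 * call below to the real implementation before main() ever runs.    */
#include <stdio.h>

int main(void)
{
    printf("hello, world\n");   /* resolved against libc by the dynamic linker */
    return 0;
}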

Dynamic Loading: a routine is loaded into main memory when it is called.
Dynamic Linking: the routine is loaded into main memory during execution time; the linking itself is postponed until execution time.
Dynamic loading does not require special support from the operating system; it is the responsibility of the programmer to check whether the routine to be loaded is already present in main memory.
Dynamic linking requires special support from the operating system; a routine loaded through dynamic linking can be shared across various processes.
Routine is not loaded until it is called. All routines are kept on disk in a re-locatable load format. The main program is loaded into memory & is executed. This is called Dynamic Linking.
The statement is incomplete: "The main program is loaded into memory & is executed" does not specify when the program is loaded.
If we consider that it is loaded on call, as the first statement specifies, then it is Dynamic Loading.

We use dynamic loading to achieve better space utilization.
With dynamic loading, a routine is not loaded until it is called. All routines are kept on disk in a relocatable load format. The main program is loaded into memory and is executed.
When a routine needs to call another routine, the calling routine first checks to see whether the other routine has been loaded. If not, the relocatable linking loader is called to load the desired routine into memory and update the program's address tables to reflect this change. Then control is passed to the newly loaded routine. (A small sketch of this check-then-load pattern, using dlopen() as a stand-in for the linking loader, appears at the end of this answer.)
Advantages
An unused routine is never loaded. This is most useful when a large amount of code is needed to handle infrequently occurring cases, such as error routines. In this case, although the total program code may be large, the portion actually used (and loaded) can be small.
Dynamic loading doesn't need special support from the OS. It is the responsibility of the user to design their programs to take advantage of the method. However, the OS can provide library routines to help the programmer.
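Here is a rough modern analogue of the check-then-load pattern described above (this is a sketch using dlopen() in place of the book's relocatable linking loader; the library name libfoo.so and the routine name rarely_used_routine are made up):

#include <stdio.h>
#include <dlfcn.h>

typedef void (*routine_fn)(void);

void call_rarely_used_routine(void)
{
    static routine_fn routine = NULL;   /* our one-entry "address table" */

    /* First check whether the routine has already been loaded. */
    if (routine == NULL) {
        /* If not, ask the loader to bring the library into memory
         * and record the routine's address for future calls.      */
        void *lib = dlopen("libfoo.so", RTLD_LAZY);
        if (!lib) {
            fprintf(stderr, "load failed: %s\n", dlerror());
            return;
        }
        routine = (routine_fn)dlsym(lib, "rarely_used_routine");
        if (!routine) {
            fprintf(stderr, "lookup failed: %s\n", dlerror());
            return;
        }
    }

    /* Then control is passed to the (now loaded) routine. */
    routine();
}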

There are two types of linking: static and dynamic. When the output file can be executed without any dependencies (library files) at run time, the linking is called static. Dynamic linking in turn comes in two flavours: 1. dynamic load-time linking and 2. dynamic run-time linking. These are described below.
Dynamic load-time linking refers to linking done when the program is loaded: the library files are brought into primary memory and linked, irrespective of whether their functions are ever called.
Dynamic run-time linking refers to linking only when required: whenever a function call happens, the linking is done at that point during run time. Not all functions are linked, and the code has to be written differently for this.


Which component does dynamic linking?

My question is equivalent to: "What exactly is Dynamic Linker? Which part of an OS does it belong to?".
I know that dynamic linking is done by a component called "dynamic linker" which is also a part of an Operating System. I was wondering if this component can be seen as a part of
Linker (the same that does static linking),
Loader,
RunTime Environment (given that dynamic linking is done while program is "running")
or is it completely different component?
I know that dynamic linking is done by a component called "dynamic linker" which is also a part of an Operating System.
The dynamic linker is a part of the OS only on some OSes, namely Windows.
On UNIX, it is not part of the OS, but rather a part of libc, and you could have multiple dynamic linkers on a single system.
The dynamic loader is part of the runtime environment. It closely cooperates with the static linker (which must prepare the data structures used by the loader) and the OS kernel, but is never a "part" of either.

MoltenVK driver calls the wrong functions for some Vulkan function calls

I'm trying to use Vulkan on MacOS, with the eventual goal of making a cross platform program. My program worked when I statically linked MoltenVK (with the vulkan headers copied into it).
My current setup statically links vulkan.framework, and uses an ICD to load libMoltenVK.dylib. This initially appeared to work: in the logs, I can see "INFO: Found ICD manifest file [expected path to my manifest file]". All the extensions I expect are then found and listed, including VK_KHR_surface. I enable it when I create my VkInstance.
I can call various vulkan functions successfully, including vkCreateMacOSSurfaceMVK. However, when I try to call vkGetPhysicalDeviceSurfaceCapabilitiesKHR, my program crashes.
With no validation layers, it seems like vulkan loads the wrong function: the next symbol in the call stack is vkCreateQueryPool instead of the expected vkGetPhysicalDeviceSurfaceCapabilitiesKHR, and MoltenVK prints [***MoltenVK ERROR***] VK_ERROR_INITIALIZATION_FAILED: vkCreateQueryPool: Unsupported query pool type: -1094795586. The program crashes on the next vulkan call, which happens to be vkGetPhysicalDeviceSurfaceFormatsKHR, but which actually calls into MVKDevice::destroyQueryPool.
With validation layers on, the program crashes on vkGetPhysicalDeviceSurfaceCapabilitiesKHR, which tries to call 0x0. There are no errors printed about validation failing for any of them.
I tried turning on Dynamic Linker API usage/ Dynamic Library loads in Xcode's scheme editor, but I didn't find the information they provided helpful.
Why would vulkan/dyld not hook up some functions correctly? How can I debug this?
You are actually using the Vulkan Loader, not an "ICD", to load libMoltenVK.dylib. libMoltenVK.dylib itself is considered an Installable Client Driver (ICD).
Are you using vkGetInstanceProcAddr() to get the function address of vkGetPhysicalDeviceSurfaceCapabilitiesKHR()? The Vulkan Loader only exports the core (and some WSI) functions. You'll need to get the addresses of other extension functions with vkGetInstanceProcAddr().
See this code for an example of using GIPA. Search for "vkGetPhysicalDeviceSurfaceCapabilitiesKHR". vulkaninfo calls this extension successfully on MacOS.
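For illustration, a sketch of what that looks like (it assumes you already have a VkInstance, VkPhysicalDevice and VkSurfaceKHR in hand, e.g. from vkCreateInstance and vkCreateMacOSSurfaceMVK; the helper name query_surface_caps is made up):

#include <vulkan/vulkan.h>
#include <stdio.h>

VkResult query_surface_caps(VkInstance instance,
                            VkPhysicalDevice physicalDevice,
                            VkSurfaceKHR surface,
                            VkSurfaceCapabilitiesKHR *caps)
{
    /* Ask the loader for the extension entry point by name instead of
     * calling the symbol directly.                                     */
    PFN_vkGetPhysicalDeviceSurfaceCapabilitiesKHR pfnGetCaps =
        (PFN_vkGetPhysicalDeviceSurfaceCapabilitiesKHR)
            vkGetInstanceProcAddr(instance,
                                  "vkGetPhysicalDeviceSurfaceCapabilitiesKHR");

    if (pfnGetCaps == NULL) {
        /* The loader could not resolve the symbol: the extension is
         * probably missing or was not enabled on the instance.        */
        fprintf(stderr, "vkGetPhysicalDeviceSurfaceCapabilitiesKHR not found\n");
        return VK_ERROR_EXTENSION_NOT_PRESENT;
    }

    return pfnGetCaps(physicalDevice, surface, caps);
}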
I suspect that the MoltenVK lib exports this extension function directly, which is why it worked when you statically linked MoltenVK. Also, when using the Vulkan loader, you should not be linking the MoltenVK library to your application. The Vulkan loader loads it for you.

What is the difference between byte code files like Java bytecode and machine code executables like ELF?

What are the differences between byte code binary executables such as Java class files, Parrot bytecode files, or CLR files, and machine code executables such as ELF, Mach-O, and PE?
What are the distinctive differences between the two?
For example, the .text area in the ELF structure corresponds to what part of the class file?
Or: they all have headers, but the ELF and PE headers contain the architecture while the class file does not.
Java Class File
Elf file
PE File
Byte code is, as imulsion noted, an intermediate step, right before compilation into machine code. Because the last step is left to load time (and often runtime, as is the case with Just-In-Time (JIT) compilation), byte code is architecture independent: the runtime (CLR for .NET or JVM for Java) is responsible for mapping the byte code opcodes to their underlying machine code representation.
By comparison, native code (Windows: PE, PE32+; OS X/iOS: Mach-O; Linux/Android/etc.: ELF) is compiled code, suited for a particular architecture (Android/iOS: ARM, most else: Intel 32-bit (i386) or 64-bit). These are all very similar, but still require sections (or, in Mach-O parlance, "Load Commands") to set up the memory structure of the executable as it becomes a process (old DOS supported the ".com" format, which was a raw memory image). In all of the above, you can say, roughly, the following:
Sections with a "." are created by the compiler, and are "default" or expected to have default behavior
The executable has the main code section, usually called "text" or ".text". This is native code, which can run on the specific architecture
Strings are stored in a separate section. These are used for hard-coded output (what you print out) as well as symbol names.
Symbols - which are what the linker uses to put together the executable with its libraries (Windows: DLLs, Linux/Android: Shared Objects, OS X/iOS: .dylibs or frameworks) are stored in a separate section. Usually there is also a "PLT" (Procedure Linkage Table) which enables the compiler to simply put in stubs to the functions you call (printf, open, etc), that the linker can connect when the executable loads.
Import table (in Windows parlance; in ELF this is the DYNAMIC section, in OS X an LC_LOAD_DYLIB load command) is used to declare additional libraries. If those aren't found when the executable is loaded, the load fails, and you can't run it.
Export table (for libraries/dylibs/etc.) holds the symbols which the library (or, in Windows, even an .exe) can export so that others can link against them.
Constants are usually in what you see as the ".rodata" section. (A small example mapping C code to these sections follows below.)
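To make the section layout a bit more tangible, here is a rough sketch of where the pieces of a small C file typically land (this assumes a typical ELF/GCC toolchain; exact section names and placement can vary):

#include <stdio.h>

const char banner[] = "hello";   /* string/constant data -> .rodata          */
int counter = 42;                /* initialized global    -> .data            */
int scratch[256];                /* uninitialized global  -> .bss (only a size
                                    is recorded, no bytes stored in the file) */

int main(void)                   /* compiled machine code -> .text            */
{
    /* printf is an undefined symbol here; the linker records it so that
     * the dynamic linker can resolve it (via the PLT/GOT) against libc.    */
    printf("%s %d\n", banner, counter);
    return scratch[0];
}

Running objdump -t or readelf -S on the resulting binary will show which section each symbol ended up in.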
Hope this helps. Really, your question was vague..
TG
Byte code is a 'halfway' step. So the Java compiler (javac) will turn the source code into byte code. Machine code is the next step, where the computer takes the byte code, turns it into machine code (which can be read by the computer) and then executes your program by reading the machine code. Computers cannot read source code directly, likewise compilers cannot translate immediately into machine code. You need a halfway step to make programs work.
Note that ELF binaries don't necessarily need to be machine/arch specific per se.
The interesting piece is the "interpreter" header field: it holds a path name to a loader program that's executed instead of the actual binary. This one then is responsible for loading the actual program, loading and linking libraries, etc. This is how e.g. ld.so comes in.
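If you want to poke at that field yourself, here is a minimal sketch (it assumes a 64-bit little-endian ELF and does almost no validation) that prints the interpreter path of a binary, the same path readelf -l reports as the requesting program interpreter:

#include <stdio.h>
#include <stdlib.h>
#include <elf.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    Elf64_Ehdr ehdr;
    if (fread(&ehdr, sizeof ehdr, 1, f) != 1) { perror("fread"); return 1; }

    /* Walk the program headers looking for PT_INTERP. */
    for (int i = 0; i < ehdr.e_phnum; i++) {
        Elf64_Phdr phdr;
        fseek(f, (long)(ehdr.e_phoff + (long)i * ehdr.e_phentsize), SEEK_SET);
        if (fread(&phdr, sizeof phdr, 1, f) != 1) break;

        if (phdr.p_type == PT_INTERP) {
            char path[4096];
            size_t n = phdr.p_filesz < sizeof path ? phdr.p_filesz : sizeof path - 1;
            fseek(f, (long)phdr.p_offset, SEEK_SET);
            if (fread(path, 1, n, f) == n) {
                path[n] = '\0';
                printf("interpreter: %s\n", path);   /* e.g. /lib64/ld-linux-x86-64.so.2 */
            }
        }
    }
    fclose(f);
    return 0;
}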
Theoretically one could create an ELF binary that holds java bytecode (or a complete jar). This just needs some appropriate "interpreter" program which starts up a JVM and loads the code from the binary into it.
Not sure whether this actually has been done before, but certainly possible.
The same can be done w/ quite any non-native code.
It also could serve for direct multiarch support via some VM like qemu:
Let the target platform (libc+linker scripts) put the arch name into the interpreter program name (eg. /lib/ld.so.x86_64, /lib/ld.so.armhf, ...).
Then, on a particular arch (eg. x86_64), the one with native arch name will point to the original ld.so, while the others point to some special one that calls up something like qemu-system-XXX.

static versus shared libraries in small embedded systems using C without OS (assuming XIP)

Does a small embedded system without an RTOS/OS use dynamic/shared libraries? My understanding is that they are very tough to use there and would not be productive.
If we call an API multiple times that is present in a static library, will the API code be placed at every call location, like a macro expansion, or will the code/text be common for all calls? I think the code/text will be common.
If I have made a static library from .c files that contain multiple APIs, and I statically link it with my main file, but only one API is called in the main file, is the whole library included in the final .bin, or only the code of that particular API?
From the above questions you can tell that I am missing the fundamentals themselves, so can anyone please provide related links to brush up on these?
Regards
[edit]
I have tried the following things:
addition.c module
`int addition(int a,int b)`
`{`
`int result;`
`result = a + b;`
`return result;`
`}`
`size addition.o`
23 0 0 23 17 addition.o
multiplication.c module
`int multiplication(int a, int b)`
`{`
`int result;`
`result = a * b;`
`return result;`
`}`
`size multiplication.o`
21 0 0 21 15 multiplication.o
created object files of both and put them in an archive
ar cr libarith.a addition.o multiplication.o
then statically linked to my main application
example.c module
`#include "header.h"`
`#include <stdio.h>`
`1:int main()`
`2:{`
`3:int result;`
`4:result = addition(1,2);`
`5:printf("addition result is : %d\n",result);`
`6:result = multiplication(3,2);`
`7:printf("multiplication result is : %d\n",result);`
`8:return 0;`
`9:}`
gcc -static example.c -L. -larith -o example
size of example
511141 1928 7052 520121 7efb9 example
commented line number 6 of example.c
and again linked
gcc -static example.c -L. -larith -o example
size of example
511109 1928 7052 520089 7ef99 example
32 bytes of difference between the above two,
that means multiplication.o is not included in example
merged both modules addition.c and multiplication.c as addmult.c as below
int addition(int a,int b)
{
int result;
result = a + b;
return result;
}
int multiplication(int a, int b)
{
int result;
result = a * b;
return result;
}
created an object file and put it in an archive
(before doing that I deleted the previous archive)
ar cr libarith.a addmult.o
now commented line number 6 of example.c
gcc -static example.c -L. -larith -o example
size example
511093 1928 7052 520073 7ef89 example
uncommented line number 6 of example.c
size example
511141 1928 7052 520121 7efb9 example
My question is: in both cases, if both functions are called, the final text size is the same; but if only one function is called, there is a difference of 16 bytes.
But multiplication.o's size is 23, so it has definitely not been included, so how do we justify the 16?
Or am I missing some fundamental here?
To dynamically load and link a library at runtime requires code to perform the load/link operation. That capability is normally part of an operating system. Moreover in a system without mass-storage of some kind, dynamic linking would not have any benefits since the dynamically linked code would have to exist in memory in any case so may as well have been statically linked.
To answer the second part of your question, a static library is simply a collection of object files in an archive. The linker will only extract and link the object code necessary to resolve symbols referenced in the executable as a whole. Some smart linkers can discard unused functions from within an object file, but you should not rely on that.
So by linking a static library you are not including all the unused code in the library. You can probably tell that by comparing the size of all your library files with the size of the executable binary - you will probably see that your executable is far smaller than the sum of the sizes of the libraries linked. Also your linker will have an option to create a map file which will tell you exactly what code has been included, and if it has a cross-reference output facility, what code references or is referenced by what.
If you are building your own static libraries, or even your own non-library code, it will pay to ensure good granularity at the object file level. For example if an object file contains two functions, one used and one unused, most linkers will have no choice but to include both, whereas if the functions are defined in separate compilation units (source files), then they will be in separate object files (even when collated into a library) and can be separately linked.
If you really have an embedded system without any operating system, then your hardware essentially has fixed software, which you can change only by physical means (e.g. a soldering iron, or plugging something in, etc...). In that case, that software runs on the "bare iron" and does somehow what an OS provides (it manages the physical resources and interacts directly with the I/O ports through appropriate machine instructions).
In particular, an embedded system without any OS cannot have any kind of dynamic libraries, because by definition these libraries need to be inside some files (on the embedded processor), and to have files you need an operating system.
The exact definition of an operating system is debatable and fuzzy; I believe that providing a file system is one of the roles of most current OSes.
Since shared libraries (or static libraries) are libraries sitting inside some files, you cannot have them without an OS. Something which provide files is by definition an operating system.
Perhaps you are using a cross-development chain to develop your embedded software. If you want to get something which runs on the bare metal, your chain has to ultimately give a single binary image which you can flash into a ROM, then solder or plug that ROM -or transfer somehow physically- in your embedded hardware (some tools enable you to flash an entire self contained processor).
I believe you might be confused, and you should read more about operating systems, kernels, the linux kernel, file systems, syscalls, RTOS, linkers & loaders, cross-compilers, microcontrollers, shared libraries, dynamic linkers ....
As Clifford suggested in comments, you could have an embedded system with some file system and some dynamic linker; in my view that would make an embryonic operating system, but it is a debatable matter of definition.
Notice that making a dynamic linker might not be an easy task (you'll need to do relocation); you could either make a generic ELF dynamic loader, or you could restrict the form of the dynamically loaded modules, and perhaps use your specific ld script to generate them.
You already have all the fundamentals you need. Without an operating system, mass storage (disc, filesystem, etc.) and multiple/many different programs that can take advantage of the shared library, it doesn't make any sense. You don't save anything, and it probably costs you a little more if you were to fake it enough to use a shared library in a fixed bare-metal environment.
You mentioned having CodeSourcery; how do you learn these things? You disassemble your binaries and see what the compiler did. Does it link the entire gcc library because you used one divide? Does it link the entire C library because you used one function (does it even work to try to link a C library function? many have system calls to an operating system which you have to resolve)? Start by using a simple divide in a very simple function (it needs to be generic):
unsigned int fun ( unsigned int a, unsigned int b )
{
return(a/b);
}
DO NOT call that function with fixed constants and do not call it from the same .c file; the best thing would be to simply add that function as is and do nothing else with it, just have it sit there. You may hit problems even trying to compile it; once you do, disassemble and see what the compiler did with it, and see whether the entire gcc library was added or just the code for that one function.
You can't trust any old web page or resource, as it may not be about the same tools you are using and may be outdated; the compiler you are using right now is the one that matters, right now, no other. And the answers are all right there in front of you.
No, they don't use dynamic libraries; the functions needed are linked in as needed. The optimizer may choose to inline some code, but in general the code for each function is in one place and each call to it is a call; it is not like a macro, in general. Again, the optimizer may choose otherwise for performance reasons (functions small enough that they don't consume too much memory, and small enough that the code required to make a function call is excessive compared to the function itself). Also, that function needs to be in the same optimization space: for gcc this is the same .c file, for llvm this could be any code in the project.
I have some examples, cortex-m and others, bare metal: http://github.com/dwelch67 You may find some that help answer your questions; examine, for example, how the compiler will implement a public function like the one above AND inline it when used. If you declare the function as static, then the optimizer, if it inlines, doesn't need to implement the function in the binary. If you make a call to a function like that in the same .c file, for example
c = fun(10,5);
there is a good chance that the optimizer if used, will replace that code with
c = 2;
and not perform the divide at all.
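As a small illustration of that point about static functions (a sketch only; compile with optimization, e.g. -O2, and disassemble to check what your compiler actually did):

/* With optimization the compiler can fold the call below to "return 2;"
 * and, because fun() is static and fully inlined, it may not need to emit
 * a separate copy of fun() in the binary at all.                          */
static unsigned int fun ( unsigned int a, unsigned int b )
{
    return(a/b);
}

unsigned int use_fun ( void )
{
    return fun(10,5);
}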

DLL and LIB files - what and why?

I know very little about DLL's and LIB's other than that they contain vital code required for a program to run properly - libraries. But why do compilers generate them at all? Wouldn't it be easier to just include all the code in a single executable? And what's the difference between DLL's and LIB's?
There are static libraries (LIB) and dynamic libraries (DLL) - but note that .LIB files can be either static libraries (containing object files) or import libraries (containing symbols to allow the linker to link to a DLL).
Libraries are used because you may have code that you want to use in many programs. For example if you write a function that counts the number of characters in a string, that function will be useful in lots of programs. Once you get that function working correctly you don't want to have to recompile the code every time you use it, so you put the executable code for that function in a library, and the linker can extract and insert the compiled code into your program. Static libraries are sometimes called 'archives' for this reason.
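For instance, a sketch of such a function as it might live in its own source file before being archived into a library (the file and function names are made up; the build commands in the comment are one MSVC-style possibility):

/* strcount.c - a tiny routine you might put into a static library.
 * Build, for example, with:
 *   cl /c strcount.c
 *   lib /OUT:strcount.lib strcount.obj
 */
#include <stddef.h>

/* Count the characters in a NUL-terminated string. */
size_t count_chars(const char *s)
{
    size_t n = 0;
    while (s[n] != '\0')
        n++;
    return n;
}

Any program that links against strcount.lib and calls count_chars() gets the compiled code for this function copied into its executable by the linker.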
Dynamic libraries take this one step further. It seems wasteful to have multiple copies of the library functions taking up space in each of the programs. Why can't they all share one copy of the function? This is what dynamic libraries are for. Rather than building the library code into your program when it is compiled, it can be run by mapping it into your program as it is loaded into memory. Multiple programs running at the same time that use the same functions can all share one copy, saving memory. In fact, you can load dynamic libraries only as needed, depending on the path through your code. No point in having the printer routines taking up memory if you aren't doing any printing. On the other hand, this means you have to have a copy of the dynamic library installed on every machine your program runs on. This creates its own set of problems.
As an example, almost every program written in 'C' will need functions from a library called the 'C runtime library', though few programs will need all of the functions. The C runtime comes in both static and dynamic versions, so you can choose which version your program uses depending on your particular needs.
Another aspect is security (obfuscation). Once a piece of code is extracted from the main application and put in a "separate" Dynamic-Link Library, it is easier to attack and analyse (reverse-engineer) the code, since it has been isolated. When the same piece of code is kept in a LIB library, it is part of the compiled (linked) target application, and it is thus harder to isolate (differentiate) that piece of code from the rest of the target binaries.
One important reason for creating a DLL/LIB rather than just compiling the code into an executable is reuse and relocation. The average Java or .NET application (for example) will most likely use several 3rd party (or framework) libraries. It is much easier and faster to just compile against a pre-built library, rather than having to compile all of the 3rd party code into your application. Compiling your code into libraries also encourages good design practices, e.g. designing your classes to be used in different types of applications.
A DLL is a library of functions that are shared among other executable programs. Just look in your windows/system32 directory and you will find dozens of them. When your program creates a DLL it also normally creates a lib file so that the application *.exe program can resolve symbols that are declared in the DLL.
A .lib is a library of functions that are statically linked to a program -- they are NOT shared by other programs. Each program that links with a *.lib file has all the code in that file. If you have two programs A.exe and B.exe that link with C.lib then each A and B will both contain the code in C.lib.
How you create DLLs and libs depends on the compiler you use. Each compiler does it differently.
One other difference lies in the performance.
As the DLL is loaded at runtime by the .exe(s), the .exe(s) and the DLL work with a shared-memory concept, and hence the performance is relatively lower than with static linking.
On the other hand, a .lib is code that is linked statically at compile time into every process that requests it. Hence each .exe carries its own copy of that code in memory, which increases the performance of the process.