How to tell the C++ linker that some classes will be added later by dlopen - g++

I have legacy C++ code that I'm trying to re-engineer.
I want to take some parts of the code out of the project as ".so" shared libraries and load them dynamically with "dlopen".
I have written a dynamic loading mechanism which can load new modules at runtime.
Now I want to decouple the existing modules from the main project.
For instance, I have extracted module "X" from the main project and created a shared library which can be loaded later, but some parts of the main project use module X's classes directly and I can't change them yet.
I can compile the project using module X's header files, but the linker throws "undefined reference" errors.
How can I tell the C++ linker that these classes will be added later by the dlopen mechanism at runtime?
Note: I can link and run the project by copying module X's ".so" file into the "/lib" folder and linking with the "-lX" flag, but if I delete this file from the /lib folder the project fails on startup.

As far as I know, if you use X's classes directly you have to link X.so into your program. But even if you link against X.so, you can still use dlopen at runtime.

What you need is called an import library. Import libraries contain small wrappers for all the necessary functions and thus satisfy all of the static linker's dependencies. At runtime these wrappers load the dynamic library if it is not yet loaded and forward execution to the real implementation inside it.
Import libraries are a standard feature of Windows DLLs, but they are not available out of the box on Linux (or any other POSIX system). You can implement the wrappers by hand or use Implib.so to generate them automatically.
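For a single C-style entry point such a wrapper can be written by hand. Below is a minimal sketch that assumes a hypothetical libX.so exporting a function x_compute: the stub satisfies the static linker and dlopens the real library on first use. For C++ classes the same idea has to be repeated for every mangled symbol (constructors, methods, vtables), which is why generating the wrappers with a tool like Implib.so is usually the practical choice.

    // Hand-written import stub (sketch): satisfies the static linker and
    // loads libX.so lazily. "libX.so" and "x_compute" are hypothetical names.
    #include <dlfcn.h>
    #include <stdexcept>

    extern "C" int x_compute(int value) {      // same signature the callers expect
        using fn_t = int (*)(int);
        static fn_t real = [] {
            void* handle = dlopen("libX.so", RTLD_LAZY);
            if (!handle)
                throw std::runtime_error(dlerror());
            // Look up the real implementation inside the freshly loaded library.
            auto sym = reinterpret_cast<fn_t>(dlsym(handle, "x_compute"));
            if (!sym)
                throw std::runtime_error(dlerror());
            return sym;
        }();
        return real(value);                    // forward to the real implementation
    }

The main program then only needs -ldl at link time; libX.so is searched for at the first call instead of at startup.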

Related

Include .lib in a .dll, which is used by program as plugin

I am using Visual Studio 2008 and trying to create a .dll. The DLL uses an external library (.lib). Compiling and linking work fine (I included the paths to the headers/lib in the options). When my .dll is used by a program (as a plugin) it says "externalLibrary.dll missing", but there is no externalLibrary.dll, just an externalLibrary.lib.
Are there different linking options (so the externalLibrary is already in my .dll)? Or can I simply create a .dll from the .lib? Or any other solutions to this problem?
Edit (to be more concrete):
In the project properties I added:
the header path # C/C++ - General - Additional Include Directories
the library path # Linker - General - Additional Library Directories
the library name # Linker - Input - Additional Dependencies (although this doesn't change anything)
The .lib file you are using is an import library, which basically means that it contains only stubs for functions/classes/... but not the actual implementation. The implementation is in the DLL. An import library is only useful for the linker, which uses it to resolve symbols. At runtime, however, the actual compiled code is needed, so your application/DLL looks for the DLL. Even if your DLL is used as a plugin, it is no problem for it to depend on other DLLs, so if you have the other DLL I suggest you go that way. (What is "externalLibrary", by the way? It is not normal for a vendor to supply only an import library and not the DLL.)
If you really do not want to use the external DLL, you will have to find the static library for the code of "externalLibrary". Unlike the import library, a static library does contain all symbols complete with the actual implementation. So after linking with a static library, your application/DLL contains the code itself and does not need to resolve it at runtime.

Eclipse managed make with static and dynamic linked libraries at the same time

I am using the managed make functionality of Eclipse CDT. Building the project with only dynamically linked libraries works as expected, but boost_unit_test_framework should be linked statically because it contains the main function. On the command line it is no problem to mix dynamic and static libraries, so this is a working example:
g++ -L../Debug -L../boost/lib -o "Test" ./Test.o -ldynLib -Wl,-Bstatic -lboost_unit_test_framework -Wl,-Bdynamic
dynLib and the standard libraries like libc are linked dynamically, and boost_unit_test_framework is linked statically. But how can I enter this information in the project settings? I cannot see any way.
It may be possible to flag this library for static linking in every project, for example in a global place. There is a convention used by QNX ([manual]): LIBPREF_library and LIBPOST_library can be used to add options before or after the specified library.
Update:
I still have no clue how to solve the described problem, but in the meantime I have switched my build system from Managed Make to CMake. Additionally, I am now using Qt Creator because it is able to index Boost and does not freeze the UI while updating some internal structures...
[manual] http://www.qnx.com/developers/docs/6.3.0SP3/neutrino/prog/make_convent.html#USEMAC
I don't think you need to specify the type of linking. Dynamic libraries can't be linked statically, and vice versa. On one of my projects, under Project Properties -> C/C++ Build -> Settings, I have both static and dynamic libraries listed under Libraries. It seems to work out what type they are and link fine either way.
Dynamic libraries go in: Linker / Libraries / Libraries (-l)
Static libraries go in: Linker / Miscellaneous / Other files and objects

What's the meaning of dylib files?

My C++ compiler creates "dylib" files which contain dynamic libraries. What's the difference between .dylib and .so files?
And what is the difference between files in Mach-O format and files in ELF format? I have to build files for later use under iOS (static libraries only/Mach-O) and Android (ELF).
Thanks!
I found that:
One Mach-O feature that hits many people by surprise is the strict
distinction between shared libraries and dynamically loadable modules.
On ELF systems both are the same; any piece of shared code can be used
as a library and for dynamic loading. Use otool -hv some_file to see
the filetype of some_file.
Mach-O shared libraries have the file type MH_DYLIB and carry the
extension .dylib. They can be linked against with the usual static
linker flags, e.g. -lfoo for libfoo.dylib. However, they can not be
loaded as a module. (Side note: Shared libraries can be loaded
dynamically through an API. However, that API is different from the
API for bundles and the semantics make it useless for a dlopen()
emulation. Most notably, shared libraries can not be unloaded.) [This
is no longer true—you can use dlopen() with both dylibs and bundles.
However, dylibs still can't be unloaded.]
Loadable modules are called "bundles" in Mach-O speak. They have the
file type MH_BUNDLE. Since no component involved cares about it, they
can carry any extension. The extension .bundle is recommended by
Apple, but most ported software uses .so for the sake of
compatibility. Bundles can be dynamically loaded and unloaded via dyld
APIs, and there is a wrapper that emulates dlopen() on top of that
API. [dlopen is now the preferred API.] It is not possible to link
against bundles as if they were shared libraries. However, it is
possible that a bundle is linked against real shared libraries; those
will be loaded automatically when the bundle is loaded.
To compile a normal shared library on OS X, you should use -dynamiclib
and the extension .dylib. -fPIC is the default.
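For illustration, here is a rough sketch of how the two kinds of files might be built and then loaded on macOS; the file names and the symbol plugin_entry are made up:

    // Loading a Mach-O loadable module (bundle) through dlopen (sketch).
    // Hypothetical build commands:
    //   clang++ -dynamiclib plugin.cpp -o libplugin.dylib   (shared library)
    //   clang++ -bundle     plugin.cpp -o plugin.bundle     (loadable module)
    #include <dlfcn.h>
    #include <cstdio>

    int main() {
        void* handle = dlopen("plugin.bundle", RTLD_NOW);
        if (!handle) {
            std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }
        // Resolve a C-linkage entry point exported by the module.
        auto entry = reinterpret_cast<void (*)()>(dlsym(handle, "plugin_entry"));
        if (entry)
            entry();
        dlclose(handle);   // a bundle can be unloaded; a dylib loaded this way stays in memory
        return 0;
    }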

Difference between modules and shared libraries?

The title mostly covers it: what is the difference between a module and a shared library? I just found this distinction in CMake's add_library command, where the documentation says:
SHARED libraries are linked dynamically and loaded at runtime. MODULE libraries are plugins that are not linked into other targets but may be loaded dynamically at runtime using dlopen-like functionality.
But I can load a shared object using dlopen(), can't I?
The difference is that you can link to a SHARED library with the linker, but you cannot link to a MODULE with the linker. On some platforms.
So... to be fully cross-platform and work everywhere CMake works, you should never do this:
# This is a big NO-NO:
add_library(mylib MODULE ${srcs})
target_link_libraries(myexe mylib)
To be fair, on Windows, they're both just dlls, and so this code might actually work. But when you take it to a platform where it's impossible to link to the MODULE, you'll encounter an error.
Bottom line: if you need to link to the library, use SHARED. If you are guaranteed that the library will only be loaded dynamically, then it's safe to use a MODULE. (And perhaps even preferable to help detect if somebody does try to link to it...)
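In practice a MODULE is consumed through a small C-linkage factory function that the host resolves with dlsym, rather than through a link-time dependency. A minimal sketch of that pattern, with a made-up interface and made-up names:

    // Host side sketch: loads a MODULE at runtime instead of linking against it.
    #include <dlfcn.h>
    #include <memory>
    #include <stdexcept>

    // Interface shared between host and plugin (illustrative only).
    struct Plugin {
        virtual ~Plugin() = default;
        virtual void run() = 0;
    };

    std::unique_ptr<Plugin> load_plugin(const char* path) {
        void* handle = dlopen(path, RTLD_NOW);
        if (!handle)
            throw std::runtime_error(dlerror());
        // "create_plugin" is an extern "C" factory the module is assumed to export,
        // e.g. extern "C" Plugin* create_plugin() { return new MyPlugin; }
        auto create = reinterpret_cast<Plugin* (*)()>(dlsym(handle, "create_plugin"));
        if (!create)
            throw std::runtime_error(dlerror());
        return std::unique_ptr<Plugin>(create());
    }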
I think the distinction being made is that shared libraries are specified by the developer at compile-time and must be present for the application to run, even though their methods are loaded at runtime. A module, i.e. plugin, adds additional support at runtime but isn't required. Yes, you can dlopen() a shared library but in that case it would not have been specified as a required part of the program and functions instead as a module.
Another difference is in how ..._OUTPUT_DIRECTORY and ..._OUTPUT_NAME are handled:
Module libraries are always treated as library targets. For non-DLL platforms shared libraries are treated as library targets. For DLL platforms the DLL part of a shared library is treated as a runtime target and the corresponding import library is treated as an archive target. All Windows-based systems including Cygwin are DLL platforms.
For example, this means that if you compile a SHARED library on Windows, LIBRARY_OUTPUT_DIRECTORY will be ignored, because it's looking at ARCHIVE_OUTPUT_DIRECTORY and RUNTIME_OUTPUT_DIRECTORY instead.

Difference between load-time and run-time dynamic linking

What is the difference between Load-time dynamic linking and Run-time dynamic linking?
Load-time Dynamic Linking
When an executable is linked against a DLL at build time, the linker does not insert object code but rather inserts a stub which basically says "a function with this name is located in this DLL".
Now when the executable is run, bits of it are missing (i.e. the function stubs), so before the program is allowed to run, the program loader fixes up these missing functions by replacing them with entry points into the DLL files.
Only after all the stubs have been replaced (i.e. resolved) is the executable allowed to run.
That is load time dynamic linking.
Run-time Dynamic Linking
In this case the executable was not linked against any DLL at build time, so it contains no stubs into the DLL and the program loader has no issue running the executable.
But the task of getting access to the functions within the DLL is left to the executable itself, which can be done using the GetProcAddress Windows API.
That is run time dynamic linking.
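A minimal sketch of run-time dynamic linking on Windows, assuming a hypothetical example.dll that exports a function compute:

    // Run-time dynamic linking (sketch): the executable is not linked against
    // example.dll at build time; it loads the DLL on demand instead.
    // "example.dll" and "compute" are hypothetical names.
    #include <windows.h>
    #include <cstdio>

    int main() {
        HMODULE dll = LoadLibraryA("example.dll");
        if (!dll) {
            std::printf("LoadLibrary failed: %lu\n", GetLastError());
            return 1;
        }
        // GetProcAddress resolves the exported function by name at run time.
        using compute_fn = int (__cdecl*)(int);
        auto compute = reinterpret_cast<compute_fn>(GetProcAddress(dll, "compute"));
        if (compute)
            std::printf("compute(2) = %d\n", compute(2));
        FreeLibrary(dll);
        return 0;
    }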
You forgot the "homework" tag.
Load-time linking means that the DLL you're linking to is loaded when your application starts, regardless of whether or not you actually use the functionality in that DLL. Run-time linking means that the functionality of the DLL is only loaded when it's actually needed.
Load-time dynamic linking is performed by the operating system when an application is loaded. The OS uses the information the linker placed in the file to locate the names of the DLLs and then searches for those DLLs. If it fails to locate a DLL, it simply terminates with an error message; otherwise, the OS maps the DLL into the virtual address space of the process and increments the DLL's reference count.