Why can't you statically link dynamic libraries? - static-linking

When using external libraries, you often have to decide whether to use the static or the dynamic version of the library. Typically, you cannot exchange them: if the library is built as a dynamic library, you cannot link statically against it.
Why is this the case?
Example: I am building a C++ program on Windows and use a library that provides a small .lib file for the linker and a large .dll file that must be present when running my executable. If the library code in the .dll can be resolved at runtime, why can't it be resolved at compile time and put directly into my executable?

Why is this the case?
Most linkers (AIX linker is a notable exception) discard information in the process of linking.
For example, suppose you have foo.o with foo in it, and bar.o with bar in it. Suppose foo calls bar.
After you link foo.o and bar.o together into a shared library, the linker merges code and data sections, and resolves references. The call from foo to bar becomes CALL $relative_offset. After this operation, you can no longer tell where the boundary between code that came from foo.o and code that came from bar.o was, nor the name that CALL $relative_offset used in foo.o -- the relocation entry has been discarded.
Suppose now you want to link foobar.so with your main.o statically, and suppose main.o already defines its own bar.
If you had libfoobar.a, that would be trivial: the linker would pull foo.o from the archive, would not use bar.o from the archive, and resolve the call from foo.o to bar from main.o.
But it should be clear that none of the above is possible with foobar.so -- the call has already been resolved to the other bar, and you can't discard the code that came from bar.o because you don't know where that code is.
On AIX it's possible (or at least it used to be possible 10 years ago) to "unlink" a shared library and turn it back into an archive, which could then be linked statically into a different shared library or a main executable.
If foo.o and bar.o are linked into a foobar.so, wouldn't it make sense that the call from foo to bar is always resolved to the one in bar.o?
This is one place where UNIX shared libraries work very differently from Windows DLLs. On UNIX (under common conditions), the call from foo to bar will resolve to the bar in the main executable.
This allows one to e.g. implement malloc and free in the main a.out, and have all calls to malloc use that one heap implementation consistently. On Windows you would have to always keep track of "which heap implementation did this memory come from".
The UNIX model is not without disadvantages though, as the shared library is not a self-contained mostly hermetic unit (unlike a Windows DLL).
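To make that concrete, here is a minimal sketch (file names invented) of the behaviour described above, assuming a typical ELF system with default symbol visibility:

// foo.cpp and bar.cpp are linked into libfoobar.so; main.cpp into the executable.

// foo.cpp
void bar();                       // the call below goes through the PLT, so it is interposable
void foo() { bar(); }

// bar.cpp
#include <cstdio>
void bar() { std::puts("bar from libfoobar.so"); }

// main.cpp
#include <cstdio>
void foo();
void bar() { std::puts("bar from main"); }    // preempts the library's bar
int main() { foo(); }                         // prints "bar from main"

Built with something like g++ -fPIC -shared foo.cpp bar.cpp -o libfoobar.so and g++ main.cpp -o app -L. -lfoobar, the call inside the library is resolved at load time to the definition in the executable, exactly as described.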
Why would you want to resolve it to another bar from main.o?
If you don't resolve the call to the bar in main.o, you end up with a totally different program compared to linking against libfoobar.a.

Related

Creating a executable out of a library in CMake, and linking the library in another library

I am trying to link the foo-lib library into another target, bar-lib, but doing the following results in an error:
(add_executable):
Cannot find source file: foo-lib
How can I create an executable out of a library, while also linking the same library into another target?
add_library(foo-lib STATIC src/foo.cpp)
add_executable(foo-ut foo-lib)
target_include_directories(foo-ut PRIVATE include)
target_link_libraries(foo-ut PUBLIC lib1 lib2)
# second library that links foo-lib
add_library(bar-lib STATIC src/bar.cpp)
add_executable(bar-ut bar-lib)
target_include_directories(bar-ut PRIVATE include)
target_link_libraries(bar-ut PUBLIC foo-lib)
This worked the way I wanted, but I am not sure whether I should be adding foo.cpp to the bar-ut target:
add_executable(bar-ut src/foo.cpp src/bar.cpp)
I'm not going to question the design choices, what you need this for, or whether it is a good idea in any way. I will just provide you with a "scalable" way of doing this.
As the others pointed out, add_executable() requires source files.
Now, assuming that the source files you use to create the static library contain a main() function, you can create an executable out of the same source files by passing add_executable() the same sources you would pass to add_library().
As the program grows these lists would get lengthy, so what you can do (even though it is no longer recommended by "CMake best practices") is introduce a SOURCES variable. I.e.:
set(PROJECT_SOURCES source1.cpp source2.cpp source3.cpp)
set(PROJECT_HEADERS header1.h header2.h header3.h)
add_library(foo-lib STATIC ${PROJECT_SOURCES} ${PROJECT_HEADERS})
add_executable(foo-ut ${PROJECT_SOURCES} ${PROJECT_HEADERS})
As your program grows you would just add the respective files into the designated variables. Now as to possible improvements:
fabian mentioned a very good thing, which is OBJECT libraries: since you are rebuilding the same files for both the executable and the library, you could just create an object library once and reuse it in both targets. This makes the build roughly twice as fast (you only need to compile each file once); see the sketch below.
Since these SOURCES are already passed once to some target, you could also get them from the target's properties via get_target_property(MY_SOURCES foo-lib SOURCES); this gives you a variable MY_SOURCES containing the sources used by the library target.
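A hedged sketch of the OBJECT-library variant (assuming a reasonably recent CMake; target names follow the example above):

# Sources are compiled exactly once into an object library, then reused
# by both the static library and the unit-test executable.
add_library(foo-objs OBJECT ${PROJECT_SOURCES} ${PROJECT_HEADERS})
target_include_directories(foo-objs PUBLIC include)

add_library(foo-lib STATIC $<TARGET_OBJECTS:foo-objs>)
add_executable(foo-ut $<TARGET_OBJECTS:foo-objs>)
target_link_libraries(foo-ut PUBLIC lib1 lib2)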

Linking priority in cmake

Is there any way I can set a priority for linking some libs/classes in CMake?
I have a library BFciLib that I use in my class, and I have added it in CMake like this:
target_link_libraries(Main PRIVATE BFciLib GL ${GTKMM_LIBRARIES})
I had to override some of the methods of that lib so that I can develop on my PC (without having to connect the device for which the mentioned lib is used, etc.).
So I have C-wrapped those methods, and basically I now have two definitions of the same method.
I need my newly created methods to be used in MyClass instead of the original ones (the ones from the lib), without modifying the class, but I still need to use the library since other methods from it are used. So I believe this has to be done in the CMake file.
For that I have added another executable, but I fail to see what more I should do:
#new exe
add_executable(
NewMain
src/main.cpp
src/MyClass.hpp
src/MyClass.cpp
src/ClassWithCWrappers.hpp
src/ClassWithCWrappers.cpp
)
target_link_libraries(NewMain PRIVATE BFciLib GL ${GTKMM_LIBRARIES})
This still returns a "duplicate definition" error.
What am I supposed to do?
Don't try to override global functions. Create an interface abstraction that can be implemented with either your device library or your PC alternative. Then have separate device and PC builds where you link in only one implementation each.
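A minimal sketch of that abstraction (all names here are invented, since the real BFciLib API isn't shown):

// IDevice.hpp -- the seam MyClass programs against instead of calling BFciLib directly
struct IDevice {
    virtual ~IDevice() = default;
    virtual int readSensor() = 0;     // whatever operations MyClass actually needs
};

// DeviceFake.hpp -- PC build: a stub so you can develop without the hardware
struct DeviceFake : IDevice {
    int readSensor() override { return 42; }   // canned value for desktop testing
};

// DeviceBFci.cpp -- device build only: a thin wrapper that forwards to BFciLib

MyClass then takes an IDevice& instead of calling the lib directly, and each build's target lists exactly one implementation file, so there is never a duplicate definition in the first place.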

GTest not finding tests in separate compilation units

I've got a program written in C++, with some subfolders containing libraries linked in. There's a top level SConscript, which calls SConscript files in the subfolders/libraries.
Inside a library cpp, there is a GTest test function:
TEST(X, just_a_passing_test) {
EXPECT_EQ(true, true);
}
There is a main() in the top-level program source, which just calls GTest's main, and there is another GTest test within that source:
int main(int argc, char** argv) {
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
TEST(Dummy, should_pass){
EXPECT_EQ(true, true);
}
Now, the issue is that when I run the program, GTest only runs the test in the main.cpp source, ignoring the test in the library. Here is where it gets bizarre: when I reference an unrelated class from that same library cpp in main.cpp, in a way with no side effects (e.g. SomeClass foo;), the test magically appears. I've tried using -O0 and other tricks to force gcc not to optimize out code that isn't called. I've even tried Clang.
I suspect it's something to do with how GTest does test discovery during compilation, but I can't find any info on this issue. I believe it uses static initialization, so maybe there's some weird ordering going on there.
Any help/info is greatly appreciated!
Update: I found a section in the FAQ that sounds like this problem, despite it referring specifically to Visual C++; it includes a trick/hack to force the compiler not to discard the library if it is not referenced.
It recommends not putting tests in libraries, but that leaves me wondering how else you would test libraries without having an executable for every one, which makes quickly running them a pain and bloats the output.
https://code.google.com/p/googletest/wiki/Primer#Important_note_for_Visual_C++_users
From the scene-setting one gathers that the library whose gtest test case
goes missing is statically linked in the application build. Also that the
GNU toolchain is in use.
The cause of the problem behaviour is straightforward. The test
program contains no references to anything in the library that contains
TEST(X, just_a_passing_test). So the linker doesn't need to link any
object file from that library to link the program. So it doesn't. So the
gtest runtime doesn't find that test in the executable, because it's not there.
It helps to understand that a static library in GNU format is an archive
of object files, adorned with a house-keeping header block and a global symbol table.
The OP discovered that by coding in the program an ad hoc reference to
any public symbol in the problem library, he could "magically" compel its
test case into the program.
No magic. To satisfy the reference to that public symbol, the linker is
now obliged to link an object file from the library - the one that contains
the definition of the symbol. And the OP imparts that the library is made
from a .cpp. So there is only one object file in the library, and it
contains the definition of the test case, too. With that object file in the
linkage, the test case is in the program.
The OP twiddled in vain with the compiler options, switching from GCC to clang,
in search of a more respectable way to achieve the same end. The compiler is
irrelevant. GCC or clang, it gets its linking done by the system linker, ld
(unless unusual measures have been taken to replace it).
Is there a more respectable way to get ld to link an object file from a
static library even when the program refers to no symbols in that object file?
There is. Say the problem program is app and the problem static library is
libcool.a
Then the usual GCC commandline that links app resembles this, in the relevant
points:
g++ -o app -L/path/to/the/libcool/archive -lcool
This delegates a commandline to ld, with additional linker options and
libraries that g++ deems to be defaults for the system where it finds itself.
When the linker comes to consider -lcool, it will figure out this is a request
for the archive /path/to/the/libcool/archive/libcool.a. Then it will figure
out whether at this point it has still got any unresolved symbol references in hand
whose definitions are compiled in object files in libcool.a. If there are
any, then it will link those object files into app. If not, then it links
nothing from libcool.a and passes on.
But we know there are symbol definitions in libcool.a that we want to
link, even though app does not refer to them. In that case, we can tell
the linker to link the object files from libcool.a even though they are
not referenced. More precisely, we can tell g++ to tell the linker to do that,
like so:
g++ -o app -L/path/to/the/libcool/archive -Wl,--whole-archive -lcool -Wl,--no-whole-archive
Those -Wl,... options tell g++ to pass the options ... to ld. The --whole-archive
option tells ld to link all object files from subsequent archives, whether they
are referenced or not, until further notice. The --no-whole-archive option tells
ld to stop doing that and resume business as usual.
It may look as if -Wl,--no-whole-archive is redundant, as it's the last thing on the
g++ commandline. But it's not. Remember that g++ appends system default libraries
to the commandline, behind the scenes, before passing it to ld. You definitely
do not want --whole-archive to be in force when those default libraries are linked.
(The linkage will fail with multiple definition errors).
Apply this solution to the problem case and TEST(X, just_a_passing_test)
will be executed, without the hack of forcing the program to make some no-op
reference into the object file that defines that test.
There's an obvious downside to this solution in the general case. If the library from
which we want to force linkage of some unreferenced object file happens to contain a
bunch of other unreferenced object files that we really don't need,
--whole-archive links all of them too, and they're just bloat in the program.
The --whole-archive solution may be more respectable than the no-op reference
hack, but it's not respectable. It doesn't even look respectable.
The real solution here is just to do the reasonable thing. If you want the
linker to link the definition of something in your program, then don't keep that a secret from
the linker. At least declare the thing in each compilation unit where you
expect its definition to be used.
Doing the reasonable thing with gtest test-cases involves understanding that
a gtest macro like TEST(X, just_a_passing_test) expands to a class definition,
in this case:
class X_just_a_passing_test_Test : public ::testing::Test {
public:
X_just_a_passing_test_Test() {}
private:
virtual void TestBody();
static ::testing::TestInfo* const test_info_ __attribute__ ((unused));
X_just_a_passing_test_Test(X_just_a_passing_test_Test const &);
void operator=(X_just_a_passing_test_Test const &);
};
(plus a static initializer for test_info_ and a definition for TestBody()).
Likewise for the TEST_F, TEST_P variants. Consequently, you can deploy these
macros in your code with just the same constraints and expectations that would
apply to class definitions.
In this light, if you have a library libcool defined in cool.h, implemented in cool.cpp
and want gtest unit tests for it, to be executed by a test program tests
that is implemented in tests.cpp, the reasonable thing is:-
Write a header file, cool_test.h
#include "cool.h" in it
#include <gtest/gtest.h> in it.
Then define your libcool test cases in it (a minimal sketch follows below)
#include "cool_test.h" in tests.cpp,
Compile and link tests.cpp with libcool and libgtest
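Following those steps, cool_test.h might look something like this (the test body is a placeholder; cool.h is assumed to declare the libcool API):

// cool_test.h -- unit tests for libcool, kept out of the library itself
#ifndef COOL_TEST_H
#define COOL_TEST_H

#include "cool.h"
#include <gtest/gtest.h>

TEST(Cool, just_a_passing_test) {
    EXPECT_EQ(true, true);        // replace with real checks against the cool.h API
}

#endif // COOL_TEST_H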
And it's obvious why you wouldn't do what the OP has done. You would not define
classes that are needed by tests.cpp, and not needed by cool.cpp, within cool.cpp
and not in tests.cpp.
The OP was averse to the advice against defining the test cases in the library
because:
how else would you test libraries, without having an executable for every one,
making quickly running them a pain.
As a rule of thumb I would recommend the practice of maintaining a gtest executable
per library to be unit-tested: running them quickly is painless with commonplace automation tools
such as make, and it's far better to get a pass/fail verdict per library than
just a verdict for a bunch of libraries. But if you don't want to do that there's still nothing to the
objection:
// tests.cpp
#include "cool_test.h"
#include "cooler_test.h"
#include "coolest_test.h"
int main(int argc, char** argv) {
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
Compile and link with libcool, libcooler, libcoolest and libgtest
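For completeness, a CMake sketch of that final step (library target names are assumed to exist in the project):

# Builds the combined test runner and links it against the libraries under
# test plus gtest; tests.cpp provides main() as shown above.
add_executable(tests tests.cpp)
target_link_libraries(tests PRIVATE cool cooler coolest gtest pthread)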

Getting CMake to give an error/warning about unreferenced symbols

I'm wondering how I would go about making CMake produce an error, or at least a warning, when the linker cannot find symbols that are referenced in a source file?
For example, let's say I have foo.c:
#include "bar.h" //bar.h provides bar()
void foo(void)
{
bar();
return;
}
In the case that I am building a static library, if I am not smart about how I have used my add_library() directive, the default behavior seems to be to not even give a warning that bar is an unresolved symbol in foo's object archive file (.a).
The CMAKE_SHARED_LINKER_FLAGS variable, which holds the linker flags used for building shared libraries, should get the linker to do what you want.
set(CMAKE_SHARED_LINKER_FLAGS "-Wl,--no-undefined")
On Unix systems, this will make the linker report any unresolved symbols from object files (which is quite typical when you compile many targets in CMake projects, but do not bother with linking target dependencies in proper order).
Source: http://www.cmake.org/Wiki/CMake_Useful_Variables
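If you would rather not set this globally, a per-target sketch (assuming CMake 3.13 or later and a GNU-compatible linker) would be:

# Applies the flag only to this one shared library target.
add_library(foo SHARED foo.c)
target_link_options(foo PRIVATE "-Wl,--no-undefined")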
There's the -z now for the GCC linker these days, but yeah, this isn't CMake's problem.
The most fool-proof way I've found only works on shared libraries: basically, you write a test for each shared library that just does dlopen(path, RTLD_NOW) (and similar for Windows) and then uses its return value as the test return value. To get a list of all shared objects, I have a wrapper function around add_library which adds all shared libraries to a global property, which is then used to generate the tests dynamically. I remember there being some way to tell whether a target was shared or static, but I'm not finding it in the docs right now.
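A rough sketch of the test helper described there (names invented; it is invoked by CTest with the library path as its only argument):

// check_symbols.cpp -- exits non-zero if the shared library has unresolved symbols
#include <dlfcn.h>
#include <cstdio>

int main(int argc, char** argv) {
    if (argc != 2) {
        std::fprintf(stderr, "usage: %s <shared-library>\n", argv[0]);
        return 2;
    }
    // RTLD_NOW forces every symbol to be resolved immediately, so a missing
    // definition makes dlopen() fail and the test fail with it.
    void* handle = dlopen(argv[1], RTLD_NOW);
    if (!handle) {
        std::fprintf(stderr, "%s\n", dlerror());
        return 1;
    }
    dlclose(handle);
    return 0;
}

Each generated test then just runs something like add_test(NAME foo_resolves COMMAND check_symbols $<TARGET_FILE:foo>).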

How can I force GCC to compile functions that are not used?

I am splitting off some of the code in my project into a separate library to be reused in another application. This new library has various functions defined but not implemented, and both my current project and the other application will implement their own versions of these functions.
I implemented these functions in my original project, but they are not called anywhere inside it. They are only called by this new library. As a result, the compiler optimizes them away, and I get linking failures. When I add a dummy call to these functions, the linking failures disappear.
Is there any way to tell GCC to compile these functions even if they're not being called?
I am compiling with gcc 4.2.2 using -O2 on SuSE linux (x86-64_linux_2.6.5_ImageSLES9SP3-3).
You could try __attribute__ ((used)) - see Declaring Attributes of Functions in the gcc manual.
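A minimal sketch of what that looks like (function name invented):

// 'used' tells GCC to emit code for the function even though nothing in this
// translation unit references it; it is called only from the other library.
__attribute__((used)) void libraryCallback(void) {
    // ... your implementation ...
}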
Being a pragmatist, I would just put:
// Hopefully not a name collision :-)
void *xyzzy_plugh_zorkmid_3141592653589_2718281828459[] = {
&functionToForceIn,
&anotherFunction
};
at the file level of one of your source files (or even a brand new source file, something along the lines of forcedCompiledFunctions.c, so that it's obvious what it's for).
Because this is non-static, the compiler won't be able to take a chance that you won't need it elsewhere, so should compile it in.
Your question lacks a few details but I'll give it a shot...
GCC generally removes functions in very few cases:
If they are declared static
In some cases (like when using -fno-implement-inlines) if they are declared inline
Any others I missed
I suggest using 'nm' to see what symbols are actually exported in the resulting .o files to verify this is actually the issue, and then see about any stray 'static' keywords. Not necessarily in this order...
EDIT:
BTW, with the -Wall or -Wunused-function options GCC will warn about unused functions, which will then be prime targets for removal when optimising. Watch out for
warning: ‘xxx’ defined but not used
in your compile logs.
Be careful, as -Wunused-function doesn't warn about unused functions in general, as stated above; it warns about unused STATIC functions.
Here's what the man page for gcc says:
-Wunused-function
Warn whenever a static function is declared but not defined or a non-inline static function is unused. This warning is
enabled by -Wall.
This would have been more appropriate as a comment but I can't comment on answers yet.