I've got a program written in C++, with some subfolders containing libraries linked in. There's a top level SConscript, which calls SConscript files in the subfolders/libraries.
Inside a library cpp, there is a GTest test function:
TEST(X, just_a_passing_test) {
    EXPECT_EQ(true, true);
}
There is main() in the top level program source, which just calls GTests main, and has another GTest test within it:
int main(int argc, char** argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}
TEST(Dummy, should_pass) {
    EXPECT_EQ(true, true);
}
Now the issue is that when I run the program, GTest only runs the test in the main.cpp source, ignoring the test in the library. It gets bizarre: when I reference an unrelated class from the same library cpp in main.cpp, in a no-side-effect kind of way (e.g. SomeClass foo;), the test magically appears. I've tried using -O0 and other tricks to force gcc not to optimize out code that isn't called. I've even tried Clang.
I suspect it's something to do with how GTest does test discovery during compilation, but I can't find any info on this issue. I believe it uses static initialization, so maybe there's some weird ordering going on there.
Any help/info is greatly appreciated!
Update: Found a section in the FAQ that sounds like this problem, despite it referring specifically to Visual C++. It includes a trick/hack to force the compiler not to discard the library when it isn't referenced.
It recommends not putting tests in libraries, but that leaves me wondering how else you would test libraries without having an executable for every one, which makes quickly running them a pain and bloats the output.
https://code.google.com/p/googletest/wiki/Primer#Important_note_for_Visual_C++_users
From the scene-setting one gathers that the library whose gtest test case
goes missing is statically linked in the application build. Also that the
GNU toolchain is in use.
The cause of the problem behaviour is straightforward. The test
program contains no references to anything in the library that contains
TEST(X, just_a_passing_test). So the linker doesn't need to link any
object file from that library to link the program. So it doesn't. So the
gtest runtime doesn't find that test in the executable, because it's not there.
It helps to understand that a static library in GNU format is an archive
of object files, adorned with a house-keeping header block and a global symbol table.
The OP discovered that by coding in the program an ad hoc reference to
any public symbol in the problem library, he could "magically" compel its
test case into the program.
No magic. To satisfy the reference to that public symbol, the linker is
now obliged to link an object file from the library - the one that contains
the definition of the symbol. And the OP imparts that the library is made
from a .cpp. So there is only one object file in the library, and it
contains the definition of the test case, too. With that object file in the
linkage, the test case is in the program.
The OP twiddled in vain with the compiler options, switching from GCC to clang,
in search of a more respectable way to achieve the same end. The compiler is
irrelevant. GCC or clang, it gets its linking done by the system linker, ld
(unless unusual measures have been taken to replace it).
Is there a more respectable way to get ld to link an object file from a
static library even when the program refers to no symbols in that object file?
There is. Say the problem program is app and the problem static library is libcool.a.
Then the usual GCC commandline that links app resembles this, in the relevant
points:
g++ -o app -L/path/to/the/libcool/archive -lcool
This delegates a commandline to ld, with additional linker options and
libraries that g++ deems to be defaults for the system where it finds itself.
When the linker comes to consider -lcool, it will figure out this is a request
for the archive /path/to/the/libcool/archive/libcool.a. Then it will figure
out whether at this point it has still got any unresolved symbol references in hand
whose definitions are compiled in object files in libcool.a. If there are
any, then it will link those object files into app. If not, then it links
nothing from libcool.a and passes on.
But we know there are symbol definitions in libcool.a that we want to
link, even though app does not refer to them. In that case, we can tell
the linker to link the object files from libcool.a even though they are
not referenced. More precisely, we can tell g++ to tell the linker to do that,
like so:
g++ -o app -L/path/to/the/libcool/archive -Wl,--whole-archive -lcool -Wl,--no-whole-archive
Those -Wl,... options tell g++ to pass the options ... through to ld. The --whole-archive
option tells ld to link all object files from subsequent archives, whether they
are referenced or not, until further notice. The --no-whole-archive tells
ld to stop doing that and resume business as usual.
It may look as if -Wl,--no-whole-archive is redundant, as it's the last thing on the
g++ commandline. But it's not. Remember that g++ appends system default libraries
to the commandline, behind the scenes, before passing it to ld. You definitely
do not want --whole-archive to be in force when those default libraries are linked.
(The linkage will fail with multiple definition errors.)
Apply this solution to the problem case and TEST(X, just_a_passing_test)
will be executed, without the hack of forcing the program to make some no-op
reference into the object file that defines that test.
There's an obvious downside to this solution in the general case. If the library from
which we want to force linkage of some unreferenced object file happens to contain a
bunch of other unreferenced object files that we really don't need,
--whole-archive links all of them too, and they're just bloat in the program.
The --whole-archive solution may be more respectable than the no-op reference
hack, but it's not respectable. It doesn't even look respectable.
The real solution here is just to do the reasonable thing. If you want the
linker to link the definition of something in your program, then don't keep that a secret from
the linker. At least declare the thing in each compilation unit where you
expect its definition to be used.
Doing the reasonable thing with gtest test-cases involves understanding that
a gtest macro like TEST(X, just_a_passing_test) expands to a class definition,
in this case:
class X_just_a_passing_test_Test : public ::testing::Test {
public:
    X_just_a_passing_test_Test() {}
private:
    virtual void TestBody();
    static ::testing::TestInfo* const test_info_ __attribute__ ((unused));
    X_just_a_passing_test_Test(X_just_a_passing_test_Test const &);
    void operator=(X_just_a_passing_test_Test const &);
};
(plus a static initializer for test_info_ and a definition for TestBody()).
Likewise for the TEST_F, TEST_P variants. Consequently, you can deploy these
macros in your code with just the same constraints and expectations that would
apply to class definitions.
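Why a missing object file means a missing test is worth spelling out. Here is a self-contained model of the registration mechanism (my own simplification for illustration, not googletest's actual internals):

#include <functional>
#include <string>
#include <utility>
#include <vector>

// Global registry mapping test names to test bodies.
std::vector<std::pair<std::string, std::function<void()>>>& registry() {
    static std::vector<std::pair<std::string, std::function<void()>>> tests;
    return tests;
}

// The TEST macro plants a static object whose constructor registers the test.
struct Registrar {
    Registrar(std::string name, std::function<void()> body) {
        registry().emplace_back(std::move(name), std::move(body));
    }
};

// The constructor runs during static initialization at program startup. If the
// linker never links the object file containing this definition, the constructor
// never runs and the test runner never hears of the test.
static Registrar x_just_a_passing_test{"X.just_a_passing_test", [] { /* body */ }};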
In this light, if you have a library libcool, declared in cool.h and implemented in cool.cpp,
and you want gtest unit tests for it, to be executed by a test program tests
that is implemented in tests.cpp, the reasonable thing is:
Write a header file, cool_test.h
#include "cool.h" in it
#include <gtest/gtest.h> in it
Then define your libcool test cases in it (a sketch follows below)
#include "cool_test.h" in tests.cpp
Compile and link tests.cpp with libcool and libgtest
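For concreteness, a minimal cool_test.h might look like this (cool.h and the function cool() are illustrative stand-ins, not taken from the question):

// cool_test.h - unit tests for libcool; compiled only into the test program.
#ifndef COOL_TEST_H
#define COOL_TEST_H

#include "cool.h"         // assumed to declare the API under test, e.g. int cool();
#include <gtest/gtest.h>

TEST(Cool, returns_expected_value) {
    EXPECT_EQ(42, cool()); // illustrative assertion against the assumed API
}

#endif // COOL_TEST_H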
And it's obvious why you wouldn't do what the OP has done. You would not define
classes that are needed by tests.cpp, and not needed by cool.cpp, within cool.cpp
and not in tests.cpp.
The OP was averse to the advice against defining the test cases in the library
because:
how else would you test libraries, without having an executable for every one,
making quickly running them a pain.
As a rule of thumb I would recommend the practice of maintaining a gtest executable
per library to be unit-tested: running them quickly is painless with commonplace automation tools
such as make, and it's far better to get a pass/fail verdict per library than
just a verdict for a bunch of libraries. But if you don't want to do that, there's still nothing to the
objection:
// tests.cpp
#include "cool_test.h"
#include "cooler_test.h"
#include "coolest_test.h"

int main(int argc, char** argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}
Compile and link with libcool, libcooler, libcoolest and libgtest
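e.g. with a commandline along the lines of (library paths illustrative; on Linux gtest usually also wants pthreads):
g++ -o tests tests.cpp -L. -lcool -lcooler -lcoolest -lgtest -lpthread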
When using external libraries, you often have to decide whether to use the static or the dynamic version of the library. Typically, you cannot exchange them: if the library is built as a dynamic library, you cannot link statically against it.
Why is this the case?
Example: I am building a C++ program on windows and use a library that provides a small .lib file for the linker and a large .dll file that must be present when running my executable. If the library code in the .dll can be resolved at runtime, why can't it be resolved at compile time and directly put into my executable?
Why is this the case?
Most linkers (the AIX linker is a notable exception) discard information in the process of linking.
For example, suppose you have foo.o with foo in it, and bar.o with bar in it. Suppose foo calls bar.
After you link foo.o and bar.o together into a shared library, the linker merges code and data sections, and resolves references. The call from foo to bar becomes CALL $relative_offset. After this operation, you can no longer tell where the boundary between code that came from foo.o and code that came from bar.o was, nor the name that CALL $relative_offset used in foo.o -- the relocation entry has been discarded.
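To make the moving parts concrete, here is a sketch of the two translation units (contents assumed for illustration):

// bar.cpp -> bar.o
void bar() { /* the library's own bar */ }

// foo.cpp -> foo.o
void bar();            // declared here, defined in bar.o (or elsewhere)
void foo() { bar(); }  // the cross-object call in question

// As an archive, foo.o and bar.o stay separate, relocations intact:
//   g++ -c foo.cpp bar.cpp && ar rcs libfoobar.a foo.o bar.o
// As a shared library, the objects are merged and the call is resolved:
//   g++ -fPIC -shared -o foobar.so foo.cpp bar.cpp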
Suppose now you want to link foobar.so with your main.o statically, and suppose main.o already defines its own bar.
If you had libfoobar.a, that would be trivial: the linker would pull foo.o from the archive, would not use bar.o from the archive, and resolve the call from foo.o to bar from main.o.
But it should be clear that none of the above is possible with foobar.so -- the call has already been resolved to the other bar, and you can't discard the code that came from bar.o because you don't know where that code is.
On AIX it's possible (or at least it used to be possible 10 years ago) to "unlink" a shared library and turn it back into an archive, which could then be linked statically into a different shared library or a main executable.
If foo.o and bar.o are linked into a foobar.so, wouldn't it make sense that the call from foo to bar is always resolved to the one in bar.o?
This is one place where UNIX shared libraries work very differently from Windows DLLs. On UNIX (under common conditions), the call from foo to bar will resolve to the bar in main executable.
This allows one to e.g. implement malloc and free in the main a.out, and have all calls to malloc use that one heap implementation consistently. On Windows you would have to always keep track of "which heap implementation did this memory come from".
The UNIX model is not without disadvantages though, as the shared library is not a self-contained mostly hermetic unit (unlike a Windows DLL).
Why would you want to resolve it to another bar from main.o?
If you don't resolve the call to main.o, you end up with a totally different program, compared to linking against libfoobar.a.
I've run into a somewhat unexpected behavior in Xcode/Objective-C. I know it's probably not advised, but if I want to make my own struct in_addr in a .m file, it seems I can't. This implies something rather strange about namespaces and symbol pollution in Objective-C. The same seems to apply for many other networking types and perhaps other POSIX-y things as well.
I came up with a very basic example that demonstrates this behavior. Note that this snippet is the entire contents of the .m file.
#define _SYS_SOCKET_H_
#define _NETINET_IN_H_

#include <stdint.h>

struct in_addr {
    uint32_t foo;
};
which yields the build error Redefinition of 'in_addr'.
This implies some fairly strange things about Objective-C. For starters, I wouldn't expect <stdint.h> to bring in any networking types. But even allowing that it might, defining _NETINET_IN_H_ first should prevent the definition of struct in_addr. And yet even still, this code refuses to build.
Is it possible to somehow forgo this forced symbol visibility? Is there a list of symbols that are included, no matter what? Is there a good reason for this behavior?
Edit: Stranger still, if I remove <stdint.h> and change the uint32_t to int, this actually does compile.
If you go into the Report navigator and read the full error emitted by the clang tool, you'll see a big hint:
In module 'Darwin' imported from /Users/csrstka/Desktop/asdfasdf/asdfasdf/main.m:1:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.13.sdk/usr/include/netinet/in.h:302:12: note: field has name 's_addr' here
in_addr_t s_addr;
As you can see, the existing in_addr is coming from the Darwin module, which is implicitly imported due to your #include of stdint.h, which is part of the Darwin module. You can see this if you go to Product > Perform Action > Preprocess in Xcode—instead of copying in all the headers you've imported, there's just one line about importing Darwin.C.stdint.
Basically, there are a few purposes for modules; they improve compile times by cutting down on redundant compilation tasks, and they prevent people from messing with library headers via #defines like you're trying to do. ;-) For more on Objective-C modules, how they work, and the rationale behind them, see this link:
https://clang.llvm.org/docs/Modules.html#introduction
Of particular interest to your question are the following excerpts:
The primary user-level feature of modules is the import operation, which provides access to the API of software libraries. However, today’s programs make extensive use of #include, and it is unrealistic to assume that all of this code will change overnight. Instead, modules automatically translate #include directives into the corresponding module import. For example, the include directive
#include <stdio.h>
will be automatically mapped to an import of the module std.io. Even with specific import syntax in the language, this particular feature is important for both adoption and backward compatibility: automatic translation of #include to import allows an application to get the benefits of modules (for all modules-enabled libraries) without any changes to the application itself. Thus, users can easily use modules with one compiler while falling back to the preprocessor-inclusion mechanism with other compilers.
And later on:
If any submodule of a module is imported into any part of a program, the entire top-level module is considered to be part of the program. As a consequence of this, Clang may diagnose conflicts between an entity declared in an unimported submodule and an entity declared in the current translation unit, and Clang may inline or devirtualize based on knowledge from unimported submodules.
Or, if you'd prefer to turn them off and get more traditional C-like behavior, you can simply set Enable Modules (C and Objective-C) to No in Xcode's Build Settings, or compile without the -fmodules flag if you're using the command line.
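On the command line that would be something like (file name illustrative):
clang -fno-modules -c main.m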
I'm wondering how I would go about making CMake produce an error, or at least a warning, when the linker cannot find symbols that are referenced in a source file?
For example, let's say I have foo.c:
#include "bar.h" //bar.h provides bar()
void foo(void)
{
bar()
return;
}
In the case that I am building a static library, if I am not smart about how I have used my add_library() directive, the default behavior seems to be to not even give a warning that bar is an unresolved symbol in foo's object archive file (.a).
The CMAKE_SHARED_LINKER_FLAGS variable holds the linker flags used when building shared libraries, and should get the linker to do what you want:
set(CMAKE_SHARED_LINKER_FLAGS "-Wl,--no-undefined")
On Unix systems, this will make the linker report any unresolved symbols from object files (which is quite typical when you compile many targets in CMake projects, but do not bother with linking target dependencies in proper order).
Source: http://www.cmake.org/Wiki/CMake_Useful_Variables
There's also -z now for the GNU linker these days (it makes the dynamic loader resolve all symbols at load time rather than lazily), but yeah, this isn't CMake's problem.
The most fool-proof way I've found only works on shared libraries, but what you do is basically write a test for each shared library that just does dlopen(path, RTLD_NOW) (and similar for Windows), and then use its return value as the test return value. To get a list of all shared objects, I have a wrapper function around add_library which adds all shared libraries to a global property, which is then used to generate the tests dynamically. I remember there being some way to tell if a target was shared or static, but I'm not finding it in the docs right now.
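A sketch of such a test program under those assumptions (POSIX only; link with -ldl on Linux):

// dlopen_test.cpp - exits non-zero if the shared library fails to load.
#include <dlfcn.h>
#include <cstdio>

int main(int argc, char** argv) {
    if (argc != 2) {
        std::fprintf(stderr, "usage: %s <path-to-shared-library>\n", argv[0]);
        return 2;
    }
    // RTLD_NOW forces every undefined symbol to be resolved immediately,
    // so a library with missing symbols fails here rather than mid-run.
    void* handle = dlopen(argv[1], RTLD_NOW);
    if (handle == nullptr) {
        std::fprintf(stderr, "%s\n", dlerror());
        return 1;  // non-zero exit marks the CTest test as failed
    }
    dlclose(handle);
    return 0;
}

Each library's test can then be registered with something like add_test(NAME load_mylib COMMAND dlopen_test $<TARGET_FILE:mylib>) (target name assumed).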
I'm trying to find the best way to package a static library (let's call it Lib1) that includes an optional class (say, ClassA), which itself requires a second static library (Lib2). In other words, Lib2 is only needed if ClassA is referenced in the project's code. Things seem to work fine unless Lib1 is used in a project that doesn't use ClassA (and hence does not include Lib2), but requires the -ObjC linker flag (because of other project dependencies, not mine).
I'm trying to come up with an easy solution for the following three scenarios:
1) project includes my static lib, does NOT use the optional class, does not specify the -ObjC flag
2) project includes my static lib, does NOT use the optional class, but requires the -ObjC flag
3) project includes my static lib + the second static library, and DOES use the optional class (we don't care about the -ObjC flag at this point)
Is there a linker flag out there to strip my optional class out of the final project app so that it doesn't require the second static lib? I guess my other alternatives are to release multiple versions of my static lib: one that includes the optional class (the standard choice), and one that does not (the alternate, for projects with -ObjC requirements). Or maybe I could supply a stub file that provides empty implementations of all the classes needed from the second static library? This seems like it could be a common problem in the static library world... is there a best practice for this scenario?
Thanks!
Solution:
1) Suggest to my -ObjC users that they use -force_load instead. (thanks Rob!)
2) For users that can't do 1, I'll have an alternate build that does not include ClassA
The best practice is always to have the final binary link all the static libs required. You should never bundle one static library into another. You should absolutely never bundle a well-known (i.e. open-source) static library into a static library you ship. This can create incredible headaches for the final consumer because they can wind up with multiple versions of the same code. Tracking down the bugs that can come from this is insanely difficult. If they're lucky, they'll just get confusing compiler errors. If they're unlucky, their code will behave in unpredictable ways and randomly crash.
Ship all the static libraries separately. Tell your clients which ones they need to link for various configurations. Trying to avoid this just makes their lives difficult.
Some other discussions that may be useful:
Duplicate Symbol Error: SBJsonParser.o? (Example of a customer who ran into a vendor doing this to him)
Linking static libraries, that share another static library
Why don't iOS framework dependencies need to be explicitly linked to a static library project or framework project when they do for an app project?
The -ObjC flag should be preventing the automatic stripping of ClassA entirely, whether it's used or not (see QA1490 for more details).
If ClassA is never used except in certain circumstances and you want to save space, you should probably move ClassA into its own static library. Or use #ifdef to conditionally compile it.
Alternately, you can remove the -ObjC flag and use -force_load to individually load any category-only compile units (which is the problem -ObjC is used to address).
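For reference, the difference between the two flags looks like this (the archive path is a placeholder, not from the question). Instead of the blanket flag:
-ObjC
the client links just the archive that actually needs whole-archive treatment:
-force_load $(PROJECT_DIR)/Libraries/libLib2.a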
I am splitting off some of the code in my project into a separate library to be reused in another application. This new library has various functions defined but not implemented, and both my current project and the other application will implement their own versions of these functions.
I implemented these functions in my original project, but they are not called anywhere inside it. They are only called by this new library. As a result, the compiler optimizes them away, and I get linking failures. When I add a dummy call to these functions, the linking failures disappear.
Is there any way to tell GCC to compile these functions even if they're not being called?
I am compiling with gcc 4.2.2 using -O2 on SuSE linux (x86-64_linux_2.6.5_ImageSLES9SP3-3).
You could try __attribute__ ((used)) - see Declaring Attributes of Functions in the gcc manual.
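A minimal sketch of what that looks like at the definition (function name illustrative):

// Tells GCC to emit and keep this definition even though nothing in this
// translation unit references it.
__attribute__((used)) void functionOnlyCalledByTheLibrary() {
    // implementation required by the consuming library
}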
Being a pragmatist, I would just put:
// Hopefully not a name collision :-)
void *xyzzy_plugh_zorkmid_3141592653589_2718281828459[] = {
    (void *)&functionToForceIn,   // casts avoid incompatible-pointer-type warnings
    (void *)&anotherFunction
};
at the file level of one of your source files (or even a brand new source file, something along the lines of forcedCompiledFunctions.c, so that it's obvious what it's for).
Because this is non-static, the compiler won't be able to take a chance that you won't need it elsewhere, so should compile it in.
Your question lacks a few details but I'll give it a shot...
GCC generally removes functions in very few cases:
If they are declared static
In some cases (like when using -fno-implement-inlines) if they are declared inline
Any others I missed
I suggest using 'nm' to see what symbols are actually exported in the resulting .o files to verify this is actually the issue, and then see about any stray 'static' keywords. Not necessarily in this order...
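For example (object file name illustrative):
nm mylib.o | grep ' T '   # 'T' marks symbols defined in the text (code) section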
EDIT:
BTW, with the -Wall or -Wunused-function options GCC will warn about unused functions, which will then be prime targets for removal when optimising. Watch out for
warning: ‘xxx’ defined but not used
in your compile logs.
Be careful, as -Wunused-function doesn't warn of all unused functions as stated above. It warns of unused STATIC functions.
Here's what the man page for gcc says:
-Wunused-function
Warn whenever a static function is declared but not defined or a non-inline static function is unused. This warning is
enabled by -Wall.
This would have been more appropriate as a comment but I can't comment on answers yet.