Hosting CLR in native - COM interfaces, unresolved external symbol _CStdStubBuffer_Release@4 - com

I'm experimenting with hosting CLR (not trying to use mono for now, though I will probably try). Basically, I'm following this:
http://www.lenholgate.com/blog/2010/07/clr-hosting---a-flexible-managed-plugin-system-part-1.html
However, I've run into some trouble with COM itself, as it's not explained very thoroughly in that article, and I'm learning as I go (I haven't worked with COM before).
I've defined an interface:
import "unknwn.idl";
[
object,
uuid(55d96f88-9633-4ad7-b9de-1a546ea73307),
helpstring("INative interface"),
pointer_default(unique)
]
interface INative : IUnknown
{
HRESULT Write(BSTR s);
}
[
object,
uuid(74eeeaaa-d73c-436e-b52d-5c8a972ce60a),
helpstring("ITestManager Interface"),
pointer_default(unique)
]
interface IManagedHost : IUnknown
{
HRESULT Init(INative* native);
}
And I included the generated files in my native project. However, I can't build the executable, because the linker cannot resolve _CStdStubBuffer_Release@4. RpcRT4.lib is linked, but I've run dumpbin rpcrt4.lib /all | grep _CStdStubBuffer_Release and there is nothing like that exported by that library; there is CStdStubBuffer_DebugServerRelease. So the question is: what exactly references that method, if it supposedly should not exist?
I did some more investigation and found out that this method is referenced from the .c file generated by the MIDL compiler:
const CInterfaceStubVtbl _INativeStubVtbl =
{
&IID_INative,
&INative_ServerInfo,
4,
0, /* pure interpreted */
CStdStubBuffer_METHODS
};
and CStdStubBuffer_METHODS is defined in RpcProxy.h of Windows SDK v7.0A:
#define CStdStubBuffer_METHODS \
CStdStubBuffer_QueryInterface,\
CStdStubBuffer_AddRef, \
CStdStubBuffer_Release, \
CStdStubBuffer_Connect, \
CStdStubBuffer_Disconnect, \
CStdStubBuffer_Invoke, \
CStdStubBuffer_IsIIDSupported, \
CStdStubBuffer_CountRefs, \
CStdStubBuffer_DebugServerQueryInterface, \
CStdStubBuffer_DebugServerRelease
which indeed references that CStdStubBuffer_Release method, though it's a bit different from: http://msdn.microsoft.com/en-us/library/windows/desktop/ms764247%28v=VS.85%29.aspx.
What other library should I link then?

CStdStubBuffer_Release() is implemented by another file that's auto-generated by midl.exe, dlldata.c. The DLLDATA_ROUTINES macro generates it.
This problem points to a project configuration mistake; it sounds like you added the proxy .c file to your main project instead of using it in the proxy/stub project, which is also what consumes dlldata.c. I'm pretty sure there is no need for a proxy/stub in a custom CLR host, so simply remove the .c file.


Weak function definitions for interrupt vector in a static library are preferred over strong ones

Introduction
arm-none-eabi-gcc version 10.3-2021.10 20210824
device: nordic nRF52840/nRF52832/nRF52833
library affected: Nordic NRF5 SDK 17.1.0 - custom build system (CMake)
I wrote the CMake build system for the Nordic NRF5 SDK (natively it only supports makefiles). The build system has an executable (application) and multiple underlying static libraries. The dependencies go like this:
application
...
- NordicAl (abstraction layer)
- nrf5_sdk
...
//root/CMakeLists.txt
add_executable(application)
...
add_subdirectory(lib/NordicAl)
...
target_link_libraries(application PRIVATE
nordic_al
...)
....
//root/lib/NordicAl/CMakeLists.txt
add_library(nordic_al)
...
add_subdirectory(lib/nrf5_sdk)
target_link_libraries(nordic_al PRIVATE
nrf5_sdk
...)
...
//root/lib/NordicAl/lib/nrf5_sdk/CMakeLists.txt
add_library(nrf5_sdk)
...
target_sources(nrf5_sdk PRIVATE
...
${NRF5_SDK_ROOT}/modules/nrfx/mdk/gcc_startup_${PLATFORM_MCU_FAMILY}.S
${NRF5_SDK_ROOT}/components/libraries/hardfault/nrf52/handler/hardfault_handler_gcc.c
)
Problem
I have created a custom C hard-fault handler on top of the Nordic nRF5 SDK. It works with the previous (makefile) build system. It must be noted that the previous build system does not create static libraries the way the new CMake system does; it just links everything unconditionally.
In a perfect world, the user of the SDK (i.e., me) should define a callback (HardFault_c_handler), and it will be called via the interrupt vector in the case of a hard fault.
In the nRF5 SDK library, a startup file (modules/nrfx/mdk/gcc_startup_nrf52840.S) is included in the target nrf5_sdk (static library). The relevant code for this problem:
__isr_vector:
.long __StackTop /* Top of Stack */
.long Reset_Handler
.long NMI_Handler
.long HardFault_Handler
...
.weak HardFault_Handler
.type HardFault_Handler, %function
HardFault_Handler:
b .
.size HardFault_Handler, . - HardFault_Handler
Additionally, there is a strong definition of HardFault_Handler in a C file that should take precedence over this weak definition. The file (components/libraries/hardfault/nrf52/handler/hardfault_handler_gcc.c) contains:
extern void HardFault_c_handler(uint32_t *);
void HardFault_Handler(void) __attribute__(( naked ));
void HardFault_Handler(void)
{
__ASM volatile(
...
" .ltorg \n"
: : "X"(HardFault_c_handler)
);
}
The code from the C file should be called by the MCU in the case of a hard fault, but it is not.
My question is why? How do I make the linker prefer the strong function? My current thinking, although I am not sure: because this callback, i.e., HardFault_Handler, is not referenced in the main application (or anywhere before the startup file), the linker does not need to resolve it. Only when it sees this symbol in the startup file does it look for it, and because this is a static library it only takes the first occurrence.
Things I tried
Removing the static libraries: this fixes the problem.
Separating the weak definition of HardFault_Handler into a separate assembly file: this makes the linker link the function from the file that appears first. Using -Wl,-trace-symbol=HardFault_Handler I see that the linker only looks for the first occurrence and then stops (regardless of weak or strong).
Putting the C file before the startup file in the sources: this does not change the result.
Edit
My linker flags:
-mcpu=cortex-m4
-mfloat-abi=hard
-mfpu=fpv4-sp-d16
-mthumb
-mabi=aapcs
-ffreestanding
-fno-common
-finline-small-functions
-findirect-inlining
-fstack-protector-strong
-ffunction-sections
-fdata-sections
-Wl,--gc-sections
--specs=nano.specs
Since I am using CMake, I figured out that I can supply the OBJECT keyword to the add_library() function; in that case the strong definition overrides the weak one as expected. Take note that an object library linked to another object library does not work properly, and the underlying object library must also be linked by the top-most (non-object-library) target.

undefined reference to kaldi::MessageLogger::MessageLogger error while adding kaldi source libraries to CMakeLists.txt [duplicate]

What are undefined reference/unresolved external symbol errors? What are common causes and how to fix/prevent them?
Compiling a C++ program takes place in several steps, as specified by 2.2 (credits to Keith Thompson for the reference):
The precedence among the syntax rules of translation is specified by the following phases [see footnote].
1. Physical source file characters are mapped, in an implementation-defined manner, to the basic source character set (introducing new-line characters for end-of-line indicators) if necessary. [SNIP]
2. Each instance of a backslash character (\) immediately followed by a new-line character is deleted, splicing physical source lines to form logical source lines. [SNIP]
3. The source file is decomposed into preprocessing tokens (2.5) and sequences of white-space characters (including comments). [SNIP]
4. Preprocessing directives are executed, macro invocations are expanded, and _Pragma unary operator expressions are executed. [SNIP]
5. Each source character set member in a character literal or a string literal, as well as each escape sequence and universal-character-name in a character literal or a non-raw string literal, is converted to the corresponding member of the execution character set; [SNIP]
6. Adjacent string literal tokens are concatenated.
7. White-space characters separating tokens are no longer significant. Each preprocessing token is converted into a token (2.7). The resulting tokens are syntactically and semantically analyzed and translated as a translation unit. [SNIP]
8. Translated translation units and instantiation units are combined as follows: [SNIP]
9. All external entity references are resolved. Library components are linked to satisfy external references to entities not defined in the current translation. All such translator output is collected into a program image which contains information needed for execution in its execution environment. (emphasis mine)
[footnote] Implementations must behave as if these separate phases occur, although in practice different phases might be folded together.
The specified errors occur during this last stage of compilation, most commonly referred to as linking. It basically means that you compiled a bunch of implementation files into object files or libraries and now you want to get them to work together.
Say you defined symbol a in a.cpp. Now, b.cpp declared that symbol and used it. Before linking, it simply assumes that that symbol was defined somewhere, but it doesn't yet care where. The linking phase is responsible for finding the symbol and correctly linking it to b.cpp (well, actually to the object or library that uses it).
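For instance, a minimal sketch of that a.cpp / b.cpp pair (the helper read_a is just illustrative):
// a.cpp
int a = 42;                  // definition: the object file a.o carries the symbol 'a'

// b.cpp
extern int a;                // declaration only: "defined somewhere else"
int read_a() { return a; }   // b.o records an unresolved reference to 'a';
                             // the linker resolves it against a.o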
If you're using Microsoft Visual Studio, you'll see that projects generate .lib files. These contain a table of exported symbols, and a table of imported symbols. The imported symbols are resolved against the libraries you link against, and the exported symbols are provided for the libraries that use that .lib (if any).
Similar mechanisms exist for other compilers/platforms.
Common error messages are error LNK2001, error LNK1120, error LNK2019 for Microsoft Visual Studio and undefined reference to symbolName for GCC.
The code:
struct X
{
virtual void foo();
};
struct Y : X
{
void foo() {}
};
struct A
{
virtual ~A() = 0;
};
struct B: A
{
virtual ~B(){}
};
extern int x;
void foo();
int main()
{
x = 0;
foo();
Y y;
B b;
}
will generate the following errors with GCC:
/home/AbiSfw/ccvvuHoX.o: In function `main':
prog.cpp:(.text+0x10): undefined reference to `x'
prog.cpp:(.text+0x19): undefined reference to `foo()'
prog.cpp:(.text+0x2d): undefined reference to `A::~A()'
/home/AbiSfw/ccvvuHoX.o: In function `B::~B()':
prog.cpp:(.text._ZN1BD1Ev[B::~B()]+0xb): undefined reference to `A::~A()'
/home/AbiSfw/ccvvuHoX.o: In function `B::~B()':
prog.cpp:(.text._ZN1BD0Ev[B::~B()]+0x12): undefined reference to `A::~A()'
/home/AbiSfw/ccvvuHoX.o:(.rodata._ZTI1Y[typeinfo for Y]+0x8): undefined reference to `typeinfo for X'
/home/AbiSfw/ccvvuHoX.o:(.rodata._ZTI1B[typeinfo for B]+0x8): undefined reference to `typeinfo for A'
collect2: ld returned 1 exit status
and similar errors with Microsoft Visual Studio:
1>test2.obj : error LNK2001: unresolved external symbol "void __cdecl foo(void)" (?foo##YAXXZ)
1>test2.obj : error LNK2001: unresolved external symbol "int x" (?x##3HA)
1>test2.obj : error LNK2001: unresolved external symbol "public: virtual __thiscall A::~A(void)" (??1A##UAE#XZ)
1>test2.obj : error LNK2001: unresolved external symbol "public: virtual void __thiscall X::foo(void)" (?foo#X##UAEXXZ)
1>...\test2.exe : fatal error LNK1120: 4 unresolved externals
Common causes include:
Failure to link against appropriate libraries/object files or compile implementation files
Declared and undefined variable or function.
Common issues with class-type members
Template implementations not visible.
Symbols were defined in a C program and used in C++ code.
Incorrectly importing/exporting methods/classes across modules/dll. (MSVS specific)
Circular library dependency
undefined reference to `WinMain@16'
Interdependent library order
Multiple source files of the same name
Mistyping or not including the .lib extension when using the #pragma (Microsoft Visual Studio)
Problems with template friends
Inconsistent UNICODE definitions
Missing "extern" in const variable declarations/definitions (C++ only)
Visual Studio Code not configured for a multiple file project
Errors on Mac OS X when building a dylib, but a .so on other Unix-y systems is OK
Class members:
A pure virtual destructor needs an implementation.
Declaring a destructor pure still requires you to define it (unlike a regular pure virtual function):
struct X
{
virtual ~X() = 0;
};
struct Y : X
{
~Y() {}
};
int main()
{
Y y;
}
//X::~X(){} //uncomment this line for successful definition
This happens because base class destructors are called when the object is destroyed implicitly, so a definition is required.
virtual methods must either be implemented or defined as pure.
This is similar to non-virtual methods with no definition, with the added twist that the non-pure virtual declaration forces the compiler to emit a vtable entry for it, so you might get the linker error without ever using the function:
struct X
{
virtual void foo();
};
struct Y : X
{
void foo() {}
};
int main()
{
Y y; //linker error although there was no call to X::foo
}
For this to work, declare X::foo() as pure:
struct X
{
virtual void foo() = 0;
};
Non-virtual class members
Some members need to be defined even if not used explicitly:
struct A
{
~A();
};
The following would yield the error:
A a; //destructor undefined
The implementation can be inline, in the class definition itself:
struct A
{
~A() {}
};
or outside:
A::~A() {}
If the implementation is outside the class definition, but in a header, the methods have to be marked as inline to prevent a multiple definition.
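For instance, a minimal header-only sketch of that rule:
// A.h
struct A
{
    ~A();
};

inline A::~A() {}   // 'inline' lets this out-of-class definition appear in every
                    // translation unit that includes A.h without violating the ODR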
All member methods that are used need to be defined.
A common mistake is forgetting to qualify the name:
struct A
{
void foo();
};
void foo() {}
int main()
{
A a;
a.foo();
}
The definition should be
void A::foo() {}
static data members must be defined outside the class in a single translation unit:
struct X
{
static int x;
};
int main()
{
int x = X::x;
}
//int X::x; //uncomment this line to define X::x
An initializer can be provided for a static const data member of integral or enumeration type within the class definition; however, odr-use of this member will still require a namespace scope definition as described above. C++11 allows initialization inside the class for all static const data members.
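A minimal sketch of the static const case under the pre-C++17 rules (take() exists only to force odr-use):
struct X
{
    static const int x = 5;      // in-class initializer is allowed for integral constants
};

const int X::x;                  // still required (in exactly one .cpp) once x is odr-used

int take(const int& v) { return v; }

int main()
{
    return take(X::x);           // binds a reference to x: odr-use; without the
                                 // namespace-scope definition above, this fails to link
}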
Failure to link against appropriate libraries/object files or compile implementation files
Commonly, each translation unit will generate an object file that contains the definitions of the symbols defined in that translation unit.
To use those symbols, you have to link against those object files.
Under gcc you would specify all object files that are to be linked together in the command line, or compile the implementation files together.
g++ -o test objectFile1.o objectFile2.o -lLibraryName
-l... must be to the right of any .o/.c/.cpp files.
The libraryName here is just the bare name of the library, without platform-specific additions. So e.g. on Linux library files are usually called libfoo.so but you'd only write -lfoo. On Windows that same file might be called foo.lib, but you'd use the same argument. You might have to add the directory where those files can be found using -L‹directory›. Make sure to not write a space after -l or -L.
For Xcode: Add the User Header Search Paths -> add the Library Search Path -> drag and drop the actual library reference into the project folder.
Under MSVS, files added to a project automatically have their object files linked together and a lib file would be generated (in common usage). To use the symbols in a separate project, you'd
need to include the lib files in the project settings. This is done in the Linker section of the project properties, in Input -> Additional Dependencies. (the path to the lib file should be
added in Linker -> General -> Additional Library Directories) When using a third-party library that is provided with a lib file, failure to do so usually results in the error.
It can also happen that you forget to add the file to the compilation, in which case the object file won't be generated. In gcc you'd add the files to the command line. In MSVS adding the file to the project will make it compile it automatically (albeit files can, manually, be individually excluded from the build).
In Windows programming, the tell-tale sign that you did not link a necessary library is that the name of the unresolved symbol begins with __imp_. Look up the name of the function in the documentation, and it should say which library you need to use. For example, MSDN puts the information in a box at the bottom of each function in a section called "Library".
Declared but did not define a variable or function.
A typical variable declaration is
extern int x;
As this is only a declaration, a single definition is needed. A corresponding definition would be:
int x;
For example, the following would generate an error:
extern int x;
int main()
{
x = 0;
}
//int x; // uncomment this line for successful definition
Similar remarks apply to functions. Declaring a function without defining it leads to the error:
void foo(); // declaration only
int main()
{
foo();
}
//void foo() {} //uncomment this line for successful definition
Be careful that the function you implement exactly matches the one you declared. For example, you may have mismatched cv-qualifiers:
void foo(int& x);
int main()
{
int x;
foo(x);
}
void foo(const int& x) {} //different function, doesn't provide a definition
//for void foo(int& x)
Other examples of mismatches include
Function/variable declared in one namespace, defined in another.
Function/variable declared as class member, defined as global (or vice versa).
Function return type, parameter number and types, and calling convention do not all exactly agree.
The error message from the compiler will often give you the full declaration of the variable or function that was declared but never defined. Compare it closely to the definition you provided. Make sure every detail matches.
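For instance, a small sketch of the namespace-mismatch case (the names are illustrative):
// foo.h
namespace N { void foo(); }

// foo.cpp -- meant to define N::foo, but actually defines an unrelated ::foo
#include "foo.h"
void foo() {}                    // should have been: void N::foo() {}

// main.cpp
#include "foo.h"
int main() { N::foo(); }         // undefined reference to `N::foo()'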
The order in which interdependent linked libraries are specified is wrong.
The order in which libraries are linked DOES matter if the libraries depend on each other. In general, if library A depends on library B, then libA MUST appear before libB in the linker flags.
For example:
// B.h
#ifndef B_H
#define B_H
struct B {
B(int);
int x;
};
#endif
// B.cpp
#include "B.h"
B::B(int xx) : x(xx) {}
// A.h
#include "B.h"
struct A {
A(int x);
B b;
};
// A.cpp
#include "A.h"
A::A(int x) : b(x) {}
// main.cpp
#include "A.h"
int main() {
A a(5);
return 0;
};
Create the libraries:
$ g++ -c A.cpp
$ g++ -c B.cpp
$ ar rvs libA.a A.o
ar: creating libA.a
a - A.o
$ ar rvs libB.a B.o
ar: creating libB.a
a - B.o
Compile:
$ g++ main.cpp -L. -lB -lA
./libA.a(A.o): In function `A::A(int)':
A.cpp:(.text+0x1c): undefined reference to `B::B(int)'
collect2: error: ld returned 1 exit status
$ g++ main.cpp -L. -lA -lB
$ ./a.out
So to repeat again, the order DOES matter!
Symbols were defined in a C program and used in C++ code.
The function (or variable) void foo() was defined in a C program and you attempt to use it in a C++ program:
void foo();
int main()
{
foo();
}
The C++ linker expects names to be mangled, so you have to declare the function as:
extern "C" void foo();
int main()
{
foo();
}
Equivalently, instead of being defined in a C program, the function (or variable) void foo() was defined in C++ but with C linkage:
extern "C" void foo();
and you attempt to use it in a C++ program with C++ linkage.
If an entire library is included in a header file (and was compiled as C code), the include will need to be as follows:
extern "C" {
#include "cheader.h"
}
what is an "undefined reference/unresolved external symbol"
I'll try to explain what is an "undefined reference/unresolved external symbol".
note: i use g++ and Linux and all examples is for it
For example we have some code
// src1.cpp
void print();
static int local_var_name; // 'static' makes variable not visible for other modules
int global_var_name = 123;
int main()
{
print();
return 0;
}
and
// src2.cpp
extern "C" int printf (const char*, ...);
extern int global_var_name;
//extern int local_var_name;
void print ()
{
// printf("%d%d\n", global_var_name, local_var_name);
printf("%d\n", global_var_name);
}
Make object files
$ g++ -c src1.cpp -o src1.o
$ g++ -c src2.cpp -o src2.o
After the assembler phase we have object files, which contain the symbols to be exported.
Look at the symbols
$ readelf --symbols src1.o
Num: Value Size Type Bind Vis Ndx Name
5: 0000000000000000 4 OBJECT LOCAL DEFAULT 4 _ZL14local_var_name # [1]
9: 0000000000000000 4 OBJECT GLOBAL DEFAULT 3 global_var_name # [2]
I've omitted some lines from the output because they do not matter.
So, we see the following exported symbols:
[1] - this is our static (local) variable (important: its Bind is "LOCAL")
[2] - this is our global variable
src2.o's symbols are not shown here, since they are not needed for this example.
Link our object files
$ g++ src1.o src2.o -o prog
and run it
$ ./prog
123
The linker sees the exported symbols and links them. Now let's try to uncomment the lines in src2.cpp, like this:
// src2.cpp
extern "C" int printf (const char*, ...);
extern int global_var_name;
extern int local_var_name;
void print ()
{
printf("%d%d\n", global_var_name, local_var_name);
}
and rebuild an object file
$ g++ -c src2.cpp -o src2.o
OK (no errors), because we only built the object file; linking is not done yet.
Try to link
$ g++ src1.o src2.o -o prog
src2.o: In function `print()':
src2.cpp:(.text+0x6): undefined reference to `local_var_name'
collect2: error: ld returned 1 exit status
This happened because our local_var_name is static, i.e. it is not visible to other modules.
Now let's go deeper. Get the compilation (assembly) output:
$ g++ -S src1.cpp -o src1.s
Look at src1.s:
// src1.s
.file "src1.cpp"
.local _ZL14local_var_name
.comm _ZL14local_var_name,4,4
.globl global_var_name
.data
.align 4
.type global_var_name, #object
.size global_var_name, 4
global_var_name:
.long 123
.text
.globl main
.type main, #function
main:
; assembler code, not interesting for us
.LFE0:
.size main, .-main
.ident "GCC: (Ubuntu 4.8.2-19ubuntu1) 4.8.2"
.section .note.GNU-stack,"",#progbits
So, we've seen that local_var_name is emitted only as a local symbol (with a mangled name), which is why the linker couldn't find it. But we are hackers :) and we can fix it. Open src1.s in your text editor and change
.local _ZL14local_var_name
.comm _ZL14local_var_name,4,4
to
.globl local_var_name
.data
.align 4
.type local_var_name, #object
.size local_var_name, 4
local_var_name:
.long 456789
i.e. you should have something like below:
.file "src1.cpp"
.globl local_var_name
.data
.align 4
.type local_var_name, #object
.size local_var_name, 4
local_var_name:
.long 456789
.globl global_var_name
.align 4
.type global_var_name, #object
.size global_var_name, 4
global_var_name:
.long 123
.text
.globl main
.type main, #function
main:
; ...
we have changed the visibility of local_var_name and set its value to 456789.
Try to build an object file from it
$ g++ -c src1.s -o src1.o
OK, now look at the readelf output (symbols):
$ readelf --symbols src1.o
8: 0000000000000000 4 OBJECT GLOBAL DEFAULT 3 local_var_name
now local_var_name has Bind GLOBAL (was LOCAL)
link
$ g++ src1.o src2.o -o prog
and run it
$ ./prog
123456789
OK, we hacked it :)
So, as a result - an "undefined reference/unresolved external symbol error" happens when the linker cannot find global symbols in the object files.
If all else fails, recompile.
I was recently able to get rid of an unresolved external error in Visual Studio 2012 just by recompiling the offending file. When I re-built, the error went away.
This usually happens when two (or more) libraries have a cyclic dependency. Library A attempts to use symbols in B.lib and library B attempts to use symbols from A.lib. Neither exists to start off with. When you attempt to compile A, the link step will fail because it can't find B.lib. A.lib will be generated, but no dll. You then compile B, which will succeed and generate B.lib. Re-compiling A will now work because B.lib is now found.
Template implementations not visible.
Unspecialized templates must have their definitions visible to all translation units that use them. That means you can't separate the definition of a template into an implementation file. If you must separate the implementation, the usual workaround is to have an impl file which you include at the end of the header that declares the template. A common situation is:
template<class T>
struct X
{
void foo();
};
int main()
{
X<int> x;
x.foo();
}
//differentImplementationFile.cpp
template<class T>
void X<T>::foo()
{
}
To fix this, you must move the definition of X::foo to the header file or some place visible to the translation unit that uses it.
Specialized templates can be implemented in an implementation file and the implementation doesn't have to be visible, but the specialization must be previously declared.
For further explanation and another possible solution (explicit instantiation) see this question and answer.
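A minimal sketch of the explicit-instantiation approach mentioned above, reusing the X/foo names from the example:
// X.h
template<class T>
struct X
{
    void foo();
};

// X.cpp
#include "X.h"
template<class T>
void X<T>::foo() {}

template struct X<int>;          // explicit instantiation: X<int>::foo() is emitted in this
                                 // translation unit, so other files can link against it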
This is one of the most confusing error messages that every VC++ programmer has seen time and time again. Let's clarify things first.
A. What is symbol?
In short, a symbol is a name. It can be a variable name, a function name, a class name, a typedef name, or anything else that is not one of the names and signs that belong to the C++ language itself. It is either user defined or introduced by a dependency library (another user's definitions).
B. What is external?
In VC++, every source file (.cpp, .c, etc.) is considered a translation unit; the compiler compiles one unit at a time and generates one object file (.obj) for the current translation unit. (Note that every header file this source file includes is preprocessed and becomes part of this translation unit.) Everything within a translation unit is considered internal; everything else is considered external. In C++, you may reference an external symbol by using keywords like extern, __declspec(dllimport) and so on.
C. What is “resolve”?
Resolve is a link-time term. At link time, the linker attempts to find an external definition for every symbol in the object files that cannot be resolved internally. The scope of this search includes:
All object files generated at compile time
All libraries (.lib) that are either explicitly or implicitly specified as additional dependencies of the application being built.
This searching process is called resolving.
D. Finally, why Unresolved External Symbol?
If the linker cannot find the external definition for a symbol that has no definition internally, it reports an Unresolved External Symbol error.
E. Possible causes of LNK2019: Unresolved External Symbol error.
We already know that this error occurs because the linker failed to find the definition of an external symbol; the possible causes can be sorted as:
Definition exists
For example, if we have a function called foo defined in a.cpp:
int foo()
{
return 0;
}
In b.cpp we want to call function foo, so we add
void foo();
to declare function foo(), and call it in another function body, say bar():
void bar()
{
foo();
}
Now when you build this code you will get an LNK2019 error complaining that foo is an unresolved symbol. In this case, we know that foo() has its definition in a.cpp, but it is different from the one we are calling (a different return type, which changes the mangled name). This is the case where the definition exists.
Definition does not exist
Say we want to call some functions in a library, but the import library is not added to the additional dependency list (set from: Project | Properties | Configuration Properties | Linker | Input | Additional Dependencies) of your project settings. Now the linker will report an LNK2019 since the definition does not exist in the current search scope.
Incorrectly importing/exporting methods/classes across modules/dll (compiler specific).
MSVS requires you to specify which symbols to export and import using __declspec(dllexport) and __declspec(dllimport).
This dual functionality is usually obtained through the use of a macro:
#ifdef THIS_MODULE
#define DLLIMPEXP __declspec(dllexport)
#else
#define DLLIMPEXP __declspec(dllimport)
#endif
The macro THIS_MODULE would only be defined in the module that exports the function. That way, the declaration:
DLLIMPEXP void foo();
expands to
__declspec(dllexport) void foo();
and tells the compiler to export the function, as the current module contains its definition. When including the declaration in a different module, it would expand to
__declspec(dllimport) void foo();
and tells the compiler that the definition is in one of the libraries you linked against (also see 1)).
You can similarly import/export classes:
class DLLIMPEXP X
{
};
undefined reference to WinMain@16 or similar 'unusual' main() entry point reference (especially for visual-studio).
You may have failed to choose the right project type in your IDE. The IDE may bind e.g. Windows Application projects to such an entry-point function (as specified in the missing reference above), instead of the commonly used int main(int argc, char** argv); signature.
If your IDE supports Plain Console Projects you might want to choose this project type, instead of a windows application project.
Here are case1 and case2 handled in more detail from a real world problem.
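As a rough sketch of the contrast, here is what each subsystem typically expects; the WinMain form is Windows-specific (it needs <windows.h>) and is shown commented out so the snippet stays a single console program:
// Plain console project: the linker is satisfied by the ordinary entry point.
int main(int argc, char** argv)
{
    (void)argc;
    (void)argv;
    return 0;
}

/*
// Windows (GUI) application project: the linker looks for WinMain instead.
#include <windows.h>
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                   LPSTR lpCmdLine, int nShowCmd)
{
    return 0;
}
*/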
Also, if you're using 3rd-party libraries, make sure you have the correct 32/64-bit binaries.
Microsoft offers a #pragma to reference the correct library at link time;
#pragma comment(lib, "libname.lib")
In addition to the library search path including the directory of the library, this should be the full file name of the library.
Visual Studio NuGet package needs to be updated for new toolset version
I just had this problem trying to link libpng with Visual Studio 2013. The problem is that the package file only had libraries for Visual Studio 2010 and 2012.
The correct solution is to hope the developer releases an updated package and then upgrade, but it worked for me by hacking in an extra setting for VS2013, pointing at the VS2012 library files.
I edited the package (in the packages folder inside the solution's directory) by finding packagename\build\native\packagename.targets and inside that file, copying all the v110 sections. I changed the v110 to v120 in the condition fields only being very careful to leave the filename paths all as v110. This simply allowed Visual Studio 2013 to link to the libraries for 2012, and in this case, it worked.
Suppose you have a big project written in C++ that has a thousand .cpp files and a thousand .h files, and the project also depends on ten static libraries. Let's say we are on Windows and we build our project in Visual Studio 20xx. When you press Ctrl + F7, Visual Studio starts compiling the whole solution (suppose we have just one project in the solution).
What does compilation mean?
Visual Studio searches the .vcxproj file and starts compiling each file that has the .cpp extension. The order of compilation is undefined, so you must not assume that main.cpp is compiled first.
.cpp files depend on additional .h files in order to find symbols that may or may not be defined in the .cpp file.
If there is a .cpp file in which the compiler cannot find a symbol, a compile-time error is raised with the message Symbol x could not be found.
For each file with the .cpp extension an object file is generated, and Visual Studio also writes the output into a file named ProjectName.Cpp.Clean.txt, which contains all the object files that must be processed by the linker.
The second step of the build is done by the linker. The linker merges all the object files and finally builds the output (which may be an executable or a library).
Steps in linking a project
Parse all the object files and find the definitions that were only declared in headers (e.g. the code of a class method, as mentioned in previous answers, or even the initialization of a static variable that is a member of a class).
If a symbol cannot be found in the object files, it is also searched for in the additional libraries. To add a new library to a project, go to Configuration Properties -> VC++ Directories -> Library Directories, where you specify additional folders to search for libraries, and Configuration Properties -> Linker -> Input to specify the name of the library.
If the linker cannot find a symbol that you use in one of your .cpp files, it raises a link-time error, which may read like:
error LNK2001: unresolved external symbol "void __cdecl foo(void)" (?foo@@YAXXZ)
Observation
Once the linker finds one symbol, it doesn't search other libraries for it.
The order of linking libraries does matter.
If the linker finds an external symbol in a static library, it includes the symbol in the output of the project. However, if the library is shared (dynamic), it doesn't include the code (symbols) in the output, but run-time crashes may occur.
How to solve this kind of error
Compile-time error:
Make sure you write your C++ project syntactically correctly.
Link-time error:
Define all the symbols you declare in your header files.
Use #pragma once to allow the compiler not to include a header if it was already included in the .cpp currently being compiled.
Make sure that your external library doesn't contain symbols that may conflict with other symbols you defined in your header files.
When you use templates, make sure you include the definition of each template function in the header file, to allow the compiler to generate appropriate code for any instantiation.
Use the linker to help diagnose the error
Most modern linkers include a verbose option that prints out, to varying degrees:
Link invocation (command line),
Data on what libraries are included in the link stage,
The location of the libraries,
Search paths used.
For gcc and clang, you would typically add -v -Wl,--verbose or -v -Wl,-v to the command line. More details can be found here:
Linux ld man page.
LLVM linker page.
"An introduction to GCC" chapter 9.
For MSVC, /VERBOSE (in particular /VERBOSE:LIB) is added to the link command line.
The MSDN page on the /VERBOSE linker option.
A bug in the compiler/IDE
I recently had this problem, and it turned out it was a bug in Visual Studio Express 2013. I had to remove a source file from the project and re-add it to overcome the bug.
Steps to try if you believe it could be a bug in compiler/IDE:
Clean the project (some IDEs have an option to do this; you can also do it manually by deleting the object files).
Try starting a new project, copying all the source code from the original one.
Linked .lib file is associated to a .dll
I had the same issue. Say I have projects MyProject and TestProject, and I had effectively linked the .lib file for MyProject to TestProject. However, this .lib file was produced as the DLL for MyProject was built, and it did not contain source code for all the methods in MyProject, only access to the DLL's entry points.
To solve the issue, I built MyProject as a LIB and linked TestProject to this .lib file (I copy-pasted the generated .lib file into the TestProject folder). I can then build MyProject again as a DLL. It compiles, since the lib that TestProject links against does contain code for all the methods in MyProject's classes.
Since people seem to be directed to this question when it comes to linker errors I am going to add this here.
One possible reason for linker errors with GCC 5.2.0 is that a new libstdc++ library ABI is now chosen by default.
If you get linker errors about undefined references to symbols that involve types in the std::__cxx11 namespace or the tag [abi:cxx11] then it probably indicates that you are trying to link together object files that were compiled with different values for the _GLIBCXX_USE_CXX11_ABI macro. This commonly happens when linking to a third-party library that was compiled with an older version of GCC. If the third-party library cannot be rebuilt with the new ABI then you will need to recompile your code with the old ABI.
So if you suddenly get linker errors when switching to a GCC after 5.1.0 this would be a thing to check out.
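A minimal sketch of how that mismatch typically shows up, assuming takes_string() lives in a third-party library that was built with the old ABI (e.g. with an older GCC, or with -D_GLIBCXX_USE_CXX11_ABI=0):
// consumer.cpp -- compiled with a default-configured GCC >= 5.1 (new ABI)
#include <string>

void takes_string(const std::string&);   // defined in the old-ABI library

int main()
{
    // The reference emitted here mangles std::string as std::__cxx11::basic_string<...>,
    // while the library exports the old (non-__cxx11) symbol, so the link fails with
    // "undefined reference to takes_string(std::__cxx11::basic_string<...> const&)".
    takes_string("hi");
    return 0;
}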
Your linkage consumes libraries before the object files that refer to them
You are trying to compile and link your program with the GCC toolchain.
Your linkage specifies all of the necessary libraries and library search paths
If libfoo depends on libbar, then your linkage correctly puts libfoo before libbar.
Your linkage fails with undefined reference to something errors.
But all the undefined somethings are declared in the header files you have
#included and are in fact defined in the libraries that you are linking.
Examples are in C. They could equally well be C++
A minimal example involving a static library you built yourself
my_lib.c
#include "my_lib.h"
#include <stdio.h>
void hw(void)
{
puts("Hello World");
}
my_lib.h
#ifndef MY_LIB_H
#define MT_LIB_H
extern void hw(void);
#endif
eg1.c
#include <my_lib.h>
int main()
{
hw();
return 0;
}
You build your static library:
$ gcc -c -o my_lib.o my_lib.c
$ ar rcs libmy_lib.a my_lib.o
You compile your program:
$ gcc -I. -c -o eg1.o eg1.c
You try to link it with libmy_lib.a and fail:
$ gcc -o eg1 -L. -lmy_lib eg1.o
eg1.o: In function `main':
eg1.c:(.text+0x5): undefined reference to `hw'
collect2: error: ld returned 1 exit status
The same result if you compile and link in one step, like:
$ gcc -o eg1 -I. -L. -lmy_lib eg1.c
/tmp/ccQk1tvs.o: In function `main':
eg1.c:(.text+0x5): undefined reference to `hw'
collect2: error: ld returned 1 exit status
A minimal example involving a shared system library, the compression library libz
eg2.c
#include <zlib.h>
#include <stdio.h>
int main()
{
printf("%s\n",zlibVersion());
return 0;
}
Compile your program:
$ gcc -c -o eg2.o eg2.c
Try to link your program with libz and fail:
$ gcc -o eg2 -lz eg2.o
eg2.o: In function `main':
eg2.c:(.text+0x5): undefined reference to `zlibVersion'
collect2: error: ld returned 1 exit status
Same if you compile and link in one go:
$ gcc -o eg2 -I. -lz eg2.c
/tmp/ccxCiGn7.o: In function `main':
eg2.c:(.text+0x5): undefined reference to `zlibVersion'
collect2: error: ld returned 1 exit status
And a variation on example 2 involving pkg-config:
$ gcc -o eg2 $(pkg-config --libs zlib) eg2.o
eg2.o: In function `main':
eg2.c:(.text+0x5): undefined reference to `zlibVersion'
What are you doing wrong?
In the sequence of object files and libraries you want to link to make your
program, you are placing the libraries before the object files that refer to
them. You need to place the libraries after the object files that refer
to them.
Link example 1 correctly:
$ gcc -o eg1 eg1.o -L. -lmy_lib
Success:
$ ./eg1
Hello World
Link example 2 correctly:
$ gcc -o eg2 eg2.o -lz
Success:
$ ./eg2
1.2.8
Link the example 2 pkg-config variation correctly:
$ gcc -o eg2 eg2.o $(pkg-config --libs zlib)
$ ./eg2
1.2.8
The explanation
Reading is optional from here on.
By default, a linkage command generated by GCC, on your distro,
consumes the files in the linkage from left to right in
commandline sequence. When it finds that a file refers to something
and does not contain a definition for it, it will search for a definition
in files further to the right. If it eventually finds a definition, the
reference is resolved. If any references remain unresolved at the end,
the linkage fails: the linker does not search backwards.
First, example 1, with static library my_lib.a
A static library is an indexed archive of object files. When the linker
finds -lmy_lib in the linkage sequence and figures out that this refers
to the static library ./libmy_lib.a, it wants to know whether your program
needs any of the object files in libmy_lib.a.
There is only one object file in libmy_lib.a, namely my_lib.o, and there's only one thing defined
in my_lib.o, namely the function hw.
The linker will decide that your program needs my_lib.o if and only if it already knows that
your program refers to hw, in one or more of the object files it has already
added to the program, and that none of the object files it has already added
contains a definition for hw.
If that is true, then the linker will extract a copy of my_lib.o from the library and
add it to your program. Then, your program contains a definition for hw, so
its references to hw are resolved.
When you try to link the program like:
$ gcc -o eg1 -L. -lmy_lib eg1.o
the linker has not added eg1.o to the program when it sees
-lmy_lib. Because at that point, it has not seen eg1.o.
Your program does not yet make any references to hw: it
does not yet make any references at all, because all the references it makes
are in eg1.o.
So the linker does not add my_lib.o to the program and has no further
use for libmy_lib.a.
Next, it finds eg1.o, and adds it to the program. An object file in the
linkage sequence is always added to the program. Now, the program makes
a reference to hw, and does not contain a definition of hw; but
there is nothing left in the linkage sequence that could provide the missing
definition. The reference to hw ends up unresolved, and the linkage fails.
Second, example 2, with shared library libz
A shared library isn't an archive of object files or anything like it. It's
much more like a program that doesn't have a main function and
instead exposes multiple other symbols that it defines, so that other
programs can use them at runtime.
Many Linux distros today configure their GCC toolchain so that its language drivers (gcc,g++,gfortran etc)
instruct the system linker (ld) to link shared libraries on an as-needed basis.
You have got one of those distros.
This means that when the linker finds -lz in the linkage sequence, and figures out that this refers
to the shared library (say) /usr/lib/x86_64-linux-gnu/libz.so, it wants to know whether any references that it has added to your program that aren't yet defined have definitions that are exported by libz
If that is true, then the linker will not copy any chunks out of libz and
add them to your program; instead, it will just doctor the code of your program
so that:-
At runtime, the system program loader will load a copy of libz into the
same process as your program whenever it loads a copy of your program, to run it.
At runtime, whenever your program refers to something that is defined in
libz, that reference uses the definition exported by the copy of libz in
the same process.
Your program wants to refer to just one thing that has a definition exported by libz,
namely the function zlibVersion, which is referred to just once, in eg2.c.
If the linker adds that reference to your program, and then finds the definition
exported by libz, the reference is resolved
But when you try to link the program like:
gcc -o eg2 -lz eg2.o
the order of events is wrong in just the same way as with example 1.
At the point when the linker finds -lz, there are no references to anything
in the program: they are all in eg2.o, which has not yet been seen. So the
linker decides it has no use for libz. When it reaches eg2.o, adds it to the program,
and then has undefined reference to zlibVersion, the linkage sequence is finished;
that reference is unresolved, and the linkage fails.
Lastly, the pkg-config variation of example 2 has a now obvious explanation.
After shell-expansion:
gcc -o eg2 $(pkg-config --libs zlib) eg2.o
becomes:
gcc -o eg2 -lz eg2.o
which is just example 2 again.
I can reproduce the problem in example 1, but not in example 2
The linkage:
gcc -o eg2 -lz eg2.o
works just fine for you!
(Or: That linkage worked fine for you on, say, Fedora 23, but fails on Ubuntu 16.04)
That's because the distro on which the linkage works is one of the ones that
does not configure its GCC toolchain to link shared libraries as-needed.
Back in the day, it was normal for unix-like systems to link static and shared
libraries by different rules. Static libraries in a linkage sequence were linked
on the as-needed basis explained in example 1, but shared libraries were linked unconditionally.
This behaviour is economical at linktime because the linker doesn't have to ponder
whether a shared library is needed by the program: if it's a shared library,
link it. And most libraries in most linkages are shared libraries. But there are disadvantages too:-
It is uneconomical at runtime, because it can cause shared libraries to be
loaded along with a program even if it doesn't need them.
The different linkage rules for static and shared libraries can be confusing
to inexpert programmers, who may not know whether -lfoo in their linkage
is going to resolve to /some/where/libfoo.a or to /some/where/libfoo.so,
and might not understand the difference between shared and static libraries
anyway.
This trade-off has led to the schismatic situation today. Some distros have
changed their GCC linkage rules for shared libraries so that the as-needed
principle applies for all libraries. Some distros have stuck with the old
way.
Why do I still get this problem even if I compile-and-link at the same time?
If I just do:
$ gcc -o eg1 -I. -L. -lmy_lib eg1.c
surely gcc has to compile eg1.c first, and then link the resulting
object file with libmy_lib.a. So how can it not know that object file
is needed when it's doing the linking?
Because compiling and linking with a single command does not change the
order of the linkage sequence.
When you run the command above, gcc figures out that you want compilation +
linkage. So behind the scenes, it generates a compilation command, and runs
it, then generates a linkage command, and runs it, as if you had run the
two commands:
$ gcc -I. -c -o eg1.o eg1.c
$ gcc -o eg1 -L. -lmy_lib eg1.o
So the linkage fails just as it does if you do run those two commands. The
only difference you notice in the failure is that gcc has generated a
temporary object file in the compile + link case, because you're not telling it
to use eg1.o. We see:
/tmp/ccQk1tvs.o: In function `main'
instead of:
eg1.o: In function `main':
See also
The order in which interdependent linked libraries are specified is wrong
Putting interdependent libraries in the wrong order is just one way
in which you can get files that need definitions of things coming
later in the linkage than the files that provide the definitions. Putting libraries before the
object files that refer to them is another way of making the same mistake.
A wrapper around GNU ld that doesn't support linker scripts
Some .so files are actually GNU ld linker scripts, e.g. libtbb.so file is an ASCII text file with this contents:
INPUT (libtbb.so.2)
Some more complex builds may not support this. For example, if you include -v in the compiler options, you can see that the mainwin gcc wrapper mwdip discards linker script command files in the verbose output list of libraries to link in. A simple workaround is to replace the linker script input command file with a copy of the file instead (or a symlink), e.g.
cp libtbb.so.2 libtbb.so
Or you could replace the -l argument with the full path of the .so, e.g. instead of -ltbb do /home/foo/tbb-4.3/linux/lib/intel64/gcc4.4/libtbb.so.2
Befriending templates...
Given the code snippet of a template type with a friend operator (or function);
template <typename T>
class Foo {
friend std::ostream& operator<< (std::ostream& os, const Foo<T>& a);
};
The operator<< is being declared as a non-template function. For every type T used with Foo, there needs to be a non-templated operator<<. For example, if there is a type Foo<int> declared, then there must be an operator implementation as follows;
std::ostream& operator<< (std::ostream& os, const Foo<int>& a) {/*...*/}
Since it is not implemented, the linker fails to find it and results in the error.
To correct this, you can declare a template operator before the Foo type and then declare the appropriate instantiation as a friend. The syntax is a little awkward, but it looks as follows:
// forward declare the Foo
template <typename>
class Foo;
// forward declare the operator <<
template <typename T>
std::ostream& operator<<(std::ostream&, const Foo<T>&);
template <typename T>
class Foo {
friend std::ostream& operator<< <>(std::ostream& os, const Foo<T>& a);
// note the required <> ^^^^
// ...
};
template <typename T>
std::ostream& operator<<(std::ostream&, const Foo<T>&)
{
// ... implement the operator
}
The above code limits the friendship of the operator to the corresponding instantiation of Foo, i.e. the operator<< <int> instantiation is limited to access the private members of the instantiation of Foo<int>.
Alternatives include;
Allowing the friendship to extend to all instantiations of the templates, as follows;
template <typename T>
class Foo {
template <typename T1>
friend std::ostream& operator<<(std::ostream& os, const Foo<T1>& a);
// ...
};
Or, the implementation for the operator<< can be done inline inside the class definition;
template <typename T>
class Foo {
friend std::ostream& operator<<(std::ostream& os, const Foo& a)
{ /*...*/ }
// ...
};
Note, when the declaration of the operator (or function) only appears in the class, the name is not available for "normal" lookup, only for argument dependent lookup, from cppreference;
A name first declared in a friend declaration within class or class template X becomes a member of the innermost enclosing namespace of X, but is not accessible for lookup (except argument-dependent lookup that considers X) unless a matching declaration at the namespace scope is provided...
There is further reading on template friends at cppreference and the C++ FAQ.
Code listing showing the techniques above.
As a side note to the failing code sample, g++ warns about this as follows:
warning: friend declaration 'std::ostream& operator<<(...)' declares a non-template function [-Wnon-template-friend]
note: (if this is not what you intended, make sure the function template has already been declared and add <> after the function name here)
When your include paths are different
Linker errors can happen when a header file and its associated shared library (.lib file) go out of sync. Let me explain.
How do linkers work? The linker matches a function declaration (declared in the header) with its definition (in the shared library) by comparing their signatures. You can get a linker error if the linker doesn't find a function definition that matches perfectly.
Is it possible to still get a linker error even though the declaration and the definition seem to match? Yes! They might look the same in source code, but it really depends on what the compiler sees. Essentially you could end up with a situation like this:
// header1.h
typedef int Number;
void foo(Number);
// header2.h
typedef float Number;
void foo(Number); // this only looks the same lexically
Note how even though both function declarations look identical in the source code, they are really different according to the compiler.
You might ask how one ends up in a situation like that? Include paths of course! If when compiling the shared library, the include path leads to header1.h and you end up using header2.h in your own program, you'll be left scratching your header wondering what happened (pun intended).
An example of how this can happen in the real world is explained below.
Further elaboration with an example
I have two projects: graphics.lib and main.exe. Both projects depend on common_math.h. Suppose the library exports the following function:
// graphics.lib
#include "common_math.h"
void draw(vec3 p) { ... } // vec3 comes from common_math.h
And then you go ahead and include the library in your own project.
// main.exe
#include "other/common_math.h"
#include "graphics.h"
int main() {
draw(...);
}
Boom! You get a linker error and you have no idea why it's failing. The reason is that the common library uses different versions of the same include common_math.h (I have made it obvious here in the example by including a different path, but it might not always be so obvious. Maybe the include path is different in the compiler settings).
Note in this example, the linker would tell you it couldn't find draw(), when in reality you know it obviously is being exported by the library. You could spend hours scratching your head wondering what went wrong. The thing is, the linker sees a different signature because the parameter types are slightly different. In the example, vec3 is a different type in both projects as far as the compiler is concerned. This could happen because they come from two slightly different include files (maybe the include files come from two different versions of the library).
Debugging the linker
DUMPBIN is your friend, if you are using Visual Studio. I'm sure other compilers have other similar tools.
The process goes like this:
Note the weird mangled name given in the linker error. (e.g. draw@graphics@XYZ).
Dump the exported symbols from the library into a text file.
Search for the exported symbol of interest, and notice that the mangled name is different.
Pay attention to why the mangled names ended up different. You would be able to see that the parameter types are different, even though they look the same in the source code.
Reason why they are different. In the example given above, they are different because of different include files.
[1] By project I mean a set of source files that are linked together to produce either a library or an executable.
Inconsistent UNICODE definitions
A Windows UNICODE build is built with TCHAR etc. defined as wchar_t etc. When not building with UNICODE defined, the build has TCHAR defined as char etc. These UNICODE and _UNICODE defines affect all the "T" string types: LPTSTR, LPCTSTR and their ilk.
Building one library with UNICODE defined and attempting to link it in a project where UNICODE is not defined will result in linker errors, since there will be a mismatch in the definition of TCHAR: char vs. wchar_t.
The error usually involves a function with a value of a char- or wchar_t-derived type; these could include std::basic_string<> etc. as well. When browsing through the affected function in the code, there will often be a reference to TCHAR or std::basic_string<TCHAR> etc. This is a tell-tale sign that the code was originally intended for both a UNICODE and a Multi-Byte Character (or "narrow") build.
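A minimal sketch of the mismatch; Log() and logger.h are illustrative names, and the snippet assumes the Windows <tchar.h> header:
// logger.h -- shipped with a library that was built with UNICODE and _UNICODE defined
#include <tchar.h>
void Log(const TCHAR* message);      // inside the library this compiled as Log(const wchar_t*)

// consumer.cpp -- built WITHOUT UNICODE/_UNICODE
#include "logger.h"

void run()
{
    Log(_T("hello"));                // here TCHAR is char, so this references Log(const char*),
}                                    // which the library never exported: unresolved external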
To correct this, build all the required libraries and projects with a consistent definition of UNICODE (and _UNICODE).
This can be done with either;
#define UNICODE
#define _UNICODE
Or in the project settings;
Project Properties > General > Project Defaults > Character Set
Or on the command line;
/DUNICODE /D_UNICODE
The opposite is applicable as well: if UNICODE is not intended to be used, make sure the defines are not set and/or the multi-byte character setting is used in the projects and consistently applied.
Do not forget to be consistent between the "Release" and "Debug" builds as well.
Clean and rebuild
A "clean" of the build can remove the "dead wood" that may be left lying around from previous builds, failed builds, incomplete builds and other build system related build issues.
In general the IDE or build will include some form of "clean" function, but this may not be correctly configured (e.g. in a manual makefile) or may fail (e.g. the intermediate or resultant binaries are read-only).
Once the "clean" has completed, verify that the "clean" has succeeded and all the generated intermediate file (e.g. an automated makefile) have been successfully removed.
This process can be seen as a final resort, but is often a good first step; especially if the code related to the error has recently been added (either locally or from the source repository).
Missing "extern" in const variable declarations/definitions (C++ only)
For people coming from C it might be a surprise that in C++ global const variables have internal (or static) linkage. In C this was not the case, as all global variables are implicitly extern (i.e. when the static keyword is missing).
Example:
// file1.cpp
const int test = 5; // in C++ same as "static const int test = 5"
int test2 = 5;
// file2.cpp
extern const int test;
extern int test2;
void foo()
{
int x = test; // linker error in C++ , no error in C
int y = test2; // no problem
}
The correct approach would be to use a header file and include it in file2.cpp and file1.cpp:
extern const int test;
extern int test2;
Alternatively, one could declare the const variable in file1.cpp with an explicit extern.
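A one-line sketch of that alternative:
// file1.cpp
extern const int test = 5;   // 'extern' plus an initializer: a definition with external linkage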
Even though this is a pretty old question with multiple good answers, I'd like to share how to resolve an obscure "undefined reference to" error.
Different versions of libraries
I was using an alias to refer to std::filesystem::path: filesystem has been in the standard library since C++17, but my program also needed to compile under C++14, so I decided to use a type alias:
#if (defined _GLIBCXX_EXPERIMENTAL_FILESYSTEM) //is the included filesystem library experimental? (C++14 and newer: <experimental/filesystem>)
using path_t = std::experimental::filesystem::path;
#elif (defined _GLIBCXX_FILESYSTEM) //not experimental (C++17 and newer: <filesystem>)
using path_t = std::filesystem::path;
#endif
Let's say I have three files: main.cpp, file.h, file.cpp:
file.h #include's <experimental/filesystem> and contains the code above
file.cpp, the implementation of file.h, #include's "file.h"
main.cpp #include's <filesystem> and "file.h"
Note the different libraries used in main.cpp and file.h. Since main.cpp #include'd "file.h" after <filesystem>, the version of filesystem used there was the C++17 one. I used to compile the program with the following commands:
$ g++ -g -std=c++17 -c main.cpp -> compiles main.cpp to main.o
$ g++ -g -std=c++17 -c file.cpp -> compiles file.cpp and file.h to file.o
$ g++ -g -std=c++17 -o executable main.o file.o -lstdc++fs -> links main.o and file.o
This way any function contained in file.o and used in main.o that required path_t gave "undefined reference" errors because main.o referred to std::filesystem::path but file.o to std::experimental::filesystem::path.
Resolution
To fix this I just needed to change <experimental/filesystem> in file.h to <filesystem>.
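More generally, a way to avoid such mismatches is to have the shared header pick the implementation itself, so that every translation unit sees the same path_t. A sketch, assuming a compiler that supports __has_include (as recent gcc and clang do):
// file.h (sketch): choose one filesystem implementation for the whole project
#if __has_include(<filesystem>) && __cplusplus >= 201703L
  #include <filesystem>
  using path_t = std::filesystem::path;                 // C++17 and newer
#else
  #include <experimental/filesystem>
  using path_t = std::experimental::filesystem::path;   // C++14 fallback
#endif
Whichever approach is used, the important point is that every translation unit that touches path_t agrees on which filesystem implementation it names.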
When linking against shared libraries, make sure that the used symbols are not hidden.
The default behavior of gcc is that all symbols are visible. However, when the translation units are built with option -fvisibility=hidden, only functions/symbols marked with __attribute__ ((visibility ("default"))) are external in the resulting shared object.
You can check whether the symbols you are looking for are external by invoking:
# -D shows (global) dynamic symbols that can be used from the outside of XXX.so
nm -D XXX.so | grep MY_SYMBOL
The hidden/local symbols are shown by nm with a lowercase symbol type, for example t instead of T for the text (code) section:
nm XXX.so
00000000000005a7 t HIDDEN_SYMBOL
00000000000005f8 T VISIBLE_SYMBOL
You can also use nm with the option -C to demangle the names (if C++ was used).
Similar to Windows DLLs, one would mark public functions with a define, for example DLL_PUBLIC defined as:
#define DLL_PUBLIC __attribute__ ((visibility ("default")))
DLL_PUBLIC int my_public_function(){
...
}
Which roughly corresponds to Windows'/MSVC-version:
#ifdef BUILDING_DLL
#define DLL_PUBLIC __declspec(dllexport)
#else
#define DLL_PUBLIC __declspec(dllimport)
#endif
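If both toolchains need to be supported, the two variants can be combined into a single macro in a shared header; a minimal sketch (BUILDING_DLL and DLL_PUBLIC are just illustrative names):
// api.h (sketch): portable export macro
#if defined(_WIN32)
  #ifdef BUILDING_DLL
    #define DLL_PUBLIC __declspec(dllexport)   // building the DLL itself
  #else
    #define DLL_PUBLIC __declspec(dllimport)   // consuming the DLL
  #endif
#else
  #define DLL_PUBLIC __attribute__ ((visibility ("default")))  // gcc/clang shared object
#endif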
More information about visibility can be found on the gcc wiki.
When a translation unit is compiled with -fvisibility=hidden, the resulting symbols still have external linkage (shown with an uppercase symbol type by nm) and can be used for external linkage without problem if the object files become part of a static library. The linkage becomes local only when the object files are linked into a shared library.
To find which symbols in an object file are hidden run:
>>> objdump -t XXXX.o | grep hidden
0000000000000000 g F .text 000000000000000b .hidden HIDDEN_SYMBOL1
000000000000000b g F .text 000000000000000b .hidden HIDDEN_SYMBOL2
Functions or class methods are defined in source files with the inline specifier.
An example:-
main.cpp
#include "gum.h"
#include "foo.h"
int main()
{
gum();
foo f;
f.bar();
return 0;
}
foo.h (1)
#pragma once
struct foo {
void bar() const;
};
gum.h (1)
#pragma once
extern void gum();
foo.cpp (1)
#include "foo.h"
#include <iostream>
inline /* <- wrong! */ void foo::bar() const {
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
gum.cpp (1)
#include "gum.h"
#include <iostream>
inline /* <- wrong! */ void gum()
{
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
If you specify that gum (similarly, foo::bar) is inline at its definition then
the compiler will inline gum (if it chooses to), by:-
not emitting any unique definition of gum, and therefore
not emitting any symbol by which the linker can refer to the definition of gum, and instead
replacing all calls to gum with inline copies of the compiled body of gum.
As a result, if you define gum inline in a source file gum.cpp, it is
compiled to an object file gum.o in which all calls to gum are inlined
and no symbol is defined by which the linker can refer to gum. When you
link gum.o into a program together with another object file, e.g. main.o,
that makes references to an external symbol gum, the linker cannot resolve
those references. So the linkage fails:
Compile:
g++ -c main.cpp foo.cpp gum.cpp
Link:
$ g++ -o prog main.o foo.o gum.o
main.o: In function `main':
main.cpp:(.text+0x18): undefined reference to `gum()'
main.cpp:(.text+0x24): undefined reference to `foo::bar() const'
collect2: error: ld returned 1 exit status
You can only define gum as inline if the compiler can see its definition in every source file in which gum may be called. That means its inline definition needs to exist in a header file that you include in every source file
you compile in which gum may be called. Do one of two things:
Either don't inline the definitions
Remove the inline specifier from the source file definition:
foo.cpp (2)
#include "foo.h"
#include <iostream>
void foo::bar() const {
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
gum.cpp (2)
#include "gum.h"
#include <iostream>
void gum()
{
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
Rebuild with that:
$ g++ -c main.cpp foo.cpp gum.cpp
imk@imk-Inspiron-7559:~/develop/so/scrap1$ g++ -o prog main.o foo.o gum.o
imk@imk-Inspiron-7559:~/develop/so/scrap1$ ./prog
void gum()
void foo::bar() const
Success.
Or inline correctly
Inline definitions in header files:
foo.h (2)
#pragma once
#include <iostream>
struct foo {
void bar() const { // In-class definition is implicitly inline
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
};
// Alternatively...
#if 0
struct foo {
void bar() const;
};
inline void foo::bar() const {
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
#endif
gum.h (2)
#pragma once
#include <iostream>
inline void gum() {
std::cout << __PRETTY_FUNCTION__ << std::endl;
}
Now we don't need foo.cpp or gum.cpp:
$ g++ -c main.cpp
$ g++ -o prog main.o
$ ./prog
void gum()
void foo::bar() const

Generating compilation database for a single target with cmake

I'm using the CMAKE_EXPORT_COMPILE_COMMANDS variable of cmake to obtain a JSON compilation database that I can then parse to identify the options that are given to the compiler for each source file. Now, the project I'm working on has several targets, and source files that are used in several targets appear multiple times in the database, as shown by the example below:
f.c:
int main () { return MACRO; }
CMakeLists.txt:
cmake_minimum_required (VERSION 2.6)
project (Test)
add_executable(test1 f.c)
add_executable(test2 f.c)
target_compile_options(test1 PUBLIC -DMACRO=1)
target_compile_options(test2 PUBLIC -DMACRO=2)
Running cmake . -DCMAKE_EXPORT_COMPILE_COMMANDS=1 will produce the following compile_commands.json file, with two entries for f.c and no easy way to distinguish between them.
[
{
"directory": "/home/virgile/tmp/cmakefile",
"command": "/usr/bin/cc -DMACRO=1 -o CMakeFiles/test1.dir/f.c.o -c /home/virgile/tmp/cmakefile/f.c",
"file": "/home/virgile/tmp/cmakefile/f.c"
},
{
"directory": "/home/virgile/tmp/cmakefile",
"command": "/usr/bin/cc -DMACRO=2 -o CMakeFiles/test2.dir/f.c.o -c /home/virgile/tmp/cmakefile/f.c",
"file": "/home/virgile/tmp/cmakefile/f.c"
}
]
I'm looking for a way to specify that I'm only interested in e.g. target test1, as you can do in build-tool mode with --target, preferably without having to modify CMakeLists.txt (though this is not a major issue). What I'd like to avoid, on the other hand, is reading the argument of -o in the "command" entry and discriminating between the test1.dir and test2.dir path components.
Apparently this feature was implemented in a merge request about 3 months ago: https://gitlab.kitware.com/cmake/cmake/-/merge_requests/5651
From there I quote:
The new target property EXPORT_COMPILE_COMMANDS associated with the
existing global variable can be used to optionally configure targets for
their compile commands to be exported.
So it seems that you now can set a property on the respective targets in order to control whether or not they will be included in the generated DB.
This feature is part of cmake 3.20. Official docs: https://cmake.org/cmake/help/latest/prop_tgt/EXPORT_COMPILE_COMMANDS.html
Based on the docs, the approach for limiting the compile DB to a specific target should be to set CMAKE_EXPORT_COMPILE_COMMANDS to OFF and then use set_target_properties to set the EXPORT_COMPILE_COMMANDS property on the desired target to ON. E.g.
set(CMAKE_EXPORT_COMPILE_COMMANDS OFF)
...
set_target_properties(test1 PROPERTIES EXPORT_COMPILE_COMMANDS ON)
It seems that this is not supported at the moment. There is a request for this in the CMake issue tracker: https://gitlab.kitware.com/cmake/cmake/issues/19462

Wine spec files

I have a Windows DLL called morag.dll containing functions foo and bar. I also have a Linux SO called morag.so containing the Linux implementations of foo and bar (same parameters on each platform). I have a Windows application that loads morag.dll that I want to run under wine. The application itself runs fine; however, I need to map foo and bar, which my application expects to find in morag.dll, onto foo and bar in morag.so.
To do this I know I need to create morag.dll.spec file and winebuild it into morag.dll.so.
Following instructions here I created a wrapper in morag.c containing functions Proxyfoo and Proxybar which do nothing more than call real functions foo and bar. Then I created morag.dll.spec thus:-
1 stdcall foo (long ptr) Proxyfoo
2 stdcall bar (ptr ptr) Proxybar
I compiled my C part, ran winebuild on the spec file, and then used winegcc to link them into morag.dll.so.
Then I read this page, which suggested you maybe didn't need the proxy functions, so I tried without the C part altogether and made a spec file thus:-
1 stdcall foo (long ptr)
2 stdcall bar (ptr ptr)
And as above, did the winebuild step and the winegcc link step.
In both cases these were the options I used.
winebuild --dll -m32 -E ./morag.dll.spec -o morag.dll.o
ldopts= -m32 -fPIC -shared -L/usr/lib/wine -L/opt/morag/lib -lmorag
winegcc $(ldopts) -z muldefs -o morag.dll.so [morag.o] morag.dll.o
N.B. [..] denotes I only used this in the case where I was also building the c part.
In both cases, when my application running under wine tries to load the entry point in the DLL using GetProcAddress, it fails.
I ran wine with WINEDEBUG=+module,+relay and saw the attempt and failure recorded as follows:-
0025:Ret KERNEL32.LoadLibraryExA() retval=7dbc0000 ret=00447b84
0025:Call KERNEL32.GetProcAddress(7dbc0000,00b2d060 "foo") ret=00447c8a
0025:Ret KERNEL32.GetProcAddress() retval=00000000 ret=00447c8a
It seems it has found and loaded my morag.dll.so since LoadLibraryExA has returned the handle to it, but when it tries to find function foo within that HMODULE handle it fails.
If I issue:-
nm -D morag.dll.so
I see foo and bar shown as U in both cases. In the case where there are proxy functions as well, the proxy functions are shown as T.
I assume that this is because I have not built the morag.dll.so file correctly, either with the wrong options, or that my spec file is not correctly formed. I am not sure which of the two schemes described above I should be using.
All help most appreciated.
I ran into the same problem today.
What was missing in my case was the proper exporting rule for e.g. foo and bar in the built-in DLL. Conveniently, besides the --dll object, the winebuild tool can create a .def file for us, e.g.:
morag.def: morag.spec
$(WINEBUILD) --def -E $< -o $@
The resulting .def file must be linked into morag.dll.so along with other objects. This does the job.

Importing Modules in D

I am trying to use basic module importing in D (language release 2). As a guide I used the example on dlang.org, but my simple program will not compile. The files are in the same directory.
Here is my main.d file's contents:
import std.stdio;
import mymodule;
void main(string[] args){
sayHello();
writeln("Executing Main");
}
And here is my module file's contents (mymodule.d):
void sayHello(){
writeln("hello");
}
To compile I execute via bash:
dmd main.d
And the error output is:
main.o: In function `_Dmain':
main.d:(.text._Dmain+0x5): undefined reference to `_D8mymodule8sayHelloFZv'
collect2: ld returned 1 exit status
--- errorlevel 1
You need to list all of the modules that you're compiling on the command line. If you don't list a module, then it won't be compiled. The modules that are compiled will be able to use the uncompiled modules, since the compiler pulls in their declarations, but the compiler generates no object files for them. So, when the linker goes to link, it will complain that there are definitions missing. In this case, it complains that mymodule.sayHello hasn't been defined.
If you want the compiler to automatically search for all of the modules that the first module imports and compile them all for you, then you're going to need to use rdmd, which is a wrapper for dmd, which comes with dmd. dmd itself doesn't do that. It only compiles the modules that you tell it to.
You haven't imported std.stdio in mymodule. So, even if you do dmd main.d mymodule.d as you should (or even better, dmd -w main.d mymodule.d or dmd -wi main.d mymodule.d), it'll fail to compile mymodule, because writeln hasn't been declared. The fact that main.d imported it has no effect on mymodule.
While it's not a big deal in this case, you really should put a module modulename; declaration at the top of your modules. The compiler will infer the module name from the filename, but once you have subpackages, you need to do it, or you'll have importing problems, because only the filename is inferred, not the package names. So, if you have foo/bar/mod.d, and you don't have a module declaration in mod.d, it'll be inferred as mod, not foo.bar.mod.
dmd mymodule.d main.d
The only languages I know that are smart enough to work out dependencies on their own are Go and Haskell.