Problems with midl.exe and cppwinrt.exe from CMake

This is somewhat of a follow-on to "How to use midlrt.exe to compile .idl to .winmd?"
I have this in my CMakeLists.txt. My questions are less about the CMake logic and more about the output of the midl and cppwinrt commands, and the subsequent errors in compiling and linking. I suspect I'm missing some command-line options.
# Pathnames for WinRT References
set (WINSDKREFDIR "$ENV{WindowsSdkDir}References\\$ENV{WindowsSDKVersion}")
# Remove trailing \ from $ENV{WindowsSDKVersion}
string (REGEX MATCH "[^\\]*" WINSDKVER $ENV{WindowsSDKVersion})
# COMMAND lines wrapped in this post for readability, not wrapped in the actual CMakeLists.txt
add_custom_target (MYLIB_PREBUILD ALL
COMMAND midl /winrt /ns_prefix /x64 /nomidl
/metadata_dir
"${WINSDKREFDIR}windows.foundation.foundationcontract\\3.0.0.0"
/reference
"${WINSDKREFDIR}windows.foundation.foundationcontract\\3.0.0.0\\Windows.Foundation.FoundationContract.winmd"
/reference
"${WINSDKREFDIR}Windows.Foundation.UniversalApiContract\\8.0.0.0\\Windows.Foundation.UniversalApiContract.winmd"
/out "${MYDIR}\\GeneratedFiles" "${MYDIR}\\MyClass.idl"
COMMAND cppwinrt
-in "${MYDIR}\\GeneratedFiles\\MyClass.winmd"
-ref ${WINSDKVER} -component -pch "pch.h" -out "${MYDIR}\\GeneratedFiles"
)
add_dependencies (MYLIB MYLIB_PREBUILD)
In the cppwinrt command, I've tried different forms of the -ref [spec] and -pch options, but I seem to get the same results regardless. These are the problems I've run into:
MIDLRT generates a header file "MyClass.h" with several problems:
It #includes <windows.h>, which ultimately #defines preprocessor macros for GetClassName and GetCurrentTime that cause compiler errors in WinRT functions with those names.
I spent some hours tracking that down and learning to compile with #define COM_NO_WINDOWS_H to prevent that.
It #includes non-existent *.h files from WinRT References Contracts directories instead of the Include directories:
#include "C:\Program Files (x86)\Windows Kits\10\References\10.0.18362.0\Windows.Foundation.FoundationContract\3.0.0.0\Windows.Foundation.FoundationContract.h"
#include "C:\Program Files (x86)\Windows Kits\10\References\10.0.18362.0\Windows.Foundation.UniversalApiContract\8.0.0.0\Windows.Foundation.UniversalApiContract.h"
So I made a copy of this file and replaced those with
#include <winrt/Windows.Foundation.h>
CPPWINRT generates "module.g.cpp" that #includes "MyNamespace.MyClass.h", but does not also generate that .h file. It does generate "MyNamespace/MyClass.h" (note "/" instead of "."), so I created the former .h and simply #include the latter .h from it.
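The shim is trivial; it just forwards to the header cppwinrt actually produced (exactly the workaround just described):
// MyNamespace.MyClass.h -- hand-written shim
#pragma once
#include "MyNamespace/MyClass.h"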
CPPWINRT doesn't generate all of the base headers that I see in Microsoft examples. It generates only headers directly related to MyClass -- e.g., defining the template base class winrt::MyNamespace::implementation::MyClassT<>, the wrapper winrt::MyNamespace::MyClass, etc.
winrt::MyNamespace::factory_implementation::MyClass is not defined. MyClassT<> is defined in that namespace, but not MyClass. I found a paradigm for that in a Microsoft example and pasted it in:
// Missing from the generated stuff -- derived from a Microsoft example:
namespace winrt::MyNamespace::factory_implementation
{
    struct MyClass : MyClassT<MyClass, implementation::MyClass>
    {
    };
}
I received compiler warnings about inconsistent definitions of CHECK_NS_PREFIX_STATE: in some places it was "always" and in others it was "never". So now I #define MIDL_NS_PREFIX, and define CHECK_NS_PREFIX_STATE to "always".
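For reference, those workaround defines now sit ahead of any generated header, something like this (a sketch; the generated-header name follows my CMake setup above):
// pch.h (sketch of the workarounds described so far)
#define COM_NO_WINDOWS_H   // stop the MIDL-generated header dragging in windows.h
#define MIDL_NS_PREFIX     // match the /ns_prefix switch passed to midl
// (the CHECK_NS_PREFIX_STATE define goes here too, in whatever form your SDK headers expect)
#include "MyClass.h"       // the MIDL-generated header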
Now the build gets through the compiler, but I have unresolved external symbols in the linker. I think these things are supposed to be defined inline in a "winrt/base.h", but cppwinrt did not export such a file (as I see in Microsoft examples), and the equivalent file in the system directory contains only prototypes, not bodies:
WINRT_GetRestrictedErrorInfo
WINRT_RoInitialize
WINRT_RoOriginateLanguageException
WINRT_SetRestrictedErrorInfo
WINRT_WindowsCreateString
WINRT_WindowsCreateStringReference
WINRT_WindowsDeleteString
WINRT_WindowsPreallocateStringBuffer
WINRT_WindowsDeleteStringBuffer
WINRT_WindowsPromoteStringBuffer
WINRT_WindowsGetStringRawBuffer
WINRT_RoGetActivationFactory
WINRT_WindowsDuplicateString
Am I missing some simple thing that would resolve all of these problems with missing, incomplete, and incorrect generated files?

The unresolved external symbol errors indicate that you are missing an import library. In this case you will want to link against the WindowsApp.lib umbrella library, which exports the required symbols.
Note that the symbol names you are observing are an artifact of C++/WinRT's requirement to build both with and without the Windows SDK headers. It addresses this by declaring the imports itself (with a WINRT_ prefix to prevent clashes with the SDK header declarations), and then maps the renamed symbols onto the real exports using the /ALTERNATENAME linker switch.
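If touching the linker inputs in CMake is inconvenient, a source-level way to pull in the umbrella library is an MSVC linker pragma; the /ALTERNATENAME mapping can be sketched the same way. This is a simplified illustration for x64, where extern "C" names are undecorated -- not the literal contents of winrt/base.h:
#include <cstdint>

// Pull WindowsApp.lib into the link from source (MSVC-specific):
#pragma comment(lib, "WindowsApp.lib")

// Roughly how C++/WinRT declares a renamed import and maps it back
// onto the real OS export (illustrative only):
extern "C" int32_t __stdcall WINRT_RoInitialize(uint32_t type) noexcept;
#pragma comment(linker, "/alternatename:WINRT_RoInitialize=RoInitialize")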
I'm not sure this is going to solve all of your issues, but you certainly would want to add ${MYDIR}\\GeneratedFiles to your additional include directories. That should take care of the inability to include the generated headers from the winrt subdirectory (base.h as well as the projected Windows Runtime type headers).
cppwinrt also writes stub implementations for your own types into ${MYDIR}\\GeneratedFiles\\sources when it processes the .winmd file previously compiled from your .idl(s). It's unfortunate, but there's a manual step involved here: you need to copy the generated .h and .cpp files into your source tree and fill in the skeleton implementations. This is required whenever you modify one of your interface definitions.
As a note, the module.g.cpp files generated for my projects do not include any of my custom type headers. Maybe you are using an older version of C++/WinRT (I'm using v2.0.200203.5). I believe this was changed with the introduction of type-erased factories in C++/WinRT version 2.0. Unless you are doing this already, you should use cppwinrt from the Microsoft.Windows.CppWinRT NuGet package as opposed to the binary that (used to) ship with the Windows SDK.

Related

module.map for accessing (Swift and Objective-C) classes in main target from test target

I am in the process of adding Swift classes to an existing Objective-C project. As part of this, I have added a MyProjectTests.swift to the target MyProjectTests. It imports Swift classes from target MyProject with import MyProject and that works just fine.
I now want to use @import MyProject; in MyProjectTests.m as well. However, the compiler issues the error Module 'MyProject' not found.
I have these questions:
Make both import and @import succeed in the test target
Why can it be the case that the Swift compiler sees module MyProject but the Objective-C compiler does not? What build settings in MyProjectTests do I have to change to make @import MyProject succeed as well?
Export Objective-C classes from main target
Ultimately MyProjectTests.swift and MyProjectTests.m also need access to Objective-C classes from target MyProject. So far I have multi-targeted such files, but I want to switch to using modules here as well.
My current understanding is that this is a matter of providing a module.map file which would list header files for the classes I wish to "export".
What are the exact steps I have to go through? Where should I place the header file and which build settings do I need to change in the two targets MyProject and MyProjectTests?
I currently have a (so far empty) module.map inside my project and build settings for target MyProject include Defines Module: Yes, Product Module Name: MyProject.
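My understanding is that a filled-in module.map would look something like this (MyClass.h standing in for whatever headers I want to export):
module MyProject {
    header "MyClass.h"
    export *
}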
UPDATE I am by now wondering whether it might be impossible to expose (Objective-C) files from an iOS application (instead of framework) project as a module. But then it already seems to work for Swift files (somehow).
I've by now concluded that this is not possible with current Xcode (6.1.1) tooling. (What a waste of time!)
The old scheme of bi-targeting source files to both MyProject and MyProjectTests also presents several challenges for a mixed Objective-C/Swift project with a non-trivial amount of code:
Its Objective-C part defines a legacy NS_ENUM(NSInteger, Repeat), which name-clashes with Swift.Repeat<T>. Referring to it as MyProject.Repeat (not MyProjectTests.Repeat) causes problems when compiling for target MyProjectTests, which changing this target's Project Name (also) to MyProject (not: MyProjectTests) does not seem to solve.
Compilation of constructs where Swift class A employs Objective-C class B, which in turn employs Swift class C, does not seem to be possible in a straightforward way. Since the compiler has not yet produced MyProject-Swift.h with the definition of C, it cannot compile B. But since it cannot compile B, it cannot compile A, and therefore it cannot produce MyProject-Swift.h. Catch-22, or so it seems.
Bi-targeted Swift code imports Swift classes from the auto-generated MyProject-Swift.h. For the target MyProjectTests this name does not apply, yet that's what appears in the source files. I did not want to go down the road of changing MyProjectTests' Project Name (see above). Importing the right auto-generated file via the target's *.pch may be possible, but then again it may not be ...

What is the relation between .h and .m files?

I know that .m files are where the implementation is and .h files have the method signatures, etc. When one wants to use a certain class in one's own class, one imports its .h file. The preprocessor replaces the #import directive with the content of the .h file. What I don't understand is how access to the implementation becomes available just from the preprocessor bringing in the .h content. What is the runtime mechanism that allows this?
Importing the .h file isn't actually what does that, so you're correct to be confused!
When a program is compiled, each file is compiled to an "object file", and those are all linked together into an executable program. It's this linking step that provides access to the implementation.
Similarly, any libraries you use need to be linked against (Xcode's project templates do this for you for Foundation, UIKit/AppKit, and other common libraries). This type of linkage is done partially at compile time, then finished dynamically when your app launches, so that it gets the version of the libraries included with the OS instead of the version you compiled with.
Importing the header simply lets the compiler know what things are in the linked library so that it can compile code that references them. If you look up the functionality you use dynamically instead of letting the compiler do it (via dlopen, dlsym, NSClassFromString, NSSelectorFromString, etc...), then you can use linked code without importing its header.
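As a small illustration of that last point, here is a hedged sketch that calls a libm function located at run time, with no header declaring it (RTLD_DEFAULT searches the symbols already linked into the process, and is available by default on macOS):
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    // Look up cos() dynamically instead of declaring it via a header.
    double (*cosine)(double) = (double (*)(double))dlsym(RTLD_DEFAULT, "cos");
    if (cosine)
        printf("cos(0) = %f\n", cosine(0.0));
    return 0;
}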

Including headers from an unmanaged C++ code inside C++/CLI code

I'm writing a CLR wrapper for an unmanaged C++ library.
There are two files I'm including from the unmanaged lib:
//MyCLIWrapper.h
#include "C:\PATH\TO\UNMANAGED\Header.h"
#include "C:\PATH\TO\UNMANAGED\Body.cpp"
Then I'm writing CLI implementations for the unmanaged library functions:
//MyCLIWrapper.h
// includes ...
void MyCLIWrapper::ManagedFunction()
{
    UnmanagedFunction(); // this function is called successfully
}
However, if my unmanaged function contains calls to other functions that are defined in other unmanaged header files, I get linker errors.
If I add includes for the unmanaged headers that define these functions, my errors get resolved. However, there are a lot of functions, and a lot of includes required.
Is there a different way to approach this?
EDIT:
P.S.
My managed code is in a separate Visual Studio project (output - DLL), and the compile settings are set to /CLR. Unmanaged code is in a separate Win32 project (output - DLL).
Also, after more research I concluded that theoretically I could set my Win32 unmanaged project to /CLR and just add my managed classes and headers in there as an entry point, and then it would all compile into a single DLL file. That would probably solve (?) the linker errors. However, I would prefer to preserve the loose coupling, and to avoid the additional series of problems that can arise from setting my unmanaged project to /CLR.
EDIT #2:
The unmanaged class that I'm referencing (body.cpp, header.h) contains includes to the required files that define the functions that are causing the problems. However, my managed code doesn't pick up on the includes that are in the unmanaged body.cpp and header.h.
Linker errors are a different kettle of fish from compiler errors. You forgot to document the exact linker errors you see, but a very common mishap when you compile code with /clr in effect is that the default calling convention for functions that are not C++ member functions changes. The default becomes __clrcall, a convention that's optimized for managed code, while functions compiled without /clr default to __cdecl. That changes the way the function name is mangled. You can see this in the linker error message: it shows that the linker is looking for a __clrcall function and can't find it.
You'll need to either explicitly declare your functions in the .h file as __cdecl, or tell the compiler that these functions are not managed code. The latter is the best way to tackle it:
#pragma managed(push, off)
#include "unmanagedHeader.h"
#pragma managed(pop)
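To make the shape of this concrete, here is a minimal sketch of the wrapper header under the same assumptions as the question (Header.h declaring a native UnmanagedFunction()):
// MyCLIWrapper.h -- illustrative sketch, not the poster's actual file
#pragma once

#pragma managed(push, off)
#include "Header.h"            // native declarations compile as unmanaged
#pragma managed(pop)

public ref class MyCLIWrapper
{
public:
    void ManagedFunction()
    {
        UnmanagedFunction();   // the managed-to-native transition happens here
    }
};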
Solution was fairly simple:
I added both unmanaged and managed projects to a single solution in Visual Studio.
Set the unmanaged project's "Configuration Type" to "Static Library" (.lib).
Right-click on the managed project -> References -> Add Reference -> Projects -> select the unmanaged project -> Add Reference.
Then in my managed class, I include the header.h (only) just like I did in my question.
Compiled successfully!
Thank you

GTest not finding tests in separate compilation units

I've got a program written in C++, with some subfolders containing libraries linked in. There's a top level SConscript, which calls SConscript files in the subfolders/libraries.
Inside a library cpp, there is a GTest test function:
TEST(X, just_a_passing_test) {
    EXPECT_EQ(true, true);
}
There is a main() in the top-level program source, which just calls GTest's main, and another GTest test alongside it:
int main(int argc, char** argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}
TEST(Dummy, should_pass) {
    EXPECT_EQ(true, true);
}
Now the issue is that when I run the program, GTest only runs the test in the main.cpp source, ignoring the test in the library. It gets bizarre: when I reference an unrelated class from the same library .cpp in main.cpp, in a no-side-effect kind of way (e.g. SomeClass foo;), the test magically appears. I've tried using -O0 and other tricks to force gcc not to optimize out code that isn't called. I've even tried Clang.
I suspect it's something to do with how GTest does test discovery during compilation, but I can't find any info on this issue. I believe it uses static initialization, so maybe there's some weird ordering going on there.
Any help/info is greatly appreciated!
Update: I found a section in the FAQ that sounds like this problem, despite it referring specifically to Visual C++. It includes a trick/hack to force the compiler not to discard a library even when it is not referenced.
It recommends not putting tests in libraries, but that leaves me wondering how else you would test libraries without having an executable for every one, which makes quickly running them a pain and bloats the output.
https://code.google.com/p/googletest/wiki/Primer#Important_note_for_Visual_C++_users
From the scene-setting one gathers that the library whose gtest test case
goes missing is statically linked in the application build. Also that the
GNU toolchain is in use.
The cause of the problem behaviour is straightforward. The test
program contains no references to anything in the library that contains
TEST(X, just_a_passing_test). So the linker doesn't need to link any
object file from that library to link the program. So it doesn't. So the
gtest runtime doesn't find that test in the executable, because it's not there.
It helps to understand that a static library in GNU format is an archive
of object files, adorned with a house-keeping header block and a global symbol table.
The OP discovered that by coding in the program an ad hoc reference to
any public symbol in the problem library, he could "magically" compel its
test case into the program.
No magic. To satisfy the reference to that public symbol, the linker is
now obliged to link an object file from the library - the one that contains
the definition of the symbol. And the OP imparts that the library is made
from a .cpp. So there is only one object file in the library, and it
contains the definition of the test case, too. With that object file in the
linkage, the test case is in the program.
The OP twiddled in vain with the compiler options, switching from GCC to clang,
in search of a more respectable way to achieve the same end. The compiler is
irrelevant. GCC or clang, it gets its linking done by the system linker, ld
(unless unusual measures have been taken to replace it).
Is there a more respectable way to get ld to link an object file from a
static library even when the program refers to no symbols in that object file?
There is. Say the problem program is app and the problem static library is
libcool.a
Then the usual GCC commandline that links app resembles this, in the relevant
points:
g++ -o app -L/path/to/the/libcool/archive -lcool
This delegates a commandline to ld, with additional linker options and
libraries that g++ deems to be defaults for the system where it finds itself.
When the linker comes to consider -lcool, it will figure out this is a request
for the archive /path/to/the/libcool/archive/libcool.a. Then it will figure
out whether at this point it has still got any unresolved symbol references in hand
whose definitions are compiled in object files in libcool.a. If there are
any, then it will link those object files into app. If not, then it links
nothing from libcool.a and passes on.
But we know there are symbol definitions in libcool.a that we want to
link, even though app does not refer to them. In that case, we can tell
the linker to link the object files from libcool.a even though they are
not referenced. More precisely, we can tell g++ to tell the linker to do that,
like so:
g++ -o app -L/path/to/the/libcool/archive -Wl,--whole-archive -lcool -Wl,-no-whole-archive
Those -Wl,... options tell g++ to pass the options ... to ld. The --whole-archive
option tells ld to link all object files from subsequent archives, whether they
are referenced or not, until further notice. The -no-whole-archive tells the
ld to stop doing that and resume business as usual.
It may look as if -Wl,-no-whole-archive is redundant, as it's the last thing on the
g++ commandline. But it's not. Remember that g++ appends system default libraries
to the commandline, behind the scenes, before passing it to the ld. You definitely
do not want --whole-archive to be in force when those default libraries are linked.
(The linkage will fail with multiple definition errors).
Apply this solution to the problem case and TEST(X, just_a_passing_test)
will be executed, without the hack of forcing the program to make some no-op
reference into the object file that defines that test.
There's an obvious downside to this solution in the general case: if the library from
which we want to force linkage of some unreferenced object file also contains a
bunch of other unreferenced object files that we really don't need,
--whole-archive links all of them too, and they're just bloat in the program.
The --whole-archive solution may be more respectable than the no-op reference
hack, but it's not respectable. It doesn't even look respectable.
The real solution here is just to do the reasonable thing. If you want the
linker to link the definition of something in your program, then don't keep that a secret from
the linker. At least declare the thing in each compilation unit where you
expect its definition to be used.
Doing the reasonable thing with gtest test-cases involves understanding that
a gtest macro like TEST(X, just_a_passing_test) expands to a class definition,
in this case:
class X_just_a_passing_test_Test : public ::testing::Test {
public:
    X_just_a_passing_test_Test() {}
private:
    virtual void TestBody();
    static ::testing::TestInfo* const test_info_ __attribute__ ((unused));
    X_just_a_passing_test_Test(X_just_a_passing_test_Test const &);
    void operator=(X_just_a_passing_test_Test const &);
};
(plus a static initializer for test_info_ and a definition for TestBody()).
Likewise for the TEST_F, TEST_P variants. Consequently, you can deploy these
macros in your code with just the same constraints and expectations that would
apply to class definitions.
In this light, if you have a library libcool defined in cool.h, implemented in cool.cpp
and want gtest unit tests for it, to be executed by a test program tests
that is implemented in tests.cpp, the reasonable thing is:-
Write a header file, cool_test.h
#include "cool.h" in it
#include <gtest/gtest.h> in it.
Then define your libcool test cases in it (a sketch follows this list)
#include "cool_test.h" in tests.cpp,
Compile and link tests.cpp with libcool and libgtest
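Here is what such a header might contain, as a sketch (names taken from the discussion above; the test body is a placeholder):
// cool_test.h
#pragma once

#include "cool.h"
#include <gtest/gtest.h>

TEST(Cool, does_the_cool_thing) {
    // Placeholder assertion -- replace with real checks against the cool.h API.
    EXPECT_EQ(true, true);
}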
And it's obvious why you wouldn't do what the OP has done. You would not define
classes that are needed by tests.cpp, and not needed by cool.cpp, within cool.cpp
and not in tests.cpp.
The OP was averse to the advice against defining the test cases in the library
because:
how else would you test libraries, without having an executable for every one,
making quickly running them a pain.
As a rule of thumb I would recommend the practice of maintaining a gtest executable
per library to be unit-tested: running them quickly is painless with commonplace automation tools
such as make, and it's far better to get a pass/fail verdict per library than
just a verdict for a bunch of libraries. But if you don't want to do that, there's still nothing to the
objection:
// tests.cpp
#include "cool_test.h"
#include "cooler_test.h"
#include "coolest_test.h"
int main(int argc, char** argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}
Compile and link with libcool, libcooler, libcoolest and libgtest
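In command-line terms that linkage resembles the earlier examples (paths and library names assumed for illustration):
g++ -o tests tests.cpp -L/path/to/the/libs -lcool -lcooler -lcoolest -lgtest -lpthread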

after importing '.h' into the '.m' file, are they FOREVER linked?

I have a CarClass.h file that declares CarClass.
I then #import this CarClass.h file into my CarClass.m file where I of course then go on to implement all my CarClass methods.
Finally, my CarAPP.m file (which contains the main) ALSO #imports CarClass.h - and everything works just fine.
So there are actually no problems there :-)
However, I'm not sure I understand WHY it works - 'cause the linkage seems a little off: if CarAPP.m imports ONLY the CarClass.h file - without also importing the CarClass.m file - then where does it GET or SEE the implementations from?
Is it the case that once the ".m" file - which imports the ".h" file - is compiled, then the two files (.h and .m) are sorta forever linked or something?
I just don't get it...
The compiling process is split in different phases, and #import directives are interpreted long before any linkage occurs.
When you give code files (.c, .m) to your compiler, it will try to generate a code object file (.o) from each; that is, a binary representation of your code. This file is not yet executable because it needs more information; in particular, it's not linked to any other file. Header files, which are supposed to contain only declarations and no definitions, typically don't get their own matching .o file.
After all your code files have been made into code objects, the compiler will put them all together and invoke the linker. The linker will resolve all external references, and then will produce an executable file.
The point is that header files tell the compiler that a function or method exists somewhere. This is enough at this phase of compilation to produce object files: the compiler just needs to be told what exists, not where the definition is. Only when you actually link do you need to know that.
Since all your code object files get packaged together, your whole program gets access to everything that was publicly declared within itself. This is why you don't need to explicitly "link" CarAPP.m against CarClass.m.
It's also possible to mislead the compiler and declare functions in header files that are not defined anywhere. If you use them in your program, the first phases of compilation will go just fine (no syntax error, no "undeclared function"), but it will break at link time, since the linker won't be able to locate the nonexistent function.
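A minimal illustration of that failure mode (hypothetical names):
// foo.h -- declaration only; no definition exists anywhere
void missing_function(void);

// main.m
#import "foo.h"

int main(void)
{
    missing_function();   // compiles cleanly; the linker fails with an
                          // "undefined symbol" error for missing_function
    return 0;
}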
When you write #import whatEver.h, the preprocessor tries to find the corresponding file in the default locations. If found, it simply pastes the content of whatEver.h into the source file wherever you use #import whatEver.h. So, to get a final executable, your source files must pass the preprocess, compile, and link stages.
When you have CarClass.h in CarAPP.m, the linker goes to find the implementations of the methods declared in CarClass.h in CarClass.m. Strictly speaking, it goes to find the definitions in CarClass.o. The compiler is happy as long as there are declarations of what you use, and the linker is happy as long as there are definitions for those declarations when you use them.
When you import CarClass.h into CarAPP.m, you are telling the linker to find the implementations of CarClass.h's methods in CarClass.o. So your final executable is a combination of CarAPP.o and CarClass.o. To understand more about how compiling and linking are done, see Program Compilation. Though the link is C/C++-specific, it should give you an idea.
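To make that concrete, the separate compile and link steps might look like this on the command line (a sketch; flags vary by project):
clang -c CarClass.m -o CarClass.o
clang -c CarAPP.m -o CarAPP.o
clang CarClass.o CarAPP.o -framework Foundation -o CarAPP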