Getting imported targets through `find_package`? - cmake

The CMake manual of Qt 5 uses find_package and says:
Imported targets are created for each Qt module. Imported target names should be preferred instead of using a variable like Qt5<Module>_LIBRARIES in CMake commands such as target_link_libraries.
Is it special for Qt or does find_package generate imported targets for all libraries? The documentation of find_package in CMake 3.0 says:
When the package is found package-specific information is provided through variables and Imported Targets documented by the package itself.
And the manual for cmake-packages says:
The result of using find_package is either a set of IMPORTED targets, or a set of variables corresponding to build-relevant information.
But I have not seen any other FindXXX.cmake script whose documentation says that an imported target is created.

find_package is a two-headed beast these days:
CMake provides direct support for two forms of packages, Config-file Packages
and Find-module Packages
Source
Now, what does that actually mean?
Find-module packages are the ones you are probably most familiar with. They execute a script of CMake code (such as this one) that does a bunch of calls to functions like find_library and find_path to figure out where to locate a library.
The big advantage of this approach is that it is extremely generic. As long as there is something on the filesystem, we can find it. The big downside is that it often provides little more information than the physical location of that something. That is, the result of a find-module operation is typically just a bunch of filesystem paths. This means that modelling stuff like transitive dependencies or multiple build configurations is rather difficult.
This becomes especially painful if the thing you are trying to find has itself been built with CMake. In that case, you already have a bunch of stuff modeled in your build scripts, which you now need to painstakingly reconstruct for the find script, so that it becomes available to downstream projects.
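For a sense of what such a find-module actually does, here is a minimal sketch in the classic style (Foo, foo.h and the library name are placeholders, not a real module):
# FindFoo.cmake - classic find-module pattern: locate files, set variables
find_path(FOO_INCLUDE_DIR NAMES foo.h)
find_library(FOO_LIBRARY NAMES foo)

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(Foo DEFAULT_MSG FOO_LIBRARY FOO_INCLUDE_DIR)

if(Foo_FOUND)
  set(FOO_LIBRARIES ${FOO_LIBRARY})
  set(FOO_INCLUDE_DIRS ${FOO_INCLUDE_DIR})
endif()
Everything the consumer gets out of this is a couple of path variables.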
This is where config-file packages shine. Unlike find-modules, the result of running the script is not just a bunch of paths, but it instead creates fully functional CMake targets. To the dependent project it looks like the dependencies have been built as part of that same project.
This allows much more information to be transported in a very convenient way. The obvious downside is that config-file scripts are much more complex than find-scripts. Hence you do not want to write them yourself, but have CMake generate them for you - or rather, have the dependency provide a config-file as part of its deployment, which you can then simply load with a find_package call. And that is exactly what Qt5 does.
This also means that if your own project is a library, you should consider generating a config file as part of the build process. It's not the most straightforward feature of CMake, but the results are pretty powerful.
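As a rough idea of what that looks like on the producing side, here is a minimal sketch (the foo names and install destinations are placeholders; real projects usually also generate a version file with write_basic_package_version_file):
add_library(foo src/foo.cpp)
target_include_directories(foo PUBLIC
    $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
    $<INSTALL_INTERFACE:include>)

install(TARGETS foo EXPORT fooTargets
    LIBRARY DESTINATION lib ARCHIVE DESTINATION lib RUNTIME DESTINATION bin
    INCLUDES DESTINATION include)
install(DIRECTORY include/ DESTINATION include)

# For simple cases the installed export set can serve directly as the
# package's config file; bigger projects write a fooConfig.cmake that
# include()s the exported targets file instead.
install(EXPORT fooTargets
    FILE fooConfig.cmake
    NAMESPACE foo::
    DESTINATION lib/cmake/foo)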
Here is a quick comparison of what the two approaches typically look like in CMake code:
Find-module style
find_package(foo)
target_link_libraries(bar ${FOO_LIBRARIES})
target_include_directories(bar PRIVATE ${FOO_INCLUDE_DIR})
# [...] potentially lots of other stuff that has to be set manually
Config-file style
find_package(foo)
target_link_libraries(bar foo)
# magic!
tl;dr: Always prefer config-file packages if the dependency provides them. If not, use a find-script instead.
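Applied to the Qt 5 case from the question, the config-file style looks roughly like this (myapp and main.cpp are placeholders; the Qt5::Widgets imported target name comes from Qt's own documentation):
# find_package locates Qt5WidgetsConfig.cmake shipped with Qt, which
# defines the imported target Qt5::Widgets with its usage requirements.
find_package(Qt5 COMPONENTS Widgets REQUIRED)

add_executable(myapp main.cpp)
# Linking the imported target also propagates include directories,
# compile definitions and transitive Qt dependencies.
target_link_libraries(myapp PRIVATE Qt5::Widgets)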

Actually, there is no "magic" in the results of find_package: the command just searches for the appropriate FindXXX.cmake script and executes it.
If the Find script sets the XXX_LIBRARY variable, then the caller can use that variable.
If the Find script creates imported targets, then the caller can use those targets.
If the Find script neither sets the XXX_LIBRARY variable nor creates imported targets ... well, then the script is meant to be used in some other way.
The documentation for find_package describes the usual usage of Find scripts. But in any case you need to consult the documentation for the concrete script (that documentation is normally contained in the script itself).
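For example, many of the Find modules that ship with recent CMake versions create imported targets in addition to the classic variables; FindZLIB is one (it documents the ZLIB::ZLIB target). Which form you use is up to the caller (app and main.cpp are placeholders):
find_package(ZLIB REQUIRED)

add_executable(app main.cpp)

# Modern style: link the imported target documented by the module.
target_link_libraries(app PRIVATE ZLIB::ZLIB)

# Classic style: the same module also sets plain variables.
# target_include_directories(app PRIVATE ${ZLIB_INCLUDE_DIRS})
# target_link_libraries(app PRIVATE ${ZLIB_LIBRARIES})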

Related

How can I avoid clashes with targets "imported" with FetchContent_MakeAvailable?

Suppose I'm writing an app, and managing its build with CMake; and I also want to use a library, mylib, via the FetchContent mechanism.
Now, my own CMakeLists.txt defines a bunch of targets, and so does mylib's CMakeLists.txt. If I were to install mylib, then find_package(mylib), I would only get its exported targets, and even those would be prefixed with mylib:: (customarily, at least). But with FetchContent, both my app's and mylib's (internal and export-intended) targets are in the "global namespace", and may clash.
So, what can I do to separate those targets - other than meticulously naming all of my own app's targets defensively?
I would really like it if it were possible to somehow "shove" all the mylib targets into a namespace of my choice.
Note: Relates to: How to avoid namespace collision when using CMake FetchContent?
In the current CMake (<=3.24) world, there are no features for adjusting the names of the targets in other black-box projects, whether included via find_package, add_subdirectory, or FetchContent. Thus, for now, it is incumbent on you to avoid name-clashes in targets, install components, test names, and anywhere else this could be a problem.
Craig Scott says as much in his (very good) talk at CppCon 2019, see here: https://youtu.be/m0DwB4OvDXk?t=2186
The convention he proposes is to use names prefixed with SomeProj_. He doesn't suggest literally using ${PROJECT_NAME}_, and I wouldn't either, because doing so makes the code harder to read and grep (which is extremely useful for understanding a 3rd-party build).
To be a good add_subdirectory or FetchContent citizen, however, it is not enough to simply namespace your targets as SomeProj_Target; you must also provide an ALIAS target SomeProj::Target. There are a few reasons for this:
1. Your imported targets from find_package will almost certainly be named SomeProj::Target. It should be possible for consumers of your library to switch between FetchContent and find_package easily, without changing other parts of their code. The ALIAS target lets you expose the same interface in both cases. This will become especially pressing when CMake 3.24 lands with its new find_package-to-FetchContent redirection features.
2. CMake's target_link_libraries function always treats names containing :: as target names, and will throw a configure-time error if the target does not exist. Without the ::, the name will preferentially be treated as a target, but will turn into a linker flag if the target doesn't exist. Thus, it is preferable to link to targets with :: in their names.
3. Yet only IMPORTED and ALIAS targets may have :: in their names.
Points (2) and (3) are good enough for me to define aliases.
Unfortunately, many (most?) CMake builds are not good FetchContent citizens and will flout this convention. Following this convention yourself reduces the chance of integration issues between your project and any other, but obviously does nothing to prevent issues between two third-party projects that might define conflicting targets. In those cases, you're just out of luck.
An example of defining a library called Target that will play nice with FetchContent:
add_library(SomeProj_Target ${sources})
add_library(SomeProj::Target ALIAS SomeProj_Target)
set_target_properties(
SomeProj_Target
PROPERTIES
EXPORT_NAME Target
OUTPUT_NAME Target # optional: makes the file libTarget.so on disk
)
install(TARGETS SomeProj_Target EXPORT SomeProj_Targets)
install(EXPORT SomeProj_Targets NAMESPACE SomeProj:: DESTINATION lib/cmake/SomeProj)  # DESTINATION is required; the path here is just a common convention
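On the consuming side, the pay-off is that the same SomeProj::Target name works whether the dependency is fetched or found; a rough sketch (repository URL and tag are placeholders):
include(FetchContent)
FetchContent_Declare(SomeProj
    GIT_REPOSITORY https://example.com/SomeProj.git   # placeholder URL
    GIT_TAG        v1.0                               # placeholder tag
)
FetchContent_MakeAvailable(SomeProj)
# Alternatively, against an installed copy: find_package(SomeProj REQUIRED)

add_executable(consumer main.cpp)
# Same name in both cases, thanks to the ALIAS / export NAMESPACE above.
target_link_libraries(consumer PRIVATE SomeProj::Target)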
For a more complete example that plays nice with install components, include paths, and dual shared/static import, see my blog post.
See these upstream issues to track the progress/discussion of these problems.
#22687 Project-level namespaces
#16414 Namespace support for target names in nested projects
As @AlexReinking and, in fact, Craig Scott suggest - there's currently no decent solution.
You can follow these CMake issues, through which a solution will likely eventually arrive:
#22687 Project-level namespaces (more current)
#16414 Namespace support for target names in nested projects (longer discussion which influenced the above, recommended reading)

Find out why cmake adds specific link flags

I have a big project built with CMake. It mostly works.
But recently some combination of compilation server vs. test server broke. Investigation found that the final compile/link command calls gcc (...) -licudata -licui18n -licuuc (...), which introduces a dependency on a shared library that is not present on the test server.
How do I find out what in my project (my library, an imported library, a found library, whatever) adds those 3 flags to the link command?
I don't add them explicitly, so something is adding them automagically and I want to find it. compile_commands.json doesn't have them because linker flags don't belong in it. CMakeCache.txt has those flags in an obscure variable PC_LIBXML_STATIC_LIBRARIES:INTERNAL, but removing them there doesn't affect the compile/link command.
Note that this question is not about dealing with libicu specifically but about a method of investigation in general (though comments about any known problems with libicu would be appreciated too).
I found out that the dependency graphs created by CMake can contain more detail than what was configured for our project. All the options are listed here: https://cmake.org/cmake/help/latest/module/CMakeGraphVizOptions.html I expect GRAPHVIZ_EXTERNAL_LIBS and GRAPHVIZ_SHARED_LIBS are the most important ones to set to true.
We enabled everything that could be enabled, filtered out nothing, and the resulting graph was massive (too big for xdot - luckily .dot files are human-readable), but it showed that Boost::regex uses those 3 libraries.
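For reference, the mechanics look roughly like this: the options go into a CMakeGraphVizOptions.cmake file next to the top-level CMakeLists.txt (or in the build directory), and CMake reads it when you request a graph. The particular selection of variables below is just an assumption; the linked page documents the full list.
# CMakeGraphVizOptions.cmake - picked up automatically by cmake --graphviz
set(GRAPHVIZ_EXTERNAL_LIBS TRUE)        # include libraries external to the project
set(GRAPHVIZ_SHARED_LIBS TRUE)          # include shared library targets
set(GRAPHVIZ_GENERATE_PER_TARGET TRUE)  # one graph per target
set(GRAPHVIZ_GENERATE_DEPENDERS TRUE)   # also generate reverse-dependency graphs
Then run cmake --graphviz=deps.dot . in the build directory and search the generated .dot files for the offending libraries.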

Change toolchain in a subfolder with cmake [duplicate]

It seems like CMake is fairly entrenched in its view that there should be one, and only one, CMAKE_CXX_COMPILER for all C++ source files. I can't find a way to override this on a per-target basis. This makes a mix of host-and-cross compiling in a single CMakeLists.txt very difficult with the built-in CMake facilities.
So, my question is: what's the best way to use multiple compilers for the same language (i.e. C++)?
It's impossible to do this with CMake.
CMake only keeps one set of compiler properties which is shared by all targets in a CMakeLists.txt file. If you want to use two compilers, you need to run CMake twice. This is even true for e.g. building 32bit and 64bit binaries from the same compiler toolchain.
The quick-and-dirty way around this is using custom commands. But then you end up with what are basically glorified shell-scripts, which is probably not what you want.
The clean solution is: Don't put them in the same CMakeLists.txt! You can't link between different architectures anyway, so there is no need for them to be in the same file. You may reduce redundancies by refactoring common parts of the CMake scripts into separate files and include() them.
The main disadvantage here is that you lose the ability to build with a single command, but you can solve that by writing a wrapper in your favorite scripting language that takes care of calling the different CMake-makefiles.
You might want to look at ExternalProject:
http://www.kitware.com/media/html/BuildingExternalProjectsWithCMake2.8.html
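A rough sketch of that route, assuming a hypothetical cross-compiled component called firmware with its own toolchain file (all names and paths are placeholders):
include(ExternalProject)

# The cross-compiled part is configured and built in its own CMake run,
# so it can use a different compiler than the host part of the build.
ExternalProject_Add(firmware
    SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/firmware
    CMAKE_ARGS
        -DCMAKE_TOOLCHAIN_FILE=${CMAKE_CURRENT_SOURCE_DIR}/cmake/arm-toolchain.cmake
        -DCMAKE_BUILD_TYPE=Release
    INSTALL_COMMAND ""   # skip the install step; adjust as needed
)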
It is not impossible, contrary to what the top answer suggests. I have the same problem as the OP: some sources to cross-compile for a Raspberry Pi Pico, and some unit tests that I run on my host system.
To make this work, I'm using the very shameful "set" to override the compiler in the CMakeLists.txt for my test folder. Works great.
if(DEFINED ENV{HOST_CXX_COMPILER})
set(CMAKE_CXX_COMPILER $ENV{HOST_CXX_COMPILER})
else()
set(CMAKE_CXX_COMPILER "g++")
endif()
set(CMAKE_CXX_FLAGS "")
The CMake devs/community seem very much against using set to change the compiler, for some reason. They assume that you need to use one compiler for the entire project, which is an incorrect assumption for embedded systems projects.
My solution above works and, I think, fits that philosophy. Users can still change their chosen compiler via an environment variable; if it's not set, I assume g++. set only changes variables for the current scope, so this doesn't affect the rest of the project.
To extend @Bill Hoffman's answer:
Build your project as a super-build, using a template like the one at https://github.com/Sarcasm/cmake-superbuild, which will configure both the dependencies and your project as ExternalProjects (standalone CMake configure/build/install environments).

Produce static libs from tensorflow_cc and tensorflow_framework

As far as I understand, using Bazel I can only produce libtensorflow_cc.so and libtensorflow_framework.so.
I need to produce static libs that are position independent (-fPIC) because I'll link them to a dynamic lib of my own later.
I found this answer which suggest the use of a Makefile included in the project.
I successfully used it to replace libtensorflow_cc.so, but what can I do to replace libtensorflow_framework.so?
Not an actual answer, but too long for a comment.
I managed to do something like what you mention using Bazel on Windows. In particular, I wanted to make a single wrapper DLL with one or two headers (limited in functionality) that I could move around easily. I'll write a summary of the things that I did; it's rather convoluted and customized for our needs, but maybe you'll find something useful.
I pass --config=monolithic to the bazel build command (besides any other options that you need). That avoids modularizing the library and thus removes the dependency on a libtensorflow_framework.so (see tools/bazel.rc).
The goal that I build is not any of the ones in the TensorFlow repository. Instead, I add a very small program that uses my wrapper as a new Bazel target (a C++ file plus my headers and a BUILD file). So all of TensorFlow had to be compiled beforehand in order to compile this final dummy program.
When I get that done, I take advantage of the fact that Bazel already compiles every subgoal as a static library. I check a file under the bazel-bin directory generated for my dummy program goal, with a name ending in .params - there I find the paths of all the static libraries that were used to compile it.
I copy all of these intermediate static libraries to somewhere else. Also, I copy a bunch of headers I will need to compile my final wrapper (TensorFlow's own, but also Eigen, Protobuf and Nsync now too). I put all of this in a build area I have prepared beforehand.
I use an NMake Makefile to produce my custom DLL, using the static libraries, the copied headers and my own thin wrapper.
And that's about it, I think. I have an ugly Bash script I run on MSYS2 that does everything for me. Usually with every new release I need to tweak one or two things (some option in the configure script, some additional headers I need to copy, etc.), but I do get it to work in the end. It's quite a lot of fiddling though, so I'm not necessarily saying you should use the same approach (but feel free to ask for details about any step if you want).
Using the -2.params files @jdehesa mentioned and Bazel's verbose output (the -s switch), you can even create a link command to statically link these intermediate static libraries yourself. I automated this process for Windows/Linux/macOS and included it in the vcpkg package manager. To use it, just run vcpkg install tensorflow:x64-windows-static. If you're interested in the sources, you'll find them here.

In cmake, what is a "project"?

This question is about the project command and, by extension, what the concept of a project means in cmake. I genuinely don't understand what a project is, and how it differs from a target (which I do understand, I think).
I had a look at the cmake documentation for the project command, and it says that the project command does this:
Set a name, version, and enable languages for the entire project.
It should go without saying that using the word project to define project is less than helpful.
Nowhere on the page does it seem to explain what a project actually is (it goes through some of the things the command does, but doesn't say whether that list is exhaustive or not). The cmake.org examples take us through a basic build setup, and while they use the project keyword they also don't explain what it does or means, at least not as far as I can tell.
What is a project? And what does the project command do?
A project logically groups a number of targets (that is, libraries, executables and custom build steps) into a self-contained collection that can be built on its own.
In practice that means, if you have a project command in a CMakeLists.txt, you should be able to run CMake from that file and the generator should produce something that is buildable. In most codebases, you will only have a single project per build.
Note however that you may nest multiple projects. A top-level project may include a subdirectory which is in turn another self-contained project. In this case, the project command introduces additional scoping for certain values. For example, the PROJECT_BINARY_DIR variable will always point to the root binary directory of the current project. Compare this with CMAKE_BINARY_DIR, which always points to the binary directory of the top-level project. Also note that certain generators may generate additional files for projects. For example, the Visual Studio generators will create a .sln solution file for each subproject.
Use sub-projects if your codebase is very complex and you need users to be able to build certain components in isolation. This gives you a very powerful mechanism for structuring the build system. Due to the increased coding and maintenance overhead required to make the several sub-projects truly self-contained, I would advise going down that road only if you have a real use case for it. Splitting the codebase into different targets should always be the preferred mechanism for structuring the build, while sub-projects should be reserved for those rare cases where you really need to make a subset of targets self-contained.
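To make the nesting concrete, here is a minimal sketch of a top-level project containing a self-contained sub-project (all names and paths are illustrative):
# Top-level CMakeLists.txt
cmake_minimum_required(VERSION 3.10)
project(BigSuite VERSION 1.0 LANGUAGES CXX)

add_subdirectory(libs/engine)   # contains its own project() call

add_executable(suite_app main.cpp)
target_link_libraries(suite_app PRIVATE engine)

# libs/engine/CMakeLists.txt - can also be configured entirely on its own
cmake_minimum_required(VERSION 3.10)
project(Engine VERSION 1.0 LANGUAGES CXX)

# Here PROJECT_BINARY_DIR points at Engine's binary directory, while
# CMAKE_BINARY_DIR still points at BigSuite's top-level binary directory.
add_library(engine src/engine.cpp)
target_include_directories(engine PUBLIC include)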